Updates from: 11/18/2022 02:13:12
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-react-spa-app.md
Previously updated : 08/25/2022 Last updated : 11/17/2022
export const protectedResources = {
Now that the web API is registered and you've defined its scopes, configure the web API code to work with your Azure AD B2C tenant. Open the *3-Authorization-II/2-call-api-b2c/API* folder with Visual Studio Code.
-In the sample folder, open the *config.json* file. This file contains information about your Azure AD B2C identity provider. The web API app uses this information to validate the access token that the web app passes as a bearer token. Update the following properties of the app settings:
+In the sample folder, open the *authConfig.js* file. This file contains information about your Azure AD B2C identity provider. The web API app uses this information to validate the access token that the web app passes as a bearer token. Update the following properties of the app settings:
|Section |Key |Value |
|---|---|---|
|credentials|tenantName| Your Azure AD B2C [domain/tenant name](tenant-management.md#get-your-tenant-name). For example: `contoso.onmicrosoft.com`.|
|credentials|clientID| The web API application ID from step [2.1](#21-register-the-web-api-application). In the [earlier diagram](#app-registration), it's the application with **App ID: 2**.|
-|credentials| issuer| (Optional) The token issuer `iss` claim value. Azure AD B2C by default returns the token in the following format: `https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/`. Replace `<your-tenant-name>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). Replace `<your-tenant-ID>` with your [Azure AD B2C tenant ID](tenant-management.md#get-your-tenant-id). |
|policies|policyName|The user flow or custom policy that you created in [step 1](#step-1-configure-your-user-flow). If your application uses multiple user flows or custom policies, specify only one. For example, use the sign-up or sign-in user flow.|
| protectedRoutes| scopes | The scopes of your web API application registration from [step 2.5](#25-grant-permissions). |
Your final configuration file should look like the following JSON:
"credentials": { "tenantName": "<your-tenant-name>.ommicrosoft.com", "clientID": "<your-webapi-application-ID>",
- "issuer": "https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/"
}, "policies": { "policyName": "b2c_1_susi"
active-directory-b2c Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/error-codes.md
The following errors can be returned by the Azure Active Directory B2C service.
| `AADB2C90288` | UserJourney with ID '{0}' referenced in TechnicalProfile '{1}' for refresh token redemption for tenant '{2}' does not exist in policy '{3}' or any of its base policies. | |
| `AADB2C90287` | The request contains invalid redirect URI '{0}'.| [Register a web application](tutorial-register-applications.md), [Sending authentication requests](openid-connect.md#send-authentication-requests) |
| `AADB2C90289` | We encountered an error connecting to the identity provider. Please try again later. | [Add an IDP to your Azure AD B2C tenant](add-identity-provider.md) |
+| `AADB2C90289` | We encountered an 'invalid_client' error connecting to the identity provider. Please try again later. | Make sure the application secret is correct and hasn't expired. Learn how to [Register apps](register-apps.md).|
| `AADB2C90296` | Application has not been configured correctly. Please contact administrator of the site you are trying to access. | [Register a web application](tutorial-register-applications.md) |
| `AADB2C99005` | The request contains an invalid scope parameter which includes an illegal character '{0}'. | [Web sign-in with OpenID Connect](openid-connect.md) |
| `AADB2C99006` | Azure AD B2C cannot find the extensions app with app ID '{0}'. Please visit https://go.microsoft.com/fwlink/?linkid=851224 for more information. | [Azure AD B2C extensions app](extensions-app.md) |
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
Previously updated : 03/10/2022 Last updated : 11/17/2022
You can define Azure AD as a claims provider by adding Azure AD to the **ClaimsP
</CryptographicKeys>
<OutputClaims>
  <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="oid"/>
- <OutputClaim ClaimTypeReferenceId="tenantId" PartnerClaimType="tid"/>
<OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" /> <OutputClaim ClaimTypeReferenceId="surName" PartnerClaimType="family_name" /> <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name" />
If the sign-in process is successful, your browser is redirected to `https://jwt
- Learn how to [pass the Azure AD token to your application](idp-pass-through-user-flow.md).
- Check out the Azure AD multi-tenant federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#azure-active-directory), and how to pass an Azure AD access token [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#azure-active-directory-with-access-token)
active-directory-b2c Relyingparty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/relyingparty.md
Previously updated : 06/26/2022 Last updated : 11/17/2022
The **UserJourneyBehaviors** element contains the following elements:
| JourneyFraming | 0:1| Allows the user interface of this policy to be loaded in an iframe. |
| ScriptExecution| 0:1| The supported [JavaScript](javascript-and-page-layout.md) execution modes. Possible values: `Allow` or `Disallow` (default). |
+When you use the elements above, you need to add them to your **UserJourneyBehaviors** element in the order specified in the table. For example, the **JourneyInsights** element must be added before (above) the **ScriptExecution** element.
+
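As an illustration of that ordering, here's a minimal sketch of a **UserJourneyBehaviors** element that includes only the elements shown in the table excerpt; the attribute values are placeholders for illustration:

```xml
<UserJourneyBehaviors>
  <!-- Elements that come earlier in the schema, such as SingleSignOn or JourneyInsights, go above JourneyFraming. -->
  <JourneyFraming Enabled="true" Sources="https://contoso.com" />
  <ScriptExecution>Allow</ScriptExecution>
</UserJourneyBehaviors>
```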
### SingleSignOn

The **SingleSignOn** element contains the following attributes:
active-directory-b2c Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/service-limits.md
Previously updated : 10/27/2022 Last updated : 11/14/2022 zone_pivot_groups: b2c-policy-type
The following table lists the administrative configuration limits in the Azure A
|Levels of [inheritance](custom-policy-overview.md#inheritance-model) in custom policies |10 |
|Number of policies per Azure AD B2C tenant (user flows + custom policies) |200 |
|Maximum policy file size |1024 KB |
-|Number of API connectors per tenant |19 |
+|Number of API connectors per tenant |20 |
<sup>1</sup> See also [Azure AD service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md).
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-user-flows.md
The sign-up and sign-in user flow handles both sign-up and sign-in experiences w
1. Enter a **Name** for the user flow. For example, *signupsignin1*.
1. For **Identity providers**, select **Email signup**.
-1. For **User attributes and claims**, choose the claims and attributes that you want to collect and send from the user during sign-up. For example, select **Show more**, and then choose attributes and claims for **Country/Region**, **Display Name**, and **Postal Code**. Select **OK**.
+1. For **User attributes and token claims**, choose the claims and attributes that you want to collect and send from the user during sign-up. For example, select **Show more**, and then choose attributes and claims for **Country/Region**, **Display Name**, and **Postal Code**. Select **OK**.
![Attributes and claims selection page with three claims selected](./media/tutorial-create-user-flows/signup-signin-attributes.png)
active-directory Active Directory App Proxy Protect Ndes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/active-directory-app-proxy-protect-ndes.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Back End Kerberos Constrained Delegation How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Config How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-config-how-to.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Config Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-config-problem.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Config Sso How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-config-sso-how-to.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Cookie Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-cookie-settings.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-domain.md
Previously updated : 08/12/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Custom Home Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-home-page.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure For Claims Aware Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-for-claims-aware-applications.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Hard Coded Link Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-hard-coded-link-translation.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Native Client Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-native-client-application.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Single Sign On On Premises Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-on-premises-apps.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Single Sign On Password Vaulting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-password-vaulting.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Single Sign On With Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-with-headers.md
Previously updated : 02/22/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Single Sign On With Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-with-kcd.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Connectivity No Working Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connectivity-no-working-connector.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Connector Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connector-groups.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Connector Installation Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connector-installation-problem.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connectors.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Debug Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-debug-apps.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Debug Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-debug-connectors.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-deployment-plan.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy High Availability Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-high-availability-load-balancing.md
Previously updated : 04/29/2021 Last updated : 11/17/2022
active-directory Application Proxy Integrate With Microsoft Cloud Application Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-microsoft-cloud-application-security.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Integrate With Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-power-bi.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Integrate With Sharepoint Server Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-sharepoint-server-saml.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Integrate With Sharepoint Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-sharepoint-server.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Integrate With Tableau https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-tableau.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Integrate With Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-teams.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Page Appearance Broken Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-page-appearance-broken-problem.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Page Links Broken Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-page-links-broken-problem.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Page Load Speed Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-page-load-speed-problem.md
Previously updated : 07/11/2017 Last updated : 11/17/2022
active-directory Application Proxy Ping Access Publishing Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-ping-access-publishing-guide.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-powershell-samples.md
Previously updated : 04/29/2021 Last updated : 11/17/2022
active-directory Application Proxy Qlik https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-qlik.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Register Connector Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-register-connector-powershell.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Remove Personal Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-remove-personal-data.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Secure Api Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-secure-api-access.md
Previously updated : 05/06/2021 Last updated : 11/17/2022
active-directory Application Proxy Sign In Bad Gateway Timeout Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-sign-in-bad-gateway-timeout-error.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-troubleshoot.md
Previously updated : 10/12/2021 Last updated : 11/17/2022
active-directory Application Proxy Understand Cors Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-understand-cors-issues.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Wildcard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-wildcard.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Sign In Problem On Premises Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-sign-in-problem-on-premises-application-proxy.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory V2 Oauth2 Device Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-device-code.md
Previously updated : 06/25/2021 Last updated : 11/15/2022
# Microsoft identity platform and the OAuth 2.0 device authorization grant flow
-The Microsoft identity platform supports the [device authorization grant](https://tools.ietf.org/html/rfc8628), which allows users to sign in to input-constrained devices such as a smart TV, IoT device, or printer. To enable this flow, the device has the user visit a webpage in their browser on another device to sign in. Once the user signs in, the device is able to get access tokens and refresh tokens as needed.
+The Microsoft identity platform supports the [device authorization grant](https://tools.ietf.org/html/rfc8628), which allows users to sign in to input-constrained devices such as a smart TV, IoT device, or a printer. To enable this flow, the device has the user visit a webpage in a browser on another device to sign in. Once the user signs in, the device is able to get access tokens and refresh tokens as needed.
-This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md).
+This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). You can refer to [sample apps that use MSAL](sample-v2-code.md) for examples.
[!INCLUDE [try-in-postman-link](includes/try-in-postman-link.md)]

## Protocol diagram
-The entire device code flow looks similar to the next diagram. We describe each of the steps later in this article.
+The entire device code flow is shown in the following diagram. Each step is explained throughout this article.
![Device code flow](./media/v2-oauth2-device-code/v2-oauth-device-flow.svg)

## Device authorization request
-The client must first check with the authentication server for a device and user code that's used to initiate authentication. The client collects this request from the `/devicecode` endpoint. In this request, the client should also include the permissions it needs to acquire from the user. From the moment this request is sent, the user has only 15 minutes to sign in (the usual value for `expires_in`), so only make this request when the user has indicated they're ready to sign in.
+The client must first check with the authentication server for a device and user code that's used to initiate authentication. The client collects this request from the `/devicecode` endpoint. In the request, the client should also include the permissions it needs to acquire from the user.
+
+From the moment the request is sent, the user has 15 minutes to sign in. This is the default value for `expires_in`. The request should only be made when the user has indicated they're ready to sign in.
```HTTP
// Line breaks are for legibility only.
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| Parameter | Condition | Description |
| --- | --- | --- |
-| `tenant` | Required | Can be /common, /consumers, or /organizations. It can also be the directory tenant that you want to request permission from in GUID or friendly name format. |
+| `tenant` | Required | Can be `/common`, `/consumers`, or `/organizations`. It can also be the directory tenant that you want to request permission from in GUID or friendly name format. |
| `client_id` | Required | The **Application (client) ID** that the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
| `scope` | Required | A space-separated list of [scopes](v2-permissions-and-consent.md) that you want the user to consent to. |
A successful response will be a JSON object containing the required information
## Authenticating the user
-After receiving the `user_code` and `verification_uri`, the client displays these to the user, instructing them to sign in using their mobile phone or PC browser.
+After receiving the `user_code` and `verification_uri`, the client displays these to the user, instructing them to use their mobile phone or PC browser to sign in.
-If the user authenticates with a personal account (on /common or /consumers), they will be asked to sign in again in order to transfer authentication state to the device. They will also be asked to provide consent, to ensure they are aware of the permissions being granted. This does not apply to work or school accounts used to authenticate.
+If the user authenticates with a personal account, using `/common` or `/consumers`, they'll be asked to sign in again in order to transfer authentication state to the device. This is because the device is unable to access the user's cookies. They'll also be asked to consent to the permissions requested by the client. However, this doesn't apply to work or school accounts used to authenticate.
While the user is authenticating at the `verification_uri`, the client should be polling the `/token` endpoint for the requested token using the `device_code`.
While the user is authenticating at the `verification_uri`, the client should be
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded
-grant_type=urn:ietf:params:oauth:grant-type:device_code
-&client_id=6731de76-14a6-49ae-97bc-6eba6914391e
-&device_code=GMMhmHCXhWEzkobqIHGG_EnNYYsAkukHspeYUk9E8...
+grant_type=urn:ietf:params:oauth:grant-type:device_code&client_id=6731de76-14a6-49ae-97bc-6eba6914391e&device_code=GMMhmHCXhWEzkobqIHGG_EnNYYsAkukHspeYUk9E8...
```

| Parameter | Required | Description|
grant_type=urn:ietf:params:oauth:grant-type:device_code
### Expected errors
-The device code flow is a polling protocol so your client must expect to receive errors before the user has finished authenticating.
+The device code flow is a polling protocol, so the client should expect to receive errors before the user has finished authenticating.
| Error | Description | Client Action |
| --- | --- | --- |
| `authorization_pending` | The user hasn't finished authenticating, but hasn't canceled the flow. | Repeat the request after at least `interval` seconds. |
-| `authorization_declined` | The end user denied the authorization request.| Stop polling, and revert to an unauthenticated state. |
+| `authorization_declined` | The end user denied the authorization request.| Stop polling and revert to an unauthenticated state. |
| `bad_verification_code`| The `device_code` sent to the `/token` endpoint wasn't recognized. | Verify that the client is sending the correct `device_code` in the request. |
-| `expired_token` | At least `expires_in` seconds have passed, and authentication is no longer possible with this `device_code`. | Stop polling and revert to an unauthenticated state. |
+| `expired_token` | Value of `expires_in` has been exceeded and authentication is no longer possible with `device_code`. | Stop polling and revert to an unauthenticated state. |
### Successful authentication response
A successful token response will look like:
| Parameter | Format | Description |
| --- | --- | --- |
| `token_type` | String| Always `Bearer`. |
-| `scope` | Space separated strings | If an access token was returned, this lists the scopes the access token is valid for. |
-| `expires_in`| int | Number of seconds before the included access token is valid for. |
+| `scope` | Space separated strings | If an access token was returned, this lists the scopes for which the access token is valid. |
+| `expires_in`| int | Number of seconds the included access token is valid for. |
| `access_token`| Opaque string | Issued for the [scopes](v2-permissions-and-consent.md) that were requested. |
| `id_token` | JWT | Issued if the original `scope` parameter included the `openid` scope. |
| `refresh_token` | Opaque string | Issued if the original `scope` parameter included `offline_access`. |
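To make the request/poll sequence above concrete, here's a minimal sketch in JavaScript. It assumes Node.js 18+ for the global `fetch`; the tenant, client ID, and scopes are placeholders, and MSAL remains the recommended approach for production code:

```javascript
const tenant = "common";
const clientId = "6731de76-14a6-49ae-97bc-6eba6914391e"; // placeholder client ID
const scope = "openid profile offline_access user.read";

async function deviceCodeFlow() {
  // Step 1: request a device code and user code from the /devicecode endpoint.
  const codeResponse = await fetch(
    `https://login.microsoftonline.com/${tenant}/oauth2/v2.0/devicecode`,
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({ client_id: clientId, scope }),
    }
  ).then((r) => r.json());

  // Step 2: show the user code and verification URI to the user.
  console.log(codeResponse.message);

  // Step 3: poll the /token endpoint until the user finishes signing in.
  for (;;) {
    await new Promise((resolve) => setTimeout(resolve, codeResponse.interval * 1000));

    const tokenResponse = await fetch(
      `https://login.microsoftonline.com/${tenant}/oauth2/v2.0/token`,
      {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({
          grant_type: "urn:ietf:params:oauth:grant-type:device_code",
          client_id: clientId,
          device_code: codeResponse.device_code,
        }),
      }
    ).then((r) => r.json());

    if (tokenResponse.access_token) {
      return tokenResponse; // contains access_token, plus id_token/refresh_token if requested
    }
    if (tokenResponse.error !== "authorization_pending") {
      // expired_token, authorization_declined, and bad_verification_code all end the flow.
      throw new Error(tokenResponse.error);
    }
  }
}
```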
active-directory Domains Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-manage.md
Previously updated : 06/23/2022 Last updated : 11/11/2022
# Managing custom domain names in your Azure Active Directory
-A domain name is an important part of the identifier for resources in many Azure Active Directory (Azure AD), part of Microsoft Entra: it's part of a user name or email address for a user, part of the address for a group, and is sometimes part of the app ID URI for an application. A resource in Azure AD can include a domain name that's owned by the Azure AD organization (sometimes called a tenant) that contains the resource. Only a Global Administrator can manage domains in Azure AD.
+A domain name is an important part of the identifier for resources in many Azure Active Directory (Azure AD) deployments. It is part of a user name or email address for a user, part of the address for a group, and is sometimes part of the app ID URI for an application. A resource in Azure AD can include a domain name that's owned by the Azure AD organization (sometimes called a tenant) that contains the resource. Only a Global Administrator can manage domains in Azure AD.
## Set the primary domain name for your Azure AD organization
If you have already added a contoso.com domain to one Azure AD organization, you
## What to do if you change the DNS registrar for your custom domain name
-If you change the DNS registrars, there are no additional configuration tasks in Azure AD. You can continue using the domain name with Azure AD without interruption. If you use your custom domain name with Microsoft 365, Intune, or other services that rely on custom domain names in Azure AD, see the documentation for those services.
+If you change the DNS registrars, there are no other configuration tasks in Azure AD. You can continue using the domain name with Azure AD without interruption. If you use your custom domain name with Microsoft 365, Intune, or other services that rely on custom domain names in Azure AD, see the documentation for those services.
## Delete a custom domain name
You must change or delete any such resource in your Azure AD organization before
> [!Note]
> To delete the custom domain, use a Global Administrator account that is based on either the default domain (onmicrosoft.com) or a different custom domain (mydomainname.com).
-### ForceDelete option
+## ForceDelete option
You can **ForceDelete** a domain name in the [Azure AD Admin Center](https://aad.portal.azure.com) or using [Microsoft Graph API](/graph/api/domain-forcedelete). These options use an asynchronous operation and update all references from the custom domain name like "user@contoso.com" to the initial default domain name such as "user@contoso.onmicrosoft.com."
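As a rough illustration, the Microsoft Graph call might look like the following sketch; the domain name is a placeholder, and the linked forceDelete reference has the exact request shape and required permissions:

```http
POST https://graph.microsoft.com/v1.0/domains/{domain-name}/forceDelete
Content-Type: application/json

{
  "disableUserAccounts": true
}
```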
An error is returned when:
* The number of objects to be renamed is greater than 1000
* One of the applications to be renamed is a multi-tenant app
-### Frequently asked questions
+## Best Practices for Domain Hygiene
+
+Use a reputable registrar that provides ample notification of domain name changes and registration expiry, offers a grace period for expired domains, and maintains high security standards for controlling who has access to your domain name configuration and TXT records.
+Keep your domain names current with your registrar, and verify your TXT records for accuracy.
+
+* If you're purposely letting your domain name expire or turning over ownership to someone else (separately from your Azure AD tenant), delete it from your Azure AD tenant before it expires or is transferred.
+* If you do allow your domain name to expire and are later able to reactivate it or regain control of it, carefully review all TXT records with the registrar to ensure that no tampering with your domain name took place.
+* If you can't reactivate or regain control of your domain name immediately, you should delete it from your Azure AD tenant. Don't re-add or re-verify it until you're able to resolve ownership of the domain name and verify the full TXT record for correctness.
+
+>[!NOTE]
> Microsoft will not allow a domain name to be verified with more than one Azure AD tenant. Once you delete a domain name from your tenant, you will not be able to re-add or re-verify it with your Azure AD tenant if it is subsequently added and verified with another Azure AD tenant.
+
+## Frequently asked questions
**Q: Why is the domain deletion failing with an error that states that I have Exchange mastered groups on this domain name?** <br>
-**A:** Today, certain groups like Mail-Enabled Security groups and distributed lists are provisioned by Exchange and need to be manually cleaned up in [Exchange Admin Center (EAC)](https://outlook.office365.com/ecp/). There may be lingering ProxyAddresses which rely on the custom domain name and will need to be updated manually to another domain name.
+**A:** Today, certain groups like Mail-Enabled Security groups and distribution lists are provisioned by Exchange and need to be manually cleaned up in [Exchange Admin Center (EAC)](https://outlook.office365.com/ecp/). There may be lingering ProxyAddresses that rely on the custom domain name and will need to be updated manually to another domain name.
**Q: I am logged in as admin\@contoso.com but I cannot delete the domain name "contoso.com"?**<br>
-**A:** You cannot reference the custom domain name you are trying to delete in your user account name. Ensure that the Global Administrator account is using the initial default domain name (.onmicrosoft.com) such as admin@contoso.onmicrosoft.com. Sign in with a different Global Administrator account that such as admin@contoso.onmicrosoft.com or another custom domain name like ΓÇ£fabrikam.comΓÇ¥ where the account is admin@fabrikam.com.
+**A:** You can't reference the custom domain name you are trying to delete in your user account name. Ensure that the Global Administrator account is using the initial default domain name (.onmicrosoft.com), such as admin@contoso.onmicrosoft.com. Sign in with a different Global Administrator account, such as admin@contoso.onmicrosoft.com, or one on another custom domain name like "fabrikam.com" where the account is admin@fabrikam.com.
**Q: I clicked the Delete domain button and see `In Progress` status for the Delete operation. How long does it take? What happens if it fails?**<br>
-**A:** The delete domain operation is an asynchronous background task that renames all references to the domain name. It should complete within a minute or two. If domain deletion fails, ensure that you don't have:
+**A:** The delete domain operation is an asynchronous background task that renames all references to the domain name. It may take up to 24 hours to complete. If domain deletion fails, ensure that you don't have:
* Apps configured on the domain name with the appIdentifierURI
* Any mail-enabled group referencing the custom domain name
* More than 1000 references to the domain name
+* The domain to be removed set as the primary domain of your organization
-If you find that any of the conditions haven't been met, manually clean up the references and try to delete the domain again.
+Also note that the ForceDelete option won't work if the domain uses the Federated authentication type. In that case, the users/groups on the domain must be renamed or removed using the on-premises Active Directory before reattempting the domain removal.
+If you find that any of the conditions haven't been met, manually clean up the references, and try to delete the domain again.
## Use PowerShell or the Microsoft Graph API to manage domain names
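For example, here's a minimal sketch using the Microsoft Graph PowerShell module; the domain name and the permission scope shown are assumptions for illustration:

```powershell
# Sign in with a permission scope that allows domain management (assumed scope shown).
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

# List the domains in the tenant and their verification state.
Get-MgDomain | Select-Object Id, IsVerified, IsDefault

# Remove a custom domain name once all references to it have been cleaned up.
Remove-MgDomain -DomainId "contoso.com"
```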
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
You can also create a rule that selects device objects for membership in a group
> [!NOTE]
> systemlabels is a read-only attribute that cannot be set with Intune.
>
-> For Windows 10, the correct format of the deviceOSVersion attribute is as follows: (device.deviceOSVersion -startsWith "10.0.1"). The formatting can be validated with the Get-MsolDevice PowerShell cmdlet.
+> For Windows 10, the correct format of the deviceOSVersion attribute is as follows: (device.deviceOSVersion -startsWith "10.0.1"). The formatting can be validated with the Get-MgDevice PowerShell cmdlet:
+> ```
+> Get-MgDevice -Search "displayName:YourMachineNameHere" -ConsistencyLevel eventual | Select-Object -ExpandProperty 'OperatingSystemVersion'
+> ```
The following device attributes can be used.
active-directory Hybrid Cloud To On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-cloud-to-on-premises.md
Previously updated : 11/05/2021 Last updated : 11/17/2022 +
+# Customer intent: As a tenant administrator, I want to enable B2B user access to on-premises apps.
# Grant B2B users in Azure AD access to your on-premises applications
As an organization that uses Azure Active Directory (Azure AD) B2B collaboration
If your on-premises app uses SAML-based authentication, you can easily make these apps available to your Azure AD B2B collaboration users through the Azure portal using Azure AD Application Proxy.
-You must do the following :
+You must do the following:
- Enable Application Proxy and install a connector. For instructions, see [Publish applications using Azure AD Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md).
- Publish the on-premises SAML-based application through Azure AD Application Proxy by following the instructions in [SAML single sign-on for on-premises applications with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-on-premises-apps.md).
To provide B2B users access to on-premises applications that are secured with in
For the B2B user scenario, there are two methods you can use to create the guest user objects that are required for authorization in the on-premises directory:

 - Microsoft Identity Manager (MIM) and the MIM management agent for Microsoft Graph.
- - A PowerShell script, which is a more lightweight solution that does not require MIM.
+ - A PowerShell script, which is a more lightweight solution that doesn't require MIM.
The following diagram provides a high-level overview of how Azure AD Application Proxy and the generation of the B2B user object in the on-premises directory work together to grant B2B users access to your on-premises IWA and KCD apps. The numbered steps are described in detail below the diagram.
-![Diagram of MIM and B2B script solutions](media/hybrid-cloud-to-on-premises/MIMScriptSolution.PNG)
+![Diagram of MIM and B2B script solutions.](media/hybrid-cloud-to-on-premises/MIMScriptSolution.PNG)
1. A user from a partner organization (the Fabrikam tenant) is invited to the Contoso tenant.
2. A guest user object is created in the Contoso tenant (for example, a user object with a UPN of guest_fabrikam.com#EXT#@contoso.onmicrosoft.com).
You can use an [Azure AD B2B sample script](https://github.com/Azure-Samples/B2B
### Create B2B guest user objects through MIM
-For information about how to use MIM 2016 Service Pack 1 and the MIM management agent for Microsoft Graph to create the guest user objects in the on-premises directory, see [Azure AD business-to-business (B2B) collaboration with Microsoft Identity Manager (MIM) 2016 SP1 with Azure Application Proxy](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario).
+You can use MIM 2016 Service Pack 1, and the MIM management agent for Microsoft Graph to create the guest user objects in the on-premises directory. To learn more, see [Azure AD business-to-business (B2B) collaboration with Microsoft Identity Manager (MIM) 2016 SP1 with Azure Application Proxy](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario).
## License considerations
Make sure that you have the correct Client Access Licenses (CALs) for external g
## Next steps
-- See also [Azure Active Directory B2B collaboration for hybrid organizations](hybrid-organizations.md)
-- For an overview of Azure AD Connect, see [Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md).
+- [Grant local users access to cloud apps](hybrid-on-premises-to-cloud.md)
+- [Azure Active Directory B2B collaboration for hybrid organizations](hybrid-organizations.md)
+- For an overview of Azure AD Connect, see [Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md).
active-directory Hybrid On Premises To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-on-premises-to-cloud.md
Title: Sync local partner accounts to cloud as B2B users - Azure AD
-description: Give locally-managed external partners access to both local and cloud resources using the same credentials with Azure AD B2B collaboration.
+description: Give locally managed external partners access to both local and cloud resources using the same credentials with Azure AD B2B collaboration.
Previously updated : 11/03/2020 Last updated : 11/17/2022 +
+# Customer intent: As a tenant administrator, I want to enable locally-managed external partners' access to both local and cloud resources via the Azure AD B2B collaboration.
-# Grant locally-managed partner accounts access to cloud resources using Azure AD B2B collaboration
+# Grant locally managed partner accounts access to cloud resources using Azure AD B2B collaboration
Before Azure Active Directory (Azure AD), organizations with on-premises identity systems have traditionally managed partner accounts in their on-premises directory. In such an organization, when you start to move apps to Azure AD, you want to make sure your partners can access the resources they need. It shouldn't matter whether the resources are on-premises or in the cloud. Also, you want your partner users to be able to use the same sign-in credentials for both on-premises and Azure AD resources.
-If you create accounts for your external partners in your on-premises directory (for example, you create an account with a sign-in name of "wmoran" for an external user named Wendy Moran in your partners.contoso.com domain), you can now sync these accounts to the cloud. Specifically, you can use Azure AD Connect to sync the partner accounts to the cloud, which creates a user account with UserType = Guest. This enables your partner users to access cloud resources using the same credentials as their local accounts, without giving them more access than they need.
+If you create accounts for your external partners in your on-premises directory (for example, you create an account with a sign-in name of "msullivan" for an external user named Maria Sullivan in your partners.contoso.com domain), you can now sync these accounts to the cloud. Specifically, you can use [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) to sync the partner accounts to the cloud, which creates a user account with UserType = Guest. This enables your partner users to access cloud resources using the same credentials as their local accounts, without giving them more access than they need.
> [!NOTE] > See also how to [invite internal users to B2B collaboration](invite-internal-users.md). With this feature, you can invite internal guest users to use B2B collaboration, regardless of whether you've synced their accounts from your on-premises directory to the cloud. Once the user accepts the invitation to use B2B collaboration, they'll be able to use their own identities and credentials to sign in to the resources you want them to access. You wonΓÇÖt need to maintain passwords or manage account lifecycles.
active-directory How To Manage Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-groups.md
We currently don't support:
1. Locate the group you want your group to be a member of and choose **Select**.
- For this exercise, we're adding "MDM policy - West" to the "MDM policy - All org" group, so "MDM - policy - West" inherits all the properties and configurations of the "MDM policy - All org" group.
+ For this exercise, we're adding "MDM policy - West" to the "MDM policy - All org" group. The "MDM - policy - West" group will have the same access as the "MDM policy - All org" group.
![Screenshot of making a group the member of another group with 'Group membership' from the side menu and 'Add membership' option highlighted.](media/how-to-manage-groups/nested-groups-selected.png)
Now you can review the "MDM policy - West - Group memberships" page to see the g
For a more detailed view of the group and member relationship, select the parent group name (MDM policy - All org) and take a look at the "MDM policy - West" page details.

### Remove a group from another group
-You can remove an existing Security group from another Security group; however, removing the group also removes any inherited settings for its members.
+You can remove an existing Security group from another Security group; however, removing the group also removes any inherited access for its members.
1. On the **Groups - All groups** page, search for and select the group you need to remove as a member of another group.
active-directory How To Connect Configure Ad Ds Connector Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md
For Example:
``` powershell $credential = Get-Credential
-Set-ADSyncRestrictedPermissions -ADConnectorAccountDN 'CN=ADConnectorAccount,CN=Users,DC=Contoso,DC=com' -Credential $credential
+Set-ADSyncRestrictedPermissions -ADConnectorAccountDN 'CN=ADConnectorAccount,OU=Users,DC=Contoso,DC=com' -Credential $credential
```

This cmdlet will set the following permissions:
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-configure.md
To configure the application properties:
1. In the **Manage** section, select **Properties** to open the **Properties** pane for editing.
1. Configure the properties based on the needs of your application.
+## Use Microsoft Graph to configure application properties
+
+You can also configure properties of both app registrations and enterprise applications (service principals) through Microsoft Graph. These can include basic properties, permissions, and role assignments. For more information, see [Create and manage an Azure AD application using Microsoft Graph](/graph/tutorial-applications-basics#configure-other-basic-properties-for-your-app).
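For instance, a hedged sketch of a Microsoft Graph request that updates a couple of properties on an enterprise application (service principal); the object ID and property values are placeholders:

```http
PATCH https://graph.microsoft.com/v1.0/servicePrincipals/{service-principal-object-id}
Content-Type: application/json

{
  "accountEnabled": true,
  "appRoleAssignmentRequired": true
}
```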
## Next steps

Learn more about how to manage enterprise applications.
active-directory Partner Vu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-vu.md
To learn more about VU Security and its complete set of solutions, visit
To get started with the VU Identity Card, ensure the following prerequisites are met:

-- A tenant [configured](/azure/active-directory/verifiable-credentials/verifiablee-credentials-configure-tenant)
+- A tenant [configured](/azure/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant)
for Entra Verified ID service.
- If you don't have an existing tenant, you can [create an Azure
User flow is specific to your application or website. However if you are using o
## Next steps
- [Verifiable credentials admin API](admin-api.md)
-- [Request Service REST API issuance specification](issuance-request-api.md)
+- [Request Service REST API issuance specification](issuance-request-api.md)
aks Cluster Container Registry Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-container-registry-integration.md
Title: Integrate Azure Container Registry with Azure Kubernetes Service
description: Learn how to integrate Azure Kubernetes Service (AKS) with Azure Container Registry (ACR)
Previously updated : 06/10/2021 Last updated : 11/16/2022
ms.tool: azure-cli, azure-powershell
ms.devlang: azurecli

# Authenticate with Azure Container Registry from Azure Kubernetes Service
-When you're using Azure Container Registry (ACR) with Azure Kubernetes Service (AKS), an authentication mechanism needs to be established. This operation is implemented as part of the CLI, PowerShell, and Portal experience by granting the required permissions to your ACR. This article provides examples for configuring authentication between these two Azure services.
+You need to establish an authentication mechanism when using [Azure Container Registry (ACR)][acr-intro] with Azure Kubernetes Service (AKS). This operation is implemented as part of the Azure CLI, Azure PowerShell, and Azure portal experiences by granting the required permissions to your ACR. This article provides examples for configuring authentication between these Azure services.
-You can set up the AKS to ACR integration in a few simple commands with the Azure CLI or Azure PowerShell. This integration assigns the AcrPull role to the managed identity associated to the AKS Cluster.
+You can set up the AKS to ACR integration using the Azure CLI or Azure PowerShell. The AKS to ACR integration assigns the [**AcrPull** role][acr-pull] to the [Azure Active Directory (Azure AD) **managed identity**][aad-identity] associated with your AKS cluster.
> [!NOTE]
-> This article covers automatic authentication between AKS and ACR. If you need to pull an image from a private external registry, use an [image pull secret][Image Pull Secret].
+> This article covers automatic authentication between AKS and ACR. If you need to pull an image from a private external registry, use an [image pull secret][image-pull-secret].
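If you later want to confirm the role assignment that the integration creates, a sketch like the following may help; the cluster, resource group, and registry names are the placeholder names used elsewhere in this article:

```azurecli
# Get the object ID of the kubelet managed identity used by the cluster.
KUBELET_ID=$(az aks show -n myAKSCluster -g myResourceGroup \
  --query identityProfile.kubeletidentity.objectId -o tsv)

# List role assignments for that identity on the registry scope; expect to see AcrPull.
az role assignment list --assignee $KUBELET_ID \
  --scope $(az acr show -n myContainerRegistry --query id -o tsv) -o table
```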
## Before you begin
-These examples require:
+* You need to have the [**Owner**][rbac-owner], [**Azure account administrator**][rbac-classic], or [**Azure co-administrator**][rbac-classic] role on your **Azure subscription**.
+ * To avoid needing one of these roles, you can instead use an existing managed identity to authenticate ACR from AKS. For more information, see [Use an Azure managed identity to authenticate to an ACR](../container-registry/container-registry-authentication-managed-identity.md).
+* If you're using Azure CLI, this article requires that you're running Azure CLI version 2.7.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* If you're using Azure PowerShell, this article requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
-### [Azure CLI](#tab/azure-cli)
+## Create a new AKS cluster with ACR integration
-* **Owner**, **Azure account administrator**, or **Azure co-administrator** role on the **Azure subscription**
-* Azure CLI version 2.7.0 or later
+You can set up AKS and ACR integration during the creation of your AKS cluster. To allow an AKS cluster to interact with ACR, an Azure AD managed identity is used.
-### [Azure PowerShell](#tab/azure-powershell)
+### Create an ACR
-* **Owner**, **Azure account administrator**, or **Azure co-administrator** role on the **Azure subscription**
-* Azure PowerShell version 5.9.0 or later
+If you don't already have an ACR, create one using the following command.
-
+#### [Azure CLI](#tab/azure-cli)
-To avoid needing an **Owner**, **Azure account administrator**, or **Azure co-administrator** role, you can use an existing managed identity to authenticate ACR from AKS. For more information, see [Use an Azure managed identity to authenticate to an Azure container registry](../container-registry/container-registry-authentication-managed-identity.md).
+```azurecli
+# Set this variable to the name of your ACR. The name must be globally unique.
-## Create a new AKS cluster with ACR integration
+MYACR=myContainerRegistry
-You can set up AKS and ACR integration during the initial creation of your AKS cluster. To allow an AKS cluster to interact with ACR, an Azure Active Directory **managed identity** is used. The following command allows you to authorize an existing ACR in your subscription and configures the appropriate **ACRPull** role for the managed identity. Supply valid values for your parameters below.
+az acr create -n $MYACR -g myContainerRegistryResourceGroup --sku basic
+```
-### [Azure CLI](#tab/azure-cli)
+#### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+# Set this variable to the name of your ACR. The name must be globally unique.
+
+$MYACR = 'myContainerRegistry'
+
+New-AzContainerRegistry -Name $MYACR -ResourceGroupName myContainerRegistryResourceGroup -Sku Basic
+```
+++
+### Create a new AKS cluster and integrate with an existing ACR
+
+If you already have an ACR, use the following command to create a new AKS cluster with ACR integration. This command allows you to authorize an existing ACR in your subscription and configures the appropriate **AcrPull** role for the managed identity. Supply valid values for your parameters below.
+
+#### [Azure CLI](#tab/azure-cli)
```azurecli
-# set this to the name of your Azure Container Registry. It must be globally unique
+# Set this variable to the name of your ACR. The name must be globally unique.
+ MYACR=myContainerRegistry
-# Run the following line to create an Azure Container Registry if you do not already have one
-az acr create -n $MYACR -g myContainerRegistryResourceGroup --sku basic
+# Create an AKS cluster with ACR integration.
-# Create an AKS cluster with ACR integration
az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr $MYACR
```
-Alternatively, you can specify the ACR name using an ACR resource ID, which has the following format:
+Alternatively, you can specify the ACR name using an ACR resource ID using the following format:
`/subscriptions/\<subscription-id\>/resourceGroups/\<resource-group-name\>/providers/Microsoft.ContainerRegistry/registries/\<name\>`

> [!NOTE]
-> If you are using an ACR that is located in a different subscription from your AKS cluster, use the ACR resource ID when attaching or detaching from an AKS cluster.
-
-```azurecli
-az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr /subscriptions/<subscription-id>/resourceGroups/myContainerRegistryResourceGroup/providers/Microsoft.ContainerRegistry/registries/myContainerRegistry
-```
+> If you're using an ACR located in a different subscription from your AKS cluster, use the ACR *resource ID* when attaching or detaching from the cluster.
+>
+> ```azurecli
+> az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr /subscriptions/<subscription-id>/resourceGroups/myContainerRegistryResourceGroup/providers/Microsoft.ContainerRegistry/registries/myContainerRegistry
+> ```
-### [Azure PowerShell](#tab/azure-powershell)
+#### [Azure PowerShell](#tab/azure-powershell)
```azurepowershell
-# set this to the name of your Azure Container Registry. It must be globally unique
+# Set this variable to the name of your ACR. The name must be globally unique.
+ $MYACR = 'myContainerRegistry'
-# Run the following line to create an Azure Container Registry if you do not already have one
-New-AzContainerRegistry -Name $MYACR -ResourceGroupName myContainerRegistryResourceGroup -Sku Basic
+# Create an AKS cluster with ACR integration.
-# Create an AKS cluster with ACR integration
New-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -GenerateSshKey -AcrNameToAttach $MYACR
```
This step may take several minutes to complete.
## Configure ACR integration for existing AKS clusters
-### [Azure CLI](#tab/azure-cli)
+### Attach an ACR to an AKS cluster
+
+#### [Azure CLI](#tab/azure-cli)
-Integrate an existing ACR with existing AKS clusters by supplying valid values for **acr-name** or **acr-resource-id** as below.
+Integrate an existing ACR with an existing AKS cluster using the [`--attach-acr` parameter][cli-param] and valid values for **acr-name** or **acr-resource-id**.
```azurecli
+# Attach using acr-name
az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-name>
-```
-or,
-
-```azurecli
+# Attach using acr-resource-id
az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-resource-id>
```

> [!NOTE]
-> Running `az aks update --attach-acr` uses the permissions of the user running the command to create the role ACR assignment. This role is assigned to the kubelet managed identity. For more information on the AKS managed identities, see [Summary of managed identities][summary-msi].
+> The `az aks update --attach-acr` command uses the permissions of the user running the command to create the ACR role assignment. This role is assigned to the [kubelet][kubelet] managed identity. For more information on AKS managed identities, see [Summary of managed identities][summary-msi].
-You can also remove the integration between an ACR and an AKS cluster with the following
+#### [Azure PowerShell](#tab/azure-powershell)
-```azurecli
-az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-name>
+Integrate an existing ACR with an existing AKS cluster using the [`-AcrNameToAttach` parameter][ps-attach] and valid values for **acr-name**.
+
+```azurepowershell
+Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameToAttach <acr-name>
```
-or
+> [!NOTE]
+> Running the `Set-AzAksCluster -AcrNameToAttach` cmdlet uses the permissions of the user running the command to create the ACR role assignment. This role is assigned to the [kubelet][kubelet] managed identity. For more information on AKS managed identities, see [Summary of managed identities][summary-msi].
-```azurecli
-az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-resource-id>
-```
+
-### [Azure PowerShell](#tab/azure-powershell)
+### Detach an ACR from an AKS cluster
-Integrate an existing ACR with existing AKS clusters by supplying valid values for **acr-name** as below.
+#### [Azure CLI](#tab/azure-cli)
-```azurepowershell
-Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameToAttach <acr-name>
+Remove the integration between an ACR and an AKS cluster using the [`--detach-acr` parameter][cli-param] and valid values for **acr-name** or **acr-resource-id**.
+
+```azurecli
+# Detach using acr-name
+az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-name>
+
+# Detach using acr-resource-id
+az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-resource-id>
```
-> [!NOTE]
-> Running `Set-AzAksCluster -AcrNameToAttach` uses the permissions of the user running the command to create the role ACR assignment. This role is assigned to the kubelet managed identity. For more information on the AKS managed identities, see [Summary of managed identities][summary-msi].
+#### [Azure PowerShell](#tab/azure-powershell)
-You can also remove the integration between an ACR and an AKS cluster with the following
+Remove the integration between an ACR and an AKS cluster using the [`-AcrNameToDetach` parameter][ps-detach] and valid values for **acr-name**.
```azurepowershell Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameToDetach <acr-name>
Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameT
### Import an image into your ACR
-Import an image from docker hub into your ACR by running the following:
+Run the following command to import an image from Docker Hub into your ACR.
-### [Azure CLI](#tab/azure-cli)
+#### [Azure CLI](#tab/azure-cli)
```azurecli az acr import -n <acr-name> --source docker.io/library/nginx:latest --image nginx:v1 ```
-### [Azure PowerShell](#tab/azure-powershell)
+#### [Azure PowerShell](#tab/azure-powershell)
```azurepowershell Import-AzContainerRegistryImage -RegistryName <acr-name> -ResourceGroupName myResourceGroup -SourceRegistryUri docker.io -SourceImage library/nginx:latest
Import-AzContainerRegistryImage -RegistryName <acr-name> -ResourceGroupName myRe
### Deploy the sample image from ACR to AKS
-Ensure you have the proper AKS credentials
+Ensure you have the proper AKS credentials.
-### [Azure CLI](#tab/azure-cli)
+#### [Azure CLI](#tab/azure-cli)
```azurecli az aks get-credentials -g myResourceGroup -n myAKSCluster ```
-### [Azure PowerShell](#tab/azure-powershell)
+#### [Azure PowerShell](#tab/azure-powershell)
```azurepowershell Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
-Create a file called **acr-nginx.yaml** that contains the following. Substitute the resource name of your registry for **acr-name**. Example: *myContainerRegistry*.
+Create a file called **acr-nginx.yaml** using the sample YAML below. Replace **acr-name** with the name of your ACR.
```yaml apiVersion: apps/v1
spec:
- containerPort: 80 ```
-Next, run this deployment in your AKS cluster:
+After creating the file, run the following deployment in your AKS cluster.
```console kubectl apply -f acr-nginx.yaml ```
-You can monitor the deployment by running:
+You can monitor the deployment by running `kubectl get pods`.
```console kubectl get pods ```
-You should have two running pods.
+The output should show two running pods.
```output NAME READY STATUS RESTARTS AGE
nginx0-deployment-669dfc4d4b-xdpd6 1/1 Running 0 20s
``` ### Troubleshooting
-* Run the [az aks check-acr](/cli/azure/aks#az-aks-check-acr) command to validate that the registry is accessible from the AKS cluster.
-* Learn more about [ACR Monitoring](../container-registry/monitor-service.md)
-* Learn more about [ACR Health](../container-registry/container-registry-check-health.md)
+
+* Run the [`az aks check-acr`](/cli/azure/aks#az-aks-check-acr) command to validate that the registry is accessible from the AKS cluster.
+* Learn more about [ACR monitoring](../container-registry/monitor-service.md).
+* Learn more about [ACR health](../container-registry/container-registry-check-health.md).
<!-- LINKS - external -->
-[AKS AKS CLI]: /cli/azure/aks#az_aks_create
-[Image Pull secret]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+[image-pull-secret]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
[summary-msi]: use-managed-identity.md#summary-of-managed-identities
+[acr-pull]: ../role-based-access-control/built-in-roles.md#acrpull
+[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-powershell-install]: /powershell/azure/install-az-ps
+[acr-intro]: ../container-registry/container-registry-intro.md
+[aad-identity]: ../active-directory/managed-identities-azure-resources/overview.md
+[rbac-owner]: ../role-based-access-control/built-in-roles.md#owner
+[rbac-classic]: ../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles
+[kubelet]: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
+[ps-detach]: /powershell/module/az.aks/set-azakscluster#-acrnametodetach
+[cli-param]: /cli/azure/aks#az-aks-update-optional-parameters
+[ps-attach]: /powershell/module/az.aks/set-azakscluster#-acrnametoattach
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
A breakdown of the deployment specifications in the YAML manifest file is as fol
| -- | - | | `.apiVersion` | Specifies the API group and API resource you want to use when creating the resource. | | `.kind` | Specifies the type of resource you want to create. |
-| `.metadata.name` | Specifies the image to run. This file will run the *nginx* image from Docker Hub. |
+| `.metadata.name` | Specifies the name of the deployment. This file will run the *nginx* image from Docker Hub. |
| `.spec.replicas` | Specifies how many pods to create. This file will create three replicated pods. | | `.spec.selector` | Specifies which pods will be affected by this deployment. | | `.spec.selector.matchLabels` | Contains a map of *{key, value}* pairs that allows the deployment to find and manage the created pods. |
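For reference, a minimal manifest that exercises the fields described in the table (the image tag and label values are illustrative, not taken from the article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx                  # .metadata.name: the name of the deployment
spec:
  replicas: 3                  # .spec.replicas: three replicated pods
  selector:
    matchLabels:
      app: nginx               # .spec.selector.matchLabels: how the deployment finds its pods
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx           # the nginx image from Docker Hub
        ports:
        - containerPort: 80
```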
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
To use Managed NAT gateway, you must have the following:
* Kubernetes version 1.20.x or above ## Create an AKS cluster with a Managed NAT Gateway
-To create an AKS cluster with a new Managed NAT Gateway, use `--outbound-type managedNATGateway` as well as `--nat-gateway-managed-outbound-ip-count` and `--nat-gateway-idle-timeout` when running `az aks create`. The following example creates a *myresourcegroup* resource group, then creates a *natcluster* AKS cluster in *myresourcegroup* with a Managed NAT Gateway, two outbound IPs, and an idle timeout of 30 seconds.
+To create an AKS cluster with a new Managed NAT Gateway, use `--outbound-type managedNATGateway` as well as `--nat-gateway-managed-outbound-ip-count` and `--nat-gateway-idle-timeout` when running `az aks create`. The following example creates a *myresourcegroup* resource group, then creates a *natcluster* AKS cluster in *myresourcegroup* with a Managed NAT Gateway, two outbound IPs, and an idle timeout of 4 minutes.
```azurecli-interactive
az aks create \
--node-count 3 \ --outbound-type managedNATGateway \ --nat-gateway-managed-outbound-ip-count 2 \
- --nat-gateway-idle-timeout 30
+ --nat-gateway-idle-timeout 4
``` > [!IMPORTANT]
aks Quotas Skus Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quotas-skus-regions.md
Each node in an AKS cluster contains a fixed amount of compute resources such as
For more information on VM types and their compute resources, see [Sizes for virtual machines in Azure][vm-skus].
+## Supported container image sizes
+
+AKS doesn't set a limit on the container image size. However, it's important to understand that the larger the container image, the higher the memory demand. This could potentially exceed resource limits or the overall available memory of worker nodes. By default, memory for VM size Standard_DS2_v2 for an AKS cluster is set to 7 GiB.
+
+When a container image is very large (1 TiB or more), kubelet might not be able to pull it from your container registry to a node due to lack of disk space.
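One way to keep a large image's memory demand from overrunning a node is to declare requests and limits explicitly, so the scheduler accounts for it up front. A sketch with placeholder names and sizes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: large-image-app                               # hypothetical workload name
spec:
  containers:
  - name: app
    image: <your-registry>.azurecr.io/large-app:v1    # placeholder image reference
    resources:
      requests:
        memory: "2Gi"        # reserve memory on the node at scheduling time
      limits:
        memory: "4Gi"        # cap usage below the node's available memory
```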
+ ## Region availability For the latest list of where you can deploy and run clusters, see [AKS region availability][region-availability].
aks Use Mariner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-mariner.md
description: Learn how to use the Mariner container host on Azure Kubernetes Ser
Previously updated : 09/22/2022 Last updated : 11/17/2022 # Use the Mariner container host on Azure Kubernetes Service (AKS)
Mariner currently has the following limitations:
* Mariner does not yet have image SKUs for GPU, ARM64, SGX, or FIPS. * Mariner does not yet have FedRAMP, FIPS, or CIS certification. * Mariner cannot yet be deployed through Azure portal or Terraform.
-* Qualys and Trivy are the only vulnerability scanning tools that support Mariner today.
+* Qualys, Trivy, and Microsoft Defender for Containers are the only vulnerability scanning tools that support Mariner today.
* The Mariner container host is a Gen 2 image. Mariner does not plan to offer a Gen 1 SKU. * Node configurations are not yet supported. * Mariner is not yet supported in GitHub actions.
aks Use Windows Hpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-windows-hpc.md
spec:
hostProcess: true runAsUserName: "NT AUTHORITY\\SYSTEM" command:
- - pwsh.exe
+ - powershell.exe
- -command - | $AdminRights = ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]"Administrator")
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md
To register the app, perform the following steps:
You're now ready to use the Microsoft identity platform for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
+## Additional validations (optional)
+
+The steps defined above allow you to authenticate incoming requests for your Azure AD tenant. This allows anyone within the tenant to access the application, which is fine for many applications. However, some applications need to restrict access further by making authorization decisions. Your application code is often the best place to handle custom authorization logic. However, for common scenarios, the platform provides built-in checks that you can use to limit access.
+
+This section shows how to enable built-in checks using the [App Service authentication V2 API](./configure-authentication-api-version.md). Currently, the only way to configure these built-in checks is via [Azure Resource Manager templates](/azure/templates/microsoft.web/sites/config-authsettingsv2) or the [REST API](/rest/api/appservice/web-apps/update-auth-settings-v2).
+
+Within the API object, the Azure Active Directory identity provider configuration has a `validation` section that can include a `defaultAuthorizationPolicy` object as in the following structure:
+
+```json
+{
+ "validation": {
+ "defaultAuthorizationPolicy": {
+ "allowedApplications": [],
+ "allowedPrincipals": {
+ "identities": []
+ }
+ }
+ }
+}
+```
+
+| Property | Description |
+||-|
+| `defaultAuthorizationPolicy` | A grouping of requirements that must be met in order to access the app. Access is granted based on a logical `AND` over each of its configured properties. When `allowedApplications` and `allowedPrincipals` are both configured, the incoming request must satisfy both requirements in order to be accepted. |
+| `allowedApplications` | An allowlist of string application **client IDs** representing the client resource that is calling into the app. When this property is configured as a nonempty array, only tokens obtained by an application specified in the list will be accepted.<br/><br/>This policy evaluates the `appid` or `azp` claim of the incoming token, which must be an access token. See the [Microsoft Identity Platform claims reference]. |
+| `allowedPrincipals` | A grouping of checks that determine if the principal represented by the incoming request may access the app. Satisfaction of `allowedPrincipals` is based on a logical `OR` over its configured properties. |
+| `identities` (under `allowedPrincipals`) | An allowlist of string **object IDs** representing users or applications that have access. When this property is configured as a nonempty array, the `allowedPrincipals` requirement can be satisfied if the user or application represented by the request is specified in the list.<br/><br/>This policy evaluates the `oid` claim of the incoming token. See the [Microsoft Identity Platform claims reference]. |
+
+Requests that fail these built-in checks are given an HTTP `403 Forbidden` response.
+
+[Microsoft Identity Platform claims reference]: ../active-directory/develop/access-tokens.md#payload-claims
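As an illustration of the REST API route, the following sketch PUTs a file containing the structure above with `az rest`. The subscription, resource group, app name, and api-version are placeholders, and `auth-settings.json` is assumed to hold the full authsettingsV2 payload:

```azurecli
az rest --method PUT \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<app-name>/config/authsettingsV2?api-version=2022-03-01" \
  --body @auth-settings.json
```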
+ ## Configure client apps to access your App Service In the prior section, you registered your App Service or Azure Function to authenticate users. This section explains how to register native client or daemon apps so that they can request access to APIs exposed by your App Service on behalf of users or themselves. Completing the steps in this section is not required if you only wish to authenticate users.
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
Azure generates the activity log by default. The logs are preserved for 90 days
The access log is generated only if you've enabled it on each Application Gateway instance, as detailed in the preceding steps. The data is stored in the storage account that you specified when you enabled the logging. Each access of Application Gateway is logged in JSON format as shown below.
-#### For Application Gateway Standard and WAF SKU (v1)
-
-|Value |Description |
-|||
-|instanceId | Application Gateway instance that served the request. |
-|clientIP | Originating IP for the request. |
-|clientPort | Originating port for the request. |
-|httpMethod | HTTP method used by the request. |
-|requestUri | URI of the received request. |
-|RequestQuery | **Server-Routed**: Backend pool instance that was sent the request.</br>**X-AzureApplicationGateway-LOG-ID**: Correlation ID used for the request. It can be used to troubleshoot traffic issues on the backend servers. </br>**SERVER-STATUS**: HTTP response code that Application Gateway received from the back end. |
-|UserAgent | User agent from the HTTP request header. |
-|httpStatus | HTTP status code returned to the client from Application Gateway. |
-|httpVersion | HTTP version of the request. |
-|receivedBytes | Size of packet received, in bytes. |
-|sentBytes| Size of packet sent, in bytes.|
-|timeTaken| Length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This is calculated as the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. |
-|sslEnabled| Whether communication to the backend pools used TLS/SSL. Valid values are on and off.|
-|host| The hostname with which the request has been sent to the backend server. If backend hostname is being overridden, this name will reflect that.|
-|originalHost| The hostname with which the request was received by the Application Gateway from the client.|
-
-```json
-{
- "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/PEERINGTEST/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}",
- "operationName": "ApplicationGatewayAccess",
- "time": "2017-04-26T19:27:38Z",
- "category": "ApplicationGatewayAccessLog",
- "properties": {
- "instanceId": "ApplicationGatewayRole_IN_0",
- "clientIP": "191.96.249.97",
- "clientPort": 46886,
- "httpMethod": "GET",
- "requestUri": "/phpmyadmin/scripts/setup.php",
- "requestQuery": "X-AzureApplicationGateway-CACHE-HIT=0&SERVER-ROUTED=10.4.0.4&X-AzureApplicationGateway-LOG-ID=874f1f0f-6807-41c9-b7bc-f3cfa74aa0b1&SERVER-STATUS=404",
- "userAgent": "-",
- "httpStatus": 404,
- "httpVersion": "HTTP/1.0",
- "receivedBytes": 65,
- "sentBytes": 553,
- "timeTaken": 205,
- "sslEnabled": "off",
- "host": "www.contoso.com",
- "originalHost": "www.contoso.com"
- }
-}
-```
#### For Application Gateway and WAF v2 SKU |Value |Description | ||| |instanceId | Application Gateway instance that served the request. |
-|clientIP | Originating IP for the request. |
+|clientIP | IP of the immediate client of Application Gateway. If another proxy fronts your application gateway, this displays the IP of that fronting proxy. |
|httpMethod | HTTP method used by the request. | |requestUri | URI of the received request. | |UserAgent | User agent from the HTTP request header. |
The access log is generated only if you've enabled it on each Application Gatewa
> [!Note] >Access logs with clientIP value 127.0.0.1 originate from an internal security process running on the application gateway instances. You can safely ignore these log entries.
+#### For Application Gateway Standard and WAF SKU (v1)
+
+|Value |Description |
+|||
+|instanceId | Application Gateway instance that served the request. |
+|clientIP | Originating IP for the request. |
+|clientPort | Originating port for the request. |
+|httpMethod | HTTP method used by the request. |
+|requestUri | URI of the received request. |
+|RequestQuery | **Server-Routed**: Backend pool instance that was sent the request.</br>**X-AzureApplicationGateway-LOG-ID**: Correlation ID used for the request. It can be used to troubleshoot traffic issues on the backend servers. </br>**SERVER-STATUS**: HTTP response code that Application Gateway received from the back end. |
+|UserAgent | User agent from the HTTP request header. |
+|httpStatus | HTTP status code returned to the client from Application Gateway. |
+|httpVersion | HTTP version of the request. |
+|receivedBytes | Size of packet received, in bytes. |
+|sentBytes| Size of packet sent, in bytes.|
+|timeTaken| Length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This is calculated as the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. |
+|sslEnabled| Whether communication to the backend pools used TLS/SSL. Valid values are on and off.|
+|host| The hostname with which the request has been sent to the backend server. If backend hostname is being overridden, this name will reflect that.|
+|originalHost| The hostname with which the request was received by the Application Gateway from the client.|
+
+```json
+{
+ "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/PEERINGTEST/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}",
+ "operationName": "ApplicationGatewayAccess",
+ "time": "2017-04-26T19:27:38Z",
+ "category": "ApplicationGatewayAccessLog",
+ "properties": {
+ "instanceId": "ApplicationGatewayRole_IN_0",
+ "clientIP": "191.96.249.97",
+ "clientPort": 46886,
+ "httpMethod": "GET",
+ "requestUri": "/phpmyadmin/scripts/setup.php",
+ "requestQuery": "X-AzureApplicationGateway-CACHE-HIT=0&SERVER-ROUTED=10.4.0.4&X-AzureApplicationGateway-LOG-ID=874f1f0f-6807-41c9-b7bc-f3cfa74aa0b1&SERVER-STATUS=404",
+ "userAgent": "-",
+ "httpStatus": 404,
+ "httpVersion": "HTTP/1.0",
+ "receivedBytes": 65,
+ "sentBytes": 553,
+ "timeTaken": 205,
+ "sslEnabled": "off",
+ "host": "www.contoso.com",
+ "originalHost": "www.contoso.com"
+ }
+}
+```
+ ### Performance log The performance log is generated only if you have enabled it on each Application Gateway instance, as detailed in the preceding steps. The data is stored in the storage account that you specified when you enabled the logging. The performance log data is generated in 1-minute intervals. It is available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data. The following data is logged:
automanage Reference Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/reference-sdk.md
Previously updated : 08/25/2022 Last updated : 11/17/2022 # Automanage SDK overview
-Azure Automanage currently supports the following SDKs:
+Azure Automanage currently supports the following SDKs:
- [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/automanage/azure-mgmt-automanage) - [Go](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/automanage/armautomanage) - [Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/automanage/azure-resourcemanager-automanage) - [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/automanage/arm-automanage)-- CSharp (pending)
+- [CSharp](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/automanage/Azure.ResourceManager.Automanage)
- PowerShell (pending) - Azure CLI (pending) - Terraform (pending)
-Here's a list of a few of the primary operations the SDKs provide:
+Here's a list of a few of the primary operations the SDKs provide:
- Create custom configuration profiles - Delete custom configuration profiles-- Create Best Practices profile assignments -- Create custom profile assignments
+- Create Best Practices profile assignments
+- Create custom profile assignments
- Remove assignments
azure-app-configuration Howto Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md
Use the App Configuration provider or SDK libraries to access App Configuration
You can also make your App Configuration data accessible to your application as *Application settings* or environment variables. With this approach, you can avoid changing your application code.
-* Add references to your App Configuration data in the *Application settings* of your App Service or Azure Functions. For more information, see [Use App Configuration references for App Service and Azure Functions](../app-service/app-service-configuration-references.md).
-* [Export your App Configuration data](howto-import-export-data.md#export-data-to-azure-app-service) to the *Application settings* of your App Service or Azure Functions. Export your data again every time you make new changes in App Configuration if you like your application to pick up the change.
+* Add references to your App Configuration data in the *Application settings* of your App Service or Azure Functions. App Configuration offers tools to [export a collection of key-values as references](howto-import-export-data.md#export-data-to-azure-app-service) at once. For more information, see [Use App Configuration references for App Service and Azure Functions](../app-service/app-service-configuration-references.md).
+* [Export your App Configuration data](howto-import-export-data.md#export-data-to-azure-app-service) to the *Application settings* of your App Service or Azure Functions without selecting the export-as-reference option. Export your data again every time you make new changes in App Configuration if you want your application to pick up the changes. A CLI sketch follows this list.
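Either path can be scripted. A minimal sketch of the export with placeholder names (the reference-style export described above is selected through an additional option on the same command, per the linked article):

```azurecli
# Push key-values from an App Configuration store into an App Service's application settings.
az appconfig kv export --name <app-config-store-name> \
  --destination appservice \
  --appservice-account <app-service-resource-id>
```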
## Reduce requests made to App Configuration
azure-cache-for-redis Cache Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-failover.md
Title: Failover and patching - Azure Cache for Redis description: Learn about failover, patching, and the update process for Azure Cache for Redis. + Previously updated : 03/15/2022 Last updated : 11/16/2022+
An *unplanned failover* might happen because of hardware failure, network failur
The Azure Cache for Redis service regularly updates your cache with the latest platform features and fixes. To patch a cache, the service follows these steps:
-1. The management service selects one node to be patched.
-1. If the selected node is a primary node, the corresponding replica node cooperatively promotes itself. This promotion is considered a planned failover.
-1. The selected node reboots to take the new changes and comes back up as a replica node.
+1. The service patches the replica node first.
+1. The patched replica cooperatively promotes itself to primary. This promotion is considered a planned failover.
+1. The former primary node reboots to take the new changes and comes back up as a replica node.
1. The replica node connects to the primary node and synchronizes data. 1. When the data sync is complete, the patching process repeats for the remaining nodes.
azure-cache-for-redis Cache How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md
Previously updated : 09/29/2022- Last updated : 11/16/2022+ # How to upgrade an existing Redis 4 cache to Redis 6
-Azure Cache for Redis supports upgrading the version of your Azure Cache for Redis from Redis 4 to Redis 6. Upgrading is permanent, and it might cause a brief connection issue similar to regular monthly maintenance. As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading.
+Azure Cache for Redis supports upgrading the version of your Azure Cache for Redis from Redis 4 to Redis 6. Upgrading is similar to regular monthly maintenance and follows the same pattern: first, the Redis version on the replica node is updated, followed by an update to the primary node. Your client application should treat the upgrade operation exactly like a planned maintenance event.
+
+As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading.
For more details on how to export, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
For more details on how to export, see [Import and Export data in Azure Cache fo
### Limitations -- Upgrading a Basic tier cache results in brief unavailability and data loss.
+- When you upgrade a cache in the Basic tier, the cache is unavailable for several minutes and the upgrade results in data loss.
- Upgrading on geo-replicated cache isn't supported. You must manually unlink the cache instances before upgrading. - Upgrading a cache with a dependency on Cloud Services isn't supported. You should migrate your cache instance to virtual machine scale set before upgrading. For more information, see [Caches with a dependency on Cloud Services (classic)](./cache-faq.yml) for details on cloud services hosted caches.
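When none of these limitations apply and you're ready to proceed, the upgrade itself is a single update call. A sketch, assuming the CLI exposes the Redis version as a settable property and using placeholder names:

```azurecli
# Start the in-place upgrade from Redis 4 to Redis 6.
az redis update --name <cache-name> --resource-group <resource-group> --set redisVersion=6
```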
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
You can exclude certain types of telemetry from sampling. In this example, data
```
-For more information, see [Sampling in Application Insights](../azure-monitor/app/sampling.md).
+If your project takes a dependency on the Application Insights SDK to do manual telemetry tracking, you may experience strange behavior if your sampling configuration differs from the sampling configuration in your function app. In such cases, use the same sampling configuration as the function app. For more information, see [Sampling in Application Insights](../azure-monitor/app/sampling.md).
## Enable SQL query collection
azure-functions Functions Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-monitoring.md
Typically, you create an Application Insights instance when you create your func
> [!IMPORTANT] > Sovereign clouds, such as Azure Government, require the use of the Application Insights connection string (`APPLICATIONINSIGHTS_CONNECTION_STRING`) instead of the instrumentation key. To learn more, see the [APPLICATIONINSIGHTS_CONNECTION_STRING reference](functions-app-settings.md#applicationinsights_connection_string).
+The following table details the supported features of Application Insights available for monitoring your function apps:
+
+| Azure Functions runtime version | 1.x | 2.x+ |
+|--|:-:|:-:|
+| | | |
+| **Automatic collection of** | | |
+| &bull; Requests | ✓ | ✓ |
+| &bull; Exceptions | ✓ | ✓ |
+| &bull; Performance Counters | ✓ | ✓ |
+| &bull; Dependencies | | |
+| &nbsp;&nbsp;&nbsp;&mdash; HTTP | | ✓ |
+| &nbsp;&nbsp;&nbsp;&mdash; Service Bus| | ✓ |
+| &nbsp;&nbsp;&nbsp;&mdash; Event Hubs | | ✓ |
+| &nbsp;&nbsp;&nbsp;&mdash; SQL\* | | ✓ |
+| | | |
+| **Supported features** | | |
+| &bull; QuickPulse/LiveMetrics | Yes | Yes |
+| &nbsp;&nbsp;&nbsp;&mdash; Secure Control Channel | | Yes |
+| &bull; Sampling | Yes | Yes |
+| &bull; Heartbeats | | Yes |
+| | | |
+| **Correlation** | | |
+| &bull; Service Bus | | Yes |
+| &bull; Event Hubs | | Yes |
+| | | |
+| **Configurable** | | |
+| &bull;[Fully configurable](#custom-telemetry-data) | | Yes |
+
+\* To enable the collection of SQL query string text, see [Enable SQL query collection](./configure-monitoring.md#enable-sql-query-collection).
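For context, that collection is a host.json toggle. A sketch of the relevant fragment, assuming the `dependencyTrackingOptions` setting names described in the linked article:

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "enableDependencyTracking": true,
      "dependencyTrackingOptions": {
        "enableSqlCommandTextInstrumentation": true
      }
    }
  }
}
```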
+ ## Collecting telemetry data With Application Insights integration enabled, telemetry data is sent to your connected Application Insights instance. This data includes logs generated by the Functions host, traces written from your functions code, and performance data.
In addition to automatic dependency data collection, you can also use one of the
+ [Log custom telemetry in JavaScript functions](functions-reference-node.md#log-custom-telemetry) + [Log custom telemetry in Python functions](functions-reference-python.md#log-custom-telemetry)
+### Performance Counters
+
+Automatic collection of Performance Counters isn't supported when running on Linux.
+ ## Writing to logs The way that you write to logs and the APIs you use depend on the language of your function app project.
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).** 2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
- 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** blade from left menu > 'AzureMonitorLinuxAgent'should show up with Status: 'Provisioning succeeded'
+ 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** from the pane on the left > 'AzureMonitorLinuxAgent' should show up with Status: 'Provisioning succeeded'
2. If you don't see the extension listed, check if machine can reach Azure and find the extension to install using the command below: ```azurecli az vm extension image list-versions --location <machine-region> --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
4. **Verify that the DCR exists and is associated with the virtual machine:** 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
- 2. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** blade from left menu > You should see the virtual machine listed here.
+ 2. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here.
3. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs. 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
azure-monitor Azure Monitor Agent Troubleshoot Windows Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-arc.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).** 2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
- 1. Open Azure portal > select your Arc-enabled server > Open **Settings** : **Extensions** blade from left menu > 'AzureMonitorWindowsAgent'should show up with Status: 'Succeeded'
+ 1. Open Azure portal > select your Arc-enabled server > Open **Settings** : **Extensions** from the pane on the left > 'AzureMonitorWindowsAgent' should show up with Status: 'Succeeded'
2. If not, check if the Arc agent (Connected Machine Agent) is able to connect to Azure and the extension service is running. ```azurecli azcmagent show
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
4. **Verify that the DCR exists and is associated with the Arc-enabled server:** 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace. 2. On your Arc-enabled server, verify the existence of the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.latest.xml`. If this file doesn't exist, the Arc-enabled server may not be associated with a DCR.
- 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** blade from left menu > You should see the Arc-enabled server listed here
+ 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the Arc-enabled server listed here
4. If not listed, click 'Add' and select your Arc-enabled server from the resource picker. Repeat across all DCRs. 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
azure-monitor Azure Monitor Agent Troubleshoot Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-vm.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).** 2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
- 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** blade from left menu > 'AzureMonitorWindowsAgent'should show up with Status: 'Provisioning succeeded'
+ 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** from the pane on the left > 'AzureMonitorWindowsAgent' should show up with Status: 'Provisioning succeeded'
2. If not, check if machine can reach Azure and find the extension to install using the command below: ```azurecli az vm extension image list-versions --location <machine-region> --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
- The virtual machine may not have Managed Identity enabled. [See here](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-during-creation-of-a-vm) on how to enable. - IMDS service is not running/accessible from the virtual machine. [Check if you can access IMDS from the machine](../../virtual-machines/windows/instance-metadata-service.md?tabs=windows). If not, [file a ticket](#file-a-ticket) with **Summary** as 'IMDS service not running' and **Problem type** as 'I need help configuring data collection from a VM'. - AMA cannot access IMDS. Check if you see IMDS errors in `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MAEventTable.tsf` file. If yes, [file a ticket](#file-a-ticket) with **Summary** as 'AMA cannot access IMDS' and **Problem type** as 'I need help configuring data collection from a VM'.
- 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** blade from left menu > You should see the virtual machine listed here
+ 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here
4. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs. 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
azure-monitor Data Sources Event Tracing Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-event-tracing-windows.md
Ensure the *Windows diagnostics extension* is [installed](./diagnostics-extensio
### Step 3: Configure ETW log collection
-1. Navigate to the **Diagnostic Settings** blade of the virtual machine
+1. From the pane on the left, navigate to the **Diagnostic Settings** for the virtual machine.
-2. Select the **Logs** tab
+2. Select the **Logs** tab.
3. Scroll down and enable the **Event tracing for Windows (ETW) events** option ![Screenshot of diagnostics settings](./media/data-sources-event-tracing-windows/enable-event-tracing-windows-collection.png)
azure-monitor Alerts Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-logic-apps.md
To trigger your Logic app, create an action group, then create an alert that use
1. Select **OK**. 1. Enter a name in the **Name** field. 1. Select **Review + create**, then **Create**. ## Test your action group
The following email will be sent to the specified account:
1. Select your action group from the list. 1. Select **Select**. 1. Finish the creation of your rule.
- :::image type="content" source="./media/alerts-logic-apps/select-action-groups.png" alt-text="A screenshot showing the actions tab of the create rules page and the select action groups blade.":::
+ :::image type="content" source="./media/alerts-logic-apps/select-action-groups.png" alt-text="A screenshot showing the actions tab of the create rules page and the select action groups pane.":::
## Next steps
azure-monitor Alerts Smart Detections Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-smart-detections-migration.md
A new set of alert rules is created when migrating an Application Insights resou
| Potential security issue detected (preview) | *discontinued* <sup>(3)</sup> | | Abnormal rise in daily data volume (preview) | *discontinued* <sup>(3)</sup> |
-<sup>(1)</sup> Name of rule as appears in smart detection Settings blade
+<sup>(1)</sup> Name of rule as appears in smart detection Settings pane
<sup>(2)</sup> Name of new alert rule after migration <sup>(3)</sup> These smart detection capabilities aren't converted to alerts, because of low usage and reassessment of detection effectiveness. These detectors will no longer be supported for this resource once its migration is completed.
azure-monitor Proactive Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-diagnostics.md
Smart detection automatically warns you of potential performance problems and failure anomalies in your web application. It performs proactive analysis of the telemetry that your app sends to [Application Insights](../app/app-insights-overview.md). If there is a sudden rise in failure rates, or abnormal patterns in client or server performance, you get an alert. This feature needs no configuration. It operates if your application sends enough telemetry.
-You can access the detections issued by smart detection both from the emails you receive, and from the smart detection blade.
+You can access the detections issued by smart detection both from the emails you receive, and from the smart detection pane.
## Review your smart detections You can discover detections in two ways:
You can discover detections in two ways:
![Email alert](./media/proactive-diagnostics/03.png) Click the large button to open more detail in the portal.
-* **The smart detection blade** in Application Insights. Select **Smart detection** under the **Investigate** menu to see a list of recent detections.
+* **The smart detection pane** in Application Insights. Select **Smart detection** under the **Investigate** menu to see a list of recent detections.
![View recent detections](./media/proactive-diagnostics/04.png)
Smart detection detects and notifies about various issues, such as:
All smart detection rules, except for rules marked as _preview_, are configured by default to send email notifications when detections are found.
-Configuring email notifications for a specific smart detection rule can be done by opening the smart detection **Settings** blade and selecting the rule, which will open the **Edit rule** blade.
+Configuring email notifications for a specific smart detection rule can be done by opening the smart detection **Settings** pane and selecting the rule, which will open the **Edit rule** pane.
Alternatively, you can change the configuration using Azure Resource Manager templates. For more information, see [Manage Application Insights smart detection rules using Azure Resource Manager templates](./proactive-arm-config.md).
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-performance-diagnostics.md
The notifications include diagnostic information. Here's an example:
2. **Scope**. Is the problem affecting all traffic, or just some pages? Is it restricted to particular browsers or locations? This information can be obtained from the notification. 3. **Diagnose**. Often, the diagnostic information in the notification will suggest the nature of the problem. For example, if response time slows down when request rate is high, it may indicate that your server or dependencies are beyond their capacity.
- Otherwise, open the Performance blade in Application Insights. You'll find there [Profiler](../profiler/profiler.md) data. If exceptions are thrown, you can also try the [snapshot debugger](../snapshot-debugger/snapshot-debugger.md).
+ Otherwise, open the Performance pane in Application Insights. There you'll find [Profiler](../profiler/profiler.md) data. If exceptions are thrown, you can also try the [snapshot debugger](../snapshot-debugger/snapshot-debugger.md).
## Configure Email Notifications
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Filters can be reused in two ways:
later time may be different from the one observed when the link was created. -- The dashboard pin :::image type="content" source="media/app-map/image-10.png" alt-text="A screenshot displaying the dashboard pin button."::: is located next to the title bar of the Application Map blade. This button pins the map to a dashboard, along with the filters applied to it. This action can be useful for filters that are frequently interesting. As an example, the user can pin a map with "Error connector" filter applied to it, and the dashboard view will only show nodes that have errors in their HTTP calls.
+- The dashboard pin :::image type="content" source="media/app-map/image-10.png" alt-text="A screenshot displaying the dashboard pin button."::: is located next to the title bar of the Application Map pane. This button pins the map to a dashboard, along with the filters applied to it. This action can be useful for filters that you use frequently. As an example, the user can pin a map with the "Error connector" filter applied to it, and the dashboard view will only show nodes that have errors in their HTTP calls.
#### Filter usage scenarios
azure-monitor Azure Functions Supported Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-functions-supported-features.md
- Title: Azure Application Insights - Azure Functions Supported Features
-description: Application Insights Supported Features for Azure Functions
- Previously updated : 4/23/2019---
-# Application Insights for Azure Functions supported features
-
-Azure Functions offers [built-in integration](../../azure-functions/functions-monitoring.md) with Application Insights, which is available through the ILogger Interface. Below is the list of currently supported features. Review Azure Functions' guide for [Getting started](../../azure-functions/configure-monitoring.md#enable-application-insights-integration).
-
-For more information about Functions runtime versions, see [here](../../azure-functions/functions-versions.md).
-
-For more information about compatible versions of Application Insights, see [Dependencies](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Logging.ApplicationInsights/).
-
-## Supported features
-
-| Azure Functions | V1 | V2 & V3 |
-|--|||
-| | | |
-| **Automatic collection of** | | |
-| &bull; Requests | Yes | Yes |
-| &bull; Exceptions | Yes | Yes |
-| &bull; Performance Counters | Yes | Yes |
-| &bull; Dependencies | | |
-| &nbsp;&nbsp;&nbsp;&mdash; HTTP | | Yes |
-| &nbsp;&nbsp;&nbsp;&mdash; ServiceBus| | Yes |
-| &nbsp;&nbsp;&nbsp;&mdash; EventHub | | Yes |
-| &nbsp;&nbsp;&nbsp;&mdash; SQL | | Yes |
-| | | |
-| **Supported features** | | |
-| &bull; QuickPulse/LiveMetrics | Yes | Yes |
-| &nbsp;&nbsp;&nbsp;&mdash; Secure Control Channel | | Yes |
-| &bull; Sampling | Yes | Yes |
-| &bull; Heartbeats | | Yes |
-| | | |
-| **Correlation** | | |
-| &bull; ServiceBus | | Yes |
-| &bull; EventHub | | Yes |
-| | | |
-| **Configurable** | | |
-| &bull;Fully configurable.<br/>See [Azure Functions](https://github.com/Microsoft/ApplicationInsights-aspnetcore/issues/759#issuecomment-426687852) for instructions.<br/>See [ASP.NET Core](https://github.com/Microsoft/ApplicationInsights-aspnetcore/wiki/Custom-Configuration) for all options. | | Yes |
-
-## Performance Counters
-
-Automatic collection of Performance Counters only work Windows machines.
-
-## Live Metrics & Secure Control Channel
-
-The custom filters criteria you specify are sent back to the Live Metrics component in the Application Insights SDK. The filters could potentially contain sensitive information such as customerIDs. You can make the channel secure with a secret API key. See [Secure the control channel](./live-stream.md#secure-the-control-channel) for instructions.
-
-## Sampling
-
-Azure Functions enables Sampling by default in their configuration. For more information, see [Configure Sampling](../../azure-functions/configure-monitoring.md#configure-sampling).
-
-If your project takes a dependency on the Application Insights SDK to do manual telemetry tracking, you may experience strange behavior if your sampling configuration is different than the Functions' sampling configuration.
-
-We recommend using the same configuration as Functions. With **Functions v2**, you can get the same configuration using dependency injection in your constructor:
-
-```csharp
-using Microsoft.ApplicationInsights;
-using Microsoft.ApplicationInsights.Extensibility;
-
-public class Function1
-{
-
- private readonly TelemetryClient telemetryClient;
-
- public Function1(TelemetryConfiguration configuration)
- {
- this.telemetryClient = new TelemetryClient(configuration);
- }
-
- [FunctionName("Function1")]
- public async Task<IActionResult> Run(
- [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req, ILogger logger)
- {
- this.telemetryClient.TrackTrace("C# HTTP trigger function processed a request.");
- }
-}
-```
azure-monitor Java 2X Collectd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-collectd.md
Restart collectd according to its [manual](https://collectd.org/wiki/index.php/F
## View the data in Application Insights In your Application Insights resource, open [Metrics and add charts][metrics], selecting the metrics you want to see from the Custom category.
-By default, the metrics are aggregated across all host machines from which the metrics were collected. To view the metrics per host, in the Chart details blade, turn on Grouping and then choose to group by CollectD-Host.
+By default, the metrics are aggregated across all host machines from which the metrics were collected. To view the metrics per host, in the Chart details pane, turn on Grouping and then choose to group by CollectD-Host.
## To exclude upload of specific statistics By default, the Application Insights plugin sends all the data collected by all the enabled collectd 'read' plugins.
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-get-started.md
Either run it in debug mode on your development machine, or publish to your serv
## View your telemetry in Application Insights Return to your Application Insights resource in [Microsoft Azure portal](https://portal.azure.com).
-HTTP requests data appears on the overview blade. (If it isn't there, wait a few seconds and then click Refresh.)
+HTTP requests data appears on the overview pane. (If it isn't there, wait a few seconds and then click Refresh.)
![Screenshot of overview sample data](./media/java-get-started/overview-graphs.png)
azure-monitor Java 2X Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-trace-logs.md
To start getting traces, merge the relevant snippet of code to the Log4J or Logb
The Application Insights appenders can be referenced by any configured logger, and not necessarily by the root logger (as shown in the code samples above). ## Explore your traces in the Application Insights portal
-Now that you've configured your project to send traces to Application Insights, you can view and search these traces in the Application Insights portal, in the [Search][diagnostic] blade.
+Now that you've configured your project to send traces to Application Insights, you can view and search these traces in the Application Insights portal, in the [Search][diagnostic] pane.
Exceptions submitted via loggers will be displayed on the portal as Exception Telemetry.
azure-monitor Java Standalone Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md
The Application Insights Java profiler uses the JFR profiler provided by the JVM
This data is gathered on demand when trigger conditions are met. The available triggers are thresholds over CPU usage and Memory consumption.
-When a threshold is reached, a profile of the configured type and duration is gathered and uploaded. This profile is then visible within the performance blade of the associated Application Insights Portal UI.
+When a threshold is reached, a profile of the configured type and duration is gathered and uploaded. This profile is then visible within the performance pane of the associated Application Insights Portal UI.
> [!WARNING] > The JFR profiler by default executes the "profile-without-env-data" profile. A JFR file is a series of events emitted by the JVM. The "profile-without-env-data" configuration is similar to the "profile" configuration that ships with the JVM, but it disables some events that can contain sensitive deployment information, such as environment variables, arguments provided to the JVM, and processes running on the system.
The following steps will guide you through enabling the profiling component on t
1. Configure the resource thresholds that will cause a profile to be collected: 1. Browse to the Performance -> Profiler section of the Application Insights instance.
- :::image type="content" source="./media/java-standalone-profiler/performance-blade.png" alt-text="Screenshot of the link to open performance blade." lightbox="media/java-standalone-profiler/performance-blade.png":::
- :::image type="content" source="./media/java-standalone-profiler/profiler-button.png" alt-text="Screenshot of the Profiler button from the Performance blade." lightbox="media/java-standalone-profiler/profiler-button.png":::
+ :::image type="content" source="./media/java-standalone-profiler/performance-blade.png" alt-text="Screenshot of the link to open performance pane." lightbox="media/java-standalone-profiler/performance-blade.png":::
+ :::image type="content" source="./media/java-standalone-profiler/profiler-button.png" alt-text="Screenshot of the Profiler button from the Performance pane." lightbox="media/java-standalone-profiler/profiler-button.png":::
2. Select "Triggers"
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
The end-to-end diagnostics and the application map provide visibility into one s
### How to enable distributed tracing for Java Function apps
-Navigate to the functions app Overview blade and go to configurations. Under Application Settings, click "+ New application setting".
+Navigate to the function app's Overview pane and go to **Configuration**. Under Application Settings, click "+ New application setting".
> [!div class="mx-imgBorder"] > ![Under Settings, add new application settings](./media//functions/create-new-setting.png)
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
for i in range(100):
> [!TIP]
-> If you're not sure where to set the sampling rate, start at 5% (i.e., 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance blades. A higher rate generally results in higher accuracy. However, ANY sampling will affect accuracy so we recommend alerting on [OpenTelemetry metrics](#metrics), which are unaffected by sampling.
+> If you're not sure where to set the sampling rate, start at 5% (i.e., 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance panes. A higher rate generally results in higher accuracy. However, ANY sampling will affect accuracy so we recommend alerting on [OpenTelemetry metrics](#metrics), which are unaffected by sampling.
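If you're setting the ratio in Python with the Azure Monitor OpenTelemetry exporter, the shape is roughly the following sketch (the `ApplicationInsightsSampler` class name and the 5% starting ratio are assumptions to adjust for your setup):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from azure.monitor.opentelemetry.exporter import ApplicationInsightsSampler

# Keep roughly 5% of traces; revisit after checking the failures and performance panes.
sampler = ApplicationInsightsSampler(sampling_ratio=0.05)
trace.set_tracer_provider(TracerProvider(sampler=sampler))
```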
## Instrumentation libraries
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
This article explains the difference between "traditional" Application Insights
## Log-based Metrics
-In the past, the application monitoring telemetry data model in Application Insights was solely based on a small number of predefined types of events, such as requests, exceptions, dependency calls, page views, etc. Developers can use the SDK to either emit these events manually (by writing code that explicitly invokes the SDK) or they can rely on the automatic collection of events from auto-instrumentation. In either case, the Application Insights backend stores all collected events as logs, and the Application Insights blades in the Azure portal act as an analytical and diagnostic tool for visualizing event-based data from logs.
+In the past, the application monitoring telemetry data model in Application Insights was solely based on a small number of predefined types of events, such as requests, exceptions, dependency calls, page views, etc. Developers can use the SDK to either emit these events manually (by writing code that explicitly invokes the SDK) or they can rely on the automatic collection of events from auto-instrumentation. In either case, the Application Insights backend stores all collected events as logs, and the Application Insights panes in the Azure portal act as an analytical and diagnostic tool for visualizing event-based data from logs.
Using logs to retain a complete set of events can bring great analytical and diagnostic value. For example, you can get an exact count of requests to a particular URL with the number of distinct users who made these calls. Or you can get detailed diagnostic traces, including exceptions and dependency calls for any user session. Having this type of information can significantly improve visibility into the application health and usage, allowing you to cut down the time necessary to diagnose issues with an app.
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Use this type of sampling if your app often goes over its monthly quota and you
Set the sampling rate in the Usage and estimated costs page:
-![From the application's Overview blade, click Settings, Quota, Samples, then select a sampling rate, and click Update.](./media/sampling/data-sampling.png)
+![From the application's Overview pane, click Settings, Quota, Samples, then select a sampling rate, and click Update.](./media/sampling/data-sampling.png)
Like other types of sampling, the algorithm retains related telemetry items. For example, when you're inspecting the telemetry in Search, you'll be able to find the request related to a particular exception. Metric counts such as request rate and exception rate are correctly retained.
As the application is scaled up, it may be processing dozens, hundreds, or thous
As sampling rates increase, log-based query accuracy decreases and results are usually inflated. This only impacts the accuracy of log-based queries when sampling is enabled and the sample rates are in a higher range (~ 60%). The impact varies based on telemetry types, telemetry counts per operation, as well as other factors.
-To address the problems introduced by sampling pre-aggregated metrics are used in the SDKs. Additional details about these metrics, log-based and pre-aggregated, can be referenced in [Azure Application Insights - Azure Monitor | Microsoft Docs](./pre-aggregated-metrics-log-metrics.md#sdk-supported-pre-aggregated-metrics-table). Relevant properties of the logged data are identified and statistics extracted before sampling occurs. To avoid resource and cost issues, metrics are aggregated. The resulting aggregate data is represented by only a few metric telemetry items per minute, instead of potentially thousands of event telemetry items. These metrics calculate the 25 requests from the example and send a metric to the MDM account reporting "this web app processed 25 requests", but the sent request telemetry record will have an `itemCount` of 100. These pre-aggregated metrics report the correct numbers and can be relied upon when sampling affects the log-based queries results. They can be viewed on the Metrics blade of the Application Insights portal.
+To address the problems introduced by sampling, pre-aggregated metrics are used in the SDKs. Additional details about these metrics, log-based and pre-aggregated, can be referenced in [Azure Application Insights - Azure Monitor | Microsoft Docs](./pre-aggregated-metrics-log-metrics.md#sdk-supported-pre-aggregated-metrics-table). Relevant properties of the logged data are identified and statistics extracted before sampling occurs. To avoid resource and cost issues, metrics are aggregated. The resulting aggregate data is represented by only a few metric telemetry items per minute, instead of potentially thousands of event telemetry items. These metrics calculate the 25 requests from the example and send a metric to the MDM account reporting "this web app processed 25 requests", but the sent request telemetry record will have an `itemCount` of 100. These pre-aggregated metrics report the correct numbers and can be relied upon when sampling affects the log-based query results. They can be viewed on the Metrics pane of the Application Insights portal.
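When you do need log-based counts while sampling is on, summing `itemCount` instead of counting rows compensates for the records that were dropped. The following is a hedged sketch; it assumes a workspace-based Application Insights resource, where the request table is `AppRequests` and the column is `ItemCount`, and the workspace GUID is a placeholder.

```bash
# Sketch: compare a raw row count with a sampling-corrected count.
# Table/column names assume a workspace-based Application Insights resource.
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "AppRequests | summarize rawRows = count(), correctedCount = sum(ItemCount)"
```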
## Frequently asked questions
azure-monitor Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sharepoint.md
Insert a web part and embed the code snippet in it.
## View data about your app Redeploy your app.
-Return to your application blade in the [Azure portal](https://portal.azure.com).
+Return to your application pane in the [Azure portal](https://portal.azure.com).
The first events will appear in Search.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
The decision whether to configure a table for Basic Logs is based on the followi
- You only require basic queries of the data using a limited version of the query language. - The cost savings for data ingestion over a month exceed the expected cost for any expected queries
-See [Query Basic Logs in Azure Monitor (preview)](.//logs/basic-logs-query.md) for information on query limitations. See [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md) for more information about Basic Logs.
+See [Query Basic Logs in Azure Monitor](.//logs/basic-logs-query.md) for information on query limitations. See [Configure Basic Logs in Azure Monitor](logs/basic-logs-configure.md) for more information about Basic Logs.
## Reduce the amount of data collected
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
If you are utilizing [Prometheus metric scraping](container-insights-prometheus.
### Configure Basic Logs
-You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs (preview)](../best-practices-cost.md#configure-basic-logs). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
+You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs](../best-practices-cost.md#configure-basic-logs). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
You must be on the ContainerLogV2 schema to configure Basic Logs. For more information, see [Enable the ContainerLogV2 schema (preview)](container-insights-logging-v2.md).
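As a sketch, switching the ContainerLogV2 table to the Basic Logs plan can also be done from the Azure CLI; the resource group and workspace names are placeholders, and the command assumes a CLI version that supports the table `--plan` parameter.

```bash
# Sketch: move the ContainerLogV2 table to the Basic Logs plan.
# Resource group and workspace names are placeholders.
az monitor log-analytics workspace table update \
  --resource-group <resource-group-name> \
  --workspace-name <workspace-name> \
  --name ContainerLogV2 \
  --plan Basic
```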
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
>[!IMPORTANT] > If you are deploying Azure Monitor on a Kubernetes cluster running on top of Azure Stack Edge, then the Azure CLI option needs to be followed instead of the Azure portal option as a custom mount path needs to be set for these clusters.
-### Onboarding from the Azure Arc-enabled Kubernetes resource blade
+### Onboarding from the Azure Arc-enabled Kubernetes resource pane
1. In the Azure portal, select the Azure Arc-enabled Kubernetes cluster that you wish to monitor.
-2. Select the 'Insights' item under the 'Monitoring' section of the resource blade.
+2. From the resource pane on the left, select the 'Insights' item under the 'Monitoring' section.
3. On the onboarding page, select the 'Configure Azure Monitor' button
Once you have successfully created the Azure Monitor extension for your Azure Ar
### [Azure portal](#tab/verify-portal) 1. In the Azure portal, select the Azure Arc-enabled Kubernetes cluster with the extension installing
-2. Select the 'Extensions' item under the 'Settings' section of the resource blade
+2. From the resource pane on the left, select the 'Extensions' item under the 'Settings' section.
3. You should see an extension with the name 'azuremonitor-containers' listed, with the listed status in the 'Install status' column ### [CLI](#tab/verify-cli) Run the following command to show the latest status of the `Microsoft.AzureMonitor.Containers` extension
azure-monitor Collect Custom Metrics Guestos Vm Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-classic.md
The process that's outlined in this article only works on classic virtual machin
1. When you're creating this VM, choose the option to create a new classic storage account. We use this storage account in later steps.
-1. In the Azure portal, go to the **Storage accounts** resource blade. Select **Keys**, and take note of the storage account name and storage account key. You need this information in later steps.
+1. In the Azure portal, go to the **Storage accounts** resource pane. Select **Keys**, and take note of the storage account name and storage account key. You need this information in later steps.
![Storage access keys](./media/collect-custom-metrics-guestos-vm-classic/storage-access-keys.png) ## Create a service principal
Give this app "Monitoring Metrics Publisher" permissions to the resource tha
1. On the left menu, select **Monitor.**
-1. On the **Monitor** blade, select **Metrics**.
+1. On the **Monitor** pane on the left, select **Metrics**.
![Navigate metrics](./media/collect-custom-metrics-guestos-vm-classic/navigate-metrics.png)
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Previously updated : 09/13/2022 Last updated : 11/17/2022
This latest update adds a new column and reorders the metrics to be alphabetical
|PendingCPU|Yes|Pending CPU|Count|Maximum|Pending CPU Requests in YARN|No Dimensions| |PendingMemory|Yes|Pending Memory|Count|Maximum|Pending Memory Requests in YARN|No Dimensions|
+> [!NOTE]
+> NumActiveWorkers is supported only if YARN is installed and the Resource Manager is running.
## Microsoft.HealthcareApis/services
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
See the documentation for different services and solutions for any unique billin
In addition to the pay-as-you-go model, Log Analytics has *commitment tiers*, which can save you as much as 30 percent compared to the pay-as-you-go price. With commitment tier pricing, you can commit to buy data ingestion for a workspace, starting at 100 GB per day, at a lower price than pay-as-you-go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected. - During the commitment period, you can change to a higher commitment tier, which restarts the 31-day commitment period. You can't move back to pay-as-you-go or to a lower commitment tier until after you finish the commitment period.-- At the end of the commitment period, the workspace retains the selected commitment tier, and the workspace can be moved to pay-as-you-go or to a different commitment tier at any time.
+- At the end of the commitment period, the workspace retains the selected commitment tier, and the workspace can be moved to Pay-As-You-Go or to a lower commitment tier at any time.
+- If a workspace is inadvertently moved into a commitment tier, contact Microsoft Support to reset the commitment period so you can move back to the Pay-As-You-Go pricing tier.
Billing for the commitment tiers is done per workspace on a daily basis. If the workspace is part of a [dedicated cluster](#dedicated-clusters), the billing is done for the cluster. See the following "Dedicated clusters" section. For a list of the commitment tiers and their prices, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
The *principalId* GUID is generated by the managed identity service at cluster c
## Link a workspace to a cluster
-When a Log Analytics workspace is linked to a dedicated cluster, new data ingested to the workspace is routed to the new cluster while existing data remains on the existing cluster. If the dedicated cluster is encrypted using customer-managed keys (CMK), only new data is encrypted with the key. The system abstracts this difference, so you can query the workspace as usual while the system performs cross-cluster queries in the background.
+When a Log Analytics workspace is linked to a dedicated cluster, new data ingested to the workspace is routed to the cluster, while existing data remains in the existing Log Analytics cluster. If the dedicated cluster is configured with customer-managed keys (CMK), newly ingested data is encrypted with your key. The system abstracts the data location; you can query data as usual while the system performs cross-cluster queries in the background.
-A cluster can be linked to up to 1,000 workspaces. Linked workspaces are located in the same region as the cluster. To protect the system backend and avoid fragmentation of data, a workspace can't be linked to a cluster more than twice a month.
+A cluster can be linked to up to 1,000 workspaces. Linked workspaces can be located in the same region as the cluster. To prevent data fragmentation, a workspace can't be linked to a cluster more than twice a month.
-To perform the link operation, you need to have 'write' permissions to both the workspace and the cluster resource:
+You need 'write' permissions on both the workspace and the cluster resource to perform the link operation:
- In the workspace: *Microsoft.OperationalInsights/workspaces/write* - In the cluster resource: *Microsoft.OperationalInsights/clusters/write*
-Other than the billing aspects, the linked workspace keeps its own settings such as the length of data retention.
+Other than the billing aspects, the configuration of a linked workspace remains unchanged, including its data retention settings.
The workspace and the cluster can be in different subscriptions. It's possible for the workspace and cluster to be in different tenants if Azure Lighthouse is used to map both of them to a single tenant.
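A hedged sketch of the link operation with the Azure CLI follows. It assumes the linked-service approach with the fixed name `cluster` and uses placeholder resource IDs; confirm the exact command against the current CLI reference before relying on it.

```bash
# Sketch: link a workspace to a dedicated cluster. The caller needs 'write'
# permissions on both resources; the cluster resource ID is a placeholder.
az monitor log-analytics workspace linked-service create \
  --resource-group <resource-group-name> \
  --workspace-name <workspace-name> \
  --name cluster \
  --write-access-resource-id "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>"
```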
Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -Cl
``` The *billingType* property determines the billing attribution for the cluster and its data:-- *Cluster* (default) -- The billing is attributed to the Cluster resource-- *Workspaces* -- The billing is attributed to linked workspaces proportionally. When data volume from all workspaces is below the Commitment Tier level, the remaining volume is attributed to the cluster
+- *Cluster* (default) -- billing is attributed to the Cluster resource
+- *Workspaces* -- billing is attributed to linked workspaces proportionally. When data volume from all linked workspaces is below Commitment Tier level, the bill for the remaining volume is attributed to the cluster
**REST**
Content-type: application/json
### Unlink a workspace from cluster
-You can unlink a workspace from a cluster, and new data to workspace isn't ingested to cluster. Also, the workspace pricing tier is set to per-GB.
-Old data of the unlinked workspace might be left on the cluster. If this data is encrypted using customer-managed keys (CMK), the Key Vault secrets are kept. The system is abstracts this change from Log Analytics users. Users can just query the workspace as usual. The system performs cross-cluster queries on the backend as needed with no indication to users.
+You can unlink a workspace from a cluster at any time. The workspace pricing tier is changed to per GB, data ingested to the cluster before the unlink operation remains in the cluster, and new data sent to the workspace is ingested to Log Analytics storage. You can query data as usual, and the service performs cross-cluster queries seamlessly. If the cluster was configured with customer-managed keys (CMK), data remains encrypted with your key and accessible, as long as your key and permissions to Key Vault remain.
-> [!WARNING]
-> There is a limit of two link operations for a specific workspace within a month. Take time to consider and plan unlinking actions accordingly.
+> [!NOTE]
+> There is a limit of two link operations for a specific workspace within a month to prevent data distribution across clusters. Contact support if you reach the limit.
Use the following commands to unlink a workspace from cluster:
Remove-AzOperationalInsightsLinkedService -ResourceGroupName "resource-group-nam
## Delete cluster
-It's recommended that you unlink all workspaces from a dedicated cluster before deleting it. You need to have *write* permissions on the cluster resource. When deleting a cluster, you're losing access to all data ingested to the cluster from linked workspaces and from workspaces that were linked previously. This operation isn't reversible. If you delete your cluster when workspaces are linked, these get unlinked automatically and new data get ingested to Log Analytics storage instead.
+You need to have *write* permissions on the cluster resource.
-A cluster resource that was deleted in the last 14 days is kept in soft-delete state and its name remained reserved. After the soft-delete period, the cluster is permanently deleted and its name can be reused to create a cluster.
+When you delete a cluster, you lose access to all data in the cluster that was ingested from workspaces that are linked to it or were linked previously. This operation isn't reversible. If you delete your cluster while workspaces are linked, the workspaces are automatically unlinked from the cluster before the deletion, and new data sent to the workspaces is ingested to Log Analytics storage. If a workspace's data retention is longer than the period it was linked to the cluster, you can query the workspace for the time ranges before the link and after the unlink, and the service performs cross-cluster queries seamlessly.
-> [!WARNING]
-> - The recovery of soft-deleted clusters isn't supported and it can't be recovered once deleted.
-> - There is a limit of 4 clusters per subscription. Both active and soft-deleted clusters are counted as part of this. Customers shouldn't create recurrent procedures that create and delete clusters. It has a significant impact on Log Analytics backend systems.
+> [!NOTE]
+> - There is a limit of seven clusters per subscription: five active, plus two that were deleted in the past 14 days.
+> - A cluster's name remains reserved for 14 days after deletion and can't be used to create a new cluster during that period. After 14 days, the name is released and can be reused.
Use the following commands to delete a cluster:
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
The `/read` permission is usually granted from a role that includes _\*/read or_
In addition to using the built-in roles for a Log Analytics workspace, you can create custom roles to assign more granular permissions. Here are some common examples.
-Grant a user access to log data from their resources:
+**Example 1: Grant a user access to log data from their resources.**
- Configure the workspace access control mode to *use workspace or resource permissions*. - Grant users `*/read` or `Microsoft.Insights/logs/*/read` permissions to their resources. If they're already assigned the [Log Analytics Reader](../../role-based-access-control/built-in-roles.md#reader) role on the workspace, it's sufficient.
-Grant a user access to log data from their resources and configure their resources to send logs to the workspace:
+**Example 2: Grant a user access to log data from their resources and configure their resources to send logs to the workspace.**
- Configure the workspace access control mode to *use workspace or resource permissions*. - Grant users the following permissions on the workspace: `Microsoft.OperationalInsights/workspaces/read` and `Microsoft.OperationalInsights/workspaces/sharedKeys/action`. With these permissions, users can't perform any workspace-level queries. They can only enumerate the workspace and use it as a destination for diagnostic settings or agent configuration. - Grant users the following permissions to their resources: `Microsoft.Insights/logs/*/read` and `Microsoft.Insights/diagnosticSettings/write`. If they're already assigned the [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#contributor) role, assigned the Reader role, or granted `*/read` permissions on this resource, it's sufficient.
-Grant a user access to log data from their resources without being able to read security events and send data:
+**Example 3: Grant a user access to log data from their resources without being able to read security events and send data.**
- Configure the workspace access control mode to *use workspace or resource permissions*. - Grant users the following permissions to their resources: `Microsoft.Insights/logs/*/read`. - Add the following NonAction to block users from reading the SecurityEvent type: `Microsoft.Insights/logs/SecurityEvent/read`. The NonAction shall be in the same custom role as the action that provides the read permission (`Microsoft.Insights/logs/*/read`). If the user inherits the read action from another role that's assigned to this resource or to the subscription or resource group, they could read all log types. This scenario is also true if they inherit `*/read` that exists, for example, with the Reader or Contributor role.
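As a sketch of this third example, the custom role could be defined as follows. The role name, description, scope, and file name are placeholders, and the "NonAction" mentioned above corresponds to the `NotActions` array in the role-definition JSON.

```bash
# Sketch: custom role that grants log read access on assigned resources but
# blocks the SecurityEvent table. Name, description, and scope are placeholders.
cat > log-reader-no-security.json <<'EOF'
{
  "Name": "Log Reader (no SecurityEvent)",
  "IsCustom": true,
  "Description": "Read resource logs except the SecurityEvent table.",
  "Actions": [ "Microsoft.Insights/logs/*/read" ],
  "NotActions": [ "Microsoft.Insights/logs/SecurityEvent/read" ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF

az role definition create --role-definition @log-reader-no-security.json
```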
-Grant a user access to log data from their resources and read all Azure AD sign-in and read Update Management solution log data from the workspace:
+**Example 4: Grant a user access to log data from their resources and read all Azure AD sign-in and read Update Management solution log data from the workspace.**
- Configure the workspace access control mode to *use workspace or resource permissions*. - Grant users the following permissions on the workspace:
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
na Previously updated : 11/14/2022 Last updated : 11/17/2022 # Manage availability zone volume placement for Azure NetApp Files
Azure NetApp Files lets you deploy new volumes in the logical availability zone
* VMs and Azure NetApp Files volumes are to be deployed separately, within the same logical availability zone to create zone alignment between VMs and Azure NetApp Files. The availability zone volume placement feature does not create zonal VMs upon volume creation, or vice versa.
-> [!IMPORTANT]
-> Once the volume is created using the availability zone volume placement feature, the volume has the same level of support as other volumes deployed in the subscription without this feature enabled. For example, if there is an issue with backup and restore on the volume, it will be supported because the problem is not with the availability zone volume placement feature itself.
## Register the feature
azure-netapp-files Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md
na Previously updated : 10/31/2022 Last updated : 11/17/2022 # Use availability zones for high availability in Azure NetApp Files (preview)
The use of high availability (HA) architectures with availability zones are now
Azure NetApp Files' [availability zone volume placement](manage-availability-zone-volume-placement.md) feature lets you deploy volumes in availability zones of your choice, in alignment with Azure compute and other services in the same zone. + All Virtual Machines within the region in (peered) VNets can access all Azure NetApp Files resources (blue arrows). Virtual Machines accessing Azure NetApp Files volumes in the same zone (green arrows) share the availability zone failure domain. Azure NetApp Files deployments will occur in the availability zone of choice if Azure NetApp Files is present in that availability zone and has sufficient capacity.
azure-portal Azure Portal Quickstart Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-quickstart-center.md
You can also select **Browse our full Azure catalog** to see all Azure learning
## Next steps * Learn more about Azure setup and migration in the [Microsoft Cloud Adoption Framework for Azure](/azure/architecture/cloud-adoption/).
-* Unlock your cloud skills with more [Learn modules]](/training/azure/).
+* Unlock your cloud skills with more [Learn modules](/training/azure/).
azure-resource-manager Bicep Functions Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-scope.md
Title: Bicep functions - scopes description: Describes the functions to use in a Bicep file to retrieve values about deployment scopes. Previously updated : 11/23/2021 Last updated : 11/17/2022 # Scope functions for Bicep
Returns an object used for setting the scope to the tenant.
Or
-Returns properties about the tenant for the current deployment.
+Returns the tenant of the user.
Namespace: [az](bicep-functions.md#namespaces-for-functions).
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
Title: Create & deploy template specs in Bicep
description: Describes how to create template specs in Bicep and share them with other users in your organization. Previously updated : 11/10/2022 Last updated : 11/17/2022 # Azure Resource Manager template specs in Bicep
To learn more about template specs, and for hands-on guidance, see [Publish libr
## Required permissions
-To create a template spec, you need **write** access to `Microsoft.Resources/templateSpecs` and `Microsoft.Resources/templateSpecs/versions`.
+There are two Azure built-in roles defined for template specs:
-To deploy a template spec, you need **read** access to `Microsoft.Resources/templateSpecs` and `Microsoft.Resources/templateSpecs/versions`. In addition, you also need the permissions for deploying a Bicep file. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
+- [Template Spec Reader](../../role-based-access-control//built-in-roles.md#template-spec-reader)
+- [Template Spec Contributor](../../role-based-access-control//built-in-roles.md#template-spec-contributor)
+
+In addition, you also need the permissions for deploying a Bicep file. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
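For instance, granting the Template Spec Reader role to a user at the resource group that holds your template specs might look like the following sketch; the assignee and scope are placeholders.

```bash
# Sketch: assign the built-in Template Spec Reader role at resource group scope.
# Assignee and resource names are placeholders.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Template Spec Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<template-spec-rg>"
```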
## Why use template specs?
az ts show \
## Deploy template spec
-After you've created the template spec, users with **read** access to the template spec can deploy it. For information about granting access, see [Tutorial: Grant a group access to Azure resources using Azure PowerShell](../../role-based-access-control/tutorial-role-assignments-group-powershell.md).
+After you've created the template spec, users with the [Template Spec Reader](#required-permissions) role can deploy it. In addition, you also need the permissions for deploying an ARM template. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
Template specs can be deployed through the portal, PowerShell, Azure CLI, or as a Bicep module in a larger template deployment. Users in an organization can deploy a template spec to any scope in Azure (resource group, subscription, management group, or tenant).
azure-resource-manager Template Functions Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-scope.md
Title: Template functions - scope description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about deployment scope. Previously updated : 03/10/2022 Last updated : 11/17/2022 # Scope functions for ARM templates
The following example shows the subscription function called in the outputs sect
`tenant()`
-Returns properties about the tenant for the current deployment.
+Returns the tenant of the user.
In Bicep, use the [tenant](../bicep/bicep-functions-scope.md#tenant) scope function.
azure-resource-manager Template Specs Create Portal Forms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs-create-portal-forms.md
Title: Create portal forms for template spec
-description: Learn how to create forms that are displayed in the Azure portal forms. Use the form to deploying a template spec
+description: Learn how to create forms that are displayed in the Azure portal. Use the form to deploy a template spec
Previously updated : 11/02/2021 Last updated : 11/15/2022 # Tutorial: Create Azure portal forms for a template spec
-To help users deploy a [template spec](template-specs.md), you can create a form that is displayed in the Azure portal. The form lets users provide values that are passed to the template spec as parameters.
+You can create a form that appears in the Azure portal to assist users in deploying a [template spec](template-specs.md). The form allows users to enter values that are passed as parameters to the template spec.
When you create the template spec, you package the form and Azure Resource Manager template (ARM template) together. Deploying the template spec through the portal automatically launches the form.
+The following screenshot shows a form opened in the Azure portal.
+ ## Prerequisites
Copy this file and save it locally. This tutorial assumes you've named it **keyv
}, "skuName": { "type": "string",
- "defaultValue": "Standard",
+ "defaultValue": "standard",
"allowedValues": [
- "Standard",
- "Premium"
+ "standard",
+ "premium"
], "metadata": { "description": "Specifies whether the key vault is a standard vault or a premium vault."
Copy this file and save it locally. This tutorial assumes you've named it **keyv
} }, "secretValue": {
- "type": "securestring",
+ "type": "secureString",
"metadata": { "description": "Specifies the value of the secret that you want to create." }
Copy this file and save it locally. This tutorial assumes you've named it **keyv
"resources": [ { "type": "Microsoft.KeyVault/vaults",
- "apiVersion": "2019-09-01",
+ "apiVersion": "2022-07-01",
"name": "[parameters('keyVaultName')]", "location": "[parameters('location')]", "properties": {
Copy this file and save it locally. This tutorial assumes you've named it **keyv
}, { "type": "Microsoft.KeyVault/vaults/secrets",
- "apiVersion": "2019-09-01",
- "name": "[concat(parameters('keyVaultName'), '/', parameters('secretName'))]",
- "location": "[parameters('location')]",
+ "apiVersion": "2022-07-01",
+ "name": "[format('{0}/{1}', parameters('keyVaultName'), parameters('secretName'))]",
"dependsOn": [ "[resourceId('Microsoft.KeyVault/vaults', parameters('keyVaultName'))]" ],
Copy this file and save it locally. This tutorial assumes you've named it **keyv
## Create default form
-The Azure portal provides a sandbox for creating and previewing forms. This sandbox can generate a form from an existing ARM template. You'll use this default form to get started with creating a form for your template spec.
+The Azure portal provides a sandbox for creating and previewing forms. This sandbox can render a form from an existing ARM template. You'll use this default form to get started with creating a form for your template spec. For more information about the form structure, see [FormViewType](https://github.com/Azure/portaldocs/blob/main/portal-sdk/generated/dx-view-formViewType.md).
1. Open the [Form view sandbox](https://aka.ms/form/sandbox).
-1. Set **Package Type** to **CustomTemplate**.
+ :::image type="content" source="./media/template-specs-create-portal-forms/deploy-template-spec-config.png" alt-text="Screenshot of form view sandbox.":::
- :::image type="content" source="./media/template-specs-create-portal-forms/package-type.png" alt-text="Screenshot of setting package type to custom template":::
+1. In **Package Type**, select **CustomTemplate**. Make sure you select the package type before you specify the deployment template.
+1. In **Deployment template (optional)**, select the key vault template you saved locally. When prompted if you want to overwrite current changes, select **Yes**. The autogenerated form is displayed in the code window. The form is editable from the portal. To customize the form, see [customize form](#customize-form).
+ If you look closely at the autogenerated form, you'll see that the default title is **Test Form View** and only one step, named **basics**, is defined.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2021-09-09/uiFormDefinition.schema.json",
+ "view": {
+ "kind": "Form",
+ "properties": {
+ "title": "Test Form View",
+ "steps": [
+ {
+ "name": "basics",
+ "label": "Basics",
+ "elements": [
+ ...
+ ]
+ }
+ ]
+ },
+ "outputs": {
+ ...
+ }
+ }
+ }
+ ```
-1. Select the icon to open an existing template.
+1. To see how it works without any modifications, select **Preview**.
- :::image type="content" source="./media/template-specs-create-portal-forms/open-template.png" alt-text="Screenshot of icon to open file":::
+ :::image type="content" source="./media/template-specs-create-portal-forms/view-portal-basic.png" alt-text="Screenshot of the generated basic form.":::
-1. Navigate to the key vault template you saved locally. Select it and select **Open**.
-1. When prompted if you want to overwrite current changes, select **Yes**.
-1. The autogenerated form is displayed in the code window. To see that works it without any modifications, select **Preview**.
+ The sandbox displays the form. It has fields for selecting a subscription, resource group, and region. It also has fields for all of the parameters from the template.
- :::image type="content" source="./media/template-specs-create-portal-forms/preview-form.png" alt-text="Screenshot of selecting preview":::
+ Most of the fields are text boxes, but some fields are specific for the type of parameter. When your template includes allowed values for a parameter, the autogenerated form uses a drop-down element. The drop-down element is pre-populated with the allowed values.
-1. The sandbox displays the form. It has fields for selecting a subscription, resource group, and region. It also fields for all of the parameters from the template.
+ In between the title and **Project details**, there are no tabs because the default form only has one step defined. In the **Customize form** section, you'll break the parameters into multiple tabs.
- Most of the fields are text boxes, but some fields are specific for the type of parameter. When your template includes allowed values for a parameter, the autogenerated form uses a drop-down element. The drop-down element is prepopulated with the allowed values.
+ > [!WARNING]
+ > Don't select **Create** as it will launch a real deployment. You'll have a chance to deploy the template spec later in this tutorial.
- > [!WARNING]
- > Don't select **Create** as it will launch a real deployment. You'll have a chance to deploy the template spec later in this tutorial.
+1. To exit from the preview, select **Cancel**.
## Customize form The default form is a good starting point for understanding forms but usually you'll want to customize it. You can edit it in the sandbox or in Visual Studio Code. The preview option is only available in the sandbox.
-1. Let's set the correct schema. Replace the schema text with:
-
- ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/keyvaultform.json" range="1-2" highlight="2" :::
- 1. Give the form a **title** that describes its use.
- ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/keyvaultform.json" range="1-6" highlight="6" :::
+ ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/keyvaultform.json" range="1-6" highlight="6" :::
-1. Your default form had all of the fields for your template combined into one step called **Basics**. To help users understand the values they're providing, divide the form into steps. Each step contains fields related to a logical part of the solution to deploy.
+1. Your default form had all of the fields for your template combined into one step called **Basics**. To help users understand the values they're providing, divide the form into steps. Each step contains fields related to a logical part of the solution to deploy.
- Find the step labeled **Basics**. You'll keep this step but add steps below it. The new steps will focus on configuring the key vault, setting user permissions, and specifying the secret. Make sure you add a comma after the basics step.
+ Find the step labeled **Basics**. You'll keep this step but add steps below it. The new steps will focus on configuring the key vault, setting user permissions, and specifying the secret. Make sure you add a comma after the basics step.
- ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/steps.json" highlight="15-32" :::
+ ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/steps.json" highlight="15-32" :::
- > [!IMPORTANT]
- > Properties in the form are case-sensitive. Make sure you use the casing shown in the examples.
+ > [!IMPORTANT]
+ > Properties in the form are case-sensitive. Make sure you use the casing shown in the examples.
1. Select **Preview**. You'll see the steps, but most of them don't have any elements.
- :::image type="content" source="./media/template-specs-create-portal-forms/view-steps.png" alt-text="Screenshot of form steps":::
+ :::image type="content" source="./media/template-specs-create-portal-forms/view-steps.png" alt-text="Screenshot of form steps.":::
1. Now, move elements to the appropriate steps. Start with the elements labeled **Secret Name** and **Secret Value**. Remove these elements from the **Basics** step and add them to the **Secret** step.
- ```json
- {
- "name": "secret",
- "label": "Secret",
- "elements": [
- {
- "name": "secretName",
- "type": "Microsoft.Common.TextBox",
- "label": "Secret Name",
- "defaultValue": "",
- "toolTip": "Specifies the name of the secret that you want to create.",
- "constraints": {
- "required": true,
- "regex": "",
- "validationMessage": ""
- },
- "visible": true
- },
- {
- "name": "secretValue",
- "type": "Microsoft.Common.PasswordBox",
- "label": {
- "password": "Secret Value",
- "confirmPassword": "Confirm password"
- },
- "toolTip": "Specifies the value of the secret that you want to create.",
- "constraints": {
- "required": true,
- "regex": "",
- "validationMessage": ""
- },
- "options": {
- "hideConfirmation": true
- },
- "visible": true
- }
- ]
- }
- ```
+ ```json
+ {
+ "name": "secret",
+ "label": "Secret",
+ "elements": [
+ {
+ "name": "secretName",
+ "type": "Microsoft.Common.TextBox",
+ "label": "Secret Name",
+ "defaultValue": "",
+ "toolTip": "Specifies the name of the secret that you want to create.",
+ "constraints": {
+ "required": true,
+ "regex": "",
+ "validationMessage": ""
+ },
+ "visible": true
+ },
+ {
+ "name": "secretValue",
+ "type": "Microsoft.Common.PasswordBox",
+ "label": {
+ "password": "Secret Value",
+ "confirmPassword": "Confirm password"
+ },
+ "toolTip": "Specifies the value of the secret that you want to create.",
+ "constraints": {
+ "required": true,
+ "regex": "",
+ "validationMessage": ""
+ },
+ "options": {
+ "hideConfirmation": true
+ },
+ "visible": true
+ }
+ ]
+ }
+ ```
1. When you move elements, you need to fix the `outputs` section. Currently, the outputs section references those elements as if they were still in the basics step. Fix the syntax so it references the elements in the `secret` step.
- ```json
- "outputs": {
- "parameters": {
- ...
- "secretName": "[steps('secret').secretName]",
- "secretValue": "[steps('secret').secretValue]"
- }
- ```
+ ```json
+ "outputs": {
+ "parameters": {
+ ...
+ "secretName": "[steps('secret').secretName]",
+ "secretValue": "[steps('secret').secretValue]"
+ }
+ ```
1. Continue moving elements to the appropriate steps. Rather than go through each one, take a look at the updated form.
- ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/keyvaultform.json" :::
+ ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/keyvaultform.json" :::
-Save this file locally with the name **keyvaultform.json**.
+1. Save this file locally with the name **keyvaultform.json**.
## Create template spec
az ts create \
To test the form, go to the portal and navigate to your template spec. Select **Deploy**. You'll see the form you created. Go through the steps and provide values for the fields.
az ts create \
Redeploy your template spec with the improved portal form. Notice that your permission fields are now drop-downs that allow multiple values.
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md
Title: Create & deploy template specs description: Describes how to create template specs and share them with other users in your organization. Previously updated : 11/10/2022 Last updated : 11/17/2022
If you currently have your templates in a GitHub repo or storage account, you ru
The templates you include in a template spec should be verified by administrators in your organization to follow the organization's requirements and guidance.
+## Required permissions
+
+There are two Azure built-in roles defined for template specs:
+
+- [Template Spec Reader](../../role-based-access-control//built-in-roles.md#template-spec-reader)
+- [Template Spec Contributor](../../role-based-access-control//built-in-roles.md#template-spec-contributor)
+
+In addition, you also need the permissions for deploying a Bicep file. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
+ ## Create template spec The following example shows a simple template for creating a storage account in Azure.
az ts show \
## Deploy template spec
-After you've created the template spec, users with **read** access to the template spec can deploy it. For information about granting access, see [Tutorial: Grant a group access to Azure resources using Azure PowerShell](../../role-based-access-control/tutorial-role-assignments-group-powershell.md). In addition, you also need the permissions for deploying an ARM template. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
+After you've created the template spec, users with the [Template Spec Reader](#required-permissions) role can deploy it. In addition, you also need the permissions for deploying an ARM template. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
Template specs can be deployed through the portal, PowerShell, Azure CLI, or as a linked template in a larger template deployment. Users in an organization can deploy a template spec to any scope in Azure (resource group, subscription, management group, or tenant).
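As a sketch, a CLI deployment of a specific template spec version at resource group scope could look like this; the names, version, and sample parameter are placeholders.

```bash
# Sketch: look up a template spec version's resource ID and deploy it to a
# resource group. Names, version, and the sample parameter are placeholders.
id=$(az ts show \
  --resource-group <template-spec-rg> \
  --name <template-spec-name> \
  --version "1.0" \
  --query "id" \
  --output tsv)

az deployment group create \
  --resource-group <target-rg> \
  --template-spec "$id" \
  --parameters storageAccountType=Standard_LRS
```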
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
Title: Logic Apps connector with ARM-based AVI accounts
description: This article shows how to unlock new experiences and monetization opportunities Azure Video Indexer connectors with Logic App and Power Automate with AVI ARM accounts. Previously updated : 08/04/2022 Last updated : 11/16/2022 # Logic Apps connector with ARM-based AVI accounts
The "upload and index your video automatically" scenario covered in this article
* The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure Video Indexer with a callback URL to send a notification once the indexing operation completes. * The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage.
-The logic apps that you create in this article, contain one flow per app. The second section ("**Create a second flow - JSON extraction**") explains how to connect the two. The second flow stands alone and is triggered by the first one (the section with the callback URL).
+The logic apps that you create in this article contain one flow per app. The second section (**Create a new logic app of type consumption**) explains how to connect the two. The second flow stands alone and is triggered by the first one (the section with the callback URL).
## Prerequisites
The logic apps that you create in this article, contain one flow per app. The se
## Set up the first flow - file upload
-In this section you'll, you create the following flow. The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure Video Indexer with a callback URL to send a notification once the indexing operation completes.
+This section describes how to set up the first ("file upload") flow. The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure Video Indexer with a callback URL to send a notification once the indexing operation completes.
The following image shows the first flow: ![Screenshot of the file upload flow.](./media/logic-apps-connector-arm-accounts/first-flow-high-level.png)
-1. Create the [Logic App](https://portal.azure.com/#create/Microsoft.LogicApp). We create a Logic App in the same region as the Azure Video Indexer region (recommended but not required). We call the logic app `UploadIndexVideosApp`.
+1. Create the <a href="https://portal.azure.com/#create/Microsoft.LogicApp" target="_blank">Logic App</a>. We create a Logic App in the same region as the Azure Video Indexer region (recommended but not required). We call the logic app `UploadIndexVideosApp`.
1. Select **Consumption** for **Plan type**. 1. Press **Review + Create** -> **Create**.
The following image shows the first flow:
Select **Save**. > [!TIP]
- > Before moving to the next step step up the right permission between the Logic app and the Azure Video Indexer account.
+ > Before moving to the next step, set up the right permission between the Logic app and the Azure Video Indexer account.
> > Make sure you have followed the steps to enable the system-assigned managed identity of your logic app.
The following image shows the first flow:
The completion of the uploading and indexing from the first flow will send an HTTP request with the correct callback URL to trigger the second flow. Then, it will retrieve the insights generated by Azure Video Indexer. In this example, it will store the output of your indexing job in your Azure Storage. However, it's up to you what you do with the output.
-## Create a second flow - JSON extraction
+## Create a new logic app of type consumption
Create the second flow, Logic Apps of type consumption. The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage.
azure-vmware Send Logs To Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/send-logs-to-log-analytics.md
+
+ Title: Send your Azure VMware Solution logs to Log Analytics
+description: Learn about sending logs to log analytics.
++ Last updated : 11/15/2022++
+# Send your Azure VMware Solution logs to Log Analytics
+
+This article shows you how to send Azure VMware Solution logs to Azure Monitor Log Analytics. You can send logs from your AVS private cloud to your Log Analytics workspace, allowing you to take advantage of the Log Analytics feature set, including:
+
+- Powerful querying capabilities with Kusto Query Language (KQL)
+
+- Interactive report-creation capability based on your data, using Workbooks
+
+...without having to get your logs out of the Microsoft ecosystem!
+
+In the rest of this article, we'll show you how easy it is to make this happen.
+
+## How to set up Log Analytics
+
+A Log Analytics workspace:
+
+- Contains your AVS private cloud logs.
+
+- Is the workspace from which you can take desired actions, such as querying for logs.
+
+In this section, you'll:
+
+- Configure a Log Analytics workspace
+
+- Create a diagnostic setting in your private cloud to send your logs to this workspace
+
+### Create a resource
+
+1. In the Azure portal, go to **Create a resource**.
+2. Search for "Log Analytics Workspace" and click **Create** -> **Log Analytics Workspace**.
++
+### Set up your workspace
+
+1. Enter the Subscription you intend to use and the Resource Group that will house this workspace. Give the workspace a name and select a region.
+1. Click **Review** + **Create**.
++
+### Add a diagnostic setting
+
+Next, we add a diagnostic setting in your AVS private cloud so it knows where to send your logs.
++
+1. Click your AVS private cloud. Go to **Diagnostic settings** on the left-hand menu under **Monitoring**, and select **Add diagnostic setting**.
+2. Give your diagnostic setting a name, and select the log categories you're interested in sending to your Log Analytics workspace.
+
+3. Make sure to select the checkbox next to **Send to Log Analytics workspace**. Select the Subscription your Log Analytics workspace lives in and the Log Analytics workspace, and then click **Save** on the top left.
++
+At this point, your Log Analytics workspace has been successfully configured to receive logs from your AVS private cloud.
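+If you prefer to script these steps, the following is a hedged sketch with the Azure CLI. The resource names and IDs are placeholders, and the `allLogs` category group is an assumption; adjust it to the log categories your private cloud actually exposes.
+
+```bash
+# Sketch: create a workspace and route AVS private cloud diagnostic logs to it.
+# Resource names/IDs and the "allLogs" category group are placeholders/assumptions.
+az monitor log-analytics workspace create \
+  --resource-group <resource-group-name> \
+  --workspace-name <workspace-name> \
+  --location <region>
+
+az monitor diagnostic-settings create \
+  --name "avs-to-log-analytics" \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.AVS/privateClouds/<private-cloud-name>" \
+  --workspace "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
+  --logs '[{"categoryGroup": "allLogs", "enabled": true}]'
+```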
+
+## Search and analyze logs using Kusto
+
+Now that you've successfully configured your logs to go to your Log Analytics workspace, you can use that data to gain meaningful insights with Log Analytics' search feature.
+Log Analytics uses a language called the Kusto Query Language (or Kusto) to search through your logs.
+
+For more information, see
+[Data analysis in Azure Data Explorer with Kusto Query Language](/training/paths/data-analysis-data-explorer-kusto-query-language/).
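+As a quick sketch, you can also run a Kusto query from the command line. The workspace GUID is a placeholder, and `search *` is used only because table names vary; substitute the table you see in your workspace.
+
+```bash
+# Sketch: run a Kusto query against the workspace from the CLI.
+# <workspace-guid> is the workspace ID (customer ID), a placeholder here.
+az monitor log-analytics query \
+  --workspace <workspace-guid> \
+  --analytics-query "search * | where TimeGenerated > ago(1h) | take 10"
+```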
azure-web-pubsub Resource Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/resource-faq.md
Azure SignalR Service is more suitable if:
Azure Web PubSub service is more suitable for situations where: - You need to build real-time applications based on WebSocket technology or publish-subscribe over WebSocket.-- You want to build your own subprotocol or use existing advanced protocols over WebSocket (for example, MQTT, AMQP over WebSocket).
+- You want to build your own subprotocol or use existing advanced subprotocols over WebSocket (for example, [GraphQL subscriptions over WebSocket](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-graphql-subscribe)).
- You're looking for a lightweight server, for example, sending messages to client without going through the configured backend. ## Where does my data reside?
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 09/07/2022 Last updated : 09/22/2022
Azure Backup provides several ways to restore a VM.
**Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins. **Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).-
+**Cross Zonal Restore** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs and Trusted Launch VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups).
>[!Tip]
>To receive alerts/notifications when a restore operation fails, use [Azure Monitor alerts for Azure Backup](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup). This helps you to monitor such failures and take necessary actions to remediate the issues.
As one of the [restore options](#restore-options), you can create a VM quickly w
:::image type="content" source="./media/backup-azure-arm-restore-vms/backup-azure-cross-subscription-restore.png" alt-text="Screenshot showing the list of all subscriptions under the tenant where you have permissions.":::
+1. Choose the required zone from the **Availability Zone** drop-down list to restore an Azure VM pinned to any zone to a different zone.
+
+   Azure Backup now supports Cross Zonal Restore (CZR). You can now restore an Azure VM from the default zone to any available zone. The default zone is the zone in which the Azure VM is running.
+
+   The following screenshot shows the list of zones to which you can restore the Azure VM.
+
+ :::image type="content" source="./media/backup-azure-arm-restore-vms/azure-virtual-machine-cross-zonal-restore.png" alt-text="Screenshot showing you how to select an available zone for VM restore.":::
+
+ >[!Note]
+ >Azure Backup supports CZR only for vaults with ZRS or CRR redundancy.
++ 1. Select **Restore** to trigger the restore operation. >[!Note]
As one of the [restore options](#restore-options), you can create a disk from a
Azure Backup now supports Cross Subscription Restore (CSR). Like Azure VM, you can now restore Azure VM disks using a recovery point from the default subscription to another. The default subscription is the subscription where the recovery point is available.
+1. Choose the required zone from the **Availability Zone** drop-down list to restore the VM disks to a different zone.
+
+   Azure Backup now supports Cross Zonal Restore (CZR). Like Azure VM, you can now restore Azure VM disks from the default zone to any available zone. The default zone is the zone in which the VM disks reside.
+
+ >[!Note]
+ >Azure Backup supports CZR only for vaults with ZRS or CRR redundancy.
+ 1. Select **Restore** to trigger the restore operation. When your virtual machine uses managed disks and you select the **Create virtual machine** option, Azure Backup doesn't use the specified storage account. In the case of **Restore disks** and **Instant Restore**, the storage account is used only for storing the template. Managed disks are created in the specified resource group. When your virtual machine uses unmanaged disks, they're restored as blobs to the storage account.
backup Backup Center Community https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-community.md
- Title: Access community resources using Backup center
-description: Use Backup center to access sample templates, scripts, and feature requests
- Previously updated : 02/18/2021--
-# Access community resources using Backup center
-
-You can use Backup center to access various community resources useful for a backup admin or operator.
-
-## Using Community Hub
-
-To access the Community Hub, navigate to Backup Center in the Azure portal and select the **Community** menu item.
-
-![Community Hub](./media/backup-center-community/backup-center-community-hub.png)
-
-Some of the resources available via the Community Hub are:
--- **Microsoft Q&A**: You can use this forum to ask and discover questions about various product features and obtain guidance from the community.--- **Feature Requests**: You can navigate to UserVoice and file feature requests.--- **Samples for automated deployments**: Using the Community Hub, you can discover sample Azure Resource Manager(ARM) templates and Azure Policies that you can use out of the box. You can also find sample PowerShell Scripts, CLI commands, and Microsoft Database Backup scripts.-
-## Next Steps
--- [Learn More about Backup center](backup-center-overview.md)
backup Backup Center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-overview.md
Title: Overview of Backup center
+ Title: Overview of Backup center for Azure Backup
description: This article provides an overview of Backup center for Azure. Previously updated : 09/30/2020 Last updated : 11/16/2022++++
-# Overview of Backup center
+# Overview of Backup center for Azure Backup
-Backup Center provides a **single unified management experience** in Azure for enterprises to govern, monitor, operate, and analyze backups at scale. As such, it's consistent with AzureΓÇÖs native management experiences.
+Backup center provides a *single unified management experience* in Azure for enterprises to govern, monitor, operate, and analyze backups at scale. So, it's consistent with Azure's native management experiences.
+
+In this article, you'll learn about:
+
+> [!div class="checklist"]
+> - Key benefits
+> - Supported scenarios
+> - Get started
+> - Access community resources on Community Hub
+
+## Key benefits
Some of the key benefits of Backup center include:
-* **Single pane of glass to manage backups** ΓÇô Backup center is designed to function well across a large and distributed Azure environment. You can use Backup center to efficiently manage backups spanning multiple workload types, vaults, subscriptions, regions, and [Azure Lighthouse](../lighthouse/overview.md) tenants.
-* **Datasource-centric management** ΓÇô Backup center provides views and filters that are centered on the datasources that you're backing up (for example, VMs and databases). This allows a resource owner or a backup admin to monitor and operate backups of items without needing to focus on which vault an item is backed up to. A key feature of this design is the ability to filter views by datasource-specific properties, such as datasource subscription, datasource resource group, and datasource tags. For example, if your organization follows a practice of assigning different tags to VMs belonging to different departments, you can use Backup center to filter backup information based on the tags of the underlying VMs being backed up without needing to focus on the tag of the vault.
-* **Connected experiences** ΓÇô Backup center provides native integrations to existing Azure services that enable management at scale. For example, Backup center uses the [Azure Policy](../governance/policy/overview.md) experience to help you govern your backups. It also leverages [Azure workbooks](../azure-monitor/visualize/workbooks-overview.md) and [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md) to help you view detailed reports on backups. So you don't need to learn any new principles to use the varied features that Backup center offers. You can also discover community resources from the Backup center.
+- **Single pane of glass to manage backups**: Backup center is designed to function well across a large and distributed Azure environment. You can use Backup center to efficiently manage backups spanning multiple workload types, vaults, subscriptions, regions, and [Azure Lighthouse](../lighthouse/overview.md) tenants.
+- **Datasource-centric management**: Backup center provides views and filters that are centered on the datasources that you're backing up (for example, VMs and databases). This allows a resource owner or a backup admin to monitor and operate backups of items without needing to focus on which vault an item is backed up to. A key feature of this design is the ability to filter views by datasource-specific properties, such as datasource subscription, datasource resource group, and datasource tags. For example, if your organization follows a practice of assigning different tags to VMs belonging to different departments, you can use Backup center to filter backup information based on the tags of the underlying VMs being backed up without needing to focus on the tag of the vault.
+- **Connected experiences**: Backup center provides native integrations to existing Azure services that enable management at scale. For example, Backup center uses the [Azure Policy](../governance/policy/overview.md) experience to help you govern your backups. It also leverages [Azure workbooks](../azure-monitor/visualize/workbooks-overview.md) and [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md) to help you view detailed reports on backups. So, you don't need to learn any new principles to use the varied features that the Backup center offers. You can also [discover community resources from the Backup center](#access-community-resources-on-community-hub).
## Supported scenarios
-* Backup center is currently supported for Azure VM backup, SQL in Azure VM backup, SAP HANA in Azure VM backup, Azure Files backup, Azure Blobs backup, Azure Managed Disks backup, and Azure Database for PostgreSQL Server backup.
-* Refer to the [support matrix](backup-center-support-matrix.md) for a detailed list of supported and unsupported scenarios.
+Backup center is currently supported for:
+
+- Azure VM backup
+- SQL in Azure VM backup
+- SAP HANA on Azure VM backup
+- Azure Files backup
+- Azure Blobs backup
+- Azure Managed Disks backup
+- Azure Database for PostgreSQL Server backup
+
+Learn more about [supported and unsupported scenarios](backup-center-support-matrix.md).
## Get started

To get started with using Backup center, search for **Backup center** in the Azure portal and navigate to the **Backup center** dashboard.
-![Backup Center Search](./media/backup-center-overview/backup-center-search.png)
+
+On the **Overview** blade, two tiles appear: **Jobs** and **Backup instances**.
-The first screen that you see is the **Overview**. It contains two tiles ΓÇô **Jobs** and **Backup instances**.
-![Backup Center tiles](./media/backup-center-overview/backup-center-overview-widgets.png)
+On the **Jobs** tile, you get a summarized view of all backup and restore related jobs that were triggered across your backup estate in the last 24 hours.
-In the **Jobs** tile, you get a summarized view of all backup and restore related jobs that were triggered across your backup estate in the last 24 hours. You can view information on the number of jobs that have completed, failed, and are in-progress. Selecting any of the numbers in this tile allows you to view more information on jobs for a particular datasource type, operation type, and status.
+- You can view information on the number of jobs that have completed, failed, and are in-progress.
+- Select any of the numbers in this tile to view more information on jobs for a particular datasource type, operation type, and status.
-In the **Backup Instances** tile, you get a summarized view of all backup instances across your backup estate. For example, you can see the number of backup instances that are in soft-deleted state compared to the number of instances that are still configured for protection. Selecting any of the numbers in this tile allows you to view more information on backup instances for a particular datasource type and protection state. You can also view all backup instances whose underlying datasource is not found (the datasource might be deleted, or you may not have access to the datasource).
+On the **Backup Instances** tile, you get a summarized view of all backup instances across your backup estate. For example, you can see the number of backup instances that are in soft-deleted state compared to the number of instances that are still configured for protection.
+
+- Select any of the numbers in this tile to view more information on backup instances for a particular datasource type and protection state.
+- You can also view all backup instances whose underlying datasource isn't found (the datasource might be deleted, or you may not have access to the datasource).
Watch the following video to understand the capabilities of Backup center: > [!VIDEO https://www.youtube.com/embed/pFRMBSXZcUk?t=497]
-Follow the [next steps](#next-steps) to understand the different capabilities that Backup center provides, and how you can use these capabilities to manage your backup estate efficiently.
+See the [next steps](#next-steps) to understand the different capabilities that Backup center provides, and how you can use these capabilities to manage your backup estate efficiently.
+
+## Access community resources on Community Hub
+
+You can use Backup center to access various community resources useful for a backup admin or operator.
+
+To access the Community Hub, navigate to the Backup center in the Azure portal and select the **Community** menu item.
++
+Some of the resources available via the Community Hub are:
+
+- **Microsoft Q&A**: You can use this forum to ask and discover questions about various product features and obtain guidance from the community.
+
+- **Feature Requests**: You can navigate to UserVoice and file feature requests.
+
+- **Samples for automated deployments**: Using the Community Hub, you can discover sample Azure Resource Manager (ARM) templates and Azure Policies that you can use out of the box. You can also find sample PowerShell Scripts, CLI commands, and Microsoft Database Backup scripts.
## Next steps
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 11/14/2022 Last updated : 11/22/2022
Back up disks after migrating to managed disks | Supported.<br/><br/> Backup wil
Back up managed disks after enabling resource group lock | Not supported.<br/><br/> Azure Backup can't delete the older restore points, and backups will start to fail when the maximum limit of restore points is reached. Modify backup policy for a VM | Supported.<br/><br/> The VM will be backed up by using the schedule and retention settings in new policy. If retention settings are extended, existing recovery points are marked and kept. If they're reduced, existing recovery points will be pruned in the next cleanup job and eventually deleted. Cancel a backup job| Supported during snapshot process.<br/><br/> Not supported when the snapshot is being transferred to the vault.
-Back up the VM to a different region or subscription |Not supported.<br><br>To successfully back up, virtual machines must be in the same subscription as the vault for backup.
+Back up the VM to a different region or subscription |Not supported.<br><br>For successful backup, virtual machines must be in the same subscription as the vault for backup.
Backups per day (via the Azure VM extension) | Four backups per day - one scheduled backup as per the Backup policy, and three on-demand backups. <br><br> However, to allow user retries in case of failed attempts, hard limit for on-demand backups is set to nine attempts. Backups per day (via the MARS agent) | Three scheduled backups per day. Backups per day (via DPM/MABS) | Two scheduled backups per day.
Monthly/yearly backup| Not supported when backing up with Azure VM extension. On
Automatic clock adjustment | Not supported.<br/><br/> Azure Backup doesn't automatically adjust for daylight saving time changes when backing up a VM.<br/><br/> Modify the policy manually as needed.
[Security features for hybrid backup](./backup-azure-security-feature.md) |Disabling security features isn't supported.
Back up the VM whose machine time is changed | Not supported.<br/><br/> If the machine time is changed to a future date-time after enabling backup for that VM, successful backup isn't guaranteed, even if the time change is reverted.
-Multiple Backups Per Day | Supported (in preview), using *Enhanced policy* (in preview). <br><br> For hourly backup, the minimum RPO is 4 hours and the maximum is 24 hours. You can set the backup schedule to 4, 6, 8, 12, and 24 hours respectively. Learn about how to [back up an Azure VM using Enhanced policy](backup-azure-vms-enhanced-policy.md).
+Multiple Backups Per Day | Supported (in preview), using *Enhanced policy* (in preview). <br><br> For hourly backup, the minimum RPO is 4 hours and the maximum is 24 hours. You can set the backup schedule to 4, 6, 8, 12, and 24 hours respectively. Learn how to [back up an Azure VM using Enhanced policy](backup-azure-vms-enhanced-policy.md).
Back up a VM with deprecated plan when publisher has removed it from Azure Marketplace | Not supported. <br><br> Backup is possible. However, restore will fail. <br><br> If you've already configured backup for VM with deprecated virtual machine offer and encounter restore error, see [Troubleshoot backup errors with Azure VMs](backup-azure-vms-troubleshoot.md#usererrormarketplacevmnotsupportedvm-creation-failed-due-to-market-place-purchase-request-being-not-present). ## Operating system support (Windows)
The following table summarizes support for backup during VM management tasks, su
| <a name="backup-azure-cross-subscription-restore">Restore across subscription</a> | [Cross Subscription Restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs. [Restore across region](backup-azure-arm-restore-vms.md#cross-region-restore) | Supported.
-Restore across zone | Unsupported.
+<a name="backup-azure-cross-zonal-restore">Restore across zone</a> | [Cross Zonal Restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
Restore to an existing VM | Use replace disk option. Restore disk with storage account enabled for Azure Storage Service Encryption (SSE) | Not supported.<br/><br/> Restore to an account that doesn't have SSE enabled. Restore to mixed storage accounts |Not supported.<br/><br/> Based on the storage account type, all restored disks will be either premium or standard, and not mixed.
Back up VMs that are deployed from a custom image (third-party) |Supported.<br/>
Back up VMs that are migrated to Azure| Supported.<br/><br/> To back up the VM, the VM agent must be installed on the migrated machine. Back up Multi-VM consistency | Azure Backup doesn't provide data and application consistency across multiple VMs. Backup with [Diagnostic Settings](../azure-monitor/essentials/platform-logs-overview.md) | Unsupported. <br/><br/> If the restore of the Azure VM with diagnostic settings is triggered using the [Create New](backup-azure-arm-restore-vms.md#create-a-vm) option, then the restore fails.
-Restore of Zone-pinned VMs | Supported (for a VM that's backed-up after Jan 2019 and where [availability zones](https://azure.microsoft.com/global-infrastructure/availability-zones/) are available).<br/><br/>We currently support restoring to the same zone that's pinned in VMs. However, if the zone is unavailable due to an outage, the restore will fail.
+Restore of Zone-pinned VMs | Supported (where [availability zones](https://azure.microsoft.com/global-infrastructure/availability-zones/) are available).<br/><br/>Azure Backup now supports [restoring Azure VMs to any available zone](backup-azure-arm-restore-vms.md#restore-options) other than the zone that's pinned in VMs. This enables you to restore VMs when the primary zone is unavailable.
Gen2 VMs | Supported <br> Azure Backup supports backup and restore of [Gen2 VMs](https://azure.microsoft.com/updates/generation-2-virtual-machines-in-azure-public-preview/). When these VMs are restored from Recovery point, they're restored as [Gen2 VMs](https://azure.microsoft.com/updates/generation-2-virtual-machines-in-azure-public-preview/). Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Supported for managed VMs. [Spot VMs](../virtual-machines/spot-vms.md) | Unsupported. Azure Backup restores Spot VMs as regular Azure VMs.
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Support
Windows Storage Spaces configuration of standalone Azure VMs | Supported [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for flexible orchestration model to back up and restore Single Azure VM. Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Cross Region Restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
-<a name="tvm-backup">Trusted Launch VM</a> | Backup supported. <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where Trusted Launch VM is available. <br><br> - Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. <br><br> - Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm).
+<a name="tvm-backup">Trusted Launch VM</a> | Backup supported. <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where Trusted Launch VM is available. <br><br> - Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. <br><br> - Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm).
[Confidential VM](../confidential-computing/confidential-vm-overview.md) | The backup support is in Limited Preview. <br><br> Backup is supported only for those Confidential VMs with no confidential disk encryption and for Confidential VMs with confidential OS disk encryption using Platform Managed Key (PMK). <br><br> Backup is currently not supported for Confidential VMs with confidential OS disk encryption using Customer Managed Key (CMK). <br><br> **Feature details** <br><br> - Backup is supported in [all regions where Confidential VM is available](../confidential-computing/confidential-vm-overview.md#regions). <br><br> - Backup is supported using [Enhanced Policy](backup-azure-vms-enhanced-policy.md) only. You can configure backup through [Create VM blade](backup-azure-arm-vms-prepare.md), [VM Manage blade](backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Recovery Services vault](backup-azure-arm-vms-prepare.md). <br><br> - [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore) and File Recovery (Item level Restore) for Confidential VM are currently not supported. ## VM storage support
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md
Previously updated : 09/09/2022 Last updated : 11/17/2022
After you deploy this feature, there are two different sets of connection instru
* Set up concurrent VM sessions with Bastion. * [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM from your local computer. File download from the target VM to the local client is currently not supported for this command.
-Currently, this feature has the following limitation:
+**Limitations**
* Signing in using an SSH private key stored in Azure Key Vault isn't supported with this feature. Before signing in to your Linux VM using an SSH key pair, download your private key to a file on your local machine.
+* This feature is not supported on Cloud Shell.
## <a name="prereq"></a>Prerequisites
bastion Shareable Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md
By default, users in your org will have only read access to shared links. If a u
## Considerations
-* Shareable Links isn't currently supported on peered VNets that aren't in the same subscription.
-* Shareable Links is not supported for national clouds during preview.
+* Shareable Links isn't currently supported for peered VNets that aren't in the same subscription.
+* Shareable Links isn't currently supported for peered VNets that aren't in the same region.
+* Shareable Links isn't supported for national clouds during preview.
* The Standard SKU is required for this feature. ## Prerequisites
cognitive-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-synthesis.md
+
+ Title: Batch synthesis API (Preview) for text to speech - Speech service
+
+description: Learn how to use the batch synthesis API for asynchronous synthesis of long-form text to speech.
++++++ Last updated : 11/16/2022+++
+# Batch synthesis API (Preview) for text to speech
+
+The Batch synthesis API (Preview) can synthesize a large volume of text input (long and short) asynchronously. Publishers and audio content platforms can create long audio content in a batch. For example: audio books, news articles, and documents. The batch synthesis API can create synthesized audio longer than 10 minutes.
+
+> [!IMPORTANT]
+> The Batch synthesis API is currently in public preview. Once it's generally available, the Long Audio API will be deprecated. For more information, see [Migrate to batch synthesis API](migrate-to-batch-synthesis.md).
+
+The batch synthesis API is asynchronous and doesn't return synthesized audio in real time. You submit text files to be synthesized, poll for the status, and download the audio output when the status indicates success. The text inputs must be plain text or [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) text.
+
+This diagram provides a high-level overview of the workflow.
+
+![Diagram of the Batch Synthesis API workflow.](media/long-audio-api/long-audio-api-workflow.png)
+
+> [!TIP]
+> You can also use the [Speech SDK](speech-sdk.md) to create synthesized audio longer than 10 minutes by iterating over the text and synthesizing it in chunks.
+
+You can use the following REST API operations for batch synthesis:
+
+| Operation | Method | REST API call |
+| - | -- | |
+| Create batch synthesis | `POST` | texttospeech/3.1-preview1/batchsynthesis |
+| Get batch synthesis | `GET` | texttospeech/3.1-preview1/batchsynthesis/{id} |
+| List batch synthesis | `GET` | texttospeech/3.1-preview1/batchsynthesis |
+| Delete batch synthesis | `DELETE` | texttospeech/3.1-preview1/batchsynthesis/{id} |
+
+## Create batch synthesis
+
+To submit a batch synthesis request, construct the HTTP POST request body according to the following instructions:
+
+- Set the required `textType` property.
+- If the `textType` property is set to "PlainText", then you must also set the `voice` property in the `synthesisConfig`. In the example below, the `textType` is set to "SSML", so the `synthesisConfig` isn't set.
+- Set the required `displayName` property. Choose a name that you can refer to later. The display name doesn't have to be unique.
+- Optionally you can set the `description`, `timeToLive`, and other properties. For more information, see [batch synthesis properties](#batch-synthesis-properties).
+
+> [!NOTE]
+> The maximum JSON payload size that will be accepted is 500 kilobytes. Each Speech resource can have up to 200 batch synthesis jobs that are running concurrently.
+
+Make an HTTP POST request using the URI as shown in the following example. Replace `YourSpeechKey` with your Speech resource key, replace `YourSpeechRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSpeechKey" -H "Content-Type: application/json" -d '{
+ "displayName": "batch synthesis sample",
+ "description": "my ssml test",
+ "textType": "SSML",
+ "inputs": [
+ {
+ "text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''>
+ <voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-JennyNeural'\''>
+ The rainbow has seven colors.
+ </voice>
+      </speak>"
+    }
+ ],
+ "properties": {
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false,
+ "concatenateResult": false,
+ "decompressOutputFiles": false
+  }
+}' "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis"
+```
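+
+For reference, here's the same create request as a minimal Python sketch, assuming the `requests` package is installed and using the same placeholder key and region values as the curl example above:
+
+```python
+import requests
+
+SPEECH_KEY = "YourSpeechKey"      # placeholder: your Speech resource key
+REGION = "YourSpeechRegion"       # placeholder: your Speech resource region
+
+body = {
+    "displayName": "batch synthesis sample",
+    "description": "my ssml test",
+    "textType": "SSML",
+    "inputs": [
+        {
+            "text": ("<speak version='1.0' xml:lang='en-US'>"
+                     "<voice xml:lang='en-US' xml:gender='Female' name='en-US-JennyNeural'>"
+                     "The rainbow has seven colors."
+                     "</voice></speak>")
+        }
+    ],
+    "properties": {
+        "outputFormat": "riff-24khz-16bit-mono-pcm",
+        "wordBoundaryEnabled": False,
+        "sentenceBoundaryEnabled": False,
+        "concatenateResult": False,
+        "decompressOutputFiles": False,
+    },
+}
+
+# Submit the batch synthesis job and print the returned job ID.
+response = requests.post(
+    f"https://{REGION}.customvoice.api.speech.microsoft.com"
+    "/api/texttospeech/3.1-preview1/batchsynthesis",
+    headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY},
+    json=body,
+)
+print(response.status_code, response.json().get("id"))
+```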
+
+You should receive a response body in the following format:
+
+```json
+{
+ "textType": "SSML",
+ "synthesisConfig": {},
+ "customVoices": {},
+ "properties": {
+ "timeToLive": "P31D",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "decompressOutputFiles": false,
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false
+ },
+ "lastActionDateTime": "2022-11-16T15:07:04.121Z",
+ "status": "NotStarted",
+ "id": "1e2e0fe8-e403-417c-a382-b55eb2ea943d",
+ "createdDateTime": "2022-11-16T15:07:04.121Z",
+ "displayName": "batch synthesis sample",
+ "description": "my ssml test"
+}
+```
+
+The `status` property should progress from `NotStarted`, to `Running`, and finally to `Succeeded` or `Failed`. You can call the [GET batch synthesis API](#get-batch-synthesis) periodically until the returned status is `Succeeded` or `Failed`.
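+
+For example, here's a minimal polling sketch in Python, assuming the `requests` package is installed and using the same placeholder values (`YourSpeechKey`, `YourSpeechRegion`, `YourSynthesisId`) as the curl examples; it calls the GET operation described in the next section:
+
+```python
+import time
+
+import requests
+
+SPEECH_KEY = "YourSpeechKey"      # placeholder: your Speech resource key
+REGION = "YourSpeechRegion"       # placeholder: your Speech resource region
+SYNTHESIS_ID = "YourSynthesisId"  # placeholder: the id returned by the create request
+
+url = (f"https://{REGION}.customvoice.api.speech.microsoft.com"
+       f"/api/texttospeech/3.1-preview1/batchsynthesis/{SYNTHESIS_ID}")
+headers = {"Ocp-Apim-Subscription-Key": SPEECH_KEY}
+
+# Poll the job until it reaches a terminal state.
+while True:
+    job = requests.get(url, headers=headers).json()
+    print("Batch synthesis status:", job["status"])
+    if job["status"] in ("Succeeded", "Failed"):
+        break
+    time.sleep(30)  # wait between polls to stay within the request rate limits
+```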
+
+## Get batch synthesis
+
+To get the status of the batch synthesis job, make an HTTP GET request using the URI as shown in the following example. Replace `YourSynthesisId` with your batch synthesis ID, replace `YourSpeechKey` with your Speech resource key, and replace `YourSpeechRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis/YourSynthesisId" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "textType": "SSML",
+ "synthesisConfig": {},
+ "customVoices": {},
+ "properties": {
+ "audioSize": 100000,
+ "durationInTicks": 31250000,
+ "succeededAudioCount": 1,
+ "failedAudioCount": 0,
+ "duration": "PT3.125S",
+ "billingDetails": {
+ "customNeural": 0,
+ "neural": 33
+ },
+ "timeToLive": "P31D",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "decompressOutputFiles": false,
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false
+ },
+ "outputs": {
+ "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/41b83de2-380d-45dc-91af-722b68cfdc8e/results.zip?SAS_Token"
+ },
+ "lastActionDateTime": "2022-11-05T14:00:32.523Z",
+ "status": "Succeeded",
+ "id": "41b83de2-380d-45dc-91af-722b68cfdc8e",
+ "createdDateTime": "2022-11-05T14:00:31.523Z",
+ "displayName": "batch synthesis sample",
+ "description": "my test"
+ }
+```
+
+From `outputs.result`, you can download a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. For more information, see [batch synthesis results](#batch-synthesis-results).
+
+## List batch synthesis
+
+To list all batch synthesis jobs for the Speech resource, make an HTTP GET request using the URI as shown in the following example. Replace `YourSpeechKey` with your Speech resource key and replace `YourSpeechRegion` with your Speech resource region. Optionally, you can set the `skip` and `top` (page size) query parameters in the URL. The default value for `skip` is 0 and the default value for `top` is 100.
+
+```azurecli-interactive
+curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=2" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "values": [
+ {
+ "textType": "SSML",
+ "synthesisConfig": {},
+ "customVoices": {},
+ "properties": {
+ "audioSize": 100000,
+ "durationInTicks": 31250000,
+ "succeededAudioCount": 1,
+ "failedAudioCount": 0,
+ "duration": "PT3.125S",
+ "billingDetails": {
+ "customNeural": 0,
+ "neural": 33
+ },
+ "timeToLive": "P31D",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "decompressOutputFiles": false,
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false
+ },
+ "outputs": {
+ "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/41b83de2-380d-45dc-91af-722b68cfdc8e/results.zip?SAS_Token"
+ },
+ "lastActionDateTime": "2022-11-05T14:00:32.523Z",
+ "status": "Succeeded",
+ "id": "41b83de2-380d-45dc-91af-722b68cfdc8e",
+ "createdDateTime": "2022-11-05T14:00:31.523Z",
+ "displayName": "batch synthesis sample",
+ "description": "my test"
+    },
+ {
+ "textType": "PlainText",
+ "synthesisConfig": {
+ "voice": "en-US-JennyNeural",
+ "style": "chat",
+ "rate": "+30.00%",
+ "pitch": "x-high",
+ "volume": "80"
+ },
+ "customVoices": {},
+ "properties": {
+ "audioSize": 79384,
+ "durationInTicks": 24800000,
+ "succeededAudioCount": 1,
+ "failedAudioCount": 0,
+ "duration": "PT2.48S",
+ "billingDetails": {
+ "customNeural": 0,
+ "neural": 33
+ },
+ "timeToLive": "P31D",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "decompressOutputFiles": false,
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false
+ },
+ "outputs": {
+ "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/38e249bf-2607-4236-930b-82f6724048d8/results.zip?SAS_Token"
+ },
+ "lastActionDateTime": "2022-11-05T18:52:23.210Z",
+ "status": "Succeeded",
+ "id": "38e249bf-2607-4236-930b-82f6724048d8",
+ "createdDateTime": "2022-11-05T18:52:22.807Z",
+ "displayName": "batch synthesis sample",
+ "description": "my test"
+    }
+ ],
+ // The next page link of the list of batch synthesis.
+ "@nextLink": "https://{region}.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=2"
+}
+```
+
+From `outputs.result`, you can download a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. For more information, see [batch synthesis results](#batch-synthesis-results).
+
+The `values` property in the JSON response lists your synthesis requests. The list is paginated, with a maximum page size of 100. The `"@nextLink"` property is provided as needed to get the next page of the paginated list.
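+
+Here's a minimal Python sketch of paging through all jobs, assuming the `requests` package is installed and that the `"@nextLink"` property is omitted on the last page:
+
+```python
+import requests
+
+SPEECH_KEY = "YourSpeechKey"   # placeholder: your Speech resource key
+REGION = "YourSpeechRegion"    # placeholder: your Speech resource region
+headers = {"Ocp-Apim-Subscription-Key": SPEECH_KEY}
+
+url = (f"https://{REGION}.customvoice.api.speech.microsoft.com"
+       f"/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=100")
+
+jobs = []
+while url:
+    page = requests.get(url, headers=headers).json()
+    jobs.extend(page["values"])        # jobs on the current page
+    url = page.get("@nextLink")        # assumed absent (None) when there are no more pages
+
+print(f"Found {len(jobs)} batch synthesis jobs.")
+```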
+
+## Delete batch synthesis
+
+Delete the batch synthesis job history after you've retrieved the audio output results. The Speech service will keep each synthesis history for up to 31 days, or the duration of the request `timeToLive` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
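+
+For example, here's a small Python sketch that computes when a job will be removed, assuming the third-party `isodate` package is installed to parse the ISO 8601 duration, and using placeholder values taken from a GET batch synthesis response:
+
+```python
+from datetime import datetime
+
+import isodate  # third-party package for parsing ISO 8601 durations
+
+# Placeholder values from a GET batch synthesis response ("Z" rewritten as "+00:00").
+last_action = datetime.fromisoformat("2022-11-05T14:00:32.523+00:00")
+time_to_live = isodate.parse_duration("P31D")  # 31 days
+
+# The job is removed automatically once lastActionDateTime + timeToLive has passed.
+print("Automatic deletion on or after:", (last_action + time_to_live).isoformat())
+```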
+
+To delete a batch synthesis job, make an HTTP DELETE request using the URI as shown in the following example. Replace `YourSynthesisId` with your batch synthesis ID, replace `YourSpeechKey` with your Speech resource key, and replace `YourSpeechRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X DELETE "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis/YourSynthesisId" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+```
+
+The response headers will include `HTTP/1.1 204 No Content` if the delete request was successful.
+
+## Batch synthesis results
+
+After you [get a batch synthesis job](#get-batch-synthesis) with `status` of "Succeeded", you can download the audio output results. Use the URL from the `outputs.result` property of the [get batch synthesis](#get-batch-synthesis) response.
+
+To get the batch synthesis results file, make an HTTP GET request using the URI as shown in the following example. Replace `YourOutputsResultUrl` with the URL from the `outputs.result` property of the [get batch synthesis](#get-batch-synthesis) response. Replace `YourSpeechKey` with your Speech resource key.
+
+```azurecli-interactive
+curl -v -X GET "YourOutputsResultUrl" -H "Ocp-Apim-Subscription-Key: YourSpeechKey" > results.zip
+```
+
+The results are in a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. The numbered prefix of each filename (shown below as `[nnnn]`) is in the same order as the text inputs used when you created the batch synthesis.
+
+> [!NOTE]
+> The `[nnnn].debug.json` file contains the synthesis result ID and other information that might help with troubleshooting. The properties that it contains might change, so you shouldn't take any dependencies on the JSON format.
+
+The summary file contains the synthesis results for each text input. Here's an example `summary.json` file:
+
+```json
+{
+ "jobID": "41b83de2-380d-45dc-91af-722b68cfdc8e",
+ "status": "Succeeded",
+ "results": [
+ {
+ "texts": [
+ "<speak version='1.0' xml:lang='en-US'>\n\t\t\t\t<voice xml:lang='en-US' xml:gender='Female' name='en-US-JennyNeural'>\n\t\t\t\t\tThe rainbow has seven colors.\n\t\t\t\t</voice>\n\t\t\t</speak>"
+ ],
+ "status": "Succeeded",
+ "billingDetails": {
+ "CustomNeural": "0",
+ "Neural": "33"
+ },
+ "audioFileName": "0001.wav",
+ "properties": {
+ "audioSize": "100000",
+ "duration": "PT3.1S",
+ "durationInTicks": "31250000"
+ }
+ }
+ ]
+}
+```
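+
+Here's a minimal Python sketch that downloads the results ZIP file and reads `summary.json`, assuming the `requests` package is installed and that `YourOutputsResultUrl` is the `outputs.result` URL:
+
+```python
+import io
+import json
+import zipfile
+
+import requests
+
+SPEECH_KEY = "YourSpeechKey"          # placeholder: your Speech resource key
+RESULT_URL = "YourOutputsResultUrl"   # placeholder: the outputs.result URL
+
+response = requests.get(RESULT_URL, headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY})
+with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
+    archive.extractall("batch-synthesis-results")   # audio, summary, and debug files
+    with archive.open("summary.json") as summary_file:
+        summary = json.load(summary_file)
+
+# Print the per-input synthesis results from the summary.
+for result in summary["results"]:
+    print(result["audioFileName"], result["status"], result["properties"]["duration"])
+```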
+
+If sentence boundary data was requested (`"sentenceBoundaryEnabled": true`), then a corresponding `[nnnn].sentence.json` file will be included in the results. Likewise, if word boundary data was requested (`"wordBoundaryEnabled": true`), then a corresponding `[nnnn].word.json` file will be included in the results.
+
+Here's an example word data file with both audio offset and duration in milliseconds:
+
+```json
+[
+ {
+ "Text": "the",
+ "AudioOffset": 38,
+ "Duration": 153
+ },
+ {
+ "Text": "rainbow",
+ "AudioOffset": 201,
+ "Duration": 326
+ },
+ {
+ "Text": "has",
+ "AudioOffset": 567,
+ "Duration": 96
+ },
+ {
+ "Text": "seven",
+ "AudioOffset": 673,
+ "Duration": 96
+ },
+ {
+ "Text": "colors",
+ "AudioOffset": 778,
+ "Duration": 451
+  }
+]
+```
+
+## Batch synthesis properties
+
+Batch synthesis properties are described in the following table.
+
+| Property | Description |
+|-|-|
+|`createdDateTime`|The date and time when the batch synthesis job was created.<br/><br/>This property is read-only.|
+|`customProperties`|A custom set of optional batch synthesis configuration settings.<br/><br/>This property is stored for your convenience to associate the synthesis jobs that you created with the synthesis jobs that you get or list. This property is stored, but isn't used by the Speech service.<br/><br/>You can specify up to 10 custom properties as key and value pairs. The maximum allowed key length is 64 characters, and the maximum allowed value length is 256 characters.|
+|`customVoices`|The map of a custom voice name and its deployment ID.<br/><br/>For example: `"customVoices": {"your-custom-voice-name": "502ac834-6537-4bc3-9fd6-140114daa66d"}`<br/><br/>You can use the voice name in your `synthesisConfig.voice` (when the `textType` is set to `"PlainText"`) or within the SSML text of `inputs` (when the `textType` is set to `"SSML"`).<br/><br/>This property is required to use a custom voice. If you try to use a custom voice that isn't defined here, the service returns an error.|
+|`description`|The description of the batch synthesis.<br/><br/>This property is optional.|
+|`displayName`|The name of the batch synthesis. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
+|`id`|The batch synthesis job ID.<br/><br/>This property is read-only.|
+|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `textType` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `textType` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-JennyNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result will be written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but keep in mind that the maximum JSON payload size (including all text inputs and other properties) that will be accepted is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
+|`lastActionDateTime`|The most recent date and time when the `status` property value changed.<br/><br/>This property is read-only.|
+|`outputs.result`|The location of the batch synthesis result files with audio output and logs.<br/><br/>This property is read-only.|
+|`properties`|A defined set of optional batch synthesis configuration settings.|
+|`properties.audioSize`|The audio output size in bytes.<br/><br/>This property is read-only.|
+|`properties.billingDetails`|The number of words that were processed and billed by `customNeural` versus `neural` (prebuilt) voices.<br/><br/>This property is read-only.|
+|`properties.concatenateResult`|Determines whether to concatenate the result. This optional `bool` value ("true" or "false") is "false" by default.|
+|`properties.decompressOutputFiles`|Determines whether to unzip the synthesis result files in the destination container. This property can only be set when the `destinationContainerUrl` property is set or BYOS (Bring Your Own Storage) is configured for the Speech resource. This optional `bool` value ("true" or "false") is "false" by default.|
+|`properties.destinationContainerUrl`|The batch synthesis results can be stored in a writable Azure container. If you don't specify a container URI with [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) token, the Speech service stores the results in a container managed by Microsoft. SAS with stored access policies isn't supported. When the synthesis job is deleted, the result data is also deleted.<br/><br/>This optional property isn't included in the response when you get the synthesis job.|
+|`properties.duration`|The audio output duration. The value is an ISO 8601 encoded duration.<br/><br/>This property is read-only.|
+|`properties.durationInTicks`|The audio output duration in ticks.<br/><br/>This property is read-only.|
+|`properties.failedAudioCount`|The count of batch synthesis inputs for which the audio output failed.<br/><br/>This property is read-only.|
+|`properties.outputFormat`|The audio output format.<br/><br/>For information about the accepted values, see [audio output formats](rest-text-to-speech.md#audio-outputs). The default output format is `riff-24khz-16bit-mono-pcm`.|
+|`properties.sentenceBoundaryEnabled`|Determines whether to generate sentence boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If sentence boundary data is requested, then a corresponding `[nnnn].sentence.json` file will be included in the results data ZIP file.|
+|`properties.succeededAudioCount`|The count of batch synthesis inputs for which the audio output succeeded.<br/><br/>This property is read-only.|
+|`properties.timeToLive`|A duration after the synthesis job is created, when the synthesis results will be automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. This optional setting is `P31D` (31 days) by default. The maximum time to live is 31 days. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.<br/><br/>Otherwise, you can call the [delete](#delete-batch-synthesis) synthesis method to remove the job sooner.|
+|`properties.wordBoundaryEnabled`|Determines whether to generate word boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If word boundary data is requested, then a corresponding `[nnnn].word.json` file will be included in the results data ZIP file.|
+|`status`|The batch synthesis processing status.<br/><br/>The status should progress from "NotStarted" to "Running", and finally to either "Succeeded" or "Failed".<br/><br/>This property is read-only.|
+|`synthesisConfig`|The configuration settings to use for batch synthesis of plain text.<br/><br/>This property is only applicable when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.pitch`|The pitch of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.rate`|The rate of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.style`|For some voices, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant.<br/><br/>For information about the available styles per voice, see [voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.voice`|The voice that speaks the audio output.<br/><br/>For information about the available prebuilt neural voices, see [language and voice support](language-support.md?tabs=stt-tts). To use a custom voice, you must specify a valid custom voice and deployment ID mapping in the `customVoices` property.<br/><br/>This property is required when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.volume`|The volume of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
+|`textType`|Indicates whether the `inputs` text property should be plain text or SSML. The possible case-insensitive values are "PlainText" and "SSML". When the `textType` is set to `"PlainText"`, you must also set the `synthesisConfig` voice property.<br/><br/>This property is required.|
+
+## HTTP status codes
+
+This section details the HTTP response codes and messages from the batch synthesis API.
+
+### HTTP 200 OK
+
+HTTP 200 OK indicates that the request was successful.
+
+### HTTP 201 Created
+
+HTTP 201 Created indicates that the create batch synthesis request (via HTTP POST) was successful.
+
+### HTTP 204 error
+
+An HTTP 204 error indicates that the request was successful, but the resource doesn't exist. For example:
+- You tried to get or delete a synthesis job that doesn't exist.
+- You successfully deleted a synthesis job.
+
+### HTTP 400 error
+
+Here are examples that can result in the 400 error:
+- The `outputFormat` is unsupported or invalid. Provide a valid format value, or leave `outputFormat` empty to use the default setting.
+- The number of requested text inputs exceeded the limit of 1,000.
+- The `top` query parameter exceeded the limit of 100.
+- You tried to use an invalid deployment ID or a custom voice that isn't successfully deployed. Make sure the Speech resource has access to the custom voice, and the custom voice is successfully deployed. You must also ensure that the mapping of `{"your-custom-voice-name": "your-deployment-ID"}` is correct in your batch synthesis request.
+- You tried to delete a batch synthesis job that hasn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
+- You tried to use a `F0` Speech resource, but the region only supports the `S0` (standard) Speech resource pricing tier.
+- You tried to create a new batch synthesis job that would exceed the limit of 200 active jobs. Each Speech resource can have up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed".
+
+### HTTP 404 error
+
+The specified entity can't be found. Make sure the synthesis ID is correct.
+
+### HTTP 429 error
+
+There are too many recent requests. Each client application can submit up to 50 requests per 5 seconds for each Speech resource. Reduce the number of requests per second.
+
+You can check the rate limit and quota remaining via the HTTP headers as shown in the following example:
+
+```http
+X-RateLimit-Limit: 50
+X-RateLimit-Remaining: 49
+X-RateLimit-Reset: 2022-11-11T01:49:43Z
+```
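+
+Here's a minimal retry sketch in Python, assuming the `requests` package is installed and that the `X-RateLimit-Reset` header is a UTC timestamp as shown above; the `url` and `headers` arguments are placeholders:
+
+```python
+import time
+from datetime import datetime, timezone
+
+import requests
+
+def get_with_backoff(url, headers, max_retries=3):
+    """Retry a GET request after HTTP 429, waiting until the reported reset time."""
+    for _ in range(max_retries):
+        response = requests.get(url, headers=headers)
+        if response.status_code != 429:
+            return response
+        reset = response.headers.get("X-RateLimit-Reset")
+        if reset:
+            reset_at = datetime.fromisoformat(reset.replace("Z", "+00:00"))
+            wait = max((reset_at - datetime.now(timezone.utc)).total_seconds(), 1)
+        else:
+            wait = 5  # fall back to a short fixed delay
+        time.sleep(wait)
+    return response
+```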
+
+### HTTP 500 error
+
+HTTP 500 Internal Server Error indicates that the request failed. The response body contains the error message.
+
+### HTTP error example
+
+Here's an example request that results in an HTTP 400 error, because the `top` query parameter is set to a value greater than 100.
+
+```console
+curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=200" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+```
+
+In this case, the response headers will include `HTTP/1.1 400 Bad Request`.
+
+The response body will resemble the following JSON example:
+
+```json
+{
+ "code": "InvalidRequest",
+ "message": "The top parameter should not be greater than 100.",
+ "innerError": {
+ "code": "InvalidParameter",
+ "message": "The top parameter should not be greater than 100."
+ }
+}
+```
+
+## Next steps
+
+- [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)
+- [Text-to-speech quickstart](get-started-text-to-speech.md)
+- [Migrate to batch synthesis](migrate-to-batch-synthesis.md)
cognitive-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-create.md
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions" ``` - You should receive a response body in the following format: ```json
Here are some property options that you can use to configure a transcription whe
|`model`|You can set the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Using custom models](#using-custom-models).| |`profanityFilterMode`|Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags. The default value is `Masked`. | |`punctuationMode`|Specifies how to handle punctuation in recognition results. Accepted values are `None` to disable punctuation, `Dictated` to imply explicit (spoken) punctuation, `Automatic` to let the decoder deal with punctuation, or `DictatedAndAutomatic` to use dictated and automatic punctuation. The default value is `DictatedAndAutomatic`.|
-|`timeToLive`|A duration after the transcription job is created, when the transcription results will be automatically deleted. For example, specify `PT12H` for 12 hours. As an alternative, you can call [DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription) regularly after you retrieve the transcription results.|
+|`timeToLive`|A duration after the transcription job is created, when the transcription results will be automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. As an alternative, you can call [DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription) regularly after you retrieve the transcription results.|
|`wordLevelTimestampsEnabled`|Specifies if word level timestamps should be included in the output. The default value is `false`.|
cognitive-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-get.md
Depending in part on the request parameters set when you created the transcripti
|`confidence`|The confidence value for the recognition.| |`display`|The display form of the recognized text. Added punctuation and capitalization are included.| |`displayPhraseElements`|A list of results with display text for each word of the phrase. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with speech-to-text REST API version 3.1.|
-|`duration`|The audio duration, ISO 8601 encoded duration.|
+|`duration`|The audio duration. The value is an ISO 8601 encoded duration.|
|`durationInTicks`|The audio duration in ticks (1 tick is 100 nanoseconds).| |`itn`|The inverse text normalized (ITN) form of the recognized text. Abbreviations such as "Doctor Smith" to "Dr Smith", phone numbers, and other transformations are applied.| |`lexical`|The actual words recognized.| |`locale`|The locale identified from the input the audio. The `languageIdentification` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with speech-to-text REST API version 3.1.| |`maskedITN`|The ITN form with profanity masking applied.| |`nBest`|A list of possible transcriptions for the current phrase with confidences.|
-|`offset`|The offset in audio of this phrase, ISO 8601 encoded duration.|
+|`offset`|The offset in audio of this phrase. The value is an ISO 8601 encoded duration.|
|`offsetInTicks`|The offset in audio of this phrase in ticks (1 tick is 100 nanoseconds).| |`recognitionStatus`|The recognition state. For example: "Success" or "Failure".| |`recognizedPhrases`|The list of results for each phrase.| |`source`|The URL that was provided as the input audio source. The source corresponds to the `contentUrls` or `contentContainerUrl` request property. The `source` property is the only way to confirm the audio input for a transcription.| |`speaker`|The identified speaker. The `diarization` and `diarizationEnabled` request properties must be set, otherwise this property is not present.|
-|`timestamp`|The creation time of the transcription, ISO 8601 encoded timestamp, combined date and time.|
+|`timestamp`|The creation date and time of the transcription. The value is an ISO 8601 encoded timestamp.|
|`words`|A list of results with lexical text for each word of the phrase. The `wordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.|
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
If you want to allow a user to grant access to other users, you need to assign t
## Next steps
-* [Long Audio API](./long-audio-api.md)
-
+- [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)
+- [Batch synthesis](batch-synthesis.md)
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Navigate to the project where you copied the model to [deploy the model copy](ho
- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md) - [How to record voice samples](record-custom-voice-samples.md) - [Text-to-Speech API reference](rest-text-to-speech.md)-- [Long Audio API](long-audio-api.md)
cognitive-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md
The HTTP status code for each response indicates success or common errors.
- [How to record voice samples](record-custom-voice-samples.md) - [Text-to-Speech API reference](rest-text-to-speech.md)-- [Long Audio API](long-audio-api.md)
+- [Batch synthesis](batch-synthesis.md)
cognitive-services How To Use Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-conversation-transcription.md
Previously updated : 01/24/2022 Last updated : 11/12/2022
-zone_pivot_groups: acs-js-csharp
+zone_pivot_groups: acs-js-csharp-python
ms.devlang: csharp, javascript
You can transcribe meetings and other conversations with the ability to add, rem
[!INCLUDE [C# Basics include](includes/how-to/conversation-transcription/real-time-csharp.md)] ::: zone-end + ## Next steps > [!div class="nextstepaction"]
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/long-audio-api.md
- Title: Synthesize long-form text to speech - Speech service-
-description: Learn how the Long Audio API is designed for asynchronous synthesis of long-form text to speech.
------ Previously updated : 11/11/2022---
-# Synthesize long-form text to speech
-
-The Long Audio API provides asynchronous synthesis of long-form text to speech. Publishers and audio content platforms can create long audio content in a batch. For example: audio books, news articles, and documents. The Long Audio API can create synthesized audio longer than 10 minutes.
-
-> [!TIP]
-> You can also use the [Speech SDK](speech-sdk.md) to create synthesized audio longer than 10 minutes by iterating over the text and synthesizing it in chunks. For a C# example, see [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs).
-
-## Workflow
-
-The Long Audio API is asynchronous and doesn't return synthesized audio in real time. You submit text files to be synthesized, poll for the status, and download the audio output when the status indicates success.
-
-This diagram provides a high-level overview of the workflow.
-
-![Long Audio API workflow diagram](media/long-audio-api/long-audio-api-workflow.png)
-
-## Prepare content for synthesis
-
-When preparing your text file, make sure it:
-
-* Is a single plain text (.txt) or SSML text (.txt). Don't use compressed files such as ZIP.
-* Is encoded as [UTF-8 with Byte Order Mark (BOM)](https://www.w3.org/International/questions/qa-utf8-bom.en#bom).
-* Contains more than 400 characters for plain text or 400 [billable characters](./text-to-speech.md#pricing-note) for SSML text, and less than 10,000 paragraphs.
- * For plain text, each paragraph is separated by pressing **Enter/Return**. See [plain text input example](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/en-US.txt).
- * For SSML text, each SSML piece is considered a paragraph. Separate SSML pieces by different paragraphs. See [SSML text input example](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/SSMLTextInputSample.txt).
-
-> [!NOTE]
-> When using SSML text, be sure to use the [supported SSML elements](speech-synthesis-markup.md?tabs=csharp#supported-ssml-elements) except the `audio`, `mstts:backgroundaudio`, and `lexicon` elements. The `audio`, `mstts:backgroundaudio`, and `lexicon` elements are not supported by the Long Audio API. The `audio` and `lexicon` elements will be ignored without any error message. The `mstts:backgroundaudio` element will cause the synthesis task to fail. If your synthesis task fails, download the audio result (.zip file) and check the error report with the suffix "err.txt" within the ZIP file for details.
-
-## Sample code
-
-The rest of this page focuses on Python, but sample code for the Long Audio API is available on GitHub for the following programming languages:
-
-* [Sample code: Python](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice-API-Samples/Python)
-* [Sample code: C#](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/LongAudioAPI/CSharp/LongAudioAPISample)
-* [Sample code: Java](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/)
-
-## Python example
-
-This section contains Python examples that show the basic usage of the Long Audio API. Create a new Python project using your favorite IDE or editor. Then copy this code snippet into a file named `long_audio_synthesis_client.py`.
-
-```python
-import json
-import ntpath
-import requests
-```
-
-These libraries are used to construct the HTTP request, and call the text-to-speech long audio synthesis REST API.
-
-### Get a list of supported voices
-
-The Long Audio API supports a subset of [Public Neural Voices](language-support.md?tabs=stt-tts) and [Custom Neural Voices](language-support.md?tabs=stt-tts).
-
-To get a list of supported voices, send a GET request to `https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/voices`.
-
-This code gets a full list of voices you can use at a specific region/endpoint.
-
-```python
-def get_voices():
- region = '<region>'
- key = '<your_key>'
- url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/voices'.format(region)
- header = {
- 'Ocp-Apim-Subscription-Key': key
- }
-
- response = requests.get(url, headers=header)
- print(response.text)
-
-get_voices()
-```
-
-Replace the following values:
-
-* Replace `<your_key>` with your Speech resource key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
-* Replace `<region>` with the region where your Speech resource was created (for example: `eastus` or `westus`). This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
-
-You'll see output that looks like this:
-
-```json
-{
- "values": [
- {
- "locale": "en-US",
- "voiceName": "en-US-AriaNeural",
- "description": "",
- "gender": "Female",
- "createdDateTime": "2020-05-21T05:57:39.123Z",
- "properties": {
- "publicAvailable": true
- }
- },
- {
- "id": "8fafd8cd-5f95-4a27-a0ce-59260f873141"
- "locale": "en-US",
- "voiceName": "my custom neural voice",
- "description": "",
- "gender": "Male",
- "createdDateTime": "2020-05-21T05:25:40.243Z",
- "properties": {
- "publicAvailable": false
- }
- }
- ]
-}
-```
-
-If **properties.publicAvailable** is **true**, the voice is a public neural voice. Otherwise, it's a custom neural voice.
-
-### Convert text to speech
-
-Prepare an input text file, in either plain text or SSML text, then add the following code to `long_audio_synthesis_client.py`:
-
-> [!NOTE]
-> `concatenateResult` is an optional parameter. If this parameter isn't set, the audio outputs will be generated per paragraph. You can also concatenate the audios into one output by including the parameter.
-> `outputFormat` is also optional. By default, the audio output is set to `riff-24khz-16bit-mono-pcm`. For more information about supported audio output formats, see [Audio output formats](#audio-output-formats).
-
-```python
-def submit_synthesis():
- region = '<region>'
- key = '<your_key>'
- input_file_path = '<input_file_path>'
- locale = '<locale>'
- url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis'.format(region)
- header = {
- 'Ocp-Apim-Subscription-Key': key
- }
-
- voice_identities = [
- {
- 'voicename': '<voice_name>'
- }
- ]
-
- payload = {
- 'displayname': 'long audio synthesis sample',
- 'description': 'sample description',
- 'locale': locale,
- 'voices': json.dumps(voice_identities),
- 'outputformat': 'riff-24khz-16bit-mono-pcm',
- 'concatenateresult': True,
- }
-
- filename = ntpath.basename(input_file_path)
- files = {
- 'script': (filename, open(input_file_path, 'rb'), 'text/plain')
- }
-
- response = requests.post(url, payload, headers=header, files=files)
- print('response.status_code: %d' % response.status_code)
- print(response.headers['Location'])
-
-submit_synthesis()
-```
-
-Replace the following values:
-
-* Replace `<your_key>` with your Speech resource key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
-* Replace `<region>` with the region where your Speech resource was created (for example: `eastus` or `westus`). This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
-* Replace `<input_file_path>` with the path to the text file you've prepared for text-to-speech.
-* Replace `<locale>` with the desired output locale. For more information, see [language support](language-support.md?tabs=stt-tts).
-
-Use one of the voices returned by your previous call to the `/voices` endpoint.
-
-* If you're using public neural voice, replace `<voice_name>` with the desired output voice.
-* To use a custom neural voice, replace the `voice_identities` variable with the following, and replace `<voice_id>` with the `id` of your custom neural voice.
-```Python
-voice_identities = [
- {
- 'id': '<voice_id>'
- }
-]
-```
-
-You'll see output that looks like this:
-
-```console
-response.status_code: 202
-https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/<guid>
-```
-
-> [!NOTE]
-> If you have more than one input file, you will need to submit multiple requests, and there are limitations to consider.
-> * The client can submit up to **5** requests per second for each Azure subscription account. If it exceeds the limitation, a **429 error code (too many requests)** is returned. Reduce the rate of submissions to avoid this limit.
-> * The server can queue up to **120** requests for each Azure subscription account. If the queue exceeds this limitation, the server will return a **429 error code (too many requests)**. Wait for completed requests before submitting additional requests.
-
-You can use the URL in output to get the request status.
-
-### Get details about a submitted request
-
-To get the status of a submitted synthesis request, send a GET request to the URL returned in the previous step.
-
-```Python
-
-def get_synthesis():
- url = '<url>'
- key = '<your_key>'
- header = {
- 'Ocp-Apim-Subscription-Key': key
- }
- response = requests.get(url, headers=header)
- print(response.text)
-
-get_synthesis()
-```
-
-The output will look like this:
-
-```json
-response.status_code: 200
-{
- "models": [
- {
- "voiceName": "en-US-AriaNeural"
- }
- ],
- "properties": {
- "outputFormat": "riff-24khz-16bit-mono-pcm",
- "concatenateResult": false,
- "totalDuration": "PT5M57.252S",
- "billableCharacterCount": 3048
- },
- "id": "eb3d7a81-ee3e-4e9a-b725-713383e71677",
- "lastActionDateTime": "2021-01-14T11:12:27.240Z",
- "status": "Succeeded",
- "createdDateTime": "2021-01-14T11:11:02.557Z",
- "locale": "en-US",
- "displayName": "long audio synthesis sample",
- "description": "sample description"
-}
-```
-
-The `status` property changes from `NotStarted` status, to `Running`, and finally to `Succeeded` or `Failed`. You can poll this API in a loop until the status becomes `Succeeded` or `Failed`.
-
-### Download audio result
-
-Once a synthesis request succeeds, you can download the audio result by calling the GET `/files` API.
-
-```python
-def get_files():
- id = '<request_id>'
- region = '<region>'
- key = '<your_key>'
- url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/{}/files'.format(region, id)
- header = {
- 'Ocp-Apim-Subscription-Key': key
- }
-
- response = requests.get(url, headers=header)
- print('response.status_code: %d' % response.status_code)
- print(response.text)
-
-get_files()
-```
-
-Replace `<request_id>` with the ID of the request whose result you want to download. You can find it in the response from the previous step.
-
-The output will look like this:
-
-```json
-response.status_code: 200
-{
- "values": [
- {
- "name": "2779f2aa-4e21-4d13-8afb-6b3104d6661a.txt",
- "kind": "LongAudioSynthesisScript",
- "properties": {
- "size": 4200
- },
- "createdDateTime": "2021-01-14T11:11:02.410Z",
- "links": {
- "contentUrl": "https://customvoice-usw.blob.core.windows.net/artifacts/input.txt?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=e05d8d56-9675-448b-820c-4318ae64c8d5"
- }
- },
- {
- "name": "voicesynthesis_waves.zip",
- "kind": "LongAudioSynthesisResult",
- "properties": {
- "size": 9290000
- },
- "createdDateTime": "2021-01-14T11:12:27.226Z",
- "links": {
- "contentUrl": "https://customvoice-usw.blob.core.windows.net/artifacts/voicesynthesis_waves.zip?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=e05d8d56-9675-448b-820c-4318ae64c8d5"
- }
- }
- ]
-}
-```
-This example output contains information for two files. The one with `"kind": "LongAudioSynthesisScript"` is the input script submitted. The other one with `"kind": "LongAudioSynthesisResult"` is the result of this request.
-
-The result is a ZIP file that contains the generated audio output files, along with a copy of the input text.
-
-Both files can be downloaded from the URL in their `links.contentUrl` property.
-
-### Get all synthesis requests
-
-The following code lists all submitted requests:
-
-```python
-def get_synthesis():
- region = '<region>'
- key = '<your_key>'
- url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/'.format(region)
- header = {
- 'Ocp-Apim-Subscription-Key': key
- }
-
- response = requests.get(url, headers=header)
- print('response.status_code: %d' % response.status_code)
- print(response.text)
-
-get_synthesis()
-```
-
-The output will look like this:
-
-```json
-response.status_code: 200
-{
- "values": [
- {
- "models": [
- {
- "id": "8fafd8cd-5f95-4a27-a0ce-59260f873141",
- "voiceName": "my custom neural voice"
- }
- ],
- "properties": {
- "outputFormat": "riff-24khz-16bit-mono-pcm",
- "concatenateResult": false,
- "totalDuration": "PT1S",
- "billableCharacterCount": 5
- },
- "id": "f9f0bb74-dfa5-423d-95e7-58a5e1479315",
- "lastActionDateTime": "2021-01-05T07:25:42.433Z",
- "status": "Succeeded",
- "createdDateTime": "2021-01-05T07:25:13.600Z",
- "locale": "en-US",
- "displayName": "Long Audio Synthesis",
- "description": "Long audio synthesis sample"
- },
- {
- "models": [
- {
- "voiceName": "en-US-AriaNeural"
- }
- ],
- "properties": {
- "outputFormat": "riff-24khz-16bit-mono-pcm",
- "concatenateResult": false,
- "totalDuration": "PT5M57.252S",
- "billableCharacterCount": 3048
- },
- "id": "eb3d7a81-ee3e-4e9a-b725-713383e71677",
- "lastActionDateTime": "2021-01-14T11:12:27.240Z",
- "status": "Succeeded",
- "createdDateTime": "2021-01-14T11:11:02.557Z",
- "locale": "en-US",
- "displayName": "long audio synthesis sample",
- "description": "sample description"
- }
- ]
-}
-```
-
-The `values` property lists your synthesis requests. The list is paginated, with a maximum page size of 100. If there are more than 100 requests, a `"@nextLink"` property is provided to get the next page of the paginated list.
-
-```console
- "@nextLink": "https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/?top=100&skip=100"
-```
-
-You can also customize the page size and skip number by providing `skip` and `top` as URL parameters.
-
-### Remove previous requests
-
-The service will keep up to **20,000** requests for each Azure subscription account. If your request amount exceeds this limitation, remove previous requests before making new ones. If you don't remove existing requests, you'll receive an error notification.
-
-The following code shows how to remove a specific synthesis request.
-
-```python
-def delete_synthesis():
- id = '<request_id>'
- region = '<region>'
- key = '<your_key>'
- url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/{}/'.format(region, id)
- header = {
- 'Ocp-Apim-Subscription-Key': key
- }
-
- response = requests.delete(url, headers=header)
- print('response.status_code: %d' % response.status_code)
-```
-
-If the request is successfully removed, the response status code will be HTTP 204 (No Content).
-
-```console
-response.status_code: 204
-```
-
-> [!NOTE]
-> Requests with a status of `NotStarted` or `Running` cannot be removed or deleted.
-
-The completed `long_audio_synthesis_client.py` is available on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Python/voiceclient.py).
-
-## HTTP status codes
-
-The following table details the HTTP response codes and messages from the REST API.
-
-| API | HTTP status code | Description | Solution |
-|--||-|-|
-| Create | 400 | The voice synthesis isn't enabled in this region. | Change the speech resource key with a supported region. |
-| | 400 | Only the **Standard** speech resource for this region is valid. | Change the speech resource key to the "Standard" pricing tier. |
-| | 400 | Exceed the 20,000 request limit for the Azure account. Remove some requests before submitting new ones. | The server will keep up to 20,000 requests for each Azure account. Delete some requests before submitting new ones. |
-| | 400 | This model can't be used in the voice synthesis: {modelID}. | Make sure the {modelID}'s state is correct. |
-| | 400 | The region for the request doesn't match the region for the model: {modelID}. | Make sure the {modelID}'s region matches the request's region. |
-| | 400 | The voice synthesis only supports the text file in the UTF-8 encoding with the byte-order marker. | Make sure the input files are in UTF-8 encoding with the byte-order marker. |
-| | 400 | Only valid SSML inputs are allowed in the voice synthesis request. | Make sure the input SSML expressions are correct. |
-| | 400 | The voice name {voiceName} isn't found in the input file. | The input SSML voice name isn't aligned with the model ID. |
-| | 400 | The number of paragraphs in the input file should be less than 10,000. | Make sure the number of paragraphs in the file is less than 10,000. |
-| | 400 | The input file should be more than 400 characters. | Make sure your input file exceeds 400 characters. |
-| | 404 | The model declared in the voice synthesis definition can't be found: {modelID}. | Make sure the {modelID} is correct. |
-| | 429 | Exceed the active voice synthesis limit. Wait until some requests finish. | The server is allowed to run and queue up to 120 requests for each Azure account. Wait and avoid submitting new requests until some requests are completed. |
-| All | 429 | There are too many requests. | The client is allowed to submit up to five requests to the server per second for each Azure account. Reduce the request amount per second. |
-| Delete | 400 | The voice synthesis task is still in use. | You can only delete requests that are **Completed** or **Failed**. |
-| GetByID | 404 | The specified entity can't be found. | Make sure the synthesis ID is correct. |
-
-## Regions and endpoints
-
-The Long audio API is available in multiple regions with unique endpoints.
-
-| Region | Endpoint |
-|--|-|
-| Australia East | `https://australiaeast.customvoice.api.speech.microsoft.com` |
-| East US | `https://eastus.customvoice.api.speech.microsoft.com` |
-| India Central | `https://centralindia.customvoice.api.speech.microsoft.com` |
-| South Central US | `https://southcentralus.customvoice.api.speech.microsoft.com` |
-| Southeast Asia | `https://southeastasia.customvoice.api.speech.microsoft.com` |
-| UK South | `https://uksouth.customvoice.api.speech.microsoft.com` |
-| West Europe | `https://westeurope.customvoice.api.speech.microsoft.com` |
-
-## Audio output formats
-
-We support flexible audio output formats. You can generate audio outputs per paragraph or concatenate the audio outputs into a single output by setting the `concatenateResult` parameter. The following audio output formats are supported by the Long Audio API:
-
-> [!NOTE]
-> The default audio format is riff-24khz-16bit-mono-pcm.
->
-> The sample rate for long audio voices is 24kHz, not 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
-
-* riff-8khz-16bit-mono-pcm
-* riff-16khz-16bit-mono-pcm
-* riff-24khz-16bit-mono-pcm
-* riff-48khz-16bit-mono-pcm
-* audio-16khz-32kbitrate-mono-mp3
-* audio-16khz-64kbitrate-mono-mp3
-* audio-16khz-128kbitrate-mono-mp3
-* audio-24khz-48kbitrate-mono-mp3
-* audio-24khz-96kbitrate-mono-mp3
-* audio-24khz-160kbitrate-mono-mp3
cognitive-services Migrate To Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-to-batch-synthesis.md
+
+ Title: Migrate to Batch synthesis API - Speech service
+
+description: This document helps developers migrate code from Long Audio REST API to Batch synthesis REST API.
++++++ Last updated : 09/01/2022+
+ms.devlang: csharp
+++
+# Migrate code from Long Audio API to Batch synthesis API
+
+The [Batch synthesis API](batch-synthesis.md) (Preview) provides asynchronous synthesis of long-form text to speech. Benefits of upgrading from Long Audio API to Batch synthesis API, and details about how to do so, are described in the sections below.
+
+> [!IMPORTANT]
+> [Batch synthesis API](batch-synthesis.md) is currently in public preview. Once it's generally available, the Long Audio API will be deprecated.
+
+## Base path
+
+You must update the base path in your code from `/texttospeech/v3.0/longaudiosynthesis` to `/texttospeech/3.1-preview1/batchsynthesis`. For example, to list synthesis jobs for your Speech resource in the `eastus` region, use `https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis` instead of `https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis`.
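+
+For example, a minimal Python sketch of listing synthesis jobs against the new base path might look like the following. The key and region values are placeholders for your own Speech resource; the `Ocp-Apim-Subscription-Key` header is the same one used with the Long Audio API.
+
+```python
+import requests
+
+# Placeholders: use your own Speech resource key and region.
+SPEECH_KEY = "<your-speech-resource-key>"
+REGION = "eastus"
+
+# New Batch synthesis base path (replaces /texttospeech/v3.0/longaudiosynthesis).
+url = f"https://{REGION}.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis"
+
+response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY})
+print(response.status_code)
+print(response.json())
+```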
+
+## Regions and endpoints
+
+Batch synthesis API is available in all [Speech regions](regions.md).
+
+The Long Audio API is limited to the following regions:
+
+| Region | Endpoint |
+|--|-|
+| Australia East | `https://australiaeast.customvoice.api.speech.microsoft.com` |
+| East US | `https://eastus.customvoice.api.speech.microsoft.com` |
+| India Central | `https://centralindia.customvoice.api.speech.microsoft.com` |
+| South Central US | `https://southcentralus.customvoice.api.speech.microsoft.com` |
+| Southeast Asia | `https://southeastasia.customvoice.api.speech.microsoft.com` |
+| UK South | `https://uksouth.customvoice.api.speech.microsoft.com` |
+| West Europe | `https://westeurope.customvoice.api.speech.microsoft.com` |
+
+## Voices list
+
+Batch synthesis API supports all [text-to-speech voices and styles](language-support.md?tabs=stt-tts).
+
+The Long Audio API is limited to the set of voices returned by a GET request to `https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/voices`.
+
+## Text inputs
+
+Batch synthesis text inputs are sent in a JSON payload of up to 500 kilobytes.
+
+Long Audio API text inputs are uploaded from a file that meets the following requirements:
+* One plain text (.txt) or SSML text (.txt) file encoded as [UTF-8 with Byte Order Mark (BOM)](https://www.w3.org/International/questions/qa-utf8-bom.en#bom). Don't use compressed files such as ZIP. If you have more than one input file, you must submit multiple requests.
+* Contains more than 400 characters for plain text or 400 [billable characters](./text-to-speech.md#pricing-note) for SSML text, and less than 10,000 paragraphs. For plain text, each paragraph is separated by a new line. For SSML text, each SSML piece is considered a paragraph. Separate SSML pieces by different paragraphs.
+
+With Batch synthesis API, you can use any of the [supported SSML elements](speech-synthesis-markup.md?tabs=csharp#supported-ssml-elements), including the `audio`, `mstts:backgroundaudio`, and `lexicon` elements. The `audio`, `mstts:backgroundaudio`, and `lexicon` elements aren't supported by Long Audio API.
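+
+Given the 500-kilobyte limit on the batch synthesis JSON payload noted above, you may want to verify the serialized request before submitting it. The sketch below checks only the encoded size; the payload fields are illustrative placeholders, not the documented Batch synthesis request schema.
+
+```python
+import json
+
+# Hypothetical payload: the field names here are placeholders for illustration only.
+payload = {"displayName": "migration sample", "text": "<your synthesis text>"}
+
+encoded = json.dumps(payload).encode("utf-8")
+limit = 500 * 1024  # 500 kilobytes
+if len(encoded) > limit:
+    raise ValueError(f"Payload is {len(encoded)} bytes; it must stay under {limit} bytes.")
+```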
+
+## Audio output formats
+
+Batch synthesis API supports all [text-to-speech audio output formats](rest-text-to-speech.md#audio-outputs).
+
+The Long Audio API is limited to the following set of audio output formats. The sample rate for long audio voices is 24kHz, not 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
+
+* riff-8khz-16bit-mono-pcm
+* riff-16khz-16bit-mono-pcm
+* riff-24khz-16bit-mono-pcm
+* riff-48khz-16bit-mono-pcm
+* audio-16khz-32kbitrate-mono-mp3
+* audio-16khz-64kbitrate-mono-mp3
+* audio-16khz-128kbitrate-mono-mp3
+* audio-24khz-48kbitrate-mono-mp3
+* audio-24khz-96kbitrate-mono-mp3
+* audio-24khz-160kbitrate-mono-mp3
+
+## Getting results
+
+With batch synthesis API, use the URL from the `outputs.result` property of the GET batch synthesis response. The [results](batch-synthesis.md#batch-synthesis-results) are in a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details.
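+
+A minimal sketch of retrieving the result might look like this, assuming `<synthesis-job-url>` is the URL of an individual batch synthesis job (the base path shown earlier plus the job ID) and that the job has finished:
+
+```python
+import requests
+
+# Placeholders: your Speech resource key and the URL of a specific batch synthesis job.
+SPEECH_KEY = "<your-speech-resource-key>"
+job_url = "<synthesis-job-url>"
+
+job = requests.get(job_url, headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY}).json()
+if job.get("status") == "Succeeded":
+    # outputs.result points to a ZIP file with the audio, summary, and debug details.
+    result_url = job["outputs"]["result"]
+    with open("batch-synthesis-results.zip", "wb") as f:
+        f.write(requests.get(result_url).content)
+```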
+
+Long Audio API text inputs and results are returned via two separate content URLs as shown in the following example. The one with `"kind": "LongAudioSynthesisScript"` is the input script submitted. The other one with `"kind": "LongAudioSynthesisResult"` is the result of this request. Both ZIP files can be downloaded from the URL in their `links.contentUrl` property.
+
+## Cleaning up resources
+
+Batch synthesis API supports up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed". The Speech service will keep each synthesis history for up to 31 days, or the duration of the request `timeToLive` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
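+
+For example, you can compute when a finished job's results will be removed by combining those two properties. This sketch uses the third-party `isodate` package to parse the ISO 8601 values; the property values shown are hypothetical.
+
+```python
+import isodate  # third-party ISO 8601 parser: pip install isodate
+
+# Hypothetical values as they might appear in a GET batch synthesis response.
+last_action = isodate.parse_datetime("2022-11-01T10:00:00.000Z")
+time_to_live = isodate.parse_duration("PT12H")
+
+# For a job with status "Succeeded" or "Failed", deletion happens at lastActionDateTime + timeToLive.
+deletion_time = last_action + time_to_live
+print(f"Results are deleted automatically at {deletion_time.isoformat()}")
+```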
+
+The Long Audio API is limited to 20,000 requests for each Azure subscription account. The Speech service doesn't remove job history automatically. You must remove the previous job run history before making new requests that would otherwise exceed the limit.
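+
+If you need to clear out old Long Audio requests before migrating, a minimal cleanup sketch using the Long Audio endpoints referenced above might look like this. The key and region are placeholders for your own Speech resource, only the first page of results is handled, and requests that are still `NotStarted` or `Running` can't be removed.
+
+```python
+import requests
+
+# Placeholders: your Speech resource key and a Long Audio API region.
+SPEECH_KEY = "<your-speech-resource-key>"
+REGION = "eastus"
+
+base = f"https://{REGION}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis"
+headers = {"Ocp-Apim-Subscription-Key": SPEECH_KEY}
+
+# List existing Long Audio requests (first page only) and delete the finished ones.
+for job in requests.get(base, headers=headers).json().get("values", []):
+    if job.get("status") in ("Succeeded", "Failed"):
+        requests.delete(f"{base}/{job['id']}/", headers=headers)
+```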
+
+## Next steps
+
+- [Batch synthesis API](batch-synthesis.md)
+- [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)
+- [Text-to-speech quickstart](get-started-text-to-speech.md)
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
riff-48khz-16bit-mono-pcm
## Next steps - [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)-- [Asynchronous synthesis for long-form audio](./long-audio-api.md) - [Get started with custom neural voice](how-to-custom-voice.md)
+- [Batch synthesis](batch-synthesis.md)
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
You can use online transcription with the [Speech SDK](speech-sdk.md) or the [sp
<sup>1</sup> For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/> <sup>2</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit).<br/>
-### Text-to-speech quotas and limits per resource
+### Text-to-speech quotas and limits per Speech resource
In the following tables, the parameters without the **Adjustable** row aren't adjustable for all price tiers.
In the following tables, the parameters without the **Adjustable** row aren't ad
| Quota | Free (F0)<sup>3</sup> | Standard (S0) | |--|--|--|
-| **Max number of transactions per certain time period per Speech service resource** | | |
+| **Max number of transactions per certain time period** | | |
| Real-time API. Prebuilt neural voices and custom neural voices. | 20 transactions per 60 seconds | 200 transactions per second (TPS) (default value) | | Adjustable | No<sup>4</sup> | Yes<sup>5</sup>, up to 1000 TPS | | **HTTP-specific quotas** | | |
In the following tables, the parameters without the **Adjustable** row aren't ad
| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 | | Max SSML message size per turn | 64 KB | 64 KB |
-#### Long Audio API
-
-| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
-|--|--|--|
-| Min text length | N/A | 400 characters for plain text; 400 [billable characters](text-to-speech.md#pricing-note) for SSML |
-| Max text length | N/A | 10000 paragraphs |
-| Start time | N/A | 10 tasks or 10000 characters accumulated |
- #### Custom Neural Voice | Quota | Free (F0)<sup>3</sup> | Standard (S0) | |--|--|--|
-| Max number of transactions per second (TPS) per Speech service resource | Not available for F0 | See [General](#general) |
-| Max number of datasets per Speech service resource | N/A | 500 |
-| Max number of simultaneous dataset uploads per Speech service resource | N/A | 5 |
+| Max number of transactions per second (TPS) | Not available for F0 | See [General](#general) |
+| Max number of datasets | N/A | 500 |
+| Max number of simultaneous dataset uploads | N/A | 5 |
| Max data file size for data import per dataset | N/A | 2 GB | | Upload of long audios or audios without script | N/A | Yes |
-| Max number of simultaneous model trainings per Speech service resource | N/A | 3 |
-| Max number of custom endpoints per Speech service resource | N/A | 50 |
-| *Concurrent request limit for Custom Neural Voice* | | |
+| Max number of simultaneous model trainings | N/A | 3 |
+| Max number of custom endpoints | N/A | 50 |
+| Concurrent request limit for Custom Neural Voice | | |
| Default value | N/A | 10 | | Adjustable | N/A | Yes<sup>5</sup> |
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Use the `mstts:silence` element to insert pauses before or after text, or betwee
| Attribute | Description | Required or optional | | - | - | -- |
-| `type` | Specifies the location of silence to be added: <ul><li>`Leading` – At the beginning of text </li><li>`Tailing` – At the end of text </li><li>`Sentenceboundary` – Between adjacent sentences </li></ul> | Required |
-| `Value` | Specifies the absolute duration of a pause in seconds or milliseconds. This value should be set less than 5,000 ms. Examples of valid values are `2s` and `500ms`.| Required |
+| `type` | Specifies where and how to add silence. The following silence types are supported:<br/><ul><li>`Leading` – Additional silence at the beginning of the text. The value that you set is added to the natural silence before the start of text.</li><li>`Leading-exact` – Silence at the beginning of the text. The value is an absolute silence length.</li><li>`Tailing` – Additional silence at the end of text. The value that you set is added to the natural silence after the last word.</li><li>`Tailing-exact` – Silence at the end of the text. The value is an absolute silence length.</li><li>`Sentenceboundary` – Additional silence between adjacent sentences. The actual silence length for this type includes the natural silence after the last word in the previous sentence, the value you set for this type, and the natural silence before the starting word in the next sentence.</li><li>`Sentenceboundary-exact` – Silence between adjacent sentences. The value is an absolute silence length.</li></ul><br/>An absolute silence type (with the `-exact` suffix) replaces any otherwise natural leading or trailing silence. Absolute silence types take precedence over the corresponding non-absolute type. For example, if you set both `Leading` and `Leading-exact` types, the `Leading-exact` type will take effect.| Required |
+| `Value` | Specifies the duration of a pause in seconds or milliseconds. This value should be set less than 5,000 ms. Examples of valid values are `2s` and `500ms`.| Required |
**Example**
Sometimes text-to-speech can't accurately pronounce a word. Examples might be th
``` > [!NOTE]
-> The `lexicon` element is not supported by the [Long Audio API](long-audio-api.md).
+> The `lexicon` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text-to-speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead.
**Attribute**
Any audio included in the SSML document must meet these requirements:
* The audio must not contain any customer-specific or other sensitive information. > [!NOTE]
-> The 'audio' element is not supported by the [Long Audio API](long-audio-api.md).
+> The 'audio' element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text-to-speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead.
**Syntax**
Only one background audio file is allowed per SSML document. You can intersperse
> [!NOTE] > The `mstts:backgroundaudio` element should be put in front of all `voice` elements, i.e., the first child of the `speak` element. >
-> The `mstts:backgroundaudio` element is not supported by the [Long Audio API](long-audio-api.md).
+> The `mstts:backgroundaudio` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text-to-speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead.
**Syntax**
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
Here's more information about neural text-to-speech features in the Speech servi
* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text-to-speech by using [prebuilt neural voices](language-support.md?tabs=stt-tts) or [custom neural voices](custom-neural-voice.md).
-* **Asynchronous synthesis of long audio**: Use the [Long Audio API](long-audio-api.md) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or speech-to-text REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
+* **Asynchronous synthesis of long audio**: Use the [batch synthesis API](batch-synthesis.md) (Preview) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or text-to-speech REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
* **Prebuilt neural voices**: Microsoft neural text-to-speech capability uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis happen simultaneously, which results in more fluid and natural-sounding outputs. Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. You can use neural voices to:
cognitive-services Modifications Deprecations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/modifications-deprecations.md
+
+ Title: Modifications to Translator Service
+description: Translator Service changes, modifications, and deprecations
++++++ Last updated : 11/15/2022+++
+# Modifications to Translator Service
+
+Learn about Translator service changes, modifications, and deprecations.
+
+> [!NOTE]
+> Looking for updates and preview announcements? Visit our [What's new](whats-new.md) page to stay up to date with release notes, feature enhancements, and our newest documentation.
+
+## November 2022
+
+### Changes to Translator `Usage` metrics
+
+> [!IMPORTANT]
+> **`Characters Translated`** and **`Characters Trained`** metrics are deprecated and have been removed from the Azure portal.
+
+|Deprecated metric| Current metric(s) | Description|
+||||
+|Characters Translated (Deprecated)</br></br></br></br>|**&bullet; Text Characters Translated**</br></br>**&bullet; Text Custom Characters Translated**| &bullet; Number of characters in incoming **text** translation request.</br></br> &bullet; Number of characters in incoming **custom** translation request. |
+|Characters Trained (Deprecated) | **&bullet; Text Trained Characters** | &bullet; Number of characters **trained** using text translation service.|
+
+* In 2021, two new metrics, **Text Characters Translated** and **Text Custom Characters Translated**, were added to provide more granular usage data. These metrics replaced **Characters Translated**, which provided combined usage data for the general and custom text translation services.
+
+* Similarly, the **Text Trained Characters** metric was added to replace the **Characters Trained** metric.
+
+* The **Characters Trained** and **Characters Translated** metrics continued to be supported in the Azure portal with a deprecated flag to allow migration to the current metrics. As of October 2022, Characters Trained and Characters Translated are no longer available in the Azure portal.
++
cognitive-services Translator Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-overview.md
Previously updated : 07/11/2022 Last updated : 11/16/2022 -
-keywords: translator, text translation, machine translation, translation service, custom translator
# What is Azure Cognitive Services Translator?
-Translator Service is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs. Translator can be used with any operating system and powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you'll learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
+Translator Service is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you'll learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
Translator documentation contains the following article types:
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/whats-new.md
Previously updated : 06/14/2022 Last updated : 11/16/2022 <!-- markdownlint-disable MD024 -->
# What's new in Azure Cognitive Services Translator?
-Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
+Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation.
Translator is a language service that enables users to translate text and documents, helps entities expand their global outreach, and supports preservation of at-risk and endangered languages.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/overview.md
Creating a custom text classification project typically involves several differe
Follow these steps to get the most out of your model:
-1. **Define your schema**: Know your data and identify the [classes](glossary.md#class) you want differentiate between, avoid ambiguity.
+1. **Define your schema**: Know your data and identify the [classes](glossary.md#class) you want to differentiate between, to avoid ambiguity.
2. **Label your data**: The quality of data labeling is a key factor in determining model performance. Documents that belong to the same class should always be labeled with the same class. If you have a document that can fall into two classes, use **Multi label classification** projects. Avoid class ambiguity, and make sure that your classes are clearly separable from each other, especially with single label classification projects.
cognitive-services Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/concepts/entity-categories.md
The entity in this category can have the following subcategories.
:::column-end::: :::row-end:::
-## Category: Quantity
+
+## Category: Age
This category contains the following entities:
This category contains the following entities:
:::column span=""::: **Entity**
- Quantity
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Numbers and numeric quantities.
-
- To get this entity category, add `Quantity` to the `piiCategories` parameter. `Quantity` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-#### Subcategories
-
-The entity in this category can have the following subcategories.
-
- :::column span="":::
- **Entity subcategory**
-
- Age
+ Age
:::column-end::: :::column span="2":::
The entity in this category can have the following subcategories.
Ages. To get this entity category, add `Age` to the `piiCategories` parameter. `Age` will be returned in the API response if detected.
-
+ :::column-end::: :::column span="2"::: **Supported document languages**
cognitive-services How To Call For Conversations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call-for-conversations.md
The API will attempt to detect all the [defined entity categories](concepts/conv
For spoken transcripts, the entities detected will be returned based on the `redactionSource` parameter value provided. Currently, the supported values for `redactionSource` are `text`, `lexical`, `itn`, and `maskedItn` (which map to the Microsoft Speech to Text API's `display`\\`displayText`, `lexical`, `itn` and `maskedItn` formats respectively). Additionally, for the spoken transcript input, this API will also provide audio timing information to empower audio redaction. To use the audio redaction feature, set the optional `includeAudioRedaction` flag to `true`. The audio redaction is performed based on the lexical input format.
+> [!NOTE]
+> Conversation PII now supports a document size of up to 40,000 characters.
+ ## Getting PII results
When you get results from PII detection, you can stream the results to an applic
|Language |Package version | ||| |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) |
- |Python | [1.0.0](https://pypi.org/project/azure-ai-language-conversations/1.0.0) |
+ |Python | [1.1.0b2](https://pypi.org/project/azure-ai-language-conversations/1.1.0b2) |
4. After you've installed the client library, use the following samples on GitHub to start calling the API.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## November 2022
+* Conversational PII now supports a document size of up to 40,000 characters.
+ ## October 2022 * The summarization feature now has the following capabilities:
communication-services Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights.md
The **Recording** tab displays data relevant to total recordings, recording form
:::image type="content" source="media\workbooks\azure-communication-services-recording-insights.png" alt-text="Screenshot displays recording count, duration, recording usage by format and type as well as number of recordings per call."::: -
+The **Call Automation** tab displays data about calls placed or answered using the Call Automation SDK, such as active call count, operations executed, and errors encountered by your resource over time. You can also examine a particular call by looking at the sequence of operations taken on that call using the SDK:
## Editing dashboards
communication-services Network Diagnostic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/network-diagnostic.md
Previously updated : 03/29/2022 Last updated : 11/16/2022 # Network Diagnostics Tool The **Network Diagnostics Tool** enables Azure Communication Services developers to ensure that their device and network conditions are optimal for connecting to the service to ensure a great call experience. The tool can be found at [aka.ms/acsdiagnostics](https://azurecommdiagnostics.net/). Users can quickly run a test by pressing the start test button. The tool performs diagnostics on the network, devices, and call quality. The results of the diagnostics are directly provided through the tool's UI. No sign-in is required to use the tool. After the test, a GUID is presented that can be provided to our support team for further help.
The **Network Diagnostics Tool** enables Azure Communication Services developers
As part of the diagnostics performed, the user is asked to enable permissions for the tool to access their devices. Next, the user is asked to record their voice, which is then played back using an echo bot to ensure that the microphone is working. Finally, the tool performs a video test. The test uses the camera to detect video and measure the quality for sent and received frames.
-If you are looking to build your own Network Diagnostic Tool or to perform deeper integration of this tool into your application, you can levearge [pre-call diagnostic APIs](../voice-video-calling/pre-call-diagnostics.md) for the calling SDK.
+If you are looking to build your own Network Diagnostic Tool or to perform deeper integration of this tool into your application, you can leverage [pre-call diagnostic APIs](../voice-video-calling/pre-call-diagnostics.md) for the calling SDK.
## Performed tests
If you are looking to build your own Network Diagnostic Tool or to perform deepe
|--|| | Browser Diagnostic | Checks for browser compatibility. Azure Communication Services supports specific browsers for [calling](../voice-video-calling/calling-sdk-features.md#javascript-calling-sdk-support-by-os-and-browser) and [chat](../chat/sdk-features.md#javascript-chat-sdk-support-by-os-and-browser). | | Media Device Diagnostic | Checks for availability of device (camera, microphone and speaker) and enabled permissions for those devices on the browser. |
- | Service Connectivity | Checks whether it can connect to Azure Communication Services |
| Audio Test | Performs an echo bot call. Here the user can talk to echo bot and hear themselves back. The test records media quality statistics for audio including jitter, bitrate, packet loss and RTT with thresholds for optimal conditions. | | Video Test | Performs a loop back video test, where video captured by the camera is sent back and forth to check for network quality conditions. The test records media quality statistics for video including jitter, bitrate, packet loss and RTT with thresholds for optimal conditions. |
The test provides a **unique identifier** for your test which you can provide ou
- [Use Pre-Call Diagnostic APIs to build your own tech check](../voice-video-calling/pre-call-diagnostics.md) - [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md) - [Enable Media Quality Statistics in your application](../voice-video-calling/media-quality-sdk.md)-- [Debug your application with Monitoring tool](./real-time-inspection.md) - [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
communication-services Quickstart Botframework Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md
Title: Quickstart - Add a bot to your chat app
+ Title: Add a bot to your chat app
description: This quickstart shows you how to build chat experience with a bot using Communication Services Chat SDK and Bot Services. --++ - Previously updated : 01/25/2022+ Last updated : 10/18/2022
-# Quickstart: Add a bot to your chat app
+# Add a bot to your chat app
> [!IMPORTANT]
-> This functionality is in private preview, and restricted to a limited number of Azure Communication Services early adopters. You can [submit this form to request participation in the preview](https://forms.office.com/r/HBm8jRuuGZ) and we will review your scenario(s) and evaluate your participation in the preview.
+> This functionality is in public preview.
>
-> Private Preview APIs and SDKs are provided without a service-level agreement, and are not appropriate for production workloads and should only be used with test users and test data. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-In this quickstart, we'll learn how to build conversational AI experiences in our chat application using 'Communication Services-Chat' messaging channel available under Azure Bot Services. We'll create a bot using BotFramework SDK and learn how to integrate this bot into our chat application that is built using Communication Services Chat SDK.
-You'll learn how to:
+In this quickstart, you will learn how to build conversational AI experiences in a chat application using the Azure Communication Services Chat messaging channel available under Azure Bot Services. This article describes how to create a bot using the BotFramework SDK and how to integrate it into any chat application built using the Communication Services Chat SDK.
-- [Create and deploy a bot](#step-1create-and-deploy-a-bot)-- [Get an Azure Communication Services Resource](#step-2get-an-azure-communication-services-resource)-- [Enable Communication Services' Chat Channel for the bot](#step-3enable-azure-communication-services-chat-channel)
+You will learn how to:
+
+- [Create and deploy an Azure bot](#step-1create-and-deploy-an-azure-bot)
+- [Get an Azure Communication Services resource](#step-2get-an-azure-communication-services-resource)
+- [Enable Communication Services Chat channel for the bot](#step-3enable-azure-communication-services-chat-channel)
- [Create a chat app and add bot as a participant](#step-4create-a-chat-app-and-add-bot-as-a-participant)-- [Explore additional features available for bot](#more-things-you-can-do-with-bot)
+- [Explore more features available for bot](#more-things-you-can-do-with-a-bot)
## Prerequisites - Create an Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) - [Visual Studio (2019 and above)](https://visualstudio.microsoft.com/vs/)-- [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet-core/3.1) (Make sure to install version that corresponds with your visual studio instance, 32 vs 64 bit)-
+- Latest version of .NET Core. For this tutorial, we have used [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet-core/3.1) (Make sure to install the version that corresponds with your Visual Studio instance, 32-bit vs 64-bit)
-## Step 1 - Create and deploy a bot
-In order to use Azure Communication Services chat as a channel in Azure Bot Service, the first step would be to deploy a bot. Please follow these steps:
+## Step 1 - Create and deploy an Azure bot
-### Provision a bot service resource in Azure
+To use Azure Communication Services chat as a channel in Azure Bot Service, the first step is to deploy a bot. You can do so by following the steps below:
- 1. Click on create a resource option in Azure portal.
-
- :::image type="content" source="./media/create-a-new-resource.png" alt-text="Create a new resource":::
-
- 2. Search Azure Bot in the list of available resource types.
-
- :::image type="content" source="./media/search-azure-bot.png" alt-text="Search Azure Bot":::
+### Create an Azure bot service resource in Azure
+ Refer to the Azure Bot Service documentation on how to [create a bot](/azure/bot-service/abs-quickstart?tabs=userassigned).
- 3. Choose Azure Bot to create it.
+ For this example, we have selected a multitenant bot, but if you wish to use a single tenant or managed identity bot, refer to [configuring single tenant and managed identity bots](#support-for-single-tenant-and-managed-identity-bots).
- :::image type="content" source="./media/create-azure-bot.png" alt-text="Creat Azure Bot":::
-
- 4. Finally create an Azure Bot resource. You might use an existing Microsoft app ID or use a new one created automatically.
-
- :::image type="content" source="./media/smaller-provision-azure-bot.png" alt-text="Provision Azure Bot" lightbox="./media/provision-azure-bot.png":::
### Get Bot's MicrosoftAppId and MicrosoftAppPassword
-After creating the Azure Bot resource, next step would be to set a password for the App ID we set for the Bot credential if you chose to create one automatically in the first step.
-
- 1. Go to Azure Active Directory
-
- :::image type="content" source="./media/azure-ad.png" alt-text="Azure Active Directory":::
+ Fetch your Azure bot's [Microsoft App ID and secret](/azure/bot-service/abs-quickstart?tabs=userassigned#to-get-your-app-or-tenant-id), because you'll need those values for configuration later.
-2. Find your app in the App Registration blade
+### Create a Web App where the bot logic resides
- :::image type="content" source="./media/smaller-app-registration.png" alt-text="App Registration" lightbox="./media/app-registration.png":::
+ You can check out some samples at [Bot Builder Samples](https://github.com/Microsoft/BotBuilder-Samples) and tweak them, or use the [Bot Builder SDK](/composer/introduction) to create one. One of the simplest samples is [Echo Bot](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/02.echo-bot). Generally, the Azure Bot Service expects the Bot Application Web App Controller to expose an endpoint `/api/messages`, which handles all the messages reaching the bot. To create the bot application, you can either use the Azure CLI to [create an App Service](/azure/bot-service/provision-app-service?tabs=singletenant%2Cexistingplan) or create one directly from the portal using the steps below.
-3. Create a new password for your app from the `Certificates and Secrets` blade and save the password you create as you won't be able to copy it again.
-
- :::image type="content" source="./media/smaller-save-password.png" alt-text="Save password" lightbox="./media/save-password.png":::
-
-### Create a Web App where actual bot logic resides
-
-Create a Web App where actual bot logic resides. You could check out some samples at [Bot Builder Samples](https://github.com/Microsoft/BotBuilder-Samples) and tweak them or use Bot Builder SDK to create one: [Bot Builder documentation](/composer/introduction). One of the simplest ones to play around with is Echo Bot located here with steps on how to use it and it's the one being used in this example [Echo Bot](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/02.echo-bot). Generally, the Bot Service expects the Bot Application Web App Controller to expose an endpoint `/api/messages`, which handles all the messages reaching the bot. To create the Bot application, follow these steps.
-
- 1. As in previously shown create a resource and choose `Web App` in search.
+ 1. Select `Create a resource` and in the search box, search for web app and select `Web App`.
- :::image type="content" source="./media/web-app.png" alt-text="Web app":::
+ :::image type="content" source="./media/web-app.png" alt-text="Screenshot of creating a Web app resource in Azure portal.":::
2. Configure the options you want to set including the region you want to deploy it to.
- :::image type="content" source="./media/web-app-create-options.png" alt-text="Web App Create Options":::
-
+ :::image type="content" source="./media/web-app-create-options.png" alt-text="Screenshot of specifying Web App create options to set.":::
-
- 3. Review your options and create the Web App and move to the resource once its been provisioned and copy the hostname URL exposed by the Web App.
+ 3. Review your options and create the Web App. Once it has been created, copy the hostname URL exposed by the Web App.
- :::image type="content" source="./media/web-app-endpoint.png" alt-text="Web App endpoint":::
+ :::image type="content" source="./media/web-app-endpoint.png" alt-text="Diagram that shows how to copy the newly created Web App endpoint.":::
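If you're curious what that `/api/messages` endpoint looks like in code, here's a minimal sketch of a controller in the style used by the Bot Framework samples. The class name is illustrative, and the adapter and bot implementation are assumed to be registered at startup and supplied through dependency injection.

```csharp
// Minimal sketch of a controller exposing the /api/messages endpoint,
// in the style used by the Bot Framework samples. Names are illustrative.
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Integration.AspNet.Core;

[Route("api/messages")]
[ApiController]
public class BotController : ControllerBase
{
    private readonly IBotFrameworkHttpAdapter _adapter;
    private readonly IBot _bot;

    // The adapter and the bot implementation are registered at startup and injected here.
    public BotController(IBotFrameworkHttpAdapter adapter, IBot bot)
    {
        _adapter = adapter;
        _bot = bot;
    }

    [HttpPost]
    public async Task PostAsync()
    {
        // Delegate processing of the incoming activity to the adapter, which invokes the bot.
        await _adapter.ProcessAsync(Request, Response, _bot);
    }
}
```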
### Configure the Azure Bot
-Configure the Azure Bot we created with its Web App endpoint where the bot logic is located. To do this, copy the hostname URL of the Web App and append it with `/api/messages`
+Configure the Azure Bot you created with the Web App endpoint where the bot logic is hosted. To do this configuration, copy the hostname URL of the Web App from the previous step and append `/api/messages` to it.
- :::image type="content" source="./media/smaller-bot-configure-with-endpoint.png" alt-text="Bot Configure with Endpoint" lightbox="./media/bot-configure-with-endpoint.png":::
+ :::image type="content" source="./media/smaller-bot-configure-with-endpoint.png" alt-text="Diagram that shows how to set bot messaging endpoint with the copied Web App endpoint." lightbox="./media/bot-configure-with-endpoint.png":::
### Deploy the Azure Bot
-The final step would be to deploy the bot logic to the Web App we created. As we mentioned for this tutorial, we'll be using the Echo Bot. This bot only demonstrates a limited set of capabilities, such as echoing the user input. Here's how we deploy it to Azure Web App.
+The final step is to deploy the bot logic to the Web App you created. For this tutorial, we use the Echo Bot, whose functionality is limited to echoing the user input. Here's how to deploy it to the Azure Web App.
1. To use the samples, clone this GitHub repository using Git. ```
- git clone https://github.com/Microsoft/BotBuilder-Samples.gitcd BotBuilder-Samples
+ git clone https://github.com/Microsoft/BotBuilder-Samples.git
+ cd BotBuilder-Samples
   ``` 2. Open the [Echo bot](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/02.echo-bot) project in Visual Studio.
- 3. Go to the appsettings.json file inside the project and copy the application ID and password we created in step 2 in respective places.
+ 3. Go to the appsettings.json file inside the project and copy the [Microsoft application ID and secret](#get-bots-microsoftappid-and-microsoftapppassword) into their respective placeholders.
```js { "MicrosoftAppId": "<App-registration-id>", "MicrosoftAppPassword": "<App-password>" } ```
+ To deploy the bot, you can either use the command line to [deploy an Azure bot](/azure/bot-service/provision-and-publish-a-bot?tabs=userassigned%2Ccsharp) or use Visual Studio for C# bots as described below.
- 4. Click on the project to publish the Web App code to Azure. Choose the publish option in Visual Studio.
+ 1. Select the project to publish the Web App code to Azure. Choose the publish option in Visual Studio.
- :::image type="content" source="./media/publish-app.png" alt-text="Publish app":::
+ :::image type="content" source="./media/publish-app.png" alt-text="Screenshot of publishing your Web App from Visual Studio.":::
- 5. Click on New to create a new publishing profile, choose Azure as the target, and Azure App Service as the specific target.
+ 2. Select New to create a new publishing profile, choose Azure as the target, and Azure App Service as the specific target.
- :::image type="content" source="./media/select-azure-as-target.png" alt-text="Select Azure as Target":::
+ :::image type="content" source="./media/select-azure-as-target.png" alt-text="Diagram that shows how to select Azure as target in a new publishing profile.":::
- :::image type="content" source="./media/select-app-service.png" alt-text="Select App Service":::
+ :::image type="content" source="./media/select-app-service.png" alt-text="Diagram that shows how to select specific target as Azure App Service.":::
- 6. Lastly, the above option opens the deployment config. Choose the Web App we had provisioned from the list of options it comes up with after signing into your Azure account. Once ready click on `Finish` to start the deployment.
+ 3. Lastly, the above option opens the deployment config. Choose the Web App you created from the list of options that appears after you sign in to your Azure account. Once ready, select `Finish` to complete the profile, and then select `Publish` to start the deployment.
- :::image type="content" source="./media/smaller-deployment-config.png" alt-text="Deployment config" lightbox="./media/deployment-config.png":::
+ :::image type="content" source="./media/smaller-deployment-config.png" alt-text="Screenshot of setting deployment config with the created Web App name." lightbox="./media/deployment-config.png":::
## Step 2 - Get an Azure Communication Services Resource
-Now that you got the bot part sorted out, we'll need to get an Azure Communication Services resource, which we would use for configuring the Azure Communication Services channel.
-1. Create an Azure Communication Services resource. For details, see [Create an Azure Communication Services resource](../../quickstarts/create-communication-resource.md).
-2. Create a Azure Communication Services User and issue a user access token [User Access Token](../../quickstarts/access-tokens.md). Be sure to set the scope to **chat**, and **note the token string as well as the userId string**.
+Now that the bot is created and deployed, you'll need an Azure Communication Services resource, which you can use to configure the Azure Communication Services channel.
+1. Create an Azure Communication Services resource. For details, see [Create an Azure Communication Services resource](../../quickstarts/create-communication-resource.md).
+
+2. Create an Azure Communication Services User and issue a [User Access Token](../../quickstarts/access-tokens.md). Be sure to set the scope to **chat**, and **note the token string as well as the userId string**. A sketch of doing this with the Identity SDK follows this list.
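As a reference, here's a minimal sketch of creating the user and issuing the chat-scoped token with the Identity SDK; the connection string placeholder is a value from your own Communication Services resource.

```csharp
// Minimal sketch: create an Azure Communication Services user and issue a chat-scoped access token.
// The connection string placeholder comes from your own Communication Services resource.
using System;
using Azure.Communication;
using Azure.Communication.Identity;

var identityClient = new CommunicationIdentityClient("<your-acs-connection-string>");

CommunicationUserIdentifier user = await identityClient.CreateUserAsync();
var tokenResponse = await identityClient.GetTokenAsync(user, scopes: new[] { CommunicationTokenScope.Chat });

// Note both values: the userId identifies the user, and the token authenticates the chat client.
Console.WriteLine($"userId: {user.Id}");
Console.WriteLine($"token: {tokenResponse.Value.Token}");
```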
-## Step 3 - Enable Azure Communication Services Chat Channel
-With the Azure Communication Services resource, we can configure the Azure Communication Services channel in Azure Bot to bind an Azure Communication Services User ID with a bot. Note that currently, only the allowlisted Azure account will be able to see Azure Communication Services - Chat channel.
-1. Go to your Bot Services resource on Azure portal. Navigate to `Channels` blade and click on `Azure Communications Services - Chat` channel from the list provided.
+## Step 3 - Enable Azure Communication Services Chat channel
+With the Azure Communication Services resource, you can set up the Azure Communication Services channel in Azure Bot to assign an Azure Communication Services User ID to a bot.
+
+1. Go to your Bot Services resource in the Azure portal. Navigate to `Channels` configuration on the left pane and select the `Azure Communication Services - Chat` channel from the list provided.
- :::image type="content" source="./media/smaller-demoapp-launch-acs-chat.png" alt-text="DemoApp Launch Acs Chat" lightbox="./media/demoapp-launch-acs-chat.png":::
+ :::image type="content" source="./media/smaller-demoapp-launch-acs-chat.png" alt-text="Screenshot of launching Azure Communication Services Chat channel." lightbox="./media/demoapp-launch-acs-chat.png":::
-2. Choose from the dropdown list the Azure Communication Services resource that you want to connect with.
+2. Select the **Connect** button to see a list of Azure Communication Services resources available under your subscriptions.
+
+ :::image type="content" source="./media/smaller-bot-connect-acs-chat-channel.png" alt-text="Diagram that shows how to connect an Azure Communication Service Resource to this bot." lightbox="./media/bot-connect-acs-chat-channel.png":::
- :::image type="content" source="./media/smaller-demoapp-connect-acsresource.png" alt-text="DemoApp Connect Acs Resource" lightbox="./media/demoapp-connect-acsresource.png":::
+3. Once you have selected the required Azure Communication Services resource from the resources dropdown list, select the **Apply** button.
+ :::image type="content" source="./media/smaller-bot-choose-resource.png" alt-text="Diagram that shows how to save the selected Azure Communication Service resource to create a new Azure Communication Services user ID." lightbox="./media/bot-choose-resource.png":::
-3. Once the provided resource details are verified, you'll see the **bot's Azure Communication Services ID** assigned. With this ID, you can add the bot to the conversation at whenever appropriate using Chat's AddParticipant API. Once the bot is added as participant to a chat, it will start receiving chat related activities and can respond back in the chat thread.
+4. Once the provided resource details are verified, you'll see the **bot's Azure Communication Services ID** assigned. With this ID, you can add the bot to a conversation whenever appropriate by using Chat's AddParticipant API. Once the bot is added as a participant to a chat, it starts receiving chat-related activities and can respond in the chat thread.
- :::image type="content" source="./media/smaller-demoapp-bot-detail.png" alt-text="DemoApp Bot Detail" lightbox="./media/demoapp-bot-detail.png":::
+ :::image type="content" source="./media/smaller-acs-chat-channel-saved.png" alt-text="Screenshot of new Azure Communication Services user ID assigned to the bot." lightbox="./media/acs-chat-channel-saved.png":::
## Step 4 - Create a chat app and add bot as a participant
-Now that you have the bot's Azure Communication Services ID, you'll be able to create a chat thread with bot as a participant.
+Now that you have the bot's Azure Communication Services ID, you can create a chat thread with the bot as a participant.
+ ### Create a new C# application ```console
dotnet add package Azure.Communication.Chat
### Create a chat client
-To create a chat client, you'll use your Communication Services endpoint and the access token that was generated as part of Step 2. You need to use the `CommunicationIdentityClient` class from the Identity SDK to create a user and issue a token to pass to your chat client.
+To create a chat client, you will use your Azure Communication Services endpoint and the access token that was generated as part of Step 2. You need to use the `CommunicationIdentityClient` class from the Identity SDK to create a user and issue a token to pass to your chat client.
Copy the following code snippets and paste into source file: **Program.cs**
await foreach (ChatMessage message in allMessages)
} ``` You should see the bot's echo reply to "Hello World" in the list of messages.
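For reference, here's a condensed sketch of the flow the snippets above implement: create the chat client from your endpoint and user access token, create a thread with the bot as a participant, send a message, and list the replies. The endpoint, token, and bot ID placeholders are the values you noted in the earlier steps.

```csharp
// Condensed sketch of the Program.cs flow. Placeholders are values from your own resources:
// the resource endpoint, the user access token from Step 2, and the bot's ACS ID from Step 3.
using System;
using Azure.Communication;
using Azure.Communication.Chat;

var endpoint = new Uri("https://<your-acs-resource>.communication.azure.com");
var credential = new CommunicationTokenCredential("<user-access-token>");
var chatClient = new ChatClient(endpoint, credential);

// Add the bot as a participant so it receives the thread's messages.
var botParticipant = new ChatParticipant(new CommunicationUserIdentifier("<bot-acs-user-id>"))
{
    DisplayName = "EchoBot"
};

CreateChatThreadResult createResult = await chatClient.CreateChatThreadAsync(
    topic: "Chat with Echo Bot",
    participants: new[] { botParticipant });

ChatThreadClient chatThreadClient = chatClient.GetChatThreadClient(createResult.ChatThread.Id);
await chatThreadClient.SendMessageAsync("Hello World");

// List the messages in the thread; the bot's echo reply should show up here after a moment.
await foreach (ChatMessage message in chatThreadClient.GetMessagesAsync())
{
    Console.WriteLine($"{message.SenderDisplayName}: {message.Content?.Message}");
}
```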
-When creating the actual chat applications, you can also receive real-time chat messages by subscribing to listen for new incoming messages using our JavaScript or mobile SDKs. An example using JavaScript SDK would be:
+When building chat applications, you can also receive real-time notifications by subscribing to new incoming messages using our JavaScript or mobile SDKs. Here's an example using the JavaScript SDK:
```js // open notifications channel await chatClient.startRealtimeNotifications();
chatClient.on("chatMessageReceived", (e) => {
}); ```
+### Clean up the chat thread
+
+Delete the thread when finished.
+
+```csharp
+chatClient.DeleteChatThread(threadId);
+```
### Deploy the C# chat application
-If you would like to deploy the chat application, you can follow these steps:
+Follow these steps to deploy the chat application:
1. Open the chat project in Visual Studio.
-2. Right click on the ChatQuickstart project and click Publish
+2. Select the ChatQuickstart project and, from the right-click menu, select **Publish**.
- :::image type="content" source="./media/deploy-chat-application.png" alt-text="Deploy Chat Application":::
+ :::image type="content" source="./media/deploy-chat-application.png" alt-text="Screenshot of deploying chat application to Azure from Visual Studio.":::
-## More things you can do with bot
-Besides simple text message, bot is also able to receive and send many other activities including
+## More things you can do with a bot
+In addition to plain text messages, a bot can also receive many other activities from the user through the Azure Communication Services Chat channel, including:
- Conversation update
- Message update
- Message delete
- Typing indicator
- Event activity
+- Various attachments including Adaptive cards
+- Bot channel data
+
+Below are some samples to illustrate these features:
### Send a welcome message when a new user is added to the thread
-With the current Echo Bot logic, it accepts input from the user and echoes it back. If you would like to add additional logic such as responding to a participant added Azure Communication Services event, copy the following code snippets and paste into the source file: [EchoBot.cs](https://github.com/microsoft/BotBuilder-Samples/blob/main/samples/csharp_dotnetcore/02.echo-bot/Bots/EchoBot.cs)
+ The current Echo Bot logic accepts input from the user and echoes it back. If you would like to add extra logic, such as responding to a participant-added Azure Communication Services event, copy the following code snippets and paste them into the source file: [EchoBot.cs](https://github.com/microsoft/BotBuilder-Samples/blob/main/samples/csharp_dotnetcore/02.echo-bot/Bots/EchoBot.cs)
```csharp using System.Threading;
namespace Microsoft.BotBuilderSamples.Bots
``` ### Send an adaptive card
-To help you increase engagement and efficiency and communicate with users in a variety of ways, you can send adaptive cards to the chat thread. You can send adaptive cards from a bot by adding them as bot activity attachments.
+Sending adaptive cards to the chat thread can help you increase engagement and efficiency and communicate with users in a variety of ways. You can send adaptive cards from a bot by adding them as bot activity attachments.
```csharp
await turnContext.SendActivityAsync(reply, cancellationToken);
``` You can find sample payloads for adaptive cards at [Samples and Templates](https://adaptivecards.io/samples)
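The `reply` in the snippet above is typically built along these lines; this is a minimal sketch that assumes you've loaded one of the sample payloads into an `adaptiveCardJson` string and that it runs inside the bot's activity handler (where `turnContext` and `cancellationToken` are in scope).

```csharp
// Minimal sketch, inside the bot's activity handler: attach an adaptive card to a reply.
// Assumes adaptiveCardJson holds one of the sample payloads linked above.
var adaptiveCardAttachment = new Microsoft.Bot.Schema.Attachment
{
    ContentType = "application/vnd.microsoft.card.adaptive",
    Content = Newtonsoft.Json.JsonConvert.DeserializeObject(adaptiveCardJson),
};

var reply = MessageFactory.Attachment(adaptiveCardAttachment);
await turnContext.SendActivityAsync(reply, cancellationToken);
```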
-And on the Azure Communication Services User side, the Azure Communication Services message's metadata field will indicate this is a message with attachment.The key is microsoft.azure.communication.chat.bot.contenttype, which is set to the value azurebotservice.adaptivecard. This is an example of the chat message that will be received:
+On the Azure Communication Services User side, the Azure Communication Services Chat channel will add a field to the message's metadata that will indicate that this message has an attachment. The key in the metadata is `microsoft.azure.communication.chat.bot.contenttype`, which is set to the value `azurebotservice.adaptivecard`. Here is an example of the chat message that will be received:
```json {
And on the Azure Communication Services User side, the Azure Communication Servi
} ```
+* ### Send a message from user to bot
+
+You can send a simple text message from a user to the bot in the same way you send a text message to another user.
+However, when sending a message carrying an attachment from a user to the bot, add the flag `"microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.adaptivecard"` to the ACS Chat metadata. To send an event activity from a user to the bot, add `"microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.event"` to the ACS Chat metadata. Below are sample formats for user-to-bot ACS Chat messages, followed by a C# sketch of setting this metadata from the Chat SDK.
+
+ * #### Simple text message
+
+```json
+{
+   "content":"Simple text message",
+   "senderDisplayName":"Acs-Dev-Bot",
+   "metadata":{
+      "text":"random text",
+      "key1":"value1",
+      "key2":"{\r\n  \"subkey1\": \"subValue1\"\r\n}"
+   },
+   "messageType": "Text"
+}
+```
+
+ * #### Message with an attachment
+
+```json
+{
+ "content": "{
+ \"text\":\"sample text\",
+ \"attachments\": [{
+ \"contentType\":\"application/vnd.microsoft.card.adaptive\",
+ \"content\": { \"*adaptive card payload*\" }
+ }]
+ }",
+ "senderDisplayName": "Acs-Dev-Bot",
+ "metadata": {
+ "microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.adaptivecard",
+ "text": "random text",
+ "key1": "value1",
+ "key2": "{\r\n \"subkey1\": \"subValue1\"\r\n}"
+ },
+ "messageType": "Text"
+}
+```
+
+ * #### Message with an event activity
+
+The event payload comprises all JSON fields in the message content except the name field, which should contain the name of the event. In the example below, the event `endOfConversation` with the payload `{"field1":"value1", "field2": { "nestedField":"nestedValue" }}` is sent to the bot.
+```json
+{
+ "content":"{
+ \"name\":\"endOfConversation\",
+ \"field1\":\"value1\",
+ \"field2\": {
+ \"nestedField\":\"nestedValue\"
+ }
+ }",
+ "senderDisplayName":"Acs-Dev-Bot",
+ "metadata":{
+ "microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.event",
+ "text":"random text",
+ "key1":"value1",
+ "key2":"{\r\n \"subkey1\": \"subValue1\"\r\n}"
+ },
+ "messageType": "Text"
+}
+```
+
+> The metadata field `"microsoft.azure.communication.chat.bot.contenttype"` is only needed in the user-to-bot direction. It isn't needed in the bot-to-user direction.
+
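If you're sending these user-to-bot messages from the .NET Chat SDK, here's a minimal sketch of setting that metadata flag. It assumes a `chatThreadClient` created as shown earlier and an SDK version that supports message metadata.

```csharp
// Minimal sketch: send an adaptive-card-carrying message from the user to the bot.
// Assumes chatThreadClient was created as shown earlier and the SDK supports message metadata.
string contentJson =
    "{\"text\":\"sample text\",\"attachments\":[{" +
    "\"contentType\":\"application/vnd.microsoft.card.adaptive\"," +
    "\"content\":{\"type\":\"AdaptiveCard\",\"version\":\"1.0\",\"body\":[]}}]}";

var options = new SendChatMessageOptions
{
    Content = contentJson,
    MessageType = ChatMessageType.Text
};

// Tell the channel to hand the content to the bot as an adaptive card attachment.
options.Metadata.Add("microsoft.azure.communication.chat.bot.contenttype", "azurebotservice.adaptivecard");

var sendResult = await chatThreadClient.SendMessageAsync(options);
Console.WriteLine($"Sent message id: {sendResult.Value.Id}");
```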
+## Supported bot activity fields
+
+### Bot to user flow
+
+#### Activities
+
+- Message activity
+- Typing activity
+
+#### Message activity fields
+- `Text`
+- `Attachments`
+- `AttachmentLayout`
+- `SuggestedActions`
+- `From.Name` (Converted to ACS SenderDisplayName)
+- `ChannelData` (Converted to ACS Chat metadata. If any `ChannelData` mapping values are objects, then they'll be serialized in JSON format and sent as a string; see the sketch after this list)
+
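For example, a bot reply can carry `ChannelData` that the channel converts into ACS Chat metadata. Here's a minimal sketch inside the bot's activity handler; the key names are only illustrative.

```csharp
// Minimal sketch, inside the bot's activity handler: attach ChannelData to a reply.
// The channel converts it into ACS Chat metadata; the key names below are illustrative.
var reply = MessageFactory.Text("Here is your order status.");
reply.ChannelData = new Dictionary<string, object>
{
    { "orderId", "12345" },                                   // sent as a plain string
    { "details", new { status = "shipped", eta = "2 days" } } // objects are serialized to a JSON string
};

await turnContext.SendActivityAsync(reply, cancellationToken);
```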
+### User to bot flow
+
+#### Activities and fields
+
+- Message activity
+ - `Id` (ACS Chat message ID)
+ - `TimeStamp`
+ - `Text`
+ - `Attachments`
+- Conversation update activity
+ - `MembersAdded`
+ - `MembersRemoved`
+ - `TopicName`
+- Message update activity
+ - `Id` (Updated ACS Chat message ID)
+ - `Text`
+ - `Attachments`
+- Message delete activity
+ - `Id` (Deleted ACS Chat message ID)
+- Event activity
+ - `Name`
+ - `Value`
+- Typing activity
+
+#### Other common fields
+
+- `Recipient.Id` and `Recipient.Name` (ACS Chat user ID and display name)
+- `From.Id` and `From.Name` (ACS Chat user ID and display name)
+- `Conversation.Id` (ACS Chat thread ID)
+- `ChannelId` (AcsChat if empty)
+- `ChannelData` (ACS Chat message metadata)
+
+## Support for single tenant and managed identity bots
+
+ACS Chat channel supports single tenant and managed identity bots as well. Refer to [bot identity information](/azure/bot-service/bot-builder-authentication?tabs=userassigned%2Caadv2%2Ccsharp#bot-identity-information) to set up your bot web app.
+
+Additionally, for managed identity bots, you might have to [update the bot service identity](/azure/bot-service/bot-builder-authentication?tabs=userassigned%2Caadv2%2Ccsharp#to-update-your-app-service).
+
+## Bot handoff patterns
+
+Sometimes the bot might not be able to understand or answer a question, or a customer might ask to be connected to a human agent. In those cases, you'll need to hand off the chat thread from the bot to a human agent. You can design your application to [transition the conversation from bot to human](/azure/bot-service/bot-service-design-pattern-handoff-human).
+
+## Handling bot to bot communication
+
+ There may be certain use cases where two bots need to be added to the same chat thread. If this occurs, the bots may start replying to each other's messages. If this scenario isn't handled properly, the bots' automated interaction between themselves may result in an infinite loop of messages. Azure Communication Services Chat handles this scenario by throttling the requests, which results in the bot not being able to send and receive messages. You can learn more about the [throttle limits](/azure/communication-services/concepts/service-limits#chat).
+
+## Troubleshooting
+
+### Chat channel cannot be added
+
+- Verify that the bot messaging endpoint is set correctly in the Azure Bot Service portal under **Configuration**.
+
+### Bot gets a forbidden exception while replying to a message
+
+- Verify that the bot's Microsoft App ID and secret are saved correctly in the bot configuration file deployed to the web app.
+
+### Bot is not able to be added as a participant
+
+- Verify that the bot's Azure Communication Services ID is used correctly when sending a request to add the bot to a chat thread.
+ ## Next steps Try the [Sample App](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App), which showcases a 1:1 chat between the end user and chat bot, and uses BotFramework's WebChat UI component.
confidential-computing Virtual Machine Solutions Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-amd.md
Title: Azure Confidential virtual machine options on AMD processors (preview)
+ Title: Azure Confidential virtual machine options on AMD processors
description: Azure Confidential Computing offers multiple options for confidential virtual machines that run on AMD processors backed by SEV-SNP technology.
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
Although analytical store has built-in protection against physical failures, bac
Synapse Link, and analytical store by consequence, has different compatibility levels with Azure Cosmos DB backup modes: * Periodic backup mode is fully compatible with Synapse Link and these 2 features can be used in the same database account.
-* Continuous backup mode isn't fully supported yet:
- * Database accounts with Synapse Link enabled currently can't use continuous backup mode.
- * Database accounts with continuous backup mode enabled can enable Synapse Link through a support case. This capability is in preview now.
- * Database accounts that have neither continuous backup nor Synapse Link enabled can use these two features together through a support case. This capability is in preview now.
+* Currently, continuous backup mode and Synapse Link aren't supported in the same database account. You have to choose one of these two features, and this decision can't be changed.
### Backup Policies
cosmos-db Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/materialized-views.md
Title: Materialized Views for Azure Cosmos DB for Apache Cassandra. (Preview)
-description: This documentation is provided as a resource for participants in the preview of Azure Cosmos DB for Apache Cassandra Materialized View.
+ Title: Materialized views (preview)
+
+description: This documentation is provided as a resource for participants in the preview of Azure Cosmos DB Cassandra API Materialized View.
+ - Previously updated : 01/06/2022- Last updated : 11/17/2022+
-# Enable materialized views for Azure Cosmos DB for Apache Cassandra operations (Preview)
+# Materialized views in Azure Cosmos DB for Apache Cassandra (preview)
+ [!INCLUDE[Cassandra](../includes/appliesto-cassandra.md)] > [!IMPORTANT]
-> Materialized Views for Azure Cosmos DB for Apache Cassandra is currently in gated preview. Please send an email to mv-preview@microsoft.com to try this feature.
-> Materialized View preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Feature overview
-
-Materialized Views when defined will help provide a means to efficiently query a base table (container on Azure Cosmos DB) with non-primary key filters. When users write to the base table, the Materialized view is built automatically in the background. This view can have a different primary key for lookups. The view will also contain only the projected columns from the base table. It will be a read-only table.
+> Materialized views in Azure Cosmos DB for Cassandra is currently in preview. You can enable this feature using the Azure portal. This preview of materialized views is provided without a service-level agreement. At this time, materialized views are not recommended for production workloads. Certain features of this preview may not be supported or may have constrained capabilities. For more information, see [supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-You can query a column store without specifying a partition key by using Secondary Indexes. However, the query won't be effective for columns with high cardinality (scanning through all data for a small result set) or columns with low cardinality. Such queries end up being expensive as they end up being a cross partition query.
+Materialized views, when defined, provide a way to efficiently query a base table (or container in Azure Cosmos DB) with filters that aren't primary keys. When users write to the base table, the materialized view is built automatically in the background. This view can have a different primary key for efficient lookups. The view also contains only columns explicitly projected from the base table, and it's a read-only table.
-With Materialized view, you can
-- Use as Global Secondary Indexes and save cross partition scans that reduce expensive queries -- Provide SQL based conditional predicate to populate only certain columns and certain data that meet the pre-condition -- Real time MVs that simplify real time event based scenarios where customers today use Change feed trigger for precondition checks to populate new collections"
+You can query a column store without specifying a partition key by using secondary indexes. However, the query won't be effective for columns with high or low cardinality, and it could scan through all the data for a small result set. Such queries are expensive because they inadvertently execute as cross-partition queries.
-## Main benefits
+With a materialized view, you can:
-- With Materialized View (Server side denormalization), you can avoid multiple independent tables and client side denormalization. -- Materialized view feature takes on the responsibility of updating views in order to keep them consistent with the base table. With this feature, you can avoid dual writes to the base table and the view.-- Materialized Views helps optimize read performance-- Ability to specify throughput for the materialized view independently-- Based on the requirements to hydrate the view, you can configure the MV builder layer appropriately.-- Speeding up write operations as it only needs to be written to the base table.-- Additionally, This implementation on Azure Cosmos DB is based on a pull model, which doesn't affect the writer performance.
+- Use a view as a lookup or mapping table that persists data which would otherwise require expensive cross-partition queries.
+- Provide a SQL-based conditional predicate to populate only certain columns and data that meet the pre-condition.
+- Create real-time views that simplify event-based scenarios that are commonly stored as separate collections using change feed triggers.
+## Benefits of materialized views
+Materialized views have many benefits that include, but aren't limited to:
-## How to get started?
+- You can implement server-side denormalization using materialized views. With server-side denormalization, you can avoid multiple independent tables and computationally complex denormalization in client applications.
+- Materialized views are updated automatically to keep them consistent with the base table. This automatic update relieves your client applications of the responsibility they would otherwise have to implement custom logic that performs dual writes to the base table and the view.
+- Materialized views optimize read performance by reading from a single view.
+- You can specify throughput for the materialized view independently.
+- You can configure the materialized view builder layer to match your requirements for hydrating the view.
+- Materialized views improve write performance as write operations only need to be written to the base table.
+- Additionally, the Azure Cosmos DB implementation of materialized views is based on a pull model. This implementation doesn't affect write performance.
-New API for Cassandra accounts with Materialized Views enabled can be provisioned on your subscription by using REST API calls from az CLI.
+## Get started with materialized views
-### Log in to the Azure command line interface
+Create a new API for Cassandra account and enable the materialized views feature by using the Azure portal, a native Azure CLI command, or a REST API operation.
-Install Azure CLI as mentioned at [How to install the Azure CLI | Microsoft Docs](/cli/azure/install-azure-cli) and log on using the below:
- ```azurecli-interactive
- az login
- ```
+### [Azure portal](#tab/azure-portal)
-### Create an account
+1. Sign in to the [Azure portal](https://portal.azure.com/).
-To create account with support for customer managed keys and materialized views skip to **this** section
+1. Navigate to your API for Cassandra account.
-To create an account, use the following command after creating body.txt with the below content, replacing {{subscriptionId}} with your subscription ID, {{resourceGroup}} with a resource group name that you should have created in advance, and {{accountName}} with a name for your API for Cassandra account.
+1. In the resource menu, select **Settings**.
- ```azurecli-interactive
- az rest --method PUT --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-11-15-preview --body @body.txt
- body.txt content:
- {
- "location": "East US",
- "properties":
- {
- "databaseAccountOfferType": "Standard",
- "locations": [ { "locationName": "East US" } ],
- "capabilities": [ { "name": "EnableCassandra" }, { "name": "CassandraEnableMaterializedViews" }],
- "enableMaterializedViews": true
- }
- }
- ```
-
- Wait for a few minutes and check the completion using the below, the provisioningState in the output should have become Succeeded:
- ```
- az rest --method GET --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-11-15-preview
- ```
-### Create an account with support for customer managed keys and materialized views
+1. In the **Settings** section, select **Materialized View for Cassandra API (Preview)**.
-This step is optional ΓÇô you can skip this step if you don't want to use Customer Managed Keys for your Azure Cosmos DB account.
+1. In the new dialog, select **Enable** to enable this feature for this account.
-To use Customer Managed Keys feature and Materialized views together on Azure Cosmos DB account, you must first configure managed identities with Azure Active Directory for your account and then enable support for materialized views.
+ :::image type="content" source="media/materialized-views/enable-in-portal.png" lightbox="media/materialized-views/enable-in-portal.png" alt-text="Screenshot of the Materialized Views feature being enabled in the Azure portal.":::
-You can use the documentation [here](../how-to-setup-cmk.md) to configure your Azure Cosmos DB Cassandra account with customer managed keys and setup managed identity access to the key Vault. Make sure you follow all the steps in [Using a managed identity in Azure key vault access policy](../how-to-setup-managed-identity.md). The next step to enable materializedViews on the account.
+### [Azure CLI](#tab/azure-cli)
-Once your account is set up with CMK and managed identity, you can enable materialized views on the account by enabling ΓÇ£enableMaterializedViewsΓÇ¥ property in the request body.
+1. Sign in to the Azure CLI.
- ```azurecli-interactive
- az rest --method PATCH --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-07-01-preview --body @body.txt
+ ```azurecli
+ az login
+ ```
+ > [!NOTE]
+ > If you do not have the Azure CLI installed, see [how to install the Azure CLI](/cli/azure/install-azure-cli).
-body.txt content:
-{
- "properties":
- {
- "enableMaterializedViews": true
- }
-}
- ```
+1. Install the [`cosmosdb-preview`](https://github.com/azure/azure-cli-extensions/tree/main/src/cosmosdb-preview) extension.
+ ```azurecli
+ az extension add \
+ --name cosmosdb-preview
+ ```
- Wait for a few minutes and check the completion using the below, the provisioningState in the output should have become Succeeded:
- ```
-az rest --method GET --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-07-01-preview
-```
+1. Create shell variables for `accountName` and `resourceGroupName`.
-Perform another patch to set ΓÇ£CassandraEnableMaterializedViewsΓÇ¥ capability and wait for it to succeed
+ ```azurecli
+ # Variable for resource group name
+ resourceGroupName="<resource-group-name>"
+
+ # Variable for account name
+ accountName="<account-name>"
+ ```
-```
-az rest --method PATCH --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-07-01-preview --body @body.txt
+1. Enable the preview materialized views feature for the account using [`az cosmosdb update`](/cli/azure/cosmosdb#az-cosmosdb-update).
-body.txt content:
-{
- "properties":
- {
- "capabilities":
-[{"name":"EnableCassandra"},
- {"name":"CassandraEnableMaterializedViews"}]
- }
-}
-```
+ ```azurecli
+ az cosmosdb update \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --enable-materialized-views true \
+ --capabilities CassandraEnableMaterializedViews
+ ```
-### Create materialized view builder
+### [REST API](#tab/rest-api)
-Following this step, you'll also need to provision a Materialized View Builder:
+1. Sign in to the Azure CLI.
-```
-az rest --method PUT --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}/services/materializedViewsBuilder?api-version=2021-07-01-preview --body @body.txt
+ ```azurecli
+ az login
+ ```
-body.txt content:
-{
- "properties":
- {
- "serviceType": "materializedViewsBuilder",
- "instanceCount": 1,
- "instanceSize": "Cosmos.D4s"
- }
-}
-```
+ > [!NOTE]
+ > If you do not have the Azure CLI installed, see [how to install the Azure CLI](/cli/azure/install-azure-cli).
-Wait for a couple of minutes and check the status using the below, the status in the output should have become Running:
+1. Create shell variables for `accountName` and `resourceGroupName`.
-```
-az rest --method GET --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}/services/materializedViewsBuilder?api-version=2021-07-01-preview
-```
+ ```azurecli
+ # Variable for resource group name
+ resourceGroupName="<resource-group-name>"
+
+ # Variable for account name
+ accountName="<account-name>"
+ ```
-## Caveats and current limitations
+1. Create a new JSON file with the capabilities manifest.
-Once your account and Materialized View Builder is set up, you should be able to create Materialized views per the documentation [here](https://cassandra.apache.org/doc/latest/cql/mvs.html) :
+ ```json
+ {
+ "properties": {
+ "capabilities": [
+ {
+ "name": "CassandraEnableMaterializedViews"
+ }
+ ],
+ "enableMaterializedViews": true
+ }
+ }
+ ```
-However, there are a few caveats with Azure Cosmos DB for Apache CassandraΓÇÖs preview implementation of Materialized Views:
-- Materialized Views can't be created on a table that existed before the account was onboarded to support materialized views. Create new table after account is onboarded on which materialized views can be defined.-- For the MV definitionΓÇÖs WHERE clause, only ΓÇ£IS NOT NULLΓÇ¥ filters are currently allowed.-- After a Materialized View is created against a base table, ALTER TABLE ADD operations aren't allowed on the base tableΓÇÖs schema - they're allowed only if none of the MVs have select * in their definition.
+ > [!NOTE]
+ > In this example, we named the JSON file **capabilities.json**.
-In addition to the above, note the following limitations
+1. Get the unique identifier for your existing account using [`az cosmosdb show`](/cli/azure/cosmosdb#az-cosmosdb-show).
-### Availability zones limitations
+ ```azurecli
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --query id
+ ```
-- Materialized views can't be enabled on an account that has Availability zone enabled regions. -- Adding a new region with Availability zone is not supported once ΓÇ£enableMaterializedViewsΓÇ¥ is set to true on the account.
+ Store the unique identifier in a shell variable named `$uri`.
-### Periodic backup and restore limitations
+ ```azurecli
+ uri=$(
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --query id \
+ --output tsv
+ )
+ ```
-Materialized views aren't automatically restored with the restore process. Customer needs to re-create the materialized views after the restore process is complete. Customer needs to enableMaterializedViews on their restored account before creating the materialized views and provision the builders for the materialized views to be built.
+1. Enable the preview materialized views feature for the account using the REST API and [`az rest`](/cli/azure/reference-index#az-rest) with an HTTP `PATCH` verb.
-Other limitations similar to **Open Source Apache Cassandra** behavior
+ ```azurecli
+ az rest \
+ --method PATCH \
+ --uri "https://management.azure.com/$uri/?api-version=2021-11-15-preview" \
+ --body @capabilities.json
+ ```
-- Defining Conflict resolution policy on Materialized Views is not allowed.-- Write operations from customer aren't allowed on Materialized views.-- Cross document queries and use of aggregate functions aren't supported on Materialized views.-- Modifying MaterializedViewDefinitionString after MV creation is not supported.-- Deleting base table is not allowed if at least one MV is defined on it. All the MVs must first be deleted and then the base table can be deleted.-- Defining materialized views on containers with Static columns is not allowed+ ## Under the hood
-Azure Cosmos DB for Apache Cassandra uses a MV builder compute layer to maintain Materialized views. Customer gets flexibility to configure the MV builder compute instances depending on the latency and lag requirements to hydrate the views. The compute containers are shared among all MVs within the database account. Each provisioned compute container spawns off multiple tasks that read change feed from base table partitions and write data to MV (which is also another table) after transforming them as per MV definition for every MV in the database account.
-
-## Frequently asked questions (FAQs) …
--
-### What transformations/actions are supported?
--- Specifying a partition key that is different from base table partition key.-- Support for projecting selected subset of columns from base table.-- Determine if row from base table can be part of materialized view based on conditions evaluated on primary key columns of base table row. Filters supported - equalities, inequalities, contains. (Planned for GA)
+The API for Cassandra uses a materialized view builder compute layer to maintain the views.
-### What consistency levels will be supported?
+You get the flexibility to configure the view builder's compute instances based on your latency and lag requirements to hydrate the views. From a technical standpoint, this compute layer helps manage connections between partitions in a more efficient manner, even when the data size is large and the number of partitions is high.
-Data in materialized view is eventually consistent. User might read stale rows when compared to data on base table due to redo of some operations on MVs. This behavior is acceptable since we guarantee only eventual consistency on the MV. Customers can configure (scale up and scale down) the MV builder layer depending on the latency requirement for the view to be consistent with base table.
+The compute containers are shared among all materialized views within an Azure Cosmos DB account. Each provisioned compute container spawns multiple tasks that read the change feed from base table partitions and write data to the target materialized view or views. The compute container transforms the data per the materialized view definition for each materialized view in the account.
-### Will there be an autoscale layer for the MV builder instances?
+## Create a materialized view builder
-Autoscaling for MV builder is not available right now. The MV builder instances can be manually scaled by modifying the instance count(scale out) or instance size(scale up).
+Create a materialized view builder to automatically transform data and write to a materialized view.
-### Details on the billing model
+### [Azure portal](#tab/azure-portal)
-The proposed billing model will be to charge the customers for:
+1. Sign in to the [Azure portal](https://portal.azure.com/).
-**MV Builder compute nodes** MV Builder Compute ΓÇô Single tenant layer
+1. Navigate to your API for Cassandra account.
-**Storage** The OLTP storage of the base table and MV based on existing storage meter for Containers. LogStore won't be charged.
+1. In the resource menu, select **Materialized Views Builder**.
-**Request Units** The provisioned RUs for base container and Materialized View.
+1. On the **Materialized Views Builder** page, configure the SKU and number of instances for the builder.
-### What are the different SKUs that will be available?
-Refer to Pricing - [Azure Cosmos DB | Microsoft Azure](https://azure.microsoft.com/pricing/details/cosmos-db/) and check instances under Dedicated Gateway
+ > [!NOTE]
+ > This resource menu option and page will only appear when the Materialized Views feature is enabled for the account.
-### What type of TTL support do we have?
+1. Select **Save**.
-Setting table level TTL on MV is not allowed. TTL from base table rows will be applied on MV as well.
+### [Azure CLI](#tab/azure-cli)
+1. Enable the materialized views builder for the account using [`az cosmosdb service create`](/cli/azure/cosmosdb/service#az-cosmosdb-service-create).
-### Initial troubleshooting if MVs aren't up to date:
-- Check if MV builder instances are provisioned-- Check if enough RUs are provisioned on the base table-- Check for unavailability on Base table or MV
+ ```azurecli
+ az cosmosdb service create \
+ --resource-group $resourceGroupName \
+ --name materialized-views-builder \
+ --account-name $accountName \
+ --count 1 \
+ --kind MaterializedViewsBuilder \
+ --size Cosmos.D4s
+ ```
-### What type of monitoring is available in addition to the existing monitoring for API for Cassandra?
+### [REST API](#tab/rest-api)
-- Max Materialized View Catchup Gap in Minutes ΓÇô Value(t) indicates rows written to base table in last ΓÇÿtΓÇÖ minutes is yet to be propagated to MV. -- Metrics related to RUs consumed on base table for MV build (read change feed cost)-- Metrics related to RUs consumed on MV for MV build (write cost)-- Metrics related to resource consumption on MV builders (CPU, memory usage metrics)
+1. Create a new JSON file with the builder manifest.
+ ```json
+ {
+ "properties": {
+ "serviceType": "materializedViewsBuilder",
+ "instanceCount": 1,
+ "instanceSize": "Cosmos.D4s"
+ }
+ }
+ ```
-### What are the restore options available for MVs?
-MVs can't be restored. Hence, MVs will need to be recreated once the base table is restored.
-
-### Can you create more than one view on a base table?
-
-Multiple views can be created on the same base table. Limit of five views is enforced.
-
-### How is uniqueness enforced on the materialized view? How will the mapping between the records in base table to the records in materialized view look like?
-
-The partition and clustering key of the base table are always part of primary key of any materialized view defined on it and enforce uniqueness of primary key after data repartitioning.
+ > [!NOTE]
+ > In this example, we named the JSON file **builder.json**.
-### Can we add or remove columns on the base table once materialized view is defined?
+1. Enable the materialized views builder for the account using the REST API and `az rest` with an HTTP `PUT` verb.
-You'll be able to add a column to the base table, but you won't be able to remove a column. After a MV is created against a base table, ALTER TABLE ADD operations aren't allowed on the base table - they're allowed only if none of the MVs have select * in their definition. Cassandra doesn't support dropping columns on the base table if it has a materialized view defined on it.
+ ```azurecli
+ az rest \
+ --method PUT \
+ --uri "https://management.azure.com/$uri/services/materializedViewsBuilder?api-version=2021-11-15-preview" \
+ --body @builder.json
+ ```
-### Can we create MV on existing base table?
+1. Wait a couple of minutes and check the status using `az rest` again with the HTTP `GET` verb. The status in the output should now be `Running`:
-No. Materialized Views can't be created on a table that existed before the account was onboarded to support materialized views. You would need to create a new table with materialized views defined and move the existing data using [container copy jobs](../intra-account-container-copy.md). MV on existing table is planned for the future.
+ ```azurecli
+ az rest \
+ --method GET \
+ --uri "https://management.azure.com/$uri/services/materializedViewsBuilder?api-version=2021-11-15-preview"
+ ```
-### What are the conditions on which records won't make it to MV and how to identify such records?
+
-Below are some of the identified cases where data from base table can't be written to MV as they violate some constraints on MV table-
-- Rows that donΓÇÖt satisfy partition key size limit in the materialized views-- Rows that don't satisfy clustering key size limit in materialized views
-
-Currently we drop these rows but plan to expose details related to dropped rows in future so that the user can reconcile the missing data.
+## Create a materialized view
+
+Once your account and materialized view builder are set up, you should be able to create materialized views using CQLSH.
+
+> [!NOTE]
+> If you do not already have the standalone CQLSH tool installed, see [install the CQLSH Tool](support.md#cql-shell). You should also [update your connection string](manage-data-cqlsh.md#update-your-connection-string) in the tool.
+
+Here are a few sample commands to create a materialized view:
+
+1. First, create a **keyspace** named `uprofile`.
+
+ ```sql
+ CREATE KEYSPACE IF NOT EXISTS uprofile WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'datacenter1' : 1 };
+ ```
+
+1. Next, create a table named `user` within the keyspace.
+
+ ```sql
+ CREATE TABLE IF NOT EXISTS uprofile.USER (user_id INT PRIMARY KEY, user_name text, user_bcity text);
+ ```
+
+1. Now, create a materialized view named `user_by_bcity` within the same keyspace. Specify, using a query, how data is projected into the view from the base table.
+
+ ```sql
+ CREATE MATERIALIZED VIEW uprofile.user_by_bcity AS
+ SELECT
+ user_id,
+ user_name,
+ user_bcity
+ FROM
+ uprofile.USER
+ WHERE
+ user_id IS NOT NULL
+ AND user_bcity IS NOT NULL PRIMARY KEY (user_bcity, user_id);
+ ```
+
+1. Insert rows into the base table.
+
+ ```sql
+ INSERT INTO
+ uprofile.USER (user_id, user_name, user_bcity)
+ VALUES
+ (
+ 101, 'johnjoe', 'New York'
+ );
+
+ INSERT INTO
+ uprofile.USER (user_id, user_name, user_bcity)
+ VALUES
+ (
+ 102, 'james', 'New York'
+ );
+ ```
+
+1. Query the materialized view.
+
+ ```sql
+ SELECT * FROM user_by_bcity;
+ ```
+
+1. Observe the output from the materialized view.
+
+ ```output
+ user_bcity | user_id | user_name
+    ------------+---------+-----------
+ New York | 101 | johnjoe
+ New York | 102 | james
+
+ (2 rows)
+ ```
+
+Optionally, you can also use the resource provider to create or update a materialized view.
+
+- [Create or Update a view in API for Cassandra](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/cassandra-resources/create-update-cassandra-view)
+- [Get a view in API for Cassandra](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/cassandra-resources/get-cassandra-view)
+- [List views in API for Cassandra](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/cassandra-resources/list-cassandra-views)
+- [Delete a view in API for Cassandra](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/cassandra-resources/delete-cassandra-view)
+- [Update the throughput of a view in API for Cassandra](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/cassandra-resources/update-cassandra-view-throughput)
+
+## Current limitations
+
+There are a few limitations with the API for Cassandra's preview implementation of materialized views:
+
+- Materialized views can't be created on a table that existed before support for materialized views was enabled on the account. To use materialized views, create a new table after the feature is enabled.
+- For the materialized view definition's `WHERE` clause, only `IS NOT NULL` filters are currently allowed.
+- After a materialized view is created against a base table, `ALTER TABLE ADD` operations aren't allowed on the base table's schema. `ALTER TABLE ADD` is allowed only if none of the materialized views have selected `*` in their definition.
+- There are limits on the partition key size (**2 Kb**) and the total length of the clustering key (**1 Kb**). If this size limit is exceeded, the responsible message ends up in the poison message queue.
+- If a base table has user-defined types (UDTs) and materialized view definition has either `SELECT * FROM` or has the UDT in one of projected columns, UDT updates aren't permitted on the account.
+- Materialized views may become inconsistent with the base table for a few rows after automatic regional failover. To avoid this inconsistency, rebuild the materialized view after the failover.
+- Creating materialized view builder instances with **32 cores** isn't supported. If needed, you can create multiple builder instances with a smaller number of cores.
+
+In addition to the above limitations, consider the following extra limitations:
+
+- Availability zones
+ - Materialized views can't be enabled on an account that has availability zone enabled regions.
+ - Adding a new region with an availability zone isn't supported once `enableMaterializedViews` is set to true on the account.
+- Periodic backup and restore
+  - Materialized views aren't automatically restored with the restore process. You'll need to re-create the materialized views after the restore process is complete. Then, you should enable `enableMaterializedViews` on your restored account before creating the materialized views and builders again.
+- Apache Cassandra
+ - Defining conflict resolution policy on materialized views isn't allowed.
+ - Write operations aren't allowed on materialized views.
+ - Cross document queries and use of aggregate functions aren't supported on materialized views.
+ - A materialized view's schema can't be modified after creation.
+ - Deleting the base table isn't allowed if at least one materialized view is defined on it. All the views must first be deleted and then the base table can be deleted.
+ - Defining materialized views on containers with static columns isn't allowed.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Review frequently asked questions (FAQ) about materialized views in API for Cassandra](materialized-views-faq.yml)
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-get-started.md
Title: Get started with Azure Cosmos DB for MongoDB and .NET
+ Title: Get started with Azure Cosmos DB for MongoDB using .NET
description: Get started developing a .NET application that works with Azure Cosmos DB for MongoDB. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for MongoDB database.--++
+ms.devlang: csharp
Last updated 10/17/2022-+
-# Get started with Azure Cosmos DB for MongoDB and .NET Core
+# Get started with Azure Cosmos DB for MongoDB using .NET
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)] This article shows you how to connect to Azure Cosmos DB for MongoDB using .NET Core and the relevant NuGet packages. Once connected, you can perform operations on databases, collections, and documents.
This article shows you how to connect to Azure Cosmos DB for MongoDB using .NET
[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver) - ## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* [.NET 6.0](https://dotnet.microsoft.com/en-us/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-* [Azure Cosmos DB for MongoDB resource](quickstart-dotnet.md#create-an-azure-cosmos-db-account)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- [.NET 6.0](https://dotnet.microsoft.com/download)
+- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+- [Azure Cosmos DB for MongoDB resource](quickstart-dotnet.md#create-an-azure-cosmos-db-account)
## Create a new .NET Core app
-1. Create a new .NET Core application in an empty folder using your preferred terminal. For this scenario you'll use a console application. Use the [``dotnet new``](/dotnet/core/tools/dotnet-new) command to create and name the console app.
+1. Create a new .NET Core application in an empty folder using your preferred terminal. For this scenario, you'll use a console application. Use the [``dotnet new``](/dotnet/core/tools/dotnet-new) command to create and name the console app.
```console dotnet new console -o app
The following guides show you how to use each of these classes to build your app
**Guide**:
-* [Manage databases](how-to-dotnet-manage-databases.md)
-* [Manage collections](how-to-dotnet-manage-collections.md)
-* [Manage documents](how-to-dotnet-manage-documents.md)
-* [Use queries to find documents](how-to-dotnet-manage-queries.md)
+- [Manage databases](how-to-dotnet-manage-databases.md)
+- [Manage collections](how-to-dotnet-manage-collections.md)
+- [Manage documents](how-to-dotnet-manage-documents.md)
+- [Use queries to find documents](how-to-dotnet-manage-queries.md)
## See also
The following guides show you how to use each of these classes to build your app
## Next steps
-Now that you've connected to a API for MongoDB account, use the next guide to create and manage databases.
+Now that you've connected to an API for MongoDB account, use the next guide to create and manage databases.
> [!div class="nextstepaction"] > [Create a database in Azure Cosmos DB for MongoDB using .NET](how-to-dotnet-manage-databases.md)
cosmos-db How To Dotnet Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-collections.md
Title: Create a collection in Azure Cosmos DB for MongoDB using .NET description: Learn how to work with a collection in your Azure Cosmos DB for MongoDB database using the .NET SDK.--++
+ms.devlang: csharp
Last updated 07/22/2022-+ # Manage a collection in Azure Cosmos DB for MongoDB using .NET
In Azure Cosmos DB, a collection is analogous to a table in a relational databas
Here are some quick rules when naming a collection:
-* Keep collection names between 3 and 63 characters long
-* Collection names can only contain lowercase letters, numbers, or the dash (-) character.
-* Container names must start with a lowercase letter or number.
+- Keep collection names between 3 and 63 characters long
+- Collection names can only contain lowercase letters, numbers, or the dash (-) character.
+- Collection names must start with a lowercase letter or number.
## Get collection instance Use an instance of the **Collection** class to access the collection on the server.
-* [MongoClient.Database.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
+- [MongoClient.Database.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
The following code snippets assume you've already created your [client connection](how-to-dotnet-get-started.md#create-mongoclient-with-connection-string).
The following code snippets assume you've already created your [client connectio
To create a collection, insert a document into the collection.
-* [MongoClient.Database.Collection](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection)
-* [MongoClient.Database.Collection.InsertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
-* [MongoClient.Database.Collection.InsertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
+- [MongoClient.Database.Collection](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection)
+- [MongoClient.Database.Collection.InsertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
+- [MongoClient.Database.Collection.InsertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/110-manage-collections/program.cs" id="create_collection"::: ## Drop a collection
-* [MongoClient.Db.dropCollection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropCollection)
+- [MongoClient.Db.dropCollection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropCollection)
Drop the collection from the database to remove it permanently. However, the next insert or update operation that accesses the collection will create a new collection with that name.
Drop the collection from the database to remove it permanently. However, the nex
An index is used by the MongoDB query engine to improve performance to database queries.
-* [MongoClient.Database.Collection.indexes](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#indexes)
+- [MongoClient.Database.Collection.indexes](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#indexes)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/110-manage-collections/program.cs" id="get_indexes"::: - ## See also - [Get started with Azure Cosmos DB for MongoDB and .NET](how-to-dotnet-get-started.md)
cosmos-db How To Dotnet Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-databases.md
Title: Manage a MongoDB database using .NET description: Learn how to manage your Azure Cosmos DB resource when it provides the API for MongoDB with a .NET SDK.--++
+ms.devlang: csharp
Last updated 07/22/2022-+ # Manage a MongoDB database using .NET
In Azure Cosmos DB, a database is analogous to a namespace. When you create a da
Here are some quick rules when naming a database:
-* Keep database names between 3 and 63 characters long
-* Database names can only contain lowercase letters, numbers, or the dash (-) character.
-* Database names must start with a lowercase letter or number.
+- Keep database names between 3 and 63 characters long
+- Database names can only contain lowercase letters, numbers, or the dash (-) character.
+- Database names must start with a lowercase letter or number.
Once created, the URI for a database is in this format:
Once created, the URI for a database is in this format:
You can use the `MongoClient` to get an instance of a database, or create one if it doesn't exist already. The `MongoDatabase` class provides access to collections and their documents.
-* [MongoClient](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm)
-* [MongoClient.Database](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoDatabase.htm)
+- [MongoClient](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm)
+- [MongoClient.Database](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoDatabase.htm)
-The following code snippet creates a new database by inserting a document into a collection. Remember, the database will not be created until it is needed for this type of operation.
+The following code snippet creates a new database by inserting a document into a collection. Remember, the database won't be created until it's needed for this type of operation.
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/105-manage-databases/program.cs" id="create_database":::
The following code snippet creates a new database by inserting a document into a
You can also retrieve an existing database by name using the `GetDatabase` method to access its collections and documents.
-* [MongoClient.GetDatabase](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_GetDatabase.htm)
+- [MongoClient.GetDatabase](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_GetDatabase.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/105-manage-databases/program.cs" id="get_database":::
You can also retrieve an existing database by name using the `GetDatabase` metho
You can retrieve a list of all the databases on the server using the `MongoClient`.
-* [MongoClient.Database.ListDatabaseNames](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_ListDatabaseNames_3.htm)
+- [MongoClient.Database.ListDatabaseNames](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_ListDatabaseNames_3.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/105-manage-databases/program.cs" id="get_all_databases":::
This technique can then be used to check if a database already exists.
## Drop a database
-A database is removed from the server using the `DropDatabase` method on the DB class.
+A database is removed from the server using the `DropDatabase` method on the `MongoClient` class.
-* [MongoClient.DropDatabase](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_DropDatabase_1.htm)
+- [MongoClient.DropDatabase](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_DropDatabase_1.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/105-manage-databases/program.cs" id="drop_database":::
cosmos-db How To Dotnet Manage Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-documents.md
Title: Create a document in Azure Cosmos DB for MongoDB using .NET description: Learn how to work with a document in your Azure Cosmos DB for MongoDB database using the .NET SDK.--++
+ms.devlang: csharp
Last updated 07/22/2022-+ # Manage a document in Azure Cosmos DB for MongoDB using .NET
Manage your MongoDB documents with the ability to insert, update, and delete doc
Insert one or many documents, defined with a JSON schema, into your collection.
-* [MongoClient.Database.Collection.InsertOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertOne_1.htm)
-* [MongoClient.Database.Collection.InsertMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertMany_1.htm)
+- [MongoClient.Database.Collection.InsertOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertOne_1.htm)
+- [MongoClient.Database.Collection.InsertMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertMany_1.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/115-manage-documents/program.cs" id="insert_document"::: ## Update a document
-To update a document, specify the query filter used to find the document along with a set of properties of the document that should be updated.
+To update a document, specify the query filter used to find the document along with a set of properties of the document that should be updated.
-* [MongoClient.Database.Collection.UpdateOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_UpdateOne_1.htm)
-* [MongoClient.Database.Collection.UpdateMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_UpdateMany_1.htm)
+- [MongoClient.Database.Collection.UpdateOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_UpdateOne_1.htm)
+- [MongoClient.Database.Collection.UpdateMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_UpdateMany_1.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/115-manage-documents/program.cs" id="update_document"::: ## Bulk updates to a collection
-You can perform several different types of operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Azure Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
+You can perform several different types of operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Azure Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
The following bulk operations are available:
-* [MongoClient.Database.Collection.BulkWrite](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_BulkWrite_1.htm)
+- [MongoClient.Database.Collection.BulkWrite](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_BulkWrite_1.htm)
- * insertOne
- * updateOne
- * updateMany
- * deleteOne
- * deleteMany
+ - insertOne
+
+ - updateOne
+
+ - updateMany
+
+ - deleteOne
+
+ - deleteMany
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/115-manage-documents/program.cs" id="bulk_write"::: ## Delete a document
-To delete documents, use a query to define how the documents are found.
+To delete documents, use a query to define how the documents are found.
-* [MongoClient.Database.Collection.DeleteOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_DeleteOne_1.htm)
-* [MongoClient.Database.Collection.DeleteMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_DeleteMany_1.htm)
+- [MongoClient.Database.Collection.DeleteOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_DeleteOne_1.htm)
+- [MongoClient.Database.Collection.DeleteMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_DeleteMany_1.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/115-manage-documents/program.cs" id="delete_document":::
cosmos-db How To Dotnet Manage Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-queries.md
Title: Query documents in Azure Cosmos DB for MongoDB using .NET description: Learn how to query documents in your Azure Cosmos DB for MongoDB database using the .NET SDK.--++
+ms.devlang: csharp
Last updated 07/22/2022-+ # Query documents in Azure Cosmos DB for MongoDB using .NET
Use queries to find documents in a collection.
## Query for documents
-To find documents, use a query filter on the collection to define how the documents are found.
+To find documents, use a query filter on the collection to define how the documents are found.
-* [MongoClient.Database.Collection.Find](https://www.mongodb.com/docs/manual/reference/method/db.collection.find/)
-* [FilterDefinition](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_FilterDefinition_1.htm)
-* [FilterDefinitionBuilder](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_FilterDefinitionBuilder_1.htm)
+- [MongoClient.Database.Collection.Find](https://www.mongodb.com/docs/manual/reference/method/db.collection.find/)
+- [FilterDefinition](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_FilterDefinition_1.htm)
+- [FilterDefinitionBuilder](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_FilterDefinitionBuilder_1.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/125-manage-queries/program.cs" id="query_documents":::
cosmos-db How To Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-get-started.md
Title: Get started with Azure Cosmos DB for MongoDB and JavaScript
+ Title: Get started with Azure Cosmos DB for MongoDB using JavaScript
description: Get started developing a JavaScript application that works with Azure Cosmos DB for MongoDB. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for MongoDB database.
ms.devlang: javascript Last updated 06/23/2022-+
-# Get started with Azure Cosmos DB for MongoDB and JavaScript
+# Get started with Azure Cosmos DB for MongoDB using JavaScript
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)] This article shows you how to connect to Azure Cosmos DB for MongoDB using the native MongoDB npm package. Once connected, you can perform operations on databases, collections, and docs.
This article shows you how to connect to Azure Cosmos DB for MongoDB using the n
[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb) - ## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* [Node.js LTS](https://nodejs.org/en/download/)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-* [Azure Cosmos DB for MongoDB resource](quickstart-nodejs.md#create-an-azure-cosmos-db-account)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- [Node.js LTS](https://nodejs.org/en/download/)
+- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+- [Azure Cosmos DB for MongoDB resource](quickstart-nodejs.md#create-an-azure-cosmos-db-account)
## Create a new JavaScript app
-1. Create a new JavaScript application in an empty folder using your preferred terminal. Use the [``npm init``](https://docs.npmjs.com/cli/v8/commands/npm-init) command to begin the prompts to create the `package.json` file. Accept the defaults for the prompts.
+1. Create a new JavaScript application in an empty folder using your preferred terminal. Use the [``npm init``](https://docs.npmjs.com/cli/v8/commands/npm-init) command to begin the prompts to create the `package.json` file. Accept the defaults for the prompts.
```console npm init ```
-2. Add the [MongoDB](https://www.npmjs.com/package/mongodb) npm package to the JavaScript project. Use the [``npm install package``](https://docs.npmjs.com/cli/v8/commands/npm-install) command specifying the name of the npm package. The `dotenv` package is used to read the environment variables from a `.env` file during local development.
+1. Add the [MongoDB](https://www.npmjs.com/package/mongodb) npm package to the JavaScript project. Use the [``npm install package``](https://docs.npmjs.com/cli/v8/commands/npm-install) command specifying the name of the npm package. The `dotenv` package is used to read the environment variables from a `.env` file during local development.
```console npm install mongodb dotenv ```
-3. To run the app, use a terminal to navigate to the application directory and run the application.
+1. To run the app, use a terminal to navigate to the application directory and run the application.
```console node index.js
This article shows you how to connect to Azure Cosmos DB for MongoDB using the n
## Connect with MongoDB native driver to Azure Cosmos DB for MongoDB
-To connect with the MongoDB native driver to Azure Cosmos DB, create an instance of the [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#connect) class. This class is the starting point to perform all operations against databases.
+To connect with the MongoDB native driver to Azure Cosmos DB, create an instance of the [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#connect) class. This class is the starting point to perform all operations against databases.
The most common constructor for **MongoClient** has two parameters:
Skip this step and use the information for the portal in the next step.
## Create MongoClient with connection string -
-1. Add dependencies to reference the MongoDB and DotEnv npm packages.
+1. Add dependencies to reference the MongoDB and DotEnv npm packages.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/101-client-connection-string/index.js" id="package_dependencies":::
-2. Define a new instance of the ``MongoClient`` class using the constructor, and [``process.env.``](https://nodejs.org/dist/latest-v8.x/docs/api/process.html#process_process_env) to use the connection string.
+1. Define a new instance of the ``MongoClient`` class using the constructor, and [``process.env.``](https://nodejs.org/dist/latest-v8.x/docs/api/process.html#process_process_env) to use the connection string.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/101-client-connection-string/index.js" id="client_credentials":::
For more information on different ways to create a ``MongoClient`` instance, see
## Close the MongoClient connection
-When your application is finished with the connection remember to close it. That `.close()` call should be after all database calls are made.
+When your application is finished with the connection, remember to close it. The `.close()` call should be after all database calls are made.
```javascript client.close()
The following guides show you how to use each of these classes to build your app
**Guide**:
-* [Manage databases](how-to-javascript-manage-databases.md)
-* [Manage collections](how-to-javascript-manage-collections.md)
-* [Manage documents](how-to-javascript-manage-documents.md)
-* [Use queries to find documents](how-to-javascript-manage-queries.md)
+- [Manage databases](how-to-javascript-manage-databases.md)
+- [Manage collections](how-to-javascript-manage-collections.md)
+- [Manage documents](how-to-javascript-manage-documents.md)
+- [Use queries to find documents](how-to-javascript-manage-queries.md)
## See also
The following guides show you how to use each of these classes to build your app
## Next steps
-Now that you've connected to a API for MongoDB account, use the next guide to create and manage databases.
+Now that you've connected to an API for MongoDB account, use the next guide to create and manage databases.
> [!div class="nextstepaction"] > [Create a database in Azure Cosmos DB for MongoDB using JavaScript](how-to-javascript-manage-databases.md)
cosmos-db How To Javascript Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-collections.md
ms.devlang: javascript Last updated 06/23/2022-+ # Manage a collection in Azure Cosmos DB for MongoDB using JavaScript
Manage your MongoDB collection stored in Azure Cosmos DB with the native MongoDB
[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb) - ## Name a collection In Azure Cosmos DB, a collection is analogous to a table in a relational database. When you create a collection, the collection name forms a segment of the URI used to access the collection resource and any child docs. Here are some quick rules when naming a collection:
-* Keep collection names between 3 and 63 characters long
-* Collection names can only contain lowercase letters, numbers, or the dash (-) character.
-* Container names must start with a lowercase letter or number.
+- Keep collection names between 3 and 63 characters long
+- Collection names can only contain lowercase letters, numbers, or the dash (-) character.
+- Collection names must start with a lowercase letter or number.
## Get collection instance Use an instance of the **Collection** class to access the collection on the server.
-* [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
+- [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
The following code snippets assume you've already created your [client connection](how-to-javascript-get-started.md#create-mongoclient-with-connection-string) and that you [close your client connection](how-to-javascript-get-started.md#close-the-mongoclient-connection) after these code snippets.
The following code snippets assume you've already created your [client connectio
To create a collection, insert a document into the collection.
-* [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection)
-* [MongoClient.Db.Collection.insertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
-* [MongoClient.Db.Collection.insertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
+- [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection)
+- [MongoClient.Db.Collection.insertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
+- [MongoClient.Db.Collection.insertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/203-insert-doc/index.js" id="database_object"::: ## Drop a collection
-* [MongoClient.Db.dropCollection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropCollection)
+- [MongoClient.Db.dropCollection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropCollection)
Drop the collection from the database to remove it permanently. However, the next insert or update operation that accesses the collection will create a new collection with that name.
The preceding code snippet displays the following example console output:
An index is used by the MongoDB query engine to improve performance to database queries.
-* [MongoClient.Db.Collection.indexes](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#indexes)
+- [MongoClient.Db.Collection.indexes](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#indexes)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/225-get-collection-indexes/index.js" id="collection":::
cosmos-db How To Javascript Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-databases.md
ms.devlang: javascript Last updated 06/23/2022-+ # Manage a MongoDB database using JavaScript
Your MongoDB server in Azure Cosmos DB is available from the common npm packages for MongoDB such as:
-* [MongoDB](https://www.npmjs.com/package/mongodb)
+- [MongoDB](https://www.npmjs.com/package/mongodb)
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
In Azure Cosmos DB, a database is analogous to a namespace. When you create a da
Here are some quick rules when naming a database:
-* Keep database names between 3 and 63 characters long
-* Database names can only contain lowercase letters, numbers, or the dash (-) character.
-* Database names must start with a lowercase letter or number.
+- Keep database names between 3 and 63 characters long
+- Database names can only contain lowercase letters, numbers, or the dash (-) character.
+- Database names must start with a lowercase letter or number.
Once created, the URI for a database is in this format:
Once created, the URI for a database is in this format:
## Get database instance
-The database holds the collections and their documents. Use an instance of the **Db** class to access the databases on the server.
+The database holds the collections and their documents. Use an instance of the `Db` class to access the databases on the server.
-* [MongoClient.Db](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
+- [MongoClient.Db](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
The following code snippets assume you've already created your [client connection](how-to-javascript-get-started.md#create-mongoclient-with-connection-string) and that you [close your client connection](how-to-javascript-get-started.md#close-the-mongoclient-connection) after these code snippets.
The following code snippets assume you've already created your [client connectio
Access the **Admin** class to retrieve server information. You don't need to specify the database name in the `db` method. The information returned is specific to MongoDB and doesn't represent the Azure Cosmos DB platform itself.
-* [MongoClient.Db.Admin](https://mongodb.github.io/node-mongodb-native/4.7/classes/Admin.html)
+- [MongoClient.Db.Admin](https://mongodb.github.io/node-mongodb-native/4.7/classes/Admin.html)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/200-admin/index.js" id="server_info":::
The preceding code snippet displays the following example console output:
The native MongoDB driver for JavaScript creates the database if it doesn't exist when you access it. If you would prefer to know if the database already exists before using it, get the list of current databases and filter for the name:
-* [MongoClient.Db.Admin.listDatabases](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
+- [MongoClient.Db.Admin.listDatabases](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/201-does-database-exist/index.js" id="does_database_exist":::
The preceding code snippet displays the following example console output:
When you manage your MongoDB server programmatically, it's helpful to know what databases and collections are on the server and how many documents in each collection.
-* [MongoClient.Db.Admin.listDatabases](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
-* [MongoClient.Db.listCollections](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#listCollections)
-* [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
-* [MongoClient.Db.Collection.countDocuments](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#countDocuments)
+- [MongoClient.Db.Admin.listDatabases](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
+- [MongoClient.Db.listCollections](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#listCollections)
+- [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
+- [MongoClient.Db.Collection.countDocuments](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#countDocuments)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/202-get-doc-count/index.js" id="database_object":::
The preceding code snippet displays the following example console output:
To get a database object instance, call the following method. This method accepts an optional database name and can be part of a chain.
-* [``MongoClient.Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html)
+- [``MongoClient.Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html)
-A database is created when it is accessed. The most common way to access a new database is to add a document to a collection. In one line of code using chained objects, the database, collection, and doc are created.
+A database is created when it's accessed. The most common way to access a new database is to add a document to a collection. In one line of code using chained objects, the database, collection, and doc are created.
```javascript const insertOneResult = await client.db("adventureworks").collection("products").insertOne(doc);
Learn more about working with [collections](how-to-javascript-manage-collections
## Drop a database
-A database is removed from the server using the dropDatabase method on the DB class.
+A database is removed from the server using the dropDatabase method on the DB class.
-* [DB.dropDatabase](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropDatabase)
+- [DB.dropDatabase](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropDatabase)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/300-drop-database/index.js" id="drop_database":::
cosmos-db How To Javascript Manage Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-documents.md
ms.devlang: javascript Last updated 06/23/2022-+ # Manage a document in Azure Cosmos DB for MongoDB using JavaScript
Manage your MongoDB documents with the ability to insert, update, and delete doc
Insert a document, defined with a JSON schema, into your collection.
-* [MongoClient.Db.Collection.insertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
-* [MongoClient.Db.Collection.insertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
+- [MongoClient.Db.Collection.insertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
+- [MongoClient.Db.Collection.insertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/203-insert-doc/index.js" id="database_object":::
The preceding code snippet displays the following example console output:
If you don't provide an ID, `_id`, for your document, one is created for you as a BSON object. The value of the provided ID is accessed with the ObjectId method.
-* [ObjectId](https://mongodb.github.io/node-mongodb-native/4.7/classes/ObjectId.html)
+- [ObjectId](https://mongodb.github.io/node-mongodb-native/4.7/classes/ObjectId.html)
Use the ID to query for documents:
const query = { _id: ObjectId("62b1f43a9446918500c875c5")};
## Update a document
-To update a document, specify the query used to find the document along with a set of properties of the document that should be updated. You can choose to upsert the document, which inserts the document if it doesn't already exist.
-
-* [MongoClient.Db.Collection.updateOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#updateOne)
-* [MongoClient.Db.Collection.updateMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#updateMany)
+To update a document, specify the query used to find the document along with a set of properties of the document that should be updated. You can choose to upsert the document, which inserts the document if it doesn't already exist.
+- [MongoClient.Db.Collection.updateOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#updateOne)
+- [MongoClient.Db.Collection.updateMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#updateMany)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/250-upsert-doc/index.js" id="upsert":::
The preceding code snippet displays the following example console output for an
## Bulk updates to a collection
-You can perform several operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Azure Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
+You can perform several operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Azure Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
The following bulk operations are available:
-* [MongoClient.Db.Collection.bulkWrite](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#bulkWrite)
+- [MongoClient.Db.Collection.bulkWrite](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#bulkWrite)
+
+ - insertOne
+
+ - updateOne
+
+ - updateMany
+
+ - deleteOne
- * insertOne
- * updateOne
- * updateMany
- * deleteOne
- * deleteMany
+ - deleteMany
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/251-bulk_write/index.js" id="bulk_write":::
The preceding code snippet displays the following example console output:
## Delete a document
-To delete documents, use a query to define how the documents are found.
+To delete documents, use a query to define how the documents are found.
-* [MongoClient.Db.Collection.deleteOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#deleteOne)
-* [MongoClient.Db.Collection.deleteMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#deleteMany)
+- [MongoClient.Db.Collection.deleteOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#deleteOne)
+- [MongoClient.Db.Collection.deleteMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#deleteMany)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/290-delete-doc/index.js" id="delete":::
cosmos-db How To Javascript Manage Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-queries.md
ms.devlang: javascript Last updated 07/29/2022-+ # Query data in Azure Cosmos DB for MongoDB using JavaScript
Use [queries](#query-for-documents) and [aggregation pipelines](#aggregation-pip
[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb) - ## Query for documents
-To find documents, use a query to define how the documents are found.
+To find documents, use a query to define how the documents are found.
-* [MongoClient.Db.Collection.findOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#findOne)
-* [MongoClient.Db.Collection.find](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#find)
-* [FindCursor](https://mongodb.github.io/node-mongodb-native/4.7/classes/FindCursor.html)
+- [MongoClient.Db.Collection.findOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#findOne)
+- [MongoClient.Db.Collection.find](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#find)
+- [FindCursor](https://mongodb.github.io/node-mongodb-native/4.7/classes/FindCursor.html)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/275-find/index.js" id="read_doc":::
The preceding code snippet displays the following example console output:
## Aggregation pipelines
-Aggregation pipelines are useful to isolate expensive query computation, transformations, and other processing on your Azure Cosmos DB server, instead of performing these operations on the client.
+Aggregation pipelines are useful to isolate expensive query computation, transformations, and other processing on your Azure Cosmos DB server, instead of performing these operations on the client.
-For specific **aggregation pipeline support**, refer to the following:
+For specific **aggregation pipeline support**, refer to the following:
-* [Version 4.2](feature-support-42.md#aggregation-pipeline)
-* [Version 4.0](feature-support-40.md#aggregation-pipeline)
-* [Version 3.6](feature-support-36.md#aggregation-pipeline)
-* [Version 3.2](feature-support-32.md#aggregation-pipeline)
+- [Version 4.2](feature-support-42.md#aggregation-pipeline)
+- [Version 4.0](feature-support-40.md#aggregation-pipeline)
+- [Version 3.6](feature-support-36.md#aggregation-pipeline)
+- [Version 3.2](feature-support-32.md#aggregation-pipeline)
### Aggregation pipeline syntax
-A pipeline is an array with a series of stages as JSON objects.
+A pipeline is an array with a series of stages as JSON objects.
```javascript const pipeline = [
const pipeline = [
A _stage_ defines the operation and the data it's applied to, such as:
-* $match - find documents
-* $addFields - add field to cursor, usually from previous stage
-* $limit - limit the number of results returned in cursor
-* $project - pass along new or existing fields, can be computed fields
-* $group - group results by a field or fields in pipeline
-* $sort - sort results
+- $match - find documents
+- $addFields - add field to cursor, usually from previous stage
+- $limit - limit the number of results returned in cursor
+- $project - pass along new or existing fields, can be computed fields
+- $group - group results by a field or fields in pipeline
+- $sort - sort results
```javascript // reduce collection to relative documents
const sortStage = {
### Aggregate the pipeline to get iterable cursor
-The pipeline is aggregated to produce an iterable cursor.
+The pipeline is aggregated to produce an iterable cursor.
```javascript const db = 'adventureworks';
await aggCursor.forEach(product => {
## Use an aggregation pipeline in JavaScript
-Use a pipeline to keep data processing on the server before returning to the client.
+Use a pipeline to keep data processing on the server before returning to the client.
-### Example product data
+### Example product data
The aggregations below use the [sample products collection](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/252-insert-many/products.json) with data in the shape of:
The aggregations below use the [sample products collection](https://github.com/A
### Example 1: Product subcategories, count of products, and average price
-Use the following [sample code](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/280-aggregation/average-price-in-each-product-subcategory.js) to report on average price in each product subcategory.
+Use the following [sample code](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/280-aggregation/average-price-in-each-product-subcategory.js) to report on average price in each product subcategory.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/280-aggregation/average-price-in-each-product-subcategory.js" id="aggregation_1" highlight="26, 43, 53, 56, 66"::: - ### Example 2: Bike types with price range
-Use the following [sample code](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/280-aggregation/bike-types-and-price-ranges.js) to report on the `Bikes` subcategory.
+Use the following [sample code](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/280-aggregation/bike-types-and-price-ranges.js) to report on the `Bikes` subcategory.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/280-aggregation/bike-types-and-price-ranges.js" id="aggregation_1" highlight="23, 30, 38, 45, 68, 80, 85, 98"::: -- ## See also - [Get started with Azure Cosmos DB for MongoDB and JavaScript](how-to-javascript-get-started.md)
cosmos-db How To Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-python-get-started.md
+
+ Title: Get started with Azure Cosmos DB for MongoDB and Python
+description: Get started developing a Python application that works with Azure Cosmos DB for MongoDB. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for MongoDB database.
+++++
+ms.devlang: python
+ Last updated : 11/16/2022+++
+# Get started with Azure Cosmos DB for MongoDB and Python
+
+This article shows you how to connect to Azure Cosmos DB for MongoDB using the PyMongo driver package. Once connected, you can perform operations on databases, collections, and docs.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) are available on GitHub as a Python project.
+
+This article shows you how to communicate with Azure Cosmos DB's API for MongoDB by using one of the open-source MongoDB client drivers for Python, [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/).
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* [Python 3.8+](https://www.python.org/downloads/)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+* [Azure Cosmos DB for MongoDB resource](quickstart-python.md#create-an-azure-cosmos-db-account)
+
+## Create a new Python app
+
+1. Create a new empty folder using your preferred terminal and change directory to the folder.
+
+ > [!NOTE]
+ > If you just want the finished code, download or fork and clone the [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) repo that has the full example. You can also `git clone` the repo in Azure Cloud Shell to walk through the steps shown in this article.
+
+2. Create a *requirements.txt* file that lists the [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/) and [python-dotenv](https://pypi.org/project/python-dotenv/) packages. The `dotenv` package is used to read the environment variables from a `.env` file during local development.
+
+ ```text
+ # requirements.txt
+ pymongo
+ python-dotenv
+ ```
+
+3. Create a virtual environment and install the packages.
+
+ #### [Windows](#tab/venv-windows)
+
+ ```bash
+ # py -3 uses the global python interpreter. You can also use python3 -m venv .venv.
+ py -3 -m venv .venv
+ source .venv/Scripts/activate
+ pip install -r requirements.txt
+ ```
+
+ #### [Linux / macOS](#tab/venv-linux+macos)
+
+ ```bash
+ python3 -m venv .venv
+ source .venv/bin/activate
+ pip install -r requirements.txt
+ ```
+
+
+
+## Connect with PyMongo driver to Azure Cosmos DB for MongoDB
+
+To connect with the PyMongo driver to Azure Cosmos DB, create an instance of the [MongoClient](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient) object. This class is the starting point to perform all operations against databases.
+
+The most common constructor for **MongoClient** requires just the `host` parameter, which in this article is set to the value of the `COSMOS_CONNECTION_STRING` environment variable. The constructor also accepts other optional and keyword parameters, and many of those options can instead be embedded in the connection string passed as `host`. If the same option is passed both in `host` and as a keyword parameter, the keyword parameter takes precedence.
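+
+A minimal sketch of this pattern (not part of the sample project, and assuming the connection string is stored in a `COSMOS_CONNECTION_STRING` environment variable as described later in this article) might look like the following:
+
+```python
+import os
+
+from dotenv import load_dotenv
+from pymongo import MongoClient
+
+load_dotenv()  # read variables from a local .env file during development
+
+# The connection string is passed as the 'host' argument. Keyword parameters
+# can also be supplied and take precedence over the same option embedded in
+# the connection string.
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])
+```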
+
+Refer to the [Troubleshooting guide](error-codes-solutions.md) for connection issues.
+
+## Get resource name
+
+In the commands below, we show *msdocs-cosmos* as the resource group name. Change the name as appropriate for your situation.
+
+### [Azure CLI](#tab/azure-cli)
++
+### [PowerShell](#tab/azure-powershell)
++
+### [Portal](#tab/azure-portal)
+
+Skip this step and use the information for the portal in the next step.
++
+## Retrieve your connection string
+
+### [Azure CLI](#tab/azure-cli)
++
+### [PowerShell](#tab/azure-powershell)
++
+### [Portal](#tab/azure-portal)
++++
+## Configure environment variables
++
+## Create MongoClient with connection string
+
+1. Add dependencies to reference the [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/) and [python-dotenv](https://pypi.org/project/python-dotenv/) packages.
+
+ :::code language="python" source="~/azure-cosmos-db-mongodb-python-getting-started/101-client-connection-string/run.py" id="package_dependencies":::
+
+2. Define a new instance of the `MongoClient` class using the constructor and the connection string read from an environment variable.
+
+ :::code language="python" source="~/azure-cosmos-db-mongodb-python-getting-started/101-client-connection-string/run.py" id="client_credentials":::
+
+For more information on different ways to create a ``MongoClient`` instance, see [Making a Connection with MongoClient](https://pymongo.readthedocs.io/en/stable/tutorial.html#making-a-connection-with-mongoclient).
+
+## Close the MongoClient connection
+
+When your application is finished with the connection, remember to close it. That `.close()` call should be after all database calls are made.
+
+```python
+client.close()
+```
+
+## Use MongoDB client classes with Azure Cosmos DB for MongoDB
+
+Let's look at the hierarchy of resources in the API for MongoDB and the object model that's used to create and access these resources. The API for MongoDB creates resources in the following order:
+
+* [MongoClient](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html) - The first step when working with PyMongo is to create a MongoClient to connect to Azure Cosmos DB's API for MongoDB. The client object is used to configure and execute requests against the service.
+
+* [Database](https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html) - Azure Cosmos DB's API for MongoDB can support one or more independent databases.
+
+* [Collection](https://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html) - A database can contain one or more collections. A collection is a group of documents stored in MongoDB, and can be thought of as roughly the equivalent of a table in a relational database.
+
+* [Document](https://pymongo.readthedocs.io/en/stable/tutorial.html#documents) - A document is a set of key-value pairs. Documents have a dynamic schema, which means that documents in the same collection don't need to have the same set of fields or structure, and common fields in a collection's documents may hold different types of data. A short sketch below shows each level of this hierarchy.
+
+To learn more about the hierarchy of entities, see the [Azure Cosmos DB resource model](../account-databases-containers-items.md) article.
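+
+To make the hierarchy concrete, here's a short, hypothetical sketch (the `adventureworks` database and `products` collection names are only examples) that touches each level:
+
+```python
+import os
+
+from pymongo import MongoClient
+
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])  # MongoClient
+database = client["adventureworks"]                           # Database
+collection = database["products"]                             # Collection
+
+# Document: a set of key-value pairs whose fields can vary per document
+result = collection.insert_one({"name": "Road Bike", "quantity": 10})
+print("Inserted document id:", result.inserted_id)
+```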
+
+## See also
+
+- [PyPI Package](https://pypi.org/project/pymongo/)
+- [API reference](https://www.mongodb.com/docs/drivers/python/)
+
+## Next steps
+
+Now that you've connected to an API for MongoDB account, use the next guide to create and manage databases.
+
+> [!div class="nextstepaction"]
+> [Create a database in Azure Cosmos DB for MongoDB using Python](how-to-python-manage-databases.md)
cosmos-db How To Python Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-python-manage-databases.md
+
+ Title: Manage a MongoDB database using Python
+description: Learn how to manage your Azure Cosmos DB resource when it provides the API for MongoDB with a Python SDK.
++++
+ms.devlang: python
+ Last updated : 11/15/2022+++
+# Manage a MongoDB database using Python
++
+Your MongoDB server in Azure Cosmos DB is available from the common Python packages for MongoDB such as:
+
+* [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/) for synchronous Python applications, which is the driver used in this article.
+* [Motor](https://www.mongodb.com/docs/drivers/motor/) for asynchronous Python applications.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) are available on GitHub as a Python project.
+
+## Name a database
+
+In Azure Cosmos DB, a database is analogous to a namespace. When you create a database, the database name forms a segment of the URI used to access the database resource and any child resources.
+
+Here are some quick rules when naming a database:
+
+* Keep database names between 3 and 63 characters long
+* Database names can only contain lowercase letters, numbers, or the dash (-) character.
+* Database names must start with a lowercase letter or number.
+
+Once created, the URI for a database is in this format:
+
+`https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>`
+
+## Get database instance
+
+The database holds the collections and their documents. To access a database, use attribute-style or dictionary-style access on the MongoClient. For more information, see [Getting a Database](https://pymongo.readthedocs.io/en/stable/tutorial.html#getting-a-database).
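+
+As an illustration, a brief sketch of the two access styles (using a hypothetical `adventureworks` database) could look like this:
+
+```python
+import os
+
+from pymongo import MongoClient
+
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])
+
+database_by_attribute = client.adventureworks   # attribute-style access
+database_by_key = client["adventureworks"]      # dictionary-style access
+```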
+
+The following code snippets assume you've already created your [client connection](how-to-python-get-started.md#create-mongoclient-with-connection-string) and that you [close your client connection](how-to-python-get-started.md#close-the-mongoclient-connection) after these code snippets.
+
+## Get server information
+
+Access server info with the [server_info](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.server_info) method of the MongoClient class. You don't need to specify the database name to get this information. The information returned is specific to MongoDB and doesn't represent the Azure Cosmos DB platform itself.
+
+You can also list databases using the [MongoClient.list_database_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.list_database_names) method and issue a [MongoDB command](https://www.mongodb.com/docs/manual/reference/command/nav-diagnostic/) to a database with the [MongoClient.db.command](https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html#pymongo.database.Database.command) method.
++
+The preceding code snippet displays output similar to the following example console output:
++
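+
+As a supplement to the snippet above, here's a condensed, hypothetical sketch of those calls:
+
+```python
+import os
+
+from pymongo import MongoClient
+
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])
+
+print(client.server_info())          # build information reported by the server
+print(client.list_database_names())  # names of the existing databases
+
+# Issue a diagnostic MongoDB command against a specific database
+print(client["adventureworks"].command("ping"))
+```
+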
+## Does database exist?
+
+The PyMongo driver for Python creates a database if it doesn't exist when you access it. However, we recommend that you instead use the [MongoDB extension commands](/azure/cosmos-db/mongodb/custom-commands) to manage data stored in Azure Cosmos DB's API for MongoDB. To create a new database if it doesn't exist, use the [create database extension](/azure/cosmos-db/mongodb/custom-commands#create-database) as shown in the following code snippet.
+
+To see if the database already exists before using it, get the list of current databases with the [list_database_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.list_database_names) method.
++
+The preceding code snippet displays output similar to the following example console output:
++
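+
+As a supplement, here's a condensed sketch of the existence check described above; the extension command document shown here is an assumption based on the linked [create database extension](/azure/cosmos-db/mongodb/custom-commands#create-database) reference:
+
+```python
+import os
+
+from pymongo import MongoClient
+
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])
+DB_NAME = "adventureworks"  # hypothetical database name
+
+if DB_NAME not in client.list_database_names():
+    # Create the database with the Azure Cosmos DB extension command
+    client[DB_NAME].command({"customAction": "CreateDatabase"})
+    print(f"Created database: {DB_NAME}")
+else:
+    print(f"Database already exists: {DB_NAME}")
+```
+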
+## Get list of databases, collections, and document count
+
+When you manage your MongoDB server programmatically, it's helpful to know which databases and collections are on the server and how many documents each collection contains. A condensed sketch appears at the end of this section. For more information, see:
+
+* [Getting a database](https://pymongo.readthedocs.io/en/stable/tutorial.html#getting-a-database)
+* [Getting a collection](https://pymongo.readthedocs.io/en/stable/tutorial.html#getting-a-collection)
+* [Counting documents](https://pymongo.readthedocs.io/en/stable/tutorial.html#counting)
++
+The preceding code snippet displays output similar to the following example console output:
++
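+
+Here's the condensed sketch mentioned above, iterating over the databases and collections and counting documents:
+
+```python
+import os
+
+from pymongo import MongoClient
+
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])
+
+for database_name in client.list_database_names():
+    database = client[database_name]
+    for collection_name in database.list_collection_names():
+        count = database[collection_name].count_documents({})
+        print(f"{database_name}.{collection_name}: {count} document(s)")
+```
+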
+## Get database object instance
+
+If a database doesn't exist, the PyMongo driver for Python creates it when you access it. However, we recommend that you instead use the [MongoDB extension commands](/azure/cosmos-db/mongodb/custom-commands) to manage data stored in Azure Cosmos DB's API for MongoDB. The pattern is shown above in the section [Does database exist?](#does-database-exist).
+
+When working with PyMongo, you access databases using attribute-style access on MongoClient instances. Once you have a database instance, you can use database-level operations as shown below.
+
+```python
+# 'db' holds the database name; list the names of the collections it contains
+collections = client[db].list_collection_names()
+```
+
+For an overview of working with databases using the PyMongo driver, see [Database level operations](https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html#pymongo.database.Database).
++
+## Drop a database
+
+A database is removed from the server using the [drop_database](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.drop_database) method of the MongoClient.
++
+The preceding code snippet displays output similar to the following example console output:
++
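+
+As a supplement, a minimal sketch of the `drop_database` call (using a hypothetical database name) looks like this:
+
+```python
+import os
+
+from pymongo import MongoClient
+
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])
+
+# Permanently remove the database and all of its collections and documents
+client.drop_database("adventureworks")
+```
+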
+## See also
+
+- [Get started with Azure Cosmos DB for MongoDB and Python](how-to-python-get-started.md)
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-dotnet.md
Title: Quickstart - Azure Cosmos DB for MongoDB for .NET with MongoDB driver description: Learn how to build a .NET app to manage Azure Cosmos DB for MongoDB account resources in this quickstart.---++
+ms.devlang: csharp
Last updated 07/06/2022-+ # Quickstart: Azure Cosmos DB for MongoDB for .NET with the MongoDB driver
Get started with MongoDB to create databases, collections, and docs within your
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* [.NET 6.0](https://dotnet.microsoft.com/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- [.NET 6.0](https://dotnet.microsoft.com/download)
+- [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
### Prerequisite check
-* In a terminal or command window, run ``dotnet --list-sdks`` to check that .NET 6.x is one of the available versions.
-* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+- In a terminal or command window, run ``dotnet --list-sdks`` to check that .NET 6.x is one of the available versions.
+- Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable Az`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
## Setting up
Before you start building the application, let's look into the hierarchy of reso
You'll use the following MongoDB classes to interact with these resources:
-* [``MongoClient``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm) - This class provides a client-side logical representation for the API for MongoDB layer on Azure Cosmos DB. The client object is used to configure and execute requests against the service.
-* [``MongoDatabase``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoDatabase.htm) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
-* [``Collection``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoCollection.htm) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.
+- [``MongoClient``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm) - This class provides a client-side logical representation for the API for MongoDB layer on Azure Cosmos DB. The client object is used to configure and execute requests against the service.
+- [``MongoDatabase``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoDatabase.htm) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+- [``Collection``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoCollection.htm) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.
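As a rough illustration of how these classes fit together (a hedged sketch, not the quickstart's exact sample; the connection string placeholder and the `adventureworks`/`products` names are assumptions):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

// Client for the Azure Cosmos DB for MongoDB account (placeholder connection string).
var client = new MongoClient("<azure-cosmos-db-for-mongodb-connection-string>");

// Reference to a database; it's only validated server-side when you use it.
IMongoDatabase database = client.GetDatabase("adventureworks");

// Reference to a collection, also validated server-side on first use.
IMongoCollection<BsonDocument> collection = database.GetCollection<BsonDocument>("products");
```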
## Code examples
-* [Authenticate the client](#authenticate-the-client)
-* [Create a database](#create-a-database)
-* [Create a container](#create-a-collection)
-* [Create an item](#create-an-item)
-* [Get an item](#get-an-item)
-* [Query items](#query-items)
+- [Authenticate the client](#authenticate-the-client)
+- [Create a database](#create-a-database)
+- [Create a collection](#create-a-collection)
+- [Create an item](#create-an-item)
+- [Get an item](#get-an-item)
+- [Query items](#query-items)
The sample code demonstrated in this article creates a database named ``adventureworks`` with a collection named ``products``. The ``products`` collection is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-nodejs.md
Title: Quickstart - Azure Cosmos DB for MongoDB for JavaScript with MongoDB driver
-description: Learn how to build a JavaScript app to manage Azure Cosmos DB for MongoDB account resources in this quickstart.
---
+ Title: Quickstart - Azure Cosmos DB for MongoDB driver for MongoDB
+description: Learn how to build a Node.js app to manage Azure Cosmos DB for MongoDB account resources and data in this quickstart.
++ ms.devlang: javascript Last updated 07/06/2022-+
-# Quickstart: Azure Cosmos DB for MongoDB for JavaScript with MongoDB driver
+# Quickstart: Azure Cosmos DB for MongoDB driver for Node.js
[!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
Get started with the MongoDB npm package to create databases, collections, and d
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* [Node.js LTS](https://nodejs.org/en/download/)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- [Node.js LTS](https://nodejs.org/en/download/)
+- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
### Prerequisite check
-* In a terminal or command window, run ``node --version`` to check that Node.js is one of the LTS versions.
-* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+- In a terminal or command window, run ``node --version`` to check that Node.js is one of the LTS versions.
+- Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
## Setting up
Before you start building the application, let's look into the hierarchy of reso
You'll use the following MongoDB classes to interact with these resources:
-* [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html) - This class provides a client-side logical representation for the API for MongoDB layer on Azure Cosmos DB. The client object is used to configure and execute requests against the service.
-* [``Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
-* [``Collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.
+- [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html) - This class provides a client-side logical representation for the API for MongoDB layer on Azure Cosmos DB. The client object is used to configure and execute requests against the service.
+- [``Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+- [``Collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.
## Code examples
-* [Authenticate the client](#authenticate-the-client)
-* [Get database instance](#get-database-instance)
-* [Get collection instance](#get-collection-instance)
-* [Chained instances](#chained-instances)
-* [Create an index](#create-an-index)
-* [Create a doc](#create-a-doc)
-* [Get an doc](#get-a-doc)
-* [Query docs](#query-docs)
+- [Authenticate the client](#authenticate-the-client)
+- [Get database instance](#get-database-instance)
+- [Get collection instance](#get-collection-instance)
+- [Chained instances](#chained-instances)
+- [Create an index](#create-an-index)
+- [Create a doc](#create-a-doc)
+- [Get a doc](#get-a-doc)
+- [Query docs](#query-docs)
The sample code described in this article creates a database named ``adventureworks`` with a collection named ``products``. The ``products`` collection is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
Use the [``Collection.createIndex``](https://mongodb.github.io/node-mongodb-nati
Create a doc with the *product* properties for the `adventureworks` database:
-* An _id property for the unique identifier of the product.
-* A *category* property. This property can be used as the logical partition key.
-* A *name* property.
-* An inventory *quantity* property.
-* A *sale* property, indicating whether the product is on sale.
+- An _id property for the unique identifier of the product.
+- A *category* property. This property can be used as the logical partition key.
+- A *name* property.
+- An inventory *quantity* property.
+- A *sale* property, indicating whether the product is on sale.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="new_doc":::
After you insert a doc, you can run a query to get all docs that match a specifi
Troubleshooting:
-* If you get an error such as `The index path corresponding to the specified order-by item is excluded.`, make sure you [created the index](#create-an-index).
+- If you get an error such as `The index path corresponding to the specified order-by item is excluded.`, make sure you [created the index](#create-an-index).
## Run the code
cosmos-db How To Dotnet Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-create-container.md
ms.devlang: csharp Last updated 07/06/2022-+ # Create a container in Azure Cosmos DB for NoSQL using .NET
In Azure Cosmos DB, a container is analogous to a table in a relational database
Here are some quick rules when naming a container:
-* Keep container names between 3 and 63 characters long
-* Container names can only contain lowercase letters, numbers, or the dash (-) character.
-* Container names must start with a lowercase letter or number.
+- Keep container names between 3 and 63 characters long.
+- Container names can only contain lowercase letters, numbers, or the dash (-) character.
+- Container names must start with a lowercase letter or number.
Once created, the URI for a container is in this format:
Once created, the URI for a container is in this format:
To create a container, call one of the following methods:
-* [``CreateContainerAsync``](#create-a-container-asynchronously)
-* [``CreateContainerIfNotExistsAsync``](#create-a-container-asynchronously-if-it-doesnt-already-exist)
+- [``CreateContainerAsync``](#create-a-container-asynchronously)
+- [``CreateContainerIfNotExistsAsync``](#create-a-container-asynchronously-if-it-doesnt-already-exist)
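The sections below cover each method in detail. As a rough sketch of the second option (assuming an existing `Database` instance named `database`; the container name, partition key path, and throughput values are illustrative):

```csharp
// Create the container only if it doesn't exist yet, partitioned on /category
// with 400 RU/s of manual throughput (illustrative values).
Container container = await database.CreateContainerIfNotExistsAsync(
    id: "products",
    partitionKeyPath: "/category",
    throughput: 400);
```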
### Create a container asynchronously
cosmos-db How To Dotnet Create Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-create-database.md
ms.devlang: csharp Last updated 07/06/2022-+ # Create a database in Azure Cosmos DB for NoSQL using .NET
In Azure Cosmos DB, a database is analogous to a namespace. When you create a da
Here are some quick rules when naming a database:
-* Keep database names between 3 and 63 characters long
-* Database names can only contain lowercase letters, numbers, or the dash (-) character.
-* Database names must start with a lowercase letter or number.
+- Keep database names between 3 and 63 characters long.
+- Database names can only contain lowercase letters, numbers, or the dash (-) character.
+- Database names must start with a lowercase letter or number.
Once created, the URI for a database is in this format:
Once created, the URI for a database is in this format:
To create a database, call one of the following methods:
-* [``CreateDatabaseAsync``](#create-a-database-asynchronously)
-* [``CreateDatabaseIfNotExistsAsync``](#create-a-database-asynchronously-if-it-doesnt-already-exist)
+- [``CreateDatabaseAsync``](#create-a-database-asynchronously)
+- [``CreateDatabaseIfNotExistsAsync``](#create-a-database-asynchronously-if-it-doesnt-already-exist)
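Both methods are covered in the sections below. As a minimal sketch of the second option (assuming an existing `CosmosClient` instance named `client`; the database name is illustrative):

```csharp
// Create the database only if it doesn't already exist; the call is idempotent.
Database database = await client.CreateDatabaseIfNotExistsAsync(id: "adventureworks");
```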
### Create a database asynchronously
cosmos-db How To Dotnet Create Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-create-item.md
ms.devlang: csharp Last updated 07/06/2022-+ # Create an item in Azure Cosmos DB for NoSQL using .NET
When referencing the item using a URI, use the system-generated *resource identi
To create an item, call one of the following methods:
-* [``CreateItemAsync<>``](#create-an-item-asynchronously)
-* [``ReplaceItemAsync<>``](#replace-an-item-asynchronously)
-* [``UpsertItemAsync<>``](#create-or-replace-an-item-asynchronously)
+- [``CreateItemAsync<>``](#create-an-item-asynchronously)
+- [``ReplaceItemAsync<>``](#replace-an-item-asynchronously)
+- [``UpsertItemAsync<>``](#create-or-replace-an-item-asynchronously)
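As a rough sketch of the upsert option (the sections below cover each method in detail; the `Product` record and the existing `Container` instance named `container` are assumptions for this example):

```csharp
// Build the item to persist (values are illustrative).
Product item = new(
    id: "68719518391",
    category: "gear-surf-surfboards",
    name: "Yamba Surfboard",
    quantity: 12,
    sale: false);

// Create the item, or replace it if an item with the same id already exists
// in the same logical partition.
ItemResponse<Product> response = await container.UpsertItemAsync<Product>(
    item,
    new PartitionKey(item.category));

// Hypothetical item type used only for this sketch.
public record Product(string id, string category, string name, int quantity, bool sale);
```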
## Create an item asynchronously
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-get-started.md
Title: Get started with Azure Cosmos DB for NoSQL and .NET
+ Title: Get started with Azure Cosmos DB for NoSQL using .NET
description: Get started developing a .NET application that works with Azure Cosmos DB for NoSQL. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for NoSQL endpoint.
ms.devlang: csharp Last updated 07/06/2022-+
-# Get started with Azure Cosmos DB for NoSQL and .NET
+# Get started with Azure Cosmos DB for NoSQL using .NET
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
This article shows you how to connect to Azure Cosmos DB for NoSQL using the .NE
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* Azure Cosmos DB for NoSQL account. [Create a API for NoSQL account](how-to-create-account.md).
-* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An Azure Cosmos DB for NoSQL account. [Create an API for NoSQL account](how-to-create-account.md).
+- [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
## Set up your project
dotnet build
## <a id="connect-to-azure-cosmos-db-sql-api"></a>Connect to Azure Cosmos DB for NoSQL
-To connect to the API for NoSQL of Azure Cosmos DB, create an instance of the [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) class. This class is the starting point to perform all operations against databases. There are three core ways to connect to a API for NoSQL account using the **CosmosClient** class:
+To connect to the API for NoSQL of Azure Cosmos DB, create an instance of the [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) class. This class is the starting point to perform all operations against databases. There are three core ways to connect to an API for NoSQL account using the **CosmosClient** class:
-* [Connect with a API for NoSQL endpoint and read/write key](#connect-with-an-endpoint-and-key)
-* [Connect with a API for NoSQL connection string](#connect-with-a-connection-string)
-* [Connect with Azure Active Directory](#connect-using-the-microsoft-identity-platform)
+- [Connect with an API for NoSQL endpoint and read/write key](#connect-with-an-endpoint-and-key)
+- [Connect with an API for NoSQL connection string](#connect-with-a-connection-string)
+- [Connect with Azure Active Directory](#connect-using-the-microsoft-identity-platform)
### Connect with an endpoint and key
Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT``
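A minimal sketch of that pattern (the `COSMOS_KEY` variable name is an assumption; the article's own sample may read the values differently):

```csharp
using Microsoft.Azure.Cosmos;

// Read the account endpoint and key from environment variables (names assumed).
string endpoint = Environment.GetEnvironmentVariable("COSMOS_ENDPOINT")!;
string key = Environment.GetEnvironmentVariable("COSMOS_KEY")!;

// CosmosClient is the entry point for all operations against the account.
CosmosClient client = new CosmosClient(endpoint, key);
```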
As you build your application, your code will primarily interact with four types of resources:
-* The API for NoSQL account, which is the unique top-level namespace for your Azure Cosmos DB data.
+- The API for NoSQL account, which is the unique top-level namespace for your Azure Cosmos DB data.
-* Databases, which organize the containers in your account.
+- Databases, which organize the containers in your account.
-* Containers, which contain a set of individual items in your database.
+- Containers, which contain a set of individual items in your database.
-* Items, which represent a JSON document in your container.
+- Items, which represent a JSON document in your container.
The following diagram shows the relationship between these resources.
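For instance, once you have a `CosmosClient`, you can get lightweight references down this hierarchy without any network calls (a sketch; the `adventureworks` and `products` names are assumptions):

```csharp
// Get references to an existing database and container; neither call
// contacts the service, so the resources aren't validated here.
Database database = client.GetDatabase("adventureworks");
Container container = database.GetContainer("products");
```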
The following guides show you how to use each of these classes to build your app
## See also
-* [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
-* [Samples](samples-dotnet.md)
-* [API reference](/dotnet/api/microsoft.azure.cosmos)
-* [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3)
-* [Give Feedback](https://github.com/Azure/azure-cosmos-dotnet-v3/issues)
+- [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
+- [Samples](samples-dotnet.md)
+- [API reference](/dotnet/api/microsoft.azure.cosmos)
+- [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Give Feedback](https://github.com/Azure/azure-cosmos-dotnet-v3/issues)
## Next steps
-Now that you've connected to a API for NoSQL account, use the next guide to create and manage databases.
+Now that you've connected to an API for NoSQL account, use the next guide to create and manage databases.
> [!div class="nextstepaction"] > [Create a database in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-create-database.md)
cosmos-db How To Dotnet Query Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-query-items.md
ms.devlang: csharp Last updated 06/15/2022-+ # Query items in Azure Cosmos DB for NoSQL using .NET
To learn more about the SQL syntax for Azure Cosmos DB for NoSQL, see [Getting s
To query items in a container, call one of the following methods:
-* [``GetItemQueryIterator<>``](#query-items-using-a-sql-query-asynchronously)
-* [``GetItemLinqQueryable<>``](#query-items-using-linq-asynchronously)
+- [``GetItemQueryIterator<>``](#query-items-using-a-sql-query-asynchronously)
+- [``GetItemLinqQueryable<>``](#query-items-using-linq-asynchronously)
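As a rough sketch of the first option (assuming an existing `Container` named `container` and a `Product` type with `id`, `name`, and `category` properties; the query text and parameter value are illustrative):

```csharp
// Build a parameterized SQL query.
QueryDefinition query = new QueryDefinition(
        "SELECT * FROM products p WHERE p.category = @category")
    .WithParameter("@category", "gear-surf-surfboards");

// Iterate over the result pages.
FeedIterator<Product> feed = container.GetItemQueryIterator<Product>(query);
while (feed.HasMoreResults)
{
    FeedResponse<Product> page = await feed.ReadNextAsync();
    foreach (Product product in page)
    {
        Console.WriteLine($"{product.id}\t{product.name}");
    }
}
```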
## Query items using a SQL query asynchronously
cosmos-db How To Dotnet Read Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-read-item.md
ms.devlang: csharp Last updated 07/06/2022-+ # Read an item in Azure Cosmos DB for NoSQL using .NET
Every item in Azure Cosmos DB for NoSQL has a unique identifier specified by the
To perform a point read of an item, call one of the following methods:
-* [``ReadItemAsync<>``](#read-an-item-asynchronously)
-* [``ReadItemStreamAsync<>``](#read-an-item-as-a-stream-asynchronously)
-* [``ReadManyItemsAsync<>``](#read-multiple-items-asynchronously)
+- [``ReadItemAsync<>``](#read-an-item-asynchronously)
+- [``ReadItemStreamAsync<>``](#read-an-item-as-a-stream-asynchronously)
+- [``ReadManyItemsAsync<>``](#read-multiple-items-asynchronously)
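As a minimal sketch of a point read with the first method (assuming an existing `Container` named `container` and a `Product` type; the id and partition key values are illustrative):

```csharp
// A point read needs both the item id and its partition key value.
ItemResponse<Product> response = await container.ReadItemAsync<Product>(
    id: "68719518391",
    partitionKey: new PartitionKey("gear-surf-surfboards"));

Product product = response.Resource;
```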
## Read an item asynchronously
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
ms.devlang: csharp Last updated 11/07/2022-+ # Quickstart: Azure Cosmos DB for NoSQL client library for .NET
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md
Title: Quickstart- Use Node.js to query from Azure Cosmos DB for NoSQL account
-description: How to use Node.js to create an app that connects to Azure Cosmos DB for NoSQL account and queries data.
+ Title: Quickstart - Azure Cosmos DB for NoSQL client library for Node.js
+description: Learn how to build a Node.js app to manage Azure Cosmos DB for NoSQL account resources in this quickstart.
+ ms.devlang: javascript Last updated 09/22/2022---+
-# Quickstart: Use Node.js to connect and query data from Azure Cosmos DB for NoSQL account
+# Quickstart: Azure Cosmos DB for NoSQL client library for Node.js
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](quickstart-nodejs.md)
-> * [Java](quickstart-java.md)
-> * [Spring Data](quickstart-java-spring-data.md)
-> * [Python](quickstart-python.md)
-> * [Spark v3](quickstart-spark.md)
-> * [Go](quickstart-go.md)
->
Get started with the Azure Cosmos DB client library for JavaScript to create databases, containers, and items within your account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). Follow these steps to install the package and try out example code for basic tasks.
Get started with the Azure Cosmos DB client library for JavaScript to create dat
## Prerequisites
-* In a terminal or command window, run ``node --version`` to check that the Node.js version is one of the current long term support (LTS) versions.
-* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+- In a terminal or command window, run ``node --version`` to check that the Node.js version is one of the current long-term support (LTS) versions.
+- Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
## Setting up
Add the following code at the end of the `index.js` file to include the required
### Add variables for names
-Add the following variables to manage unique database and container names and the [partition key (pk)](../partitioning-overview.md).
+Add the following variables to manage unique database and container names and the [**partition key (`pk`)**](../partitioning-overview.md).
:::code language="javascript" source="~/cosmos-db-sql-api-javascript-samples/001-quickstart/index.js" range="13-19":::
In this example, we chose to add a timeStamp to the database and container in ca
You'll use the following JavaScript classes to interact with these resources:
-* [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
-* [``Database``](/javascript/api/@azure/cosmos/database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
-* [``Container``](/javascript/api/@azure/cosmos/container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
-* [``SqlQuerySpec``](/javascript/api/@azure/cosmos/sqlqueryspec) - This interface represents a SQL query and any query parameters.
-* [``QueryIterator<>``](/javascript/api/@azure/cosmos/queryiterator) - This class represents an iterator that can track the current page of results and get a new page of results.
-* [``FeedResponse<>``](/javascript/api/@azure/cosmos/feedresponse) - This class represents a single page of responses from the iterator.
+- [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
+- [``Database``](/javascript/api/@azure/cosmos/database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+- [``Container``](/javascript/api/@azure/cosmos/container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
+- [``SqlQuerySpec``](/javascript/api/@azure/cosmos/sqlqueryspec) - This interface represents a SQL query and any query parameters.
+- [``QueryIterator<>``](/javascript/api/@azure/cosmos/queryiterator) - This class represents an iterator that can track the current page of results and get a new page of results.
+- [``FeedResponse<>``](/javascript/api/@azure/cosmos/feedresponse) - This class represents a single page of responses from the iterator.
## Code examples
-* [Authenticate the client](#authenticate-the-client)
-* [Create a database](#create-a-database)
-* [Create a container](#create-a-container)
-* [Create an item](#create-an-item)
-* [Get an item](#get-an-item)
-* [Query items](#query-items)
+- [Authenticate the client](#authenticate-the-client)
+- [Create a database](#create-a-database)
+- [Create a container](#create-a-container)
+- [Create an item](#create-an-item)
+- [Get an item](#get-an-item)
+- [Query items](#query-items)
The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` table is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
Touring-1000 Blue, 50 read
In this quickstart, you learned how to create an Azure Cosmos DB SQL API account, create a database, and create a container using the JavaScript SDK. You can now dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB SQL API resources. > [!div class="nextstepaction"]
-> [Tutorial: Build a Node.js console app](sql-api-nodejs-get-started.md)
+> [Tutorial: Build a Node.js console app](sql-api-nodejs-get-started.md)
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
ms.devlang: python Last updated 11/03/2022-+ # Quickstart: Azure Cosmos DB for NoSQL client library for Python
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-dotnet.md
ms.devlang: csharp-+ Last updated 07/06/2022-+ # Examples for Azure Cosmos DB for NoSQL SDK for .NET [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-> [!div class="op_single_selector"]
->
-> * [.NET](samples-dotnet.md)
->
The [cosmos-db-nosql-dotnet-samples](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB for NoSQL resources. ## Prerequisites
-* An Azure account with an active subscription. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
-* Azure Cosmos DB for NoSQL account. [Create a API for NoSQL account](how-to-create-account.md).
-* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+- An Azure account with an active subscription. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
+- An Azure Cosmos DB for NoSQL account. [Create an API for NoSQL account](how-to-create-account.md).
+- [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
## Samples
cosmos-db Samples Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-nodejs.md
Title: Node.js examples to manage data in Azure Cosmos DB database
+ Title: Examples for Azure Cosmos DB for NoSQL SDK for JS
description: Find Node.js examples on GitHub for common tasks in Azure Cosmos DB, including CRUD operations.-+++
+ms.devlang: javascript
Last updated 08/26/2021--+
-# Node.js examples to manage data in Azure Cosmos DB
+
+# Examples for Azure Cosmos DB for NoSQL SDK for JS
+ [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK Examples](samples-dotnet.md)
-> * [Java V4 SDK Examples](samples-java.md)
-> * [Spring Data V3 SDK Examples](samples-java-spring-data.md)
-> * [Node.js Examples](samples-nodejs.md)
-> * [Python Examples](samples-python.md)
-> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
->
->
Sample solutions that perform CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-cosmos-js](https://github.com/Azure/azure-cosmos-js/tree/master/samples) GitHub repository. This article provides:
-* Links to the tasks in each of the Node.js example project files.
-* Links to the related API reference content.
+- Links to the tasks in each of the Node.js example project files.
+- Links to the related API reference content.
-**Prerequisites**
+## Prerequisites
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
Sample solutions that perform CRUD operations and other common operations on Azu
[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)] You also need the [JavaScript SDK](sdk-nodejs.md).
-
+ > [!NOTE] > Each sample is self-contained; it sets itself up and cleans up after itself. As such, the samples issue multiple calls to [Containers.create](/javascript/api/%40azure/cosmos/containers). Each time this happens, your subscription is billed for 1 hour of usage per the performance tier of the container being created.
- >
- >
## Database examples
-The [DatabaseManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts) file shows how to perform the CRUD operations on the database. To learn about the Azure Cosmos DB databases before running the following samples, see [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
+The [DatabaseManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts) file shows how to perform the CRUD operations on the database. To learn about Azure Cosmos DB databases before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
| Task | API reference | | | |
-| [Create a database if it does not exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L12-L14) |[Databases.createIfNotExists](/javascript/api/@azure/cosmos/databases#createifnotexists-databaserequest--requestoptions-) |
+| [Create a database if it doesn't exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L12-L14) |[Databases.createIfNotExists](/javascript/api/@azure/cosmos/databases#createifnotexists-databaserequest--requestoptions-) |
| [List databases for an account](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L16-L18) |[Databases.readAll](/javascript/api/@azure/cosmos/databases#readall-feedoptions-) | | [Read a database by ID](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L20-L29) |[Database.read](/javascript/api/@azure/cosmos/database#read-requestoptions-) | | [Delete a database](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L31-L32) |[Database.delete](/javascript/api/@azure/cosmos/database#delete-requestoptions-) | ## Container examples
-The [ContainerManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts) file shows how to perform the CRUD operations on the container. To learn about the Azure Cosmos DB collections before running the following samples, see [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
+The [ContainerManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts) file shows how to perform the CRUD operations on the container. To learn about Azure Cosmos DB collections before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
| Task | API reference | | | |
-| [Create a container if it does not exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L14-L15) |[Containers.createIfNotExists](/javascript/api/@azure/cosmos/containers#createifnotexists-containerrequest--requestoptions-) |
+| [Create a container if it doesn't exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L14-L15) |[Containers.createIfNotExists](/javascript/api/@azure/cosmos/containers#createifnotexists-containerrequest--requestoptions-) |
| [List containers for an account](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L17-L21) |[Containers.readAll](/javascript/api/@azure/cosmos/containers#readall-feedoptions-) | | [Read a container definition](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L23-L26) |[Container.read](/javascript/api/@azure/cosmos/container#read-requestoptions-) | | [Delete a container](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L28-L30) |[Container.delete](/javascript/api/@azure/cosmos/container#delete-requestoptions-) | ## Item examples
-The [ItemManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts) file shows how to perform the CRUD operations on the item. To learn about the Azure Cosmos DB documents before running the following samples, see [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
+The [ItemManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts) file shows how to perform the CRUD operations on the item. To learn about Azure Cosmos DB documents before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
| Task | API reference | | | | | [Create items](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L18-L21) |[Items.create](/javascript/api/@azure/cosmos/items#create-t--requestoptions-) | | [Read all items in a container](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L23-L28) |[Items.readAll](/javascript/api/@azure/cosmos/items#readall-feedoptions-) | | [Read an item by ID](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L30-L33) |[Item.read](/javascript/api/@azure/cosmos/item#read-requestoptions-) |
-| [Read item only if item has changed](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L45-L56) |[Item.read](/javascript/api/%40azure/cosmos/item)<br/>[RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
+| [Read item only if item has changed](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L45-L56) |[Item.read](/javascript/api/%40azure/cosmos/item) - [RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
| [Query for documents](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L58-L79) |[Items.query](/javascript/api/%40azure/cosmos/items) | | [Replace an item](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L81-L96) |[Item.replace](/javascript/api/%40azure/cosmos/item) |
-| [Replace item with conditional ETag check](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L98-L135) |[Item.replace](/javascript/api/%40azure/cosmos/item)<br/>[RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
+| [Replace item with conditional ETag check](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L98-L135) |[Item.replace](/javascript/api/%40azure/cosmos/item) - [RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
| [Delete an item](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L137-L140) |[Item.delete](/javascript/api/%40azure/cosmos/item) | ## Indexing examples
-The [IndexManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts) file shows how to manage indexing. To learn about indexing in Azure Cosmos DB before running the following samples, see [indexing policies](../index-policy.md), [indexing types](../index-overview.md#index-types), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
+The [IndexManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts) file shows how to manage indexing. To learn about indexing in Azure Cosmos DB before running the following samples, see the [indexing policies](../index-policy.md), [indexing types](../index-overview.md#index-types), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
| Task | API reference | | | |
The [IndexManagement](https://github.com/Azure/azure-cosmos-js/blob/master/sampl
| [Manually exclude a specific item from the index](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L17-L29) |[RequestOptions.indexingDirective: 'exclude'](/javascript/api/%40azure/cosmos/requestoptions#indexingdirective) | | [Exclude a path from the index](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L142-L167) |[IndexingPolicy.ExcludedPath](/javascript/api/%40azure/cosmos/indexingpolicy#excludedpaths) | | [Create a range index on a string path](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L87-L112) |[IndexKind.Range](/javascript/api/%40azure/cosmos/indexkind), [IndexingPolicy](/javascript/api/%40azure/cosmos/indexingpolicy), [Items.query](/javascript/api/%40azure/cosmos/items) |
-| [Create a container with default indexPolicy, then update this online](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L13-L15) |[Containers.create](/javascript/api/%40azure/cosmos/containers)
+| [Create a container with default indexPolicy, then update the container online](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L13-L15) |[Containers.create](/javascript/api/%40azure/cosmos/containers)
## Server-side programming examples
-The [index.ts](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ServerSideScripts/index.ts) file of the [ServerSideScripts](https://github.com/Azure/azure-cosmos-js/tree/master/samples/ServerSideScripts) project shows how to perform the following tasks. To learn about Server-side programming in Azure Cosmos DB before running the following samples, see [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md) conceptual article.
+The [index.ts](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ServerSideScripts/index.ts) file of the [ServerSideScripts](https://github.com/Azure/azure-cosmos-js/tree/master/samples/ServerSideScripts) project shows how to perform the following tasks. To learn about server-side programming in Azure Cosmos DB before running the following samples, see the [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md) conceptual article.
| Task | API reference | | | |
For more information about server-side programming, see [Azure Cosmos DB server-
## Next steps Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+- If all you know is the number of vCores and servers in your existing database cluster, see [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+- If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-python.md
Title: API for NoSQL Python examples for Azure Cosmos DB
+ Title: Examples for Azure Cosmos DB for NoSQL SDK for Python
description: Find Python examples on GitHub for common tasks in Azure Cosmos DB, including CRUD operations.
ms.devlang: python Last updated 10/18/2021-+
-# Azure Cosmos DB Python examples
+# Examples for Azure Cosmos DB for NoSQL SDK for Python
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-> [!div class="op_single_selector"]
->
-> - [.NET SDK Examples](samples-dotnet.md)
-> - [Java V4 SDK Examples](samples-java.md)
-> - [Spring Data V3 SDK Examples](samples-java-spring-data.md)
-> - [Node.js Examples](samples-nodejs.md)
-> - [Python Examples](samples-python.md)
-> - [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
->
Sample solutions that do CRUD operations and other common operations on Azure Cosmos DB resources are included in the `main/sdk/cosmos` folder of the [azure/azure-sdk-for-python](https://github.com/azure/azure-sdk-for-python/tree/main/sdk/cosmos) GitHub repository. This article provides:
cosmos-db How To Dotnet Create Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-create-item.md
ms.devlang: csharp Last updated 07/06/2022-+ # Create an item in Azure Cosmos DB for Table using .NET
The [``TableEntity``](/dotnet/api/azure.data.tables.tableentity) class is a gene
Use one of the following strategies to model items that you wish to create in a table:
-* [Create an instance of the ``TableEntity`` class](#use-a-built-in-class)
-* [Implement the ``ITableEntity`` interface](#implement-interface)
+- [Create an instance of the ``TableEntity`` class](#use-a-built-in-class)
+- [Implement the ``ITableEntity`` interface](#implement-interface)
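As a rough sketch of the first strategy (assuming an existing `TableClient` named `table`; the key and property values are illustrative):

```csharp
using Azure.Data.Tables;

// TableEntity behaves like a dictionary keyed by property name.
TableEntity entity = new TableEntity(partitionKey: "gear-surf-surfboards", rowKey: "68719518391")
{
    { "Name", "Yamba Surfboard" },
    { "Quantity", 12 },
    { "Sale", false }
};

// Add the entity to the table.
await table.AddEntityAsync(entity);
```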
### Use a built-in class
cosmos-db How To Dotnet Create Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-create-table.md
ms.devlang: csharp Last updated 07/06/2022-+ # Create a table in Azure Cosmos DB for Table using .NET
In Azure Cosmos DB, a table is analogous to a table in a relational database.
Here are some quick rules when naming a table:
-* Keep table names between 3 and 63 characters long
-* Table names can only contain lowercase letters, numbers, or the dash (-) character.
-* Table names must start with a lowercase letter or number.
+- Keep table names between 3 and 63 characters long.
+- Table names can only contain lowercase letters, numbers, or the dash (-) character.
+- Table names must start with a lowercase letter or number.
## Create a table To create a table, call one of the following methods:
-* [``CreateAsync``](#create-a-table-asynchronously)
-* [``CreateIfNotExistsAsync``](#create-a-table-asynchronously-if-it-doesnt-already-exist)
+- [``CreateAsync``](#create-a-table-asynchronously)
+- [``CreateIfNotExistsAsync``](#create-a-table-asynchronously-if-it-doesnt-already-exist)
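Both methods are covered below. A minimal sketch of the second option (assuming an existing `TableServiceClient` named `serviceClient`; the table name is illustrative):

```csharp
// Get a reference to the table and create it only if it doesn't already exist.
TableClient table = serviceClient.GetTableClient(tableName: "adventureworks");
await table.CreateIfNotExistsAsync();
```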
### Create a table asynchronously
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-get-started.md
Title: Get started with Azure Cosmos DB for Table and .NET
+ Title: Get started with Azure Cosmos DB for Table using .NET
description: Get started developing a .NET application that works with Azure Cosmos DB for Table. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for Table endpoint.
ms.devlang: csharp Last updated 07/06/2022-+
-# Get started with Azure Cosmos DB for Table and .NET
+# Get started with Azure Cosmos DB for Table using .NET
[!INCLUDE[Table](../includes/appliesto-table.md)]
This article shows you how to connect to Azure Cosmos DB for Table using the .NE
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* Azure Cosmos DB for Table account. [Create a API for Table account](how-to-create-account.md).
-* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An Azure Cosmos DB for Table account. [Create an API for Table account](how-to-create-account.md).
+- [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
## Set up your project
dotnet build
To connect to the API for Table of Azure Cosmos DB, create an instance of the [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) class. This class is the starting point to perform all operations against tables. There are two primary ways to connect to an API for Table account using the **TableServiceClient** class:
-* [Connect with a API for Table connection string](#connect-with-a-connection-string)
+- [Connect with an API for Table connection string](#connect-with-a-connection-string)
### Connect with a connection string
Create a new instance of the **TableServiceClient** class with the ``COSMOS_CONN
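A minimal sketch of that pattern (the `COSMOS_CONNECTION_STRING` variable name is an assumption; the article's own sample may name it differently):

```csharp
using Azure.Data.Tables;

// Read the connection string from an environment variable (name assumed).
string connectionString = Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING")!;

// TableServiceClient is the entry point for table-level operations.
TableServiceClient serviceClient = new TableServiceClient(connectionString);
```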
As you build your application, your code will primarily interact with four types of resources:
-* The API for Table account, which is the unique top-level namespace for your Azure Cosmos DB data.
+- The API for Table account, which is the unique top-level namespace for your Azure Cosmos DB data.
-* Tables, which contain a set of individual items in your account.
+- Tables, which contain a set of individual items in your account.
-* Items, which represent an individual item in your table.
+- Items, which represent an individual item in your table.
The following diagram shows the relationship between these resources.
The following guides show you how to use each of these classes to build your app
## See also
-* [Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/)
-* [Samples](samples-dotnet.md)
-* [API reference](/dotnet/api/azure.data.tables)
-* [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/tables/Azure.Data.Tables)
-* [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
+- [Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/)
+- [Samples](samples-dotnet.md)
+- [API reference](/dotnet/api/azure.data.tables)
+- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/tables/Azure.Data.Tables)
+- [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
## Next steps
cosmos-db How To Dotnet Read Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-read-item.md
ms.devlang: csharp Last updated 07/06/2022-+ # Read an item in Azure Cosmos DB for Table using .NET
Azure Cosmos DB requires both the unique identifier and the partition key value
To perform a point read of an item, use one of the following strategies:
-* [Return a ``TableEntity`` object using ``GetEntityAsync<>``](#read-an-item-using-a-built-in-class)
-* [Return an object of your own type using ``GetEntityAsync<>``](#read-an-item-using-your-own-type)
+- [Return a ``TableEntity`` object using ``GetEntityAsync<>``](#read-an-item-using-a-built-in-class)
+- [Return an object of your own type using ``GetEntityAsync<>``](#read-an-item-using-your-own-type)
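As a rough sketch of the first strategy (assuming an existing `TableClient` named `table`; the key values and the `Name` property are illustrative):

```csharp
using Azure;
using Azure.Data.Tables;

// A point read needs both the partition key and the row key.
Response<TableEntity> response = await table.GetEntityAsync<TableEntity>(
    partitionKey: "gear-surf-surfboards",
    rowKey: "68719518391");

TableEntity entity = response.Value;
Console.WriteLine($"{entity.RowKey}: {entity.GetString("Name")}");
```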
### Read an item using a built-in class
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-dotnet.md
Title: Quickstart - Azure Cosmos DB for Table for .NET description: Learn how to build a .NET app to manage Azure Cosmos DB for Table resources in this quickstart.--++
+ms.devlang: csharp
Last updated 08/22/2022-+ # Quickstart: Azure Cosmos DB for Table for .NET
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/samples-dotnet.md
ms.devlang: csharp Last updated 07/06/2022-+ # Examples for Azure Cosmos DB for Table SDK for .NET
cosmos-db Try Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md
The following table lists the limits for the [Try Azure Cosmos DB](https://aka.m
² After expiration, the information stored in your account is deleted. You can upgrade your account prior to expiration and migrate the information stored. > [!NOTE]
-> Try Azure Cosmos DB supports global distribution in only the **Central US**, **North Europe**, and **Southeast Asia** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+> Try Azure Cosmos DB supports global distribution in only the **East US**, **North Europe**, **Southeast Asia**, and **North Central US** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
### [MongoDB](#tab/mongodb)
The following table lists the limits for the [Try Azure Cosmos DB](https://aka.m
² After expiration, the information stored in your account is deleted. You can upgrade your account prior to expiration and migrate the information stored. > [!NOTE]
-> Try Azure Cosmos DB supports global distribution in only the **Central US**, **North Europe**, and **Southeast Asia** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+> Try Azure Cosmos DB supports global distribution in only the **East US**, **North Europe**, **Southeast Asia**, and **North Central US** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
### [PostgreSQL](#tab/postgresql)
The following table lists the limits for the [Try Azure Cosmos DB](https://aka.m
² After expiration, the information stored in your account is deleted. You can upgrade your account prior to expiration and migrate the information stored. > [!NOTE]
-> Try Azure Cosmos DB supports global distribution in only the **Central US**, **North Europe**, and **Southeast Asia** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+> Try Azure Cosmos DB supports global distribution in only the **East US**, **North Europe**, **Southeast Asia**, and **North Central US** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
cost-management-billing Enable Tag Inheritance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-tag-inheritance.md
+
+ Title: Group and allocate costs using tag inheritance
+
+description: This article explains how to group costs using tag inheritance.
++ Last updated : 11/16/2022++++++
+# Group and allocate costs using tag inheritance
+
+Azure tags are widely used to group costs to align with different business units, engineering environments, and cost departments. Tags provide the visibility needed for businesses to manage and allocate costs across the different groups.
+
+This article explains how to use the tag inheritance setting in Cost Management. When enabled, tag inheritance applies resource group and subscription tags to child resource usage records. You don't have to tag every resource or rely on resources that emit usage to have their own tags.
+
+Tag inheritance is available for customers with an Enterprise Agreement (EA) or a Microsoft Customer Agreement (MCA) account.
+
+## Required permissions
+
+- For subscriptions:
+  - Cost Management Reader to view
+ - Cost Management Contributor to edit
+- For EA billing accounts:
+ - Enterprise Administrator (read-only) to view
+ - Enterprise Administrator to edit
+- For MCA billing profiles:
+ - Billing profile reader to view
+ - Billing profile contributor to edit
+
+## Enable tag inheritance
+
+You can enable the tag inheritance setting in the Azure portal. You apply the setting at the EA billing account, MCA billing profile, and subscription scopes. After the setting is enabled, all resource group and subscription tags are automatically applied to child resource usage records.
+
+To enable tag inheritance in the Azure portal:
+
+1. In the Azure portal, navigate to Cost Management.
+2. Select a scope.
+3. In the left menu under **Settings**, select either **Manage billing account** or **Manage subscription**, depending on your scope.
+4. Under **Tag inheritance**, select **Edit**.
+ :::image type="content" source="./media/enable-tag-inheritance/edit-tag-inheritance.png" alt-text="Screenshot showing the Edit option for Tag inheritance." lightbox="./media/enable-tag-inheritance/edit-tag-inheritance.png" :::
+5. In the Tag inheritance (Preview) window, select **Automatically apply subscription and resource group tags to new data**.
+ :::image type="content" source="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data.png" alt-text="Screenshot showing the Automatically apply subscription and resource group tags to new data option." lightbox="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data.png" :::
+
+Here's an example diagram showing how a tag is inherited.
++
+## Choose between resource and inherited tags
+
+When a resource tag has the same key as the resource group or subscription tag being applied, the resource tag value is applied to its usage record by default. You can change the default behavior to have the subscription or resource group tag override the resource tag.
+
+In the Tag inheritance window, select the **Use the subscription or resource group tag** option.
++
+Let's look at an example of how a resource tag gets applied. In the following diagram, resource 4 and resource group 2 have the same tag: *App*. Because the user chose to keep the resource tag, usage record 4 is updated with the resource tag value *E2E*.
++
+Let's look at another example where a resource tag gets overridden. In the following diagram, resource 4 and resource group 2 have the same tag: *App*. Because the user chose to use the resource group or subscription tag, usage record 4 is updated with the resource group tag value, which is *backend*.
++
+## Usage record updates
+
+After the tag inheritance setting is enabled, it takes about 8-24 hours for the child resource usage records to get updated with subscription and resource group tags. The usage records are updated for the current month using the existing subscription and resource group tags.
+
+For example, if the tag inheritance setting is enabled on October 20, child resource usage records are updated from October 1 using the tags that existed on October 20.
+
+Similarly, if the tag inheritance setting is disabled, the inherited tags will be removed from the usage records for the current month.
+
+> [!NOTE]
+> If there are purchases or resources that don't emit usage at a subscription scope, they will not have the subscription tags applied even if the setting is enabled.
+
+## View costs grouped by tags
+
+You can use cost analysis to view the costs grouped by tags.
+
+1. In the Azure portal, navigate to **Cost Management**.
+1. In the left menu, select **Cost Analysis**.
+1. Select a scope.
+1. In the **Group by** list, select the tag you want to view costs for.
+
+Here's an example showing costs for the *org* tag.
++
+You can also view the inherited tags by downloading your Azure usage. For more information, see [View and download your Azure usage and charges](../understand/download-azure-daily-usage.md).
+
+## Next steps
+
+- Learn how to [split shared costs](allocate-costs.md).
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
description: This article helps you better understand data that's included in Cost Management and how frequently it's processed, collected, shown, and closed. Previously updated : 11/04/2022 Last updated : 11/16/2022
If you don't see a specific tag in Cost Management, consider the following quest
Here are a few tips for working with tags: - Plan ahead and define a tagging strategy that allows you to break down costs by organization, application, environment, and so on.-- Use Azure Policy to copy resource group tags to individual resources and enforce your tagging strategy.
+- [Group and allocate costs using tag inheritance](enable-tag-inheritance.md) to apply resource group and subscription tags to child resource usage records. If you were using Azure Policy to enforce tagging for cost reporting, consider enabling the tag inheritance setting for easier management and more flexibility.
- Use the Tags API with either Query or UsageDetails to get all cost based on the current tags. ## Cost and usage data updates and retention
cost-management-billing Ea Portal Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-rest-apis.md
Title: Azure Enterprise REST APIs
description: This article describes the REST APIs for use with your Azure enterprise enrollment. Previously updated : 11/16/2022 Last updated : 11/17/2022
Microsoft Enterprise Azure customers can get usage and billing information throu
**Billing Periods -** The [Billing Periods API](/rest/api/billing/enterprise/billing-enterprise-api-billing-periods) returns a list of billing periods that have consumption data for an enrollment in reverse chronological order. Each period contains a property pointing to the API route for the four sets of data, BalanceSummary, UsageDetails, Marketplace Charges, and PriceSheet. For more information, see [Reporting APIs for Enterprise customers - Billing Periods](/rest/api/billing/enterprise/billing-enterprise-api-billing-periods).
-### Enable API data access
+### API key generation
Role owners can perform the following steps in the Azure EA portal: navigate to **Reports** > **Download Usage** > **API Access Key**. Then they can:
cost-management-billing Link Partner Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id.md
description: Track engagements with Azure customers by linking a partner ID to t
Previously updated : 06/28/2022 Last updated : 11/17/2022
# Link a partner ID to your account that's used to manage customers
-Microsoft partners provide services that help customers achieve business and mission objectives using Microsoft products. When acting on behalf of the customer managing, configuring, and supporting Azure services, the partner users will need access to the customer's environment. Using Partner Admin Link (PAL), partners can associate their partner network ID with the credentials used for service delivery.
+Microsoft partners provide services that help customers achieve business and mission objectives using Microsoft products. When a partner acts on behalf of the customer to manage, configure, and support Azure services, the partner users will need access to the customer's environment. When partners use Partner Admin Link (PAL), they can associate their partner network ID with the credentials used for service delivery.
PAL enables Microsoft to identify and recognize partners who drive Azure customer success. Microsoft can attribute influence and Azure consumed revenue to your organization based on the account's permissions (Azure role) and scope (subscription, resource group, resource). If a group has Azure RBAC access, then PAL is recognized for all the users in the group.
Yes. A linked partner ID can be changed, added, or removed.
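For reference, here's a hedged Azure CLI sketch of adding, changing, and removing the link. It assumes the `managementpartner` CLI extension is available, and the MPN IDs shown are placeholders.

```azurecli
# Hedged sketch: run these while signed in with the credential used for service delivery.
az extension add --name managementpartner

# Link a partner ID (placeholder MPN ID).
az managementpartner create --partner-id 123456

# Change the linked partner ID.
az managementpartner update --partner-id 654321

# Remove the link.
az managementpartner delete --partner-id 654321
```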
The link between the partner ID and the account is done for each customer tenant. Link the partner ID in each customer tenant.
-However, if you are managing customer resources through Azure Lighthouse, you should create the link in your service provider tenant, using an account that has access to the customer resources. For more information, see [Link your partner ID to track your impact on delegated resources](../../lighthouse/how-to/partner-earned-credit.md).
+However, if you're managing customer resources through Azure Lighthouse, you should create the link in your service provider tenant, using an account that has access to the customer resources. For more information, see [Link your partner ID to track your impact on delegated resources](../../lighthouse/how-to/partner-earned-credit.md).
**Can other partners or customers edit or remove the link to the partner ID?**
Yes, You can link your partner ID for Azure Stack.
**How do I link my partner ID if my company uses [Azure Lighthouse](../../lighthouse/overview.md) to access customer resources?**
-In order for Azure Lighthouse activities to be recognized, you'll need to associate your Partner ID with at least one user account that has access to each of your onboarded subscriptions. Note that you'll need to do this in your service provider tenant rather than in each customer tenant. For simplicity, we recommend creating a service principal account in your tenant, associating it with your Partner ID, then granting it access to every customer you onboard with an [Azure built-in role that is eligible for partner earned credit](/partner-center/azure-roles-perms-pec). For more information, see [Link your partner ID to track your impact on delegated resources](../../lighthouse/how-to/partner-earned-credit.md).
+In order for Azure Lighthouse activities to be recognized, you need to associate your Partner ID with at least one user account that has access to each of your onboarded subscriptions. The association is needed in your service provider tenant rather than in each customer tenant. For simplicity, we recommend creating a service principal account in your tenant, associating it with your Partner ID, then granting it access to every customer you onboard with an [Azure built-in role that is eligible for partner earned credit](/partner-center/azure-roles-perms-pec). For more information, see [Link your partner ID to track your impact on delegated resources](../../lighthouse/how-to/partner-earned-credit.md).
**How do I explain Partner Admin Link (PAL) to my Customer?**
Partner Admin Link (PAL) enables Microsoft to identify and recognize those partn
**What data does PAL collect?**
-The PAL association to existing credentials provides no new customer data to Microsoft. It simply provides the telemetry to Microsoft where a partner is actively involved in a customer's Azure environment. Microsoft can attribute influence and Azure consumed revenue from customer environment to partner organization based on the account's permissions (Azure role) and scope (Management Group, Subscription, Resource Group, Resource) provided to the partner by customer.
+The PAL association to existing credentials provides no new customer data to Microsoft. It simply provides the information to Microsoft that a partner is actively involved in a customer's Azure environment. Microsoft can attribute influence and Azure consumed revenue from the customer environment to the partner organization based on the account's permissions (Azure role) and scope (Management Group, Subscription, Resource Group, Resource) provided to the partner by the customer.
**Does this impact the security of a customerΓÇÖs Azure Environment?**
-PAL association only adds partner's ID to the credential already provisioned and it does not alter any permissions (Azure role) or provide additional Azure service data to partner or Microsoft.
+PAL association only adds the partner's ID to the credential that's already provisioned. It doesn't alter any permissions (Azure role) or provide other Azure service data to the partner or Microsoft.
+
+**What happens if the PAL identity is deleted?**
+
+If the partner network ID (also called the MPN ID) is deleted, then all recognition mechanisms, including Azure Consumed Revenue (ACR) attribution, stop working.
cost-management-billing View All Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/view-all-accounts.md
tags: billing
Previously updated : 11/16/2022 Last updated : 11/17/2022
A billing account is created when you sign up to use Azure. You use your billing
Azure portal supports the following type of billing accounts: - **Microsoft Online Services Program**: A billing account for a Microsoft Online Services Program is created when you sign up for Azure through the Azure website. For example, when you sign up for an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/), [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or as a [Visual studio subscriber](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/).
+ - A new billing account for a Microsoft Online Services Program can have a maximum of 5 subscriptions. However, subscriptions transferred to the new billing account don't count against the limit.
- The ability to create other Microsoft Online Services Program subscriptions is determined on an individual basis according to your history with Azure. - **Enterprise Agreement**: A billing account for an Enterprise Agreement (EA) is created when your organization signs an [Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/) to use Azure. An EA enrollment can contain an unlimited number of EA accounts.
databox Data Box Troubleshoot Data Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-troubleshoot-data-upload.md
When the following errors occur, you can resolve the errors and include the file
|Storage account deleted or moved |One or more storage accounts were moved or deleted. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Storage accounts deleted or moved**<br>Storage accounts: &lt;*storage accounts list*&gt; were either deleted, or moved to a different subscription or resource group. Recover or re-create the storage accounts with the original set of properties, and then confirm to resume data copy.<br>[Learn more on how to recover a storage account](../storage/common/storage-account-recover.md). | |Storage account location changed |One or more storage accounts were moved to a different region. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Storage accounts location changed**<br>Storage accounts: &lt;*storage accounts list*&gt; were moved to a different region. Restore the account to the original destination region and then confirm to resume data copy.<br>[Learn more on how to move storage accounts](../storage/common/storage-account-move.md). | |Virtual network restriction on storage account |One or more storage accounts are behind a virtual network and have restricted access. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Storage accounts behind virtual network**<br>Storage accounts: &lt;*storage accounts list*&gt; were moved behind a virtual network. Add Data Box to the list of trusted services to allow access and then confirm to resume data copy.<br>[Learn more about trusted first party access](../storage/common/storage-network-security.md#exceptions). |
-|Storage account owned by a different tenant |One or more storage accounts were moved under a different tenant. Resolve the error and resume data copy, or skip to data erasure and complete the order.|**Storage accounts moved to a different tenant**<br>Storage accounts: &lt;*storage accounts list*&gt; were moved to a different tenant. Restore the account to the original tenant and then confirm to resume data copy.<br>[Learn more on how to move storage accounts](../storage/common/storage-account-recover.md#recover-a-deleted-account-via-a-support-ticket). |
+|Storage account owned by a different tenant |One or more storage accounts were moved under a different tenant. Resolve the error and resume data copy, or skip to data erasure and complete the order.|**Storage accounts moved to a different tenant**<br>Storage accounts: &lt;*storage accounts list*&gt; were moved to a different tenant. Restore the account to the original tenant and then confirm to resume data copy.<br>[Learn more on how to move storage accounts](../storage/common/storage-account-move.md). |
|Kek user identity not found |The user identity that has access to the customer-managed key wasn’t found in the active directory. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**User identity not found**<br>Applied a customer-managed key but the user assigned identity that has access to the key was not found in the active directory.<br>This error may occur if a user identity is deleted from Azure.<br>Try adding another user-assigned identity to your key vault to enable access to the customer-managed key. For more information, see how to [Enable the key](data-box-customer-managed-encryption-key-portal.md#enable-key).<br>Confirm to resume data copy after the error is resolved. | |Cross tenant identity access not allowed |Managed identity couldn’t access the customer-managed key. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Cross tenant identity access not allowed**<br>Managed identity couldn’t access the customer-managed key.<br>This error may occur if a subscription is moved to a different tenant. To resolve this error, manually move the identity to the new tenant.<br>Try adding another user-assigned identity to your key vault to enable access to the customer-managed key. For more information, see how to [Enable the key](data-box-customer-managed-encryption-key-portal.md#enable-key).<br>Confirm to resume data copy after the error is resolved. | |Key details not found |Couldn’t fetch the passkey as the customer-managed key wasn’t found. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Key details not found**<br>If you deleted the key vault, you can't recover the customer-managed key. If you migrated the key vault to a different tenant, see [Change a key vault tenant ID after a subscription move](../key-vault/general/move-subscription.md). If you deleted the key vault and it is still in the purge-protection duration, use the steps at [Recover a key vault](../key-vault/general/key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell).<br>If the key vault was migrated to a different tenant, use one of the following steps to recover the vault:<ol><li>Revert the key vault back to the old tenant.</li><li>Set `Identity` = `None` and then set the value back to `Identity` = `SystemAssigned`. This deletes and recreates the identity after the new identity is created. Enable `Get`, `WrapKey`, and `UnwrapKey` permissions for the new identity in the key vault's access policy.</li></ol> |
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
To protect your Kubernetes containers, Defender for Containers receives and anal
- Workload configuration from Azure Policy - Security signals and events from the node level
+To learn more about implementation details such as supported operating systems, feature availability, and outbound proxy support, see [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+ ## Architecture for each Kubernetes environment ## [**Azure (AKS)**](#tab/defender-for-container-arch-aks)
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
Title: Install OT network monitoring software - Microsoft Defender for IoT description: Learn how to install agentless monitoring software for an OT sensor and an on-premises management console for Microsoft Defender for IoT. Use this article if you're reinstalling software on a preconfigured appliance, or if you've chosen to install software on your own appliances. Previously updated : 07/13/2022 Last updated : 11/09/2022
Mount the ISO file onto your hardware appliance or VM using one of the following
- DVDs: First burn the software to the DVD as an image - USB drive: First make sure that youΓÇÖve created a bootable USB drive with software such as [Rufus](https://rufus.ie/en/), and then save the software to the USB drive. USB drives must have USB version 3.0 or later.
- Your physical media must have a minimum of 4 GB storage.
+ Your physical media must have a minimum of 4-GB storage.
- **Virtual mount** ΓÇô use iLO for HPE appliances, or iDRAC for Dell appliances to boot the ISO file.
After installing OT monitoring software, make sure to run the following tests:
- **ifconfig**: Verify that all the input interfaces configured during the installation process are running.
+#### Gateway checks
+
+Use the `route` command to show the gateway's IP address. For example:
+
+``` CLI
+<root@xsense:/# route -n
+Kernel IP routing table
+Destination Gateway Genmask Flags Metric Ref Use Iface
+0.0.0.0 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
+172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
+>
+```
+
+Use the `arp -a` command to verify that there is a binding between the MAC address and the IP address of the default gateway. For example:
+
+``` CLI
+<root@xsense:/# arp -a
+cusalvtecca101-gi0-02-2851.network.microsoft.com (172.18.0.1) at 02:42:b0:3a:e8:b5 [ether] on eth0
+mariadb_22.2.6.27-r-c64cbca.iot_network_22.2.6.27-r-c64cbca (172.18.0.5) at 02:42:ac:12:00:05 [ether] on eth0
+redis_22.2.6.27-r-c64cbca.iot_network_22.2.6.27-r-c64cbca (172.18.0.3) at 02:42:ac:12:00:03 [ether] on eth0
+>
+```
+
+#### DNS checks
+
+Use the `cat /etc/resolv.conf` command to find the IP address that's configured for DNS traffic. For example:
+``` CLI
+<root@xsense:/# cat /etc/resolv.conf
+search reddog.microsoft.com
+nameserver 127.0.0.11
+options ndots:0
+>
+```
+
+Use the `host` command to resolve an FQDN. For example:
+
+``` CLI
+<root@xsense:/# host www.apple.com
+www.apple.com is an alias for www.apple.com.edgekey.net.
+www.apple.com.edgekey.net is an alias for www.apple.com.edgekey.net.globalredir.akadns.net.
+www.apple.com.edgekey.net.globalredir.akadns.net is an alias for e6858.dscx.akamaiedge.net.
+e6858.dscx.akamaiedge.net has address 72.246.148.202
+e6858.dscx.akamaiedge.net has IPv6 address 2a02:26f0:5700:1b4::1aca
+e6858.dscx.akamaiedge.net has IPv6 address 2a02:26f0:5700:182::1aca
+>
+```
+
+#### Firewall checks
+
+Use the `wget` command to verify that port 443 is open for communication. For example:
+
+``` CLI
+<root@xsense:/# wget https://www.apple.com
+--2022-11-09 11:21:15-- https://www.apple.com/
+Resolving www.apple.com (www.apple.com)... 72.246.148.202, 2a02:26f0:5700:1b4::1aca, 2a02:26f0:5700:182::1aca
+Connecting to www.apple.com (www.apple.com)|72.246.148.202|:443... connected.
+HTTP request sent, awaiting response... 200 OK
+Length: 99966 (98K) [text/html]
+Saving to: 'index.html.1'
+
+index.html.1 100%[===================>] 97.62K --.-KB/s in 0.02s
+
+2022-11-09 11:21:15 (5.88 MB/s) - 'index.html.1' saved [99966/99966]
+
+>
+```
+ For more information, see [Check system health](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#check-system-health) in our sensor and on-premises management console troubleshooting article. ## Configure tunneling access for sensors through the on-premises management console
deployment-environments Quickstart Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md
Complete the following steps in the Azure CLI to create an environment and confi
```azurecli az devcenter dev environment create --dev-center-name <devcenter-name> --project-name <project-name> -n <name> --environment-type <environment-type-name>
- --catalog-item-name <catalog-item-name> catalog-name <catalog-name>
+ --catalog-item-name <catalog-item-name> --catalog-name <catalog-name>
``` If the specific *catalog-item* requires any parameters, use `--parameters` and provide the parameters as a JSON string or a JSON file. For example:
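Here's a hedged sketch of passing parameters inline as a JSON string; the parameter name and value (`location`) are placeholders that depend on the catalog item's manifest.

```azurecli
az devcenter dev environment create --dev-center-name <devcenter-name> --project-name <project-name> \
    -n <name> --environment-type <environment-type-name> \
    --catalog-item-name <catalog-item-name> --catalog-name <catalog-name> \
    --parameters '{"location": "eastus"}'
```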
digital-twins How To Create App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-app-registration.md
Start by selecting the tab below for your preferred interface.
Navigate to [Azure Active Directory](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) in the Azure portal (you can use this link or find it with the portal search bar). Select **App registrations** from the service menu, and then **+ New registration**. In the **Register an application** page that follows, fill in the requested values: * **Name**: An Azure AD application display name to associate with the registration
In the **Register an application** page that follows, fill in the requested valu
When you're finished, select the **Register** button. When the registration is finished setting up, the portal will redirect you to its details page.
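If you'd rather script this step than use the portal, here's a minimal hedged sketch with the Azure CLI. The display name is a placeholder, and the second command creates the service principal for the new registration using the `appId` returned by the first command.

```azurecli
# Hedged sketch: create the app registration, then create its service principal.
az ad app create --display-name "<app-registration-name>"
az ad sp create --id "<appId-from-previous-output>"
```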
Start on your app registration page in the Azure portal.
1. Select **Certificates & secrets** from the registration's menu, and then select **+ New client secret**.
- :::image type="content" source="media/how-to-create-app-registration/client-secret.png" alt-text="Screenshot of the Azure portal showing an Azure AD app registration and a highlight around 'New client secret'.":::
+ :::image type="content" source="media/how-to-create-app-registration/client-secret.png" alt-text="Screenshot of the Azure portal showing an Azure AD app registration and a highlight around 'New client secret'." lightbox="media/how-to-create-app-registration/client-secret.png":::
1. Enter whatever values you want for Description and Expires, and select **Add**.
- :::row:::
- :::column:::
- :::image type="content" source="media/how-to-create-app-registration/add-client-secret.png" alt-text="Screenshot of the Azure portal while adding a client secret.":::
- :::column-end:::
- :::column:::
- :::column-end:::
- :::row-end:::
+ :::image type="content" source="media/how-to-create-app-registration/add-client-secret.png" alt-text="Screenshot of the Azure portal while adding a client secret.":::
1. Verify that the client secret is visible on the **Certificates & secrets** page with Expires and Value fields. 1. Take note of its **Secret ID** and **Value** to use later (you can also copy them to the clipboard with the Copy icons).
- :::image type="content" source="media/how-to-create-app-registration/client-secret-value.png" alt-text="Screenshot of the Azure portal showing how to copy the client secret value.":::
+ :::image type="content" source="media/how-to-create-app-registration/client-secret-value.png" alt-text="Screenshot of the Azure portal showing how to copy the client secret value." lightbox="media/how-to-create-app-registration/client-secret-value.png":::
>[!IMPORTANT] >Make sure to copy the values now and store them in a safe place, as they can't be retrieved again. If you can't find them later, you'll have to create a new secret.
Use these steps to create the role assignment for your registration.
| Assign access to | User, group, or service principal | | Members | Search for the name or [client ID](#collect-client-id-and-tenant-id) of the app registration |
- ![Add role assignment page](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of the 'Add role assignment' page." lightbox="../../includes/role-based-access-control/media/add-role-assignment-page.png":::
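If you prefer to create the same assignment from the command line, here's a hedged sketch that assumes the `az dt role-assignment create` command from the azure-iot CLI extension; the instance name, resource group, and client ID are placeholders.

```azurecli
# Hedged sketch: grant the app registration a data role on the Azure Digital Twins instance.
az dt role-assignment create --dt-name <instance-name> --resource-group <resource-group> \
    --assignee "<app-registration-client-ID>" --role "Azure Digital Twins Data Owner"
```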
#### Verify role assignment You can view the role assignment you've set up under **Access control (IAM) > Role assignments**. The app registration should show up in the list along with the role you assigned to it.
Select **Add permissions** when finished.
On the **API permissions** page, verify that there's now an entry for Azure Digital Twins reflecting **Read.Write** permissions: You can also verify the connection to Azure Digital Twins within the app registration's *manifest.json*, which was automatically updated with the Azure Digital Twins information when you added the API permissions.
To do so, select **Manifest** from the menu to view the app registration's manif
These values are shown in the screenshot below: If these values are missing, retry the steps in the [section for adding the API permission](#provide-api-permissions).
It's possible that your organization requires more actions from subscription own
Here are some common potential activities that an owner or administrator on the subscription may need to do. These and other operations can be performed from the [Azure AD App registrations](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) page in the Azure portal. * Grant admin consent for the app registration. Your organization may have **Admin Consent Required** globally turned on in Azure AD for all app registrations within your subscription. If so, the owner/administrator will need to select this button for your company on the app registration's **API permissions** page for the app registration to be valid:
- :::image type="content" source="media/how-to-create-app-registration/grant-admin-consent.png" alt-text="Screenshot of the Azure portal showing the 'Grant admin consent' button under API permissions.":::
- - If consent was granted successfully, the entry for Azure Digital Twins should then show a **Status** value of **Granted for (your company)**
+ :::image type="content" source="media/how-to-create-app-registration/grant-admin-consent.png" alt-text="Screenshot of the Azure portal showing the 'Grant admin consent' button under API permissions." lightbox="media/how-to-create-app-registration/grant-admin-consent.png":::
+
+ - If consent was granted successfully, the entry for Azure Digital Twins should then show a **Status** value of **Granted for (your company)**
- :::image type="content" source="media/how-to-create-app-registration/granted-admin-consent-done.png" alt-text="Screenshot of the Azure portal showing the admin consent granted for the company under API permissions.":::
+ :::image type="content" source="media/how-to-create-app-registration/granted-admin-consent-done.png" alt-text="Screenshot of the Azure portal showing the admin consent granted for the company under API permissions." lightbox="media/how-to-create-app-registration/granted-admin-consent-done.png":::
+ * Activate public client access * Set specific reply URLs for web and desktop access * Allow for implicit OAuth2 authentication flows
digital-twins How To Enable Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-enable-private-link.md
The Private Link options are located in the **Networking** tab of instance setup
1. In the **Create private endpoint** page that opens, enter the details of a new private endpoint.
- :::image type="content" source="media/how-to-enable-private-link/create-private-endpoint-full.png" alt-text="Screenshot of the Azure portal showing the Create private endpoint page. It contains the fields described below.":::
+ :::image type="content" source="media/how-to-enable-private-link/create-private-endpoint-full.png" alt-text="Screenshot of the Azure portal showing the Create private endpoint page. It contains the fields described below." lightbox="media/how-to-enable-private-link/create-private-endpoint-full.png":::
1. Fill in selections for your **Subscription** and **Resource group**. Set the **Location** to the same location as the VNet you'll be using. Choose a **Name** for the endpoint, and for **Target sub-resources** select *API*.
To disable or enable public network access in the [Azure portal](https://portal.
1. In the **Public access** tab, set **Allow public network access to** either **Disabled** or **All networks**.
- :::row:::
- :::column:::
- :::image type="content" source="media/how-to-enable-private-link/network-flag-portal.png" alt-text="Screenshot of the Azure portal showing the Networking page for an Azure Digital Twins instance, highlighting how to toggle public access." lightbox="media/how-to-enable-private-link/network-flag-portal.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
- :::row-end:::
+ :::image type="content" source="media/how-to-enable-private-link/network-flag-portal.png" alt-text="Screenshot of the Azure portal showing the Networking page for an Azure Digital Twins instance, highlighting how to toggle public access." lightbox="media/how-to-enable-private-link/network-flag-portal.png":::
Select **Save**.
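As an alternative to the portal steps above, here's a hedged CLI sketch of toggling the same setting. It assumes the `--public-network-access` parameter of `az dt create`, which can also update an existing instance.

```azurecli
# Hedged sketch: disable public network access on an existing instance, then re-enable it.
az dt create --dt-name <instance-name> --resource-group <resource-group> --public-network-access Disabled
az dt create --dt-name <instance-name> --resource-group <resource-group> --public-network-access Enabled
```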
digital-twins How To Integrate Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-time-series-insights.md
Before you can set up a relationship with Time Series Insights, you'll need to s
You'll be attaching Time Series Insights to Azure Digital Twins through the following path.
- :::column:::
- :::image type="content" source="media/how-to-integrate-time-series-insights/diagram-simple.png" alt-text="Diagram of Azure services in an end-to-end scenario, highlighting Time Series Insights." lightbox="media/how-to-integrate-time-series-insights/diagram-simple.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
## Create Event Hubs namespace
In this section, you'll create an Azure function that will convert twin update e
2. Create a new Azure function called *ProcessDTUpdatetoTSI.cs* to update device telemetry events to the Time Series Insights. The function type will be **Event Hub trigger**.
- :::image type="content" source="media/how-to-integrate-time-series-insights/create-event-hub-trigger-function.png" alt-text="Screenshot of Visual Studio to create a new Azure function of type event hub trigger.":::
+ :::image type="content" source="media/how-to-integrate-time-series-insights/create-event-hub-trigger-function.png" alt-text="Screenshot of Visual Studio to create a new Azure function of type event hub trigger." lightbox="media/how-to-integrate-time-series-insights/create-event-hub-trigger-function.png":::
3. Add the following packages to your project (you can use the Visual Studio NuGet package manager, or the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in a command-line tool). * [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs/)
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-routes.md
You can find these details in the [Azure portal](https://portal.azure.com) after
Select your instance from the results to see these details in the Overview for your instance: Follow the instructions below if you intend to use the Azure CLI while following this guide.
To create a new endpoint, go to your instance's page in the [Azure portal](https
1. Complete the other details that are required for your endpoint type, including your subscription and the endpoint resources described [above](#prerequisite-create-endpoint-resources). 1. For Event Hubs and Service Bus endpoints only, you must select an **Authentication type**. You can use key-based authentication with a pre-created authorization rule, or identity-based authentication if you'll be using the endpoint with a [managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) for your Azure Digital Twins instance.
- :::row:::
- :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-endpoint-event-hub-authentication.png" alt-text="Screenshot of creating an endpoint of type Event Hubs in the Azure portal." lightbox="media/how-to-manage-routes/create-endpoint-event-hub-authentication.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
- :::row-end:::
1. Finish creating your endpoint by selecting **Save**.
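For comparison with the portal steps above, here's a hedged CLI sketch of creating an Event Hubs endpoint that uses identity-based authentication. It assumes the `az dt endpoint create eventhub` command from the azure-iot CLI extension, and all names are placeholders.

```azurecli
# Hedged sketch: create an Event Hubs endpoint that authenticates with the instance's managed identity.
az dt endpoint create eventhub --dt-name <instance-name> --resource-group <resource-group> \
    --endpoint-name <endpoint-name> \
    --eventhub <event-hub-name> --eventhub-namespace <event-hub-namespace> \
    --auth-type IdentityBased
```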
To create a new endpoint, go to your instance's page in the [Azure portal](https
After creating your endpoint, you can verify that the endpoint was successfully created by checking the notification icon in the top Azure portal bar:
- :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-endpoint-notifications.png" alt-text="Screenshot of the notification to verify the creation of an endpoint in the Azure portal.":::
- :::column-end:::
- :::column:::
- :::column-end:::
+ If the endpoint creation fails, observe the error message and retry after a few minutes.
You can either select from some basic common filter options, or use the advanced
To use the basic filters, expand the **Event types** option and select the checkboxes corresponding to the events you want to send to your endpoint.
- :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-event-route-filter-basic-1.png" alt-text="Screenshot of creating an event route with a basic filter in the Azure portal, highlighting the checkboxes of the events.":::
- :::column-end:::
- :::column:::
- :::column-end:::
Doing so will autopopulate the filter text box with the text of the filter you've selected:
- :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-event-route-filter-basic-2.png" alt-text="Screenshot of creating an event route with a basic filter in the Azure portal, highlighting the autopopulated filter text after selecting the events.":::
- :::column-end:::
- :::column:::
- :::column-end:::
### Use the advanced filters
You can also use the advanced filter option to write your own custom filters.
To create an event route with advanced filter options, toggle the switch for the **Advanced editor** to enable it. You can then write your own event filters in the **Filter** box:
- :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-event-route-filter-advanced.png" alt-text="Screenshot of creating an event route with an advanced filter in the Azure portal.":::
- :::column-end:::
- :::column:::
- :::column-end:::
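Outside the portal, here's a hedged sketch of creating a route with a custom filter from the CLI. It assumes the `az dt route create` command from the azure-iot CLI extension; the route, endpoint, and filter values are placeholders.

```azurecli
# Hedged sketch: route only twin update events to an existing endpoint.
az dt route create --dt-name <instance-name> --resource-group <resource-group> \
    --route-name <route-name> --endpoint-name <endpoint-name> \
    --filter "type = 'Microsoft.DigitalTwins.Twin.Update'"
```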
# [API](#tab/api)
digital-twins How To Route With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-route-with-managed-identity.md
# Mandatory fields. Title: Route events with a managed identity
-description: See how to enable a system-assigned identity for Azure Digital Twins and use it to forward events, using the Azure portal or CLI.
+description: See how to use a system-assigned identity to forward events in Azure Digital Twins.
Previously updated : 02/23/2022 Last updated : 11/17/2022
# Enable a managed identity for routing Azure Digital Twins events
-This article describes how to enable a [system-assigned identity for an Azure Digital Twins instance](concepts-security.md#managed-identity-for-accessing-other-resources), and use the identity when forwarding events to supported routing destinations. Setting up a managed identity isn't required for routing, but it can help the instance to easily access other Azure AD-protected resources, such as [Event Hubs](../event-hubs/event-hubs-about.md), [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) destinations, and [Azure Storage Container](../storage/blobs/storage-blobs-introduction.md).
+This article describes how to use a [system-assigned identity for an Azure Digital Twins instance](concepts-security.md#managed-identity-for-accessing-other-resources) when forwarding events to supported routing destinations. Setting up a managed identity isn't required for routing, but it can help the instance to easily access other Azure AD-protected resources, such as [Event Hubs](../event-hubs/event-hubs-about.md), [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) destinations, and [Azure Storage Container](../storage/blobs/storage-blobs-introduction.md).
Here are the steps that are covered in this article:
Here are the steps that are covered in this article:
1. Add an appropriate role or roles to the identity. For example, assign the **Azure Event Hub Data Sender** role to the identity if the endpoint is Event Hubs, or **Azure Service Bus Data Sender role** if the endpoint is Service Bus. 1. Create an endpoint in Azure Digital Twins that can use system-assigned identities for authentication.
-## Enable system-managed identity for the instance
+## Create an Azure Digital Twins instance with a managed identity
-When you enable a system-assigned identity on your Azure Digital Twins instance, Azure automatically creates an identity for it in [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md). That identity can then be used to authenticate to Azure Digital Twins endpoints for event forwarding.
+If you already have an Azure Digital Twins instance, ensure that you've enabled a [system-managed identity](how-to-set-up-instance-cli.md#enabledisable-system-managed-identity-for-the-instance) for it.
-You can enable system-managed identities for an Azure Digital Twins instance in two different ways:
+If you don't have an Azure Digital Twins instance, follow the instructions in [Create the instance with a system-managed identity](how-to-set-up-instance-cli.md#create-the-instance-with-a-system-managed-identity) to create an Azure Digital Twins instance with a managed identity for the first time.
-- Enable it as part of the instance's initial setup.-- Enable it later on an instance that already exists.-
-Either of these creation methods will give the same configuration options and the same end result for your instance. This section describes how to do both.
-
-### Add a system-managed identity during instance creation
-
-In this section, you'll learn how to enable a system-managed identity for an Azure Digital Twins instance while the instance is being created. You can enable the identity whether you're creating the instance with the [Azure portal](https://portal.azure.com) or the [Azure CLI](/cli/azure/what-is-azure-cli). Use the tabs below to select instructions for your preferred experience.
-
-# [Portal](#tab/portal)
-
-To add a managed identity during instance creation in the portal, begin [creating an instance as you normally would](how-to-set-up-instance-portal.md).
-
-The system-managed identity option is located in the **Advanced** tab of instance setup.
-
-In this tab, select the **On** option for **System managed identity** to turn on this feature.
--
-You can then use the bottom navigation buttons to continue with the rest of instance setup.
-
-# [CLI](#tab/cli)
-
-In the CLI, you can add an `--assign-identity` parameter to the `az dt create` command that's used to create the instance. (For more information about this command, see its [reference documentation](/cli/azure/dt#az-dt-create) or the [general instructions for setting up an Azure Digital Twins instance](how-to-set-up-instance-cli.md#create-the-azure-digital-twins-instance)).
-
-To create an instance with a system managed identity, add the `--assign-identity` parameter like this:
-
-```azurecli-interactive
-az dt create --dt-name <new-instance-name> --resource-group <resource-group> --assign-identity
-```
---
-### Add a system-managed identity to an existing instance
-
-In this section, you'll add a system-managed identity to an Azure Digital Twins instance that already exists. Use the tabs below to select instructions for your preferred experience.
-
-# [Portal](#tab/portal)
-
-Start by opening the [Azure portal](https://portal.azure.com) in a browser.
-
-1. Search for the name of your instance in the portal search bar, and select it to view its details.
-
-1. Select **Identity** in the left-hand menu.
-
-1. On this page, select the **On** option to turn on this feature.
-
-1. Select the **Save** button, and **Yes** to confirm.
-
- :::image type="content" source="media/how-to-route-with-managed-identity/identity-digital-twins.png" alt-text="Screenshot of the Azure portal showing the Identity page for an Azure Digital Twins instance.":::
-
-After the change is saved, more fields will appear on this page for the new identity's **Object ID** and **Permissions**.
-
-You can copy the **Object ID** from here if needed, and use the **Permissions** button to view the Azure roles that are assigned to the identity. To set up some roles, continue to the next section.
-
-# [CLI](#tab/cli)
-
-Again, you can add the identity to your instance by using the `az dt create` command and `--assign-identity` parameter. Instead of providing a new name of an instance to create, you can provide the name of an instance that already exists to update the value of `--assign-identity` for that instance.
-
-The command to enable managed identity is the same as the command to create an instance with a system managed identity. All that changes is the value of the instance name parameter:
-
-```azurecli-interactive
-az dt create --dt-name <name-of-existing-instance> --resource-group <resource-group> --assign-identity
-```
-
-To disable managed identity on an instance where it's currently enabled, use the following similar command to set `--assign-identity` to `false`.
-
-```azurecli-interactive
-az dt create --dt-name <name-of-existing-instance> --resource-group <resource-group> --assign-identity false
-```
--
+Then, make sure you have the *Azure Digital Twins Data Owner* role on the instance. You can find instructions in [Set up user access permissions](how-to-set-up-instance-cli.md#set-up-user-access-permissions).
## Assign Azure roles to the identity
To assign a role to the identity, start by opening the [Azure portal](https://po
| Assign access to | Under **System assigned managed identity**, select **Digital Twins**. | | Members | Select the managed identity of your Azure Digital Twins instance that's being assigned the role. The name of the managed identity matches the name of the instance, so choose the name of your Azure Digital Twins instance. |
- ![Add role assignment page](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of the 'Add role assignment' page for an Azure Digital Twins instance." lightbox="../../includes/role-based-access-control/media/add-role-assignment-page.png":::
# [CLI](#tab/cli)
Start following the [instructions to create an Azure Digital Twins endpoint](how
When you get to the step of completing the details required for your endpoint type, make sure to select **Identity-based** for the Authentication type.
- :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-endpoint-event-hub-authentication.png" alt-text="Screenshot of creating an endpoint of type Event Hubs." lightbox="media/how-to-manage-routes/create-endpoint-event-hub-authentication.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
Finish setting up your endpoint and select **Save**.
digital-twins How To Set Up Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-set-up-instance-cli.md
description: See how to set up an instance of the Azure Digital Twins service using the CLI Previously updated : 02/24/2022 Last updated : 11/17/2022
az dt create --dt-name <name-for-your-Azure-Digital-Twins-instance> --resource-g
There are several optional parameters that can be added to the command to specify additional things about your resource during creation, including creating a [system managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) for the instance or enabling/disabling public network access. For a full list of supported parameters, see the [az dt create](/cli/azure/dt#az-dt-create) reference documentation.
+### Create the instance with a system-managed identity
+
+When you enable a [system-assigned identity](concepts-security.md#managed-identity-for-accessing-other-resources) on your Azure Digital Twins instance, Azure automatically creates an identity for it in [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md). That identity can then be used to authenticate to Azure Digital Twins endpoints for event forwarding. You can enable a system-managed identity for an Azure Digital Twins instance during instance creation, or [later on an existing instance](#enabledisable-system-managed-identity-for-the-instance).
+
+To create an Azure Digital Twins instance with system-assigned identity enabled, you can add an `--assign-identity` parameter to the `az dt create` command that's used to create the instance. (For more information about this command, see its [reference documentation](/cli/azure/dt#az-dt-create) or the [general instructions for setting up an Azure Digital Twins instance](how-to-set-up-instance-cli.md#create-the-azure-digital-twins-instance)).
+
+To create an instance with a system managed identity, add the `--assign-identity` parameter like this:
+
+```azurecli-interactive
+az dt create --dt-name <new-instance-name> --resource-group <resource-group> --assign-identity
+```
+ ### Verify success and collect important values If the instance was created successfully, the result in the CLI looks something like this, outputting information about the resource you have created:
The result of this command is outputted information about the role assignment th
You now have an Azure Digital Twins instance ready to go, and have assigned permissions to manage it.
+## Enable/disable system-managed identity for the instance
+
+This section shows you how to add a system-managed identity to an existing Azure Digital Twins instance. You can also disable system-managed identity on an instance that already has it.
+
+The command to enable managed identity for an existing instance is the same `az dt create` command that's used to [create a new instance with a system managed identity](#create-the-instance-with-a-system-managed-identity). Instead of providing the name of a new instance to create, provide the name of the existing instance, and make sure to add the `--assign-identity` parameter.
+
+```azurecli-interactive
+az dt create --dt-name <name-of-existing-instance> --resource-group <resource-group> --assign-identity
+```
+
+To disable managed identity on an instance where it's currently enabled, use the following similar command to set `--assign-identity` to `false`.
+
+```azurecli-interactive
+az dt create --dt-name <name-of-existing-instance> --resource-group <resource-group> --assign-identity false
+```
+
+### Considerations for disabling system-managed identities
+
+It's important to consider the effects that any changes to the identity or its roles can have on the resources that use it. If you're [using managed identities with your Azure Digital Twins endpoints](how-to-route-with-managed-identity.md) or for [data history](how-to-use-data-history.md) and the identity is disabled, or a necessary role is removed from it, the endpoint or data history connection can become inaccessible and the flow of events will be disrupted.
+
+To continue using an endpoint that was set up with a managed identity that's now been disabled, you'll need to delete the endpoint and [re-create it](how-to-manage-routes.md#create-an-endpoint-for-azure-digital-twins) with a different authentication type. It may take up to an hour for events to resume delivery to the endpoint after this change.
+ ## Next steps Test out individual REST API calls on your instance using the Azure Digital Twins CLI commands:
digital-twins How To Set Up Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-set-up-instance-portal.md
description: See how to set up an instance of the Azure Digital Twins service using the Azure portal Previously updated : 02/24/2022 Last updated : 11/17/2022
This version of this article goes through these steps manually, one by one, usin
Here are the additional options you can configure during setup, using the other tabs in the **Create Resource** process. * **Networking**: In this tab, you can enable private endpoints with [Azure Private Link](../private-link/private-link-overview.md) to eliminate public network exposure to your instance. For instructions, see [Enable private access with Private Link](./how-to-enable-private-link.md?tabs=portal#add-a-private-endpoint-during-instance-creation).
-* **Advanced**: In this tab, you can enable a system-managed identity for your instance that can be used when forwarding events along [event routes](concepts-route-events.md). For more information about using system-managed identities with Azure Digital Twins, see [Security for Azure Digital Twins solutions](concepts-security.md#managed-identity-for-accessing-other-resources).
+* **Advanced**: In this tab, you can enable a [system-managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) for your instance. When this option is enabled, Azure automatically creates an identity for the instance in [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md). That identity can then be used to authenticate to Azure Digital Twins endpoints for event forwarding. You can enable the system-managed identity here, or [later on an existing instance](#enabledisable-system-managed-identity-for-the-instance).
* **Tags**: In this tab, you can add tags to your instance to help you organize it among your Azure resources. For more about Azure resource tags, see [Tag resources, resource groups, and subscriptions for logical organization](../azure-resource-manager/management/tag-resources.md). ### Verify success and collect important values
You can view the role assignment you've set up under **Access control (IAM) > Ro
You now have an Azure Digital Twins instance ready to go, and have assigned permissions to manage it.
+## Enable/disable system-managed identity for the instance
+
+This section shows you how to add a system-managed identity to an existing Azure Digital Twins instance. You can also use this page to disable system-managed identity on an instance that already has it.
+
+Start by opening the [Azure portal](https://portal.azure.com) in a browser.
+
+1. Search for the name of your instance in the portal search bar, and select it to view its details.
+
+1. Select **Identity** in the left-hand menu.
+
+1. On this page, select the **On** option to turn on this feature.
+
+1. Select the **Save** button, and **Yes** to confirm.
+
+ :::image type="content" source="media/how-to-route-with-managed-identity/identity-digital-twins.png" alt-text="Screenshot of the Azure portal showing the Identity page for an Azure Digital Twins instance." lightbox="media/how-to-route-with-managed-identity/identity-digital-twins.png":::
+
+After the change is saved, more fields will appear on this page for the new identity's **Object ID** and **Permissions**.
+
+You can copy the **Object ID** from here if needed, and use the **Permissions** button to view the Azure roles that are assigned to the identity. To set up some roles, continue to the next section.
+
+### Considerations for disabling system-managed identities
+
+It's important to consider the effects that any changes to the identity or its roles can have on the resources that use it. If you're [using managed identities with your Azure Digital Twins endpoints](how-to-route-with-managed-identity.md) or for [data history](how-to-use-data-history.md) and the identity is disabled, or a necessary role is removed from it, the endpoint or data history connection can become inaccessible and the flow of events will be disrupted.
+ ## Next steps Test out individual REST API calls on your instance using the Azure Digital Twins CLI commands:
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
There are two possible error scenarios that each give their own error message:
Both of these error messages are shown in the screenshot below:
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/properties-errors.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Properties panel, showing two error messages. One error indicates that models are missing, and the other indicates that properties are missing a model. " lightbox="media/how-to-use-azure-digital-twins-explorer/properties-errors.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
#### View a twin's relationships
You can create a new digital twin from its model definition in the **Models** pa
To create a twin from a model, find that model in the list and choose the menu dots next to the model name. Then, select **Create a Twin**. You'll be asked to enter a **name** for the new twin, which must be unique. Then save the twin, which will add it to your graph.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-create-a-twin.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The menu dots for a single model are highlighted, and the menu option to Create a Twin is also highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-create-a-twin.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
To add property values to your twin, see [Edit twin and relationship properties](#edit-twin-and-relationship-properties).
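If you'd rather create twins from a script than from the Explorer UI, here's a hedged sketch using the azure-iot CLI extension; the model ID and twin ID are placeholders.

```azurecli
# Hedged sketch: create a twin from a model that's already uploaded to the instance.
az dt twin create --dt-name <instance-name> --dtmi "dtmi:example:Room;1" --twin-id Room1
```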
You can use the **Model Graph** panel to view a graphical representation of the
To see the full definition of a model, find that model in the **Models** pane and select the menu dots next to the model name. Then, select **View Model**. Doing so will display a **Model Information** modal showing the raw DTDL definition of the model.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-view.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The menu dots for a single model are highlighted, and the menu option to View Model is also highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-view.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
You can also view a model's full definition by selecting it in the **Model Graph**, and using the **Toggle model details** button to expand the **Model Detail** panel. This panel will also display the full DTDL code for the model.
You can upload custom images to represent different models in the Model Graph an
To upload an image for a single model, find that model in the **Models** panel and select the menu dots next to the model name. Then, select **Upload Model Image**. In the file selector box that appears, navigate on your machine to the image file you want to upload for that model. Choose **Open** to upload it.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-upload-one-image.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The menu dots for a single model are highlighted, and the menu option to Upload Model Image is also highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-upload-one-image.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
You can also upload model images in bulk.
First, use the following instructions to set the image file names before uploadi
Then, to upload the images at the same time, use the **Upload Model Images** icon at the top of the Models panel. In the file selector box, choose which image files to upload.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-upload-images.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The Upload Model Images icon is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-upload-images.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
## Manage models
You can upload models from your machine by selecting them individually, or by up
To upload one or more models that are individually selected, select the **Upload a model** icon showing an upwards arrow.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-upload.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The Upload a model icon is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-upload.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
In the file selector box that appears, navigate on your machine to the model(s) you want to upload. You can select one or more JSON model files and select **Open** to upload them. To upload a folder of models, including everything that's inside it, select the **Upload a directory of Models** icon showing a file folder.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-upload-directory.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The Upload a directory of Models icon is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-upload-directory.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
In the file selector box that appears, navigate on your machine to a folder containing JSON model files. Select **Open** to upload that top-level folder and all of its contents.
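If you'd rather upload models from a script, here's a hedged sketch that assumes the azure-iot CLI extension's `az dt model create` command; the folder path is a placeholder.

```azurecli
# Hedged sketch: upload the DTDL model files in a local folder to the instance.
az dt model create --dt-name <instance-name> --from-directory <path-to-model-folder>
```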
You can use the Models panel to delete individual models, or all of the models in your instance at once.
To delete a single model, find that model in the list and select the menu dots next to the model name. Then, select **Delete Model**.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-delete-one.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The menu dots for a single model are highlighted, and the menu option to Delete Model is also highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-delete-one.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
To delete all of the models in your instance at once, choose the **Delete All Models** icon at the top of the Models panel.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-delete-all.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The Delete All Models icon is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-delete-all.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
### Refresh models
When you open Azure Digital Twins Explorer, the Models panel should automatically show all of the models in your instance.
However, you can manually refresh the panel at any time to reload the list of all models in your Azure Digital Twins instance. To do so, select the **Refresh models** icon.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-refresh.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The Refresh models icon is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-refresh.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
## Import/export graph
digital-twins How To Use Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-data-history.md
description: See how to set up and use data history for Azure Digital Twins, using the CLI or Azure portal. Previously updated : 03/23/2022 Last updated : 11/17/2022
databasename="<name-for-your-database>"
## Create an Azure Digital Twins instance with a managed identity
-If you already have an Azure Digital Twins instance, ensure that you've enabled a [system-managed identity](how-to-route-with-managed-identity.md#add-a-system-managed-identity-to-an-existing-instance) for it.
+If you already have an Azure Digital Twins instance, ensure that you've enabled a [system-managed identity](how-to-set-up-instance-cli.md#enabledisable-system-managed-identity-for-the-instance) for it.
-If you don't have an Azure Digital Twins instance, set one up using the instructions in this section.
+If you don't have an Azure Digital Twins instance, follow the instructions in [Create the instance with a system-managed identity](how-to-set-up-instance-cli.md#create-the-instance-with-a-system-managed-identity) to create an Azure Digital Twins instance with a managed identity for the first time.
-# [CLI](#tab/cli)
-
-Use the following command to create a new instance with a system-managed identity. The command uses three local variables (`$dtname`, `$resourcegroup`, and `$location`) that were created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
-
-```azurecli-interactive
-az dt create --dt-name $dtname --resource-group $resourcegroup --location $location --assign-identity
-```
-
-Next, use the following command to grant yourself the *Azure Digital Twins Data Owner* role on the instance. The command has one placeholder, `<owneruser@microsoft.com>`, that you should replace with your own Azure account information, and uses a local variable (`$dtname`) that was created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
-
-```azurecli-interactive
-az dt role-assignment create --dt-name $dtname --assignee "<owneruser@microsoft.com>" --role "Azure Digital Twins Data Owner"
-```
-
->[!NOTE]
->It may take up to five minutes for this RBAC change to apply.
-
-# [Portal](#tab/portal)
-
-Follow the instructions in [Set up an Azure Digital Twins instance and authentication](how-to-set-up-instance-portal.md) to create an instance, making sure to enable a **system-managed identity** in the [Advanced](how-to-set-up-instance-portal.md#additional-setup-options) tab during setup. Then, continue through the article's instructions to set up user access permissions so that you have the Azure Digital Twins Data Owner role on the instance.
-
-Remember the name you give to your instance so you can use it later.
--
+Then, make sure you have *Azure Digital Twins Data Owner* role on the instance. You can find instructions in [Set up user access permissions](how-to-set-up-instance-cli.md#set-up-user-access-permissions).
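For reference, here's a minimal CLI sketch of that same setup, assuming the `$dtname`, `$resourcegroup`, and `$location` variables defined earlier in the article and your own account in place of the placeholder:

```azurecli-interactive
# Create the instance with a system-managed identity
az dt create --dt-name $dtname --resource-group $resourcegroup --location $location --assign-identity

# Grant yourself the Azure Digital Twins Data Owner role (replace the placeholder account)
az dt role-assignment create --dt-name $dtname --assignee "<owneruser@microsoft.com>" --role "Azure Digital Twins Data Owner"
```

The role assignment can take a few minutes to propagate before it takes effect.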
## Create an Event Hubs namespace and event hub
digital-twins How To Use Postman With Digital Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-postman-with-digital-twins.md
Otherwise, you can open an [Azure Cloud Shell](https://shell.azure.com) window i
3. Copy the value of `accessToken` in the result, and save it to use in the next section. This is your **token value** that you will provide to Postman to authorize your requests (a sample command is sketched after the tip below).
- :::image type="content" source="media/how-to-use-postman-with-digital-twins/console-access-token.png" alt-text="Screenshot of the console showing the result of the az account get-access-token command. The accessToken field and its sample value is highlighted.":::
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/console-access-token.png" alt-text="Screenshot of the console showing the result of the az account get-access-token command. The accessToken field and its sample value is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/console-access-token.png":::
>[!TIP]
>This token is valid for at least five minutes and a maximum of 60 minutes. If you run out of time allotted for the current token, you can repeat the steps in this section to get a new one.
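As a reference, the token in the previous step typically comes from a command along these lines. This is a hedged sketch; the resource value shown is an assumption and may differ for your environment:

```azurecli-interactive
# Request a bearer token for the Azure Digital Twins data plane (resource value is an assumption)
az account get-access-token --resource https://digitaltwins.azure.net
```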
To find the collection, navigate to the repo link and choose the folder for your
Here's how to download your chosen collection to your machine so that you can import it into Postman.

1. Use the links above to open the collection file in GitHub in your browser.
1. Select the **Raw** button to open the raw text of the file.
- :::image type="content" source="media/how-to-use-postman-with-digital-twins/swagger-raw.png" alt-text="Screenshot of the data plane digitaltwins.json file in GitHub. There is a highlight around the Raw button." lightbox="media/how-to-use-postman-with-digital-twins/swagger-raw.png":::
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/swagger-raw.png" alt-text="Screenshot of the data plane digitaltwins.json file in GitHub. There is a highlight around the Raw button." lightbox="media/how-to-use-postman-with-digital-twins/swagger-raw.png":::
1. Copy the text from the window, and paste it into a new file on your machine.
1. Save the file with a .json extension (the file name can be whatever you want, as long as you can remember it to find the file later).
Next, import the collection into Postman.
1. In the **Import** window that follows, select **Upload Files** and navigate to the collection file on your machine that you created earlier. Select **Open**.
1. Select the **Import** button to confirm.
- :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-import-collection-2.png" alt-text="Screenshot of Postman's 'Import' window, showing the file to import as a collection and the Import button.":::
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-import-collection-2.png" alt-text="Screenshot of Postman's 'Import' window, showing the file to import as a collection and the Import button." lightbox="media/how-to-use-postman-with-digital-twins/postman-import-collection-2.png":::
The newly imported collection can now be seen from your main Postman view, in the Collections tab.
Now that your collection is set up, you can add your own requests to the Azure Digital Twins APIs.
1. This action opens the SAVE REQUEST window, where you can enter a name for your request, give it an optional description, and choose the collection that it's a part of. Fill in the details and save the request to the collection you created earlier.
- :::row:::
- :::column:::
- :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-save-request.png" alt-text="Screenshot of 'Save request' window in Postman showing the fields described. The 'Save to Azure Digital Twins collection' button is highlighted.":::
- :::column-end:::
- :::column:::
- :::column-end:::
- :::row-end:::
You can now view your request under the collection, and select it to pull up its editable details.
digital-twins Reference Query Clause Select https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-query-clause-select.md
For the following examples, consider a twin graph that contains the following da
Here's a diagram illustrating this scenario:
- :::column:::
- :::image type="content" source="media/reference-query-clause-select/projections-graph.png" alt-text="Diagram showing the sample graph described above.":::
- :::column-end:::
- :::column:::
- :::column-end:::
#### Project collection example
digital-twins Tutorial Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-code.md
There's also a section showing the complete code at the end of the tutorial. You
To begin, open the file *Program.cs* in any code editor. You'll see a minimal code template that looks something like this:
- :::column:::
- :::image type="content" source="media/tutorial-code/starter-template.png" alt-text="Screenshot of a snippet of sample code in a code editor." lightbox="media/tutorial-code/starter-template.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
First, add some `using` lines at the top of the code to pull in necessary dependencies.
energy-data-services Tutorial Seismic Ddms Sdutil https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-seismic-ddms-sdutil.md
# Tutorial: Seismic store sdutil
-Sdutil is a command line python utility tool designed to easily interact with seismic store. The seismic store is a cloud-based solution designed to store and manage datasets of any size in the cloud by enabling a secure way to access them through a scoped authorization mechanism. Seismic Store overcomes the object size limitations imposed by a cloud provider by managing generic datasets as multi-independent objects. This provides a generic, reliable, and better performing solution to handle data in cloud storage.
+Sdutil is a command-line Python utility designed to easily interact with seismic store. The seismic store is a cloud-based solution designed to store and manage datasets of any size in the cloud by enabling a secure way to access them through a scoped authorization mechanism. Seismic Store overcomes the object size limitations imposed by a cloud provider by managing generic datasets as multi-independent objects. This provides a generic, reliable, and better-performing solution to handle data in cloud storage.
**Sdutil** is an intuitive command-line utility for interacting with seismic store and performing basic operations like uploading or downloading datasets to or from seismic store, managing users, listing folder contents, and more.
Run the changelog script (`./changelog-generator.sh`) to automatically generate
A Microsoft Energy Data Services instance uses the OSDU&trade; M12 version of sdutil. Follow the steps below if you would like to use SDUTIL to work with the SDMS API of your MEDS instance.
-1. Ensure you have followed the [installation](#prerequisites) and [configuration](#configuration) steps from above. This includes downloading the SDUTIL source code, configuring your python virtual environment, editing the `config.yaml` file and setting your three environment variables.
+1. Ensure you have followed the [installation](#prerequisites) and [configuration](#configuration) steps from above. This includes downloading the SDUTIL source code, configuring your Python virtual environment, editing the `config.yaml` file and setting your three environment variables.
2. Run the following commands to sign in, and to list, upload, and download files in the seismic store (a sample sequence is sketched below).
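Here's a hedged sketch of what that sequence can look like. The subcommands below are typical for OSDU sdutil but may vary by version, and the tenant, subproject, and file names are placeholders:

```bash
# Sign in to the seismic store (interactive login in typical setups)
python sdutil auth login

# List the contents of a subproject (placeholder tenant/subproject)
python sdutil ls sd://<tenant>/<subproject>

# Upload a local dataset, then download it back (placeholder paths)
python sdutil cp ./local-file.segy sd://<tenant>/<subproject>/local-file.segy
python sdutil cp sd://<tenant>/<subproject>/local-file.segy ./downloaded-file.segy
```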
event-grid Event Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-domains.md
Title: Event Domains in Azure Event Grid description: This article describes how to use event domains to manage the flow of custom events to your various business organizations, customers, or applications. Previously updated : 04/13/2021 Last updated : 11/17/2022 # Understand event domains for managing Event Grid topics-
-This article describes how to use event domains to manage the flow of custom events to your various business organizations, customers, or applications. Use event domains to:
+An event domain is a management tool for large numbers of Event Grid topics related to the same application. You can think of it as a meta-topic that can have thousands of individual topics. It allows an event publisher to publish events to thousands of topics at the same time. Domains also give you authentication and authorization control over each topic so you can partition your tenants. This article describes how to use event domains to manage the flow of custom events to your various business organizations, customers, or applications. Use event domains to:
* Manage multitenant eventing architectures at scale.
-* Manage your authorization and authentication.
+* Manage your authentication and authorization.
* Partition your topics without managing each individually.
* Avoid individually publishing to each of your topic endpoints.
-## Event domain overview
-
-An event domain is a management tool for large numbers of Event Grid topics related to the same application. You can think of it as a meta-topic that can have thousands of individual topics.
-
-Event domains provide you the same architecture used by Azure services like Storage and IoT Hub to publish their events. They allow you to publish events to thousands of topics. Domains also give you authorization and authentication control over each topic so you can partition your tenants.
-
## Example use case

[!INCLUDE [event-grid-domain-example-use-case.md](./includes/event-grid-domain-example-use-case.md)]

## Access management
-With a domain, you get fine grain authorization and authentication control over each topic via Azure role-based access control (Azure RBAC). You can use these roles to restrict each tenant in your application to only the topics you wish to grant them access to.
-
-Azure RBAC in event domains works the same way [managed access control](security-authorization.md) works in the rest of Event Grid and Azure. Use Azure RBAC to create and enforce custom role definitions in event domains.
+With a domain, you get fine-grained authorization and authentication control over each topic via Azure role-based access control (Azure RBAC). You can use these roles to restrict each tenant in your application to only the topics you wish to grant them access to. Azure RBAC in event domains works the same way [managed access control](security-authorization.md) works in the rest of Event Grid and Azure. Use Azure RBAC to create and enforce custom role definitions in event domains.
### Built in roles
-Event Grid has two built-in role definitions to make Azure RBAC easier for working with event domains. These roles are **EventGrid EventSubscription Contributor (Preview)** and **EventGrid EventSubscription Reader (Preview)**. You assign these roles to users who need to subscribe to topics in your event domain. You scope the role assignment to only the topic that users need to subscribe to.
-
-For information about these roles, see [Built-in roles for Event Grid](security-authorization.md#built-in-roles).
+Event Grid has two built-in role definitions to make Azure RBAC easier for working with event domains. These roles are **EventGrid EventSubscription Contributor** and **EventGrid EventSubscription Reader**. You assign these roles to users who need to subscribe to topics in your event domain. You scope the role assignment to only the topic that users need to subscribe to. For information about these roles, see [Built-in roles for Event Grid](security-authorization.md#built-in-roles).
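As an illustration, here's a hedged Azure CLI sketch of scoping one of these roles to a single domain topic. The subscription, resource group, domain, topic, and user names are all placeholders:

```azurecli
# Let a tenant's user manage event subscriptions only on their own topic within the domain
az role assignment create \
  --assignee "user@contoso.com" \
  --role "EventGrid EventSubscription Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/domains/<domain-name>/topics/<topic-name>"
```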
## Subscribing to topics
-Subscribing to events on a topic within an event domain is the same as [creating an Event Subscription on a custom topic](./custom-event-quickstart.md) or subscribing to an event from an Azure service.
+Subscribing to events for a topic within an event domain is the same as [creating an Event Subscription on a custom topic](./custom-event-quickstart.md) or subscribing to an event from an Azure service.
> [!IMPORTANT]
> A domain topic is considered an **auto-managed** resource in Event Grid. You can create an event subscription at the [domain scope](#domain-scope-subscriptions) without creating the domain topic. In this case, Event Grid automatically creates the domain topic on your behalf. Of course, you can still choose to create the domain topic manually. This behavior allows you to worry about one less resource when dealing with a huge number of domain topics. When the last subscription to a domain topic is deleted, the domain topic is also deleted, irrespective of whether the domain topic was manually created or auto-created.
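For example, here's a hedged CLI sketch of creating an event subscription on a single domain topic; all resource names and the endpoint are placeholders:

```azurecli
az eventgrid event-subscription create \
  --name contoso-subscription \
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/domains/<domain-name>/topics/<topic-name>" \
  --endpoint https://contoso.azurewebsites.net/api/updates
```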
Event domains also allow for domain-scope subscriptions. An event subscription o
## Publishing to an event domain
-When you create an event domain, you're given a publishing endpoint similar to if you had created a topic in Event Grid.
-
-To publish events to any topic in an Event Domain, push the events to the domain's endpoint the [same way you would for a custom topic](./post-to-custom-topic.md). The only difference is that you must specify the topic you'd like the event to be delivered to.
-
-For example, publishing the following array of events would send event with `"id": "1111"` to topic `foo` while the event with `"id": "2222"` would be sent to topic `bar`:
+When you create an event domain, you're given a publishing endpoint similar to the one you'd get if you had created a topic in Event Grid. To publish events to any topic in an event domain, push the events to the domain's endpoint the [same way you would for a custom topic](./post-to-custom-topic.md). The only difference is that you must specify the topic you'd like the event to be delivered to. For example, publishing the following array of events would send the event with `"id": "1111"` to topic `foo` while the event with `"id": "2222"` would be sent to topic `bar`:
```json
[{
Here are the limits and quotas related to event domains:
If these limits don't suit you, open a support ticket or send an email to [askgrid@microsoft.com](mailto:askgrid@microsoft.com).

## Pricing
-Event domains use the same [operations pricing](https://azure.microsoft.com/pricing/details/event-grid/) that all other features in Event Grid use.
-
-Operations work the same in event domains as they do in custom topics. Each ingress of an event to an event domain is an operation, and each delivery attempt for an event is an operation.
--
+Event domains use the same [operations pricing](https://azure.microsoft.com/pricing/details/event-grid/) that all other features in Event Grid use. Operations work the same in event domains as they do in custom topics. Each ingress of an event to an event domain is an operation, and each delivery attempt for an event is an operation.
## Next steps-
-* To learn about setting up event domains, creating topics, creating event subscriptions, and publishing events, see [Manage event domains](./how-to-event-domains.md).
+To learn about setting up event domains, creating topics, creating event subscriptions, and publishing events, see [Manage event domains](./how-to-event-domains.md).
event-grid Event Schema Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-key-vault.md
Title: Azure Key Vault as Event Grid source description: Describes the properties and schema provided for Azure Key Vault events with Azure Event Grid Previously updated : 09/15/2021 Last updated : 11/17/2022 # Azure Key Vault as Event Grid source
-This article provides the properties and schema for events in [Azure Key Vault](../key-vault/index.yml). For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md).
+This article provides the properties and schema for events in [Azure Key Vault](../key-vault/index.yml). For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md) and [Cloud event schema](cloud-event-schema.md).
## Available event types
event-grid Handler Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-service-bus.md
Title: Service Bus queues and topics as event handlers for Azure Event Grid events description: Describes how you can use Service Bus queues and topics as event handlers for Azure Event Grid events. Previously updated : 09/30/2021 Last updated : 11/17/2022 # Service Bus queues and topics as event handlers for Azure Event Grid events
-An event handler is the place where the event is sent. The handler takes some further action to process the event. Several Azure services are automatically configured to handle events and **Azure Service Bus** is one of them.
-
-You can use a Service queue or topic as a handler for events from Event Grid.
+An event handler receives events from an event source via Event Grid, and processes those events. You can use instances of a few Azure services to handle events, and **Azure Service Bus** is one of them. This article shows you how to use a Service Bus queue or topic as a handler for events from Event Grid.
## Service Bus queues
-> [!NOTE]
-> Session enabled queues are not supported as event handlers for Azure Event Grid events
-
-You can route events in Event Grid directly to Service Bus queues for use in buffering or command & control scenarios in enterprise applications.
+You can route events in Event Grid directly to Service Bus queues for use in buffering or command and control scenarios in enterprise applications.
-In the Azure portal, while creating an event subscription, select **Service Bus Queue** as endpoint type and then click **select an endpoint** to choose a Service Bus queue.
+### Use Azure portal
+In the Azure portal, while creating an event subscription, select **Service Bus Queue** as the endpoint type and then click **select an endpoint** to choose a Service Bus queue.
-### Using CLI to add a Service Bus queue handler
-For Azure CLI, the following example subscribes and connects an event grid topic to a Service Bus queue:
+> [!NOTE]
+> Session enabled queues are not supported as event handlers for Azure Event Grid events
+
+### Use Azure CLI
+Use the [`az eventgrid event-subscription create`](/cli/azure/eventgrid/event-subscription) command with `--endpoint-type` set to `servicebusqueue` and `--endpoint` set to `/subscriptions/{AZURE SUBSCRIPTION}/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.ServiceBus/namespaces/<NAMESPACE NAME>/queues/<QUEUE NAME>`. Here's an example:
```azurecli-interactive
az eventgrid event-subscription create \
az eventgrid event-subscription create \
--endpoint /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.ServiceBus/namespaces/ns1/queues/queue1
```
+You can also use the [`az eventgrid topic event-subscription`](/cli/azure/eventgrid/topic/event-subscription) command for custom topics, the [`az eventgrid system-topic event-subscription`](/cli/azure/eventgrid/system-topic/event-subscription) command for system topics, and the [`az eventgrid partner topic event-subscription create`](/cli/azure/eventgrid/partner/topic/event-subscription#az-eventgrid-partner-topic-event-subscription-create) command for partner topics.
+
+### Use Azure PowerShell
+Use the [New-AzEventGridSubscription](/powershell/module/az.eventgrid/new-azeventgridsubscription) command with `-EndpointType` set to `servicebusqueue` and `-Endpoint` set to `/subscriptions/{AZURE SUBSCRIPTION}/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.ServiceBus/namespaces/<NAMESPACE NAME>/queues/<QUEUE NAME>`. Here's an example:
++
+```azurepowershell-interactive
+New-AzEventGridSubscription -ResourceGroup MyResourceGroup `
+ -TopicName Topic1 `
+ -EndpointType servicebusqueue `
+ -Endpoint /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.ServiceBus/namespaces/ns1/queues/queue1 `
+ -EventSubscriptionName EventSubscription1
+```
+
+You can also use the [`New-AzEventGridSystemTopicEventSubscription`](/powershell/module/az.eventgrid/new-azeventgridsystemtopiceventsubscription) command for system topics, and the [`New-AzEventGridPartnerTopicEventSubscription`](/powershell/module/az.eventgrid/new-azeventgridpartnertopiceventsubscription) command for partner topics.
+ ## Service Bus topics
-You can route events in Event Grid directly to Service Bus topics to handle Azure system events with Service Bus topics, or for command & control messaging scenarios.
+You can route events in Event Grid directly to Service Bus topics for command and control messaging scenarios.
-In the Azure portal, while creating an event subscription, select **Service Bus Topic** as endpoint type and then click **select and endpoint** to choose a Service Bus topic.
+### Use Azure portal
+In the Azure portal, while creating an event subscription, select **Service Bus Topic** as the endpoint type and then click **select an endpoint** to choose a Service Bus topic.
-### Using CLI to add a Service Bus topic handler
-For Azure CLI, the following example subscribes and connects an event grid topic to a Service Bus topic:
+### Use Azure CLI
+Use the [`az eventgrid event-subscription create`](/cli/azure/eventgrid/event-subscription) command with `--endpoint-type` set to `servicebustopic` and `--endpoint` set to `/subscriptions/{AZURE SUBSCRIPTION}/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.ServiceBus/namespaces/<NAMESPACE NAME>/topics/<TOPIC NAME>`. Here's an example:
```azurecli-interactive
az eventgrid event-subscription create \
az eventgrid event-subscription create \
--endpoint /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.ServiceBus/namespaces/ns1/topics/topic1
```
+You can also use the [`az eventgrid topic event-subscription`](/cli/azure/eventgrid/topic/event-subscription) command for custom topics, the [`az eventgrid system-topic event-subscription`](/cli/azure/eventgrid/system-topic/event-subscription) command for system topics, and the [`az eventgrid partner topic event-subscription create`](/cli/azure/eventgrid/partner/topic/event-subscription#az-eventgrid-partner-topic-event-subscription-create) command for partner topics.
+
+### Use Azure PowerShell
+Use the [New-AzEventGridSubscription](/powershell/module/az.eventgrid/new-azeventgridsubscription) command with `-EndpointType` set to `servicebustopic` and `-Endpoint` set to `/subscriptions/{AZURE SUBSCRIPTION}/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.ServiceBus/namespaces/<NAMESPACE NAME>/topics/<TOPIC NAME>`. Here's an example:
++
+```azurepowershell-interactive
+New-AzEventGridSubscription -ResourceGroup MyResourceGroup `
+ -TopicName Topic1 `
+ -EndpointType servicebustopic `
+ -Endpoint /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.ServiceBus/namespaces/ns1/topics/topic1 `
+ -EventSubscriptionName EventSubscription1
+```
+
+You can also use the [`New-AzEventGridSystemTopicEventSubscription`](/powershell/module/az.eventgrid/new-azeventgridsystemtopiceventsubscription) command for system topics, and the [`New-AzEventGridPartnerTopicEventSubscription`](/powershell/module/az.eventgrid/new-azeventgridpartnertopiceventsubscription) command for partner topics.
++ [!INCLUDE [event-grid-message-headers](./includes/event-grid-message-headers.md)]

When sending an event to a Service Bus queue or topic as a brokered message, the `messageid` of the brokered message is an internal system ID. The internal system ID for the message will be maintained across redelivery of the event so that you can avoid duplicate deliveries by turning on **duplicate detection** on the Service Bus entity. We recommend that you set the duplicate detection window on the Service Bus entity to either the time-to-live (TTL) of the event or the max retry duration, whichever is longer.
+## Delivery properties
+Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability allows you to set custom headers that are required by a destination. You can set custom headers on the events that are delivered to Azure Service Bus queues and topics.
+
+Azure Service Bus supports the use of the following message properties when sending single messages.
+
+| Header name | Header type |
+| :-- | :-- |
+| `MessageId` | Dynamic |
+| `PartitionKey` | Static or dynamic |
+| `SessionId` | Static or dynamic |
+| `CorrelationId` | Static or dynamic |
+| `Label` | Static or dynamic |
+| `ReplyTo` | Static or dynamic |
+| `ReplyToSessionId` | Static or dynamic |
+| `To` |Static or dynamic |
+| `ViaPartitionKey` | Static or dynamic |
+
+> [!NOTE]
+> - The default value of `MessageId` is the internal ID of the Event Grid event. You can override it with a dynamic value, for example, `data.field`.
+> - You can only set either `SessionId` or `MessageId`.
+
+For more information, see [Custom delivery properties](delivery-properties.md).
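As an illustration, here's a hedged CLI sketch of setting one static and one dynamic delivery property on a Service Bus queue destination. The exact `--delivery-attribute-mapping` syntax may vary by CLI version, and all resource names are placeholders:

```azurecli
az eventgrid event-subscription create \
  --name EventSubscription1 \
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<topic-name>" \
  --endpoint-type servicebusqueue \
  --endpoint "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ServiceBus/namespaces/<namespace>/queues/<queue-name>" \
  --delivery-attribute-mapping Label static important \
  --delivery-attribute-mapping MessageId dynamic data.field
```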
+
## REST examples (for PUT)

### Service Bus queue
The internal system ID for the message will be maintained across redelivery of t
}
```
-## Delivery properties
-Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability allows you to set custom headers that are required by a destination. You can set custom headers on the events that are delivered to Azure Service Bus queues and topics.
-
-Azure Service Bus supports the use of following message properties when sending single messages.
-
-| Header name | Header type |
-| :-- | :-- |
-| `MessageId` | Dynamic |
-| `PartitionKey` | Static or dynamic |
-| `SessionId` | Static or dynamic |
-| `CorrelationId` | Static or dynamic |
-| `Label` | Static or dynamic |
-| `ReplyTo` | Static or dynamic |
-| `ReplyToSessionId` | Static or dynamic |
-| `To` |Static or dynamic |
-| `ViaPartitionKey` | Static or dynamic |
-
-> [!NOTE]
-> - The default value of `MessageId` is the internal ID of the Event Grid event. You can override it. For example, `data.field`.
-> - You can only set either `SessionId` or `MessageId`.
-
-For more information, see [Custom delivery properties](delivery-properties.md).
## Next steps

See the [Event handlers](event-handlers.md) article for a list of supported event handlers.
event-grid Handler Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-webhooks.md
Title: Webhooks as event handlers for Azure Event Grid events description: Describes how you can use webhooks as event handlers for Azure Event Grid events. Azure Automation runbooks and logic apps are supported as event handlers via webhooks. Previously updated : 09/15/2021 Last updated : 11/17/2022 # Webhooks, Automation runbooks, Logic Apps as event handlers for Azure Event Grid events
-An event handler is the place where the event is sent. The handler takes some further action to process the event. Several Azure services are automatically configured to handle events. You can also use any WebHook for handling events. The WebHook doesn't need to be hosted in Azure to handle events. Event Grid only supports HTTPS Webhook endpoints.
+An event handler receives events from an event source via Event Grid, and processes those events. You can use any WebHook as an event handler for events forwarded by Event Grid. The WebHook doesn't need to be hosted in Azure to handle events. Event Grid supports only HTTPS Webhook endpoints. You can also use an Azure Automation runbook or an Azure logic app as an event handler via webhooks. This article provides links to conceptual, quickstart, and tutorial articles with more information.
> [!NOTE]
-> - Azure Automation runbooks and logic apps are supported as event handlers via webhooks.
-> - Even though you can use **Webhook** as an **endpoint type** to configure an Azure function as an event handler, use **Azure Function** as an endpoint type. For more information, see [Azure function as an event handler](handler-functions.md).
+> Even though you can use **Webhook** as an **endpoint type** to configure an Azure function as an event handler, use **Azure Function** as an endpoint type. For more information, see [Azure function as an event handler](handler-functions.md).
## Webhooks

See the following articles for an overview and examples of using webhooks as event handlers.
See the following articles for an overview and examples of using webhooks as eve
| Quickstart: create and route custom events with - [Azure CLI](custom-event-quickstart.md), [PowerShell](custom-event-quickstart-powershell.md), and [portal](custom-event-quickstart-portal.md). | Shows how to send custom events to a WebHook. |
| Quickstart: route Blob storage events to a custom web endpoint with - [Azure CLI](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json), [PowerShell](../storage/blobs/storage-blob-event-quickstart-powershell.md?toc=%2fazure%2fevent-grid%2ftoc.json), and [portal](blob-event-quickstart-portal.md). | Shows how to send blob storage events to a WebHook. |
| [Quickstart: send container registry events](../container-registry/container-registry-event-grid-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json) | Shows how to use Azure CLI to send Container Registry events. |
-| [Overview: receive events to an HTTP endpoint](receive-events.md) | Describes how to validate an HTTP endpoint to receive events from an Event Subscription, and receive and deserialize events. |
+| [Overview: receive events to an HTTP endpoint](receive-events.md) | Describes how to validate an HTTP endpoint to receive events from an event subscription, and receive and deserialize events. |
## Azure Automation
event-grid Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/network-security.md
Title: Network security for Azure Event Grid resources
description: This article describes how to use service tags for egress, IP firewall rules for ingress, and private endpoints for ingress with Azure Event Grid. Previously updated : 09/28/2021 Last updated : 11/17/2022
event-grid Post To Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/post-to-custom-topic.md
Title: Post event to custom Azure Event Grid topic description: This article describes how to post an event to a custom topic. It shows the format of the post and event data. Previously updated : 08/19/2021 Last updated : 11/17/2022
This article describes how to post an event to a custom topic using an access key.
## Endpoint
-When sending the HTTP POST to a custom topic, use the URI format: `https://<topic-endpoint>?api-version=2018-01-01`.
-
-For example, a valid URI is: `https://exampletopic.westus2-1.eventgrid.azure.net/api/events?api-version=2018-01-01`.
-
-To get the endpoint for a custom topic with Azure CLI, use:
+When sending the HTTP POST to a custom topic, use the URI format: `https://<topic-endpoint>?api-version=2018-01-01`. For example, a valid URI is: `https://exampletopic.westus2-1.eventgrid.azure.net/api/events?api-version=2018-01-01`. To get the endpoint for a custom topic using Azure CLI, use:
```azurecli-interactive
az eventgrid topic show --name <topic-name> -g <topic-resource-group> --query "endpoint"
```
-To get the endpoint for a custom topic with Azure PowerShell, use:
+To get the endpoint for a custom topic using Azure PowerShell, use:
```powershell
(Get-AzEventGridTopic -ResourceGroupName <topic-resource-group> -Name <topic-name>).Endpoint
```
To get the endpoint for a custom topic with Azure PowerShell, use:
## Header
-In the request, include a header value named `aeg-sas-key` that contains a key for authentication.
-
-For example, a valid header value is `aeg-sas-key: VXbGWce53249Mt8wuotr0GPmyJ/nDT4hgdEj9DpBeRr38arnnm5OFg==`.
-
-To get the key for a custom topic with Azure CLI, use:
+In the request, include a header value named `aeg-sas-key` that contains a key for authentication. For example, a valid header value is `aeg-sas-key: xxxxxxxxxxxxxxxxxxxxxxx`. To get the key for a custom topic using Azure CLI, use:
```azurecli
az eventgrid topic key list --name <topic-name> -g <topic-resource-group> --query "key1"
```
-To get the key for a custom topic with PowerShell, use:
+To get the key for a custom topic using PowerShell, use:
```powershell
(Get-AzEventGridTopicKey -ResourceGroupName <topic-resource-group> -Name <topic-name>).Key1
```
To get the key for a custom topic with PowerShell, use:
## Event data
-For custom topics, the top-level data contains the same fields as standard resource-defined events. One of those properties is a data property that contains properties unique to the custom topic. As event publisher, you determine the properties for that data object. Use the following schema:
+For custom topics, the top-level data contains the same fields as standard resource-defined events. One of those properties is a `data` property that contains properties unique to the custom topic. As an event publisher, you determine the properties for that data object. Here's the schema:
```json
[
For custom topics, the top-level data contains the same fields as standard resou
] ```
-For a description of these properties, see [Azure Event Grid event schema](event-schema.md). When posting events to an event grid topic, the array can have a total size of up to 1 MB. The maximum allowed size for an event is also 1 MB. Events over 64 KB are charged in 64-KB increments. When receiving events in a batch, the maximum allowed number of events is 5,000 per batch.
+For a description of these properties, see [Azure Event Grid event schema](event-schema.md). When posting events to an Event Grid topic, the array can have a total size of up to 1 MB. The maximum allowed size for an event is also 1 MB. Events over 64 KB are charged in 64-KB increments. When receiving events in a batch, the maximum allowed number of events is 5,000 per batch.
For example, a valid event data schema is:
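Here's a hedged sketch of such a payload, posted with curl using the endpoint and key retrieved earlier; every value shown is a placeholder:

```bash
# Placeholder endpoint and key from the earlier az/PowerShell commands
ENDPOINT="https://exampletopic.westus2-1.eventgrid.azure.net/api/events?api-version=2018-01-01"
KEY="<your-topic-key>"

curl -X POST "$ENDPOINT" \
  -H "aeg-sas-key: $KEY" \
  -H "Content-Type: application/json" \
  -d '[{
        "id": "1807",
        "eventType": "recordInserted",
        "subject": "myapp/vehicles/motorcycles",
        "eventTime": "2022-11-17T21:03:07+00:00",
        "data": { "make": "Ducati", "model": "Monster" },
        "dataVersion": "1.0"
      }]'
```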
firewall-manager Rule Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/rule-hierarchy.md
Previously updated : 08/26/2020 Last updated : 11/17/2022 + # Use Azure Firewall policy to define a rule hierarchy
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
Azure Firewall Premium includes the following features:
- **TLS inspection** - decrypts outbound traffic, processes the data, then encrypts the data and sends it to the destination.
- **IDPS** - A network intrusion detection and prevention system (IDPS) allows you to monitor network activities for malicious activity, log information about this activity, report it, and optionally attempt to block it.
-- **URL filtering** - extends Azure Firewall's FQDN filtering capability to consider an entire URL. For example, `www.contoso.com/a/c` instead of `www.contoso.com`.
+- **URL filtering** - extends Azure Firewall's FQDN filtering capability to consider an entire URL along with any additional path. For example, `www.contoso.com/a/c` instead of `www.contoso.com`.
- **Web categories** - administrators can allow or deny user access to website categories such as gambling websites, social media websites, and others.

## TLS inspection
You can identify what category a given FQDN or URL is by using the **Web Categor
### Category change
-Under the **Web Categories** tab in **Firewall Policy Settings**, you can request a categorization change if you:
+Under the **Web Categories** tab in **Firewall Policy Settings**, you can request a category change if you:
- think an FQDN or URL should be under a different category
frontdoor Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/billing.md
If the request can be served from the Front Door edge location's cache, Front Do
### Data transfer from origin to Front Door
-When your origin server processes a request, it sends data back to Front Door so that it can be returned to the client. This traffic not billed by Front Door, even if the origin is in a different region to the Front Door edge location for the request.
+When your origin server processes a request, it sends data back to Front Door so that it can be returned to the client. This traffic is not billed by Front Door, even if the origin is in a different region from the Front Door edge location for the request.
If your origin is within Azure, the data egress from the Azure origin to Front Door isn't charged. However, you should determine whether those Azure services might bill you to process your requests.
hdinsight Apache Kafka Spark Structured Streaming Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/apache-kafka-spark-structured-streaming-cosmosdb.md
description: Learn how to use Apache Spark Structured Streaming to read data fro
Previously updated : 04/08/2022 Last updated : 11/16/2022 # Use Apache Spark Structured Streaming with Apache Kafka and Azure Cosmos DB
While you can create an Azure virtual network, Kafka, and Spark clusters manuall
|Ssh User Name|The SSH user to create for the Spark and Kafka clusters.|
|Ssh Password|The password for the SSH user for the Spark and Kafka clusters.|
- :::image type="content" source="./media/apache-kafka-spark-structured-streaming-cosmosdb/hdi-custom-parameters.png" alt-text="HDInsight custom deployment values":::
+ :::image type="content" source="./media/apache-kafka-spark-structured-streaming-cosmosdb/hdi-custom-parameters-40.png" alt-text="HDInsight version 4.0 custom deployment values":::
1. Read the **Terms and Conditions**, and then select **I agree to the terms and conditions stated above**.
hdinsight Apache Hadoop Etl At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-etl-at-scale.md
description: Learn how extract, transform, and load is used in HDInsight with Ap
Previously updated : 04/01/2022 Last updated : 11/17/2022 # Extract, transform, and load (ETL) at scale
Azure Data Lake Storage is a managed, hyperscale repository for analytics data.
Data is usually ingested into Data Lake Storage through Azure Data Factory. You can also use Data Lake Storage SDKs, the AdlCopy service, Apache DistCp, or Apache Sqoop. The service you choose depends on where the data is. If it's in an existing Hadoop cluster, you might use Apache DistCp, the AdlCopy service, or Azure Data Factory. For data in Azure Blob storage, you might use Azure Data Lake Storage .NET SDK, Azure PowerShell, or Azure Data Factory.
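For instance, copying data from an existing Hadoop cluster into Data Lake Storage with Apache DistCp might look like the following hedged sketch; it assumes a Data Lake Storage Gen2 account the cluster can already reach, and the container, account, and paths are placeholders:

```bash
# Copy a folder from the cluster's HDFS into a Data Lake Storage Gen2 account (placeholder names)
hadoop distcp hdfs:///source/data abfss://<container>@<account>.dfs.core.windows.net/target/data
```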
-Data Lake Storage is optimized for event ingestion through Azure Event Hubs or Apache Storm.
+Data Lake Storage is optimized for event ingestion through Azure Event Hubs.
### Considerations for both storage options
hdinsight Apache Hadoop Linux Tutorial Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-tutorial-get-started-bicep.md
Previously updated : 07/18/2022 Last updated : 11/17/2022 #Customer intent: As a data analyst, I need to create a Hadoop cluster in Azure HDInsight using Bicep
Two Azure resources are defined in the Bicep file:
You need to provide values for the parameters (an example deployment command is sketched after this list):

 * Replace **\<cluster-name\>** with the name of the HDInsight cluster to create.
- * Replace **\<cluster-type\>** with the type of the HDInsight cluster to create. Allowed strings include: `hadoop`, `interactivehive`, `hbase`, `storm`, and `spark`.
+ * Replace **\<cluster-type\>** with the type of the HDInsight cluster to create. Allowed strings include: `hadoop`, `interactivehive`, `hbase`, and `spark`.
 * Replace **\<cluster-username\>** with the credentials used to submit jobs to the cluster and to log in to cluster dashboards.
 * Replace **\<ssh-username\>** with the credentials used to remotely access the cluster. The username cannot be admin.
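Once the placeholders are decided, a hedged sketch of the deployment from the Azure CLI might look like this; the parameter names are assumptions about the Bicep file and may not match it exactly:

```azurecli-interactive
az deployment group create \
  --resource-group MyResourceGroup \
  --template-file main.bicep \
  --parameters clusterName=<cluster-name> clusterType=spark \
               clusterLoginUserName=<cluster-username> sshUserName=<ssh-username>
```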
hdinsight Apache Hadoop On Premises Migration Best Practices Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-architecture.md
description: Learn architecture best practices for migrating on-premises Hadoop
Previously updated : 07/18/2022 Last updated : 11/17/2022 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - architecture best practices
Azure HDInsight clusters are designed for a specific type of compute usage. Beca
|||
|Batch processing (ETL / ELT)|Hadoop, Spark|
|Data warehousing|Hadoop, Spark, Interactive Query|
-|IoT / Streaming|Kafka, Storm, Spark|
+|IoT / Streaming|Kafka, Spark|
|NoSQL Transactional processing|HBase|
|Interactive and Faster queries with in-memory caching|Interactive Query|
|Data Science| Spark|
hdinsight Apache Hadoop On Premises Migration Motivation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-motivation.md
description: Learn the motivation and benefits for migrating on-premises Hadoop
Previously updated : 04/28/2022 Last updated : 11/17/2022 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - motivation and benefits
Azure HDInsight is a cloud distribution of Hadoop components. Azure HDInsight ma
- Apache Spark
- Apache Hive with LLAP
- Apache Kafka
-- Apache Storm
- Apache HBase
-- R

## Azure HDInsight advantages over on-premises Hadoop
This section provides template questionnaires to help gather important informati
|**Topic**: **Environment**|||
|Cluster Distribution version|HDP 2.6.5, CDH 5.7|
|Big Data eco-system components|HDFS, Yarn, Hive, LLAP, Impala, Kudu, HBase, Spark, MapReduce, Kafka, Zookeeper, Solr, Sqoop, Oozie, Ranger, Atlas, Falcon, Zeppelin, R|
-|Cluster types|Hadoop, Spark, Confluent Kafka, Storm, Solr|
+|Cluster types|Hadoop, Spark, Confluent Kafka, Solr|
|Number of clusters|4|
|Number of master nodes|2|
|Number of worker nodes|100|
hdinsight Apache Hbase Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-overview.md
description: An introduction to Apache HBase in HDInsight, a NoSQL database buil
Previously updated : 05/11/2022 Last updated : 11/17/2022 #Customer intent: As a developer new to Apache HBase and Apache HBase in Azure HDInsight, I want to have a basic understanding of Microsoft's implementation of Apache HBase in Azure HDInsight so I can decide if I want to use it rather than build my own cluster.
The canonical use case for which BigTable (and by extension, HBase) was created
|Key-value store|HBase can be used as a key-value store, and it's suitable for managing message systems. Facebook uses HBase for their messaging system, and it's ideal for storing and managing Internet communications. WebTable uses HBase to search for and manage tables that are extracted from webpages.|
|Sensor data|HBase is useful for capturing data that is collected incrementally from various sources. This data includes social analytics and time series data. It's also useful for keeping interactive dashboards up to date with trends and counters, and for managing audit log systems. Examples include Bloomberg trader terminal and the Open Time Series Database (OpenTSDB). OpenTSDB stores and provides access to metrics collected about the health of server systems.|
|Real-time query|[Apache Phoenix](https://phoenix.apache.org/) is a SQL query engine for Apache HBase. It's accessed as a JDBC driver, and it enables querying and managing HBase tables by using SQL.|
-|HBase as a platform|Applications can run on top of HBase by using it as a datastore. Examples include Phoenix, OpenTSDB, `Kiji`, and Titan. Applications can also integrate with HBase. Examples include: [Apache Hive](https://hive.apache.org/), Apache Pig, [Solr](https://lucene.apache.org/solr/), Apache Storm, Apache Flume, [Apache Impala](https://impala.apache.org/), Apache Spark, `Ganglia`, and Apache Drill.|
+|HBase as a platform|Applications can run on top of HBase by using it as a datastore. Examples include Phoenix, OpenTSDB, `Kiji`, and Titan. Applications can also integrate with HBase. Examples include: [Apache Hive](https://hive.apache.org/), Apache Pig, [Solr](https://lucene.apache.org/solr/), Apache Flume, [Apache Impala](https://impala.apache.org/), Apache Spark, `Ganglia`, and Apache Drill.|
## Next steps
hdinsight Hdinsight Administer Use Command Line https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-administer-use-command-line.md
Title: Manage Azure HDInsight clusters using Azure CLI
-description: Learn how to use the Azure CLI to manage Azure HDInsight clusters. Cluster types include Apache Hadoop, Spark, HBase, Storm, Kafka, Interactive Query.
+description: Learn how to use the Azure CLI to manage Azure HDInsight clusters. Cluster types include Apache Hadoop, Spark, HBase, Kafka, Interactive Query.
Previously updated : 06/16/2022 Last updated : 11/17/2022 # Manage Azure HDInsight clusters using Azure CLI
hdinsight Hdinsight Hadoop Create Linux Clusters Curl Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-curl-rest.md
description: Learn how to create HDInsight clusters by submitting Azure Resource
Previously updated : 08/05/2022 Last updated : 11/17/2022 # Create Apache Hadoop clusters using the Azure REST API
The following JSON document is a merger of the template and parameters files fro
"type": "string", "allowedValues": ["hadoop", "hbase",
- "storm",
"spark"], "metadata": { "description": "The type of the HDInsight cluster to create."
hdinsight Hdinsight Hadoop Customize Cluster Bootstrap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-customize-cluster-bootstrap.md
description: Learn how to customize HDInsight cluster configuration programmatic
Previously updated : 05/31/2022 Last updated : 11/17/2022 # Customize HDInsight clusters using Bootstrap
For example, using these programmatic methods, you can configure options in thes
* mapred-site
* oozie-site.xml
* oozie-env.xml
-* storm-site.xml
* tez-site.xml
* webhcat-site.xml
* yarn-site.xml
hdinsight Hdinsight Hadoop Linux Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-linux-information.md
description: Get implementation tips for using Linux-based HDInsight (Hadoop) cl
Previously updated : 04/29/2020 Last updated : 11/17/2022 # Information about using HDInsight on Linux
In HDInsight, the data storage resources (Azure Blob Storage and Azure Data Lake
### <a name="URI-and-scheme"></a>URI and scheme
-Some commands may require you to specify the scheme as part of the URI when accessing a file. For example, the Storm-HDFS component requires you to specify the scheme. When using non-default storage (storage added as "additional" storage to the cluster), you must always use the scheme as part of the URI.
+Some commands may require you to specify the scheme as part of the URI when accessing a file. When using non-default storage (storage added as "additional" storage to the cluster), you must always use the scheme as part of the URI.
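As a hedged illustration (the container and storage account names are placeholders), explicitly addressing a path on an attached Azure Storage account from the cluster might look like:

```bash
# List a folder on an attached Azure Storage account by spelling out the wasbs:// scheme
hdfs dfs -ls wasbs://<container>@<storage-account>.blob.core.windows.net/example/data/
```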
When using [**Azure Storage**](./hdinsight-hadoop-use-blob-storage.md), use one of the following URI schemes:
To use a different version of a component, upload the version you need and use i
* [Manage HDInsight clusters by using the Apache Ambari REST API](./hdinsight-hadoop-manage-ambari-rest-api.md)
* [Use Apache Hive with HDInsight](hadoop/hdinsight-use-hive.md)
-* [Use MapReduce jobs with HDInsight](hadoop/hdinsight-use-mapreduce.md)
+* [Use MapReduce jobs with HDInsight](hadoop/hdinsight-use-mapreduce.md)
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen1.md
description: Learn how to query data from Azure Data Lake Storage Gen1 and to st
Previously updated : 09/15/2022 Last updated : 11/17/2022 # Use Data Lake Storage Gen1 with Azure HDInsight clusters
Currently, only some of the HDInsight cluster types/versions support using Data
| HDInsight version 3.4 | No | Yes | |
| HDInsight version 3.3 | No | No | |
| HDInsight version 3.2 | No | Yes | |
-| Storm | | |You can use Data Lake Storage Gen1 to write data from a Storm topology. You can also use Data Lake Storage Gen1 for reference data that can then be read by a Storm topology.|
> [!WARNING]
> HDInsight HBase is not supported with Azure Data Lake Storage Gen1
hdinsight Hdinsight Key Scenarios To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-key-scenarios-to-monitor.md
description: How to monitor health and performance of Apache Hadoop clusters in
Previously updated : 04/28/2022 Last updated : 11/17/2022 # Monitor cluster performance in Azure HDInsight
If your cluster's backing store is Azure Data Lake Storage (ADLS), your throttli
* [Performance tuning guidance for Apache Hive on HDInsight and Azure Data Lake Storage](../data-lake-store/data-lake-store-performance-tuning-hive.md)
* [Performance tuning guidance for MapReduce on HDInsight and Azure Data Lake Storage](../data-lake-store/data-lake-store-performance-tuning-mapreduce.md)
-* [Performance tuning guidance for Apache Storm on HDInsight and Azure Data Lake Storage](../data-lake-store/data-lake-store-performance-tuning-storm.md)
## Troubleshoot sluggish node performance
hdinsight Hdinsight Scaling Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-scaling-best-practices.md
Previously updated : 07/21/2022 Last updated : 11/17/2022 # Manually scale Azure HDInsight clusters
The impact of changing the number of data nodes varies for each type of cluster
For more information on using the HBase shell, see [Get started with an Apache HBase example in HDInsight](hbase/apache-hbase-tutorial-get-started-linux.md).
-* Apache Storm
-
- You can seamlessly add or remove data nodes while Storm is running. However, after a successful completion of the scaling operation, you'll need to rebalance the topology. Rebalancing allows the topology to readjust [parallelism settings](https://storm.apache.org/documentation/Understanding-the-parallelism-of-a-Storm-topology.html) based on the new number of nodes in the cluster. To rebalance running topologies, use one of the following options:
-
- * Storm web UI
-
- Use the following steps to rebalance a topology using the Storm UI.
-
- 1. Open `https://CLUSTERNAME.azurehdinsight.net/stormui` in your web browser, where `CLUSTERNAME` is the name of your Storm cluster. If prompted, enter the HDInsight cluster administrator (admin) name and password you specified when creating the cluster.
-
- 1. Select the topology you wish to rebalance, then select the **Rebalance** button. Enter the delay before the rebalance operation is done.
-
- :::image type="content" source="./media/hdinsight-scaling-best-practices/hdinsight-portal-scale-cluster-storm-rebalance.png" alt-text="HDInsight Storm scale rebalance":::
-
- * Command-line interface (CLI) tool
-
- Connect to the server and use the following command to rebalance a topology:
-
- ```bash
- storm rebalance TOPOLOGYNAME
- ```
-
- You can also specify parameters to override the parallelism hints originally provided by the topology. For example, the code below reconfigures the `mytopology` topology to 5 worker processes, 3 executors for the blue-spout component, and 10 executors for the yellow-bolt component.
-
- ```bash
- ## Reconfigure the topology "mytopology" to use 5 worker processes,
- ## the spout "blue-spout" to use 3 executors, and
- ## the bolt "yellow-bolt" to use 10 executors
- $ storm rebalance mytopology -n 5 -e blue-spout=3 -e yellow-bolt=10
- ```
* Kafka

    You should rebalance partition replicas after scaling operations. For more information, see the [High availability of data with Apache Kafka on HDInsight](./kafk) document.
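For reference, the scale operation itself can be run from the Azure CLI. This is a hedged sketch, and the exact worker-node-count parameter name can differ between CLI versions:

```azurecli
# Scale the cluster to five worker nodes (parameter name may differ by CLI version)
az hdinsight resize --name <cluster-name> --resource-group <resource-group> --workernode-count 5
```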
hdinsight Hdinsight Virtual Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-virtual-network-architecture.md
Title: Azure HDInsight virtual network architecture
description: Learn the resources available when you create an HDInsight cluster in an Azure Virtual Network. Previously updated : 04/01/2022 Last updated : 11/17/2022 # Azure HDInsight virtual network architecture
Azure HDInsight clusters have different types of virtual machines, or nodes. Eac
| Type | Description | | | |
-| Head node | For all cluster types except Apache Storm, the head nodes host the processes that manage execution of the distributed application. The head node is also the node that you can SSH into and execute applications that are then coordinated to run across the cluster resources. The number of head nodes is fixed at two for all cluster types. |
| ZooKeeper node | Zookeeper coordinates tasks between the nodes that are doing data processing. It also does leader election of the head node, and keeps track of which head node is running a specific master service. The number of ZooKeeper nodes is fixed at three. | | Worker node | Represents the nodes that support data processing functionality. Worker nodes can be added or removed from the cluster to scale computing capability and manage costs. | | Region node | For the HBase cluster type, the region node (also referred to as a Data Node) runs the Region Server. Region Servers serve and manage a portion of the data managed by HBase. Region nodes can be added or removed from the cluster to scale computing capability and manage costs.|
-| Nimbus node | For the Storm cluster type, the Nimbus node provides functionality similar to the Head node. The Nimbus node assigns tasks to other nodes in a cluster through Zookeeper, which coordinates the running of Storm topologies. |
-| Supervisor node | For the Storm cluster type, the supervisor node executes the instructions provided by the Nimbus node to do the processing. |
## Resource naming conventions
hdinsight Apache Kafka Azure Container Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-azure-container-services.md
description: Learn how to use Kafka on HDInsight from container images hosted in
Previously updated : 08/23/2022 Last updated : 11/17/2022 # Use Azure Kubernetes Service with Apache Kafka on HDInsight
Use the following links to learn how to use Apache Kafka on HDInsight:
* [Use MirrorMaker to create a replica of Apache Kafka on HDInsight](apache-kafka-mirroring.md)
-* [Use Apache Storm with Apache Kafka on HDInsight](../hdinsight-apache-storm-with-kafka.md)
* [Use Apache Spark with Apache Kafka on HDInsight](../hdinsight-apache-spark-with-kafka.md)
hdinsight Apache Kafka Connector Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-connector-iot-hub.md
description: Learn how to use Apache Kafka on HDInsight with Azure IoT Hub. The
Previously updated : 09/15/2022 Last updated : 11/17/2022 # Use Apache Kafka on HDInsight with Azure IoT Hub
For more information on using the sink connector, see [https://github.com/Azure/
In this document, you learned how to use the Apache Kafka Connect API to start the IoT Kafka Connector on HDInsight. Use the following links to discover other ways to work with Kafka: * [Use Apache Spark with Apache Kafka on HDInsight](../hdinsight-apache-spark-with-kafka.md)
-* [Use Apache Storm with Apache Kafka on HDInsight](../hdinsight-apache-storm-with-kafka.md)
hdinsight Apache Kafka Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-introduction.md
description: 'Learn about Apache Kafka on HDInsight: What it is, what it does, a
Previously updated : 03/30/2022 Last updated : 10/17/2022 #Customer intent: As a developer, I want to understand how Kafka on HDInsight is different from Kafka on other platforms.
The following are common tasks and patterns that can be performed using Kafka on
||| |Replication of Apache Kafka data|Kafka provides the MirrorMaker utility, which replicates data between Kafka clusters. For information on using MirrorMaker, see [Replicate Apache Kafka topics with Apache Kafka on HDInsight](apache-kafka-mirroring.md).| |Publish-subscribe messaging pattern|Kafka provides a Producer API for publishing records to a Kafka topic. The Consumer API is used when subscribing to a topic. For more information, see [Start with Apache Kafka on HDInsight](apache-kafka-get-started.md).|
-|Stream processing|Kafka is often used with Apache Storm or Spark for real-time stream processing. Kafka 0.10.0.0 (HDInsight version 3.5 and 3.6) introduced a streaming API that allows you to build streaming solutions without requiring Storm or Spark. For more information, see [Start with Apache Kafka on HDInsight](apache-kafka-get-started.md).|
+|Stream processing|Kafka is often used with Spark for real-time stream processing. Kafka 0.10.0.0 (HDInsight version 3.5 and 3.6) introduced a streaming API that allows you to build streaming solutions without requiring Spark. For more information, see [Start with Apache Kafka on HDInsight](apache-kafka-get-started.md).|
|Horizontal scale|Kafka partitions streams across the nodes in the HDInsight cluster. Consumer processes can be associated with individual partitions to provide load balancing when consuming records. For more information, see [Start with Apache Kafka on HDInsight](apache-kafka-get-started.md).| |In-order delivery|Within each partition, records are stored in the stream in the order that they were received. By associating one consumer process per partition, you can guarantee that records are processed in-order. For more information, see [Start with Apache Kafka on HDInsight](apache-kafka-get-started.md).| |Messaging|Since it supports the publish-subscribe message pattern, Kafka is often used as a message broker.|
Use the following links to learn how to use Apache Kafka on HDInsight:
* [Tutorial: Use Apache Spark with Apache Kafka on HDInsight](../hdinsight-apache-spark-with-kafka.md)
-* [Tutorial: Use Apache Storm with Apache Kafka on HDInsight](../hdinsight-apache-storm-with-kafka.md)
hdinsight Apache Kafka Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-scalability.md
To control the number of disks used by the worker nodes in a Kafka cluster, use
For more information on working with Apache Kafka on HDInsight, see the following documents: * [Use MirrorMaker to create a replica of Apache Kafka on HDInsight](apache-kafka-mirroring.md)
-* [Use Apache Storm with Apache Kafka on HDInsight](../hdinsight-apache-storm-with-kafka.md)
* [Use Apache Spark with Apache Kafka on HDInsight](../hdinsight-apache-spark-with-kafka.md) * [Connect to Apache Kafka through an Azure Virtual Network](apache-kafka-connect-vpn-gateway.md)
hdinsight Apache Kafka Streams Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-streams-api.md
description: Tutorial - Learn how to use the Apache Kafka Streams API with Kafka
Previously updated : 08/23/2022 Last updated : 11/17/2022 #Customer intent: As a developer, I need to create an application that uses the Kafka streams API with Kafka on HDInsight
Learn how to create an application that uses the Apache Kafka Streams API and ru
The application used in this tutorial is a streaming word count. It reads text data from a Kafka topic, extracts individual words, and then stores the word and count into another Kafka topic.
-Kafka stream processing is often done using Apache Spark or Apache Storm. Kafka version 1.1.0 (in HDInsight 3.5 and 3.6) introduced the Kafka Streams API. This API allows you to transform data streams between input and output topics. In some cases, this may be an alternative to creating a Spark or Storm streaming solution.
+Kafka stream processing is often done using Apache Spark. Kafka version 1.1.0 (in HDInsight 3.5 and 3.6) introduced the Kafka Streams API. This API allows you to transform data streams between input and output topics.
For more information on Kafka Streams, see the [Intro to Streams](https://kafka.apache.org/10/documentation/streams/) documentation on Apache.org.
hdinsight Log Analytics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/log-analytics-migration.md
Previously updated : 09/02/2022 Last updated : 11/17/2022 # Log Analytics migration guide for Azure HDInsight clusters
The following charts show the table mappings from the classic Azure Monitoring I
| HDInsightHBaseMetrics | <ul><li>**Description**: This table contains JMX metrics from HBase. It contains all the same JMX metrics from the tables listed in the Old Schema column. In contrast from the old tables, each row contains one metric.</li><li>**Old table**: metrics\_regionserver\_CL, metrics\_regionserver\_wal\_CL, metrics\_regionserver\_ipc\_CL, metrics\_regionserver\_os\_CL, metrics\_regionserver\_replication\_CL, metrics\_restserver\_CL, metrics\_restserver\_jvm\_CL, metrics\_hmaster\_assignmentmanager\_CL, metrics\_hmaster\_ipc\_CL, metrics\_hmaser\_os\_CL, metrics\_hmaster\_balancer\_CL, metrics\_hmaster\_jvm\_CL, metrics\_hmaster\_CL,metrics\_hmaster\_fs\_CL</li></ul>| | HDInsightHBaseLogs | <ul><li>**Description**: This table contains logs from HBase and its related components: Phoenix and HDFS.</li><li>**Old table**: log\_regionserver\_CL, log\_restserver\_CL, log\_phoenixserver\_CL, log\_hmaster\_CL, log\_hdfsnamenode\_CL, log\_garbage\_collector\_CL</li></ul>|
-## Storm workload
-
-| New Table | Details |
-| | |
-| HDInsightStormMetrics | <ul><li>**Description**: This table contains the same JMX metrics as the tables in the Old Tables section. Its rows contain one metric per record.</li><li>**Old table**: metrics\_stormnimbus\_CL, metrics\_stormsupervisor\_CL</li></ul>|
-| HDInsightStormTopologyMetrics | <ul><li>**Description**: This table contains topology level metrics from Storm. It's the same shape as the table listed in Old Tables section.</li><li>**Old table**: metrics\_stormrest\_CL</li></ul>|
-| HDInsightStormLogs | <ul><li>**Description**: This table contains all logs generated from Storm.</li><li>**Old table**: log\_supervisor\_CL, log\_nimbus\_CL</li></ul>|
## Oozie workload
hdinsight Manage Clusters Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/manage-clusters-runbooks.md
description: Learn how to create and delete Azure HDInsight clusters with script
Previously updated : 12/27/2019 Last updated : 11/17/2022 # Tutorial: Create Azure HDInsight clusters with Azure Automation
If you donΓÇÖt have an Azure subscription, create a [free account](https://azure
 #Automation credential for user to SSH into cluster $sshCreds = Get-AutomationPSCredential -Name 'ssh-password'
- $clusterType = "Hadoop" #Use any supported cluster type (Hadoop, HBase, Storm, etc.)
+ $clusterType = "Hadoop" #Use any supported cluster type (Hadoop, HBase, etc.)
$clusterOS = "Linux" $clusterWorkerNodes = 3 $clusterNodeSize = "Standard_D3_v2"
When no longer needed, delete the Azure Automation Account that was created to a
## Next steps > [!div class="nextstepaction"]
-> [Manage Apache Hadoop clusters in HDInsight by using Azure PowerShell](hdinsight-administer-use-powershell.md)
+> [Manage Apache Hadoop clusters in HDInsight by using Azure PowerShell](hdinsight-administer-use-powershell.md)
hdinsight Apache Spark Improve Performance Iocache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-improve-performance-iocache.md
Title: Apache Spark performance - Azure HDInsight IO Cache (Preview)
description: Learn about Azure HDInsight IO Cache and how to use it to improve Apache Spark performance. Previously updated : 11/09/2022 Last updated : 11/16/2022 # Improve performance of Apache Spark workloads using Azure HDInsight IO Cache > [!NOTE]
-> * IO Cache is only available for Spark 2.4(HDInsight 4.0).
-> * Spark 3.1.2 (HDInsight 5.0) doesnΓÇÖt support IO Cache.
+> * IO Cache was supported up to Spark 2.3 and isn't supported in Spark 2.4 (HDInsight 4.0) or Spark 3.1.2 (HDInsight 5.0).
IO Cache is a data caching service for Azure HDInsight that improves the performance of Apache Spark jobs. IO Cache also works with [Apache TEZ](https://tez.apache.org/) and [Apache Hive](https://hive.apache.org/) workloads, which can be run on [Apache Spark](https://spark.apache.org/) clusters. IO Cache uses an open-source caching component called RubiX. RubiX is a local disk cache for use with big data analytics engines that access data from cloud storage systems. RubiX is unique among caching systems, because it uses Solid-State Drives (SSDs) rather than reserve operating memory for caching purposes. The IO Cache service launches and manages RubiX Metadata Servers on each worker node of the cluster. It also configures all services of the cluster for transparent use of RubiX cache.
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md
Substitutable Medical Applications and Reusable Technologies [SMART on FHIR](htt
- Users accessing a FHIR repository with SMART on FHIR are restricted to resources associated with the user, rather than having access to all data in the repository - Users have the ability to grant applications access to a further limited set of their data by using SMART clinical scopes.
-<!SMART Implementation Guide v1.0.0 is supported by Azure API for FHIR and Azure API Management (APIM). This is our recommended approach, as it enabled Health IT developers to comply with 21st Century Act Criterion §170.315(g)(10) Standardized API for patient and population services.
+<!SMART Implementation Guide v1.0.0 is supported by Azure Health Data Services and Azure API Management (APIM). This is our recommended approach, as it enables Health IT developers to comply with the 21st Century Cures Act Criterion §170.315(g)(10) Standardized API for patient and population services.
The sample demonstrates and lists the steps that can be referenced to pass ONC G(10) with the Inferno test suite. >
-One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for an FHIR server and start an authentication sequence. SMART on FHIR uses parameter naming conventions that arenΓÇÖt immediately compatible with Azure Active Directory (Azure AD), the Azure API for FHIR has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
+One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for a FHIR server and start an authentication sequence. Because SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD), Azure Health Data Services (FHIR service) has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
The following tutorial describes the steps to enable SMART on FHIR applications with the FHIR service.
Below tutorial describes steps to enable SMART on FHIR applications with FHIR Se
- An instance of the FHIR Service - .NET SDK 6.0 - [Enable cross-origin resource sharing (CORS)](configure-cross-origin-resource-sharing.md)-- [Register public client application in Azure AD](/register-public-azure-ad-client-app.md)
+- [Register public client application in Azure AD](https://learn.microsoft.com/azure/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app)
- After registering the application, make note of the applicationId for client application. <! Tutorial : To enable SMART on FHIR using APIM, follow below steps
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-data-through-iot-hub.md
# Tutorial: Receive device data through Azure IoT Hub
-The MedTech service may be used with devices created and managed through an [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md) for enhanced workflows and ease of use. This tutorial uses an Azure Resource Manager (ARM) template and a **Deploy to Azure** button to deploy a MedTech service using an Azure IoT Hub for device creation, management, and routing of device messages to the MedTech service device message event hub. The ARM template used in this article is available from the [Azure Quickstart Templates](/azure/azure-quickstart-templates/iotconnectors-with-iothub/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub).
+The MedTech service may be used with devices created and managed through an [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md) for enhanced workflows and ease of use. This tutorial uses an Azure Resource Manager (ARM) template and a **Deploy to Azure** button to deploy a MedTech service using an Azure IoT Hub for device creation, management, and routing of device messages to the MedTech service device message event hub. The ARM template used in this article is available from the [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub).
> [!TIP] > For more information about using Azure PowerShell and CLI to deploy MedTech service ARM templates, see [Using Azure PowerShell and Azure CLI to deploy the MedTech service with Azure Resource Manager templates](deploy-08-new-ps-cli.md).
healthcare-apis How To Use Monitoring Tab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-tab.md
In this article, you'll learn how to use the [MedTech service](iot-connector-ove
:::image type="content" source="media\iot-monitoring-tab\pin-metrics-to-dashboard.png" alt-text="Screenshot the MedTech service monitoring tile with red box around the pin icon." lightbox="media\iot-monitoring-tab\pin-metrics-to-dashboard.png"::: > [!TIP]
- > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure/azure/azure-monitor/essentials/metrics-getting-started)
+ > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started)
## Available metrics for the MedTech service
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md
flowchart TB
id1-- Issued by -- -> id2 -->
-The root certificate authority (CA) is the [Baltimore CyberTrust Root](https://baltimore-cybertrust-root.chain-demos.digicert.com/info/https://docsupdatetracker.net/index.html) certificate. This root certificate is signed by DigiCert, and is widely trusted and stored in many operating systems. For example, both Ubuntu and Windows include it in the default certificate store.
+The root certificate authority (CA) is the [Baltimore CyberTrust Root](https://www.digicert.com/kb/digicert-root-certificates.htm) certificate. This root certificate is signed by DigiCert, and is widely trusted and stored in many operating systems. For example, both Ubuntu and Windows include it in the default certificate store.
Windows certificate store:
iot-edge Iot Edge Limits And Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-limits-and-restrictions.md
Changes made in `config.toml` to `edgeAgent` environment variables like the `hos
When using Node.js to send device to cloud messages with the AMQP protocol to an IoT Edge runtime, messages stop sending after 2047 messages. No error is thrown and the messages eventually start sending again, then cycle repeats. If the client connects directly to Azure IoT Hub, there's no issue with sending messages. This issue has been fixed in IoT Edge 1.2 and later.
+### NTLM Authentication
+
+IoT Edge does not currently support network proxies that use NTLM authentication. Users may consider bypassing the proxy by adding the required endpoints to the firewall allow-list.
+ :::moniker-end <!-- end 1.1 -->
lab-services Class Type Adobe Creative Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-adobe-creative-cloud.md
description: Learn how to set up a lab for digital arts and media classes that u
Last updated 04/21/2021-+ # Set up a lab for Adobe Creative Cloud
lab-services Class Type Arcgis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-arcgis.md
description: Learn how to set up a lab for classes using ArcGIS.
Last updated 02/28/2022- + # Set up a lab for ArcMap\ArcGIS Desktop
lab-services Class Type Autodesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-autodesk.md
description: Learn how to set up labs to teach engineering classes with Autodesk
Last updated 02/02/2022-+ # Set up labs for Autodesk
lab-services Class Type Big Data Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-big-data-analytics.md
Last updated 03/08/2022 -+ # Set up a lab for big data analytics using Docker deployment of HortonWorks Data Platform
lab-services Class Type Database Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-database-management.md
description: Learn how to set up a lab to teach the management of relational dat
Last updated 02/22/2022-+
lab-services Class Type Jupyter Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-jupyter-notebook.md
description: Learn how to set up a lab to teach data science using Python and Ju
Last updated 01/04/2022-+ # Set up a lab to teach data science with Python and Jupyter Notebooks
lab-services Class Type Matlab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-matlab.md
Last updated 04/06/2022 -+ # Setup a lab to teach MATLAB
lab-services Class Type React Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-react-linux.md
Last updated 04/25/2022 -+ # Set up lab for React on Linux
lab-services Class Type React Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-react-windows.md
description: Learn how to set up labs to teach front-end development with React.
Last updated 05/16/2021-+ # Set up lab for React on Windows
lab-services Class Type Rstudio Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-linux.md
description: Learn how to set up labs to teach R using RStudio on Linux
Last updated 08/25/2021-+ # Set up a lab to teach R on Linux
lab-services Class Type Rstudio Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-windows.md
description: Learn how to set up labs to teach R using RStudio on Windows
Last updated 08/26/2021-+ # Set up a lab to teach R on Windows
lab-services Class Type Solidworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-solidworks.md
description: Learn how to set up a lab for engineering courses using SOLIDWORKS.
Last updated 01/05/2022-+ # Set up a lab for engineering classes using SOLIDWORKS
lab-services Class Type Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-sql-server.md
description: Learn how to set up a lab to manage and develop with Azure SQL Data
Last updated 06/26/2020-+ # Set up a lab to manage and develop with SQL Server
lab-services Classroom Labs Fundamentals 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals-1.md
Last updated 05/30/2022 - # Architecture Fundamentals in Azure Lab Services when using lab accounts
lab-services Classroom Labs Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals.md
description: This article will cover the fundamental resources used by Lab Servi
Last updated 05/30/2022-
lab-services Connect Virtual Machine Chromebook Remote Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/connect-virtual-machine-chromebook-remote-desktop.md
Last updated 01/27/2022-+ # Connect to a VM using Remote Desktop Protocol on a Chromebook
lab-services Cost Management Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/cost-management-guide.md
Title: Cost management guide for Azure Lab Services description: Understand the different ways to view costs for Lab Services.--++ Last updated 07/04/2022
lab-services How To Attach External Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-external-storage.md
description: Learn how to set up a lab that uses external file storage in Lab Se
Last updated 03/30/2021-+ # Use external file storage in Lab Services
lab-services How To Create A Lab With Shared Resource 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-a-lab-with-shared-resource-1.md
Last updated 03/03/2022 -+ # How to create a lab with a shared resource in Azure Lab Services when using lab accounts
lab-services How To Create A Lab With Shared Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-a-lab-with-shared-resource.md
Last updated 07/04/2022 -+ # How to create a lab with a shared resource in Azure Lab Services
lab-services How To Enable Nested Virtualization Template Vm Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm-ui.md
description: Learn how to create a template VM with multiple VMs inside. In oth
Last updated 01/27/2022-+ # Enable nested virtualization manually on a template VM in Azure Lab Services
lab-services How To Prepare Windows Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-prepare-windows-template.md
description: Generic steps to prepare a Windows template machine in Lab Services
Last updated 06/26/2020-+ # Guide to setting up a Windows template machine in Azure Lab Services
lab-services How To Setup Lab Gpu 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-setup-lab-gpu-1.md
description: Learn how to set up a lab with graphics processing unit (GPU) virtu
Last updated 06/26/2020-+ # Set up GPU virtual machines in labs contained within lab accounts
lab-services How To Setup Lab Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-setup-lab-gpu.md
description: Learn how to set up a lab with graphics processing unit (GPU) virtu
Last updated 06/09/2022-+ # Set up a lab with GPU virtual machines
lab-services Quick Create Lab Plan Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-python.md
In this article you, as the admin, use Python and the Azure Python SDK to create
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free). - [Setup Local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment).-- [The requirements.txt can be downloaded from the Azure Python samples](https://github.com/RogerBestMsft/azure-samples-python-management/blob/rbest_ALSSample/samples/labservices/requirements.txt)
+- [The requirements.txt can be downloaded from the Azure Python samples](https://github.com/Azure-Samples/azure-samples-python-management/blob/main/samples/labservices/requirements.txt)
## Create a lab plan
lab-services Quick Create Lab Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-python.md
In this quickstart, you, as the educator, create a lab using Python and the Azur
- Azure subscription. If you donΓÇÖt have one, [create a free account](https://azure.microsoft.com/free/) before you begin. - [Setup Local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment).-- [The requirements.txt can be downloaded from the Azure Python samples](https://github.com/RogerBestMsft/azure-samples-python-management/blob/rbest_ALSSample/samples/labservices/requirements.txt)
+- [The requirements.txt can be downloaded from the Azure Python samples](https://github.com/Azure-Samples/azure-samples-python-management/blob/main/samples/labservices/requirements.txt)
- Lab plan. To create a lab plan, see [Quickstart: Create a lab plan using Python and the Azure Python libraries (SDK)](quick-create-lab-plan-python.md). ## Create a lab
machine-learning Azure Machine Learning Release Notes Cli V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes-cli-v2.md
In this article, learn about Azure Machine Learning CLI (v2) releases.
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes-v2%22&locale=en-us`
+## 2022-11-08
+
+### Azure Machine Learning CLI (v2) v2.11.0
+
+- The CLI now depends on azure-ai-ml 1.1.0.
+- `az ml registry`
+ - Added `ml registry delete` command.
+ - Adjusted registry experimental tags and imports to avoid warning printouts for unrelated operations.
+- `az ml environment`
+  - Prevented registering an already existing environment that references a conda file.
+ ## 2022-10-10 ### Azure Machine Learning CLI (v2) v2.10.0
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
For more information about the MLTable YAML schema, see [CLI (v2) mltable YAML s
- [Install and set up the CLI (v2)](how-to-configure-cli.md#install-and-set-up-the-cli-v2) - [Create datastores](how-to-datastore.md#create-datastores) - [Create data assets](how-to-create-data-assets.md#create-data-assets)-- [Read and write data in a job](how-to-read-write-data-v2.md#read-and-write-data-in-a-job)-- [Data administration](how-to-administrate-data-authentication.md#data-administration)
+- [Access data in a job](how-to-read-write-data-v2.md)
+- [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning How To Access Data Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md
+
+ Title: Access data from Azure cloud storage during interactive development
+
+description: Access data from Azure cloud storage during interactive development
++++++ Last updated : 11/17/2022+
+#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
++
+# Access data from Azure cloud storage during interactive development
++
+Typically, the beginning of a machine learning project involves exploratory data analysis (EDA), data pre-processing (cleaning, feature engineering), and building prototypes of ML models to validate hypotheses. This *prototyping* phase of the project is highly interactive and lends itself to developing in a Jupyter notebook or an IDE with a *Python interactive console*. In this article you'll learn how to:
+
+> [!div class="checklist"]
+> * Access data from an Azure ML datastore URI as if it were a file system.
+> * Materialize data into Pandas using the `mltable` Python library.
+> * Materialize Azure ML data assets into Pandas using the `mltable` Python library.
+> * Materialize data through an explicit download with the `azcopy` utility.
+
+## Prerequisites
+
+* An Azure Machine Learning workspace. For more information, see [Manage Azure Machine Learning workspaces in the portal or with the Python SDK (v2)](how-to-manage-workspace.md).
+* An Azure Machine Learning Datastore. For more information, see [Create datastores](how-to-datastore.md).
+
+> [!TIP]
+> The guidance in this article to access data during interactive development applies to any host that can run a Python session - for example: your local machine, a cloud VM, a GitHub Codespace, etc. We recommend using an Azure Machine Learning compute instance - a fully managed and pre-configured cloud workstation. For more information, see [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md).
+
+> [!IMPORTANT]
+> Ensure you have the latest `azureml-fsspec` and `mltable` Python libraries installed in your Python environment:
+>
+> ```bash
+> pip install -U azureml-fsspec mltable
+> ```
+
+## Access data from a datastore URI, like a filesystem (preview)
++
+An Azure ML datastore is a *reference* to an *existing* storage account on Azure. The benefits of creating and using a datastore include:
+
+> [!div class="checklist"]
+> * A common and easy-to-use API to interact with different storage types (Blob/Files/ADLS).
+> * Easier to discover useful datastores when working as a team.
+> * Supports both credential-based (for example, SAS token) and identity-based (Azure Active Directory or managed identity) access to data.
+> * When using credential-based access, the connection information is secured so you don't expose keys in scripts.
+> * Browse data and copy-paste datastore URIs in the Studio UI.
+
+A *Datastore URI* is a Uniform Resource Identifier, which is a *reference* to a storage *location* (path) on your Azure storage account. The format of the datastore URI is:
+
+```python
+# AzureML workspace details:
+subscription = '<subscription_id>'
+resource_group = '<resource_group>'
+workspace = '<workspace>'
+datastore_name = '<datastore>'
+path_on_datastore = '<path>'
+
+# long-form Datastore uri format:
+uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{datastore_name}/paths/{path_on_datastore}'
+```
+
+These Datastore URIs are a known implementation of [Filesystem spec](https://filesystem-spec.readthedocs.io/en/latest/index.html) (`fsspec`): a unified Pythonic interface to local, remote and embedded file systems and bytes storage.
+
+The Azure ML Datastore implementation of `fsspec` automatically handles credential/identity passthrough used by the Azure ML datastore. This means you don't need to expose account keys in your scripts or do additional sign-in procedures on a compute instance.
+
+For example, you can directly use Datastore URIs in Pandas - below is an example of reading a CSV file:
+
+```python
+import pandas as pd
+
+df = pd.read_csv("azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/<filename>.csv")
+df.head()
+```
+
+> [!TIP]
+> Rather than remember the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI by following these steps:
+> 1. Select **Data** from the left-hand menu followed by the **Datastores** tab.
+> 1. Select your datastore name and then **Browse**.
+> 1. Find the file/folder you want to read into pandas, and select the ellipsis (**...**) next to it. Select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script.
+> :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI.":::
+
+You can also instantiate an Azure ML filesystem and run filesystem-like commands such as `ls`, `glob`, `exists`, and `open`. The `open()` method returns a file-like object, which can be passed to any other library that expects to work with Python files, or used by your own code as you would a normal Python file object. These file-like objects respect the use of `with` contexts, for example:
+
+```python
+from azureml.fsspec import AzureMachineLearningFileSystem
+
+# instantiate file system using datastore URI
+fs = AzureMachineLearningFileSystem('azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>')
+
+# list files in the path
+fs.ls()
+# output example:
+# /datastore_name/folder/file1.csv
+# /datastore_name/folder/file2.csv
+
+# use an open context
+with fs.open('/datastore_name/folder/file1.csv') as f:
+ # do some process
+ process_file(f)
+```
+
+### Examples
+
+In this section, we provide some examples of how to use Filesystem spec for common scenarios.
+
+#### Read a single CSV file into pandas
+
+If you have a *single* CSV file, then as outlined above you can read that into pandas with:
+
+```python
+import pandas as pd
+
+df = pd.read_csv("azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/<filename>.csv")
+```
+
+#### Read a folder of CSV files into pandas
+
+The Pandas `read_csv()` method doesn't support reading a folder of CSV files. You need to glob the CSV paths and concatenate them into a data frame with the Pandas `concat()` method. The code below demonstrates how to achieve this concatenation with the Azure ML filesystem:
+
+```python
+import pandas as pd
+from azureml.fsspec import AzureMachineLearningFileSystem
+
+# define the URI - update <> placeholders
+uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/*.csv'
+
+# create the filesystem
+fs = AzureMachineLearningFileSystem(uri)
+
+# append csv files in folder to a list
+dflist = []
+for path in fs.ls():
+ with fs.open(path) as f:
+ dflist.append(pd.read_csv(f))
+
+# concatenate data frames
+df = pd.concat(dflist)
+df.head()
+```
+
+#### Reading CSV files into Dask
+
+Below is an example of reading a CSV file into a Dask data frame:
+
+```python
+import dask.dataframe as dd
+
+df = dd.read_csv("azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/<filename>.csv")
+df.head()
+```
+
+#### Read a folder of parquet files into pandas
+Parquet files are typically written to a folder as part of an ETL process, which can emit files pertaining to the ETL such as progress, commits, etc. Below is an example of the files an ETL process creates (files beginning with `_`) alongside the parquet data files.
++
+In these scenarios, you'll only want to read the parquet files in the folder and ignore the ETL process files. The code below shows how you can use glob patterns to read only parquet files in a folder:
+
+```python
+import pandas as pd
+from azureml.fsspec import AzureMachineLearningFileSystem
+
+# define the URI - update <> placeholders
+uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/*.parquet'
+
+# create the filesystem
+fs = AzureMachineLearningFileSystem(uri)
+
+# append parquet files in folder to a list
+dflist = []
+for path in fs.ls():
+ with fs.open(path) as f:
+ dflist.append(pd.read_parquet(f))
+
+# concatenate data frames
+df = pd.concat(dflist)
+df.head()
+```
+
+#### Accessing data from your Azure Databricks filesystem (`dbfs`)
+
+Filesystem spec (`fsspec`) has a range of [known implementations](https://filesystem-spec.readthedocs.io/en/stable/_modules/index.html), one of which is the Databricks Filesystem (`dbfs`).
+
+To access data from `dbfs` you will need:
+
+- **Instance name**, which is in the form of `adb-<some-number>.<two digits>.azuredatabricks.net`. You can glean this from the URL of your Azure Databricks workspace.
+- **Personal Access Token (PAT)**, for more information on creating a PAT, please see [Authentication using Azure Databricks personal access tokens](/azure/databricks/dev-tools/api/latest/authentication)
+
+Once you have these, you will need to create an environment variable on your compute instance for the PAT token:
+
+```bash
+export ADB_PAT=<pat_token>
+```
+
+You can then access data in Pandas using:
+
+```python
+import os
+import pandas as pd
+
+pat = os.getenv('ADB_PAT')
+path_on_dbfs = '<absolute_path_on_dbfs>' # e.g. /folder/subfolder/file.csv
+
+storage_options = {
+ 'instance':'adb-<some-number>.<two digits>.azuredatabricks.net',
+ 'token': pat
+}
+
+df = pd.read_csv(f'dbfs://{path_on_dbfs}', storage_options=storage_options)
+```
+
+#### Reading images with `pillow`
+
+```python
+from PIL import Image
+from azureml.fsspec import AzureMachineLearningFileSystem
+
+# define the URI - update <> placeholders
+uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/<image.jpeg>'
+
+# create the filesystem
+fs = AzureMachineLearningFileSystem(uri)
+
+with fs.open() as f:
+ img = Image.open(f)
+ img.show()
+```
+
+#### PyTorch custom dataset example
+
+In this example, you create a PyTorch custom dataset for processing images. The assumption is that an annotations file (in CSV format) exists that looks like:
+
+```text
+image_path, label
+0/image0.png, label0
+0/image1.png, label0
+1/image2.png, label1
+1/image3.png, label1
+2/image4.png, label2
+2/image5.png, label2
+```
+
+The images are stored in subfolders according to their label:
+
+```text
+/
+└── 📁images
+ ├── 📁0
+ │ ├── 📷image0.png
+ │ └── 📷image1.png
+ ├── 📁1
+ │ ├── 📷image2.png
+ │ └── 📷image3.png
+ └── 📁2
+ ├── 📷image4.png
+ └── 📷image5.png
+```
+
+A custom Dataset class in PyTorch must implement three functions: `__init__`, `__len__`, and `__getitem__`, which are implemented below:
+
+```python
+import os
+import pandas as pd
+from PIL import Image
+from torch.utils.data import Dataset
+
+class CustomImageDataset(Dataset):
+ def __init__(self, filesystem, annotations_file, img_dir, transform=None, target_transform=None):
+ self.fs = filesystem
+ f = filesystem.open(annotations_file)
+ self.img_labels = pd.read_csv(f)
+ f.close()
+ self.img_dir = img_dir
+ self.transform = transform
+ self.target_transform = target_transform
+
+ def __len__(self):
+ return len(self.img_labels)
+
+ def __getitem__(self, idx):
+ img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
+ f = self.fs.open(img_path)
+ image = Image.open(f)
+ f.close()
+ label = self.img_labels.iloc[idx, 1]
+ if self.transform:
+ image = self.transform(image)
+ if self.target_transform:
+ label = self.target_transform(label)
+ return image, label
+```
+
+You can then instantiate the dataset using:
+
+```python
+from azureml.fsspec import AzureMachineLearningFileSystem
+from torch.utils.data import DataLoader
+
+# define the URI - update <> placeholders
+uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/'
+
+# create the filesystem
+fs = AzureMachineLearningFileSystem(uri)
+
+# create the dataset
+training_data = CustomImageDataset(
+ filesystem=fs,
+ annotations_file='<datastore_name>/<path>/annotations.csv',
+ img_dir='<datastore_name>/<path_to_images>/'
+)
+
+# Preparing your data for training with DataLoaders
+train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
+```
+
+## Materialize data into Pandas using `mltable` library (preview)
++
+Another method for accessing data in cloud storage is to use the `mltable` library. The general format for reading data into pandas using `mltable` is:
+
+```python
+import mltable
+
+# define a path or folder or pattern
+path = {
+ 'file': '<supported_path>'
+ # alternatives
+ # 'folder': '<supported_path>'
+ # 'pattern': '<supported_path>'
+}
+
+# create an mltable from paths
+tbl = mltable.from_delimited_files(paths=[path])
+# alternatives
+# tbl = mltable.from_parquet_files(paths=[path])
+# tbl = mltable.from_json_lines_files(paths=[path])
+# tbl = mltable.from_delta_lake(paths=[path])
+
+# materialize to pandas
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+### Supported paths
+
+The `mltable` library supports reading tabular data from the following path types:
+
+|Location | Examples |
+|||
+|A path on your local computer | `./home/username/data/my_data` |
+|A path on a public http(s) server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
+|A path on Azure Storage | `wasbs://<container_name>@<account_name>.blob.core.windows.net/<path>` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>` |
+|A long-form Azure ML datastore | `azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/<path>` |
+
+> [!NOTE]
+> `mltable` does user credential passthrough for paths on Azure Storage and Azure ML datastores. If you do not have permission to the data on the underlying storage then you will not be able to access the data.
+
+### Files, folders and globs
+
+`mltable` supports reading from:
+
+- file(s), for example: `abfss://<file_system>@<account_name>.dfs.core.windows.net/my-csv.csv`
+- folder(s), for example `abfss://<file_system>@<account_name>.dfs.core.windows.net/my-folder/`
+- [glob](https://wikipedia.org/wiki/Glob_(programming)) pattern(s), for example `abfss://<file_system>@<account_name>.dfs.core.windows.net/my-folder/*.csv`
+- Or, a combination of files, folders, globbing patterns
+
+The flexibility of `mltable` allows you to materialize data into a single dataframe from a combination of local/cloud storage and combinations of files/folder/globs. For example:
+
+```python
+path1 = {
+ 'file': 'abfss://filesystem@account1.dfs.core.windows.net/my-csv.csv'
+}
+
+path2 = {
+ 'folder': './home/username/data/my_data'
+}
+
+path3 = {
+ 'pattern': 'abfss://filesystem@account2.dfs.core.windows.net/folder/*.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path1, path2, path3])
+```
+
+### Supported file formats
+`mltable` supports the following file formats:
+
+- Delimited Text (for example: CSV files): `mltable.from_delimited_files(paths=[path])`
+- Parquet: `mltable.from_parquet_files(paths=[path])`
+- Delta: `mltable.from_delta_lake(paths=[path])`
+- JSON lines format: `mltable.from_json_lines_files(paths=[path])`
+
+### Examples
+
+#### Read a CSV file
+
+##### [ADLS gen2](#tab/adls)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+```python
+import mltable
+
+path = {
+ 'file': 'abfss://<filesystem>@<account>.dfs.core.windows.net/<folder>/<file_name>.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+##### [Blob storage](#tab/blob)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+```python
+import mltable
+
+path = {
+ 'file': 'wasbs://<container>@<account>.blob.core.windows.net/<folder>/<file_name>.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+##### [Azure ML Datastore](#tab/datastore)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+```python
+import mltable
+
+path = {
+ 'file': 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/<folder>/<file>.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+> [!TIP]
+> Rather than remember the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI by following these steps:
+> 1. Select **Data** from the left-hand menu followed by the **Datastores** tab.
+> 1. Select your datastore name and then **Browse**.
+> 1. Find the file/folder you want to read into pandas, and select the ellipsis (**...**) next to it. Select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script.
+> :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI.":::
+
+##### [HTTP Server](#tab/http)
+```python
+import mltable
+
+path = {
+ 'file': 'https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+++
+#### Read parquet files in a folder
+The example code below shows how `mltable` can use [glob](https://wikipedia.org/wiki/Glob_(programming)) patterns - such as wildcards - to ensure only the parquet files are read.
+
+##### [ADLS gen2](#tab/adls)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+```python
+import mltable
+
+path = {
+ 'pattern': 'abfss://<filesystem>@<account>.dfs.core.windows.net/<folder>/*.parquet'
+}
+
+tbl = mltable.from_parquet_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+##### [Blob storage](#tab/blob)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+```python
+import mltable
+
+path = {
+ 'pattern': 'wasbs://<container>@<account>.blob.core.windows.net/<folder>/*.parquet'
+}
+
+tbl = mltable.from_parquet_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+##### [Azure ML Datastore](#tab/datastore)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+```python
+import mltable
+
+path = {
+ 'pattern': 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/<folder>/*.parquet'
+}
+
+tbl = mltable.from_parquet_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+> [!TIP]
+> Rather than remember the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI by following these steps:
+> 1. Select **Data** from the left-hand menu followed by the **Datastores** tab.
+> 1. Select your datastore name and then **Browse**.
+> 1. Find the file/folder you want to read into pandas, and select the ellipsis (**...**) next to it. Select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script.
+> :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI.":::
+
+##### [HTTP Server](#tab/http)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+> [!IMPORTANT]
+> To glob the pattern on a public HTTP server, there must be access at the **folder** level.
+
+```python
+import mltable
+
+path = {
+ 'pattern': '<https_address>/<folder>/*.parquet'
+}
+
+tbl = mltable.from_parquet_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+++
+### Reading data assets
+In this section, you'll learn how to read your Azure ML data assets into pandas.
+
+#### Table asset
+
+If you've previously created a Table asset in Azure ML (an `mltable`, or a V1 `TabularDataset`), you can load that into pandas using:
+
+```python
+import mltable
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+data_asset = ml_client.data.get(name="<name_of_asset>", version="<version>")
+
+tbl = mltable.load(f'azureml:/{data_asset.id}')
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+#### File asset
+
+If you've registered a File asset that you want to read into a Pandas data frame - for example, a CSV file - you can achieve this using:
+
+```python
+import mltable
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+data_asset = ml_client.data.get(name="<name_of_asset>", version="<version>")
+
+path = {
+ 'file': data_asset.path
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+#### Folder asset
+
+If you've registered a Folder asset (`uri_folder` or a V1 `FileDataset`) that you want to read into a Pandas data frame - for example, a folder containing a CSV file - you can achieve this using:
+
+```python
+import mltable
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+data_asset = ml_client.data.get(name="<name_of_asset>", version="<version>")
+
+path = {
+ 'folder': data_asset.path
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+## A note on reading and processing large data volumes with Pandas
+> [!TIP]
+> Pandas is not designed to handle large datasets - you will only be able to process data that can fit into the memory of the compute instance.
+>
+> For large datasets, we recommend that you use AzureML managed Spark, which provides the [PySpark Pandas API](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html).
+
+You may wish to iterate quickly on a smaller subset of a large dataset before scaling up to a remote asynchronous job. `mltable` provides in-built functionality to get samples of large data using the [take_random_sample](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-take-random-sample) method:
+
+```python
+import mltable
+
+path = {
+ 'file': 'https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+# take a random 30% sample of the data
+tbl = tbl.take_random_sample(probability=.3)
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+You can also take subsets of large data by using:
+
+- [filter](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-filter)
+- [keep_columns](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-keep-columns)
+- [drop_columns](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-drop-columns)
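+
+For example, here's a minimal sketch that chains these methods on the public titanic CSV used above. The column names (`Age`, `Fare`, `Survived`) come from that dataset; the exact filter expression syntax shown is an assumption, so check the linked `filter` reference for details:
+
+```python
+import mltable
+
+path = {
+    'file': 'https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+
+# keep only the columns needed for the analysis
+tbl = tbl.keep_columns(['Age', 'Fare', 'Survived'])
+
+# filter rows before materializing (assumed expression syntax)
+tbl = tbl.filter('col("Age") > 30')
+
+df = tbl.to_pandas_dataframe()
+df.head()
+```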
++
+## Downloading data using the `azcopy` utility
+
+You may want to download the data to the local SSD of your host (local machine, cloud VM, Azure ML Compute Instance) and use the local filesystem. You can do this with the `azcopy` utility, which is pre-installed on an Azure ML compute instance. If you are **not** using an Azure ML compute instance or a Data Science Virtual Machine (DSVM), you may need to install `azcopy`. For more information, see [azcopy](../storage/common/storage-ref-azcopy.md).
+
+> [!CAUTION]
+> We do not recommend downloading data in the `/home/azureuser/cloudfiles/code` location on a compute instance. This is designed to store notebook and code artifacts, **not** data. Reading data from this location will incur significant performance overhead when training. Instead, we recommend storing your data in `/home/azureuser`, which is on the local SSD of the compute node.
+
+Open a terminal and create a new directory, for example:
+
+```bash
+mkdir /home/azureuser/data
+```
+
+Sign-in to azcopy using:
+
+```bash
+azcopy login
+```
+
+Next, you can copy data using a storage URI
+
+```bash
+SOURCE=https://<account_name>.blob.core.windows.net/<container>/<path>
+DEST=/home/azureuser/data
+azcopy cp $SOURCE $DEST
+```
machine-learning How To Access Terminal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-terminal.md
Access all Git operations from the terminal. All Git files and folders will be s
> [!NOTE] > Add your files and folders anywhere under the **~/cloudfiles/code/Users** folder so they will be visible in all your Jupyter environments.
+To integrate Git with your Azure Machine Learning workspace, see [Git integration for Azure Machine Learning](concept-train-model-git-integration.md).
++ ## Install packages Install packages from a terminal window. Install Python packages into the **Python 3.8 - AzureML** environment. Install R packages into the **R** environment.
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
Last updated 09/22/2022
> * [v1](./v1/how-to-create-register-datasets.md) > * [v2 (current version)](how-to-create-data-assets.md)
-In this article, you learn how to create a data asset in Azure Machine Learning. By creating a data asset, you create a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. You can create Data from AzureML datastores, Azure Storage, public URLs, and local files.
+In this article, you learn how to create a data asset in Azure Machine Learning. By creating a data asset, you create a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. You can create data assets from Azure ML datastores, Azure Storage, public URLs, and local files.
> [!IMPORTANT]
-> If you didn't creat/register the data source as a data asset, you can still [consume the data via specifying the data path in a job](how-to-read-write-data-v2.md#read-data-in-a-job) without benefits below.
-
-The benefits of creating data assets are:
-
-* You can **share and reuse data** with other members of the team such that they don't need to remember file locations.
-
-* You can **seamlessly access data** during model training (on any supported compute type) without worrying about connection strings or data paths.
-
-* You can **version** the data.
--
+> If you just want to access your data in an interactive session (for example, a Notebook) or a job, you are **not** required to create a data asset first. Creating a data asset would be an unnecessary step for you.
+>
+> For more information about accessing your data in a notebook, please see [Access data from Azure cloud storage for interactive development](how-to-access-data-interactive.md).
+>
+> For more information about accessing your data - both local and cloud storage - in a job, please see [Access data in a job](how-to-read-write-data-v2.md).
+>
+> Creating data assets is useful when you want to:
+> - **Share and reuse** data with other members of your team so that they don't need to remember file locations in cloud storage.
+> - **Version** the metadata such as location, description and tags.
+
## Prerequisites
To create a Folder data asset in the Azure Machine Learning studio, use the foll
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
-1. Under __Assets__ in the left navigation, select __Data__. On the Data assets tab, select Create
+1. Under **Assets** in the left navigation, select **Data**. On the Data assets tab, select Create
:::image type="content" source="./media/how-to-create-data-assets/data-assets-create.png" alt-text="Screenshot highlights Create in the Data assets tab."::: 1. Give your data asset a name and optional description. Then, select the "Folder (uri_folder)" option under Type, if it is not already selected.
To create a File data asset in the Azure Machine Learning studio, use the follow
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
-1. Under __Assets__ in the left navigation, select __Data__. On the Data assets tab, select Create
+1. Under **Assets** in the left navigation, select **Data**. On the Data assets tab, select Create
:::image type="content" source="./media/how-to-create-data-assets/data-assets-create.png" alt-text="Screenshot highlights Create in the Data assets tab."::: 1. Give your data asset a name and optional description. Then, select the "File (uri_file)" option under Type.
paths:
- pattern: ./*.txt transformations: - read_delimited:
- delimiter: ','
+ delimiter: ,
encoding: ascii header: all_files_same_headers ```
For more transformations available in `mltable`, please look into [reference-yam
### Create an MLTable artifact via Python SDK: from_* If you would like to create an MLTable object in memory via Python SDK, you could use from_* methods.
-The from_* methods does not materialize the data, but rather stores is as a transformation in the MLTable definition.
+The from_* methods do not materialize the data, but rather store it as a transformation in the MLTable definition.
+For example, you can use from_delta_lake() to create an in-memory MLTable artifact to read delta lake data from the path `delta_table_path`. ```python
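# A rough sketch only - the exact from_delta_lake() arguments are assumptions, not confirmed by this article.
import mltable

delta_table_path = "abfss://<container>@<account>.dfs.core.windows.net/<path-to-delta-table>"
tbl = mltable.from_delta_lake(delta_table_path)  # in-memory MLTable; data is not materialized yet
df = tbl.to_pandas_dataframe()                   # materializes the data into a Pandas DataFrame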
To create a Table data asset in the Azure Machine Learning studio, use the follo
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
-1. Under __Assets__ in the left navigation, select __Data__. On the Data assets tab, select Create
+1. Under **Assets** in the left navigation, select **Data**. On the Data assets tab, select Create
:::image type="content" source="./media/how-to-create-data-assets/data-assets-create.png" alt-text="Screenshot highlights Create in the Data assets tab."::: 1. Give your data asset a name and optional description. Then, select the "Table (mltable)" option under Type.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
This setting can be configured during CI creation or for existing CIs via the fo
```YAML # Note that this is just a snippet for the idle shutdown property. Refer to the "Create" Azure CLI section for more information.
+ # Note that idle_time_before_shutdown has been deprecated.
idle_time_before_shutdown_minutes: 30 ``` * Python SDKv2: only configurable during new CI creation ```Python
+ # Note that idle_time_before_shutdown has been deprecated.
ComputeInstance(name=ci_basic_name, size="STANDARD_DS3_v2", idle_time_before_shutdown_minutes="30") ```
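For context, a minimal end-to-end sketch of creating such a compute instance with the v2 Python SDK might look like the following (the subscription, resource group, workspace, and compute names are placeholders, not values from this article):

```python
# Minimal sketch: create a compute instance with idle shutdown configured
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ComputeInstance
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

ci = ComputeInstance(
    name="ci-idle-example",
    size="STANDARD_DS3_v2",
    idle_time_before_shutdown_minutes="30",  # shut down after 30 minutes of inactivity
)
ml_client.compute.begin_create_or_update(ci).result()
```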
machine-learning How To Create Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-workspace-template.md
Previously updated : 08/12/2022 Last updated : 11/09/2022 #Customer intent: As a DevOps person, I need to automate or customize the creation of Azure Machine Learning by using templates.
For more information, see [Deploy an application with Azure Resource Manager tem
[!INCLUDE [register-namespace](../../includes/machine-learning-register-namespace.md)]
+* The example template may not always use the latest API version for Azure Machine Learning. Before using the template, we recommend modifying it to use the latest API versions. For information on the latest API versions for Azure Machine Learning, see the [Azure Machine Learning REST API](/rest/api/azureml/).
+
+ > [!TIP]
+ > Each Azure service has its own set of API versions. For information on the API for a specific service, check the service information in the [Azure REST API reference](/rest/api/azure/).
+
+ To update the API version, find the `"apiVersion": "YYYY-MM-DD"` entry for the resource type and update it to the latest version. The following example is an entry for Azure Machine Learning:
+
+ ```json
+ "type": "Microsoft.MachineLearningServices/workspaces",
+ "apiVersion": "2020-03-01",
+ ```
+ ### Multiple workspaces in the same VNet The template doesn't support multiple Azure Machine Learning workspaces deployed in the same VNet. This is because the template creates new DNS zones during deployment.
New-AzResourceGroupDeployment `
-<!-- Workspaces need a private endpoint when associated resources are behind a virtual network to work properly. To set up a private endpoint for the workspace with a new virtual network:
-
-> [!IMPORTANT]
-> The deployment is only valid in regions which support private endpoints.
-
-# [Azure CLI](#tab/azcli)
-
-```azurecli
-az deployment group create \
- --name "exampledeployment" \
- --resource-group "examplegroup" \
- --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.machinelearningservices/machine-learning-workspace-vnet/azuredeploy.json" \
- --parameters workspaceName="exampleworkspace" \
- location="eastus" \
- vnetOption="new" \
- vnetName="examplevnet" \
- privateEndpointType="AutoApproval"
-```
-
-# [Azure PowerShell](#tab/azpowershell)
-
-```azurepowershell
-New-AzResourceGroupDeployment `
- -Name "exampledeployment" `
- -ResourceGroupName "examplegroup" `
- -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.machinelearningservices/machine-learning-workspace-vnet/azuredeploy.json" `
- -workspaceName "exampleworkspace" `
- -location "eastus" `
- -vnetOption "new" `
- -vnetName "examplevnet" `
- -privateEndpointType "AutoApproval"
-```
-
- -->
### Use an existing virtual network & resources
To deploy a workspace with existing associated resources you have to set the **v
```
-<!-- Workspaces need a private endpoint when associated resources are behind a virtual network to work properly. To set up a private endpoint for the workspace with an existing virtual network:
-
-> [!IMPORTANT]
-> The deployment is only valid in regions which support private endpoints.
-
-# [Azure CLI](#tab/azcli)
-
-```azurecli
-az deployment group create \
- --name "exampledeployment" \
- --resource-group "examplegroup" \
- --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.machinelearningservices/machine-learning-workspace-vnet/azuredeploy.json" \
- --parameters workspaceName="exampleworkspace" \
- location="eastus" \
- vnetOption="existing" \
- vnetName="examplevnet" \
- vnetResourceGroupName="rg" \
- privateEndpointType="AutoApproval" \
- subnetName="subnet" \
- subnetOption="existing"
-```
-
-# [Azure PowerShell](#tab/azpowershell)
-
-```azurepowershell
-New-AzResourceGroupDeployment `
- -Name "exampledeployment" `
- -ResourceGroupName "examplegroup" `
- -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.machinelearningservices/machine-learning-workspace-vnet/azuredeploy.json" `
- -workspaceName "exampleworkspace" `
- -location "eastus" `
- -vnetOption "existing" `
- -vnetName "examplevnet" `
- -vnetResourceGroupName "rg"
- -privateEndpointType "AutoApproval"
- -subnetName "subnet"
- -subnetOption "existing"
-```
-
- -->
- ## Use the Azure portal 1. Follow the steps in [Deploy resources from custom template](../azure-resource-manager/templates/deploy-portal.md#deploy-resources-from-custom-template). When you arrive at the __Select a template__ screen, choose the **quickstarts** entry. When it appears, select the link labeled "Click here to open template repository". This link takes you to the `quickstarts` directory in the Azure quickstart templates repository.
machine-learning How To Manage Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-files.md
To create a new file in a different folder:
## Manage files with Git
-[Use a compute instance terminal](how-to-access-terminal.md#git) to clone and manage Git repositories.
+[Use a compute instance terminal](how-to-access-terminal.md#git) to clone and manage Git repositories. To integrate Git with your Azure Machine Learning workspace, see [Git integration for Azure Machine Learning](concept-train-model-git-integration.md).
## Clone samples
machine-learning How To Manage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md
When you provide a model you want to register, you'll need to specify a `path` p
|A path on an AzureML Datastore | `azureml://datastores/<datastore-name>/paths/<path_on_datastore>` | |A path from an AzureML job | `azureml://jobs/<job-name>/outputs/<output-name>/paths/<path-to-model-relative-to-the-named-output-location>` | |A path from an MLflow job | `runs:/<run-id>/<path-to-model-relative-to-the-root-of-the-artifact-location>` |
+|A path from a Model Asset in AzureML Workspace | `azureml:<model-name>:<version>`|
+|A path from a Model Asset in AzureML Registry | `azureml://registries/<registry-name>/models/<model-name>/versions/<version>`|
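As a rough sketch of how one of these path formats might be used with the v2 Python SDK (all names and IDs are placeholders, not values from this article):

```python
# Minimal sketch: register a model from a job output path
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

model = Model(
    path="azureml://jobs/<job-name>/outputs/<output-name>/paths/model/",
    name="my-registered-model",
    type=AssetTypes.CUSTOM_MODEL,
    description="Model registered from a job output",
)
ml_client.models.create_or_update(model)
```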
## Supported modes
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
Title: Read and write data in jobs
+ Title: Access data in a job
description: Learn how to read and write data in Azure Machine Learning training jobs.
#Customer intent: As an experienced Python developer, I need to read in my data to make it available to a remote compute to train my machine learning models.
-# Read and write data in a job
+# Access data in a job
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
The following example defines a pipeline containing three nodes and moves data b
* [Train models](how-to-train-model.md) * [Tutorial: Create production ML pipelines with Python SDK v2](tutorial-pipeline-python-sdk.md)
-* Learn more about [Data in Azure Machine Learning](concept-data.md)
+* Learn more about [Data in Azure Machine Learning](concept-data.md)
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
AzureML allows you to either use a curated (or ready-made) environment or create
In this section, we'll begin by introducing the data for training. We'll then cover how to run a training job, using a training script that we've provided. You'll learn to build the training job by configuring the command for running the training script. Then, you'll submit the training job to run in AzureML. ### Obtain the training data
-You'll use data that is stored on a public blob as a [zip file](https://azureopendatastorage.blob.core.windows.net/testpublic/temp/fowl_data.zip). This dataset consists of about 120 training images each for two classes (turkeys and chickens), with 100 validation images for each class. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/https://docsupdatetracker.net/index.html). We'll download and extract the dataset as part of our training script `pytorch_train.py`.
+You'll use data that is stored on a public blob as a [zip file](https://azuremlexamples.blob.core.windows.net/datasets/fowl_data.zip). This dataset consists of about 120 training images each for two classes (turkeys and chickens), with 100 validation images for each class. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/index.html). We'll download and extract the dataset as part of our training script `pytorch_train.py`.
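As a hedged illustration of that download-and-extract step (the local file and folder names are assumptions, not taken from `pytorch_train.py`):

```python
# Minimal sketch: download and extract the dataset used for training
import urllib.request
import zipfile

data_url = "https://azuremlexamples.blob.core.windows.net/datasets/fowl_data.zip"
urllib.request.urlretrieve(data_url, "fowl_data.zip")
with zipfile.ZipFile("fowl_data.zip") as zf:
    zf.extractall("fowl_data")  # extracted folder name is an assumption
```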
### Prepare the training script
In this article, you trained and registered a deep learning neural network using
- [Track run metrics during training](how-to-log-view-metrics.md) - [Tune hyperparameters](how-to-tune-hyperparameters.md)-- [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
+- [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
The specified VM Size failed to provision due to a lack of Azure Machine Learnin
Below is a list of reasons you might run into this error: * [Resource request was greater than limits](#resource-requests-greater-than-limits)
+* [Subscription does not exist](#subscription-does-not-exist)
* [Startup task failed due to authorization error](#authorization-error) * [Startup task failed due to incorrect role assignments on resource](#authorization-error) * [Unable to download user container image](#unable-to-download-user-container-image)
Below is a list of reasons you might run into this error:
Requests for resources must be less than or equal to limits. If you don't set limits, we set default values when you attach your compute to an Azure Machine Learning workspace. You can check limits in the Azure portal or by using the `az ml compute show` command.
+#### Subscription does not exist
+
+The Azure subscription that you enter must exist. This error occurs when the referenced Azure subscription can't be found, most often because of a typo in the subscription ID. Double-check that the subscription ID was entered correctly and that it is currently active.
+
+For more information about Azure subscriptions, refer to the [prerequisites section](#prerequisites).
+ #### Authorization error After you provisioned the compute resource, during deployment creation, Azure tries to pull the user container image from the workspace private Azure Container Registry (ACR) and mount the user model and code artifacts into the user container from the workspace storage account.
machine-learning Migrate To V2 Command Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-command-job.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
For more information, see: * [V1 - Experiment](/python/api/azureml-core/azureml.core.experiment)
-* [V2 - Command Job](/python/api/azure-ai-ml/azure.ai.ml.md#azure-ai-ml-command)
+* [V2 - Command Job](/python/api/azure-ai-ml/azure.ai.ml#azure-ai-ml-command)
* [Train models with the Azure ML Python SDK v2](how-to-train-sdk.md)
machine-learning Tutorial Create Secure Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace-template.md
The Bicep template is made up of the [main.bicep](https://github.com/Azure/azure
| [machinelearningcompute.bicep](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure/modules/machinelearningcompute.bicep) | Defines an Azure Machine Learning compute cluster and compute instance. | | [privateaks.bicep](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure/modules/privateaks.bicep) | Defines an Azure Kubernetes Services cluster instance. |
+> [!IMPORTANT]
+> The example templates may not always use the latest API version for Azure Machine Learning. Before using the template, we recommend modifying it to use the latest API versions. For information on the latest API versions for Azure Machine Learning, see the [Azure Machine Learning REST API](/rest/api/azureml/).
+>
+> Each Azure service has its own set of API versions. For information on the API for a specific service, check the service information in the [Azure REST API reference](/rest/api/azure/).
+>
+> To update the API version, find the `Microsoft.MachineLearningServices/<resource>` entry for the resource type and update it to the latest version. The following example is an entry for the Azure Machine Learning workspace that uses an API version of `2022-05-01`:
+>
>```bicep
+>resource machineLearning 'Microsoft.MachineLearningServices/workspaces@2022-05-01' = {
+>```
+ # [Terraform](#tab/terraform) The template consists of multiple files. The following table describes what each file is responsible for:
marketplace Deprecate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/deprecate-vm.md
This article describes how to deprecate or restore virtual machine images, plans
## What is deprecation?
-Deprecation is the delisting of a VM offer or a subset of the offer from Azure Marketplace so that it is no longer available for customers to deploy additional instances. Reasons to deprecate may vary. Common examples are due to security issues or end of life. You can deprecate image versions, plans, or an entire VM offer:
-
+Deprecation is the removal of a VM offer or a subset of the offer from Azure Marketplace so that it is no longer available for customers to deploy additional instances. Reasons to deprecate may vary. Common examples are due to security issues or end of life. You can deprecate image versions, plans, or an entire VM offer:
- **Deprecation of an image version** ΓÇô The removal of an individual VM image version - **Deprecation of a plan** ΓÇô The removal of a plan and subsequently all images within the plan - **Deprecation of an offer** ΓÇô The removal of an entire VM offer, including all plans within the offer and subsequently all images within each plan
Before the scheduled deprecation date:
- Customers with active deployments are notified. - Customers can continue to deploy new instances up until the deprecation date.-- If deprecating an offer or plan, the offer or plan will no longer be available in the marketplace. This is to reduce the discoverability of the offer or plan.-
+- If deprecating an offer, the offer will no longer be searchable in the marketplace upon scheduling the deprecation. This is to reduce the discoverability of the offer.
After the scheduled deprecation date: - Customers will not be able to deploy new instances using the affected images. If deprecating a plan, all images within the plan will no longer be available and if deprecating an offer, all images within the offer will no longer be available following deprecation. - Active VM instances will not be impacted.-- Existing virtual machine scale sets (VMSS) deployments cannot be scaled out if configured with any of the impacted images. If deprecating a plan or offer, all existing VMSS deployments pinned to any image within the plan or offer respectively cannot be scaled out.-
+- Existing virtual machine scale set deployments cannot be scaled out. If deprecating a plan or offer, all existing scale set deployments using any image within the plan or offer, respectively, cannot be scaled out.
> [!TIP]
-> Before you deprecate an offer or plan, make sure you understand the current usage by reviewing the [Usage dashboard in commercial marketplace analytics](usage-dashboard.md). If usage is high, consider hiding the plan or offer to minimize discoverability within the commercial marketplace. This will steer new customers towards other available options.
+> Before you deprecate an offer, plan, or image, make sure you understand the current usage by reviewing the [Usage dashboard in commercial marketplace analytics](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/usage). If usage is high, consider hiding the plan or offer to minimize discoverability within the commercial marketplace and steer new customers towards other available options. To hide an offer, select the **Hide plan** checkbox on the **Pricing and Availability** page of each individual plan in the offer.
## Deprecate an image Keep the following things in mind when deprecating an image: - You can deprecate any image within a plan.-- Each plan must have at least one image.-- Publish the offer after scheduling the deprecation of an image.-- Images that are published to preview can be deprecated or deleted immediately.-
+- Each plan must have at least one active image.
+- You must publish the offer after scheduling the deprecation of an image.
+- Images that are published only to preview and have never been published live can be deleted immediately when the offer is still in preview state.
**To deprecate an image**: 1. On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, in the **Offer alias** column, select the offer with the image you want to deprecate.
Keep the following things in mind when deprecating an image:
Keep the following things in mind when restoring a deprecated image: -- Publish the offer after restoring an image for it to become available to customers.
+- You must publish the offer after restoring an image for it to become available to customers.
- You can undo or cancel the deprecation anytime up until the scheduled date. - You can restore an image for a period of time after deprecation. After the window has expired, the image can no longer be restored.
Keep the following things in mind when restoring a deprecated image:
1. In the **Action** column, select one of the following: - If the deprecation date shown in the **Status** column is in the future, you can select **Cancel deprecation**. The image version will then be listed under the Active tab. - If the deprecation date shown in the **Status** column is in the past, select **Restore image**. The image is then listed on the **Active** tab.
- > [!NOTE]
- > If the image can no longer be restored, then no actions will be available.
+1. > [!NOTE]
+ > If the image can no longer be restored, then no actions will be available.
+
1. Save your changes on the **Technical configuration** page. 1. For the change to take effect, select **Review and publish** and publish the offer. ## Deprecate a plan
-Keep the following things in ming when deprecating a plan:
--- Publish the offer after scheduling the deprecation of a plan.
+Keep the following things in mind when deprecating a plan:
+- You must publish the offer after scheduling the deprecation of a plan.
- Upon scheduling the deprecation of a plan, free trials are disabled immediately. - If a test drive is enabled on your offer and itΓÇÖs configured to use the plan thatΓÇÖs being deprecated, be sure to reconfigure the test drive to use another plan in the offer. Otherwise, disable the test drive on the **Offer Setup** page.
Keep the following things in ming when deprecating a plan:
Keep the following things in mind when restoring a plan: - Ensure that there is at least one active image version available on the **Technical Configuration** page of the plan. You can either restore a deprecated image or provide a new one.-- Publish the offer after restoring a plan for it to become available to customers.-
+- You must publish the offer after restoring a plan for it to become available to customers.
**To restore a plan**: 1. On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, in the **Offer alias** column, select the offer with the plan you want to restore.
On the **Offer Overview** page, you can deprecate the entire offer. This depreca
Keep the following things in mind when deprecating an offer: -- The deprecation will be scheduled 90 days into the future and customers will be notified.
+- The deprecation will be scheduled 90 days into the future immediately upon confirmation, and customers will be notified.
- Test drive and any free trials will be disabled immediately upon scheduling deprecation of an offer. **To deprecate an offer**:
Keep the following things in mind when deprecating an offer:
1. On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, in the **Offer alias** column, select the offer you want to deprecate. 1. On the **Offer overview** page, in the upper right, select **Deprecate offer**. 1. In the confirmation dialog box that appears, enter the Offer ID and then confirm that you want to deprecate the offer.
- > [!NOTE]
- > On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, the **Status column** of the offer will say **Deprecation scheduled**. On the **Offer overview** page, under **Publish status**, the scheduled deprecation date is shown.
-
+1. > [!NOTE]
+ > On the **Offer overview** page, under **Publish status**, the scheduled deprecation date is shown.
+
## Restore a deprecated offer You can restore an offer only if the offer contains at least one active plan and at least one active image.
You can restore an offer only if the offer contains at least one active plan and
1. Ensure that there is at least one active image version available on the **Technical Configuration** page of the plan. Note that all deprecated images are listed under **VM Images** on the **Deprecated** tab. You can either [restore a deprecated image](#restore-a-deprecated-image) or [add a new VM image](azure-vm-plan-technical-configuration.md#vm-images). Remember, if the restore window has expired, the image can no longer be restored. 1. Save your changes on the **Technical configuration** page. 1. For the changes to take effect, select **Review and publish** and publish the offer.++
marketplace Publisher Guide By Offer Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/publisher-guide-by-offer-type.md
The following table shows the commercial marketplace offer types in Partner Cent
| [**Azure Application**](plan-azure-application-offer.md) | There are two kinds of Azure application plans: _solution template_ and _managed application_. Both plan types support automating the deployment and configuration of a solution beyond a single virtual machine (VM). You can automate the process of providing multiple resources, including VMs, networking, and storage resources to provide complex solutions, such as IaaS solutions. Both plan types can employ many different kinds of Azure resources, including but not limited to VMs.<ul><li>**Solution template** plans are one of the main ways to publish a solution in the commercial marketplace. Solution template plans are not transactable in the commercial marketplace, but they can be used to deploy paid VM offers that are billed through the commercial marketplace. Use the solution template plan type when the customer will manage the solution and the transactions are billed through another plan.</li><br><li>**Managed application** plans enable you to easily build and deliver fully managed, turnkey applications for your customers. They have the same capabilities as solution template plans, with some key differences:</li><ul><li> The resources are deployed to a resource group and are managed by the publisher of the app. The resource group is present in the consumer's subscription, but an identity in the publisher's tenant has access to the resource group.</li><li>As the publisher, you specify the cost for ongoing support of the solution and transactions are supported through the commercial marketplace.</li></ul>Use the managed application plan type when you or your customer requires that the solution is managed by a partner or you will deploy a subscription-based solution.</ul> | | [**Azure Container**](marketplace-containers.md) | Use the Azure Container offer type when your solution is a Docker container image provisioned as a Kubernetes-based Azure container service. | | [**Azure virtual machine**](marketplace-virtual-machines.md) | Use the virtual machine offer type when you deploy a virtual appliance to the subscription associated with your customer. |
-| [**Consulting service**](./plan-consulting-service-offer.md) | Consulting services help to connect customers with services to support and extend their use of Azure, Dynamics 365, or Power Suite services.|
+| [**Consulting service**](./plan-consulting-service-offer.md) | Consulting services help to connect customers with services to support and extend their use of Azure, Dynamics 365, Microsoft 365, or Power Suite services.|
| [**Dynamics 365**](marketplace-dynamics-365.md) | Publish AppSource offers that build on or extend Dynamics 365 products.| | [**IoT Edge module**](marketplace-iot-edge.md) | Azure IoT Edge modules are the smallest computation units managed by IoT Edge, and can contain Microsoft services (such as Azure Stream Analytics), 3rd-party services, or your own solution-specific code. | | [**Managed service**](./plan-managed-service-offer.md) | Create managed service offers and manage customer-delegated subscriptions or resource groups through [Azure Lighthouse](../lighthouse/overview.md).|
migrate Add Server Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/add-server-credentials.md
ms. Previously updated : 09/12/2022 Last updated : 11/13/2022 # Provide server credentials to discover software inventory, dependencies, web apps, and SQL Server instances and databases
Type of credentials | Description
**Non-domain credentials (Windows/Linux)** | You can add **Windows (Non-domain)** or **Linux (Non-domain)** by selecting the required option from the drop-down in the **Add credentials** modal. <br/><br/> You need to specify a friendly name for credentials, username, and password. **SQL Server Authentication credentials** | You can add **SQL Server Authentication** credentials by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> You need to specify a friendly name for credentials, username, and password. <br/><br/> You can add this type of credentials to discover SQL Server instances and databases running in your VMware environment, if you've configured SQL Server authentication mode on your SQL Servers.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.<br/><br/> You need to provide at least one successfully validated domain credential or at least one Windows (Non-domain) credential so that the appliance can complete the software inventory to discover SQL installed on the servers before it uses the SQL Server authentication credentials to discover the SQL Server instances and databases.
-> [!Note]
-> Currently, the SQL Server authentication credentials can only be provided in appliance used for discovery and assessment of servers running in VMware environment.
- Check the permissions required on the Windows/Linux credentials to perform the software inventory, agentless dependency analysis and discover web apps, and SQL Server instances and databases. ### Required permissions
Feature | Windows credentials | Linux credentials
**Software inventory** | Guest user account | Regular/normal user account (non-sudo access permissions) **Discovery of SQL Server instances and databases** | User account that is member of the sysadmin server role. | _Not supported currently_ **Discovery of ASP.NET web apps** | Domain or non-domain (local) account with administrative permissions | _Not supported currently_
-**Agentless dependency analysis** | Domain or non-domain (local) account with administrative permissions | Root user account, or <br/> an account with these permissions on /bin/netstat and /bin/ls files: CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE.<br/><br/> Set these capabilities using the following commands: <br/><br/> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br/><br/> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat
+**Agentless dependency analysis** | Domain or non-domain (local) account with administrative permissions | Sudo user account with permissions to execute ls and netstat commands. If you are providing a sudo user account, ensure that you have enabled **NOPASSWD** for the account to run the required commands without prompting for a password every time the sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
### Recommended practices to provide credentials
migrate Common Questions Discovery Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-discovery-assessment.md
ms. Previously updated : 05/05/2022 Last updated : 11/13/2022 # Discovery, assessment, and dependency analysis - Common questions
-This article answers common questions about discovery, assessment, and dependency analysis in Azure Migrate. If you have other questions, check these resources:
+This article answers common questions about discovery, assessment, and dependency analysis in Azure Migrate. If you have other questions, check these resources:
- [General questions](resources-faq.md) about Azure Migrate - Questions about the [Azure Migrate appliance](common-questions-appliance.md)
Review the supported geographies for [public](migrate-support-matrix.md#public-c
## How many servers can I discover with an appliance?
-You can discover up to 10,000 servers from VMware environment, up to 5,000 servers from Hyper-V environment, and up to 1000 physical servers by using a single appliance. If you have more servers, read about [scaling a Hyper-V assessment](scale-hyper-v-assessment.md), [scaling a VMware assessment](scale-vmware-assessment.md), or [scaling a physical server assessment](scale-physical-assessment.md).
+You can discover up to 10,000 servers from a VMware environment, up to 5,000 servers from a Hyper-V environment, and up to 1,000 physical servers by using a single appliance. If you have more servers, read about [scaling a Hyper-V assessment](scale-hyper-v-assessment.md), [scaling a VMware assessment](scale-vmware-assessment.md), or [scaling a physical server assessment](scale-physical-assessment.md).
## How do I choose the assessment type? - Use **Azure VM assessments** when you want to assess servers from your on-premises [VMware](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs. [Learn More](concepts-assessment-calculation.md).-- Use assessment type **Azure SQL** when you want to assess your on-premises SQL Server in your VMware, Microsoft Hyper-V, and Physical/ Baremetal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. for migration to SQL Server on Azure VM or Azure SQL Database or Azure SQL Managed Instance. [Learn More](concepts-azure-sql-assessment-calculation.md).
+- Use assessment type **Azure SQL** when you want to assess your on-premises SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. for migration to SQL Server on Azure VM or Azure SQL Database or Azure SQL Managed Instance. [Learn More](concepts-azure-sql-assessment-calculation.md).
- Use assessment type **Azure App Service** when you want to assess your on-premises ASP.NET web apps running on IIS web server from your VMware environment for migration to Azure App Service. [Learn More](concepts-assessment-calculation.md). - Use **Azure VMware Solution (AVS)** assessments when you want to assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md).-- You can use a common group with VMware machines only to run both types of assessments. If you are running AVS assessments in Azure Migrate for the first time, it is advisable to create a new group of VMware machines.
+- You can use a common group with VMware machines only to run both types of assessments. If you're running AVS assessments in Azure Migrate for the first time, it's advisable to create a new group of VMware machines.
## Why is performance data missing for some/all servers in my Azure VM and/or AVS assessment report?
-For "Performance-based" assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance cannot collect performance data for the on-premises servers. Check:
+For "Performance-based" assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance can't collect performance data for the on-premises servers. Check:
-- If the servers are powered on for the duration for which you are creating the assessment-- If only memory counters are missing and you are trying to assess servers in Hyper-V environment. In this scenario, enable dynamic memory on the servers and 'Recalculate' the assessment to reflect the latest changes. The appliance can collect memory utilization values for severs in Hyper-V environment only when the server has dynamic memory enabled.
+- If the servers are powered on for the duration for which you're creating the assessment
+- If only memory counters are missing and you're trying to assess servers in Hyper-V environment. In this scenario, enable dynamic memory on the servers and 'Recalculate' the assessment to reflect the latest changes. The appliance can collect memory utilization values for severs in Hyper-V environment only when the server has dynamic memory enabled.
- If all of the performance counters are missing, ensure that outbound connections on ports 443 (HTTPS) are allowed.
For "Performance-based" assessment, the assessment report export says 'Percentag
To ensure performance data is collected, check: -- If the SQL Servers are powered on for the duration for which you are creating the assessment.
+- If the SQL Servers are powered on for the duration for which you're creating the assessment.
- If the connection status of the SQL agent in Azure Migrate is 'Connected', and check the last heartbeat. -- If Azure Migrate connection status for all SQL instances is 'Connected' in the discovered SQL instance blade.
+- If Azure Migrate connection status for all SQL instances is 'Connected' in the discovered SQL instance section.
- If all of the performance counters are missing, ensure that outbound connections on ports 443 (HTTPS) are allowed. If any of the performance counters are missing, Azure SQL assessment recommends the smallest Azure SQL configuration for that instance/database.
-## Why confidence rating is not available for Azure App Service assessments?
+## Why is confidence rating not available for Azure App Service assessments?
-Performance data is not captured for Azure App Service assessment and hence you do not see confidence rating for this assessment type. Azure App Service assessment takes configuration data of web apps in to account while performing assessment calculation.
+Performance data isn't captured for Azure App Service assessment and hence you don't see confidence rating for this assessment type. Azure App Service assessment takes configuration data of web apps in to account while performing assessment calculation.
## Why is the confidence rating of my assessment low? The confidence rating is calculated for "Performance-based" assessments based on the percentage of [available data points](./concepts-assessment-calculation.md#ratings) needed to compute the assessment. Below are the reasons why an assessment could get a low confidence rating: -- You did not profile your environment for the duration for which you are creating the assessment. For example, if you are creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you cannot wait for the duration, change the performance duration to a smaller period and **Recalculate** the assessment.-- Assessment is not able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, ensure that:
+- You didn't profile your environment for the duration for which you're creating the assessment. For example, if you're creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you can't wait for the duration, change the performance duration to a smaller period and **Recalculate** the assessment.
+- Assessment isn't able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, ensure that:
- Servers are powered on for the duration of the assessment - Outbound connections on ports 443 are allowed - For Hyper-V Servers, dynamic memory is enabled - The connection status of agents in Azure Migrate are 'Connected' and check the last heartbeat
- - For Azure SQL assessments, Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance blade
+ - For Azure SQL assessments, Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance section.
**Recalculate** the assessment to reflect the latest changes in confidence rating. -- For Azure VM and AVS assessments, few servers were created after discovery had started. For example, if you are creating an assessment for the performance history of last one month, but few servers were created in the environment only a week ago. In this case, the performance data for the new servers will not be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-assessment-calculation.md#confidence-ratings-performance-based).-- For Azure SQL assessments, few SQL instances or databases were created after discovery had started. For example, if you are creating an assessment for the performance history of last one month, but few SQL instances or databases were created in the environment only a week ago. In this case, the performance data for the new servers will not be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-azure-sql-assessment-calculation.md#confidence-ratings).
+- For Azure VM and AVS assessments, few servers were created after discovery had started. For example, if you're creating an assessment for the performance history of last one month, but few servers were created in the environment only a week ago. In this case, the performance data for the new servers won't be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-assessment-calculation.md#confidence-ratings-performance-based).
+- For Azure SQL assessments, few SQL instances or databases were created after discovery had started. For example, if you're creating an assessment for the performance history of last one month, but few SQL instances or databases were created in the environment only a week ago. In this case, the performance data for the new servers won't be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-azure-sql-assessment-calculation.md#confidence-ratings).
## Why is my RAM utilization greater than 100%?
By design, in Hyper-V if maximum memory provisioned is less than what is require
## Why can't I see all Azure VM families in the Azure VM assessment properties? There could be two reasons:-- You have chosen an Azure region where a particular series is not supported. Azure VM families shown in Azure VM assessment properties are dependent on the availability of the VM series in the chosen Azure location, storage type and Reserved Instance. -- The VM series is not support in the assessment and is not in the consideration logic of the assessment. We currently do not support B-series burstable, accelerated and high performance SKU series. We are trying to keep the VM series updated, and the ones mentioned are on our roadmap.
+- You've chosen an Azure region where a particular series isn't supported. Azure VM families shown in Azure VM assessment properties are dependent on the availability of the VM series in the chosen Azure location, storage type and Reserved Instance.
+- The VM series isn't supported in the assessment and isn't in the consideration logic of the assessment. We currently don't support B-series burstable, accelerated, and high performance SKU series. We are trying to keep the VM series updated, and the ones mentioned are on our roadmap.
## The number of Azure VM or AVS assessments on the Discovery and assessment tool are incorrect
- To remediate this, click the total number of assessments to navigate to all the assessments and recalculate the Azure VM or AVS assessment. The discovery and assessment tool will then show the correct count for that assessment type.
+ To remediate this, select the total number of assessments to navigate to all the assessments and recalculate the Azure VM or AVS assessment. The discovery and assessment tool will then show the correct count for that assessment type.
## I want to try out the new Azure SQL assessment
-Discovery and assessment of SQL Server instances and databases running in your VMware, Microsoft Hyper-V, and Physical/ Baremetal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
+Discovery and assessment of SQL Server instances and databases running in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, ensure that you've completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
## I want to try out the new Azure App Service assessment
-Discovery and assessment of .NET web apps running in your VMware environment is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
+Discovery and assessment of .NET web apps running in your VMware environment is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, ensure that you've completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
## I can't see some servers when I am creating an Azure SQL assessment -- Azure SQL assessment can only be done on servers running where SQL instances were discovered. If you don't see the servers and SQL instances that you wish to assess, wait for some time for the discovery to get completed and then create the assessment.-- If you are not able to see a previously created group while creating the assessment, remove any server without a SQL instance from the group.-- If you are running Azure SQL assessments in Azure Migrate for the first time, it is advisable to create a new group of servers.
+- Azure SQL assessment can only be done on servers running where SQL instances were discovered. If you don't see the servers and SQL instances that you wish to assess, wait for some time for the discovery, and then create the assessment.
+- If you're not able to see a previously created group while creating the assessment, remove any server without a SQL instance from the group.
+- If you're running Azure SQL assessments in Azure Migrate for the first time, it's advisable to create a new group of servers.
## I can't see some servers when I am creating an Azure App Service assessment -- Azure App Service assessment can only be done on servers running where web server role was discovered. If you don't see the servers that you wish to assess, wait for some time for the discovery to get completed and then create the assessment.-- If you are not able to see a previously created group while creating the assessment, remove any non-VMware server or any server without a web app from the group.-- If you are running Azure App Service assessments in Azure Migrate for the first time, it is advisable to create a new group of servers.
+- Azure App Service assessment can only be done on servers running where web server role was discovered. If you don't see the servers that you wish to assess, wait for some time for the discovery to get completed, and then create the assessment.
+- If you're not able to see a previously created group while creating the assessment, remove any non-VMware server or any server without a web app from the group.
+- If you're running Azure App Service assessments in Azure Migrate for the first time, it's advisable to create a new group of servers.
## I want to understand how the readiness for my instance was computed
The readiness for your web apps is computed by running series of technical check
## Why is my web app marked as Ready with conditions or Not ready in my Azure App Service assessment?
-This can happen when one or more technical checks fail for a given web app. You may click on the readiness status for the web app to find out details and remediation for failed checks.
+This can happen when one or more technical checks fail for a given web app. You may select the readiness status for the web app to find out details and remediation for failed checks.
## Why is the readiness for all my SQL instances marked as unknown?
The SQL discovery is performed once every 24 hours and you might need to wait up
This could happen if: - The discovery is still in progress. We recommend that you wait for some time for the appliance to profile the environment and then recalculate the assessment.-- There are some discovery issues that you need to fix in the Errors and notifications blade.
+- There are some discovery issues that you need to fix in **Errors and notifications**.
The SQL discovery is performed once every 24 hours, and you might need to wait up to a day for the latest configuration changes to be reflected.
The Azure SQL assessment only includes databases that are in online status. In c
## I want to compare costs for running my SQL instances on Azure VM vs Azure SQL Database/Azure SQL Managed Instance
-You can create a single **Azure SQL** assessment consisting of desired SQL servers across VMware, Microsoft Hyper-V and Physical/ Baremetal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. A single assessment covers readiness, SKUs, estimated costs and migration blockers for all the available SQL migration targets in Azure - Azure SQL Managed Instance, Azure SQL Database and SQL Server on Azure VM. You can then compare the assessment output for the desired targets. [Learn More](./concepts-azure-sql-assessment-calculation.md)
+You can create a single **Azure SQL** assessment consisting of desired SQL servers across VMware, Microsoft Hyper-V and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. A single assessment covers readiness, SKUs, estimated costs and migration blockers for all the available SQL migration targets in Azure - Azure SQL Managed Instance, Azure SQL Database and SQL Server on Azure VM. You can then compare the assessment output for the desired targets. [Learn More](./concepts-azure-sql-assessment-calculation.md)
## The storage cost in my Azure SQL assessment is zero
For Azure SQL Managed Instance, there is no storage cost added for the first 32
## I can't see some groups when I am creating an Azure VMware Solution (AVS) assessment - AVS assessment can be done on groups that have only VMware machines. Remove any non-VMware machine from the group if you intend to perform an AVS assessment.-- If you are running AVS assessments in Azure Migrate for the first time, it is advisable to create a new group of VMware machines.
+- If you're running AVS assessments in Azure Migrate for the first time, it's advisable to create a new group of VMware machines.
## Queries regarding Ultra disks ### Can I migrate my disks to Ultra disk using Azure Migrate?
-No. Currently, both Azure Migrate and Azure Site Recovery do not support migration to Ultra disks. Find steps to deploy Ultra disk [here](../virtual-machines/disks-enable-ultra-ssd.md?tabs=azure-portal#deploy-an-ultra-disk)
+No. Currently, both Azure Migrate and Azure Site Recovery don't support migration to Ultra disks. Find steps to deploy Ultra disk [here](../virtual-machines/disks-enable-ultra-ssd.md?tabs=azure-portal#deploy-an-ultra-disk)
### Why are the provisioned IOPS and throughput in my Ultra disk more than my on-premises IOPS and throughput?
Using the 95th percentile value ensures that outliers are ignored. Outliers migh
Import-based Azure VM assessments are assessments created with machines that are imported into Azure Migrate using a CSV file. Only four fields are mandatory to import: Server name, cores, memory, and operating system. Here are some things to note:
+ - The readiness criteria is less stringent in import-based assessments on the boot type parameter. If the boot type isn't provided, it's assumed the machine has BIOS boot type, and the machine isn't marked as **Conditionally Ready**. In assessments with discovery source as appliance, the readiness is marked as **Conditionally Ready** if the boot type is missing. This difference in readiness calculation is because users may not have all information on the machines in the early stages of migration planning when import-based assessments are done.
- Performance-based import assessments use the utilization value provided by the user for right-sizing calculations. Because this value comes from the user, the **Performance history** and **Percentile utilization** options are disabled in the assessment properties. In assessments with discovery source as appliance, the chosen percentile value is picked from the performance data collected by the appliance.
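As a concrete illustration of the four mandatory fields, here's a minimal sketch that writes an import CSV. The column header names and server values are made-up placeholders; use the headers from the official Azure Migrate import template for a real upload.

```python
# Illustrative only: a minimal import CSV with the four mandatory fields
# (server name, cores, memory, operating system). Header names are placeholders;
# take the real headers from the Azure Migrate import template.
import csv

servers = [
    {"Server name": "web-frontend-01", "Cores": 4, "Memory (MB)": 16384, "OS name": "Windows Server 2019"},
    {"Server name": "sql-backend-01",  "Cores": 8, "Memory (MB)": 32768, "OS name": "Ubuntu 20.04"},
]

with open("import-servers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(servers[0].keys()))
    writer.writeheader()
    writer.writerows(servers)
```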
## Why is the suggested migration tool in import-based AVS assessment marked as unknown?
-For machines imported via a CSV file, the default migration tool in an AVS assessment is unknown. Though, for VMware machines, it is recommended to use the VMware Hybrid Cloud Extension (HCX) solution. [Learn More](../azure-vmware/install-vmware-hcx.md).
+For machines imported via a CSV file, the default migration tool in an AVS assessment is unknown. Though, for VMware machines, it's recommended to use the VMware Hybrid Cloud Extension (HCX) solution. [Learn More](../azure-vmware/install-vmware-hcx.md).
## What is dependency visualization?
The differences between agentless visualization and agent-based visualization are summarized in the following table.
**Requirement** | **Agentless** | **Agent-based**
--- | --- | ---
-Support | This option is currently in preview, and is only available for servers in VMware environment. [Review](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless) supported operating systems. | In general availability (GA).
+Support | This option is currently in preview, and is only available for servers in VMware environment. [Review](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless) supported operating systems. | In General Availability (GA).
Agent | No need to install agents on machines you want to cross-check. | Agents to be installed on each on-premises machine that you want to analyze: The [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md), and the [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md).
Prerequisites | [Review](concepts-dependency-visualization.md#agentless-analysis) the prerequisites and deployment requirements. | [Review](concepts-dependency-visualization.md#agent-based-analysis) the prerequisites and deployment requirements.
Log Analytics | Not required. | Azure Migrate uses the [Service Map](../azure-monitor/vm/service-map.md) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization. [Learn more](concepts-dependency-visualization.md#agent-based-analysis).
To use agent-based dependency visualization, download and install agents on each on-premises machine that you want to analyze:
- [Microsoft Monitoring Agent (MMA)](../azure-monitor/agents/agent-windows.md)
- [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)
-- If you have machines that don't have internet connectivity, download and install the Log Analytics gateway on them.
+- If you've machines that don't have internet connectivity, download and install the Log Analytics gateway on them.
You need these agents only if you use agent-based dependency visualization.
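Because the agent-based option stores its data in the Service Map tables of a Log Analytics workspace, you can also inspect collected dependencies with a log query. The sketch below is illustrative rather than part of the article: it assumes the `azure-monitor-query` and `azure-identity` Python packages, a workspace ID you supply yourself, and the standard `VMConnection` table populated by the Dependency agent.

```python
# Illustrative only: query dependency (connection) data collected by the
# Dependency agent from a Log Analytics workspace.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<your-log-analytics-workspace-id>"  # placeholder

# Top talkers over the last hour, grouped by source machine and destination.
QUERY = """
VMConnection
| where TimeGenerated > ago(1h)
| summarize Connections = count() by Computer, RemoteIp, DestinationPort
| top 20 by Connections desc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(hours=1))

for table in result.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```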
For agentless visualization, you can view the dependency map of a single server
## Can I visualize dependencies for groups of more than 10 servers?
-You can [visualize dependencies](./how-to-create-a-group.md#refine-a-group-with-dependency-mapping) for groups that have up to 10 servers. If you have a group that has more than 10 servers, we recommend that you split the group into smaller groups, and then visualize the dependencies.
+You can [visualize dependencies](./how-to-create-a-group.md#refine-a-group-with-dependency-mapping) for groups that have up to 10 servers. If you've a group that has more than 10 servers, we recommend that you split the group into smaller groups, and then visualize the dependencies.
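If you're splitting a larger inventory into groups of at most 10 servers before visualizing dependencies, a simple batching helper like the illustrative sketch below can produce the lists to use when defining each group; the server names are made up and this isn't an Azure Migrate API.

```python
# Illustrative only: split a server inventory into batches of at most 10,
# matching the dependency-visualization group size limit described above.
def batches(items, size=10):
    for start in range(0, len(items), size):
        yield items[start:start + size]

servers = [f"server-{i:02d}" for i in range(1, 24)]  # 23 made-up names

for index, group in enumerate(batches(servers), start=1):
    print(f"Group {index}: {', '.join(group)}")
```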
## Next steps
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
ms. Pr