Updates from: 11/18/2022 02:13:12
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-react-spa-app.md
Previously updated : 08/25/2022 Last updated : 11/17/2022
export const protectedResources = {
Now that the web API is registered and you've defined its scopes, configure the web API code to work with your Azure AD B2C tenant. Open the *3-Authorization-II/2-call-api-b2c/API* folder with Visual Studio Code.
-In the sample folder, open the *config.json* file. This file contains information about your Azure AD B2C identity provider. The web API app uses this information to validate the access token that the web app passes as a bearer token. Update the following properties of the app settings:
+In the sample folder, open the *authConfig.js* file. This file contains information about your Azure AD B2C identity provider. The web API app uses this information to validate the access token that the web app passes as a bearer token. Update the following properties of the app settings:
|Section |Key |Value | |||| |credentials|tenantName| Your Azure AD B2C [domain/tenant name](tenant-management.md#get-your-tenant-name). For example: `contoso.onmicrosoft.com`.| |credentials|clientID| The web API application ID from step [2.1](#21-register-the-web-api-application). In the [earlier diagram](#app-registration), it's the application with **App ID: 2**.|
-|credentials| issuer| (Optional) The token issuer `iss` claim value. Azure AD B2C by default returns the token in the following format: `https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/`. Replace `<your-tenant-name>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). Replace `<your-tenant-ID>` with your [Azure AD B2C tenant ID](tenant-management.md#get-your-tenant-id). |
|policies|policyName|The user flow or custom policy that you created in [step 1](#step-1-configure-your-user-flow). If your application uses multiple user flows or custom policies, specify only one. For example, use the sign-up or sign-in user flow.| | protectedRoutes| scopes | The scopes of your web API application registration from [step 2.5](#25-grant-permissions). |
Your final configuration file should look like the following JSON:
"credentials": { "tenantName": "<your-tenant-name>.ommicrosoft.com", "clientID": "<your-webapi-application-ID>",
- "issuer": "https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/"
}, "policies": { "policyName": "b2c_1_susi"
active-directory-b2c Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/error-codes.md
The following errors can be returned by the Azure Active Directory B2C service.
| `AADB2C90288` | UserJourney with ID '{0}' referenced in TechnicalProfile '{1}' for refresh token redemption for tenant '{2}' does not exist in policy '{3}' or any of its base policies. | | `AADB2C90287` | The request contains invalid redirect URI '{0}'.| [Register a web application](tutorial-register-applications.md), [Sending authentication requests](openid-connect.md#send-authentication-requests) | | `AADB2C90289` | We encountered an error connecting to the identity provider. Please try again later. | [Add an IDP to your Azure AD B2C tenant](add-identity-provider.md) |
+| `AADB2C90289` | We encountered an 'invalid_client' error connecting to the identity provider. Please try again later. | Make sure the application secret is correct and hasn't expired. Learn how to [Register apps](register-apps.md).|
| `AADB2C90296` | Application has not been configured correctly. Please contact administrator of the site you are trying to access. | [Register a web application](tutorial-register-applications.md) | | `AADB2C99005` | The request contains an invalid scope parameter which includes an illegal character '{0}'. | [Web sign-in with OpenID Connect](openid-connect.md) | | `AADB2C99006` | Azure AD B2C cannot find the extensions app with app ID '{0}'. Please visit https://go.microsoft.com/fwlink/?linkid=851224 for more information. | [Azure AD B2C extensions app](extensions-app.md) |
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
Previously updated : 03/10/2022 Last updated : 11/17/2022
You can define Azure AD as a claims provider by adding Azure AD to the **ClaimsP
</CryptographicKeys> <OutputClaims> <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="oid"/>
- <OutputClaim ClaimTypeReferenceId="tenantId" PartnerClaimType="tid"/>
<OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" /> <OutputClaim ClaimTypeReferenceId="surName" PartnerClaimType="family_name" /> <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name" />
If the sign-in process is successful, your browser is redirected to `https://jwt
- Learn how to [pass the Azure AD token to your application](idp-pass-through-user-flow.md). - Check out the Azure AD multi-tenant federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#azure-active-directory), and how to pass Azure AD access token [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#azure-active-directory-with-access-token)
active-directory-b2c Relyingparty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/relyingparty.md
Previously updated : 06/26/2022 Last updated : 11/17/2022
The **UserJourneyBehaviors** element contains the following elements:
| JourneyFraming | 0:1| Allows the user interface of this policy to be loaded in an iframe. | | ScriptExecution| 0:1| The supported [JavaScript](javascript-and-page-layout.md) execution modes. Possible values: `Allow` or `Disallow` (default). -
+When you use the above elements, you need to add them to your **UserJourneyBehaviors** element in the order specified in the table. For example, the **JourneyInsights** element must be added before (above) the **ScriptExecution** element.
+
### SingleSignOn The **SingleSignOn** element contains the following attributes:
active-directory-b2c Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/service-limits.md
Previously updated : 10/27/2022 Last updated : 11/14/2022 zone_pivot_groups: b2c-policy-type
The following table lists the administrative configuration limits in the Azure A
|Levels of [inheritance](custom-policy-overview.md#inheritance-model) in custom policies |10 | |Number of policies per Azure AD B2C tenant (user flows + custom policies) |200 | |Maximum policy file size |1024 KB |
-|Number of API connectors per tenant |19 |
+|Number of API connectors per tenant |20 |
<sup>1</sup> See also [Azure AD service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md).
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-user-flows.md
The sign-up and sign-in user flow handles both sign-up and sign-in experiences w
1. Enter a **Name** for the user flow. For example, *signupsignin1*. 1. For **Identity providers**, select **Email signup**.
-1. For **User attributes and claims**, choose the claims and attributes that you want to collect and send from the user during sign-up. For example, select **Show more**, and then choose attributes and claims for **Country/Region**, **Display Name**, and **Postal Code**. Select **OK**.
+1. For **User attributes and token claims**, choose the claims and attributes that you want to collect and send from the user during sign-up. For example, select **Show more**, and then choose attributes and claims for **Country/Region**, **Display Name**, and **Postal Code**. Select **OK**.
![Attributes and claims selection page with three claims selected](./media/tutorial-create-user-flows/signup-signin-attributes.png)
active-directory Active Directory App Proxy Protect Ndes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/active-directory-app-proxy-protect-ndes.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Back End Kerberos Constrained Delegation How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Config How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-config-how-to.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Config Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-config-problem.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Config Sso How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-config-sso-how-to.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Cookie Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-cookie-settings.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-domain.md
Previously updated : 08/12/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Custom Home Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-home-page.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure For Claims Aware Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-for-claims-aware-applications.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Hard Coded Link Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-hard-coded-link-translation.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Native Client Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-native-client-application.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Single Sign On On Premises Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-on-premises-apps.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Single Sign On Password Vaulting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-password-vaulting.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Single Sign On With Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-with-headers.md
Previously updated : 02/22/2021 Last updated : 11/17/2022
active-directory Application Proxy Configure Single Sign On With Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-with-kcd.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Connectivity No Working Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connectivity-no-working-connector.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Connector Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connector-groups.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Connector Installation Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connector-installation-problem.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connectors.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Debug Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-debug-apps.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Debug Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-debug-connectors.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-deployment-plan.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy High Availability Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-high-availability-load-balancing.md
Previously updated : 04/29/2021 Last updated : 11/17/2022
active-directory Application Proxy Integrate With Microsoft Cloud Application Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-microsoft-cloud-application-security.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Integrate With Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-power-bi.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Integrate With Sharepoint Server Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-sharepoint-server-saml.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Integrate With Sharepoint Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-sharepoint-server.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Integrate With Tableau https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-tableau.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Integrate With Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-teams.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Page Appearance Broken Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-page-appearance-broken-problem.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Page Links Broken Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-page-links-broken-problem.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Page Load Speed Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-page-load-speed-problem.md
Previously updated : 07/11/2017 Last updated : 11/17/2022
active-directory Application Proxy Ping Access Publishing Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-ping-access-publishing-guide.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-powershell-samples.md
Previously updated : 04/29/2021 Last updated : 11/17/2022
active-directory Application Proxy Qlik https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-qlik.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Register Connector Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-register-connector-powershell.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Remove Personal Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-remove-personal-data.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Secure Api Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-secure-api-access.md
Previously updated : 05/06/2021 Last updated : 11/17/2022
active-directory Application Proxy Sign In Bad Gateway Timeout Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-sign-in-bad-gateway-timeout-error.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-troubleshoot.md
Previously updated : 10/12/2021 Last updated : 11/17/2022
active-directory Application Proxy Understand Cors Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-understand-cors-issues.md
Previously updated : 04/28/2021 Last updated : 11/17/2022
active-directory Application Proxy Wildcard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-wildcard.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory Application Sign In Problem On Premises Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-sign-in-problem-on-premises-application-proxy.md
Previously updated : 04/27/2021 Last updated : 11/17/2022
active-directory V2 Oauth2 Device Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-device-code.md
Previously updated : 06/25/2021 Last updated : 11/15/2022 -+ # Microsoft identity platform and the OAuth 2.0 device authorization grant flow
-The Microsoft identity platform supports the [device authorization grant](https://tools.ietf.org/html/rfc8628), which allows users to sign in to input-constrained devices such as a smart TV, IoT device, or printer. To enable this flow, the device has the user visit a webpage in their browser on another device to sign in. Once the user signs in, the device is able to get access tokens and refresh tokens as needed.
+The Microsoft identity platform supports the [device authorization grant](https://tools.ietf.org/html/rfc8628), which allows users to sign in to input-constrained devices such as a smart TV, IoT device, or a printer. To enable this flow, the device has the user visit a webpage in a browser on another device to sign in. Once the user signs in, the device is able to get access tokens and refresh tokens as needed.
-This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md).
+This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). You can refer to [sample apps that use MSAL](sample-v2-code.md) for examples.
[!INCLUDE [try-in-postman-link](includes/try-in-postman-link.md)] ## Protocol diagram
-The entire device code flow looks similar to the next diagram. We describe each of the steps later in this article.
+The entire device code flow is shown in the following diagram. Each step is explained throughout this article.
![Device code flow](./media/v2-oauth2-device-code/v2-oauth-device-flow.svg) ## Device authorization request
-The client must first check with the authentication server for a device and user code that's used to initiate authentication. The client collects this request from the `/devicecode` endpoint. In this request, the client should also include the permissions it needs to acquire from the user. From the moment this request is sent, the user has only 15 minutes to sign in (the usual value for `expires_in`), so only make this request when the user has indicated they're ready to sign in.
+The client must first check with the authentication server for a device and user code that's used to initiate authentication. The client collects this request from the `/devicecode` endpoint. In the request, the client should also include the permissions it needs to acquire from the user.
+
+From the moment the request is sent, the user has 15 minutes to sign in. This is the default value for `expires_in`. The request should only be made when the user has indicated they're ready to sign in.
```HTTP // Line breaks are for legibility only.
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| Parameter | Condition | Description | | | | |
-| `tenant` | Required | Can be /common, /consumers, or /organizations. It can also be the directory tenant that you want to request permission from in GUID or friendly name format. |
+| `tenant` | Required | Can be `/common`, `/consumers`, or `/organizations`. It can also be the directory tenant that you want to request permission from in GUID or friendly name format. |
| `client_id` | Required | The **Application (client) ID** that the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. | | `scope` | Required | A space-separated list of [scopes](v2-permissions-and-consent.md) that you want the user to consent to. |
A successful response will be a JSON object containing the required information
## Authenticating the user
-After receiving the `user_code` and `verification_uri`, the client displays these to the user, instructing them to sign in using their mobile phone or PC browser.
+After receiving the `user_code` and `verification_uri`, the client displays these to the user, instructing them to use their mobile phone or PC browser to sign in.
-If the user authenticates with a personal account (on /common or /consumers), they will be asked to sign in again in order to transfer authentication state to the device. They will also be asked to provide consent, to ensure they are aware of the permissions being granted. This does not apply to work or school accounts used to authenticate.
+If the user authenticates with a personal account, using `/common` or `/consumers`, they'll be asked to sign in again in order to transfer authentication state to the device. This is because the device is unable to access the user's cookies. They'll also be asked to consent to the permissions requested by the client. This however doesn't apply to work or school accounts used to authenticate.
While the user is authenticating at the `verification_uri`, the client should be polling the `/token` endpoint for the requested token using the `device_code`.
While the user is authenticating at the `verification_uri`, the client should be
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token Content-Type: application/x-www-form-urlencoded
-grant_type=urn:ietf:params:oauth:grant-type:device_code
-&client_id=6731de76-14a6-49ae-97bc-6eba6914391e
-&device_code=GMMhmHCXhWEzkobqIHGG_EnNYYsAkukHspeYUk9E8...
+grant_type=urn:ietf:params:oauth:grant-type:device_code&client_id=6731de76-14a6-49ae-97bc-6eba6914391e&device_code=GMMhmHCXhWEzkobqIHGG_EnNYYsAkukHspeYUk9E8...
``` | Parameter | Required | Description|
grant_type=urn:ietf:params:oauth:grant-type:device_code
### Expected errors
-The device code flow is a polling protocol so your client must expect to receive errors before the user has finished authenticating.
+The device code flow is a polling protocol so errors served to the client must be expected prior to completion of user authentication.
| Error | Description | Client Action | | | -- | -| | `authorization_pending` | The user hasn't finished authenticating, but hasn't canceled the flow. | Repeat the request after at least `interval` seconds. |
-| `authorization_declined` | The end user denied the authorization request.| Stop polling, and revert to an unauthenticated state. |
+| `authorization_declined` | The end user denied the authorization request.| Stop polling and revert to an unauthenticated state. |
| `bad_verification_code`| The `device_code` sent to the `/token` endpoint wasn't recognized. | Verify that the client is sending the correct `device_code` in the request. |
-| `expired_token` | At least `expires_in` seconds have passed, and authentication is no longer possible with this `device_code`. | Stop polling and revert to an unauthenticated state. |
+| `expired_token` | Value of `expires_in` has been exceeded and authentication is no longer possible with `device_code`. | Stop polling and revert to an unauthenticated state. |
### Successful authentication response
A successful token response will look like:
| Parameter | Format | Description | | | | -- | | `token_type` | String| Always `Bearer`. |
-| `scope` | Space separated strings | If an access token was returned, this lists the scopes the access token is valid for. |
-| `expires_in`| int | Number of seconds before the included access token is valid for. |
+| `scope` | Space separated strings | If an access token was returned, this lists the scopes that the access token is valid for. |
+| `expires_in`| int | Number of seconds the included access token is valid for. |
| `access_token`| Opaque string | Issued for the [scopes](v2-permissions-and-consent.md) that were requested. | | `id_token` | JWT | Issued if the original `scope` parameter included the `openid` scope. | | `refresh_token` | Opaque string | Issued if the original `scope` parameter included `offline_access`. |
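To make the polling behavior concrete, the following Python sketch walks the whole flow with the requests library: it asks the `/devicecode` endpoint for a code, shows the returned message to the user, then polls the `/token` endpoint and treats `authorization_pending` as the signal to keep waiting. It's an illustration only; the client ID and scopes are placeholders, and real applications should use MSAL as recommended above.

```python
# Minimal sketch of the device authorization grant with the "requests" library.
# Placeholders throughout; prefer MSAL in real applications.
import time
import requests

TENANT = "common"
CLIENT_ID = "6731de76-14a6-49ae-97bc-6eba6914391e"   # sample client ID from this article
SCOPE = "openid profile offline_access user.read"
BASE = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0"

# 1. Request a device code and user code.
device = requests.post(
    f"{BASE}/devicecode",
    data={"client_id": CLIENT_ID, "scope": SCOPE},
).json()
print(device["message"])  # tells the user where to go and which code to enter

# 2. Poll the token endpoint while the user signs in on another device.
while True:
    time.sleep(device.get("interval", 5))             # wait at least `interval` seconds
    token_response = requests.post(
        f"{BASE}/token",
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "client_id": CLIENT_ID,
            "device_code": device["device_code"],
        },
    ).json()

    error = token_response.get("error")
    if error is None:
        access_token = token_response["access_token"]  # plus id_token / refresh_token,
        break                                          # if openid / offline_access were requested
    if error == "authorization_pending":
        continue                                       # user hasn't finished signing in yet
    raise RuntimeError(f"Device code flow failed: {error}")  # declined, expired, or bad code
```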
active-directory Domains Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-manage.md
Previously updated : 06/23/2022 Last updated : 11/11/2022
# Managing custom domain names in your Azure Active Directory
-A domain name is an important part of the identifier for resources in many Azure Active Directory (Azure AD), part of Microsoft Entra: it's part of a user name or email address for a user, part of the address for a group, and is sometimes part of the app ID URI for an application. A resource in Azure AD can include a domain name that's owned by the Azure AD organization (sometimes called a tenant) that contains the resource. Only a Global Administrator can manage domains in Azure AD.
+A domain name is an important part of the identifier for resources in many Azure Active Directory (Azure AD) deployments. It is part of a user name or email address for a user, part of the address for a group, and is sometimes part of the app ID URI for an application. A resource in Azure AD can include a domain name that's owned by the Azure AD organization (sometimes called a tenant) that contains the resource. Only a Global Administrator can manage domains in Azure AD.
## Set the primary domain name for your Azure AD organization
If you have already added a contoso.com domain to one Azure AD organization, you
## What to do if you change the DNS registrar for your custom domain name
-If you change the DNS registrars, there are no additional configuration tasks in Azure AD. You can continue using the domain name with Azure AD without interruption. If you use your custom domain name with Microsoft 365, Intune, or other services that rely on custom domain names in Azure AD, see the documentation for those services.
+If you change the DNS registrars, there are no other configuration tasks in Azure AD. You can continue using the domain name with Azure AD without interruption. If you use your custom domain name with Microsoft 365, Intune, or other services that rely on custom domain names in Azure AD, see the documentation for those services.
## Delete a custom domain name
You must change or delete any such resource in your Azure AD organization before
> [!Note] > To delete the custom domain, use a Global Administrator account that is based on either the default domain (onmicrosoft.com) or a different custom domain (mydomainname.com).
-### ForceDelete option
+## ForceDelete option
You can **ForceDelete** a domain name in the [Azure AD Admin Center](https://aad.portal.azure.com) or using [Microsoft Graph API](/graph/api/domain-forcedelete). These options use an asynchronous operation and update all references from the custom domain name like "user@contoso.com" to the initial default domain name such as "user@contoso.onmicrosoft.com."
An error is returned when:
* The number of objects to be renamed is greater than 1000 * One of the applications to be renamed is a multi-tenant app
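For reference, the Graph call behind **ForceDelete** is a single POST to the domain's `forceDelete` action. The Python sketch below is illustrative only; it assumes you already hold a Graph access token with a suitable permission (for example, `Domain.ReadWrite.All`), and the domain name is a placeholder. See the linked Graph API reference for the authoritative request shape.

```python
# Rough sketch of the Microsoft Graph forceDelete action using "requests".
# Token acquisition is out of scope; the domain name is a placeholder.
import requests

access_token = "<graph-access-token>"       # needs a permission such as Domain.ReadWrite.All
domain_name = "contoso.com"

response = requests.post(
    f"https://graph.microsoft.com/v1.0/domains/{domain_name}/forceDelete",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"disableUserAccounts": True},     # optional flag described in the Graph reference
)
response.raise_for_status()                 # references are then renamed asynchronously
```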
-### Frequently asked questions
+## Best practices for domain hygiene
+
+Use a reputable registrar that provides ample notifications for domain name changes and registration expiry, offers a grace period for expired domains, and maintains high security standards for controlling who has access to your domain name configuration and TXT records.
+Keep your domain names current with your registrar, and verify your TXT records for accuracy.
+
+* If you're purposely letting your domain name expire or turning over ownership to someone else (separately from your Azure AD tenant), you should delete it from your Azure AD tenant before it expires or is transferred.
+* If you do allow your domain name to expire and you're later able to reactivate it or regain control of it, carefully review all TXT records with the registrar to ensure your domain name wasn't tampered with.
+* If you can't reactivate or regain control of your domain name immediately, you should delete it from your Azure AD tenant. Don't re-add or re-verify it until you're able to resolve ownership of the domain name and verify the full TXT record for correctness.
+
+>[!NOTE]
+> Microsoft will not allow a domain name to be verified with more than one Azure AD tenant. Once you delete a domain name from your tenant, you will not be able to re-add/re-verify it with your Azure AD tenant if it is subsequently added and verified with another Azure AD tenant.
+
+## Frequently asked questions
**Q: Why is the domain deletion failing with an error that states that I have Exchange mastered groups on this domain name?** <br>
-**A:** Today, certain groups like Mail-Enabled Security groups and distributed lists are provisioned by Exchange and need to be manually cleaned up in [Exchange Admin Center (EAC)](https://outlook.office365.com/ecp/). There may be lingering ProxyAddresses which rely on the custom domain name and will need to be updated manually to another domain name.
+**A:** Today, certain groups like Mail-Enabled Security groups and distributed lists are provisioned by Exchange and need to be manually cleaned up in [Exchange Admin Center (EAC)](https://outlook.office365.com/ecp/). There may be lingering ProxyAddresses, which rely on the custom domain name and will need to be updated manually to another domain name.
**Q: I am logged in as admin\@contoso.com but I cannot delete the domain name "contoso.com"?**<br>
-**A:** You cannot reference the custom domain name you are trying to delete in your user account name. Ensure that the Global Administrator account is using the initial default domain name (.onmicrosoft.com) such as admin@contoso.onmicrosoft.com. Sign in with a different Global Administrator account that such as admin@contoso.onmicrosoft.com or another custom domain name like "fabrikam.com" where the account is admin@fabrikam.com.
+**A:** You can't reference the custom domain name you are trying to delete in your user account name. Ensure that the Global Administrator account is using the initial default domain name (.onmicrosoft.com), such as admin@contoso.onmicrosoft.com. Sign in with a different Global Administrator account, such as admin@contoso.onmicrosoft.com, or use another custom domain name like "fabrikam.com" where the account is admin@fabrikam.com.
**Q: I clicked the Delete domain button and see `In Progress` status for the Delete operation. How long does it take? What happens if it fails?**<br>
-**A:** The delete domain operation is an asynchronous background task that renames all references to the domain name. It should complete within a minute or two. If domain deletion fails, ensure that you don't have:
+**A:** The delete domain operation is an asynchronous background task that renames all references to the domain name. It may take up to 24 hours to complete. If domain deletion fails, ensure that you don't have:
* Apps configured on the domain name with the appIdentifierURI * Any mail-enabled group referencing the custom domain name * More than 1000 references to the domain name
+* The domain to be removed set as the primary domain of your organization
-If you find that any of the conditions haven't been met, manually clean up the references and try to delete the domain again.
+Also note that the ForceDelete option won't work if the domain uses the federated authentication type. In that case, the users and groups on the domain must be renamed or removed using the on-premises Active Directory before reattempting the domain removal.
+If you find that any of the conditions haven't been met, manually clean up the references, and try to delete the domain again.
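To find what still references the domain name before retrying the deletion, you can enumerate the referencing directory objects through Microsoft Graph. The sketch below is a rough illustration that assumes the `domainNameReferences` relationship and an existing Graph access token; paging through large result sets is omitted.

```python
# Illustrative sketch: list directory objects that still reference a domain
# name via Microsoft Graph ("requests" library; token and domain are placeholders).
import requests

access_token = "<graph-access-token>"
domain_name = "contoso.com"

page = requests.get(
    f"https://graph.microsoft.com/v1.0/domains/{domain_name}/domainNameReferences",
    headers={"Authorization": f"Bearer {access_token}"},
).json()

for obj in page.get("value", []):
    # Each entry is a directory object (user, group, or application) to clean up or rename.
    print(obj.get("@odata.type"), obj.get("id"))
```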
## Use PowerShell or the Microsoft Graph API to manage domain names
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
You can also create a rule that selects device objects for membership in a group
> [!NOTE] > systemlabels is a read-only attribute that cannot be set with Intune. >
-> For Windows 10, the correct format of the deviceOSVersion attribute is as follows: (device.deviceOSVersion -startsWith "10.0.1"). The formatting can be validated with the Get-MsolDevice PowerShell cmdlet.
+> For Windows 10, the correct format of the deviceOSVersion attribute is as follows: (device.deviceOSVersion -startsWith "10.0.1"). The formatting can be validated with the Get-MgDevice PowerShell cmdlet:
+> ```
+> Get-MgDevice -Search "displayName:YourMachineNameHere" -ConsistencyLevel eventual | Select-Object -ExpandProperty 'OperatingSystemVersion'
+> ```
The following device attributes can be used.
active-directory Hybrid Cloud To On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-cloud-to-on-premises.md
Previously updated : 11/05/2021 Last updated : 11/17/2022 +
+# Customer intent: As a tenant administrator, I want to enable B2B user access to on-premises apps.
# Grant B2B users in Azure AD access to your on-premises applications
As an organization that uses Azure Active Directory (Azure AD) B2B collaboration
If your on-premises app uses SAML-based authentication, you can easily make these apps available to your Azure AD B2B collaboration users through the Azure portal using Azure AD Application Proxy.
-You must do the following :
+You must do the following:
- Enable Application Proxy and install a connector. For instructions, see [Publish applications using Azure AD Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md). - Publish the on-premises SAML-based application through Azure AD Application Proxy by following the instructions in [SAML single sign-on for on-premises applications with Application Proxy](../app-proxy/application-proxy-configure-single-sign-on-on-premises-apps.md).
To provide B2B users access to on-premises applications that are secured with in
For the B2B user scenario, there are two methods you can use to create the guest user objects that are required for authorization in the on-premises directory: - Microsoft Identity Manager (MIM) and the MIM management agent for Microsoft Graph.
- - A PowerShell script, which is a more lightweight solution that does not require MIM.
+ - A PowerShell script, which is a more lightweight solution that doesn't require MIM.
The following diagram provides a high-level overview of how Azure AD Application Proxy and the generation of the B2B user object in the on-premises directory work together to grant B2B users access to your on-premises IWA and KCD apps. The numbered steps are described in detail below the diagram.
-![Diagram of MIM and B2B script solutions](media/hybrid-cloud-to-on-premises/MIMScriptSolution.PNG)
+![Diagram of MIM and B2B script solutions.](media/hybrid-cloud-to-on-premises/MIMScriptSolution.PNG)
1. A user from a partner organization (the Fabrikam tenant) is invited to the Contoso tenant. 2. A guest user object is created in the Contoso tenant (for example, a user object with a UPN of guest_fabrikam.com#EXT#@contoso.onmicrosoft.com).
You can use an [Azure AD B2B sample script](https://github.com/Azure-Samples/B2B
### Create B2B guest user objects through MIM
-For information about how to use MIM 2016 Service Pack 1 and the MIM management agent for Microsoft Graph to create the guest user objects in the on-premises directory, see [Azure AD business-to-business (B2B) collaboration with Microsoft Identity Manager (MIM) 2016 SP1 with Azure Application Proxy](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario).
+You can use MIM 2016 Service Pack 1, and the MIM management agent for Microsoft Graph to create the guest user objects in the on-premises directory. To learn more, see [Azure AD business-to-business (B2B) collaboration with Microsoft Identity Manager (MIM) 2016 SP1 with Azure Application Proxy](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario).
## License considerations
Make sure that you have the correct Client Access Licenses (CALs) for external g
## Next steps -- See also [Azure Active Directory B2B collaboration for hybrid organizations](hybrid-organizations.md)--- For an overview of Azure AD Connect, see [Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md).
+- [Grant local users access to cloud apps](hybrid-on-premises-to-cloud.md)
+- [Azure Active Directory B2B collaboration for hybrid organizations](hybrid-organizations.md)
+- For an overview of Azure AD Connect, see [Integrate your on-premises directories with Azure Active Directory](../hybrid/whatis-hybrid-identity.md).
active-directory Hybrid On Premises To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-on-premises-to-cloud.md
Title: Sync local partner accounts to cloud as B2B users - Azure AD
-description: Give locally-managed external partners access to both local and cloud resources using the same credentials with Azure AD B2B collaboration.
+description: Give locally managed external partners access to both local and cloud resources using the same credentials with Azure AD B2B collaboration.
Previously updated : 11/03/2020 Last updated : 11/17/2022 +
+# Customer intent: As a tenant administrator, I want to enable locally-managed external partners' access to both local and cloud resources via the Azure AD B2B collaboration.
-# Grant locally-managed partner accounts access to cloud resources using Azure AD B2B collaboration
+# Grant locally managed partner accounts access to cloud resources using Azure AD B2B collaboration
Before Azure Active Directory (Azure AD), organizations with on-premises identity systems have traditionally managed partner accounts in their on-premises directory. In such an organization, when you start to move apps to Azure AD, you want to make sure your partners can access the resources they need. It shouldn't matter whether the resources are on-premises or in the cloud. Also, you want your partner users to be able to use the same sign-in credentials for both on-premises and Azure AD resources.
-If you create accounts for your external partners in your on-premises directory (for example, you create an account with a sign-in name of "wmoran" for an external user named Wendy Moran in your partners.contoso.com domain), you can now sync these accounts to the cloud. Specifically, you can use Azure AD Connect to sync the partner accounts to the cloud, which creates a user account with UserType = Guest. This enables your partner users to access cloud resources using the same credentials as their local accounts, without giving them more access than they need.
+If you create accounts for your external partners in your on-premises directory (for example, you create an account with a sign-in name of "msullivan" for an external user named Maria Sullivan in your partners.contoso.com domain), you can now sync these accounts to the cloud. Specifically, you can use [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) to sync the partner accounts to the cloud, which creates a user account with UserType = Guest. This enables your partner users to access cloud resources using the same credentials as their local accounts, without giving them more access than they need.
> [!NOTE] > See also how to [invite internal users to B2B collaboration](invite-internal-users.md). With this feature, you can invite internal guest users to use B2B collaboration, regardless of whether you've synced their accounts from your on-premises directory to the cloud. Once the user accepts the invitation to use B2B collaboration, they'll be able to use their own identities and credentials to sign in to the resources you want them to access. You won't need to maintain passwords or manage account lifecycles.
active-directory How To Manage Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-groups.md
We currently don't support:
1. Locate the group you want your group to be a member of and choose **Select**.
- For this exercise, we're adding "MDM policy - West" to the "MDM policy - All org" group, so "MDM - policy - West" inherits all the properties and configurations of the "MDM policy - All org" group.
+ For this exercise, we're adding "MDM policy - West" to the "MDM policy - All org" group. The "MDM - policy - West" group will have the same access as the "MDM policy - All org" group.
![Screenshot of making a group the member of another group with 'Group membership' from the side menu and 'Add membership' option highlighted.](media/how-to-manage-groups/nested-groups-selected.png)
Now you can review the "MDM policy - West - Group memberships" page to see the g
For a more detailed view of the group and member relationship, select the parent group name (MDM policy - All org) and take a look at the "MDM policy - West" page details. ### Remove a group from another group
-You can remove an existing Security group from another Security group; however, removing the group also removes any inherited settings for its members.
+You can remove an existing Security group from another Security group; however, removing the group also removes any inherited access for its members.
1. On the **Groups - All groups** page, search for and select the group you need to remove as a member of another group.
active-directory How To Connect Configure Ad Ds Connector Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md
For Example:
``` powershell $credential = Get-Credential
-Set-ADSyncRestrictedPermissions -ADConnectorAccountDN 'CN=ADConnectorAccount,CN=Users,DC=Contoso,DC=com' -Credential $credential
+Set-ADSyncRestrictedPermissions -ADConnectorAccountDN 'CN=ADConnectorAccount,OU=Users,DC=Contoso,DC=com' -Credential $credential
``` This cmdlet will set the following permissions:
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-configure.md
To configure the application properties:
1. In the **Manage** section, select **Properties** to open the **Properties** pane for editing. 1. Configure the properties based on the needs of your application.
+## Use Microsoft Graph to configure application properties
+
+You can also configure properties of both app registrations and enterprise applications (service principals) through Microsoft Graph. These can include basic properties, permissions, and role assignments. For more information, see [Create and manage an Azure AD application using Microsoft Graph](/graph/tutorial-applications-basics#configure-other-basic-properties-for-your-app).
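As a small illustration of that Graph-based approach (not taken from the linked tutorial), the following Python snippet patches a couple of basic properties on an app registration. The object ID and access token are placeholders, and the same pattern applies to the `servicePrincipals` resource for enterprise applications.

```python
# Illustrative sketch: update basic app registration properties via Microsoft Graph.
# Uses the "requests" library; the object ID and access token are placeholders.
import requests

access_token = "<graph-access-token>"        # for example, with Application.ReadWrite.All
app_object_id = "<application-object-id>"    # the registration's object ID, not its client ID

response = requests.patch(
    f"https://graph.microsoft.com/v1.0/applications/{app_object_id}",
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    },
    json={
        "displayName": "Contoso HR onboarding app",
        "tags": ["HR", "Production"],
    },
)
response.raise_for_status()                  # Graph returns 204 No Content on success
```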
+ ## Next steps Learn more about how to manage enterprise applications.
active-directory Partner Vu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-vu.md
To learn more about VU Security and its complete set of solutions, visit
To get started with the VU Identity Card, ensure the following prerequisites are met: -- A tenant [configured](/azure/active-directory/verifiable-credentials/verifiablee-credentials-configure-tenant)
+- A tenant [configured](/azure/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant)
for Entra Verified ID service. - If you don't have an existing tenant, you can [create an Azure
User flow is specific to your application or website. However if you are using o
## Next steps - [Verifiable credentials admin API](admin-api.md)-- [Request Service REST API issuance specification](issuance-request-api.md)
+- [Request Service REST API issuance specification](issuance-request-api.md)
aks Cluster Container Registry Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-container-registry-integration.md
Title: Integrate Azure Container Registry with Azure Kubernetes Service description: Learn how to integrate Azure Kubernetes Service (AKS) with Azure Container Registry (ACR) - Previously updated : 06/10/2021 Last updated : 11/16/2022 ms.tool: azure-cli, azure-powershell ms.devlang: azurecli # Authenticate with Azure Container Registry from Azure Kubernetes Service
-When you're using Azure Container Registry (ACR) with Azure Kubernetes Service (AKS), an authentication mechanism needs to be established. This operation is implemented as part of the CLI, PowerShell, and Portal experience by granting the required permissions to your ACR. This article provides examples for configuring authentication between these two Azure services.
+You need to establish an authentication mechanism when using [Azure Container Registry (ACR)][acr-intro] with Azure Kubernetes Service (AKS). This operation is implemented as part of the Azure CLI, Azure PowerShell, and Azure portal experiences by granting the required permissions to your ACR. This article provides examples for configuring authentication between these Azure services.
-You can set up the AKS to ACR integration in a few simple commands with the Azure CLI or Azure PowerShell. This integration assigns the AcrPull role to the managed identity associated to the AKS Cluster.
+You can set up the AKS to ACR integration using the Azure CLI or Azure PowerShell. The AKS to ACR integration assigns the [**AcrPull** role][acr-pull] to the [Azure Active Directory (Azure AD) **managed identity**][aad-identity] associated with your AKS cluster.
> [!NOTE]
-> This article covers automatic authentication between AKS and ACR. If you need to pull an image from a private external registry, use an [image pull secret][Image Pull Secret].
+> This article covers automatic authentication between AKS and ACR. If you need to pull an image from a private external registry, use an [image pull secret][image-pull-secret].
## Before you begin
-These examples require:
+* You need to have the [**Owner**][rbac-owner], [**Azure account administrator**][rbac-classic], or [**Azure co-administrator**][rbac-classic] role on your **Azure subscription**.
+ * To avoid needing one of these roles, you can instead use an existing managed identity to authenticate ACR from AKS. For more information, see [Use an Azure managed identity to authenticate to an ACR](../container-registry/container-registry-authentication-managed-identity.md).
+* If you're using Azure CLI, this article requires that you're running Azure CLI version 2.7.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* If you're using Azure PowerShell, this article requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
-### [Azure CLI](#tab/azure-cli)
+## Create a new AKS cluster with ACR integration
-* **Owner**, **Azure account administrator**, or **Azure co-administrator** role on the **Azure subscription**
-* Azure CLI version 2.7.0 or later
+You can set up AKS and ACR integration during the creation of your AKS cluster. To allow an AKS cluster to interact with ACR, an Azure AD managed identity is used.
-### [Azure PowerShell](#tab/azure-powershell)
+### Create an ACR
-* **Owner**, **Azure account administrator**, or **Azure co-administrator** role on the **Azure subscription**
-* Azure PowerShell version 5.9.0 or later
+If you don't already have an ACR, create one using the following command.
-
+#### [Azure CLI](#tab/azure-cli)
-To avoid needing an **Owner**, **Azure account administrator**, or **Azure co-administrator** role, you can use an existing managed identity to authenticate ACR from AKS. For more information, see [Use an Azure managed identity to authenticate to an Azure container registry](../container-registry/container-registry-authentication-managed-identity.md).
+```azurecli
+# Set this variable to the name of your ACR. The name must be globally unique.
-## Create a new AKS cluster with ACR integration
+MYACR=myContainerRegistry
-You can set up AKS and ACR integration during the initial creation of your AKS cluster. To allow an AKS cluster to interact with ACR, an Azure Active Directory **managed identity** is used. The following command allows you to authorize an existing ACR in your subscription and configures the appropriate **ACRPull** role for the managed identity. Supply valid values for your parameters below.
+az acr create -n $MYACR -g myContainerRegistryResourceGroup --sku basic
+```
-### [Azure CLI](#tab/azure-cli)
+#### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+# Set this variable to the name of your ACR. The name must be globally unique.
+
+$MYACR = 'myContainerRegistry'
+
+New-AzContainerRegistry -Name $MYACR -ResourceGroupName myContainerRegistryResourceGroup -Sku Basic
+```
+++
+### Create a new AKS cluster and integrate with an existing ACR
+
+If you already have an ACR, use the following command to create a new AKS cluster with ACR integration. This command allows you to authorize an existing ACR in your subscription and configures the appropriate **AcrPull** role for the managed identity. Supply valid values for your parameters below.
+
+#### [Azure CLI](#tab/azure-cli)
```azurecli
-# set this to the name of your Azure Container Registry. It must be globally unique
+# Set this variable to the name of your ACR. The name must be globally unique.
+ MYACR=myContainerRegistry
-# Run the following line to create an Azure Container Registry if you do not already have one
-az acr create -n $MYACR -g myContainerRegistryResourceGroup --sku basic
+# Create an AKS cluster with ACR integration.
-# Create an AKS cluster with ACR integration
az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr $MYACR ```
-Alternatively, you can specify the ACR name using an ACR resource ID, which has the following format:
+Alternatively, you can specify the ACR name using an ACR resource ID in the following format:
`/subscriptions/\<subscription-id\>/resourceGroups/\<resource-group-name\>/providers/Microsoft.ContainerRegistry/registries/\<name\>` > [!NOTE]
-> If you are using an ACR that is located in a different subscription from your AKS cluster, use the ACR resource ID when attaching or detaching from an AKS cluster.
-
-```azurecli
-az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr /subscriptions/<subscription-id>/resourceGroups/myContainerRegistryResourceGroup/providers/Microsoft.ContainerRegistry/registries/myContainerRegistry
-```
+> If you're using an ACR located in a different subscription from your AKS cluster, use the ACR *resource ID* when attaching or detaching from the cluster.
+>
+> ```azurecli
+> az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr /subscriptions/<subscription-id>/resourceGroups/myContainerRegistryResourceGroup/providers/Microsoft.ContainerRegistry/registries/myContainerRegistry
+> ```
-### [Azure PowerShell](#tab/azure-powershell)
+#### [Azure PowerShell](#tab/azure-powershell)
```azurepowershell
-# set this to the name of your Azure Container Registry. It must be globally unique
+# Set this variable to the name of your ACR. The name must be globally unique.
+ $MYACR = 'myContainerRegistry'
-# Run the following line to create an Azure Container Registry if you do not already have one
-New-AzContainerRegistry -Name $MYACR -ResourceGroupName myContainerRegistryResourceGroup -Sku Basic
+# Create an AKS cluster with ACR integration.
-# Create an AKS cluster with ACR integration
New-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -GenerateSshKey -AcrNameToAttach $MYACR ```
This step may take several minutes to complete.
## Configure ACR integration for existing AKS clusters
-### [Azure CLI](#tab/azure-cli)
+### Attach an ACR to an AKS cluster
+
+#### [Azure CLI](#tab/azure-cli)
-Integrate an existing ACR with existing AKS clusters by supplying valid values for **acr-name** or **acr-resource-id** as below.
+Integrate an existing ACR with an existing AKS cluster using the [`--attach-acr` parameter][cli-param] and valid values for **acr-name** or **acr-resource-id**.
```azurecli
+# Attach using acr-name
az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-name>
-```
-or,
-
-```azurecli
+# Attach using acr-resource-id
az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-resource-id> ``` > [!NOTE]
-> Running `az aks update --attach-acr` uses the permissions of the user running the command to create the role ACR assignment. This role is assigned to the kubelet managed identity. For more information on the AKS managed identities, see [Summary of managed identities][summary-msi].
+> The `az aks update --attach-acr` command uses the permissions of the user running the command to create the ACR role assignment. This role is assigned to the [kubelet][kubelet] managed identity. For more information on AKS managed identities, see [Summary of managed identities][summary-msi].
-You can also remove the integration between an ACR and an AKS cluster with the following
+#### [Azure PowerShell](#tab/azure-powershell)
-```azurecli
-az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-name>
+Integrate an existing ACR with an existing AKS cluster using the [`-AcrNameToAttach` parameter][ps-attach] and valid values for **acr-name**.
+
+```azurepowershell
+Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameToAttach <acr-name>
```
-or
+> [!NOTE]
+> Running the `Set-AzAksCluster -AcrNameToAttach` cmdlet uses the permissions of the user running the command to create the ACR role assignment. This role is assigned to the [kubelet][kubelet] managed identity. For more information on AKS managed identities, see [Summary of managed identities][summary-msi].
-```azurecli
-az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-resource-id>
-```
+
-### [Azure PowerShell](#tab/azure-powershell)
+### Detach an ACR from an AKS cluster
-Integrate an existing ACR with existing AKS clusters by supplying valid values for **acr-name** as below.
+#### [Azure CLI](#tab/azure-cli)
-```azurepowershell
-Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameToAttach <acr-name>
+Remove the integration between an ACR and an AKS cluster using the [`--detach-acr` parameter][cli-param] and valid values for **acr-name** or **acr-resource-id**.
+
+```azurecli
+# Detach using acr-name
+az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-name>
+
+# Detach using acr-resource-id
+az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-resource-id>
```
-> [!NOTE]
-> Running `Set-AzAksCluster -AcrNameToAttach` uses the permissions of the user running the command to create the role ACR assignment. This role is assigned to the kubelet managed identity. For more information on the AKS managed identities, see [Summary of managed identities][summary-msi].
+#### [Azure PowerShell](#tab/azure-powershell)
-You can also remove the integration between an ACR and an AKS cluster with the following
+Remove the integration between an ACR and an AKS cluster using the [`-AcrNameToDetach` parameter][ps-detach] and valid values for **acr-name**.
```azurepowershell Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameToDetach <acr-name>
Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameT
### Import an image into your ACR
-Import an image from docker hub into your ACR by running the following:
+Run the following command to import an image from Docker Hub into your ACR.
-### [Azure CLI](#tab/azure-cli)
+#### [Azure CLI](#tab/azure-cli)
```azurecli az acr import -n <acr-name> --source docker.io/library/nginx:latest --image nginx:v1 ```
-### [Azure PowerShell](#tab/azure-powershell)
+#### [Azure PowerShell](#tab/azure-powershell)
```azurepowershell Import-AzContainerRegistryImage -RegistryName <acr-name> -ResourceGroupName myResourceGroup -SourceRegistryUri docker.io -SourceImage library/nginx:latest
Import-AzContainerRegistryImage -RegistryName <acr-name> -ResourceGroupName myRe
### Deploy the sample image from ACR to AKS
-Ensure you have the proper AKS credentials
+Ensure you have the proper AKS credentials.
-### [Azure CLI](#tab/azure-cli)
+#### [Azure CLI](#tab/azure-cli)
```azurecli az aks get-credentials -g myResourceGroup -n myAKSCluster ```
-### [Azure PowerShell](#tab/azure-powershell)
+#### [Azure PowerShell](#tab/azure-powershell)
```azurepowershell Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
-Create a file called **acr-nginx.yaml** that contains the following. Substitute the resource name of your registry for **acr-name**. Example: *myContainerRegistry*.
+Create a file called **acr-nginx.yaml** using the sample YAML below. Replace **acr-name** with the name of your ACR.
```yaml apiVersion: apps/v1
spec:
- containerPort: 80 ```
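A minimal manifest of this shape, assuming the `nginx:v1` image imported earlier and a registry named *mycontainerregistry*, might look like the following sketch (your labels and replica count may differ):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx0-deployment
  labels:
    app: nginx0-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx0
  template:
    metadata:
      labels:
        app: nginx0
    spec:
      containers:
      - name: nginx
        # Image imported earlier with "az acr import --image nginx:v1"
        image: mycontainerregistry.azurecr.io/nginx:v1
        ports:
        - containerPort: 80
```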
-Next, run this deployment in your AKS cluster:
+After creating the file, run the following deployment in your AKS cluster.
```console kubectl apply -f acr-nginx.yaml ```
-You can monitor the deployment by running:
+You can monitor the deployment by running `kubectl get pods`.
```console kubectl get pods ```
-You should have two running pods.
+The output should show two running pods.
```output NAME READY STATUS RESTARTS AGE
nginx0-deployment-669dfc4d4b-xdpd6 1/1 Running 0 20s
``` ### Troubleshooting
-* Run the [az aks check-acr](/cli/azure/aks#az-aks-check-acr) command to validate that the registry is accessible from the AKS cluster.
-* Learn more about [ACR Monitoring](../container-registry/monitor-service.md)
-* Learn more about [ACR Health](../container-registry/container-registry-check-health.md)
+
+* Run the [`az aks check-acr`](/cli/azure/aks#az-aks-check-acr) command to validate that the registry is accessible from the AKS cluster, as shown in the example after this list.
+* Learn more about [ACR monitoring](../container-registry/monitor-service.md).
+* Learn more about [ACR health](../container-registry/container-registry-check-health.md).
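For example, a registry access check for the cluster and registry names used in this article might look like the following sketch (`--acr` takes your registry's login server name):

```azurecli
# Validate that the AKS cluster can authenticate to the attached registry and pull images from it.
az aks check-acr --name myAKSCluster --resource-group myResourceGroup --acr mycontainerregistry.azurecr.io
```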
<!-- LINKS - external -->
-[AKS AKS CLI]: /cli/azure/aks#az_aks_create
-[Image Pull secret]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+[image-pull-secret]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
[summary-msi]: use-managed-identity.md#summary-of-managed-identities
+[acr-pull]: ../role-based-access-control/built-in-roles.md#acrpull
+[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-powershell-install]: /powershell/azure/install-az-ps
+[acr-intro]: ../container-registry/container-registry-intro.md
+[aad-identity]: ../active-directory/managed-identities-azure-resources/overview.md
+[rbac-owner]: ../role-based-access-control/built-in-roles.md#owner
+[rbac-classic]: ../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles
+[kubelet]: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
+[ps-detach]: /powershell/module/az.aks/set-azakscluster#-acrnametodetach
+[cli-param]: /cli/azure/aks#az-aks-update-optional-parameters
+[ps-attach]: /powershell/module/az.aks/set-azakscluster#-acrnametoattach
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
A breakdown of the deployment specifications in the YAML manifest file is as fol
| -- | - | | `.apiVersion` | Specifies the API group and API resource you want to use when creating the resource. | | `.kind` | Specifies the type of resource you want to create. |
-| `.metadata.name` | Specifies the image to run. This file will run the *nginx* image from Docker Hub. |
+| `.metadata.name` | Specifies the name of the deployment. This file will run the *nginx* image from Docker Hub. |
| `.spec.replicas` | Specifies how many pods to create. This file will create three replicated pods. | | `.spec.selector` | Specifies which pods will be affected by this deployment. | | `.spec.selector.matchLabels` | Contains a map of *{key, value}* pairs that allows the deployment to find and manage the created pods. |
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
To use Managed NAT gateway, you must have the following:
* Kubernetes version 1.20.x or above ## Create an AKS cluster with a Managed NAT Gateway
-To create an AKS cluster with a new Managed NAT Gateway, use `--outbound-type managedNATGateway` as well as `--nat-gateway-managed-outbound-ip-count` and `--nat-gateway-idle-timeout` when running `az aks create`. The following example creates a *myresourcegroup* resource group, then creates a *natcluster* AKS cluster in *myresourcegroup* with a Managed NAT Gateway, two outbound IPs, and an idle timeout of 30 seconds.
+To create an AKS cluster with a new Managed NAT Gateway, use `--outbound-type managedNATGateway` as well as `--nat-gateway-managed-outbound-ip-count` and `--nat-gateway-idle-timeout` when running `az aks create`. The following example creates a *myresourcegroup* resource group, then creates a *natcluster* AKS cluster in *myresourcegroup* with a Managed NAT Gateway, two outbound IPs, and an idle timeout of 4 minutes.
```azurecli-interactive
az aks create \
--node-count 3 \ --outbound-type managedNATGateway \ --nat-gateway-managed-outbound-ip-count 2 \
- --nat-gateway-idle-timeout 30
+ --nat-gateway-idle-timeout 4
``` > [!IMPORTANT]
aks Quotas Skus Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quotas-skus-regions.md
Each node in an AKS cluster contains a fixed amount of compute resources such as
For more information on VM types and their compute resources, see [Sizes for virtual machines in Azure][vm-skus].
+## Supported container image sizes
+
+AKS doesn't set a limit on the container image size. However, it's important to understand that the larger the container image, the higher the memory demand. This could potentially exceed resource limits or the overall available memory of worker nodes. By default, memory for VM size Standard_DS2_v2 for an AKS cluster is set to 7 GiB.
+
+When a container image is very large (1 TiB or more), kubelet might not be able to pull it from your container registry to a node due to lack of disk space.
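If you suspect a node can't pull a large image, the standard `kubectl` views can help confirm it. The following is a sketch, not specific to any one cluster:

```console
# Check node conditions such as DiskPressure and MemoryPressure.
kubectl describe nodes

# Look for failed events, including image pulls that ran out of disk space.
kubectl get events --all-namespaces --field-selector reason=Failed
```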
+ ## Region availability For the latest list of where you can deploy and run clusters, see [AKS region availability][region-availability].
aks Use Mariner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-mariner.md
description: Learn how to use the Mariner container host on Azure Kubernetes Ser
Previously updated : 09/22/2022 Last updated : 11/17/2022 # Use the Mariner container host on Azure Kubernetes Service (AKS)
Mariner currently has the following limitations:
* Mariner does not yet have image SKUs for GPU, ARM64, SGX, or FIPS. * Mariner does not yet have FedRAMP, FIPS, or CIS certification. * Mariner cannot yet be deployed through Azure portal or Terraform.
-* Qualys and Trivy are the only vulnerability scanning tools that support Mariner today.
+* Qualys, Trivy, and Microsoft Defender for Containers are the only vulnerability scanning tools that support Mariner today.
* The Mariner container host is a Gen 2 image. Mariner does not plan to offer a Gen 1 SKU. * Node configurations are not yet supported. * Mariner is not yet supported in GitHub actions.
aks Use Windows Hpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-windows-hpc.md
spec:
hostProcess: true runAsUserName: "NT AUTHORITY\\SYSTEM" command:
- - pwsh.exe
+ - powershell.exe
- -command - | $AdminRights = ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]"Administrator")
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md
To register the app, perform the following steps:
You're now ready to use the Microsoft identity platform for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
+## Additional validations (optional)
+
+The steps defined above authenticate incoming requests for your Azure AD tenant. This allows anyone within the tenant to access the application, which is fine for many applications. However, some applications need to restrict access further by making authorization decisions. Your application code is often the best place to handle custom authorization logic, but for common scenarios the platform provides built-in checks that you can use to limit access.
+
+This section shows how to enable built-in checks using the [App Service authentication V2 API](./configure-authentication-api-version.md). Currently, the only way to configure these built-in checks is via [Azure Resource Manager templates](/azure/templates/microsoft.web/sites/config-authsettingsv2) or the [REST API](/rest/api/appservice/web-apps/update-auth-settings-v2).
+
+Within the API object, the Azure Active Directory identity provider configuration has a `validation` section that can include a `defaultAuthorizationPolicy` object, as shown in the following structure:
+
+```json
+{
+ "validation": {
+ "defaultAuthorizationPolicy": {
+ "allowedApplications": [],
+ "allowedPrincipals": {
+ "identities": []
+ }
+ }
+ }
+}
+```
+
+| Property | Description |
+||-|
+| `defaultAuthorizationPolicy` | A grouping of requirements that must be met in order to access the app. Access is granted based on a logical `AND` over each of its configured properties. When `allowedApplications` and `allowedPrincipals` are both configured, the incoming request must satisfy both requirements in order to be accepted. |
+| `allowedApplications` | An allowlist of string application **client IDs** representing the client resource that is calling into the app. When this property is configured as a nonempty array, only tokens obtained by an application specified in the list will be accepted.<br/><br/>This policy evaluates the `appid` or `azp` claim of the incoming token, which must be an access token. See the [Microsoft Identity Platform claims reference]. |
+| `allowedPrincipals` | A grouping of checks that determine if the principal represented by the incoming request may access the app. Satisfaction of `allowedPrincipals` is based on a logical `OR` over its configured properties. |
+| `identities` (under `allowedPrincipals`) | An allowlist of string **object IDs** representing users or applications that have access. When this property is configured as a nonempty array, the `allowedPrincipals` requirement can be satisfied if the user or application represented by the request is specified in the list.<br/><br/>This policy evaluates the `oid` claim of the incoming token. See the [Microsoft Identity Platform claims reference]. |
+
+Requests that fail these built-in checks are given an HTTP `403 Forbidden` response.
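For example, a policy that accepts tokens only from one client application, and only for two specific users or applications, might look like the following sketch (the GUIDs are placeholders, not real IDs):

```json
{
  "validation": {
    "defaultAuthorizationPolicy": {
      "allowedApplications": [
        "11111111-1111-1111-1111-111111111111"
      ],
      "allowedPrincipals": {
        "identities": [
          "22222222-2222-2222-2222-222222222222",
          "33333333-3333-3333-3333-333333333333"
        ]
      }
    }
  }
}
```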
+
+[Microsoft Identity Platform claims reference]: ../active-directory/develop/access-tokens.md#payload-claims
+ ## Configure client apps to access your App Service In the prior section, you registered your App Service or Azure Function to authenticate users. This section explains how to register native client or daemon apps so that they can request access to APIs exposed by your App Service on behalf of users or themselves. Completing the steps in this section is not required if you only wish to authenticate users.
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
Azure generates the activity log by default. The logs are preserved for 90 days
The access log is generated only if you've enabled it on each Application Gateway instance, as detailed in the preceding steps. The data is stored in the storage account that you specified when you enabled the logging. Each access of Application Gateway is logged in JSON format as shown below.
-#### For Application Gateway Standard and WAF SKU (v1)
-
-|Value |Description |
-|||
-|instanceId | Application Gateway instance that served the request. |
-|clientIP | Originating IP for the request. |
-|clientPort | Originating port for the request. |
-|httpMethod | HTTP method used by the request. |
-|requestUri | URI of the received request. |
-|RequestQuery | **Server-Routed**: Backend pool instance that was sent the request.</br>**X-AzureApplicationGateway-LOG-ID**: Correlation ID used for the request. It can be used to troubleshoot traffic issues on the backend servers. </br>**SERVER-STATUS**: HTTP response code that Application Gateway received from the back end. |
-|UserAgent | User agent from the HTTP request header. |
-|httpStatus | HTTP status code returned to the client from Application Gateway. |
-|httpVersion | HTTP version of the request. |
-|receivedBytes | Size of packet received, in bytes. |
-|sentBytes| Size of packet sent, in bytes.|
-|timeTaken| Length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This is calculated as the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. |
-|sslEnabled| Whether communication to the backend pools used TLS/SSL. Valid values are on and off.|
-|host| The hostname with which the request has been sent to the backend server. If backend hostname is being overridden, this name will reflect that.|
-|originalHost| The hostname with which the request was received by the Application Gateway from the client.|
-
-```json
-{
- "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/PEERINGTEST/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}",
- "operationName": "ApplicationGatewayAccess",
- "time": "2017-04-26T19:27:38Z",
- "category": "ApplicationGatewayAccessLog",
- "properties": {
- "instanceId": "ApplicationGatewayRole_IN_0",
- "clientIP": "191.96.249.97",
- "clientPort": 46886,
- "httpMethod": "GET",
- "requestUri": "/phpmyadmin/scripts/setup.php",
- "requestQuery": "X-AzureApplicationGateway-CACHE-HIT=0&SERVER-ROUTED=10.4.0.4&X-AzureApplicationGateway-LOG-ID=874f1f0f-6807-41c9-b7bc-f3cfa74aa0b1&SERVER-STATUS=404",
- "userAgent": "-",
- "httpStatus": 404,
- "httpVersion": "HTTP/1.0",
- "receivedBytes": 65,
- "sentBytes": 553,
- "timeTaken": 205,
- "sslEnabled": "off",
- "host": "www.contoso.com",
- "originalHost": "www.contoso.com"
- }
-}
-```
#### For Application Gateway and WAF v2 SKU |Value |Description | ||| |instanceId | Application Gateway instance that served the request. |
-|clientIP | Originating IP for the request. |
+|clientIP | IP of the immediate client of Application Gateway. If another proxy fronts your application gateway, this displays the IP of that fronting proxy. |
|httpMethod | HTTP method used by the request. | |requestUri | URI of the received request. | |UserAgent | User agent from the HTTP request header. |
The access log is generated only if you've enabled it on each Application Gatewa
> [!Note] >Access logs with clientIP value 127.0.0.1 originate from an internal security process running on the application gateway instances. You can safely ignore these log entries.
+#### For Application Gateway Standard and WAF SKU (v1)
+
+|Value |Description |
+|||
+|instanceId | Application Gateway instance that served the request. |
+|clientIP | Originating IP for the request. |
+|clientPort | Originating port for the request. |
+|httpMethod | HTTP method used by the request. |
+|requestUri | URI of the received request. |
+|RequestQuery | **Server-Routed**: Backend pool instance that was sent the request.</br>**X-AzureApplicationGateway-LOG-ID**: Correlation ID used for the request. It can be used to troubleshoot traffic issues on the backend servers. </br>**SERVER-STATUS**: HTTP response code that Application Gateway received from the back end. |
+|UserAgent | User agent from the HTTP request header. |
+|httpStatus | HTTP status code returned to the client from Application Gateway. |
+|httpVersion | HTTP version of the request. |
+|receivedBytes | Size of packet received, in bytes. |
+|sentBytes| Size of packet sent, in bytes.|
+|timeTaken| Length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This is calculated as the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. |
+|sslEnabled| Whether communication to the backend pools used TLS/SSL. Valid values are on and off.|
+|host| The hostname with which the request has been sent to the backend server. If backend hostname is being overridden, this name will reflect that.|
+|originalHost| The hostname with which the request was received by the Application Gateway from the client.|
+
+```json
+{
+ "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/PEERINGTEST/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}",
+ "operationName": "ApplicationGatewayAccess",
+ "time": "2017-04-26T19:27:38Z",
+ "category": "ApplicationGatewayAccessLog",
+ "properties": {
+ "instanceId": "ApplicationGatewayRole_IN_0",
+ "clientIP": "191.96.249.97",
+ "clientPort": 46886,
+ "httpMethod": "GET",
+ "requestUri": "/phpmyadmin/scripts/setup.php",
+ "requestQuery": "X-AzureApplicationGateway-CACHE-HIT=0&SERVER-ROUTED=10.4.0.4&X-AzureApplicationGateway-LOG-ID=874f1f0f-6807-41c9-b7bc-f3cfa74aa0b1&SERVER-STATUS=404",
+ "userAgent": "-",
+ "httpStatus": 404,
+ "httpVersion": "HTTP/1.0",
+ "receivedBytes": 65,
+ "sentBytes": 553,
+ "timeTaken": 205,
+ "sslEnabled": "off",
+ "host": "www.contoso.com",
+ "originalHost": "www.contoso.com"
+ }
+}
+```
+ ### Performance log The performance log is generated only if you have enabled it on each Application Gateway instance, as detailed in the preceding steps. The data is stored in the storage account that you specified when you enabled the logging. The performance log data is generated in 1-minute intervals. It is available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data. The following data is logged:
automanage Reference Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/reference-sdk.md
Previously updated : 08/25/2022 Last updated : 11/17/2022 # Automanage SDK overview
-Azure Automanage currently supports the following SDKs:
+Azure Automanage currently supports the following SDKs:
- [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/automanage/azure-mgmt-automanage) - [Go](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/automanage/armautomanage) - [Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/automanage/azure-resourcemanager-automanage) - [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/automanage/arm-automanage)-- CSharp (pending)
+- [CSharp](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/automanage/Azure.ResourceManager.Automanage)
- PowerShell (pending) - Azure CLI (pending) - Terraform (pending)
-Here's a list of a few of the primary operations the SDKs provide:
+Here's a list of a few of the primary operations the SDKs provide:
- Create custom configuration profiles - Delete custom configuration profiles-- Create Best Practices profile assignments -- Create custom profile assignments
+- Create Best Practices profile assignments
+- Create custom profile assignments
- Remove assignments
azure-app-configuration Howto Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md
Use the App Configuration provider or SDK libraries to access App Configuration
You can also make your App Configuration data accessible to your application as *Application settings* or environment variables. With this approach, you can avoid changing your application code.
-* Add references to your App Configuration data in the *Application settings* of your App Service or Azure Functions. For more information, see [Use App Configuration references for App Service and Azure Functions](../app-service/app-service-configuration-references.md).
-* [Export your App Configuration data](howto-import-export-data.md#export-data-to-azure-app-service) to the *Application settings* of your App Service or Azure Functions. Export your data again every time you make new changes in App Configuration if you like your application to pick up the change.
+* Add references to your App Configuration data in the *Application settings* of your App Service or Azure Functions. App Configuration offers tools to [export a collection of key-values as references](howto-import-export-data.md#export-data-to-azure-app-service) at once. For more information, see [Use App Configuration references for App Service and Azure Functions](../app-service/app-service-configuration-references.md).
+* [Export your App Configuration data](howto-import-export-data.md#export-data-to-azure-app-service) to the *Application settings* of your App Service or Azure Functions without selecting the option of export-as-reference. Export your data again every time you make new changes in App Configuration if you want your application to pick up the change. A CLI sketch of this export follows this list.
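As a sketch of the second option, the following Azure CLI command exports key-values from an App Configuration store into an App Service app's application settings (the store, app, and resource group names are placeholders):

```azurecli
# Export key-values from the store into the web app's application settings.
az appconfig kv export --name <app-configuration-store-name> \
    --destination appservice \
    --appservice-account $(az webapp show --name <app-service-name> --resource-group <resource-group-name> --query id --output tsv)
```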
## Reduce requests made to App Configuration
azure-cache-for-redis Cache Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-failover.md
Title: Failover and patching - Azure Cache for Redis description: Learn about failover, patching, and the update process for Azure Cache for Redis. + Previously updated : 03/15/2022 Last updated : 11/16/2022+
An *unplanned failover* might happen because of hardware failure, network failur
The Azure Cache for Redis service regularly updates your cache with the latest platform features and fixes. To patch a cache, the service follows these steps:
-1. The management service selects one node to be patched.
-1. If the selected node is a primary node, the corresponding replica node cooperatively promotes itself. This promotion is considered a planned failover.
-1. The selected node reboots to take the new changes and comes back up as a replica node.
+1. The service patches the replica node first.
+1. The patched replica cooperatively promotes itself to primary. This promotion is considered a planned failover.
+1. The former primary node reboots to take the new changes and comes back up as a replica node.
1. The replica node connects to the primary node and synchronizes data. 1. When the data sync is complete, the patching process repeats for the remaining nodes.
azure-cache-for-redis Cache How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md
Previously updated : 09/29/2022- Last updated : 11/16/2022+ # How to upgrade an existing Redis 4 cache to Redis 6
-Azure Cache for Redis supports upgrading the version of your Azure Cache for Redis from Redis 4 to Redis 6. Upgrading is permanent, and it might cause a brief connection issue similar to regular monthly maintenance. As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading.
+Azure Cache for Redis supports upgrading the version of your Azure Cache for Redis from Redis 4 to Redis 6. Upgrading is similar to regular monthly maintenance. Upgrading follows the same pattern as maintenance: First, the Redis version on the replica node is updated, followed by an update to the primary node. Your client application should treat the upgrade operation exactly like a planned maintenance event.
+
+As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading.
For more details on how to export, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
For more details on how to export, see [Import and Export data in Azure Cache fo
### Limitations -- Upgrading a Basic tier cache results in brief unavailability and data loss.
+- When you upgrade a cache in the Basic tier, the cache is unavailable for several minutes, and the upgrade results in data loss.
- Upgrading on geo-replicated cache isn't supported. You must manually unlink the cache instances before upgrading. - Upgrading a cache with a dependency on Cloud Services isn't supported. You should migrate your cache instance to virtual machine scale set before upgrading. For more information, see [Caches with a dependency on Cloud Services (classic)](./cache-faq.yml) for details on cloud services hosted caches.
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
You can exclude certain types of telemetry from sampling. In this example, data
```
-For more information, see [Sampling in Application Insights](../azure-monitor/app/sampling.md).
+If your project takes a dependency on the Application Insights SDK to do manual telemetry tracking, you may experience strange behavior if your sampling configuration differs from the sampling configuration in your function app. In such cases, use the same sampling configuration as the function app. For more information, see [Sampling in Application Insights](../azure-monitor/app/sampling.md).
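For C# class library functions, one way to keep the configurations aligned is to get the host's `TelemetryConfiguration` through constructor injection, so that a manually created `TelemetryClient` inherits the function app's sampling settings. A sketch:

```csharp
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public class Function1
{
    private readonly TelemetryClient telemetryClient;

    // The Functions host registers TelemetryConfiguration, so the client created
    // here uses the same sampling configuration as the function app.
    public Function1(TelemetryConfiguration configuration)
    {
        this.telemetryClient = new TelemetryClient(configuration);
    }

    [FunctionName("Function1")]
    public Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
        ILogger logger)
    {
        // Manually tracked telemetry is sampled consistently with host telemetry.
        this.telemetryClient.TrackTrace("C# HTTP trigger function processed a request.");
        return Task.FromResult<IActionResult>(new OkResult());
    }
}
```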
## Enable SQL query collection
azure-functions Functions Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-monitoring.md
Typically, you create an Application Insights instance when you create your func
> [!IMPORTANT] > Sovereign clouds, such as Azure Government, require the use of the Application Insights connection string (`APPLICATIONINSIGHTS_CONNECTION_STRING`) instead of the instrumentation key. To learn more, see the [APPLICATIONINSIGHTS_CONNECTION_STRING reference](functions-app-settings.md#applicationinsights_connection_string).
+The following table details the supported features of Application Insights available for monitoring your function apps:
+
+| Azure Functions runtime version | 1.x | 2.x+ |
+|--|:-:|:-:|
+| | | |
+| **Automatic collection of** | | |
+| &bull; Requests | ✓ | ✓ |
+| &bull; Exceptions | ✓ | ✓ |
+| &bull; Performance Counters | ✓ | ✓ |
+| &bull; Dependencies | | |
+| &nbsp;&nbsp;&nbsp;&mdash; HTTP | | ✓ |
+| &nbsp;&nbsp;&nbsp;&mdash; Service Bus | | ✓ |
+| &nbsp;&nbsp;&nbsp;&mdash; Event Hubs | | ✓ |
+| &nbsp;&nbsp;&nbsp;&mdash; SQL\* | | ✓ |
+| | | |
+| **Supported features** | | |
+| &bull; QuickPulse/LiveMetrics | Yes | Yes |
+| &nbsp;&nbsp;&nbsp;&mdash; Secure Control Channel | | Yes |
+| &bull; Sampling | Yes | Yes |
+| &bull; Heartbeats | | Yes |
+| | | |
+| **Correlation** | | |
+| &bull; Service Bus | | Yes |
+| &bull; Event Hubs | | Yes |
+| | | |
+| **Configurable** | | |
+| &bull;[Fully configurable](#custom-telemetry-data) | | Yes |
+
+\* To enable the collection of SQL query string text, see [Enable SQL query collection](./configure-monitoring.md#enable-sql-query-collection).
+ ## Collecting telemetry data With Application Insights integration enabled, telemetry data is sent to your connected Application Insights instance. This data includes logs generated by the Functions host, traces written from your functions code, and performance data.
In addition to automatic dependency data collection, you can also use one of the
+ [Log custom telemetry in JavaScript functions](functions-reference-node.md#log-custom-telemetry) + [Log custom telemetry in Python functions](functions-reference-python.md#log-custom-telemetry)
+### Performance Counters
+
+Automatic collection of Performance Counters isn't supported when running on Linux.
+ ## Writing to logs The way that you write to logs and the APIs you use depend on the language of your function app project.
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).** 2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
- 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** blade from left menu > 'AzureMonitorLinuxAgent'should show up with Status: 'Provisioning succeeded'
+ 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** from the pane on the left > 'AzureMonitorLinuxAgent' should show up with Status: 'Provisioning succeeded'
2. If you don't see the extension listed, check if machine can reach Azure and find the extension to install using the command below: ```azurecli az vm extension image list-versions --location <machine-region> --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
4. **Verify that the DCR exists and is associated with the virtual machine:** 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
- 2. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** blade from left menu > You should see the virtual machine listed here.
+ 2. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here.
3. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs. 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
azure-monitor Azure Monitor Agent Troubleshoot Windows Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-arc.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).** 2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
- 1. Open Azure portal > select your Arc-enabled server > Open **Settings** : **Extensions** blade from left menu > 'AzureMonitorWindowsAgent'should show up with Status: 'Succeeded'
+ 1. Open Azure portal > select your Arc-enabled server > Open **Settings** : **Extensions** from the pane on the left > 'AzureMonitorWindowsAgent' should show up with Status: 'Succeeded'
2. If not, check if the Arc agent (Connected Machine Agent) is able to connect to Azure and the extension service is running. ```azurecli azcmagent show
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
4. **Verify that the DCR exists and is associated with the Arc-enabled server:** 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace. 2. On your Arc-enabled server, verify the existence of the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.latest.xml`. If this file doesn't exist, the Arc-enabled server may not be associated with a DCR.
- 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** blade from left menu > You should see the Arc-enabled server listed here
+ 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the Arc-enabled server listed here
4. If not listed, click 'Add' and select your Arc-enabled server from the resource picker. Repeat across all DCRs. 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
azure-monitor Azure Monitor Agent Troubleshoot Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-vm.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).** 2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
- 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** blade from left menu > 'AzureMonitorWindowsAgent'should show up with Status: 'Provisioning succeeded'
+ 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** from the pane on the left > 'AzureMonitorWindowsAgent' should show up with Status: 'Provisioning succeeded'
2. If not, check if machine can reach Azure and find the extension to install using the command below: ```azurecli az vm extension image list-versions --location <machine-region> --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
- The virtual machine may not have Managed Identity enabled. [See here](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-during-creation-of-a-vm) on how to enable. - IMDS service is not running/accessible from the virtual machine. [Check if you can access IMDS from the machine](../../virtual-machines/windows/instance-metadata-service.md?tabs=windows). If not, [file a ticket](#file-a-ticket) with **Summary** as 'IMDS service not running' and **Problem type** as 'I need help configuring data collection from a VM'. - AMA cannot access IMDS. Check if you see IMDS errors in `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MAEventTable.tsf` file. If yes, [file a ticket](#file-a-ticket) with **Summary** as 'AMA cannot access IMDS' and **Problem type** as 'I need help configuring data collection from a VM'.
- 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** blade from left menu > You should see the virtual machine listed here
+ 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here
4. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs. 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
azure-monitor Data Sources Event Tracing Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-event-tracing-windows.md
Ensure the *Windows diagnostics extension* is [installed](./diagnostics-extensio
### Step 3: Configure ETW log collection
-1. Navigate to the **Diagnostic Settings** blade of the virtual machine
+1. From the pane on the left, navigate to the **Diagnostic Settings** for the virtual machine
-2. Select the **Logs** tab
+2. Select the **Logs** tab.
3. Scroll down and enable the **Event tracing for Windows (ETW) events** option ![Screenshot of diagnostics settings](./media/data-sources-event-tracing-windows/enable-event-tracing-windows-collection.png)
azure-monitor Alerts Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-logic-apps.md
To trigger your Logic app, create an action group, then create an alert that use
1. Select **OK**. 1. Enter a name in the **Name** field. 1. Select **Review + create**, then **Create**. ## Test your action group
The following email will be sent to the specified account:
1. Select your action group from the list. 1. Select **Select**. 1. Finish the creation of your rule.
- :::image type="content" source="./media/alerts-logic-apps/select-action-groups.png" alt-text="A screenshot showing the actions tab of the create rules page and the select action groups blade.":::
+ :::image type="content" source="./media/alerts-logic-apps/select-action-groups.png" alt-text="A screenshot showing the actions tab of the create rules page and the select action groups pane.":::
## Next steps
azure-monitor Alerts Smart Detections Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-smart-detections-migration.md
A new set of alert rules is created when migrating an Application Insights resou
| Potential security issue detected (preview) | *discontinued* <sup>(3)</sup> | | Abnormal rise in daily data volume (preview) | *discontinued* <sup>(3)</sup> |
-<sup>(1)</sup> Name of rule as appears in smart detection Settings blade
+<sup>(1)</sup> Name of rule as it appears in smart detection Settings pane
<sup>(2)</sup> Name of new alert rule after migration <sup>(3)</sup> These smart detection capabilities aren't converted to alerts, because of low usage and reassessment of detection effectiveness. These detectors will no longer be supported for this resource once its migration is completed.
azure-monitor Proactive Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-diagnostics.md
Smart detection automatically warns you of potential performance problems and failure anomalies in your web application. It performs proactive analysis of the telemetry that your app sends to [Application Insights](../app/app-insights-overview.md). If there is a sudden rise in failure rates, or abnormal patterns in client or server performance, you get an alert. This feature needs no configuration. It operates if your application sends enough telemetry.
-You can access the detections issued by smart detection both from the emails you receive, and from the smart detection blade.
+You can access the detections issued by smart detection both from the emails you receive, and from the smart detection pane.
## Review your smart detections You can discover detections in two ways:
You can discover detections in two ways:
![Email alert](./media/proactive-diagnostics/03.png) Click the large button to open more detail in the portal.
-* **The smart detection blade** in Application Insights. Select **Smart detection** under the **Investigate** menu to see a list of recent detections.
+* **The smart detection pane** in Application Insights. Select **Smart detection** under the **Investigate** menu to see a list of recent detections.
![View recent detections](./media/proactive-diagnostics/04.png)
Smart detection detects and notifies about various issues, such as:
All smart detection rules, except for rules marked as _preview_, are configured by default to send email notifications when detections are found.
-Configuring email notifications for a specific smart detection rule can be done by opening the smart detection **Settings** blade and selecting the rule, which will open the **Edit rule** blade.
+Configuring email notifications for a specific smart detection rule can be done by opening the smart detection **Settings** pane and selecting the rule, which will open the **Edit rule** pane.
Alternatively, you can change the configuration using Azure Resource Manager templates. For more information, see [Manage Application Insights smart detection rules using Azure Resource Manager templates](./proactive-arm-config.md).
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-performance-diagnostics.md
The notifications include diagnostic information. Here's an example:
2. **Scope**. Is the problem affecting all traffic, or just some pages? Is it restricted to particular browsers or locations? This information can be obtained from the notification. 3. **Diagnose**. Often, the diagnostic information in the notification will suggest the nature of the problem. For example, if response time slows down when request rate is high, it may indicate that your server or dependencies are beyond their capacity.
- Otherwise, open the Performance blade in Application Insights. You'll find there [Profiler](../profiler/profiler.md) data. If exceptions are thrown, you can also try the [snapshot debugger](../snapshot-debugger/snapshot-debugger.md).
+ Otherwise, open the Performance pane in Application Insights. There you'll find [Profiler](../profiler/profiler.md) data. If exceptions are thrown, you can also try the [snapshot debugger](../snapshot-debugger/snapshot-debugger.md).
## Configure Email Notifications
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Filters can be reused in two ways:
later time may be different from the one observed when the link was created. -- The dashboard pin :::image type="content" source="media/app-map/image-10.png" alt-text="A screenshot displaying the dashboard pin button."::: is located next to the title bar of the Application Map blade. This button pins the map to a dashboard, along with the filters applied to it. This action can be useful for filters that are frequently interesting. As an example, the user can pin a map with "Error connector" filter applied to it, and the dashboard view will only show nodes that have errors in their HTTP calls.
+- The dashboard pin :::image type="content" source="media/app-map/image-10.png" alt-text="A screenshot displaying the dashboard pin button."::: is located next to the title bar of the Application Map pane. This button pins the map to a dashboard, along with the filters applied to it. This action can be useful for filters that are frequently interesting. As an example, the user can pin a map with "Error connector" filter applied to it, and the dashboard view will only show nodes that have errors in their HTTP calls.
#### Filter usage scenarios
azure-monitor Azure Functions Supported Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-functions-supported-features.md
- Title: Azure Application Insights - Azure Functions Supported Features
-description: Application Insights Supported Features for Azure Functions
- Previously updated : 4/23/2019---
-# Application Insights for Azure Functions supported features
-
-Azure Functions offers [built-in integration](../../azure-functions/functions-monitoring.md) with Application Insights, which is available through the ILogger Interface. Below is the list of currently supported features. Review Azure Functions' guide for [Getting started](../../azure-functions/configure-monitoring.md#enable-application-insights-integration).
-
-For more information about Functions runtime versions, see [here](../../azure-functions/functions-versions.md).
-
-For more information about compatible versions of Application Insights, see [Dependencies](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Logging.ApplicationInsights/).
-
-## Supported features
-
-| Azure Functions | V1 | V2 & V3 |
-|--|||
-| | | |
-| **Automatic collection of** | | |
-| &bull; Requests | Yes | Yes |
-| &bull; Exceptions | Yes | Yes |
-| &bull; Performance Counters | Yes | Yes |
-| &bull; Dependencies | | |
-| &nbsp;&nbsp;&nbsp;&mdash; HTTP | | Yes |
-| &nbsp;&nbsp;&nbsp;&mdash; ServiceBus| | Yes |
-| &nbsp;&nbsp;&nbsp;&mdash; EventHub | | Yes |
-| &nbsp;&nbsp;&nbsp;&mdash; SQL | | Yes |
-| | | |
-| **Supported features** | | |
-| &bull; QuickPulse/LiveMetrics | Yes | Yes |
-| &nbsp;&nbsp;&nbsp;&mdash; Secure Control Channel | | Yes |
-| &bull; Sampling | Yes | Yes |
-| &bull; Heartbeats | | Yes |
-| | | |
-| **Correlation** | | |
-| &bull; ServiceBus | | Yes |
-| &bull; EventHub | | Yes |
-| | | |
-| **Configurable** | | |
-| &bull;Fully configurable.<br/>See [Azure Functions](https://github.com/Microsoft/ApplicationInsights-aspnetcore/issues/759#issuecomment-426687852) for instructions.<br/>See [ASP.NET Core](https://github.com/Microsoft/ApplicationInsights-aspnetcore/wiki/Custom-Configuration) for all options. | | Yes |
-
-## Performance Counters
-
-Automatic collection of Performance Counters only work Windows machines.
-
-## Live Metrics & Secure Control Channel
-
-The custom filters criteria you specify are sent back to the Live Metrics component in the Application Insights SDK. The filters could potentially contain sensitive information such as customerIDs. You can make the channel secure with a secret API key. See [Secure the control channel](./live-stream.md#secure-the-control-channel) for instructions.
-
-## Sampling
-
-Azure Functions enables Sampling by default in their configuration. For more information, see [Configure Sampling](../../azure-functions/configure-monitoring.md#configure-sampling).
-
-If your project takes a dependency on the Application Insights SDK to do manual telemetry tracking, you may experience strange behavior if your sampling configuration is different than the Functions' sampling configuration.
-
-We recommend using the same configuration as Functions. With **Functions v2**, you can get the same configuration using dependency injection in your constructor:
-
-```csharp
-using Microsoft.ApplicationInsights;
-using Microsoft.ApplicationInsights.Extensibility;
-
-public class Function1
-{
-
- private readonly TelemetryClient telemetryClient;
-
- public Function1(TelemetryConfiguration configuration)
- {
- this.telemetryClient = new TelemetryClient(configuration);
- }
-
- [FunctionName("Function1")]
- public async Task<IActionResult> Run(
- [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req, ILogger logger)
- {
- this.telemetryClient.TrackTrace("C# HTTP trigger function processed a request.");
- }
-}
-```
azure-monitor Java 2X Collectd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-collectd.md
Restart collectd according to its [manual](https://collectd.org/wiki/index.php/F
## View the data in Application Insights In your Application Insights resource, open [Metrics and add charts][metrics], selecting the metrics you want to see from the Custom category.
-By default, the metrics are aggregated across all host machines from which the metrics were collected. To view the metrics per host, in the Chart details blade, turn on Grouping and then choose to group by CollectD-Host.
+By default, the metrics are aggregated across all host machines from which the metrics were collected. To view the metrics per host, in the Chart details pane, turn on Grouping and then choose to group by CollectD-Host.
## To exclude upload of specific statistics By default, the Application Insights plugin sends all the data collected by all the enabled collectd 'read' plugins.
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-get-started.md
Either run it in debug mode on your development machine, or publish to your serv
## View your telemetry in Application Insights Return to your Application Insights resource in [Microsoft Azure portal](https://portal.azure.com).
-HTTP requests data appears on the overview blade. (If it isn't there, wait a few seconds and then click Refresh.)
+HTTP requests data appears on the overview pane. (If it isn't there, wait a few seconds and then click Refresh.)
![Screenshot of overview sample data](./media/java-get-started/overview-graphs.png)
azure-monitor Java 2X Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-trace-logs.md
To start getting traces, merge the relevant snippet of code to the Log4J or Logb
The Application Insights appenders can be referenced by any configured logger, and not necessarily by the root logger (as shown in the code samples above). ## Explore your traces in the Application Insights portal
-Now that you've configured your project to send traces to Application Insights, you can view and search these traces in the Application Insights portal, in the [Search][diagnostic] blade.
+Now that you've configured your project to send traces to Application Insights, you can view and search these traces in the Application Insights portal, in the [Search][diagnostic] pane.
Exceptions submitted via loggers will be displayed on the portal as Exception Telemetry.
azure-monitor Java Standalone Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md
The Application Insights Java profiler uses the JFR profiler provided by the JVM
This data is gathered on demand when trigger conditions are met. The available triggers are thresholds over CPU usage and Memory consumption.
-When a threshold is reached, a profile of the configured type and duration is gathered and uploaded. This profile is then visible within the performance blade of the associated Application Insights Portal UI.
+When a threshold is reached, a profile of the configured type and duration is gathered and uploaded. This profile is then visible within the performance pane of the associated Application Insights Portal UI.
> [!WARNING] > The JFR profiler by default executes the "profile-without-env-data" profile. A JFR file is a series of events emitted by the JVM. The "profile-without-env-data" configuration, is similar to the "profile" configuration that ships with the JVM, however has had some events disabled that have the potential to contain sensitive deployment information such as environment variables, arguments provided to the JVM and processes running on the system.
The following steps will guide you through enabling the profiling component on t
1. Configure the resource thresholds that will cause a profile to be collected: 1. Browse to the Performance -> Profiler section of the Application Insights instance.
- :::image type="content" source="./media/java-standalone-profiler/performance-blade.png" alt-text="Screenshot of the link to open performance blade." lightbox="media/java-standalone-profiler/performance-blade.png":::
- :::image type="content" source="./media/java-standalone-profiler/profiler-button.png" alt-text="Screenshot of the Profiler button from the Performance blade." lightbox="media/java-standalone-profiler/profiler-button.png":::
+ :::image type="content" source="./media/java-standalone-profiler/performance-blade.png" alt-text="Screenshot of the link to open performance pane." lightbox="media/java-standalone-profiler/performance-blade.png":::
+ :::image type="content" source="./media/java-standalone-profiler/profiler-button.png" alt-text="Screenshot of the Profiler button from the Performance pane." lightbox="media/java-standalone-profiler/profiler-button.png":::
2. Select "Triggers"
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
The end-to-end diagnostics and the application map provide visibility into one s
### How to enable distributed tracing for Java Function apps
-Navigate to the functions app Overview blade and go to configurations. Under Application Settings, click "+ New application setting".
+Navigate to the function app's Overview pane and go to **Configuration**. Under Application Settings, click "+ New application setting".
> [!div class="mx-imgBorder"] > ![Under Settings, add new application settings](./media//functions/create-new-setting.png)
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
for i in range(100):
> [!TIP]
-> If you're not sure where to set the sampling rate, start at 5% (i.e., 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance blades. A higher rate generally results in higher accuracy. However, ANY sampling will affect accuracy so we recommend alerting on [OpenTelemetry metrics](#metrics), which are unaffected by sampling.
+> If you're not sure where to set the sampling rate, start at 5% (i.e., 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance panes. A higher rate generally results in higher accuracy. However, ANY sampling will affect accuracy so we recommend alerting on [OpenTelemetry metrics](#metrics), which are unaffected by sampling.
## Instrumentation libraries
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
This article explains the difference between "traditional" Application Insights
## Log-based Metrics
-In the past, the application monitoring telemetry data model in Application Insights was solely based on a small number of predefined types of events, such as requests, exceptions, dependency calls, page views, etc. Developers can use the SDK to either emit these events manually (by writing code that explicitly invokes the SDK) or they can rely on the automatic collection of events from auto-instrumentation. In either case, the Application Insights backend stores all collected events as logs, and the Application Insights blades in the Azure portal act as an analytical and diagnostic tool for visualizing event-based data from logs.
+In the past, the application monitoring telemetry data model in Application Insights was solely based on a small number of predefined types of events, such as requests, exceptions, dependency calls, page views, etc. Developers can use the SDK to either emit these events manually (by writing code that explicitly invokes the SDK) or they can rely on the automatic collection of events from auto-instrumentation. In either case, the Application Insights backend stores all collected events as logs, and the Application Insights panes in the Azure portal act as an analytical and diagnostic tool for visualizing event-based data from logs.
Using logs to retain a complete set of events can bring great analytical and diagnostic value. For example, you can get an exact count of requests to a particular URL with the number of distinct users who made these calls. Or you can get detailed diagnostic traces, including exceptions and dependency calls for any user session. Having this type of information can significantly improve visibility into the application health and usage, allowing to cut down the time necessary to diagnose issues with an app.
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Use this type of sampling if your app often goes over its monthly quota and you
Set the sampling rate in the Usage and estimated costs page:
-![From the application's Overview blade, click Settings, Quota, Samples, then select a sampling rate, and click Update.](./media/sampling/data-sampling.png)
+![From the application's Overview pane, click Settings, Quota, Samples, then select a sampling rate, and click Update.](./media/sampling/data-sampling.png)
Like other types of sampling, the algorithm retains related telemetry items. For example, when you're inspecting the telemetry in Search, you'll be able to find the request related to a particular exception. Metric counts such as request rate and exception rate are correctly retained.
As the application is scaled up, it may be processing dozens, hundreds, or thous
As sampling rates increase, the accuracy of log-based queries decreases, and results are usually inflated. This only impacts the accuracy of log-based queries when sampling is enabled and the sample rates are in a higher range (~60%). The impact varies based on telemetry types, telemetry counts per operation, and other factors.
-To address the problems introduced by sampling pre-aggregated metrics are used in the SDKs. Additional details about these metrics, log-based and pre-aggregated, can be referenced in [Azure Application Insights - Azure Monitor | Microsoft Docs](./pre-aggregated-metrics-log-metrics.md#sdk-supported-pre-aggregated-metrics-table). Relevant properties of the logged data are identified and statistics extracted before sampling occurs. To avoid resource and cost issues, metrics are aggregated. The resulting aggregate data is represented by only a few metric telemetry items per minute, instead of potentially thousands of event telemetry items. These metrics calculate the 25 requests from the example and send a metric to the MDM account reporting ΓÇ£this web app processed 25 requestsΓÇ¥, but the sent request telemetry record will have an `itemCount` of 100. These pre-aggregated metrics report the correct numbers and can be relied upon when sampling affects the log-based queries results. They can be viewed on the Metrics blade of the Application Insights portal.
+To address the problems introduced by sampling, pre-aggregated metrics are used in the SDKs. Additional details about these metrics, log-based and pre-aggregated, can be referenced in [Azure Application Insights - Azure Monitor | Microsoft Docs](./pre-aggregated-metrics-log-metrics.md#sdk-supported-pre-aggregated-metrics-table). Relevant properties of the logged data are identified and statistics extracted before sampling occurs. To avoid resource and cost issues, metrics are aggregated. The resulting aggregate data is represented by only a few metric telemetry items per minute, instead of potentially thousands of event telemetry items. These metrics calculate the 25 requests from the example and send a metric to the MDM account reporting "this web app processed 25 requests", but the sent request telemetry record will have an `itemCount` of 100. These pre-aggregated metrics report the correct numbers and can be relied upon when sampling affects the log-based query results. They can be viewed on the Metrics pane of the Application Insights portal.
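
As a rough illustration of how a log-based query can compensate for sampling, the sketch below sums `itemCount` instead of counting rows. It uses the `az monitor app-insights query` command from the `application-insights` CLI extension; the application ID is a placeholder.

```azurecli
# Approximate the true request count by summing itemCount rather than counting sampled rows
az monitor app-insights query \
  --app <application-insights-app-id> \
  --analytics-query "requests | summarize estimatedRequests = sum(itemCount) by bin(timestamp, 1h)"
```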
## Frequently asked questions
azure-monitor Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sharepoint.md
Insert a web part and embed the code snippet in it.
## View data about your app Redeploy your app.
-Return to your application blade in the [Azure portal](https://portal.azure.com).
+Return to your application pane in the [Azure portal](https://portal.azure.com).
The first events will appear in Search.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
The decision whether to configure a table for Basic Logs is based on the followi
- You only require basic queries of the data using a limited version of the query language. - The cost savings for data ingestion over a month exceed the expected cost for any expected queries
-See [Query Basic Logs in Azure Monitor (preview)](.//logs/basic-logs-query.md) for information on query limitations. See [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md) for more information about Basic Logs.
+See [Query Basic Logs in Azure Monitor](.//logs/basic-logs-query.md) for information on query limitations. See [Configure Basic Logs in Azure Monitor](logs/basic-logs-configure.md) for more information about Basic Logs.
## Reduce the amount of data collected
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
If you are utilizing [Prometheus metric scraping](container-insights-prometheus.
### Configure Basic Logs
-You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs (preview)](../best-practices-cost.md#configure-basic-logs). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
+You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs](../best-practices-cost.md#configure-basic-logs). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
You must be on the ContainerLogV2 schema to configure Basic Logs. For more information, see [Enable the ContainerLogV2 schema (preview)](container-insights-logging-v2.md).
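
If you want to script the table configuration, a hedged Azure CLI sketch follows. It assumes the `az monitor log-analytics workspace table update` command with the `--plan` parameter is available in your CLI version; the workspace and resource group names are placeholders.

```azurecli
# Switch the ContainerLogV2 table to the Basic Logs plan to reduce ingestion cost
az monitor log-analytics workspace table update \
  --resource-group <resource-group-name> \
  --workspace-name <workspace-name> \
  --name ContainerLogV2 \
  --plan Basic
```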
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
>[!IMPORTANT] > If you are deploying Azure Monitor on a Kubernetes cluster running on top of Azure Stack Edge, then the Azure CLI option needs to be followed instead of the Azure portal option as a custom mount path needs to be set for these clusters.
-### Onboarding from the Azure Arc-enabled Kubernetes resource blade
+### Onboarding from the Azure Arc-enabled Kubernetes resource pane
1. In the Azure portal, select the Azure Arc-enabled Kubernetes cluster that you wish to monitor.
-2. Select the 'Insights' item under the 'Monitoring' section of the resource blade.
+2. From the resource pane on the left, select the 'Insights' item under the 'Monitoring' section.
3. On the onboarding page, select the 'Configure Azure Monitor' button
Once you have successfully created the Azure Monitor extension for your Azure Ar
### [Azure portal](#tab/verify-portal) 1. In the Azure portal, select the Azure Arc-enabled Kubernetes cluster with the extension installing
-2. Select the 'Extensions' item under the 'Settings' section of the resource blade
+2. From the resource pane on the left, select the 'Extensions' item under the 'Settings' section.
3. You should see an extension with the name 'azuremonitor-containers' listed, with the listed status in the 'Install status' column ### [CLI](#tab/verify-cli) Run the following command to show the latest status of the `Microsoft.AzureMonitor.Containers` extension
azure-monitor Collect Custom Metrics Guestos Vm Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-classic.md
The process that's outlined in this article only works on classic virtual machin
1. When you're creating this VM, choose the option to create a new classic storage account. We use this storage account in later steps.
-1. In the Azure portal, go to the **Storage accounts** resource blade. Select **Keys**, and take note of the storage account name and storage account key. You need this information in later steps.
+1. In the Azure portal, go to the **Storage accounts** resource pane. Select **Keys**, and take note of the storage account name and storage account key. You need this information in later steps.
![Storage access keys](./media/collect-custom-metrics-guestos-vm-classic/storage-access-keys.png) ## Create a service principal
Give this app "Monitoring Metrics Publisher" permissions to the resource tha
1. On the left menu, select **Monitor.**
-1. On the **Monitor** blade, select **Metrics**.
+1. On the **Monitor** pane on the left, select **Metrics**.
![Navigate metrics](./media/collect-custom-metrics-guestos-vm-classic/navigate-metrics.png)
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Previously updated : 09/13/2022 Last updated : 11/17/2022
This latest update adds a new column and reorders the metrics to be alphabetical
|PendingCPU|Yes|Pending CPU|Count|Maximum|Pending CPU Requests in YARN|No Dimensions| |PendingMemory|Yes|Pending Memory|Count|Maximum|Pending Memory Requests in YARN|No Dimensions|
+> [!NOTE]
+> NumActiveWorkers is supported only if YARN is installed, and the Resource Manager is running.
## Microsoft.HealthcareApis/services
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
See the documentation for different services and solutions for any unique billin
In addition to the pay-as-you-go model, Log Analytics has *commitment tiers*, which can save you as much as 30 percent compared to the pay-as-you-go price. With commitment tier pricing, you can commit to buy data ingestion for a workspace, starting at 100 GB per day, at a lower price than pay-as-you-go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected. - During the commitment period, you can change to a higher commitment tier, which restarts the 31-day commitment period. You can't move back to pay-as-you-go or to a lower commitment tier until after you finish the commitment period.-- At the end of the commitment period, the workspace retains the selected commitment tier, and the workspace can be moved to pay-as-you-go or to a different commitment tier at any time.
+- At the end of the commitment period, the workspace retains the selected commitment tier, and the workspace can be moved to Pay-As-You-Go or to a lower commitment tier at any time.
+- If a workspace is inadvertently moved into a commitment tier, contact Microsoft Support to reset the commitment period so you can move back to the Pay-As-You-Go pricing tier.
Billing for the commitment tiers is done per workspace on a daily basis. If the workspace is part of a [dedicated cluster](#dedicated-clusters), the billing is done for the cluster. See the following "Dedicated clusters" section. For a list of the commitment tiers and their prices, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
The *principalId* GUID is generated by the managed identity service at cluster c
## Link a workspace to a cluster
-When a Log Analytics workspace is linked to a dedicated cluster, new data ingested to the workspace is routed to the new cluster while existing data remains on the existing cluster. If the dedicated cluster is encrypted using customer-managed keys (CMK), only new data is encrypted with the key. The system abstracts this difference, so you can query the workspace as usual while the system performs cross-cluster queries in the background.
+When a Log Analytics workspace is linked to a dedicated cluster, new data ingested to the workspace is routed to the cluster, while existing data remains in the existing Log Analytics cluster. If the dedicated cluster is configured with customer-managed keys (CMK), newly ingested data is encrypted with your key. The system abstracts the data location; you can query data as usual while the system performs cross-cluster queries in the background.
-A cluster can be linked to up to 1,000 workspaces. Linked workspaces are located in the same region as the cluster. To protect the system backend and avoid fragmentation of data, a workspace can't be linked to a cluster more than twice a month.
+A cluster can be linked to up to 1,000 workspaces. Linked workspaces can be located in the same region as the cluster. A workspace can't be linked to a cluster more than twice a month, to prevent data fragmentation.
-To perform the link operation, you need to have 'write' permissions to both the workspace and the cluster resource:
+You need 'write' permissions on both the workspace and the cluster resource to perform the link operation:
- In the workspace: *Microsoft.OperationalInsights/workspaces/write* - In the cluster resource: *Microsoft.OperationalInsights/clusters/write*
-Other than the billing aspects, the linked workspace keeps its own settings such as the length of data retention.
+Other than the billing aspects, the configuration of the linked workspace remains in place, including data retention settings.
The workspace and the cluster can be in different subscriptions. It's possible for the workspace and cluster to be in different tenants if Azure Lighthouse is used to map both of them to a single tenant.
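
For reference, the link operation itself can be scripted. The following Azure CLI sketch assumes the linked service name `cluster` and uses placeholder resource IDs; check the current CLI reference before relying on it.

```azurecli
# Link a Log Analytics workspace to a dedicated cluster
az monitor log-analytics workspace linked-service create \
  --resource-group <resource-group-name> \
  --workspace-name <workspace-name> \
  --name cluster \
  --write-access-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>"
```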
Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -Cl
``` The *billingType* property determines the billing attribution for the cluster and its data:-- *Cluster* (default) -- The billing is attributed to the Cluster resource-- *Workspaces* -- The billing is attributed to linked workspaces proportionally. When data volume from all workspaces is below the Commitment Tier level, the remaining volume is attributed to the cluster
+- *Cluster* (default) -- billing is attributed to the Cluster resource
+- *Workspaces* -- billing is attributed to linked workspaces proportionally. When the data volume from all linked workspaces is below the Commitment Tier level, the bill for the remaining volume is attributed to the cluster
**REST**
Content-type: application/json
### Unlink a workspace from cluster
-You can unlink a workspace from a cluster, and new data to workspace isn't ingested to cluster. Also, the workspace pricing tier is set to per-GB.
-Old data of the unlinked workspace might be left on the cluster. If this data is encrypted using customer-managed keys (CMK), the Key Vault secrets are kept. The system is abstracts this change from Log Analytics users. Users can just query the workspace as usual. The system performs cross-cluster queries on the backend as needed with no indication to users.
+You can unlink a workspace from a cluster at any time. The workspace pricing tier is changed to per GB, data ingested to the cluster before the unlink operation remains in the cluster, and new data to the workspace gets ingested to Log Analytics. You can query data as usual, and the service performs cross-cluster queries seamlessly. If the cluster was configured with customer-managed keys (CMK), data remains encrypted with your key and accessible, while your key and permissions to Key Vault remain.
-> [!WARNING]
-> There is a limit of two link operations for a specific workspace within a month. Take time to consider and plan unlinking actions accordingly.
+> [!NOTE]
+> There is a limit of two link operations for a specific workspace within a month to prevent data distribution across clusters. Contact support if you reach the limit.
Use the following commands to unlink a workspace from cluster:
Remove-AzOperationalInsightsLinkedService -ResourceGroupName "resource-group-nam
## Delete cluster
-It's recommended that you unlink all workspaces from a dedicated cluster before deleting it. You need to have *write* permissions on the cluster resource. When deleting a cluster, you're losing access to all data ingested to the cluster from linked workspaces and from workspaces that were linked previously. This operation isn't reversible. If you delete your cluster when workspaces are linked, these get unlinked automatically and new data get ingested to Log Analytics storage instead.
+You need to have *write* permissions on the cluster resource.
-A cluster resource that was deleted in the last 14 days is kept in soft-delete state and its name remained reserved. After the soft-delete period, the cluster is permanently deleted and its name can be reused to create a cluster.
+When deleting a cluster, you lose access to all data in the cluster that was ingested from workspaces that are linked to it or were linked previously. This operation isn't reversible. If you delete your cluster while workspaces are linked, the workspaces are automatically unlinked from the cluster before the deletion, and new data to the workspaces gets ingested to Log Analytics. If the workspace data retention is longer than the period it was linked to the cluster, you can query the workspace for the time ranges before the link to the cluster and after the unlink, and the service performs cross-cluster queries seamlessly.
-> [!WARNING]
-> - The recovery of soft-deleted clusters isn't supported and it can't be recovered once deleted.
-> - There is a limit of 4 clusters per subscription. Both active and soft-deleted clusters are counted as part of this. Customers shouldn't create recurrent procedures that create and delete clusters. It has a significant impact on Log Analytics backend systems.
+> [!NOTE]
+> - There is a limit of seven clusters per subscription: five active, plus two that were deleted in the past 14 days.
+> - A cluster's name remains reserved for 14 days after deletion and can't be used for creating a new cluster. After 14 days, the name is released and can be reused.
Use the following commands to delete a cluster:
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
The `/read` permission is usually granted from a role that includes _\*/read or_
In addition to using the built-in roles for a Log Analytics workspace, you can create custom roles to assign more granular permissions. Here are some common examples.
-Grant a user access to log data from their resources:
+**Example 1: Grant a user access to log data from their resources.**
- Configure the workspace access control mode to *use workspace or resource permissions*. - Grant users `*/read` or `Microsoft.Insights/logs/*/read` permissions to their resources. If they're already assigned the [Log Analytics Reader](../../role-based-access-control/built-in-roles.md#reader) role on the workspace, it's sufficient.
-Grant a user access to log data from their resources and configure their resources to send logs to the workspace:
+**Example 2: Grant a user access to log data from their resources and configure their resources to send logs to the workspace.**
- Configure the workspace access control mode to *use workspace or resource permissions*. - Grant users the following permissions on the workspace: `Microsoft.OperationalInsights/workspaces/read` and `Microsoft.OperationalInsights/workspaces/sharedKeys/action`. With these permissions, users can't perform any workspace-level queries. They can only enumerate the workspace and use it as a destination for diagnostic settings or agent configuration. - Grant users the following permissions to their resources: `Microsoft.Insights/logs/*/read` and `Microsoft.Insights/diagnosticSettings/write`. If they're already assigned the [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#contributor) role, assigned the Reader role, or granted `*/read` permissions on this resource, it's sufficient.
-Grant a user access to log data from their resources without being able to read security events and send data:
+**Example 3: Grant a user access to log data from their resources without being able to read security events and send data.**
- Configure the workspace access control mode to *use workspace or resource permissions*. - Grant users the following permissions to their resources: `Microsoft.Insights/logs/*/read`. - Add the following NonAction to block users from reading the SecurityEvent type: `Microsoft.Insights/logs/SecurityEvent/read`. The NonAction shall be in the same custom role as the action that provides the read permission (`Microsoft.Insights/logs/*/read`). If the user inherits the read action from another role that's assigned to this resource or to the subscription or resource group, they could read all log types. This scenario is also true if they inherit `*/read` that exists, for example, with the Reader or Contributor role.
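
As an illustration of Example 3, the sketch below defines a custom role that grants log read access while blocking the SecurityEvent type through `NotActions`. The role name and assignable scope are placeholders.

```azurecli
# Create a custom role that can read all log types except SecurityEvent
az role definition create --role-definition '{
  "Name": "Log Reader Without Security Events (example)",
  "IsCustom": true,
  "Description": "Reads resource logs but not the SecurityEvent table.",
  "Actions": [ "Microsoft.Insights/logs/*/read" ],
  "NotActions": [ "Microsoft.Insights/logs/SecurityEvent/read" ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}'
```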
-Grant a user access to log data from their resources and read all Azure AD sign-in and read Update Management solution log data from the workspace:
+**Example 4: Grant a user access to log data from their resources and read all Azure AD sign-in and read Update Management solution log data from the workspace.**
- Configure the workspace access control mode to *use workspace or resource permissions*. - Grant users the following permissions on the workspace:
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
na Previously updated : 11/14/2022 Last updated : 11/17/2022 # Manage availability zone volume placement for Azure NetApp Files
Azure NetApp Files lets you deploy new volumes in the logical availability zone
* VMs and Azure NetApp Files volumes are to be deployed separately, within the same logical availability zone to create zone alignment between VMs and Azure NetApp Files. The availability zone volume placement feature does not create zonal VMs upon volume creation, or vice versa.
-> [!IMPORTANT]
-> Once the volume is created using the availability zone volume placement feature, the volume has the same level of support as other volumes deployed in the subscription without this feature enabled. For example, if there is an issue with backup and restore on the volume, it will be supported because the problem is not with the availability zone volume placement feature itself.
## Register the feature
azure-netapp-files Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md
na Previously updated : 10/31/2022 Last updated : 11/17/2022 # Use availability zones for high availability in Azure NetApp Files (preview)
The use of high availability (HA) architectures with availability zones are now
Azure NetApp Files' [availability zone volume placement](manage-availability-zone-volume-placement.md) feature lets you deploy volumes in availability zones of your choice, in alignment with Azure compute and other services in the same zone. + All Virtual Machines within the region in (peered) VNets can access all Azure NetApp Files resources (blue arrows). Virtual Machines accessing Azure NetApp Files volumes in the same zone (green arrows) share the availability zone failure domain. Azure NetApp Files deployments will occur in the availability zone of choice if Azure NetApp Files is present in that availability zone and has sufficient capacity.
azure-portal Azure Portal Quickstart Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-quickstart-center.md
You can also select **Browse our full Azure catalog** to see all Azure learning
## Next steps * Learn more about Azure setup and migration in the [Microsoft Cloud Adoption Framework for Azure](/azure/architecture/cloud-adoption/).
-* Unlock your cloud skills with more [Learn modules]](/training/azure/).
+* Unlock your cloud skills with more [Learn modules](/training/azure/).
azure-resource-manager Bicep Functions Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-scope.md
Title: Bicep functions - scopes description: Describes the functions to use in a Bicep file to retrieve values about deployment scopes. Previously updated : 11/23/2021 Last updated : 11/17/2022 # Scope functions for Bicep
Returns an object used for setting the scope to the tenant.
Or
-Returns properties about the tenant for the current deployment.
+Returns the tenant of the user.
Namespace: [az](bicep-functions.md#namespaces-for-functions).
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
Title: Create & deploy template specs in Bicep
description: Describes how to create template specs in Bicep and share them with other users in your organization. Previously updated : 11/10/2022 Last updated : 11/17/2022 # Azure Resource Manager template specs in Bicep
To learn more about template specs, and for hands-on guidance, see [Publish libr
## Required permissions
-To create a template spec, you need **write** access to `Microsoft.Resources/templateSpecs` and `Microsoft.Resources/templateSpecs/versions`.
+There are two Azure built-in roles defined for template specs:
-To deploy a template spec, you need **read** access to `Microsoft.Resources/templateSpecs` and `Microsoft.Resources/templateSpecs/versions`. In addition, you also need the permissions for deploying a Bicep file. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
+- [Template Spec Reader](../../role-based-access-control/built-in-roles.md#template-spec-reader)
+- [Template Spec Contributor](../../role-based-access-control/built-in-roles.md#template-spec-contributor)
+
+In addition, you also need the permissions for deploying a Bicep file. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
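
For example, to let a user deploy template specs without broader permissions, you might assign the built-in role at the template spec's scope. The following Azure CLI sketch uses placeholder identifiers.

```azurecli
# Assign the Template Spec Reader role on a specific template spec
az role assignment create \
  --assignee <user-or-group-object-id> \
  --role "Template Spec Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Resources/templateSpecs/<template-spec-name>"
```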
## Why use template specs?
az ts show \
## Deploy template spec
-After you've created the template spec, users with **read** access to the template spec can deploy it. For information about granting access, see [Tutorial: Grant a group access to Azure resources using Azure PowerShell](../../role-based-access-control/tutorial-role-assignments-group-powershell.md).
+After you've created the template spec, users with the [Template Spec Reader](#required-permissions) role can deploy it. In addition, you also need the permissions for deploying an ARM template. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
Template specs can be deployed through the portal, PowerShell, Azure CLI, or as a Bicep module in a larger template deployment. Users in an organization can deploy a template spec to any scope in Azure (resource group, subscription, management group, or tenant).
azure-resource-manager Template Functions Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-scope.md
Title: Template functions - scope description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about deployment scope. Previously updated : 03/10/2022 Last updated : 11/17/2022 # Scope functions for ARM templates
The following example shows the subscription function called in the outputs sect
`tenant()`
-Returns properties about the tenant for the current deployment.
+Returns the tenant of the user.
In Bicep, use the [tenant](../bicep/bicep-functions-scope.md#tenant) scope function.
azure-resource-manager Template Specs Create Portal Forms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs-create-portal-forms.md
Title: Create portal forms for template spec
-description: Learn how to create forms that are displayed in the Azure portal forms. Use the form to deploying a template spec
+description: Learn how to create forms that are displayed in the Azure portal. Use the form to deploy a template spec.
Previously updated : 11/02/2021 Last updated : 11/15/2022 # Tutorial: Create Azure portal forms for a template spec
-To help users deploy a [template spec](template-specs.md), you can create a form that is displayed in the Azure portal. The form lets users provide values that are passed to the template spec as parameters.
+You can create a form that appears in the Azure portal to assist users in deploying a [template spec](template-specs.md). The form allows users to enter values that are passed as parameters to the template spec.
When you create the template spec, you package the form and Azure Resource Manager template (ARM template) together. Deploying the template spec through the portal automatically launches the form.
+The following screenshot shows a form opened in the Azure portal.
+ ## Prerequisites
Copy this file and save it locally. This tutorial assumes you've named it **keyv
}, "skuName": { "type": "string",
- "defaultValue": "Standard",
+ "defaultValue": "standard",
"allowedValues": [
- "Standard",
- "Premium"
+ "standard",
+ "premium"
], "metadata": { "description": "Specifies whether the key vault is a standard vault or a premium vault."
Copy this file and save it locally. This tutorial assumes you've named it **keyv
} }, "secretValue": {
- "type": "securestring",
+ "type": "secureString",
"metadata": { "description": "Specifies the value of the secret that you want to create." }
Copy this file and save it locally. This tutorial assumes you've named it **keyv
"resources": [ { "type": "Microsoft.KeyVault/vaults",
- "apiVersion": "2019-09-01",
+ "apiVersion": "2022-07-01",
"name": "[parameters('keyVaultName')]", "location": "[parameters('location')]", "properties": {
Copy this file and save it locally. This tutorial assumes you've named it **keyv
}, { "type": "Microsoft.KeyVault/vaults/secrets",
- "apiVersion": "2019-09-01",
- "name": "[concat(parameters('keyVaultName'), '/', parameters('secretName'))]",
- "location": "[parameters('location')]",
+ "apiVersion": "2022-07-01",
+ "name": "[format('{0}/{1}', parameters('keyVaultName'), parameters('secretName'))]",
"dependsOn": [ "[resourceId('Microsoft.KeyVault/vaults', parameters('keyVaultName'))]" ],
Copy this file and save it locally. This tutorial assumes you've named it **keyv
## Create default form
-The Azure portal provides a sandbox for creating and previewing forms. This sandbox can generate a form from an existing ARM template. You'll use this default form to get started with creating a form for your template spec.
+The Azure portal provides a sandbox for creating and previewing forms. This sandbox can render a form from an existing ARM template. You'll use this default form to get started with creating a form for your template spec. For more information about the form structure, see [FormViewType](https://github.com/Azure/portaldocs/blob/main/portal-sdk/generated/dx-view-formViewType.md).
1. Open the [Form view sandbox](https://aka.ms/form/sandbox).
-1. Set **Package Type** to **CustomTemplate**.
+ :::image type="content" source="./media/template-specs-create-portal-forms/deploy-template-spec-config.png" alt-text="Screenshot of form view sandbox.":::
- :::image type="content" source="./media/template-specs-create-portal-forms/package-type.png" alt-text="Screenshot of setting package type to custom template":::
+1. In **Package Type**, select **CustomTemplate**. Make sure you select the package type before specifying the deployment template.
+1. In **Deployment template (optional)**, select the key vault template you saved locally. When prompted if you want to overwrite current changes, select **Yes**. The autogenerated form is displayed in the code window. The form is editable from the portal. To customize the form, see [customize form](#customize-form).
+ If you look closely at the autogenerated form, the default title is **Test Form View**, and there's only one step, called **basics**, defined.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2021-09-09/uiFormDefinition.schema.json",
+ "view": {
+ "kind": "Form",
+ "properties": {
+ "title": "Test Form View",
+ "steps": [
+ {
+ "name": "basics",
+ "label": "Basics",
+ "elements": [
+ ...
+ ]
+ }
+ ]
+ },
+ "outputs": {
+ ...
+ }
+ }
+ }
+ ```
-1. Select the icon to open an existing template.
+1. To see how it works without any modifications, select **Preview**.
- :::image type="content" source="./media/template-specs-create-portal-forms/open-template.png" alt-text="Screenshot of icon to open file":::
+ :::image type="content" source="./media/template-specs-create-portal-forms/view-portal-basic.png" alt-text="Screenshot of the generated basic form.":::
-1. Navigate to the key vault template you saved locally. Select it and select **Open**.
-1. When prompted if you want to overwrite current changes, select **Yes**.
-1. The autogenerated form is displayed in the code window. To see that works it without any modifications, select **Preview**.
+ The sandbox displays the form. It has fields for selecting a subscription, resource group, and region. It also has fields for all of the parameters from the template.
- :::image type="content" source="./media/template-specs-create-portal-forms/preview-form.png" alt-text="Screenshot of selecting preview":::
+ Most of the fields are text boxes, but some fields are specific for the type of parameter. When your template includes allowed values for a parameter, the autogenerated form uses a drop-down element. The drop-down element is pre-populated with the allowed values.
-1. The sandbox displays the form. It has fields for selecting a subscription, resource group, and region. It also fields for all of the parameters from the template.
+ Between the title and **Project details**, there are no tabs because the default form has only one step defined. In the **Customize form** section, you'll break the parameters into multiple tabs.
- Most of the fields are text boxes, but some fields are specific for the type of parameter. When your template includes allowed values for a parameter, the autogenerated form uses a drop-down element. The drop-down element is prepopulated with the allowed values.
+ > [!WARNING]
+ > Don't select **Create** as it will launch a real deployment. You'll have a chance to deploy the template spec later in this tutorial.
- > [!WARNING]
- > Don't select **Create** as it will launch a real deployment. You'll have a chance to deploy the template spec later in this tutorial.
+1. To exit from the preview, select **Cancel**.
## Customize form The default form is a good starting point for understanding forms but usually you'll want to customize it. You can edit it in the sandbox or in Visual Studio Code. The preview option is only available in the sandbox.
-1. Let's set the correct schema. Replace the schema text with:
-
- ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/keyvaultform.json" range="1-2" highlight="2" :::
- 1. Give the form a **title** that describes its use.
- ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/keyvaultform.json" range="1-6" highlight="6" :::
+ ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/keyvaultform.json" range="1-6" highlight="6" :::
-1. Your default form had all of the fields for your template combined into one step called **Basics**. To help users understand the values they're providing, divide the form into steps. Each step contains fields related to a logical part of the solution to deploy.
+1. Your default form had all of the fields for your template combined into one step called **Basics**. To help users to understand the values they're providing, divide the form into steps. Each step contains fields related to a logical part of the solution to deploy.
- Find the step labeled **Basics**. You'll keep this step but add steps below it. The new steps will focus on configuring the key vault, setting user permissions, and specifying the secret. Make sure you add a comma after the basics step.
+ Find the step labeled **Basics**. You'll keep this step but add steps below it. The new steps will focus on configuring the key vault, setting user permissions, and specifying the secret. Make sure you add a comma after the basics step.
- ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/steps.json" highlight="15-32" :::
+ ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/steps.json" highlight="15-32" :::
- > [!IMPORTANT]
- > Properties in the form are case-sensitive. Make sure you use the casing shown in the examples.
+ > [!IMPORTANT]
+ > Properties in the form are case-sensitive. Make sure you use the casing shown in the examples.
1. Select **Preview**. You'll see the steps, but most of them don't have any elements.
- :::image type="content" source="./media/template-specs-create-portal-forms/view-steps.png" alt-text="Screenshot of form steps":::
+ :::image type="content" source="./media/template-specs-create-portal-forms/view-steps.png" alt-text="Screenshot of form steps.":::
1. Now, move elements to the appropriate steps. Start with the elements labeled **Secret Name** and **Secret Value**. Remove these elements from the **Basics** step and add them to the **Secret** step.
- ```json
- {
- "name": "secret",
- "label": "Secret",
- "elements": [
- {
- "name": "secretName",
- "type": "Microsoft.Common.TextBox",
- "label": "Secret Name",
- "defaultValue": "",
- "toolTip": "Specifies the name of the secret that you want to create.",
- "constraints": {
- "required": true,
- "regex": "",
- "validationMessage": ""
- },
- "visible": true
- },
- {
- "name": "secretValue",
- "type": "Microsoft.Common.PasswordBox",
- "label": {
- "password": "Secret Value",
- "confirmPassword": "Confirm password"
- },
- "toolTip": "Specifies the value of the secret that you want to create.",
- "constraints": {
- "required": true,
- "regex": "",
- "validationMessage": ""
- },
- "options": {
- "hideConfirmation": true
- },
- "visible": true
- }
- ]
- }
- ```
+ ```json
+ {
+ "name": "secret",
+ "label": "Secret",
+ "elements": [
+ {
+ "name": "secretName",
+ "type": "Microsoft.Common.TextBox",
+ "label": "Secret Name",
+ "defaultValue": "",
+ "toolTip": "Specifies the name of the secret that you want to create.",
+ "constraints": {
+ "required": true,
+ "regex": "",
+ "validationMessage": ""
+ },
+ "visible": true
+ },
+ {
+ "name": "secretValue",
+ "type": "Microsoft.Common.PasswordBox",
+ "label": {
+ "password": "Secret Value",
+ "confirmPassword": "Confirm password"
+ },
+ "toolTip": "Specifies the value of the secret that you want to create.",
+ "constraints": {
+ "required": true,
+ "regex": "",
+ "validationMessage": ""
+ },
+ "options": {
+ "hideConfirmation": true
+ },
+ "visible": true
+ }
+ ]
+ }
+ ```
1. When you move elements, you need to fix the `outputs` section. Currently, the outputs section references those elements as if they were still in the basics step. Fix the syntax so it references the elements in the `secret` step.
- ```json
- "outputs": {
- "parameters": {
- ...
- "secretName": "[steps('secret').secretName]",
- "secretValue": "[steps('secret').secretValue]"
- }
- ```
+ ```json
+ "outputs": {
+ "parameters": {
+ ...
+ "secretName": "[steps('secret').secretName]",
+ "secretValue": "[steps('secret').secretValue]"
+ }
+ ```
1. Continue moving elements to the appropriate steps. Rather than go through each one, take a look at the updated form.
- ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/keyvaultform.json" :::
+ ::: code language="json" source="~/azure-docs-json-samples/azure-resource-manager/ui-forms/keyvaultform.json" :::
-Save this file locally with the name **keyvaultform.json**.
+1. Save this file locally with the name **keyvaultform.json**.
## Create template spec
az ts create \
To test the form, go to the portal and navigate to your template spec. Select **Deploy**. You'll see the form you created. Go through the steps and provide values for the fields.
az ts create \
Redeploy your template spec with the improved portal form. Notice that your permission fields are now drop-down that allow multiple values.
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md
Title: Create & deploy template specs description: Describes how to create template specs and share them with other users in your organization. Previously updated : 11/10/2022 Last updated : 11/17/2022
If you currently have your templates in a GitHub repo or storage account, you ru
The templates you include in a template spec should be verified by administrators in your organization to follow the organization's requirements and guidance.
+## Required permissions
+
+There are two Azure built-in roles defined for template specs:
+
+- [Template Spec Reader](../../role-based-access-control/built-in-roles.md#template-spec-reader)
+- [Template Spec Contributor](../../role-based-access-control/built-in-roles.md#template-spec-contributor)
+
+In addition, you also need the permissions for deploying a Bicep file. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
+ ## Create template spec The following example shows a simple template for creating a storage account in Azure.
az ts show \
## Deploy template spec
-After you've created the template spec, users with **read** access to the template spec can deploy it. For information about granting access, see [Tutorial: Grant a group access to Azure resources using Azure PowerShell](../../role-based-access-control/tutorial-role-assignments-group-powershell.md). In addition, you also need the permissions for deploying an ARM template. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
+After you've created the template spec, users with the [Template Spec Reader](#required-permissions) role can deploy it. In addition, you also need the permissions for deploying an ARM template. See [Deploy - CLI](./deploy-cli.md#required-permissions) or [Deploy - PowerShell](./deploy-powershell.md#required-permissions).
Template specs can be deployed through the portal, PowerShell, Azure CLI, or as a linked template in a larger template deployment. Users in an organization can deploy a template spec to any scope in Azure (resource group, subscription, management group, or tenant).
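
As a quick illustration of a CLI-based deployment, the following sketch deploys a specific template spec version to a resource group; all identifiers are placeholders.

```azurecli
# Deploy a template spec version to a resource group
az deployment group create \
  --resource-group <target-resource-group> \
  --template-spec "/subscriptions/<subscription-id>/resourceGroups/<spec-resource-group>/providers/Microsoft.Resources/templateSpecs/<template-spec-name>/versions/<version>"
```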
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
Title: Logic Apps connector with ARM-based AVI accounts
description: This article shows how to unlock new experiences and monetization opportunities Azure Video Indexer connectors with Logic App and Power Automate with AVI ARM accounts. Previously updated : 08/04/2022 Last updated : 11/16/2022 # Logic Apps connector with ARM-based AVI accounts
The "upload and index your video automatically" scenario covered in this article
* The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure Video Indexer with a callback URL to send a notification once the indexing operation completes. * The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage.
-The logic apps that you create in this article, contain one flow per app. The second section ("**Create a second flow - JSON extraction**") explains how to connect the two. The second flow stands alone and is triggered by the first one (the section with the callback URL).
+The logic apps that you create in this article contain one flow per app. The second section (**Create a new logic app of type consumption**) explains how to connect the two. The second flow stands alone and is triggered by the first one (the section with the callback URL).
## Prerequisites
The logic apps that you create in this article, contain one flow per app. The se
## Set up the first flow - file upload
-In this section you'll, you create the following flow. The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure Video Indexer with a callback URL to send a notification once the indexing operation completes.
+This section describes how to set up the first ("file upload") flow. The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure Video Indexer with a callback URL to send a notification once the indexing operation completes.
The following image shows the first flow: ![Screenshot of the file upload flow.](./media/logic-apps-connector-arm-accounts/first-flow-high-level.png)
-1. Create the [Logic App](https://portal.azure.com/#create/Microsoft.LogicApp). We create a Logic App in the same region as the Azure Video Indexer region (recommended but not required). We call the logic app `UploadIndexVideosApp`.
+1. Create the <a href="https://portal.azure.com/#create/Microsoft.LogicApp" target="_blank">Logic App</a>. We create a Logic App in the same region as the Azure Video Indexer region (recommended but not required). We call the logic app `UploadIndexVideosApp`.
1. Select **Consumption** for **Plan type**. 1. Press **Review + Create** -> **Create**.
The following image shows the first flow:
Select **Save**. > [!TIP]
- > Before moving to the next step step up the right permission between the Logic app and the Azure Video Indexer account.
+ > Before moving to the next step, set up the right permission between the Logic app and the Azure Video Indexer account.
> Make sure you have followed the steps to enable the system-assigned managed identity of your Logic Apps.
The following image shows the first flow:
The completion of the uploading and indexing from the first flow will send an HTTP request with the correct callback URL to trigger the second flow. Then, it will retrieve the insights generated by Azure Video Indexer. In this example, it will store the output of your indexing job in your Azure Storage. However, it's up to you what you do with the output.
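
For context, the upload request that the first flow issues is conceptually similar to the following sketch. The endpoint shape, query parameters, and token handling shown here are assumptions for illustration only; the Logic Apps connector builds the real request for you.

```bash
# Illustrative only: upload a video to Azure Video Indexer with a callback URL (parameters are assumptions)
curl -X POST "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos?name=<video-name>&videoUrl=<url-encoded-source-url>&callbackUrl=<url-encoded-callback-url>&accessToken=<account-access-token>"
```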
-## Create a second flow - JSON extraction
+## Create a new logic app of type consumption
Create the second flow, Logic Apps of type consumption. The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage.
azure-vmware Send Logs To Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/send-logs-to-log-analytics.md
+
+ Title: Send your Azure VMware Solution logs to Log Analytics
+description: Learn about sending logs to log analytics.
++ Last updated : 11/15/2022++
+# Send your Azure VMware Solution logs to Log Analytics
+
+This article shows you how to send Azure VMware Solution logs to Azure Monitor Log Analytics. You can send logs from your AVS private cloud to your Log Analytics workspace, allowing you to take advantage of the Log Analytics feature set, including:
+
+- Powerful querying capabilities with Kusto Query Language (KQL)
+
+- Interactive report-creation capability based on your data, using Workbooks
+
+...without having to get your logs out of the Microsoft ecosystem!
+
+In the rest of this article, we'll show you how easy it is to make this happen.
+
+## How to set up Log Analytics
+
+A Log Analytics workspace:
+
+- Contains your AVS private cloud logs.
+
+- Is the workspace from which you can take desired actions, such as querying for logs.
+
+In this section, you'll:
+
+- Configure a Log Analytics workspace
+
+- Create a diagnostic setting in your private cloud to send your logs to this workspace
+
+### Create a resource
+
+1. In the Azure portal, go to **Create a resource**.
+2. Search for "Log Analytics Workspace" and click **Create** -> **Log Analytics Workspace**.
++
+### Set up your workspace
+
+1. Enter the Subscription you intend to use and the Resource Group that will house this workspace. Give it a name and select a region.
+1. Click **Review** + **Create**.
++
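+If you prefer the CLI over the portal, a minimal sketch for creating the workspace follows (names and region are placeholders).
+
+```azurecli
+# Create a Log Analytics workspace to receive AVS private cloud logs
+az monitor log-analytics workspace create \
+  --resource-group <resource-group-name> \
+  --workspace-name <workspace-name> \
+  --location <region>
+```
+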
+### Add a diagnostic setting
+
+Next, we add a diagnostic setting in your AVS private cloud so it knows where to send your logs.
++
+1. Click your AVS private cloud.
+Go to Diagnostic settings on the left-hand menu under Monitoring.
+Select **Add diagnostic setting**.
+2. Give your diagnostic setting a name.
+Select the log categories you are interested in sending to your Log Analytics workspace.
+
+3. Make sure to select the checkbox next to **Send to Log Analytics workspace**.
+Select the Subscription your Log Analytics workspace lives in and the Log Analytics workspace.
+Click **Save** on the top left.
++
+At this point, your Log Analytics workspace has been successfully configured to receive logs from your AVS private cloud.
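+
+The same diagnostic setting can also be created with the Azure CLI. The sketch below assumes the `allLogs` category group is available for your AVS private cloud; resource IDs are placeholders.
+
+```azurecli
+# Send AVS private cloud logs to a Log Analytics workspace
+az monitor diagnostic-settings create \
+  --name <diagnostic-setting-name> \
+  --resource <avs-private-cloud-resource-id> \
+  --workspace <log-analytics-workspace-resource-id> \
+  --logs '[{"categoryGroup":"allLogs","enabled":true}]'
+```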
+
+## Search and analyze logs using Kusto
+
+Now that you've successfully configured your logs to go to your Log Analytics workspace, you can use that data to gain meaningful insights with the Log Analytics search feature.
+Log Analytics uses a language called the Kusto Query Language (or Kusto) to search through your logs.
+
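+For example, you can run a quick query against the workspace from the command line. The query below is a generic starting point; replace it with the table and filters for the AVS log categories you enabled.
+
+```azurecli
+# Run a simple KQL query against the workspace (workspace ID is the workspace's customer ID GUID)
+az monitor log-analytics query \
+  --workspace <workspace-customer-id> \
+  --analytics-query "search * | take 10"
+```
+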
+For more information, see
+[Data analysis in Azure Data Explorer with Kusto Query Language](/training/paths/data-analysis-data-explorer-kusto-query-language/).
azure-web-pubsub Resource Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/resource-faq.md
Azure SignalR Service is more suitable if:
Azure Web PubSub service is more suitable for situations where: - You need to build real-time applications based on WebSocket technology or publish-subscribe over WebSocket.-- You want to build your own subprotocol or use existing advanced protocols over WebSocket (for example, MQTT, AMQP over WebSocket).
+- You want to build your own subprotocol or use existing advanced sub-protocols over WebSocket (for example, [GraphQL subscriptions over WebSocket](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-graphql-subscribe)).
- You're looking for a lightweight server, for example, sending messages to client without going through the configured backend. ## Where does my data reside?
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 09/07/2022 Last updated : 09/22/2022
Azure Backup provides several ways to restore a VM.
**Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins. **Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).-
+**Cross Zonal Restore** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs and Trusted Launch VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups).
>[!Tip] >To receive alerts/notifications when a restore operation fails, use [Azure Monitor alerts for Azure Backup](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup). This helps you to monitor such failures and take necessary actions to remediate the issues.
As one of the [restore options](#restore-options), you can create a VM quickly w
:::image type="content" source="./media/backup-azure-arm-restore-vms/backup-azure-cross-subscription-restore.png" alt-text="Screenshot showing the list of all subscriptions under the tenant where you have permissions.":::
+1. Choose the required zone from the **Availability Zone** drop-down list to restore an Azure VM pinned to any zone to a different zone.
+
+   Azure Backup now supports Cross Zonal Restore (CZR). You can now restore an Azure VM from the default zone to any available zone. The default zone is the zone in which the Azure VM is running.
+
+   The following screenshot lists all the zones to which you can restore the Azure VM.
+
+ :::image type="content" source="./media/backup-azure-arm-restore-vms/azure-virtual-machine-cross-zonal-restore.png" alt-text="Screenshot showing you how to select an available zone for VM restore.":::
+
+ >[!Note]
+ >Azure Backup supports CZR only for vaults with ZRS or CRR redundancy.
++ 1. Select **Restore** to trigger the restore operation. >[!Note]
As one of the [restore options](#restore-options), you can create a disk from a
Azure Backup now supports Cross Subscription Restore (CSR). Like Azure VM, you can now restore Azure VM disks using a recovery point from the default subscription to another. The default subscription is the subscription where the recovery point is available.
+1. Choose the required zone from the **Availability Zone** drop-down list to restore the VM disks to a different zone.
+
+   Azure Backup now supports Cross Zonal Restore (CZR). Like Azure VM, you can now restore Azure VM disks from the default zone to any available zone. The default zone is the zone in which the VM disks reside.
+
+ >[!Note]
+ >Azure Backup supports CZR only for vaults with ZRS or CRR redundancy.
+ 1. Select **Restore** to trigger the restore operation. When your virtual machine uses managed disks and you select the **Create virtual machine** option, Azure Backup doesn't use the specified storage account. In the case of **Restore disks** and **Instant Restore**, the storage account is used only for storing the template. Managed disks are created in the specified resource group. When your virtual machine uses unmanaged disks, they're restored as blobs to the storage account.
backup Backup Center Community https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-community.md
- Title: Access community resources using Backup center
-description: Use Backup center to access sample templates, scripts, and feature requests
- Previously updated : 02/18/2021--
-# Access community resources using Backup center
-
-You can use Backup center to access various community resources useful for a backup admin or operator.
-
-## Using Community Hub
-
-To access the Community Hub, navigate to Backup Center in the Azure portal and select the **Community** menu item.
-
-![Community Hub](./media/backup-center-community/backup-center-community-hub.png)
-
-Some of the resources available via the Community Hub are:
--- **Microsoft Q&A**: You can use this forum to ask and discover questions about various product features and obtain guidance from the community.--- **Feature Requests**: You can navigate to UserVoice and file feature requests.--- **Samples for automated deployments**: Using the Community Hub, you can discover sample Azure Resource Manager(ARM) templates and Azure Policies that you can use out of the box. You can also find sample PowerShell Scripts, CLI commands, and Microsoft Database Backup scripts.-
-## Next Steps
--- [Learn More about Backup center](backup-center-overview.md)
backup Backup Center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-overview.md
Title: Overview of Backup center
+ Title: Overview of Backup center for Azure Backup
description: This article provides an overview of Backup center for Azure. Previously updated : 09/30/2020 Last updated : 11/16/2022++++
-# Overview of Backup center
+# Overview of Backup center for Azure Backup
-Backup Center provides a **single unified management experience** in Azure for enterprises to govern, monitor, operate, and analyze backups at scale. As such, it's consistent with Azure's native management experiences.
+Backup center provides a *single unified management experience* in Azure for enterprises to govern, monitor, operate, and analyze backups at scale. So, it's consistent with Azure's native management experiences.
+
+In this article, you'll learn about:
+
+> [!div class="checklist"]
+> - Key benefits
+> - Supported scenarios
+> - Get started
+> - Access community resources on Community Hub
+
+## Key benefits
Some of the key benefits of Backup center include:
-* **Single pane of glass to manage backups** – Backup center is designed to function well across a large and distributed Azure environment. You can use Backup center to efficiently manage backups spanning multiple workload types, vaults, subscriptions, regions, and [Azure Lighthouse](../lighthouse/overview.md) tenants.
-* **Datasource-centric management** – Backup center provides views and filters that are centered on the datasources that you're backing up (for example, VMs and databases). This allows a resource owner or a backup admin to monitor and operate backups of items without needing to focus on which vault an item is backed up to. A key feature of this design is the ability to filter views by datasource-specific properties, such as datasource subscription, datasource resource group, and datasource tags. For example, if your organization follows a practice of assigning different tags to VMs belonging to different departments, you can use Backup center to filter backup information based on the tags of the underlying VMs being backed up without needing to focus on the tag of the vault.
-* **Connected experiences** – Backup center provides native integrations to existing Azure services that enable management at scale. For example, Backup center uses the [Azure Policy](../governance/policy/overview.md) experience to help you govern your backups. It also leverages [Azure workbooks](../azure-monitor/visualize/workbooks-overview.md) and [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md) to help you view detailed reports on backups. So you don't need to learn any new principles to use the varied features that Backup center offers. You can also discover community resources from the Backup center.
+- **Single pane of glass to manage backups**: Backup center is designed to function well across a large and distributed Azure environment. You can use Backup center to efficiently manage backups spanning multiple workload types, vaults, subscriptions, regions, and [Azure Lighthouse](../lighthouse/overview.md) tenants.
+- **Datasource-centric management**: Backup center provides views and filters that are centered on the datasources that you're backing up (for example, VMs and databases). This allows a resource owner or a backup admin to monitor and operate backups of items without needing to focus on which vault an item is backed up to. A key feature of this design is the ability to filter views by datasource-specific properties, such as datasource subscription, datasource resource group, and datasource tags. For example, if your organization follows a practice of assigning different tags to VMs belonging to different departments, you can use Backup center to filter backup information based on the tags of the underlying VMs being backed up without needing to focus on the tag of the vault.
+- **Connected experiences**: Backup center provides native integrations to existing Azure services that enable management at scale. For example, Backup center uses the [Azure Policy](../governance/policy/overview.md) experience to help you govern your backups. It also leverages [Azure workbooks](../azure-monitor/visualize/workbooks-overview.md) and [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md) to help you view detailed reports on backups. So, you don't need to learn any new principles to use the varied features that the Backup center offers. You can also [discover community resources from the Backup center](#access-community-resources-on-community-hub).
## Supported scenarios
-* Backup center is currently supported for Azure VM backup, SQL in Azure VM backup, SAP HANA in Azure VM backup, Azure Files backup, Azure Blobs backup, Azure Managed Disks backup, and Azure Database for PostgreSQL Server backup.
-* Refer to the [support matrix](backup-center-support-matrix.md) for a detailed list of supported and unsupported scenarios.
+Backup center is currently supported for:
+
+- Azure VM backup
+- SQL in Azure VM backup
+- SAP HANA on Azure VM backup
+- Azure Files backup
+- Azure Blobs backup
+- Azure Managed Disks backup
+- Azure Database for PostgreSQL Server backup
+
+Learn more about [supported and unsupported scenarios](backup-center-support-matrix.md).
## Get started To get started with using Backup center, search for **Backup center** in the Azure portal and navigate to the **Backup center** dashboard.
-![Backup Center Search](./media/backup-center-overview/backup-center-search.png)
+
+On the **Overview** blade, two tiles appear – **Jobs** and **Backup instances**.
-The first screen that you see is the **Overview**. It contains two tiles – **Jobs** and **Backup instances**.
-![Backup Center tiles](./media/backup-center-overview/backup-center-overview-widgets.png)
+On the **Jobs** tile, you get a summarized view of all backup and restore related jobs that were triggered across your backup estate in the last 24 hours.
-In the **Jobs** tile, you get a summarized view of all backup and restore related jobs that were triggered across your backup estate in the last 24 hours. You can view information on the number of jobs that have completed, failed, and are in-progress. Selecting any of the numbers in this tile allows you to view more information on jobs for a particular datasource type, operation type, and status.
+- You can view information on the number of jobs that have completed, failed, and are in-progress.
+- Select any of the numbers in this tile to view more information on jobs for a particular datasource type, operation type, and status.
-In the **Backup Instances** tile, you get a summarized view of all backup instances across your backup estate. For example, you can see the number of backup instances that are in soft-deleted state compared to the number of instances that are still configured for protection. Selecting any of the numbers in this tile allows you to view more information on backup instances for a particular datasource type and protection state. You can also view all backup instances whose underlying datasource is not found (the datasource might be deleted, or you may not have access to the datasource).
+On the **Backup Instances** tile, you get a summarized view of all backup instances across your backup estate. For example, you can see the number of backup instances that are in soft-deleted state compared to the number of instances that are still configured for protection.
+
+- Select any of the numbers in this tile to view more information on backup instances for a particular datasource type and protection state.
+- You can also view all backup instances whose underlying datasource isn't found (the datasource might be deleted, or you may not have access to the datasource).
Watch the following video to understand the capabilities of Backup center: > [!VIDEO https://www.youtube.com/embed/pFRMBSXZcUk?t=497]
-Follow the [next steps](#next-steps) to understand the different capabilities that Backup center provides, and how you can use these capabilities to manage your backup estate efficiently.
+See the [next steps](#next-steps) to understand the different capabilities that Backup center provides, and how you can use these capabilities to manage your backup estate efficiently.
+
+## Access community resources on Community Hub
+
+You can use Backup center to access various community resources useful for a backup admin or operator.
+
+To access the Community Hub, navigate to the Backup center in the Azure portal and select the **Community** menu item.
++
+Some of the resources available via the Community Hub are:
+
+- **Microsoft Q&A**: You can use this forum to ask and discover questions about various product features and obtain guidance from the community.
+
+- **Feature Requests**: You can navigate to UserVoice and file feature requests.
+
+- **Samples for automated deployments**: Using the Community Hub, you can discover sample Azure Resource Manager (ARM) templates and Azure Policies that you can use out of the box. You can also find sample PowerShell Scripts, CLI commands, and Microsoft Database Backup scripts.
## Next steps
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 11/14/2022 Last updated : 11/22/2022
Back up disks after migrating to managed disks | Supported.<br/><br/> Backup wil
Back up managed disks after enabling resource group lock | Not supported.<br/><br/> Azure Backup can't delete the older restore points, and backups will start to fail when the maximum limit of restore points is reached. Modify backup policy for a VM | Supported.<br/><br/> The VM will be backed up by using the schedule and retention settings in new policy. If retention settings are extended, existing recovery points are marked and kept. If they're reduced, existing recovery points will be pruned in the next cleanup job and eventually deleted. Cancel a backup job| Supported during snapshot process.<br/><br/> Not supported when the snapshot is being transferred to the vault.
-Back up the VM to a different region or subscription |Not supported.<br><br>To successfully back up, virtual machines must be in the same subscription as the vault for backup.
+Back up the VM to a different region or subscription |Not supported.<br><br>For successful backup, virtual machines must be in the same subscription as the vault for backup.
Backups per day (via the Azure VM extension) | Four backups per day - one scheduled backup as per the Backup policy, and three on-demand backups. <br><br> However, to allow user retries in case of failed attempts, the hard limit for on-demand backups is set to nine attempts. Backups per day (via the MARS agent) | Three scheduled backups per day. Backups per day (via DPM/MABS) | Two scheduled backups per day.
Monthly/yearly backup| Not supported when backing up with Azure VM extension. On
Automatic clock adjustment | Not supported.<br/><br/> Azure Backup doesn't automatically adjust for daylight saving time changes when backing up a VM.<br/><br/> Modify the policy manually as needed. [Security features for hybrid backup](./backup-azure-security-feature.md) |Disabling security features isn't supported. Back up the VM whose machine time is changed | Not supported.<br/><br/> If the machine time is changed to a future date-time after enabling backup for that VM, successful backup isn't guaranteed, even if the time change is reverted.
-Multiple Backups Per Day | Supported (in preview), using *Enhanced policy* (in preview). <br><br> For hourly backup, the minimum RPO is 4 hours and the maximum is 24 hours. You can set the backup schedule to 4, 6, 8, 12, and 24 hours respectively. Learn about how to [back up an Azure VM using Enhanced policy](backup-azure-vms-enhanced-policy.md).
+Multiple Backups Per Day | Supported (in preview), using *Enhanced policy* (in preview). <br><br> For hourly backup, the minimum RPO is 4 hours and the maximum is 24 hours. You can set the backup schedule to 4, 6, 8, 12, and 24 hours respectively. Learn how to [back up an Azure VM using Enhanced policy](backup-azure-vms-enhanced-policy.md).
Back up a VM with deprecated plan when publisher has removed it from Azure Marketplace | Not supported. <br><br> Backup is possible. However, restore will fail. <br><br> If you've already configured backup for VM with deprecated virtual machine offer and encounter restore error, see [Troubleshoot backup errors with Azure VMs](backup-azure-vms-troubleshoot.md#usererrormarketplacevmnotsupportedvm-creation-failed-due-to-market-place-purchase-request-being-not-present). ## Operating system support (Windows)
The following table summarizes support for backup during VM management tasks, su
| <a name="backup-azure-cross-subscription-restore">Restore across subscription</a> | [Cross Subscription Restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs. [Restore across region](backup-azure-arm-restore-vms.md#cross-region-restore) | Supported.
-Restore across zone | Unsupported.
+<a name="backup-azure-cross-zonal-restore">Restore across zone</a> | [Cross Zonal Restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
Restore to an existing VM | Use replace disk option. Restore disk with storage account enabled for Azure Storage Service Encryption (SSE) | Not supported.<br/><br/> Restore to an account that doesn't have SSE enabled. Restore to mixed storage accounts |Not supported.<br/><br/> Based on the storage account type, all restored disks will be either premium or standard, and not mixed.
Back up VMs that are deployed from a custom image (third-party) |Supported.<br/>
Back up VMs that are migrated to Azure| Supported.<br/><br/> To back up the VM, the VM agent must be installed on the migrated machine. Back up Multi-VM consistency | Azure Backup doesn't provide data and application consistency across multiple VMs. Backup with [Diagnostic Settings](../azure-monitor/essentials/platform-logs-overview.md) | Unsupported. <br/><br/> If the restore of the Azure VM with diagnostic settings is triggered using the [Create New](backup-azure-arm-restore-vms.md#create-a-vm) option, then the restore fails.
-Restore of Zone-pinned VMs | Supported (for a VM that's backed-up after Jan 2019 and where [availability zones](https://azure.microsoft.com/global-infrastructure/availability-zones/) are available).<br/><br/>We currently support restoring to the same zone that's pinned in VMs. However, if the zone is unavailable due to an outage, the restore will fail.
+Restore of Zone-pinned VMs | Supported (where [availability zones](https://azure.microsoft.com/global-infrastructure/availability-zones/) are available).<br/><br/>Azure Backup now supports [restoring Azure VMs to any available zone](backup-azure-arm-restore-vms.md#restore-options) other than the zone that's pinned in VMs. This enables you to restore VMs when the primary zone is unavailable.
Gen2 VMs | Supported <br> Azure Backup supports backup and restore of [Gen2 VMs](https://azure.microsoft.com/updates/generation-2-virtual-machines-in-azure-public-preview/). When these VMs are restored from Recovery point, they're restored as [Gen2 VMs](https://azure.microsoft.com/updates/generation-2-virtual-machines-in-azure-public-preview/). Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Supported for managed VMs. [Spot VMs](../virtual-machines/spot-vms.md) | Unsupported. Azure Backup restores Spot VMs as regular Azure VMs.
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Support
Windows Storage Spaces configuration of standalone Azure VMs | Supported [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for flexible orchestration model to back up and restore Single Azure VM. Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Cross Region Restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
-<a name="tvm-backup">Trusted Launch VM</a> | Backup supported. <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where Trusted Launch VM is available. <br><br> - Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. <br><br> - Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm).
+<a name="tvm-backup">Trusted Launch VM</a> | Backup supported. <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where Trusted Launch VM is available. <br><br> - Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. <br><br> - Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm).
[Confidential VM](../confidential-computing/confidential-vm-overview.md) | The backup support is in Limited Preview. <br><br> Backup is supported only for those Confidential VMs with no confidential disk encryption and for Confidential VMs with confidential OS disk encryption using Platform Managed Key (PMK). <br><br> Backup is currently not supported for Confidential VMs with confidential OS disk encryption using Customer Managed Key (CMK). <br><br> **Feature details** <br><br> - Backup is supported in [all regions where Confidential VM is available](../confidential-computing/confidential-vm-overview.md#regions). <br><br> - Backup is supported using [Enhanced Policy](backup-azure-vms-enhanced-policy.md) only. You can configure backup through [Create VM blade](backup-azure-arm-vms-prepare.md), [VM Manage blade](backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Recovery Services vault](backup-azure-arm-vms-prepare.md). <br><br> - [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore) and File Recovery (Item level Restore) for Confidential VM are currently not supported. ## VM storage support
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md
Previously updated : 09/09/2022 Last updated : 11/17/2022
After you deploy this feature, there are two different sets of connection instru
* Set up concurrent VM sessions with Bastion. * [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM from your local computer. File download from the target VM to the local client is currently not supported for this command.
-Currently, this feature has the following limitation:
+**Limitations**
* Signing in using an SSH private key stored in Azure Key Vault isn't supported with this feature. Before signing in to your Linux VM using an SSH key pair, download your private key to a file on your local machine.
+* This feature is not supported on Cloud Shell.
## <a name="prereq"></a>Prerequisites
bastion Shareable Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md
By default, users in your org will have only read access to shared links. If a u
## Considerations
-* Shareable Links isn't currently supported on peered VNets that aren't in the same subscription.
-* Shareable Links is not supported for national clouds during preview.
+* Shareable Links isn't currently supported for peered VNets that aren't in the same subscription.
+* Shareable Links isn't currently supported for peered VNets that aren't in the same region.
+* Shareable Links isn't supported for national clouds during preview.
* The Standard SKU is required for this feature. ## Prerequisites
cognitive-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-synthesis.md
+
+ Title: Batch synthesis API (Preview) for text to speech - Speech service
+
+description: Learn how to use the batch synthesis API for asynchronous synthesis of long-form text to speech.
++++++ Last updated : 11/16/2022+++
+# Batch synthesis API (Preview) for text to speech
+
+The Batch synthesis API (Preview) can synthesize a large volume of text input (long and short) asynchronously. Publishers and audio content platforms can create long audio content in a batch. For example: audio books, news articles, and documents. The batch synthesis API can create synthesized audio longer than 10 minutes.
+
+> [!IMPORTANT]
+> The Batch synthesis API is currently in public preview. Once it's generally available, the Long Audio API will be deprecated. For more information, see [Migrate to batch synthesis API](migrate-to-batch-synthesis.md).
+
+The batch synthesis API is asynchronous and doesn't return synthesized audio in real time. You submit text files to be synthesized, poll for the status, and download the audio output when the status indicates success. The text inputs must be plain text or [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) text.
+
+This diagram provides a high-level overview of the workflow.
+
+![Diagram of the Batch Synthesis API workflow.](media/long-audio-api/long-audio-api-workflow.png)
+
+> [!TIP]
+> You can also use the [Speech SDK](speech-sdk.md) to create synthesized audio longer than 10 minutes by iterating over the text and synthesizing it in chunks.
+
+You can use the following REST API operations for batch synthesis:
+
+| Operation | Method | REST API call |
+| - | -- | |
+| Create batch synthesis | `POST` | texttospeech/3.1-preview1/batchsynthesis |
+| Get batch synthesis | `GET` | texttospeech/3.1-preview1/batchsynthesis/{id} |
+| List batch synthesis | `GET` | texttospeech/3.1-preview1/batchsynthesis |
+| Delete batch synthesis | `DELETE` | texttospeech/3.1-preview1/batchsynthesis/{id} |
+
+## Create batch synthesis
+
+To submit a batch synthesis request, construct the HTTP POST request body according to the following instructions:
+
+- Set the required `textType` property.
+- If the `textType` property is set to "PlainText", then you must also set the `voice` property in the `synthesisConfig`. In the example below, the `textType` is set to "SSML", so the `synthesisConfig` voice property isn't set.
+- Set the required `displayName` property. Choose a name that you can refer to later. The display name doesn't have to be unique.
+- Optionally you can set the `description`, `timeToLive`, and other properties. For more information, see [batch synthesis properties](#batch-synthesis-properties).
+
+> [!NOTE]
+> The maximum JSON payload size that will be accepted is 500 kilobytes. Each Speech resource can have up to 200 batch synthesis jobs that are running concurrently.
+
+Make an HTTP POST request using the URI as shown in the following example. Replace `YourSpeechKey` with your Speech resource key, replace `YourSpeechRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSpeechKey" -H "Content-Type: application/json" -d '{
+ "displayName": "batch synthesis sample",
+ "description": "my ssml test",
+ "textType": "SSML",
+ "inputs": [
+ {
+ "text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''>
+ <voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-JennyNeural'\''>
+ The rainbow has seven colors.
+ </voice>
+            </speak>"
+        }
+ ],
+ "properties": {
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false,
+ "concatenateResult": false,
+ "decompressOutputFiles": false
+    }
+}' "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "textType": "SSML",
+ "synthesisConfig": {},
+ "customVoices": {},
+ "properties": {
+ "timeToLive": "P31D",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "decompressOutputFiles": false,
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false
+ },
+ "lastActionDateTime": "2022-11-16T15:07:04.121Z",
+ "status": "NotStarted",
+ "id": "1e2e0fe8-e403-417c-a382-b55eb2ea943d",
+ "createdDateTime": "2022-11-16T15:07:04.121Z",
+ "displayName": "batch synthesis sample",
+ "description": "my ssml test"
+}
+```
+
+The `status` property should progress from `NotStarted` to `Running`, and finally to `Succeeded` or `Failed`. You can call the [GET batch synthesis API](#get-batch-synthesis) periodically until the returned status is `Succeeded` or `Failed`.
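
For example, a minimal polling loop might look like the following Python sketch. It assumes the `requests` package and the same placeholder values (`YourSpeechKey`, `YourSpeechRegion`, and a synthesis ID) as the curl examples; it's an illustration, not part of the Speech SDK.

```python
import time

import requests

# Assumed placeholders; replace with your Speech resource key, region, and synthesis ID.
SPEECH_KEY = "YourSpeechKey"
SPEECH_REGION = "YourSpeechRegion"
SYNTHESIS_ID = "YourSynthesisId"

url = (
    f"https://{SPEECH_REGION}.customvoice.api.speech.microsoft.com"
    f"/api/texttospeech/3.1-preview1/batchsynthesis/{SYNTHESIS_ID}"
)
headers = {"Ocp-Apim-Subscription-Key": SPEECH_KEY}

# Poll until the job reaches a terminal state.
while True:
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    status = response.json()["status"]
    print("Current status:", status)
    if status in ("Succeeded", "Failed"):
        break
    time.sleep(30)  # Wait between polls to stay well under the request rate limit.
```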
+
+## Get batch synthesis
+
+To get the status of the batch synthesis job, make an HTTP GET request using the URI as shown in the following example. Replace `YourSynthesisId` with your batch synthesis ID, replace `YourSpeechKey` with your Speech resource key, and replace `YourSpeechRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis/YourSynthesisId" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "textType": "SSML",
+ "synthesisConfig": {},
+ "customVoices": {},
+ "properties": {
+ "audioSize": 100000,
+ "durationInTicks": 31250000,
+ "succeededAudioCount": 1,
+ "failedAudioCount": 0,
+ "duration": "PT3.125S",
+ "billingDetails": {
+ "customNeural": 0,
+ "neural": 33
+ },
+ "timeToLive": "P31D",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "decompressOutputFiles": false,
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false
+ },
+ "outputs": {
+ "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/41b83de2-380d-45dc-91af-722b68cfdc8e/results.zip?SAS_Token"
+ },
+ "lastActionDateTime": "2022-11-05T14:00:32.523Z",
+ "status": "Succeeded",
+ "id": "41b83de2-380d-45dc-91af-722b68cfdc8e",
+ "createdDateTime": "2022-11-05T14:00:31.523Z",
+ "displayName": "batch synthesis sample",
+ "description": "my test"
+}
+```
+
+From `outputs.result`, you can download a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. For more information, see [batch synthesis results](#batch-synthesis-results).
+
+## List batch synthesis
+
+To list all batch synthesis jobs for the Speech resource, make an HTTP GET request using the URI as shown in the following example. Replace `YourSpeechKey` with your Speech resource key and replace `YourSpeechRegion` with your Speech resource region. Optionally, you can set the `skip` and `top` (page size) query parameters in the URL. The default value for `skip` is 0 and the default value for `top` is 100.
+
+```azurecli-interactive
+curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=2" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "values": [
+ {
+ "textType": "SSML",
+ "synthesisConfig": {},
+ "customVoices": {},
+ "properties": {
+ "audioSize": 100000,
+ "durationInTicks": 31250000,
+ "succeededAudioCount": 1,
+ "failedAudioCount": 0,
+ "duration": "PT3.125S",
+ "billingDetails": {
+ "customNeural": 0,
+ "neural": 33
+ },
+ "timeToLive": "P31D",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "decompressOutputFiles": false,
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false
+ },
+ "outputs": {
+ "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/41b83de2-380d-45dc-91af-722b68cfdc8e/results.zip?SAS_Token"
+ },
+ "lastActionDateTime": "2022-11-05T14:00:32.523Z",
+ "status": "Succeeded",
+ "id": "41b83de2-380d-45dc-91af-722b68cfdc8e",
+ "createdDateTime": "2022-11-05T14:00:31.523Z",
+ "displayName": "batch synthesis sample",
+ "description": "my test"
+    },
+ {
+ "textType": "PlainText",
+ "synthesisConfig": {
+ "voice": "en-US-JennyNeural",
+ "style": "chat",
+ "rate": "+30.00%",
+ "pitch": "x-high",
+ "volume": "80"
+ },
+ "customVoices": {},
+ "properties": {
+ "audioSize": 79384,
+ "durationInTicks": 24800000,
+ "succeededAudioCount": 1,
+ "failedAudioCount": 0,
+ "duration": "PT2.48S",
+ "billingDetails": {
+ "customNeural": 0,
+ "neural": 33
+ },
+ "timeToLive": "P31D",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "decompressOutputFiles": false,
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false
+ },
+ "outputs": {
+ "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/38e249bf-2607-4236-930b-82f6724048d8/results.zip?SAS_Token"
+ },
+ "lastActionDateTime": "2022-11-05T18:52:23.210Z",
+ "status": "Succeeded",
+ "id": "38e249bf-2607-4236-930b-82f6724048d8",
+ "createdDateTime": "2022-11-05T18:52:22.807Z",
+ "displayName": "batch synthesis sample",
+ "description": "my test"
+    }
+ ],
+ // The next page link of the list of batch synthesis.
+ "@nextLink": "https://{region}.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=2"
+}
+```
+
+From `outputs.result`, you can download a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. For more information, see [batch synthesis results](#batch-synthesis-results).
+
+The `values` property in the JSON response lists your synthesis requests. The list is paginated, with a maximum page size of 100. The `"@nextLink"` property is provided as needed to get the next page of the paginated list.
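
If you have more synthesis jobs than fit on one page, you can keep following `@nextLink` until it's no longer returned. Here's a minimal Python sketch of that loop, assuming the `requests` package and the placeholder key and region from the curl examples:

```python
import requests

SPEECH_KEY = "YourSpeechKey"
SPEECH_REGION = "YourSpeechRegion"

headers = {"Ocp-Apim-Subscription-Key": SPEECH_KEY}
url = (
    f"https://{SPEECH_REGION}.customvoice.api.speech.microsoft.com"
    "/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=100"
)

# Walk every page of results by following the "@nextLink" property.
while url:
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    page = response.json()
    for job in page.get("values", []):
        print(job["id"], job["status"], job.get("displayName", ""))
    url = page.get("@nextLink")
```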
+
+## Delete batch synthesis
+
+Delete the batch synthesis job history after you've retrieved the audio output results. The Speech service will keep each synthesis history for up to 31 days, or the duration of the request `timeToLive` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
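
For example, with a `lastActionDateTime` of `2022-11-05T14:00:32.523Z` and the default `timeToLive` of `P31D`, the job is removed around `2022-12-06T14:00:32.523Z`. The following Python sketch shows that arithmetic with a deliberately simplified ISO 8601 duration parser that only handles the day and hour forms used in this article:

```python
import re
from datetime import datetime, timedelta

def parse_simple_iso_duration(value: str) -> timedelta:
    """Parse simple ISO 8601 durations such as 'P31D' or 'PT12H' (illustration only)."""
    match = re.fullmatch(r"P(?:(\d+)D)?(?:T(?:(\d+)H)?)?", value)
    if not match:
        raise ValueError(f"Unsupported duration: {value}")
    return timedelta(days=int(match.group(1) or 0), hours=int(match.group(2) or 0))

# Hypothetical values taken from a "get batch synthesis" response.
last_action = datetime.fromisoformat("2022-11-05T14:00:32.523+00:00")
time_to_live = parse_simple_iso_duration("P31D")

print("Automatic deletion on or after:", last_action + time_to_live)
```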
+
+To delete a batch synthesis job, make an HTTP DELETE request using the URI as shown in the following example. Replace `YourSynthesisId` with your batch synthesis ID, replace `YourSpeechKey` with your Speech resource key, and replace `YourSpeechRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X DELETE "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis/YourSynthesisId" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+```
+
+The response headers will include `HTTP/1.1 204 No Content` if the delete request was successful.
+
+## Batch synthesis results
+
+After you [get a batch synthesis job](#get-batch-synthesis) with `status` of "Succeeded", you can download the audio output results. Use the URL from the `outputs.result` property of the [get batch synthesis](#get-batch-synthesis) response.
+
+To get the batch synthesis results file, make an HTTP GET request using the URI as shown in the following example. Replace `YourOutputsResultUrl` with the URL from the `outputs.result` property of the [get batch synthesis](#get-batch-synthesis) response. Replace `YourSpeechKey` with your Speech resource key.
+
+```azurecli-interactive
+curl -v -X GET "YourOutputsResultUrl" -H "Ocp-Apim-Subscription-Key: YourSpeechKey" > results.zip
+```
+
+The results are in a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. The numbered prefix of each filename (shown below as `[nnnn]`) is in the same order as the text inputs used when you created the batch synthesis.
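
If you prefer to download and unpack the results in code, the following Python sketch does the same as the curl command above. It assumes the `requests` package and the `outputs.result` URL from the get response; the local folder name is just an example.

```python
import io
import zipfile

import requests

SPEECH_KEY = "YourSpeechKey"
RESULTS_URL = "YourOutputsResultUrl"  # The outputs.result URL from the get response.

response = requests.get(RESULTS_URL, headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY})
response.raise_for_status()

# Open the ZIP in memory, list its contents, and extract everything to a local folder.
with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
    for name in sorted(archive.namelist()):
        print(name)
    archive.extractall("batch-synthesis-results")
```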
+
+> [!NOTE]
+> The `[nnnn].debug.json` file contains the synthesis result ID and other information that might help with troubleshooting. The properties that it contains might change, so you shouldn't take any dependencies on the JSON format.
+
+The summary file contains the synthesis results for each text input. Here's an example `summary.json` file:
+
+```json
+{
+ "jobID": "41b83de2-380d-45dc-91af-722b68cfdc8e",
+ "status": "Succeeded",
+ "results": [
+ {
+ "texts": [
+ "<speak version='1.0' xml:lang='en-US'>\n\t\t\t\t<voice xml:lang='en-US' xml:gender='Female' name='en-US-JennyNeural'>\n\t\t\t\t\tThe rainbow has seven colors.\n\t\t\t\t</voice>\n\t\t\t</speak>"
+ ],
+ "status": "Succeeded",
+ "billingDetails": {
+ "CustomNeural": "0",
+ "Neural": "33"
+ },
+ "audioFileName": "0001.wav",
+ "properties": {
+ "audioSize": "100000",
+ "duration": "PT3.1S",
+ "durationInTicks": "31250000"
+ }
+ }
+ ]
+}
+```
+
+If sentence boundary data was requested (`"sentenceBoundaryEnabled": true`), then a corresponding `[nnnn].sentence.json` file will be included in the results. Likewise, if word boundary data was requested (`"wordBoundaryEnabled": true`), then a corresponding `[nnnn].word.json` file will be included in the results.
+
+Here's an example word data file with both audio offset and duration in milliseconds:
+
+```json
+[
+ {
+ "Text": "the",
+ "AudioOffset": 38,
+ "Duration": 153
+ },
+ {
+ "Text": "rainbow",
+ "AudioOffset": 201,
+ "Duration": 326
+ },
+ {
+ "Text": "has",
+ "AudioOffset": 567,
+ "Duration": 96
+ },
+ {
+ "Text": "seven",
+ "AudioOffset": 673,
+ "Duration": 96
+ },
+ {
+ "Text": "colors",
+ "AudioOffset": 778,
+ "Duration": 451
+  }
+]
+```
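
Because offsets and durations are in milliseconds, the end time of each word is `AudioOffset + Duration`. Here's a small Python sketch that reads a word boundary file extracted from the results ZIP; the local path is hypothetical.

```python
import json

# Hypothetical local path to a word boundary file extracted from the results ZIP.
with open("batch-synthesis-results/0001.word.json", encoding="utf-8") as f:
    words = json.load(f)

# Offsets and durations are in milliseconds, so the end time is offset plus duration.
for word in words:
    end = word["AudioOffset"] + word["Duration"]
    print(f'{word["Text"]}: {word["AudioOffset"]} ms -> {end} ms')
```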
+
+## Batch synthesis properties
+
+Batch synthesis properties are described in the following table.
+
+| Property | Description |
+|-|-|
+|`createdDateTime`|The date and time when the batch synthesis job was created.<br/><br/>This property is read-only.|
+|`customProperties`|A custom set of optional batch synthesis configuration settings.<br/><br/>This property is stored for your convenience to associate the synthesis jobs that you created with the synthesis jobs that you get or list. This property is stored, but isn't used by the Speech service.<br/><br/>You can specify up to 10 custom properties as key and value pairs. The maximum allowed key length is 64 characters, and the maximum allowed value length is 256 characters.|
+|`customVoices`|The map of a custom voice name and its deployment ID.<br/><br/>For example: `"customVoices": {"your-custom-voice-name": "502ac834-6537-4bc3-9fd6-140114daa66d"}`<br/><br/>You can use the voice name in your `synthesisConfig.voice` (when the `textType` is set to `"PlainText"`) or within the SSML text of `inputs` (when the `textType` is set to `"SSML"`).<br/><br/>This property is required to use a custom voice. If you try to use a custom voice that isn't defined here, the service returns an error.|
+|`description`|The description of the batch synthesis.<br/><br/>This property is optional.|
+|`displayName`|The name of the batch synthesis. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
+|`id`|The batch synthesis job ID.<br/><br/>This property is read-only.|
+|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `textType` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `textType` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-JennyNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result will be written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but keep in mind that the maximum JSON payload size (including all text inputs and other properties) that will be accepted is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
+|`lastActionDateTime`|The most recent date and time when the `status` property value changed.<br/><br/>This property is read-only.|
+|`outputs.result`|The location of the batch synthesis result files with audio output and logs.<br/><br/>This property is read-only.|
+|`properties`|A defined set of optional batch synthesis configuration settings.|
+|`properties.audioSize`|The audio output size in bytes.<br/><br/>This property is read-only.|
+|`properties.billingDetails`|The number of words that were processed and billed by `customNeural` versus `neural` (prebuilt) voices.<br/><br/>This property is read-only.|
+|`properties.concatenateResult`|Determines whether to concatenate the result. This optional `bool` value ("true" or "false") is "false" by default.|
+|`properties.decompressOutputFiles`|Determines whether to unzip the synthesis result files in the destination container. This property can only be set when the `destinationContainerUrl` property is set or BYOS (Bring Your Own Storage) is configured for the Speech resource. This optional `bool` value ("true" or "false") is "false" by default.|
+|`properties.destinationContainerUrl`|The batch synthesis results can be stored in a writable Azure container. If you don't specify a container URI with [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) token, the Speech service stores the results in a container managed by Microsoft. SAS with stored access policies isn't supported. When the synthesis job is deleted, the result data is also deleted.<br/><br/>This optional property isn't included in the response when you get the synthesis job.|
+|`properties.duration`|The audio output duration. The value is an ISO 8601 encoded duration.<br/><br/>This property is read-only.|
+|`properties.durationInTicks`|The audio output duration in ticks.<br/><br/>This property is read-only.|
+|`properties.failedAudioCount`|The count of batch synthesis inputs that failed to produce audio output.<br/><br/>This property is read-only.|
+|`properties.outputFormat`|The audio output format.<br/><br/>For information about the accepted values, see [audio output formats](rest-text-to-speech.md#audio-outputs). The default output format is `riff-24khz-16bit-mono-pcm`.|
+|`properties.sentenceBoundaryEnabled`|Determines whether to generate sentence boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If sentence boundary data is requested, then a corresponding `[nnnn].sentence.json` file will be included in the results data ZIP file.|
+|`properties.succeededAudioCount`|The count of batch synthesis inputs that were successfully synthesized to audio output.<br/><br/>This property is read-only.|
+|`properties.timeToLive`|A duration after the synthesis job is created, when the synthesis results will be automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. This optional setting is `P31D` (31 days) by default. The maximum time to live is 31 days. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.<br/><br/>Otherwise, you can call the [delete](#delete-batch-synthesis) synthesis method to remove the job sooner.|
+|`properties.wordBoundaryEnabled`|Determines whether to generate word boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If word boundary data is requested, then a corresponding `[nnnn].word.json` file will be included in the results data ZIP file.|
+|`status`|The batch synthesis processing status.<br/><br/>The status should progress from "NotStarted" to "Running", and finally to either "Succeeded" or "Failed".<br/><br/>This property is read-only.|
+|`synthesisConfig`|The configuration settings to use for batch synthesis of plain text.<br/><br/>This property is only applicable when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.pitch`|The pitch of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.rate`|The rate of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.style`|For some voices, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant.<br/><br/>For information about the available styles per voice, see [voice styles and roles](language-support.md?tabs=stt-tts#voice-styles-and-roles).<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.voice`|The voice that speaks the audio output.<br/><br/>For information about the available prebuilt neural voices, see [language and voice support](language-support.md?tabs=stt-tts). To use a custom voice, you must specify a valid custom voice and deployment ID mapping in the `customVoices` property.<br/><br/>This property is required when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.volume`|The volume of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
+|`textType`|Indicates whether the `inputs` text property should be plain text or SSML. The possible case-insensitive values are "PlainText" and "SSML". When the `textType` is set to `"PlainText"`, you must also set the `synthesisConfig` voice property.<br/><br/>This property is required.|
+
+## HTTP status codes
+
+This section details the HTTP response codes and messages from the batch synthesis API.
+
+### HTTP 200 OK
+
+HTTP 200 OK indicates that the request was successful.
+
+### HTTP 201 Created
+
+HTTP 201 Created indicates that the create batch synthesis request (via HTTP POST) was successful.
+
+### HTTP 204 error
+
+An HTTP 204 error indicates that the request was successful, but the resource doesn't exist. For example:
+- You tried to get or delete a synthesis job that doesn't exist.
+- You successfully deleted a synthesis job.
+
+### HTTP 400 error
+
+Here are examples that can result in the 400 error:
+- The `outputFormat` is unsupported or invalid. Provide a valid format value, or leave `outputFormat` empty to use the default setting.
+- The number of requested text inputs exceeded the limit of 1,000.
+- The `top` query parameter exceeded the limit of 100.
+- You tried to use an invalid deployment ID or a custom voice that isn't successfully deployed. Make sure the Speech resource has access to the custom voice, and the custom voice is successfully deployed. You must also ensure that the mapping of `{"your-custom-voice-name": "your-deployment-ID"}` is correct in your batch synthesis request.
+- You tried to delete a batch synthesis job that hasn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
+- You tried to use a `F0` Speech resource, but the region only supports the `S0` (standard) Speech resource pricing tier.
+- You tried to create a new batch synthesis job that would exceed the limit of 200 active jobs. Each Speech resource can have up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed".
+
+### HTTP 404 error
+
+The specified entity can't be found. Make sure the synthesis ID is correct.
+
+### HTTP 429 error
+
+There are too many recent requests. Each client application can submit up to 50 requests per 5 seconds for each Speech resource. Reduce the number of requests per second.
+
+You can check the rate limit and quota remaining via the HTTP headers as shown in the following example:
+
+```http
+X-RateLimit-Limit: 50
+X-RateLimit-Remaining: 49
+X-RateLimit-Reset: 2022-11-11T01:49:43Z
+```
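
The following Python sketch shows one way to honor those headers by waiting until the reset time before retrying. The header names are the ones shown above; the `requests` package and the placeholder key and region are assumptions for illustration.

```python
import time
from datetime import datetime, timezone

import requests

SPEECH_KEY = "YourSpeechKey"
SPEECH_REGION = "YourSpeechRegion"

url = (
    f"https://{SPEECH_REGION}.customvoice.api.speech.microsoft.com"
    "/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=10"
)
response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY})

# If the request was throttled or the quota is exhausted, wait for the reset time.
remaining = int(response.headers.get("X-RateLimit-Remaining", "1"))
if response.status_code == 429 or remaining == 0:
    reset_at = datetime.fromisoformat(
        response.headers["X-RateLimit-Reset"].replace("Z", "+00:00")
    )
    wait_seconds = max(0.0, (reset_at - datetime.now(timezone.utc)).total_seconds())
    time.sleep(wait_seconds)
```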
+
+### HTTP 500 error
+
+HTTP 500 Internal Server Error indicates that the request failed. The response body contains the error message.
+
+### HTTP error example
+
+Here's an example request that results in an HTTP 400 error, because the `top` query parameter is set to a value greater than 100.
+
+```console
+curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=200" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+```
+
+In this case, the response headers will include `HTTP/1.1 400 Bad Request`.
+
+The response body will resemble the following JSON example:
+
+```json
+{
+ "code": "InvalidRequest",
+ "message": "The top parameter should not be greater than 100.",
+ "innerError": {
+ "code": "InvalidParameter",
+ "message": "The top parameter should not be greater than 100."
+ }
+}
+```
+
+## Next steps
+
+- [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)
+- [Text-to-speech quickstart](get-started-text-to-speech.md)
+- [Migrate to batch synthesis](migrate-to-batch-synthesis.md)
cognitive-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-create.md
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions" ``` - You should receive a response body in the following format: ```json
Here are some property options that you can use to configure a transcription whe
|`model`|You can set the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Using custom models](#using-custom-models).| |`profanityFilterMode`|Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags. The default value is `Masked`. | |`punctuationMode`|Specifies how to handle punctuation in recognition results. Accepted values are `None` to disable punctuation, `Dictated` to imply explicit (spoken) punctuation, `Automatic` to let the decoder deal with punctuation, or `DictatedAndAutomatic` to use dictated and automatic punctuation. The default value is `DictatedAndAutomatic`.|
-|`timeToLive`|A duration after the transcription job is created, when the transcription results will be automatically deleted. For example, specify `PT12H` for 12 hours. As an alternative, you can call [DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription) regularly after you retrieve the transcription results.|
+|`timeToLive`|A duration after the transcription job is created, when the transcription results will be automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. As an alternative, you can call [DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription) regularly after you retrieve the transcription results.|
|`wordLevelTimestampsEnabled`|Specifies if word level timestamps should be included in the output. The default value is `false`.|
cognitive-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-get.md
Depending in part on the request parameters set when you created the transcripti
|`confidence`|The confidence value for the recognition.| |`display`|The display form of the recognized text. Added punctuation and capitalization are included.| |`displayPhraseElements`|A list of results with display text for each word of the phrase. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with speech-to-text REST API version 3.1.|
-|`duration`|The audio duration, ISO 8601 encoded duration.|
+|`duration`|The audio duration. The value is an ISO 8601 encoded duration.|
|`durationInTicks`|The audio duration in ticks (1 tick is 100 nanoseconds).| |`itn`|The inverse text normalized (ITN) form of the recognized text. Abbreviations such as "Doctor Smith" to "Dr Smith", phone numbers, and other transformations are applied.| |`lexical`|The actual words recognized.| |`locale`|The locale identified from the input the audio. The `languageIdentification` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with speech-to-text REST API version 3.1.| |`maskedITN`|The ITN form with profanity masking applied.| |`nBest`|A list of possible transcriptions for the current phrase with confidences.|
-|`offset`|The offset in audio of this phrase, ISO 8601 encoded duration.|
+|`offset`|The offset in audio of this phrase. The value is an ISO 8601 encoded duration.|
|`offsetInTicks`|The offset in audio of this phrase in ticks (1 tick is 100 nanoseconds).| |`recognitionStatus`|The recognition state. For example: "Success" or "Failure".| |`recognizedPhrases`|The list of results for each phrase.| |`source`|The URL that was provided as the input audio source. The source corresponds to the `contentUrls` or `contentContainerUrl` request property. The `source` property is the only way to confirm the audio input for a transcription.| |`speaker`|The identified speaker. The `diarization` and `diarizationEnabled` request properties must be set, otherwise this property is not present.|
-|`timestamp`|The creation time of the transcription, ISO 8601 encoded timestamp, combined date and time.|
+|`timestamp`|The creation date and time of the transcription. The value is an ISO 8601 encoded timestamp.|
|`words`|A list of results with lexical text for each word of the phrase. The `wordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.|
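
Since one tick is 100 nanoseconds, you can convert the tick-based values to seconds by dividing by 10,000,000. A quick Python sketch, using a hypothetical `durationInTicks` value:

```python
# One tick is 100 nanoseconds, so 10,000,000 ticks equal one second.
TICKS_PER_SECOND = 10_000_000

duration_in_ticks = 31250000  # Hypothetical durationInTicks value from a transcription result.
print(duration_in_ticks / TICKS_PER_SECOND, "seconds")  # 3.125 seconds
```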
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
If you want to allow a user to grant access to other users, you need to assign t
## Next steps
-* [Long Audio API](./long-audio-api.md)
-
+- [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)
+- [Batch synthesis](batch-synthesis.md)
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Navigate to the project where you copied the model to [deploy the model copy](ho
- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
- [How to record voice samples](record-custom-voice-samples.md)
- [Text-to-Speech API reference](rest-text-to-speech.md)
-- [Long Audio API](long-audio-api.md)
cognitive-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md
The HTTP status code for each response indicates success or common errors.
- [How to record voice samples](record-custom-voice-samples.md)
- [Text-to-Speech API reference](rest-text-to-speech.md)
-- [Long Audio API](long-audio-api.md)
+- [Batch synthesis](batch-synthesis.md)
cognitive-services How To Use Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-conversation-transcription.md
Previously updated : 01/24/2022 Last updated : 11/12/2022
-zone_pivot_groups: acs-js-csharp
+zone_pivot_groups: acs-js-csharp-python
ms.devlang: csharp, javascript
You can transcribe meetings and other conversations with the ability to add, rem
[!INCLUDE [C# Basics include](includes/how-to/conversation-transcription/real-time-csharp.md)]
::: zone-end
+
## Next steps
> [!div class="nextstepaction"]
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/long-audio-api.md
- Title: Synthesize long-form text to speech - Speech service-
-description: Learn how the Long Audio API is designed for asynchronous synthesis of long-form text to speech.
------ Previously updated : 11/11/2022---
-# Synthesize long-form text to speech
-
-The Long Audio API provides asynchronous synthesis of long-form text to speech. Publishers and audio content platforms can create long audio content in a batch. For example: audio books, news articles, and documents. The Long Audio API can create synthesized audio longer than 10 minutes.
-
-> [!TIP]
-> You can also use the [Speech SDK](speech-sdk.md) to create synthesized audio longer than 10 minutes by iterating over the text and synthesizing it in chunks. For a C# example, see [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs).
-
-## Workflow
-
-The Long Audio API is asynchronous and doesn't return synthesized audio in real time. You submit text files to be synthesized, poll for the status, and download the audio output when the status indicates success.
-
-This diagram provides a high-level overview of the workflow.
-
-![Long Audio API workflow diagram](media/long-audio-api/long-audio-api-workflow.png)
-
-## Prepare content for synthesis
-
-When preparing your text file, make sure it:
-
-* Is a single plain text (.txt) file or SSML text (.txt) file. Don't use compressed files such as ZIP.
-* Is encoded as [UTF-8 with Byte Order Mark (BOM)](https://www.w3.org/International/questions/qa-utf8-bom.en#bom).
-* Contains more than 400 characters for plain text or 400 [billable characters](./text-to-speech.md#pricing-note) for SSML text, and less than 10,000 paragraphs.
- * For plain text, each paragraph is separated by pressing **Enter/Return**. See [plain text input example](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/en-US.txt).
- * For SSML text, each SSML piece is considered a paragraph. Separate SSML pieces by different paragraphs. See [SSML text input example](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/SSMLTextInputSample.txt).
-
-> [!NOTE]
-> When using SSML text, be sure to use the [supported SSML elements](speech-synthesis-markup.md?tabs=csharp#supported-ssml-elements) except the `audio`, `mstts:backgroundaudio`, and `lexicon` elements, which are not supported by the Long Audio API. The `audio` and `lexicon` elements will be ignored without any error message, but the `mstts:backgroundaudio` element will cause the synthesis task to fail. If your synthesis task fails, download the audio result (.zip file) and check the error report with the suffix "err.txt" within the zip file for details.
-
-## Sample code
-
-The rest of this page focuses on Python, but sample code for the Long Audio API is available on GitHub for the following programming languages:
-
-* [Sample code: Python](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice-API-Samples/Python)
-* [Sample code: C#](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/LongAudioAPI/CSharp/LongAudioAPISample)
-* [Sample code: Java](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/)
-
-## Python example
-
-This section contains Python examples that show the basic usage of the Long Audio API. Create a new Python project using your favorite IDE or editor. Then copy this code snippet into a file named `long_audio_synthesis_client.py`.
-
-```python
-import json
-import ntpath
-import requests
-```
-
-These libraries are used to construct the HTTP request, and call the text-to-speech long audio synthesis REST API.
-
-### Get a list of supported voices
-
-The Long Audio API supports a subset of [Public Neural Voices](language-support.md?tabs=stt-tts) and [Custom Neural Voices](language-support.md?tabs=stt-tts).
-
-To get a list of supported voices, send a GET request to `https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/voices`.
-
-This code gets a full list of voices you can use at a specific region/endpoint.
-
-```python
-def get_voices():
- region = '<region>'
- key = '<your_key>'
- url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/voices'.format(region)
- header = {
- 'Ocp-Apim-Subscription-Key': key
- }
-
- response = requests.get(url, headers=header)
- print(response.text)
-
-get_voices()
-```
-
-Replace the following values:
-
-* Replace `<your_key>` with your Speech resource key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
-* Replace `<region>` with the region where your Speech resource was created (for example: `eastus` or `westus`). This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
-
-You'll see output that looks like this:
-
-```json
-{
- "values": [
- {
- "locale": "en-US",
- "voiceName": "en-US-AriaNeural",
- "description": "",
- "gender": "Female",
- "createdDateTime": "2020-05-21T05:57:39.123Z",
- "properties": {
- "publicAvailable": true
- }
- },
- {
- "id": "8fafd8cd-5f95-4a27-a0ce-59260f873141"
- "locale": "en-US",
- "voiceName": "my custom neural voice",
- "description": "",
- "gender": "Male",
- "createdDateTime": "2020-05-21T05:25:40.243Z",
- "properties": {
- "publicAvailable": false
- }
- }
- ]
-}
-```
-
-If **properties.publicAvailable** is **true**, the voice is a public neural voice. Otherwise, it's a custom neural voice.
-
-### Convert text to speech
-
-Prepare an input text file, in either plain text or SSML text, then add the following code to `long_audio_synthesis_client.py`:
-
-> [!NOTE]
-> `concatenateResult` is an optional parameter. If this parameter isn't set, the audio outputs are generated per paragraph. You can also concatenate the audio outputs into a single output by setting the parameter.
-> `outputFormat` is also optional. By default, the audio output is set to `riff-24khz-16bit-mono-pcm`. For more information about supported audio output formats, see [Audio output formats](#audio-output-formats).
-
-```python
-def submit_synthesis():
- region = '<region>'
- key = '<your_key>'
- input_file_path = '<input_file_path>'
- locale = '<locale>'
- url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis'.format(region)
- header = {
- 'Ocp-Apim-Subscription-Key': key
- }
-
- voice_identities = [
- {
- 'voicename': '<voice_name>'
- }
- ]
-
- payload = {
- 'displayname': 'long audio synthesis sample',
- 'description': 'sample description',
- 'locale': locale,
- 'voices': json.dumps(voice_identities),
- 'outputformat': 'riff-24khz-16bit-mono-pcm',
- 'concatenateresult': True,
- }
-
- filename = ntpath.basename(input_file_path)
- files = {
- 'script': (filename, open(input_file_path, 'rb'), 'text/plain')
- }
-
- response = requests.post(url, payload, headers=header, files=files)
- print('response.status_code: %d' % response.status_code)
- print(response.headers['Location'])
-
-submit_synthesis()
-```
-
-Replace the following values:
-
-* Replace `<your_key>` with your Speech resource key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
-* Replace `<region>` with the region where your Speech resource was created (for example: `eastus` or `westus`). This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
-* Replace `<input_file_path>` with the path to the text file you've prepared for text-to-speech.
-* Replace `<locale>` with the desired output locale. For more information, see [language support](language-support.md?tabs=stt-tts).
-
-Use one of the voices returned by your previous call to the `/voices` endpoint.
-
-* If you're using a public neural voice, replace `<voice_name>` with the desired output voice.
-* To use a custom neural voice, replace the `voice_identities` variable with the following, and replace `<voice_id>` with the `id` of your custom neural voice.
-```Python
-voice_identities = [
- {
- 'id': '<voice_id>'
- }
-]
-```
-
-You'll see output that looks like this:
-
-```console
-response.status_code: 202
-https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/<guid>
-```
-
-> [!NOTE]
-> If you have more than one input file, you will need to submit multiple requests, and there are limitations to consider.
-> * The client can submit up to **5** requests per second for each Azure subscription account. If it exceeds the limitation, a **429 error code (too many requests)** is returned. Reduce the rate of submissions to avoid this limit.
-> * The server can queue up to **120** requests for each Azure subscription account. If the queue exceeds this limitation, the server will return a **429 error code (too many requests)**. Wait for requests to complete before submitting additional requests.
-
-You can use the URL in the output to get the request status.
-
-### Get details about a submitted request
-
-To get the status of a submitted synthesis request, send a GET request to the URL returned in the previous step.
-
-```Python
-
-def get_synthesis():
- url = '<url>'
- key = '<your_key>'
- header = {
- 'Ocp-Apim-Subscription-Key': key
- }
- response = requests.get(url, headers=header)
- print(response.text)
-
-get_synthesis()
-```
-
-You'll see output that looks like this:
-
-```json
-response.status_code: 200
-{
- "models": [
- {
- "voiceName": "en-US-AriaNeural"
- }
- ],
- "properties": {
- "outputFormat": "riff-24khz-16bit-mono-pcm",
- "concatenateResult": false,
- "totalDuration": "PT5M57.252S",
- "billableCharacterCount": 3048
- },
- "id": "eb3d7a81-ee3e-4e9a-b725-713383e71677",
- "lastActionDateTime": "2021-01-14T11:12:27.240Z",
- "status": "Succeeded",
- "createdDateTime": "2021-01-14T11:11:02.557Z",
- "locale": "en-US",
- "displayName": "long audio synthesis sample",
- "description": "sample description"
-}
-```
-
-The `status` property changes from `NotStarted` status, to `Running`, and finally to `Succeeded` or `Failed`. You can poll this API in a loop until the status becomes `Succeeded` or `Failed`.
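For example, a simple polling loop built on the request above might look like the following sketch (the 30-second interval is an arbitrary choice):

```python
import time
import requests

def wait_for_synthesis(url, key):
    header = {'Ocp-Apim-Subscription-Key': key}
    while True:
        response = requests.get(url, headers=header)
        status = response.json()['status']
        if status in ('Succeeded', 'Failed'):
            return status
        # Still NotStarted or Running; wait before polling again.
        time.sleep(30)
```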
-
-### Download audio result
-
-Once a synthesis request succeeds, you can download the audio result by calling the GET `/files` API.
-
-```python
-def get_files():
- id = '<request_id>'
- region = '<region>'
- key = '<your_key>'
- url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/{}/files'.format(region, id)
- header = {
- 'Ocp-Apim-Subscription-Key': key
- }
-
- response = requests.get(url, headers=header)
- print('response.status_code: %d' % response.status_code)
- print(response.text)
-
-get_files()
-```
-
-Replace `<request_id>` with the ID of the request whose result you want to download. You can find it in the response from the previous step.
-
-You'll see output that looks like this:
-
-```json
-response.status_code: 200
-{
- "values": [
- {
- "name": "2779f2aa-4e21-4d13-8afb-6b3104d6661a.txt",
- "kind": "LongAudioSynthesisScript",
- "properties": {
- "size": 4200
- },
- "createdDateTime": "2021-01-14T11:11:02.410Z",
- "links": {
- "contentUrl": "https://customvoice-usw.blob.core.windows.net/artifacts/input.txt?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=e05d8d56-9675-448b-820c-4318ae64c8d5"
- }
- },
- {
- "name": "voicesynthesis_waves.zip",
- "kind": "LongAudioSynthesisResult",
- "properties": {
- "size": 9290000
- },
- "createdDateTime": "2021-01-14T11:12:27.226Z",
- "links": {
- "contentUrl": "https://customvoice-usw.blob.core.windows.net/artifacts/voicesynthesis_waves.zip?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=e05d8d56-9675-448b-820c-4318ae64c8d5"
- }
- }
- ]
-}
-```
-This example output contains information for two files. The one with `"kind": "LongAudioSynthesisScript"` is the input script submitted. The other one with `"kind": "LongAudioSynthesisResult"` is the result of this request.
-
-The result is a ZIP file that contains the generated audio output files, along with a copy of the input text.
-
-Both files can be downloaded from the URL in their `links.contentUrl` property.
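For example, the following sketch takes the JSON shown above and saves the synthesis result to disk; the `kind` and `links.contentUrl` property names come from that response.

```python
import requests

def download_result(files_response, output_path='result.zip'):
    for item in files_response['values']:
        # Only the result entry contains the generated audio.
        if item['kind'] == 'LongAudioSynthesisResult':
            content = requests.get(item['links']['contentUrl']).content
            with open(output_path, 'wb') as output_file:
                output_file.write(content)
            print('Saved synthesis result to ' + output_path)
```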
-
-### Get all synthesis requests
-
-The following code lists all submitted requests:
-
-```python
-def get_synthesis():
- region = '<region>'
- key = '<your_key>'
- url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/'.format(region)
- header = {
- 'Ocp-Apim-Subscription-Key': key
- }
-
- response = requests.get(url, headers=header)
- print('response.status_code: %d' % response.status_code)
- print(response.text)
-
-get_synthesis()
-```
-
-You'll see output that looks like this:
-
-```json
-response.status_code: 200
-{
- "values": [
- {
- "models": [
- {
- "id": "8fafd8cd-5f95-4a27-a0ce-59260f873141",
- "voiceName": "my custom neural voice"
- }
- ],
- "properties": {
- "outputFormat": "riff-24khz-16bit-mono-pcm",
- "concatenateResult": false,
- "totalDuration": "PT1S",
- "billableCharacterCount": 5
- },
- "id": "f9f0bb74-dfa5-423d-95e7-58a5e1479315",
- "lastActionDateTime": "2021-01-05T07:25:42.433Z",
- "status": "Succeeded",
- "createdDateTime": "2021-01-05T07:25:13.600Z",
- "locale": "en-US",
- "displayName": "Long Audio Synthesis",
- "description": "Long audio synthesis sample"
- },
- {
- "models": [
- {
- "voiceName": "en-US-AriaNeural"
- }
- ],
- "properties": {
- "outputFormat": "riff-24khz-16bit-mono-pcm",
- "concatenateResult": false,
- "totalDuration": "PT5M57.252S",
- "billableCharacterCount": 3048
- },
- "id": "eb3d7a81-ee3e-4e9a-b725-713383e71677",
- "lastActionDateTime": "2021-01-14T11:12:27.240Z",
- "status": "Succeeded",
- "createdDateTime": "2021-01-14T11:11:02.557Z",
- "locale": "en-US",
- "displayName": "long audio synthesis sample",
- "description": "sample description"
- }
- ]
-}
-```
-
-The `values` property lists your synthesis requests. The list is paginated, with a maximum page size of 100. If there are more than 100 requests, a `"@nextLink"` property is provided to get the next page of the paginated list.
-
-```console
- "@nextLink": "https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/?top=100&skip=100"
-```
-
-You can also customize the page size and skip number by providing the `skip` and `top` URL parameters.
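For example, the following sketch walks the paginated list by following `@nextLink` until no more pages remain:

```python
import requests

def list_all_synthesis_requests(region, key):
    url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/?top=100'.format(region)
    header = {'Ocp-Apim-Subscription-Key': key}
    all_requests = []
    while url:
        page = requests.get(url, headers=header).json()
        all_requests.extend(page['values'])
        # '@nextLink' is present only when more pages remain.
        url = page.get('@nextLink')
    return all_requests
```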
-
-### Remove previous requests
-
-The service will keep up to **20,000** requests for each Azure subscription account. If your request amount exceeds this limitation, remove previous requests before making new ones. If you don't remove existing requests, you'll receive an error notification.
-
-The following code shows how to remove a specific synthesis request.
-
-```python
-def delete_synthesis():
- id = '<request_id>'
- region = '<region>'
- key = '<your_key>'
- url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/{}/'.format(region, id)
- header = {
- 'Ocp-Apim-Subscription-Key': key
- }
-
- response = requests.delete(url, headers=header)
- print('response.status_code: %d' % response.status_code)
-```
-
-If the request is successfully removed, the response status code will be HTTP 204 (No Content).
-
-```console
-response.status_code: 204
-```
-
-> [!NOTE]
-> Requests with a status of `NotStarted` or `Running` cannot be removed or deleted.
-
-The completed `long_audio_synthesis_client.py` is available on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Python/voiceclient.py).
-
-## HTTP status codes
-
-The following table details the HTTP response codes and messages from the REST API.
-
-| API | HTTP status code | Description | Solution |
-|--||-|-|
-| Create | 400 | The voice synthesis isn't enabled in this region. | Change the speech resource key with a supported region. |
-| | 400 | Only the **Standard** speech resource for this region is valid. | Change the speech resource key to the "Standard" pricing tier. |
-| | 400 | Exceed the 20,000 request limit for the Azure account. Remove some requests before submitting new ones. | The server will keep up to 20,000 requests for each Azure account. Delete some requests before submitting new ones. |
-| | 400 | This model can't be used in the voice synthesis: {modelID}. | Make sure the {modelID}'s state is correct. |
-| | 400 | The region for the request doesn't match the region for the model: {modelID}. | Make sure the {modelID}'s region matches the request's region. |
-| | 400 | The voice synthesis only supports the text file in the UTF-8 encoding with the byte-order marker. | Make sure the input files are in UTF-8 encoding with the byte-order marker. |
-| | 400 | Only valid SSML inputs are allowed in the voice synthesis request. | Make sure the input SSML expressions are correct. |
-| | 400 | The voice name {voiceName} isn't found in the input file. | The input SSML voice name isn't aligned with the model ID. |
-| | 400 | The number of paragraphs in the input file should be less than 10,000. | Make sure the number of paragraphs in the file is less than 10,000. |
-| | 400 | The input file should be more than 400 characters. | Make sure your input file exceeds 400 characters. |
-| | 404 | The model declared in the voice synthesis definition can't be found: {modelID}. | Make sure the {modelID} is correct. |
-| | 429 | Exceed the active voice synthesis limit. Wait until some requests finish. | The server is allowed to run and queue up to 120 requests for each Azure account. Wait and avoid submitting new requests until some requests are completed. |
-| All | 429 | There are too many requests. | The client is allowed to submit up to five requests to the server per second for each Azure account. Reduce the request amount per second. |
-| Delete | 400 | The voice synthesis task is still in use. | You can only delete requests that are **Completed** or **Failed**. |
-| GetByID | 404 | The specified entity can't be found. | Make sure the synthesis ID is correct. |
-
-## Regions and endpoints
-
-The Long Audio API is available in multiple regions with unique endpoints.
-
-| Region | Endpoint |
-|--|-|
-| Australia East | `https://australiaeast.customvoice.api.speech.microsoft.com` |
-| East US | `https://eastus.customvoice.api.speech.microsoft.com` |
-| India Central | `https://centralindia.customvoice.api.speech.microsoft.com` |
-| South Central US | `https://southcentralus.customvoice.api.speech.microsoft.com` |
-| Southeast Asia | `https://southeastasia.customvoice.api.speech.microsoft.com` |
-| UK South | `https://uksouth.customvoice.api.speech.microsoft.com` |
-| West Europe | `https://westeurope.customvoice.api.speech.microsoft.com` |
-
-## Audio output formats
-
-We support flexible audio output formats. You can generate audio outputs per paragraph or concatenate the audio outputs into a single output by setting the `concatenateResult` parameter. The following audio output formats are supported by the Long Audio API:
-
-> [!NOTE]
-> The default audio format is riff-24khz-16bit-mono-pcm.
->
-> The sample rate for long audio voices is 24kHz, not 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
-
-* riff-8khz-16bit-mono-pcm
-* riff-16khz-16bit-mono-pcm
-* riff-24khz-16bit-mono-pcm
-* riff-48khz-16bit-mono-pcm
-* audio-16khz-32kbitrate-mono-mp3
-* audio-16khz-64kbitrate-mono-mp3
-* audio-16khz-128kbitrate-mono-mp3
-* audio-24khz-48kbitrate-mono-mp3
-* audio-24khz-96kbitrate-mono-mp3
-* audio-24khz-160kbitrate-mono-mp3
cognitive-services Migrate To Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-to-batch-synthesis.md
+
+ Title: Migrate to Batch synthesis API - Speech service
+
+description: This document helps developers migrate code from Long Audio REST API to Batch synthesis REST API.
++++++ Last updated : 09/01/2022+
+ms.devlang: csharp
+++
+# Migrate code from Long Audio API to Batch synthesis API
+
+The [Batch synthesis API](batch-synthesis.md) (Preview) provides asynchronous synthesis of long-form text to speech. Benefits of upgrading from Long Audio API to Batch synthesis API, and details about how to do so, are described in the sections below.
+
+> [!IMPORTANT]
+> [Batch synthesis API](batch-synthesis.md) is currently in public preview. Once it's generally available, the Long Audio API will be deprecated.
+
+## Base path
+
+You must update the base path in your code from `/texttospeech/v3.0/longaudiosynthesis` to `/texttospeech/3.1-preview1/batchsynthesis`. For example, to list synthesis jobs for your Speech resource in the `eastus` region, use `https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis` instead of `https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis`.
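For example, in Python the migration is just a change to the path segment of the URL you construct; everything else about building the request URL stays the same. This is a sketch of the two base paths, not a complete client:

```python
region = 'eastus'

# Long Audio API (old base path)
long_audio_url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis'.format(region)

# Batch synthesis API (new base path)
batch_synthesis_url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis'.format(region)
```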
+
+## Regions and endpoints
+
+Batch synthesis API is available in all [Speech regions](regions.md).
+
+The Long Audio API is limited to the following regions:
+
+| Region | Endpoint |
+|--|-|
+| Australia East | `https://australiaeast.customvoice.api.speech.microsoft.com` |
+| East US | `https://eastus.customvoice.api.speech.microsoft.com` |
+| India Central | `https://centralindia.customvoice.api.speech.microsoft.com` |
+| South Central US | `https://southcentralus.customvoice.api.speech.microsoft.com` |
+| Southeast Asia | `https://southeastasia.customvoice.api.speech.microsoft.com` |
+| UK South | `https://uksouth.customvoice.api.speech.microsoft.com` |
+| West Europe | `https://westeurope.customvoice.api.speech.microsoft.com` |
+
+## Voices list
+
+Batch synthesis API supports all [text-to-speech voices and styles](language-support.md?tabs=stt-tts).
+
+The Long Audio API is limited to the set of voices returned by a GET request to `https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/voices`.
+
+## Text inputs
+
+Batch synthesis text inputs are sent in a JSON payload of up to 500 kilobytes.
+
+Long Audio API text inputs are uploaded from a file that meets the following requirements:
+* One plain text (.txt) or SSML text (.txt) file encoded as [UTF-8 with Byte Order Mark (BOM)](https://www.w3.org/International/questions/qa-utf8-bom.en#bom). Don't use compressed files such as ZIP. If you have more than one input file, you must submit multiple requests.
+* Contains more than 400 characters for plain text or 400 [billable characters](./text-to-speech.md#pricing-note) for SSML text, and less than 10,000 paragraphs. For plain text, each paragraph is separated by a new line. For SSML text, each SSML piece is considered a paragraph. Separate SSML pieces by different paragraphs.
+
+With Batch synthesis API, you can use any of the [supported SSML elements](speech-synthesis-markup.md?tabs=csharp#supported-ssml-elements), including the `audio`, `mstts:backgroundaudio`, and `lexicon` elements. The `audio`, `mstts:backgroundaudio`, and `lexicon` elements aren't supported by Long Audio API.
+
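To illustrate the difference, a batch synthesis job sends its text in the JSON request body rather than as an uploaded file. The sketch below is only illustrative: the field names (`textType`, `inputs`, `synthesisConfig`, `properties`) and the POST to the batch synthesis base path are assumptions based on the preview API; see the [Batch synthesis](batch-synthesis.md) documentation for the authoritative request schema.

```python
import requests

# Placeholders and assumptions: the region, key, voice, and request schema below are
# illustrative; consult the Batch synthesis documentation for the exact field names.
region = '<region>'
key = '<your_key>'
url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis'.format(region)

body = {
    'displayName': 'batch synthesis sample',
    'textType': 'PlainText',
    'synthesisConfig': {'voice': 'en-US-JennyNeural'},
    'inputs': [
        {'text': 'The rainbow has seven colors.'}
    ],
    'properties': {
        'outputFormat': 'riff-24khz-16bit-mono-pcm',
        'concatenateResult': False
    }
}

response = requests.post(url, json=body, headers={'Ocp-Apim-Subscription-Key': key})
print(response.status_code)
```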
+## Audio output formats
+
+Batch synthesis API supports all [text-to-speech audio output formats](rest-text-to-speech.md#audio-outputs).
+
+The Long Audio API is limited to the following set of audio output formats. The sample rate for long audio voices is 24kHz, not 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
+
+* riff-8khz-16bit-mono-pcm
+* riff-16khz-16bit-mono-pcm
+* riff-24khz-16bit-mono-pcm
+* riff-48khz-16bit-mono-pcm
+* audio-16khz-32kbitrate-mono-mp3
+* audio-16khz-64kbitrate-mono-mp3
+* audio-16khz-128kbitrate-mono-mp3
+* audio-24khz-48kbitrate-mono-mp3
+* audio-24khz-96kbitrate-mono-mp3
+* audio-24khz-160kbitrate-mono-mp3
+
+## Getting results
+
+With batch synthesis API, use the URL from the `outputs.result` property of the GET batch synthesis response. The [results](batch-synthesis.md#batch-synthesis-results) are in a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details.
+
+Long Audio API text inputs and results are returned via two separate content URLs as shown in the following example. The one with `"kind": "LongAudioSynthesisScript"` is the input script submitted. The other one with `"kind": "LongAudioSynthesisResult"` is the result of this request. Both ZIP files can be downloaded from the URL in their `links.contentUrl` property.
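For example, the following sketch retrieves a finished batch synthesis job and downloads the result ZIP from `outputs.result`; the job URL is assumed to be the one returned when the job was created.

```python
import requests

# Placeholders: synthesis_url is the URL of an individual batch synthesis job.
synthesis_url = '<synthesis-job-url>'
headers = {'Ocp-Apim-Subscription-Key': '<your_key>'}

job = requests.get(synthesis_url, headers=headers).json()
if job['status'] == 'Succeeded':
    result_url = job['outputs']['result']  # URL of the ZIP described above
    with open('synthesis-results.zip', 'wb') as output_file:
        output_file.write(requests.get(result_url).content)
```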
+
+## Cleaning up resources
+
+Batch synthesis API supports up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed". The Speech service will keep each synthesis history for up to 31 days, or the duration of the request `timeToLive` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
+
+The Long Audio API is limited to 20,000 requests for each Azure subscription account. The Speech service doesn't remove job history automatically. You must remove the previous job run history before making new requests that would otherwise exceed the limit.
+
+## Next steps
+
+- [Batch synthesis API](batch-synthesis.md)
+- [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)
+- [Text-to-speech quickstart](get-started-text-to-speech.md)
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
riff-48khz-16bit-mono-pcm
## Next steps
- [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
-- [Asynchronous synthesis for long-form audio](./long-audio-api.md)
- [Get started with custom neural voice](how-to-custom-voice.md)
+- [Batch synthesis](batch-synthesis.md)
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
You can use online transcription with the [Speech SDK](speech-sdk.md) or the [sp
<sup>1</sup> For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/> <sup>2</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit).<br/>
-### Text-to-speech quotas and limits per resource
+### Text-to-speech quotas and limits per Speech resource
In the following tables, the parameters without the **Adjustable** row aren't adjustable for all price tiers.
In the following tables, the parameters without the **Adjustable** row aren't ad
| Quota | Free (F0)<sup>3</sup> | Standard (S0) | |--|--|--|
-| **Max number of transactions per certain time period per Speech service resource** | | |
+| **Max number of transactions per certain time period** | | |
| Real-time API. Prebuilt neural voices and custom neural voices. | 20 transactions per 60 seconds | 200 transactions per second (TPS) (default value) |
| Adjustable | No<sup>4</sup> | Yes<sup>5</sup>, up to 1000 TPS |
| **HTTP-specific quotas** | | |
In the following tables, the parameters without the **Adjustable** row aren't ad
| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
| Max SSML message size per turn | 64 KB | 64 KB |
-#### Long Audio API
-
-| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
-|--|--|--|
-| Min text length | N/A | 400 characters for plain text; 400 [billable characters](text-to-speech.md#pricing-note) for SSML |
-| Max text length | N/A | 10000 paragraphs |
-| Start time | N/A | 10 tasks or 10000 characters accumulated |
-
#### Custom Neural Voice
| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
|--|--|--|
-| Max number of transactions per second (TPS) per Speech service resource | Not available for F0 | See [General](#general) |
-| Max number of datasets per Speech service resource | N/A | 500 |
-| Max number of simultaneous dataset uploads per Speech service resource | N/A | 5 |
+| Max number of transactions per second (TPS) | Not available for F0 | See [General](#general) |
+| Max number of datasets | N/A | 500 |
+| Max number of simultaneous dataset uploads | N/A | 5 |
| Max data file size for data import per dataset | N/A | 2 GB |
| Upload of long audios or audios without script | N/A | Yes |
-| Max number of simultaneous model trainings per Speech service resource | N/A | 3 |
-| Max number of custom endpoints per Speech service resource | N/A | 50 |
-| *Concurrent request limit for Custom Neural Voice* | | |
+| Max number of simultaneous model trainings | N/A | 3 |
+| Max number of custom endpoints | N/A | 50 |
+| Concurrent request limit for Custom Neural Voice | | |
| Default value | N/A | 10 |
| Adjustable | N/A | Yes<sup>5</sup> |
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Use the `mstts:silence` element to insert pauses before or after text, or betwee
| Attribute | Description | Required or optional |
| - | - | -- |
-| `type` | Specifies the location of silence to be added: <ul><li>`Leading` ΓÇô At the beginning of text </li><li>`Tailing` ΓÇô At the end of text </li><li>`Sentenceboundary` ΓÇô Between adjacent sentences </li></ul> | Required |
-| `Value` | Specifies the absolute duration of a pause in seconds or milliseconds. This value should be set less than 5,000 ms. Examples of valid values are `2s` and `500ms`.| Required |
+| `type` | Specifies where and how to add silence. The following silence types are supported:<br/><ul><li>`Leading` ΓÇô Additional silence at the beginning of the text. The value that you set is added to the natural silence before the start of text.</li><li>`Leading-exact` ΓÇô Silence at the beginning of the text. The value is an absolute silence length.</li><li>`Tailing` ΓÇô Additional silence at the end of text. The value that you set is added to the natural silence after the last word.</li><li>`Tailing-exact` ΓÇô Silence at the end of the text. The value is an absolute silence length.</li><li>`Sentenceboundary` ΓÇô Additional silence between adjacent sentences. The actual silence length for this type includes the natural silence after the last word in the previous sentence, the value you set for this type, and the natural silence before the starting word in the next sentence.</li><li>`Sentenceboundary-exact` ΓÇô Silence between adjacent sentences. The value is an absolute silence length.</li></ul><br/>An absolute silence type (with the `-exact` suffix) replaces any otherwise natural leading or trailing silence. Absolute silence types take precedence over the corresponding non-absolute type. For example, if you set both `Leading` and `Leading-exact` types, the `Leading-exact` type will take effect.| Required |
+| `Value` | Specifies the duration of a pause in seconds or milliseconds. This value should be set to less than 5,000 ms. Examples of valid values are `2s` and `500ms`.| Required |
**Example**
Sometimes text-to-speech can't accurately pronounce a word. Examples might be th
``` > [!NOTE]
-> The `lexicon` element is not supported by the [Long Audio API](long-audio-api.md).
+> The `lexicon` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text-to-speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead.
**Attribute**
Any audio included in the SSML document must meet these requirements:
* The audio must not contain any customer-specific or other sensitive information. > [!NOTE]
-> The 'audio' element is not supported by the [Long Audio API](long-audio-api.md).
+> The 'audio' element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text-to-speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead.
**Syntax**
Only one background audio file is allowed per SSML document. You can intersperse
> [!NOTE] > The `mstts:backgroundaudio` element should be put in front of all `voice` elements, i.e., the first child of the `speak` element. >
-> The `mstts:backgroundaudio` element is not supported by the [Long Audio API](long-audio-api.md).
+> The `mstts:backgroundaudio` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text-to-speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead.
**Syntax**
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
Here's more information about neural text-to-speech features in the Speech servi
* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text-to-speech by using [prebuilt neural voices](language-support.md?tabs=stt-tts) or [custom neural voices](custom-neural-voice.md).
-* **Asynchronous synthesis of long audio**: Use the [Long Audio API](long-audio-api.md) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or speech-to-text REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
+* **Asynchronous synthesis of long audio**: Use the [batch synthesis API](batch-synthesis.md) (Preview) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or speech-to-text REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
* **Prebuilt neural voices**: Microsoft neural text-to-speech capability uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis happen simultaneously, which results in more fluid and natural-sounding outputs. Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. You can use neural voices to:
cognitive-services Modifications Deprecations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/modifications-deprecations.md
+
+ Title: Modifications to Translator Service
+description: Translator Service changes, modifications, and deprecations
++++++ Last updated : 11/15/2022+++
+# Modifications to Translator Service
+
+Learn about Translator service changes, modifications, and deprecations.
+
+> [!NOTE]
+> Looking for updates and preview announcements? Visit our [What's new](whats-new.md) page to stay up to date with release notes, feature enhancements, and our newest documentation.
+
+## November 2022
+
+### Changes to Translator `Usage` metrics
+
+> [!IMPORTANT]
+> **`Characters Translated`** and **`Characters Trained`** metrics are deprecated and have been removed from the Azure portal.
+
+|Deprecated metric| Current metric(s) | Description|
+||||
+|Characters Translated (Deprecated)</br></br></br></br>|**&bullet; Text Characters Translated**</br></br>**&bullet; Text Custom Characters Translated**| &bullet; Number of characters in incoming **text** translation request.</br></br> &bullet; Number of characters in incoming **custom** translation request. |
+|Characters Trained (Deprecated) | **&bullet; Text Trained Characters** | &bullet; Number of characters **trained** using text translation service.|
+
+* In 2021, two new metrics, **Text Characters Translated** and **Text Custom Characters Translated**, were added to provide more granular service usage data. These metrics replaced **Characters Translated**, which provided combined usage data for the general and custom text translation services.
+
+* Similarly, the **Text Trained Characters** metric was added to replace the **Characters Trained** metric.
+
+* The **Characters Trained** and **Characters Translated** metrics continued to be supported in the Azure portal, marked as deprecated, to allow migration to the current metrics. As of October 2022, Characters Trained and Characters Translated are no longer available in the Azure portal.
++
cognitive-services Translator Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-overview.md
Previously updated : 07/11/2022 Last updated : 11/16/2022 -
-keywords: translator, text translation, machine translation, translation service, custom translator
# What is Azure Cognitive Services Translator?
-Translator Service is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs. Translator can be used with any operating system and powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you'll learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
+Translator Service is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you'll learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
Translator documentation contains the following article types:
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/whats-new.md
Previously updated : 06/14/2022 Last updated : 11/16/2022 <!-- markdownlint-disable MD024 -->
# What's new in Azure Cognitive Services Translator?
-Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
+Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation.
Translator is a language service that enables users to translate text and documents, helps entities expand their global outreach, and supports preservation of at-risk and endangered languages.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/overview.md
Creating a custom text classification project typically involves several differe
Follow these steps to get the most out of your model:
-1. **Define your schema**: Know your data and identify the [classes](glossary.md#class) you want differentiate between, avoid ambiguity.
+1. **Define your schema**: Know your data and identify the [classes](glossary.md#class) you want to differentiate between, to avoid ambiguity.
2. **Label your data**: The quality of data labeling is a key factor in determining model performance. Similar documents should always be labeled with the same class; if a document can fall into two classes, use a **Multi label classification** project. Avoid class ambiguity: make sure that your classes are clearly separable from each other, especially with single label classification projects.
cognitive-services Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/concepts/entity-categories.md
The entity in this category can have the following subcategories.
:::column-end::: :::row-end:::
-## Category: Quantity
+
+## Category: Age
This category contains the following entities:
This category contains the following entities:
:::column span=""::: **Entity**
- Quantity
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Numbers and numeric quantities.
-
- To get this entity category, add `Quantity` to the `piiCategories` parameter. `Quantity` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-#### Subcategories
-
-The entity in this category can have the following subcategories.
-
- :::column span="":::
- **Entity subcategory**
-
- Age
+ Age
:::column-end::: :::column span="2":::
The entity in this category can have the following subcategories.
Ages. To get this entity category, add `Age` to the `piiCategories` parameter. `Age` will be returned in the API response if detected.
-
+ :::column-end::: :::column span="2"::: **Supported document languages**
cognitive-services How To Call For Conversations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call-for-conversations.md
The API will attempt to detect all the [defined entity categories](concepts/conv
For spoken transcripts, the entities detected will be returned on the `redactionSource` parameter value provided. Currently, the supported values for `redactionSource` are `text`, `lexical`, `itn`, and `maskedItn` (which map to the Microsoft Speech to Text API's `display`\\`displayText`, `lexical`, `itn`, and `maskedItn` formats respectively). Additionally, for the spoken transcript input, this API will also provide audio timing information to empower audio redaction. To use the audio redaction feature, set the optional `includeAudioRedaction` flag to `true`. The audio redaction is performed based on the lexical input format.
+> [!NOTE]
+> Conversation PII now supports 40,000 characters as document size.
+ ## Getting PII results
When you get results from PII detection, you can stream the results to an applic
|Language |Package version | ||| |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) |
- |Python | [1.0.0](https://pypi.org/project/azure-ai-language-conversations/1.0.0) |
+ |Python | [1.1.0b2](https://pypi.org/project/azure-ai-language-conversations/1.1.0b2) |
4. After you've installed the client library, use the following samples on GitHub to start calling the API.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## November 2022
+* Conversational PII now supports up to 40,000 characters as document size.
+ ## October 2022 * The summarization feature now has the following capabilities:
communication-services Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights.md
The **Recording** tab displays data relevant to total recordings, recording form
:::image type="content" source="media\workbooks\azure-communication-services-recording-insights.png" alt-text="Screenshot displays recording count, duration, recording usage by format and type as well as number of recordings per call."::: -
+The **Call Automation** tab displays data about calls placed or answered using Call Automation SDK, like active call count, operations executed and errors encountered by your resource over time. You can also examine a particular call by looking at the sequence of operations taken on that call using the SDK:
## Editing dashboards
communication-services Network Diagnostic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/network-diagnostic.md
Previously updated : 03/29/2022 Last updated : 11/16/2022
# Network Diagnostics Tool
The **Network Diagnostics Tool** enables Azure Communication Services developers to ensure that their device and network conditions are optimal for connecting to the service and delivering a great call experience. The tool can be found at [aka.ms/acsdiagnostics](https://azurecommdiagnostics.net/). Users can quickly run a test by pressing the start test button. The tool performs diagnostics on the network, devices, and call quality. The results of the diagnostics are provided directly through the tool's UI. No sign-in is required to use the tool. After the test, a GUID is presented which can be provided to our support team for further help.
The **Network Diagnostics Tool** enables Azure Communication Services developers
As part of the diagnostics performed, the user is asked to enable permissions for the tool to access their devices. Next, the user is asked to record their voice, which is then played back using an echo bot to ensure that the microphone is working. The tool finally, performs a video test. The test uses the camera to detect video and measure the quality for sent and received frames.
-If you are looking to build your own Network Diagnostic Tool or to perform deeper integration of this tool into your application, you can levearge [pre-call diagnostic APIs](../voice-video-calling/pre-call-diagnostics.md) for the calling SDK.
+If you are looking to build your own Network Diagnostic Tool or to perform deeper integration of this tool into your application, you can leverage [pre-call diagnostic APIs](../voice-video-calling/pre-call-diagnostics.md) for the calling SDK.
## Performed tests
If you are looking to build your own Network Diagnostic Tool or to perform deepe
|--||
| Browser Diagnostic | Checks for browser compatibility. Azure Communication Services supports specific browsers for [calling](../voice-video-calling/calling-sdk-features.md#javascript-calling-sdk-support-by-os-and-browser) and [chat](../chat/sdk-features.md#javascript-chat-sdk-support-by-os-and-browser). |
| Media Device Diagnostic | Checks for availability of device (camera, microphone and speaker) and enabled permissions for those devices on the browser. |
- | Service Connectivity | Checks whether it can connect to Azure Communication Services |
| Audio Test | Performs an echo bot call. Here the user can talk to the echo bot and hear themselves back. The test records media quality statistics for audio including jitter, bitrate, packet loss and RTT with thresholds for optimal conditions. |
| Video Test | Performs a loop back video test, where video captured by the camera is sent back and forth to check for network quality conditions. The test records media quality statistics for video including jitter, bitrate, packet loss and RTT with thresholds for optimal conditions. |
The test provides a **unique identifier** for your test which you can provide ou
- [Use Pre-Call Diagnostic APIs to build your own tech check](../voice-video-calling/pre-call-diagnostics.md) - [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md) - [Enable Media Quality Statistics in your application](../voice-video-calling/media-quality-sdk.md)-- [Debug your application with Monitoring tool](./real-time-inspection.md) - [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
communication-services Quickstart Botframework Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md
Title: Quickstart - Add a bot to your chat app
+ Title: Add a bot to your chat app
description: This quickstart shows you how to build chat experience with a bot using Communication Services Chat SDK and Bot Services. --++ - Previously updated : 01/25/2022+ Last updated : 10/18/2022
-# Quickstart: Add a bot to your chat app
+# Add a bot to your chat app
> [!IMPORTANT]
-> This functionality is in private preview, and restricted to a limited number of Azure Communication Services early adopters. You can [submit this form to request participation in the preview](https://forms.office.com/r/HBm8jRuuGZ) and we will review your scenario(s) and evaluate your participation in the preview.
+> This functionality is in public preview.
>
-> Private Preview APIs and SDKs are provided without a service-level agreement, and are not appropriate for production workloads and should only be used with test users and test data. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-In this quickstart, we'll learn how to build conversational AI experiences in our chat application using 'Communication Services-Chat' messaging channel available under Azure Bot Services. We'll create a bot using BotFramework SDK and learn how to integrate this bot into our chat application that is built using Communication Services Chat SDK.
-You'll learn how to:
+In this quickstart, you will learn how to build conversational AI experiences in a chat application by using the Azure Communication Services Chat messaging channel that is available under Azure Bot Services. This article describes how to create a bot using the BotFramework SDK and how to integrate this bot into any chat application that is built using the Communication Services Chat SDK.
-- [Create and deploy a bot](#step-1create-and-deploy-a-bot)-- [Get an Azure Communication Services Resource](#step-2get-an-azure-communication-services-resource)-- [Enable Communication Services' Chat Channel for the bot](#step-3enable-azure-communication-services-chat-channel)
+You will learn how to:
+
+- [Create and deploy an Azure bot](#step-1create-and-deploy-an-azure-bot)
+- [Get an Azure Communication Services resource](#step-2get-an-azure-communication-services-resource)
+- [Enable Communication Services Chat channel for the bot](#step-3enable-azure-communication-services-chat-channel)
- [Create a chat app and add bot as a participant](#step-4create-a-chat-app-and-add-bot-as-a-participant)
-- [Explore additional features available for bot](#more-things-you-can-do-with-bot)
+- [Explore more features available for bot](#more-things-you-can-do-with-a-bot)
## Prerequisites
- Create an Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
- [Visual Studio (2019 and above)](https://visualstudio.microsoft.com/vs/)
-- [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet-core/3.1) (Make sure to install version that corresponds with your visual studio instance, 32 vs 64 bit)
-
+- Latest version of .NET Core. For this tutorial, we have used [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet-core/3.1) (Make sure to install the version that corresponds with your visual studio instance, 32 vs 64 bit)
-## Step 1 - Create and deploy a bot
-In order to use Azure Communication Services chat as a channel in Azure Bot Service, the first step would be to deploy a bot. Please follow these steps:
+## Step 1 - Create and deploy an Azure bot
-### Provision a bot service resource in Azure
+To use Azure Communication Services chat as a channel in Azure Bot Service, the first step is to deploy a bot. You can do so by following the steps below:
- 1. Click on create a resource option in Azure portal.
-
- :::image type="content" source="./media/create-a-new-resource.png" alt-text="Create a new resource":::
-
- 2. Search Azure Bot in the list of available resource types.
-
- :::image type="content" source="./media/search-azure-bot.png" alt-text="Search Azure Bot":::
+### Create an Azure bot service resource in Azure
+ Refer to the Azure Bot Service documentation on how to [create a bot](/azure/bot-service/abs-quickstart?tabs=userassigned).
- 3. Choose Azure Bot to create it.
+ For this example, we have selected a multitenant bot, but if you wish to use single tenant or managed identity bots, refer to [configuring single tenant and managed identity bots](#support-for-single-tenant-and-managed-identity-bots).
- :::image type="content" source="./media/create-azure-bot.png" alt-text="Creat Azure Bot":::
-
- 4. Finally create an Azure Bot resource. You might use an existing Microsoft app ID or use a new one created automatically.
-
- :::image type="content" source="./media/smaller-provision-azure-bot.png" alt-text="Provision Azure Bot" lightbox="./media/provision-azure-bot.png":::
### Get Bot's MicrosoftAppId and MicrosoftAppPassword
-After creating the Azure Bot resource, next step would be to set a password for the App ID we set for the Bot credential if you chose to create one automatically in the first step.
-
- 1. Go to Azure Active Directory
-
- :::image type="content" source="./media/azure-ad.png" alt-text="Azure Active Directory":::
+ Fetch your Azure bot's [Microsoft App ID and secret](/azure/bot-service/abs-quickstart?tabs=userassigned#to-get-your-app-or-tenant-id) as you will need those values for configuration later.
-2. Find your app in the App Registration blade
+### Create a Web App where the bot logic resides
- :::image type="content" source="./media/smaller-app-registration.png" alt-text="App Registration" lightbox="./media/app-registration.png":::
+ You can check out some samples at [Bot Builder Samples](https://github.com/Microsoft/BotBuilder-Samples) and tweak them, or use the [Bot Builder SDK](/composer/introduction) to create one. One of the simplest samples is [Echo Bot](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/02.echo-bot). Generally, the Azure Bot Service expects the Bot Application Web App Controller to expose an endpoint `/api/messages`, which handles all the messages reaching the bot. To create the bot application, you can either use the Azure CLI to [create an App Service](/azure/bot-service/provision-app-service?tabs=singletenant%2Cexistingplan) or create it directly from the portal using the steps below.
-3. Create a new password for your app from the `Certificates and Secrets` blade and save the password you create as you won't be able to copy it again.
-
- :::image type="content" source="./media/smaller-save-password.png" alt-text="Save password" lightbox="./media/save-password.png":::
-
-### Create a Web App where actual bot logic resides
-
-Create a Web App where actual bot logic resides. You could check out some samples at [Bot Builder Samples](https://github.com/Microsoft/BotBuilder-Samples) and tweak them or use Bot Builder SDK to create one: [Bot Builder documentation](/composer/introduction). One of the simplest ones to play around with is Echo Bot located here with steps on how to use it and it's the one being used in this example [Echo Bot](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/02.echo-bot). Generally, the Bot Service expects the Bot Application Web App Controller to expose an endpoint `/api/messages`, which handles all the messages reaching the bot. To create the Bot application, follow these steps.
-
- 1. As in previously shown create a resource and choose `Web App` in search.
+ 1. Select `Create a resource` and in the search box, search for web app and select `Web App`.
- :::image type="content" source="./media/web-app.png" alt-text="Web app":::
+ :::image type="content" source="./media/web-app.png" alt-text="Screenshot of creating a Web app resource in Azure portal.":::
2. Configure the options you want to set including the region you want to deploy it to.
- :::image type="content" source="./media/web-app-create-options.png" alt-text="Web App Create Options":::
-
+ :::image type="content" source="./media/web-app-create-options.png" alt-text="Screenshot of specifying Web App create options to set.":::
-
- 3. Review your options and create the Web App and move to the resource once its been provisioned and copy the hostname URL exposed by the Web App.
+ 3. Review your options and create the Web App. Once it has been created, copy the hostname URL exposed by the Web App.
- :::image type="content" source="./media/web-app-endpoint.png" alt-text="Web App endpoint":::
+ :::image type="content" source="./media/web-app-endpoint.png" alt-text="Diagram that shows how to copy the newly created Web App endpoint.":::
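As noted above, the Azure Bot Service expects the web app to expose an endpoint at `/api/messages`. The Echo Bot sample already ships an equivalent controller, so the following is only a minimal sketch built on the Bot Framework SDK; names are illustrative.

```csharp
// Minimal sketch of a controller that exposes /api/messages and hands the
// incoming activity to the Bot Framework adapter, which runs the bot's turn logic.
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Integration.AspNet.Core;

[Route("api/messages")]
[ApiController]
public class BotController : ControllerBase
{
    private readonly IBotFrameworkHttpAdapter _adapter;
    private readonly IBot _bot;

    public BotController(IBotFrameworkHttpAdapter adapter, IBot bot)
    {
        _adapter = adapter;
        _bot = bot;
    }

    [HttpPost]
    public async Task PostAsync()
    {
        // Delegate processing of the HTTP request to the adapter.
        await _adapter.ProcessAsync(Request, Response, _bot);
    }
}
```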
### Configure the Azure Bot
-Configure the Azure Bot we created with its Web App endpoint where the bot logic is located. To do this, copy the hostname URL of the Web App and append it with `/api/messages`
+Configure the Azure Bot you created with its Web App endpoint where the bot logic is located. To do this configuration, copy the hostname URL of the Web App from the previous step and append `/api/messages` to it.
- :::image type="content" source="./media/smaller-bot-configure-with-endpoint.png" alt-text="Bot Configure with Endpoint" lightbox="./media/bot-configure-with-endpoint.png":::
+ :::image type="content" source="./media/smaller-bot-configure-with-endpoint.png" alt-text="Diagram that shows how to set bot messaging endpoint with the copied Web App endpoint." lightbox="./media/bot-configure-with-endpoint.png":::
### Deploy the Azure Bot
-The final step would be to deploy the bot logic to the Web App we created. As we mentioned for this tutorial, we'll be using the Echo Bot. This bot only demonstrates a limited set of capabilities, such as echoing the user input. Here's how we deploy it to Azure Web App.
+The final step is to deploy the Echo Bot logic to the Web App we created. The Echo Bot's functionality is limited to echoing the user input. Here's how to deploy it to the Azure Web App.
1. To use the samples, clone this GitHub repository using Git. ```
- git clone https://github.com/Microsoft/BotBuilder-Samples.gitcd BotBuilder-Samples
+ git clone https://github.com/Microsoft/BotBuilder-Samples.git
+ cd BotBuilder-Samples
``` 2. Open the project located here [Echo bot](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/02.echo-bot) in Visual Studio.
- 3. Go to the appsettings.json file inside the project and copy the application ID and password we created in step 2 in respective places.
+ 3. Go to the appsettings.json file inside the project and copy the [Microsoft application ID and secret](#get-bots-microsoftappid-and-microsoftapppassword) into their respective placeholders.
```js { "MicrosoftAppId": "<App-registration-id>", "MicrosoftAppPassword": "<App-password>" } ```
+ To deploy the bot, you can either use the command line to [deploy an Azure bot](/azure/bot-service/provision-and-publish-a-bot?tabs=userassigned%2Ccsharp) or use Visual Studio for C# bots as described below.
- 4. Click on the project to publish the Web App code to Azure. Choose the publish option in Visual Studio.
+ 1. Select the project to publish the Web App code to Azure. Choose the publish option in Visual Studio.
- :::image type="content" source="./media/publish-app.png" alt-text="Publish app":::
+ :::image type="content" source="./media/publish-app.png" alt-text="Screenshot of publishing your Web App from Visual Studio.":::
- 5. Click on New to create a new publishing profile, choose Azure as the target, and Azure App Service as the specific target.
+ 2. Select New to create a new publishing profile, choose Azure as the target, and Azure App Service as the specific target.
- :::image type="content" source="./media/select-azure-as-target.png" alt-text="Select Azure as Target":::
+ :::image type="content" source="./media/select-azure-as-target.png" alt-text="Diagram that shows how to select Azure as target in a new publishing profile.":::
- :::image type="content" source="./media/select-app-service.png" alt-text="Select App Service":::
+ :::image type="content" source="./media/select-app-service.png" alt-text="Diagram that shows how to select specific target as Azure App Service.":::
- 6. Lastly, the above option opens the deployment config. Choose the Web App we had provisioned from the list of options it comes up with after signing into your Azure account. Once ready click on `Finish` to start the deployment.
+ 3. Lastly, the above option opens the deployment configuration. After signing in to your Azure account, choose the Web App we created from the list of options. Once ready, select `Finish` to complete the profile, and then select `Publish` to start the deployment.
- :::image type="content" source="./media/smaller-deployment-config.png" alt-text="Deployment config" lightbox="./media/deployment-config.png":::
+ :::image type="content" source="./media/smaller-deployment-config.png" alt-text="Screenshot of setting deployment config with the created Web App name." lightbox="./media/deployment-config.png":::
## Step 2 - Get an Azure Communication Services Resource
-Now that you got the bot part sorted out, we'll need to get an Azure Communication Services resource, which we would use for configuring the Azure Communication Services channel.
-1. Create an Azure Communication Services resource. For details, see [Create an Azure Communication Services resource](../../quickstarts/create-communication-resource.md).
-2. Create a Azure Communication Services User and issue a user access token [User Access Token](../../quickstarts/access-tokens.md). Be sure to set the scope to **chat**, and **note the token string as well as the userId string**.
+Now that the bot is created and deployed, you'll need an Azure Communication Services resource, which you can use to configure the Azure Communication Services channel.
+1. Create an Azure Communication Services resource. For details, see [Create an Azure Communication Services resource](../../quickstarts/create-communication-resource.md).
+
+2. Create an Azure Communication Services User and issue a [User Access Token](../../quickstarts/access-tokens.md). Be sure to set the scope to **chat**, and **note the token string as well as the userId string**. A minimal sketch of this step follows the list.
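For reference, a hedged sketch of creating a user and issuing a chat-scoped token with the `Azure.Communication.Identity` package could look like the following; the connection string value is a placeholder.

```csharp
// Create an identity and issue a chat-scoped access token (connection string is a placeholder).
using System;
using Azure.Communication.Identity;

var identityClient = new CommunicationIdentityClient("<your-acs-connection-string>");

var user = await identityClient.CreateUserAsync();
var token = await identityClient.GetTokenAsync(user.Value, scopes: new[] { CommunicationTokenScope.Chat });

// Note both values; the token configures the chat client and the userId identifies the user.
Console.WriteLine($"userId: {user.Value.Id}");
Console.WriteLine($"token: {token.Value.Token}");
```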
-## Step 3 - Enable Azure Communication Services Chat Channel
-With the Azure Communication Services resource, we can configure the Azure Communication Services channel in Azure Bot to bind an Azure Communication Services User ID with a bot. Note that currently, only the allowlisted Azure account will be able to see Azure Communication Services - Chat channel.
-1. Go to your Bot Services resource on Azure portal. Navigate to `Channels` blade and click on `Azure Communications Services - Chat` channel from the list provided.
+## Step 3 - Enable Azure Communication Services Chat channel
+With the Azure Communication Services resource, you can set up the Azure Communication Services channel in Azure Bot to assign an Azure Communication Services User ID to a bot.
+
+1. Go to your Bot Services resource in the Azure portal. Navigate to `Channels` configuration on the left pane and select the `Azure Communication Services - Chat` channel from the list provided.
- :::image type="content" source="./media/smaller-demoapp-launch-acs-chat.png" alt-text="DemoApp Launch Acs Chat" lightbox="./media/demoapp-launch-acs-chat.png":::
+ :::image type="content" source="./media/smaller-demoapp-launch-acs-chat.png" alt-text="Screenshot of launching Azure Communication Services Chat channel." lightbox="./media/demoapp-launch-acs-chat.png":::
-2. Choose from the dropdown list the Azure Communication Services resource that you want to connect with.
+2. Select the **Connect** button to see a list of Azure Communication Services resources available under your subscriptions.
+
+ :::image type="content" source="./media/smaller-bot-connect-acs-chat-channel.png" alt-text="Diagram that shows how to connect an Azure Communication Service Resource to this bot." lightbox="./media/bot-connect-acs-chat-channel.png":::
- :::image type="content" source="./media/smaller-demoapp-connect-acsresource.png" alt-text="DemoApp Connect Acs Resource" lightbox="./media/demoapp-connect-acsresource.png":::
+3. Once you have selected the required Azure Communication Services resource from the resources dropdown list, select the **Apply** button.
+ :::image type="content" source="./media/smaller-bot-choose-resource.png" alt-text="Diagram that shows how to save the selected Azure Communication Service resource to create a new Azure Communication Services user ID." lightbox="./media/bot-choose-resource.png":::
-3. Once the provided resource details are verified, you'll see the **bot's Azure Communication Services ID** assigned. With this ID, you can add the bot to the conversation at whenever appropriate using Chat's AddParticipant API. Once the bot is added as participant to a chat, it will start receiving chat related activities and can respond back in the chat thread.
+4. Once the provided resource details are verified, you'll see the **bot's Azure Communication Services ID** assigned. With this ID, you can add the bot to the conversation whenever appropriate using Chat's AddParticipant API (see the sketch after this step). Once the bot is added as a participant to a chat, it starts receiving chat-related activities and can respond in the chat thread.
- :::image type="content" source="./media/smaller-demoapp-bot-detail.png" alt-text="DemoApp Bot Detail" lightbox="./media/demoapp-bot-detail.png":::
+ :::image type="content" source="./media/smaller-acs-chat-channel-saved.png" alt-text="Screenshot of new Azure Communication Services user ID assigned to the bot." lightbox="./media/acs-chat-channel-saved.png":::
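For illustration, a hedged sketch of adding the bot to an existing chat thread with the .NET Chat SDK might look like this; the endpoint, user access token, thread ID, and bot ACS ID are placeholders.

```csharp
using System;
using Azure.Communication;
using Azure.Communication.Chat;

// Create a chat client with a chat-scoped user access token (placeholder values).
var chatClient = new ChatClient(
    new Uri("https://<your-acs-resource>.communication.azure.com"),
    new CommunicationTokenCredential("<user-access-token>"));

ChatThreadClient chatThreadClient = chatClient.GetChatThreadClient("<existing-thread-id>");

// Add the bot to the thread by its Azure Communication Services ID.
var bot = new ChatParticipant(new CommunicationUserIdentifier("<bot-acs-user-id>"))
{
    DisplayName = "EchoBot"
};
await chatThreadClient.AddParticipantAsync(bot);
```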
## Step 4 - Create a chat app and add bot as a participant
-Now that you have the bot's Azure Communication Services ID, you'll be able to create a chat thread with bot as a participant.
+Now that you have the bot's Azure Communication Services ID, you can create a chat thread with the bot as a participant.
+ ### Create a new C# application ```console
dotnet add package Azure.Communication.Chat
### Create a chat client
-To create a chat client, you'll use your Communication Services endpoint and the access token that was generated as part of Step 2. You need to use the `CommunicationIdentityClient` class from the Identity SDK to create a user and issue a token to pass to your chat client.
+To create a chat client, you will use your Azure Communication Services endpoint and the access token that was generated as part of Step 2. You need to use the `CommunicationIdentityClient` class from the Identity SDK to create a user and issue a token to pass to your chat client.
Copy the following code snippets and paste into source file: **Program.cs**
await foreach (ChatMessage message in allMessages)
} ``` You should see bot's echo reply to "Hello World" in the list of messages.
-When creating the actual chat applications, you can also receive real-time chat messages by subscribing to listen for new incoming messages using our JavaScript or mobile SDKs. An example using JavaScript SDK would be:
+When creating chat applications, you can also receive real-time notifications by subscribing to listen for new incoming messages using our JavaScript or mobile SDKs. Here's an example using the JavaScript SDK:
```js // open notifications channel await chatClient.startRealtimeNotifications();
chatClient.on("chatMessageReceived", (e) => {
}); ```
+### Clean up the chat thread
+
+Delete the thread when finished.
+
+```csharp
+chatClient.DeleteChatThread(threadId);
+```
### Deploy the C# chat application
-If you would like to deploy the chat application, you can follow these steps:
+Follow these steps to deploy the chat application:
1. Open the chat project in Visual Studio.
-2. Right click on the ChatQuickstart project and click Publish
+2. Select the ChatQuickstart project and, from the right-click menu, select **Publish**.
- :::image type="content" source="./media/deploy-chat-application.png" alt-text="Deploy Chat Application":::
+ :::image type="content" source="./media/deploy-chat-application.png" alt-text="Screenshot of deploying chat application to Azure from Visual Studio.":::
-## More things you can do with bot
-Besides simple text message, bot is also able to receive and send many other activities including
+## More things you can do with a bot
+In addition to sending a plain text message, a bot is also able to receive many other activities from the user through the Azure Communication Services Chat channel, including:
- Conversation update - Message update - Message delete - Typing indicator - Event activity
+- Various attachments including Adaptive cards
+- Bot channel data
+
+Below are some samples to illustrate these features:
### Send a welcome message when a new user is added to the thread
-With the current Echo Bot logic, it accepts input from the user and echoes it back. If you would like to add additional logic such as responding to a participant added Azure Communication Services event, copy the following code snippets and paste into the source file: [EchoBot.cs](https://github.com/microsoft/BotBuilder-Samples/blob/main/samples/csharp_dotnetcore/02.echo-bot/Bots/EchoBot.cs)
+ The current Echo Bot logic accepts input from the user and echoes it back. If you would like to add extra logic, such as responding to an Azure Communication Services *participant added* event, copy the following code snippet and paste it into the source file: [EchoBot.cs](https://github.com/microsoft/BotBuilder-Samples/blob/main/samples/csharp_dotnetcore/02.echo-bot/Bots/EchoBot.cs)
```csharp using System.Threading;
namespace Microsoft.BotBuilderSamples.Bots
``` ### Send an adaptive card
-To help you increase engagement and efficiency and communicate with users in a variety of ways, you can send adaptive cards to the chat thread. You can send adaptive cards from a bot by adding them as bot activity attachments.
+Sending adaptive cards to the chat thread can help you increase engagement and efficiency and communicate with users in a variety of ways. You can send adaptive cards from a bot by adding them as bot activity attachments.
```csharp
await turnContext.SendActivityAsync(reply, cancellationToken);
``` You can find sample payloads for adaptive cards at [Samples and Templates](https://adaptivecards.io/samples)
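For illustration, a hedged sketch of building such an attachment inside the bot's turn handler might look like the following; the card file path is a placeholder, and `turnContext` and `cancellationToken` come from the handler.

```csharp
using System.Collections.Generic;
using System.IO;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;
using Newtonsoft.Json;

// Load an adaptive card payload from a placeholder path and attach it to the reply.
var cardJson = File.ReadAllText("Resources/sampleCard.json");

var reply = MessageFactory.Text("Here's an adaptive card");
reply.Attachments = new List<Attachment>
{
    new Attachment
    {
        ContentType = "application/vnd.microsoft.card.adaptive",
        Content = JsonConvert.DeserializeObject(cardJson)
    }
};

await turnContext.SendActivityAsync(reply, cancellationToken);
```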
-And on the Azure Communication Services User side, the Azure Communication Services message's metadata field will indicate this is a message with attachment.The key is microsoft.azure.communication.chat.bot.contenttype, which is set to the value azurebotservice.adaptivecard. This is an example of the chat message that will be received:
+On the Azure Communication Services User side, the Azure Communication Services Chat channel will add a field to the message's metadata that will indicate that this message has an attachment. The key in the metadata is `microsoft.azure.communication.chat.bot.contenttype`, which is set to the value `azurebotservice.adaptivecard`. Here is an example of the chat message that will be received:
```json {
And on the Azure Communication Services User side, the Azure Communication Servi
} ```
+### Send a message from user to bot
+
+You can send a simple text message from a user to the bot the same way you send a text message to another user.
+However, when sending a message carrying an attachment from a user to the bot, you need to add the flag `"microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.adaptivecard"` to the ACS Chat metadata. To send an event activity from a user to the bot, add `"microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.event"` to the ACS Chat metadata. Below are sample formats for user-to-bot ACS Chat messages.
+
+#### Simple text message
+
+```json
+{
+ "content":"Simple text message",
+ "senderDisplayName":"Acs-Dev-Bot",
+ "metadata":{
+ "text":"random text",
+ "key1":"value1",
+ "key2":"{\r\n \"subkey1\": \"subValue1\"\r\n
+ "},
+ "messageType": "Text"
+}
+```
+
+#### Message with an attachment
+
+```json
+{
+ "content": "{
+ \"text\":\"sample text\",
+ \"attachments\": [{
+ \"contentType\":\"application/vnd.microsoft.card.adaptive\",
+ \"content\": { \"*adaptive card payload*\" }
+ }]
+ }",
+ "senderDisplayName": "Acs-Dev-Bot",
+ "metadata": {
+ "microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.adaptivecard",
+ "text": "random text",
+ "key1": "value1",
+ "key2": "{\r\n \"subkey1\": \"subValue1\"\r\n}"
+ },
+ "messageType": "Text"
+}
+```
+
+#### Message with an event activity
+
+The event payload comprises all JSON fields in the message content except the `name` field, which should contain the name of the event. In the example below, the event named `endOfConversation` with the payload `{"field1":"value1", "field2": { "nestedField":"nestedValue" }}` is sent to the bot.
+```json
+{
+ "content":"{
+ \"name\":\"endOfConversation\",
+ \"field1\":\"value1\",
+ \"field2\": {
+ \"nestedField\":\"nestedValue\"
+ }
+ }",
+ "senderDisplayName":"Acs-Dev-Bot",
+ "metadata":{
+ "microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.event",
+ "text":"random text",
+ "key1":"value1",
+ "key2":"{\r\n \"subkey1\": \"subValue1\"\r\n}"
+ },
+ "messageType": "Text"
+}
+```
+
+> The metadata field `"microsoft.azure.communication.chat.bot.contenttype"` is only needed in the user-to-bot direction. It isn't needed in the bot-to-user direction.
+
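On the user side with the .NET Chat SDK, a hedged sketch of attaching this metadata when sending a message could look like the following; it assumes an existing `chatThreadClient` and an SDK version that supports message metadata.

```csharp
using Azure.Communication.Chat;

// Build the message options; the content value is illustrative.
var options = new SendChatMessageOptions
{
    Content = "{\"text\":\"sample text\",\"attachments\":[]}",
    MessageType = ChatMessageType.Text,
    SenderDisplayName = "Acs-Dev-User"
};

// Flag the message so the channel knows the content carries an attachment.
options.Metadata.Add("microsoft.azure.communication.chat.bot.contenttype", "azurebotservice.adaptivecard");

await chatThreadClient.SendMessageAsync(options);
```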
+## Supported bot activity fields
+
+### Bot to user flow
+
+#### Activities
+
+- Message activity
+- Typing activity
+
+#### Message activity fields
+- `Text`
+- `Attachments`
+- `AttachmentLayout`
+- `SuggestedActions`
+- `From.Name` (Converted to ACS SenderDisplayName)
+- `ChannelData` (Converted to ACS Chat Metadata. If any `ChannelData` mapping values are objects, they'll be serialized in JSON format and sent as a string; see the sketch after this list)
+
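As an illustration, a hedged sketch of setting `ChannelData` on a bot reply (inside the bot's turn handler, with `turnContext` and `cancellationToken` assumed) could look like this:

```csharp
using System.Collections.Generic;
using Microsoft.Bot.Builder;

var reply = MessageFactory.Text("Hello from the bot");

// ChannelData surfaces to the ACS user as chat message metadata;
// object values are serialized to JSON strings.
reply.ChannelData = new Dictionary<string, object>
{
    ["key1"] = "value1",
    ["key2"] = new { subkey1 = "subValue1" }
};

await turnContext.SendActivityAsync(reply, cancellationToken);
```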
+### User to bot flow
+
+#### Activities and fields
+
+- Message activity
+ - `Id` (ACS Chat message ID)
+ - `TimeStamp`
+ - `Text`
+ - `Attachments`
+- Conversation update activity
+ - `MembersAdded`
+ - `MembersRemoved`
+ - `TopicName`
+- Message update activity
+ - `Id` (Updated ACS Chat message ID)
+ - `Text`
+ - `Attachments`
+- Message delete activity
+ - `Id` (Deleted ACS Chat message ID)
+- Event activity
+ - `Name`
+ - `Value`
+- Typing activity
+
+#### Other common fields
+
+- `Recipient.Id` and `Recipient.Name` (ACS Chat user ID and display name)
+- `From.Id` and `From.Name` (ACS Chat user ID and display name)
+- `Conversation.Id` (ACS Chat thread ID)
+- `ChannelId` (AcsChat if empty)
+- `ChannelData` (ACS Chat message metadata)
+
+## Support for single tenant and managed identity bots
+
+The ACS Chat channel supports single-tenant and managed identity bots as well. Refer to [bot identity information](/azure/bot-service/bot-builder-authentication?tabs=userassigned%2Caadv2%2Ccsharp#bot-identity-information) to set up your bot web app.
+
+For managed identity bots, you might additionally have to [update the bot service identity](/azure/bot-service/bot-builder-authentication?tabs=userassigned%2Caadv2%2Ccsharp#to-update-your-app-service).
+
+## Bot handoff patterns
+
+Sometimes the bot might not be able to understand or answer a question, or a customer might request to be connected to a human agent. In these cases, it's necessary to hand off the chat thread from the bot to a human agent. You can design your application to [transition conversation from bot to human](/azure/bot-service/bot-service-design-pattern-handoff-human).
+
+## Handling bot to bot communication
+
+ There may be certain use cases where two bots need to be added to the same chat thread. If this occurs, the bots may start replying to each other's messages. If this scenario isn't handled properly, the bots' automated interaction may result in an infinite loop of messages. Azure Communication Services Chat handles this scenario by throttling the requests, which prevents the bots from sending and receiving messages. You can learn more about the [throttle limits](/azure/communication-services/concepts/service-limits#chat).
+
+## Troubleshooting
+
+### Chat channel cannot be added
+
+- Verify that the bot messaging endpoint has been set correctly under **Configuration** in the Azure Bot Service portal.
+
+### Bot gets a forbidden exception while replying to a message
+
+- Verify that the bot's Microsoft App ID and secret are saved correctly in the bot configuration file uploaded to the web app.
+
+### Bot is not able to be added as a participant
+
+- Verify that the bot's ACS ID is being used correctly when sending a request to add the bot to a chat thread.
+ ## Next steps Try the [Sample App](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App), which showcases a 1:1 chat between the end user and chat bot, and uses BotFramework's WebChat UI component.
confidential-computing Virtual Machine Solutions Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-amd.md
Title: Azure Confidential virtual machine options on AMD processors (preview)
+ Title: Azure Confidential virtual machine options on AMD processors
description: Azure Confidential Computing offers multiple options for confidential virtual machines that run on AMD processors backed by SEV-SNP technology.
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
Although analytical store has built-in protection against physical failures, bac
Synapse Link, and analytical store by consequence, has different compatibility levels with Azure Cosmos DB backup modes: * Periodic backup mode is fully compatible with Synapse Link and these 2 features can be used in the same database account.
-* Continuous backup mode isn't fully supported yet:
- * Database accounts with Synapse Link enabled currently can't use continuous backup mode.
- * Database accounts with continuous backup mode enabled can enable Synapse Link through a support case. This capability is in preview now.
- * Database accounts that have neither continuous backup nor Synapse Link enabled can use these two features together through a support case. This capability is in preview now.
+* Currently, continuous backup mode and Synapse Link aren't supported in the same database account. Customers have to choose one of these two features, and this decision can't be changed.
### Backup Policies
cosmos-db Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/materialized-views.md
Title: Materialized Views for Azure Cosmos DB for Apache Cassandra. (Preview)
-description: This documentation is provided as a resource for participants in the preview of Azure Cosmos DB for Apache Cassandra Materialized View.
+ Title: Materialized views (preview)
+
+description: This documentation is provided as a resource for participants in the preview of Azure Cosmos DB Cassandra API Materialized View.
+ - Previously updated : 01/06/2022- Last updated : 11/17/2022+
-# Enable materialized views for Azure Cosmos DB for Apache Cassandra operations (Preview)
+# Materialized views in Azure Cosmos DB for Apache Cassandra (preview)
+ [!INCLUDE[Cassandra](../includes/appliesto-cassandra.md)] > [!IMPORTANT]
-> Materialized Views for Azure Cosmos DB for Apache Cassandra is currently in gated preview. Please send an email to mv-preview@microsoft.com to try this feature.
-> Materialized View preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Feature overview
-
-Materialized Views when defined will help provide a means to efficiently query a base table (container on Azure Cosmos DB) with non-primary key filters. When users write to the base table, the Materialized view is built automatically in the background. This view can have a different primary key for lookups. The view will also contain only the projected columns from the base table. It will be a read-only table.
+> Materialized views in Azure Cosmos DB for Cassandra is currently in preview. You can enable this feature using the Azure portal. This preview of materialized views is provided without a service-level agreement. At this time, materialized views are not recommended for production workloads. Certain features of this preview may not be supported or may have constrained capabilities. For more information, see [supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-You can query a column store without specifying a partition key by using Secondary Indexes. However, the query won't be effective for columns with high cardinality (scanning through all data for a small result set) or columns with low cardinality. Such queries end up being expensive as they end up being a cross partition query.
+Materialized views, when defined, help provide a means to efficiently query a base table (or container in Azure Cosmos DB) with filters that aren't primary keys. When users write to the base table, the materialized view is built automatically in the background. This view can have a different primary key for efficient lookups. The view will also only contain columns explicitly projected from the base table. This view will be a read-only table.
-With Materialized view, you can
-- Use as Global Secondary Indexes and save cross partition scans that reduce expensive queries -- Provide SQL based conditional predicate to populate only certain columns and certain data that meet the pre-condition -- Real time MVs that simplify real time event based scenarios where customers today use Change feed trigger for precondition checks to populate new collections"
+You can query a column store without specifying a partition key by using secondary indexes. However, the query won't be effective for columns with high or low cardinality. The query could scan through all data for a small result set. Such queries end up being expensive because they inadvertently execute as cross-partition queries.
-## Main benefits
+With a materialized view, you can:
-- With Materialized View (Server side denormalization), you can avoid multiple independent tables and client side denormalization. -- Materialized view feature takes on the responsibility of updating views in order to keep them consistent with the base table. With this feature, you can avoid dual writes to the base table and the view.-- Materialized Views helps optimize read performance-- Ability to specify throughput for the materialized view independently-- Based on the requirements to hydrate the view, you can configure the MV builder layer appropriately.-- Speeding up write operations as it only needs to be written to the base table.-- Additionally, This implementation on Azure Cosmos DB is based on a pull model, which doesn't affect the writer performance.
+- Use as a lookup or mapping table to avoid cross-partition scans that would otherwise be expensive queries.
+- Provide a SQL-based conditional predicate to populate only certain columns and data that meet the pre-condition.
+- Create real-time views that simplify event-based scenarios that are commonly stored as separate collections using change feed triggers.
+## Benefits of materialized views
+Materialized views have many benefits that include, but aren't limited to:
-## How to get started?
+- You can implement server-side denormalization using materialized views. With server-side denormalization, you can avoid multiple independent tables and computationally complex denormalization in client applications.
+- Materialized views are automatically updated to keep them consistent with the base table. This automatic update relieves your client applications of the responsibility that would otherwise require custom logic to perform dual writes to the base table and the view.
+- Materialized views optimize read performance by reading from a single view.
+- You can specify throughput for the materialized view independently.
+- You can configure a materialized view builder layer to map to your requirements to hydrate a view.
+- Materialized views improve write performance as write operations only need to be written to the base table.
+- Additionally, the Azure Cosmos DB implementation of materialized views is based on a pull model. This implementation doesn't affect write performance.
-New API for Cassandra accounts with Materialized Views enabled can be provisioned on your subscription by using REST API calls from az CLI.
+## Get started with materialized views
-### Log in to the Azure command line interface
+Enable the materialized views feature on your API for Cassandra account by using the Azure portal, a native Azure CLI command, or a REST API operation.
-Install Azure CLI as mentioned at [How to install the Azure CLI | Microsoft Docs](/cli/azure/install-azure-cli) and log on using the below:
- ```azurecli-interactive
- az login
- ```
+### [Azure portal](#tab/azure-portal)
-### Create an account
+1. Sign in to the [Azure portal](https://portal.azure.com/).
-To create account with support for customer managed keys and materialized views skip to **this** section
+1. Navigate to your API for Cassandra account.
-To create an account, use the following command after creating body.txt with the below content, replacing {{subscriptionId}} with your subscription ID, {{resourceGroup}} with a resource group name that you should have created in advance, and {{accountName}} with a name for your API for Cassandra account.
+1. In the resource menu, select **Settings**.
- ```azurecli-interactive
- az rest --method PUT --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-11-15-preview --body @body.txt
- body.txt content:
- {
- "location": "East US",
- "properties":
- {
- "databaseAccountOfferType": "Standard",
- "locations": [ { "locationName": "East US" } ],
- "capabilities": [ { "name": "EnableCassandra" }, { "name": "CassandraEnableMaterializedViews" }],
- "enableMaterializedViews": true
- }
- }
- ```
-
- Wait for a few minutes and check the completion using the below, the provisioningState in the output should have become Succeeded:
- ```
- az rest --method GET --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-11-15-preview
- ```
-### Create an account with support for customer managed keys and materialized views
+1. In the **Settings** section, select **Materialized View for Cassandra API (Preview)**.
-This step is optional - you can skip this step if you don't want to use Customer Managed Keys for your Azure Cosmos DB account.
+1. In the new dialog, select **Enable** to enable this feature for this account.
-To use Customer Managed Keys feature and Materialized views together on Azure Cosmos DB account, you must first configure managed identities with Azure Active Directory for your account and then enable support for materialized views.
+ :::image type="content" source="media/materialized-views/enable-in-portal.png" lightbox="media/materialized-views/enable-in-portal.png" alt-text="Screenshot of the Materialized Views feature being enabled in the Azure portal.":::
-You can use the documentation [here](../how-to-setup-cmk.md) to configure your Azure Cosmos DB Cassandra account with customer managed keys and setup managed identity access to the key Vault. Make sure you follow all the steps in [Using a managed identity in Azure key vault access policy](../how-to-setup-managed-identity.md). The next step to enable materializedViews on the account.
+### [Azure CLI](#tab/azure-cli)
-Once your account is set up with CMK and managed identity, you can enable materialized views on the account by enabling "enableMaterializedViews" property in the request body.
+1. Sign in to the Azure CLI.
- ```azurecli-interactive
- az rest --method PATCH --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-07-01-preview --body @body.txt
+ ```azurecli
+ az login
+ ```
+ > [!NOTE]
+ > If you do not have the Azure CLI installed, see [how to install the Azure CLI](/cli/azure/install-azure-cli).
-body.txt content:
-{
- "properties":
- {
- "enableMaterializedViews": true
- }
-}
- ```
+1. Install the [`cosmosdb-preview`](https://github.com/azure/azure-cli-extensions/tree/main/src/cosmosdb-preview) extension.
+ ```azurecli
+ az extension add \
+ --name cosmosdb-preview
+ ```
- Wait for a few minutes and check the completion using the below, the provisioningState in the output should have become Succeeded:
- ```
-az rest --method GET --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-07-01-preview
-```
+1. Create shell variables for `accountName` and `resourceGroupName`.
-Perform another patch to set "CassandraEnableMaterializedViews" capability and wait for it to succeed
+ ```azurecli
+ # Variable for resource group name
+ resourceGroupName="<resource-group-name>"
+
+ # Variable for account name
+ accountName="<account-name>"
+ ```
-```
-az rest --method PATCH --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-07-01-preview --body @body.txt
+1. Enable the preview materialized views feature for the account using [`az cosmosdb update`](/cli/azure/cosmosdb#az-cosmosdb-update).
-body.txt content:
-{
- "properties":
- {
- "capabilities":
-[{"name":"EnableCassandra"},
- {"name":"CassandraEnableMaterializedViews"}]
- }
-}
-```
+ ```azurecli
+ az cosmosdb update \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --enable-materialized-views true \
+ --capabilities CassandraEnableMaterializedViews
+ ```
-### Create materialized view builder
+### [REST API](#tab/rest-api)
-Following this step, you'll also need to provision a Materialized View Builder:
+1. Sign in to the Azure CLI.
-```
-az rest --method PUT --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}/services/materializedViewsBuilder?api-version=2021-07-01-preview --body @body.txt
+ ```azurecli
+ az login
+ ```
-body.txt content:
-{
- "properties":
- {
- "serviceType": "materializedViewsBuilder",
- "instanceCount": 1,
- "instanceSize": "Cosmos.D4s"
- }
-}
-```
+ > [!NOTE]
+ > If you do not have the Azure CLI installed, see [how to install the Azure CLI](/cli/azure/install-azure-cli).
-Wait for a couple of minutes and check the status using the below, the status in the output should have become Running:
+1. Create shell variables for `accountName` and `resourceGroupName`.
-```
-az rest --method GET --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}/services/materializedViewsBuilder?api-version=2021-07-01-preview
-```
+ ```azurecli
+ # Variable for resource group name
+ resourceGroupName="<resource-group-name>"
+
+ # Variable for account name
+ accountName="<account-name>"
+ ```
-## Caveats and current limitations
+1. Create a new JSON file with the capabilities manifest.
-Once your account and Materialized View Builder is set up, you should be able to create Materialized views per the documentation [here](https://cassandra.apache.org/doc/latest/cql/mvs.html) :
+ ```json
+ {
+ "properties": {
+ "capabilities": [
+ {
+ "name": "CassandraEnableMaterializedViews"
+ }
+ ],
+ "enableMaterializedViews": true
+ }
+ }
+ ```
-However, there are a few caveats with Azure Cosmos DB for Apache Cassandra's preview implementation of Materialized Views:
-- Materialized Views can't be created on a table that existed before the account was onboarded to support materialized views. Create new table after account is onboarded on which materialized views can be defined.-- For the MV definition's WHERE clause, only "IS NOT NULL" filters are currently allowed.-- After a Materialized View is created against a base table, ALTER TABLE ADD operations aren't allowed on the base table's schema - they're allowed only if none of the MVs have select * in their definition.
+ > [!NOTE]
+ > In this example, we named the JSON file **capabilities.json**.
-In addition to the above, note the following limitations
+1. Get the unique identifier for your existing account using [`az cosmosdb show`](/cli/azure/cosmosdb#az-cosmosdb-show).
-### Availability zones limitations
+ ```azurecli
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --query id
+ ```
-- Materialized views can't be enabled on an account that has Availability zone enabled regions. -- Adding a new region with Availability zone is not supported once "enableMaterializedViews" is set to true on the account.
+ Store the unique identifier in a shell variable named `$uri`.
-### Periodic backup and restore limitations
+ ```azurecli
+ uri=$(
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --query id \
+ --output tsv
+ )
+ ```
-Materialized views aren't automatically restored with the restore process. Customer needs to re-create the materialized views after the restore process is complete. Customer needs to enableMaterializedViews on their restored account before creating the materialized views and provision the builders for the materialized views to be built.
+1. Enable the preview materialized views feature for the account using the REST API and [`az rest`](/cli/azure/reference-index#az-rest) with an HTTP `PATCH` verb.
-Other limitations similar to **Open Source Apache Cassandra** behavior
+ ```azurecli
+ az rest \
+ --method PATCH \
+ --uri "https://management.azure.com/$uri/?api-version=2021-11-15-preview" \
+ --body @capabilities.json
+ ```
-- Defining Conflict resolution policy on Materialized Views is not allowed.-- Write operations from customer aren't allowed on Materialized views.-- Cross document queries and use of aggregate functions aren't supported on Materialized views.-- Modifying MaterializedViewDefinitionString after MV creation is not supported.-- Deleting base table is not allowed if at least one MV is defined on it. All the MVs must first be deleted and then the base table can be deleted.-- Defining materialized views on containers with Static columns is not allowed+ ## Under the hood
-Azure Cosmos DB for Apache Cassandra uses a MV builder compute layer to maintain Materialized views. Customer gets flexibility to configure the MV builder compute instances depending on the latency and lag requirements to hydrate the views. The compute containers are shared among all MVs within the database account. Each provisioned compute container spawns off multiple tasks that read change feed from base table partitions and write data to MV (which is also another table) after transforming them as per MV definition for every MV in the database account.
-
-## Frequently asked questions (FAQs) …
--
-### What transformations/actions are supported?
--- Specifying a partition key that is different from base table partition key.-- Support for projecting selected subset of columns from base table.-- Determine if row from base table can be part of materialized view based on conditions evaluated on primary key columns of base table row. Filters supported - equalities, inequalities, contains. (Planned for GA)
+The API for Cassandra uses a materialized view builder compute layer to maintain the views.
-### What consistency levels will be supported?
+You get the flexibility to configure the view builder's compute instances based on your latency and lag requirements to hydrate the views. From a technical standpoint, this compute layer helps manage connections between partitions more efficiently, even when the data size is large and the number of partitions is high.
-Data in materialized view is eventually consistent. User might read stale rows when compared to data on base table due to redo of some operations on MVs. This behavior is acceptable since we guarantee only eventual consistency on the MV. Customers can configure (scale up and scale down) the MV builder layer depending on the latency requirement for the view to be consistent with base table.
+The compute containers are shared among all materialized views within an Azure Cosmos DB account. Each provisioned compute container spawns multiple tasks that read the change feed from base table partitions and write data to the target materialized views. The compute container transforms the data per the materialized view definition for each materialized view in the account.
-### Will there be an autoscale layer for the MV builder instances?
+## Create a materialized view builder
-Autoscaling for MV builder is not available right now. The MV builder instances can be manually scaled by modifying the instance count(scale out) or instance size(scale up).
+Create a materialized view builder to automatically transform data and write to a materialized view.
-### Details on the billing model
+### [Azure portal](#tab/azure-portal)
-The proposed billing model will be to charge the customers for:
+1. Sign in to the [Azure portal](https://portal.azure.com/).
-**MV Builder compute nodes** MV Builder Compute - Single tenant layer
+1. Navigate to your API for Cassandra account.
-**Storage** The OLTP storage of the base table and MV based on existing storage meter for Containers. LogStore won't be charged.
+1. In the resource menu, select **Materialized Views Builder**.
-**Request Units** The provisioned RUs for base container and Materialized View.
+1. On the **Materialized Views Builder** page, configure the SKU and number of instances for the builder.
-### What are the different SKUs that will be available?
-Refer to Pricing - [Azure Cosmos DB | Microsoft Azure](https://azure.microsoft.com/pricing/details/cosmos-db/) and check instances under Dedicated Gateway
+ > [!NOTE]
+ > This resource menu option and page will only appear when the Materialized Views feature is enabled for the account.
-### What type of TTL support do we have?
+1. Select **Save**.
-Setting table level TTL on MV is not allowed. TTL from base table rows will be applied on MV as well.
+### [Azure CLI](#tab/azure-cli)
+1. Enable the materialized views builder for the account using [`az cosmosdb service create`](/cli/azure/cosmosdb/service#az-cosmosdb-service-create).
-### Initial troubleshooting if MVs aren't up to date:
-- Check if MV builder instances are provisioned-- Check if enough RUs are provisioned on the base table-- Check for unavailability on Base table or MV
+ ```azurecli
+ az cosmosdb service create \
+ --resource-group $resourceGroupName \
+ --name materialized-views-builder \
+ --account-name $accountName \
+ --count 1 \
+ --kind MaterializedViewsBuilder \
+ --size Cosmos.D4s
+ ```
-### What type of monitoring is available in addition to the existing monitoring for API for Cassandra?
+### [REST API](#tab/rest-api)
-- Max Materialized View Catchup Gap in Minutes - Value(t) indicates rows written to base table in last 't' minutes is yet to be propagated to MV. -Metrics related to RUs consumed on base table for MV build (read change feed cost)-- Metrics related to RUs consumed on MV for MV build (write cost)-- Metrics related to resource consumption on MV builders (CPU, memory usage metrics)
+1. Create a new JSON file with the builder manifest.
+ ```json
+ {
+ "properties": {
+ "serviceType": "materializedViewsBuilder",
+ "instanceCount": 1,
+ "instanceSize": "Cosmos.D4s"
+ }
+ }
+ ```
-### What are the restore options available for MVs?
-MVs can't be restored. Hence, MVs will need to be recreated once the base table is restored.
-
-### Can you create more than one view on a base table?
-
-Multiple views can be created on the same base table. Limit of five views is enforced.
-
-### How is uniqueness enforced on the materialized view? How will the mapping between the records in base table to the records in materialized view look like?
-
-The partition and clustering key of the base table are always part of primary key of any materialized view defined on it and enforce uniqueness of primary key after data repartitioning.
+ > [!NOTE]
+ > In this example, we named the JSON file **builder.json**.
-### Can we add or remove columns on the base table once materialized view is defined?
+1. Enable the materialized views builder for the account using the REST API and `az rest` with an HTTP `PUT` verb.
-You'll be able to add a column to the base table, but you won't be able to remove a column. After a MV is created against a base table, ALTER TABLE ADD operations aren't allowed on the base table - they're allowed only if none of the MVs have select * in their definition. Cassandra doesn't support dropping columns on the base table if it has a materialized view defined on it.
+ ```azurecli
+ az rest \
+ --method PUT \
+ --uri "https://management.azure.com/$uri/services/materializedViewsBuilder?api-version=2021-11-15-preview" \
+ --body @builder.json
+ ```
-### Can we create MV on existing base table?
+1. Wait a couple of minutes and check the status using `az rest` again with the HTTP `GET` verb. The status in the output should now be `Running`:
-No. Materialized Views can't be created on a table that existed before the account was onboarded to support materialized views. You would need to create a new table with materialized views defined and move the existing data using [container copy jobs](../intra-account-container-copy.md). MV on existing table is planned for the future.
+ ```azurecli
+ az rest \
+ --method GET \
+ --uri "https://management.azure.com/$uri/services/materializedViewsBuilder?api-version=2021-11-15-preview"
+ ```
-### What are the conditions on which records won't make it to MV and how to identify such records?
+
-Below are some of the identified cases where data from base table can't be written to MV as they violate some constraints on MV table-
-- Rows that donΓÇÖt satisfy partition key size limit in the materialized views-- Rows that don't satisfy clustering key size limit in materialized views
-
-Currently we drop these rows but plan to expose details related to dropped rows in future so that the user can reconcile the missing data.
+## Create a materialized view
+
+Once your account and materialized view builder are set up, you should be able to create materialized views using CQLSH.
+
+> [!NOTE]
+> If you do not already have the standalone CQLSH tool installed, see [install the CQLSH Tool](support.md#cql-shell). You should also [update your connection string](manage-data-cqlsh.md#update-your-connection-string) in the tool.
+
+Here are a few sample commands to create a materialized view:
+
+1. First, create a **keyspace** named `uprofile`.
+
+ ```sql
+ CREATE KEYSPACE IF NOT EXISTS uprofile WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'datacenter1' : 1 };
+ ```
+
+1. Next, create a table named `user` within the keyspace.
+
+ ```sql
+ CREATE TABLE IF NOT EXISTS uprofile.USER (user_id INT PRIMARY KEY, user_name text, user_bcity text);
+ ```
+
+1. Now, create a materialized view named `user_by_bcity` within the same keyspace. Specify, using a query, how data is projected into the view from the base table.
+
+ ```sql
+ CREATE MATERIALIZED VIEW uprofile.user_by_bcity AS
+ SELECT
+ user_id,
+ user_name,
+ user_bcity
+ FROM
+ uprofile.USER
+ WHERE
+ user_id IS NOT NULL
+ AND user_bcity IS NOT NULL PRIMARY KEY (user_bcity, user_id);
+ ```
+
+1. Insert rows into the base table.
+
+ ```sql
+ INSERT INTO
+ uprofile.USER (user_id, user_name, user_bcity)
+ VALUES
+ (
+ 101, 'johnjoe', 'New York'
+ );
+
+ INSERT INTO
+ uprofile.USER (user_id, user_name, user_bcity)
+ VALUES
+ (
+ 102, 'james', 'New York'
+ );
+ ```
+
+1. Query the materialized view.
+
+ ```sql
+ SELECT * FROM user_by_bcity;
+ ```
+
+1. Observe the output from the materialized view.
+
+ ```output
+ user_bcity | user_id | user_name
+ ------------+---------+-----------
+ New York | 101 | johnjoe
+ New York | 102 | james
+
+ (2 rows)
+ ```
+
+Optionally, you can also use the resource provider to create or update a materialized view.
+
+- [Create or Update a view in API for Cassandra](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/cassandra-resources/create-update-cassandra-view)
+- [Get a view in API for Cassandra](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/cassandra-resources/get-cassandra-view)
+- [List views in API for Cassandra](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/cassandra-resources/list-cassandra-views)
+- [Delete a view in API for Cassandra](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/cassandra-resources/delete-cassandra-view)
+- [Update the throughput of a view in API for Cassandra](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/cassandra-resources/update-cassandra-view-throughput)
+
+## Current limitations
+
+There are a few limitations with the API for Cassandra's preview implementation of materialized views:
+
+- Materialized views can't be created on a table that existed before support for materialized views was enabled on the account. To use materialized views, create a new table after the feature is enabled.
+- For the materialized view definition's `WHERE` clause, only `IS NOT NULL` filters are currently allowed.
+- After a materialized view is created against a base table, `ALTER TABLE ADD` operations aren't allowed on the base table's schema. `ALTER TABLE ADD` is allowed only if none of the materialized views have selected `*` in their definition.
+- There are limits on the partition key size (**2 KB**) and the total clustering key size (**1 KB**). If this size limit is exceeded, the responsible message will end up in the poison message queue.
+- If a base table has user-defined types (UDTs) and the materialized view definition either has `SELECT * FROM` or includes the UDT in one of the projected columns, UDT updates aren't permitted on the account.
+- Materialized views may become inconsistent with the base table for a few rows after automatic regional failover. To avoid this inconsistency, rebuild the materialized view after the failover.
+- Creating materialized view builder instances with **32 cores** isn't supported. If needed, you can create multiple builder instances with a smaller number of cores.
+
+In addition to the above limitations, consider the following extra limitations:
+
+- Availability zones
+ - Materialized views can't be enabled on an account that has availability zone enabled regions.
+ - Adding a new region with an availability zone isn't supported once `enableMaterializedViews` is set to true on the account.
+- Periodic backup and restore
+ - Materialized views aren't automatically restored with the restore process. You'll need to re-create the materialized views after the restore process is complete. Before creating the materialized views and builders again, configure `enableMaterializedViews` on your restored account.
+- Apache Cassandra
+ - Defining conflict resolution policy on materialized views isn't allowed.
+ - Write operations aren't allowed on materialized views.
+ - Cross document queries and use of aggregate functions aren't supported on materialized views.
+ - A materialized view's schema can't be modified after creation.
+ - Deleting the base table isn't allowed if at least one materialized view is defined on it. All the views must first be deleted and then the base table can be deleted.
+ - Defining materialized views on containers with static columns isn't allowed.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Review frequently asked questions (FAQ) about materialized views in API for Cassandra](materialized-views-faq.yml)
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-get-started.md
Title: Get started with Azure Cosmos DB for MongoDB and .NET
+ Title: Get started with Azure Cosmos DB for MongoDB using .NET
description: Get started developing a .NET application that works with Azure Cosmos DB for MongoDB. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for MongoDB database.--++
+ms.devlang: csharp
Last updated 10/17/2022-+
-# Get started with Azure Cosmos DB for MongoDB and .NET Core
+# Get started with Azure Cosmos DB for MongoDB using .NET
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)] This article shows you how to connect to Azure Cosmos DB for MongoDB using .NET Core and the relevant NuGet packages. Once connected, you can perform operations on databases, collections, and documents.
This article shows you how to connect to Azure Cosmos DB for MongoDB using .NET
[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/csharp) | [MongoDB Package (NuGet)](https://www.nuget.org/packages/MongoDB.Driver) - ## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* [.NET 6.0](https://dotnet.microsoft.com/en-us/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-* [Azure Cosmos DB for MongoDB resource](quickstart-dotnet.md#create-an-azure-cosmos-db-account)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- [.NET 6.0](https://dotnet.microsoft.com/download)
+- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+- [Azure Cosmos DB for MongoDB resource](quickstart-dotnet.md#create-an-azure-cosmos-db-account)
## Create a new .NET Core app
-1. Create a new .NET Core application in an empty folder using your preferred terminal. For this scenario you'll use a console application. Use the [``dotnet new``](/dotnet/core/tools/dotnet-new) command to create and name the console app.
+1. Create a new .NET Core application in an empty folder using your preferred terminal. For this scenario, you'll use a console application. Use the [``dotnet new``](/dotnet/core/tools/dotnet-new) command to create and name the console app.
```console dotnet new console -o app
The following guides show you how to use each of these classes to build your app
**Guide**:
-* [Manage databases](how-to-dotnet-manage-databases.md)
-* [Manage collections](how-to-dotnet-manage-collections.md)
-* [Manage documents](how-to-dotnet-manage-documents.md)
-* [Use queries to find documents](how-to-dotnet-manage-queries.md)
+- [Manage databases](how-to-dotnet-manage-databases.md)
+- [Manage collections](how-to-dotnet-manage-collections.md)
+- [Manage documents](how-to-dotnet-manage-documents.md)
+- [Use queries to find documents](how-to-dotnet-manage-queries.md)
## See also
The following guides show you how to use each of these classes to build your app
## Next steps
-Now that you've connected to a API for MongoDB account, use the next guide to create and manage databases.
+Now that you've connected to an API for MongoDB account, use the next guide to create and manage databases.
> [!div class="nextstepaction"] > [Create a database in Azure Cosmos DB for MongoDB using .NET](how-to-dotnet-manage-databases.md)
cosmos-db How To Dotnet Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-collections.md
Title: Create a collection in Azure Cosmos DB for MongoDB using .NET description: Learn how to work with a collection in your Azure Cosmos DB for MongoDB database using the .NET SDK.--++
+ms.devlang: csharp
Last updated 07/22/2022-+ # Manage a collection in Azure Cosmos DB for MongoDB using .NET
In Azure Cosmos DB, a collection is analogous to a table in a relational databas
Here are some quick rules when naming a collection:
-* Keep collection names between 3 and 63 characters long
-* Collection names can only contain lowercase letters, numbers, or the dash (-) character.
-* Container names must start with a lowercase letter or number.
+- Keep collection names between 3 and 63 characters long
+- Collection names can only contain lowercase letters, numbers, or the dash (-) character.
+- Collection names must start with a lowercase letter or number.
## Get collection instance Use an instance of the **Collection** class to access the collection on the server.
-* [MongoClient.Database.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
+- [MongoClient.Database.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
The following code snippets assume you've already created your [client connection](how-to-dotnet-get-started.md#create-mongoclient-with-connection-string).
The following code snippets assume you've already created your [client connectio
To create a collection, insert a document into the collection.
-* [MongoClient.Database.Collection](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection)
-* [MongoClient.Database.Collection.InsertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
-* [MongoClient.Database.Collection.InsertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
+- [MongoClient.Database.Collection](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection)
+- [MongoClient.Database.Collection.InsertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
+- [MongoClient.Database.Collection.InsertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/110-manage-collections/program.cs" id="create_collection"::: ## Drop a collection
-* [MongoClient.Db.dropCollection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropCollection)
+- [MongoClient.Db.dropCollection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropCollection)
Drop the collection from the database to remove it permanently. However, the next insert or update operation that accesses the collection will create a new collection with that name.
Drop the collection from the database to remove it permanently. However, the nex
An index is used by the MongoDB query engine to improve the performance of database queries.
-* [MongoClient.Database.Collection.indexes](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#indexes)
+- [MongoClient.Database.Collection.indexes](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#indexes)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/110-manage-collections/program.cs" id="get_indexes"::: - ## See also - [Get started with Azure Cosmos DB for MongoDB and .NET](how-to-dotnet-get-started.md)
cosmos-db How To Dotnet Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-databases.md
Title: Manage a MongoDB database using .NET description: Learn how to manage your Azure Cosmos DB resource when it provides the API for MongoDB with a .NET SDK.--++
+ms.devlang: csharp
Last updated 07/22/2022-+ # Manage a MongoDB database using .NET
In Azure Cosmos DB, a database is analogous to a namespace. When you create a da
Here are some quick rules when naming a database:
-* Keep database names between 3 and 63 characters long
-* Database names can only contain lowercase letters, numbers, or the dash (-) character.
-* Database names must start with a lowercase letter or number.
+- Keep database names between 3 and 63 characters long
+- Database names can only contain lowercase letters, numbers, or the dash (-) character.
+- Database names must start with a lowercase letter or number.
Once created, the URI for a database is in this format:
Once created, the URI for a database is in this format:
You can use the `MongoClient` to get an instance of a database, or create one if it doesn't exist already. The `MongoDatabase` class provides access to collections and their documents.
-* [MongoClient](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm)
-* [MongoClient.Database](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoDatabase.htm)
+- [MongoClient](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm)
+- [MongoClient.Database](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoDatabase.htm)
-The following code snippet creates a new database by inserting a document into a collection. Remember, the database will not be created until it is needed for this type of operation.
+The following code snippet creates a new database by inserting a document into a collection. Remember, the database won't be created until it's needed for this type of operation.
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/105-manage-databases/program.cs" id="create_database":::
The following code snippet creates a new database by inserting a document into a
You can also retrieve an existing database by name using the `GetDatabase` method to access its collections and documents.
-* [MongoClient.GetDatabase](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_GetDatabase.htm)
+- [MongoClient.GetDatabase](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_GetDatabase.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/105-manage-databases/program.cs" id="get_database":::
You can also retrieve an existing database by name using the `GetDatabase` metho
You can retrieve a list of all the databases on the server using the `MongoClient`.
-* [MongoClient.Database.ListDatabaseNames](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_ListDatabaseNames_3.htm)
+- [MongoClient.Database.ListDatabaseNames](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_ListDatabaseNames_3.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/105-manage-databases/program.cs" id="get_all_databases":::
This technique can then be used to check if a database already exists.
## Drop a database
-A database is removed from the server using the `DropDatabase` method on the DB class.
+A database is removed from the server using the `DropDatabase` method on the `MongoClient` class.
-* [MongoClient.DropDatabase](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_DropDatabase_1.htm)
+- [MongoClient.DropDatabase](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_MongoClient_DropDatabase_1.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/105-manage-databases/program.cs" id="drop_database":::
cosmos-db How To Dotnet Manage Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-documents.md
Title: Create a document in Azure Cosmos DB for MongoDB using .NET description: Learn how to work with a document in your Azure Cosmos DB for MongoDB database using the .NET SDK.--++
+ms.devlang: csharp
Last updated 07/22/2022-+ # Manage a document in Azure Cosmos DB for MongoDB using .NET
Manage your MongoDB documents with the ability to insert, update, and delete doc
Insert one or many documents, defined with a JSON schema, into your collection.
-* [MongoClient.Database.Collection.InsertOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertOne_1.htm)
-* [MongoClient.Database.Collection.InsertMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertMany_1.htm)
+- [MongoClient.Database.Collection.InsertOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertOne_1.htm)
+- [MongoClient.Database.Collection.InsertMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertMany_1.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/115-manage-documents/program.cs" id="insert_document"::: ## Update a document
-To update a document, specify the query filter used to find the document along with a set of properties of the document that should be updated.
+To update a document, specify the query filter used to find the document along with a set of properties of the document that should be updated.
-* [MongoClient.Database.Collection.UpdateOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_UpdateOne_1.htm)
-* [MongoClient.Database.Collection.UpdateMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_UpdateMany_1.htm)
+- [MongoClient.Database.Collection.UpdateOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_UpdateOne_1.htm)
+- [MongoClient.Database.Collection.UpdateMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_UpdateMany_1.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/115-manage-documents/program.cs" id="update_document"::: ## Bulk updates to a collection
-You can perform several different types of operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Azure Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
+You can perform several different types of operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Azure Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
The following bulk operations are available:
-* [MongoClient.Database.Collection.BulkWrite](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_BulkWrite_1.htm)
+- [MongoClient.Database.Collection.BulkWrite](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_BulkWrite_1.htm)
- * insertOne
- * updateOne
- * updateMany
- * deleteOne
- * deleteMany
+ - insertOne
+
+ - updateOne
+
+ - updateMany
+
+ - deleteOne
+
+ - deleteMany
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/115-manage-documents/program.cs" id="bulk_write"::: ## Delete a document
-To delete documents, use a query to define how the documents are found.
+To delete documents, use a query to define how the documents are found.
-* [MongoClient.Database.Collection.DeleteOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_DeleteOne_1.htm)
-* [MongoClient.Database.Collection.DeleteMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_DeleteMany_1.htm)
+- [MongoClient.Database.Collection.DeleteOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_DeleteOne_1.htm)
+- [MongoClient.Database.Collection.DeleteMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_DeleteMany_1.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/115-manage-documents/program.cs" id="delete_document":::
cosmos-db How To Dotnet Manage Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-queries.md
Title: Query documents in Azure Cosmos DB for MongoDB using .NET description: Learn how to query documents in your Azure Cosmos DB for MongoDB database using the .NET SDK.--++
+ms.devlang: csharp
Last updated 07/22/2022-+ # Query documents in Azure Cosmos DB for MongoDB using .NET
Use queries to find documents in a collection.
## Query for documents
-To find documents, use a query filter on the collection to define how the documents are found.
+To find documents, use a query filter on the collection to define how the documents are found.
-* [MongoClient.Database.Collection.Find](https://www.mongodb.com/docs/manual/reference/method/db.collection.find/)
-* [FilterDefinition](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_FilterDefinition_1.htm)
-* [FilterDefinitionBuilder](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_FilterDefinitionBuilder_1.htm)
+- [MongoClient.Database.Collection.Find](https://www.mongodb.com/docs/manual/reference/method/db.collection.find/)
+- [FilterDefinition](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_FilterDefinition_1.htm)
+- [FilterDefinitionBuilder](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_FilterDefinitionBuilder_1.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/125-manage-queries/program.cs" id="query_documents":::
cosmos-db How To Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-get-started.md
Title: Get started with Azure Cosmos DB for MongoDB and JavaScript
+ Title: Get started with Azure Cosmos DB for MongoDB using JavaScript
description: Get started developing a JavaScript application that works with Azure Cosmos DB for MongoDB. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for MongoDB database.
ms.devlang: javascript Last updated 06/23/2022-+
-# Get started with Azure Cosmos DB for MongoDB and JavaScript
+# Get started with Azure Cosmos DB for MongoDB using JavaScript
+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)] This article shows you how to connect to Azure Cosmos DB for MongoDB using the native MongoDB npm package. Once connected, you can perform operations on databases, collections, and docs.
This article shows you how to connect to Azure Cosmos DB for MongoDB using the n
[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb) - ## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* [Node.js LTS](https://nodejs.org/en/download/)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-* [Azure Cosmos DB for MongoDB resource](quickstart-nodejs.md#create-an-azure-cosmos-db-account)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- [Node.js LTS](https://nodejs.org/en/download/)
+- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+- [Azure Cosmos DB for MongoDB resource](quickstart-nodejs.md#create-an-azure-cosmos-db-account)
## Create a new JavaScript app
-1. Create a new JavaScript application in an empty folder using your preferred terminal. Use the [``npm init``](https://docs.npmjs.com/cli/v8/commands/npm-init) command to begin the prompts to create the `package.json` file. Accept the defaults for the prompts.
+1. Create a new JavaScript application in an empty folder using your preferred terminal. Use the [``npm init``](https://docs.npmjs.com/cli/v8/commands/npm-init) command to begin the prompts to create the `package.json` file. Accept the defaults for the prompts.
```console npm init ```
-2. Add the [MongoDB](https://www.npmjs.com/package/mongodb) npm package to the JavaScript project. Use the [``npm install package``](https://docs.npmjs.com/cli/v8/commands/npm-install) command specifying the name of the npm package. The `dotenv` package is used to read the environment variables from a `.env` file during local development.
+1. Add the [MongoDB](https://www.npmjs.com/package/mongodb) npm package to the JavaScript project. Use the [``npm install package``](https://docs.npmjs.com/cli/v8/commands/npm-install) command specifying the name of the npm package. The `dotenv` package is used to read the environment variables from a `.env` file during local development.
```console npm install mongodb dotenv ```
-3. To run the app, use a terminal to navigate to the application directory and run the application.
+1. To run the app, use a terminal to navigate to the application directory and run the application.
```console node index.js
This article shows you how to connect to Azure Cosmos DB for MongoDB using the n
## Connect with MongoDB native driver to Azure Cosmos DB for MongoDB
-To connect with the MongoDB native driver to Azure Cosmos DB, create an instance of the [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#connect) class. This class is the starting point to perform all operations against databases.
+To connect with the MongoDB native driver to Azure Cosmos DB, create an instance of the [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#connect) class. This class is the starting point to perform all operations against databases.
The most common constructor for **MongoClient** has two parameters:
Skip this step and use the information for the portal in the next step.
## Create MongoClient with connection string -
-1. Add dependencies to reference the MongoDB and DotEnv npm packages.
+1. Add dependencies to reference the MongoDB and DotEnv npm packages.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/101-client-connection-string/index.js" id="package_dependencies":::
-2. Define a new instance of the ``MongoClient`` class using the constructor, and [``process.env.``](https://nodejs.org/dist/latest-v8.x/docs/api/process.html#process_process_env) to use the connection string.
+1. Define a new instance of the ``MongoClient`` class using the constructor, and [``process.env.``](https://nodejs.org/dist/latest-v8.x/docs/api/process.html#process_process_env) to use the connection string.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/101-client-connection-string/index.js" id="client_credentials":::
For more information on different ways to create a ``MongoClient`` instance, see
## Close the MongoClient connection
-When your application is finished with the connection remember to close it. That `.close()` call should be after all database calls are made.
+When your application is finished with the connection, remember to close it. The `.close()` call should be after all database calls are made.
```javascript client.close()
The following guides show you how to use each of these classes to build your app
**Guide**:
-* [Manage databases](how-to-javascript-manage-databases.md)
-* [Manage collections](how-to-javascript-manage-collections.md)
-* [Manage documents](how-to-javascript-manage-documents.md)
-* [Use queries to find documents](how-to-javascript-manage-queries.md)
+- [Manage databases](how-to-javascript-manage-databases.md)
+- [Manage collections](how-to-javascript-manage-collections.md)
+- [Manage documents](how-to-javascript-manage-documents.md)
+- [Use queries to find documents](how-to-javascript-manage-queries.md)
## See also
The following guides show you how to use each of these classes to build your app
## Next steps
-Now that you've connected to a API for MongoDB account, use the next guide to create and manage databases.
+Now that you've connected to an API for MongoDB account, use the next guide to create and manage databases.
> [!div class="nextstepaction"] > [Create a database in Azure Cosmos DB for MongoDB using JavaScript](how-to-javascript-manage-databases.md)
cosmos-db How To Javascript Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-collections.md
ms.devlang: javascript Last updated 06/23/2022-+ # Manage a collection in Azure Cosmos DB for MongoDB using JavaScript
Manage your MongoDB collection stored in Azure Cosmos DB with the native MongoDB
[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb) - ## Name a collection In Azure Cosmos DB, a collection is analogous to a table in a relational database. When you create a collection, the collection name forms a segment of the URI used to access the collection resource and any child docs. Here are some quick rules when naming a collection:
-* Keep collection names between 3 and 63 characters long
-* Collection names can only contain lowercase letters, numbers, or the dash (-) character.
-* Container names must start with a lowercase letter or number.
+- Keep collection names between 3 and 63 characters long
+- Collection names can only contain lowercase letters, numbers, or the dash (-) character.
+- Collection names must start with a lowercase letter or number.
## Get collection instance Use an instance of the **Collection** class to access the collection on the server.
-* [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
+- [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
The following code snippets assume you've already created your [client connection](how-to-javascript-get-started.md#create-mongoclient-with-connection-string) and that you [close your client connection](how-to-javascript-get-started.md#close-the-mongoclient-connection) after these code snippets.
The following code snippets assume you've already created your [client connectio
To create a collection, insert a document into the collection.
-* [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection)
-* [MongoClient.Db.Collection.insertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
-* [MongoClient.Db.Collection.insertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
+- [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection)
+- [MongoClient.Db.Collection.insertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
+- [MongoClient.Db.Collection.insertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/203-insert-doc/index.js" id="database_object"::: ## Drop a collection
-* [MongoClient.Db.dropCollection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropCollection)
+- [MongoClient.Db.dropCollection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropCollection)
Drop the collection from the database to remove it permanently. However, the next insert or update operation that accesses the collection will create a new collection with that name.
The preceding code snippet displays the following example console output:
An index is used by the MongoDB query engine to improve the performance of database queries.
-* [MongoClient.Db.Collection.indexes](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#indexes)
+- [MongoClient.Db.Collection.indexes](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#indexes)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/225-get-collection-indexes/index.js" id="collection":::
cosmos-db How To Javascript Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-databases.md
ms.devlang: javascript Last updated 06/23/2022-+ # Manage a MongoDB database using JavaScript
Your MongoDB server in Azure Cosmos DB is available from the common npm packages for MongoDB such as:
-* [MongoDB](https://www.npmjs.com/package/mongodb)
+- [MongoDB](https://www.npmjs.com/package/mongodb)
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
In Azure Cosmos DB, a database is analogous to a namespace. When you create a da
Here are some quick rules when naming a database:
-* Keep database names between 3 and 63 characters long
-* Database names can only contain lowercase letters, numbers, or the dash (-) character.
-* Database names must start with a lowercase letter or number.
+- Keep database names between 3 and 63 characters long
+- Database names can only contain lowercase letters, numbers, or the dash (-) character.
+- Database names must start with a lowercase letter or number.
Once created, the URI for a database is in this format:
Once created, the URI for a database is in this format:
## Get database instance
-The database holds the collections and their documents. Use an instance of the **Db** class to access the databases on the server.
+The database holds the collections and their documents. Use an instance of the `Db` class to access the databases on the server.
-* [MongoClient.Db](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
+- [MongoClient.Db](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
The following code snippets assume you've already created your [client connection](how-to-javascript-get-started.md#create-mongoclient-with-connection-string) and that you [close your client connection](how-to-javascript-get-started.md#close-the-mongoclient-connection) after these code snippets.
The following code snippets assume you've already created your [client connectio
Access the **Admin** class to retrieve server information. You don't need to specify the database name in the `db` method. The information returned is specific to MongoDB and doesn't represent the Azure Cosmos DB platform itself.
-* [MongoClient.Db.Admin](https://mongodb.github.io/node-mongodb-native/4.7/classes/Admin.html)
+- [MongoClient.Db.Admin](https://mongodb.github.io/node-mongodb-native/4.7/classes/Admin.html)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/200-admin/index.js" id="server_info":::
The preceding code snippet displays the following example console output:
The native MongoDB driver for JavaScript creates the database if it doesn't exist when you access it. If you would prefer to know if the database already exists before using it, get the list of current databases and filter for the name:
-* [MongoClient.Db.Admin.listDatabases](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
+- [MongoClient.Db.Admin.listDatabases](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/201-does-database-exist/index.js" id="does_database_exist":::
The preceding code snippet displays the following example console output:
When you manage your MongoDB server programmatically, it's helpful to know what databases and collections are on the server and how many documents are in each collection.
-* [MongoClient.Db.Admin.listDatabases](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
-* [MongoClient.Db.listCollections](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#listCollections)
-* [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
-* [MongoClient.Db.Collection.countDocuments](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#countDocuments)
+- [MongoClient.Db.Admin.listDatabases](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
+- [MongoClient.Db.listCollections](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#listCollections)
+- [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
+- [MongoClient.Db.Collection.countDocuments](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#countDocuments)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/202-get-doc-count/index.js" id="database_object":::
The preceding code snippet displays the following example console output:
To get a database object instance, call the following method. This method accepts an optional database name and can be part of a chain.
-* [``MongoClient.Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html)
+- [``MongoClient.Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html)
-A database is created when it is accessed. The most common way to access a new database is to add a document to a collection. In one line of code using chained objects, the database, collection, and doc are created.
+A database is created when it's accessed. The most common way to access a new database is to add a document to a collection. In one line of code using chained objects, the database, collection, and doc are created.
```javascript const insertOneResult = await client.db("adventureworks").collection("products").insertOne(doc);
Learn more about working with [collections](how-to-javascript-manage-collections
## Drop a database
-A database is removed from the server using the dropDatabase method on the DB class.
+A database is removed from the server using the dropDatabase method on the DB class.
-* [DB.dropDatabase](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropDatabase)
+- [DB.dropDatabase](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropDatabase)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/300-drop-database/index.js" id="drop_database":::
cosmos-db How To Javascript Manage Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-documents.md
ms.devlang: javascript Last updated 06/23/2022-+ # Manage a document in Azure Cosmos DB for MongoDB using JavaScript
Manage your MongoDB documents with the ability to insert, update, and delete doc
Insert a document, defined with a JSON schema, into your collection.
-* [MongoClient.Db.Collection.insertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
-* [MongoClient.Db.Collection.insertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
+- [MongoClient.Db.Collection.insertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
+- [MongoClient.Db.Collection.insertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/203-insert-doc/index.js" id="database_object":::
The preceding code snippet displays the following example console output:
If you don't provide an ID, `_id`, for your document, one is created for you as a BSON object. The value of the provided ID is accessed with the ObjectId method.
-* [ObjectId](https://mongodb.github.io/node-mongodb-native/4.7/classes/ObjectId.html)
+- [ObjectId](https://mongodb.github.io/node-mongodb-native/4.7/classes/ObjectId.html)
Use the ID to query for documents:
const query = { _id: ObjectId("62b1f43a9446918500c875c5")};
## Update a document
-To update a document, specify the query used to find the document along with a set of properties of the document that should be updated. You can choose to upsert the document, which inserts the document if it doesn't already exist.
-
-* [MongoClient.Db.Collection.updateOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#updateOne)
-* [MongoClient.Db.Collection.updateMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#updateMany)
+To update a document, specify the query used to find the document along with a set of properties of the document that should be updated. You can choose to upsert the document, which inserts the document if it doesn't already exist.
+- [MongoClient.Db.Collection.updateOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#updateOne)
+- [MongoClient.Db.Collection.updateMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#updateMany)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/250-upsert-doc/index.js" id="upsert":::
The preceding code snippet displays the following example console output for an
## Bulk updates to a collection
-You can perform several operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Azure Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
+You can perform several operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Azure Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
The following bulk operations are available:
-* [MongoClient.Db.Collection.bulkWrite](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#bulkWrite)
+- [MongoClient.Db.Collection.bulkWrite](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#bulkWrite)
+
+ - insertOne
+
+ - updateOne
+
+ - updateMany
+
+ - deleteOne
- * insertOne
- * updateOne
- * updateMany
- * deleteOne
- * deleteMany
+ - deleteMany
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/251-bulk_write/index.js" id="bulk_write":::
The preceding code snippet displays the following example console output:
## Delete a document
-To delete documents, use a query to define how the documents are found.
+To delete documents, use a query to define how the documents are found.
-* [MongoClient.Db.Collection.deleteOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#deleteOne)
-* [MongoClient.Db.Collection.deleteMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#deleteMany)
+- [MongoClient.Db.Collection.deleteOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#deleteOne)
+- [MongoClient.Db.Collection.deleteMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#deleteMany)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/290-delete-doc/index.js" id="delete":::
cosmos-db How To Javascript Manage Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-queries.md
ms.devlang: javascript Last updated 07/29/2022-+ # Query data in Azure Cosmos DB for MongoDB using JavaScript
Use [queries](#query-for-documents) and [aggregation pipelines](#aggregation-pip
[API for MongoDB reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb) - ## Query for documents
-To find documents, use a query to define how the documents are found.
+To find documents, use a query to define how the documents are found.
-* [MongoClient.Db.Collection.findOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#findOne)
-* [MongoClient.Db.Collection.find](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#find)
-* [FindCursor](https://mongodb.github.io/node-mongodb-native/4.7/classes/FindCursor.html)
+- [MongoClient.Db.Collection.findOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#findOne)
+- [MongoClient.Db.Collection.find](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#find)
+- [FindCursor](https://mongodb.github.io/node-mongodb-native/4.7/classes/FindCursor.html)
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/275-find/index.js" id="read_doc":::
The preceding code snippet displays the following example console output:
## Aggregation pipelines
-Aggregation pipelines are useful to isolate expensive query computation, transformations, and other processing on your Azure Cosmos DB server, instead of performing these operations on the client.
+Aggregation pipelines are useful to isolate expensive query computation, transformations, and other processing on your Azure Cosmos DB server, instead of performing these operations on the client.
-For specific **aggregation pipeline support**, refer to the following:
+For specific **aggregation pipeline support**, refer to the following:
-* [Version 4.2](feature-support-42.md#aggregation-pipeline)
-* [Version 4.0](feature-support-40.md#aggregation-pipeline)
-* [Version 3.6](feature-support-36.md#aggregation-pipeline)
-* [Version 3.2](feature-support-32.md#aggregation-pipeline)
+- [Version 4.2](feature-support-42.md#aggregation-pipeline)
+- [Version 4.0](feature-support-40.md#aggregation-pipeline)
+- [Version 3.6](feature-support-36.md#aggregation-pipeline)
+- [Version 3.2](feature-support-32.md#aggregation-pipeline)
### Aggregation pipeline syntax
-A pipeline is an array with a series of stages as JSON objects.
+A pipeline is an array with a series of stages as JSON objects.
```javascript const pipeline = [
const pipeline = [
A _stage_ defines the operation and the data it's applied to, such as:
-* $match - find documents
-* $addFields - add field to cursor, usually from previous stage
-* $limit - limit the number of results returned in cursor
-* $project - pass along new or existing fields, can be computed fields
-* $group - group results by a field or fields in pipeline
-* $sort - sort results
+- $match - find documents
+- $addFields - add field to cursor, usually from previous stage
+- $limit - limit the number of results returned in cursor
+- $project - pass along new or existing fields, can be computed fields
+- $group - group results by a field or fields in pipeline
+- $sort - sort results
```javascript // reduce collection to relative documents
const sortStage = {
### Aggregate the pipeline to get iterable cursor
-The pipeline is aggregated to produce an iterable cursor.
+The pipeline is aggregated to produce an iterable cursor.
```javascript const db = 'adventureworks';
await aggCursor.forEach(product => {
## Use an aggregation pipeline in JavaScript
-Use a pipeline to keep data processing on the server before returning to the client.
+Use a pipeline to keep data processing on the server before returning to the client.
-### Example product data
+### Example product data
The aggregations below use the [sample products collection](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/252-insert-many/products.json) with data in the shape of:
The aggregations below use the [sample products collection](https://github.com/A
### Example 1: Product subcategories, count of products, and average price
-Use the following [sample code](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/280-aggregation/average-price-in-each-product-subcategory.js) to report on average price in each product subcategory.
+Use the following [sample code](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/280-aggregation/average-price-in-each-product-subcategory.js) to report on average price in each product subcategory.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/280-aggregation/average-price-in-each-product-subcategory.js" id="aggregation_1" highlight="26, 43, 53, 56, 66"::: - ### Example 2: Bike types with price range
-Use the following [sample code](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/280-aggregation/bike-types-and-price-ranges.js) to report on the `Bikes` subcategory.
+Use the following [sample code](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/280-aggregation/bike-types-and-price-ranges.js) to report on the `Bikes` subcategory.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/280-aggregation/bike-types-and-price-ranges.js" id="aggregation_1" highlight="23, 30, 38, 45, 68, 80, 85, 98"::: -- ## See also - [Get started with Azure Cosmos DB for MongoDB and JavaScript](how-to-javascript-get-started.md)
cosmos-db How To Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-python-get-started.md
+
+ Title: Get started with Azure Cosmos DB for MongoDB and Python
+description: Get started developing a Python application that works with Azure Cosmos DB for MongoDB. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for MongoDB database.
+++++
+ms.devlang: python
+ Last updated : 11/16/2022+++
+# Get started with Azure Cosmos DB for MongoDB and Python
+
+This article shows you how to connect to Azure Cosmos DB for MongoDB using the PyMongo driver package. Once connected, you can perform operations on databases, collections, and docs.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) are available on GitHub as a Python project.
+
+This article shows you how to communicate with Azure Cosmos DB's API for MongoDB by using one of the open-source MongoDB client drivers for Python, [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/).
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* [Python 3.8+](https://www.python.org/downloads/)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+* [Azure Cosmos DB for MongoDB resource](quickstart-python.md#create-an-azure-cosmos-db-account)
+
+## Create a new Python app
+
+1. Create a new empty folder using your preferred terminal and change directory to the folder.
+
+ > [!NOTE]
+ > If you just want the finished code, download or fork and clone the [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) repo that has the full example. You can also `git clone` the repo in Azure Cloud Shell to walk through the steps shown in this quickstart.
+
+2. Create a *requirements.txt* file that lists the [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/) and [python-dotenv](https://pypi.org/project/python-dotenv/) packages. The `dotenv` package is used to read the environment variables from a `.env` file during local development.
+
+ ```text
+ # requirements.txt
+ pymongo
+ python-dotenv
+ ```
+
+3. Create a virtual environment and install the packages.
+
+ #### [Windows](#tab/venv-windows)
+
+ ```bash
+ # py -3 uses the global python interpreter. You can also use python3 -m venv .venv.
+ py -3 -m venv .venv
+ source .venv/Scripts/activate
+ pip install -r requirements.txt
+ ```
+
+ #### [Linux / macOS](#tab/venv-linux+macos)
+
+ ```bash
+ python3 -m venv .venv
+ source .venv/bin/activate
+ pip install -r requirements.txt
+ ```
+
+
+
+## Connect with PyMongo driver to Azure Cosmos DB for MongoDB
+
+To connect with the PyMongo driver to Azure Cosmos DB, create an instance of the [MongoClient](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient) object. This class is the starting point to perform all operations against databases.
+
+The most common constructor for **MongoClient** requires just the `host` parameter, which in this article is set to the `COSMOS_CONNECTION_STRING` environment variable. The constructor also accepts optional positional and keyword parameters. Many of these options can also be specified in the connection string passed as `host`; if the same option is set in both places, the keyword parameter takes precedence.
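+
+As a minimal sketch (assuming your connection string is stored in a `COSMOS_CONNECTION_STRING` environment variable, as described in the configuration step below), client creation might look like this:
+
+```python
+import os
+from pymongo import MongoClient
+
+# The connection string doubles as the `host` parameter; keyword options
+# passed here would override the same options set in the connection string.
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])
+```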
+
+Refer to the [Troubleshooting guide](error-codes-solutions.md) for connection issues.
+
+## Get resource name
+
+In the commands below, we show *msdocs-cosmos* as the resource group name. Change the name as appropriate for your situation.
+
+### [Azure CLI](#tab/azure-cli)
++
+### [PowerShell](#tab/azure-powershell)
++
+### [Portal](#tab/azure-portal)
+
+Skip this step and use the information for the portal in the next step.
++
+## Retrieve your connection string
+
+### [Azure CLI](#tab/azure-cli)
++
+### [PowerShell](#tab/azure-powershell)
++
+### [Portal](#tab/azure-portal)
++++
+## Configure environment variables
++
+## Create MongoClient with connection string
+
+1. Add dependencies to reference the [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/) and [python-dotenv](https://pypi.org/project/python-dotenv/) packages.
+
+ :::code language="python" source="~/azure-cosmos-db-mongodb-python-getting-started/101-client-connection-string/run.py" id="package_dependencies":::
+
+2. Define a new instance of the `MongoClient` class using the constructor and the connection string read from an environment variable.
+
+ :::code language="python" source="~/azure-cosmos-db-mongodb-python-getting-started/101-client-connection-string/run.py" id="client_credentials":::
+
+For more information on different ways to create a ``MongoClient`` instance, see [Making a Connection with MongoClient](https://pymongo.readthedocs.io/en/stable/tutorial.html#making-a-connection-with-mongoclient).
+
+## Close the MongoClient connection
+
+When your application is finished with the connection, remember to close it. That `.close()` call should be after all database calls are made.
+
+```python
+client.close()
+```
+
+## Use MongoDB client classes with Azure Cosmos DB for API for MongoDB
+
+Let's look at the hierarchy of resources in the API for MongoDB and the object model that's used to create and access these resources. The API for MongoDB creates resources in the following order:
+
+* [MongoClient](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html) - The first step when working with PyMongo is to create a MongoClient to connect to Azure Cosmos DB's API for MongoDB. The client object is used to configure and execute requests against the service.
+
+* [Database](https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html) - Azure Cosmos DB's API for MongoDB can support one or more independent databases.
+
+* [Collection](https://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html) - A database can contain one or more collections. A collection is a group of documents stored in MongoDB, and can be thought of as roughly the equivalent of a table in a relational database.
+
+* [Document](https://pymongo.readthedocs.io/en/stable/tutorial.html#documents) - A document is a set of key-value pairs. Documents have a dynamic schema, which means that documents in the same collection don't need to have the same set of fields or structure, and common fields in a collection's documents may hold different types of data.
+
+To learn more about the hierarchy of entities, see the [Azure Cosmos DB resource model](../account-databases-containers-items.md) article.
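+
+To make the hierarchy described above concrete, here's a minimal, hypothetical sketch that touches each level (the `adventureworks` database and `products` collection names are only examples):
+
+```python
+import os
+from pymongo import MongoClient
+
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])  # MongoClient
+database = client["adventureworks"]                           # Database
+collection = database["products"]                             # Collection
+document = collection.find_one({})                            # Document (a dict), or None if empty
+print(document)
+```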
+
+## See also
+
+- [PyPI Package](https://pypi.org/project/pymongo/)
+- [API reference](https://www.mongodb.com/docs/drivers/python/)
+
+## Next steps
+
+Now that you've connected to an API for MongoDB account, use the next guide to create and manage databases.
+
+> [!div class="nextstepaction"]
+> [Create a database in Azure Cosmos DB for MongoDB using Python](how-to-python-manage-databases.md)
cosmos-db How To Python Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-python-manage-databases.md
+
+ Title: Manage a MongoDB database using Python
+description: Learn how to manage your Azure Cosmos DB resource when it provides the API for MongoDB with a Python SDK.
++++
+ms.devlang: python
+ Last updated : 11/15/2022+++
+# Manage a MongoDB database using Python
++
+Your MongoDB server in Azure Cosmos DB is available from the common Python packages for MongoDB such as:
+
+* [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/) for synchronous Python applications, which is used in this article.
+* [Motor](https://www.mongodb.com/docs/drivers/motor/) for asynchronous Python applications.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-mongodb-python-getting-started) are available on GitHub as a Python project.
+
+## Name a database
+
+In Azure Cosmos DB, a database is analogous to a namespace. When you create a database, the database name forms a segment of the URI used to access the database resource and any child resources.
+
+Here are some quick rules when naming a database:
+
+* Keep database names between 3 and 63 characters long
+* Database names can only contain lowercase letters, numbers, or the dash (-) character.
+* Database names must start with a lowercase letter or number.
+
+Once created, the URI for a database is in this format:
+
+`https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>`
+
+## Get database instance
+
+The database holds the collections and their documents. To access a database, use attribute style access or dictionary style access of the MongoClient. For more information, see [Getting a Database](https://pymongo.readthedocs.io/en/stable/tutorial.html#getting-a-database).
+
+The following code snippets assume you've already created your [client connection](how-to-python-get-started.md#create-mongoclient-with-connection-string) and that you [close your client connection](how-to-python-get-started.md#close-the-mongoclient-connection) after these code snippets.
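+
+For example, both of the following lines refer to the same database. This is only an illustrative sketch; `adventureworks` is a placeholder name:
+
+```python
+import os
+from pymongo import MongoClient
+
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])
+
+db = client.adventureworks       # attribute style access
+db = client["adventureworks"]    # dictionary style access (works for any database name)
+```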
+
+## Get server information
+
+Access server info with the [server_info](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.server_info) method of the MongoClient class. You don't need to specify the database name to get this information. The information returned is specific to MongoDB and doesn't represent the Azure Cosmos DB platform itself.
+
+You can also list databases using the [MongoClient.list_database_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.list_database_names) method and issue a [MongoDB command](https://www.mongodb.com/docs/manual/reference/command/nav-diagnostic/) to a database with the [MongoClient.db.command](https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html#pymongo.database.Database.command) method.
++
+The preceding code snippet displays output similar to the following example console output:
++
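+
+The following is a rough sketch of what reading the server information and listing databases can look like; the `dbStats` diagnostic command and the `adventureworks` database name are illustrative:
+
+```python
+import os
+from pymongo import MongoClient
+
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])
+
+info = client.server_info()                  # MongoDB server/build information
+print("Server version:", info.get("version"))
+
+for name in client.list_database_names():    # databases on the account
+    print(name)
+
+stats = client["adventureworks"].command("dbStats")  # run a diagnostic command against a database
+print(stats)
+```
+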
+## Does database exist?
+
+The PyMongo driver for Python creates a database if it doesn't exist when you access it. However, we recommend that you instead use the [MongoDB extension commands](/azure/cosmos-db/mongodb/custom-commands) to manage data stored in Azure Cosmos DB's API for MongoDB. To create a new database if it doesn't exist, use the [create database extension](/azure/cosmos-db/mongodb/custom-commands#create-database) as shown in the following code snippet.
+
+To see if the database already exists before using it, get the list of current databases with the [list_database_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.list_database_names) method.
++
+The preceding code snippet displays output similar to the following example console output:
++
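+
+A minimal existence check, with `adventureworks` as a placeholder database name, could look like this:
+
+```python
+import os
+from pymongo import MongoClient
+
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])
+
+DB_NAME = "adventureworks"  # placeholder name
+if DB_NAME in client.list_database_names():
+    print(f"Database '{DB_NAME}' already exists")
+else:
+    print(f"Database '{DB_NAME}' doesn't exist yet")
+```
+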
+## Get list of databases, collections, and document count
+
+When you manage your MongoDB server programmatically, it's helpful to know what databases and collections are on the server and how many documents are in each collection. For more information, see:
+
+* [Getting a database](https://pymongo.readthedocs.io/en/stable/tutorial.html#getting-a-database)
+* [Getting a collection](https://pymongo.readthedocs.io/en/stable/tutorial.html#getting-a-collection)
+* [Counting documents](https://pymongo.readthedocs.io/en/stable/tutorial.html#counting)
++
+The preceding code snippet displays output similar to the following example console output:
++
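+
+A sketch of that inventory loop, assuming the connection string is available in the environment, might look like this:
+
+```python
+import os
+from pymongo import MongoClient
+
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])
+
+for db_name in client.list_database_names():
+    db = client[db_name]
+    for coll_name in db.list_collection_names():
+        count = db[coll_name].count_documents({})  # count all documents in the collection
+        print(f"{db_name}.{coll_name}: {count} documents")
+```
+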
+## Get database object instance
+
+If a database doesn't exist, the PyMongo driver for Python creates it when you access it. However, we recommend that you instead use the [MongoDB extension commands](/azure/cosmos-db/mongodb/custom-commands) to manage data stored in Azure Cosmos DB's API for MongoDB. The pattern is shown above in the section [Does database exist?](#does-database-exist).
+
+When working with PyMongo, you access databases using attribute style or dictionary style access on MongoClient instances. Once you have a database instance, you can use database-level operations as shown below.
+
+```python
+collections = client[db].list_collection_names()
+```
+
+For an overview of working with databases using the PyMongo driver, see [Database level operations](https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html#pymongo.database.Database).
++
+## Drop a database
+
+A database is removed from the server using the [drop_database](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.drop_database) method of the MongoClient.
++
+The preceding code snippet displays output similar to the following example console output:
++
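+
+As a sketch (again with `adventureworks` as a placeholder name), dropping a database looks like this:
+
+```python
+import os
+from pymongo import MongoClient
+
+client = MongoClient(os.environ["COSMOS_CONNECTION_STRING"])
+
+# Permanently removes the database and all of its collections.
+client.drop_database("adventureworks")
+```
+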
+## See also
+
+- [Get started with Azure Cosmos DB for MongoDB and Python](how-to-python-get-started.md)
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-dotnet.md
Title: Quickstart - Azure Cosmos DB for MongoDB for .NET with MongoDB driver description: Learn how to build a .NET app to manage Azure Cosmos DB for MongoDB account resources in this quickstart.---++
+ms.devlang: csharp
Last updated 07/06/2022-+ # Quickstart: Azure Cosmos DB for MongoDB for .NET with the MongoDB driver
Get started with MongoDB to create databases, collections, and docs within your
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* [.NET 6.0](https://dotnet.microsoft.com/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- [.NET 6.0](https://dotnet.microsoft.com/download)
+- [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
### Prerequisite check
-* In a terminal or command window, run ``dotnet --list-sdks`` to check that .NET 6.x is one of the available versions.
-* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+- In a terminal or command window, run ``dotnet --list-sdks`` to check that .NET 6.x is one of the available versions.
+- Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
## Setting up
Before you start building the application, let's look into the hierarchy of reso
You'll use the following MongoDB classes to interact with these resources:
-* [``MongoClient``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm) - This class provides a client-side logical representation for the API for MongoDB layer on Azure Cosmos DB. The client object is used to configure and execute requests against the service.
-* [``MongoDatabase``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoDatabase.htm) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
-* [``Collection``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoCollection.htm) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.
+- [``MongoClient``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoClient.htm) - This class provides a client-side logical representation for the API for MongoDB layer on Azure Cosmos DB. The client object is used to configure and execute requests against the service.
+- [``MongoDatabase``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoDatabase.htm) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+- [``Collection``](https://mongodb.github.io/mongo-csharp-driver/2.16/apidocs/html/T_MongoDB_Driver_MongoCollection.htm) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.
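The following sketch shows how these classes chain together. It assumes the connection string is stored in a `COSMOS_CONNECTION_STRING` environment variable; the database and collection names are illustrative:

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

// Client for the account (API for MongoDB layer on Azure Cosmos DB).
var client = new MongoClient(Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));

// References to a database and a collection; both are validated server-side on first use.
var database = client.GetDatabase("adventureworks");
var collection = database.GetCollection<BsonDocument>("products");
```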
## Code examples
-* [Authenticate the client](#authenticate-the-client)
-* [Create a database](#create-a-database)
-* [Create a container](#create-a-collection)
-* [Create an item](#create-an-item)
-* [Get an item](#get-an-item)
-* [Query items](#query-items)
+- [Authenticate the client](#authenticate-the-client)
+- [Create a database](#create-a-database)
+- [Create a collection](#create-a-collection)
+- [Create an item](#create-an-item)
+- [Get an item](#get-an-item)
+- [Query items](#query-items)
The sample code demonstrated in this article creates a database named ``adventureworks`` with a collection named ``products``. The ``products`` collection is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-nodejs.md
Title: Quickstart - Azure Cosmos DB for MongoDB for JavaScript with MongoDB driver
-description: Learn how to build a JavaScript app to manage Azure Cosmos DB for MongoDB account resources in this quickstart.
---
+ Title: Quickstart - Azure Cosmos DB for MongoDB driver for Node.js
+description: Learn how to build a Node.js app to manage Azure Cosmos DB for MongoDB account resources and data in this quickstart.
++ ms.devlang: javascript Last updated 07/06/2022-+
-# Quickstart: Azure Cosmos DB for MongoDB for JavaScript with MongoDB driver
+# Quickstart: Azure Cosmos DB for MongoDB driver for Node.js
[!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
Get started with the MongoDB npm package to create databases, collections, and d
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* [Node.js LTS](https://nodejs.org/en/download/)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- [Node.js LTS](https://nodejs.org/en/download/)
+- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
### Prerequisite check
-* In a terminal or command window, run ``node --version`` to check that Node.js is one of the LTS versions.
-* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+- In a terminal or command window, run ``node --version`` to check that Node.js is one of the LTS versions.
+- Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
## Setting up
Before you start building the application, let's look into the hierarchy of reso
You'll use the following MongoDB classes to interact with these resources:
-* [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html) - This class provides a client-side logical representation for the API for MongoDB layer on Azure Cosmos DB. The client object is used to configure and execute requests against the service.
-* [``Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
-* [``Collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.
+- [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html) - This class provides a client-side logical representation for the API for MongoDB layer on Azure Cosmos DB. The client object is used to configure and execute requests against the service.
+- [``Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+- [``Collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html) - This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.
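Here's a minimal sketch of how these three classes relate. The connection string environment variable name and the database and collection names are assumptions for illustration:

```javascript
const { MongoClient } = require('mongodb');

async function main() {
  // Client for the account (API for MongoDB layer on Azure Cosmos DB).
  const client = new MongoClient(process.env.COSMOS_CONNECTION_STRING);
  await client.connect();

  // References to a database and a collection; both are validated server-side on first use.
  const db = client.db('adventureworks');
  const products = db.collection('products');

  console.log(await products.countDocuments());
  await client.close();
}

main().catch(console.error);
```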
## Code examples
-* [Authenticate the client](#authenticate-the-client)
-* [Get database instance](#get-database-instance)
-* [Get collection instance](#get-collection-instance)
-* [Chained instances](#chained-instances)
-* [Create an index](#create-an-index)
-* [Create a doc](#create-a-doc)
-* [Get an doc](#get-a-doc)
-* [Query docs](#query-docs)
+- [Authenticate the client](#authenticate-the-client)
+- [Get database instance](#get-database-instance)
+- [Get collection instance](#get-collection-instance)
+- [Chained instances](#chained-instances)
+- [Create an index](#create-an-index)
+- [Create a doc](#create-a-doc)
+- [Get a doc](#get-a-doc)
+- [Query docs](#query-docs)
The sample code described in this article creates a database named ``adventureworks`` with a collection named ``products``. The ``products`` collection is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
Use the [``Collection.createIndex``](https://mongodb.github.io/node-mongodb-nati
Create a doc with the *product* properties for the `adventureworks` database:
-* An _id property for the unique identifier of the product.
-* A *category* property. This property can be used as the logical partition key.
-* A *name* property.
-* An inventory *quantity* property.
-* A *sale* property, indicating whether the product is on sale.
+- An _id property for the unique identifier of the product.
+- A *category* property. This property can be used as the logical partition key.
+- A *name* property.
+- An inventory *quantity* property.
+- A *sale* property, indicating whether the product is on sale.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="new_doc":::
After you insert a doc, you can run a query to get all docs that match a specifi
Troubleshooting:
-* If you get an error such as `The index path corresponding to the specified order-by item is excluded.`, make sure you [created the index](#create-an-index).
+- If you get an error such as `The index path corresponding to the specified order-by item is excluded.`, make sure you [created the index](#create-an-index).
## Run the code
cosmos-db How To Dotnet Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-create-container.md
ms.devlang: csharp Last updated 07/06/2022-+ # Create a container in Azure Cosmos DB for NoSQL using .NET
In Azure Cosmos DB, a container is analogous to a table in a relational database
Here are some quick rules when naming a container:
-* Keep container names between 3 and 63 characters long
-* Container names can only contain lowercase letters, numbers, or the dash (-) character.
-* Container names must start with a lowercase letter or number.
+- Keep container names between 3 and 63 characters long.
+- Container names can only contain lowercase letters, numbers, or the dash (-) character.
+- Container names must start with a lowercase letter or number.
Once created, the URI for a container is in this format:
Once created, the URI for a container is in this format:
To create a container, call one of the following methods:
-* [``CreateContainerAsync``](#create-a-container-asynchronously)
-* [``CreateContainerIfNotExistsAsync``](#create-a-container-asynchronously-if-it-doesnt-already-exist)
+- [``CreateContainerAsync``](#create-a-container-asynchronously)
+- [``CreateContainerIfNotExistsAsync``](#create-a-container-asynchronously-if-it-doesnt-already-exist)
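For illustration only, a create-if-not-exists call might look like the following sketch. The `COSMOS_ENDPOINT` and `COSMOS_KEY` environment variable names, and the database, container, and partition key values, are assumptions:

```csharp
using Microsoft.Azure.Cosmos;

CosmosClient client = new(
    Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
    Environment.GetEnvironmentVariable("COSMOS_KEY"));

Database database = client.GetDatabase("adventureworks");

// Creates the container only if it doesn't already exist.
Container container = await database.CreateContainerIfNotExistsAsync(
    id: "products",
    partitionKeyPath: "/category");
```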
### Create a container asynchronously
cosmos-db How To Dotnet Create Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-create-database.md
ms.devlang: csharp Last updated 07/06/2022-+ # Create a database in Azure Cosmos DB for NoSQL using .NET
In Azure Cosmos DB, a database is analogous to a namespace. When you create a da
Here are some quick rules when naming a database:
-* Keep database names between 3 and 63 characters long
-* Database names can only contain lowercase letters, numbers, or the dash (-) character.
-* Database names must start with a lowercase letter or number.
+- Keep database names between 3 and 63 characters long.
+- Database names can only contain lowercase letters, numbers, or the dash (-) character.
+- Database names must start with a lowercase letter or number.
Once created, the URI for a database is in this format:
Once created, the URI for a database is in this format:
To create a database, call one of the following methods:
-* [``CreateDatabaseAsync``](#create-a-database-asynchronously)
-* [``CreateDatabaseIfNotExistsAsync``](#create-a-database-asynchronously-if-it-doesnt-already-exist)
+- [``CreateDatabaseAsync``](#create-a-database-asynchronously)
+- [``CreateDatabaseIfNotExistsAsync``](#create-a-database-asynchronously-if-it-doesnt-already-exist)
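A minimal sketch of the create-if-not-exists variant is shown below. The environment variable names and the database name are assumptions for illustration:

```csharp
using Microsoft.Azure.Cosmos;

CosmosClient client = new(
    Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
    Environment.GetEnvironmentVariable("COSMOS_KEY"));

// Creates the database only if it doesn't already exist.
Database database = await client.CreateDatabaseIfNotExistsAsync("adventureworks");
Console.WriteLine($"Database id: {database.Id}");
```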
### Create a database asynchronously
cosmos-db How To Dotnet Create Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-create-item.md
ms.devlang: csharp Last updated 07/06/2022-+ # Create an item in Azure Cosmos DB for NoSQL using .NET
When referencing the item using a URI, use the system-generated *resource identi
To create an item, call one of the following methods:
-* [``CreateItemAsync<>``](#create-an-item-asynchronously)
-* [``ReplaceItemAsync<>``](#replace-an-item-asynchronously)
-* [``UpsertItemAsync<>``](#create-or-replace-an-item-asynchronously)
+- [``CreateItemAsync<>``](#create-an-item-asynchronously)
+- [``ReplaceItemAsync<>``](#replace-an-item-asynchronously)
+- [``UpsertItemAsync<>``](#create-or-replace-an-item-asynchronously)
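As a sketch, an upsert might look like the following. The `Product` record, its values, and the environment variable names are illustrative assumptions:

```csharp
using Microsoft.Azure.Cosmos;

CosmosClient client = new(
    Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
    Environment.GetEnvironmentVariable("COSMOS_KEY"));
Container container = client.GetContainer("adventureworks", "products");

Product item = new("68719518388", "gear-surf-surfboards", "Sunnox Surfboard", 8, true);

// Creates the item, or replaces it if an item with the same id already exists.
Product upserted = await container.UpsertItemAsync(
    item,
    new PartitionKey(item.category));

// Simple model type; the container's partition key path is assumed to be /category.
public record Product(string id, string category, string name, int quantity, bool sale);
```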
## Create an item asynchronously
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-get-started.md
Title: Get started with Azure Cosmos DB for NoSQL and .NET
+ Title: Get started with Azure Cosmos DB for NoSQL using .NET
description: Get started developing a .NET application that works with Azure Cosmos DB for NoSQL. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for NoSQL endpoint.
ms.devlang: csharp Last updated 07/06/2022-+
-# Get started with Azure Cosmos DB for NoSQL and .NET
+# Get started with Azure Cosmos DB for NoSQL using .NET
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
This article shows you how to connect to Azure Cosmos DB for NoSQL using the .NE
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* Azure Cosmos DB for NoSQL account. [Create a API for NoSQL account](how-to-create-account.md).
-* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- Azure Cosmos DB for NoSQL account. [Create an API for NoSQL account](how-to-create-account.md).
+- [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
## Set up your project
dotnet build
## <a id="connect-to-azure-cosmos-db-sql-api"></a>Connect to Azure Cosmos DB for NoSQL
-To connect to the API for NoSQL of Azure Cosmos DB, create an instance of the [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) class. This class is the starting point to perform all operations against databases. There are three core ways to connect to a API for NoSQL account using the **CosmosClient** class:
+To connect to the API for NoSQL of Azure Cosmos DB, create an instance of the [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) class. This class is the starting point to perform all operations against databases. There are three core ways to connect to an API for NoSQL account using the **CosmosClient** class:
-* [Connect with a API for NoSQL endpoint and read/write key](#connect-with-an-endpoint-and-key)
-* [Connect with a API for NoSQL connection string](#connect-with-a-connection-string)
-* [Connect with Azure Active Directory](#connect-using-the-microsoft-identity-platform)
+- [Connect with an API for NoSQL endpoint and read/write key](#connect-with-an-endpoint-and-key)
+- [Connect with an API for NoSQL connection string](#connect-with-a-connection-string)
+- [Connect with Azure Active Directory](#connect-using-the-microsoft-identity-platform)
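For example, the endpoint-and-key option looks roughly like the following sketch. The environment variable names are assumptions:

```csharp
using Microsoft.Azure.Cosmos;

// Account endpoint and read/write key, assumed to be stored in environment variables.
CosmosClient client = new(
    accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
    authKeyOrResourceToken: Environment.GetEnvironmentVariable("COSMOS_KEY"));

Console.WriteLine($"Connected to account endpoint: {client.Endpoint}");
```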
### Connect with an endpoint and key
Create a new instance of the **CosmosClient** class with the ``COSMOS_ENDPOINT``
As you build your application, your code will primarily interact with four types of resources:
-* The API for NoSQL account, which is the unique top-level namespace for your Azure Cosmos DB data.
+- The API for NoSQL account, which is the unique top-level namespace for your Azure Cosmos DB data.
-* Databases, which organize the containers in your account.
+- Databases, which organize the containers in your account.
-* Containers, which contain a set of individual items in your database.
+- Containers, which contain a set of individual items in your database.
-* Items, which represent a JSON document in your container.
+- Items, which represent a JSON document in your container.
The following diagram shows the relationship between these resources.
The following guides show you how to use each of these classes to build your app
## See also
-* [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
-* [Samples](samples-dotnet.md)
-* [API reference](/dotnet/api/microsoft.azure.cosmos)
-* [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3)
-* [Give Feedback](https://github.com/Azure/azure-cosmos-dotnet-v3/issues)
+- [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
+- [Samples](samples-dotnet.md)
+- [API reference](/dotnet/api/microsoft.azure.cosmos)
+- [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Give Feedback](https://github.com/Azure/azure-cosmos-dotnet-v3/issues)
## Next steps
-Now that you've connected to a API for NoSQL account, use the next guide to create and manage databases.
+Now that you've connected to an API for NoSQL account, use the next guide to create and manage databases.
> [!div class="nextstepaction"] > [Create a database in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-create-database.md)
cosmos-db How To Dotnet Query Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-query-items.md
ms.devlang: csharp Last updated 06/15/2022-+ # Query items in Azure Cosmos DB for NoSQL using .NET
To learn more about the SQL syntax for Azure Cosmos DB for NoSQL, see [Getting s
To query items in a container, call one of the following methods:
-* [``GetItemQueryIterator<>``](#query-items-using-a-sql-query-asynchronously)
-* [``GetItemLinqQueryable<>``](#query-items-using-linq-asynchronously)
+- [``GetItemQueryIterator<>``](#query-items-using-a-sql-query-asynchronously)
+- [``GetItemLinqQueryable<>``](#query-items-using-linq-asynchronously)
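A sketch of the SQL query path is shown below. The `Product` type, the query values, and the environment variable names are illustrative assumptions:

```csharp
using Microsoft.Azure.Cosmos;

CosmosClient client = new(
    Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
    Environment.GetEnvironmentVariable("COSMOS_KEY"));
Container container = client.GetContainer("adventureworks", "products");

// Parameterized query over the container.
QueryDefinition query = new QueryDefinition(
        "SELECT * FROM products p WHERE p.category = @category")
    .WithParameter("@category", "gear-surf-surfboards");

FeedIterator<Product> feed = container.GetItemQueryIterator<Product>(query);
while (feed.HasMoreResults)
{
    FeedResponse<Product> page = await feed.ReadNextAsync();
    foreach (Product product in page)
    {
        Console.WriteLine(product.name);
    }
}

public record Product(string id, string category, string name);
```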
## Query items using a SQL query asynchronously
cosmos-db How To Dotnet Read Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-read-item.md
ms.devlang: csharp Last updated 07/06/2022-+ # Read an item in Azure Cosmos DB for NoSQL using .NET
Every item in Azure Cosmos DB for NoSQL has a unique identifier specified by the
To perform a point read of an item, call one of the following methods:
-* [``ReadItemAsync<>``](#read-an-item-asynchronously)
-* [``ReadItemStreamAsync<>``](#read-an-item-as-a-stream-asynchronously)
-* [``ReadManyItemsAsync<>``](#read-multiple-items-asynchronously)
+- [``ReadItemAsync<>``](#read-an-item-asynchronously)
+- [``ReadItemStreamAsync<>``](#read-an-item-as-a-stream-asynchronously)
+- [``ReadManyItemsAsync<>``](#read-multiple-items-asynchronously)
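A sketch of the simplest option, a typed point read, is shown below. The item values, the `Product` type, and the environment variable names are illustrative assumptions:

```csharp
using Microsoft.Azure.Cosmos;

CosmosClient client = new(
    Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
    Environment.GetEnvironmentVariable("COSMOS_KEY"));
Container container = client.GetContainer("adventureworks", "products");

// Point read: unique id plus the item's partition key value.
Product product = await container.ReadItemAsync<Product>(
    id: "68719518388",
    partitionKey: new PartitionKey("gear-surf-surfboards"));

Console.WriteLine(product.name);

public record Product(string id, string category, string name);
```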
## Read an item asynchronously
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
ms.devlang: csharp Last updated 11/07/2022-+ # Quickstart: Azure Cosmos DB for NoSQL client library for .NET
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md
Title: Quickstart- Use Node.js to query from Azure Cosmos DB for NoSQL account
-description: How to use Node.js to create an app that connects to Azure Cosmos DB for NoSQL account and queries data.
+ Title: Quickstart - Azure Cosmos DB for NoSQL client library for Node.js
+description: Learn how to build a Node.js app to manage Azure Cosmos DB for NoSQL account resources in this quickstart.
+ ms.devlang: javascript Last updated 09/22/2022---+
-# Quickstart: Use Node.js to connect and query data from Azure Cosmos DB for NoSQL account
+# Quickstart - Azure Cosmos DB for NoSQL client library for Node.js
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-> [!div class="op_single_selector"]
->
-> * [.NET](quickstart-dotnet.md)
-> * [Node.js](quickstart-nodejs.md)
-> * [Java](quickstart-java.md)
-> * [Spring Data](quickstart-java-spring-data.md)
-> * [Python](quickstart-python.md)
-> * [Spark v3](quickstart-spark.md)
-> * [Go](quickstart-go.md)
->
Get started with the Azure Cosmos DB client library for JavaScript to create databases, containers, and items within your account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). Follow these steps to install the package and try out example code for basic tasks.
Get started with the Azure Cosmos DB client library for JavaScript to create dat
## Prerequisites
-* In a terminal or command window, run ``node --version`` to check that the Node.js version is one of the current long term support (LTS) versions.
-* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
+- In a terminal or command window, run ``node --version`` to check that the Node.js version is one of the current long term support (LTS) versions.
+- Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
## Setting up
Add the following code at the end of the `index.js` file to include the required
### Add variables for names
-Add the following variables to manage unique database and container names and the [partition key (pk)](../partitioning-overview.md).
+Add the following variables to manage unique database and container names and the [**partition key (`pk`)**](../partitioning-overview.md).
:::code language="javascript" source="~/cosmos-db-sql-api-javascript-samples/001-quickstart/index.js" range="13-19":::
In this example, we chose to add a timeStamp to the database and container in ca
You'll use the following JavaScript classes to interact with these resources:
-* [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
-* [``Database``](/javascript/api/@azure/cosmos/database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
-* [``Container``](/javascript/api/@azure/cosmos/container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
-* [``SqlQuerySpec``](/javascript/api/@azure/cosmos/sqlqueryspec) - This interface represents a SQL query and any query parameters.
-* [``QueryIterator<>``](/javascript/api/@azure/cosmos/queryiterator) - This class represents an iterator that can track the current page of results and get a new page of results.
-* [``FeedResponse<>``](/javascript/api/@azure/cosmos/feedresponse) - This class represents a single page of responses from the iterator.
+- [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
+- [``Database``](/javascript/api/@azure/cosmos/database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
+- [``Container``](/javascript/api/@azure/cosmos/container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
+- [``SqlQuerySpec``](/javascript/api/@azure/cosmos/sqlqueryspec) - This interface represents a SQL query and any query parameters.
+- [``QueryIterator<>``](/javascript/api/@azure/cosmos/queryiterator) - This class represents an iterator that can track the current page of results and get a new page of results.
+- [``FeedResponse<>``](/javascript/api/@azure/cosmos/feedresponse) - This class represents a single page of responses from the iterator.
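A compact sketch of how these classes fit together is shown below. The environment variable names and the database, container, and query values are assumptions, not the article's sample code:

```javascript
const { CosmosClient } = require('@azure/cosmos');

const client = new CosmosClient({
  endpoint: process.env.COSMOS_ENDPOINT,
  key: process.env.COSMOS_KEY
});

async function main() {
  // Database and Container references are created (if needed) and validated server-side.
  const { database } = await client.databases.createIfNotExists({ id: 'adventureworks' });
  const { container } = await database.containers.createIfNotExists({
    id: 'products',
    partitionKey: { paths: ['/category'] }
  });

  // SqlQuerySpec executed through a QueryIterator; fetchAll drains every FeedResponse page.
  const querySpec = {
    query: 'SELECT * FROM products p WHERE p.category = @category',
    parameters: [{ name: '@category', value: 'gear-surf-surfboards' }]
  };
  const { resources } = await container.items.query(querySpec).fetchAll();
  console.log(`Found ${resources.length} items`);
}

main().catch(console.error);
```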
## Code examples
-* [Authenticate the client](#authenticate-the-client)
-* [Create a database](#create-a-database)
-* [Create a container](#create-a-container)
-* [Create an item](#create-an-item)
-* [Get an item](#get-an-item)
-* [Query items](#query-items)
+- [Authenticate the client](#authenticate-the-client)
+- [Create a database](#create-a-database)
+- [Create a container](#create-a-container)
+- [Create an item](#create-an-item)
+- [Get an item](#get-an-item)
+- [Query items](#query-items)
The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` container is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
Touring-1000 Blue, 50 read
In this quickstart, you learned how to create an Azure Cosmos DB SQL API account, create a database, and create a container using the JavaScript SDK. You can now dive deeper into the SDK to import more data, perform complex queries, and manage your Azure Cosmos DB SQL API resources. > [!div class="nextstepaction"]
-> [Tutorial: Build a Node.js console app](sql-api-nodejs-get-started.md)
+> [Tutorial: Build a Node.js console app](sql-api-nodejs-get-started.md)
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
ms.devlang: python Last updated 11/03/2022-+ # Quickstart: Azure Cosmos DB for NoSQL client library for Python
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-dotnet.md
ms.devlang: csharp-+ Last updated 07/06/2022-+ # Examples for Azure Cosmos DB for NoSQL SDK for .NET [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-> [!div class="op_single_selector"]
->
-> * [.NET](samples-dotnet.md)
->
The [cosmos-db-nosql-dotnet-samples](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples) GitHub repository includes multiple sample projects. These projects illustrate how to perform common operations on Azure Cosmos DB for NoSQL resources. ## Prerequisites
-* An Azure account with an active subscription. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
-* Azure Cosmos DB for NoSQL account. [Create a API for NoSQL account](how-to-create-account.md).
-* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+- An Azure account with an active subscription. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
+- Azure Cosmos DB for NoSQL account. [Create an API for NoSQL account](how-to-create-account.md).
+- [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
## Samples
cosmos-db Samples Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-nodejs.md
Title: Node.js examples to manage data in Azure Cosmos DB database
+ Title: Examples for Azure Cosmos DB for NoSQL SDK for JS
description: Find Node.js examples on GitHub for common tasks in Azure Cosmos DB, including CRUD operations.-+++
+ms.devlang: javascript
Last updated 08/26/2021--+
-# Node.js examples to manage data in Azure Cosmos DB
+
+# Examples for Azure Cosmos DB for NoSQL SDK for JS
+ [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-> [!div class="op_single_selector"]
-> * [.NET SDK Examples](samples-dotnet.md)
-> * [Java V4 SDK Examples](samples-java.md)
-> * [Spring Data V3 SDK Examples](samples-java-spring-data.md)
-> * [Node.js Examples](samples-nodejs.md)
-> * [Python Examples](samples-python.md)
-> * [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
->
->
Sample solutions that perform CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-cosmos-js](https://github.com/Azure/azure-cosmos-js/tree/master/samples) GitHub repository. This article provides:
-* Links to the tasks in each of the Node.js example project files.
-* Links to the related API reference content.
+- Links to the tasks in each of the Node.js example project files.
+- Links to the related API reference content.
-**Prerequisites**
+## Prerequisites
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
Sample solutions that perform CRUD operations and other common operations on Azu
[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)] You also need the [JavaScript SDK](sdk-nodejs.md).
-
+ > [!NOTE] > Each sample is self-contained; it sets itself up and cleans up after itself. As such, the samples issue multiple calls to [Containers.create](/javascript/api/%40azure/cosmos/containers). Each time this is done, your subscription will be billed for 1 hour of usage per the performance tier of the container being created.
- >
- >
## Database examples
-The [DatabaseManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts) file shows how to perform the CRUD operations on the database. To learn about the Azure Cosmos DB databases before running the following samples, see [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
+The [DatabaseManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts) file shows how to perform CRUD operations on the database. To learn about Azure Cosmos DB databases before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
| Task | API reference | | | |
-| [Create a database if it does not exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L12-L14) |[Databases.createIfNotExists](/javascript/api/@azure/cosmos/databases#createifnotexists-databaserequest--requestoptions-) |
+| [Create a database if it doesn't exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L12-L14) |[Databases.createIfNotExists](/javascript/api/@azure/cosmos/databases#createifnotexists-databaserequest--requestoptions-) |
| [List databases for an account](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L16-L18) |[Databases.readAll](/javascript/api/@azure/cosmos/databases#readall-feedoptions-) | | [Read a database by ID](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L20-L29) |[Database.read](/javascript/api/@azure/cosmos/database#read-requestoptions-) | | [Delete a database](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L31-L32) |[Database.delete](/javascript/api/@azure/cosmos/database#delete-requestoptions-) | ## Container examples
-The [ContainerManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts) file shows how to perform the CRUD operations on the container. To learn about the Azure Cosmos DB collections before running the following samples, see [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
+The [ContainerManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts) file shows how to perform CRUD operations on the container. To learn about Azure Cosmos DB collections before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
| Task | API reference | | | |
-| [Create a container if it does not exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L14-L15) |[Containers.createIfNotExists](/javascript/api/@azure/cosmos/containers#createifnotexists-containerrequest--requestoptions-) |
+| [Create a container if it doesn't exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L14-L15) |[Containers.createIfNotExists](/javascript/api/@azure/cosmos/containers#createifnotexists-containerrequest--requestoptions-) |
| [List containers for an account](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L17-L21) |[Containers.readAll](/javascript/api/@azure/cosmos/containers#readall-feedoptions-) | | [Read a container definition](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L23-L26) |[Container.read](/javascript/api/@azure/cosmos/container#read-requestoptions-) | | [Delete a container](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L28-L30) |[Container.delete](/javascript/api/@azure/cosmos/container#delete-requestoptions-) | ## Item examples
-The [ItemManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts) file shows how to perform the CRUD operations on the item. To learn about the Azure Cosmos DB documents before running the following samples, see [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
+The [ItemManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts) file shows how to perform CRUD operations on the item. To learn about Azure Cosmos DB documents before running the following samples, see the [Working with databases, containers, and items](../account-databases-containers-items.md) conceptual article.
| Task | API reference | | | | | [Create items](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L18-L21) |[Items.create](/javascript/api/@azure/cosmos/items#create-t--requestoptions-) | | [Read all items in a container](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L23-L28) |[Items.readAll](/javascript/api/@azure/cosmos/items#readall-feedoptions-) | | [Read an item by ID](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L30-L33) |[Item.read](/javascript/api/@azure/cosmos/item#read-requestoptions-) |
-| [Read item only if item has changed](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L45-L56) |[Item.read](/javascript/api/%40azure/cosmos/item)<br/>[RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
+| [Read item only if item has changed](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L45-L56) |[Item.read](/javascript/api/%40azure/cosmos/item) - [RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
| [Query for documents](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L58-L79) |[Items.query](/javascript/api/%40azure/cosmos/items) | | [Replace an item](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L81-L96) |[Item.replace](/javascript/api/%40azure/cosmos/item) |
-| [Replace item with conditional ETag check](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L98-L135) |[Item.replace](/javascript/api/%40azure/cosmos/item)<br/>[RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
+| [Replace item with conditional ETag check](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L98-L135) |[Item.replace](/javascript/api/%40azure/cosmos/item) - [RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
| [Delete an item](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L137-L140) |[Item.delete](/javascript/api/%40azure/cosmos/item) | ## Indexing examples
-The [IndexManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts) file shows how to manage indexing. To learn about indexing in Azure Cosmos DB before running the following samples, see [indexing policies](../index-policy.md), [indexing types](../index-overview.md#index-types), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
+The [IndexManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts) file shows how to manage indexing. To learn about indexing in Azure Cosmos DB before running the following samples, see the [indexing policies](../index-policy.md), [indexing types](../index-overview.md#index-types), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
| Task | API reference | | | |
The [IndexManagement](https://github.com/Azure/azure-cosmos-js/blob/master/sampl
| [Manually exclude a specific item from the index](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L17-L29) |[RequestOptions.indexingDirective: 'exclude'](/javascript/api/%40azure/cosmos/requestoptions#indexingdirective) | | [Exclude a path from the index](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L142-L167) |[IndexingPolicy.ExcludedPath](/javascript/api/%40azure/cosmos/indexingpolicy#excludedpaths) | | [Create a range index on a string path](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L87-L112) |[IndexKind.Range](/javascript/api/%40azure/cosmos/indexkind), [IndexingPolicy](/javascript/api/%40azure/cosmos/indexingpolicy), [Items.query](/javascript/api/%40azure/cosmos/items) |
-| [Create a container with default indexPolicy, then update this online](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L13-L15) |[Containers.create](/javascript/api/%40azure/cosmos/containers)
+| [Create a container with default indexPolicy, then update the container online](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L13-L15) |[Containers.create](/javascript/api/%40azure/cosmos/containers)
## Server-side programming examples
-The [index.ts](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ServerSideScripts/index.ts) file of the [ServerSideScripts](https://github.com/Azure/azure-cosmos-js/tree/master/samples/ServerSideScripts) project shows how to perform the following tasks. To learn about Server-side programming in Azure Cosmos DB before running the following samples, see [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md) conceptual article.
+The [index.ts](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ServerSideScripts/index.ts) file of the [ServerSideScripts](https://github.com/Azure/azure-cosmos-js/tree/master/samples/ServerSideScripts) project shows how to perform the following tasks. To learn about server-side programming in Azure Cosmos DB before running the following samples, see the [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md) conceptual article.
| Task | API reference | | | |
For more information about server-side programming, see [Azure Cosmos DB server-
## Next steps Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+
+- If all you know is the number of vCores and servers in your existing database cluster, see [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+- If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-python.md
Title: API for NoSQL Python examples for Azure Cosmos DB
+ Title: Examples for Azure Cosmos DB for NoSQL SDK for Python
description: Find Python examples on GitHub for common tasks in Azure Cosmos DB, including CRUD operations.
ms.devlang: python Last updated 10/18/2021-+
-# Azure Cosmos DB Python examples
+# Examples for Azure Cosmos DB for NoSQL SDK for Python
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-> [!div class="op_single_selector"]
->
-> - [.NET SDK Examples](samples-dotnet.md)
-> - [Java V4 SDK Examples](samples-java.md)
-> - [Spring Data V3 SDK Examples](samples-java-spring-data.md)
-> - [Node.js Examples](samples-nodejs.md)
-> - [Python Examples](samples-python.md)
-> - [Azure Code Sample Gallery](https://azure.microsoft.com/resources/samples/?sort=0&service=cosmos-db)
->
Sample solutions that do CRUD operations and other common operations on Azure Cosmos DB resources are included in the `main/sdk/cosmos` folder of the [azure/azure-sdk-for-python](https://github.com/azure/azure-sdk-for-python/tree/main/sdk/cosmos) GitHub repository. This article provides:
cosmos-db How To Dotnet Create Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-create-item.md
ms.devlang: csharp Last updated 07/06/2022-+ # Create an item in Azure Cosmos DB for Table using .NET
The [``TableEntity``](/dotnet/api/azure.data.tables.tableentity) class is a gene
Use one of the following strategies to model items that you wish to create in a table:
-* [Create an instance of the ``TableEntity`` class](#use-a-built-in-class)
-* [Implement the ``ITableEntity`` interface](#implement-interface)
+- [Create an instance of the ``TableEntity`` class](#use-a-built-in-class)
+- [Implement the ``ITableEntity`` interface](#implement-interface)
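For example, the built-in `TableEntity` approach looks roughly like the following sketch. The connection string environment variable name, table name, and property values are assumptions:

```csharp
using Azure.Data.Tables;

TableServiceClient service = new(Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
TableClient table = service.GetTableClient("products");

// Generic dictionary-like entity: partition key, row key, then arbitrary properties.
TableEntity entity = new(partitionKey: "gear-surf-surfboards", rowKey: "68719518388")
{
    ["Name"] = "Sunnox Surfboard",
    ["Quantity"] = 8,
    ["Sale"] = true
};

await table.AddEntityAsync(entity);
```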
### Use a built-in class
cosmos-db How To Dotnet Create Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-create-table.md
ms.devlang: csharp Last updated 07/06/2022-+ # Create a table in Azure Cosmos DB for Table using .NET
In Azure Cosmos DB, a table is analogous to a table in a relational database.
Here are some quick rules when naming a table:
-* Keep table names between 3 and 63 characters long
-* Table names can only contain lowercase letters, numbers, or the dash (-) character.
-* Table names must start with a lowercase letter or number.
+- Keep table names between 3 and 63 characters long.
+- Table names can only contain lowercase letters, numbers, or the dash (-) character.
+- Table names must start with a lowercase letter or number.
## Create a table To create a table, call one of the following methods:
-* [``CreateAsync``](#create-a-table-asynchronously)
-* [``CreateIfNotExistsAsync``](#create-a-table-asynchronously-if-it-doesnt-already-exist)
+- [``CreateAsync``](#create-a-table-asynchronously)
+- [``CreateIfNotExistsAsync``](#create-a-table-asynchronously-if-it-doesnt-already-exist)
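A minimal sketch of the create-if-not-exists path is shown below. The connection string environment variable name and the table name are assumptions:

```csharp
using Azure.Data.Tables;

TableServiceClient service = new(Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
TableClient table = service.GetTableClient("products");

// Creates the table only if it doesn't already exist.
await table.CreateIfNotExistsAsync();
Console.WriteLine($"Table name: {table.Name}");
```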
### Create a table asynchronously
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-get-started.md
Title: Get started with Azure Cosmos DB for Table and .NET
+ Title: Get started with Azure Cosmos DB for Table using .NET
description: Get started developing a .NET application that works with Azure Cosmos DB for Table. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for Table endpoint.
ms.devlang: csharp Last updated 07/06/2022-+
-# Get started with Azure Cosmos DB for Table and .NET
+# Get started with Azure Cosmos DB for Table using .NET
[!INCLUDE[Table](../includes/appliesto-table.md)]
This article shows you how to connect to Azure Cosmos DB for Table using the .NE
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* Azure Cosmos DB for Table account. [Create a API for Table account](how-to-create-account.md).
-* [.NET 6.0 or later](https://dotnet.microsoft.com/download)
-* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- Azure Cosmos DB for Table account. [Create an API for Table account](how-to-create-account.md).
+- [.NET 6.0 or later](https://dotnet.microsoft.com/download)
+- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
## Set up your project
dotnet build
To connect to the API for Table of Azure Cosmos DB, create an instance of the [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) class. This class is the starting point to perform all operations against tables. There are two primary ways to connect to an API for Table account using the **TableServiceClient** class:
-* [Connect with a API for Table connection string](#connect-with-a-connection-string)
+- [Connect with an API for Table connection string](#connect-with-a-connection-string)
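As an illustration, the connection string path might look like the following sketch. The environment variable name and table name are assumptions:

```csharp
using Azure.Data.Tables;

// Connection string for the API for Table account, assumed to be in an environment variable.
TableServiceClient service = new(Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));

// A lightweight reference to a single table in the account.
TableClient table = service.GetTableClient("products");
Console.WriteLine($"Ready to work with table: {table.Name}");
```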
### Connect with a connection string
Create a new instance of the **TableServiceClient** class with the ``COSMOS_CONN
As you build your application, your code will primarily interact with four types of resources:
-* The API for Table account, which is the unique top-level namespace for your Azure Cosmos DB data.
+- The API for Table account, which is the unique top-level namespace for your Azure Cosmos DB data.
-* Tables, which contain a set of individual items in your account.
+- Tables, which contain a set of individual items in your account.
-* Items, which represent an individual item in your table.
+- Items, which represent an individual item in your table.
The following diagram shows the relationship between these resources.
The following guides show you how to use each of these classes to build your app
## See also
-* [Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/)
-* [Samples](samples-dotnet.md)
-* [API reference](/dotnet/api/azure.data.tables)
-* [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/tables/Azure.Data.Tables)
-* [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
+- [Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/)
+- [Samples](samples-dotnet.md)
+- [API reference](/dotnet/api/azure.data.tables)
+- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/tables/Azure.Data.Tables)
+- [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
## Next steps
cosmos-db How To Dotnet Read Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-read-item.md
ms.devlang: csharp Last updated 07/06/2022-+ # Read an item in Azure Cosmos DB for Table using .NET
Azure Cosmos DB requires both the unique identifier and the partition key value
To perform a point read of an item, use one of the following strategies:
-* [Return a ``TableEntity`` object using ``GetEntityAsync<>``](#read-an-item-using-a-built-in-class)
-* [Return an object of your own type using ``GetEntityAsync<>``](#read-an-item-using-your-own-type)
+- [Return a ``TableEntity`` object using ``GetEntityAsync<>``](#read-an-item-using-a-built-in-class)
+- [Return an object of your own type using ``GetEntityAsync<>``](#read-an-item-using-your-own-type)
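A sketch of the built-in `TableEntity` path is shown below. The keys, table name, and environment variable name are illustrative assumptions:

```csharp
using Azure.Data.Tables;

TableServiceClient service = new(Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
TableClient table = service.GetTableClient("products");

// Point read: the partition key and row key together identify a single entity.
TableEntity entity = await table.GetEntityAsync<TableEntity>(
    partitionKey: "gear-surf-surfboards",
    rowKey: "68719518388");

Console.WriteLine(entity.GetString("Name"));
```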
### Read an item using a built-in class
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-dotnet.md
Title: Quickstart - Azure Cosmos DB for Table for .NET description: Learn how to build a .NET app to manage Azure Cosmos DB for Table resources in this quickstart.--++
+ms.devlang: csharp
Last updated 08/22/2022-+ # Quickstart: Azure Cosmos DB for Table for .NET
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/samples-dotnet.md
ms.devlang: csharp Last updated 07/06/2022-+ # Examples for Azure Cosmos DB for Table SDK for .NET
cosmos-db Try Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md
The following table lists the limits for the [Try Azure Cosmos DB](https://aka.m
² After expiration, the information stored in your account is deleted. You can upgrade your account prior to expiration and migrate the information stored. > [!NOTE]
-> Try Azure Cosmos DB supports global distribution in only the **Central US**, **North Europe**, and **Southeast Asia** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+> Try Azure Cosmos DB supports global distribution in only the **East US**, **North Europe**, **Southeast Asia**, and **North Central US** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
### [MongoDB](#tab/mongodb)
The following table lists the limits for the [Try Azure Cosmos DB](https://aka.m
² After expiration, the information stored in your account is deleted. You can upgrade your account prior to expiration and migrate the information stored. > [!NOTE]
-> Try Azure Cosmos DB supports global distribution in only the **Central US**, **North Europe**, and **Southeast Asia** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+> Try Azure Cosmos DB supports global distribution in only the **East US**, **North Europe**, **Southeast Asia**, and **North Central US** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
### [PostgreSQL](#tab/postgresql)
The following table lists the limits for the [Try Azure Cosmos DB](https://aka.m
² After expiration, the information stored in your account is deleted. You can upgrade your account prior to expiration and migrate the information stored. > [!NOTE]
-> Try Azure Cosmos DB supports global distribution in only the **Central US**, **North Europe**, and **Southeast Asia** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
+> Try Azure Cosmos DB supports global distribution in only the **East US**, **North Europe**, **Southeast Asia**, and **North Central US** regions. Azure support tickets can't be created for Try Azure Cosmos DB accounts. However, support is provided for subscribers with existing support plans.
cost-management-billing Enable Tag Inheritance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-tag-inheritance.md
+
+ Title: Group and allocate costs using tag inheritance
+
+description: This article explains how to group costs using tag inheritance.
++ Last updated : 11/16/2022++++++
+# Group and allocate costs using tag inheritance
+
+Azure tags are widely used to group costs to align with different business units, engineering environments, and cost departments. Tags provide the visibility needed for businesses to manage and allocate costs across the different groups.
+
+This article explains how to use the tag inheritance setting in Cost Management. When enabled, tag inheritance applies resource group and subscription tags to child resource usage records. You don't have to tag every resource or rely on resources that emit usage to have their own tags.
+
+Tag inheritance is available for customers with an Enterprise Agreement (EA) or a Microsoft Customer Agreement (MCA) account.
+
+## Required permissions
+
+- For subscriptions:
+ - Cost Management Reader to view
+ - Cost Management Contributor to edit
+- For EA billing accounts:
+ - Enterprise Administrator (read-only) to view
+ - Enterprise Administrator to edit
+- For MCA billing profiles:
+ - Billing profile reader to view
+ - Billing profile contributor to edit
+
+## Enable tag inheritance
+
+You can enable the tag inheritance setting in the Azure portal. You apply the setting at the EA billing account, MCA billing profile, and subscription scopes. After the setting is enabled, all resource group and subscription tags are automatically applied to child resource usage records.
+
+To enable tag inheritance in the Azure portal:
+
+1. In the Azure portal, navigate to Cost Management.
+2. Select a scope.
+3. In the left menu under **Settings**, select either **Manage billing account** or **Manage subscription**, depending on your scope.
+4. Under **Tag inheritance**, select **Edit**.
+ :::image type="content" source="./media/enable-tag-inheritance/edit-tag-inheritance.png" alt-text="Screenshot showing the Edit option for Tag inheritance." lightbox="./media/enable-tag-inheritance/edit-tag-inheritance.png" :::
+5. In the Tag inheritance (Preview) window, select **Automatically apply subscription and resource group tags to new data**.
+ :::image type="content" source="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data.png" alt-text="Screenshot showing the Automatically apply subscription and resource group tags to new data option." lightbox="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data.png" :::
+
+Here's an example diagram showing how a tag is inherited.
++
+## Choose between resource and inherited tags
+
+When a resource tag has the same key as the resource group or subscription tag being applied, the resource tag is applied to its usage record by default. You can change the default behavior so that the subscription or resource group tag overrides the resource tag.
+
+In the Tag inheritance window, select the **Use the subscription or resource group tag** option.
++
+Let's look at an example of how a resource tag gets applied. In the following diagram, resource 4 and resource group 2 have the same tag: *App*. Because the user chose to keep the resource tag, usage record 4 is updated with the resource tag value *E2E*.
++
+Let's look at another example where a resource tag gets overridden. In the following diagram, resource 4 and resource group 2 have the same tag: *App*. Because the user chose to use the resource group or subscription tag, usage record 4 is updated with the resource group tag value, which is *backend*.
++
+## Usage record updates
+
+After the tag inheritance setting is enabled, it takes about 8-24 hours for the child resource usage records to get updated with subscription and resource group tags. The usage records are updated for the current month using the existing subscription and resource group tags.
+
+For example, if the tag inheritance setting is enabled on October 20, child resource usage records are updated from October 1 using the tags that existed on October 20.
+
+Similarly, if the tag inheritance setting is disabled, the inherited tags will be removed from the usage records for the current month.
+
+> [!NOTE]
+> If there are purchases or resources that don't emit usage at a subscription scope, they will not have the subscription tags applied even if the setting is enabled.
+
+## View costs grouped by tags
+
+You can use cost analysis to view the costs grouped by tags.
+
+1. In the Azure portal, navigate to **Cost Management**.
+1. In the left menu, select **Cost Analysis**.
+1. Select a scope.
+1. In the **Group by** list, select the tag you want to view costs for.
+
+Here's an example showing costs for the *org* tag.
++
+You can also view the inherited tags by downloading your Azure usage. For more information, see [View and download your Azure usage and charges](../understand/download-azure-daily-usage.md).
+
+## Next steps
+
+- Learn how to [split shared costs](allocate-costs.md).
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
description: This article helps you better understand data that's included in Cost Management and how frequently it's processed, collected, shown, and closed. Previously updated : 11/04/2022 Last updated : 11/16/2022
If you don't see a specific tag in Cost Management, consider the following quest
Here are a few tips for working with tags: - Plan ahead and define a tagging strategy that allows you to break down costs by organization, application, environment, and so on.-- Use Azure Policy to copy resource group tags to individual resources and enforce your tagging strategy.
+- [Group and allocate costs using tag inheritance](enable-tag-inheritance.md) to apply resource group and subscription tags to child resource usage records. If you were using Azure Policy to enforce tagging for cost reporting, consider enabling the tag inheritance setting for easier management and more flexibility.
- Use the Tags API with either Query or UsageDetails to get all cost based on the current tags. ## Cost and usage data updates and retention
cost-management-billing Ea Portal Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-rest-apis.md
Title: Azure Enterprise REST APIs
description: This article describes the REST APIs for use with your Azure enterprise enrollment. Previously updated : 11/16/2022 Last updated : 11/17/2022
Microsoft Enterprise Azure customers can get usage and billing information throu
**Billing Periods -** The [Billing Periods API](/rest/api/billing/enterprise/billing-enterprise-api-billing-periods) returns a list of billing periods that have consumption data for an enrollment in reverse chronological order. Each period contains a property pointing to the API route for the four sets of data, BalanceSummary, UsageDetails, Marketplace Charges, and PriceSheet. For more information, see [Reporting APIs for Enterprise customers - Billing Periods](/rest/api/billing/enterprise/billing-enterprise-api-billing-periods).
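The following is a hedged sketch of calling that route with an enrollment's API access key; the URL is assumed from the linked Reporting APIs reference, and `<enrollment-number>` and `<api-key>` are placeholders.

```bash
# Hedged sketch: list billing periods for an enrollment using the EA Reporting APIs.
curl -H "Authorization: Bearer <api-key>" \
  "https://consumption.azure.com/v2/enrollments/<enrollment-number>/billingperiods"
```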
-### Enable API data access
+### API key generation
Role owners can perform the following steps in the Azure EA portal. Navigate to **Reports** > **Download Usage** > **API Access Key**. Then they can:
cost-management-billing Link Partner Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id.md
description: Track engagements with Azure customers by linking a partner ID to t
Previously updated : 06/28/2022 Last updated : 11/17/2022
# Link a partner ID to your account that's used to manage customers
-Microsoft partners provide services that help customers achieve business and mission objectives using Microsoft products. When acting on behalf of the customer managing, configuring, and supporting Azure services, the partner users will need access to the customerΓÇÖs environment. Using Partner Admin Link (PAL), partners can associate their partner network ID with the credentials used for service delivery.
+Microsoft partners provide services that help customers achieve business and mission objectives using Microsoft products. When a partner acts on behalf of the customer to manage, configure, and support Azure services, the partner users will need access to the customer's environment. When partners use Partner Admin Link (PAL), they can associate their partner network ID with the credentials used for service delivery.
PAL enables Microsoft to identify and recognize partners who drive Azure customer success. Microsoft can attribute influence and Azure consumed revenue to your organization based on the account's permissions (Azure role) and scope (subscription, resource group, resource). If a group has Azure RBAC access, then PAL is recognized for all the users in the group.
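As a hedged illustration that isn't part of the original walkthrough, a partner can link a Microsoft Partner Network ID to the signed-in credential with the `managementpartner` CLI extension; the MPN ID value below is a placeholder.

```azurecli
# Hedged sketch: link the partner's MPN ID to the credential that's signed in to the customer tenant.
az extension add --name managementpartner
az managementpartner create --partner-id <MPN-ID>
```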
Yes. A linked partner ID can be changed, added, or removed.
The link between the partner ID and the account is done for each customer tenant. Link the partner ID in each customer tenant.
-However, if you are managing customer resources through Azure Lighthouse, you should create the link in your service provider tenant, using an account that has access to the customer resources. For more information, see [Link your partner ID to track your impact on delegated resources](../../lighthouse/how-to/partner-earned-credit.md).
+However, if you're managing customer resources through Azure Lighthouse, you should create the link in your service provider tenant, using an account that has access to the customer resources. For more information, see [Link your partner ID to track your impact on delegated resources](../../lighthouse/how-to/partner-earned-credit.md).
**Can other partners or customers edit or remove the link to the partner ID?**
Yes, you can link your partner ID for Azure Stack.
**How do I link my partner ID if my company uses [Azure Lighthouse](../../lighthouse/overview.md) to access customer resources?**
-In order for Azure Lighthouse activities to be recognized, you'll need to associate your Partner ID with at least one user account that has access to each of your onboarded subscriptions. Note that you'll need to do this in your service provider tenant rather than in each customer tenant. For simplicity, we recommend creating a service principal account in your tenant, associating it with your Partner ID, then granting it access to every customer you onboard with an [Azure built-in role that is eligible for partner earned credit](/partner-center/azure-roles-perms-pec). For more information, see [Link your partner ID to track your impact on delegated resources](../../lighthouse/how-to/partner-earned-credit.md).
+In order for Azure Lighthouse activities to be recognized, you need to associate your Partner ID with at least one user account that has access to each of your onboarded subscriptions. The association is needed in your service provider tenant rather than in each customer tenant. For simplicity, we recommend creating a service principal account in your tenant, associating it with your Partner ID, then granting it access to every customer you onboard with an [Azure built-in role that is eligible for partner earned credit](/partner-center/azure-roles-perms-pec). For more information, see [Link your partner ID to track your impact on delegated resources](../../lighthouse/how-to/partner-earned-credit.md).
**How do I explain Partner Admin Link (PAL) to my Customer?**
Partner Admin Link (PAL) enables Microsoft to identify and recognize those partn
**What data does PAL collect?**
-The PAL association to existing credentials provides no new customer data to Microsoft. It simply provides the telemetry to Microsoft where a partner is actively involved in a customerΓÇÖs Azure environment. Microsoft can attribute influence and Azure consumed revenue from customer environment to partner organization based on the account's permissions (Azure role) and scope (Management Group, Subscription, Resource Group, Resource) provided to the partner by customer.
+The PAL association to existing credentials provides no new customer data to Microsoft. It simply provides information to Microsoft about where a partner is actively involved in a customer's Azure environment. Microsoft can attribute influence and Azure consumed revenue from the customer environment to the partner organization based on the account's permissions (Azure role) and scope (Management Group, Subscription, Resource Group, Resource) provided to the partner by the customer.
**Does this impact the security of a customer's Azure Environment?**
-PAL association only adds partnerΓÇÖs ID to the credential already provisioned and it does not alter any permissions (Azure role) or provide additional Azure service data to partner or Microsoft.
+PAL association only adds the partner's ID to the credential that's already provisioned, and it doesn't alter any permissions (Azure role) or provide other Azure service data to the partner or to Microsoft.
+
+**What happens if the PAL identity is deleted?**
+
+If the partner network ID (also called the MPN ID) is deleted, then all recognition mechanisms, including Azure Consumed Revenue (ACR) attribution, stop working.
cost-management-billing View All Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/view-all-accounts.md
tags: billing
Previously updated : 11/16/2022 Last updated : 11/17/2022
A billing account is created when you sign up to use Azure. You use your billing
The Azure portal supports the following types of billing accounts: - **Microsoft Online Services Program**: A billing account for a Microsoft Online Services Program is created when you sign up for Azure through the Azure website. For example, when you sign up for an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/), [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/), or as a [Visual Studio subscriber](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/).
+ - A new billing account for a Microsoft Online Services Program can have a maximum of 5 subscriptions. However, subscriptions transferred to the new billing account don't count against the limit.
- The ability to create other Microsoft Online Services Program subscriptions is determined on an individual basis according to your history with Azure. - **Enterprise Agreement**: A billing account for an Enterprise Agreement (EA) is created when your organization signs an [Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/) to use Azure. An EA enrollment can contain an unlimited number of EA accounts.
databox Data Box Troubleshoot Data Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-troubleshoot-data-upload.md
When the following errors occur, you can resolve the errors and include the file
|Storage account deleted or moved |One or more storage accounts were moved or deleted. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Storage accounts deleted or moved**<br>Storage accounts: &lt;*storage accounts list*&gt; were either deleted, or moved to a different subscription or resource group. Recover or re-create the storage accounts with the original set of properties, and then confirm to resume data copy.<br>[Learn more on how to recover a storage account](../storage/common/storage-account-recover.md). | |Storage account location changed |One or more storage accounts were moved to a different region. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Storage accounts location changed**<br>Storage accounts: &lt;*storage accounts list*&gt; were moved to a different region. Restore the account to the original destination region and then confirm to resume data copy.<br>[Learn more on how to move storage accounts](../storage/common/storage-account-move.md). | |Virtual network restriction on storage account |One or more storage accounts are behind a virtual network and have restricted access. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Storage accounts behind virtual network**<br>Storage accounts: &lt;*storage accounts list*&gt; were moved behind a virtual network. Add Data Box to the list of trusted services to allow access and then confirm to resume data copy.<br>[Learn more about trusted first party access](../storage/common/storage-network-security.md#exceptions). |
-|Storage account owned by a different tenant |One or more storage accounts were moved under a different tenant. Resolve the error and resume data copy, or skip to data erasure and complete the order.|**Storage accounts moved to a different tenant**<br>Storage accounts: &lt;*storage accounts list*&gt; were moved to a different tenant. Restore the account to the original tenant and then confirm to resume data copy.<br>[Learn more on how to move storage accounts](../storage/common/storage-account-recover.md#recover-a-deleted-account-via-a-support-ticket). |
+|Storage account owned by a different tenant |One or more storage accounts were moved under a different tenant. Resolve the error and resume data copy, or skip to data erasure and complete the order.|**Storage accounts moved to a different tenant**<br>Storage accounts: &lt;*storage accounts list*&gt; were moved to a different tenant. Restore the account to the original tenant and then confirm to resume data copy.<br>[Learn more on how to move storage accounts](../storage/common/storage-account-move.md). |
|Kek user identity not found |The user identity that has access to the customer-managed key wasn’t found in the active directory. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**User identity not found**<br>Applied a customer-managed key but the user assigned identity that has access to the key was not found in the active directory.<br>This error may occur if a user identity is deleted from Azure.<br>Try adding another user-assigned identity to your key vault to enable access to the customer-managed key. For more information, see how to [Enable the key](data-box-customer-managed-encryption-key-portal.md#enable-key).<br>Confirm to resume data copy after the error is resolved. | |Cross tenant identity access not allowed |Managed identity couldn’t access the customer-managed key. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Cross tenant identity access not allowed**<br>Managed identity couldn’t access the customer-managed key.<br>This error may occur if a subscription is moved to a different tenant. To resolve this error, manually move the identity to the new tenant.<br>Try adding another user-assigned identity to your key vault to enable access to the customer-managed key. For more information, see how to [Enable the key](data-box-customer-managed-encryption-key-portal.md#enable-key).<br>Confirm to resume data copy after the error is resolved. | |Key details not found |Couldn’t fetch the passkey as the customer-managed key wasn’t found. Resolve the error and resume data copy, or skip to data erasure and complete the order. |**Key details not found**<br>If you deleted the key vault, you can't recover the customer-managed key. If you migrated the key vault to a different tenant, see [Change a key vault tenant ID after a subscription move](../key-vault/general/move-subscription.md). If you deleted the key vault and it is still in the purge-protection duration, use the steps at [Recover a key vault](../key-vault/general/key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell).<br>If the key vault was migrated to a different tenant, use one of the following steps to recover the vault:<ol><li>Revert the key vault back to the old tenant.</li><li>Set `Identity` = `None` and then set the value back to `Identity` = `SystemAssigned`. This deletes and recreates the identity after the new identity is created. Enable `Get`, `WrapKey`, and `UnwrapKey` permissions for the new identity in the key vault's access policy.</li></ol> |
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
To protect your Kubernetes containers, Defender for Containers receives and anal
- Workload configuration from Azure Policy - Security signals and events from the node level
+To learn more about implementation details such as supported operating systems, feature availability, and outbound proxy support, see [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+ ## Architecture for each Kubernetes environment ## [**Azure (AKS)**](#tab/defender-for-container-arch-aks)
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
Title: Install OT network monitoring software - Microsoft Defender for IoT description: Learn how to install agentless monitoring software for an OT sensor and an on-premises management console for Microsoft Defender for IoT. Use this article if you're reinstalling software on a preconfigured appliance, or if you've chosen to install software on your own appliances. Previously updated : 07/13/2022 Last updated : 11/09/2022
Mount the ISO file onto your hardware appliance or VM using one of the following
- DVDs: First burn the software to the DVD as an image - USB drive: First make sure that you've created a bootable USB drive with software such as [Rufus](https://rufus.ie/en/), and then save the software to the USB drive. USB drives must have USB version 3.0 or later.
- Your physical media must have a minimum of 4 GB storage.
+ Your physical media must have a minimum of 4-GB storage.
- **Virtual mount** – use iLO for HPE appliances, or iDRAC for Dell appliances to boot the ISO file.
After installing OT monitoring software, make sure to run the following tests:
- **ifconfig**: Verify that all the input interfaces configured during the installation process are running.
+#### Gateway checks
+
+Use the `route` command to show the gateway's IP address. For example:
+
+``` CLI
+<root@xsense:/# route -n
+Kernel IP routing table
+Destination Gateway Genmask Flags Metric Ref Use Iface
+0.0.0.0 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
+172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
+>
+```
+
+Use the `arp -a` command to verify that there is a binding between the MAC address and the IP address of the default gateway. For example:
+
+``` CLI
+<root@xsense:/# arp -a
+cusalvtecca101-gi0-02-2851.network.microsoft.com (172.18.0.1) at 02:42:b0:3a:e8:b5 [ether] on eth0
+mariadb_22.2.6.27-r-c64cbca.iot_network_22.2.6.27-r-c64cbca (172.18.0.5) at 02:42:ac:12:00:05 [ether] on eth0
+redis_22.2.6.27-r-c64cbca.iot_network_22.2.6.27-r-c64cbca (172.18.0.3) at 02:42:ac:12:00:03 [ether] on eth0
+>
+```
+
+#### DNS checks
+
+Use the `cat /etc/resolv.conf` command to find the IP address that's configured for DNS traffic. For example:
+``` CLI
+<root@xsense:/# cat /etc/resolv.conf
+search reddog.microsoft.com
+nameserver 127.0.0.11
+options ndots:0
+>
+```
+
+Use the `host` command to resolve an FQDN. For example:
+
+``` CLI
+<root@xsense:/# host www.apple.com
+www.apple.com is an alias for www.apple.com.edgekey.net.
+www.apple.com.edgekey.net is an alias for www.apple.com.edgekey.net.globalredir.akadns.net.
+www.apple.com.edgekey.net.globalredir.akadns.net is an alias for e6858.dscx.akamaiedge.net.
+e6858.dscx.akamaiedge.net has address 72.246.148.202
+e6858.dscx.akamaiedge.net has IPv6 address 2a02:26f0:5700:1b4::1aca
+e6858.dscx.akamaiedge.net has IPv6 address 2a02:26f0:5700:182::1aca
+>
+```
+
+#### Firewall checks
+
+Use the `wget` command to verify that port 443 is open for communication. For example:
+
+``` CLI
+<root@xsense:/# wget https://www.apple.com
+--2022-11-09 11:21:15-- https://www.apple.com/
+Resolving www.apple.com (www.apple.com)... 72.246.148.202, 2a02:26f0:5700:1b4::1aca, 2a02:26f0:5700:182::1aca
+Connecting to www.apple.com (www.apple.com)|72.246.148.202|:443... connected.
+HTTP request sent, awaiting response... 200 OK
+Length: 99966 (98K) [text/html]
+Saving to: 'index.html.1'
+
+index.html.1 100%[===================>] 97.62K --.-KB/s in 0.02s
+
+2022-11-09 11:21:15 (5.88 MB/s) - 'index.html.1' saved [99966/99966]
+
+>
+```
+ For more information, see [Check system health](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#check-system-health) in our sensor and on-premises management console troubleshooting article. ## Configure tunneling access for sensors through the on-premises management console
deployment-environments Quickstart Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md
Complete the following steps in the Azure CLI to create an environment and confi
```azurecli az devcenter dev environment create --dev-center-name <devcenter-name> --project-name <project-name> -n <name> --environment-type <environment-type-name>
- --catalog-item-name <catalog-item-name> catalog-name <catalog-name>
+ --catalog-item-name <catalog-item-name> --catalog-name <catalog-name>
``` If the specific *catalog-item* requires any parameters, use `--parameters` and provide the parameters as a JSON string or a JSON file. For example:
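The following is a hedged sketch; the `location` parameter name is hypothetical, so substitute whatever parameters your catalog item actually defines.

```azurecli
az devcenter dev environment create --dev-center-name <devcenter-name> --project-name <project-name> \
    -n <name> --environment-type <environment-type-name> \
    --catalog-item-name <catalog-item-name> --catalog-name <catalog-name> \
    --parameters '{"location": "eastus"}'
```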
digital-twins How To Create App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-app-registration.md
Start by selecting the tab below for your preferred interface.
Navigate to [Azure Active Directory](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) in the Azure portal (you can use this link or find it with the portal search bar). Select **App registrations** from the service menu, and then **+ New registration**. In the **Register an application** page that follows, fill in the requested values: * **Name**: An Azure AD application display name to associate with the registration
In the **Register an application** page that follows, fill in the requested valu
When you're finished, select the **Register** button. When the registration is finished setting up, the portal will redirect you to its details page.
Start on your app registration page in the Azure portal.
1. Select **Certificates & secrets** from the registration's menu, and then select **+ New client secret**.
- :::image type="content" source="media/how-to-create-app-registration/client-secret.png" alt-text="Screenshot of the Azure portal showing an Azure AD app registration and a highlight around 'New client secret'.":::
+ :::image type="content" source="media/how-to-create-app-registration/client-secret.png" alt-text="Screenshot of the Azure portal showing an Azure AD app registration and a highlight around 'New client secret'." lightbox="media/how-to-create-app-registration/client-secret.png":::
1. Enter whatever values you want for Description and Expires, and select **Add**.
- :::row:::
- :::column:::
- :::image type="content" source="media/how-to-create-app-registration/add-client-secret.png" alt-text="Screenshot of the Azure portal while adding a client secret.":::
- :::column-end:::
- :::column:::
- :::column-end:::
- :::row-end:::
+ :::image type="content" source="media/how-to-create-app-registration/add-client-secret.png" alt-text="Screenshot of the Azure portal while adding a client secret.":::
1. Verify that the client secret is visible on the **Certificates & secrets** page with Expires and Value fields. 1. Take note of its **Secret ID** and **Value** to use later (you can also copy them to the clipboard with the Copy icons).
- :::image type="content" source="media/how-to-create-app-registration/client-secret-value.png" alt-text="Screenshot of the Azure portal showing how to copy the client secret value.":::
+ :::image type="content" source="media/how-to-create-app-registration/client-secret-value.png" alt-text="Screenshot of the Azure portal showing how to copy the client secret value." lightbox="media/how-to-create-app-registration/client-secret-value.png":::
>[!IMPORTANT] >Make sure to copy the values now and store them in a safe place, as they can't be retrieved again. If you can't find them later, you'll have to create a new secret.
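If you prefer to script this step, the following is a hedged CLI alternative to the portal flow above; `<application-client-id>` is a placeholder for the app registration's client ID, and the command output includes the new secret value.

```azurecli
# Hedged sketch: create a new client secret for the app registration and print it once.
az ad app credential reset --id <application-client-id>
```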
Use these steps to create the role assignment for your registration.
| Assign access to | User, group, or service principal | | Members | Search for the name or [client ID](#collect-client-id-and-tenant-id) of the app registration |
- ![Add role assignment page](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of the 'Add role assignment' page." lightbox="../../includes/role-based-access-control/media/add-role-assignment-page.png":::
#### Verify role assignment You can view the role assignment you've set up under **Access control (IAM) > Role assignments**. The app registration should show up in the list along with the role you assigned to it.
Select **Add permissions** when finished.
On the **API permissions** page, verify that there's now an entry for Azure Digital Twins reflecting **Read.Write** permissions: You can also verify the connection to Azure Digital Twins within the app registration's *manifest.json*, which was automatically updated with the Azure Digital Twins information when you added the API permissions.
To do so, select **Manifest** from the menu to view the app registration's manif
These values are shown in the screenshot below: If these values are missing, retry the steps in the [section for adding the API permission](#provide-api-permissions).
It's possible that your organization requires more actions from subscription own
Here are some common potential activities that an owner or administrator on the subscription may need to do. These and other operations can be performed from the [Azure AD App registrations](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) page in the Azure portal. * Grant admin consent for the app registration. Your organization may have **Admin Consent Required** globally turned on in Azure AD for all app registrations within your subscription. If so, the owner/administrator will need to select this button for your company on the app registration's **API permissions** page for the app registration to be valid:
- :::image type="content" source="media/how-to-create-app-registration/grant-admin-consent.png" alt-text="Screenshot of the Azure portal showing the 'Grant admin consent' button under API permissions.":::
- - If consent was granted successfully, the entry for Azure Digital Twins should then show a **Status** value of **Granted for (your company)**
+ :::image type="content" source="media/how-to-create-app-registration/grant-admin-consent.png" alt-text="Screenshot of the Azure portal showing the 'Grant admin consent' button under API permissions." lightbox="media/how-to-create-app-registration/grant-admin-consent.png":::
+
+ - If consent was granted successfully, the entry for Azure Digital Twins should then show a **Status** value of **Granted for (your company)**
- :::image type="content" source="media/how-to-create-app-registration/granted-admin-consent-done.png" alt-text="Screenshot of the Azure portal showing the admin consent granted for the company under API permissions.":::
+ :::image type="content" source="media/how-to-create-app-registration/granted-admin-consent-done.png" alt-text="Screenshot of the Azure portal showing the admin consent granted for the company under API permissions." lightbox="media/how-to-create-app-registration/granted-admin-consent-done.png":::
+ * Activate public client access * Set specific reply URLs for web and desktop access * Allow for implicit OAuth2 authentication flows
digital-twins How To Enable Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-enable-private-link.md
The Private Link options are located in the **Networking** tab of instance setup
1. In the **Create private endpoint** page that opens, enter the details of a new private endpoint.
- :::image type="content" source="media/how-to-enable-private-link/create-private-endpoint-full.png" alt-text="Screenshot of the Azure portal showing the Create private endpoint page. It contains the fields described below.":::
+ :::image type="content" source="media/how-to-enable-private-link/create-private-endpoint-full.png" alt-text="Screenshot of the Azure portal showing the Create private endpoint page. It contains the fields described below." lightbox="media/how-to-enable-private-link/create-private-endpoint-full.png":::
1. Fill in selections for your **Subscription** and **Resource group**. Set the **Location** to the same location as the VNet you'll be using. Choose a **Name** for the endpoint, and for **Target sub-resources** select *API*.
To disable or enable public network access in the [Azure portal](https://portal.
1. In the **Public access** tab, set **Allow public network access to** either **Disabled** or **All networks**.
- :::row:::
- :::column:::
- :::image type="content" source="media/how-to-enable-private-link/network-flag-portal.png" alt-text="Screenshot of the Azure portal showing the Networking page for an Azure Digital Twins instance, highlighting how to toggle public access." lightbox="media/how-to-enable-private-link/network-flag-portal.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
- :::row-end:::
+ :::image type="content" source="media/how-to-enable-private-link/network-flag-portal.png" alt-text="Screenshot of the Azure portal showing the Networking page for an Azure Digital Twins instance, highlighting how to toggle public access." lightbox="media/how-to-enable-private-link/network-flag-portal.png":::
Select **Save**.
digital-twins How To Integrate Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-time-series-insights.md
Before you can set up a relationship with Time Series Insights, you'll need to s
You'll be attaching Time Series Insights to Azure Digital Twins through the following path.
- :::column:::
- :::image type="content" source="media/how-to-integrate-time-series-insights/diagram-simple.png" alt-text="Diagram of Azure services in an end-to-end scenario, highlighting Time Series Insights." lightbox="media/how-to-integrate-time-series-insights/diagram-simple.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
## Create Event Hubs namespace
In this section, you'll create an Azure function that will convert twin update e
2. Create a new Azure function called *ProcessDTUpdatetoTSI.cs* to update device telemetry events to the Time Series Insights. The function type will be **Event Hub trigger**.
- :::image type="content" source="media/how-to-integrate-time-series-insights/create-event-hub-trigger-function.png" alt-text="Screenshot of Visual Studio to create a new Azure function of type event hub trigger.":::
+ :::image type="content" source="media/how-to-integrate-time-series-insights/create-event-hub-trigger-function.png" alt-text="Screenshot of Visual Studio to create a new Azure function of type event hub trigger." lightbox="media/how-to-integrate-time-series-insights/create-event-hub-trigger-function.png":::
3. Add the following packages to your project (you can use the Visual Studio NuGet package manager, or the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in a command-line tool). * [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs/)
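   As a hedged sketch, each listed package can be added from a command line like this, repeating the command for each package:

   ```dotnetcli
   dotnet add package Microsoft.Azure.WebJobs
   ```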
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-routes.md
You can find these details in the [Azure portal](https://portal.azure.com) after
Select your instance from the results to see these details in the Overview for your instance: Follow the instructions below if you intend to use the Azure CLI while following this guide.
To create a new endpoint, go to your instance's page in the [Azure portal](https
1. Complete the other details that are required for your endpoint type, including your subscription and the endpoint resources described [above](#prerequisite-create-endpoint-resources). 1. For Event Hubs and Service Bus endpoints only, you must select an **Authentication type**. You can use key-based authentication with a pre-created authorization rule, or identity-based authentication if you'll be using the endpoint with a [managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) for your Azure Digital Twins instance.
- :::row:::
- :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-endpoint-event-hub-authentication.png" alt-text="Screenshot of creating an endpoint of type Event Hubs in the Azure portal." lightbox="media/how-to-manage-routes/create-endpoint-event-hub-authentication.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
- :::row-end:::
1. Finish creating your endpoint by selecting **Save**.
To create a new endpoint, go to your instance's page in the [Azure portal](https
After creating your endpoint, you can verify that the endpoint was successfully created by checking the notification icon in the top Azure portal bar:
- :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-endpoint-notifications.png" alt-text="Screenshot of the notification to verify the creation of an endpoint in the Azure portal.":::
- :::column-end:::
- :::column:::
- :::column-end:::
+ If the endpoint creation fails, observe the error message and retry after a few minutes.
You can either select from some basic common filter options, or use the advanced
To use the basic filters, expand the **Event types** option and select the checkboxes corresponding to the events you want to send to your endpoint.
- :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-event-route-filter-basic-1.png" alt-text="Screenshot of creating an event route with a basic filter in the Azure portal, highlighting the checkboxes of the events.":::
- :::column-end:::
- :::column:::
- :::column-end:::
Doing so will autopopulate the filter text box with the text of the filter you've selected:
- :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-event-route-filter-basic-2.png" alt-text="Screenshot of creating an event route with a basic filter in the Azure portal, highlighting the autopopulated filter text after selecting the events.":::
- :::column-end:::
- :::column:::
- :::column-end:::
### Use the advanced filters
You can also use the advanced filter option to write your own custom filters.
To create an event route with advanced filter options, toggle the switch for the **Advanced editor** to enable it. You can then write your own event filters in the **Filter** box:
- :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-event-route-filter-advanced.png" alt-text="Screenshot of creating an event route with an advanced filter in the Azure portal.":::
- :::column-end:::
- :::column:::
- :::column-end:::
# [API](#tab/api)
digital-twins How To Route With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-route-with-managed-identity.md
# Mandatory fields. Title: Route events with a managed identity
-description: See how to enable a system-assigned identity for Azure Digital Twins and use it to forward events, using the Azure portal or CLI.
+description: See how to use a system-assigned identity to forward events in Azure Digital Twins.
Previously updated : 02/23/2022 Last updated : 11/17/2022
# Enable a managed identity for routing Azure Digital Twins events
-This article describes how to enable a [system-assigned identity for an Azure Digital Twins instance](concepts-security.md#managed-identity-for-accessing-other-resources), and use the identity when forwarding events to supported routing destinations. Setting up a managed identity isn't required for routing, but it can help the instance to easily access other Azure AD-protected resources, such as [Event Hubs](../event-hubs/event-hubs-about.md), [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) destinations, and [Azure Storage Container](../storage/blobs/storage-blobs-introduction.md).
+This article describes how to use a [system-assigned identity for an Azure Digital Twins instance](concepts-security.md#managed-identity-for-accessing-other-resources) when forwarding events to supported routing destinations. Setting up a managed identity isn't required for routing, but it can help the instance to easily access other Azure AD-protected resources, such as [Event Hubs](../event-hubs/event-hubs-about.md), [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) destinations, and [Azure Storage Container](../storage/blobs/storage-blobs-introduction.md).
Here are the steps that are covered in this article:
Here are the steps that are covered in this article:
1. Add an appropriate role or roles to the identity. For example, assign the **Azure Event Hub Data Sender** role to the identity if the endpoint is Event Hubs, or **Azure Service Bus Data Sender role** if the endpoint is Service Bus. 1. Create an endpoint in Azure Digital Twins that can use system-assigned identities for authentication.
-## Enable system-managed identity for the instance
+## Create an Azure Digital Twins instance with a managed identity
-When you enable a system-assigned identity on your Azure Digital Twins instance, Azure automatically creates an identity for it in [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md). That identity can then be used to authenticate to Azure Digital Twins endpoints for event forwarding.
+If you already have an Azure Digital Twins instance, ensure that you've enabled a [system-managed identity](how-to-set-up-instance-cli.md#enabledisable-system-managed-identity-for-the-instance) for it.
-You can enable system-managed identities for an Azure Digital Twins instance in two different ways:
+If you don't have an Azure Digital Twins instance, follow the instructions in [Create the instance with a system-managed identity](how-to-set-up-instance-cli.md#create-the-instance-with-a-system-managed-identity) to create an Azure Digital Twins instance with a managed identity for the first time.
-- Enable it as part of the instance's initial setup.-- Enable it later on an instance that already exists.-
-Either of these creation methods will give the same configuration options and the same end result for your instance. This section describes how to do both.
-
-### Add a system-managed identity during instance creation
-
-In this section, you'll learn how to enable a system-managed identity for an Azure Digital Twins instance while the instance is being created. You can enable the identity whether you're creating the instance with the [Azure portal](https://portal.azure.com) or the [Azure CLI](/cli/azure/what-is-azure-cli). Use the tabs below to select instructions for your preferred experience.
-
-# [Portal](#tab/portal)
-
-To add a managed identity during instance creation in the portal, begin [creating an instance as you normally would](how-to-set-up-instance-portal.md).
-
-The system-managed identity option is located in the **Advanced** tab of instance setup.
-
-In this tab, select the **On** option for **System managed identity** to turn on this feature.
--
-You can then use the bottom navigation buttons to continue with the rest of instance setup.
-
-# [CLI](#tab/cli)
-
-In the CLI, you can add an `--assign-identity` parameter to the `az dt create` command that's used to create the instance. (For more information about this command, see its [reference documentation](/cli/azure/dt#az-dt-create) or the [general instructions for setting up an Azure Digital Twins instance](how-to-set-up-instance-cli.md#create-the-azure-digital-twins-instance)).
-
-To create an instance with a system managed identity, add the `--assign-identity` parameter like this:
-
-```azurecli-interactive
-az dt create --dt-name <new-instance-name> --resource-group <resource-group> --assign-identity
-```
---
-### Add a system-managed identity to an existing instance
-
-In this section, you'll add a system-managed identity to an Azure Digital Twins instance that already exists. Use the tabs below to select instructions for your preferred experience.
-
-# [Portal](#tab/portal)
-
-Start by opening the [Azure portal](https://portal.azure.com) in a browser.
-
-1. Search for the name of your instance in the portal search bar, and select it to view its details.
-
-1. Select **Identity** in the left-hand menu.
-
-1. On this page, select the **On** option to turn on this feature.
-
-1. Select the **Save** button, and **Yes** to confirm.
-
- :::image type="content" source="media/how-to-route-with-managed-identity/identity-digital-twins.png" alt-text="Screenshot of the Azure portal showing the Identity page for an Azure Digital Twins instance.":::
-
-After the change is saved, more fields will appear on this page for the new identity's **Object ID** and **Permissions**.
-
-You can copy the **Object ID** from here if needed, and use the **Permissions** button to view the Azure roles that are assigned to the identity. To set up some roles, continue to the next section.
-
-# [CLI](#tab/cli)
-
-Again, you can add the identity to your instance by using the `az dt create` command and `--assign-identity` parameter. Instead of providing a new name of an instance to create, you can provide the name of an instance that already exists to update the value of `--assign-identity` for that instance.
-
-The command to enable managed identity is the same as the command to create an instance with a system managed identity. All that changes is the value of the instance name parameter:
-
-```azurecli-interactive
-az dt create --dt-name <name-of-existing-instance> --resource-group <resource-group> --assign-identity
-```
-
-To disable managed identity on an instance where it's currently enabled, use the following similar command to set `--assign-identity` to `false`.
-
-```azurecli-interactive
-az dt create --dt-name <name-of-existing-instance> --resource-group <resource-group> --assign-identity false
-```
--
+Then, make sure you have the *Azure Digital Twins Data Owner* role on the instance. You can find instructions in [Set up user access permissions](how-to-set-up-instance-cli.md#set-up-user-access-permissions).
## Assign Azure roles to the identity
To assign a role to the identity, start by opening the [Azure portal](https://po
| Assign access to | Under **System assigned managed identity**, select **Digital Twins**. | | Members | Select the managed identity of your Azure Digital Twins instance that's being assigned the role. The name of the managed identity matches the name of the instance, so choose the name of your Azure Digital Twins instance. |
- ![Add role assignment page](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of the 'Add role assignment' page for an Azure Digital Twins instance." lightbox="../../includes/role-based-access-control/media/add-role-assignment-page.png":::
# [CLI](#tab/cli)
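As a hedged sketch of the CLI approach, the role can be assigned to the instance's system-assigned identity with a generic role assignment. The principal ID lookup and the Event Hubs role name below are assumptions; swap in the role and scope that match your endpoint type.

```azurecli
# Hedged sketch: look up the identity's principal ID, then grant it a data-plane role on the endpoint resource.
principalId=$(az dt show --dt-name <your-instance-name> --query "identity.principalId" -o tsv)
az role assignment create --assignee $principalId \
    --role "Azure Event Hubs Data Sender" \
    --scope <event-hubs-resource-id>
```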
Start following the [instructions to create an Azure Digital Twins endpoint](how
When you get to the step of completing the details required for your endpoint type, make sure to select **Identity-based** for the Authentication type.
- :::column:::
- :::image type="content" source="media/how-to-manage-routes/create-endpoint-event-hub-authentication.png" alt-text="Screenshot of creating an endpoint of type Event Hubs." lightbox="media/how-to-manage-routes/create-endpoint-event-hub-authentication.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
Finish setting up your endpoint and select **Save**.
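If you're scripting the endpoint instead, the following is a hedged CLI sketch for an Event Hubs endpoint that authenticates with the system-assigned identity; parameter names can vary by `azure-iot` extension version, so treat it as a starting point.

```azurecli
# Hedged sketch: create an identity-based Event Hubs endpoint on the instance.
az dt endpoint create eventhub --dt-name <your-instance-name> \
    --endpoint-name <endpoint-name> \
    --eventhub-namespace <event-hubs-namespace> \
    --eventhub <event-hub-name> \
    --eventhub-resource-group <event-hubs-resource-group> \
    --auth-type IdentityBased
```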
digital-twins How To Set Up Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-set-up-instance-cli.md
description: See how to set up an instance of the Azure Digital Twins service using the CLI Previously updated : 02/24/2022 Last updated : 11/17/2022
az dt create --dt-name <name-for-your-Azure-Digital-Twins-instance> --resource-g
There are several optional parameters that can be added to the command to specify additional things about your resource during creation, including creating a [system managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) for the instance or enabling/disabling public network access. For a full list of supported parameters, see the [az dt create](/cli/azure/dt#az-dt-create) reference documentation.
+### Create the instance with a system-managed identity
+
+When you enable a [system-assigned identity](concepts-security.md#managed-identity-for-accessing-other-resources) on your Azure Digital Twins instance, Azure automatically creates an identity for it in [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md). That identity can then be used to authenticate to Azure Digital Twins endpoints for event forwarding. You can enable a system-managed identity for an Azure Digital Twins instance during instance creation, or [later on an existing instance](#enabledisable-system-managed-identity-for-the-instance).
+
+To create an Azure Digital Twins instance with system-assigned identity enabled, you can add an `--assign-identity` parameter to the `az dt create` command that's used to create the instance. (For more information about this command, see its [reference documentation](/cli/azure/dt#az-dt-create) or the [general instructions for setting up an Azure Digital Twins instance](how-to-set-up-instance-cli.md#create-the-azure-digital-twins-instance)).
+
+To create an instance with a system managed identity, add the `--assign-identity` parameter like this:
+
+```azurecli-interactive
+az dt create --dt-name <new-instance-name> --resource-group <resource-group> --assign-identity
+```
+ ### Verify success and collect important values If the instance was created successfully, the result in the CLI looks something like this, outputting information about the resource you have created:
The result of this command is outputted information about the role assignment th
You now have an Azure Digital Twins instance ready to go, and have assigned permissions to manage it.
+## Enable/disable system-managed identity for the instance
+
+This section shows you how to add a system-managed identity to an existing Azure Digital Twins instance. You can also disable system-managed identity on an instance that already has one.
+
+The command to enable managed identity for an existing instance is the same `az dt create` command that's used to [create a new instance with a system managed identity](#create-the-instance-with-a-system-managed-identity). Instead of providing the name of a new instance to create, provide the name of an instance that already exists. Then, make sure to add the `--assign-identity` parameter.
+
+```azurecli-interactive
+az dt create --dt-name <name-of-existing-instance> --resource-group <resource-group> --assign-identity
+```
+
+To disable managed identity on an instance where it's currently enabled, use the following similar command to set `--assign-identity` to `false`.
+
+```azurecli-interactive
+az dt create --dt-name <name-of-existing-instance> --resource-group <resource-group> --assign-identity false
+```
+
+### Considerations for disabling system-managed identities
+
+It's important to consider the effects that any changes to the identity or its roles can have on the resources that use it. If you're [using managed identities with your Azure Digital Twins endpoints](how-to-route-with-managed-identity.md) or for [data history](how-to-use-data-history.md) and the identity is disabled, or a necessary role is removed from it, the endpoint or data history connection can become inaccessible and the flow of events will be disrupted.
+
+To continue using an endpoint that was set up with a managed identity that's now been disabled, you'll need to delete the endpoint and [re-create it](how-to-manage-routes.md#create-an-endpoint-for-azure-digital-twins) with a different authentication type. It may take up to an hour for events to resume delivery to the endpoint after this change.
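+
+As a hedged CLI sketch of that recovery path (command names are from the `azure-iot` extension, and the key-based re-creation assumes an authorization rule already exists on the event hub):
+
+```azurecli
+# Hedged sketch: remove the identity-based endpoint, then re-create it with key-based authentication.
+az dt endpoint delete --dt-name <your-instance-name> --endpoint-name <endpoint-name> --yes
+az dt endpoint create eventhub --dt-name <your-instance-name> \
+    --endpoint-name <endpoint-name> \
+    --eventhub-namespace <event-hubs-namespace> \
+    --eventhub <event-hub-name> \
+    --eventhub-resource-group <event-hubs-resource-group> \
+    --eventhub-policy <authorization-rule-name> \
+    --auth-type KeyBased
+```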
+ ## Next steps Test out individual REST API calls on your instance using the Azure Digital Twins CLI commands:
digital-twins How To Set Up Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-set-up-instance-portal.md
description: See how to set up an instance of the Azure Digital Twins service using the Azure portal Previously updated : 02/24/2022 Last updated : 11/17/2022
This version of this article goes through these steps manually, one by one, usin
Here are the additional options you can configure during setup, using the other tabs in the **Create Resource** process. * **Networking**: In this tab, you can enable private endpoints with [Azure Private Link](../private-link/private-link-overview.md) to eliminate public network exposure to your instance. For instructions, see [Enable private access with Private Link](./how-to-enable-private-link.md?tabs=portal#add-a-private-endpoint-during-instance-creation).
-* **Advanced**: In this tab, you can enable a system-managed identity for your instance that can be used when forwarding events along [event routes](concepts-route-events.md). For more information about using system-managed identities with Azure Digital Twins, see [Security for Azure Digital Twins solutions](concepts-security.md#managed-identity-for-accessing-other-resources).
+* **Advanced**: In this tab, you can enable a [system-managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) for your instance. When this is enabled, Azure automatically creates an identity for the instance in [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md). That identity can then be used to authenticate to Azure Digital Twins endpoints for event forwarding. You can enable that system-managed identity here, or [later on an existing instance](#enabledisable-system-managed-identity-for-the-instance).
* **Tags**: In this tab, you can add tags to your instance to help you organize it among your Azure resources. For more about Azure resource tags, see [Tag resources, resource groups, and subscriptions for logical organization](../azure-resource-manager/management/tag-resources.md). ### Verify success and collect important values
You can view the role assignment you've set up under **Access control (IAM) > Ro
You now have an Azure Digital Twins instance ready to go, and have assigned permissions to manage it.
+## Enable/disable system-managed identity for the instance
+
+This section shows you how to add a system-managed identity to an existing Azure Digital Twins instance. You can also use this page to disable system-managed identity on an instance that already has one.
+
+Start by opening the [Azure portal](https://portal.azure.com) in a browser.
+
+1. Search for the name of your instance in the portal search bar, and select it to view its details.
+
+1. Select **Identity** in the left-hand menu.
+
+1. On this page, select the **On** option to turn on this feature.
+
+1. Select the **Save** button, and **Yes** to confirm.
+
+ :::image type="content" source="media/how-to-route-with-managed-identity/identity-digital-twins.png" alt-text="Screenshot of the Azure portal showing the Identity page for an Azure Digital Twins instance." lightbox="media/how-to-route-with-managed-identity/identity-digital-twins.png":::
+
+After the change is saved, more fields will appear on this page for the new identity's **Object ID** and **Permissions**.
+
+You can copy the **Object ID** from here if needed, and use the **Permissions** button to view the Azure roles that are assigned to the identity. To set up some roles, continue to the next section.
+
+### Considerations for disabling system-managed identities
+
+It's important to consider the effects that any changes to the identity or its roles can have on the resources that use it. If you're [using managed identities with your Azure Digital Twins endpoints](how-to-route-with-managed-identity.md) or for [data history](how-to-use-data-history.md) and the identity is disabled, or a necessary role is removed from it, the endpoint or data history connection can become inaccessible and the flow of events will be disrupted.
+ ## Next steps Test out individual REST API calls on your instance using the Azure Digital Twins CLI commands:
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
There are two possible error scenarios that each give their own error message:
Both of these error messages are shown in the screenshot below:
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/properties-errors.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Properties panel, showing two error messages. One error indicates that models are missing, and the other indicates that properties are missing a model. " lightbox="media/how-to-use-azure-digital-twins-explorer/properties-errors.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
#### View a twin's relationships
You can create a new digital twin from its model definition in the **Models** pa
To create a twin from a model, find that model in the list and choose the menu dots next to the model name. Then, select **Create a Twin**. You'll be asked to enter a **name** for the new twin, which must be unique. Then save the twin, which will add it to your graph.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-create-a-twin.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The menu dots for a single model are highlighted, and the menu option to Create a Twin is also highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-create-a-twin.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
To add property values to your twin, see [Edit twin and relationship properties](#edit-twin-and-relationship-properties).
You can use the **Model Graph** panel to view a graphical representation of the
To see the full definition of a model, find that model in the **Models** pane and select the menu dots next to the model name. Then, select **View Model**. Doing so will display a **Model Information** modal showing the raw DTDL definition of the model.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-view.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The menu dots for a single model are highlighted, and the menu option to View Model is also highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-view.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
You can also view a model's full definition by selecting it in the **Model Graph**, and using the **Toggle model details** button to expand the **Model Detail** panel. This panel will also display the full DTDL code for the model.
You can upload custom images to represent different models in the Model Graph an
To upload an image for a single model, find that model in the **Models** panel and select the menu dots next to the model name. Then, select **Upload Model Image**. In the file selector box that appears, navigate on your machine to the image file you want to upload for that model. Choose **Open** to upload it.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-upload-one-image.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The menu dots for a single model are highlighted, and the menu option to Upload Model Image is also highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-upload-one-image.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
You can also upload model images in bulk.
First, use the following instructions to set the image file names before uploadi
Then, to upload the images at the same time, use the **Upload Model Images** icon at the top of the Models panel. In the file selector box, choose which image files to upload.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-upload-images.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The Upload Model Images icon is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-upload-images.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
## Manage models
You can upload models from your machine by selecting them individually, or by up
To upload one or more models that are individually selected, select the **Upload a model** icon showing an upwards arrow.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-upload.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The Upload a model icon is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-upload.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
In the file selector box that appears, navigate on your machine to the model(s) you want to upload. You can select one or more JSON model files and select **Open** to upload them. To upload a folder of models, including everything that's inside it, select the **Upload a directory of Models** icon showing a file folder.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-upload-directory.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The Upload a directory of Models icon is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-upload-directory.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
In the file selector box that appears, navigate on your machine to a folder containing JSON model files. Select **Open** to upload that top-level folder and all of its contents.
You can use the Models panel to delete individual models, or all of the models in your instance at once.
To delete a single model, find that model in the list and select the menu dots next to the model name. Then, select **Delete Model**.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-delete-one.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The menu dots for a single model are highlighted, and the menu option to Delete Model is also highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-delete-one.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
To delete all of the models in your instance at once, choose the **Delete All Models** icon at the top of the Models panel.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-delete-all.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The Delete All Models icon is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-delete-all.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
### Refresh models
When you open Azure Digital Twins Explorer, the Models panel should automatically show all of the models in your Azure Digital Twins instance.
However, you can manually refresh the panel at any time to reload the list of all models in your Azure Digital Twins instance. To do so, select the **Refresh models** icon.
- :::column:::
- :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/models-panel-refresh.png" alt-text="Screenshot of Azure Digital Twins Explorer Models panel. The Refresh models icon is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/models-panel-refresh.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
## Import/export graph
digital-twins How To Use Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-data-history.md
description: See how to set up and use data history for Azure Digital Twins, using the CLI or Azure portal. Previously updated : 03/23/2022 Last updated : 11/17/2022
databasename="<name-for-your-database>"
## Create an Azure Digital Twins instance with a managed identity
-If you already have an Azure Digital Twins instance, ensure that you've enabled a [system-managed identity](how-to-route-with-managed-identity.md#add-a-system-managed-identity-to-an-existing-instance) for it.
+If you already have an Azure Digital Twins instance, ensure that you've enabled a [system-managed identity](how-to-set-up-instance-cli.md#enabledisable-system-managed-identity-for-the-instance) for it.
-If you don't have an Azure Digital Twins instance, set one up using the instructions in this section.
+If you don't have an Azure Digital Twins instance, follow the instructions in [Create the instance with a system-managed identity](how-to-set-up-instance-cli.md#create-the-instance-with-a-system-managed-identity) to create an Azure Digital Twins instance with a managed identity for the first time.
-# [CLI](#tab/cli)
-
-Use the following command to create a new instance with a system-managed identity. The command uses three local variables (`$dtname`, `$resourcegroup`, and `$location`) that were created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
-
-```azurecli-interactive
-az dt create --dt-name $dtname --resource-group $resourcegroup --location $location --assign-identity
-```
-
-Next, use the following command to grant yourself the *Azure Digital Twins Data Owner* role on the instance. The command has one placeholder, `<owneruser@microsoft.com>`, that you should replace with your own Azure account information, and uses a local variable (`$dtname`) that was created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
-
-```azurecli-interactive
-az dt role-assignment create --dt-name $dtname --assignee "<owneruser@microsoft.com>" --role "Azure Digital Twins Data Owner"
-```
-
->[!NOTE]
->It may take up to five minutes for this RBAC change to apply.
-
-# [Portal](#tab/portal)
-
-Follow the instructions in [Set up an Azure Digital Twins instance and authentication](how-to-set-up-instance-portal.md) to create an instance, making sure to enable a **system-managed identity** in the [Advanced](how-to-set-up-instance-portal.md#additional-setup-options) tab during setup. Then, continue through the article's instructions to set up user access permissions so that you have the Azure Digital Twins Data Owner role on the instance.
-
-Remember the name you give to your instance so you can use it later.
--
+Then, make sure you have *Azure Digital Twins Data Owner* role on the instance. You can find instructions in [Set up user access permissions](how-to-set-up-instance-cli.md#set-up-user-access-permissions).
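For quick reference, the CLI version of those two steps boils down to commands like the following, using the local variables defined earlier in this article (replace the placeholder account with your own):

```azurecli-interactive
# Create the instance with a system-managed identity.
az dt create --dt-name $dtname --resource-group $resourcegroup --location $location --assign-identity

# Grant yourself the Azure Digital Twins Data Owner role on the instance.
az dt role-assignment create --dt-name $dtname --assignee "<owneruser@microsoft.com>" --role "Azure Digital Twins Data Owner"
```

It may take up to five minutes for the role assignment to take effect.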
## Create an Event Hubs namespace and event hub
digital-twins How To Use Postman With Digital Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-postman-with-digital-twins.md
Otherwise, you can open an [Azure Cloud Shell](https://shell.azure.com) window i
3. Copy the value of `accessToken` in the result, and save it to use in the next section. This is your **token value** that you will provide to Postman to authorize your requests.
- :::image type="content" source="media/how-to-use-postman-with-digital-twins/console-access-token.png" alt-text="Screenshot of the console showing the result of the az account get-access-token command. The accessToken field and its sample value is highlighted.":::
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/console-access-token.png" alt-text="Screenshot of the console showing the result of the az account get-access-token command. The accessToken field and its sample value is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/console-access-token.png":::
>[!TIP] >This token is valid for at least five minutes and a maximum of 60 minutes. If you run out of time allotted for the current token, you can repeat the steps in this section to get a new one.
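For context, a token like this one is produced by the Azure CLI's `az account get-access-token` command. A sketch of the call is shown below; the resource URI here is an assumption and may differ from the value used in the step above.

```azurecli-interactive
# Request an access token for the Azure Digital Twins data plane (resource URI is illustrative).
az account get-access-token --resource https://digitaltwins.azure.net --query accessToken --output tsv
```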
To find the collection, navigate to the repo link and choose the folder for your
Here's how to download your chosen collection to your machine so that you can import it into Postman. 1. Use the links above to open the collection file in GitHub in your browser. 1. Select the **Raw** button to open the raw text of the file.
- :::image type="content" source="media/how-to-use-postman-with-digital-twins/swagger-raw.png" alt-text="Screenshot of the data plane digitaltwins.json file in GitHub. There is a highlight around the Raw button." lightbox="media/how-to-use-postman-with-digital-twins/swagger-raw.png":::
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/swagger-raw.png" alt-text="Screenshot of the data plane digitaltwins.json file in GitHub. There is a highlight around the Raw button." lightbox="media/how-to-use-postman-with-digital-twins/swagger-raw.png":::
1. Copy the text from the window, and paste it into a new file on your machine. 1. Save the file with a .json extension (the file name can be whatever you want, as long as you can remember it to find the file later).
Next, import the collection into Postman.
1. In the **Import** window that follows, select **Upload Files** and navigate to the collection file on your machine that you created earlier. Select Open. 1. Select the **Import** button to confirm.
- :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-import-collection-2.png" alt-text="Screenshot of Postman's 'Import' window, showing the file to import as a collection and the Import button.":::
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-import-collection-2.png" alt-text="Screenshot of Postman's 'Import' window, showing the file to import as a collection and the Import button." lightbox="media/how-to-use-postman-with-digital-twins/postman-import-collection-2.png":::
The newly imported collection can now be seen from your main Postman view, in the Collections tab.
Now that your collection is set up, you can add your own requests to the Azure Digital Twins APIs.
1. This action opens the SAVE REQUEST window, where you can enter a name for your request, give it an optional description, and choose the collection that it's a part of. Fill in the details and save the request to the collection you created earlier.
- :::row:::
- :::column:::
- :::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-save-request.png" alt-text="Screenshot of 'Save request' window in Postman showing the fields described. The 'Save to Azure Digital Twins collection' button is highlighted.":::
- :::column-end:::
- :::column:::
- :::column-end:::
- :::row-end:::
You can now view your request under the collection, and select it to pull up its editable details.
digital-twins Reference Query Clause Select https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-query-clause-select.md
For the following examples, consider a twin graph that contains the following da
Here's a diagram illustrating this scenario:
- :::column:::
- :::image type="content" source="media/reference-query-clause-select/projections-graph.png" alt-text="Diagram showing the sample graph described above.":::
- :::column-end:::
- :::column:::
- :::column-end:::
#### Project collection example
digital-twins Tutorial Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-code.md
There's also a section showing the complete code at the end of the tutorial. You
To begin, open the file *Program.cs* in any code editor. You'll see a minimal code template that looks something like this:
- :::column:::
- :::image type="content" source="media/tutorial-code/starter-template.png" alt-text="Screenshot of a snippet of sample code in a code editor." lightbox="media/tutorial-code/starter-template.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
First, add some `using` lines at the top of the code to pull in necessary dependencies.
energy-data-services Tutorial Seismic Ddms Sdutil https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-seismic-ddms-sdutil.md
# Tutorial: Seismic store sdutil
-Sdutil is a command line python utility tool designed to easily interact with seismic store. The seismic store is a cloud-based solution designed to store and manage datasets of any size in the cloud by enabling a secure way to access them through a scoped authorization mechanism. Seismic Store overcomes the object size limitations imposed by a cloud provider by managing generic datasets as multi-independent objects. This provides a generic, reliable, and better performing solution to handle data in cloud storage.
+Sdutil is a command-line Python utility designed to make it easy to interact with seismic store. The seismic store is a cloud-based solution designed to store and manage datasets of any size in the cloud by enabling a secure way to access them through a scoped authorization mechanism. Seismic Store overcomes the object size limitations imposed by a cloud provider by managing generic datasets as multiple independent objects. This approach provides a generic, reliable, and better-performing solution for handling data in cloud storage.
**Sdutil** is an intuitive command line utility tool to interact with seismic store and perform some basic operations like upload or download datasets to or from seismic store, manage users, list folders content and more.
Run the changelog script (`./changelog-generator.sh`) to automatically generate the changelog.
A Microsoft Energy Data Services instance uses the OSDU&trade; M12 version of sdutil. Follow the steps below if you would like to use sdutil with the SDMS API of your MEDS instance.
-1. Ensure you have followed the [installation](#prerequisites) and [configuration](#configuration) steps from above. This includes downloading the SDUTIL source code, configuring your python virtual environment, editing the `config.yaml` file and setting your three environment variables.
+1. Ensure you have followed the [installation](#prerequisites) and [configuration](#configuration) steps from above. This includes downloading the SDUTIL source code, configuring your Python virtual environment, editing the `config.yaml` file and setting your three environment variables.
2. Run the following commands to sign in, and to list, upload, and download files in the seismic store. A sample sequence is sketched below.
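A typical sequence might look like the following; the tenant and subproject names are placeholders, and the exact command syntax can vary between sdutil versions.

```bash
# Sign in to the seismic store (opens a browser-based login).
python sdutil auth login

# List the contents of a subproject (placeholder tenant/subproject path).
python sdutil ls sd://<tenant>/<subproject>/

# Upload a local file to the seismic store.
python sdutil cp ./local-file.segy sd://<tenant>/<subproject>/destination/local-file.segy

# Download the file back to the local machine.
python sdutil cp sd://<tenant>/<subproject>/destination/local-file.segy ./downloaded-file.segy
```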
event-grid Event Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-domains.md
Title: Event Domains in Azure Event Grid description: This article describes how to use event domains to manage the flow of custom events to your various business organizations, customers, or applications. Previously updated : 04/13/2021 Last updated : 11/17/2022 # Understand event domains for managing Event Grid topics-
-This article describes how to use event domains to manage the flow of custom events to your various business organizations, customers, or applications. Use event domains to:
+An event domain is a management tool for large numbers of Event Grid topics related to the same application. You can think of it as a meta-topic that can have thousands of individual topics. It allows an event publisher to publish events to thousands of topics at the same time. Domains also give you authentication and authorization control over each topic so you can partition your tenants. This article describes how to use event domains to manage the flow of custom events to your various business organizations, customers, or applications. Use event domains to:
* Manage multitenant eventing architectures at scale.
-* Manage your authorization and authentication.
+* Manage your authentication and authorization.
* Partition your topics without managing each individually. * Avoid individually publishing to each of your topic endpoints.
-## Event domain overview
-
-An event domain is a management tool for large numbers of Event Grid topics related to the same application. You can think of it as a meta-topic that can have thousands of individual topics.
-
-Event domains provide you the same architecture used by Azure services like Storage and IoT Hub to publish their events. They allow you to publish events to thousands of topics. Domains also give you authorization and authentication control over each topic so you can partition your tenants.
- ## Example use case [!INCLUDE [event-grid-domain-example-use-case.md](./includes/event-grid-domain-example-use-case.md)] ## Access management
-With a domain, you get fine grain authorization and authentication control over each topic via Azure role-based access control (Azure RBAC). You can use these roles to restrict each tenant in your application to only the topics you wish to grant them access to.
-
-Azure RBAC in event domains works the same way [managed access control](security-authorization.md) works in the rest of Event Grid and Azure. Use Azure RBAC to create and enforce custom role definitions in event domains.
+With a domain, you get fine-grained authorization and authentication control over each topic via Azure role-based access control (Azure RBAC). You can use these roles to restrict each tenant in your application to only the topics you wish to grant them access to. Azure RBAC in event domains works the same way [managed access control](security-authorization.md) works in the rest of Event Grid and Azure. Use Azure RBAC to create and enforce custom role definitions in event domains.
### Built in roles
-Event Grid has two built-in role definitions to make Azure RBAC easier for working with event domains. These roles are **EventGrid EventSubscription Contributor (Preview)** and **EventGrid EventSubscription Reader (Preview)**. You assign these roles to users who need to subscribe to topics in your event domain. You scope the role assignment to only the topic that users need to subscribe to.
-
-For information about these roles, see [Built-in roles for Event Grid](security-authorization.md#built-in-roles).
+Event Grid has two built-in role definitions that make it easier to use Azure RBAC with event domains. These roles are **EventGrid EventSubscription Contributor** and **EventGrid EventSubscription Reader**. You assign these roles to users who need to subscribe to topics in your event domain. You scope the role assignment to only the topic that users need to subscribe to. For information about these roles, see [Built-in roles for Event Grid](security-authorization.md#built-in-roles).
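As an illustration, a role assignment scoped to a single domain topic might look like the following with the Azure CLI; the names are placeholders.

```azurecli-interactive
az role assignment create \
  --assignee "user@contoso.com" \
  --role "EventGrid EventSubscription Contributor" \
  --scope /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.EventGrid/domains/domain1/topics/topic1
```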
## Subscribing to topics
-Subscribing to events on a topic within an event domain is the same as [creating an Event Subscription on a custom topic](./custom-event-quickstart.md) or subscribing to an event from an Azure service.
+Subscribing to events for a topic within an event domain is the same as [creating an Event Subscription on a custom topic](./custom-event-quickstart.md) or subscribing to an event from an Azure service.
> [!IMPORTANT] > A domain topic is considered an **auto-managed** resource in Event Grid. You can create an event subscription at the [domain scope](#domain-scope-subscriptions) without creating the domain topic. In this case, Event Grid automatically creates the domain topic on your behalf. Of course, you can still choose to create the domain topic manually. This behavior allows you to worry about one less resource when dealing with a huge number of domain topics. When the last subscription to a domain topic is deleted, the domain topic is also deleted, irrespective of whether the domain topic was manually created or auto-created.
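For example, a subscription on a single domain topic might be created with a CLI command along these lines; the resource IDs and endpoint are placeholders.

```azurecli-interactive
az eventgrid event-subscription create \
  --name tenant-a-subscription \
  --source-resource-id /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.EventGrid/domains/domain1/topics/topic-tenant-a \
  --endpoint https://contoso.azurewebsites.net/api/updates
```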
Event domains also allow for domain-scope subscriptions. An event subscription o
## Publishing to an event domain
-When you create an event domain, you're given a publishing endpoint similar to if you had created a topic in Event Grid.
-
-To publish events to any topic in an Event Domain, push the events to the domain's endpoint the [same way you would for a custom topic](./post-to-custom-topic.md). The only difference is that you must specify the topic you'd like the event to be delivered to.
-
-For example, publishing the following array of events would send event with `"id": "1111"` to topic `foo` while the event with `"id": "2222"` would be sent to topic `bar`:
+When you create an event domain, you're given a publishing endpoint similar to the one you get when you create a topic in Event Grid. To publish events to any topic in an event domain, push the events to the domain's endpoint the [same way you would for a custom topic](./post-to-custom-topic.md). The only difference is that you must specify the topic you'd like the event to be delivered to. For example, publishing the following array of events would send the event with `"id": "1111"` to topic `foo`, while the event with `"id": "2222"` would be sent to topic `bar`:
```json [{
Here are the limits and quotas related to event domains:
If these limits don't suit you, open a support ticket or send an email to [askgrid@microsoft.com](mailto:askgrid@microsoft.com). ## Pricing
-Event domains use the same [operations pricing](https://azure.microsoft.com/pricing/details/event-grid/) that all other features in Event Grid use.
-
-Operations work the same in event domains as they do in custom topics. Each ingress of an event to an event domain is an operation, and each delivery attempt for an event is an operation.
--
+Event domains use the same [operations pricing](https://azure.microsoft.com/pricing/details/event-grid/) that all other features in Event Grid use. Operations work the same in event domains as they do in custom topics. Each ingress of an event to an event domain is an operation, and each delivery attempt for an event is an operation.
## Next steps-
-* To learn about setting up event domains, creating topics, creating event subscriptions, and publishing events, see [Manage event domains](./how-to-event-domains.md).
+To learn about setting up event domains, creating topics, creating event subscriptions, and publishing events, see [Manage event domains](./how-to-event-domains.md).
event-grid Event Schema Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-key-vault.md
Title: Azure Key Vault as Event Grid source description: Describes the properties and schema provided for Azure Key Vault events with Azure Event Grid Previously updated : 09/15/2021 Last updated : 11/17/2022 # Azure Key Vault as Event Grid source
-This article provides the properties and schema for events in [Azure Key Vault](../key-vault/index.yml). For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md).
+This article provides the properties and schema for events in [Azure Key Vault](../key-vault/index.yml). For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md) and [Cloud event schema](cloud-event-schema.md).
## Available event types
event-grid Handler Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-service-bus.md
Title: Service Bus queues and topics as event handlers for Azure Event Grid events description: Describes how you can use Service Bus queues and topics as event handlers for Azure Event Grid events. Previously updated : 09/30/2021 Last updated : 11/17/2022 # Service Bus queues and topics as event handlers for Azure Event Grid events
-An event handler is the place where the event is sent. The handler takes some further action to process the event. Several Azure services are automatically configured to handle events and **Azure Service Bus** is one of them.
-
-You can use a Service queue or topic as a handler for events from Event Grid.
+An event handler receives events from an event source via Event Grid and processes those events. You can use instances of a few Azure services to handle events, and **Azure Service Bus** is one of them. This article shows you how to use a Service Bus queue or topic as a handler for events from Event Grid.
## Service Bus queues
-> [!NOTE]
-> Session enabled queues are not supported as event handlers for Azure Event Grid events
-
-You can route events in Event Grid directly to Service Bus queues for use in buffering or command & control scenarios in enterprise applications.
+You can route events in Event Grid directly to Service Bus queues for use in buffering or command and control scenarios in enterprise applications.
-In the Azure portal, while creating an event subscription, select **Service Bus Queue** as endpoint type and then click **select an endpoint** to choose a Service Bus queue.
+### Use Azure portal
+In the Azure portal, while creating an event subscription, select **Service Bus Queue** as the endpoint type and then click **select an endpoint** to choose a Service Bus queue.
-### Using CLI to add a Service Bus queue handler
-For Azure CLI, the following example subscribes and connects an event grid topic to a Service Bus queue:
+> [!NOTE]
+> Session-enabled queues aren't supported as event handlers for Azure Event Grid events.
+
+### Use Azure CLI
+Use the [`az eventgrid event-subscription create`](/cli/azure/eventgrid/event-subscription) command with `--endpoint-type` set to `servicebusqueue` and `--endpoint` set to `/subscriptions/{AZURE SUBSCRIPTION}/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.ServiceBus/namespaces/<NAMESPACE NAME>/queues/<QUEUE NAME>`. Here's an example:
```azurecli-interactive az eventgrid event-subscription create \
az eventgrid event-subscription create \
--endpoint /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.ServiceBus/namespaces/ns1/queues/queue1 ```
+You can also use the [`az eventgrid topic event-subscription`](/cli/azure/eventgrid/topic/event-subscription) command for custom topics, the [`az eventgrid system-topic event-subscription`](/cli/azure/eventgrid/system-topic/event-subscription) command for system topics, and the [`az eventgrid partner topic event-subscription create`](/cli/azure/eventgrid/partner/topic/event-subscription#az-eventgrid-partner-topic-event-subscription-create) command for partner topics.
+
+### Use Azure PowerShell
+Use the [New-AzEventGridSubscription](/powershell/module/az.eventgrid/new-azeventgridsubscription) command with `-EndpointType` set to `servicebusqueue` and `-Endpoint` set to `/subscriptions/{AZURE SUBSCRIPTION}/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.ServiceBus/namespaces/<NAMESPACE NAME>/queues/<QUEUE NAME>`. Here's an example:
++
+```azurepowershell-interactive
+New-AzEventGridSubscription -ResourceGroup MyResourceGroup `
+ -TopicName Topic1 `
+ -EndpointType servicebusqueue `
+ -Endpoint /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.ServiceBus/namespaces/ns1/queues/queue1 `
+ -EventSubscriptionName EventSubscription1
+```
+
+You can also use the [`New-AzEventGridSystemTopicEventSubscription`](/powershell/module/az.eventgrid/new-azeventgridsystemtopiceventsubscription) command for system topics, and the [`New-AzEventGridPartnerTopicEventSubscription`](/powershell/module/az.eventgrid/new-azeventgridpartnertopiceventsubscription) command for partner topics.
+ ## Service Bus topics
-You can route events in Event Grid directly to Service Bus topics to handle Azure system events with Service Bus topics, or for command & control messaging scenarios.
+You can route events in Event Grid directly to Service Bus topics for command and control messaging scenarios.
-In the Azure portal, while creating an event subscription, select **Service Bus Topic** as endpoint type and then click **select and endpoint** to choose a Service Bus topic.
+### Use Azure portal
+In the Azure portal, while creating an event subscription, select **Service Bus Topic** as the endpoint type and then click **select an endpoint** to choose a Service Bus topic.
-### Using CLI to add a Service Bus topic handler
-For Azure CLI, the following example subscribes and connects an event grid topic to a Service Bus topic:
+### Use Azure CLI
+Use the [`az eventgrid event-subscription create`](/cli/azure/eventgrid/event-subscription) command with `--endpoint-type` set to `servicebustopic` and `--endpoint` set to `/subscriptions/{AZURE SUBSCRIPTION}/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.ServiceBus/namespaces/<NAMESPACE NAME>/topics/<TOPIC NAME>`. Here's an example:
```azurecli-interactive az eventgrid event-subscription create \
az eventgrid event-subscription create \
--endpoint /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.ServiceBus/namespaces/ns1/topics/topic1 ```
+You can also use the [`az eventgrid topic event-subscription`](/cli/azure/eventgrid/topic/event-subscription) command for custom topics, the [`az eventgrid system-topic event-subscription`](/cli/azure/eventgrid/system-topic/event-subscription) command for system topics, and the [`az eventgrid partner topic event-subscription create`](/cli/azure/eventgrid/partner/topic/event-subscription#az-eventgrid-partner-topic-event-subscription-create) command for partner topics.
+
+### Use Azure PowerShell
+Use the [New-AzEventGridSubscription](/powershell/module/az.eventgrid/new-azeventgridsubscription) command with `-EndpointType` set to `servicebustopic` and `-Endpoint` set to `/subscriptions/{AZURE SUBSCRIPTION}/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.ServiceBus/namespaces/<NAMESPACE NAME>/topics/<TOPIC NAME>`. Here's an example:
++
+```azurepowershell-interactive
+New-AzEventGridSubscription -ResourceGroup MyResourceGroup `
+ -TopicName Topic1 `
+ -EndpointType servicebustopic `
+ -Endpoint /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.ServiceBus/namespaces/ns1/topics/topic1 `
+ -EventSubscriptionName EventSubscription1
+```
+
+You can also use the [`New-AzEventGridSystemTopicEventSubscription`](/powershell/module/az.eventgrid/new-azeventgridsystemtopiceventsubscription) command for system topics, and the [`New-AzEventGridPartnerTopicEventSubscription`](/powershell/module/az.eventgrid/new-azeventgridpartnertopiceventsubscription) command for partner topics.
++ [!INCLUDE [event-grid-message-headers](./includes/event-grid-message-headers.md)] When sending an event to a Service Bus queue or topic as a brokered message, the `messageid` of the brokered message is an internal system ID. The internal system ID for the message is maintained across redelivery of the event so that you can avoid duplicate deliveries by turning on **duplicate detection** on the Service Bus entity. We recommend that you set the duplicate detection window on the Service Bus entity to either the time-to-live (TTL) of the event or the maximum retry duration, whichever is longer.
+## Delivery properties
+Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability allows you to set custom headers that are required by a destination. You can set custom headers on the events that are delivered to Azure Service Bus queues and topics.
+
+Azure Service Bus supports the use of the following message properties when sending single messages.
+
+| Header name | Header type |
+| :-- | :-- |
+| `MessageId` | Dynamic |
+| `PartitionKey` | Static or dynamic |
+| `SessionId` | Static or dynamic |
+| `CorrelationId` | Static or dynamic |
+| `Label` | Static or dynamic |
+| `ReplyTo` | Static or dynamic |
+| `ReplyToSessionId` | Static or dynamic |
+| `To` |Static or dynamic |
+| `ViaPartitionKey` | Static or dynamic |
+
+> [!NOTE]
+> - The default value of `MessageId` is the internal ID of the Event Grid event. You can override it, for example, with a value from the event data such as `data.field`.
+> - You can set either `SessionId` or `MessageId`, but not both.
+
+For more information, see [Custom delivery properties](delivery-properties.md).
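As a sketch, delivery properties can be configured when creating the event subscription with the Azure CLI, assuming a CLI version that supports the `--delivery-attribute-mapping` parameter; the values below are placeholders.

```azurecli-interactive
az eventgrid event-subscription create \
  --name EventSubscription1 \
  --source-resource-id /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.EventGrid/topics/topic1 \
  --endpoint-type servicebusqueue \
  --endpoint /subscriptions/{SubID}/resourceGroups/TestRG/providers/Microsoft.ServiceBus/namespaces/ns1/queues/queue1 \
  --delivery-attribute-mapping CorrelationId static orders \
  --delivery-attribute-mapping MessageId dynamic data.orderId
```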
+ ## REST examples (for PUT) ### Service Bus queue
The internal system ID for the message will be maintained across redelivery of t
} ```
-## Delivery properties
-Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability allows you to set custom headers that are required by a destination. You can set custom headers on the events that are delivered to Azure Service Bus queues and topics.
-
-Azure Service Bus supports the use of following message properties when sending single messages.
-
-| Header name | Header type |
-| :-- | :-- |
-| `MessageId` | Dynamic |
-| `PartitionKey` | Static or dynamic |
-| `SessionId` | Static or dynamic |
-| `CorrelationId` | Static or dynamic |
-| `Label` | Static or dynamic |
-| `ReplyTo` | Static or dynamic |
-| `ReplyToSessionId` | Static or dynamic |
-| `To` |Static or dynamic |
-| `ViaPartitionKey` | Static or dynamic |
-
-> [!NOTE]
-> - The default value of `MessageId` is the internal ID of the Event Grid event. You can override it. For example, `data.field`.
-> - You can only set either `SessionId` or `MessageId`.
-
-For more information, see [Custom delivery properties](delivery-properties.md).
## Next steps See the [Event handlers](event-handlers.md) article for a list of supported event handlers.
event-grid Handler Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-webhooks.md
Title: Webhooks as event handlers for Azure Event Grid events description: Describes how you can use webhooks as event handlers for Azure Event Grid events. Azure Automation runbooks and logic apps are supported as event handlers via webhooks. Previously updated : 09/15/2021 Last updated : 11/17/2022 # Webhooks, Automation runbooks, Logic Apps as event handlers for Azure Event Grid events
-An event handler is the place where the event is sent. The handler takes some further action to process the event. Several Azure services are automatically configured to handle events. You can also use any WebHook for handling events. The WebHook doesn't need to be hosted in Azure to handle events. Event Grid only supports HTTPS Webhook endpoints.
+An event handler receives events from an event source via Event Grid and processes those events. You can use any WebHook as an event handler for events forwarded by Event Grid. The WebHook doesn't need to be hosted in Azure to handle events. Event Grid supports only HTTPS WebHook endpoints. You can also use an Azure Automation runbook or an Azure logic app as an event handler via webhooks. This article provides links to conceptual, quickstart, and tutorial articles with more information.
> [!NOTE]
-> - Azure Automation runbooks and logic apps are supported as event handlers via webhooks.
-> - Even though you can use **Webhook** as an **endpoint type** to configure an Azure function as an event handler, use **Azure Function** as an endpoint type. For more information, see [Azure function as an event handler](handler-functions.md).
+> Even though you can use **Webhook** as an **endpoint type** to configure an Azure function as an event handler, use **Azure Function** as an endpoint type. For more information, see [Azure function as an event handler](handler-functions.md).
## Webhooks See the following articles for an overview and examples of using webhooks as event handlers.
See the following articles for an overview and examples of using webhooks as eve
| Quickstart: create and route custom events with - [Azure CLI](custom-event-quickstart.md), [PowerShell](custom-event-quickstart-powershell.md), and [portal](custom-event-quickstart-portal.md). | Shows how to send custom events to a WebHook. | | Quickstart: route Blob storage events to a custom web endpoint with - [Azure CLI](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json), [PowerShell](../storage/blobs/storage-blob-event-quickstart-powershell.md?toc=%2fazure%2fevent-grid%2ftoc.json), and [portal](blob-event-quickstart-portal.md). | Shows how to send blob storage events to a WebHook. | | [Quickstart: send container registry events](../container-registry/container-registry-event-grid-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json) | Shows how to use Azure CLI to send Container Registry events. |
-| [Overview: receive events to an HTTP endpoint](receive-events.md) | Describes how to validate an HTTP endpoint to receive events from an Event Subscription, and receive and deserialize events. |
+| [Overview: receive events to an HTTP endpoint](receive-events.md) | Describes how to validate an HTTP endpoint to receive events from an event subscription, and receive and deserialize events. |
## Azure Automation
event-grid Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/network-security.md
Title: Network security for Azure Event Grid resources
description: This article describes how to use service tags for egress, IP firewall rules for ingress, and private endpoints for ingress with Azure Event Grid. Previously updated : 09/28/2021 Last updated : 11/17/2022
event-grid Post To Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/post-to-custom-topic.md
Title: Post event to custom Azure Event Grid topic description: This article describes how to post an event to a custom topic. It shows the format of the post and event data. Previously updated : 08/19/2021 Last updated : 11/17/2022
This article describes how to post an event to a custom topic using an access ke
## Endpoint
-When sending the HTTP POST to a custom topic, use the URI format: `https://<topic-endpoint>?api-version=2018-01-01`.
-
-For example, a valid URI is: `https://exampletopic.westus2-1.eventgrid.azure.net/api/events?api-version=2018-01-01`.
-
-To get the endpoint for a custom topic with Azure CLI, use:
+When sending the HTTP POST to a custom topic, use the URI format: `https://<topic-endpoint>?api-version=2018-01-01`. For example, a valid URI is: `https://exampletopic.westus2-1.eventgrid.azure.net/api/events?api-version=2018-01-01`. To get the endpoint for a custom topic using Azure CLI, use:
```azurecli-interactive az eventgrid topic show --name <topic-name> -g <topic-resource-group> --query "endpoint" ```
-To get the endpoint for a custom topic with Azure PowerShell, use:
+To get the endpoint for a custom topic using Azure PowerShell, use:
```powershell (Get-AzEventGridTopic -ResourceGroupName <topic-resource-group> -Name <topic-name>).Endpoint
To get the endpoint for a custom topic with Azure PowerShell, use:
## Header
-In the request, include a header value named `aeg-sas-key` that contains a key for authentication.
-
-For example, a valid header value is `aeg-sas-key: VXbGWce53249Mt8wuotr0GPmyJ/nDT4hgdEj9DpBeRr38arnnm5OFg==`.
-
-To get the key for a custom topic with Azure CLI, use:
+In the request, include a header value named `aeg-sas-key` that contains a key for authentication. For example, a valid header value is `aeg-sas-key: xxxxxxxxxxxxxxxxxxxxxxx`. To get the key for a custom topic using Azure CLI, use:
```azurecli az eventgrid topic key list --name <topic-name> -g <topic-resource-group> --query "key1" ```
-To get the key for a custom topic with PowerShell, use:
+To get the key for a custom topic using PowerShell, use:
```powershell (Get-AzEventGridTopicKey -ResourceGroupName <topic-resource-group> -Name <topic-name>).Key1
To get the key for a custom topic with PowerShell, use:
## Event data
-For custom topics, the top-level data contains the same fields as standard resource-defined events. One of those properties is a data property that contains properties unique to the custom topic. As event publisher, you determine the properties for that data object. Use the following schema:
+For custom topics, the top-level data contains the same fields as standard resource-defined events. One of those properties is a `data` property that contains properties unique to the custom topic. As an event publisher, you determine the properties for that data object. Here's the schema:
```json [
For custom topics, the top-level data contains the same fields as standard resou
] ```
-For a description of these properties, see [Azure Event Grid event schema](event-schema.md). When posting events to an event grid topic, the array can have a total size of up to 1 MB. The maximum allowed size for an event is also 1 MB. Events over 64 KB are charged in 64-KB increments. When receiving events in a batch, the maximum allowed number of events is 5,000 per batch.
+For a description of these properties, see [Azure Event Grid event schema](event-schema.md). When posting events to an Event Grid topic, the array can have a total size of up to 1 MB. The maximum allowed size for an event is also 1 MB. Events over 64 KB are charged in 64-KB increments. When receiving events in a batch, the maximum allowed number of events is 5,000 per batch.
For example, a valid event data schema is:
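A minimal sketch of such an event array, with placeholder values following the standard fields described in the event schema article:

```json
[
  {
    "id": "1807",
    "eventType": "recordInserted",
    "subject": "myapp/vehicles/motorcycles",
    "eventTime": "2022-11-17T21:03:07+00:00",
    "data": {
      "make": "Contoso",
      "model": "Example"
    },
    "dataVersion": "1.0"
  }
]
```

To post it, the array goes in the request body together with the `aeg-sas-key` header described earlier; an illustrative call with `curl`:

```bash
curl -X POST "https://<topic-endpoint>?api-version=2018-01-01" \
  -H "aeg-sas-key: <topic-key>" \
  -H "Content-Type: application/json" \
  -d @event.json
```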
firewall-manager Rule Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/rule-hierarchy.md
Previously updated : 08/26/2020 Last updated : 11/17/2022 + # Use Azure Firewall policy to define a rule hierarchy
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
Azure Firewall Premium includes the following features:
- **TLS inspection** - decrypts outbound traffic, processes the data, then encrypts the data and sends it to the destination. - **IDPS** - A network intrusion detection and prevention system (IDPS) allows you to monitor network activities for malicious activity, log information about this activity, report it, and optionally attempt to block it.-- **URL filtering** - extends Azure Firewall's FQDN filtering capability to consider an entire URL. For example, `www.contoso.com/a/c` instead of `www.contoso.com`.
+- **URL filtering** - extends Azure Firewall's FQDN filtering capability to consider an entire URL along with any additional path. For example, `www.contoso.com/a/c` instead of `www.contoso.com`.
- **Web categories** - administrators can allow or deny user access to website categories such as gambling websites, social media websites, and others. ## TLS inspection
You can identify what category a given FQDN or URL is by using the **Web Category Check** feature.
### Category change
-Under the **Web Categories** tab in **Firewall Policy Settings**, you can request a categorization change if you:
+Under the **Web Categories** tab in **Firewall Policy Settings**, you can request a category change if you:
- think an FQDN or URL should be under a different category
frontdoor Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/billing.md
If the request can be served from the Front Door edge location's cache, Front Do
### Data transfer from origin to Front Door
-When your origin server processes a request, it sends data back to Front Door so that it can be returned to the client. This traffic not billed by Front Door, even if the origin is in a different region to the Front Door edge location for the request.
+When your origin server processes a request, it sends data back to Front Door so that it can be returned to the client. This traffic isn't billed by Front Door, even if the origin is in a different region from the Front Door edge location for the request.
If your origin is within Azure, the data egress from the Azure origin to Front Door isn't charged. However, you should determine whether those Azure services might bill you to process your requests.
hdinsight Apache Kafka Spark Structured Streaming Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/apache-kafka-spark-structured-streaming-cosmosdb.md
description: Learn how to use Apache Spark Structured Streaming to read data fro
Previously updated : 04/08/2022 Last updated : 11/16/2022 # Use Apache Spark Structured Streaming with Apache Kafka and Azure Cosmos DB
While you can create an Azure virtual network, Kafka, and Spark clusters manuall
|Ssh User Name|The SSH user to create for the Spark and Kafka clusters.| |Ssh Password|The password for the SSH user for the Spark and Kafka clusters.|
- :::image type="content" source="./media/apache-kafka-spark-structured-streaming-cosmosdb/hdi-custom-parameters.png" alt-text="HDInsight custom deployment values":::
+ :::image type="content" source="./media/apache-kafka-spark-structured-streaming-cosmosdb/hdi-custom-parameters-40.png" alt-text="HDInsight version 4.0 custom deployment values":::
1. Read the **Terms and Conditions**, and then select **I agree to the terms and conditions stated above**.
hdinsight Apache Hadoop Etl At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-etl-at-scale.md
description: Learn how extract, transform, and load is used in HDInsight with Ap
Previously updated : 04/01/2022 Last updated : 11/17/2022 # Extract, transform, and load (ETL) at scale
Azure Data Lake Storage is a managed, hyperscale repository for analytics data.
Data is usually ingested into Data Lake Storage through Azure Data Factory. You can also use Data Lake Storage SDKs, the AdlCopy service, Apache DistCp, or Apache Sqoop. The service you choose depends on where the data is. If it's in an existing Hadoop cluster, you might use Apache DistCp, the AdlCopy service, or Azure Data Factory. For data in Azure Blob storage, you might use Azure Data Lake Storage .NET SDK, Azure PowerShell, or Azure Data Factory.
-Data Lake Storage is optimized for event ingestion through Azure Event Hubs or Apache Storm.
+Data Lake Storage is optimized for event ingestion through Azure Event Hubs.
### Considerations for both storage options
hdinsight Apache Hadoop Linux Tutorial Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-tutorial-get-started-bicep.md
Previously updated : 07/18/2022 Last updated : 11/17/2022 #Customer intent: As a data analyst, I need to create a Hadoop cluster in Azure HDInsight using Bicep
Two Azure resources are defined in the Bicep file:
You need to provide values for the parameters: * Replace **\<cluster-name\>** with the name of the HDInsight cluster to create.
- * Replace **\<cluster-type\>** with the type of the HDInsight cluster to create. Allowed strings include: `hadoop`, `interactivehive`, `hbase`, `storm`, and `spark`.
+ * Replace **\<cluster-type\>** with the type of the HDInsight cluster to create. Allowed strings include: `hadoop`, `interactivehive`, `hbase`, and `spark`.
* Replace **\<cluster-username\>** with the credentials used to submit jobs to the cluster and to log in to cluster dashboards. * Replace **\<ssh-username\>** with the credentials used to remotely access the cluster. The username cannot be admin.
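Once the parameter values are decided, a deployment command along these lines can be used; the Bicep file name and explicit parameter names here are assumptions.

```azurecli-interactive
az deployment group create \
  --resource-group <resource-group-name> \
  --template-file main.bicep \
  --parameters clusterName=<cluster-name> clusterType=hadoop
```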
hdinsight Apache Hadoop On Premises Migration Best Practices Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-architecture.md
description: Learn architecture best practices for migrating on-premises Hadoop
Previously updated : 07/18/2022 Last updated : 11/17/2022 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - architecture best practices
Azure HDInsight clusters are designed for a specific type of compute usage. Beca
||| |Batch processing (ETL / ELT)|Hadoop, Spark| |Data warehousing|Hadoop, Spark, Interactive Query|
-|IoT / Streaming|Kafka, Storm, Spark|
+|IoT / Streaming|Kafka, Spark|
|NoSQL Transactional processing|HBase| |Interactive and Faster queries with in-memory caching|Interactive Query| |Data Science| Spark|
hdinsight Apache Hadoop On Premises Migration Motivation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-motivation.md
description: Learn the motivation and benefits for migrating on-premises Hadoop
Previously updated : 04/28/2022 Last updated : 11/17/2022 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - motivation and benefits
Azure HDInsight is a cloud distribution of Hadoop components. Azure HDInsight ma
- Apache Spark - Apache Hive with LLAP - Apache Kafka-- Apache Storm - Apache HBase-- R ## Azure HDInsight advantages over on-premises Hadoop
This section provides template questionnaires to help gather important informati
|**Topic**: **Environment**||| |Cluster Distribution version|HDP 2.6.5, CDH 5.7| |Big Data eco-system components|HDFS, Yarn, Hive, LLAP, Impala, Kudu, HBase, Spark, MapReduce, Kafka, Zookeeper, Solr, Sqoop, Oozie, Ranger, Atlas, Falcon, Zeppelin, R|
-|Cluster types|Hadoop, Spark, Confluent Kafka, Storm, Solr|
+|Cluster types|Hadoop, Spark, Confluent Kafka, Solr|
|Number of clusters|4| |Number of master nodes|2| |Number of worker nodes|100|
hdinsight Apache Hbase Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-overview.md
description: An introduction to Apache HBase in HDInsight, a NoSQL database buil
Previously updated : 05/11/2022 Last updated : 11/17/2022 #Customer intent: As a developer new to Apache HBase and Apache HBase in Azure HDInsight, I want to have a basic understanding of Microsoft's implementation of Apache HBase in Azure HDInsight so I can decide if I want to use it rather than build my own cluster.
The canonical use case for which BigTable (and by extension, HBase) was created
|Key-value store|HBase can be used as a key-value store, and it's suitable for managing message systems. Facebook uses HBase for their messaging system, and it's ideal for storing and managing Internet communications. WebTable uses HBase to search for and manage tables that are extracted from webpages.| |Sensor data|HBase is useful for capturing data that is collected incrementally from various sources. This data includes social analytics, and time series. And keeping interactive dashboards up to date with trends and counters, and managing audit log systems. Examples include Bloomberg trader terminal and the Open Time Series Database (OpenTSDB). OpenTSDB stores and provides access to metrics collected about the health of server systems.| |Real-time query|[Apache Phoenix](https://phoenix.apache.org/) is a SQL query engine for Apache HBase. It's accessed as a JDBC driver, and it enables querying and managing HBase tables by using SQL.|
-|HBase as a platform|Applications can run on top of HBase by using it as a datastore. Examples include Phoenix, OpenTSDB, `Kiji`, and Titan. Applications can also integrate with HBase. Examples include: [Apache Hive](https://hive.apache.org/), Apache Pig, [Solr](https://lucene.apache.org/solr/), Apache Storm, Apache Flume, [Apache Impala](https://impala.apache.org/), Apache Spark, `Ganglia`, and Apache Drill.|
+|HBase as a platform|Applications can run on top of HBase by using it as a datastore. Examples include Phoenix, OpenTSDB, `Kiji`, and Titan. Applications can also integrate with HBase. Examples include: [Apache Hive](https://hive.apache.org/), Apache Pig, [Solr](https://lucene.apache.org/solr/), Apache Flume, [Apache Impala](https://impala.apache.org/), Apache Spark, `Ganglia`, and Apache Drill.|
## Next steps
hdinsight Hdinsight Administer Use Command Line https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-administer-use-command-line.md
Title: Manage Azure HDInsight clusters using Azure CLI
-description: Learn how to use the Azure CLI to manage Azure HDInsight clusters. Cluster types include Apache Hadoop, Spark, HBase, Storm, Kafka, Interactive Query.
+description: Learn how to use the Azure CLI to manage Azure HDInsight clusters. Cluster types include Apache Hadoop, Spark, HBase, Kafka, Interactive Query.
Previously updated : 06/16/2022 Last updated : 11/17/2022 # Manage Azure HDInsight clusters using Azure CLI
hdinsight Hdinsight Hadoop Create Linux Clusters Curl Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-curl-rest.md
description: Learn how to create HDInsight clusters by submitting Azure Resource
Previously updated : 08/05/2022 Last updated : 11/17/2022 # Create Apache Hadoop clusters using the Azure REST API
The following JSON document is a merger of the template and parameters files fro
"type": "string", "allowedValues": ["hadoop", "hbase",
- "storm",
"spark"], "metadata": { "description": "The type of the HDInsight cluster to create."
hdinsight Hdinsight Hadoop Customize Cluster Bootstrap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-customize-cluster-bootstrap.md
description: Learn how to customize HDInsight cluster configuration programmatic
Previously updated : 05/31/2022 Last updated : 11/17/2022 # Customize HDInsight clusters using Bootstrap
For example, using these programmatic methods, you can configure options in thes
* mapred-site * oozie-site.xml * oozie-env.xml
-* storm-site.xml
* tez-site.xml * webhcat-site.xml * yarn-site.xml
hdinsight Hdinsight Hadoop Linux Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-linux-information.md
description: Get implementation tips for using Linux-based HDInsight (Hadoop) cl
Previously updated : 04/29/2020 Last updated : 11/17/2022 # Information about using HDInsight on Linux
In HDInsight, the data storage resources (Azure Blob Storage and Azure Data Lake
### <a name="URI-and-scheme"></a>URI and scheme
-Some commands may require you to specify the scheme as part of the URI when accessing a file. For example, the Storm-HDFS component requires you to specify the scheme. When using non-default storage (storage added as "additional" storage to the cluster), you must always use the scheme as part of the URI.
+Some commands may require you to specify the scheme as part of the URI when accessing a file. When using non-default storage (storage added as "additional" storage to the cluster), you must always use the scheme as part of the URI.
When using [**Azure Storage**](./hdinsight-hadoop-use-blob-storage.md), use one of the following URI schemes:
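As an illustration, a fully qualified Azure Storage path uses the `wasb://` or `wasbs://` scheme; the account and container names below are placeholders.

```bash
# List files using a fully qualified wasbs:// URI to non-default storage.
hdfs dfs -ls wasbs://mycontainer@myaccount.blob.core.windows.net/example/data/
```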
To use a different version of a component, upload the version you need and use i
* [Manage HDInsight clusters by using the Apache Ambari REST API](./hdinsight-hadoop-manage-ambari-rest-api.md) * [Use Apache Hive with HDInsight](hadoop/hdinsight-use-hive.md)
-* [Use MapReduce jobs with HDInsight](hadoop/hdinsight-use-mapreduce.md)
+* [Use MapReduce jobs with HDInsight](hadoop/hdinsight-use-mapreduce.md)
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen1.md
description: Learn how to query data from Azure Data Lake Storage Gen1 and to st
Previously updated : 09/15/2022 Last updated : 11/17/2022 # Use Data Lake Storage Gen1 with Azure HDInsight clusters
Currently, only some of the HDInsight cluster types/versions support using Data
| HDInsight version 3.4 | No | Yes | | | HDInsight version 3.3 | No | No | | | HDInsight version 3.2 | No | Yes | |
-| Storm | | |You can use Data Lake Storage Gen1 to write data from a Storm topology. You can also use Data Lake Storage Gen1 for reference data that can then be read by a Storm topology.|
> [!WARNING] > HDInsight HBase is not supported with Azure Data Lake Storage Gen1
hdinsight Hdinsight Key Scenarios To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-key-scenarios-to-monitor.md
description: How to monitor health and performance of Apache Hadoop clusters in
Previously updated : 04/28/2022 Last updated : 11/17/2022 # Monitor cluster performance in Azure HDInsight
If your cluster's backing store is Azure Data Lake Storage (ADLS), your throttli
* [Performance tuning guidance for Apache Hive on HDInsight and Azure Data Lake Storage](../data-lake-store/data-lake-store-performance-tuning-hive.md) * [Performance tuning guidance for MapReduce on HDInsight and Azure Data Lake Storage](../data-lake-store/data-lake-store-performance-tuning-mapreduce.md)
-* [Performance tuning guidance for Apache Storm on HDInsight and Azure Data Lake Storage](../data-lake-store/data-lake-store-performance-tuning-storm.md)
## Troubleshoot sluggish node performance
hdinsight Hdinsight Scaling Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-scaling-best-practices.md
Previously updated : 07/21/2022 Last updated : 11/17/2022 # Manually scale Azure HDInsight clusters
The impact of changing the number of data nodes varies for each type of cluster
For more information on using the HBase shell, see [Get started with an Apache HBase example in HDInsight](hbase/apache-hbase-tutorial-get-started-linux.md).
-* Apache Storm
-
- You can seamlessly add or remove data nodes while Storm is running. However, after a successful completion of the scaling operation, you'll need to rebalance the topology. Rebalancing allows the topology to readjust [parallelism settings](https://storm.apache.org/documentation/Understanding-the-parallelism-of-a-Storm-topology.html) based on the new number of nodes in the cluster. To rebalance running topologies, use one of the following options:
-
- * Storm web UI
-
- Use the following steps to rebalance a topology using the Storm UI.
-
- 1. Open `https://CLUSTERNAME.azurehdinsight.net/stormui` in your web browser, where `CLUSTERNAME` is the name of your Storm cluster. If prompted, enter the HDInsight cluster administrator (admin) name and password you specified when creating the cluster.
-
- 1. Select the topology you wish to rebalance, then select the **Rebalance** button. Enter the delay before the rebalance operation is done.
-
- :::image type="content" source="./media/hdinsight-scaling-best-practices/hdinsight-portal-scale-cluster-storm-rebalance.png" alt-text="HDInsight Storm scale rebalance":::
-
- * Command-line interface (CLI) tool
-
- Connect to the server and use the following command to rebalance a topology:
-
- ```bash
- storm rebalance TOPOLOGYNAME
- ```
-
- You can also specify parameters to override the parallelism hints originally provided by the topology. For example, the code below reconfigures the `mytopology` topology to 5 worker processes, 3 executors for the blue-spout component, and 10 executors for the yellow-bolt component.
-
- ```bash
- ## Reconfigure the topology "mytopology" to use 5 worker processes,
- ## the spout "blue-spout" to use 3 executors, and
- ## the bolt "yellow-bolt" to use 10 executors
- $ storm rebalance mytopology -n 5 -e blue-spout=3 -e yellow-bolt=10
- ```
- * Kafka You should rebalance partition replicas after scaling operations. For more information, see the [High availability of data with Apache Kafka on HDInsight](./kafk) document.
hdinsight Hdinsight Virtual Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-virtual-network-architecture.md
Title: Azure HDInsight virtual network architecture
description: Learn the resources available when you create an HDInsight cluster in an Azure Virtual Network. Previously updated : 04/01/2022 Last updated : 11/17/2022 # Azure HDInsight virtual network architecture
Azure HDInsight clusters have different types of virtual machines, or nodes. Eac
| Type | Description | | | |
-| Head node | For all cluster types except Apache Storm, the head nodes host the processes that manage execution of the distributed application. The head node is also the node that you can SSH into and execute applications that are then coordinated to run across the cluster resources. The number of head nodes is fixed at two for all cluster types. |
| ZooKeeper node | Zookeeper coordinates tasks between the nodes that are doing data processing. It also does leader election of the head node, and keeps track of which head node is running a specific master service. The number of ZooKeeper nodes is fixed at three. | | Worker node | Represents the nodes that support data processing functionality. Worker nodes can be added or removed from the cluster to scale computing capability and manage costs. | | Region node | For the HBase cluster type, the region node (also referred to as a Data Node) runs the Region Server. Region Servers serve and manage a portion of the data managed by HBase. Region nodes can be added or removed from the cluster to scale computing capability and manage costs.|
-| Nimbus node | For the Storm cluster type, the Nimbus node provides functionality similar to the Head node. The Nimbus node assigns tasks to other nodes in a cluster through Zookeeper, which coordinates the running of Storm topologies. |
-| Supervisor node | For the Storm cluster type, the supervisor node executes the instructions provided by the Nimbus node to do the processing. |
## Resource naming conventions
hdinsight Apache Kafka Azure Container Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-azure-container-services.md
description: Learn how to use Kafka on HDInsight from container images hosted in
Previously updated : 08/23/2022 Last updated : 11/17/2022 # Use Azure Kubernetes Service with Apache Kafka on HDInsight
Use the following links to learn how to use Apache Kafka on HDInsight:
* [Use MirrorMaker to create a replica of Apache Kafka on HDInsight](apache-kafka-mirroring.md)
-* [Use Apache Storm with Apache Kafka on HDInsight](../hdinsight-apache-storm-with-kafka.md)
* [Use Apache Spark with Apache Kafka on HDInsight](../hdinsight-apache-spark-with-kafka.md)
hdinsight Apache Kafka Connector Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-connector-iot-hub.md
description: Learn how to use Apache Kafka on HDInsight with Azure IoT Hub. The
Previously updated : 09/15/2022 Last updated : 11/17/2022 # Use Apache Kafka on HDInsight with Azure IoT Hub
For more information on using the sink connector, see [https://github.com/Azure/
In this document, you learned how to use the Apache Kafka Connect API to start the IoT Kafka Connector on HDInsight. Use the following links to discover other ways to work with Kafka: * [Use Apache Spark with Apache Kafka on HDInsight](../hdinsight-apache-spark-with-kafka.md)
-* [Use Apache Storm with Apache Kafka on HDInsight](../hdinsight-apache-storm-with-kafka.md)
hdinsight Apache Kafka Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-introduction.md
description: 'Learn about Apache Kafka on HDInsight: What it is, what it does, a
Previously updated : 03/30/2022 Last updated : 10/17/2022 #Customer intent: As a developer, I want to understand how Kafka on HDInsight is different from Kafka on other platforms.
The following are common tasks and patterns that can be performed using Kafka on
||| |Replication of Apache Kafka data|Kafka provides the MirrorMaker utility, which replicates data between Kafka clusters. For information on using MirrorMaker, see [Replicate Apache Kafka topics with Apache Kafka on HDInsight](apache-kafka-mirroring.md).| |Publish-subscribe messaging pattern|Kafka provides a Producer API for publishing records to a Kafka topic. The Consumer API is used when subscribing to a topic. For more information, see [Start with Apache Kafka on HDInsight](apache-kafka-get-started.md).|
-|Stream processing|Kafka is often used with Apache Storm or Spark for real-time stream processing. Kafka 0.10.0.0 (HDInsight version 3.5 and 3.6) introduced a streaming API that allows you to build streaming solutions without requiring Storm or Spark. For more information, see [Start with Apache Kafka on HDInsight](apache-kafka-get-started.md).|
+|Stream processing|Kafka is often used with Spark for real-time stream processing. Kafka 0.10.0.0 (HDInsight version 3.5 and 3.6) introduced a streaming API that allows you to build streaming solutions without requiring Spark. For more information, see [Start with Apache Kafka on HDInsight](apache-kafka-get-started.md).|
|Horizontal scale|Kafka partitions streams across the nodes in the HDInsight cluster. Consumer processes can be associated with individual partitions to provide load balancing when consuming records. For more information, see [Start with Apache Kafka on HDInsight](apache-kafka-get-started.md).| |In-order delivery|Within each partition, records are stored in the stream in the order that they were received. By associating one consumer process per partition, you can guarantee that records are processed in-order. For more information, see [Start with Apache Kafka on HDInsight](apache-kafka-get-started.md).| |Messaging|Since it supports the publish-subscribe message pattern, Kafka is often used as a message broker.|
Use the following links to learn how to use Apache Kafka on HDInsight:
* [Tutorial: Use Apache Spark with Apache Kafka on HDInsight](../hdinsight-apache-spark-with-kafka.md)
-* [Tutorial: Use Apache Storm with Apache Kafka on HDInsight](../hdinsight-apache-storm-with-kafka.md)
hdinsight Apache Kafka Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-scalability.md
To control the number of disks used by the worker nodes in a Kafka cluster, use
For more information on working with Apache Kafka on HDInsight, see the following documents: * [Use MirrorMaker to create a replica of Apache Kafka on HDInsight](apache-kafka-mirroring.md)
-* [Use Apache Storm with Apache Kafka on HDInsight](../hdinsight-apache-storm-with-kafka.md)
* [Use Apache Spark with Apache Kafka on HDInsight](../hdinsight-apache-spark-with-kafka.md) * [Connect to Apache Kafka through an Azure Virtual Network](apache-kafka-connect-vpn-gateway.md)
hdinsight Apache Kafka Streams Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-streams-api.md
description: Tutorial - Learn how to use the Apache Kafka Streams API with Kafka
Previously updated : 08/23/2022 Last updated : 11/17/2022 #Customer intent: As a developer, I need to create an application that uses the Kafka streams API with Kafka on HDInsight
Learn how to create an application that uses the Apache Kafka Streams API and ru
The application used in this tutorial is a streaming word count. It reads text data from a Kafka topic, extracts individual words, and then stores the word and count into another Kafka topic.
-Kafka stream processing is often done using Apache Spark or Apache Storm. Kafka version 1.1.0 (in HDInsight 3.5 and 3.6) introduced the Kafka Streams API. This API allows you to transform data streams between input and output topics. In some cases, this may be an alternative to creating a Spark or Storm streaming solution.
+Kafka stream processing is often done using Apache Spark. Kafka version 1.1.0 (in HDInsight 3.5 and 3.6) introduced the Kafka Streams API. This API allows you to transform data streams between input and output topics.
For more information on Kafka Streams, see the [Intro to Streams](https://kafka.apache.org/10/documentation/streams/) documentation on Apache.org.
hdinsight Log Analytics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/log-analytics-migration.md
Previously updated : 09/02/2022 Last updated : 11/17/2022 # Log Analytics migration guide for Azure HDInsight clusters
The following charts show the table mappings from the classic Azure Monitoring I
| HDInsightHBaseMetrics | <ul><li>**Description**: This table contains JMX metrics from HBase. It contains all the same JMX metrics from the tables listed in the Old Schema column. In contrast from the old tables, each row contains one metric.</li><li>**Old table**: metrics\_regionserver\_CL, metrics\_regionserver\_wal\_CL, metrics\_regionserver\_ipc\_CL, metrics\_regionserver\_os\_CL, metrics\_regionserver\_replication\_CL, metrics\_restserver\_CL, metrics\_restserver\_jvm\_CL, metrics\_hmaster\_assignmentmanager\_CL, metrics\_hmaster\_ipc\_CL, metrics\_hmaser\_os\_CL, metrics\_hmaster\_balancer\_CL, metrics\_hmaster\_jvm\_CL, metrics\_hmaster\_CL,metrics\_hmaster\_fs\_CL</li></ul>| | HDInsightHBaseLogs | <ul><li>**Description**: This table contains logs from HBase and its related components: Phoenix and HDFS.</li><li>**Old table**: log\_regionserver\_CL, log\_restserver\_CL, log\_phoenixserver\_CL, log\_hmaster\_CL, log\_hdfsnamenode\_CL, log\_garbage\_collector\_CL</li></ul>|
-## Storm workload
-
-| New Table | Details |
-| | |
-| HDInsightStormMetrics | <ul><li>**Description**: This table contains the same JMX metrics as the tables in the Old Tables section. Its rows contain one metric per record.</li><li>**Old table**: metrics\_stormnimbus\_CL, metrics\_stormsupervisor\_CL</li></ul>|
-| HDInsightStormTopologyMetrics | <ul><li>**Description**: This table contains topology level metrics from Storm. It's the same shape as the table listed in Old Tables section.</li><li>**Old table**: metrics\_stormrest\_CL</li></ul>|
-| HDInsightStormLogs | <ul><li>**Description**: This table contains all logs generated from Storm.</li><li>**Old table**: log\_supervisor\_CL, log\_nimbus\_CL</li></ul>|
## Oozie workload
hdinsight Manage Clusters Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/manage-clusters-runbooks.md
description: Learn how to create and delete Azure HDInsight clusters with script
Previously updated : 12/27/2019 Last updated : 11/17/2022 # Tutorial: Create Azure HDInsight clusters with Azure Automation
If you don't have an Azure subscription, create a [free account](https://azure
#Automation credential for user to SSH into cluster $sshCreds = Get-AutomationPSCredential -Name 'ssh-password'
- $clusterType = "Hadoop" #Use any supported cluster type (Hadoop, HBase, Storm, etc.)
+ $clusterType = "Hadoop" #Use any supported cluster type (Hadoop, HBase, etc.)
$clusterOS = "Linux" $clusterWorkerNodes = 3 $clusterNodeSize = "Standard_D3_v2"
When no longer needed, delete the Azure Automation Account that was created to a
## Next steps > [!div class="nextstepaction"]
-> [Manage Apache Hadoop clusters in HDInsight by using Azure PowerShell](hdinsight-administer-use-powershell.md)
+> [Manage Apache Hadoop clusters in HDInsight by using Azure PowerShell](hdinsight-administer-use-powershell.md)
hdinsight Apache Spark Improve Performance Iocache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-improve-performance-iocache.md
Title: Apache Spark performance - Azure HDInsight IO Cache (Preview)
description: Learn about Azure HDInsight IO Cache and how to use it to improve Apache Spark performance. Previously updated : 11/09/2022 Last updated : 11/16/2022 # Improve performance of Apache Spark workloads using Azure HDInsight IO Cache > [!NOTE]
-> * IO Cache is only available for Spark 2.4(HDInsight 4.0).
-> * Spark 3.1.2 (HDInsight 5.0) doesn't support IO Cache.
+> * IO Cache was supported up to Spark 2.3 and isn't supported in Spark 2.4 (HDInsight 4.0) or Spark 3.1.2 (HDInsight 5.0).
IO Cache is a data caching service for Azure HDInsight that improves the performance of Apache Spark jobs. IO Cache also works with [Apache TEZ](https://tez.apache.org/) and [Apache Hive](https://hive.apache.org/) workloads, which can be run on [Apache Spark](https://spark.apache.org/) clusters. IO Cache uses an open-source caching component called RubiX. RubiX is a local disk cache for use with big data analytics engines that access data from cloud storage systems. RubiX is unique among caching systems because it uses Solid-State Drives (SSDs) rather than reserving operating memory for caching purposes. The IO Cache service launches and manages RubiX Metadata Servers on each worker node of the cluster. It also configures all services of the cluster for transparent use of the RubiX cache.
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md
Substitutable Medical Applications and Reusable Technologies [SMART on FHIR](htt
- Users accessing a FHIR repository with SMART on FHIR are restricted to resources associated with the user, rather than having access to all data in the repository - Users have the ability to grant applications access to a further limited set of their data by using SMART clinical scopes.
-<!SMART Implementation Guide v1.0.0 is supported by Azure API for FHIR and Azure API Management (APIM). This is our recommended approach, as it enabled Health IT developers to comply with 21st Century Act Criterion §170.315(g)(10) Standardized API for patient and population services.
+<!SMART Implementation Guide v1.0.0 is supported by Azure Health Data Services and Azure API Management (APIM). This is our recommended approach, as it enables Health IT developers to comply with the 21st Century Cures Act criterion §170.315(g)(10) Standardized API for patient and population services.
The sample demonstrates and lists the steps that can be referenced to pass ONC (g)(10) with the Inferno test suite. >
-One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for an FHIR server and start an authentication sequence. SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD), the Azure API for FHIR has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
+One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for a FHIR server and start an authentication sequence. Because SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD), Azure Health Data Services (FHIR service) has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
The following tutorial describes the steps to enable SMART on FHIR applications with the FHIR service.
Below tutorial describes steps to enable SMART on FHIR applications with FHIR Se
- An instance of the FHIR Service - .NET SDK 6.0 - [Enable cross-origin resource sharing (CORS)](configure-cross-origin-resource-sharing.md)-- [Register public client application in Azure AD](/register-public-azure-ad-client-app.md)
+- [Register a public client application in Azure AD](https://learn.microsoft.com/azure/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app)
- After registering the application, make a note of the applicationId for the client application. <! Tutorial: To enable SMART on FHIR using APIM, follow the steps below
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-data-through-iot-hub.md
# Tutorial: Receive device data through Azure IoT Hub
-The MedTech service may be used with devices created and managed through an [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md) for enhanced workflows and ease of use. This tutorial uses an Azure Resource Manager (ARM) template and a **Deploy to Azure** button to deploy a MedTech service using an Azure IoT Hub for device creation, management, and routing of device messages to the MedTech service device message event hub. The ARM template used in this article is available from the [Azure Quickstart Templates](/azure/azure-quickstart-templates/iotconnectors-with-iothub/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub).
+The MedTech service may be used with devices created and managed through an [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md) for enhanced workflows and ease of use. This tutorial uses an Azure Resource Manager (ARM) template and a **Deploy to Azure** button to deploy a MedTech service using an Azure IoT Hub for device creation, management, and routing of device messages to the MedTech service device message event hub. The ARM template used in this article is available from the [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub).
> [!TIP] > For more information about using Azure PowerShell and CLI to deploy MedTech service ARM templates, see [Using Azure PowerShell and Azure CLI to deploy the MedTech service with Azure Resource Manager templates](deploy-08-new-ps-cli.md).
healthcare-apis How To Use Monitoring Tab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-tab.md
In this article, you'll learn how to use the [MedTech service](iot-connector-ove
:::image type="content" source="media\iot-monitoring-tab\pin-metrics-to-dashboard.png" alt-text="Screenshot the MedTech service monitoring tile with red box around the pin icon." lightbox="media\iot-monitoring-tab\pin-metrics-to-dashboard.png"::: > [!TIP]
- > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure/azure/azure-monitor/essentials/metrics-getting-started)
+ > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started)
## Available metrics for the MedTech service
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md
flowchart TB
id1-- Issued by -- -> id2 -->
-The root certificate authority (CA) is the [Baltimore CyberTrust Root](https://baltimore-cybertrust-root.chain-demos.digicert.com/info/https://docsupdatetracker.net/index.html) certificate. This root certificate is signed by DigiCert, and is widely trusted and stored in many operating systems. For example, both Ubuntu and Windows include it in the default certificate store.
+The root certificate authority (CA) is the [Baltimore CyberTrust Root](https://www.digicert.com/kb/digicert-root-certificates.htm) certificate. This root certificate is signed by DigiCert, and is widely trusted and stored in many operating systems. For example, both Ubuntu and Windows include it in the default certificate store.
Windows certificate store:
iot-edge Iot Edge Limits And Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-limits-and-restrictions.md
Changes made in `config.toml` to `edgeAgent` environment variables like the `hos
When using Node.js to send device-to-cloud messages with the AMQP protocol to an IoT Edge runtime, messages stop sending after 2047 messages. No error is thrown, the messages eventually start sending again, and then the cycle repeats. If the client connects directly to Azure IoT Hub, there's no issue with sending messages. This issue has been fixed in IoT Edge 1.2 and later.
+### NTLM Authentication
+
+IoT Edge does not currently support network proxies that use NTLM authentication. Users may consider bypassing the proxy by adding the required endpoints to the firewall allow-list.
+ :::moniker-end <!-- end 1.1 -->
lab-services Class Type Adobe Creative Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-adobe-creative-cloud.md
description: Learn how to set up a lab for digital arts and media classes that u
Last updated 04/21/2021-+ # Set up a lab for Adobe Creative Cloud
lab-services Class Type Arcgis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-arcgis.md
description: Learn how to set up a lab for classes using ArcGIS.
Last updated 02/28/2022- + # Set up a lab for ArcMap\ArcGIS Desktop
lab-services Class Type Autodesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-autodesk.md
description: Learn how to set up labs to teach engineering classes with Autodesk
Last updated 02/02/2022-+ # Set up labs for Autodesk
lab-services Class Type Big Data Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-big-data-analytics.md
Last updated 03/08/2022 -+ # Set up a lab for big data analytics using Docker deployment of HortonWorks Data Platform
lab-services Class Type Database Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-database-management.md
description: Learn how to set up a lab to teach the management of relational dat
Last updated 02/22/2022-+
lab-services Class Type Jupyter Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-jupyter-notebook.md
description: Learn how to set up a lab to teach data science using Python and Ju
Last updated 01/04/2022-+ # Set up a lab to teach data science with Python and Jupyter Notebooks
lab-services Class Type Matlab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-matlab.md
Last updated 04/06/2022 -+ # Setup a lab to teach MATLAB
lab-services Class Type React Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-react-linux.md
Last updated 04/25/2022 -+ # Set up lab for React on Linux
lab-services Class Type React Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-react-windows.md
description: Learn how to set up labs to teach front-end development with React.
Last updated 05/16/2021-+ # Set up lab for React on Windows
lab-services Class Type Rstudio Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-linux.md
description: Learn how to set up labs to teach R using RStudio on Linux
Last updated 08/25/2021-+ # Set up a lab to teach R on Linux
lab-services Class Type Rstudio Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-windows.md
description: Learn how to set up labs to teach R using RStudio on Windows
Last updated 08/26/2021-+ # Set up a lab to teach R on Windows
lab-services Class Type Solidworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-solidworks.md
description: Learn how to set up a lab for engineering courses using SOLIDWORKS.
Last updated 01/05/2022-+ # Set up a lab for engineering classes using SOLIDWORKS
lab-services Class Type Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-sql-server.md
description: Learn how to set up a lab to manage and develop with Azure SQL Data
Last updated 06/26/2020-+ # Set up a lab to manage and develop with SQL Server
lab-services Classroom Labs Fundamentals 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals-1.md
Last updated 05/30/2022 - # Architecture Fundamentals in Azure Lab Services when using lab accounts
lab-services Classroom Labs Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals.md
description: This article will cover the fundamental resources used by Lab Servi
Last updated 05/30/2022-
lab-services Connect Virtual Machine Chromebook Remote Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/connect-virtual-machine-chromebook-remote-desktop.md
Last updated 01/27/2022-+ # Connect to a VM using Remote Desktop Protocol on a Chromebook
lab-services Cost Management Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/cost-management-guide.md
Title: Cost management guide for Azure Lab Services description: Understand the different ways to view costs for Lab Services.--++ Last updated 07/04/2022
lab-services How To Attach External Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-external-storage.md
description: Learn how to set up a lab that uses external file storage in Lab Se
Last updated 03/30/2021-+ # Use external file storage in Lab Services
lab-services How To Create A Lab With Shared Resource 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-a-lab-with-shared-resource-1.md
Last updated 03/03/2022 -+ # How to create a lab with a shared resource in Azure Lab Services when using lab accounts
lab-services How To Create A Lab With Shared Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-a-lab-with-shared-resource.md
Last updated 07/04/2022 -+ # How to create a lab with a shared resource in Azure Lab Services
lab-services How To Enable Nested Virtualization Template Vm Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm-ui.md
description: Learn how to create a template VM with multiple VMs inside. In oth
Last updated 01/27/2022-+ # Enable nested virtualization manually on a template VM in Azure Lab Services
lab-services How To Prepare Windows Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-prepare-windows-template.md
description: Generic steps to prepare a Windows template machine in Lab Services
Last updated 06/26/2020-+ # Guide to setting up a Windows template machine in Azure Lab Services
lab-services How To Setup Lab Gpu 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-setup-lab-gpu-1.md
description: Learn how to set up a lab with graphics processing unit (GPU) virtu
Last updated 06/26/2020-+ # Set up GPU virtual machines in labs contained within lab accounts
lab-services How To Setup Lab Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-setup-lab-gpu.md
description: Learn how to set up a lab with graphics processing unit (GPU) virtu
Last updated 06/09/2022-+ # Set up a lab with GPU virtual machines
lab-services Quick Create Lab Plan Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-python.md
In this article you, as the admin, use Python and the Azure Python SDK to create
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free). - [Setup Local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment).-- [The requirements.txt can be downloaded from the Azure Python samples](https://github.com/RogerBestMsft/azure-samples-python-management/blob/rbest_ALSSample/samples/labservices/requirements.txt)
+- [The requirements.txt can be downloaded from the Azure Python samples](https://github.com/Azure-Samples/azure-samples-python-management/blob/main/samples/labservices/requirements.txt)
## Create a lab plan
lab-services Quick Create Lab Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-python.md
In this quickstart, you, as the educator, create a lab using Python and the Azur
- Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin. - [Setup Local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment).-- [The requirements.txt can be downloaded from the Azure Python samples](https://github.com/RogerBestMsft/azure-samples-python-management/blob/rbest_ALSSample/samples/labservices/requirements.txt)
+- [The requirements.txt can be downloaded from the Azure Python samples](https://github.com/Azure-Samples/azure-samples-python-management/blob/main/samples/labservices/requirements.txt)
- Lab plan. To create a lab plan, see [Quickstart: Create a lab plan using Python and the Azure Python libraries (SDK)](quick-create-lab-plan-python.md). ## Create a lab
machine-learning Azure Machine Learning Release Notes Cli V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes-cli-v2.md
In this article, learn about Azure Machine Learning CLI (v2) releases.
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes-v2%22&locale=en-us`
+## 2022-11-08
+
+### Azure Machine Learning CLI (v2) v2.11.0
+
+- The CLI now depends on azure-ai-ml 1.1.0.
+- `az ml registry`
+ - Added `ml registry delete` command.
+ - Adjusted registry experimental tags and imports to avoid warning printouts for unrelated operations.
+- `az ml environment`
+ - Prevented registering an already existing environment that references a conda file.
+ ## 2022-10-10 ### Azure Machine Learning CLI (v2) v2.10.0
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
For more information about the MLTable YAML schema, see [CLI (v2) mltable YAML s
- [Install and set up the CLI (v2)](how-to-configure-cli.md#install-and-set-up-the-cli-v2) - [Create datastores](how-to-datastore.md#create-datastores) - [Create data assets](how-to-create-data-assets.md#create-data-assets)-- [Read and write data in a job](how-to-read-write-data-v2.md#read-and-write-data-in-a-job)-- [Data administration](how-to-administrate-data-authentication.md#data-administration)
+- [Access data in a job](how-to-read-write-data-v2.md)
+- [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning How To Access Data Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md
+
+ Title: Access data from Azure cloud storage during interactive development
+
+description: Access data from Azure cloud storage during interactive development
++++++ Last updated : 11/17/2022+
+#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
++
+# Access data from Azure cloud storage during interactive development
++
+Typically, the beginning of a machine learning project involves exploratory data analysis (EDA), data preprocessing (cleaning, feature engineering), and building prototypes of ML models to validate hypotheses. This *prototyping* phase of the project is highly interactive and lends itself to developing in a Jupyter notebook or an IDE with a *Python interactive console*. In this article you'll learn how to:
+
+> [!div class="checklist"]
+> * Access data from an Azure ML datastore URI as if it were a file system.
+> * Materialize data into Pandas using the `mltable` Python library.
+> * Materialize Azure ML data assets into Pandas using the `mltable` Python library.
+> * Materialize data through an explicit download with the `azcopy` utility.
+
+## Prerequisites
+
+* An Azure Machine Learning workspace. For more information, see [Manage Azure Machine Learning workspaces in the portal or with the Python SDK (v2)](how-to-manage-workspace.md).
+* An Azure Machine Learning Datastore. For more information, see [Create datastores](how-to-datastore.md).
+
+> [!TIP]
+> The guidance in this article to access data during interactive development applies to any host that can run a Python session - for example: your local machine, a cloud VM, a GitHub Codespace, etc. We recommend using an Azure Machine Learning compute instance - a fully managed and pre-configured cloud workstation. For more information, see [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md).
+
+> [!IMPORTANT]
+> Ensure you have the latest `azureml-fsspec` and `mltable` Python libraries installed in your Python environment:
+>
+> ```bash
+> pip install -U azureml-fsspec mltable
+> ```
+
+## Access data from a datastore URI, like a filesystem (preview)
++
+An Azure ML datastore is a *reference* to an *existing* storage account on Azure. The benefits of creating and using a datastore include:
+
+> [!div class="checklist"]
+> * A common and easy-to-use API to interact with different storage types (Blob/Files/ADLS).
+> * Easier to discover useful datastores when working as a team.
+> * Supports both credential-based (for example, SAS token) and identity-based (Azure Active Directory or managed identity) access to data.
+> * When using credential-based access, the connection information is secured so you don't expose keys in scripts.
+> * Browse data and copy-paste datastore URIs in the Studio UI.
+
+A *Datastore URI* is a Uniform Resource Identifier, which is a *reference* to a storage *location* (path) on your Azure storage account. The format of the datastore URI is:
+
+```python
+# AzureML workspace details:
+subscription = '<subscription_id>'
+resource_group = '<resource_group>'
+workspace = '<workspace>'
+datastore_name = '<datastore>'
+path_on_datastore = '<path>'
+
+# long-form Datastore uri format:
+uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{datastore_name}/paths/{path_on_datastore}'
+```
+
+These Datastore URIs are a known implementation of [Filesystem spec](https://filesystem-spec.readthedocs.io/latest/index.html#) (`fsspec`): a unified pythonic interface to local, remote and embedded file systems and bytes storage.
+
+The Azure ML Datastore implementation of `fsspec` automatically handles credential/identity passthrough used by the Azure ML datastore. This means you don't need to expose account keys in your scripts or do additional sign-in procedures on a compute instance.
+
+For example, you can directly use Datastore URIs in Pandas - below is an example of reading a CSV file:
+
+```python
+import pandas as pd
+
+df = pd.read_csv("azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/<filename>.csv")
+df.head()
+```
+
+> [!TIP]
+> Rather than remember the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI by following these steps:
+> 1. Select **Data** from the left-hand menu followed by the **Datastores** tab.
+> 1. Select your datastore name and then **Browse**.
+> 1. Find the file/folder you want to read into pandas, select the ellipsis (**...**) next to it, and then select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script.
+> :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI.":::
+
+You can also instantiate an Azure ML filesystem and do filesystem-like commands like `ls`, `glob`, `exists`, `open`, etc. The `open()` method will return a file-like object, which can be passed to any other library that expects to work with python files, or used by your own code as you would a normal python file object. These file-like objects respect the use of `with` contexts, for example:
+
+```python
+from azureml.fsspec import AzureMachineLearningFileSystem
+
+# instantiate file system using datastore URI
+fs = AzureMachineLearningFileSystem('azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>')
+
+# list files in the path
+fs.ls()
+# output example:
+# /datastore_name/folder/file1.csv
+# /datastore_name/folder/file2.csv
+
+# use an open context
+with fs.open('/datastore_name/folder/file1.csv') as f:
+ # do some process
+ process_file(f)
+```
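+
+A minimal sketch of the other filesystem-style helpers mentioned above (`glob` and `exists`), reusing the `fs` object and the example paths from the listing:
+
+```python
+# glob for CSV files under the folder using an fsspec-style wildcard pattern
+csv_paths = fs.glob('/datastore_name/folder/*.csv')
+
+# check whether a specific file exists before opening it
+if fs.exists('/datastore_name/folder/file1.csv'):
+    with fs.open('/datastore_name/folder/file1.csv') as f:
+        # print the first line of the file (opened in binary mode by default)
+        print(f.readline())
+```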
+
+### Examples
+
+In this section we provide some examples of how to use Filesystem spec, for some common scenarios.
+
+#### Read a single CSV file into pandas
+
+If you have a *single* CSV file, then as outlined above you can read that into pandas with:
+
+```python
+import pandas as pd
+
+df = pd.read_csv("azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/<filename>.csv")
+```
+
+#### Read a folder of CSV files into pandas
+
+The Pandas `read_csv()` method doesn't support reading a folder of CSV files. You need to glob csv paths and concatenate them to a data frame using Pandas `concat()` method. The code below demonstrates how to achieve this concatenation with the Azure ML filesystem:
+
+```python
+import pandas as pd
+from azureml.fsspec import AzureMachineLearningFileSystem
+
+# define the URI - update <> placeholders
+uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/*.csv'
+
+# create the filesystem
+fs = AzureMachineLearningFileSystem(uri)
+
+# append csv files in folder to a list
+dflist = []
+for path in fs.ls():
+ with fs.open(path) as f:
+ dflist.append(pd.read_csv(f))
+
+# concatenate data frames
+df = pd.concat(dflist)
+df.head()
+```
+
+#### Reading CSV files into Dask
+
+Below is an example of reading a CSV file into a Dask data frame:
+
+```python
+import dask.dataframe as dd
+
+df = dd.read_csv("azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/<filename>.csv")
+df.head()
+```
+
+#### Read a folder of parquet files into pandas
+Parquet files are typically written to a folder as part of an ETL process, which can emit files pertaining to the ETL such as progress, commits, etc. The following example shows the files created by an ETL process (files beginning with `_`) alongside the parquet data files it produces.
++
+In these scenarios, you'll only want to read the parquet files in the folder and ignore the ETL process files. The code below shows how you can use glob patterns to read only parquet files in a folder:
+
+```python
+import pandas as pd
+from azureml.fsspec import AzureMachineLearningFileSystem
+
+# define the URI - update <> placeholders
+uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/*.parquet'
+
+# create the filesystem
+fs = AzureMachineLearningFileSystem(uri)
+
+# append parquet files in folder to a list
+dflist = []
+for path in fs.ls():
+ with fs.open(path) as f:
+ dflist.append(pd.read_parquet(f))
+
+# concatenate data frames
+df = pd.concat(dflist)
+df.head()
+```
+
+#### Accessing data from your Azure Databricks filesystem (`dbfs`)
+
+Filesystem spec (`fsspec`) has a range of [known implementations](https://filesystem-spec.readthedocs.io/en/stable/_modules/index.html), one of which is the Databricks Filesystem (`dbfs`).
+
+To access data from `dbfs` you will need:
+
+- **Instance name**, which is in the form of `adb-<some-number>.<two digits>.azuredatabricks.net`. You can glean this from the URL of your Azure Databricks workspace.
+- **Personal Access Token (PAT)**, for more information on creating a PAT, please see [Authentication using Azure Databricks personal access tokens](/azure/databricks/dev-tools/api/latest/authentication)
+
+Once you have these, you will need to create an environment variable on your compute instance for the PAT token:
+
+```bash
+export ADB_PAT=<pat_token>
+```
+
+You can then access data in Pandas using:
+
+```python
+import os
+import pandas as pd
+
+pat = os.getenv('ADB_PAT')
+path_on_dbfs = '<absolute_path_on_dbfs>' # e.g. /folder/subfolder/file.csv
+
+storage_options = {
+ 'instance':'adb-<some-number>.<two digits>.azuredatabricks.net',
+ 'token': pat
+}
+
+df = pd.read_csv(f'dbfs://{path_on_dbfs}', storage_options=storage_options)
+```
+
+#### Reading images with `pillow`
+
+```python
+from PIL import Image
+from azureml.fsspec import AzureMachineLearningFileSystem
+
+# define the URI - update <> placeholders
+uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>'
+
+# create the filesystem
+fs = AzureMachineLearningFileSystem(uri)
+
+# open the image by its path, following the same path convention as the ls() output shown earlier
+with fs.open('/<datastore_name>/<folder>/<image.jpeg>') as f:
+ img = Image.open(f)
+ img.show()
+```
+
+#### PyTorch custom dataset example
+
+In this example, you create a PyTorch custom dataset for processing images. The assumption is that an annotations file (in CSV format) exists that looks like:
+
+```text
+image_path, label
+0/image0.png, label0
+0/image1.png, label0
+1/image2.png, label1
+1/image3.png, label1
+2/image4.png, label2
+2/image5.png, label2
+```
+
+The images are stored in subfolders according to their label:
+
+```text
+/
+└── 📁images
+ ├── 📁0
+ │ ├── 📷image0.png
+ │ └── 📷image1.png
+ ├── 📁1
+ │ ├── 📷image2.png
+ │ └── 📷image3.png
+ └── 📁2
+ ├── 📷image4.png
+ └── 📷image5.png
+```
+
+A custom Dataset class in PyTorch must implement three functions: `__init__`, `__len__`, and `__getitem__`, which are implemented below:
+
+```python
+import os
+import pandas as pd
+from PIL import Image
+from torch.utils.data import Dataset
+
+class CustomImageDataset(Dataset):
+ def __init__(self, filesystem, annotations_file, img_dir, transform=None, target_transform=None):
+ self.fs = filesystem
+ f = filesystem.open(annotations_file)
+ self.img_labels = pd.read_csv(f)
+ f.close()
+ self.img_dir = img_dir
+ self.transform = transform
+ self.target_transform = target_transform
+
+ def __len__(self):
+ return len(self.img_labels)
+
+ def __getitem__(self, idx):
+ img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
+ f = self.fs.open(img_path)
+ image = Image.open(f)
+ f.close()
+ label = self.img_labels.iloc[idx, 1]
+ if self.transform:
+ image = self.transform(image)
+ if self.target_transform:
+ label = self.target_transform(label)
+ return image, label
+```
+
+You can then instantiate the dataset using:
+
+```python
+from azureml.fsspec import AzureMachineLearningFileSystem
+from torch.utils.data import DataLoader
+
+# define the URI - update <> placeholders
+uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/'
+
+# create the filesystem
+fs = AzureMachineLearningFileSystem(uri)
+
+# create the dataset
+training_data = CustomImageDataset(
+ filesystem=fs,
+ annotations_file='<datastore_name>/<path>/annotations.csv',
+ img_dir='<datastore_name>/<path_to_images>/'
+)
+
+# Preparing your data for training with DataLoaders
+train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
+```
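+
+PyTorch's default DataLoader collation expects tensors rather than PIL images, so in practice you'd usually also pass a transform when constructing the dataset. A minimal sketch, assuming `torchvision` is installed and reusing the `fs` object and placeholder paths from above:
+
+```python
+from torchvision.transforms import ToTensor
+from torch.utils.data import DataLoader
+
+training_data = CustomImageDataset(
+    filesystem=fs,
+    annotations_file='<datastore_name>/<path>/annotations.csv',
+    img_dir='<datastore_name>/<path_to_images>/',
+    transform=ToTensor(),  # convert each PIL image to a tensor so DataLoader can batch it
+)
+
+train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
+```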
+
+## Materialize data into Pandas using `mltable` library (preview)
++
+Another method for accessing data in cloud storage is to use the `mltable` library. The general format for reading data into pandas using `mltable` is:
+
+```python
+import mltable
+
+# define a path or folder or pattern
+path = {
+ 'file': '<supported_path>'
+ # alternatives
+ # 'folder': '<supported_path>'
+ # 'pattern': '<supported_path>'
+}
+
+# create an mltable from paths
+tbl = mltable.from_delimited_files(paths=[path])
+# alternatives
+# tbl = mltable.from_parquet_files(paths=[path])
+# tbl = mltable.from_json_lines_files(paths=[path])
+# tbl = mltable.from_delta_lake(paths=[path])
+
+# materialize to pandas
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+### Supported paths
+
+You'll notice the `mltable` library supports reading tabular data from different path types:
+
+|Location | Examples |
+|||
+|A path on your local computer | `./home/username/data/my_data` |
+|A path on a public http(s) server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
+|A path on Azure Storage | `wasbs://<container_name>@<account_name>.blob.core.windows.net/<path>` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>` |
+|A long-form Azure ML datastore | `azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/<path>` |
+
+> [!NOTE]
+> `mltable` does user credential passthrough for paths on Azure Storage and Azure ML datastores. If you do not have permission to the data on the underlying storage then you will not be able to access the data.
+
+### Files, folders and globs
+
+`mltable` supports reading from:
+
+- file(s), for example: `abfss://<file_system>@<account_name>.dfs.core.windows.net/my-csv.csv`
+- folder(s), for example `abfss://<file_system>@<account_name>.dfs.core.windows.net/my-folder/`
+- [glob](https://wikipedia.org/wiki/Glob_(programming)) pattern(s), for example `abfss://<file_system>@<account_name>.dfs.core.windows.net/my-folder/*.csv`
+- Or, a combination of files, folders, globbing patterns
+
+The flexibility of `mltable` allows you to materialize data into a single dataframe from a combination of local/cloud storage and combinations of files/folder/globs. For example:
+
+```python
+import mltable
+
+path1 = {
+ 'file': 'abfss://filesystem@account1.dfs.core.windows.net/my-csv.csv'
+}
+
+path2 = {
+ 'folder': './home/username/data/my_data'
+}
+
+path3 = {
+ 'pattern': 'abfss://filesystem@account2.dfs.core.windows.net/folder/*.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path1, path2, path3])
+```
+
+### Supported file formats
+`mltable` supports the following file formats:
+
+- Delimited Text (for example: CSV files): `mltable.from_delimited_files(paths=[path])`
+- Parquet: `mltable.from_parquet_files(paths=[path])`
+- Delta: `mltable.from_delta_lake(paths=[path])`
+- JSON lines format: `mltable.from_json_lines_files(paths=[path])`
+
+### Examples
+
+#### Read a CSV file
+
+##### [ADLS gen2](#tab/adls)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+```python
+import mltable
+
+path = {
+ 'file': 'abfss://<filesystem>@<account>.dfs.core.windows.net/<folder>/<file_name>.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+##### [Blob storage](#tab/blob)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+```python
+import mltable
+
+path = {
+ 'file': 'wasbs://<container>@<account>.blob.core.windows.net/<folder>/<file_name>.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+##### [Azure ML Datastore](#tab/datastore)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+```python
+import mltable
+
+path = {
+ 'file': 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/<folder>/<file>.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+> [!TIP]
+> Rather than remember the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI by following these steps:
+> 1. Select **Data** from the left-hand menu followed by the **Datastores** tab.
+> 1. Select your datastore name and then **Browse**.
+> 1. Find the file/folder you want to read into pandas, select the ellipsis (**...**) next to it, and then select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script.
+> :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI.":::
+
+##### [HTTP Server](#tab/http)
+```python
+import mltable
+
+path = {
+ 'file': 'https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+++
+#### Read parquet files in a folder
+The example code below shows how `mltable` can use [glob](https://wikipedia.org/wiki/Glob_(programming)) patterns - such as wildcards - to ensure only the parquet files are read.
+
+##### [ADLS gen2](#tab/adls)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+```python
+import mltable
+
+path = {
+ 'pattern': 'abfss://<filesystem>@<account>.dfs.core.windows.net/<folder>/*.parquet'
+}
+
+tbl = mltable.from_parquet_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+##### [Blob storage](#tab/blob)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+```python
+import mltable
+
+path = {
+ 'pattern': 'wasbs://<container>@<account>.blob.core.windows.net/<folder>/*.parquet'
+}
+
+tbl = mltable.from_parquet_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+##### [Azure ML Datastore](#tab/datastore)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+```python
+import mltable
+
+path = {
+ 'pattern': 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/<folder>/*.parquet'
+}
+
+tbl = mltable.from_parquet_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+> [!TIP]
+> Rather than remember the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI by following these steps:
+> 1. Select **Data** from the left-hand menu followed by the **Datastores** tab.
+> 1. Select your datastore name and then **Browse**.
+> 1. Find the file/folder you want to read into pandas, select the ellipsis (**...**) next to it, and then select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script.
+> :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI.":::
+
+##### [HTTP Server](#tab/http)
+
+Update the placeholders (`<>`) in the code snippet with your details.
+
+> [!IMPORTANT]
+> To glob the pattern on a public HTTP server, there must be access at the **folder** level.
+
+```python
+import mltable
+
+path = {
+ 'pattern': '<https_address>/<folder>/*.parquet'
+}
+
+tbl = mltable.from_parquet_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+++
+### Reading data assets
+In this section, you'll learn how to read your Azure ML data assets into pandas.
+
+#### Table asset
+
+If you've previously created a Table asset in Azure ML (an `mltable`, or a V1 `TabularDataset`), you can load that into pandas using:
+
+```python
+import mltable
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+data_asset = ml_client.data.get(name="<name_of_asset>", version="<version>")
+
+tbl = mltable.load(f'azureml:/{data_asset.id}')
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+#### File asset
+
+If you've registered a File asset that you want to read into a Pandas data frame - for example, a CSV file - you can achieve this using:
+
+```python
+import mltable
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+data_asset = ml_client.data.get(name="<name_of_asset>", version="<version>")
+
+path = {
+ 'file': data_asset.path
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+#### Folder asset
+
+If you've registered a Folder asset (`uri_folder` or a V1 `FileDataset`) that you want to read into a Pandas data frame - for example, a folder containing CSV files - you can achieve this using:
+
+```python
+import mltable
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+data_asset = ml_client.data.get(name="<name_of_asset>", version="<version>")
+
+path = {
+ 'folder': data_asset.path
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+## A note on reading and processing large data volumes with Pandas
+> [!TIP]
+> Pandas is not designed to handle large datasets - you will only be able to process data that can fit into the memory of the compute instance.
+>
+> For large datasets we recommend that you use AzureML managed Spark, which provides the [PySpark Pandas API](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html).
+
+You may wish to iterate quickly on a smaller subset of a large dataset before scaling up to a remote asynchronous job. `mltable` provides in-built functionality to get samples of large data using the [take_random_sample](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-take-random-sample) method:
+
+```python
+import mltable
+
+path = {
+ 'file': 'https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+# take a random 30% sample of the data
+tbl = tbl.take_random_sample(probability=.3)
+df = tbl.to_pandas_dataframe()
+df.head()
+```
+
+You can also take subsets of large data by using the following methods (a short sketch follows the list):
+
+- [filter](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-filter)
+- [keep_columns](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-keep-columns)
+- [drop_columns](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-drop-columns)
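+
+A minimal sketch of `keep_columns` and `drop_columns`, using the Titanic CSV from the earlier example (the column names are assumptions based on that public dataset; see the linked reference for `filter` expressions):
+
+```python
+import mltable
+
+path = {
+    'file': 'https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv'
+}
+
+tbl = mltable.from_delimited_files(paths=[path])
+
+# keep only the columns needed for a quick look at the data
+tbl = tbl.keep_columns(['Survived', 'Pclass', 'Sex', 'Age'])
+
+# drop a column that is no longer needed
+tbl = tbl.drop_columns(['Pclass'])
+
+df = tbl.to_pandas_dataframe()
+df.head()
+```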
++
+## Downloading data using the `azcopy` utility
+
+You may want to download the data to the local SSD of your host (local machine, cloud VM, Azure ML Compute Instance) and use the local filesystem. You can do this with the `azcopy` utility, which is pre-installed on an Azure ML compute instance. If you are **not** using an Azure ML compute instance or a Data Science Virtual Machine (DSVM), you may need to install `azcopy`. For more information please read [azcopy](../storage/common/storage-ref-azcopy.md).
+
+> [!CAUTION]
+> We do not recommend downloading data in the `/home/azureuser/cloudfiles/code` location on a compute instance. This is designed to store notebook and code artifacts, **not** data. Reading data from this location will incur significant performance overhead when training. Instead we recommend storing your data in `/home/azureuser`, which is the local SSD of the compute node.
+
+Open a terminal and create a new directory, for example:
+
+```bash
+mkdir /home/azureuser/data
+```
+
+Sign-in to azcopy using:
+
+```bash
+azcopy login
+```
+
+Next, you can copy data using a storage URI
+
+```bash
+SOURCE=https://<account_name>.blob.core.windows.net/<container>/<path>
+DEST=/home/azureuser/data
+azcopy cp $SOURCE $DEST
+```
machine-learning How To Access Terminal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-terminal.md
Access all Git operations from the terminal. All Git files and folders will be s
> [!NOTE] > Add your files and folders anywhere under the **~/cloudfiles/code/Users** folder so they will be visible in all your Jupyter environments.
+To integrate Git with your Azure Machine Learning workspace, see [Git integration for Azure Machine Learning](concept-train-model-git-integration.md).
++ ## Install packages Install packages from a terminal window. Install Python packages into the **Python 3.8 - AzureML** environment. Install R packages into the **R** environment.
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
Last updated 09/22/2022
> * [v1](./v1/how-to-create-register-datasets.md) > * [v2 (current version)](how-to-create-data-assets.md)
-In this article, you learn how to create a data asset in Azure Machine Learning. By creating a data asset, you create a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. You can create Data from AzureML datastores, Azure Storage, public URLs, and local files.
+In this article, you learn how to create a data asset in Azure Machine Learning. By creating a data asset, you create a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. You can create data assets from Azure ML datastores, Azure Storage, public URLs, and local files.
> [!IMPORTANT]
-> If you didn't creat/register the data source as a data asset, you can still [consume the data via specifying the data path in a job](how-to-read-write-data-v2.md#read-data-in-a-job) without benefits below.
-
-The benefits of creating data assets are:
-
-* You can **share and reuse data** with other members of the team such that they don't need to remember file locations.
-
-* You can **seamlessly access data** during model training (on any supported compute type) without worrying about connection strings or data paths.
-
-* You can **version** the data.
--
+> If you just want to access your data in an interactive session (for example, a Notebook) or a job, you are **not** required to create a data asset first. Creating a data asset would be an unnecessary step for you.
+>
+> For more information about accessing your data in a notebook, please see [Access data from Azure cloud storage for interactive development](how-to-access-data-interactive.md).
+>
+> For more information about accessing your data - both local and cloud storage - in a job, please see [Access data in a job](how-to-read-write-data-v2.md).
+>
+> Creating data assets is useful when you want to:
+> - **Share and reuse** data with other members of your team so that they don't need to remember file locations in cloud storage.
+> - **Version** the metadata such as location, description, and tags.
+
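For context, here is a minimal sketch of creating a file data asset with the Python SDK v2. It assumes the `azure-ai-ml` package and an already-authenticated `MLClient` named `ml_client`; the asset name, description, and path are placeholders.

```python
# A minimal sketch, assuming azure-ai-ml (SDK v2) and an authenticated MLClient
# named ml_client. The asset name, description, and path are placeholders.
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

my_data = Data(
    name="titanic-csv",  # placeholder asset name
    version="1",
    description="Titanic passenger data (example)",
    path="https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv",
    type=AssetTypes.URI_FILE,
)

ml_client.data.create_or_update(my_data)
```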
## Prerequisites
To create a Folder data asset in the Azure Machine Learning studio, use the foll
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
-1. Under __Assets__ in the left navigation, select __Data__. On the Data assets tab, select Create
+1. Under **Assets** in the left navigation, select **Data**. On the Data assets tab, select Create
:::image type="content" source="./media/how-to-create-data-assets/data-assets-create.png" alt-text="Screenshot highlights Create in the Data assets tab."::: 1. Give your data asset a name and optional description. Then, select the "Folder (uri_folder)" option under Type, if it is not already selected.
To create a File data asset in the Azure Machine Learning studio, use the follow
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
-1. Under __Assets__ in the left navigation, select __Data__. On the Data assets tab, select Create
+1. Under **Assets** in the left navigation, select **Data**. On the Data assets tab, select Create
:::image type="content" source="./media/how-to-create-data-assets/data-assets-create.png" alt-text="Screenshot highlights Create in the Data assets tab."::: 1. Give your data asset a name and optional description. Then, select the "File (uri_file)" option under Type.
paths:
- pattern: ./*.txt transformations: - read_delimited:
- delimiter: ','
+ delimiter: ,
encoding: ascii header: all_files_same_headers ```
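After you author an MLTable file like the one above, you can check it locally by loading it and materializing it into pandas. This is a minimal sketch that assumes the MLTable file is saved in a local folder named `./my_data`.

```python
import mltable

# Assumption: the MLTable YAML shown above is saved as ./my_data/MLTable.
tbl = mltable.load("./my_data")

# Materialize the table into a pandas DataFrame to verify the transformations.
df = tbl.to_pandas_dataframe()
print(df.head())
```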
For more transformations available in `mltable`, please look into [reference-yam
### Create an MLTable artifact via Python SDK: from_* If you would like to create an MLTable object in memory via Python SDK, you could use from_* methods.
-The from_* methods does not materialize the data, but rather stores is as a transformation in the MLTable definition.
+The from_* methods do not materialize the data, but rather store it as a transformation in the MLTable definition.
For example, you can use from_delta_lake() to create an in-memory MLTable artifact to read Delta Lake data from the path `delta_table_path`. ```python
To create a Table data asset in the Azure Machine Learning studio, use the follo
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
-1. Under __Assets__ in the left navigation, select __Data__. On the Data assets tab, select Create
+1. Under **Assets** in the left navigation, select **Data**. On the Data assets tab, select Create
:::image type="content" source="./media/how-to-create-data-assets/data-assets-create.png" alt-text="Screenshot highlights Create in the Data assets tab."::: 1. Give your data asset a name and optional description. Then, select the "Table (mltable)" option under Type.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
This setting can be configured during CI creation or for existing CIs via the fo
```YAML # Note that this is just a snippet for the idle shutdown property. Refer to the "Create" Azure CLI section for more information.
+ # Note that idle_time_before_shutdown has been deprecated.
idle_time_before_shutdown_minutes: 30 ``` * Python SDKv2: only configurable during new CI creation ```Python
+ # Note that idle_time_before_shutdown has been deprecated.
ComputeInstance(name=ci_basic_name, size="STANDARD_DS3_v2", idle_time_before_shutdown_minutes="30") ```
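    For reference, a slightly fuller sketch of the SDK v2 path follows, assuming the `azure-ai-ml` package and an authenticated `MLClient` named `ml_client`; the compute instance name is a placeholder.
    ```python
    # A minimal sketch, assuming azure-ai-ml (SDK v2) and an authenticated MLClient
    # named ml_client. The compute instance name is a placeholder.
    from azure.ai.ml.entities import ComputeInstance

    ci = ComputeInstance(
        name="ci-example",                      # placeholder name
        size="STANDARD_DS3_v2",
        idle_time_before_shutdown_minutes=30,   # shut down after 30 idle minutes
    )

    ml_client.compute.begin_create_or_update(ci).result()
    ```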
machine-learning How To Create Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-workspace-template.md
Previously updated : 08/12/2022 Last updated : 11/09/2022 #Customer intent: As a DevOps person, I need to automate or customize the creation of Azure Machine Learning by using templates.
For more information, see [Deploy an application with Azure Resource Manager tem
[!INCLUDE [register-namespace](../../includes/machine-learning-register-namespace.md)]
+* The example template may not always use the latest API version for Azure Machine Learning. Before using the template, we recommend modifying it to use the latest API versions. For information on the latest API versions for Azure Machine Learning, see the [Azure Machine Learning REST API](/rest/api/azureml/).
+
+ > [!TIP]
+ > Each Azure service has its own set of API versions. For information on the API for a specific service, check the service information in the [Azure REST API reference](/rest/api/azure/).
+
+ To update the API version, find the `"apiVersion": "YYYY-MM-DD"` entry for the resource type and update it to the latest version. The following example is an entry for Azure Machine Learning:
+
+ ```json
+ "type": "Microsoft.MachineLearningServices/workspaces",
+ "apiVersion": "2020-03-01",
+ ```
+ ### Multiple workspaces in the same VNet The template doesn't support multiple Azure Machine Learning workspaces deployed in the same VNet. This is because the template creates new DNS zones during deployment.
New-AzResourceGroupDeployment `
-<!-- Workspaces need a private endpoint when associated resources are behind a virtual network to work properly. To set up a private endpoint for the workspace with a new virtual network:
-
-> [!IMPORTANT]
-> The deployment is only valid in regions which support private endpoints.
-
-# [Azure CLI](#tab/azcli)
-
-```azurecli
-az deployment group create \
- --name "exampledeployment" \
- --resource-group "examplegroup" \
- --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.machinelearningservices/machine-learning-workspace-vnet/azuredeploy.json" \
- --parameters workspaceName="exampleworkspace" \
- location="eastus" \
- vnetOption="new" \
- vnetName="examplevnet" \
- privateEndpointType="AutoApproval"
-```
-
-# [Azure PowerShell](#tab/azpowershell)
-
-```azurepowershell
-New-AzResourceGroupDeployment `
- -Name "exampledeployment" `
- -ResourceGroupName "examplegroup" `
- -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.machinelearningservices/machine-learning-workspace-vnet/azuredeploy.json" `
- -workspaceName "exampleworkspace" `
- -location "eastus" `
- -vnetOption "new" `
- -vnetName "examplevnet" `
- -privateEndpointType "AutoApproval"
-```
-
- -->
### Use an existing virtual network & resources
To deploy a workspace with existing associated resources you have to set the **v
```
-<!-- Workspaces need a private endpoint when associated resources are behind a virtual network to work properly. To set up a private endpoint for the workspace with an existing virtual network:
-
-> [!IMPORTANT]
-> The deployment is only valid in regions which support private endpoints.
-
-# [Azure CLI](#tab/azcli)
-
-```azurecli
-az deployment group create \
- --name "exampledeployment" \
- --resource-group "examplegroup" \
- --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.machinelearningservices/machine-learning-workspace-vnet/azuredeploy.json" \
- --parameters workspaceName="exampleworkspace" \
- location="eastus" \
- vnetOption="existing" \
- vnetName="examplevnet" \
- vnetResourceGroupName="rg" \
- privateEndpointType="AutoApproval" \
- subnetName="subnet" \
- subnetOption="existing"
-```
-
-# [Azure PowerShell](#tab/azpowershell)
-
-```azurepowershell
-New-AzResourceGroupDeployment `
- -Name "exampledeployment" `
- -ResourceGroupName "examplegroup" `
- -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.machinelearningservices/machine-learning-workspace-vnet/azuredeploy.json" `
- -workspaceName "exampleworkspace" `
- -location "eastus" `
- -vnetOption "existing" `
- -vnetName "examplevnet" `
- -vnetResourceGroupName "rg"
- -privateEndpointType "AutoApproval"
- -subnetName "subnet"
- -subnetOption "existing"
-```
-
- -->
- ## Use the Azure portal 1. Follow the steps in [Deploy resources from custom template](../azure-resource-manager/templates/deploy-portal.md#deploy-resources-from-custom-template). When you arrive at the __Select a template__ screen, choose the **quickstarts** entry. When it appears, select the link labeled "Click here to open template repository". This link takes you to the `quickstarts` directory in the Azure quickstart templates repository.
machine-learning How To Manage Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-files.md
To create a new file in a different folder:
## Manage files with Git
-[Use a compute instance terminal](how-to-access-terminal.md#git) to clone and manage Git repositories.
+[Use a compute instance terminal](how-to-access-terminal.md#git) to clone and manage Git repositories. To integrate Git with your Azure Machine Learning workspace, see [Git integration for Azure Machine Learning](concept-train-model-git-integration.md).
## Clone samples
machine-learning How To Manage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md
When you provide a model you want to register, you'll need to specify a `path` p
|A path on an AzureML Datastore | `azureml://datastores/<datastore-name>/paths/<path_on_datastore>` | |A path from an AzureML job | `azureml://jobs/<job-name>/outputs/<output-name>/paths/<path-to-model-relative-to-the-named-output-location>` | |A path from an MLflow job | `runs:/<run-id>/<path-to-model-relative-to-the-root-of-the-artifact-location>` |
+|A path from a Model Asset in AzureML Workspace | `azureml:<model-name>:<version>`|
+|A path from a Model Asset in AzureML Registry | `azureml://registries/<registry-name>/models/<model-name>/versions/<version>`|
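As an illustration of these path formats, here is a minimal sketch that registers a model from a job output with the Python SDK v2. It assumes the `azure-ai-ml` package and an authenticated `MLClient` named `ml_client`; the model name and the job output path are placeholders.

```python
# A minimal sketch, assuming azure-ai-ml (SDK v2) and an authenticated MLClient
# named ml_client. The model name and the job output path are placeholders.
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

run_model = Model(
    name="my-model",  # placeholder model name
    path="azureml://jobs/<job-name>/outputs/<output-name>/paths/model/",  # placeholder path
    type=AssetTypes.CUSTOM_MODEL,
)

ml_client.models.create_or_update(run_model)
```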
## Supported modes
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
Title: Read and write data in jobs
+ Title: Access data in a job
description: Learn how to read and write data in Azure Machine Learning training jobs.
#Customer intent: As an experienced Python developer, I need to read in my data to make it available to a remote compute to train my machine learning models.
-# Read and write data in a job
+# Access data in a job
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
The following example defines a pipeline containing three nodes and moves data b
* [Train models](how-to-train-model.md) * [Tutorial: Create production ML pipelines with Python SDK v2](tutorial-pipeline-python-sdk.md)
-* Learn more about [Data in Azure Machine Learning](concept-data.md)
+* Learn more about [Data in Azure Machine Learning](concept-data.md)
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
AzureML allows you to either use a curated (or ready-made) environment or create
In this section, we'll begin by introducing the data for training. We'll then cover how to run a training job, using a training script that we've provided. You'll learn to build the training job by configuring the command for running the training script. Then, you'll submit the training job to run in AzureML. ### Obtain the training data
-You'll use data that is stored on a public blob as a [zip file](https://azureopendatastorage.blob.core.windows.net/testpublic/temp/fowl_data.zip). This dataset consists of about 120 training images each for two classes (turkeys and chickens), with 100 validation images for each class. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/https://docsupdatetracker.net/index.html). We'll download and extract the dataset as part of our training script `pytorch_train.py`.
+You'll use data that is stored on a public blob as a [zip file](https://azuremlexamples.blob.core.windows.net/datasets/fowl_data.zip). This dataset consists of about 120 training images each for two classes (turkeys and chickens), with 100 validation images for each class. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/index.html). We'll download and extract the dataset as part of our training script `pytorch_train.py`.
### Prepare the training script
In this article, you trained and registered a deep learning neural network using
- [Track run metrics during training](how-to-log-view-metrics.md) - [Tune hyperparameters](how-to-tune-hyperparameters.md)-- [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
+- [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
The specified VM Size failed to provision due to a lack of Azure Machine Learnin
Below is a list of reasons you might run into this error: * [Resource request was greater than limits](#resource-requests-greater-than-limits)
+* [Subscription does not exist](#subscription-does-not-exist)
* [Startup task failed due to authorization error](#authorization-error) * [Startup task failed due to incorrect role assignments on resource](#authorization-error) * [Unable to download user container image](#unable-to-download-user-container-image)
Below is a list of reasons you might run into this error:
Requests for resources must be less than or equal to limits. If you don't set limits, we set default values when you attach your compute to an Azure Machine Learning workspace. You can check limits in the Azure portal or by using the `az ml compute show` command.
+#### Subscription does not exist
+
+The Azure subscription that you enter must exist. This error occurs when we can't find the Azure subscription that was referenced, most likely because of a typo in the subscription ID. Double-check that the subscription ID was entered correctly and that it is currently active.
+
+For more information about Azure subscriptions, refer to the [prerequisites section](#prerequisites).
+ #### Authorization error After you provisioned the compute resource, during deployment creation, Azure tries to pull the user container image from the workspace private Azure Container Registry (ACR) and mount the user model and code artifacts into the user container from the workspace storage account.
machine-learning Migrate To V2 Command Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-command-job.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
For more information, see: * [V1 - Experiment](/python/api/azureml-core/azureml.core.experiment)
-* [V2 - Command Job](/python/api/azure-ai-ml/azure.ai.ml.md#azure-ai-ml-command)
+* [V2 - Command Job](/python/api/azure-ai-ml/azure.ai.ml#azure-ai-ml-command)
* [Train models with the Azure ML Python SDK v2](how-to-train-sdk.md)
machine-learning Tutorial Create Secure Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace-template.md
The Bicep template is made up of the [main.bicep](https://github.com/Azure/azure
| [machinelearningcompute.bicep](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure/modules/machinelearningcompute.bicep) | Defines an Azure Machine Learning compute cluster and compute instance. | | [privateaks.bicep](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure/modules/privateaks.bicep) | Defines an Azure Kubernetes Services cluster instance. |
+> [!IMPORTANT]
+> The example templates may not always use the latest API version for Azure Machine Learning. Before using the template, we recommend modifying it to use the latest API versions. For information on the latest API versions for Azure Machine Learning, see the [Azure Machine Learning REST API](/rest/api/azureml/).
+>
+> Each Azure service has its own set of API versions. For information on the API for a specific service, check the service information in the [Azure REST API reference](/rest/api/azure/).
+>
+> To update the API version, find the `Microsoft.MachineLearningServices/<resource>` entry for the resource type and update it to the latest version. The following example is an entry for the Azure Machine Learning workspace that uses an API version of `2022-05-01`:
+>
+>```json
+>resource machineLearning 'Microsoft.MachineLearningServices/workspaces@2022-05-01' = {
+>```
+ # [Terraform](#tab/terraform) The template consists of multiple files. The following table describes what each file is responsible for:
marketplace Deprecate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/deprecate-vm.md
This article describes how to deprecate or restore virtual machine images, plans
## What is deprecation?
-Deprecation is the delisting of a VM offer or a subset of the offer from Azure Marketplace so that it is no longer available for customers to deploy additional instances. Reasons to deprecate may vary. Common examples are due to security issues or end of life. You can deprecate image versions, plans, or an entire VM offer:
-
+Deprecation is the removal of a VM offer or a subset of the offer from Azure Marketplace so that it is no longer available for customers to deploy additional instances. Reasons to deprecate may vary. Common examples are due to security issues or end of life. You can deprecate image versions, plans, or an entire VM offer:
- **Deprecation of an image version** ΓÇô The removal of an individual VM image version - **Deprecation of a plan** ΓÇô The removal of a plan and subsequently all images within the plan - **Deprecation of an offer** ΓÇô The removal of an entire VM offer, including all plans within the offer and subsequently all images within each plan
Before the scheduled deprecation date:
- Customers with active deployments are notified. - Customers can continue to deploy new instances up until the deprecation date.-- If deprecating an offer or plan, the offer or plan will no longer be available in the marketplace. This is to reduce the discoverability of the offer or plan.-
+- If deprecating an offer, the offer will no longer be searchable in the marketplace upon scheduling the deprecation. This is to reduce the discoverability of the offer.
After the scheduled deprecation date: - Customers will not be able to deploy new instances using the affected images. If deprecating a plan, all images within the plan will no longer be available and if deprecating an offer, all images within the offer will no longer be available following deprecation. - Active VM instances will not be impacted.-- Existing virtual machine scale sets (VMSS) deployments cannot be scaled out if configured with any of the impacted images. If deprecating a plan or offer, all existing VMSS deployments pinned to any image within the plan or offer respectively cannot be scaled out.-
+- Existing virtual machine scale set deployments cannot be scaled out. If deprecating a plan or offer, all existing scale set deployments using any image within the plan or offer, respectively, cannot be scaled out.
> [!TIP]
-> Before you deprecate an offer or plan, make sure you understand the current usage by reviewing the [Usage dashboard in commercial marketplace analytics](usage-dashboard.md). If usage is high, consider hiding the plan or offer to minimize discoverability within the commercial marketplace. This will steer new customers towards other available options.
+> Before you deprecate an offer, plan, or image, make sure you understand the current usage by reviewing the [Usage dashboard in commercial marketplace analytics](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/usage). If usage is high, consider hiding the plan or offer to minimize discoverability within the commercial marketplace and steer new customers towards other available options. To hide an offer, select the **Hide plan** checkbox on the **Pricing and Availability** page of each individual plan in the offer.
## Deprecate an image Keep the following things in mind when deprecating an image: - You can deprecate any image within a plan.-- Each plan must have at least one image.-- Publish the offer after scheduling the deprecation of an image.-- Images that are published to preview can be deprecated or deleted immediately.-
+- Each plan must have at least one active image.
+- You must publish the offer after scheduling the deprecation of an image.
+- Images that are published only to preview and have never been published live can be deleted immediately when the offer is still in preview state.
**To deprecate an image**: 1. On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, in the **Offer alias** column, select the offer with the image you want to deprecate.
Keep the following things in mind when deprecating an image:
Keep the following things in mind when restoring a deprecated image: -- Publish the offer after restoring an image for it to become available to customers.
+- You must publish the offer after restoring an image for it to become available to customers.
- You can undo or cancel the deprecation anytime up until the scheduled date. - You can restore an image for a period of time after deprecation. After the window has expired, the image can no longer be restored.
Keep the following things in mind when restoring a deprecated image:
1. In the **Action** column, select one of the following: - If the deprecation date shown in the **Status** column is in the future, you can select **Cancel deprecation**. The image version will then be listed under the Active tab. - If the deprecation date shown in the **Status** column is in the past, select **Restore image**. The image is then listed on the **Active** tab.
- > [!NOTE]
- > If the image can no longer be restored, then no actions will be available.
+1. > [!NOTE]
+ > If the image can no longer be restored, then no actions will be available.
+
1. Save your changes on the **Technical configuration** page. 1. For the change to take effect, select **Review and publish** and publish the offer. ## Deprecate a plan
-Keep the following things in ming when deprecating a plan:
--- Publish the offer after scheduling the deprecation of a plan.
+Keep the following things in mind when deprecating a plan:
+- You must publish the offer after scheduling the deprecation of a plan.
- Upon scheduling the deprecation of a plan, free trials are disabled immediately. - If a test drive is enabled on your offer and itΓÇÖs configured to use the plan thatΓÇÖs being deprecated, be sure to reconfigure the test drive to use another plan in the offer. Otherwise, disable the test drive on the **Offer Setup** page.
Keep the following things in ming when deprecating a plan:
Keep the following things in mind when restoring a plan: - Ensure that there is at least one active image version available on the **Technical Configuration** page of the plan. You can either restore a deprecated image or provide a new one.-- Publish the offer after restoring a plan for it to become available to customers.-
+- You must publish the offer after restoring a plan for it to become available to customers.
**To restore a plan**: 1. On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, in the **Offer alias** column, select the offer with the plan you want to restore.
On the **Offer Overview** page, you can deprecate the entire offer. This depreca
Keep the following things in mind when deprecating an offer: -- The deprecation will be scheduled 90 days into the future and customers will be notified.
+- The deprecation will be scheduled 90 days into the future immediately upon confirmation, and customers will be notified.
- Test drive and any free trials will be disabled immediately upon scheduling deprecation of an offer. **To deprecate an offer**:
Keep the following things in mind when deprecating an offer:
1. On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, in the **Offer alias** column, select the offer you want to deprecate. 1. On the **Offer overview** page, in the upper right, select **Deprecate offer**. 1. In the confirmation dialog box that appears, enter the Offer ID and then confirm that you want to deprecate the offer.
- > [!NOTE]
- > On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, the **Status column** of the offer will say **Deprecation scheduled**. On the **Offer overview** page, under **Publish status**, the scheduled deprecation date is shown.
-
+1. > [!NOTE]
+ > On the **Offer overview** page, under **Publish status**, the scheduled deprecation date is shown.
+
## Restore a deprecated offer You can restore an offer only if the offer contains at least one active plan and at least one active image.
You can restore an offer only if the offer contains at least one active plan and
1. Ensure that there is at least one active image version available on the **Technical Configuration** page of the plan. Note that all deprecated images are listed under **VM Images** on the **Deprecated** tab. You can either [restore a deprecated image](#restore-a-deprecated-image) or [add a new VM image](azure-vm-plan-technical-configuration.md#vm-images). Remember, if the restore window has expired, the image can no longer be restored. 1. Save your changes on the **Technical configuration** page. 1. For the changes to take effect, select **Review and publish** and publish the offer.++
marketplace Publisher Guide By Offer Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/publisher-guide-by-offer-type.md
The following table shows the commercial marketplace offer types in Partner Cent
| [**Azure Application**](plan-azure-application-offer.md) | There are two kinds of Azure application plans: _solution template_ and _managed application_. Both plan types support automating the deployment and configuration of a solution beyond a single virtual machine (VM). You can automate the process of providing multiple resources, including VMs, networking, and storage resources to provide complex solutions, such as IaaS solutions. Both plan types can employ many different kinds of Azure resources, including but not limited to VMs.<ul><li>**Solution template** plans are one of the main ways to publish a solution in the commercial marketplace. Solution template plans are not transactable in the commercial marketplace, but they can be used to deploy paid VM offers that are billed through the commercial marketplace. Use the solution template plan type when the customer will manage the solution and the transactions are billed through another plan.</li><br><li>**Managed application** plans enable you to easily build and deliver fully managed, turnkey applications for your customers. They have the same capabilities as solution template plans, with some key differences:</li><ul><li> The resources are deployed to a resource group and are managed by the publisher of the app. The resource group is present in the consumer's subscription, but an identity in the publisher's tenant has access to the resource group.</li><li>As the publisher, you specify the cost for ongoing support of the solution and transactions are supported through the commercial marketplace.</li></ul>Use the managed application plan type when you or your customer requires that the solution is managed by a partner or you will deploy a subscription-based solution.</ul> | | [**Azure Container**](marketplace-containers.md) | Use the Azure Container offer type when your solution is a Docker container image provisioned as a Kubernetes-based Azure container service. | | [**Azure virtual machine**](marketplace-virtual-machines.md) | Use the virtual machine offer type when you deploy a virtual appliance to the subscription associated with your customer. |
-| [**Consulting service**](./plan-consulting-service-offer.md) | Consulting services help to connect customers with services to support and extend their use of Azure, Dynamics 365, or Power Suite services.|
+| [**Consulting service**](./plan-consulting-service-offer.md) | Consulting services help to connect customers with services to support and extend their use of Azure, Dynamics 365, Microsoft 365, or Power Suite services.|
| [**Dynamics 365**](marketplace-dynamics-365.md) | Publish AppSource offers that build on or extend Dynamics 365 products.| | [**IoT Edge module**](marketplace-iot-edge.md) | Azure IoT Edge modules are the smallest computation units managed by IoT Edge, and can contain Microsoft services (such as Azure Stream Analytics), 3rd-party services, or your own solution-specific code. | | [**Managed service**](./plan-managed-service-offer.md) | Create managed service offers and manage customer-delegated subscriptions or resource groups through [Azure Lighthouse](../lighthouse/overview.md).|
migrate Add Server Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/add-server-credentials.md
ms. Previously updated : 09/12/2022 Last updated : 11/13/2022 # Provide server credentials to discover software inventory, dependencies, web apps, and SQL Server instances and databases
Type of credentials | Description
**Non-domain credentials (Windows/Linux)** | You can add **Windows (Non-domain)** or **Linux (Non-domain)** by selecting the required option from the drop-down in the **Add credentials** modal. <br/><br/> You need to specify a friendly name for credentials, username, and password. **SQL Server Authentication credentials** | You can add **SQL Server Authentication** credentials by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> You need to specify a friendly name for credentials, username, and password. <br/><br/> You can add this type of credentials to discover SQL Server instances and databases running in your VMware environment, if you've configured SQL Server authentication mode on your SQL Servers.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.<br/><br/> You need to provide at least one successfully validated domain credential or at least one Windows (Non-domain) credential so that the appliance can complete the software inventory to discover SQL installed on the servers before it uses the SQL Server authentication credentials to discover the SQL Server instances and databases.
-> [!Note]
-> Currently, the SQL Server authentication credentials can only be provided in appliance used for discovery and assessment of servers running in VMware environment.
- Check the permissions required on the Windows/Linux credentials to perform the software inventory, agentless dependency analysis and discover web apps, and SQL Server instances and databases. ### Required permissions
Feature | Windows credentials | Linux credentials
**Software inventory** | Guest user account | Regular/normal user account (non-sudo access permissions) **Discovery of SQL Server instances and databases** | User account that is member of the sysadmin server role. | _Not supported currently_ **Discovery of ASP.NET web apps** | Domain or non-domain (local) account with administrative permissions | _Not supported currently_
-**Agentless dependency analysis** | Domain or non-domain (local) account with administrative permissions | Root user account, or <br/> an account with these permissions on /bin/netstat and /bin/ls files: CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE.<br/><br/> Set these capabilities using the following commands: <br/><br/> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br/><br/> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat
+**Agentless dependency analysis** | Domain or non-domain (local) account with administrative permissions | Sudo user account with permissions to execute ls and netstat commands. If you are providing a sudo user account, ensure that you have enabled **NOPASSWD** for the account to run the required commands without prompting for a password every time the sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
### Recommended practices to provide credentials
migrate Common Questions Discovery Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-discovery-assessment.md
ms. Previously updated : 05/05/2022 Last updated : 11/13/2022 # Discovery, assessment, and dependency analysis - Common questions
-This article answers common questions about discovery, assessment, and dependency analysis in Azure Migrate. If you have other questions, check these resources:
+This article answers common questions about discovery, assessment, and dependency analysis in Azure Migrate. If you have other questions, check these resources:
- [General questions](resources-faq.md) about Azure Migrate - Questions about the [Azure Migrate appliance](common-questions-appliance.md)
Review the supported geographies for [public](migrate-support-matrix.md#public-c
## How many servers can I discover with an appliance?
-You can discover up to 10,000 servers from VMware environment, up to 5,000 servers from Hyper-V environment, and up to 1000 physical servers by using a single appliance. If you have more servers, read about [scaling a Hyper-V assessment](scale-hyper-v-assessment.md), [scaling a VMware assessment](scale-vmware-assessment.md), or [scaling a physical server assessment](scale-physical-assessment.md).
+You can discover up to 10,000 servers from VMware environment, up to 5,000 servers from Hyper-V environment, and up to 1000 physical servers by using a single appliance. If you have more servers, read about [scaling a Hyper-V assessment](scale-hyper-v-assessment.md), [scaling a VMware assessment](scale-vmware-assessment.md), or [scaling a physical server assessment](scale-physical-assessment.md).
## How do I choose the assessment type? - Use **Azure VM assessments** when you want to assess servers from your on-premises [VMware](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs. [Learn More](concepts-assessment-calculation.md).-- Use assessment type **Azure SQL** when you want to assess your on-premises SQL Server in your VMware, Microsoft Hyper-V, and Physical/ Baremetal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. for migration to SQL Server on Azure VM or Azure SQL Database or Azure SQL Managed Instance. [Learn More](concepts-azure-sql-assessment-calculation.md).
+- Use assessment type **Azure SQL** when you want to assess your on-premises SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. for migration to SQL Server on Azure VM or Azure SQL Database or Azure SQL Managed Instance. [Learn More](concepts-azure-sql-assessment-calculation.md).
- Use assessment type **Azure App Service** when you want to assess your on-premises ASP.NET web apps running on IIS web server from your VMware environment for migration to Azure App Service. [Learn More](concepts-assessment-calculation.md). - Use **Azure VMware Solution (AVS)** assessments when you want to assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md).-- You can use a common group with VMware machines only to run both types of assessments. If you are running AVS assessments in Azure Migrate for the first time, it is advisable to create a new group of VMware machines.
+- You can use a common group with VMware machines only to run both types of assessments. If you're running AVS assessments in Azure Migrate for the first time, it's advisable to create a new group of VMware machines.
## Why is performance data missing for some/all servers in my Azure VM and/or AVS assessment report?
-For "Performance-based" assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance cannot collect performance data for the on-premises servers. Check:
+For "Performance-based" assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance can't collect performance data for the on-premises servers. Check:
-- If the servers are powered on for the duration for which you are creating the assessment-- If only memory counters are missing and you are trying to assess servers in Hyper-V environment. In this scenario, enable dynamic memory on the servers and 'Recalculate' the assessment to reflect the latest changes. The appliance can collect memory utilization values for severs in Hyper-V environment only when the server has dynamic memory enabled.
+- If the servers are powered on for the duration for which you're creating the assessment
+- If only memory counters are missing and you're trying to assess servers in Hyper-V environment. In this scenario, enable dynamic memory on the servers and 'Recalculate' the assessment to reflect the latest changes. The appliance can collect memory utilization values for servers in Hyper-V environment only when the server has dynamic memory enabled.
- If all of the performance counters are missing, ensure that outbound connections on ports 443 (HTTPS) are allowed.
For "Performance-based" assessment, the assessment report export says 'Percentag
To ensure performance data is collected, check: -- If the SQL Servers are powered on for the duration for which you are creating the assessment.
+- If the SQL Servers are powered on for the duration for which you're creating the assessment.
- If the connection status of the SQL agent in Azure Migrate is 'Connected', and check the last heartbeat. -- If Azure Migrate connection status for all SQL instances is 'Connected' in the discovered SQL instance blade.
+- If Azure Migrate connection status for all SQL instances is 'Connected' in the discovered SQL instance section.
- If all of the performance counters are missing, ensure that outbound connections on ports 443 (HTTPS) are allowed. If any of the performance counters are missing, Azure SQL assessment recommends the smallest Azure SQL configuration for that instance/database.
-## Why confidence rating is not available for Azure App Service assessments?
+## Why is confidence rating not available for Azure App Service assessments?
-Performance data is not captured for Azure App Service assessment and hence you do not see confidence rating for this assessment type. Azure App Service assessment takes configuration data of web apps in to account while performing assessment calculation.
+Performance data isn't captured for Azure App Service assessment, and hence you don't see a confidence rating for this assessment type. Azure App Service assessment takes the configuration data of web apps into account while performing assessment calculation.
## Why is the confidence rating of my assessment low? The confidence rating is calculated for "Performance-based" assessments based on the percentage of [available data points](./concepts-assessment-calculation.md#ratings) needed to compute the assessment. Below are the reasons why an assessment could get a low confidence rating: -- You did not profile your environment for the duration for which you are creating the assessment. For example, if you are creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you cannot wait for the duration, change the performance duration to a smaller period and **Recalculate** the assessment.-- Assessment is not able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, ensure that:
+- You didn't profile your environment for the duration for which you're creating the assessment. For example, if you're creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you can't wait for the duration, change the performance duration to a smaller period and **Recalculate** the assessment.
+- Assessment isn't able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, ensure that:
- Servers are powered on for the duration of the assessment - Outbound connections on ports 443 are allowed - For Hyper-V Servers, dynamic memory is enabled - The connection status of agents in Azure Migrate are 'Connected' and check the last heartbeat
- - For Azure SQL assessments, Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance blade
+ - For Azure SQL assessments, Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance section.
**Recalculate** the assessment to reflect the latest changes in confidence rating. -- For Azure VM and AVS assessments, few servers were created after discovery had started. For example, if you are creating an assessment for the performance history of last one month, but few servers were created in the environment only a week ago. In this case, the performance data for the new servers will not be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-assessment-calculation.md#confidence-ratings-performance-based).-- For Azure SQL assessments, few SQL instances or databases were created after discovery had started. For example, if you are creating an assessment for the performance history of last one month, but few SQL instances or databases were created in the environment only a week ago. In this case, the performance data for the new servers will not be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-azure-sql-assessment-calculation.md#confidence-ratings).
+- For Azure VM and AVS assessments, few servers were created after discovery had started. For example, if you're creating an assessment for the performance history of last one month, but few servers were created in the environment only a week ago. In this case, the performance data for the new servers won't be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-assessment-calculation.md#confidence-ratings-performance-based).
+- For Azure SQL assessments, few SQL instances or databases were created after discovery had started. For example, if you're creating an assessment for the performance history of last one month, but few SQL instances or databases were created in the environment only a week ago. In this case, the performance data for the new servers won't be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-azure-sql-assessment-calculation.md#confidence-ratings).
## Why is my RAM utilization greater than 100%?
By design, in Hyper-V if maximum memory provisioned is less than what is require
## Why can't I see all Azure VM families in the Azure VM assessment properties? There could be two reasons:-- You have chosen an Azure region where a particular series is not supported. Azure VM families shown in Azure VM assessment properties are dependent on the availability of the VM series in the chosen Azure location, storage type and Reserved Instance. -- The VM series is not support in the assessment and is not in the consideration logic of the assessment. We currently do not support B-series burstable, accelerated and high performance SKU series. We are trying to keep the VM series updated, and the ones mentioned are on our roadmap.
+- You've chosen an Azure region where a particular series isn't supported. Azure VM families shown in Azure VM assessment properties are dependent on the availability of the VM series in the chosen Azure location, storage type and Reserved Instance.
+- The VM series isn't supported in the assessment and isn't in the consideration logic of the assessment. We currently don't support B-series burstable, accelerated, and high-performance SKU series. We're trying to keep the VM series updated, and the ones mentioned are on our roadmap.
## The number of Azure VM or AVS assessments on the Discovery and assessment tool are incorrect
- To remediate this, click the total number of assessments to navigate to all the assessments and recalculate the Azure VM or AVS assessment. The discovery and assessment tool will then show the correct count for that assessment type.
+ To remediate this, select the total number of assessments to navigate to all the assessments and recalculate the Azure VM or AVS assessment. The discovery and assessment tool will then show the correct count for that assessment type.
## I want to try out the new Azure SQL assessment
-Discovery and assessment of SQL Server instances and databases running in your VMware, Microsoft Hyper-V, and Physical/ Baremetal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
+Discovery and assessment of SQL Server instances and databases running in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, ensure that you've completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
## I want to try out the new Azure App Service assessment
-Discovery and assessment of .NET web apps running in your VMware environment is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
+Discovery and assessment of .NET web apps running in your VMware environment is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, ensure that you've completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
## I can't see some servers when I am creating an Azure SQL assessment -- Azure SQL assessment can only be done on servers running where SQL instances were discovered. If you don't see the servers and SQL instances that you wish to assess, wait for some time for the discovery to get completed and then create the assessment.-- If you are not able to see a previously created group while creating the assessment, remove any server without a SQL instance from the group.-- If you are running Azure SQL assessments in Azure Migrate for the first time, it is advisable to create a new group of servers.
+- Azure SQL assessment can only be done on servers where SQL instances were discovered. If you don't see the servers and SQL instances that you wish to assess, wait for the discovery to complete, and then create the assessment.
+- If you're not able to see a previously created group while creating the assessment, remove any server without a SQL instance from the group.
+- If you're running Azure SQL assessments in Azure Migrate for the first time, it's advisable to create a new group of servers.
## I can't see some servers when I am creating an Azure App Service assessment -- Azure App Service assessment can only be done on servers running where web server role was discovered. If you don't see the servers that you wish to assess, wait for some time for the discovery to get completed and then create the assessment.-- If you are not able to see a previously created group while creating the assessment, remove any non-VMware server or any server without a web app from the group.-- If you are running Azure App Service assessments in Azure Migrate for the first time, it is advisable to create a new group of servers.
+- Azure App Service assessment can only be done on servers where the web server role was discovered. If you don't see the servers that you wish to assess, wait for the discovery to complete, and then create the assessment.
+- If you're not able to see a previously created group while creating the assessment, remove any non-VMware server or any server without a web app from the group.
+- If you're running Azure App Service assessments in Azure Migrate for the first time, it's advisable to create a new group of servers.
## I want to understand how the readiness for my instance was computed
The readiness for your web apps is computed by running series of technical check
## Why is my web app marked as Ready with conditions or Not ready in my Azure App Service assessment?
-This can happen when one or more technical checks fail for a given web app. You may click on the readiness status for the web app to find out details and remediation for failed checks.
+This can happen when one or more technical checks fail for a given web app. You may select the readiness status for the web app to find out details and remediation for failed checks.
## Why is the readiness for all my SQL instances marked as unknown?
The SQL discovery is performed once every 24 hours and you might need to wait up
This could happen if: - The discovery is still in progress. We recommend that you wait for some time for the appliance to profile the environment and then recalculate the assessment.-- There are some discovery issues that you need to fix in the Errors and notifications blade.
+- There are some discovery issues that you need to fix in **Errors and notifications**.
The SQL discovery is performed once every 24 hours and you might need to wait up to a day for the latest configuration changes to be reflected.
The Azure SQL assessment only includes databases that are in online status. In c
## I want to compare costs for running my SQL instances on Azure VM vs Azure SQL Database/Azure SQL Managed Instance
-You can create a single **Azure SQL** assessment consisting of desired SQL servers across VMware, Microsoft Hyper-V and Physical/ Baremetal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. A single assessment covers readiness, SKUs, estimated costs and migration blockers for all the available SQL migration targets in Azure - Azure SQL Managed Instance, Azure SQL Database and SQL Server on Azure VM. You can then compare the assessment output for the desired targets. [Learn More](./concepts-azure-sql-assessment-calculation.md)
+You can create a single **Azure SQL** assessment consisting of desired SQL servers across VMware, Microsoft Hyper-V and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. A single assessment covers readiness, SKUs, estimated costs and migration blockers for all the available SQL migration targets in Azure - Azure SQL Managed Instance, Azure SQL Database and SQL Server on Azure VM. You can then compare the assessment output for the desired targets. [Learn More](./concepts-azure-sql-assessment-calculation.md)
## The storage cost in my Azure SQL assessment is zero
For Azure SQL Managed Instance, there is no storage cost added for the first 32
## I can't see some groups when I am creating an Azure VMware Solution (AVS) assessment - AVS assessment can be done on groups that have only VMware machines. Remove any non-VMware machine from the group if you intend to perform an AVS assessment.-- If you are running AVS assessments in Azure Migrate for the first time, it is advisable to create a new group of VMware machines.
+- If you're running AVS assessments in Azure Migrate for the first time, it's advisable to create a new group of VMware machines.
## Queries regarding Ultra disks ### Can I migrate my disks to Ultra disk using Azure Migrate?
-No. Currently, both Azure Migrate and Azure Site Recovery do not support migration to Ultra disks. Find steps to deploy Ultra disk [here](../virtual-machines/disks-enable-ultra-ssd.md?tabs=azure-portal#deploy-an-ultra-disk)
+No. Currently, both Azure Migrate and Azure Site Recovery don't support migration to Ultra disks. Find steps to deploy Ultra disk [here](../virtual-machines/disks-enable-ultra-ssd.md?tabs=azure-portal#deploy-an-ultra-disk)
### Why are the provisioned IOPS and throughput in my Ultra disk more than my on-premises IOPS and throughput?
Using the 95th percentile value ensures that outliers are ignored. Outliers migh
Import-based Azure VM assessments are assessments created with machines that are imported into Azure Migrate using a CSV file. Only four fields are mandatory to import: Server name, cores, memory, and operating system. Here are some things to note:
+ - The readiness criteria is less stringent in import-based assessments on the boot type parameter. If the boot type isn't provided, it's assumed the machine has BIOS boot type, and the machine isn't marked as **Conditionally Ready**. In assessments with discovery source as appliance, the readiness is marked as **Conditionally Ready** if the boot type is missing. This difference in readiness calculation is because users may not have all information on the machines in the early stages of migration planning when import-based assessments are done.
- Performance-based import assessments use the utilization value provided by the user for right-sizing calculations. Since the utilization value is provided by the user, the **Performance history** and **Percentile utilization** options are disabled in the assessment properties. In assessments with the appliance as the discovery source, the chosen percentile value is picked from the performance data collected by the appliance. ## Why is the suggested migration tool in import-based AVS assessment marked as unknown?
-For machines imported via a CSV file, the default migration tool in an AVS assessment is unknown. Though, for VMware machines, it is recommended to use the VMware Hybrid Cloud Extension (HCX) solution. [Learn More](../azure-vmware/install-vmware-hcx.md).
+For machines imported via a CSV file, the default migration tool in an AVS assessment is unknown. However, for VMware machines, we recommend using the VMware Hybrid Cloud Extension (HCX) solution. [Learn More](../azure-vmware/install-vmware-hcx.md).
## What is dependency visualization?
The differences between agentless visualization and agent-based visualization ar
**Requirement** | **Agentless** | **Agent-based** | |
-Support | This option is currently in preview, and is only available for servers in VMware environment. [Review](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless) supported operating systems. | In general availability (GA).
+Support | This option is currently in preview, and is only available for servers in VMware environment. [Review](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless) supported operating systems. | In General Availability (GA).
Agent | No need to install agents on machines you want to cross-check. | Agents to be installed on each on-premises machine that you want to analyze: The [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md), and the [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md). Prerequisites | [Review](concepts-dependency-visualization.md#agentless-analysis) the prerequisites and deployment requirements. | [Review](concepts-dependency-visualization.md#agent-based-analysis) the prerequisites and deployment requirements. Log Analytics | Not required. | Azure Migrate uses the [Service Map](../azure-monitor/vm/service-map.md) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization. [Learn more](concepts-dependency-visualization.md#agent-based-analysis).
To use agent-based dependency visualization, download and install agents on each
- [Microsoft Monitoring Agent (MMA)](../azure-monitor/agents/agent-windows.md) - [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)-- If you have machines that don't have internet connectivity, download and install the Log Analytics gateway on them.
+- If you have machines that don't have internet connectivity, download and install the Log Analytics gateway on them.
You need these agents only if you use agent-based dependency visualization.
For agentless visualization, you can view the dependency map of a single server
## Can I visualize dependencies for groups of more than 10 servers?
-You can [visualize dependencies](./how-to-create-a-group.md#refine-a-group-with-dependency-mapping) for groups that have up to 10 servers. If you have a group that has more than 10 servers, we recommend that you split the group into smaller groups, and then visualize the dependencies.
+You can [visualize dependencies](./how-to-create-a-group.md#refine-a-group-with-dependency-mapping) for groups that have up to 10 servers. If you have a group that has more than 10 servers, we recommend that you split the group into smaller groups, and then visualize the dependencies.
## Next steps
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
ms. Previously updated : 03/18/2021 Last updated : 11/13/2022 # Support matrix for Hyper-V assessment
After the appliance is connected, it gathers configuration and performance data
Support | Details |
-**Supported servers** | supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/ Baremetal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. You can discover up to 300 SQL Server instances or 6,000 SQL databases, whichever is less.
+**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS servers of other public clouds such as AWS, GCP, etc. You can discover up to 300 SQL Server instances or 6,000 SQL databases, whichever is less.
**Windows servers** | Windows Server 2008 and later are supported. **Linux servers** | Currently not supported. **Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager. **SQL Server access** | Azure Migrate requires a Windows user account that is a member of the sysadmin server role. **SQL Server versions** | SQL Server 2008 and later are supported. **SQL Server editions** | Enterprise, Standard, Developer, and Express editions are supported.
-**Supported SQL configuration** | Currently, only discovery for standalone SQL Server instances and corresponding databases is supported.<br /><br /> Identification of Failover Cluster and Always On availability groups is not supported.
-**Supported SQL services** | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS) is not supported.
+**Supported SQL configuration** | Currently, only discovery for standalone SQL Server instances and corresponding databases is supported.<br /><br /> Identification of Failover Cluster and Always On availability groups isn't supported.
+**Supported SQL services** | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS) isn't supported.
> [!NOTE] > By default, Azure Migrate uses the most secure way of connecting to SQL instances i.e. Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority.
Support | Details
## Dependency analysis requirements (agentless)
-[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers which can be easily visualized with a map view in Azure Migrate project and can be used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
+[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers, which can be easily visualized with a map view in the Azure Migrate project and can be used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
Support | Details |
Support | Details
**Operating systems** | All Windows and Linux versions with [Hyper-V integration services](/virtualization/hyper-v-on-windows/about/supported-guest-os) enabled. **Server requirements** | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers: touch, chmod, cat, ps, grep, echo, sha256sum, awk, netstat, ls, sudo, dpkg, rpm, sed, getcap, which, date. **Windows server access** | A user account (local or domain) with administrator permissions on servers.
-**Linux server access** | A root user account, or an account that has these permissions on /bin/netstat and /bin/ls files: <br />CAP_DAC_READ_SEARCH<br /> CAP_SYS_PTRACE<br /><br /> Set these capabilities by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
+**Linux server access** | Sudo user account with permissions to execute the ls and netstat commands. If you're providing a sudo user account, ensure that you've enabled **NOPASSWD** for the account so that the required commands can run without prompting for a password every time the sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
**Port access** | For Windows server, need access on port 5985 (HTTP) and for Linux servers, need access on port 22(TCP). **Discovery method** | Agentless dependency analysis is performed by directly connecting to the servers using the server credentials added on the appliance. <br/><br/> The appliance gathers the dependency information from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> No agent is installed on the servers to pull dependency data.
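For example, you can quickly confirm that a Linux server exposes the required commands by running a short check over SSH. This is an illustrative sketch only; the command list mirrors the **Server requirements** row above, and netstat may need the net-tools package on newer distributions.

```bash
# Illustrative check: report any command required for discovery and
# agentless dependency analysis that is missing on this Linux server.
for cmd in touch chmod cat ps grep echo sha256sum awk netstat ls sudo dpkg rpm sed getcap which date; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
```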
Support | Details
**Log Analytics** | Azure Migrate uses the [Service Map](../azure-monitor/vm/service-map.md) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br/><br/> You associate a new or existing Log Analytics workspace with a project. The workspace for a project can't be modified after it's added. <br/><br/> The workspace must be in the same subscription as the project.<br/><br/> The workspace must reside in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br/><br/> The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions).<br/><br/> In Log Analytics, the workspace associated with Azure Migrate is tagged with the Migration Project key, and the project name. **Required agents** | On each server you want to analyze, install the following agents:<br/><br/> The [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md).<br/> The [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md).<br/><br/> If on-premises servers aren't connected to the internet, you need to download and install Log Analytics gateway on them.<br/><br/> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and [MMA](how-to-create-group-machine-dependencies.md#install-the-mma). **Log Analytics workspace** | The workspace must be in the same subscription as the project.<br/><br/> Azure Migrate supports workspaces residing in the East US, Southeast Asia, and West Europe regions.<br/><br/> The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions).<br/><br/> The workspace for a project can't be modified after it's added.
-**Costs** | The Service Map solution doesn't incur any charges for the first 180 days (from the day that you associate the Log Analytics workspace with the project)/<br/><br/> After 180 days, standard Log Analytics charges will apply.<br/><br/> Using any solution other than Service Map in the associated Log Analytics workspace will incur [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br/><br/> When the project is deleted, the workspace is not deleted along with it. After deleting the project, Service Map usage isn't free, and each node will be charged as per the paid tier of Log Analytics workspace/<br/><br/>If you have projects that you created before Azure Migrate general availability (GA- 28 February 2018), you might have incurred additional Service Map charges. To ensure payment after 180 days only, we recommend that you create a new project, since existing workspaces before GA are still chargeable.
-**Management** | When you register agents to the workspace, you use the ID and key provided by the project.<br/><br/> You can use the Log Analytics workspace outside Azure Migrate.<br/><br/> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br/><br/> Don't delete the workspace created by Azure Migrate, unless you delete the project. If you do, the dependency visualization functionality will not work as expected.
+**Costs** | The Service Map solution doesn't incur any charges for the first 180 days (from the day that you associate the Log Analytics workspace with the project).<br/><br/> After 180 days, standard Log Analytics charges will apply.<br/><br/> Using any solution other than Service Map in the associated Log Analytics workspace will incur [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br/><br/> When the project is deleted, the workspace isn't deleted along with it. After deleting the project, Service Map usage isn't free, and each node will be charged as per the paid tier of the Log Analytics workspace.<br/><br/>If you have projects that you created before Azure Migrate general availability (GA, 28 February 2018), you might have incurred additional Service Map charges. To ensure that you're charged only after 180 days, we recommend that you create a new project, since existing workspaces created before GA are still chargeable.
+**Management** | When you register agents to the workspace, you use the ID and key provided by the project.<br/><br/> You can use the Log Analytics workspace outside Azure Migrate.<br/><br/> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br/><br/> Don't delete the workspace created by Azure Migrate, unless you delete the project. If you do, the dependency visualization functionality won't work as expected.
**Internet connectivity** | If servers aren't connected to the internet, you need to install the Log Analytics gateway on them. **Azure Government** | Agent-based dependency analysis isn't supported.
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md
ms. Previously updated : 03/18/2021 Last updated : 11/13/2022 # Support matrix for physical server discovery and assessment
To assess physical servers, you create a project, and add the Azure Migrate: Dis
**Operating system:** All Windows and Linux operating systems can be assessed for migration.
-**Permissions:**
-
-Set up an account that the appliance can use to access the physical servers.
-
-**Windows servers**
+## Permissions for Windows server
For Windows servers, use a domain account for domain-joined servers, and a local account for servers that aren't domain-joined. The user account can be created in one of two ways:
For Windows servers, use a domain account for domain-joined servers, and a local
- The account needs these permissions for the appliance to create a CIM connection with the server and pull the required configuration and performance metadata from the WMI classes listed here. - In some cases, adding the account to these groups may not return the required data from WMI classes as the account might be filtered by [UAC](/windows/win32/wmisdk/user-account-control-and-wmi). To overcome the UAC filtering, the user account needs to have the necessary permissions on the CIMV2 namespace and sub-namespaces on the target server. You can follow the steps [here](troubleshoot-appliance.md) to enable the required permissions.
- > [!Note]
- > For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers.
-
-**Linux servers**
-
-For Linux servers, based on the features you want to perform, you can create a user account in one of three ways:
-
-### Option 1
-- You need a root account on the servers that you want to discover. This account can be used to pull configuration and performance metadata, perform software inventory (discovery of installed applications) and enable agentless dependency analysis.
+> [!Note]
+> For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers.
> [!Note]
-> If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Linux servers, it recommended to use Option 1.
+> To discover SQL Server databases on Windows Servers, both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager. Azure Migrate requires a Windows user account that is a member of the sysadmin server role.
-### Option 2
-- To discover the configuration and performance metadata from Linux servers, you can provide a user account with sudo permissions.-- The support to add a user account with sudo access is provided by default with the new appliance installer script downloaded from portal after July 20, 2021.-- For older appliances, you can enable the capability by following these steps:
- 1. On the server running the appliance, open the Registry Editor.
- 1. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance.
- 1. Create a registry key ΓÇÿisSudoΓÇÖ with DWORD value of 1.
+## Permissions for Linux server
- :::image type="content" source="./media/tutorial-discover-physical/issudo-reg-key.png" alt-text="Screenshot that shows how to enable sudo support.":::
+For Linux servers, based on the features you want to use, you can create a user account in one of two ways:
-- For the sudo user, you need to provide the bin/bash NOPASSWD permission in the sudoers file in addition to the commands mentioned in the table [here](discovered-metadata.md#linux-server-metadata).
+### Option 1
+- You need a sudo user account on the servers that you want to discover. This account can be used to pull configuration and performance metadata, perform software inventory (discovery of installed applications) and enable agentless dependency analysis using SSH connectivity.
+- You need to enable sudo access for the commands listed [here](discovered-metadata.md#linux-server-metadata). In addition to these commands, the user account also needs to have permissions to execute ls and netstat commands to perform agentless dependency analysis.
+- Make sure that you've enabled **NOPASSWD** for the account so that the required commands can run without prompting for a password every time the sudo command is invoked. A sample sudoers entry is sketched at the end of this option.
- The following Linux OS distributions are supported for discovery by Azure Migrate using an account with sudo access:
- Operating system | Versions
+ Operating system | Versions
|
- Red Hat Enterprise Linux | 6,7,8
- Cent OS | 6.6, 8.2
- Ubuntu | 14.04,16.04,18.04
- SUSE Linux | 11.4, 12.4
- Debian | 7, 10
+ Red Hat Enterprise Linux | 5.1, 5.3, 5.11, 6.x, 7.x, 8.x
+ Cent OS | 5.1, 5.9, 5.11, 6.x, 7.x, 8.x
+ Ubuntu | 12.04, 14.04, 16.04, 18.04, 20.04
+ Oracle Linux | 6.1, 6.7, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.3, 8.5
+ SUSE Linux | 10, 11 SP4, 12 SP1, 12 SP2, 12 SP3, 12 SP4, 15 SP2, 15 SP3
+ Debian | 7, 8, 9, 10, 11
Amazon Linux | 2.0.2021 CoreOS Container | 2345.3.0
- > [!Note]
- > 'Sudo' account is currently not supported to perform software inventory (discovery of installed applications) and enable agentless dependency analysis.
-### Option 3
-- If you can't provide root account or user account with sudo access, then you can set 'isSudo' registry key to value '0' in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance registry and provide a non-root account with the required capabilities using the following commands:
+> [!Note]
+> If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Linux servers, it's recommended to use Option 1.
+
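As an illustration of the **NOPASSWD** setup described in Option 1, the following sketch adds a sudoers drop-in for a hypothetical discovery account named `azmigrate`. The account name and command paths are placeholders; extend the command list with the remaining commands referenced in the metadata article linked above.

```bash
# Illustrative sketch only: grant a hypothetical "azmigrate" account passwordless
# sudo for ls and netstat. Add the remaining required commands and adjust paths
# to match your distribution before relying on this in practice.
echo 'azmigrate ALL=(ALL) NOPASSWD: /usr/bin/ls, /usr/bin/netstat' | sudo tee /etc/sudoers.d/azmigrate
sudo chmod 440 /etc/sudoers.d/azmigrate
sudo visudo -cf /etc/sudoers.d/azmigrate   # validate the syntax
```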
+### Option 2
+- If you can't provide a root account or a user account with sudo access, you can set the 'isSudo' registry key to the value '0' in the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance registry key on the appliance server and provide a non-root account with the required capabilities using the following commands:
**Command** | **Purpose** | |
For Linux servers, based on the features you want to perform, you can create a u
setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/dmidecode | To collect BIOS serial number chmod a+r /sys/class/dmi/id/product_uuid | To collect BIOS GUID
- To perform agentless dependency analysis on the server, ensure that you also set the required permissions on /bin/netstat and /bin/ls files by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
+- To perform agentless dependency analysis on the server, ensure that you also set the required permissions on /bin/netstat and /bin/ls files by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
## Azure Migrate appliance requirements
After the appliance is connected, it gathers configuration and performance data
Support | Details |
-**Supported servers** | supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/ Baremetal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. You can discover up to 300 SQL Server instances or 6,000 SQL databases, whichever is less.
+**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS servers of other public clouds such as AWS, GCP, etc. You can discover up to 300 SQL Server instances or 6,000 SQL databases, whichever is less.
**Windows servers** | Windows Server 2008 and later are supported. **Linux servers** | Currently not supported. **Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
ms. Previously updated : 03/17/2021 Last updated : 11/13/2022 # Support matrix for VMware discovery
Learn more about [assessments](concepts-assessment-calculation.md).
VMware | Details |
-**vCenter Server** | Servers that you want to discover and assess must be managed by vCenter Server version 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Discovering servers by providing ESXi host details in the appliance currently isn't supported. <br /><br /> IPv6 addresses are not supported for vCenter Server (for discovery and assessment of servers) and ESXi hosts (for replication of servers).
+**vCenter Server** | Servers that you want to discover and assess must be managed by vCenter Server version 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Discovering servers by providing ESXi host details in the appliance currently isn't supported. <br /><br /> IPv6 addresses aren't supported for vCenter Server (for discovery and assessment of servers) and ESXi hosts (for replication of servers).
**Permissions** | The Azure Migrate: Discovery and assessment tool requires a vCenter Server read-only account.<br /><br /> If you want to use the tool for software inventory, agentless dependency analysis, web apps and SQL discovery, the account must have privileges for guest operations on VMware VMs. ## Server requirements
After the appliance is connected, it gathers configuration and performance data
Support | Details |
-**Supported servers** | supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/ Baremetal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. You can discover up to 300 SQL Server instances or 6,000 SQL databases, whichever is less.
+**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS servers of other public clouds such as AWS, GCP, etc. You can discover up to 300 SQL Server instances or 6,000 SQL databases, whichever is less.
**Windows servers** | Windows Server 2008 and later are supported. **Linux servers** | Currently not supported. **Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager. **SQL Server access** | Azure Migrate requires a Windows user account that is a member of the sysadmin server role. **SQL Server versions** | SQL Server 2008 and later are supported. **SQL Server editions** | Enterprise, Standard, Developer, and Express editions are supported.
-**Supported SQL configuration** | Currently, only discovery for standalone SQL Server instances and corresponding databases is supported.<br /><br /> Identification of Failover Cluster and Always On availability groups is not supported.
-**Supported SQL services** | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS) is not supported.
+**Supported SQL configuration** | Currently, only discovery for standalone SQL Server instances and corresponding databases is supported.<br /><br /> Identification of Failover Cluster and Always On availability groups isn't supported.
+**Supported SQL services** | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS) isn't supported.
> [!NOTE] > By default, Azure Migrate uses the most secure way of connecting to SQL instances i.e. Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority.
Support | ASP.NET web apps | Java web apps
## Dependency analysis requirements (agentless)
-[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers which can be easily visualized with a map view in Azure Migrate project and can be used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
+[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers, which can be easily visualized with a map view in the Azure Migrate project and can be used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
Support | Details | **Supported servers** | You can enable agentless dependency analysis on up to 1000 servers (across multiple vCenter Servers), discovered per appliance. **Windows servers** | Windows Server 2019<br />Windows Server 2016<br /> Windows Server 2012 R2<br /> Windows Server 2012<br /> Windows Server 2008 R2 (64-bit)<br />Microsoft Windows Server 2008 (32-bit)
-**Linux servers** | Red Hat Enterprise Linux 7, 6, 5<br /> Ubuntu Linux 16.04, 14.04<br /> Debian 8, 7<br /> Oracle Linux 7, 6<br /> CentOS 7, 6, 5<br /> SUSE Linux Enterprise Server 11 and later
+**Linux servers** | Red Hat Enterprise Linux 5.1, 5.3, 5.11, 6.x, 7.x, 8.x <br /> Cent OS 5.1, 5.9, 5.11, 6.x, 7.x, 8.x <br /> Ubuntu 12.04, 14.04, 16.04, 18.04, 20.04 <br /> Oracle Linux 6.1, 6.7, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.3, 8.5 <br /> SUSE Linux 10, 11 SP4, 12 SP1, 12 SP2, 12 SP3, 12 SP4, 15 SP2, 15 SP3 <br /> Debian 7, 8, 9, 10, 11
**Server requirements** | VMware Tools (10.2.1 and later) must be installed and running on servers you want to analyze.<br /><br /> Servers must have PowerShell version 2.0 or later installed.<br /><br /> WMI should be enabled and available on Windows servers. **vCenter Server account** | The read-only account used by Azure Migrate for assessment must have privileges for guest operations on VMware VMs. **Windows server access** | A user account (local or domain) with administrator permissions on servers.
-**Linux server access** | A root user account, or an account that has these permissions on /bin/netstat and /bin/ls files: <br />CAP_DAC_READ_SEARCH<br /> CAP_SYS_PTRACE<br /><br /> Set these capabilities by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
+**Linux server access** | Sudo user account with permissions to execute ls and netstat commands. If you're providing a sudo user account, ensure that you have enabled **NOPASSWD** for the account to run the required commands without prompting for a password every time a sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
**Port access** | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running the servers that have dependencies you want to discover. The server running vCenter Server returns an ESXi host connection to download the file containing the dependency data. **Discovery method** | Dependency information between servers is gathered by using VMware Tools installed on the server running vCenter Server.<br /><br /> The appliance gathers the information from the server by using vSphere APIs.<br /><br /> No agent is installed on the server, and the appliance doesn't connect directly to servers.
Requirement | Details
**Log Analytics** | Azure Migrate uses the [Service Map](../azure-monitor/vm/service-map.md) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br /><br /> You associate a new or existing Log Analytics workspace with a project. The workspace for a project can't be modified after the workspace is added. <br /><br /> The workspace must be in the same subscription as the project.<br /><br /> The workspace must be located in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br /><br /> The workspace must be in a [region in which Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions).<br /><br /> In Log Analytics, the workspace that's associated with Azure Migrate is tagged with the project key and project name. **Required agents** | On each server that you want to analyze, install the following agents:<br />- [Microsoft Monitoring Agent (MMA)](../azure-monitor/agents/agent-windows.md)<br />- [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)<br /><br /> If on-premises servers aren't connected to the internet, download and install the Log Analytics gateway on them.<br /><br /> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and the [MMA](how-to-create-group-machine-dependencies.md#install-the-mma). **Log Analytics workspace** | The workspace must be in the same subscription as the project.<br /><br /> Azure Migrate supports workspaces that are located in the East US, Southeast Asia, and West Europe regions.<br /><br /> The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions).<br /><br /> The workspace for a project can't be modified after the workspace is added.
-**Cost** | The Service Map solution doesn't incur any charges for the first 180 days (from the day you associate the Log Analytics workspace with the project).<br /><br /> After 180 days, standard Log Analytics charges apply.<br /><br /> Using any solution other than Service Map in the associated Log Analytics workspace will incur [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br /><br /> When the project is deleted, the workspace is not automatically deleted. After deleting the project, Service Map usage isn't free, and each node will be charged per the paid tier of Log Analytics workspace.<br /><br />If you have projects that you created before Azure Migrate general availability (February 28, 2018), you might have incurred additional Service Map charges. To ensure that you are charged only after 180 days, we recommend that you create a new project. Workspaces that were created before GA are still chargeable.
+**Cost** | The Service Map solution doesn't incur any charges for the first 180 days (from the day you associate the Log Analytics workspace with the project).<br /><br /> After 180 days, standard Log Analytics charges apply.<br /><br /> Using any solution other than Service Map in the associated Log Analytics workspace will incur [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br /><br /> When the project is deleted, the workspace isn't automatically deleted. After deleting the project, Service Map usage isn't free, and each node will be charged per the paid tier of Log Analytics workspace.<br /><br />If you have projects that you created before Azure Migrate general availability (February 28, 2018), you might have incurred additional Service Map charges. To ensure that you're charged only after 180 days, we recommend that you create a new project. Workspaces that were created before GA are still chargeable.
**Management** | When you register agents to the workspace, use the ID and key provided by the project.<br /><br /> You can use the Log Analytics workspace outside Azure Migrate.<br /><br /> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br /><br /> Don't delete the workspace created by Azure Migrate, unless you delete the project. If you do, the dependency visualization functionality won't work as expected. **Internet connectivity** | If servers aren't connected to the internet, install the Log Analytics gateway on the servers. **Azure Government** | Agent-based dependency analysis isn't supported.
migrate Tutorial Discover Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-aws.md
ms. Previously updated : 04/27/2022 Last updated : 11/13/2022 #Customer intent: As a server admin I want to discover my AWS instances.
If you just created a free Azure account, you're the owner of your subscription.
Set up an account that the appliance can use to access AWS instances. - For **Windows servers**, set up a local user account on all the Windows servers that you want to include in the discovery. Add the user account to the following groups: - Remote Management Users - Performance Monitor Users - Performance Log Users.
+ - For **Linux servers**, you need a root account on the Linux servers that you want to discover. Refer to the instructions in the [support matrix](migrate-support-matrix-physical.md#permissions-for-linux-server) for an alternative.
- Azure Migrate uses password authentication when discovering AWS instances. AWS instances don't support password authentication by default. Before you can discover an instance, you need to enable password authentication. - For Windows servers, allow WinRM port 5985 (HTTP). This allows remote WMI calls. - For Linux servers:
Set up an account that the appliance can use to access AWS instances.
2. Open the sshd_config file: vi /etc/ssh/sshd_config 3. In the file, locate the **PasswordAuthentication** line, and change the value to **yes**. 4. Save the file and close it. Restart the ssh service.
- - If you are using a root user to discover your Linux servers, ensure root login is allowed on the servers.
+ - If you're using a root user to discover your Linux servers, ensure root sign-in is allowed on the servers.
1. Sign into each Linux machine 2. Open the sshd_config file: vi /etc/ssh/sshd_config 3. In the file, locate the **PermitRootLogin** line, and change the value to **yes**.
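If you prefer to script these sshd_config changes, the following is an illustrative sketch only (it assumes a systemd-based distribution; the SSH service may be named ssh or sshd depending on the distro):

```bash
# Illustrative sketch: enable password authentication and root sign-in for discovery.
# Review your organization's security policy before loosening these settings.
sudo sed -i -E 's/^#?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo sed -i -E 's/^#?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd   # use "ssh" instead of "sshd" on Debian/Ubuntu
```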
To set up the appliance you:
1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**. 2. In **Discover servers** > **Are your servers virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**. 3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you will set up for discovery of physical or virtual servers. The name should be alphanumeric with 14 characters or fewer.
-1. Click on **Generate key** to start the creation of the required Azure resources. Do not close the Discover servers page during the creation of resources.
+1. Select **Generate key** to start the creation of the required Azure resources. Do not close the Discover servers page during the creation of resources.
1. After the successful creation of the Azure resources, a **project key** is generated. 1. Copy the key as you will need it to complete the registration of the appliance during its configuration. ### 2. Download the installer script
-In **2: Download Azure Migrate appliance**, click on **Download**.
+In **2: Download Azure Migrate appliance**, select **Download**.
### Verify security
Set up the appliance for the first time.
1. Open a browser on any machine that can connect to the appliance, and open the URL of the appliance web app: **https://*appliance name or IP address*: 44368**.
- Alternately, you can open the app from the desktop by clicking the app shortcut.
+ Alternately, you can open the app from the desktop by selecting the app shortcut.
2. Accept the **license terms**, and read the third-party information. #### Set up prerequisites and register the appliance
In the configuration manager, select **Set up prerequisites**, and then complete
2. The appliance will verify the key and start the auto-update service, which updates all the services on the appliance to their latest versions. When the auto-update has run, you can select **View appliance services** to see the status and versions of the services running on the appliance server. 3. To register the appliance, you need to select **Login**. In **Continue with Azure Login**, select **Copy code & Login** to copy the device code (you must have a device code to authenticate with Azure) and open an Azure Login prompt in a new browser tab. Make sure you've disabled the pop-up blocker in the browser to see the prompt.
- :::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Screenshot that shows where to copy the device code and log in.":::
+ :::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Screenshot that shows where to copy the device code and sign in.":::
4. In a new tab in your browser, paste the device code and sign in by using your Azure username and password. Signing in with a PIN isn't supported. > [!NOTE]
- > If you close the login tab accidentally without logging in, refresh the browser tab of the appliance configuration manager to display the device code and Copy code & Login button.
- 5. After you successfully log in, return to the browser tab that displays the appliance configuration manager. If the Azure user account that you used to log in has the required permissions for the Azure resources that were created during key generation, appliance registration starts.
+ > If you close the sign-in tab accidentally without signing in, refresh the browser tab of the appliance configuration manager to display the device code and **Copy code & Login** button.
+ 5. After you successfully sign in, return to the browser tab that displays the appliance configuration manager. If the Azure user account that you used to sign in has the required permissions for the Azure resources that were created during key generation, appliance registration starts.
After the appliance is successfully registered, to see the registration details, select **View details**.
You can *rerun prerequisites* at any time during appliance configuration to chec
Now, connect from the appliance to the physical servers to be discovered, and start the discovery.
-1. In **Step 1: Provide credentials for discovery of Windows and Linux physical or virtual serversΓÇï**, click on **Add credentials**.
-1. For Windows server, select the source type as **Windows Server**, specify a friendly name for credentials, add the username and password. Click on **Save**.
-1. If you are using password-based authentication for Linux server, select the source type as **Linux Server (Password-based)**, specify a friendly name for credentials, add the username and password. Click on **Save**.
-1. If you are using SSH key-based authentication for Linux server, you can select source type as **Linux Server (SSH key-based)**, specify a friendly name for credentials, add the username, browse, and select the SSH private key file. Click on **Save**.
+1. In **Step 1: Provide credentials for discovery of Windows and Linux physical or virtual serversΓÇï**, select **Add credentials**.
+1. For Windows server, select the source type as **Windows Server**, specify a friendly name for credentials, add the username and password. Select **Save**.
+1. If you are using password-based authentication for Linux server, select the source type as **Linux Server (Password-based)**, specify a friendly name for credentials, add the username and password. Select **Save**.
+1. If you are using SSH key-based authentication for Linux server, you can select source type as **Linux Server (SSH key-based)**, specify a friendly name for credentials, add the username, browse, and select the SSH private key file. Select **Save**.
* Azure Migrate supports the SSH private key generated by ssh-keygen command using RSA, DSA, ECDSA, and ed25519 algorithms. * Currently Azure Migrate does not support passphrase-based SSH key. Use an SSH key without a passphrase.
Now, connect from the appliance to the physical servers to be discovered, and st
![Image of SSH private key supported format.](./media/tutorial-discover-physical/key-format.png)
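For example, a key pair in a supported format and without a passphrase can be generated as follows; the output file name is just a placeholder, and you select the resulting private key file when adding the credential on the appliance.

```bash
# Illustrative sketch: generate an ed25519 key pair in OpenSSH format with no
# passphrase (-N ""), which matches the supported key requirements above.
ssh-keygen -t ed25519 -N "" -f ~/.ssh/azmigrate_discovery_key
```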
-1. If you want to add multiple credentials at once, click on **Add more** to save and add more credentials. Multiple credentials are supported for physical servers discovery.
-1. In **Step 2:Provide physical or virtual server detailsΓÇï**, click on **Add discovery source** to specify the server **IP address/FQDN** and the friendly name for credentials to connect to the server.
+1. If you want to add multiple credentials at once, select **Add more** to save and add more credentials. Multiple credentials are supported for physical servers discovery.
+1. In **Step 2:Provide physical or virtual server detailsΓÇï**, select **Add discovery source** to specify the server **IP address/FQDN** and the friendly name for credentials to connect to the server.
1. You can either **Add single item** at a time or **Add multiple items** in one go. There is also an option to provide server details through **Import CSV**.
- - If you choose **Add single item**, you can choose the OS type, specify friendly name for credentials, add server **IP address/FQDN** and click on **Save**.
- - If you choose **Add multiple items**, you can add multiple records at once by specifying server **IP address/FQDN** with the friendly name for credentials in the text box. Verify** the added records and click on **Save**.
- - If you choose **Import CSV** _(selected by default)_, you can download a CSV template file, populate the file with the server **IP address/FQDN** and friendly name for credentials. You then import the file into the appliance, **verify** the records in the file and click on **Save**.
+ - If you choose **Add single item**, you can choose the OS type, specify friendly name for credentials, add server **IP address/FQDN**, and select **Save**.
+ - If you choose **Add multiple items**, you can add multiple records at once by specifying server **IP address/FQDN** with the friendly name for credentials in the text box. **Verify** the added records and select **Save**.
+ - If you choose **Import CSV** _(selected by default)_, you can download a CSV template file, populate the file with the server **IP address/FQDN** and friendly name for credentials. You then import the file into the appliance, **verify** the records in the file and select **Save**.
-1. On clicking Save, appliance will try validating the connection to the servers added and show the **Validation status** in the table against each server.
- - If validation fails for a server, review the error by clicking on **Validation failed** in the Status column of the table. Fix the issue, and validate again.
- - To remove a server, click on **Delete**.
+1. On selecting **Save**, the appliance will try to validate the connection to the added servers and show the **Validation status** in the table against each server.
+ - If validation fails for a server, review the error by selecting **Validation failed** in the Status column of the table. Fix the issue, and validate again.
+ - To remove a server, select **Delete**.
1. You can **revalidate** the connectivity to servers anytime before starting the discovery. 1. Before initiating discovery, you can choose to disable the slider to not perform software inventory and agentless dependency analysis on the added servers. You can change this option at any time.
Now, connect from the appliance to the physical servers to be discovered, and st
### Start discovery
-Click on **Start discovery**, to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
+Select **Start discovery** to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
## How discovery works
Click on **Start discovery**, to kick off discovery of the successfully validate
After discovery finishes, you can verify that the servers appear in the portal. 1. Open the Azure Migrate dashboard.
-2. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** page, click the icon that displays the count for **Discovered servers**.
+2. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** page, select the icon that displays the count for **Discovered servers**.
## Next steps
migrate Tutorial Discover Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-gcp.md
ms. Previously updated : 04/27/2022 Last updated : 11/13/2022 #Customer intent: As a server admin I want to discover my GCP instances.
Set up an account that the appliance can use to access servers on GCP.
* Performance Monitor Users * Performance Log Users. * For **Linux servers**:
- * You need a root account on the Linux servers that you want to discover. If you are not able to provide a root account, refer to the instructions in the [support matrix](migrate-support-matrix-physical.md#physical-server-requirements) for an alternative.
+ * You need a root account on the Linux servers that you want to discover. If you aren't able to provide a root account, refer to the instructions in the [support matrix](migrate-support-matrix-physical.md#permissions-for-linux-server) for an alternative.
* Azure Migrate uses password authentication when discovering GCP instances. GCP instances don't support password authentication by default. Before you can discover an instance, you need to enable password authentication. 1. Sign into each Linux machine. 2. Open the sshd_config file: vi /etc/ssh/sshd_config 3. In the file, locate the **PasswordAuthentication** line, and change the value to **yes**. 4. Save the file and close it. Restart the ssh service.
- * If you are using a root user to discover your Linux servers, ensure root login is allowed on the servers.
+ * If you're using a root user to discover your Linux servers, ensure root login is allowed on the servers.
1. Sign into each Linux machine 2. Open the sshd_config file: vi /etc/ssh/sshd_config 3. In the file, locate the **PermitRootLogin** line, and change the value to **yes**.
To set up the appliance, you:
1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**. 2. In **Discover servers** > **Are your servers virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**.
-3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you will set up for discovery of your GCP virtual servers. The name should be alphanumeric with 14 characters or fewer.
-4. Click **Generate key** to start the creation of the required Azure resources. Do not close the Discover servers page during the creation of resources.
+3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you'll set up for discovery of your GCP virtual servers. The name should be alphanumeric with 14 characters or fewer.
+4. Select **Generate key** to start the creation of the required Azure resources. Don't close the Discover servers page during the creation of resources.
5. After the successful creation of the Azure resources, a **project key** is generated.
-6. Copy the key as you will need it to complete the registration of the appliance during its configuration.
+6. Copy the key as you'll need it to complete the registration of the appliance during its configuration.
### 2. Download the installer script
In the configuration manager, select **Set up prerequisites**, and then complete
2. The appliance will verify the key and start the auto-update service, which updates all the services on the appliance to their latest versions. When the auto-update has run, you can select **View appliance services** to see the status and versions of the services running on the appliance server. 3. To register the appliance, you need to select **Login**. In **Continue with Azure Login**, select **Copy code & Login** to copy the device code (you must have a device code to authenticate with Azure) and open an Azure Login prompt in a new browser tab. Make sure you've disabled the pop-up blocker in the browser to see the prompt.
- :::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Screenshot that shows where to copy the device code and log in.":::
+ :::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Screenshot that shows where to copy the device code and sign in.":::
4. In a new tab in your browser, paste the device code and sign in by using your Azure username and password. Signing in with a PIN isn't supported. > [!NOTE]
- > If you close the login tab accidentally without logging in, refresh the browser tab of the appliance configuration manager to display the device code and Copy code & Login button.
- 5. After you successfully log in, return to the browser tab that displays the appliance configuration manager. If the Azure user account that you used to log in has the required permissions for the Azure resources that were created during key generation, appliance registration starts.
+ > If you close the sign-in tab accidentally without signing in, refresh the browser tab of the appliance configuration manager to display the device code and **Copy code & Login** button.
+ 5. After you successfully sign in, return to the browser tab that displays the appliance configuration manager. If the Azure user account that you used to sign in has the required permissions for the Azure resources that were created during key generation, appliance registration starts.
After the appliance is successfully registered, to see the registration details, select **View details**.
You can *rerun prerequisites* at any time during appliance configuration to chec
Now, connect from the appliance to the GCP servers to be discovered, and start the discovery.
-1. In **Step 1: Provide credentials for discovery of Windows and Linux physical or virtual serversΓÇï**, click on **Add credentials**.
-1. For Windows server, select the source type as **Windows Server**, specify a friendly name for credentials, add the username and password. Click on **Save**.
-1. If you are using password-based authentication for Linux server, select the source type as **Linux Server (Password-based)**, specify a friendly name for credentials, add the username and password. Click on **Save**.
-1. If you are using SSH key-based authentication for Linux server, you can select source type as **Linux Server (SSH key-based)**, specify a friendly name for credentials, add the username, browse and select the SSH private key file. Click on **Save**.
+1. In **Step 1: Provide credentials for discovery of Windows and Linux physical or virtual serversΓÇï**, select **Add credentials**.
+1. For Windows server, select the source type as **Windows Server**, specify a friendly name for credentials, add the username and password. Select **Save**.
+1. If you're using password-based authentication for Linux server, select the source type as **Linux Server (Password-based)**, specify a friendly name for credentials, add the username and password. Select **Save**.
+1. If you're using SSH key-based authentication for Linux server, you can select source type as **Linux Server (SSH key-based)**, specify a friendly name for credentials, add the username, browse and select the SSH private key file. Select **Save**.
- Azure Migrate supports the SSH private key generated by ssh-keygen command using RSA, DSA, ECDSA, and ed25519 algorithms.
- - Currently Azure Migrate does not support passphrase-based SSH key. Use an SSH key without a passphrase.
- - Currently Azure Migrate does not support SSH private key file generated by PuTTY.
+ - Currently Azure Migrate doesn't support passphrase-based SSH key. Use an SSH key without a passphrase.
+ - Currently Azure Migrate doesn't support SSH private key file generated by PuTTY.
- Azure Migrate supports OpenSSH format of the SSH private key file as shown below: ![Image of SSH private key supported format.](./media/tutorial-discover-physical/key-format.png)
-2. If you want to add multiple credentials at once, click on **Add more** to save and add more credentials.
-3. In **Step 2:Provide physical or virtual server detailsΓÇï**, click on **Add discovery source** to specify the server **IP address/FQDN** and the friendly name for credentials to connect to the server.
-4. You can either **Add single item** at a time or **Add multiple items** in one go. There is also an option to provide server details through **Import CSV**.
+2. If you want to add multiple credentials at once, select **Add more** to save and add more credentials.
+3. In **Step 2:Provide physical or virtual server detailsΓÇï**, select **Add discovery source** to specify the server **IP address/FQDN** and the friendly name for credentials to connect to the server.
+4. You can either **Add single item** at a time or **Add multiple items** in one go. There's also an option to provide server details through **Import CSV**.
- - If you choose **Add single item**, you can choose the OS type, specify friendly name for credentials, add server **IP address/FQDN** and click on **Save**.
- - If you choose **Add multiple items**, you can add multiple records at once by specifying server **IP address/FQDN** with the friendly name for credentials in the text box. Verify** the added records and click on **Save**.
- - If you choose **Import CSV** _(selected by default)_, you can download a CSV template file, populate the file with the server **IP address/FQDN** and friendly name for credentials. You then import the file into the appliance, **verify** the records in the file and click on **Save**.
+ - If you choose **Add single item**, you can choose the OS type, specify friendly name for credentials, add server **IP address/FQDN** and select **Save**.
+ - If you choose **Add multiple items**, you can add multiple records at once by specifying server **IP address/FQDN** with the friendly name for credentials in the text box. **Verify** the added records and select **Save**.
+ - If you choose **Import CSV** _(selected by default)_, you can download a CSV template file, populate the file with the server **IP address/FQDN** and friendly name for credentials. You then import the file into the appliance, **verify** the records in the file and select **Save**.
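If you use **Import CSV**, the populated file simply pairs each server with the friendly name of the credentials that can connect to it. The column headers below are hypothetical; always start from the template downloaded from the appliance.

```bash
# Hypothetical example of a populated discovery-source CSV; the actual column headers
# come from the template downloaded from the appliance configuration manager.
cat > discovery-sources.csv <<'EOF'
IP address/FQDN,Friendly name of credentials
10.10.1.15,linux-prod-creds
appsrv01.contoso.com,win-prod-creds
EOF
```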
5. On clicking **Save**, the appliance will try validating the connection to the servers added and show the **Validation status** in the table against each server. - If validation fails for a server, review the error by clicking on **Validation failed** in the Status column of the table. Fix the issue, and validate again.
- - To remove a server, click on **Delete**.
+ - To remove a server, select **Delete**.
6. You can **revalidate** the connectivity to servers anytime before starting the discovery. 1. Before initiating discovery, you can choose to disable the slider to not perform software inventory and agentless dependency analysis on the added servers. You can change this option at any time.
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
ms. Previously updated : 04/27/2022 Last updated : 11/13/2022 #Customer intent: As a Hyper-V admin, I want to discover my on-premises servers on Hyper-V.
SHA256 | 0ad60e7299925eff4d1ae9f1c7db485dc9316ef45b0964148a3c07c80761ade2
The user account on your servers must have the required permissions to initiate discovery of installed applications, agentless dependency analysis, and SQL Server instances and databases. You can provide the user account information in the appliance configuration manager. The appliance doesn't install agents on the servers.
-* For Windows servers, create an account (local or domain) that has administrator permissions on the servers. To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role. Learn how to [assign the required role to the user account](/sql/relational-databases/security/authentication-access/server-level-roles).
-* For Linux servers, provide the root user account details or create an account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files.
+* For **Windows servers**, create an account (local or domain) that has administrator permissions on the servers. To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role. Learn how to [assign the required role to the user account](/sql/relational-databases/security/authentication-access/server-level-roles). A sample sqlcmd command for this role assignment appears after this list.
+* For **Linux servers**, provide a sudo user account with permissions to execute the ls and netstat commands, or create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on the /bin/netstat and /bin/ls files. If you're providing a sudo user account, ensure that you have enabled **NOPASSWD** for the account so that it can run the required commands without prompting for a password every time the sudo command is invoked.
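For the sysadmin requirement called out for Windows servers above, here's a hedged sqlcmd sketch of creating the login and adding it to the role. The account name is a placeholder; run it as an existing sysadmin.

```bash
# Create the discovery account's login (if it doesn't exist yet) and add it to the sysadmin role.
# CONTOSO\svc-azmigrate is a placeholder; substitute the account you created for discovery.
sqlcmd -S <sql-server-name> -E -Q "CREATE LOGIN [CONTOSO\svc-azmigrate] FROM WINDOWS; ALTER SERVER ROLE [sysadmin] ADD MEMBER [CONTOSO\svc-azmigrate];"
```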
> [!NOTE] > You can add multiple server credentials in the Azure Migrate appliance configuration manager to initiate discovery of installed applications, agentless dependency analysis, and SQL Server instances and databases. You can add multiple domain, Windows (non-domain), Linux (non-domain), or SQL Server authentication credentials. Learn how to [add server credentials](add-server-credentials.md).+ ## Set up a project Set up a new project.
This tutorial sets up the appliance on a server running in Hyper-V environment,
1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**. 2. In **Discover Servers** > **Are your servers virtualized?**, select **Yes, with Hyper-V**.
-3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you will set up for discovery of servers. The name should be alphanumeric with 14 characters or fewer.
-1. Click on **Generate key** to start the creation of the required Azure resources. Do not close the Discover server page during the creation of resources.
+3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you'll set up for discovery of servers. The name should be alphanumeric with 14 characters or fewer.
+1. Select **Generate key** to start the creation of the required Azure resources. Don't close the Discover server page during the creation of resources.
1. After the successful creation of the Azure resources, a **project key** is generated.
-1. Copy the key as you will need it to complete the registration of the appliance during its configuration.
+1. Copy the key as you'll need it to complete the registration of the appliance during its configuration.
### 2. Download the VHD
-In **2: Download Azure Migrate appliance**, select the .VHD file and click on **Download**.
+In **2: Download Azure Migrate appliance**, select the .VHD file and select **Download**.
### Verify security
Connect from the appliance to Hyper-V hosts or clusters, and start server discov
### Provide Hyper-V host/cluster details
-1. In **Step 1: Provide Hyper-V host credentials**, click on **Add credentials** to specify a friendly name for credentials, add **Username** and **Password** for a Hyper-V host/cluster that the appliance will use to discover servers. Click on **Save**.
-1. If you want to add multiple credentials at once, click on **Add more** to save and add more credentials. Multiple credentials are supported for discovery of servers in Hyper-V environment.
-1. In **Step 2: Provide Hyper-V host/cluster details**, click on **Add discovery source** to specify the Hyper-V host/cluster **IP address/FQDN** and the friendly name for credentials to connect to the host/cluster.
-1. You can either **Add single item** at a time or **Add multiple items** in one go. There is also an option to provide Hyper-V host/cluster details through **Import CSV**.
+1. In **Step 1: Provide Hyper-V host credentials**, select **Add credentials** to specify a friendly name for credentials, add **Username** and **Password** for a Hyper-V host/cluster that the appliance will use to discover servers. Select **Save**.
+1. If you want to add multiple credentials at once, select **Add more** to save and add more credentials. Multiple credentials are supported for discovery of servers in Hyper-V environment.
+1. In **Step 2: Provide Hyper-V host/cluster details**, select **Add discovery source** to specify the Hyper-V host/cluster **IP address/FQDN** and the friendly name for credentials to connect to the host/cluster.
+1. You can either **Add single item** at a time or **Add multiple items** in one go. There's also an option to provide Hyper-V host/cluster details through **Import CSV**.
- - If you choose **Add single item**, you need to specify friendly name for credentials and Hyper-V host/cluster **IP address/FQDN** and click on **Save**.
- - If you choose **Add multiple items** _(selected by default)_, you can add multiple records at once by specifying Hyper-V host/cluster **IP address/FQDN** with the friendly name for credentials in the text box. **Verify** the added records and click on **Save**.
- - If you choose **Import CSV**, you can download a CSV template file, populate the file with the Hyper-V host/cluster **IP address/FQDN** and friendly name for credentials. You then import the file into the appliance, **verify** the records in the file and click on **Save**.
+ - If you choose **Add single item**, you need to specify friendly name for credentials and Hyper-V host/cluster **IP address/FQDN**, and select **Save**.
+ - If you choose **Add multiple items** _(selected by default)_, you can add multiple records at once by specifying Hyper-V host/cluster **IP address/FQDN** with the friendly name for credentials in the text box. **Verify** the added records and select **Save**.
+ - If you choose **Import CSV**, you can download a CSV template file, populate the file with the Hyper-V host/cluster **IP address/FQDN** and friendly name for credentials. You then import the file into the appliance, **verify** the records in the file and select **Save**.
1. On clicking **Save**, the appliance will try validating the connection to the Hyper-V hosts/clusters added and show the **Validation status** in the table against each host/cluster. - For successfully validated hosts/clusters, you can view more details by clicking on their IP address/FQDN. - If validation fails for a host, review the error by clicking on **Validation failed** in the Status column of the table. Fix the issue, and validate again.
- - To remove hosts or clusters, click on **Delete**.
+ - To remove hosts or clusters, select **Delete**.
- You can't remove a specific host from a cluster. You can only remove the entire cluster. - You can add a cluster, even if there are issues with specific hosts in the cluster. 1. You can **revalidate** the connectivity to hosts/clusters anytime before starting the discovery.
If validation fails, you can select a **Failed** status to see the validation er
### Start discovery
-Click on **Start discovery**, to kick off server discovery from the successfully validated host(s)/cluster(s). After the discovery has been successfully initiated, you can check the discovery status against each host/cluster in the table.
+Select **Start discovery** to kick off server discovery from the successfully validated host(s)/cluster(s). After the discovery has been successfully initiated, you can check the discovery status against each host/cluster in the table.
## How discovery works
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
ms. Previously updated : 04/27/2022 Last updated : 11/13/2022 #Customer intent: As a server admin I want to discover my on-premises server inventory.
If you just created a free Azure account, you're the owner of your subscription.
1. In case the 'App registrations' settings is set to 'No', request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of Azure Active Directory App. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
-## Prepare physical servers
+## Prepare Windows server
-Set up an account that the appliance can use to access the physical servers.
-
-**Windows servers**
-
-For Windows servers, use a domain account for domain-joined servers, and a local account for servers that are not domain-joined. The user account can be created in one of the two ways:
+For Windows servers, use a domain account for domain-joined servers, and a local account for servers that aren't domain-joined. The user account can be created in one of two ways:
### Option 1
For Windows servers, use a domain account for domain-joined servers, and a local
> [!Note] > For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers.
+> [!Note]
+> To discover SQL Server databases on Windows Servers, both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager. Azure Migrate requires a Windows user account that is a member of the sysadmin server role.
- > [!Note]
- > To discover SQL Server databases on Windows Servers, both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager. Azure Migrate requires a Windows user account that is a member of the sysadmin server role.
--
-**Linux servers**
+## Prepare Linux server
-For Linux servers, you can create a user account in one of three ways:
+For Linux servers, you can create a user account in one of two ways:
### Option 1-- You need a root account on the servers that you want to discover. This account can be used to pull configuration and performance metadata and perform software inventory (discovery of installed applications) and enable agentless dependency analysis using SSH connectivity.
+- You need a sudo user account on the servers that you want to discover. The appliance uses this account over SSH connectivity to pull configuration and performance metadata, perform software inventory (discovery of installed applications), and enable agentless dependency analysis.
+- You need to enable sudo access for the commands listed [here](discovered-metadata.md#linux-server-metadata). In addition to these commands, the user account also needs permissions to execute the ls and netstat commands to perform agentless dependency analysis.
+- Make sure that you have enabled **NOPASSWD** for the account so that it can run the required commands without prompting for a password every time the sudo command is invoked. See the example sudoers entry after this list.
+- The Linux OS distributions that are supported for discovery by Azure Migrate using an account with sudo access are listed [here](migrate-support-matrix-physical.md#option-1-1).
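As referenced above, a minimal sudoers sketch for the discovery account. The account name (azmigrate) is a placeholder, and the command list shown covers only the dependency-analysis commands; extend it with the commands in the linked metadata list.

```bash
# Grant passwordless sudo for the dependency-analysis commands; extend the list as needed.
echo 'azmigrate ALL=(ALL) NOPASSWD: /bin/ls, /bin/netstat' | sudo tee /etc/sudoers.d/azmigrate
sudo chmod 440 /etc/sudoers.d/azmigrate

# Validate the new sudoers file before relying on it.
sudo visudo -c -f /etc/sudoers.d/azmigrate
```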
> [!Note] > If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Linux servers, it's recommended to use Option 1. ### Option 2-- To discover the configuration and performance metadata from Linux servers, you can provide a user account with sudo permissions.-- The support to add a user account with sudo access is provided by default with the new appliance installer script downloaded from portal after July 20,2021.-- For older appliances, you can enable the capability by following these steps:
- 1. On the server running the appliance, open the Registry Editor.
- 1. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance.
- 1. Create a registry key ΓÇÿisSudoΓÇÖ with DWORD value of 1.
+- If you can't provide a user account with sudo access, you can set the 'isSudo' registry key to value '0' under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance on the appliance server, and provide a non-root account with the required capabilities by using the following commands:
- :::image type="content" source="./media/tutorial-discover-physical/issudo-reg-key.png" alt-text="Screenshot that shows how to enable sudo support.":::
+ **Command** | **Purpose**
+ | |
+ setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/fdisk <br></br> setcap CAP_DAC_READ_SEARCH+eip /sbin/fdisk _(if /usr/sbin/fdisk is not present)_ | To collect disk configuration data
+ setcap "cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_setuid,<br> cap_setpcap,cap_net_bind_service,cap_net_admin,cap_sys_chroot,cap_sys_admin,<br> cap_sys_resource,cap_audit_control,cap_setfcap=+eip" /sbin/lvm | To collect disk performance data
+ setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/dmidecode | To collect BIOS serial number
+ chmod a+r /sys/class/dmi/id/product_uuid | To collect BIOS GUID
-- You need to enable sudo access for the commands listed [here](discovered-metadata.md#linux-server-metadata). Make sure that you have enabled 'NOPASSWD' for the account to run the required commands without prompting for a password every time sudo command is invoked.-- The following Linux OS distributions are supported for discovery by Azure Migrate using an account with sudo access:-
- Operating system | Versions
- |
- Red Hat Enterprise Linux | 6,7,8
- Cent OS | 6.6, 8.2
- Ubuntu | 14.04,16.04,18.04
- SUSE Linux | 11.4, 12.4
- Debian | 7, 10
- Amazon Linux | 2.0.2021
- CoreOS Container | 2345.3.0
-
-### Option 3
-- If you cannot provide root account or user account with sudo access, then you can set 'isSudo' registry key to value '0' in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance registry and provide a non-root account with the required capabilities using the following commands:-
-**Command** | **Purpose**
- | |
-setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/fdisk <br></br> setcap CAP_DAC_READ_SEARCH+eip /sbin/fdisk _(if /usr/sbin/fdisk is not present)_ | To collect disk configuration data
-setcap "cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_setuid,<br> cap_setpcap,cap_net_bind_service,cap_net_admin,cap_sys_chroot,cap_sys_admin,<br> cap_sys_resource,cap_audit_control,cap_setfcap=+eip" /sbin/lvm | To collect disk performance data
-setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/dmidecode | To collect BIOS serial number
-chmod a+r /sys/class/dmi/id/product_uuid | To collect BIOS GUID
+- To perform agentless dependency analysis on the server, ensure that you also set the required permissions on /bin/netstat and /bin/ls files by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
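To confirm that the capabilities were applied, you can check them with getcap (part of the libcap tools, which may need to be installed separately):

```bash
# List the capabilities set on the binaries used for agentless dependency analysis.
getcap /bin/ls /bin/netstat
```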
## Set up a project
To set up the appliance, you:
1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**. 2. In **Discover servers** > **Are your servers virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**.
-3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you will set up for discovery of physical or virtual servers. The name should be alphanumeric with 14 characters or fewer.
-1. Click on **Generate key** to start the creation of the required Azure resources. Do not close the Discover servers page during the creation of resources.
+3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you'll set up for discovery of physical or virtual servers. The name should be alphanumeric with 14 characters or fewer.
+1. Select **Generate key** to start the creation of the required Azure resources. Don't close the Discover servers page during the creation of resources.
1. After the successful creation of the Azure resources, a **project key** is generated.
-1. Copy the key as you will need it to complete the registration of the appliance during its configuration.
+1. Copy the key as you'll need it to complete the registration of the appliance during its configuration.
[ ![Selections for Generate Key.](./media/tutorial-assess-physical/generate-key-physical-inline-1.png)](./media/tutorial-assess-physical/generate-key-physical-expanded-1.png#lightbox)
You can *rerun prerequisites* at any time during appliance configuration to chec
Now, connect from the appliance to the physical servers to be discovered, and start the discovery.
-1. In **Step 1: Provide credentials for discovery of Windows and Linux physical or virtual serversΓÇï**, click on **Add credentials**.
-1. For Windows server, select the source type as **Windows Server**, specify a friendly name for credentials, add the username and password. Click on **Save**.
-1. If you are using password-based authentication for Linux server, select the source type as **Linux Server (Password-based)**, specify a friendly name for credentials, add the username and password. Click on **Save**.
-1. If you are using SSH key-based authentication for Linux server, you can select source type as **Linux Server (SSH key-based)**, specify a friendly name for credentials, add the username, browse and select the SSH private key file. Click on **Save**.
+1. In **Step 1: Provide credentials for discovery of Windows and Linux physical or virtual serversΓÇï**, select **Add credentials**.
+1. For Windows server, select the source type as **Windows Server**, specify a friendly name for credentials, add the username and password. Select **Save**.
+1. If you're using password-based authentication for Linux server, select the source type as **Linux Server (Password-based)**, specify a friendly name for credentials, add the username and password. Select **Save**.
+1. If you're using SSH key-based authentication for Linux server, you can select source type as **Linux Server (SSH key-based)**, specify a friendly name for credentials, add the username, browse and select the SSH private key file. Select **Save**.
- Azure Migrate supports the SSH private key generated by ssh-keygen command using RSA, DSA, ECDSA, and ed25519 algorithms.
- - Currently Azure Migrate does not support passphrase-based SSH key. Use an SSH key without a passphrase.
- - Currently Azure Migrate does not support SSH private key file generated by PuTTY.
+ - Currently Azure Migrate doesn't support passphrase-based SSH key. Use an SSH key without a passphrase.
+ - Currently Azure Migrate doesn't support SSH private key file generated by PuTTY.
- The SSH key file supports CRLF to mark a line break in the text file that you upload. SSH keys created on Linux systems most commonly have LF as their newline character so you can convert them to CRLF by opening the file in vim, typing `:set textmode` and saving the file. - If your Linux servers support the older version of RSA key, you can generate the key using the `$ ssh-keygen -m PEM -t rsa -b 4096` command. - Azure Migrate supports OpenSSH format of the SSH private key file as shown below: ![Screenshot of SSH private key supported format.](./media/tutorial-discover-physical/key-format.png)
-1. If you want to add multiple credentials at once, click on **Add more** to save and add more credentials. Multiple credentials are supported for physical servers discovery.
-1. In **Step 2:Provide physical or virtual server detailsΓÇï**, click on **Add discovery source** to specify the server **IP address/FQDN** and the friendly name for credentials to connect to the server.
-1. You can either **Add single item** at a time or **Add multiple items** in one go. There is also an option to provide server details through **Import CSV**.
+1. If you want to add multiple credentials at once, select **Add more** to save and add more credentials. Multiple credentials are supported for physical servers discovery.
+1. In **Step 2:Provide physical or virtual server detailsΓÇï**, select **Add discovery source** to specify the server **IP address/FQDN** and the friendly name for credentials to connect to the server.
+1. You can either **Add single item** at a time or **Add multiple items** in one go. There's also an option to provide server details through **Import CSV**.
- - If you choose **Add single item**, you can choose the OS type, specify friendly name for credentials, add server **IP address/FQDN** and click on **Save**.
- - If you choose **Add multiple items**, you can add multiple records at once by specifying server **IP address/FQDN** with the friendly name for credentials in the text box. **Verify** the added records and click on **Save**.
- - If you choose **Import CSV** _(selected by default)_, you can download a CSV template file, populate the file with the server **IP address/FQDN** and friendly name for credentials. You then import the file into the appliance, **verify** the records in the file and click on **Save**.
+ - If you choose **Add single item**, you can choose the OS type, specify friendly name for credentials, add server **IP address/FQDN** and select **Save**.
+ - If you choose **Add multiple items**, you can add multiple records at once by specifying server **IP address/FQDN** with the friendly name for credentials in the text box. **Verify** the added records and select **Save**.
+ - If you choose **Import CSV** _(selected by default)_, you can download a CSV template file, populate the file with the server **IP address/FQDN** and friendly name for credentials. You then import the file into the appliance, **verify** the records in the file and select **Save**.
1. On clicking Save, the appliance will try validating the connection to the servers added and show the **Validation status** in the table against each server. - If validation fails for a server, review the error by clicking on **Validation failed** in the Status column of the table. Fix the issue, and validate again.
- - To remove a server, click on **Delete**.
+ - To remove a server, select **Delete**.
1. You can **revalidate** the connectivity to servers anytime before starting the discovery. 1. Before initiating discovery, you can choose to disable the slider to not perform software inventory and agentless dependency analysis on the added servers. You can change this option at any time.
Now, connect from the appliance to the physical servers to be discovered, and st
### Start discovery
-Click on **Start discovery**, to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
+Select **Start discovery** to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
## How discovery works
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
ms. Previously updated : 04/27/2022 Last updated : 11/13/2022 #Customer intent: As an VMware admin, I want to discover my on-premises servers running in a VMware environment.
To give the account the required permissions to register Azure AD apps:
:::image type="content" source="./media/tutorial-discover-vmware/register-apps.png" alt-text="Screenshot that shows verifying user setting to register apps.":::
-1. If **App registrations** is set to **No**, request the tenant or global admin to assign the required permissions. Alternately, the tenant or global admin can assign the Application Developer role to an account to allow Azure AD app registration by users. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+1. If **App registrations** is set to **No**, request the tenant or global admin to assign the required permissions. Alternately, the tenant or global admin can assign the Application Developer role to an account to allow Azure AD app registration by users. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## Prepare VMware
In VMware vSphere Web Client, set up a read-only account to use for vCenter Serv
Your user account on your servers must have the required permissions to initiate discovery of installed applications, agentless dependency analysis, and discovery of web apps, and SQL Server instances and databases. You can provide the user account information in the appliance configuration manager. The appliance doesn't install agents on the servers.
-* For Windows servers and web apps discovery, create an account (local or domain) that has administrator permissions on the servers. To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role. Learn how to [assign the required role to the user account](/sql/relational-databases/security/authentication-access/server-level-roles).
-* For Linux servers, provide the root user account details or create an account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files.
+* For **Windows servers** and web apps discovery, create an account (local or domain) that has administrator permissions on the servers. To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role. Learn how to [assign the required role to the user account](/sql/relational-databases/security/authentication-access/server-level-roles).
+* For **Linux servers**, provide a sudo user account with permissions to execute the ls and netstat commands, or create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on the /bin/netstat and /bin/ls files. If you're providing a sudo user account, ensure that you have enabled **NOPASSWD** for the account so that it can run the required commands without prompting for a password every time the sudo command is invoked.
> [!NOTE] > You can add multiple server credentials in the Azure Migrate appliance configuration manager to initiate discovery of installed applications, agentless dependency analysis, and discovery of web apps, and SQL Server instances and databases. You can add multiple domain, Windows (non-domain), Linux (non-domain), or SQL Server authentication credentials. Learn how to [add server credentials](add-server-credentials.md).
In the configuration manager, select **Set up prerequisites**, and then complete
2. The appliance will verify the key and start the auto-update service, which updates all the services on the appliance to their latest versions. When the auto-update has run, you can select **View appliance services** to see the status and versions of the services running on the appliance server. 3. To register the appliance, you need to select **Login**. In **Continue with Azure Login**, select **Copy code & Login** to copy the device code (you must have a device code to authenticate with Azure) and open an Azure Login prompt in a new browser tab. Make sure you've disabled the pop-up blocker in the browser to see the prompt.
- :::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Screenshot that shows where to copy the device code and log in.":::
+ :::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Screenshot that shows where to copy the device code and sign in.":::
4. In a new tab in your browser, paste the device code and sign in by using your Azure username and password. Signing in with a PIN isn't supported. > [!NOTE]
- > If you close the login tab accidentally without logging in, refresh the browser tab of the appliance configuration manager to display the device code and Copy code & Login button.
- 5. After you successfully log in, return to the browser tab that displays the appliance configuration manager. If the Azure user account that you used to log in has the required permissions for the Azure resources that were created during key generation, appliance registration starts.
+ > If you close the sign-in tab accidentally without signing in, refresh the browser tab of the appliance configuration manager to display the device code and the **Copy code & Login** button.
+ 5. After you successfully sign in, return to the browser tab that displays the appliance configuration manager. If the Azure user account that you used to sign in has the required permissions for the Azure resources that were created during key generation, appliance registration starts.
After the appliance is successfully registered, to see the registration details, select **View details**.
The appliance must connect to vCenter Server to discover the configuration and p
1. In **Step 1: Provide vCenter Server credentials**, select **Add credentials** to enter a name for the credentials. Add the username and password for the vCenter Server account that the appliance will use to discover servers running on vCenter Server. - You should have set up an account with the required permissions as described earlier in this article. - If you want to scope discovery to specific VMware objects (vCenter Server datacenters, clusters, hosts, folders of clusters or hosts, or individual servers), review the instructions to [set discovery scope](set-discovery-scope.md) to restrict the account that Azure Migrate uses.
- - If you want to add multiple credentials at once, click on **Add more** to save and add more credentials. Multiple credentials are supported for discovery of servers across multiple vCenter Servers using a single appliance.
+ - If you want to add multiple credentials at once, select **Add more** to save and add more credentials. Multiple credentials are supported for discovery of servers across multiple vCenter Servers using a single appliance.
1. In **Step 2: Provide vCenter Server details**, select **Add discovery source** to add the IP address or FQDN of a vCenter Server. You can leave the port as the default (443) or specify a custom port on which vCenter Server listens. Select the friendly name for credentials you would like to map to the vCenter Server and click **Save**.
- Click on **Add more** to save the previous details and add more vCenter Server details. **You can add up to 10 vCenter Servers per appliance.**
+ Select **Add more** to save the previous details and add more vCenter Server details. **You can add up to 10 vCenter Servers per appliance.**
:::image type="content" source="./media/tutorial-discover-vmware/add-discovery-source.png" alt-text="Screenshot that allows to add more vCenter Server details.":::
To start vCenter Server discovery, select **Start discovery**. After the discove
1. Return to Azure Migrate in the Azure portal. 1. Select **Refresh** to view discovered data.
-1. Click on the discovered servers count to review the discovered inventory. You can filter the inventory by selecting the appliance name and selecting one or more vCenter Servers from the **Source** filter.
+1. Select the discovered servers count to review the discovered inventory. You can filter the inventory by selecting the appliance name and selecting one or more vCenter Servers from the **Source** filter.
:::image type="content" source="./media/tutorial-discover-vmware/discovery-assessment-tile.png" alt-text="Screenshot that shows how to refresh data in discovery and assessment tile.":::
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
Title: What's new in Azure Migrate description: Learn about what's new and recent updates in the Azure Migrate service. --
-ms.
Previously updated : 11/04/2022++
+ms.
Last updated : 11/13/2022
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (November 2022)
+
+- Support for providing a sudo user account to perform agentless dependency analysis on Linux servers running in VMware, Hyper-V, and Physical/other cloud environments.
+ ## Update (October 2022) - Support for export of errors and notifications from the portal for software inventory and agentless dependency.-- Private preview : Plan replication. This is a new feature added to the Migration and Modernization tool of Azure Migrate. It helps to estimate the time and resources required for replication and migration of the discovered servers. This feature will help in planning the replication and migration schedule. Currently, the feature is available for VMware agentless migrations. To enroll for private preview, please fill this [form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR2QMeqnlmM1An_w4S8FpvJ5UNEQ1VEgzNUpEUE0xODBHVUdVVUxGMUdVNS4u).
+- Private preview: Plan replication. This is a new feature added to the Migration and Modernization tool of Azure Migrate. It helps to estimate the time and resources required for replication and migration of the discovered servers. This feature will help in planning the replication and migration schedule. Currently, the feature is available for VMware agentless migrations. To enroll for private preview, please fill this [form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR2QMeqnlmM1An_w4S8FpvJ5UNEQ1VEgzNUpEUE0xODBHVUdVVUxGMUdVNS4u).
## Update (September 2022)
## Update (May 2022) - Upgraded the Azure SQL assessment experience to identify the ideal migration target for your SQL deployments across Azure SQL MI, SQL Server on Azure VM, and Azure SQL DB: - We recommended migrating instances to *SQL Server on Azure VM* as per the Azure best practices.
- - *Right sized Lift and Shift* - Server to *SQL Server on Azure VM*. We recommend this when SQL Server credentials are not available.
+ - *Right sized Lift and Shift* - Server to *SQL Server on Azure VM*. We recommend this when SQL Server credentials aren't available.
- Enhanced user-experience that covers readiness and cost estimates for multiple migration targets for SQL deployments in one assessment. - Support for Storage vMotion during replication for agentless VMware VM migrations.
For more information, see [ASP.NET app containerization and migration to Azure K
## Update (September 2020) - Migration of servers to Availability Zones is now supported.-- Migration of UEFI-based VMs and physical servers to Azure generation 2 VMs is now supported. With this release, Azure Migrate: Server Migration tool will not perform the conversion from Gen 2 VM to Gen 1 VM during migration.
+- Migration of UEFI-based VMs and physical servers to Azure generation 2 VMs is now supported. With this release, Azure Migrate: Server Migration tool won't perform the conversion from Gen 2 VM to Gen 1 VM during migration.
- A new Azure Migrate Power BI assessment dashboard is available to help you compare costs across different assessment settings. The dashboard comes with a PowerShell utility that automatically creates the assessments that plug into the Power BI dashboard. [Learn more.](https://github.com/Azure/azure-docs-powershell-samples/tree/master/azure-migrate/assessment-utility) - Dependency analysis (agentless) can now be run concurrently on a 1000 VMs. - Dependency analysis (agentless) can now be enabled or disabled at scale using PowerShell scripts. [Learn more.](https://github.com/Azure/azure-docs-powershell-samples/tree/master/azure-migrate/dependencies-at-scale)
partner-solutions Nginx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-overview.md
Last updated 05/12/2022
In this article you learn how to enable deeper integration of the **NGINX** SaaS service with Azure.
-The Cloud-Native Observability Platform of NGINX centralizes log, metric, and tracing analytics in one place. You can more easily monitor the health and performance of your Azure environment, and troubleshoot your services faster.
+NGINX for Azure (preview) delivers secure and high performance applications using familiar and trusted load balancing solutions. Use NGINX for Azure (preview) as a reverse proxy within your Azure environment.
The NGINX for Azure (preview) offering in the Azure Marketplace allows you to manage NGINX in the Azure portal as an integrated service. You can implement NGINX as a monitoring solution for your cloud workloads through a streamlined workflow.
private-link Create Private Endpoint Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-portal.md
Title: 'Quickstart: Create a private endpoint by using the Azure portal'+ description: In this quickstart, you'll learn how to create a private endpoint by using the Azure portal. Previously updated : 06/28/2022 Last updated : 11/17/2022 #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it.
You use the bastion host to connect securely to the VM for testing the private e
| Setting | Value | |--|-| | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/27** |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/26** |
| Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. | 13. Select the **Review + create** tab or select the **Review + create** button.
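If you prefer scripting over the portal, here's a hedged Azure CLI equivalent of the AzureBastionSubnet setting above; the resource group name is a placeholder.

```bash
# Create the Bastion subnet with the /26 address space used above (resource group is a placeholder).
az network vnet subnet create \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name AzureBastionSubnet \
  --address-prefixes 10.1.1.0/26
```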
Next, create a VM that you can use to test the private endpoint.
| Region | Select **West Europe**. | | Availability options | Select **No infrastructure redundancy required**. | | Security type | Select **Standard**. |
- | Image | Select **Windows Server 2019 Datacenter - Gen2**. |
+ | Image | Select **Windows Server 2022 Datacenter - Gen2**. |
| Size | Select the VM size or use the default setting. | | **Administrator account** | | | Username | Enter a username. |
Next, you create a private endpoint for the web app that you created in the "Pre
| **Networking** | | | Virtual network | Select **myVNet**. | | Subnet | Select **myVNet/mySubnet (10.1.0.0/24)**. |
- | Enable network policies for all private endpoints in this subnet. | Select the checkbox if you plan to apply Application Security Groups or Network Security groups to the subnet that contains the private endpoint. </br> For more information, see [Manage network policies for private endpoints](disable-private-endpoint-network-policy.md) |
+ | Network policy for private endpoints | Select **edit** to apply Network security groups and/or Route tables to the subnet that contains the private endpoint. </br> In **Edit subnet network policy**, select the checkbox next to **Network security groups** and **Route Tables**. </br> Select **Save**. </br></br>For more information, see [Manage network policies for private endpoints](disable-private-endpoint-network-policy.md) |
# [**Dynamic IP**](#tab/dynamic-ip)
Next, you create a private endpoint for the web app that you created in the "Pre
## Test connectivity to the private endpoint
-Use the VM that you created earlier to connect to the web app across the private endpoint.
+Use the virtual machine that you created earlier to connect to the web app across the private endpoint.
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines**.
purview Catalog Private Link Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-name-resolution.md
Previously updated : 01/21/2022 Last updated : 11/17/2022 # Customer intent: As a Microsoft Purview admin, I want to set up private endpoints for my Microsoft Purview account, for secure access.
If you do not use DNS forwarders and instead you manage A records directly in yo
| `Contoso-Purview.proxy.purview.azure.com` | A | \<account private endpoint IP address of Microsoft Purview\> | | `Contoso-Purview.guardian.purview.azure.com` | A | \<account private endpoint IP address of Microsoft Purview\> | | `gateway.purview.azure.com` | A | \<account private endpoint IP address of Microsoft Purview\> |
+| `insight.prod.ext.web.purview.azure.com` | A | \<account private endpoint IP address of Microsoft Purview> |
| `manifest.prod.ext.web.purview.azure.com` | A | \<portal private endpoint IP address of Microsoft Purview\> | | `cdn.prod.ext.web.purview.azure.com` | A | \<portal private endpoint IP address of Microsoft Purview\> | | `hub.prod.ext.web.purview.azure.com` | A | \<portal private endpoint IP address of Microsoft Purview\> |
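After you create the A records, you can confirm resolution from a machine inside your network; each name should return the corresponding private endpoint IP rather than a public address. Contoso-Purview is a placeholder for your account name.

```bash
# Spot-check a few of the records defined above.
nslookup Contoso-Purview.proxy.purview.azure.com
nslookup gateway.purview.azure.com
nslookup hub.prod.ext.web.purview.azure.com
```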
purview How To Policies Data Owner Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-authoring-generic.md
Now that you have created your policy, you will need to publish it for it to bec
## Publish a policy A newly created policy is in the **draft** state. The process of publishing associates the new policy with one or more data sources under governance. This is called "binding" a policy to a data source.
-Ensure you have the *Data Source Admin* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-publish-data-owner-policies)
+Ensure you have the *Data Source Admin* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-for-publishing-data-owner-policies)
The steps to publish a policy are as follows:
The steps to publish a policy are as follows:
> After making changes to a policy, there is no need to publish it again for it to take effect if the data source(s) continues to be the same. ## Unpublish a policy
-Ensure you have the *Data Source Admin* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-publish-data-owner-policies)
+Ensure you have the *Data Source Admin* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-for-publishing-data-owner-policies)
The steps to publish a policy are as follows:
purview Register Scan Azure Arc Enabled Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-arc-enabled-sql-server.md
Title: Connect to and manage Azure Arc-enabled SQL Server instances
-description: This guide describes how to connect to Azure Arc-enabled SQL Server in Microsoft Purview, and use Microsoft Purview's features to scan and manage your Azure Arc-enabled SQL Server source.
+ Title: Connect to and manage Azure Arc-enabled SQL Server
+description: This guide describes how to connect to Azure Arc-enabled SQL Server in Microsoft Purview, and use Microsoft Purview features to scan and manage your Azure Arc-enabled SQL Server source.
Last updated 11/07/2022
-# Connect to and manage an Azure Arc-enabled SQL Server instance in Microsoft Purview (Public preview)
+# Connect to and manage Azure Arc-enabled SQL Server in Microsoft Purview (public preview)
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-This article outlines how to register Azure Arc-enabled SQL Server instances, and how to authenticate and interact with an Azure Arc-enabled SQL Server instance in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md).
+This article shows how to register an Azure Arc-enabled SQL Server instance. It also shows how to authenticate and interact with Azure Arc-enabled SQL Server in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md).
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
+|Metadata extraction|Full scan|Incremental scan|Scoped scan|Classification|Access policy|Lineage|Data sharing|
||||||||| | [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](#access-policy) | Limited** | No |
-\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
+\** Lineage is supported if the dataset is used as a source/sink in the [Azure Data Factory copy activity](how-to-link-azure-data-factory.md).
-The supported SQL Server versions are 2012 and above. SQL Server Express LocalDB is not supported.
+The supported SQL Server versions are 2012 and later. SQL Server Express LocalDB is not supported.
-When scanning Azure Arc-enabled SQL Server, Microsoft Purview supports:
+When you're scanning Azure Arc-enabled SQL Server, Microsoft Purview supports extracting the following technical metadata:
-- Extracting technical metadata including:
+- Instances
+- Databases
+- Schemas
+- Tables, including the columns
+- Views, including the columns
- - Instance
- - Databases
- - Schemas
- - Tables including the columns
- - Views including the columns
-
-When setting up scan, you can choose to specify the database name to scan one database, and you can further scope the scan by selecting tables and views as needed. The whole Azure Arc-enabled SQL Server will be scanned if database name is not provided.
+When you're setting up a scan, you can choose to specify the database name to scan one database. You can further scope the scan by selecting tables and views as needed. The whole Azure Arc-enabled SQL Server instance will be scanned if you don't provide a database name.
## Prerequisites
When setting up scan, you can choose to specify the database name to scan one da
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. See [Access control in the Microsoft Purview governance portal](catalog-permissions.md) for details.
-* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
+* The latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [Create and manage a self-hosted integration runtime](manage-integration-runtimes.md).
## Register
-This section describes how to register an Azure Arc-enabled SQL Server instance in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
+This section describes how to register an Azure Arc-enabled SQL Server instance in Microsoft Purview by using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Authentication for registration
-There are two ways to set up authentication for scanning Azure Arc-enabled SQL Server with self-hosted integration runtime:
--- SQL Authentication-- Windows Authentication
+There are two ways to set up authentication for scanning Azure Arc-enabled SQL Server with a self-hosted integration runtime:
-#### Set up SQL server authentication
+- Windows authentication
+- SQL Server authentication
-If SQL Authentication is applied, ensure the SQL Server deployment is configured to allow SQL Server and Windows Authentication.
+To configure authentication for the SQL Server deployment:
-To enable this, within SQL Server Management Studio (SSMS), navigate to "Server Properties" and change from "Windows Authentication Mode" to "SQL Server and Windows Authentication mode".
+1. In SQL Server Management Studio (SSMS), go to **Server Properties**, and then select **Security** on the left pane.
+1. Under **Server authentication**:
+ - For Windows authentication, select either **Windows Authentication mode** or **SQL Server and Windows Authentication mode**.
+ - For SQL Server authentication, select **SQL Server and Windows Authentication mode**.
-If Windows Authentication is applied, configure the SQL Server deployment to use Windows Authentication mode.
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/enable-sql-server-authentication.png" alt-text="Screenshot that shows the Security page of the Server Properties window, with options for selecting authentication mode.":::
-A change to the Server Authentication will require a restart of the SQL Server Instance and SQL Server Agent, this can be triggered within SSMS by navigating to the SQL Server instance and selecting "Restart" within the right-click options pane.
+A change to the server authentication requires you to restart the SQL Server instance and SQL Server Agent. In SSMS, go to the SQL Server instance and select **Restart** on the right-click options pane.
-##### Creating a new login and user
+#### Create a new login and user
-If you would like to create a new login and user to be able to scan your SQL server, follow the steps below:
+If you want to create a new login and user to scan your SQL Server instance, use the following steps.
-The account must have access to the **master** database. This is because the `sys.databases` is in the master database. The Microsoft Purview scanner needs to enumerate `sys.databases` in order to find all the SQL databases on the server.
+The account must have access to the master database, because `sys.databases` is in the master database. The Microsoft Purview scanner needs to enumerate `sys.databases` in order to find all the SQL databases on the server.
> [!Note]
-> All the steps below can be executed using the code provided [here](https://github.com/Azure/Purview-Samples/blob/master/TSQL-Code-Permissions/grant-access-to-on-prem-sql-databases.sql)
+> You can run all the following steps by using [this code](https://github.com/Azure/Purview-Samples/blob/master/TSQL-Code-Permissions/grant-access-to-on-prem-sql-databases.sql).
-1. Navigate to SQL Server Management Studio (SSMS), connect to the server, navigate to security, select and hold (or right-click) on login and create New login. If Windows Authentication is applied, select "Windows authentication". If SQL Authentication is applied, make sure to select "SQL authentication".
+1. Go to SSMS, connect to the server, and then select **Security** on the left pane.
- :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/create-new-login-user.png" alt-text="Screenshot that shows how to create a new login and user.":::
+1. Select and hold (or right-click) **Login**, and then select **New login**. If Windows authentication is applied, select **Windows authentication**. If SQL Server authentication is applied, select **SQL Server authentication**.
-1. Select Server roles on the left navigation and ensure that public role is assigned.
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/create-new-login-user.png" alt-text="Screenshot that shows selections for creating a new login and user.":::
-1. Select User mapping on the left navigation, select all the databases in the map and select the Database role: **db_datareader**.
+1. Select **Server Roles** on the left pane, and ensure that a public role is assigned.
+
+1. Select **User Mapping** on the left pane, select all the databases in the map, and then select the **db_datareader** database role.
:::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/user-mapping.png" alt-text="Screenshot that shows user mapping.":::
-1. Select OK to save.
+1. Select **OK** to save.
+
+1. If SQL Server authentication is applied, you must change your password as soon as you create a new login:
-1. If SQL Authentication is applied, navigate again to the user you created, by selecting and holding (or right-clicking) and selecting **Properties**. Enter a new password and confirm it. Select the 'Specify old password' and enter the old password. **It is required to change your password as soon as you create a new login.**
+ 1. Select and hold (or right-click) the user that you created, and then select **Properties**.
+ 1. Enter a new password and confirm it.
+ 1. Select the **Specify old password** checkbox and enter the old password.
+ 1. Select **OK**.
- :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/change-password.png" alt-text="Screenshot that shows how to change a password.":::
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/change-password.png" alt-text="Screenshot that shows selections for changing a password.":::
-##### Storing your SQL login password in a key vault and creating a credential in Microsoft Purview
+#### Store your SQL Server login password in a key vault and create a credential in Microsoft Purview
-1. Navigate to your key vault in the Azure portal1. Select **Settings > Secrets**
-1. Select **+ Generate/Import** and enter the **Name** and **Value** as the *password* from your SQL server login
-1. Select **Create** to complete
-1. If your key vault is not connected to Microsoft Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
-1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the **username** and **password** to set up your scan. Make sure the right authentication method is selected when creating a new credential. If SQL Authentication is applied, select "SQL authentication" as the authentication method. If Windows Authentication is applied, then select "Windows authentication".
+1. Go to your key vault in the Azure portal. Select **Settings** > **Secrets**.
+1. Select **+ Generate/Import**. For **Name** and **Value**, enter the password from your SQL Server login. (An Azure CLI equivalent of this step is shown after this list.)
+1. Select **Create**.
+1. If your key vault is not connected to Microsoft Purview yet, [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account).
+1. [Create a new credential](manage-credentials.md#create-a-new-credential) by using the username and password to set up your scan.
+
+ Be sure to select the right authentication method when you're creating a new credential. If Windows authentication is applied, select **Windows authentication**. If SQL Server authentication is applied, select **SQL Server authentication**.
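
As a rough alternative to the portal steps above, you can also create the secret with Azure PowerShell. The vault name and secret name below are placeholders, not values used elsewhere in this article.

```azurepowershell-interactive
# Hypothetical sketch: store the SQL Server login password as a Key Vault secret.
$secretValue = ConvertTo-SecureString "<sql-login-password>" -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName "<your-key-vault-name>" -Name "sql-scan-password" -SecretValue $secretValue
```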
### Steps to register
-1. Navigate to your Microsoft Purview account
+1. Go to your Microsoft Purview account.
-1. Under Sources and scanning in the left navigation, select **Integration runtimes**. Make sure a self-hosted integration runtime is set up. If it is not set up, follow the steps mentioned [here](manage-integration-runtimes.md) to create a self-hosted integration runtime for scanning on an on-premises or Azure VM that has access to your on-premises network.
+1. Under **Sources and scanning** on the left pane, select **Integration runtimes**. Make sure that a self-hosted integration runtime is set up. If it isn't set up, [follow the steps to create a self-hosted integration runtime](manage-integration-runtimes.md) for scanning on an on-premises or Azure virtual machine that has access to your on-premises network.
-1. Select **Data Map** on the left navigation.
+1. Select **Data Map** on the left pane.
-1. Select **Register**
+1. Select **Register**.
-1. Select **SQL server on Azure Arc-enabled servers** and then **Continue**
+1. Select **SQL Server on Azure Arc-enabled servers**, and then select **Continue**.
- :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/set-up-azure-arc-enabled-sql-data-source.png" alt-text="Screenshot that shows how to set up the SQL data source.":::
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/set-up-azure-arc-enabled-sql-data-source.png" alt-text="Screenshot that shows selecting a SQL data source.":::
-1. Provide a friendly name, which will be a short name you can use to identify your server, and the server endpoint.
+1. Provide a friendly name, which is a short name that you can use to identify your server. Also provide the server endpoint.
1. Select **Finish** to register the data source.

## Scan
-Follow the steps below to scan Azure Arc-enabled SQL Server instances to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md)
-
-### Create and run scan
+Use the following steps to scan Azure Arc-enabled SQL Server instances to automatically identify assets and classify your data. For more information about scanning in general, see [Scans and ingestion in Microsoft Purview](concept-scans-and-ingestion.md).
-To create and run a new scan, do the following:
+To create and run a new scan:
-1. Select the **Data Map** tab on the left pane in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
+1. In the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/), select the **Data Map** tab on the left pane.
-1. Select the Azure Arc-enabeld SQL Server source that you registered.
+1. Select the Azure Arc-enabled SQL Server source that you registered.
-1. Select **New scan**
+1. Select **New scan**.
-1. Select the credential to connect to your data source. The credentials are grouped and listed under different authentication methods.
+1. Select the credential to connect to your data source. Credentials are grouped and listed under the authentication methods.
- :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/azure-arc-enabled-sql-set-up-scan-win-auth.png" alt-text="Screenshot that shows how to set up a scan.":::
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/azure-arc-enabled-sql-set-up-scan-win-auth.png" alt-text="Screenshot that shows selecting a credential for a scan.":::
1. You can scope your scan to specific tables by choosing the appropriate items in the list.
- :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/azure-arc-enabled-sql-scope-your-scan.png" alt-text="Screenshot that shows how to scope your scan.":::
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/azure-arc-enabled-sql-scope-your-scan.png" alt-text="Screenshot that shows selected assets for scoping a scan.":::
-1. Then select a scan rule set. You can choose between the system default, existing custom rule sets, or create a new rule set inline.
+1. Select a scan rule set. You can choose between the system default, existing custom rule sets, or creation of a new rule set inline.
- :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/azure-arc-enabled-sql-scan-rule-set.png" alt-text="Screenshot that shows the scan rule set.":::
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/azure-arc-enabled-sql-scan-rule-set.png" alt-text="Screenshot that shows selecting a scan rule set.":::
1. Choose your scan trigger. You can set up a schedule or run the scan once.
- :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/trigger-scan.png" alt-text="Screenshot that shows how to choose a trigger.":::
+ :::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/trigger-scan.png" alt-text="Screenshot that shows setting up a recurring scan trigger.":::
-1. Review your scan and select **Save and run**.
+1. Review your scan, and then select **Save and run**.
[!INCLUDE [view and manage scans](includes/view-and-manage-scans.md)]

## Access policy

### Supported policies

The following types of policies are supported on this data resource from Microsoft Purview:

- [DevOps policies](concept-policies-devops.md)
-- [Data owner policies](concept-policies-data-owner.md)(preview)
+- [Data Owner policies](concept-policies-data-owner.md) (preview)
-### Access policy pre-requisites on Arc enabled SQL Server
+### Access policy prerequisites on Azure Arc-enabled SQL Server
### Configure the Microsoft Purview account for policies

[!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)]

### Register the data source and enable Data use management
-The Arc-enabled SQL Server data source needs to be registered first with Microsoft Purview, before policies can be created.
+
+Before you can create policies, you must register the Azure Arc-enabled SQL Server data source with Microsoft Purview:
1. Sign in to Microsoft Purview Studio.
-1. Navigate to the **Data map** feature on the left pane, select **Sources**, then select **Register**. Type "Azure Arc" in the search box and select **SQL Server on Azure Arc**. Then select **Continue**
-![Screenshot shows how to select a source for registration.](./media/how-to-policies-data-owner-sql/select-arc-sql-server-for-registration.png)
+1. Go to **Data map** on the left pane, select **Sources**, and then select **Register**. Enter **Azure Arc** in the search box and select **SQL Server on Azure Arc**. Then select **Continue**.
+
+ ![Screenshot that shows selecting a source for registration.](./media/how-to-policies-data-owner-sql/select-arc-sql-server-for-registration.png)
-1. Enter a **Name** for this registration. It is best practice to make the name of the registration the same as the server name in the next step.
+1. For **Name**, enter a name for this registration. It's best practice to make the name of the registration the same as the server name in the next step.
-1. select an **Azure subscription**, **Server name** and **Server endpoint**.
+1. Select values for **Azure subscription**, **Server name**, and **Server endpoint**.
-1. **Select a collection** to put this registration in.
+1. For **Select a collection**, choose a collection to put this registration in.
-1. Enable Data Use Management. Data Use Management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
+1. Enable **Data use management**. **Data use management** needs certain permissions and can affect the security of your data, because it delegates to certain Microsoft Purview roles to manage access to the data sources. Go through the secure practices related to **Data use management** in this guide: [Enable Data use management on your Microsoft Purview sources](./how-to-enable-data-use-management.md).
-1. Upon enabling Data Use Management, Microsoft Purview will automatically capture the **Application ID** of the App Registration related to this Arc-enabled SQL server. Come back to this screen and hit the refresh button on the side of it to refresh, in case the association between the Arc-enabled SQL server and the App Registration changes in the future.
+1. After you enable **Data use management**, Microsoft Purview automatically captures the application ID of the app registration that's related to this Azure Arc-enabled SQL Server instance. Come back to this screen and select the refresh button, in case the association between Azure Arc-enabled SQL Server and the app registration changes in the future.
-1. Select **Register** or **Apply** at the bottom
+1. Select **Register** or **Apply**.
-Once your data source has the **Data Use Management** toggle *Enabled*, it will look like this picture.
-![Screenshot shows how to register a data source for policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-arc-sql.png)
+![Screenshot that shows selections for registering a data source for a policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-arc-sql.png)
### Create a policy
-To create an access policy for Arc-enabled SQL Server, follow these guides:
-* [DevOps policy on a single Arc-enabled SQL Server](./how-to-policies-devops-arc-sql-server.md#create-a-new-devops-policy)
-* [Data owner policy on a single Arc-enabled SQL Server](./how-to-policies-data-owner-arc-sql-server.md#create-and-publish-a-data-owner-policy)
-To create policies that cover all data sources inside a resource group or Azure subscription you can refer to [this section](register-scan-azure-multiple-sources.md#access-policy).
+To create an access policy for Azure Arc-enabled SQL Server, follow these guides:
+
+* [DevOps policy on a single Azure Arc-enabled SQL Server instance](./how-to-policies-devops-arc-sql-server.md#create-a-new-devops-policy)
+* [Data owner policy on a single Azure Arc-enabled SQL Server instance](./how-to-policies-data-owner-arc-sql-server.md#create-and-publish-a-data-owner-policy)
+
+To create policies that cover all data sources inside a resource group or Azure subscription, see [Discover and govern multiple Azure sources in Microsoft Purview](register-scan-azure-multiple-sources.md#access-policy).
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data.
+Now that you've registered your source, use the following guides to learn more about Microsoft Purview and your data:
- [DevOps policies in Microsoft Purview](concept-policies-devops.md)
- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
-- [Search Data Catalog](how-to-search-catalog.md)
+- [Search the data catalog](how-to-search-catalog.md)
purview Troubleshoot Policy Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/troubleshoot-policy-distribution.md
GET https://relecloud-pv.purview.azure.com/pds/subscriptions/BB345678-abcd-ABCD-
## Delta pull ### Request
-To fetch policies via full pull, send a `GET` request to /policyEvents as follows:
+To fetch policies via delta pull, send a `GET` request to /policyEvents as follows:
```
GET {{endpoint}}/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName}/policyEvents?api-version={apiVersion}&syncToken={syncToken}
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
Azure offerings are grouped into three categories that reflect their _regional_
| [Azure Batch](../batch/create-pool-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Cache for Redis](../azure-cache-for-redis/cache-high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
| [Azure Cognitive Search](../search/search-performance-optimization.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Container Apps](../container-apps/disaster-recovery.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Container Instances](../container-instances/availability-zones.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
| [Azure Container Registry](../container-registry/zone-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Data Explorer](/azure/data-explorer/create-cluster-database-portal) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
resource-mover Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/common-questions.md
Currently, you can move resources from any source public region to any target pu
Azure Resource Mover is currently available as follows:
-**Support** | **Details**
- |
-Move support | Azure resources that are supported for move with Resource Mover can be moved from any public region to another public region and within regions in China. Moving resources within Azure Gov is also supported (US DoD Central, US DoD East, US Gov Arizona, US Gov Texas, US Gov Virginia). US Sec East/West/West Central are not currently supported.
-Metadata support | Supported regions for storing metadata about machines to be moved include East US2, North Europe, Southeast Asia, Japan East, UK South, and Australia East as metadata regions. <br/><br/> Moving resources within the Azure China region is also supported with the metadata region China North2.
+| Support | Details|
+|-- | -|
+|Move support | Azure resources that are supported for a move with Resource Mover can be moved from any public region to another public region and within regions in China. Moving resources within Azure Gov is also supported (US DoD Central, US DoD East, US Gov Arizona, US Gov Texas, US Gov Virginia). US Sec East/West/West Central are not currently supported.|
+|Metadata support | Supported regions for storing metadata about machines to be moved include East US2, North Europe, Southeast Asia, Japan East, UK South, and Australia East as metadata regions. <br/><br/> Moving resources within the Azure China region is also supported with the metadata region China North2.|
### What resources can I move across regions using Resource Mover?
You can't select disks as resources to be moved across regions. However, disks
### What does it mean to move a resource group?
-When a resource is selected for move, the corresponding resource group is added automatically for moving. This is so that the destination resource can be placed in a resource group. You can choose to customize and provide an existing resource group, after it's added for move. Moving a resource group doesn't mean that all the resources in the source resource group will be moved.
+When a resource is selected for move, the corresponding resource group is added automatically for moving. This is so that the destination resource can be placed in a resource group. You can choose to customize and provide an existing resource group after it's added for move. Moving a resource group doesn't mean that all the resources in the source resource group will be moved.
### Can I move resources across subscriptions when I move them across regions?
No. The Resource Mover service doesn't store customer data; it only stores metadata
### Where is the metadata for moving across regions stored?
-It's stored in an [Azure Cosmos DB](../cosmos-db/database-encryption-at-rest.md) database, and in [Azure Blob storage](../storage/common/storage-service-encryption.md), in a Microsoft subscription. Currently metadata is stored in East US 2 and North Europe. We will expand this coverage to other regions. This doesn't restrict you from moving resources across any public regions.
+It's stored in an [Azure Cosmos DB](../cosmos-db/database-encryption-at-rest.md) database, and in [Azure Blob storage](../storage/common/storage-service-encryption.md), in a Microsoft subscription. Currently, metadata is stored in East US 2 and North Europe. We will expand this coverage to other regions. This doesn't restrict you from moving resources across any public region.
### Is the collected metadata encrypted?
When you add resources in the Resource Mover hub in the portal, permissions are
### What if I don't have permissions to assign role identity?
-There are a couple of reasons you might not have permissions.
+There are a couple of reasons you might not have permission.
-**Possible cause** | **Recommendation**
- |
-You're not a *Contributor* and *User Access Administrator* (or *Owner*) when you add a resource for first time. | Use an account with *Contributor* and *User Access Administrator* (or *Owner*) permissions for the subscription.
-The Resource Mover managed identity doesn't have the required role. | Add the 'Contributor' and 'User Access administrator' roles.
-The Resource Mover managed identity was reset to *None*. | Reenable a system-assigned identity in the move collection settings > **Identity**. Alternatively, in **Add Resources**, add the resource again, which does the same thing.
-The subscription was moved to a different tenant. | Disable and then enable managed identity for the move collection.
+|Possible cause | Recommendation|
+|-- | --|
+|You're not a *Contributor* and *User Access Administrator* (or *Owner*) when you add a resource for the first time. | Use an account with *Contributor* and *User Access Administrator* (or *Owner*) permissions for the subscription.|
+|The Resource Mover managed identity doesn't have the required role. | Add the 'Contributor' and 'User Access administrator' roles. |
+|The Resource Mover managed identity was reset to *None*. | Reenable a system-assigned identity in the move collection settings > **Identity**. Alternatively, in **Add Resources**, add the resource again, which does the same thing. |
+|The subscription was moved to a different tenant. | Disable and then enable managed identity for the move collection.|
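
If you need to add the missing roles yourself, the following hedged sketch assigns **Contributor** and **User Access Administrator** to the Resource Mover managed identity at subscription scope. The object ID and subscription ID are placeholders; confirm the identity's object ID in your own tenant before running anything like this.

```azurepowershell-interactive
# Hypothetical sketch: grant the Resource Mover managed identity the roles it needs.
$principalId = "<managed-identity-object-id>"   # object ID of the Resource Mover managed identity
$scope = "/subscriptions/<subscription-id>"

New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Contributor" -Scope $scope
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "User Access Administrator" -Scope $scope
```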
### How can I do multiple moves together?
Change the source/target combinations as needed using the change option in the p
### What happens when I remove a resource from a list of move resources?
-You can remove resources that you've added to move list. The exact remove behavior depends on the resource state. [Learn more](remove-move-resources.md#vm-resource-state-after-removing).
-
+You can remove resources that you've added to the move list. The exact remove behavior depends on the resource state. [Learn more](remove-move-resources.md#vm-resource-state-after-removing).
## Next steps
resource-mover Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/overview.md
This article provides an overview of the Azure Resource Mover service. Resource
You might move resources to different Azure regions to: - **Align to a region launch**: Move resources to a newly introduced Azure region that wasn't previously available.-- **Align for services/features**: Move resources to take advantage of services or features that are available in a specific region.
+- **Align for services/features**: Move resources to take advantage of the services or features that are available in a specific region.
- **Respond to business developments**: Move resources to a region in response to business changes, such as mergers or acquisitions.
- **Align for proximity**: Move resources to a region local to your business.
- **Meet data requirements**: Move resources to align with data residency requirements, or data classification needs.
-- **Respond to deployment requirements**: Move resources that were deployed in error, or move in response to capacity needs.
+- **Respond to deployment requirements**: Move resources that were deployed in error or move in response to capacity needs.
- **Respond to decommissioning**: Move resources because a region is decommissioned.
Resource Mover provides:
- A single hub for moving resources across regions.
- Reduced move time and complexity. Everything you need is in a single location.
- A simple and consistent experience for moving different types of Azure resources.
-- An easy way to identify dependencies across resources you want to move. This helps you to move related resources together, so that everything works as expected in the target region, after the move.
+- An easy way to identify dependencies across resources you want to move. This feature helps you to move related resources together so that everything works as expected in the target region after the move.
- Automatic cleanup of resources in the source region, if you want to delete them after the move.
-- Testing. You can try out a move, and then discard it if you don't want to do a full move.
+- Testing. You can try out a move and then discard it if you don't want to do a full move.
## Move across regions
-To move resources across regions, you select the resources that you want to move. Resource Mover validates those resources, and resolves any dependencies they have on other resources. If there are dependencies, you have a couple of options:
+To move resources across regions, you select the resources that you want to move. Resource Mover validates those resources and resolves any dependencies they have on other resources. If there are dependencies, you have a couple of options:
- Move the dependent resources to the target region.-- Don't move the dependent resources, but use equivalent resources in the target region instead.
+- Don't move the dependent resources but use equivalent resources in the target region instead.
After all dependencies are resolved, Resource Mover walks you through a simple move process.

1. You kick off an initial move.
-2. After the initial move, you can decide whether to commit and complete the move, or to discard the move.
-3. After the move's done, you can decide whether you want to delete the resources in the source location.
+2. After the initial move, you can decide whether to commit and complete the move or discard the move.
+3. After the move is done, you can decide whether you want to delete the resources in the source location.
-You can move resources across regions in the Resource Mover hub, or from within a resource group. [Learn more](select-move-tool.md)
+You can move resources across regions in the Resource Mover hub or from within a resource group. [Learn more](select-move-tool.md)
## What resources can I move across regions?

Using Resource Mover, you can currently move the following resources across regions:

- Azure VMs and associated disks
-- Encrypted Azure VMs and associated disks. This includes VMs with Azure disk encryption enabled, and Azure VMs using default server-side encryption (both with platform-managed keys and customer-managed keys)
+- Encrypted Azure VMs and associated disks. This includes VMs with Azure disk encryption enabled and Azure VMs using default server-side encryption (both with platform-managed keys and customer-managed keys)
- NICs
- Availability sets
- Azure virtual networks
resource-mover Select Move Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/select-move-tool.md
You can move resources within Azure as follows:

-- **Move resources across regions**: Move resource from within the Resource Mover hub, or within a resource group.
+- **Move resources across regions**: Move resources from within the Resource Mover hub, or a resource group.
- **Move resources across resource groups/subscriptions**: Move from within a resource group.
- **Move resources between Azure clouds**: Use the Azure Site Recovery service to move resources between public and government clouds.
- **Move resources between availability zones in the same region**: Use the Azure Site Recovery service to move resources between availability zones in the same Azure region.
You can move resources within Azure as follows:
## Compare move tools
-**Tool** | **When to use** | **Learn more**
- | |
-**Move within resource group** | Move resources to a different resource group/subscription, or across regions.<br/><br/> If you move across regions, in the resource group you select the resources you want to move, and then you move to the Resource Mover hub, to verify dependencies and move the resources to the target region. | [Move resources to another resource group/subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md).<br/><br/> [Move resources to another region from a resource group](move-region-within-resource-group.md).
-**Move from the Resource Mover hub** | Move resources across regions. <br/><br/> You can move to a target region, or to a specific availability zone, or availability set, within the target region. | [Move resources across regions in the Resource Mover hub]().
-**Move VMs with Site Recovery** | Use for moving Azure VMs between government and public clouds.<br/><br/> Use if you want to move VMs between availability zones in the same region. |[Move resources between government/public clouds](../site-recovery/region-move-cross-geos.md), [Move resources to availability zones in the same region](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md).
+| Tool | When to use| Learn more |
+|-- | - | - |
+| **Move within resource group** | Move resources to a different resource group/subscription, or across regions.<br/><br/> If you move across regions, in the resource group you select the resources you want to move, and then you move to the Resource Mover hub, to verify dependencies and move the resources to the target region. | [Move resources to another resource group/subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md).<br/><br/> [Move resources to another region from a resource group](move-region-within-resource-group.md). |
+| **Move from the Resource Mover hub** | Move resources across regions. <br/><br/> You can move to a target region, or to a specific availability zone, or availability set, within the target region. | [Move resources across regions in the Resource Mover hub](). |
+| **Move VMs with Site Recovery** | Use it for moving Azure VMs between government and public clouds.<br/><br/> Use if you want to move VMs between availability zones in the same region. |[Move resources between government/public clouds](../site-recovery/region-move-cross-geos.md), [Move resources to availability zones in the same region](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md).|
## Next steps
resource-mover Tutorial Move Region Encrypted Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-encrypted-virtual-machines.md
In this tutorial, you learn how to:
> [!div class="checklist"]
> * Check prerequisites.
> * For VMs with Azure Disk Encryption enabled, copy keys and secrets from the source-region key vault to the destination-region key vault.
-> * Prepare to move VMs and to select resources in the source region that you want to move them from.
+> * Prepare to move VMs and select resources in the source region that you want to move them from.
> * Resolve resource dependencies.
> * For VMs with Azure Disk Encryption enabled, manually assign the destination key vault. For VMs that use server-side encryption with customer-managed keys, manually assign a disk encryption set in the destination region.
> * Move the key vault or disk encryption set.
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites
-Requirement |Details
- |
-**Subscription permissions** | Check to ensure that you have *Owner* access on the subscription that contains the resources you want to move.<br/><br/> *Why do I need Owner access?* The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types), formerly known as the Managed Service Identity (MSI). This identity is trusted by the subscription. Before you can create the identity and assign it the required roles (*Contributor* and *User access administrator* in the source subscription), the account you use to add resources needs *Owner* permissions in the subscription. For more information, see [Classic subscription administrator roles, Azure roles, and Azure AD roles](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles).
-**VM support** | Check to ensure that the VMs you want to move are supported by doing the following:<li>[Verify](support-matrix-move-region-azure-vm.md#windows-vm-support) supported Windows VMs.<li>[Verify](support-matrix-move-region-azure-vm.md#linux-vm-support) supported Linux VMs and kernel versions.<li>Check supported [compute](support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.
-**Key vault requirements (Azure Disk Encryption)** | If you have Azure Disk Encryption enabled for VMs, you need a key vault in both the source and destination regions. For more information, see [Create a key vault](../key-vault/general/quick-create-portal.md).<br/><br/> For the key vaults in the source and destination regions, you need these permissions:<li>Key permissions: Key Management Operations (Get, List) and Cryptographic Operations (Decrypt and Encrypt)<li>Secret permissions: Secret Management Operations (Get, List, and Set)<li>Certificate (List and Get)
-**Disk encryption set (server-side encryption with CMK)** | If you're using VMs with server-side encryption that uses a CMK, you need a disk encryption set in both the source and destination regions. For more information, see [Create a disk encryption set](../virtual-machines/disks-enable-customer-managed-keys-portal.md#set-up-your-disk-encryption-set).<br/><br/> Moving between regions isn't supported if you're using hardware security module (HSM keys) for customer-managed keys.
-**Target region quota** | The subscription needs enough quota to create the resources you're moving in the target region. If it doesn't have quota, [request additional limits](../azure-resource-manager/management/azure-subscription-service-limits.md).
-**Target region charges** | Verify the pricing and charges that are associated with the target region to which you're moving the VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+| Requirement |Details |
+| | -|
+|**Subscription permissions** | Check to ensure that you have *Owner* access on the subscription that contains the resources you want to move.<br/><br/> *Why do I need Owner access?* The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types), formerly known as the Managed Service Identity (MSI). This identity is trusted by the subscription. Before you can create the identity and assign it the required roles (*Contributor* and *User access administrator* in the source subscription), the account you use to add resources needs *Owner* permissions in the subscription. For more information, see [Classic subscription administrator roles, Azure roles, and Azure AD roles](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles).|
+| **VM support** | Check to ensure that the VMs you want to move are supported by doing the following:<li>[Verify](support-matrix-move-region-azure-vm.md#windows-vm-support) supported Windows VMs.<li>[Verify](support-matrix-move-region-azure-vm.md#linux-vm-support) supported Linux VMs and kernel versions.<li>Check supported [compute](support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.|
+| **Key vault requirements (Azure Disk Encryption)** | If you have Azure Disk Encryption enabled for VMs, you need a key vault in both the source and destination regions. For more information, see [Create a key vault](../key-vault/general/quick-create-portal.md).<br/><br/> For the key vaults in the source and destination regions, you need these permissions:<li>Key permissions: Key Management Operations (Get, List) and Cryptographic Operations (Decrypt and Encrypt)<li>Secret permissions: Secret Management Operations (Get, List, and Set)<li>Certificate (List and Get)|
+| **Disk encryption set (server-side encryption with CMK)** | If you're using VMs with server-side encryption that uses a CMK, you need a disk encryption set in both the source and destination regions. For more information, see [Create a disk encryption set](../virtual-machines/disks-enable-customer-managed-keys-portal.md#set-up-your-disk-encryption-set).<br/><br/> Moving between regions isn't supported if you're using a hardware security module (HSM keys) for customer-managed keys.|
+| **Target region quota** | The subscription needs enough quota to create the resources you're moving in the target region. If it doesn't have a quota, [request additional limits](../azure-resource-manager/management/azure-subscription-service-limits.md).|
+| **Target region charges** | Verify the pricing and charges that are associated with the target region to which you're moving the VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/).|
## Verify permissions in the key vault
For users who execute the script, set permissions for the following components:
Component | Permissions needed
 |
Secrets | *Set* <br></br> Select **Secret permissions** > **Secret Management Operations**, and then select **Set**.
-Keys <br></br> If you're using a KEK, you need these permissions in addition to the permissions for secrets. | *Get*, *Create*, and *Encrypt* <br></br> Select **Key Permissions** > **Key Management Operations**, and then select **Get** and **Create** . In **Cryptographic Operations**, select **Encrypt**.
+Keys <br></br> If you're using a KEK, you need these permissions in addition to the permissions for secrets. | *Get*, *Create*, and *Encrypt* <br></br> Select **Key Permissions** > **Key Management Operations**, and then select **Get** and **Create**. In **Cryptographic Operations**, select **Encrypt**.
<br>
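
For a vault that uses access policies, one hedged way to grant the permissions in the preceding table is with `Set-AzKeyVaultAccessPolicy`. The vault name and user principal name below are placeholders, and if your vault uses Azure RBAC instead of access policies, this sketch doesn't apply.

```azurepowershell-interactive
# Hypothetical sketch: grant the script user the secret and key permissions listed above.
Set-AzKeyVaultAccessPolicy -VaultName "<source-or-destination-vault>" `
  -UserPrincipalName "<user@contoso.com>" `
  -PermissionsToSecrets set `
  -PermissionsToKeys get,create,encrypt
```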
After the move, you can optionally delete resources in the source region.
After the move, you can manually delete the move collection and Site Recovery resources that you created during this process.

- The move collection is hidden by default. To see it you need to turn on hidden resources.
-- The cache storage has a lock that must be deleted, before it can be deleted.
+- The cache storage has a lock that must be deleted before the cache storage account itself can be deleted.
To delete your resources, do the following:
-1. Locate the resources in resource group ```RegionMoveRG-<sourceregion>-<target-region>```.
-1. Check to ensure that all the VMs and other source resources in the source region have been moved or deleted. This step ensures that there are no pending resources using them.
+1. Locate the resources in the resource group ```RegionMoveRG-<sourceregion>-<target-region>```.
+1. Check to ensure that all the VMs and other source resources in the source region have been moved or deleted. This step ensures that no pending resources are using them.
1. Delete the resources:

   - Move collection name: ```movecollection-<sourceregion>-<target-region>```
resource-mover Tutorial Move Region Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-powershell.md
In this tutorial, you learn how to:
> * Optionally remove resources in the source region after the move.

> [!NOTE]
-> Tutorials show the quickest path for trying out a scenario, and use default options.
+> Tutorials show the quickest path for trying out a scenario and use default options.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin. Then sign in to the [Azure portal](https://portal.azure.com).

## Prerequisites
-**Requirement** | **Description**
- |
-**Subscription permissions** | Check you have *Owner* access on the subscription containing the resources that you want to move<br/><br/> **Why do I need Owner access?** The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) (formerly known as Managed Service Identify (MSI)) that's trusted by the subscription. To create the identity, and to assign it the required role (Contributor or User Access administrator in the source subscription), the account you use to add resources needs *Owner* permissions on the subscription. [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles.
-**Resource Mover support** | [Review](common-questions.md) supported regions and other common questions.
-**VM support** | Check that any VMs you want to move are supported.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#windows-vm-support) supported Windows VMs.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#linux-vm-support) supported Linux VMs and kernel versions.<br/><br/> - Check supported [compute](support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.
-**SQL support** | If you want to move SQL resources, review the [SQL requirements list](tutorial-move-region-sql.md#check-sql-requirements).
-**Destination subscription** | The subscription in the destination region needs enough quota to create the resources you're moving in the target region. If it doesn't have quota, [request additional limits](../azure-resource-manager/management/azure-subscription-service-limits.md).
-**Destination region charges** | Verify pricing and charges associated with the target region to which you're moving VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help you.
+| Requirement | Description |
+| | |
+| **Subscription permissions** | Check that you have *Owner* access on the subscription containing the resources that you want to move.<br/><br/> **Why do I need Owner access?** The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) (formerly known as Managed Service Identity (MSI)) that's trusted by the subscription. To create the identity, and to assign it the required role (Contributor or User Access administrator in the source subscription), the account you use to add resources needs *Owner* permissions on the subscription. [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles. |
+| **Resource Mover support** | [Review](common-questions.md) supported regions and other common questions.|
+| **VM support** | Check that any VMs you want to move are supported.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#windows-vm-support) supported Windows VMs.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#linux-vm-support) supported Linux VMs and kernel versions.<br/><br/> - Check supported [compute](support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.|
+| **SQL support** | If you want to move SQL resources, review the [SQL requirements list](tutorial-move-region-sql.md#check-sql-requirements).|
+| **Destination subscription** | The subscription in the destination region needs enough quota to create the resources you're moving in the target region. If it doesn't have a quota, [request additional limits](../azure-resource-manager/management/azure-subscription-service-limits.md).|
+| **Destination region charges** | Verify pricing and charges associated with the target region to which you're moving VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help you. |
### Review PowerShell requirements

Most move resources operations are the same whether using the Azure portal or PowerShell, with a couple of exceptions.
-**Operation** | **Portal** | **PowerShell**
- | |
-**Create a move collection** | A move collection (a list of all the resources you're moving) is created automatically. Required identity permissions are assigned in the backend by the portal. | You use PowerShell cmdlets to:<br/><br/> - Create a resource group for the move collection and specify the location for it.<br/><br/> - Assign a managed identity to the collection.<br/><br/> - Add resources to the collection.
-**Remove a move collection** | You can't directly remove a move collection in the portal. | You use a PowerShell cmdlet to remove a move collection.
-**Resource move operations**<br/><br/> (Prepare, initiate move, commit etc.).| Single steps with automatic validation by Resource Mover. | PowerShell cmdlets to:<br/><br/> 1) Validate dependencies.<br/><br/> 2) Perform the move.
-**Delete source resources** | Directly in the Resource Mover portal. | PowerShell cmdlets at the resource-type level.
+| **Operation** | **Portal** | **PowerShell** |
+| | | |
+| **Create a move collection** | A move collection (a list of all the resources you're moving) is created automatically. Required identity permissions are assigned in the backend by the portal. | You use PowerShell cmdlets to:<br/><br/> - Create a resource group for the move collection and specify the location for it.<br/><br/> - Assign a managed identity to the collection.<br/><br/> - Add resources to the collection.|
+| **Remove a move collection** | You can't directly remove a move collection in the portal. | You use a PowerShell cmdlet to remove a move collection.|
+| **Resource move operations**<br/><br/> (Prepare, initiate move, commit, etc.).| Single steps with automatic validation by Resource Mover. | PowerShell cmdlets to:<br/><br/> 1) Validate dependencies.<br/><br/> 2) Perform the move.|
+| **Delete source resources** | Directly in the Resource Mover portal. | PowerShell cmdlets at the resource-type level. |
### Sample values

We're using these values in our script examples:
-**Setting** | **Value**
- |
-Subscription ID | subscription-id
-Source region | Central US
-Target region | West Central US
-Resource group (holding metadata for move collection) | RG-MoveCollection-demoRMS
-Move collection name | PS-centralus-westcentralus-demoRMS
-Resource group (source region) | PSDemoRM
-Resource group (target region) | PSDemoRM-target
-Resource Move service location | East US 2
-IdentityType | SystemAssigned
-VM to move | PSDemoVM
+| **Setting** | **Value** |
+| | |
+| Subscription ID | subscription-id |
+| Source region | Central US |
+| Target region | West Central US |
+| Resource group (holding metadata for move collection) | RG-MoveCollection-demoRMS |
+| Move collection name | PS-centralus-westcentralus-demoRMS |
+| Resource group (source region) | PSDemoRM |
+| Resource group (target region) | PSDemoRM-target |
+| Resource Move service location | East US 2 |
+| IdentityType | SystemAssigned |
+| VM to move | PSDemoVM |
## Sign in to Azure
Connect-AzAccount -Subscription "<subscription-id>"
## Set up the move collection
-The MoveCollection object stores metadata and configuration information about resources you want to move. To set up a move collection, you do the following:
+The MoveCollection object stores metadata and configuration information about the resources you want to move. To set up a move collection, you do the following:
- Create a resource group for the move collection.
- Register the service provider to the subscription, so that the MoveCollection resource can be created.
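
These setup steps map to a few cmdlets from the Az.Resources and Az.ResourceMover modules. The following sketch reuses this tutorial's sample values, but treat it as a starting point rather than the exact sequence; later steps also grant the collection's managed identity the roles it needs.

```azurepowershell-interactive
# Sketch: create the metadata resource group, register the provider, and create the move collection.
New-AzResourceGroup -Name "RG-MoveCollection-demoRMS" -Location "East US 2"

# Resource Mover resources are created under the Microsoft.Migrate provider namespace.
Register-AzResourceProvider -ProviderNamespace "Microsoft.Migrate"

New-AzResourceMoverMoveCollection -Name "PS-centralus-westcentralus-demoRMS" `
  -ResourceGroupName "RG-MoveCollection-demoRMS" `
  -SourceRegion "centralus" -TargetRegion "westcentralus" `
  -Location "eastus2" -IdentityType "SystemAssigned"
```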
Add resources as follows:
![Output text after retrieving the resource ID](./media/tutorial-move-region-powershell/output-retrieve-resource.png)
-2. Create the target resource settings object in accordance with the resource you're moving. In our case it's a VM.
+2. Create the target resource settings object per the resource you're moving. In our case, it's a VM.
   ```azurepowershell-interactive
   $targetResourceSettingsObj = New-Object Microsoft.Azure.PowerShell.Cmdlets.ResourceMover.Models.Api202101.VirtualMachineResourceSettings
Add the source resource group that contains resources you want to move to the mo
You usually need to prepare resources in the source region before the move. For example:

- To move stateless resources such as Azure virtual networks, network adapters, load balancers, and network security groups, you might need to export an Azure Resource Manager template.
-- To move stateful resources such as Azure VMs and SQL databases, you might need to start replicating resources from the source to destination region.
+- To move stateful resources such as Azure VMs and SQL databases, you might need to start replicating resources from the source to the destination region.
In this tutorial, since we're moving VMs, we need to prepare the source resource group, and then initiate and commit its move, before we can start preparing VMs.
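
As a hedged illustration of that sequence: the source resource group is itself a move resource in the collection, so you run the same three cmdlets against it before you touch the VMs. The move resource name `PSDemoRM` is assumed here; use whatever name the resource group was added under in your collection.

```azurepowershell-interactive
# Sketch: prepare, initiate move, and commit the source resource group before preparing VMs.
$rg  = "RG-MoveCollection-demoRMS"
$col = "PS-centralus-westcentralus-demoRMS"

Invoke-AzResourceMoverPrepare      -ResourceGroupName $rg -MoveCollectionName $col -MoveResource "PSDemoRM"
Invoke-AzResourceMoverInitiateMove -ResourceGroupName $rg -MoveCollectionName $col -MoveResource "PSDemoRM"
Invoke-AzResourceMoverCommit       -ResourceGroupName $rg -MoveCollectionName $col -MoveResource "PSDemoRM"
```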
After preparing and moving the source resource group, we can prepare VM resource
   ![Output text after initiating prepare of all resources](./media/tutorial-move-region-powershell/initiate-prepare-all.png)

   > [!NOTE]
- > You can provide the source resource ID instead of resource name as the input parameters for the Prepare cmdlet, as well as in the Initiate Move and Commit cmdlets. To do this, run:
+ > You can provide the source resource ID instead of the resource name as the input parameters for the Prepare cmdlet, as well as in the Initiate Move and Commit cmdlets. To do this, run:
   ```azurepowershell-interactive
   Invoke-AzResourceMoverPrepare -ResourceGroupName "RG-MoveCollection-demoRMS" -MoveCollectionName "PS-centralus-westcentralus-demoRMS" -MoveResourceInputType MoveResourceSourceId -MoveResource $('/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PSDemoRMS/providers/Microsoft.Network/networkSecurityGroups/PSDemoVM-nsg')
After preparing and moving the source resource group, we can prepare VM resource
## Discard or commit?
-After the initial move, you can decide whether you want to commit the move, or to discard it.
+After the initial move, you can decide whether you want to commit the move or discard it.
- **Discard**: You might discard a move if you're testing, and you don't want to actually move the source resource. Discarding the move returns the resource to a state of *Initiate move pending*. You can then initiate the move again if needed.
- **Commit**: Commit completes the move to the target region. After committing, a source resource will be in a state of *Delete source pending*, and you can decide if you want to delete it.
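
In PowerShell, the two outcomes map to `Invoke-AzResourceMoverDiscard` and `Invoke-AzResourceMoverCommit`. A hedged sketch, reusing this tutorial's sample names, is shown below; run one of the two calls for a given resource, not both.

```azurepowershell-interactive
# Sketch: discard a test move, or commit it to finish the move to the target region.
Invoke-AzResourceMoverDiscard -ResourceGroupName "RG-MoveCollection-demoRMS" `
  -MoveCollectionName "PS-centralus-westcentralus-demoRMS" -MoveResource "PSDemoVM"

Invoke-AzResourceMoverCommit -ResourceGroupName "RG-MoveCollection-demoRMS" `
  -MoveCollectionName "PS-centralus-westcentralus-demoRMS" -MoveResource "PSDemoVM"
```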
resource-mover Tutorial Move Region Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-virtual-machines.md
# Tutorial: Move Azure VMs across regions

In this article, learn how to move Azure VMs, and related network/storage resources, to a different Azure region, using [Azure Resource Mover](overview.md).
-.
- In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Optionally remove resources in the source region after the move.

> [!NOTE]
-> Tutorials show the quickest path for trying out a scenario, and use default options.
+> Tutorials show the quickest path for trying out a scenario and use default options.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin. Then sign in to the [Azure portal](https://portal.azure.com).

## Prerequisites
-**Requirement** | **Description**
- |
-**Resource Mover support** | [Review](common-questions.md) supported regions and other common questions.
-**Subscription permissions** | Check you have *Owner* access on the subscription containing the resources that you want to move<br/><br/> **Why do I need Owner access?** The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) (formerly known as Managed Service Identify (MSI)) that's trusted by the subscription. To create the identity, and to assign it the required role (Contributor or User Access administrator in the source subscription), the account you use to add resources needs *Owner* permissions on the subscription. [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles.
-**VM support** | Check that the VMs you want to move are supported.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#windows-vm-support) supported Windows VMs.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#linux-vm-support) supported Linux VMs and kernel versions.<br/><br/> - Check supported [compute](support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.
-**Destination subscription** | The subscription in the destination region needs enough quota to create the resources you're moving in the target region. If it doesn't have quota, [request additional limits](../azure-resource-manager/management/azure-subscription-service-limits.md).
-**Destination region charges** | Verify pricing and charges associated with the target region to which you're moving VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help you.
+
+| Requirement | Description |
+| | |
+| **Resource Mover support** | [Review](common-questions.md) supported regions and other common questions. |
+| **Subscription permissions** | Check that you have *Owner* access on the subscription containing the resources that you want to move.<br/><br/> **Why do I need Owner access?** The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) (formerly known as Managed Service Identity (MSI)) that's trusted by the subscription. To create the identity, and to assign it the required role (Contributor or User Access administrator in the source subscription), the account you use to add resources needs *Owner* permissions on the subscription. [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles.|
+| **VM support** | Check that the VMs you want to move are supported.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#windows-vm-support) supported Windows VMs.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#linux-vm-support) supported Linux VMs and kernel versions.<br/><br/> - Check supported [compute](support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.|
+| **Destination subscription** | The subscription in the destination region needs enough quota to create the resources you're moving in the target region. If it doesn't have a quota, [request additional limits](../azure-resource-manager/management/azure-subscription-service-limits.md).|
+| **Destination region charges** | Verify pricing and charges associated with the target region to which you're moving VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help you.|
## Prepare VMs
If you don't have an Azure subscription, create a [free account](https://azure.m
## Select resources
-Select resources you want to move.
+Select the resources you want to move.
- All supported resource types in resource groups within the selected source region are displayed. - Resources that have already been added for moving across regions aren't shown.
Select resources you want to move.
> [!NOTE] > - Added resources are in a *Prepare pending* state. > - The resource group for the VMs is added automatically.
-> - If you want to remove an resource from a move collection, the method for doing that depends on where you are in the move process. [Learn more](remove-move-resources.md).
+> - If you want to remove a resource from a move collection, the method for doing that depends on where you are in the move process. [Learn more](remove-move-resources.md).
## Resolve dependencies
Select resources you want to move.
2. If dependencies are found, click **Add dependencies**.

3. In **Add dependencies**, leave the default **Show all dependencies** option.
- - Show all dependencies iterates through all of the direct and indirect dependencies for a resource. For example, for a VM it shows the NIC, virtual network, network security groups (NSGs) etc.
- - Show first level dependencies only shows only direct dependencies. For example, for a VM it shows the NIC, but not the virtual network.
+   - **Show all dependencies** iterates through all of the direct and indirect dependencies for a resource. For example, for a VM it shows the NIC, virtual network, network security groups (NSGs), etc.
+   - **Show first-level dependencies only** shows only direct dependencies. For example, for a VM it shows the NIC, but not the virtual network.
4. Select the dependent resources you want to add > **Add dependencies**. Monitor progress in the notifications.
Select resources you want to move.
![Page to add additional dependencies](./media/tutorial-move-region-virtual-machines/add-additional-dependencies.png)

## Move the source resource group

Before you can prepare and move VMs, the VM resource group must be present in the target region.
With resources prepared, you can now initiate the move.
> [!NOTE]
> - For VMs, replica VMs are created in the target region. The source VM is shut down, and some downtime occurs (usually minutes).
> - Resource Mover recreates other resources using the ARM templates that were prepared. There's usually no downtime.
-> - After moving resources, they're in an *Commit move pending* state.
+> - After moving resources, they're in a *Commit move pending* state.
![Page showing resources in *Delete source pending* state](./media/tutorial-move-region-virtual-machines/delete-source-pending.png)

## Discard or commit?
-After the initial move, you can decide whether you want to commit the move, or to discard it.
+After the initial move, you can decide whether you want to commit the move or discard it.
- **Discard**: You might discard a move if you're testing, and you don't want to actually move the source resource. Discarding the move returns the resource to a state of *Initiate move pending*.
- **Commit**: Commit completes the move to the target region. After committing, a source resource will be in a state of *Delete source pending*, and you can decide if you want to delete it.
sentinel Automate Incident Handling With Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md
Automation rules apply to the following categories of use cases:
- Inspect the contents of an incident (alerts, entities, and other properties) and take further action by calling a playbook.
-- Automation rules can also be [the mechanism by which you run a playbook](whats-new.md#automation-rules-for-alerts-preview) in response to an **alert** *not associated with an incident*.
+- Automation rules can also be the mechanism by which you run a playbook in response to an **alert** *not associated with an incident*.
> [!IMPORTANT]
>
> **Automation rules for alerts** are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.

-- In short, automation rules streamline the use of automation in Microsoft Sentinel, enabling you to simplify complex workflows for your threat response orchestration processes.

## Components
sentinel Aws S3 Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/aws-s3-troubleshoot.md
The SQS didn't successfully read the S3 data.
SentinelHealth | take 20 ```
-1. If the health feature isn't enabled, [enable it](monitor-sentinel-health.md).
+1. If the health feature isn't enabled, [enable it](enable-monitoring.md).
## Data from the AWS S3 connector (or one of its data types) is seen in Microsoft Sentinel with a delay of more than 30 minutes
There might be errors in the health logs, or the health feature might not be enabled.
| take 20 ```
-1. If the health feature isn't enabled, [enable it](monitor-sentinel-health.md).
+1. If the health feature isn't enabled, [enable it](enable-monitoring.md).
## Next steps
sentinel Connect Logstash Data Connection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash-data-connection-rules.md
+
+ Title: Use Logstash to stream logs with pipeline transformations via DCR-based API
+description: Use Logstash to forward logs from external data sources into custom and standard tables in Microsoft Sentinel, and to configure the output with DCRs.
++ Last updated : 11/07/2022+++
+# Use Logstash to stream logs with pipeline transformations via DCR-based API
+
+> [!IMPORTANT]
+> Data ingestion using the Logstash output plugin with Data Collection Rules (DCRs) is currently in public preview. This feature is provided without a service level agreement. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Microsoft Sentinel's new Logstash output plugin supports pipeline transformations and advanced configuration via Data Collection Rules (DCRs). The plugin forwards any type of logs from external data sources into custom or standard tables in Microsoft Sentinel.
+
+In this article, you learn how to set up the new Logstash plugin to stream the data into Microsoft Sentinel using DCRs, with full control over the output schema. Learn how to **[deploy the plugin](#deploy-the-microsoft-sentinel-output-plugin-in-logstash)**.
+
+> [!NOTE]
+> A [previous version of the Logstash plugin](connect-logstash.md) allows you to connect data sources through Logstash via the Data Collection API.
+
+With the new plugin, you can:
+- Control the configuration of the column names and types.
+- Perform ingestion-time transformations like filtering or enrichment.
+- Ingest custom logs into a custom table, or ingest a Syslog input stream into the Microsoft Sentinel Syslog table.
+
+Ingestion into standard tables is limited only to [standard tables supported for custom logs ingestion](data-transformation.md#data-transformation-support-for-custom-data-connectors).
+
+To learn more about working with the Logstash data collection engine, see [Getting started with Logstash](https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html).
+
+## Overview
+
+### Architecture and background
++
+The Logstash engine consists of three components:
+
+- Input plugins: Customized collection of data from various sources.
+- Filter plugins: Manipulation and normalization of data according to specified criteria.
+- Output plugins: Customized sending of collected and processed data to various destinations.
+
+> [!NOTE]
+> - Microsoft supports only the Microsoft Sentinel-provided Logstash output plugin discussed here. The current plugin is named **[microsoft-sentinel-logstash-output-plugin](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-sentinel-logstash-output-plugin)**, v1.0.0. You can [open a support ticket](https://portal.azure.com/#create/Microsoft.Support) for any issues regarding the output plugin.
+>
+> - Microsoft does not support third-party Logstash output plugins for Microsoft Sentinel, or any other Logstash plugin or component of any type.
+>
+> - See the [prerequisites](#prerequisites) for the plugin's Logstash version support.
+
+The Microsoft Sentinel output plugin for Logstash sends JSON-formatted data to your Log Analytics workspace, using the Log Analytics HTTP Data Collector REST API. The data is ingested into custom logs.
+
+- Learn more about the [Log Analytics REST API](/rest/api/loganalytics/create-request).
+- Learn more about [custom logs](../azure-monitor/agents/data-sources-custom-logs.md).
+
+## Deploy the Microsoft Sentinel output plugin in Logstash
+
+**To set up the plugin, follow these steps**:
+
+1. Review the [prerequisites](#prerequisites)
+1. [Install the plugin](#install-the-plugin)
+1. [Create a sample file](#create-a-sample-file)
+1. [Create the required DCR-related resources](#create-the-required-dcr-resources)
+1. [Configure Logstash configuration file](#configure-logstash-configuration-file)
+1. [Restart Logstash](#restart-logstash)
+1. [View incoming logs in Microsoft Sentinel](#view-incoming-logs-in-microsoft-sentinel)
+1. [Monitor output plugin audit logs](#monitor-output-plugin-audit-logs)
+
+### Prerequisites
+
+- Install a supported version of Logstash. The plugin supports:
+ - Logstash version 7.0 to 7.17.6.
+ - Logstash version 8.0 to 8.4.2.
+
+ > [!NOTE]
+    > If you use Logstash 8, we recommend that you [disable ECS in the pipeline](https://www.elastic.co/guide/en/logstash/8.4/ecs-ls.html).
+
+- Verify that you have a Log Analytics workspace with at least contributor rights.
+- Verify that you have permissions to create DCR objects in the workspace.
+
+### Install the plugin
+
+The Microsoft Sentinel output plugin is available in the Logstash collection.
+
+- Follow the instructions in the Logstash [Working with plugins](https://www.elastic.co/guide/en/logstash/current/working-with-plugins.html) document to install the **[microsoft-sentinel-logstash-output-plugin](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-sentinel-logstash-output-plugin)** plugin.
+- If your Logstash system does not have Internet access, follow the instructions in the Logstash [Offline Plugin Management](https://www.elastic.co/guide/en/logstash/current/offline-plugins.html) document to prepare and use an offline plugin pack. (This will require you to build another Logstash system with Internet access.)
+
+### Create a sample file
+
+In this section, you create a sample file in one of these scenarios:
+
+- [Create a sample file for custom logs](#create-a-sample-file-for-custom-logs)
+- [Create a sample file to ingest logs into the Syslog table](#create-a-sample-file-to-ingest-logs-into-the-syslog-table)
+
+#### Create a sample file for custom logs
+
+In this scenario, you configure the Logstash input plugin to send events to Microsoft Sentinel. For this example, we use the generator input plugin to simulate events. You can use any other input plugin.
+
+In this example, the Logstash configuration file looks like this:
+
+```
+input {
+ generator {
+ lines => [
+ "This is a test log message"
+ ]
+ count => 10
+ }
+}
+```
+
+1. Copy the output plugin configuration below to your Logstash configuration file.
+
+ ```
+ output {
+ microsoft-sentinel-logstash-output-plugin {
+ create_sample_file => true
+ sample_file_path => "<enter the path to the file in which the sample data will be written>" #for example: "c:\\temp" (for windows) or "/tmp" for Linux.
+ }
+ }
+ ```
+1. Make sure that the referenced file path exists, and then start Logstash to create the sample file.
+
+ The plugin writes ten records to a sample file named `sampleFile<epoch seconds>.json` in the configured path. For example: *c:\temp\sampleFile1648453501.json*.
+ Here is part of a sample file that the plugin creates:
+
+ ```
+ [
+ {
+ "host": "logstashMachine",
+ "sequence": 0,
+ "message": "This is a test log message",
+ "ls_timestamp": "2022-03-28T17:45:01.690Z",
+ "ls_version": "1"
+ },
+ {
+ "host": "logstashMachine",
+ "sequence": 1
+ ...
+
+ ]
+ ```
+
+ The plugin automatically adds these properties to every record:
+ - `ls_timestamp`: The time when the record is received from the input plugin
+ - `ls_version`: The Logstash pipeline version.
+
+ You can remove these fields when you [create the DCR](#create-the-required-dcr-resources).
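+
+    For example, in the DCR's transformation you could use a KQL snippet along these lines (a sketch; the column names come from the sample file above, and the exact transformation depends on the columns you defined) to map `ls_timestamp` onto the required `TimeGenerated` column and drop both helper fields:
+
+    ```kusto
+    source
+    | extend TimeGenerated = todatetime(ls_timestamp)
+    | project-away ls_timestamp, ls_version
+    ```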
+
+#### Create a sample file to ingest logs into the Syslog table
+
+In this scenario, you configure the Logstash input plugin to send syslog events to Microsoft Sentinel.
+
+1. If you don't already have syslog messages forwarded into your Logstash machine, you can use the logger command to generate messages. For example (for Linux):
+
+ ```
+    logger -p local4.warn --rfc3164 --tcp -t CEF: "0|Microsoft|Device|cef-test|example|data|1|here is some more data for the example" -P 514 -d -n 127.0.0.1
+    ```
+
+    Here is an example for the Logstash input plugin:
+
+    ```
+    input {
+        syslog {
+            port => 514
+        }
+    }
+    ```
+1. Copy the output plugin configuration below to your Logstash configuration file.
+
+ ```
+ output {
+ microsoft-sentinel-logstash-output-plugin {
+ create_sample_file => true
+ sample_file_path => "<enter the path to the file in which the sample data will be written>" #for example: "c:\\temp" (for windows) or "/tmp" for Linux.
+ }
+ }
+ ```
+1. Make sure that the file path exists, and then start Logstash to create the sample file.
+
+ The plugin writes ten records to a sample file named `sampleFile<epoch seconds>.json` in the configured path. For example: *c:\temp\sampleFile1648453501.json*.
+ Here is part of a sample file that the plugin creates:
+ ```
+ [
+ {
+ "logsource": "logstashMachine",
+ "facility": 20,
+ "severity_label": "Warning",
+ "severity": 4,
+ "timestamp": "Apr 7 08:26:04",
+ "program": "CEF:",
+ "host": "127.0.0.1",
+ "facility_label": "local4",
+ "priority": 164,
+    "message": "0|Microsoft|Device|cef-test|example|data|1|here is some more data for the example",
+ "ls_timestamp": "2022-04-07T08:26:04.000Z",
+ "ls_version": "1"
+ }
+ ]
+
+ ```
+ The plugin automatically adds these properties to every record:
+ - `ls_timestamp`: The time when the record is received from the input plugin
+ - `ls_version`: The Logstash pipeline version.
+
+ You can remove these fields when you [create the DCR](#create-the-required-dcr-resources).
+
+### Create the required DCR resources
+
+To configure the Microsoft Sentinel DCR-based Logstash plugin, you first need to create the DCR-related resources.
+
+In this section, you create resources to use for your DCR, in one of these scenarios:
+- [Create DCR resources for ingestion into a custom table](#create-dcr-resources-for-ingestion-into-a-custom-table)
+- [Create DCR resources for ingestion into a standard table](#create-dcr-resources-for-ingestion-into-a-standard-table)
+
+#### Create DCR resources for ingestion into a custom table
+
+To ingest the data to a custom table, follow these steps (based on the [Send data to Azure Monitor Logs using REST API (Azure portal) tutorial](../azure-monitor/logs/tutorial-logs-ingestion-portal.md)):
+
+1. Review the [prerequisites](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#prerequisites).
+1. [Configure the application](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#configure-application).
+1. [Create data collection endpoint](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-data-collection-endpoint).
+1. [Add custom log table](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#add-custom-log-table).
+1. [Parse and filter sample data](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#parse-and-filter-sample-data) using [the sample file you created in the previous section](#create-a-sample-file).
+1. [Collect information from DCR](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#collect-information-from-dcr).
+1. [Assign permissions to DCR](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#assign-permissions-to-dcr).
+
+ Skip the Send sample data step.
+
+If you come across any issues, see the [troubleshooting steps](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#troubleshooting).
+
+#### Create DCR resources for ingestion into a standard table
+
+To ingest the data to a standard table like Syslog or CommonSecurityLog, you use a process based on the [Send data to Azure Monitor Logs using REST API (Resource Manager templates) tutorial](../azure-monitor/logs/tutorial-logs-ingestion-api.md). While the tutorial explains how to ingest data into a custom table, you can easily adjust the process to ingest data into a standard table. The steps below indicate relevant changes in the steps.
+
+1. Review the [prerequisites](../azure-monitor/logs/tutorial-logs-ingestion-api.md#prerequisites).
+1. [Collect workspace details](../azure-monitor/logs/tutorial-logs-ingestion-api.md#collect-workspace-details).
+1. [Configure an application](../azure-monitor/logs/tutorial-logs-ingestion-api.md#configure-an-application).
+
+ Skip the Create new table in Log Analytics workspace step. This step isn't relevant when ingesting data into a standard table, because the table is already defined in Log Analytics.
+
+1. [Create data collection endpoint](../azure-monitor/logs/tutorial-logs-ingestion-api.md#create-a-data-collection-endpoint).
+1. [Create the DCR](../azure-monitor/logs/tutorial-logs-ingestion-api.md#create-a-data-collection-rule). In this step:
+ - Provide [the sample file you created in the previous section](#create-a-sample-file).
+ - Use the sample file you created to define the `streamDeclarations` property. Each of the fields in the sample file should have a corresponding column with the same name and the appropriate type (see the [example](#example-dcr-that-ingests-data-into-the-syslog-table) below).
+ - Configure the value of the `outputStream` property with the name of the standard table instead of the custom table. Unlike custom tables, standard table names don't have the `_CL` suffix.
+ - The prefix of the table name should be `Microsoft-` instead of `Custom-`. In our example, the `outputStream` property value is `Microsoft-Syslog`.
+1. [Assign permissions to a DCR](../azure-monitor/logs/tutorial-logs-ingestion-api.md#assign-permissions-to-a-dcr).
+
+ Skip the Send sample data step.
+
+If you come across any issues, see the [troubleshooting steps](../azure-monitor/logs/tutorial-logs-ingestion-api.md#troubleshooting).
+
+##### Example: DCR that ingests data into the Syslog table
+
+Note that:
+- The `streamDeclarations` column names and types should be the same as the sample file fields, but you do not have to specify all of them. For example, in the DCR below, the `PRI`, `type` and `ls_version` fields are omitted from the `streamDeclarations` column.
+- The `dataflows` property transforms the input to the Syslog table format, and sets the `outputStream` to `Microsoft-Syslog`.
++
+```
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "dataCollectionRuleName": {
+ "type": "String",
+ "metadata": {
+ "description": "Specifies the name of the Data Collection Rule to create."
+ }
+ },
+    "location": {
+      "defaultValue": "[resourceGroup().location]",
+      "type": "String",
+      "metadata": {
+        "description": "Specifies the location in which to create the Data Collection Rule."
+      }
+    },
+ "workspaceResourceId": {
+ "type": "String",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Log Analytics workspace to use."
+ }
+ },
+ "endpointResourceId": {
+ "type": "String",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Data Collection Endpoint to use."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "apiVersion": "2021-09-01-preview",
+ "name": "[parameters('dataCollectionRuleName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "dataCollectionEndpointId": "[parameters('endpointResourceId')]",
+ "streamDeclarations": {
+ "Custom-SyslogStream": {
+ "columns": [
+ {
+ "name": "ls_timestamp",
+ "type": "datetime"
+ }, {
+ "name": "timestamp",
+ "type": "datetime"
+ },
+ {
+ "name": "message",
+ "type": "string"
+ },
+ {
+ "name": "facility_label",
+ "type": "string"
+ },
+ {
+ "name": "severity_label",
+ "type": "string"
+ },
+ {
+ "name": "host",
+ "type": "string"
+ },
+ {
+ "name": "logsource",
+ "type": "string"
+ }
+ ]
+ }
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "[parameters('workspaceResourceId')]",
+ "name": "clv2ws1"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-SyslogStream"
+ ],
+ "destinations": [
+ "clv2ws1"
+ ],
+ "transformKql": "source | project TimeGenerated = ls_timestamp, EventTime = todatetime(timestamp), Computer = logsource, HostName = logsource, HostIP = host, SyslogMessage = message, Facility = facility_label, SeverityLevel = severity_label",
+ "outputStream": "Microsoft-Syslog"
+ }
+ ]
+ }
+ }
+ ],
+ "outputs": {
+ "dataCollectionRuleId": {
+ "type": "String",
+ "value": "[resourceId('Microsoft.Insights/dataCollectionRules', parameters('dataCollectionRuleName'))]"
+ }
+ }
+}
+```
+
+### Configure Logstash configuration file
+
+To configure the Logstash configuration file to ingest the logs into a custom table, retrieve these values:
+
+|Field |How to retrieve |
+|||
+|`client_app_Id` |The `Application (client) ID` value you create in step 3 when you [create the DCR resources](#create-the-required-dcr-resources), according to the tutorial you used in this section. |
+|`client_app_secret` |The client secret value you create in step 5 when you [create the DCR resources](#create-the-required-dcr-resources), according to the tutorial you used in this section. |
+|`tenant_id` |Your subscription's tenant ID. You can find the tenant ID under **Home > Azure Active Directory > Overview > Basic Information**. |
+|`data_collection_endpoint` |The value of the `logsIngestion` URI in step 3 when you [create the DCR resources](#create-the-required-dcr-resources), according to the tutorial you used in this section. |
+|`dcr_immutable_id` |The value of the DCR `immutableId` in step 6 when you [create the DCR resources](#create-the-required-dcr-resources), according to the tutorial you used in this section. |
+|`dcr_stream_name` |For custom tables, as explained in step 6 when you [create the DCR resources](#create-dcr-resources-for-ingestion-into-a-custom-table), go to the JSON view of the DCR, and copy the `dataFlows` > `streams` property. See the `dcr_stream_name` in the [example](#example-output-plugin-configuration-section) below.<br><br>For standard tables, the value is `Custom-SyslogStream`. |
+
+After you retrieve the required values:
+
+1. Replace the output section of the [Logstash configuration file](#create-a-sample-file) you created in the previous step with the example below.
+1. Replace the placeholder strings in the [example](#example-output-plugin-configuration-section) below with the values you retrieved.
+1. Make sure you change the `create_sample_file` attribute to `false`.
+
+#### Optional configuration
+
+|Field |How to retrieve |Default value |
+||||
+|`key_names` |An array of strings. Provide this field if you want to send a subset of the columns to Log Analytics. |None (field is empty) |
+|`plugin_flush_interval` |Defines the maximum time difference (in seconds) between sending two messages to Log Analytics. |`5` |
+|`retransmission_time` |Sets the amount of time in seconds to wait before retransmitting messages after a failed send. |`10` |
+|`compress_data` |When this field is `True`, the event data is compressed before it's sent using the API. Recommended for high throughput pipelines. |`False` |
+
+#### Example: Output plugin configuration section
+
+```
+output {
+  microsoft-sentinel-logstash-output-plugin {
+    client_app_Id => "<enter your client_app_id value here>"
+    client_app_secret => "<enter your client_app_secret value here>"
+    tenant_id => "<enter your tenant id here>"
+    data_collection_endpoint => "<enter your DCE logsIngestion URI here>"
+    dcr_immutable_id => "<enter your DCR immutableId here>"
+    dcr_stream_name => "<enter your stream name here>"
+    create_sample_file => false
+    sample_file_path => "c:\\temp"
+  }
+}
+```
+To set other parameters for the Microsoft Sentinel Logstash output plugin, see the output plugin's readme file.
+
+> [!NOTE]
+> For security reasons, we recommend that you don't store the `client_app_Id`, `client_app_secret`, `tenant_id`, `data_collection_endpoint`, and `dcr_immutable_id` values in plain text in your Logstash configuration file. Instead, store this sensitive information in a [Logstash KeyStore](https://www.elastic.co/guide/en/logstash/current/keystore.html#keystore).
+
+### Restart Logstash
+
+Restart Logstash with the updated output plugin configuration and see that data is ingested to the right table according to your DCR configuration.
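+
+For example, if you used the Syslog scenario above, a quick check that records are arriving might look like the following query (the 15-minute window is an arbitrary choice):
+
+```kusto
+Syslog
+| where TimeGenerated > ago(15m)
+| sort by TimeGenerated desc
+| take 10
+```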
+
+### View incoming logs in Microsoft Sentinel
+
+1. Verify that messages are being sent to the output plugin.
+
+1. From the Microsoft Sentinel navigation menu, click **Logs**. Under the **Tables** heading, expand the **Custom Logs** category. Find and click the name of the table you specified (with a `_CL` suffix) in the configuration.
+
+ :::image type="content" source="./media/connect-logstash/logstash-custom-logs-menu.png" alt-text="Screenshot of log stash custom logs.":::
+
+1. To see records in the table, query the table by using the table name as the schema.
+
+ :::image type="content" source="./media/connect-logstash/logstash-custom-logs-query.png" alt-text="Screenshot of a log stash custom logs query.":::
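+
+    For example, if the custom table you defined is named `LogstashDemo_CL` (a hypothetical name; substitute the table name from your own DCR), the query can be as simple as:
+
+    ```kusto
+    LogstashDemo_CL
+    | take 10
+    ```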
+
+## Monitor output plugin audit logs
+
+To monitor the connectivity and activity of the Microsoft Sentinel output plugin, enable the appropriate Logstash log file. See the [Logstash Directory Layout](https://www.elastic.co/guide/en/logstash/current/dir-layout.html#dir-layout) document for the log file location.
+
+If you are not seeing any data in this log file, generate and send some events locally (through the input and filter plugins) to make sure the output plugin is receiving data. Microsoft provides support only for issues relating to the output plugin.
+
+## Limitations
+
+- Ingestion into standard tables is limited only to [standard tables supported for custom logs ingestion](data-transformation.md#data-transformation-support-for-custom-data-connectors).
+- The columns of the input stream in the `streamDeclarations` property must start with a letter. If you start a column with other characters (for example `@` or `_`), the operation fails.
+- The `TimeGenerated` datetime field is required. You must include this field in the KQL transform.
+- For additional possible issues, review the [troubleshooting section](../azure-monitor/logs/tutorial-logs-ingestion-api.md#troubleshooting) in the tutorial.
+
+## Next steps
+
+In this article, you learned how to use Logstash to connect external data sources to Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles:
+- Learn how to [get visibility into your data and potential threats](get-visibility.md).
+- Get started detecting threats with Microsoft Sentinel, using [built-in](detect-threats-built-in.md) or [custom](detect-threats-custom.md) rules.
sentinel Connect Logstash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash.md
Title: Connect data sources through Logstash to Microsoft Sentinel | Microsoft Docs
-description: Learn how to use Logstash to forward logs from external data sources to Microsoft Sentinel.
+ Title: Use Logstash to stream logs with HTTP Data Collection API (legacy)
+description: Learn how to use Logstash to forward logs from external data sources to Microsoft Sentinel using the HTTP Data Collection API.
Last updated 11/09/2021
-# Use Logstash to connect data sources to Microsoft Sentinel
-
+# Use Logstash to stream logs with HTTP Data Collection API (legacy)
> [!IMPORTANT] > Data ingestion using the Logstash output plugin is currently in public preview. This feature is provided without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Using Microsoft Sentinel's output plugin for the **Logstash data collection engine**, you can send any type of log you want through Logstash directly to your Log Analytics workspace in Microsoft Sentinel. Your logs will be sent to a custom table that you will define using the output plugin.
+> [!NOTE]
+> A [newer version of the Logstash plugin](connect-logstash-data-connection-rules.md) can forward logs from external data sources into custom and standard tables using the DCR-based API. The new plugin allows full control over the output schema, including the configuration of the column names and types.
+
+Using Microsoft Sentinel's output plugin for the **Logstash data collection engine**, you can send any type of log you want through Logstash directly to your Log Analytics workspace in Microsoft Sentinel. Your logs will be sent to a custom table that you define using the output plugin. This version of the plugin uses the HTTP Data Collection API.
To learn more about working with the Logstash data collection engine, see [Getting started with Logstash](https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html).
To learn more about working with the Logstash data collection engine, see [Getti
### Architecture and background
-![Diagram of the Log stash architecture.](./media/connect-logstash/logstash-architecture.png)
The Logstash engine consists of three components:
sentinel Data Type Cloud Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-type-cloud-support.md
+
+ Title: Support for Microsoft Sentinel connector data types in different clouds
+description: This article describes the types of clouds that affect data streaming from the different connectors that Microsoft Sentinel supports.
++ Last updated : 11/14/2022+++
+# Support for data types in Microsoft Sentinel across different clouds
+
+Microsoft Sentinel data connectors use data stored in various cloud environments, like the Microsoft 365 Commercial cloud or the Government Community Cloud (GCC).
+
+This article describes the types of clouds that affect the supported data types for the different connectors that Microsoft Sentinel supports. Specifically, support varies for different Microsoft 365 Defender connector data types in different GCC environments.
+
+## Microsoft cloud types
+
+|Name |Also named|Description |Learn more |
+|||||
+|Azure Commercial |Azure, Azure Public |The standard Microsoft cloud. Most of the enterprises in the private market, academic institutions and home Office 365 tenants reside in a Commercial environment.<br><br>Different tools help meet the Microsoft 365 Commercial compliance and security needs. For example: Intune, Microsoft Purview compliance portal, Microsoft Purview Information Protection, and more. |[Microsoft 365 integration](../security/fundamentals/feature-availability.md#microsoft-365-integration) |
+|Government Community Cloud (GCC) |GCC-M, GCC Moderate |A government-focused copy of Microsoft 365 Commercial environment. While GCC contains similar features to the Microsoft 365 Commercial environment, GCC is subject to the FedRAMP Moderate policy. |[Government Community Cloud](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc) |
+|Department of Defense (DoD) | |Originally created for internal use by the Department of Defense. DoD is the only environment that meets DoD SRG levels 5 and 6. Other clouds described in this article don't support these SRG levels. |[GCC High and DoD](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod) |
+|GCC-High |GCC High |Technically, GCC High is a copy of a DoD environment, but GCC High exists in its own sovereign environment.<br><br>GCC High (and above) stores the data in Azure Government, so it is physically segregated from the commercial services. |[GCC High and DoD](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc-high-and-dod) |
+
+## Microsoft clouds and Microsoft Sentinel
+
+Microsoft Sentinel is built on Microsoft Azure environments, both commercial and government. Office 365 environments, like GCC, GCC-High, and DoD, interface at different levels with Azure environments.
+
+This diagram shows the hierarchy of the Office 365 and Microsoft Azure clouds and how they relate to each other and to Microsoft Sentinel.
++
+Because of this complexity, different types of data streaming into Microsoft Sentinel may or may not be fully supported.
+
+## How cloud support affects data from Microsoft 365 Defender connectors
+
+Your environment ingests data from multiple connectors. The type of cloud you use affects Microsoft Sentinel's ability to ingest and display data from these connectors, like logs, alerts, device events, and more.
+
+We have identified support discrepancies between the different clouds for the data streaming from these connectors:
+
+- Microsoft Defender for Endpoint
+- Microsoft Defender for Office 365
+- Microsoft Defender for Identity
+- Microsoft Defender for Cloud Apps
+- Azure Active Directory Identity Protection
+
+Read more about [support for Microsoft Defender 365 connector data types in different clouds](microsoft-365-defender-cloud-support.md).
+
+## Next steps
+
+In this article, you learned about the types of clouds that affect the supported data types for the different connectors that Microsoft Sentinel supports.
+
+- To get started with Microsoft Sentinel, you need a subscription to Microsoft Azure. If you don't have a subscription, you can sign up for a [free trial](https://azure.microsoft.com/free/).
+- Learn how to [onboard your data to Microsoft Sentinel](quickstart-onboard.md) and [get visibility into your data and potential threats](get-visibility.md).
sentinel Enable Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enable-monitoring.md
+
+ Title: Turn on health monitoring in Microsoft Sentinel
+description: Monitor supported data connectors by using the SentinelHealth data table.
+ Last updated : 11/07/2022+++++
+# Turn on health monitoring for Microsoft Sentinel (preview)
+
+Monitor the health of supported Microsoft Sentinel resources by turning on the health monitoring feature in Microsoft Sentinel's **Settings** page. Get insights on health drifts, such as the latest failure events or changes from success to failure states, and use this information to create notifications and other automated actions.
+
+To get health data from the *SentinelHealth* data table, you must first turn on the Microsoft Sentinel health feature for your workspace.
+
+When the health feature is turned on, the *SentinelHealth* data table is created at the first success or failure event generated for supported resource types.
+
+The following resource types are currently supported:
+- Data connectors
+- Automation rules
+- Playbooks (Azure Logic Apps workflows)
+ > [!NOTE]
+ > When monitoring playbook health, you'll also need to collect Azure Logic Apps diagnostic events from your playbooks in order to get the full picture of your playbook activity. See [**Monitor the health of your automation rules and playbooks**](monitor-automation-health.md) for more information.
+
+To configure the retention time for your health events, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
+
+> [!IMPORTANT]
+>
+> The *SentinelHealth* data table is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Turn on health monitoring for your workspace
+
+1. In Microsoft Sentinel, under the **Configuration** menu on the left, select **Settings**.
+
+1. Select **Settings** from the banner.
+
+1. Scroll down to the **Health monitoring** section that appears below, and select it to expand.
+
+1. Select **Configure Diagnostic Settings**.
+
+ :::image type="content" source="media/enable-monitoring/enable-health-monitoring.png" alt-text="Screenshot shows how to get to the health monitoring settings.":::
+
+1. In the **Diagnostic settings** screen, select **+ Add diagnostic setting**.
+
+ - In the **Diagnostic setting name** field, enter a meaningful name for your setting.
+
+ - In the **Logs** column, select the appropriate **Categories** for the resource types you want to monitor, for example **Data Collection - Connectors**.
+
+ - Under **Destination details**, select **Send to Log Analytics workspace**, and select your **Subscription** and **Log Analytics workspace** from the dropdown menus.
+
+1. Select **Save** on the top banner to save your new setting.
+
+The *SentinelHealth* data table is created at the first success or failure event generated for the selected resources.
+
+## Access the *SentinelHealth* table
+
+In the Microsoft Sentinel **Logs** page, run a query on the *SentinelHealth* table. For example:
+
+```kusto
+SentinelHealth
+ | take 20
+```
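+
+As another example, the following query (a sketch that uses columns documented in the [SentinelHealth table schema](health-table-reference.md)) returns the most recent status reported by each monitored resource over the last day:
+
+```kusto
+SentinelHealth
+| where TimeGenerated > ago(1d)
+| summarize arg_max(TimeGenerated, Status, Description) by SentinelResourceName, SentinelResourceType
+```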
+
+## Next steps
+
+- Learn what [health monitoring in Microsoft Sentinel](health-audit.md) can do for you.
+- [Monitor the health of your Microsoft Sentinel data connectors](monitor-data-connector-health.md).
+- [Monitor the health of your Microsoft Sentinel automation rules](monitor-automation-health.md).
+- See more information about the [*SentinelHealth* table schema](health-table-reference.md).
sentinel Health Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/health-audit.md
+
+ Title: Health monitoring in Microsoft Sentinel
+description: Learn about the Microsoft Sentinel health and audit feature, which monitors service health drifts and user actions.
+++ Last updated : 08/19/2022+++
+# Health monitoring in Microsoft Sentinel
+
+Microsoft Sentinel is a critical service for monitoring and ensuring your organization's information security, so you'll want to rest assured that it's always running smoothly and that its many moving parts are functioning as intended. You might also want to configure notifications of health drifts for relevant stakeholders who can take action. For example, you can configure email or Microsoft Teams messages to be sent to operations teams, managers, or officers, launch new tickets in your ticketing system, and so on.
+
+This article describes how Microsoft Sentinel's health monitoring feature lets you monitor the activity of some of the service's key resources.
+
+## Description
+
+This section describes the function and use cases of the health monitoring components.
+
+### Data storage
+
+Health data is collected in the *SentinelHealth* table in your Log Analytics workspace. The primary way you'll use this data is by querying the table.
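+
+For example, a query along these lines (a sketch using the documented *SentinelHealth* columns) gives a quick overview of how many health events each resource type has reported over the past week, broken down by status:
+
+```kusto
+SentinelHealth
+| where TimeGenerated > ago(7d)
+| summarize Events = count() by SentinelResourceType, Status
+```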
+
+> [!IMPORTANT]
+>
+> - The *SentinelHealth* data table is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> - When monitoring the health of **playbooks**, you'll also need to capture Azure Logic Apps diagnostic events from your playbooks, in addition to the *SentinelHealth* data, in order to get the full picture of your playbook activity. Azure Logic Apps diagnostic data is collected in the *AzureDiagnostics* table in your workspace.
+
+### Use cases
+
+**Is the data connector running correctly?**
+
+[Is the data connector receiving data](./monitor-data-connector-health.md)? For example, if you've instructed Microsoft Sentinel to run a query every 5 minutes, you want to check whether that query is being performed, how it's performing, and whether there are any risks or vulnerabilities related to the query.
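+
+For example, a query like the following (a sketch using columns from the [SentinelHealth table schema](health-table-reference.md)) surfaces the connectors that most recently reported failures:
+
+```kusto
+SentinelHealth
+| where SentinelResourceType == "Data connector"
+| where Status == "Failure"
+| summarize LastFailure = max(TimeGenerated), Failures = count() by SentinelResourceName, Reason
+```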
+
+**Did an automation rule run as expected?**
+
+[Did my automation rule run when it was supposed to](./monitor-automation-health.md) - that is, when its conditions were met? Did all the actions in the automation rule run successfully?
+
+## How Microsoft Sentinel presents health data
+
+To dive into the health data that Microsoft Sentinel generates, you can:
+
+- Run queries on the *SentinelHealth* data table from the Microsoft Sentinel **Logs** blade.
+ - [Data connectors](monitor-data-connector-health.md#run-queries-to-detect-health-drifts)
+ - [Automation rules and playbooks](monitor-automation-health.md#get-the-complete-automation-picture) (join query with Azure Logic Apps diagnostics)
+
+- Use the health monitoring workbooks provided in Microsoft Sentinel.
+ - [Data connectors](monitor-data-connector-health.md#use-the-health-monitoring-workbook)
+ - [Automation rules and playbooks](monitor-automation-health.md#use-the-health-monitoring-workbook)
+
+- Export the data into various destinations, like your Log Analytics workspace, archiving to a storage account, and more. Learn about the [supported destinations](../azure-monitor/essentials/diagnostic-settings.md) for your logs.
+
+## Next steps
+- [Turn on health monitoring](enable-monitoring.md) in Microsoft Sentinel.
+- Monitor the health of your [automation rules and playbooks](monitor-automation-health.md).
+- Monitor the health of your [data connectors](monitor-data-connector-health.md).
+- See more information about the [*SentinelHealth* table schema](health-table-reference.md).
sentinel Health Table Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/health-table-reference.md
+
+ Title: SentinelHealth tables reference
+description: Learn about the fields in the SentinelHealth tables, used for health monitoring and analysis.
+++ Last updated : 11/08/2022+++
+# SentinelHealth tables reference
+
+This article describes the fields in the *SentinelHealth* table used for monitoring the health of Microsoft Sentinel resources. With the Microsoft Sentinel [health monitoring feature](health-audit.md), you can keep tabs on the proper functioning of your SIEM and get information on any health drifts in your environment.
+
+Learn how to query and use the health table for deeper monitoring and visibility of actions in your environment:
+- For [data connectors](monitor-data-connector-health.md)
+- For [automation rules and playbooks](monitor-automation-health.md)
+
+> [!IMPORTANT]
+>
+> The *SentinelHealth* data table is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+Microsoft Sentinel's health monitoring feature covers different kinds of resources, such as [data connectors](monitor-data-connector-health.md) and [automation rules](monitor-automation-health.md). Many of the data fields in the following tables apply across resource types, but some have specific applications for each type. The descriptions below will indicate one way or the other.
+
+## SentinelHealth table columns schema
+
+The following table describes the columns and data generated in the SentinelHealth data table:
+
+| ColumnName | ColumnType | Description |
+| | -- | - |
+| **TenantId** | String | The tenant ID for your Microsoft Sentinel workspace. |
+| **TimeGenerated** | Datetime | The time at which the health event occurred. |
+| <a name="operationname_health"></a>**OperationName** | String | The health operation. Possible values depend on the resource type.<br>See [Operation names for different resource types](#operation-names-for-different-resource-types) for details. |
+| <a name="sentinelresourceid_health"></a>**SentinelResourceId** | String | The unique identifier of the resource on which the health event occurred, and its associated Microsoft Sentinel workspace. |
+| **SentinelResourceName** | String | The resource name. |
+| <a name="status_health"></a>**Status** | String | Indicates the overall result of the operation. Possible values depend on the operation name.<br>See [Operation names for different resource types](#operation-names-for-different-resource-types) for details. |
+| **Description** | String | Describes the operation, including extended data as needed. For failures, this can include details of the failure reason. |
+| **Reason** | Enum | Shows a basic reason or error code for the failure of the resource. Possible values depend on the resource type. More detailed reasons can be found in the **Description** field. |
+| **WorkspaceId** | String | The workspace GUID on which the health issue occurred. The full Azure Resource Identifier is available in the [SentinelResourceID](#sentinelresourceid_health) column. |
+| **SentinelResourceType** | String | The Microsoft Sentinel resource type being monitored.<br>Possible values: `Data connector`, `Automation rule`, `Playbook` |
+| **SentinelResourceKind** | String | A resource classification within the resource type.<br>- For data connectors, this is the type of connected data source. |
+| **RecordId** | String | A unique identifier for the record that can be shared with the support team for better correlation as needed. |
+| **ExtendedProperties** | Dynamic (json) | A JSON bag that varies by the [OperationName](#operationname_health) value and the [Status](#status_health) of the event.<br>See [Extended properties](#extended-properties) for details. |
+| **Type** | String | `SentinelHealth` |
+
+## Operation names for different resource types
+
+| Resource types | Operation names | Statuses |
+| -- | | -- |
+| **[Data collectors](monitor-data-connector-health.md)** | Data fetch status change<br><br>__________________<br>Data fetch failure summary | Success<br>Failure<br>_____________<br>Informational |
+| **[Automation rules](monitor-automation-health.md)** | Automation rule run | Success<br>Partial success<br>Failure |
+| **[Playbooks](monitor-automation-health.md)** | Playbook was triggered | Success<br>Failure |
++
+## Extended properties
+
+### Data connectors
+
+For `Data fetch status change` events with a success indicator, the bag contains a 'DestinationTable' property to indicate where data from this resource is expected to land. For failures, the contents vary depending on the failure type.
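+
+For example, a query along these lines (a sketch; it assumes the property is read straight from the **ExtendedProperties** JSON bag described above) lists where each successfully polled connector is landing its data:
+
+```kusto
+SentinelHealth
+| where OperationName == "Data fetch status change" and Status == "Success"
+| extend DestinationTable = tostring(ExtendedProperties.DestinationTable)
+| project TimeGenerated, SentinelResourceName, DestinationTable
+```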
+
+### Automation rules
+
+| ColumnName | ColumnType | Description |
+| | -- | - |
+| **ActionsTriggeredSuccessfully** | Integer | Number of actions the automation rule successfully triggered. |
+| **IncidentName** | String | The resource ID of the Microsoft Sentinel incident on which the rule was triggered. |
+| **IncidentNumber** | String | The sequential number of the Microsoft Sentinel incident as shown in the portal. |
+| **TotalActions** | Integer | Number of actions configured in this automation rule. |
+| **TriggeredOn** | String | `Alert` or `Incident`. The object on which the rule was triggered. |
+| **TriggeredPlaybooks** | Dynamic (json) | A list of playbooks this automation rule triggered successfully.<br><br>Each playbook record in the list contains:<br>- **RunId:** The run ID for this triggering of the Logic Apps workflow<br>- **WorkflowId:** The unique identifier (full ARM resource ID) of the Logic Apps workflow resource. |
+| **TriggeredWhen** | String | `Created` or `Updated`. Indicates whether the rule was triggered due to the creation or updating of an incident or alert. |
+
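+For example, the following query (a sketch; it assumes the properties above are read from the **ExtendedProperties** JSON bag) expands the playbooks triggered by each automation rule run:
+
+```kusto
+SentinelHealth
+| where OperationName == "Automation rule run"
+| mv-expand Playbook = ExtendedProperties.TriggeredPlaybooks
+| project TimeGenerated, SentinelResourceName, Status, PlaybookRunId = tostring(Playbook.RunId), WorkflowId = tostring(Playbook.WorkflowId)
+```
+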
+### Playbooks
+
+| ColumnName | ColumnType | Description |
+| | -- | - |
+| **IncidentName** | String | The resource ID of the Microsoft Sentinel incident on which the rule was triggered. |
+| **IncidentNumber** | String | The sequential number of the Microsoft Sentinel incident as shown in the portal. |
+| **RunId** | String | The run ID for this triggering of the Logic Apps workflow. |
+| **TriggeredByName** | Dynamic (json) | Information on the identity (user or application) that triggered the playbook. |
+| **TriggeredOn** | String | `Incident`. The object on which the playbook was triggered.<br>(Playbooks using the alert trigger are logged only if they're called by automation rules, so those playbook runs will appear in the **TriggeredPlaybooks** extended property under automation rule events.) |
++
+## Next steps
+
+- Learn about [health monitoring in Microsoft Sentinel](health-audit.md).
+- [Turn on health monitoring](enable-monitoring.md) in Microsoft Sentinel.
+- Monitor the [health of your data connectors](monitor-data-connector-health.md).
+- Monitor the [health of your automation rules and playbooks](monitor-automation-health.md).
sentinel Microsoft 365 Defender Cloud Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-365-defender-cloud-support.md
+
+ Title: Support for Microsoft 365 Defender connector data types in Microsoft Sentinel for different clouds (GCC environments)
+description: This article describes support for different Microsoft 365 Defender connector data types in Microsoft Sentinel across different clouds, including Commercial, GCC, GCC-High, and DoD.
++ Last updated : 11/14/2022+++
+# Support for Microsoft 365 Defender connector data types in different clouds
+
+The type of cloud your environment uses affects Microsoft Sentinel's ability to ingest and display data from these connectors, like logs, alerts, device events, and more. This article describes support for different Microsoft 365 Defender connector data types in Microsoft Sentinel across different clouds, including Commercial, GCC, GCC-High, and DoD.
+
+Read more about [data type support for different clouds in Microsoft Sentinel](data-type-cloud-support.md).
+
+## Microsoft Defender for Endpoint
+
+|Data type |Commercial |GCC |GCC-High |DoD |
+||||||
+|DeviceInfo |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |
+|DeviceNetworkInfo |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |
+|DeviceProcessEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</ul></li> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |
+|DeviceNetworkEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li> |
+|DeviceFileEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |
+|DeviceRegistryEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |
+|DeviceLogonEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |
+|DeviceImageLoadEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |
+|DeviceEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |
+|DeviceFileCertificateInfo |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |
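+
+To check which of these Defender for Endpoint tables are actually receiving data in your workspace, a quick KQL check like the following can help (the table names are taken from the table above, and `isfuzzy=true` lets the union succeed even if some of the tables haven't been created yet):
+
+```kusto
+union withsource=SourceTable isfuzzy=true DeviceInfo, DeviceNetworkInfo, DeviceProcessEvents, DeviceNetworkEvents, DeviceFileEvents, DeviceRegistryEvents, DeviceLogonEvents, DeviceImageLoadEvents, DeviceEvents, DeviceFileCertificateInfo
+| summarize LastRecord = max(TimeGenerated) by SourceTable
+```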
+
+## Microsoft Defender for Identity
+
+|Data type |Commercial |GCC |GCC-High |DoD |
+||||||
+|IdentityDirectoryEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |Unsupported |Unsupported |Unsupported |
|IdentityLogonEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |Unsupported |Unsupported |Unsupported |
|IdentityQueryEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |Unsupported |Unsupported |Unsupported |
+
+## Microsoft Defender for Cloud Apps
+
+|Data type |Commercial |GCC |GCC-High |DoD |
+||||||
+|CloudAppEvents |<ul><li>Microsoft 365 Defender: GA</li><li>Microsoft Sentinel: Public Preview</li></ul> |Unsupported |Unsupported |Unsupported |
+
+## Microsoft 365 Defender incidents
+
+|Data type |Commercial |GCC |GCC-High |DoD |
+||||||
+|SecurityIncident |Microsoft Sentinel: Public Preview |Microsoft Sentinel: Public Preview |Microsoft Sentinel: Public Preview |Microsoft Sentinel: Public Preview |
+
+## Alerts
+
+|Connector/Data type |Commercial |GCC |GCC-High |DoD |
+||||||
+|Microsoft 365 Defender Alerts: SecurityAlert |Public Preview |Public Preview |Public Preview |Public Preview |
+|Microsoft Defender for Endpoint Alerts (standalone connector): SecurityAlert (MDATP) |Public Preview |Public Preview |Public Preview |Public Preview |
+| Microsoft Defender for Office 365 Alerts (standalone connector): SecurityAlert (OATP) |Public Preview |Public Preview |Public Preview |Public Preview |
|Microsoft Defender for Identity Alerts (standalone connector): SecurityAlert (AATP) |Public Preview |Unsupported |Unsupported |Unsupported |
|Microsoft Defender for Cloud Apps Alerts (standalone connector): SecurityAlert (MCAS) |Public Preview |Unsupported |Unsupported |Unsupported |
+|Microsoft Defender for Cloud Apps Alerts (standalone connector): McasShadowItReporting |Public Preview |Unsupported |Unsupported |Unsupported |
+
+## Azure Active Directory Identity Protection
+
+|Data type |Commercial |GCC |GCC-High |DoD |
+||||||
+|SecurityAlert (IPC) |Public Preview/GA |Supported |Supported |Supported |
+|AlertEvidence |Public Preview |Unsupported |Unsupported |Unsupported |
+
+## Next steps
+
+In this article, you learned which Microsoft 365 Defender connector data types are supported in Microsoft Sentinel for different cloud environments.
+
+- Read more about [GCC environments in Microsoft Sentinel](data-type-cloud-support.md).
+- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
+- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Monitor Automation Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-automation-health.md
+
+ Title: Monitor the health of your Microsoft Sentinel automation rules and playbooks
+description: Use the SentinelHealth and AzureDiagnostics data tables to keep track of your automation rules' and playbooks' execution and performance.
+++ Last updated : 11/09/2022+++
+# Monitor the health of your automation rules and playbooks
+
+To ensure proper functioning and performance of your security orchestration, automation, and response operations in your Microsoft Sentinel service, keep track of the health of your automation rules and playbooks by monitoring their execution logs.
+
+Set up notifications of health events for relevant stakeholders, who can then take action. For example, define and send email or Microsoft Teams messages, create new tickets in your ticketing system, and so on.
+
+This article describes how to use Microsoft Sentinel's health monitoring features to keep track of your automation rules and playbooks' health from within Microsoft Sentinel.
+
+## Summary
+
+Automation health monitoring in Microsoft Sentinel has two parts:
+
+| Feature | Table | Coverage | Enable from |
+| - | - | - | - |
+| **Microsoft Sentinel automation health logs** | *SentinelHealth* | - Automation rules run<br>- Playbooks triggered | Microsoft Sentinel settings > Health monitoring |
+| **Azure Logic Apps diagnostics logs** | *AzureDiagnostics* | - Playbook run started/ended<br>- Playbook actions/triggers started/ended | Logic Apps resource > [Diagnostics settings](../azure-monitor/essentials/diagnostic-settings.md?tabs=portal#create-diagnostic-settings) |
++
+- **Microsoft Sentinel automation health logs:**
+
+  - This log captures events that record the running of automation rules, and the end result of these runs - whether they succeeded or failed, and if they failed, why. The log records the collective success or failure of the launch of the actions in the rule, and it also lists the playbooks called by the rule.
+ - The log also captures events that record the on-demand (manual or API-based) triggering of playbooks, including the **identities that triggered them**, whether they succeeded or failed, and if they failed, why.
+ - This log *does not include* a record of the execution of the contents of a playbook, only of the success or failure of the launching of the playbook. For a log of the actions taken within a playbook, see the next list below.
+ - These logs are collected in the *SentinelHealth* table in Log Analytics.
+
+ > [!IMPORTANT]
+ >
+ > The *SentinelHealth* data table is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+- **Azure Logic Apps diagnostics logs:**
+
+  - These logs capture the results of playbook runs (playbooks are also known as Logic Apps workflows) and of the actions in them.
+ - These logs provide you with a complete picture of your automation health when used in tandem with the automation health logs.
+ - These logs are collected in the *AzureDiagnostics* table in Log Analytics.
+
+## Use the SentinelHealth data table (Public preview)
+
+To get automation health data from the *SentinelHealth* data table, you must first turn on the Microsoft Sentinel health feature for your workspace. For more information, see [Turn on health monitoring for Microsoft Sentinel](enable-monitoring.md).
+
+Once the health feature is turned on, the *SentinelHealth* data table is created at the first success or failure event generated for your automation rules and playbooks.
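+
+Once events start appearing, a quick way to confirm what's being recorded is to summarize the table by operation and status. The following is a minimal sketch (the operation names are explained in the next section):
+
+```kusto
+// Count recent automation health events by operation and status
+SentinelHealth
+| where TimeGenerated > ago(7d)
+| summarize EventCount = count() by OperationName, Status
+```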
+
+### Understanding SentinelHealth table events
+
+The following types of automation health events are logged in the *SentinelHealth* table:
+
+- **Automation rule run**. Logged whenever an automation rule's conditions are met, causing it to run. Besides the fields in the basic *SentinelHealth* table, these events will include [extended properties unique to the running of automation rules](health-table-reference.md#automation-rules), including a list of the playbooks called by the rule. The following sample query will display these events:
+
+ ```kusto
+ SentinelHealth
+ | where OperationName == "Automation rule run"
+ ```
+
+- **Playbook was triggered**. Logged whenever a playbook is triggered on an incident manually from the portal or through the API. Besides the fields in the basic *SentinelHealth* table, these events will include [extended properties unique to the manual triggering of playbooks](health-table-reference.md#playbooks). The following sample query will display these events:
+
+ ```kusto
+ SentinelHealth
+ | where OperationName == "Playbook was triggered"
+ ```
+
+For more information, see [SentinelHealth table columns schema](health-table-reference.md#sentinelhealth-table-columns-schema).
+
+### Statuses, errors and suggested steps
+
+For **Automation rule run**, you may see the following statuses:
+- Success: rule executed successfully, triggering all actions.
+- Partial success: rule executed and triggered at least one action, but some actions failed.
+- Failure: automation rule did not run any action due to one of the following reasons:
+ - Conditions evaluation failed.
+ - Conditions met, but the first action failed.
+
+For **Playbook was triggered**, you may see the following statuses:
+- Success: playbook was triggered successfully.
+- Failure: playbook could not be triggered.
+ > [!NOTE]
+ >
+ > "Success" means only that the automation rule successfully triggered a playbook. It doesn't tell you when the playbook started or ended, the results of the actions in the playbook, or the final result of the playbook. To find this information, query the Logic Apps diagnostics logs (see the instructions later in this article).
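+
+To list recent automation failures and their reasons, you might run a query along the following lines (a sketch, using the *Status*, *SentinelResourceName*, and *Description* columns described in the SentinelHealth table schema):
+
+```kusto
+// Recent automation health failures and the reasons recorded for them
+SentinelHealth
+| where OperationName in ("Automation rule run", "Playbook was triggered")
+| where Status == "Failure"
+| project TimeGenerated, OperationName, SentinelResourceName, Description
+| order by TimeGenerated desc
+```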
+
+#### Error descriptions and suggested actions
+
+| Error description | Suggested actions |
+| | -- |
+| **Could not add task: *\<TaskName>*.**<br>Incident/alert was not found. | Make sure the incident/alert exists and try again. |
+| **Could not modify property: *\<PropertyName>*.**<br> Incident/alert was not found. | Make sure the incident/alert exists and try again. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Incident/alert was not found. | If the error occurred when trying to trigger a playbook on demand, make sure the incident/alert exists and try again. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because playbook was not found or because Microsoft Sentinel was missing permissions on it. | Edit the automation rule, find and select the playbook in its new location, and save. Make sure Microsoft Sentinel has [permission to run this playbook](tutorial-respond-threats-playbook.md?tabs=LAC#respond-to-incidents). |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because it contains an unsupported trigger type. | Make sure your playbook starts with the [correct Logic Apps trigger](playbook-triggers-actions.md#microsoft-sentinel-triggers-summary): Microsoft Sentinel Incident or Microsoft Sentinel Alert. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because the subscription is disabled and marked as read-only. Playbooks in this subscription cannot be run until the subscription is re-enabled. | Re-enable the Azure subscription in which the playbook is located. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because it was disabled. | Enable your playbook, in Microsoft Sentinel in the Active Playbooks tab under Automation, or in the Logic Apps resource page. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because of invalid template definition. | There is an error in the playbook definition. Go to the Logic Apps designer to fix the issues and save the playbook. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because access control configuration restricts Microsoft Sentinel. | Logic Apps configurations allow restricting access to trigger the playbook. This restriction is in effect for this playbook. Remove this restriction so Microsoft Sentinel is not blocked. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md?tabs=azure-portal#restrict-access-by-ip-address-range) |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because Microsoft Sentinel is missing permissions to run it. | Microsoft Sentinel requires [permissions to run playbooks](tutorial-respond-threats-playbook.md?tabs=LAC#respond-to-incidents). |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because it wasn't migrated to the new permissions model. Grant Microsoft Sentinel permissions to run this playbook and resave the rule. | Grant Microsoft Sentinel [permissions to run this playbook](tutorial-respond-threats-playbook.md?tabs=LAC#respond-to-incidents) and resave the rule. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered due to too many requests exceeding workflow throttling limits. | The number of waiting workflow runs has exceeded the maximum allowed limit. Try increasing the value of `'maximumWaitingRuns'` in [trigger concurrency configuration](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs-limit). |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered due to too many requests exceeding throttling limits. | Learn more about [subscription and tenant limits](../azure-resource-manager/management/request-limits-and-throttling.md#subscription-and-tenant-limits). |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because access was forbidden. Managed identity is missing configuration or Logic Apps network restriction has been set. | If the playbook uses a managed identity, [make sure the managed identity has been assigned the required permissions](authenticate-playbooks-to-sentinel.md#authenticate-with-managed-identity). The playbook may also have network restriction rules that block the Microsoft Sentinel service, preventing it from being triggered. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because the subscription or resource group was locked. | Remove the lock to allow Microsoft Sentinel to trigger playbooks in the locked scope. Learn more about [locked resources](../azure-resource-manager/management/lock-resources.md?tabs=json). |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because caller is missing required playbook-triggering permissions on playbook or Microsoft Sentinel is missing permissions on it. | The user trying to trigger the playbook on demand is missing the Logic Apps Contributor role on the playbook, or explicit permission to trigger the playbook. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md?tabs=azure-portal#restrict-access-by-ip-address-range) |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered due to invalid credentials in connection. | [Check the credentials your connection is using](authenticate-playbooks-to-sentinel.md#manage-your-api-connections) in the **API connections** service in the Azure portal. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because playbook ARM ID is not valid. | |
+
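+Because the failure reason is recorded in the *Description* column, you can aggregate on it to see which of the errors above occur most often in your environment. A sketch:
+
+```kusto
+// Most common automation failure reasons over the last 30 days
+SentinelHealth
+| where TimeGenerated > ago(30d)
+| where Status == "Failure"
+| summarize Occurrences = count() by OperationName, Description
+| order by Occurrences desc
+```
+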
+## Get the complete automation picture
+
+Microsoft Sentinel's health monitoring table allows you to track the triggering of playbooks, but to monitor what happens inside your playbooks and their results when they're run, you must also turn on **Azure Logic Apps diagnostics**.
+
+By [enabling Azure Logic Apps diagnostics](../logic-apps/monitor-logic-apps-log-analytics.md#set-up-azure-monitor-logs), you'll ingest the following events to the *AzureDiagnostics* table:
+- {Action name} started
+- {Action name} ended
+- Workflow (playbook) started
+- Workflow (playbook) ended
+
+These added events will give you additional insights into the actions being taken in your playbooks.
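+
+Once these diagnostics are flowing, you can query the events directly. For example, the following sketch uses the same *AzureDiagnostics* columns as the correlation query later in this article to list recent playbook run completions and their results:
+
+```kusto
+// Recent playbook (Logic Apps workflow) run completions and their statuses
+AzureDiagnostics
+| where OperationName == "Microsoft.Logic/workflows/workflowRunCompleted"
+| project
+    TimeGenerated,
+    playbookName = resource_workflowName_s,
+    runId = resource_runId_s,
+    playbookRunStatus = status_s
+| order by TimeGenerated desc
+```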
+
+### Turn on Logic Apps diagnostics
+
+For each playbook you are interested in monitoring, [follow these steps](../logic-apps/monitor-logic-apps-log-analytics.md#set-up-azure-monitor-logs). Make sure to select **Send to Log Analytics workspace** as your log destination, and choose your Microsoft Sentinel workspace.
+
+### Correlate Microsoft Sentinel and Azure Logic Apps logs
+
+Now that you have logs for your automation rules and playbooks *and* logs for your individual Logic Apps workflows in your workspace, you can correlate them to get the complete picture. Consider the following sample query:
+
+```kusto
+SentinelHealth
+| where SentinelResourceType == "Automation rule"
+| mv-expand TriggeredPlaybooks = ExtendedProperties.TriggeredPlaybooks
+| extend runId = tostring(TriggeredPlaybooks.RunId)
+| join (AzureDiagnostics
+ | where OperationName == "Microsoft.Logic/workflows/workflowRunCompleted"
+ | project
+ resource_runId_s,
+ playbookName = resource_workflowName_s,
+ playbookRunStatus = status_s)
+ on $left.runId == $right.resource_runId_s
+| project
+ RecordId,
+ TimeGenerated,
+ AutomationRuleName= SentinelResourceName,
+ AutomationRuleStatus = Status,
+ Description,
+ workflowRunId = runId,
+ playbookName,
+ playbookRunStatus
+```
+
+## Use the health monitoring workbook
+
+The **Automation health** workbook helps you visualize your health data, as well as the correlation between the two types of logs that we just mentioned. The workbook includes the following displays:
+- Automation rule health and details
+- Playbook trigger health and details
+- Playbook runs health and details (requires Azure Logic Apps diagnostics to be enabled at the playbook level)
+- Automation details per incident
++
+Select the **Playbooks run by Automation Rules** tab to see playbook activity.
++
+Select a playbook to see the list of its runs in the drill-down chart below.
++
+Select a particular run to see the results of the actions in the playbook.
++
+## Next steps
+
+- Learn what [health monitoring in Microsoft Sentinel](health-audit.md) can do for you.
+- [Turn on health monitoring](enable-monitoring.md) in Microsoft Sentinel.
+- Monitor the health of your [data connectors](monitor-data-connector-health.md).
+- See more information about the [*SentinelHealth* table schema](health-table-reference.md).
sentinel Monitor Data Connector Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-data-connector-health.md
description: Use the SentinelHealth data table and the Health Monitoring workboo
- Previously updated : 07/28/2022 Last updated : 11/09/2022

# Monitor the health of your data connectors
-After you've configured and connected your Microsoft Sentinel workspace to your data connectors, you'll want to monitor your connector health, viewing any service or data source issues, such as authentication, throttling, and more.
+To ensure complete and uninterrupted data ingestion in your Microsoft Sentinel service, keep track of your data connectors' health, connectivity, and performance.
-You also might like to configure notifications for health drifts for relevant stakeholders who can take action. For example, configure email messages, Microsoft Teams messages, new tickets in your ticketing system, and so on.
+This article describes how to use the following features, which allow you to perform this monitoring from within Microsoft Sentinel:
-This article describes how to use the following features, which allow you to keep track of your data connectors' health, connectivity, and performance from within Microsoft Sentinel:
+- **Data connectors health monitoring workbook:** This workbook provides additional monitors, detects anomalies, and gives insight regarding the workspace's data ingestion status. You can use the workbook's logic to monitor the general health of the ingested data, and to build custom views and rule-based alerts.
-- **Data connectors health monitoring workbook**. This workbook provides additional monitors, detects anomalies, and gives insight regarding the workspaceΓÇÖs data ingestion status. You can use the workbookΓÇÖs logic to monitor the general health of the ingested data, and to build custom views and rule-based alerts.
+- ***SentinelHealth* data table (Preview):** Querying this table provides insights on health drifts, such as latest failure events per connector, or connectors with changes from success to failure states, which you can use to create alerts and other automated actions. The *SentinelHealth* data table is currently supported only for [selected data connectors](#supported-data-connectors).
-- ***SentinelHealth* data table**. (Public preview) Provides insights on health drifts, such as latest failure events per connector, or connectors with changes from success to failure states, which you can use to create alerts and other automated actions. The *SentinelHealth* data table is currently supported only for [selected data connectors](#supported-data-connectors).-
-> [!IMPORTANT]
->
-> The *SentinelHealth* data table is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
+ > [!IMPORTANT]
+ >
+ > The *SentinelHealth* data table is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Use the health monitoring workbook
There are three tabbed sections in this workbook:
## Use the SentinelHealth data table (Public preview)
-To get data connector health data from the *SentinelHealth* data table, you must first turn on the Microsoft Sentinel health feature for your workspace. For more information, see [Turn on health monitoring for Microsoft Sentinel](monitor-sentinel-health.md).
+To get data connector health data from the *SentinelHealth* data table, you must first turn on the Microsoft Sentinel health feature for your workspace. For more information, see [Turn on health monitoring for Microsoft Sentinel](enable-monitoring.md).
Once the health feature is turned on, the *SentinelHealth* data table is created at the first success or failure event generated for your data connectors.
The following types of health events are logged in the *SentinelHealth* table:
- **Failure summary**. Logged once an hour, per connector, per workspace, with an aggregated failure summary. Failure summary events are created only when the connector has experienced polling errors during the given hour. They contain any extra details provided in the *ExtendedProperties* column, such as the time period for which the connector's source platform was queried, and a distinct list of failures encountered during the time period.
-For more information, see [SentinelHealth table columns schema](#sentinelhealth-table-columns-schema).
+For more information, see [SentinelHealth table columns schema](health-table-reference.md#sentinelhealth-table-columns-schema).
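+
+For example, to review the hourly failure summaries for your connectors, along with the extra failure details, you might run something like the following sketch (columns per the SentinelHealth schema linked above):
+
+```kusto
+// Hourly failure summaries per data connector, with extended failure details
+SentinelHealth
+| where SentinelResourceType == "Data connector"
+| where OperationName == "Failure summary"
+| project TimeGenerated, Connector = SentinelResourceName, Description, ExtendedProperties
+| order by TimeGenerated desc
+```
+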
### Run queries to detect health drifts
For example:
For more information, see [Azure Monitor alerts overview](../azure-monitor/alerts/alerts-overview.md) and [Azure Monitor alerts log](../azure-monitor/alerts/alerts-log.md).
-### SentinelHealth table columns schema
-
-The following table describes the columns and data generated in the SentinelHealth data table for data connectors:
-
-| ColumnName | ColumnType | Description|
-| -- | -- | |
-| **TenantId** | String | The tenant ID for your Microsoft Sentinel workspace. |
-| **TimeGenerated** | Datetime | The time at which the health event occurred. |
-| <a name="operationname"></a>**OperationName** | String | The health operation. One of the following values: <br><br>-`Data fetch status change` for health or success indications <br>- `Failure summary` for aggregated health summaries. <br><br>For more information, see [Understanding SentinelHealth table events](#understanding-sentinelhealth-table-events). |
-| <a name="sentinelresourceid"></a>**SentinelResourceId** | String | The unique identifier of the Microsoft Sentinel workspace and the associated connector on which the health event occurred. |
-| **SentinelResourceName** | String | The data connector name. |
-| <a name="status"></a>**Status** | String | Indicates `Success` or `Failure` for the `Data fetch status change` [OperationName](#operationname), and `Informational` for the `Failure summary` [OperationName](#operationname). |
-| **Description** | String | Describes the operation, including extended data as needed. For example, for failures, this column might indicate the failure reason. |
-| **WorkspaceId** | String | The workspace GUID on which the health issue occurred. The full Azure Resource Identifier is available in the [SentinelResourceID](#sentinelresourceid) column. |
-| **SentinelResourceType** | String |The Microsoft Sentinel resource type being monitored: `Data connector`|
-| **SentinelResourceKind** | String | The type of data connector being monitored, such as `Office365`. |
-| **RecordId** | String | A unique identifier for the record that can be shared with the support team for better correlation as needed. |
-| **ExtendedProperties** | Dynamic (json) | A JSON bag that varies by the [OperationName](#operationname) value and the [Status](#status) of the event: <br><br>- For `Data fetch status change` events with a success indicator, the bag contains a ΓÇÿDestinationTableΓÇÖ property to indicate where data from this connector is expected to land. For failures, the contents vary depending on the failure type. |
-| **Type** | String | `SentinelHealth` |
- ## Next steps
-Learn how to [onboard your data to Microsoft Sentinel](quickstart-onboard.md), [connect data sources](connect-data-sources.md), and [get visibility into your data, and potential threats](get-visibility.md).
+- Learn what [health monitoring in Microsoft Sentinel](health-audit.md) can do for you.
+- [Turn on health monitoring](enable-monitoring.md) in Microsoft Sentinel.
+- Monitor the health of your [automation rules and playbooks](monitor-automation-health.md).
+- See more information about the [*SentinelHealth* table schema](health-table-reference.md).
sentinel Monitor Sentinel Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-sentinel-health.md
- Title: Turn on health monitoring in Microsoft Sentinel
-description: Monitor supported data connectors by using the SentinelHealth data table.
- Previously updated : 7/28/2022-----
-# Turn on health monitoring for Microsoft Sentinel (preview)
-
-Monitor the health of supported data connectors by turning on health monitoring in Microsoft Sentinel. Get insights on health drifts, such as the latest failure events, or changes from success to failure states. Use this information to create alerts and other automated actions.
-
-To get health data from the *SentinelHealth* data table, you must first turn on the Microsoft Sentinel health feature for your workspace.
-
-When the health feature is turned on, the *SentinelHealth* data table is created at the first success or failure event generated for supported data connectors.
-
-To configure the retention time for your health events, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
-
-> [!IMPORTANT]
->
-> The *SentinelHealth* data table is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-
-## Turn on health monitoring for your workspace
-
-1. In Microsoft Sentinel, under the **Configuration** menu on the left, select **Settings** and expand the **Health** section.
-
-1. Select **Configure Diagnostic Settings** and create a new diagnostic setting.
-
- - In the **Diagnostic setting name** field, enter a meaningful name for your setting.
-
- - In the **Category details** column, select the appropriate category like **Data Connector**.
-
- - Under **Destination details**, select **Send to Log Analytics workspace**, and select your subscription and workspace from the dropdown menus.
-
-1. Select **Save** to save your new setting.
-
-The *SentinelHealth* data table is created at the first success or failure event generated for supported resources.
-
-## Access the *SentinelHealth* table
-
-In the Microsoft Sentinel **Logs** page, run a query on the *SentinelHealth* table. For example:
-
-```kusto
-SentinelHealth
- | take 20
-```
-
-## Next steps
-
-[Monitor the health of your Microsoft Sentinel data connectors](monitor-data-connector-health.md)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
This article lists recent features added for Microsoft Sentinel, and new features in related services that provide an enhanced user experience in Microsoft Sentinel.
-If you're looking for items older than six months, you'll find them in the [Archive for What's new in Sentinel](whats-new-archive.md). For information about earlier features delivered, see our [Tech Community blogs](https://techcommunity.microsoft.com/t5/azure-sentinel/bg-p/AzureSentinelBlog/label-name/What's%20New).
-
-> [!IMPORTANT]
-> Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
+The listed features were released in the last three months. For information about earlier features delivered, see our [Tech Community blogs](https://techcommunity.microsoft.com/t5/azure-sentinel/bg-p/AzureSentinelBlog/label-name/What's%20New).
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
-> [!TIP]
-> Our threat hunting teams across Microsoft contribute queries, playbooks, workbooks, and notebooks to the [Microsoft Sentinel Community](https://github.com/Azure/Azure-Sentinel), including specific [hunting queries](https://github.com/Azure/Azure-Sentinel) that your teams can adapt and use.
->
-> You can also contribute! Join us in the [Microsoft Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
+## November 2022
+
+- [Monitor the health of automation rules and playbooks](#monitor-the-health-of-automation-rules-and-playbooks)
+- [Updated Microsoft Sentinel Logstash plugin](#updated-microsoft-sentinel-logstash-plugin)
+
+### Monitor the health of automation rules and playbooks
+
+To ensure proper functioning and performance of your security orchestration, automation, and response operations in your Microsoft Sentinel service, keep track of the health of your automation rules and playbooks by monitoring their execution logs.
+
+Set up notifications of health events for relevant stakeholders, who can then take action. For example, define and send email or Microsoft Teams messages, create new tickets in your ticketing system, and so on.
+
+- Learn what [health monitoring in Microsoft Sentinel](health-audit.md) can do for you.
+- [Turn on health monitoring](enable-monitoring.md) in Microsoft Sentinel.
+- Monitor the health of your [automation rules and playbooks](monitor-automation-health.md).
+- See more information about the [*SentinelHealth* table schema](health-table-reference.md).
+
+### Updated Microsoft Sentinel Logstash plugin
+
+A [new version of the Microsoft Sentinel Logstash plugin](connect-logstash-data-connection-rules.md) uses the new Azure Monitor Data Collection Rules (DCR)-based Logs Ingestion API. The new plugin:
+
+- Provides data transformation capabilities like filtering, masking, and enrichment.
+- Allows full control over the output schema, including configuration of the column names and types.
+- Can forward logs from external data sources into both custom tables and standard tables.
+- Provides performance improvements, compression, and better telemetry and error handling.
## October 2022
Microsoft Sentinel customers (who are also AADIP subscribers) with [Microsoft 36
### Out of the box anomaly detection on the SAP audit log (Preview)
-The SAP audit log records audit and security events on SAP systems, like failed sign-in attempts or other over 200 security related actions. Customers monitor the SAP audit log and generate alerts and incidents out of the box using Microsoft Sentinel built-in analytics rules.
- The Microsoft Sentinel for SAP solution now includes the [**SAP - Dynamic Anomaly Detection analytics** rule](https://aka.ms/Sentinel4sapDynamicAnomalyAuditRuleBlog), adding an out of the box capability to identify suspicious anomalies across the SAP audit log events.
-Now, together with the existing ability to identify threats deterministically based on predefined patterns and thresholds, customers can easily identify suspicious anomalies in the SAP security log, out of the box, with no coding required.
-
-You can fine-tune the new capability by editing the [SAP_Dynamic_Audit_Log_Monitor_Configuration and SAP_User_Config watchlists](sap-solution-security-content.md#available-watchlists).
-
-Learn more:
-- [Learn about the new feature (blog)](https://aka.ms/Sentinel4sapDynamicAnomalyAuditRuleBlog)-- [Use the new rule for anomaly detection](sap/configure-audit-log-rules.md#anomaly-detection)
+Learn how to [use the new rule for anomaly detection](sap/configure-audit-log-rules.md#anomaly-detection).
### IoT device entity page (Preview)
-OT/IoT devices, including Programmable Logic Controllers (PLCs), Human-Machine Interfaces (HMIs), engineering workstations, network devices, and more, are becoming increasingly prevalent in organizations. Often, these devices are used as entry points for attacks, but they can also be used by attackers to move laterally.
-For SOCs, monitoring IoT/OT networks presents a number of challenges, including the lack of visibility for security teams into their OT networks, the lack of experience among SOC analysts in managing OT incidents, and the lack of communication between OT teams and SOC teams.
-
The new [IoT device entity page](entity-pages.md) is designed to help the SOC investigate incidents that involve IoT/OT devices in their environment, by providing the full OT/IoT context through Microsoft Defender for IoT to Sentinel. This enables SOC teams to detect and respond more quickly across all domains to the entire attack timeline. Learn more about [investigating IoT device entities in Microsoft Sentinel](iot-advanced-threat-monitoring.md).
If your original query referenced the user or peer names (not just their IDs), s
You can now use the new [Windows DNS Events via AMA connector](connect-dns-ama.md) to stream and filter events from your Windows Domain Name System (DNS) server logs to the `ASimDnsActivityLog` normalized schema table. You can then dive into your data to protect your DNS servers from threats and attacks.
-The Azure Monitor Agent (AMA) and its DNS extension are installed on your Windows Server to upload data from your DNS analytical logs to your Microsoft Sentinel workspace.
-
-Here are some benefits of using AMA for DNS log collection:
--- AMA is faster compared to the existing Log Analytics Agent (MMA/OMS). AMA handles up to 5000 events per second (EPS) compared to 2000 EPS with the existing agent.-- AMA provides centralized configuration using Data Collection Rules (DCRs), and also supports multiple DCRs.-- AMA supports transformation from the incoming stream into other data tables.-- AMA supports basic and advanced filtering of the data. The data is filtered on the DNS server and before the data is uploaded, which saves time and resources.- ### Create and delete incidents manually (Preview) Microsoft Sentinel **incidents** have two main sources:
Since this capability raises the possibility that you'll create an incident in e
### Add entities to threat intelligence (Preview)
-When investigating an incident, you examine entities and their context as an important part of understanding the scope and nature of the incident. In the course of the investigation, you may discover an entity in the incident that should be labeled and tracked as an indicator of compromise (IOC), a threat indicator.
-
-Microsoft Sentinel allows you to flag the entity as malicious, right from within the investigation graph. You'll then be able to view this indicator both in Logs and in the Threat Intelligence blade in Sentinel.
+Microsoft Sentinel now allows you to flag entities as malicious, right from within the investigation graph. You'll then be able to view this indicator both in Logs and in the Threat Intelligence blade in Sentinel.
Learn how to [add an entity to your threat intelligence](add-entity-to-threat-intelligence.md).
-## August 2022
--- [Azure resource entity page (Preview)](#azure-resource-entity-page-preview)-- [New data sources for User and entity behavior analytics (UEBA) (Preview)](#new-data-sources-for-user-and-entity-behavior-analytics-ueba-preview)-- [Microsoft Sentinel Solution for SAP is now generally available](#microsoft-sentinel-solution-for-sap-is-now-generally-available)-
-### Azure resource entity page (Preview)
-
-Azure resources such as Azure Virtual Machines, Azure Storage Accounts, Azure Key Vault, Azure DNS, and more are essential parts of your network. Threat actors might attempt to obtain sensitive data from your storage account, gain access to your key vault and the secrets it contains, or infect your virtual machine with malware. The new [Azure resource entity page](entity-pages.md) is designed to help your SOC investigate incidents that involve Azure resources in your environment, hunt for potential attacks, and assess risk.
-
-You can now gain a 360-degree view of your resource security with the new entity page, which provides several layers of security information about your resources.
-
-First, it provides some basic details about the resource: where it is located, when it was created, to which resource group it belongs, the Azure tags it contains, etc. Further, it surfaces information about access management: how many owners, contributors, and other roles are authorized to access the resource, and what networks are allowed access to it; what is the permission model of the key vault, is public access to blobs allowed in the storage account, and more. Finally, the page also includes some integrations, such as Microsoft Defender for Cloud, Defender for Endpoint, and Purview that enrich the information about the resource.
-
-### New data sources for User and entity behavior analytics (UEBA) (Preview)
-
-The [Security Events data source](ueba-reference.md#ueba-data-sources) for UEBA, which until now included only event ID 4624 (An account was successfully logged on), now includes four more event IDs and types, currently in **PREVIEW**:
-
- - 4625: An account failed to log on.
- - 4648: A logon was attempted using explicit credentials.
- - 4672: Special privileges assigned to new logon.
- - 4688: A new process has been created.
-
-Having user data for these new event types in your workspace will provide you with more and higher-quality insights into the described user activities, from Active Directory and Azure AD enrichments to anomalous activity to matching with internal Microsoft threat intelligence, all further enabling your incident investigations to piece together the attack story.
-
-As before, to use this data source you must enable the [Windows Security Events data connector](data-connectors-reference.md#windows-security-events-via-ama). If you have enabled the Security Events data source for UEBA, you will automatically begin receiving these new event types without having to take any additional action.
-
-It's likely that the inclusion of these new event types will result in the ingestion of somewhat more *Security Events* data, billed accordingly. Individual event IDs cannot be enabled or disabled independently; only the whole Security Events data set together. You can, however, filter the event data at the source if you're using the new [AMA-based version of the Windows Security Events data connector](data-connectors-reference.md#windows-security-events-via-ama).
-
-### Microsoft Sentinel Solution for SAP is now generally available
-
-The Microsoft Sentinel Solution for SAP is now generally available (GA). The solution is free until February 2023, when an additional cost will be added on top of the ingested data. [Learn more about pricing](https://azure.microsoft.com/pricing/offers/microsoft-sentinel-sap-promo/).
-
-With previous versions, every solution update would duplicate content, creating new objects alongside the previous version objects. The GA version uses rule and workbook templates, so that for every solution update, you can clearly understand what has changed, using a dedicated wizard. [Learn more about rule templates](manage-analytics-rule-templates.md).
-
-[Learn more about the updated solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/protect-critical-information-within-sap-systems-against/ba-p/3586943).
-
-#### Solution highlights
-
-The Microsoft Sentinel solution for SAP allows you to monitor, detect, and respond to suspicious activities within the SAP ecosystem, protecting your sensitive data against sophisticated cyber attacks.
-
-Use the solution to:
--- Monitor all SAP system layers -- Gain visibility across business logic, application, databases, and operating system layers with built-in investigation and threat detection tools -- Detect and automatically respond to threats -- Discover suspicious activity including privilege escalation, unauthorized changes, sensitive transactions, and suspicious data downloads with out-of-the-box detection capabilities -- Customize based on your needs: build your own threat detection solutions to monitor specific business risks and extend built-in security content-
-## July 2022
--- [Sync user entities from your on-premises Active Directory with Microsoft Sentinel (Preview)](#sync-user-entities-from-your-on-premises-active-directory-with-microsoft-sentinel-preview)-- [Automation rules for alerts (Preview)](#automation-rules-for-alerts-preview)-
-### Sync user entities from your on-premises Active Directory with Microsoft Sentinel (Preview)
-
-Until now, you've been able to bring your user account entities from your Azure Active Directory (Azure AD) into the IdentityInfo table in Microsoft Sentinel, so that User and Entity Behavior Analytics (UEBA) can use that information to provide context and give insight into user activities, to enrich your investigations.
-
-Now you can do the same with your on-premises (non-Azure) Active Directory as well.
-
-If you have Microsoft Defender for Identity, [enable and configure User and Entity Behavior Analytics (UEBA)](enable-entity-behavior-analytics.md#how-to-enable-user-and-entity-behavior-analytics) to collect and sync your Active Directory user account information into Microsoft Sentinel's IdentityInfo table, so you can get the same insight value from your on-premises users as you do from your cloud users.
-
-Learn more about the [requirements for using Microsoft Defender for Identity](/defender-for-identity/prerequisites) this way.
-
-### Automation rules for alerts (Preview)
-
-In addition to their incident-management duties, [automation rules](automate-incident-handling-with-automation-rules.md) have a new, added function: they are the preferred mechanism for running playbooks built on the **alert trigger**.
-
-Previously, these playbooks could be automated only by attaching them to analytics rules on an individual basis. With the alert trigger for automation rules, a single automation rule can apply to any number of analytics rules, enabling you to centrally manage the running of playbooks for alerts as well as those for incidents.
-
-Learn more about [migrating your alert-trigger playbooks to be invoked by automation rules](migrate-playbooks-to-automation-rules.md).
-
-## June 2022
--- [Microsoft Purview Data Loss Prevention (DLP) integration in Microsoft Sentinel (Preview)](#microsoft-purview-data-loss-prevention-dlp-integration-in-microsoft-sentinel-preview)-- [Incident update trigger for automation rules (Preview)](#incident-update-trigger-for-automation-rules-preview)-
-### Microsoft Purview Data Loss Prevention (DLP) integration in Microsoft Sentinel (Preview)
-
-[Microsoft 365 Defender integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md) now includes the integration of Microsoft Purview DLP alerts and incidents in Microsoft Sentinel's incidents queue.
-
-With this feature, you will be able to do the following:
--- View all DLP alerts grouped under incidents in the Microsoft 365 Defender incident queue.--- View intelligent inter-solution (DLP-MDE, DLP-MDO) and intra-solution (DLP-DLP) alerts correlated under a single incident.--- Retain DLP alerts and incidents for **180 days**.--- Hunt for compliance logs along with security logs under Advanced Hunting.--- Take in-place administrative remediation actions on users, files, and devices.--- Associate custom tags to DLP incidents and filter by them.--- Filter the unified incident queue by DLP policy name, tag, Date, service source, incident status, and user.-
-In addition to the native experience in the Microsoft 365 Defender Portal, customers will also be able to use the one-click Microsoft 365 Defender connector to [ingest and investigate DLP incidents in Microsoft Sentinel](/microsoft-365/security/defender/investigate-dlp).
--
-### Incident update trigger for automation rules (Preview)
-
-Automation rules are an essential tool for triaging your incidents queue, reducing the noise in it, and generally coping with the high volume of incidents in your SOC seamlessly and transparently. Previously you could create and run automation rules and playbooks that would run upon the creation of an incident, but your automation options were more limited past that point in the incident lifecycle.
-
-You can now create automation rules and playbooks that will run when incident fields are modified - for example, when an owner is assigned, when its status or severity is changed, or when alerts and comments are added.
-
-Learn more about the [update trigger in automation rules](automate-incident-handling-with-automation-rules.md).
-
-## May 2022
--- [Relate alerts to incidents](#relate-alerts-to-incidents-preview)-- [Similar incidents](#similar-incidents-preview)-
-### Relate alerts to incidents (Preview)
-
-You can now add alerts to, or remove alerts from, existing incidents, either manually or automatically, as part of your investigation processes. This allows you to refine the incident scope as the investigation unfolds. For example, relate Microsoft Defender for Cloud alerts, or alerts from third-party products, to incidents synchronized from Microsoft 365 Defender. Use this feature from the investigation graph, the API, or through automation playbooks.
-
-Learn more about [relating alerts to incidents](relate-alerts-to-incidents.md).
-
-### Similar incidents (Preview)
-
-When you triage or investigate an incident, the context of the entirety of incidents in your SOC can be extremely useful. For example, other incidents involving the same entities can represent useful context that will allow you to reach the right decision faster. Now there's a new tab in the incident page that lists other incidents that are similar to the incident you are investigating. Some common use cases for using similar incidents are:
--- Finding other incidents that might be part of a larger attack story.-- Using a similar incident as a reference for incident handling. The way the previous incident was handled can act as a guide for handling the current one.-- Finding relevant people in your SOC that have handled similar incidents for guidance or consult.-
-Learn more about [similar incidents](investigate-cases.md#similar-incidents-preview).
-
-## March 2022
--- [Automation rules now generally available](#automation-rules-now-generally-available)-- [Create a large watchlist from file in Azure Storage (public preview)](#create-a-large-watchlist-from-file-in-azure-storage-public-preview)-
-### Automation rules now generally available
-
-Automation rules are now generally available (GA) in Microsoft Sentinel.
-
-[Automation rules](automate-incident-handling-with-automation-rules.md) allow users to centrally manage the automation of incident handling. They allow you to assign playbooks to incidents, automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules streamline automation use in Microsoft Sentinel and enable you to simplify complex workflows for your incident orchestration processes.
-
-### Create a large watchlist from file in Azure Storage (public preview)
-
-Create a watchlist from a large file that's up to 500 MB in size by uploading the file to your Azure Storage account. When you add the watchlist to your workspace, you provide a shared access signature URL. Microsoft Sentinel uses the shared access signature URL to retrieve the watchlist data from Azure Storage.
-
-For more information, see:
--- [Use watchlists in Microsoft Sentinel](watchlists.md)-- [Create watchlists in Microsoft Sentinel](watchlists-create.md)-
-## February 2022
--- [New custom log ingestion and data transformation at ingestion time (Public preview)](#new-custom-log-ingestion-and-data-transformation-at-ingestion-time-public-preview)-- [View MITRE support coverage (Public preview)](#view-mitre-support-coverage-public-preview)-- [View Microsoft Purview data in Microsoft Sentinel (Public preview)](#view-microsoft-purview-data-in-microsoft-sentinel-public-preview)-- [Manually run playbooks based on the incident trigger (Public preview)](#manually-run-playbooks-based-on-the-incident-trigger-public-preview)-- [Search across long time spans in large datasets (public preview)](#search-across-long-time-spans-in-large-datasets-public-preview)-- [Restore archived logs from search (public preview)](#restore-archived-logs-from-search-public-preview)-
-### New custom log ingestion and data transformation at ingestion time (Public preview)
-
-Microsoft Sentinel supports two new features for data ingestion and transformation. These features, provided by Log Analytics, act on your data even before it's stored in your workspace.
-
-The first of these features is the [**Logs ingestion API**](../azure-monitor/logs/logs-ingestion-api-overview.md). It allows you to send custom-format logs from any data source to your Log Analytics workspace, and store those logs either in certain specific standard tables, or in custom-formatted tables that you create. The actual ingestion of these logs can be done by direct API calls. You use Log Analytics [**data collection rules (DCRs)**](../azure-monitor/essentials/data-collection-rule-overview.md) to define and configure these workflows.
-
-The second feature is [**workspace transformations**](../azure-monitor/essentials/data-collection-transformations.md#workspace-transformation-dcr) for standard logs. It uses [**DCRs**](../azure-monitor/essentials/data-collection-rule-overview.md) to filter out irrelevant data, to enrich or tag your data, or to hide sensitive or personal information. Data transformation can be configured at ingestion time for the following types of built-in data connectors:
--- AMA-based data connectors (based on the new Azure Monitor Agent)-- MMA-based data connectors (based on the legacy Log Analytics Agent)-- Data connectors that use Diagnostic settings-- Service-to-service data connectors-
-For more information, see:
--- [Find your Microsoft Sentinel data connector](data-connectors-reference.md)-- [Data transformation in Microsoft Sentinel (preview)](data-transformation.md)-- [Configure ingestion-time data transformation for Microsoft Sentinel (preview)](configure-data-transformation.md).-
-### View MITRE support coverage (Public preview)
-
-Microsoft Sentinel now provides a new **MITRE** page, which highlights the MITRE tactic and technique coverage you currently have, and can configure, for your organization.
-
-Select items from the **Active** and **Simulated** menus at the top of the page to view the detections currently active in your workspace, and the simulated detections available for you to configure.
-
-For example:
--
-For more information, see [Understand security coverage by the MITRE ATT&CK® framework](mitre-coverage.md).
-
-### View Microsoft Purview data in Microsoft Sentinel (Public Preview)
-
-Microsoft Sentinel now integrates directly with Microsoft Purview by providing an out-of-the-box solution.
-
-The Microsoft Purview solution includes the Microsoft Purview data connector, related analytics rule templates, and a workbook that you can use to visualize sensitivity data detected by Microsoft Purview, together with other data ingested in Microsoft Sentinel.
--
-For more information, see [Tutorial: Integrate Microsoft Sentinel and Microsoft Purview](purview-solution.md).
-
-### Manually run playbooks based on the incident trigger (Public preview)
-
-While full automation is the best solution for many incident-handling, investigation, and mitigation tasks, there may often be cases where you would prefer your analysts have more human input and control over the situation. Also, you may want your SOC engineers to be able to test the playbooks they write before fully deploying them in automation rules.
-
-For these and other reasons, Microsoft Sentinel now allows you to [**run playbooks manually on-demand for incidents**](automate-responses-with-playbooks.md#run-a-playbook-manually) as well as alerts.
-
-Learn more about [running incident-trigger playbooks manually](tutorial-respond-threats-playbook.md#run-a-playbook-manually-on-an-incident).
-
-### Search across long time spans in large datasets (public preview)
-
-Use a search job when you start an investigation to find specific events in logs within a given time frame. You can search all your logs, filter through them, and look for events that match your criteria.
-
-Search jobs are asynchronous queries that fetch records. The results are returned to a search table that's created in your Log Analytics workspace after you start the search job. The search job uses parallel processing to run the search across long time spans, in extremely large datasets. So search jobs don't impact the workspace's performance or availability.
-
-Use search to find events in any of the following log types:
--- [Analytics logs](../azure-monitor/logs/data-platform-logs.md)-- [Basic logs (preview)](../azure-monitor/logs/basic-logs-configure.md)-
-You can also search analytics or basic log data stored in [archived logs (preview)](../azure-monitor/logs/data-retention-archive.md).
-
-For more information, see:
--- [Start an investigation by searching large datasets (preview)](investigate-large-datasets.md)-- [Search across long time spans in large datasets (preview)](search-jobs.md)-
-For information about billing for basic logs or log data stored in archived logs, see [Plan costs for Microsoft Sentinel](billing.md#understand-the-full-billing-model-for-microsoft-sentinel).
-
-### Restore archived logs from search (public preview)
-
-When you need to do a full investigation on data stored in archived logs, restore a table from the **Search** page in Microsoft Sentinel. Specify a target table and time range for the data you want to restore. Within a few minutes, the log data is restored and available within the Log Analytics workspace. Then you can use the data in high-performance queries that support full KQL.
-
-For more information, see:
--- [Start an investigation by searching large datasets (preview)](investigate-large-datasets.md)-- [Restore archived logs from search (preview)](restore.md)-
-## January 2022
--- [Support for MITRE ATT&CK techniques (Public preview)](#support-for-mitre-attck-techniques-public-preview)-- [Codeless data connectors (Public preview)](#codeless-data-connectors-public-preview)-- [Maturity Model for Event Log Management (M-21-31) Solution (Public preview)](#maturity-model-for-event-log-management-m-21-31-solution-public-preview)-- [SentinelHealth data table (Public preview)](#sentinelhealth-data-table-public-preview)-- [More workspaces supported for Multiple Workspace View](#more-workspaces-supported-for-multiple-workspace-view)-- [Kusto Query Language workbook and tutorial](#kusto-query-language-workbook-and-tutorial)-
-### Support for MITRE ATT&CK techniques (Public preview)
-
-In addition to supporting MITRE ATT&CK tactics, your entire Microsoft Sentinel user flow now also supports MITRE ATT&CK techniques.
-
-When creating or editing [analytics rules](detect-threats-custom.md), map the rule to one or more specific tactics *and* techniques. When you search for rules on the **Analytics** page, filter by tactic and technique to narrow your search results.
--
-Check for mapped tactics and techniques throughout Microsoft Sentinel, in:
--- **[Incidents](investigate-cases.md)**. Incidents created from alerts that are detected by rules mapped to MITRE ATT&CK tactics and techniques automatically inherit the rule's tactic and technique mapping.--- **[Bookmarks](bookmarks.md)**. Bookmarks that capture results from hunting queries mapped to MITRE ATT&CK tactics and techniques automatically inherit the query's mapping.-
-#### MITRE ATT&CK framework version upgrade
-
-We also upgraded the MITRE ATT&CK support throughout Microsoft Sentinel to use the MITRE ATT&CK framework *version 9*. This update includes support for the following new tactics:
-
-**Replacing the deprecated *PreAttack* tactic**:
--- [Reconnaissance](https://attack.mitre.org/versions/v9/tactics/TA0043/)-- [Resource Development](https://attack.mitre.org/versions/v9/tactics/TA0042/)-
-**Industrial Control System (ICS) tactics**:
--- [Impair Process Control](https://collaborate.mitre.org/attackics/index.php/Impair_Process_Control)-- [Inhibit Response Function](https://collaborate.mitre.org/attackics/index.php/Inhibit_Response_Function)-
-### Codeless data connectors (Public preview)
-
-Partners, advanced users, and developers can now use the new Codeless Connector Platform (CCP) to create custom connectors, connect their data sources, and ingest data to Microsoft Sentinel.
-
-The Codeless Connector Platform (CCP) provides support for new data connectors via ARM templates, API, or via a solution in the Microsoft Sentinel [content hub](sentinel-solutions.md).
-
-Connectors created using CCP are fully SaaS, without any requirements for service installations, and also include [health monitoring](monitor-data-connector-health.md) and full support from Microsoft Sentinel.
-
-For more information, see [Create a codeless connector for Microsoft Sentinel](create-codeless-connector.md).
-
-### Maturity Model for Event Log Management (M-21-31) Solution (Public preview)
-
-The Microsoft Sentinel content hub now includes the **Maturity Model for Event Log Management (M-21-31)** solution, which integrates Microsoft Sentinel and Microsoft Defender for Cloud to provide an industry differentiator for meeting challenging requirements in regulated industries.
-
-The Maturity Model for Event Log Management (M-21-31) solution provides a quantifiable framework to measure maturity. Use the analytics rules, hunting queries, playbooks, and workbook provided with the solution to do any of the following:
--- Design and build log management architectures-- Monitor and alert on log health issues, coverage, and blind spots-- Respond to notifications with Security Orchestration Automation & Response (SOAR) activities-- Remediate with Cloud Security Posture Management (CSPM)-
-For more information, see:
--- [Modernize Log Management with the Maturity Model for Event Log Management (M-21-31) Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/modernize-log-management-with-the-maturity-model-for-event-log/ba-p/3072842) (blog)-- [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md)-- [The Microsoft Sentinel content hub catalog](sentinel-solutions-catalog.md#domain-solutions)-
-### SentinelHealth data table (Public preview)
-
-Microsoft Sentinel now provides the **SentinelHealth** data table to help you monitor your connector health, providing insights on health drifts, such as latest failure events per connector, or connectors with changes from success to failure states. Use this data to create alerts and other automated actions, such as Microsoft Teams messages, new tickets in a ticketing system, and so on.
-
-Turn on the Microsoft Sentinel health feature for your workspace in order to have the **SentinelHealth** data table created at the next success or failure event generated for supported data connectors.
-
-For more information, see [Use the SentinelHealth data table (Public preview)](monitor-data-connector-health.md#use-the-sentinelhealth-data-table-public-preview).
-
-### More workspaces supported for Multiple Workspace View
-
-Now, instead of being limited to 10 workspaces in Microsoft Sentinel's [Multiple Workspace View](multiple-workspace-view.md), you can view data from up to 30 workspaces simultaneously.
-
-While we often recommend a single-workspace environment, some use cases require multiple workspaces, such as for Managed Security Service Providers (MSSPs) and their customers. **Multiple Workspace View** lets you see and work with security incidents across several workspaces at the same time, even across tenants, allowing you to maintain full visibility and control of your organization's security responsiveness.
-
-For more information, see:
--- [Use multiple Microsoft Sentinel workspaces](extend-sentinel-across-workspaces-tenants.md#the-need-to-use-multiple-microsoft-sentinel-workspaces)-- [Work with incidents in many workspaces at once](multiple-workspace-view.md)-- [Manage multiple tenants in Microsoft Sentinel as an MSSP](multiple-tenants-service-providers.md)-
-### Kusto Query Language workbook and tutorial
-
-Kusto Query Language is used in Microsoft Sentinel to search, analyze, and visualize data, as the basis for detection rules, workbooks, hunting, and more.
-
-The new **Advanced KQL for Microsoft Sentinel** interactive workbook is designed to help you improve your Kusto Query Language proficiency by taking a use case-driven approach.
-
-The workbook:
-
-- Groups Kusto Query Language operators / commands by category for easy navigation.
-- Lists the possible tasks a user would perform with Kusto Query Language in Microsoft Sentinel. Each task includes operators used, sample queries, and use cases.
-- Compiles a list of existing content found in Microsoft Sentinel (analytics rules, hunting queries, workbooks and so on) to provide additional references specific to the operators you want to learn.
-- Allows you to execute sample queries on-the-fly, within your own environment or in "LA Demo" - a public [Log Analytics demo environment](https://aka.ms/lademo). Try the sample Kusto Query Language statements in real time without the need to navigate away from the workbook.
-
-Accompanying the new workbook is an explanatory [blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/advanced-kql-framework-workbook-empowering-you-to-become-kql/ba-p/3033766), as well as a new [introduction to Kusto Query Language](kusto-overview.md) and a [collection of learning and skilling resources](kusto-resources.md) in the Microsoft Sentinel documentation.
- ## Next steps > [!div class="nextstepaction"]
service-bus-messaging Service Bus Dotnet Get Started With Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues.md
In this quickstart, you'll do the following steps:
4. Write a .NET console application to receive those messages from the queue.

> [!NOTE]
-> - This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For an overview of the .NET client library, see [Azure Service Bus client library for .NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/README.md). For more samples, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
-> - This quick start shows you two ways of connecting to Azure Service Bus: **connection string** and **passwordless**. The first option shows you how to use a connection string to connect to a Service Bus namespace. The second option shows you how to use your security principal in Azure Active Directory and the role-based access control (RBAC) to connect to a Service Bus namespace. You don't need to worry about having hard-coded connection string in your code or in a configuration file or in secure storage like Azure Key Vault. If you are new to Azure, you may find the connection string option easier to follow. We recommend using the passwordless option in real-world applications and production environments. For more information, see [Authentication and authorization](service-bus-authentication-and-authorization.md).
-
+> This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For an overview of the .NET client library, see [Azure Service Bus client library for .NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/README.md). For more samples, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
## Prerequisites
spring-apps How To Enterprise Application Configuration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-application-configuration-service.md
This article shows you how to use Application Configuration Service for VMware T
[Application Configuration Service for VMware Tanzu](https://docs.pivotal.io/tcs-k8s/0-1/) is one of the commercial VMware Tanzu components. It enables the management of Kubernetes-native `ConfigMap` resources that are populated from properties defined in one or more Git repositories.
-Application Configuration Service for Tanzu gives you a central place to manage external properties for applications across all environments.
+With Application Configuration Service for Tanzu, you have a central place to manage external properties for applications across all environments. To understand the differences from Spring Cloud Config Server in Basic/Standard tier, see the [Use Application Configuration Service for external configuration](./how-to-migrate-standard-tier-to-enterprise-tier.md#use-application-configuration-service-for-external-configuration) section of [Migrate an Azure Spring Apps Basic or Standard tier instance to Enterprise tier](./how-to-migrate-standard-tier-to-enterprise-tier.md).
## Prerequisites
spring-apps How To Migrate Standard Tier To Enterprise Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-migrate-standard-tier-to-enterprise-tier.md
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to migrate an existing application in Basic or Standard tier to Enterprise tier. When you migrate from Basic or Standard tier to Enterprise tier, VMware Tanzu components will replace the OSS Spring Cloud components to provide more feature support.
+This article shows you how to migrate an existing application in Basic or Standard tier to Enterprise tier. When you migrate from Basic or Standard tier to Enterprise tier, VMware Tanzu components will replace the open-source software (OSS) Spring Cloud components to provide more feature support.
This article will use the Pet Clinic sample apps as examples of how to migrate.
The app creation steps are the same as Standard Tier.
## Use Application Configuration Service for external configuration
-In Enterprise tier, Application Configuration Service provides external configuration support for your apps. Managed Spring Cloud Config Server is only available in Basic and Standard tiers and isn't available in Enterprise tier.
+For externalized configuration in a distributed system, managed Spring Cloud Config Server is only available in Basic and Standard tiers. In Enterprise tier, Application Configuration Service for Tanzu (ACS) provides similar functions for your apps. The following table describes some differences in usage between the OSS config server and ACS.
-| Component | Standard Tier | Enterprise Tier |
-||--||
-| Config Server | OSS config server <br> Auto bound (always injection) <br>Always provisioned | Application Configuration Service for Tanzu <br> Need manual binding to app <br> Enable on demand |
+| Component | Support tiers | Enabled | Bind to app | Profile |
+||-|-|-|--|
+| Spring Cloud Config Server | Basic/Standard | Always enabled. | Auto bound | Configured in app's source code. |
+| Application Configuration Service for Tanzu | Enterprise | Enable on demand. | Manual bind | Provided as `config-file-pattern` in an Azure Spring Apps deployment. |
+
+Unlike the client-server mode in the OSS config server, ACS manages configuration by using the Kubernetes-native `ConfigMap`, which is populated from properties defined in backend Git repositories. ACS can't read the active profile that's configured in the app's source code to select the right configuration, so you must specify `config-file-pattern` explicitly at the Azure Spring Apps deployment level, as sketched in the example that follows.
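A hedged sketch of setting the pattern at deployment time with the Azure CLI; the `--config-file-patterns` parameter name and the pattern value are assumptions, so verify them with `az spring app deploy --help` for your CLI version:

```powershell
# Sketch only: deploy an app and point it at a config file pattern served by Application Configuration Service.
# Resource, app, and pattern names are placeholders; the parameter name is an assumption.
az spring app deploy `
    --resource-group "<resource-group>" `
    --service "<spring-apps-instance>" `
    --name "<app-name>" `
    --artifact-path "<path-to-jar>" `
    --config-file-patterns "application/default"
```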
## Configure Application Configuration Service for Tanzu settings
static-web-apps Apis Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-functions.md
The following table contrasts the differences between using managed and existing
| Feature | Managed Functions | Bring your own Functions |
|---|---|---|
| Access to Azure Functions [triggers](../azure-functions/functions-triggers-bindings.md#supported-bindings) | HTTP only | All |
-| Supported Azure Functions [runtimes](../azure-functions/supported-languages.md#languages-by-runtime-version)<sup>1</sup> | Node.js 12<br>Node.js 14<br>Node.js 16<br>.NET Core 3.1<br>.NET 6.0<br>Python 3.8<br>Python 3.9 | All |
+| Supported Azure Functions [runtimes](../azure-functions/supported-languages.md#languages-by-runtime-version)<sup>1</sup> | Node.js 12<br>Node.js 14<br>Node.js 16<br>.NET Core 3.1<br>.NET 6.0<br>.NET 7.0<br>Python 3.8<br>Python 3.9 | All |
| Supported Azure Functions [hosting plans](../azure-functions/functions-scale.md) | Consumption | Consumption<br>Premium<br>Dedicated |
| [Integrated security](user-information.md) with direct access to user authentication and role-based authorization data | ✔ | ✔ |
| [Routing integration](./configuration.md?#routes) that makes the `/api` route available to the web app securely without requiring custom CORS rules. | ✔ | ✔ |
static-web-apps Deploy Blazor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-blazor.md
Together, the following projects make up the parts required to create a Blazor W
### Fallback route
-The app exposes URLs like `_/counter_` and `_/fetchdata_`, which map to specific routes of the app. Since this app is implemented as a single page, each route is served the `_index.html_` file. To ensure that requests for any path return `_index.html_`, a [fallback route](./configuration.md#fallback-routes) gets implemented in the `_staticwebapp.config.json_` file found in the client project's root folder.
+The app exposes URLs like `/counter` and `/fetchdata`, which map to specific routes of the app. Since this app is implemented as a single page, each route is served the `index.html` file. To ensure that requests for any path return `index.html`, a [fallback route](./configuration.md#fallback-routes) gets implemented in the `staticwebapp.config.json` file found in the client project's root folder.
```json {
The app exposes URLs like `_/counter_` and `_/fetchdata_`, which map to specific
} ```
-The json configuration ensures that requests to any route in the app return the `_index.html_` page.
+The json configuration ensures that requests to any route in the app return the `index.html` page.
## Clean up resources
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
To learn more, see [SFTP permission model](secure-file-transfer-protocol-support
- For performance issues and considerations, see [SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage](secure-file-transfer-protocol-performance.md).

-- Maximum file upload size via the SFTP endpoint is 91 GB.
+- Maximum file upload size via the SFTP endpoint is 100 GB.
- Special containers such as $logs, $blobchangefeed, $root, $web aren't accessible via the SFTP endpoint.
storage Storage Account Recover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-recover.md
Previously updated : 06/23/2022 Last updated : 11/17/2022
If the deleted storage account used customer-managed keys with Azure Key Vault a
## Recover a deleted account from the Azure portal
-To restore a deleted storage account from within another storage account, follow these steps:
+To recover a deleted storage account from the Azure portal, follow these steps:
1. Navigate to the list of your storage accounts in the Azure portal.
1. Select the **Restore** button to open the **Restore deleted account** pane.
To restore a deleted storage account from within another storage account, follow
1. Select the **Restore** button to recover the account. The portal displays a notification that the recovery is in progress.
-## Recover a deleted account via a support ticket
-
-1. In the Azure portal, navigate to **Help + support**.
-1. Select **New support request**.
-1. On the **Basics** tab, in the **Issue type** field, select **Technical**.
-1. In the **Subscription** field, select the subscription that contained the deleted storage account.
-1. In the **Service** field, select **Storage Account Management**.
-1. In the **Resource** field, select any storage account resource. The deleted storage account will not appear in the list.
-1. Add a brief summary of the issue.
-1. In the **Problem type** field, select **Deletion and Recovery**.
-1. In the **Problem subtype** field, select **Recover deleted storage account**. The following image shows an example of the **Basics** tab being filled out:
-
- :::image type="content" source="media/storage-account-recover/recover-account-support-basics.png" alt-text="Screenshot showing how to recover a storage account through support ticket - Basics tab":::
-
-1. Next, navigate to the **Solutions** tab, and select **Customer-Controlled Storage Account Recovery**, as shown in the following image:
-
- :::image type="content" source="media/storage-account-recover/recover-account-support-solutions.png" alt-text="Screenshot showing how to recover a storage account through support ticket - Solutions tab":::
-
-1. From the dropdown, select the account to recover, as shown in the following image. If the storage account that you want to recover is not in the dropdown, then it cannot be recovered.
-
- :::image type="content" source="media/storage-account-recover/recover-account-support.png" alt-text="Screenshot showing how to recover a storage account through support ticket":::
-
-1. Select the **Recover** button to restore the account. The portal displays a notification that the recovery is in progress.
- ## Next steps - [Storage account overview](storage-account-overview.md)
storage Storage Explorer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-troubleshooting.md
For other resource types, we don't currently have an Azure RBAC-related solution
1. Select **Shared access signature (SAS)** and select **Next**. 1. Enter the shared access signature URL you received and enter a unique display name for the connection. Select **Next** and then select **Connect**.
-For more information on how to attach to resources, see [Attach to an individual resource](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=linux#attach-to-an-individual-resource).
+For more information on how to attach to resources, see [Attach to an individual resource](../../vs-azure-tools-storage-manage-with-storage-explorer.md#attach-to-an-individual-resource).
### Recommended Azure built-in roles
Follow these steps to find them:
1. Install OpenSSL: - [Windows](https://slproweb.com/products/Win32OpenSSL.html): Any of the light versions should be sufficient.
- - Mac and Linux: Should be included with your operating system.
+ - Mac: Should be included with your operating system.
+ - Linux: Should be included with your operating system.
+
1. Run OpenSSL:
   - Windows: Open the installation directory, select **/bin/**, and then double-click **openssl.exe**.
- - Mac and Linux: Run `openssl` from a terminal.
+ - Mac: Run `openssl` from a terminal.
+ - Linux: Run `openssl` from a terminal.
+
1. Run the command `openssl s_client -showcerts -connect <hostname>:443` for any of the Microsoft or Azure host names that your storage resources are behind. For more information, see this [list of host names that are frequently accessed by Storage Explorer](./storage-explorer-network.md).
+
1. Look for self-signed certificates. If the subject `("s:")` and issuer `("i:")` are the same, the certificate is most likely self-signed.
+
1. When you find the self-signed certificates, for each one, copy and paste everything from, and including, `--BEGIN CERTIFICATE--` to `--END CERTIFICATE--` into a new .cer file.
+
1. Open Storage Explorer and go to **Edit** > **SSL Certificates** > **Import Certificates**. Then use the file picker to find, select, and open the .cer files you created.

### Disable SSL certificate validation
-If you can't find any self-signed certificates by following these steps, contact us through the feedback tool. You can also open Storage Explorer from the command line with the `--ignore-certificate-errors` flag. When opened with this flag, Storage Explorer ignores certificate errors. *This flag is not recommended.*
+If you can't find any self-signed certificates by following these steps, contact us through the feedback tool. You can also open Storage Explorer from the command line with the `--ignore-certificate-errors` flag. When opened with this flag, Storage Explorer ignores certificate errors. *This flag isn't recommended.*
## Sign-in issues
If you can't do any of those options, you can also [change where sign-in happens
### Unable to acquire token, tenant is filtered out
-If you see an error message that says a token can't be acquired because a tenant is filtered out, you're trying to access a resource that's in a tenant you filtered out. To unfilter the tenant, go to the **Account Panel**. Make sure the checkbox for the tenant specified in the error is selected. For more information on filtering tenants in Storage Explorer, see [Managing accounts](./storage-explorer-sign-in.md#managing-accounts).
+Sometimes you may see an error message that says a token can't be acquired because a tenant is filtered out. This means you're trying to access a resource that's in a tenant you filtered out. To include the tenant, go to the **Account Panel**. Make sure the checkbox for the tenant specified in the error is selected. For more information on filtering tenants in Storage Explorer, see [Managing accounts](./storage-explorer-sign-in.md#managing-accounts).
### Authentication library failed to start properly
If on startup you see an error message that says Storage Explorer's authenticati
If you believe that your installation environment meets all prerequisites, [open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues/new). When you open your issue, make sure to include:

-- Your OS.
-- What version of Storage Explorer you're trying to use.
-- If you checked the prerequisites.
+- Your OS
+- What version of Storage Explorer you're trying to use
+- Whether you checked the prerequisites
- [Authentication logs](#authentication-logs) from an unsuccessful launch of Storage Explorer. Verbose authentication logging is automatically enabled after this type of error occurs. ### Blank window when you use integrated sign-in
If you can't retrieve your subscriptions after you successfully sign in, try the
- Verify that your account has access to the subscriptions you expect. You can verify your access by signing in to the portal for the Azure environment you're trying to use.
- Make sure you've signed in through the correct Azure environment like Azure, Azure China 21Vianet, Azure Germany, Azure US Government, or Custom Environment.
- If you're behind a proxy server, make sure you configured the Storage Explorer proxy correctly.
-- Try removing and re-adding the account.
+- Try removing and adding back the account.
- If there's a "More information" or "Error details" link, check which error messages are being reported for the tenants that are failing. If you aren't sure how to respond to the error messages, [open an issue in GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues). ## Problem interacting with your OS credential store during an AzCopy transfer
If you see this message on Windows, most likely the Windows Credential Manager i
1. Close Storage Explorer 1. On the **Start** menu, search for **Credential Manager** and open it. 1. Go to **Windows Credentials**.
-1. Under **Generic Credentials**, look for entries associated with programs you no longer use and delete them. You can also look for entries like `azcopy/aadtoken/<some number>` and delete those.
+1. Under **Generic Credentials**, look for entries associated with programs you no longer use and delete them. You can also look for entries like `azcopy/aadtoken/<some number>` and delete those entries.
-If the message continues to appear after completing the above steps, or if you encounter this message on platforms other than Windows, then please [open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues).
+If the message continues to appear after completing the above steps, or if you encounter this message on platforms other than Windows, you can [open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues).
## Can't remove an attached storage account or resource
If the owner of a subscription or account has granted you access to a resource,
## Connection string doesn't have complete configuration settings
-If you receive this error message, it's possible that you don't have the necessary permissions to obtain the keys for your storage account. To confirm that this is the case, go to the portal and locate your storage account. Right-click the node for your storage account and select **Open in Portal**. Then, go to the **Access Keys** pane. If you don't have permissions to view keys, you'll see a "You don't have access" message. To work around this issue, you can either obtain the account key from someone else and attach through the name and key or you can ask someone for a shared access signature to the storage account and use it to attach the storage account.
+If you receive this error message, it's possible that you don't have the necessary permissions to obtain the keys for your storage account. To confirm, go to the portal and locate your storage account. Right-click the node for your storage account and select **Open in Portal**. Then, go to the **Access Keys** pane. If you don't have permissions to view keys, you'll see a "You don't have access" message. To work around this issue, you can obtain either an account name and key or an account shared access signature and use it to attach the storage account.
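For example, someone who does have access to the storage account could hand you either the account name and key or an account-level SAS. A minimal sketch with the Az.Storage module (resource names are placeholders; the permissions and expiry are only examples):

```powershell
# Sketch: retrieve an account key, then build an account SAS that can be shared instead of the key.
$rg      = "<resource-group>"          # placeholder
$account = "<storage-account-name>"    # placeholder

# Requires permission to list keys on the storage account.
$key = (Get-AzStorageAccountKey -ResourceGroupName $rg -Name $account)[0].Value
$ctx = New-AzStorageContext -StorageAccountName $account -StorageAccountKey $key

# Read/list-only account SAS for blobs and files, valid for seven days.
New-AzStorageAccountSASToken -Service Blob,File -ResourceType Service,Container,Object `
    -Permission "rl" -ExpiryTime (Get-Date).AddDays(7) -Context $ctx
```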
If you do see the account keys, file an issue in GitHub so that we can help you resolve the issue. ## "Error occurred while adding new connection: TypeError: Cannot read property 'version' of undefined"
-If you receive this error message when you try to add a custom connection, the connection data that's stored in the local credential manager might be corrupted. To work around this issue, try deleting your corrupted local connections, and then re-add them:
+If you receive this error message when you try to add a custom connection, the connection data that's stored in the local credential manager might be corrupted. To work around this issue, try deleting and adding back your corrupted local connections:
1. Start Storage Explorer. From the menu, go to **Help** > **Toggle Developer Tools**. 1. In the opened window, on the **Application** tab, go to **Local Storage** > **file://** on the left side.
If you receive this error message when you try to add a custom connection, the c
To preserve the connections that aren't corrupted, use the following steps to locate the corrupted connections. If you don't mind losing all existing connections, skip these steps and follow the platform-specific instructions to clear your connection data.
-1. From a text editor, re-add each connection name to **Developer Tools**. Then check whether the connection is still working.
-1. If a connection is working correctly, it's not corrupted and you can safely leave it there. If a connection isn't working, remove its value from **Developer Tools**, and record it so that you can add it back later.
+1. From a text editor, add back each connection name to **Developer Tools**. Then check whether the connection is still working.
+1. If a connection is working correctly, it's not corrupted; you can safely leave it there. If a connection isn't working, remove its value from **Developer Tools**, and record it so that you can add it back later.
1. Repeat until you've examined all your connections.
-After you go through all your connections, for all connection names that aren't added back, you must clear their corrupted data, if there is any. Then add them back by using the standard steps in Storage Explorer.
+After removing connection names, you must clear their corrupted data. Then you can add the connections back by using the standard connect steps in Storage Explorer.
-### [Windows](#tab/Windows)
+# [Windows](#tab/Windows)
1. On the **Start** menu, search for **Credential Manager** and open it. 1. Go to **Windows Credentials**. 1. Under **Generic Credentials**, look for entries that have the `<connection_type_key>/<corrupted_connection_name>` key. An example is `StorageExplorer_CustomConnections_Accounts_v1/account1`.
-1. Delete these entries and re-add the connections.
+1. Delete and add back these connections.
-### [macOS](#tab/macOS)
+# [macOS](#tab/macOS)
-1. Open Spotlight by selecting **Command+Spacebar** and search for **Keychain access**.
+1. Open Spotlight by selecting **Command+Space** and search for **Keychain access**.
1. Look for entries that have the `<connection_type_key>/<corrupted_connection_name>` key. An example is `StorageExplorer_CustomConnections_Accounts_v1/account1`.
-1. Delete these entries and re-add the connections.
+1. Delete and add back these connections.
-### [Linux](#tab/Linux)
+# [Ubuntu](#tab/linux-ubuntu)
-Local credential management varies depending on the Linux distribution. If your Linux distribution doesn't provide a built-in GUI tool for local credential management, install a third-party tool to manage your local credentials. For example, you can use [Seahorse](https://wiki.gnome.org/Apps/Seahorse/), an open-source GUI tool for managing Linux local credentials.
+Local credential management varies depending on your system configuration. If your system doesn't have a tool for local credential management installed, you may install a third-party tool compatible with `libsecret` to manage your local credentials. For example, on systems using GNOME, you can install [Seahorse](https://wiki.gnome.org/Apps/Seahorse/).
1. Open your local credential management tool. Find your saved credentials.
-1. Look for entries that have the `<connection_type_key>/<corrupted_connection_name>` key. An example is `StorageExplorer_CustomConnections_Accounts_v1/account1`.
-1. Delete these entries and re-add the connections.
+1. Look for entries that have the `<connection_type_key>/<corrupted_connection_name>` key (for example `StorageExplorer_CustomConnections_Accounts_v1/account1`)
+1. Delete and add back these connections.
+
+# [Red Hat Enterprise Linux](#tab/linux-rhel)
+
+Local credential management varies depending on your system configuration. If your system doesn't have a tool for local credential management installed, you may install a third-party tool compatible with `libsecret` to manage your local credentials. For example, on systems using GNOME, you can install [Seahorse](https://wiki.gnome.org/Apps/Seahorse/).
+
+1. Open your local credential management tool. Find your saved credentials.
+1. Look for entries that have the `<connection_type_key>/<corrupted_connection_name>` key (for example `StorageExplorer_CustomConnections_Accounts_v1/account1`)
+1. Delete and add back these connections.
+
+# [SUSE Linux Enterprise Server](#tab/linux-sles)
+
+> [!NOTE]
+> Storage Explorer has not been tested for SLES. You may try using Storage Explorer on your system, but we cannot guarantee that Storage Explorer will work as expected.
+
+Local credential management varies depending on your system configuration. If your system doesn't have a tool for local credential management installed, you may install a third-party tool compatible with `libsecret` to manage your local credentials. For example, on systems using GNOME, you can install [Seahorse](https://wiki.gnome.org/Apps/Seahorse/).
+
+1. Open your local credential management tool. Find your saved credentials.
+1. Look for entries that have the `<connection_type_key>/<corrupted_connection_name>` key (for example `StorageExplorer_CustomConnections_Accounts_v1/account1`)
+1. Delete and add back these connections.
+ If you still encounter this error after you run these steps, or if you want to share what you suspect has corrupted the connections, [open an issue](https://github.com/microsoft/AzureStorageExplorer/issues) on our GitHub page.
If you accidentally attached by using an invalid shared access signature URL and
1. When you're running Storage Explorer, select **F12** to open the **Developer Tools** window. 1. On the **Application** tab, select **Local Storage** > **file://** on the left side.
-1. Find the key associated with the service type of the problematic shared access signature URI. For example, if the bad shared access signature URI is for a blob container, look for the key named `StorageExplorer_AddStorageServiceSAS_v1_blob`.
+1. Find the key associated with the service type of the shared access signature URI. For example, if the bad shared access signature URI is for a blob container, look for the key named `StorageExplorer_AddStorageServiceSAS_v1_blob`.
1. The value of the key should be a JSON array. Find the object associated with the bad URI, and delete it. 1. Select **Ctrl+R** to reload Storage Explorer.
-## Linux dependencies
+## Storage Explorer dependencies
+
+# [Windows](#tab/Windows)
+
+Storage Explorer comes packaged with all dependencies it needs to run on Windows.
+
+# [macOS](#tab/macOS)
+
+Storage Explorer comes packaged with all dependencies it needs to run on macOS.
+
+# [Ubuntu](#tab/linux-ubuntu)
### Snap
snap connect storage-explorer:password-manager-service :password-manager-service
You can also download the application as a *.tar.gz* file, but you'll have to install dependencies manually.
-Storage Explorer as provided in the *.tar.gz* download is supported for the following versions of Ubuntu only. Storage Explorer might work on other Linux distributions, but they aren't officially supported.
+Storage Explorer requires the [.NET 6 runtime](/dotnet/core/install/linux) to be installed on your system. The ASP.NET runtime is *not* required.
-- Ubuntu 20.04 x64
-- Ubuntu 18.04 x64
-- Ubuntu 16.04 x64
+> [!NOTE]
+> Older versions of Storage Explorer may require a different version of .NET or .NET Core. Refer to release notes or in-app error messages to help determine the required version.
+
+Many libraries needed by Storage Explorer come preinstalled with Canonical's standard installations of Ubuntu. Custom environments might be missing some of these libraries. If you have issues launching Storage Explorer, make sure the following packages are installed on your system:
+
+- iproute2
+- libasound2
+- libatm1
+- libgconf-2-4
+- libnspr4
+- libnss3
+- libpulse0
+- libsecret-1-0
+- libx11-xcb1
+- libxss1
+- libxtables11
+- libxtst6
+- xdg-utils
-Storage Explorer requires the .NET 6 runtime to be installed on your system. The ASP.NET runtime is **not** required.
+# [Red Hat Enterprise Linux](#tab/linux-rhel)
+
+### Snap
+
+Storage Explorer 1.10.0 and later is available as a snap from the Snap Store. The Storage Explorer snap installs all its dependencies automatically. It's updated when a new version of the snap is available. Installing the Storage Explorer snap is the recommended method of installation.
+
+Storage Explorer requires the use of a password manager, which you might need to connect manually before Storage Explorer will work correctly. You can connect Storage Explorer to your system's password manager by running the following command:
+
+```bash
+snap connect storage-explorer:password-manager-service :password-manager-service
+```
+
+### .tar.gz file
> [!NOTE]
-> Older versions of Storage Explorer may require a different version of .NET or .NET Core. Refer to release notes or in app error messages to help determine the required version.
+> Storage Explorer as provided in the *.tar.gz* download is supported for Ubuntu only. Storage Explorer might work on RHEL, but it is not officially supported.
-### [Ubuntu 22.04](#tab/2204)
+You can also download the application as a *.tar.gz* file, but you'll have to install dependencies manually.
-1. Download the Storage Explorer *.tar.gz* file.
-1. Install the [.NET 6 runtime](/dotnet/core/install/linux-ubuntu)
+Storage Explorer requires the [.NET 6 runtime](/dotnet/core/install/linux) to be installed on your system. The ASP.NET runtime is *not* required.
+> [!NOTE]
+> Older versions of Storage Explorer may require a different version of .NET or .NET Core. Refer to release notes or in-app error messages to help determine the required version.
-### [Ubuntu 20.04](#tab/2004)
+Many libraries needed by Storage Explorer may be missing in RHEL environments. If you have issues launching Storage Explorer, make sure the following packages (or their RHEL equivalents) are installed on your system:
-1. Download the Storage Explorer *.tar.gz* file.
-1. Install the [.NET 6 runtime](/dotnet/core/install/linux-ubuntu)
+- iproute2
+- libasound2
+- libatm1
+- libgconf-2-4
+- libnspr4
+- libnss3
+- libpulse0
+- libsecret-1-0
+- libx11-xcb1
+- libxss1
+- libxtables11
+- libxtst6
+- xdg-utils
-### [Ubuntu 18.04](#tab/1804)
+# [SUSE Linux Enterprise Server](#tab/linux-sles)
-1. Download the Storage Explorer *.tar.gz* file.
-1. Install the [.NET 6 runtime](/dotnet/core/install/linux-ubuntu)
+> [!NOTE]
+> Storage Explorer has not been tested for SLES. You may try using Storage Explorer on your system, but we cannot guarantee that Storage Explorer will work as expected.
-
+### Snap
-Many libraries needed by Storage Explorer come preinstalled with Canonical's standard installations of Ubuntu. Custom environments might be missing some of these libraries. If you have issues launching Storage Explorer, make sure the following packages are installed on your system:
+Storage Explorer 1.10.0 and later is available as a snap from the Snap Store. The Storage Explorer snap installs all its dependencies automatically. It's updated when a new version of the snap is available. Installing the Storage Explorer snap is the recommended method of installation.
+
+Storage Explorer requires the use of a password manager, which you might need to connect manually before Storage Explorer will work correctly. You can connect Storage Explorer to your system's password manager by running the following command:
+
+```bash
+snap connect storage-explorer:password-manager-service :password-manager-service
+```
+
+### .tar.gz file
+
+You can also download the application as a *.tar.gz* file, but you'll have to install dependencies manually.
+
+Storage Explorer requires the [.NET 6 runtime](/dotnet/core/install/linux) to be installed on your system. The ASP.NET runtime is *not* required.
+
+> [!NOTE]
+> Older versions of Storage Explorer may require a different version of .NET or .NET Core. Refer to release notes or in-app error messages to help determine the required version.
+
+Many libraries needed by Storage Explorer may be missing in SLES environments. If you have issues launching Storage Explorer, make sure the following packages (or their SLES equivalents) are installed on your system:
- iproute2
- libasound2
Many libraries needed by Storage Explorer come preinstalled with Canonical's sta
- libxtst6
- xdg-utils

+
+
### Patch Storage Explorer for newer versions of .NET Core

For Storage Explorer 1.7.0 or earlier, you might have to patch the version of .NET Core used by Storage Explorer:
When you report an issue to GitHub, you might be asked to gather certain logs to
### Storage Explorer logs
-Storage Explorer logs various things to its own application logs. You can easily get to these logs by selecting **Help** > **Open Logs Directory**. By default, Storage Explorer logs at a low level of verbosity. To change the verbosity level, go to **Settings** (the **gear** symbol on the left) > **Application** > **Logging** > **Log Level**. You can then set the log level as needed. For troubleshooting, it is recommended to use the `debug` log level.
+Storage Explorer logs various things to its own application logs. You can easily get to these logs by selecting **Help** > **Open Logs Directory**. By default, Storage Explorer logs at a low level of verbosity. To change the verbosity level, go to **Settings** (the **gear** symbol on the left) > **Application** > **Logging** > **Log Level**. You can then set the log level as needed. For troubleshooting, the `debug` log level is recommended.
Logs are split into folders for each session of Storage Explorer that you run. For whatever log files you need to share, place them in a zip archive, with files from different sessions in different folders.
Logs are split into folders for each session of Storage Explorer that you run. F
For issues related to sign-in or Storage Explorer's authentication library, you'll most likely need to gather authentication logs. Authentication logs are stored at:

- Windows: *C:\Users\\<your username\>\AppData\Local\Temp\servicehub\logs*
-- macOS and Linux: *~/.ServiceHub/logs*
+- macOS: *~/.ServiceHub/logs*
+- Linux: *~/.ServiceHub/logs*
Generally, you can follow these steps to gather the logs:
-1. Go to **Settings** (the **gear** symbol on the left) > **Application** > **Sign-in**. Select **Verbose Authentication Logging**. If Storage Explorer fails to start because of an issue with its authentication library, this will be done for you.
+1. Go to **Settings** (the **gear** symbol on the left) > **Application** > **Sign-in**. Select **Verbose Authentication Logging**. If Storage Explorer fails to start because of an issue with its authentication library, this step will be done for you.
1. Close Storage Explorer. 1. Optional/recommended: Clear out existing logs from the *logs* folder. This step reduces the amount of information you have to send us. 1. Open Storage Explorer and reproduce your issue.
If you're having trouble transferring data, you might need to get the AzCopy log
- For transfers that failed in the past, go to the AzCopy logs folder. This folder can be found at:
  - Windows: *C:\Users\\<your username\>\\.azcopy*
- - macOS and Linux: *~/.azcopy*
+ - macOS: *~/.azcopy*
+ - Linux: *~/.azcopy*
### Network logs
-For some issues, you'll need to provide logs of the network calls made by Storage Explorer. On Windows, you can do this by using Fiddler.
+For some issues, you'll need to provide logs of the network calls made by Storage Explorer. On Windows, you can get network logs by using Fiddler.
> [!NOTE] > Fiddler traces might contain passwords you entered or sent in your browser during the gathering of the trace. Make sure to read the instructions on how to sanitize a Fiddler trace. Don't upload Fiddler traces to GitHub. You'll be told where you can securely send your Fiddler trace.
storage Storage Files Identity Ad Ds Update Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-update-password.md
Previously updated : 11/16/2022 Last updated : 11/17/2022
If you registered the Active Directory Domain Services (AD DS) identity/account that represents your storage account in an organizational unit or domain that enforces password expiration time, you must change the password before the maximum password age. Your organization may run automated cleanup scripts that delete accounts once their password expires. Because of this, if you don't change your password before it expires, your account could be deleted, which will cause you to lose access to your Azure file shares.
+To prevent unintended password rotation, during the onboarding of the Azure storage account in the domain, make sure to place the Azure storage account into a separate organizational unit in AD DS. Disable Group Policy inheritance on this organizational unit to prevent default domain policies or specific password policies from being applied.
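A minimal sketch of that setup with the Windows Server Active Directory and Group Policy modules; the OU name and domain distinguished name are placeholders, not values from the article:

```powershell
# Sketch: create a dedicated OU for the storage account identity and block GPO inheritance on it.
Import-Module ActiveDirectory, GroupPolicy

# Create the OU that will hold the storage account's computer/service account.
New-ADOrganizationalUnit -Name "AzureStorageAccounts" -Path "DC=contoso,DC=com"

# Block Group Policy inheritance so domain password policies aren't applied to this OU.
Set-GPInheritance -Target "OU=AzureStorageAccounts,DC=contoso,DC=com" -IsBlocked Yes
```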
+ > [!NOTE] > A storage account identity in AD DS can be either a service account or a computer account. Service account passwords can expire in AD; however, because computer account password changes are driven by the client machine and not AD, they don't expire in AD.
-To trigger password rotation, you can run the `Update-AzStorageAccountADObjectPassword` cmdlet from the [AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases). This command must be run in an on-premises AD DS-joined environment by a [hybrid identity](../../active-directory/hybrid/whatis-hybrid-identity.md) with owner permission to the storage account and AD DS permissions to change the password of the identity representing the storage account. The command performs actions similar to storage account key rotation. Specifically, it gets the second Kerberos key of the storage account and uses it to update the password of the registered account in AD DS. Then it regenerates the target Kerberos key of the storage account and updates the password of the registered account in AD DS.
+There are two options for triggering password rotation. You can use the `AzFilesHybrid` module or Active Directory PowerShell. Use one method, not both.
+
+## Applies to
+| File share type | SMB | NFS |
+|-|:-:|:-:|
+| Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+
+## Use AzFilesHybrid module
-To prevent password rotation, during the onboarding of the Azure storage account in the domain, make sure to place the Azure storage account into a separate organizational unit in AD DS. Disable Group Policy inheritance on this organizational unit to prevent default domain policies or specific password policies to be applied.
+You can run the `Update-AzStorageAccountADObjectPassword` cmdlet from the [AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases). This command must be run in an on-premises AD DS-joined environment by a [hybrid identity](../../active-directory/hybrid/whatis-hybrid-identity.md) with owner permission to the storage account and AD DS permissions to change the password of the identity representing the storage account. The command performs actions similar to storage account key rotation. Specifically, it gets the second Kerberos key of the storage account and uses it to update the password of the registered account in AD DS. Then it regenerates the target Kerberos key of the storage account and updates the password of the registered account in AD DS.
```PowerShell # Update the password of the AD DS account registered for the storage account
Update-AzStorageAccountADObjectPassword `
This action will change the password for the AD object from kerb1 to kerb2. This is intended to be a two-stage process: rotate from kerb1 to kerb2 (kerb2 will be regenerated on the storage account before being set), wait several hours, and then rotate back to kerb1 (this cmdlet will likewise regenerate kerb1).
-## Applies to
-| File share type | SMB | NFS |
-|-|:-:|:-:|
-| Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+## Use Active Directory PowerShell
+
+If you don't want to download the `AzFilesHybrid` module, you can use [Active Directory PowerShell](/powershell/module/activedirectory).
+
+> [!IMPORTANT]
+> The Windows Server Active Directory PowerShell cmdlets in this section must be run in Windows PowerShell 5.1 with elevated privileges. PowerShell 7.x and Azure Cloud Shell won't work in this scenario.
+
+Replace `<domain-object-identity>`, `<resource-group-name>`, and `<storage-account-name>` in the following script with your values, then run the script to update your domain object password:
+
+```powershell
+$ResourceGroupName = "<resource-group-name>"     # Resource group that contains the storage account
+$StorageAccountName = "<storage-account-name>"   # Storage account whose AD DS identity password you're rotating
+$KeyName = "kerb1" # Could be either the first or second Kerberos key; this script assumes we're refreshing the first
+$KerbKeys = New-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName -KeyName $KeyName
+$KerbKey = $KerbKeys.keys | Where-Object {$_.KeyName -eq $KeyName} | Select-Object -ExpandProperty Value
+$NewPassword = ConvertTo-SecureString -String $KerbKey -AsPlainText -Force
+
+Set-ADAccountPassword -Identity <domain-object-identity> -Reset -NewPassword $NewPassword
+```
+
storage Storage Files Migration Robocopy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-robocopy.md
An example: *RoboCopy /MIR* will mirror source to target - that means added, cha
## Migration goals
-The goal is to move the data from existing file share locations to Azure. In Azure, you'll store you data in native Azure file shares you can use without a need for a Windows Server. This migration needs to be done in a way that guarantees the integrity of the production data and availability during the migration. The latter requires keeping downtime to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.
+The goal is to move the data from existing file share locations to Azure. In Azure, you'll store your data in native Azure file shares you can use without a need for a Windows Server. This migration needs to be done in a way that guarantees the integrity of the production data and availability during the migration. The latter requires keeping downtime to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.
## Migration overview
The migration process consists of several phases. You'll need to deploy Azure st
> [!TIP] > If you are returning to this article, use the navigation on the right side to jump to the migration phase where you left off.
-## Phase 1: Identify how many Azure file shares you need
+## Phase 1: Deploy Azure storage resources
+An Azure file share is stored in the cloud in an Azure storage account.
+For standard storage, another level of performance considerations applies here.
-## Phase 2: Deploy Azure storage resources
+If you have highly active shares (shares used by many users and/or applications), two Azure file shares might reach the performance limit of a storage account.
-In this phase, consult the mapping table from Phase 1 and use it to provision the correct number of Azure storage accounts and file shares within them.
+If you want to use the maximum IOPS and throughput a storage account offers, consider deploying storage accounts with one file share each.
+You can pool multiple Azure file shares into the same storage account if you have archival shares or you expect low day-to-day activity in them.
+This recommendation does not apply to premium storage. You determine individual performance characteristics for each premium Azure file share. Storage account limits do not apply for premium storage.
-## Phase 3: Preparing to use Azure file shares
+These considerations apply more to direct cloud access (through an Azure VM or other service) than to Azure File Sync. If you plan to use only Azure File Sync on these shares, grouping several into a single Azure storage account is fine.
+
+If you've made a list of your shares, you should map each share to the storage account it will be in.
+To complete this phase, you should create a mapping of storage accounts to file shares. Then deploy the Azure storage accounts and Azure file shares from that mapping.
+
+> [!CAUTION]
+> If you create an Azure file share that has a 100 TiB limit, that share can use only locally redundant storage or zone-redundant storage redundancy options. Consider your storage redundancy needs before using 100 TiB file shares.
+
+By default, storage accounts are created with Azure files shares limited at 5 TiB. Follow the steps in [Create an Azure file share](storage-how-to-create-file-share.md) to create a large file share.
+
+Another consideration when you're deploying a storage account is the redundancy of Azure Storage. See [Azure Storage redundancy options](../common/storage-redundancy.md).
+
+The names of your resources are also important. For example, if you group multiple shares for the HR department into an Azure storage account, you should name the storage account appropriately. Similarly, when you name your Azure file shares, you should use names similar to the ones used for their on-premises counterparts.
+
+## Phase 2: Preparing to use Azure file shares
With the information in this phase, you will be able to decide how your servers and users in Azure and outside of Azure will be enabled to utilize your Azure file shares. The most critical decisions are:

- **Networking:** Enable your networks to route SMB traffic.
- **Authentication:** Configure Azure storage accounts for Kerberos authentication. Azure AD Connect and domain joining your storage account will allow your apps and users to use their AD identity for authentication.
- **Authorization:** Share-level ACLs for each Azure file share will allow AD users and groups to access a given share, and within an Azure file share, native NTFS ACLs will take over. Authorization based on file and folder ACLs then works like it does for on-premises SMB shares. (A share-level permission sketch follows this list.)
-- **Business continuity:** Integration of Azure file shares into an existing environment often entails to preserve existing share addresses. If you are not already using [DFS-Namespaces](files-manage-namespaces.md), consider establishing that in your environment. You'd be able to keep share addresses your users and scripts use, unchanged. DFS-N provides a namespace routing service for SMB, by redirecting clients to Azure file shares.
+- **Business continuity:** Integration of Azure file shares into an existing environment often entails preserving existing share addresses. If you are not already using [DFS-Namespaces](files-manage-namespaces.md), consider establishing that in your environment. You'd be able to keep share addresses your users and scripts use, unchanged. DFS-N provides a namespace routing service for SMB, by redirecting clients to Azure file shares.
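As a sketch of the share-level authorization piece only, assuming the Az module: the role name is a built-in Azure role, while the scope and user are placeholders, not values from the article:

```powershell
# Sketch: grant an AD-synced user share-level access to one Azure file share.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"

New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Storage File Data SMB Share Contributor" `
    -Scope $scope
```

File- and folder-level NTFS ACLs continue to apply on top of this share-level role.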
:::row::: :::column:::
With the information in this phase, you will be able to decide how your servers
Before you can use RoboCopy, you need to make the Azure file share accessible over SMB. The easiest way is to mount the share as a local network drive to the Windows Server you are planning on using for RoboCopy.

> [!IMPORTANT]
-> Make sure you mount the Azure file share using the storage account access key. Don't use a domain identity. Before you can successfully mount an Azure file share to a local Windows Server, you need to have completed Phase 3: Preparing to use Azure file shares.
+> Make sure you mount the Azure file share using the storage account access key. Don't use a domain identity. Before you can successfully mount an Azure file share to a local Windows Server, you need to have completed Phase 2: Preparing to use Azure file shares.
Once you are ready, review the [Use an Azure file share with Windows how-to article](storage-how-to-use-files-windows.md). Then mount the Azure file share you want to start the RoboCopy for.
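A minimal sketch of that mount, using the storage account key as required above; the names and the drive letter are placeholders:

```powershell
# Sketch: persist the storage account credential, then mount the Azure file share as the RoboCopy target.
$storageAccount = "<storage-account-name>"
$shareName      = "<file-share-name>"
$accountKey     = "<storage-account-key>"

# Store the credential so the mount survives reboots of the migration server.
cmd.exe /C "cmdkey /add:`"$storageAccount.file.core.windows.net`" /user:`"localhost\$storageAccount`" /pass:`"$accountKey`""

# Mount the share as drive Z: for use as the RoboCopy target.
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\$storageAccount.file.core.windows.net\$shareName" -Persist
```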
-## Phase 4: RoboCopy
+## Phase 3: RoboCopy
The following RoboCopy command will copy only the differences (updated files and folders) from your source storage to your Azure file share.
The following RoboCopy command will copy only the differences (updated files and
> [!TIP] > [Check out the Troubleshooting section](#troubleshoot-and-optimize) if RoboCopy is impacting your production environment, reports lots of errors or is not progressing as fast as expected.
-## Phase 5: User cut-over
+## Phase 4: User cut-over
When you run the RoboCopy command for the first time, your users and applications are still accessing files on the source of your migration and potentially changing them. It's possible that RoboCopy has processed a directory and moved on to the next one, and then a user on the source location adds, changes, or deletes a file that won't be processed in the current RoboCopy run. This behavior is expected.
When you consider the downtime acceptable, then you need to remove user access t
Run one last RoboCopy round. It will pick up any changes that might have been missed. How long this final step takes depends on the speed of the RoboCopy scan. You can estimate the time (which is equal to your downtime) by measuring how long the previous run took.
-In a previous section, you've configured your users to [access the share with their identity](#phase-3-preparing-to-use-azure-file-shares) and should have established a strategy for your users to [use established paths to your new Azure file shares (DFS-N)](files-manage-namespaces.md).
+In a previous section, you've configured your users to [access the share with their identity](#phase-2-preparing-to-use-azure-file-shares) and should have established a strategy for your users to [use established paths to your new Azure file shares (DFS-N)](files-manage-namespaces.md).
You can try to run a few of these copies between different source and target shares in parallel. When doing so, keep your network throughput and core to thread count ratio in mind to not overtax the system.
storage Storage Files Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-overview.md
Configuring public and private endpoints for Azure Files is done on the top-leve
| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |

## Secure transfer
-By default, Azure storage accounts require secure transfer, regardless of whether data is accessed over the public or private endpoint. For Azure Files, the **require secure transfer** setting is enforced for all protocol access to the data stored on Azure file shares, including SMB, NFS, and FileREST. The **require secure transfer** setting may be disabled to allow unencrypted traffic. You may also see this setting mislabeled as "require secure transfer for REST API operations".
+By default, Azure storage accounts require secure transfer, regardless of whether data is accessed over the public or private endpoint. For Azure Files, the **require secure transfer** setting is enforced for all protocol access to the data stored on Azure file shares, including SMB, NFS, and FileREST. You can disable the **require secure transfer** setting to allow unencrypted traffic. In the Azure portal, you may also see this setting labeled as **require secure transfer for REST API operations**.
The SMB, NFS, and FileREST protocols have slightly different behavior with respect to the **require secure transfer** setting:

- When **require secure transfer** is enabled on a storage account, all SMB file shares in that storage account will require the SMB 3.x protocol with AES-128-CCM, AES-128-GCM, or AES-256-GCM encryption algorithms, depending on the available/required encryption negotiation between the SMB client and Azure Files. You can toggle which SMB encryption algorithms are allowed via the [SMB security settings](files-smb-protocol.md#smb-security-settings). Disabling the **require secure transfer** setting enables SMB 2.1 and SMB 3.x mounts without encryption.
-- NFS file shares do not support an encryption mechanism, so in order to use the NFS protocol to access an Azure file share, you must disable **require secure transfer** for the storage account.
+- NFS file shares don't support an encryption mechanism, so in order to use the NFS protocol to access an Azure file share, you must disable **require secure transfer** for the storage account. (A PowerShell sketch for disabling the setting follows this list.)
- When secure transfer is required, the FileREST protocol may only be used with HTTPS. FileREST is only supported on SMB file shares today.
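A one-line sketch of disabling the setting with Az PowerShell; resource names are placeholders, and you should only do this when you actually need NFS or unencrypted SMB access:

```powershell
# Sketch: turn off "require secure transfer" so NFS mounts are possible on this account.
Set-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account-name>" -EnableHttpsTrafficOnly $false
```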
synapse-analytics How To Connect To Workspace With Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md
Private endpoints are created in a subnet. The subscription, resource group, and
Select **Connect to an Azure resource in my directory** in the **Resource** tab. Select the **Subscription** that contains your Azure Synapse workspace. The **Resource type** for creating private endpoints to an Azure Synapse workspace is *Microsoft.Synapse/workspaces*. Select your Azure Synapse workspace as the **Resource**. Every Azure Synapse workspace has three **Target sub-resources** that you can create a private endpoint to: Sql, SqlOnDemand, and Dev.

-- Sql is for SQL query execution in SQL pool.
-- SqlOnDemand is for SQL built-in query execution.
+- Sql is for SQL query execution in dedicated SQL pools.
+- SqlOnDemand is for SQL query execution in the built-in serverless SQL pool.
- Dev is for accessing everything else inside Azure Synapse Analytics Studio workspaces. Select **Next: Configuration>** to advance to the next part of the setup.
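As a rough sketch, creating a private endpoint to one of these target sub-resources with Azure PowerShell could look like the following. The resource group, workspace, virtual network, and subnet names are placeholders, and `-GroupId` takes the target sub-resource (Sql, SqlOnDemand, or Dev).

```azurepowershell-interactive
## Get the subnet that the private endpoint will be created in. ##
$vnet = Get-AzVirtualNetwork -ResourceGroupName myResourceGroup -Name myVNet
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq 'default' }

## Define a connection to the Synapse workspace's Sql sub-resource. ##
$workspace = Get-AzSynapseWorkspace -ResourceGroupName myResourceGroup -Name myWorkspace
$connection = New-AzPrivateLinkServiceConnection -Name mySynapseSqlConnection -PrivateLinkServiceId $workspace.Id -GroupId Sql

## Create the private endpoint in that subnet. ##
New-AzPrivateEndpoint -ResourceGroupName myResourceGroup -Name mySynapseSqlEndpoint -Location eastus -Subnet $subnet -PrivateLinkServiceConnection $connection
```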
synapse-analytics Troubleshoot Sql Database Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/troubleshoot/troubleshoot-sql-database-failover.md
Previously updated : 11/09/2022 Last updated : 11/17/2022 # Troubleshoot: Azure Synapse Link for Azure SQL Database after failover of an Azure SQL Database
You must stop Synapse Link manually and configure Synapse Link according to the
:::image type="content" source="media/troubleshoot-sql-database-failover/synapse-studio-linked-services.png" alt-text="A screenshot of Synapse Studio. The Manage hub is open. In the list of Linked services, the AzureSqlDatabase1 linked service is highlighted." lightbox="media/troubleshoot-sql-database-failover/synapse-studio-linked-services.png"::: 1. You must reset the linked service connection string based on the new primary server after failover so that Synapse Link can connect to the new primary logical server's database. There are two options:
- * Use [the auto-failover group read/write listener endpoint](/sql/azure-sql/managed-instance/auto-failover-group-configure-sql-mi#locate-listener-endpoint) and use the Synapse workspace's managed identity (SMI) to connect your Synapse workspace to the source database. Because of Read/Write listener endpoint that automatically maps to the new primary server after failover, so you only need to set it once. If failover occurs later, it will automatically use the fully-qualified domain name (FQDN) of the listener endpoint. Note that you still need to take action on every failover to update the Resource ID and Managed Identity ID for the new primary (see next step).
+ * Use [the auto-failover group read/write listener endpoint](/sql/azure-sql/database/auto-failover-group-configure-sql-db#locate-listener-endpoint) and use the Synapse workspace's managed identity (SMI) to connect your Synapse workspace to the source database. Because the read/write listener endpoint automatically maps to the new primary server after failover, you only need to set it once. If failover occurs later, the connection automatically uses the fully qualified domain name (FQDN) of the listener endpoint. Note that you still need to take action on every failover to update the Resource ID and Managed Identity ID for the new primary (see the next step).
* After each failover, edit the linked service **Connection string** with the **Server name**, **Database name**, and authentication information for the new primary server. You can use a managed identity or SQL Authentication. The authentication account used to connect to the database, whether it be a managed identity or SQL Authenticated login to the Azure SQL Database, must have at least the CONTROL permission inside the database to perform the actions necessary for the linked service. The db_owner permission is similar to CONTROL.
virtual-desktop Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/licensing.md
Previously updated : 04/27/2022 Last updated : 11/17/2022
There are two price tiers for Azure Virtual Desktop per-user access pricing. Cha
For more information about prices, see [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
+Check if your Azure Virtual Desktop solution is compatible with per-user access pricing by reviewing [our licensing documentation](https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/EAEAS#Documents).
+ Each price tier has flat per-user access charges. For example, a user incurs the same charge to your subscription no matter when or how many hours they used the service during that billing cycle. > [!IMPORTANT]
Here's a summary of the two types of licenses for Azure Virtual Desktop you can
The Azure Virtual Desktop per-user access license isn't a full replacement for a Windows or Microsoft 365 license. Per-user licenses only grant access rights to Azure Virtual Desktop and don't include Microsoft Office, Microsoft 365 Defender, or Universal Print. This means that if you choose a per-user license, you'll need to separately license other products and services to grant your users access to them in your Azure Virtual Desktop environment.
+There are a few ways to enable your external users to access Office:
+
+- Users can sign in to Office with their own Office account.
+- You can re-sell Office through your Cloud Service Provider (CSP).
+- You can distribute Office by using a Service Provider Licensing Agreement (SPLA).
+ ## Next steps Now that you're familiar with your licensing pricing options, you can start planning your Azure Virtual Desktop environment. Here are some articles that might help you:
virtual-desktop Per User Access Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/per-user-access-pricing.md
Previously updated : 10/31/2022 Last updated : 11/17/2022
To enroll your Azure subscription into per-user access pricing:
7. After enrollment is done, check the value in the **Per-user access pricing** column of the subscriptions list to make sure it's changed from "Enrolling" to "Enrolled."
-## Licensing other products and services for use with Azure Virtual Desktop
-
-There are a few ways to enable your external users to access Office:
--- Users can sign in to Office with their own Office account.-- You can re-sell Office through your Cloud Service Provider (CSP). -- You can distribute Office by using a Service Provider Licensing Agreement (SPLA).- ## Next steps To learn more about per-user access pricing, see [Understanding licensing and per-user access pricing](licensing.md). If you want to learn how to estimate per-user app streaming costs for your deployment, see [Estimate per-user app streaming costs for Azure Virtual Desktop](streaming-costs.md). For estimating total deployment costs, see [Understanding total Azure Virtual Desktop deployment costs](total-costs.md).
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
You can create and manage maintenance configurations using any of the following
- [Azure PowerShell](maintenance-configurations-powershell.md) - [Azure portal](maintenance-configurations-portal.md)
+>[!IMPORTANT]
+> The Pre/Post **tasks** property is currently exposed in the API, but it isn't supported at this time.
+ For an Azure Functions sample, see [Scheduling Maintenance Updates with Maintenance Configurations and Azure Functions](https://github.com/Azure/azure-docs-powershell-samples/tree/master/maintenance-auto-scheduler). ## Next steps
virtual-machines Virtual Machines Create Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points.md
VM restore points are incremental. The first restore point stores a full copy of
## Restore points for VMs inside Virtual Machine Scale Set and Availability Set (AvSet)
-Currently, restore points can only be created in one VM at a time, that is, you cannot create a single restore point across multiple VMs. Due to this limitation, we currently support creating restore points for individual VMs with a Virtual Machine Scale Set and Availability Set. If you want to back up your entire Virtual Machine Scale Set instance or your Availability Set instance, you must individually create restore points for all the VMs that are part of the instance.
+Currently, restore points can only be created for one VM at a time; that is, you can't create a single restore point across multiple VMs. Due to this limitation, we currently support creating restore points for individual VMs within a Virtual Machine Scale Set in Flexible orchestration mode, or within an Availability Set. If you want to back up the VM instances in a Virtual Machine Scale Set or an Availability Set, you must individually create restore points for all the VMs that are part of that scale set or availability set.
> [!Note]
-> Virtual Machine Scale Set with Unified orchestration is not supported by restore points. You cannot create restore points of VMs inside a Virtual Machine Scale Set with Unified orchestration.
+> Virtual Machine Scale Set with Uniform orchestration is not supported by restore points. You cannot create restore points of VMs inside a Virtual Machine Scale Set with Uniform orchestration.
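For reference, creating a restore point for a single VM with Azure PowerShell might look like the following sketch. The resource group, collection, restore point, and VM names are placeholders.

```azurepowershell-interactive
## Create a restore point collection that is tied to the VM. ##
$vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myVM
New-AzRestorePointCollection -ResourceGroupName myResourceGroup -Name myRestorePointCollection -Location $vm.Location -VmId $vm.Id

## Create a restore point inside that collection. ##
New-AzRestorePoint -ResourceGroupName myResourceGroup -RestorePointCollectionName myRestorePointCollection -Name myRestorePoint
```

To protect every VM in a scale set or availability set, repeat these steps for each VM.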
## Limitations
Currently, restore points can only be created in one VM at a time, that is, you
- Restore points APIs require an API of version 2021-03-01 or later. - A maximum of 500 VM restore points can be retained at any time for a VM, irrespective of the number of restore point collections. - Concurrent creation of restore points for a VM is not supported.
+- Restore points for Virtual Machine Scale Sets in Uniform orchestration mode are not supported.
- Movement of Virtual Machines (VM) between Resource Groups (RG), or Subscriptions is not supported when the VM has restore points. Moving the VM between Resource Groups or Subscriptions will not update the source VM reference in the restore point and will cause a mismatch of ARM IDs between the actual VM and the restore points. > [!Note] > Public preview of cross-region creation and copying of VM restore points is available, with the following limitations:
virtual-machines Enable Infiniband https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/hpc/enable-infiniband.md
yum install -y kernel-devel-${KERNEL}
For Windows, download and install the [Mellanox OFED for Windows drivers](https://www.mellanox.com/products/adapter-software/ethernet/windows/winof-2). ## Enable IP over InfiniBand (IB)
-If you are plan to run MPI jobs, you typically don't need IPoIB. The MPI library will use the verbs interface for IB communication (unless you explicitly use the TCP/IP channel of MPI library). But if you have an app that uses TCP/IP for communication and you want to run over IB, you can use IPoIB over the IB interface. Use the following commands (for RHEL/CentOS) to enable IP over InfiniBand.
+If you plan to run MPI jobs, you typically don't need IPoIB. The MPI library will use the verbs interface for IB communication (unless you explicitly use the TCP/IP channel of the MPI library). But if you have an app that uses TCP/IP for communication and you want to run over IB, you can use IPoIB over the IB interface. Use the following commands (for RHEL/CentOS) to enable IP over InfiniBand.
```bash sudo sed -i -e 's/# OS.EnableRDMA=n/OS.EnableRDMA=y/g' /etc/waagent.conf
virtual-machines Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/cal-s4h.md
tags: azure-resource-manager
keywords: '' ms.assetid: 44bbd2b6-a376-4b5c-b824-e76917117fa9-+ vm-linux
virtual-machines Ha Setup With Fencing Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/ha-setup-with-fencing-device.md
editor: - vm-linux
virtual-machines Hana Additional Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-additional-network-requirements.md
editor: - vm-linux
virtual-machines Hana Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-architecture.md
editor: '' - vm-linux
virtual-machines Hana Available Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-available-skus.md
editor: '' keywords: 'HLI, HANA, SKUs, S896, S224, S448, S672, Optane, SAP' - vm-linux
virtual-machines Hana Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-backup-restore.md
editor: - vm-linux
virtual-machines Hana Certification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-certification.md
editor: '' - vm-linux
virtual-machines Hana Concept Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-concept-preparation.md
editor: - vm-linux
virtual-machines Hana Connect Azure Vm Large Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-connect-azure-vm-large-instances.md
editor: ''
tags: azure-resource-manager keywords: '' - vm-linux
virtual-machines Hana Connect Vnet Express Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-connect-vnet-express-route.md
editor: - vm-linux
virtual-machines Hana Data Tiering Extension Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-data-tiering-extension-nodes.md
editor: '' - vm-linux
virtual-machines Hana Example Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-example-installation.md
editor: - vm-linux
virtual-machines Hana Failover Procedure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-failover-procedure.md
editor: - vm-linux
virtual-machines Hana Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-installation.md
editor: - vm-linux
virtual-machines Hana Large Instance Enable Kdump https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-large-instance-enable-kdump.md
editor: - vm-linux
virtual-machines Hana Large Instance Virtual Machine Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-large-instance-virtual-machine-migration.md
editor: - vm-linux
virtual-machines Hana Monitor Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-monitor-troubleshoot.md
documentationcenter:
- vm-linux
virtual-machines Hana Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-network-architecture.md
editor: '' - vm-linux
virtual-machines Hana Onboarding Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-onboarding-requirements.md
editor: '' - vm-linux
virtual-machines Hana Operations Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-operations-model.md
editor: '' - vm-linux
virtual-machines Hana Overview High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery.md
editor: - vm-linux
virtual-machines Hana Overview Infrastructure Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-overview-infrastructure-connectivity.md
editor: - vm-linux
virtual-machines Hana Setup Smt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-setup-smt.md
editor: - vm-linux
virtual-machines Hana Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-sizing.md
editor: '' - vm-linux
virtual-machines Hana Storage Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-storage-architecture.md
editor: '' - vm-linux
virtual-machines Hana Supported Scenario https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-supported-scenario.md
editor: - vm-linux
virtual-machines Large Instance Os Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/large-instance-os-backup.md
editor: - vm-linux
virtual-machines Os Compatibility Matrix Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/os-compatibility-matrix-hana-large-instance.md
editor: - vm-linux
virtual-machines Os Upgrade Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/os-upgrade-hana-large-instance.md
editor: - vm-linux
virtual-machines Troubleshooting Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/troubleshooting-monitoring.md
documentationcenter:
- vm-linux
virtual-network Manage Network Security Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-network-security-group.md
Previously updated : 11/09/2022 Last updated : 11/16/2022
Security rules in network security groups enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. To learn more about network security groups, see [Network security group overview](./network-security-groups-overview.md). Next, complete the [Filter network traffic](tutorial-filter-network-traffic.md) tutorial to gain some experience with network security groups.
-## Before you begin
+## Prerequisites
If you don't have an Azure account with an active subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Complete one of these tasks before starting the remainder of this article:
virtual-network Manage Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-virtual-network.md
Title: Create, change, or delete an Azure virtual network
description: Create and delete a virtual network and change settings, like DNS servers and IP address spaces, for an existing virtual network. Previously updated : 01/10/2019 Last updated : 11/16/2022 + # Create, change, or delete a virtual network - Learn how to create and delete a virtual network and change settings, like DNS servers and IP address spaces, for an existing virtual network. If you're new to virtual networks, you can learn more about them in the [Virtual network overview](virtual-networks-overview.md) or by completing a [tutorial](quick-create-portal.md). A virtual network contains subnets. To learn how to create, change, and delete subnets, see [Manage subnets](virtual-network-manage-subnet.md).
-## Before you begin
+## Prerequisites
+
+If you don't have an Azure account with an active subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Complete one of these tasks before starting the remainder of this article:
+
+- **Portal users**: Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
+
+- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.
+
+ If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Run `Connect-AzAccount` to sign in to Azure.
-Complete the following tasks before completing steps in any section of this article:
+- **Azure CLI users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **Bash** if it isn't already selected.
-- If you don't already have an Azure account, sign up for a [free trial account](https://azure.microsoft.com/free).-- If using the portal, open https://portal.azure.com, and sign in with your Azure account.-- If using PowerShell commands to complete tasks in this article, either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. This tutorial requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.-- If using Azure CLI commands to complete tasks in this article, run the commands via either [Azure Cloud Shell](https://shell.azure.com/bash) or the Azure CLI running locally. This tutorial requires the Azure CLI version 2.0.31 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If you're running the Azure CLI locally, you also need to run `az login` to create a connection with Azure.-- The account you log into, or connect to Azure with, must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or to a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that is assigned the appropriate actions listed in [Permissions](#permissions).
+ If you're running Azure CLI locally, use Azure CLI version 2.0.31 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure.
+
+The account you log into, or connect to Azure with, must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md#network-contributor) role or to a [custom role](../role-based-access-control/custom-roles.md) that is assigned the appropriate actions listed in [Permissions](#permissions).
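For example, one way to grant the Network Contributor role at resource group scope with Azure PowerShell is shown in the following sketch; the sign-in name, subscription ID, and resource group are placeholders.

```azurepowershell-interactive
## Assign the Network Contributor role to a user at resource group scope. ##
New-AzRoleAssignment -SignInName user@contoso.com -RoleDefinitionName "Network Contributor" -Scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup"
```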
## Create a virtual network
-1. Select **+ Create a resource** > **Networking** > **Virtual network**.
-2. In **Create virtual network**, enter or select values for the following settings on the *Basics* tab:
+### Create a virtual network using the Azure portal
+
+1. In the search box at the top of the portal, enter *Virtual networks*. Select **Virtual networks** in the search results.
+
+1. Select **+ Create**.
- | **Setting** | **Description** |
- | | |
- | **Project details** | |
- | **Subscription** | Select a [subscription](../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#subscription). You cannot use the same virtual network in more than one Azure subscription. However, you can connect a virtual network in one subscription to virtual networks in other subscriptions with [virtual network peering](virtual-network-peering-overview.md). Any Azure resource that you connect to the virtual network must be in the same subscription as the virtual network. |
- |**Resource group**| Select an existing [resource group](../azure-resource-manager/management/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-groups) or create a new one. An Azure resource that you connect to the virtual network can be in the same resource group as the virtual network or in a different resource group. |
+1. In the **Basics** tab of **Create virtual network**, enter or select values for the following settings:
+
+ | Setting | Value | Details |
+ | | | |
+ | **Project details** | | |
+ | **Subscription** | Select your subscription. | You can't use the same virtual network in more than one Azure subscription. However, you can connect a virtual network in one subscription to virtual networks in other subscriptions using [virtual network peering](virtual-network-peering-overview.md). <br> Any Azure resource that you connect to the virtual network must be in the same subscription as the virtual network. |
+ |**Resource group**| Select an existing [resource group](../azure-resource-manager/management/overview.md#resource-groups) or create a new one by selecting **Create new**. | An Azure resource that you connect to the virtual network can be in the same resource group as the virtual network or in a different resource group. |
| **Instance details** | | |
- | **Name** | The name must be unique in the [resource group](../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-group) that you select to create the virtual network in. You cannot change the name after the virtual network is created. You can create multiple virtual networks over time. For naming suggestions, see [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#naming-and-tagging-resources). Following a naming convention can help make it easier to manage multiple virtual networks. |
- | **Region** | Select an Azure [region](https://azure.microsoft.com/regions/). A virtual network can be in only one Azure region. However, you can connect a virtual network in one region to a virtual network in another region by using a VPN gateway. Any Azure resource that you connect to the virtual network must be in the same region as the virtual network. |
+ | **Name** | Enter a name for the virtual network you're creating. | The name must be unique in the resource group that you select to create the virtual network in. <br> You can't change the name after the virtual network is created. <br> For naming suggestions, see [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#naming-and-tagging-resources). Following a naming convention can help make it easier to manage multiple virtual networks. |
+ | **Region** | Select an Azure [region](https://azure.microsoft.com/regions/). | A virtual network can be in only one Azure region. However, you can connect a virtual network in one region to a virtual network in another region using [virtual network peering](virtual-network-peering-overview.md). <br> Any Azure resource that you connect to the virtual network must be in the same region as the virtual network. |
1. Select **IP Addresses** tab or **Next: IP Addresses >**, and enter the following IP address information:
- - **Address space**: The address space for a virtual network is composed of one or more non-overlapping address ranges that are specified in CIDR notation. The address range you define can be public or private (RFC 1918). Whether you define the address range as public or private, the address range is reachable only from within the virtual network, from interconnected virtual networks, and from any on-premises networks that you've connected to the virtual network. You can't add the following address ranges:
+ - **IPv4 Address space**: The address space for a virtual network is composed of one or more non-overlapping address ranges that are specified in CIDR notation. The address range you define can be public or private (RFC 1918). Whether you define the address range as public or private, the address range is reachable only from within the virtual network, from interconnected virtual networks, and from any on-premises networks that you've connected to the virtual network.
+
+ You can't add the following address ranges:
- 224.0.0.0/4 (Multicast) - 255.255.255.255/32 (Broadcast) - 127.0.0.0/8 (Loopback) - 169.254.0.0/16 (Link-local) - 168.63.129.16/32 (Internal DNS, DHCP, and Azure Load Balancer [health probe](../load-balancer/load-balancer-custom-probe-overview.md#probe-source-ip-address))-
- Although you can define only one address range when you create the virtual network in the portal, you can add more address ranges to the address space after the virtual network is created. To learn how to add an address range to an existing virtual network, see [Add or remove an address range](#add-or-remove-an-address-range).
+
+ The portal requires that you define at least one IPv4 address range when you create a virtual network. You can change the address space after the virtual network is created, under specific conditions.
> [!WARNING] > If a virtual network has address ranges that overlap with another virtual network or on-premises network, the two networks can't be connected. Before you define an address range, consider whether you might want to connect the virtual network to other virtual networks or on-premises networks in the future. Microsoft recommends configuring virtual network address ranges with private address space or public address space owned by your organization.
- >
- - **Add IPv6 address space**
+ - **Add IPv6 address space**: Adding IPv6 address space to an Azure virtual network enables you to host applications in Azure with both IPv4 and IPv6 connectivity within the virtual network and to and from the Internet.
+
+ - **Subnet name**: The subnet name must be unique within the virtual network. You can't change the subnet name after the subnet is created. The portal requires that you define one subnet when you create a virtual network, even though a virtual network isn't required to have any subnets. In the portal, you can define one or more subnets when you create a virtual network. You can add more subnets to the virtual network later, after the virtual network is created. To add a subnet to a virtual network, see [Manage subnets](virtual-network-manage-subnet.md).
+
+ >[!TIP]
+ >Sometimes, administrators create different subnets to filter or control traffic routing between the subnets. Before you define subnets, consider how you might want to filter and route traffic between your subnets. To learn more about filtering traffic between subnets, see [Network security groups](./network-security-groups-overview.md). Azure automatically routes traffic between subnets, but you can override Azure's default routes. To learn more about Azure's default subnet traffic routing, see [Routing overview](virtual-networks-udr-overview.md).
+
+ - **Subnet address range**: The range must be within the address space you entered for the virtual network. The smallest range you can specify is /29, which provides eight IP addresses for the subnet. Azure reserves the first and last address in each subnet for protocol conformance. Three more addresses are reserved for Azure service usage. As a result, a virtual network with a subnet address range of /29 has only three usable IP addresses. If you plan to connect a virtual network to a VPN gateway, you must create a gateway subnet. Learn more about [specific address range considerations for gateway subnets](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub). You can change the address range after the subnet is created, under specific conditions. To learn how to change a subnet address range, see [Manage subnets](virtual-network-manage-subnet.md).
- - **Subnet name**: The subnet name must be unique within the virtual network. You can't change the subnet name after the subnet is created. The portal requires that you define one subnet when you create a virtual network, even though a virtual network isn't required to have any subnets. In the portal, you can define one or more subnets when you create a virtual network. You can add more subnets to the virtual network later, after the virtual network is created. To add a subnet to a virtual network, see [Manage subnets](virtual-network-manage-subnet.md). You can create a virtual network that has multiple subnets by using Azure CLI or PowerShell.
+### Create a virtual network using PowerShell
- >[!TIP]
- >Sometimes, administrators create different subnets to filter or control traffic routing between the subnets. Before you define subnets, consider how you might want to filter and route traffic between your subnets. To learn more about filtering traffic between subnets, see [Network security groups](./network-security-groups-overview.md). Azure automatically routes traffic between subnets, but you can override Azure default routes. To learn more about Azures default subnet traffic routing, see [Routing overview](virtual-networks-udr-overview.md).
- >
+Use [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) to create a virtual network.
- - **Subnet address range**: The range must be within the address space you entered for the virtual network. The smallest range you can specify is /29, which provides eight IP addresses for the subnet. Azure reserves the first and last address in each subnet for protocol conformance. Three more addresses are reserved for Azure service usage. As a result, a virtual network with a subnet address range of /29 has only three usable IP addresses. If you plan to connect a virtual network to a VPN gateway, you must create a gateway subnet. Learn more about [specific address range considerations for gateway subnets](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gwsub). You can change the address range after the subnet is created, under specific conditions. To learn how to change a subnet address range, see [Manage subnets](virtual-network-manage-subnet.md).
+```azurepowershell-interactive
+## Create myVNet virtual network. ##
+New-AzVirtualNetwork -ResourceGroupName myResourceGroup -Name myVNet -Location eastus -AddressPrefix 10.0.0.0/16
+```
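The portal requires at least one subnet, but PowerShell doesn't. If you want to define a subnet at creation time, you can use [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig), as in the following sketch; the subnet name and address prefixes are examples.

```azurepowershell-interactive
## Define a subnet configuration, then create the virtual network with that subnet. ##
$subnet = New-AzVirtualNetworkSubnetConfig -Name default -AddressPrefix 10.0.0.0/24
New-AzVirtualNetwork -ResourceGroupName myResourceGroup -Name myVNet -Location eastus -AddressPrefix 10.0.0.0/16 -Subnet $subnet
```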
-**Commands**
+### Create a virtual network using the Azure CLI
-- Azure CLI: [az network vnet create](/cli/azure/network/vnet)-- PowerShell: [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork)
+Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to create a virtual network.
+
+```azurecli-interactive
+## Create myVNet virtual network with the default address space: 10.0.0.0/16. ##
+az network vnet create --resource-group myResourceGroup --name myVNet
+```
## View virtual networks and settings
-1. In the search box at the top of the portal, enter *virtual networks* in the search box. When **Virtual networks** appear in the search results, select it.
-2. From the list of virtual networks, select the virtual network that you want to view settings for.
-3. The following settings are listed for the virtual network you selected:
+### View virtual networks and settings using the Azure portal
+
+1. In the search box at the top of the portal, enter *Virtual networks*. Select **Virtual networks** in the search results.
+
+1. From the list of virtual networks, select the virtual network that you want to view settings for.
+
+1. The following settings are listed for the virtual network you selected:
+ - **Overview**: Provides information about the virtual network, including address space and DNS servers. The following screenshot shows the overview settings for a virtual network named **MyVNet**: :::image type="content" source="media/manage-virtual-network/vnet-overview-inline.png" alt-text="Screenshot of the Virtual Network overview page. It includes essential information including resource group, subscription info, and DNS information." lightbox="media/manage-virtual-network/vnet-overview-expanded.png":::
- You can move a virtual network to a different subscription, region, or resource group by selecting **Move** next to **Resource group**, **Location**, or **Subscription**. To learn how to move a virtual network, see [Move resources to a different resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md?toc=%2fazure%2fvirtual-network%2ftoc.json). The article lists prerequisites, and how to move resources by using the Azure portal, PowerShell, and Azure CLI. All resources that are connected to the virtual network must move with the virtual network.
+ You can move a virtual network to a different subscription, region, or resource group by selecting **Move** next to **Resource group**, **Location**, or **Subscription**. To learn how to move a virtual network, see [Move resources to a different resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md). The article lists prerequisites, and how to move resources by using the Azure portal, PowerShell, and Azure CLI. All resources that are connected to the virtual network must move with the virtual network.
+ - **Address space**: The address spaces that are assigned to the virtual network are listed. To learn how to add and remove an address range to the address space, complete the steps in [Add or remove an address range](#add-or-remove-an-address-range).
- - **Connected devices**: Any resources that are connected to the virtual network are listed. In the preceding screenshot, three network interfaces and one load balancer are connected to the virtual network. Any new resources that you create and connect to the virtual network are listed. If you delete a resource that was connected to the virtual network, it no longer appears in the list.
+
+ - **Connected devices**: Any resources that are connected to the virtual network are listed. Any new resources that you create and connect to the virtual network are added to the list. If you delete a resource that was connected to the virtual network, it no longer appears in the list.
+ - **Subnets**: A list of subnets that exist within the virtual network is shown. To learn how to add and remove a subnet, see [Manage subnets](virtual-network-manage-subnet.md).
- - **DNS servers**: You can specify whether the Azure internal DNS server or a custom DNS server provides name resolution for devices that are connected to the virtual network. When you create a virtual network by using the Azure portal, Azure's DNS servers are used for name resolution within a virtual network, by default. To modify the DNS servers, complete the steps in [Change DNS servers](#change-dns-servers) in this article.
- - **Peerings**: If there are existing peerings in the subscription, they're listed here. You can view settings for existing peerings, or create, change, or delete peerings. To learn more about peerings, see [Virtual network peering](virtual-network-peering-overview.md).
- - **Properties**: Displays settings about the virtual network, including the virtual network's resource ID and the Azure subscription it is in.
- - **Diagram**: The diagram provides a visual representation of all devices that are connected to the virtual network. The diagram has some key information about the devices. To manage a device in this view, in the diagram, select the device.
+
+ - **DNS servers**: You can specify whether the Azure internal DNS server or a custom DNS server provides name resolution for devices that are connected to the virtual network. When you create a virtual network by using the Azure portal, Azure's DNS servers are used for name resolution within a virtual network, by default. To learn how to modify the DNS servers, see the steps in [Change DNS servers](#change-dns-servers) in this article.
+
+ - **Peerings**: If there are existing peerings in the subscription, they're listed here. You can view settings for existing peerings, or create, change, or delete peerings. To learn more about peerings, see [Virtual network peering](virtual-network-peering-overview.md) and [Manage virtual network peerings](virtual-network-manage-peering.md).
+
+ - **Properties**: Displays settings about the virtual network, including the virtual network's resource ID and Azure subscription.
+
+ - **Diagram**: Provides a visual representation of all devices that are connected to the virtual network. The diagram has some key information about the devices. To manage a device in this view, in the diagram, select the device.
+ - **Common Azure settings**: To learn more about common Azure settings, see the following information: - [Activity log](../azure-monitor/essentials/platform-logs-overview.md) - [Access control (IAM)](../role-based-access-control/overview.md)
- - [Tags](../azure-resource-manager/management/tag-resources.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
- - [Locks](../azure-resource-manager/management/lock-resources.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
+ - [Tags](../azure-resource-manager/management/tag-resources.md)
+ - [Locks](../azure-resource-manager/management/lock-resources.md)
- [Automation script](../azure-resource-manager/management/manage-resource-groups-portal.md#export-resource-groups-to-templates)
-**Commands**
+### View virtual networks and settings using PowerShell
-- Azure CLI: [az network vnet show](/cli/azure/network/vnet)-- PowerShell: [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork)
+Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to list all virtual networks in a resource group.
+
+```azurepowershell-interactive
+Get-AzVirtualNetwork -ResourceGroupName myResourceGroup | format-table Name, ResourceGroupName, Location
+```
+
+Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to view the settings of a virtual network.
+
+```azurepowershell-interactive
+Get-AzVirtualNetwork -ResourceGroupName myResourceGroup -Name myVNet
+```
+
+### View virtual networks and settings using the Azure CLI
+
+Use [az network vnet list](/cli/azure/network/vnet#az-network-vnet-list) to list all virtual networks in a resource group.
+
+```azurecli-interactive
+az network vnet list --resource-group myResourceGroup
+```
+
+Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to view the settings of a virtual network.
+
+```azurecli-interactive
+az network vnet show --resource-group myResourceGroup --name myVNet
+```
## Add or remove an address range
You can't add the following address ranges:
- 169.254.0.0/16 (Link-local) - 168.63.129.16/32 (Internal DNS, DHCP, and Azure Load Balancer [health probe](../load-balancer/load-balancer-custom-probe-overview.md#probe-source-ip-address))
-To add or remove an address range:
+> [!NOTE]
+> If the virtual network is peered with another virtual network or connected to an on-premises network, the new address range can't overlap with the address space of the peered virtual networks or the on-premises network. To learn more, see [Update the address space for a peered virtual network](update-virtual-network-peering-address-space.md).
+
+### Add or remove an address range using the Azure portal
+
+1. In the search box at the top of the portal, enter *Virtual networks*. Select **Virtual networks** in the search results.
-1. In the search box at the top of the portal, enter *virtual networks* in the search box. When **Virtual networks** appear in the search results, select it.
2. From the list of virtual networks, select the virtual network for which you want to add or remove an address range.
-3. Select **Address space**, under **SETTINGS**.
+
+3. Select **Address space**, under **Settings**.
+ 4. Complete one of the following options:+ - **Add an address range**: Enter the new address range. The address range can't overlap with an existing address range that is defined for the virtual network.
- - **Remove an address range**: On the right of the address range you want to remove, select **...**, then select **Remove**. If a subnet exists in the address range, you can't remove the address range. To remove an address range, you must first delete any subnets (and any resources in the subnets) that exist in the address range.
+
+ - **Modify an address range**: Modify an existing address range. You can change the address range prefix to decrease or increase the address range. You can decrease the address range as long as it still includes the ranges of any associated subnets. Additionally, you can extend the address range as long as it doesn't overlap with an existing address range that is defined for the virtual network.
+
+ - **Remove an address range**: On the right of the address range you want to remove, select **Delete**. If a subnet exists in the address range, you can't remove the address range. To remove an address range, you must first delete any subnets (and any resources in the subnets) that exist in the address range.
+ 5. Select **Save**.
-**Commands**
+### Add or remove an address range using PowerShell
+
+Use [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to update the address space of a virtual network.
+
+```azurepowershell-interactive
+## Place the virtual network configuration into a variable. ##
+$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName myResourceGroup -Name myVNet
+## Remove the old address range. ##
+$virtualNetwork.AddressSpace.AddressPrefixes.Remove("10.0.0.0/16")
+## Add the new address range. ##
+$virtualNetwork.AddressSpace.AddressPrefixes.Add("10.1.0.0/16")
+## Update the virtual network. ##
+Set-AzVirtualNetwork -VirtualNetwork $virtualNetwork
+```
-- Azure CLI: [az network vnet update](/cli/azure/network/vnet)-- PowerShell: [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork)
+### Add or remove an address range using the Azure CLI
+
+Use [az network vnet update](/cli/azure/network/vnet#az-network-vnet-update) to update the address space of a virtual network.
+
+```azurecli-interactive
+## Update the address space of the myVNet virtual network to 10.1.0.0/16. This range overrides any address ranges previously set on the virtual network. ##
+az network vnet update --resource-group myResourceGroup --name myVNet --address-prefixes 10.1.0.0/16
+```
## Change DNS servers All VMs that are connected to the virtual network register with the DNS servers that you specify for the virtual network. They also use the specified DNS server for name resolution. Each network interface (NIC) in a VM can have its own DNS server settings. If a NIC has its own DNS server settings, they override the DNS server settings for the virtual network. To learn more about NIC DNS settings, see [Network interface tasks and settings](virtual-network-network-interface.md#change-dns-servers). To learn more about name resolution for VMs and role instances in Azure Cloud Services, see [Name resolution for VMs and role instances](virtual-networks-name-resolution-for-vms-and-role-instances.md). To add, change, or remove a DNS server:
-1. In the search box at the top of the portal, enter *virtual networks* in the search box. When **Virtual networks** appear in the search results, select it.
+### Change DNS servers of a virtual network using the Azure portal
+
+1. In the search box at the top of the portal, enter *Virtual networks*. Select **Virtual networks** in the search results.
+ 2. From the list of virtual networks, select the virtual network for which you want to change DNS servers.
-3. Select **DNS servers**, under **SETTINGS**.
+
+3. Select **DNS servers**, under **Settings**.
+ 4. Select one of the following options:+ - **Default (Azure-provided)**: All resource names and private IP addresses are automatically registered to the Azure DNS servers. You can resolve names between any resources that are connected to the same virtual network. You can't use this option to resolve names across virtual networks. To resolve names across virtual networks, you must use a custom DNS server.
- - **Custom**: You can add one or more servers, up to the Azure limit for a virtual network. To learn more about DNS server limits, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#virtual-networking-limits-classic). You have the following options:
- - **Add an address**: Adds the server to your virtual network DNS servers list. This option also registers the DNS server with Azure. If you've already registered a DNS server with Azure, you can select that DNS server in the list.
- - **Remove an address**: Next to the server that you want to remove, select **...**, then **Remove**. Deleting the server removes the server only from this virtual network list. The DNS server remains registered in Azure for your other virtual networks to use.
- - **Reorder DNS server addresses**: It's important to verify that you list your DNS servers in the correct order for your environment. DNS server lists are used in the order that they're specified. They don't work as a round-robin setup. If the first DNS server in the list can be reached, the client uses that DNS server, regardless of whether the DNS server is functioning properly. Remove all the DNS servers that are listed, and then add them back in the order that you want.
- - **Change an address**: Highlight the DNS server in the list, and then enter the new address.
+
+ - **Custom**: You can add one or more servers, up to the Azure limit for a virtual network. To learn more about DNS server limits, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md). You have the following options:
+
+ - **Add an address**: Adds the server to your virtual network DNS servers list. This option also registers the DNS server with Azure. If you've already registered a DNS server with Azure, you can select that DNS server in the list.
+
+ - **Remove an address**: Next to the server that you want to remove, select **Delete**. Deleting the server removes the server only from this virtual network list. The DNS server remains registered in Azure for your other virtual networks to use.
+
+ - **Reorder DNS server addresses**: It's important to verify that you list your DNS servers in the correct order for your environment. DNS servers are used in the order that they're specified in the list. They don't work as a round-robin setup. If the first DNS server in the list can be reached, the client uses that DNS server, regardless of whether the DNS server is functioning properly. Remove all the DNS servers that are listed, and then add them back in the order that you want.
+
+ - **Change an address**: Highlight the DNS server in the list, and then enter the new address.
+ 5. Select **Save**.+ 6. Restart the VMs that are connected to the virtual network, so they're assigned the new DNS server settings. VMs continue to use their current DNS settings until they're restarted.
-**Commands**
+### Change DNS servers of a virtual network using PowerShell
+
+Use [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to update the DNS servers of a virtual network.
+
+```azurepowershell-interactive
+## Place the virtual network configuration into a variable. ##
+$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName myResourceGroup -Name myVNet
+## Add the IP address of the DNS server. ##
+$virtualNetwork.DhcpOptions.DnsServers.Add("10.0.0.10")
+## Update the virtual network. ##
+Set-AzVirtualNetwork -VirtualNetwork $virtualNetwork
+```
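Because DNS servers are used in the order they appear in the list, you can also replace the whole list to control that order. The following sketch uses placeholder addresses.

```azurepowershell-interactive
## Place the virtual network configuration into a variable. ##
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName myResourceGroup -Name myVNet
## Clear the existing list, then add the DNS servers in the order they should be used. ##
$virtualNetwork.DhcpOptions.DnsServers.Clear()
$virtualNetwork.DhcpOptions.DnsServers.Add("10.0.0.10")
$virtualNetwork.DhcpOptions.DnsServers.Add("10.0.0.11")
## Update the virtual network. ##
Set-AzVirtualNetwork -VirtualNetwork $virtualNetwork
```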
+
+### Change DNS servers of a virtual network using the Azure CLI
+
+Use [az network vnet update](/cli/azure/network/vnet#az-network-vnet-update) to update the DNS servers of a virtual network.
-- Azure CLI: [az network vnet update](/cli/azure/network/vnet)-- PowerShell: [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork)
+```azurecli-interactive
+## Update the virtual network with the IP address of the DNS server. ##
+az network vnet update --resource-group myResourceGroup --name myVNet --dns-servers 10.0.0.10
+```
## Delete a virtual network You can delete a virtual network only if there are no resources connected to it. If there are resources connected to any subnet within the virtual network, you must first delete the resources that are connected to all subnets within the virtual network. The steps you take to delete a resource vary depending on the resource. To learn how to delete resources that are connected to subnets, read the documentation for each resource type you want to delete. To delete a virtual network:
-1. In the search box at the top of the portal, enter *virtual networks* in the search box. When **Virtual networks** appear in the search results, select it.
+### Delete a virtual network using the Azure portal
+
+1. In the search box at the top of the portal, enter *Virtual networks*. Select **Virtual networks** in the search results.
+ 2. From the list of virtual networks, select the virtual network you want to delete.
-3. Confirm that there are no devices connected to the virtual network by selecting **Connected devices**, under **SETTINGS**. If there are connected devices, you must delete them before you can delete the virtual network. If there are no connected devices, select **Overview**.
+
+3. Confirm that there are no devices connected to the virtual network by selecting **Connected devices**, under **Settings**. If there are connected devices, you must delete them before you can delete the virtual network. If there are no connected devices, select **Overview**.
+ 4. Select **Delete**.+ 5. To confirm the deletion of the virtual network, select **Yes**.
-**Commands**
+### Delete a virtual network using PowerShell
+
+Use [Remove-AzVirtualNetwork](/powershell/module/az.network/remove-azvirtualnetwork) to delete a virtual network.
+
+```azurepowershell-interactive
+Remove-AzVirtualNetwork -ResourceGroupName myResourceGroup -Name myVNet
+```
+
+### Delete a virtual network using the Azure CLI
+
+Use [az network vnet delete](/cli/azure/network/vnet#az-network-vnet-delete) to delete a virtual network.
-- Azure CLI: [azure network vnet delete](/cli/azure/network/vnet)-- PowerShell: [Remove-AzVirtualNetwork](/powershell/module/az.network/remove-azvirtualnetwork)
+```azurecli-interactive
+az network vnet delete --resource-group myResourceGroup --name myVNet
+```
## Permissions
-To perform tasks on virtual networks, your account must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or to a [custom](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) role that is assigned the appropriate actions listed in the following table:
+To perform tasks on virtual networks, your account must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md#network-contributor) role or to a [custom](../role-based-access-control/custom-roles.md) role that is assigned the appropriate actions listed in the following table:
| Action | Name | |- | -- |
To perform tasks on virtual networks, your account must be assigned to the [netw
## Next steps - Create a virtual network using [PowerShell](powershell-samples.md) or [Azure CLI](cli-samples.md) sample scripts, or using Azure [Resource Manager templates](template-samples.md)
+- Add, change, or delete [a virtual network subnet](virtual-network-manage-subnet.md)
- Create and assign [Azure Policy definitions](./policy-reference.md) for virtual networks
virtual-network Update Virtual Network Peering Address Space https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/update-virtual-network-peering-address-space.md
Title: Updating the address space for a peered virtual network
-description: Learn about adding or deleting the address space for a peered virtual network without downtime.
+ Title: Update the address space for a peered virtual network - Azure portal
+description: Learn how to add, modify, or delete the address ranges for a peered virtual network without downtime.
#Customer Intent: As a cloud engineer, I need to update the address space for peered virtual networks without incurring downtime from the current address spaces. I wish to do this in the Azure Portal.
-# Updating the address space for a peered virtual network - Portal
+# Update the address space for a peered virtual network using the Azure portal
In this article, you'll learn how to update a peered virtual network by adding or deleting an address space without incurring downtime interruptions using the Azure portal. This feature is useful when you need to grow or resize the virtual networks in Azure after scaling your workloads.
In this article, you'll learn how to update a peered virtual network by adding o
- An existing virtual network peering between two virtual networks - If you're adding address space, ensure that it doesn't overlap other address spaces
-## Modifying the address range prefix of an existing address range (For example changing 10.1.0.0/16 to 10.1.0.0/18)
+## Modify the address range prefix of an existing address range
In this section, you'll modify the address range prefix for an existing address range within your peered virtual network.

1. In the search box at the top of the portal, enter *virtual networks*. When **Virtual networks** appears in the search results, select it.
2. From the list of virtual networks, select the virtual network where you're modifying an address range.
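If you'd rather script the change than use the portal, a rough Azure CLI equivalent is sketched below. The names and prefixes are placeholders, `--address-prefixes` replaces the entire list of prefixes (so include every prefix the virtual network should keep), and the peering sync command requires a recent Azure CLI version.

```azurecli-interactive
# Change the prefix of an existing range, for example from 10.1.0.0/16 to 10.1.0.0/18.
# --address-prefixes replaces the full list, so pass every prefix the network should keep.
az network vnet update \
  --resource-group myResourceGroup \
  --name myVNet1 \
  --address-prefixes 10.1.0.0/18 10.2.0.0/16

# Sync the peering so the remote virtual network learns the updated address space.
az network vnet peering sync \
  --resource-group myResourceGroup \
  --vnet-name myVNet1 \
  --name myVNet1-to-myVNet2
```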
In this section, you'll add an IP address range to the IP address space of a pee
3. Select **Address space**, under **Settings**.
4. On the **Address space** page, add the address range per your requirements, and select **Save** when finished.
- ::image type="content" source="media/update-virtual-network-peering-address-space/add-address-range-thumb.png" alt-text="Image of the Address Space page used to add an IP address range." lightbox="media/update-virtual-network-peering-address-space/add-address-range-full.png":::
+ :::image type="content" source="media/update-virtual-network-peering-address-space/add-address-range-thumb.png" alt-text="Image of the Address Space page used to add an IP address range." lightbox="media/update-virtual-network-peering-address-space/add-address-range-full.png":::
1. Select **Peering**, under **Settings**, and **Sync** the peering connection.
1. As previously done, verify the address space is updated on the remote virtual network.

## Delete an address range
In this task, you'll delete an IP address range from an address space. First, yo
1. As previously done, verify the address space is updated on the remote virtual network.

## Next steps
-- [Learn how to Create, change, or delete an Azure virtual network peering]()
-- [Links]()
+- Learn how to [Create, change, or delete an Azure virtual network peering](virtual-network-manage-peering.md)
+- Learn how to [Create, change, or delete a virtual network](manage-virtual-network.md)
virtual-network Virtual Network Manage Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-peering.md
Learn how to create, change, or delete a virtual network peering. Virtual network peering enables you to connect virtual networks in the same region and across regions (also known as Global VNet Peering) through the Azure backbone network. Once peered, the virtual networks are still managed as separate resources. If you're new to virtual network peering, you can learn more about it in the [virtual network peering overview](virtual-network-peering-overview.md) or by completing the [virtual network peering tutorial](tutorial-connect-virtual-networks-portal.md).
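For readers who prefer the command line, a minimal Azure CLI sketch of creating one side of a peering is shown below. The virtual network and peering names are placeholders, and the same command must be run in the opposite direction to complete the peering.

```azurecli-interactive
# Create one direction of the peering; repeat with the names reversed for the other direction.
az network vnet peering create \
  --name myVNet1-to-myVNet2 \
  --resource-group myResourceGroup \
  --vnet-name myVNet1 \
  --remote-vnet myVNet2 \
  --allow-vnet-access
```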
-## Before you begin
+## Prerequisites
If you don't have an Azure account with an active subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Complete one of these tasks before starting the remainder of this article:
virtual-network Virtual Network Network Interface Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-network-interface-vm.md
tags: azure-resource-manager
Previously updated : 11/15/2022 Last updated : 11/16/2022
Learn how to add an existing network interface when you create an Azure virtual
If you need to add, change, or remove IP addresses for a network interface, see [Configure IP addresses for an Azure network interface](./ip-services/virtual-network-network-interface-addresses.md). To manage network interfaces, see [Create, change, or delete a network interface](virtual-network-network-interface.md).
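As a hedged illustration (not taken from the article itself), attaching an existing network interface at VM creation time with the Azure CLI can look like the following. The NIC name, image alias, and other values are placeholders and may vary with your CLI version.

```azurecli-interactive
# Create a VM and attach the existing network interface myNic instead of letting
# Azure create a new one. The image alias may differ depending on CLI version.
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --nics myNic \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```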
-## Before you begin
+## Prerequisites
If you don't have one, set up an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Complete one of these tasks before starting the remainder of this article:
virtual-network Virtual Network Network Interface Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vs-azure-tools-storage-manage-with-storage-explorer.md
The following versions of macOS support Storage Explorer:
* macOS 10.12 Sierra and later versions
-# [Linux](#tab/linux)
+# [Ubuntu](#tab/linux-ubuntu)
-Storage Explorer is available in the [Snap Store](https://snapcraft.io/storage-explorer) for most common distributions of Linux. We recommend Snap Store for this installation. The Storage Explorer snap installs all of its dependencies and updates when new versions are published to the Snap Store.
+Storage Explorer is available in the [Snap Store](https://snapcraft.io/storage-explorer). The Storage Explorer snap installs all of its dependencies and updates when new versions are published to the Snap Store.
-For supported distributions, see the [`snapd` installation page](https://snapcraft.io/docs/installing-snapd).
+Ubuntu comes with `snapd` preinstalled, which allows you to run snaps. You can learn more on the [`snapd` installation page](https://snapcraft.io/docs/installing-snapd).
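For example, installing the snap on Ubuntu is a single command; the snap name matches the Snap Store listing linked above.

```bash
# Install the Storage Explorer snap from the Snap Store.
sudo snap install storage-explorer
```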
-Storage Explorer requires the use of a password manager. You might have to connect to a password manager manually. You can connect Storage Explorer to your system's password manager by running the following command:
+Storage Explorer requires the use of a password manager. You can connect Storage Explorer to your system's password manager by running the following command:
```bash
snap connect storage-explorer:password-manager-service :password-manager-service
```
-Storage Explorer is also available as a *.tar.gz* download. If you use the *.tar.gz*, you must install dependencies manually. The following distributions of Linux support *.tar.gz* installation:
+Installing the Storage Explorer snap is recommended, but Storage Explorer is also available as a *.tar.gz* download. If you use the *.tar.gz*, you must install all of Storage Explorer's dependencies manually.
-* Ubuntu 22.04 x64
-* Ubuntu 20.04 x64
-* Ubuntu 18.04 x64
+For more help installing Storage Explorer on Ubuntu, see [Storage Explorer dependencies](./storage/common/storage-explorer-troubleshooting.md#storage-explorer-dependencies) in the Azure Storage Explorer troubleshooting guide.
-The *.tar.gz* installation might work on other distributions, but only these listed ones are officially supported.
+# [Red Hat Enterprise Linux](#tab/linux-rhel)
-For more help installing Storage Explorer on Linux, see [Linux dependencies](./storage/common/storage-explorer-troubleshooting.md#linux-dependencies) in the Azure Storage Explorer troubleshooting guide.
+Storage Explorer is available in the [Snap Store](https://snapcraft.io/storage-explorer). The Storage Explorer snap installs all of its dependencies and updates when new versions are published to the Snap Store.
+
+To run snaps, you'll need to install `snapd`. For installation instructions, see the [`snapd` installation page](https://snapcraft.io/docs/installing-snapd).
+
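As a rough sketch only, and assuming the EPEL repository is already enabled on your RHEL system, installing `snapd` and the Storage Explorer snap can look like this:

```bash
# Install snapd (assumes the EPEL repository is enabled), start it, then install the snap.
sudo dnf install snapd
sudo systemctl enable --now snapd.socket
sudo snap install storage-explorer
```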
+Storage Explorer requires the use of a password manager. You can connect Storage Explorer to your system's password manager by running the following command:
+
+```bash
+snap connect storage-explorer:password-manager-service :password-manager-service
+```
+
+For more help installing Storage Explorer on RHEL, see [Storage Explorer dependencies](./storage/common/storage-explorer-troubleshooting.md#storage-explorer-dependencies) in the Azure Storage Explorer troubleshooting guide.
+
+# [SUSE Linux Enterprise Server](#tab/linux-sles)
+
+> [!NOTE]
+> Storage Explorer has not been tested for SLES. You may try using Storage Explorer on your system, but we cannot guarantee that Storage Explorer will work as expected.
+
+Storage Explorer is available in the [Snap Store](https://snapcraft.io/storage-explorer). The Storage Explorer snap installs all of its dependencies and updates when new versions are published to the Snap Store.
+
+To run snaps, you'll need to install `snapd`. For installation instructions, see the [`snapd` installation page](https://snapcraft.io/docs/installing-snapd).
+
+Storage Explorer requires the use of a password manager. You can connect Storage Explorer to your system's password manager by running the following command:
+
+```bash
+snap connect storage-explorer:password-manager-service :password-manager-service
+```
+
+For more help installing Storage Explorer on SLES, see [Storage Explorer dependencies](./storage/common/storage-explorer-troubleshooting.md#storage-explorer-dependencies) in the Azure Storage Explorer troubleshooting guide.
As you enter text in the search box, Storage Explorer displays all resources tha
[14]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/get-shared-access-signature-for-storage-explorer.png [15]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/create-shared-access-signature-for-storage-explorer.png
-[21]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/connect-to-cosmos-db-by-connection-string.png
-[22]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/connection-string-for-cosmos-db.png
[23]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/storage-explorer-search-for-resource.png