Updates from: 03/05/2022 02:08:29
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md
Your application needs to handle certain errors coming from the Azure AD B2C service.
## Next steps Set up a [force password reset](force-password-reset.md).+
+[Sign-up and Sign-in with embedded password reset](https://github.com/azure-ad-b2c/samples/tree/master/policies/embedded-password-reset).
active-directory-b2c Azure Ad External Identities Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-ad-external-identities-videos.md
description: Microsoft Azure Active Directory B2C Video Series -++
active-directory-b2c Azure Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-sentinel.md
description: In this tutorial, you use Microsoft Sentinel to perform security analytics for Azure Active Directory B2C data. -++
active-directory-b2c Configure A Sample Node Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-a-sample-node-web-app.md
Previously updated : 02/09/2022 Last updated : 04/03/2022
You can now test the sample app. You need to start the Node server and access it
### Test sign in 1. After the page with the **Sign in** button finishes loading, select **Sign in**. You're prompted to sign in.
-1. Enter your sign-in credentials, such as email address and password. If you don't have an account, select **Sign up now** to create an account. If you have an account but have forgotten your password, select **Forgot your password?** to recover your password. After you successfully sign in or sign up, you should see the following page that shows sign-in status.
+1. Enter your sign-in credentials, such as email address and password. If you don't have an account, select **Sign up now** to create an account. After you successfully sign in or sign up, you should see the following page that shows sign-in status.
:::image type="content" source="./media/configure-a-sample-node-web-app/tutorial-dashboard-page.png" alt-text="Screenshot shows web app sign-in status.":::
active-directory-b2c Configure Authentication In Sample Node Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-sample-node-web-app-with-api.md
Previously updated : 02/09/2022 Last updated : 04/03/2022
You're now ready to test the web application's scoped access to the web API. Run
1. To call the protected API endpoint, select the **Sign in to call PROTECTED API** button. You're prompted to sign in.
-1. Enter your sign-in credentials, such as email address and password. If you don't have an account, select **Sign up now** to create an account. If you have an account but have forgotten your password, select **Forgot your password?** to recover your password. After you successfully sign in or sign up, you should see the following page with **Call the PROTECTED API** button.
+1. Enter your sign-in credentials, such as email address and password. If you don't have an account, select **Sign up now** to create an account. After you successfully sign in or sign up, you should see the following page with **Call the PROTECTED API** button.
:::image type="content" source="./media/tutorial-call-api-using-access-token/signed-in-to-call-api.png" alt-text="Web page for signed to call protected A P I.":::
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-twitter.md
zone_pivot_groups: b2c-policy-type
## Create an application
-To enable sign-in for users with a Twitter account in Azure AD B2C, you need to create a Twitter application. If you don't already have a Twitter account, you can sign up at [`https://twitter.com/signup`](https://twitter.com/signup). You also need to [Apply for a developer account](https://developer.twitter.com/en/apply/user.html). For more information, see [Apply for access](https://developer.twitter.com/en/apply-for-access).
+To enable sign-in for users with a Twitter account in Azure AD B2C, you need to create a Twitter application. If you don't already have a Twitter account, you can sign up at [`https://twitter.com/signup`](https://twitter.com/signup). You also need to [Apply for a developer account](https://developer.twitter.com/). For more information, see [Apply for access](https://developer.twitter.com/en/apply-for-access).
1. Sign in to the [Twitter Developer Portal](https://developer.twitter.com/portal/projects-and-apps) with your Twitter account credentials. 1. Under **Standalone Apps**, select **+Create App**.
active-directory-b2c Identity Verification Proofing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-verification-proofing.md
description: Learn about our partners who integrate with Azure AD B2C to provide identity proofing and verification solutions -++
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
The top-level resource for policy keys in the Microsoft Graph API is the [Truste
## Application extension properties
+- [Create extension properties](/graph/api/application-post-extensionproperty)
- [List extension properties](/graph/api/application-list-extensionproperty)-- [Delete extension property](/graph/api/application-delete-extensionproperty)
+- [Get an extension property](/graph/api/extensionproperty-get)
+- [Delete extension property](/graph/api/extensionproperty-delete)
+- [Get available extension properties](/graph/api/directoryobject-getavailableextensionproperties)
+
+<!--
+#Hiding this note because user flows and extension attributes are different things in Microsoft Graph.
Azure AD B2C provides a directory that can hold 100 custom attributes per user. For user flows, these extension properties are [managed by using the Azure portal](user-flow-custom-attributes.md). For custom policies, Azure AD B2C creates the property for you, the first time the policy writes a value to the extension property.
+-->
+
+Azure AD B2C provides a directory that can hold 100 extension values per user. To manage the extension values for a user, use the following [User APIs](/graph/api/resources/user) in Microsoft Graph.
+
+- [Update user](/graph/api/user-update): To write or remove the extension property value from the user.
+- [Get a user](/graph/api/user-get): To retrieve the extension property value for the user. The extension property is returned by default through the `beta` endpoint, but through the `v1.0` endpoint only when you request it with `$select`.
## Audit logs
active-directory-b2c Openid Connect Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect-technical-profile.md
The technical profile also returns claims that aren't returned by the identity p
| ValidTokenIssuerPrefixes | No | A key that can be used to sign in to each of the tenants when using a multi-tenant identity provider such as Azure Active Directory. |
| UsePolicyInRedirectUri | No | Indicates whether to use a policy when constructing the redirect URI. When you configure your application in the identity provider, you need to specify the redirect URI. The redirect URI points to Azure AD B2C, `https://{your-tenant-name}.b2clogin.com/{your-tenant-name}.onmicrosoft.com/oauth2/authresp`. If you specify `true`, you need to add a redirect URI for each policy you use. For example: `https://{your-tenant-name}.b2clogin.com/{your-tenant-name}.onmicrosoft.com/{policy-name}/oauth2/authresp`. |
| MarkAsFailureOnStatusCode5xx | No | Indicates whether a request to an external service should be marked as a failure if the Http status code is in the 5xx range. The default is `false`. |
-| DiscoverMetadataByTokenIssuer | No | Indicates whether the OIDC metadata should be discovered by using the issuer in the JWT token. |
+| DiscoverMetadataByTokenIssuer | No | Indicates whether the OIDC metadata should be discovered by using the issuer in the JWT token. If you need to build the metadata endpoint URL based on the issuer, set this to `true`. |
| IncludeClaimResolvingInClaimsHandling  | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
| token_endpoint_auth_method | No | Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), `client_secret_basic` (public preview), and `private_key_jwt` (public preview). For more information, see the [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
| token_signing_algorithm | No | Specifies the signing algorithm to use when `token_endpoint_auth_method` is set to `private_key_jwt`. Possible values: `RS256` (default) or `RS512`. |
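As a point of reference, the following sketch shows how several of these metadata keys might appear together in an OpenID Connect technical profile. The profile Id, display name, metadata URL, client ID, and policy key name are placeholders, and the key set is illustrative rather than exhaustive:

```xml
<TechnicalProfile Id="Contoso-OpenIdConnect">
  <DisplayName>Contoso</DisplayName>
  <Protocol Name="OpenIdConnect" />
  <Metadata>
    <Item Key="METADATA">https://login.contoso.com/.well-known/openid-configuration</Item>
    <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
    <Item Key="response_types">code</Item>
    <Item Key="scope">openid</Item>
    <Item Key="response_mode">form_post</Item>
    <!-- Don't append the policy name to the redirect URI -->
    <Item Key="UsePolicyInRedirectUri">false</Item>
    <!-- Treat 5xx responses from the identity provider as failures -->
    <Item Key="MarkAsFailureOnStatusCode5xx">true</Item>
    <!-- Send the client credentials in the POST body (the default) -->
    <Item Key="token_endpoint_auth_method">client_secret_post</Item>
  </Metadata>
  <CryptographicKeys>
    <Key Id="client_secret" StorageReferenceId="B2C_1A_ContosoClientSecret" />
  </CryptographicKeys>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
  </OutputClaims>
</TechnicalProfile>
```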
active-directory-b2c Partner Akamai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-akamai.md
Title: Tutorial to configure Azure Active Directory B2C with Akamai Web Application Firewall
+ Title: Configure Azure Active Directory B2C with Akamai Web Application Firewall
-description: Tutorial to configure Akamai Web application firewall with Azure AD B2C
+description: Configure Akamai Web application firewall with Azure AD B2C
-++ Previously updated : 07/15/2021 Last updated : 04/03/2022
-# Tutorial: Configure Akamai with Azure Active Directory B2C
+# Configure Akamai with Azure Active Directory B2C
-In this sample tutorial, learn how to enable [Akamai Web Application Firewall (WAF)](https://www.akamai.com/us/en/resources/web-application-firewall.jsp) solution for Azure Active Directory (AD) B2C tenant using custom domains. Akamai WAF helps organization protect their web applications from malicious attacks that aim to exploit vulnerabilities such as SQL injection and Cross site scripting.
+In this sample article, learn how to enable the [Akamai Web Application Firewall (WAF)](https://www.akamai.com/us/en/resources/web-application-firewall.jsp) solution for an Azure Active Directory B2C (Azure AD B2C) tenant using custom domains. Akamai WAF helps organizations protect their web applications from malicious attacks that aim to exploit vulnerabilities such as SQL injection and cross-site scripting.
>[!NOTE] >This feature is in public preview.
Benefits of using Akamai WAF solution:
- Allows fine grained manipulation of traffic to protect and secure your identity infrastructure.
-This sample tutorial applies to both [Web Application Protector (WAP)](https://www.akamai.com/us/en/products/security/web-application-protector-enterprise-waf-firewall-ddos-protection.jsp) and [Kona Site Defender (KSD)](https://www.akamai.com/us/en/products/security/kona-site-defender.jsp) WAF solutions that Akamai offers.
+This article applies to both [Web Application Protector (WAP)](https://www.akamai.com/us/en/products/security/web-application-protector-enterprise-waf-firewall-ddos-protection.jsp) and [Kona Site Defender (KSD)](https://www.akamai.com/us/en/products/security/kona-site-defender.jsp) WAF solutions that Akamai offers.
## Prerequisites
Akamai WAF integration includes the following components:
1. To use custom domains in Azure AD B2C, you must use the custom domain feature provided by Azure Front Door. Learn how to [enable Azure AD B2C custom domains](./custom-domain.md?pivots=b2c-user-flow).
-2. After custom domain for Azure AD B2C is successfully configured using Azure Front Door, [test the custom domain](./custom-domain.md?pivots=b2c-custom-policy#test-your-custom-domain) before proceeding further.
+1. After custom domain for Azure AD B2C is successfully configured using Azure Front Door, [test the custom domain](./custom-domain.md?pivots=b2c-custom-policy#test-your-custom-domain) before proceeding further.
## Onboard with Akamai
Akamai WAF integration includes the following components:
1. [Create a new property](https://control.akamai.com/wh/CUSTOMER/AKAMAI/en-US/WEBHELP/property-manager/property-manager-help/GUID-14BB87F2-282F-4C4A-8043-B422344884E6.html).
-2. Configure the property settings as:
+1. Configure the property settings as:
| Property | Value |
|:|:|
|Property version | Select Standard or Enhanced TLS (preferred) |
- |Property hostnames | Add a property hostname. This is the name of your custom domain, for example: login.domain.com. <BR> Create or modify a certificate with the appropriate settings for the custom domain name. For more information, see [this](https://learn.akamai.com/en-us/webhelp/property-manager/https-delivery-with-property-manager/GUID-9EE0EB6A-E62B-4F5F-9340-60CBD093A429.html). |
+ |Property hostnames | Add a property hostname. This is the name of your custom domain, for example, `login.domain.com`. <BR> Create or modify a certificate with the appropriate settings for the custom domain name. Learn more about [creating a certificate](https://learn.akamai.com/en-us/webhelp/property-manager/https-delivery-with-property-manager/GUID-9EE0EB6A-E62B-4F5F-9340-60CBD093A429.html). |
-3. Set the origin server property configuration settings as:
+1. Set the origin server property configuration settings as:
|Property| Value | |:--|:--|
Akamai WAF integration includes the following components:
### Configure DNS
-Create a CNAME record in your DNS such as login.domain.com that points to the Edge hostname in the Property hostname field.
+Create a CNAME record in your DNS such as `login.domain.com` that points to the Edge hostname in the Property hostname field.
### Configure Akamai WAF 1. [Configure Akamai WAF](https://learn.akamai.com/en-us/webhelp/kona-site-defender/kona-site-defender-quick-start/GUID-6294B96C-AE8B-4D99-8F43-11B886E6C39A.html#GUID-6294B96C-AE8B-4D99-8F43-11B886E6C39A).
-2. Ensure that **Rule Actions** for all items listed under the **Attack Group** are set to **Deny**.
+1. Ensure that **Rule Actions** for all items listed under the **Attack Group** are set to **Deny**.
-![Image shows rule action set to deny](./media/partner-akamai/rule-action-deny.png)
+ ![Image shows rule action set to deny](./media/partner-akamai/rule-action-deny.png)
Learn more about [how the control works and configuration options](https://control.akamai.com/dl/security/GUID-81C0214B-602A-4663-839D-68BCBFF41292.html).
Learn more about [how the control works and configuration options](https://contr
### Test the settings
-Check the following to ensure all traffic to Azure AD B2C is now going through the custom domain:
+Check the following to ensure all traffic to Azure AD B2C is going through the custom domain:
- Make sure all incoming requests to the Azure AD B2C custom domain are routed via Akamai WAF and use a valid TLS connection. - Ensure all cookies are set correctly by Azure AD B2C for the custom domain.-- The Akamai WAF dashboard available under Defender for Cloud console display charts for all traffic passing through the WAF along with any attack traffic.
+- The Akamai WAF dashboard, available under the Defender for Cloud console, displays charts for all traffic that passes through the WAF, along with any attack traffic.
## Next steps
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-arkose-labs.md
description: Tutorial to configure Azure Active Directory B2C with Arkose Labs to identify risky and fraudulent users -++
active-directory-b2c Partner Azure Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-azure-web-application-firewall.md
description: Tutorial to configure Azure Active Directory B2C with Azure Web application firewall to protect your applications from malicious attacks -++
active-directory-b2c Partner Bindid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md
+ Title: Configure Azure Active Directory B2C with Transmit Security
+
+description: Configure Azure Active Directory B2C with Transmit Security for passwordless strong customer authentication
+++++++ Last updated : 02/28/2022++
+zone_pivot_groups: b2c-policy-type
++
+# Configure Transmit Security with Azure Active Directory B2C for passwordless authentication
+++
+In this sample tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with **BindID**, the passwordless authentication solution from [Transmit Security](https://www.transmitsecurity.com/bindid). BindID is a passwordless authentication service that uses strong Fast Identity Online (FIDO2) biometric authentication for a reliable omni-channel authentication experience. The solution ensures a smooth sign-in experience for all customers across every device and channel, and eliminates fraud, phishing, and credential reuse.
+
+## Scenario description
+
+The following architecture diagram shows the implementation.
+
+![Screenshot showing the BindID and Azure AD B2C architecture diagram.](media/partner-bindid/partner-bindid-architecture-diagram.png)
+
+|Step | Description |
+|:--| :--|
+| 1. | The user arrives at the login page, selects sign-in or sign-up, and enters a username.
+| 2. | Azure AD B2C redirects the user to BindID using an OpenID Connect (OIDC) request.
+| 3. | BindID authenticates the user using appless FIDO2 biometrics, such as fingerprint.
+| 4. | A decentralized authentication response is returned to BindID.
+| 5. | The OIDC response is passed on to Azure AD B2C.
+| 6.| User is either granted or denied access to the customer application based on the verification results.
+
+## Onboard with BindID
+
+To integrate BindID with your Azure AD B2C instance, you'll need to configure an application in the [BindID Admin Portal](https://admin.bindid-sandbox.io/console/). For more information, see the [getting started guide](https://developer.bindid.io/docs/guides/admin_portal/topics/getStarted/get_started_admin_portal). You can either create a new application or use one that you already created.
+
+## Prerequisites
+
+To get started, you'll need:
+
+- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription.
+
+- A BindID tenant. You can [sign up for free.](https://www.transmitsecurity.com/developer?utm_signup=dev_hub#try)
+
+- If you haven't already done so, [register](./tutorial-register-applications.md) a web application, [and enable ID token implicit grant](./tutorial-register-applications.md#enable-id-token-implicit-grant).
++
+- Complete the steps in the article [Get started with custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy).
++
+### Step 1 - Create an application registration in BindID
+
+To configure your tenant application in BindID, go to [Applications](https://admin.bindid-sandbox.io/console/#/applications). The following information is needed:
+
+| Property | Description |
+|:|:|
+| Name | Azure AD B2C/your desired application name|
+| Domain | name.onmicrosoft.com|
+| Redirect URIs| https://jwt.ms |
+| Redirect URLs |Specify the page to which users are redirected after BindID authentication: `https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp`<br>For example: `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp`<br>If you use a custom domain, enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`.<br>Replace `your-domain-name` with your custom domain, and `your-tenant-name` with the name of your tenant.|
+
+>[!NOTE]
+>BindID provides you with a Client ID and Client Secret, which you'll need later to configure the identity provider in Azure AD B2C.
++
+### Step 2 - Add a new Identity provider in Azure AD B2C
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home) as the global administrator of your Azure AD B2C tenant.
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+3. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+
+4. Choose **All services** in the top-left corner of the Azure portal, then search for and select **Azure AD B2C**.
+
+5. Navigate to **Dashboard** > **Azure Active Directory B2C** > **Identity providers**.
+
+6. Select **New OpenID Connect Provider**.
+
+7. Select **Add**.
+
+### Step 3 - Configure an Identity provider
+
+1. Select **Identity provider type > OpenID Connect**
+
+2. Fill out the form to set up the Identity provider:
+
+ |Property |Value |
+ |:|:|
+ |Name |Enter BindID – Passwordless or a name of your choice|
+ |Metadata URL| `https://signin.bindid-sandbox.io/.well-known/openid-configuration` |
+ |Client ID|The application ID from the BindID admin UI captured in **Step 1**|
+ |Client Secret|The application Secret from the BindID admin UI captured in **Step 1**|
+ |Scope|OpenID email|
+ |Response type|Code|
+ |Response mode|form_post|
+ |**Identity provider claims mapping**|
+ |User ID|sub|
+ |Email|email|
+
+3. Select **Save** to complete the setup for your new OIDC Identity provider.
+
+### Step 4 - Create a user flow policy
+
+You should now see BindID as a new OIDC Identity provider listed within your B2C identity providers.
+
+1. In your Azure AD B2C tenant, under **Policies**, select **User flows**.
+
+2. Select **New user flow**
+
+3. Select **Sign up and sign in** > **Version Recommended** > **Create**.
+
+4. Enter a **Name** for your policy.
+
+5. In the Identity providers section, select your newly created BindID Identity provider.
+
+6. Select **None** for Local Accounts to disable email and password-based authentication.
+
+7. Select **Create**
+
+8. Select the newly created User Flow
+
+9. Select **Run user flow**
+
+10. In the form, select the JWT application and enter the Reply URL, such as `https://jwt.ms`.
+
+11. Select **Run user flow**.
+
+12. The browser is redirected to the BindID sign-in page. Enter the account email registered during user registration, and authenticate using appless FIDO2 biometrics, such as a fingerprint.
+
+13. Once the authentication challenge is accepted, the browser redirects the user to the reply URL.
+++
+### Step 2 - Create a BindID policy key
+
+Store the client secret that you previously recorded in your Azure AD B2C tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+3. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+
+4. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+
+5. On the Overview page, select **Identity Experience Framework**.
+
+6. Select **Policy Keys** and then select **Add**.
+
+7. For **Options**, choose `Manual`.
+
+8. Enter a **Name** for the policy key. For example, `BindIDClientSecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
+
+9. In **Secret**, enter your client secret that you previously recorded.
+
+10. For **Key usage**, select `Signature`.
+
+11. Select **Create**.
+
+>[!NOTE]
+>In Azure Active Directory B2C, [**custom policies**](./user-flow-overview.md) are designed primarily to address complex scenarios. For most scenarios, we recommend that you use built-in [**user flows**](./user-flow-overview.md).
+
+### Step 3 - Configure BindID as an Identity provider
+
+To enable users to sign in using BindID, you need to define BindID as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated using a digital identity available on their device, proving the user's identity.
+
+You can define BindID as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy:
+
+1. Open the `TrustFrameworkExtensions.xml`.
+
+2. Find the **ClaimsProviders** element. If it doesn't exist, add it under the root element.
+
+3. Add a new **ClaimsProvider** as follows:
+
+```xml
+ <ClaimsProvider>
+ <Domain>signin.bindid-sandbox.io</Domain>
+ <DisplayName>BindID</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="BindID-OpenIdConnect">
+ <DisplayName>BindID</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <Metadata>
+ <Item Key="METADATA">https://signin.bindid-sandbox.io/.well-known/openid-configuration</Item>
+ <!-- Update the Client ID below to the BindID Application ID -->
+ <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
+ <Item Key="response_types">code</Item>
+ <Item Key="scope">openid email</Item>
+ <Item Key="response_mode">form_post</Item>
+ <Item Key="HttpBinding">POST</Item>
+ <Item Key="UsePolicyInRedirectUri">false</Item>
+ <Item Key="AccessTokenResponseFormat">json</Item>
+ </Metadata>
+ <CryptographicKeys>
+ <Key Id="client_secret" StorageReferenceId="B2C_1A_BindIDClientSecret" />
+ </CryptographicKeys>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
+ <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" />
+ <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource"
+ DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
+ </OutputClaims>
+ <OutputClaimsTransformations>
+ <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
+ <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
+ <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
+ </OutputClaimsTransformations>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+
+```
+
+4. Set **client_id** to your BindID Application ID.
+
+5. Save the file.
+
+### Step 4 - Add a user journey
+
+At this point, the identity provider has been set up, but it's not yet available on any of the sign-in pages. If you don't have your own custom user journey, create a duplicate of an existing template user journey; otherwise, continue to the next step.
+
+1. Open the `TrustFrameworkBase.xml` file from the starter pack.
+
+2. Find and copy the entire contents of the **UserJourney** element that includes `Id="SignUpOrSignIn"`.
+
+3. Open the `TrustFrameworkExtensions.xml` and find the UserJourneys element. If the element doesn't exist, add one.
+
+4. Paste the entire content of the UserJourney element that you copied as a child of the UserJourneys element.
+
+5. Rename the ID of the user journey. For example, `Id="CustomSignUpSignIn"`. A skeleton of the result is sketched after this list.
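A minimal skeleton of where the renamed journey ends up in `TrustFrameworkExtensions.xml`; the `Id` value is the example above, and the orchestration steps are the ones copied from the base policy:

```xml
<UserJourneys>
  <UserJourney Id="CustomSignUpSignIn">
    <OrchestrationSteps>
      <!-- Orchestration steps copied from the SignUpOrSignIn journey in TrustFrameworkBase.xml -->
      ...
    </OrchestrationSteps>
  </UserJourney>
</UserJourneys>
```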
+
+### Step 5 - Add the identity provider to a user journey
+
+Now that you have a user journey, add the new identity provider to the user journey.
+
+1. Find the orchestration step element that includes Type=`CombinedSignInAndSignUp`, or Type=`ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element contains a list of identity providers that a user can sign in with. The order of the elements controls the order of the sign-in buttons presented to the user. Add a **ClaimsProviderSelection** XML element. Set the value of **TargetClaimsExchangeId** to a friendly name, such as `BindIDExchange`.
+
+2. In the next orchestration step, add a **ClaimsExchange** element. Set the **Id** to the value of the target claims exchange ID to link the BindID button to `BindID-SignIn` action. Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier.
+
+The following XML demonstrates orchestration steps of a user journey with the identity provider:
++
+```xml
+<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
+ <ClaimsProviderSelection TargetClaimsExchangeId="BindIDExchange" />
+ </ClaimsProviderSelections>
+ ...
+</OrchestrationStep>
+
+<OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="BindIDExchange" TechnicalProfileReferenceId="BindID-OpenIdConnect" />
+ </ClaimsExchanges>
+</OrchestrationStep>
+```
+
+### Step 6 - Configure the relying party policy
+
+The relying party policy, for example [SignUpSignIn.xml](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/master/SocialAccounts/SignUpOrSignin.xml), specifies the user journey that Azure AD B2C executes. You can also control what claims are passed to your application by adjusting the **OutputClaims** element of the **PolicyProfile** TechnicalProfile element. In this sample, the application receives user attributes such as display name, given name, surname, email, objectId, identity provider, and tenantId.
+
+```xml
+ <RelyingParty>
+ <DefaultUserJourney ReferenceId="SignUpOrSignInWithBindID" />
+ <TechnicalProfile Id="BindID-OpenIdConnect">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="displayName" />
+ <OutputClaim ClaimTypeReferenceId="givenName" />
+ <OutputClaim ClaimTypeReferenceId="surname" />
+ <OutputClaim ClaimTypeReferenceId="email" />
+ <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub"/>
+ <OutputClaim ClaimTypeReferenceId="identityProvider" />
+ <OutputClaim ClaimTypeReferenceId="tenantId" AlwaysUseDefaultValue="true" DefaultValue="{Policy:TenantObjectId}" />
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="sub" />
+ </TechnicalProfile>
+ </RelyingParty>
+```
+
+### Step 7 - Upload the custom policy
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#home).
+
+2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+3. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+
+4. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
+
+5. Under Policies, select **Identity Experience Framework**.
+
+6. Select **Upload Custom Policy**, and then upload the two policy files that you changed, in the following order: the extension policy, for example `TrustFrameworkExtensions.xml`, then the relying party policy, such as `SignUpSignIn.xml`.
++
+### Step 8 - Test your custom policy
+
+1. Open the Azure AD B2C tenant and under Policies select **Identity Experience Framework**.
+
+2. Select your previously created **CustomSignUpSignIn** policy, and then select the settings:
+
+ a. **Application**: select the registered app (sample is JWT)
+
+ b. **Reply URL**: select the **redirect URL** that should show `https://jwt.ms`.
+
+ c. Select **Run now**.
+
+If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
++
+## Next steps
+
+For additional information, review the following articles:
+
+- [Custom policies in Azure AD B2C](custom-policy-overview.md)
+
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+
+- [Sample custom policies for BindID and Azure AD B2C integration](https://github.com/TransmitSecurity/azure-ad-b2c-bindid-integration)
++
active-directory-b2c Partner Biocatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-biocatch.md
description: Tutorial to configure Azure Active Directory B2C with BioCatch to identify risky and fraudulent users -++
active-directory-b2c Partner Bloksec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bloksec.md
description: Learn how to integrate Azure AD B2C authentication with BlokSec for Passwordless authentication -++
active-directory-b2c Partner Cloudflare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-cloudflare.md
description: Tutorial to configure Azure Active Directory B2C with Cloudflare Web application firewall to protect your applications from malicious attacks -++
active-directory-b2c Partner Datawiza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-datawiza.md
description: Learn how to integrate Azure AD B2C authentication with Datawiza for secure hybrid access -++
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
description: Tutorial to configure Azure Active Directory B2C with Microsoft Dynamics 365 Fraud Protection to identify risky and fraudulent account -++
active-directory-b2c Partner Experian https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-experian.md
description: Learn how to integrate Azure AD B2C authentication with Experian for Identification verification and proofing based on user attributes to prevent fraud. -++
active-directory-b2c Partner F5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-f5.md
description: Learn how to integrate Azure AD B2C authentication with F5 BIG-IP for secure hybrid access -++ Last updated 10/15/2021- # Tutorial: Secure Hybrid Access to applications with Azure AD B2C and F5 BIG-IP
active-directory-b2c Partner Haventec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-haventec.md
Title: Tutorial to configure Azure Active Directory B2C with Haventec
description: Learn how to integrate Azure AD B2C authentication with Haventec for multifactor passwordless authentication -++
active-directory-b2c Partner Hypr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-hypr.md
description: Tutorial to configure Azure Active Directory B2C with Hypr for true passwordless strong customer authentication -++
active-directory-b2c Partner Idemia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-idemia.md
description: Learn how to integrate Azure AD B2C authentication with IDEMIA for relying party to consume IDEMIA or US State issued mobile IDs -++
active-directory-b2c Partner Jumio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-jumio.md
description: In this tutorial, you configure Azure Active Directory B2C with Jumio for automated ID verification, safeguarding customer data. -++
active-directory-b2c Partner Keyless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-keyless.md
description: Tutorial for configuring Keyless with Azure Active Directory B2C for passwordless authentication -++
active-directory-b2c Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-lexisnexis.md
Title: Tutorial to configure Azure Active Directory B2C with LexisNexis
description: Learn how to integrate Azure AD B2C authentication with LexisNexis, a profiling and identity validation service used to verify user identification and provide comprehensive risk assessments based on the user's device. -++
active-directory-b2c Partner N8identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-n8identity.md
description: Tutorial for configuring TheAccessHub Admin Tool with Azure Active Directory B2C to address customer accounts migration and Customer Service Requests (CSR) administration. -++
active-directory-b2c Partner Nevis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nevis.md
description: Learn how to integrate Azure AD B2C authentication with Nevis for passwordless authentication -++
active-directory-b2c Partner Nok Nok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nok-nok.md
description: Tutorial to configure Nok Nok with Azure Active Directory B2C to enable passwordless FIDO2 authentication -++
active-directory-b2c Partner Onfido https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-onfido.md
description: Learn how to integrate Azure AD B2C authentication with Onfido for document ID and facial biometrics verification -++
active-directory-b2c Partner Ping Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-ping-identity.md
description: Learn how to integrate Azure AD B2C authentication with Ping Identity -++
active-directory-b2c Partner Saviynt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-saviynt.md
description: Tutorial to configure Azure Active Directory B2C with Saviynt for cross application integration to streamline IT modernization and promote better security, governance, and compliance. -++
active-directory-b2c Partner Strata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-strata.md
description: Learn how to integrate Azure AD B2C authentication with whoIam for user verification -++
active-directory-b2c Partner Whoiam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam.md
description: In this tutorial, learn how to integrate Azure AD B2C authentication with WhoIAM for user verification. -++
active-directory-b2c Partner Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-zscaler.md
description: Learn how to integrate Azure AD B2C authentication with Zscaler. -++
active-directory-b2c Relyingparty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/relyingparty.md
The **UserJourneyBehaviors** element contains the following elements:
| SessionExpiryInSeconds | 0:1 | The lifetime of Azure AD B2C's session cookie, specified in seconds, stored on the user's browser upon successful authentication. The default is 86,400 seconds (24 hours). The minimum is 900 seconds (15 minutes). The maximum is 86,400 seconds (24 hours). |
| JourneyInsights | 0:1 | The Azure Application Insights instrumentation key to be used. |
| ContentDefinitionParameters | 0:1 | The list of key value pairs to be appended to the content definition load URI. |
-|ScriptExecution| 0:1| The supported [JavaScript](javascript-and-page-layout.md) execution modes. Possible values: `Allow` or `Disallow` (default).
| JourneyFraming | 0:1| Allows the user interface of this policy to be loaded in an iframe. |
+| ScriptExecution| 0:1| The supported [JavaScript](javascript-and-page-layout.md) execution modes. Possible values: `Allow` or `Disallow` (default).
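A minimal sketch of how these elements might be combined in a relying party file; the journey name, framing source, and values shown are placeholders rather than recommendations:

```xml
<RelyingParty>
  <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
  <UserJourneyBehaviors>
    <SingleSignOn Scope="Tenant" />
    <SessionExpiryInSeconds>900</SessionExpiryInSeconds>
    <!-- Allow page-layout JavaScript to run for this policy -->
    <ScriptExecution>Allow</ScriptExecution>
    <!-- Permit the policy UI to be loaded in an iframe from the listed source -->
    <JourneyFraming Enabled="true" Sources="https://contoso.com" />
  </UserJourneyBehaviors>
  <TechnicalProfile Id="PolicyProfile">
    ...
  </TechnicalProfile>
</RelyingParty>
```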
+ ### SingleSignOn
active-directory-b2c Saml Service Provider Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-service-provider-options.md
For example, when `TokenNotBeforeSkewInSeconds` is set to `120` seconds:
You can specify whether milliseconds will be removed from date and time values within the SAML response. (These values include `IssueInstant`, `NotBefore`, `NotOnOrAfter`, and `AuthnInstant`.) To remove the milliseconds, set the `RemoveMillisecondsFromDateTime` metadata key within the relying party. Possible values: `false` (default) or `true`.

```xml
-<ClaimsProvider>
- <DisplayName>Token Issuer</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="Saml2AssertionIssuer">
- <DisplayName>Token Issuer</DisplayName>
- <Protocol Name="SAML2"/>
- <OutputTokenFormat>SAML2</OutputTokenFormat>
+ <RelyingParty>
+ <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="SAML2" />
    <Metadata>
      <Item Key="RemoveMillisecondsFromDateTime">true</Item>
    </Metadata>
- ...
+ <OutputClaims>
+ ...
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="objectId" ExcludeAsClaim="true" />
</TechnicalProfile>
+ </RelyingParty>
``` ## Use an issuer ID to override an issuer URI
By using these tools, you can check the integration between your application and
<!-- LINKS - External --> [samltest]: https://aka.ms/samltestapp
active-directory-b2c Saml Service Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-service-provider.md
When your SAML application makes a request to Azure AD B2C, the SAML AuthN reque
In the registration manifest, find the `identifierURIs` parameter and add the appropriate value. This value will be the same value that's configured in the SAML AuthN requests for `EntityId` at the application, and the `entityID` value in the application's metadata. You will also need to find the `accessTokenAcceptedVersion` parameter and set the value to `2`. > [!IMPORTANT]
-> If you do not update the `accessTokenAcceptedVersion` to `2` you will recieve an error message requiring a verified domain.
+> If you do not update the `accessTokenAcceptedVersion` to `2` you will receive an error message requiring a verified domain.
The following example shows the `entityID` value in the SAML metadata:
active-directory-b2c Session Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/session-behavior.md
Update the relying party (RP) file that initiates the user journey that you crea
</UserJourneyBehaviors> ```
-We recommend that you set the value of SessionExpiryInSeconds to be a short period (1200 seconds), while the value of KeepAliveInDays can be set to a relatively long period (30 days), as shown in the following example:
+Set both KeepAliveInDays and SessionExpiryInSeconds. During sign-in, if a user enables Keep Me Signed In (KMSI), KeepAliveInDays is used to set the session cookie; otherwise, the value specified in the SessionExpiryInSeconds parameter is used. We recommend that you set the value of SessionExpiryInSeconds to a short period (1200 seconds), while the value of KeepAliveInDays can be set to a relatively long period (30 days), as shown in the following example:
```xml <RelyingParty>
active-directory-b2c String Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/string-transformations.md
Use this claims transformation to check if a claim is equal to another claim. T
</InputClaims> <InputParameters> <InputParameter Id="operator" DataType="string" Value="NOT EQUAL" />
- <InputParameter Id="ignoreCase" DataType="string" Value="true" />
+ <InputParameter Id="ignoreCase" DataType="boolean" Value="true" />
</InputParameters> <OutputClaims> <OutputClaim ClaimTypeReferenceId="SameEmailAddress" TransformationClaimType="outputClaim" />
Use this claims transformation to check if a claim is equal to a value you speci
<InputParameters> <InputParameter Id="compareTo" DataType="string" Value="V1" /> <InputParameter Id="operator" DataType="string" Value="not equal" />
- <InputParameter Id="ignoreCase" DataType="string" Value="true" />
+ <InputParameter Id="ignoreCase" DataType="boolean" Value="true" />
</InputParameters> <OutputClaims> <OutputClaim ClaimTypeReferenceId="termsOfUseConsentRequired" TransformationClaimType="outputClaim" />
active-directory-b2c User Flow Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-custom-attributes.md
The following example demonstrates the use of a custom attribute in Azure AD B2C
::: zone-end
-## Using custom attribute with MS Graph API
+## Manage extension attributes through Microsoft Graph
-[Microsoft Graph API][ms-graph-api] supports creating and updating a user with extension attributes. Extension attributes in the Microsoft Graph API are named by using the convention `extension_ApplicationClientID_attributename`, where the `ApplicationClientID` is the **Application (client) ID** of the `b2c-extensions-app` [application](#azure-ad-b2c-extensions-app). Note that the **Application (client) ID** as it's represented in the extension attribute name includes no hyphens. For example, the Microsoft Graph API identifies an extension attribute `loyaltyId` in Azure AD B2C as `extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyId`.
+You can use the Microsoft Graph API to create and manage extension attributes, and then set their values for a user.
-Learn how to [interact with resources in your Azure AD B2C tenant](microsoft-graph-operations.md#user-management) using Microsoft Graph API.
-
+Extension attributes in the Microsoft Graph API are named by using the convention `extension_ApplicationClientID_attributename`, where the `ApplicationClientID` is equivalent to the **appId** but without the hyphens. For example, if the **appId** of the `b2c-extensions-app` application is `25883231-668a-43a7-80b2-5685c3f874bc` and the **attributename** is `loyaltyId`, then the extension attribute will be named `extension_25883231668a43a780b25685c3f874bc_loyaltyId`.
+
+Learn how to [manage extension attributes in your Azure AD B2C tenant](microsoft-graph-operations.md#application-extension-properties) using the Microsoft Graph API.
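For a quick illustration, here's a minimal `az rest` sketch of setting the `loyaltyId` extension attribute on a user through Microsoft Graph. The user object ID is a placeholder and the attribute name reuses the example application ID above; treat this as a sketch rather than the article's own sample.

```azurecli
# Set an extension attribute value on a user via Microsoft Graph.
# The user object ID is a placeholder; the attribute name is built from the
# example b2c-extensions-app appId shown above (hyphens removed).
az rest --method PATCH \
  --url "https://graph.microsoft.com/v1.0/users/11111111-2222-3333-4444-555555555555" \
  --headers "Content-Type=application/json" \
  --body '{"extension_25883231668a43a780b25685c3f874bc_loyaltyId": "1234"}'
```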
## Remove extension attribute Unlike built-in attributes, extension/custom attributes can be removed. The extension attributes' values can also be removed. > [!Important]
-> Before you remove the extension/custom attribute, for each account in the directory, set the extension attribute value to null. In this way you explicitly remove the extension attribute's values. Then continue to remove the extension attribute itself. Extension/custom attribute is queryable using MS Graph API.
+> Before you remove the extension/custom attribute, for each account in the directory, set the extension attribute value to `null`. In this way, you explicitly remove the extension attribute's values. Then continue to remove the extension attribute itself. Extension/custom attributes are queryable using the Microsoft Graph API.
::: zone pivot="b2c-user-flow"
Use the following steps to remove extension/custom attribute from a user flow in
::: zone pivot="b2c-custom-policy"
-To remove a custom attribute, use [MS Graph API](microsoft-graph-operations.md), and use the [Delete](/graph/api/application-delete-extensionproperty) command.
+Use the [Microsoft Graph API](microsoft-graph-operations.md#application-extension-properties) to delete the extension attribute from the application or to delete the extension attribute from the user.
::: zone-end
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Before combined registration, users registered authentication methods for Azure
> [!NOTE] > Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration.
-> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration. After this date tenants will be unable to utilize the separate legacy registration workflows for MFA and SSPR.
+>
+> After Sept. 30th, 2022, all users will register security information through the combined registration experience.
This article outlines what combined security registration is. To get started with combined security registration, see the following article:
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md
A major step in every multifactor authentication deployment is getting users reg
> [!NOTE] > Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration. Tenants created after this date will be unable to utilize the legacy registration workflows.
-> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration. After this date tenants will be unable to disable the combined registration experience.
+> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration.
We recommend that organizations use the [combined registration experience for Azure AD Multi-Factor Authentication and self-service password reset (SSPR)](howto-registration-mfa-sspr-combined.md). SSPR allows users to reset their password in a secure way using the same methods they use for Azure AD Multi-Factor Authentication. Combined registration is a single step for end users. To make sure you understand the functionality and end-user experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
active-directory Howto Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined.md
Before combined registration, users registered authentication methods for Azure
> [!NOTE] > Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration. Tenants created after this date will be unable to utilize the legacy registration workflows.
-> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration. After this date tenants will be unable to utilize the separate legacy registration workflows for MFA and SSPR.
+>
+> After Sept. 30th, 2022, all users will register security information through the combined registration experience.
To make sure you understand the functionality and effects before you enable the new experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md
Before deploying SSPR, you may opt to determine the number and the average cost
> [!NOTE] > Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration. Tenants created after this date will be unable to utilize the legacy registration workflows.
-> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration. After this date tenants will be unable to disable the combined registration experience.
+> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration.
We recommend that organizations use the [combined registration experience for Azure AD Multi-Factor Authentication and self-service password reset (SSPR)](howto-registration-mfa-sspr-combined.md). SSPR allows users to reset their password in a secure way using the same methods they use for Azure AD Multi-Factor Authentication. Combined registration is a single step for end users. To make sure you understand the functionality and end-user experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
Previously updated : 01/26/2022 Last updated : 03/03/2022
By selecting **Other clients**, you can specify a condition that affects apps th
## Device state (preview) > [!CAUTION]
-> **This preview feature is being deprecated.** Customers should use **Filter for devices** condition in Conditional Access to satisfy scenarios, previously achieved using device state (preview) condition.
+> **This preview feature has been deprecated.** Customers should use the **Filter for devices** condition in Conditional Access to satisfy scenarios previously achieved using the device state (preview) condition.
-The device state condition can be used to exclude devices that are hybrid Azure AD joined and/or devices marked as compliant with a Microsoft Intune compliance policy from an organization's Conditional Access policies.
+The device state condition was used to exclude devices that are hybrid Azure AD joined and/or devices marked as compliant with a Microsoft Intune compliance policy from an organization's Conditional Access policies.
For example, *All users* accessing the *Microsoft Azure Management* cloud app including **All device state** excluding **Device Hybrid Azure AD joined** and **Device marked as compliant** and for *Access controls*, **Block**. - This example would create a policy that only allows access to Microsoft Azure Management from devices that are either hybrid Azure AD joined or devices marked as compliant.
active-directory Troubleshoot Conditional Access What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access-what-if.md
Previously updated : 08/07/2020 Last updated : 03/04/2022
The [What If tool](what-if-tool.md) in Conditional Access is powerful when trying to understand why a policy was or wasn't applied to a user in a specific circumstance or if a policy would apply in a known state.
-The What If tool is located in the **Azure portal** > **Azure Active Directory** > **Conditional Access** > **What If**.
+The What If tool is located in the **Azure portal** > **Azure Active Directory** > **Security** > **Conditional Access** > **What If**.
![Conditional Access What If tool at default state](./media/troubleshoot-conditional-access-what-if/conditional-access-what-if-tool.png)
This test could be expanded to incorporate other data points to narrow the scope
* [What is Conditional Access?](overview.md) * [What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md) * [What is a device identity?](../devices/overview.md)
-* [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md)
+* [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md)
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
Previously updated : 02/23/2022 Last updated : 03/04/2022
Create a location based Conditional Access policy that applies to service princi
### Create a risk-based Conditional Access policy
-Use this sample JSON for a risk-based policy using the [Microsoft Graph beta endpoint](/graph/api/resources/conditionalaccesspolicy?view=graph-rest-1.0&preserve-view=true).
+Create a risk-based Conditional Access policy that applies to service principals.
-> [!NOTE]
-> Report-only mode doesn't report account risk on a risky workload identity.
-```json
-{
-"displayName": "Name",
-"state": "enabled OR disabled",
-"conditions": {
-"applications": {
-"includeApplications": [
-"All"
-],
-"excludeApplications": [],
-"includeUserActions": [],
-"includeAuthenticationContextClassReferences": [],
-"applicationFilter": null
-},
-"userRiskLevels": [],
-"signInRiskLevels": [],
-"clientApplications": {
-"includeServicePrincipals": [
-"ServicePrincipalsInMyTenant"
-],
-"excludeServicePrincipals": []
-},
-"servicePrincipalRiskLevels": [
-"low",
-"medium",
-"high"
-]
-},
-"grantControls": {
-"operator": "and",
-"builtInControls": [
-"block"
-],
-"customAuthenticationFactors": [],
-"termsOfUse": []
-}
-}
-```
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **What does this policy apply to?**, select **Workload identities (Preview)**.
+ 1. Under **Include**, choose **Select service principals**, and select the appropriate service principals from the list.
+1. Under **Cloud apps or actions**, select **All cloud apps**. The policy will apply only when a service principal requests a token.
+1. Under **Conditions** > **Service principal risk (Preview)**
+ 1. Set the **Configure** toggle to **Yes**.
+ 1. Select the levels of risk where you want this policy to trigger.
+ 1. Select **Done**.
+1. Under **Grant**, **Block access** is the only available option. Access is blocked when the selected risk levels are detected for the service principal.
+1. Your policy can be saved in **Report-only** mode, which allows administrators to estimate its effects, or enforced by turning the policy **On**.
+1. Select **Create** to complete your policy.
## Roll back
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Previously updated : 01/04/2022 Last updated : 03/04/2022
There are certain sets of claims that define how and when they're used in tokens
| Basic claim set | Includes the claims that are emitted by default for tokens (in addition to the core claim set). You can [omit or modify basic claims](active-directory-claims-mapping.md#omit-the-basic-claims-from-tokens) by using the claims mapping policies. |
| Restricted claim set | Can't be modified using policy. The data source cannot be changed, and no transformation is applied when generating these claims. |
+This section lists:
+- [Table 1: JSON Web Token (JWT) restricted claim set](#table-1-json-web-token-jwt-restricted-claim-set)
+- [Table 2: SAML restricted claim set](#table-2-saml-restricted-claim-set)
+ ### Table 1: JSON Web Token (JWT) restricted claim set > [!NOTE]
There are certain sets of claims that define how and when they're used in tokens
### Table 2: SAML restricted claim set
+The following table lists the SAML claims that are by default in the restricted claim set.
+
| Claim type (URI) |
| -- |
|`http://schemas.microsoft.com/2012/01/devicecontext/claims/ismanaged`|
There are certain sets of claims that define how and when they're used in tokens
|`http://schemas.microsoft.com/ws/2008/06/identity/claims/role`|
|`http://schemas.microsoft.com/ws/2008/06/identity/claims/wids`|
|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier`|
+| `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname` |
+| `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid` |
+| `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarygroupsid` |
+| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/sid` |
+| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/x500distinguishedname` |
+| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` |
+| `http://schemas.microsoft.com/ws/2008/06/identity/claims/role` |
+
+These claims are restricted by default, but are not restricted if you [set the AcceptMappedClaims property](active-directory-claims-mapping.md#update-the-application-manifest) to `true` in your app manifest *or* have a [custom signing key](active-directory-claims-mapping.md#configure-a-custom-signing-key):
+
+- `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname`
+- `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid`
+- `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarygroupsid`
+- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/sid`
+- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/x500distinguishedname`
+
+These claims are restricted by default, but are not restricted if you have a [custom signing key](active-directory-claims-mapping.md#configure-a-custom-signing-key):
+
+ - `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn`
+ - `http://schemas.microsoft.com/ws/2008/06/identity/claims/role`
+
## Claims mapping policy properties To control what claims are emitted and where the data comes from, use the properties of a claims mapping policy. If a policy is not set, the system issues tokens that include the core claim set, the basic claim set, and any [optional claims](active-directory-optional-claims.md) that the application has chosen to receive.
The ID element identifies which property on the source provides the value for th
| User | employeeid | Employee ID |
| User | facsimiletelephonenumber | Facsimile Telephone Number |
| User | assignedroles | list of App roles assigned to user |
+| User | accountEnabled | Account Enabled |
+| User | consentprovidedforminor | Consent Provided For Minor |
+| User | createddatetime | Created Date/Time|
+| User | creationtype | Creation Type |
+| User | lastpasswordchangedatetime | Last Password Change Date/Time |
+| User | mobilephone | Mobile Phone |
+| User | officelocation | Office Location |
+| User | onpremisesdomainname | On-Premises Domain Name |
+| User | onpremisesimmutableid | On-Premises Immutable ID |
+| User | onpremisessyncenabled | On-Premises Sync Enabled |
+| User | preferreddatalocation | Preferred Data Location |
+| User | proxyaddresses | Proxy Addresses |
+| User | usertype | User Type |
| application, resource, audience | displayname | Display Name |
| application, resource, audience | objectid | ObjectID |
| application, resource, audience | tags | Service Principal Tag |
active-directory Concept Azure Ad Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-azure-ad-register.md
Another user wants to access their organizational email on their personal Androi
- [Manage device identities using the Azure portal](device-management-azure-portal.md) - [Manage stale devices in Azure AD](manage-stale-devices.md)
+- [Register your personal device on your work or school network](https://support.microsoft.com/account-billing/register-your-personal-device-on-your-work-or-school-network-8803dd61-a613-45e3-ae6c-bd1ab25bf8a8)
active-directory Enterprise State Roaming Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-enable.md
# Enable Enterprise State Roaming in Azure Active Directory
-Enterprise State Roaming is available to any organization with an Azure AD Premium or Enterprise Mobility + Security (EMS) license. For more information on how to get an Azure AD subscription, see the [Azure AD product page](https://azure.microsoft.com/services/active-directory).
+Enterprise State Roaming provides users with a unified experience across their Windows devices and reduces the time needed for configuring a new device. Enterprise State Roaming operates similar to the standard [consumer settings sync](https://go.microsoft.com/fwlink/?linkid=2015135) that was first introduced in Windows 8. Enterprise State Roaming is available to any organization with an Azure AD Premium or Enterprise Mobility + Security (EMS) license. For more information on how to get an Azure AD subscription, see the [Azure AD product page](https://azure.microsoft.com/services/active-directory).
When you enable Enterprise State Roaming, your organization is automatically granted a free, limited-use license for Azure Rights Management protection from Azure Information Protection. This free subscription is limited to encrypting and decrypting enterprise settings and application data synced by Enterprise State Roaming. You must have [a paid subscription](https://azure.microsoft.com/services/information-protection/) to use the full capabilities of the Azure Rights Management service.
active-directory Enterprise State Roaming Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-overview.md
- Title: What is enterprise state roaming in Azure Active Directory?
-description: Enterprise State Roaming provides users with a unified experience across their Windows devices
----- Previously updated : 02/15/2022--------
-# What is enterprise state roaming?
-
-With Windows 10 or newer, [Azure Active Directory (Azure AD)](../fundamentals/active-directory-whatis.md) users gain the ability to securely synchronize their user settings and application settings data to the cloud. Enterprise State Roaming provides users with a unified experience across their Windows devices and reduces the time needed for configuring a new device. Enterprise State Roaming operates similar to the standard [consumer settings sync](https://go.microsoft.com/fwlink/?linkid=2015135) that was first introduced in Windows 8. Additionally, Enterprise State Roaming offers:
-- **Separation of corporate and consumer data** – Organizations are in control of their data, and there is no mixing of corporate data in a consumer cloud account or consumer data in an enterprise cloud account.
-- **Enhanced security** – Data is automatically encrypted before leaving the user's Windows 10 or newer device by using Azure Rights Management (Azure RMS), and data stays encrypted at rest in the cloud. All content stays encrypted at rest in the cloud, except for the namespaces, like settings names and Windows app names.
-- **Better management and monitoring** – Provides control and visibility over who syncs settings in your organization and on which devices through the Azure AD portal integration.
-
-| Article | Description |
-| | |
-| [Enable Enterprise State Roaming in Azure Active Directory](enterprise-state-roaming-enable.md) | Enterprise State Roaming is available to any organization with a Premium Azure Active Directory (Azure AD) subscription. |
-| [Settings and data roaming FAQ](enterprise-state-roaming-faqs.yml) | This article answers some questions IT administrators might have about settings and app data sync. |
-| [Group policy and MDM settings for settings sync](enterprise-state-roaming-group-policy-settings.md) | Windows 10 or newer provides Group Policy and mobile device management (MDM) policy settings to limit settings sync. |
-| [Windows 10 roaming settings reference](enterprise-state-roaming-windows-settings-reference.md) | A list of settings that will be roamed and/or backed-up in Windows 10 or newer. |
-| [Troubleshooting](enterprise-state-roaming-troubleshooting.md) | This article goes through some basic steps for troubleshooting, and contains a list of known issues. |
-
-## Next steps
-
-For information about enabling enterprise state roaming, see [enable enterprise state roaming](enterprise-state-roaming-enable.md).
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
Previously updated : 02/18/2022 Last updated : 03/04/2022 zone_pivot_groups: identity-mi-methods
zone_pivot_groups: identity-mi-methods
# Manage user-assigned managed identities -- Managed identities for Azure resources eliminate the need to manage credentials in code. You can use them to get an Azure Active Directory (Azure AD) token for your applications. The applications can use the token when accessing resources that support Azure AD authentication. Azure manages the identity so you don't have to. There are two types of managed identities: system-assigned and user-assigned. System-assigned managed identities have their lifecycle tied to the resource that created them. User-assigned managed identities can be used on multiple resources. To learn more about managed identities, see [What are managed identities for Azure resources?](overview.md).
Deleting a user-assigned identity doesn't remove it from the VM or resource it w
:::image type="content" source="media/how-manage-user-assigned-managed-identities/delete-user-assigned-managed-identity-portal.png" alt-text="Screenshot that shows the Delete user-assigned managed identities.":::
-## Assign a role to a user-assigned managed identity
+## Manage access to user-assigned managed identities
-To assign a role to a user-assigned managed identity, your account needs the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role assignment.
+In some environments, administrators choose to limit who can manage user-assigned managed identities. You can do this by using [built-in](../../role-based-access-control/built-in-roles.md#identity) Azure RBAC roles to grant a user or group in your organization rights over a user-assigned managed identity (a command-line sketch follows the steps below).
1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search box, enter **Managed Identities**. Under **Services**, select **Managed Identities**.
-1. A list of the user-assigned managed identities for your subscription is returned. Select the user-assigned managed identity that you want to assign a role.
+1. A list of the user-assigned managed identities for your subscription is returned. Select the user-assigned managed identity that you want to manage.
1. Select **Azure role assignments**, and then select **Add role assignment**. 1. In the **Add role assignment** pane, configure the following values, and then select **Save**: - **Role**: The role to assign.
To assign a role to a user-assigned managed identity, your account needs the [Us
![Screenshot that shows the user-assigned managed identity IAM.](media/how-manage-user-assigned-managed-identities/assign-role-screenshot-02.png)
+>[!NOTE]
+>You can find information on assigning roles to managed identities in [Assign a managed identity access to a resource by using the Azure portal](../../role-based-access-control/role-assignments-portal-managed-identity.md)
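For a command-line alternative to the portal steps above, the following is a minimal sketch, assuming you want to grant a user the built-in **Managed Identity Contributor** role scoped to a single identity; the object ID and resource ID values are placeholders.

```azurecli
# Grant a user (or group) the built-in Managed Identity Contributor role,
# scoped to one user-assigned managed identity. All IDs and names are placeholders.
az role assignment create \
  --assignee "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \
  --role "Managed Identity Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
```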
::: zone-end
active-directory Admin Units Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-manage.md
Previously updated : 11/18/2021 Last updated : 03/03/2022
You can create a new administrative unit by using either the Azure portal, Power
1. Select **Azure Active Directory** > **Administrative units**.
- ![Screenshot of the "Administrative units" link in Azure AD.](./media/admin-units-manage/nav-to-admin-units.png)
+ ![Screenshot of the Administrative units page in Azure AD.](./media/admin-units-manage/nav-to-admin-units.png)
-1. Select the **Add** button at the upper part of the pane, and then, in the **Name** box, enter the name of the administrative unit. Optionally, add a description of the administrative unit.
+1. Select **Add**.
- ![Screenshot showing the Add button and the Name box for entering the name of the administrative unit.](./media/admin-units-manage/add-new-admin-unit.png)
+1. In the **Name** box, enter the name of the administrative unit. Optionally, add a description of the administrative unit.
-1. Select the blue **Add** button to finalize the administrative unit.
+ ![Screenshot showing the Add administrative unit page and the Name box for entering the name of the administrative unit.](./media/admin-units-manage/add-new-admin-unit.png)
+
+1. Optionally, on the **Assign roles** tab, select a role and then select the users to assign the role to with this administrative unit scope.
+
+ ![Screenshot showing the Add assignments pane to add role assignments with this administrative unit scope.](./media/admin-units-manage/assign-roles-admin-unit.png)
+
+1. On the **Review + create** tab, review the administrative unit and any role assignments.
+
+1. Select the **Create** button.
### PowerShell
Body
## Delete an administrative unit
-In Azure AD, you can delete an administrative unit that you no longer need as a unit of scope for administrative roles.
+In Azure AD, you can delete an administrative unit that you no longer need as a unit of scope for administrative roles. Before you delete the administrative unit, you should remove any role assignments with that administrative unit scope.
### Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
+1. Select **Azure Active Directory** > **Administrative units** and then select the administrative unit you want to delete.
+
+1. Select **Roles and administrators**, and then open a role to view the role assignments.
+
+1. Remove all the role assignments with the administrative unit scope.
+ 1. Select **Azure Active Directory** > **Administrative units**.
-
-1. Select the administrative unit to be deleted, and then select **Delete**.
-1. To confirm that you want to delete the administrative unit, select **Yes**. The administrative unit is deleted.
+1. Add a check mark next to the administrative unit you want to delete.
+
+1. Select **Delete**.
![Screenshot of the administrative unit Delete button and confirmation window.](./media/admin-units-manage/select-admin-unit-to-delete.png)
+1. To confirm that you want to delete the administrative unit, select **Yes**.
+ ### PowerShell Use the [Remove-AzureADMSAdministrativeUnit](/powershell/module/azuread/remove-azureadmsadministrativeunit) command to delete an administrative unit.
active-directory Amazon Web Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-web-service-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with AWS Single-Account Access | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with AWS Single-Account Access'
description: Learn how to configure single sign-on between Azure Active Directory and AWS Single-Account Access.
Previously updated : 03/05/2021 Last updated : 02/28/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with AWS Single-Account Access
+# Tutorial: Azure AD SSO integration with AWS Single-Account Access
In this tutorial, you'll learn how to integrate AWS Single-Account Access with Azure Active Directory (Azure AD). When you integrate AWS Single-Account Access with Azure AD, you can:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
c. Select **Close**.
+> [!NOTE]
+> AWS requires a set of permissions/limits to configure AWS SSO. For more information on AWS limits, see [this](https://docs.aws.amazon.com/singlesignon/latest/userguide/limits.html) page.
+ ### How to configure role provisioning in AWS Single-Account Access 1. In the Azure AD management portal, in the AWS app, go to **Provisioning**.
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
Title: Rotate certificates in Azure Kubernetes Service (AKS)
description: Learn how to rotate your certificates in an Azure Kubernetes Service (AKS) cluster. Previously updated : 3/3/2022 Last updated : 3/4/2022 # Rotate certificates in Azure Kubernetes Service (AKS)
az vmss run-command invoke -g MC_rg_myAKSCluster_region -n vmss-name --instance-
Azure Kubernetes Service will automatically rotate non-ca certificates on both the control plane and agent nodes before they expire with no downtime for the cluster.
-For AKS to automatically rotate non-CA certificates, the cluster must have [TLS Bootstrapping](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/).
+For AKS to automatically rotate non-CA certificates, the cluster must have [TLS Bootstrapping](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/), which is enabled by default in all Azure regions.
#### How to check whether the current agent node pool is TLS Bootstrapping enabled?
To verify whether TLS Bootstrapping is enabled on your cluster, browse to the following paths. On a Linux node: `/var/lib/kubelet/bootstrap-kubeconfig`; on a Windows node, it's `c:\k\bootstrap-config`.
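If you'd rather check from a shell, a minimal sketch that reuses the `az vmss run-command invoke` approach shown earlier in this article could look like the following; the resource group, scale set name, and instance ID are placeholders.

```azurecli
# Print the bootstrap kubeconfig on a Linux node to confirm TLS Bootstrapping.
# Resource group, scale set name, and instance ID are placeholders.
az vmss run-command invoke \
  --resource-group MC_rg_myAKSCluster_region \
  --name <vmss-name> \
  --instance-id 0 \
  --command-id RunShellScript \
  --scripts "cat /var/lib/kubelet/bootstrap-kubeconfig"
```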
analysis-services Analysis Services Create Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-bicep-file.md
+
+ Title: Quickstart - Create an Azure Analysis Services server resource by using Bicep
+description: Quickstart showing how to create an Azure Analysis Services server resource by using a Bicep file.
Last updated : 03/04/2022++++
+tags: azure-resource-manager, bicep
+
+#Customer intent: As a BI developer who is new to Azure, I want to use Azure Analysis Services to store and manage my organization's data models.
++
+# Quickstart: Create a server - Bicep
+
+This quickstart describes how to create an Analysis Services server resource in your Azure subscription by using [Bicep](../azure-resource-manager/bicep/overview.md).
++
+## Prerequisites
+
+* **Azure subscription**: Visit [Azure Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/) to create an account.
+* **Azure Active Directory**: Your subscription must be associated with an Azure Active Directory tenant. And, you need to be signed in to Azure with an account in that Azure Active Directory. To learn more, see [Authentication and user permissions](analysis-services-manage-users.md).
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure quickstart templates](https://azure.microsoft.com/resources/templates/analysis-services-create/).
++
+A single [Microsoft.AnalysisServices/servers](/azure/templates/microsoft.analysisservices/servers) resource with a firewall rule is defined in the Bicep file.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters serverName=<analysis-service-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -serverName "<analysis-service-name>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<analysis-service-name\>** with a unique analysis service name.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal or Azure PowerShell to verify that the resource group and server resource were created.
+
+```azurepowershell-interactive
+Get-AzAnalysisServicesServer -Name <analysis-service-name>
+```
+
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and the server resource.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
+Remove-AzResourceGroup -Name $resourceGroupName
+Write-Host "Press [ENTER] to continue..."
+```
+++
+## Next steps
+
+In this quickstart, you used a Bicep file to create a new resource group and an Azure Analysis Services server resource. After you've created a server resource by using the template, consider the following:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Configure server firewall - Portal](analysis-services-qs-firewall.md)
api-management Websocket Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/websocket-api.md
Below are the current restrictions of WebSocket support in API Management:
* WebSocket APIs are not supported yet in the [self-hosted gateway](./self-hosted-gateway-overview.md). * Azure CLI, PowerShell, and SDK currently do not support management operations of WebSocket APIs. * 200 active connections limit per unit.
+* WebSocket APIs support the following valid buffer types for messages: Close, BinaryFragment, BinaryMessage, UTF8Fragment, and UTF8Message.
### Unsupported policies
app-service Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/private-endpoint.md
description: Connect privately to a Web App using Azure Private Endpoint
ms.assetid: 2dceac28-1ba6-4904-a15d-9e91d5ee162c Previously updated : 02/17/2022 Last updated : 03/04/2022
You can also deploy the Private Endpoint in a different region than the Web App.
From a security perspective: -- When you enable Private Endpoints to your Web App, you disable all public access.
+- By default, when you enable Private Endpoints to your Web App, you disable all public access.
- You can enable multiple Private Endpoints in others VNets and Subnets, including VNets in other regions. - The IP address of the Private Endpoint NIC must be dynamic, but will remain the same until you delete the Private Endpoint. - The NIC of the Private Endpoint can't have an NSG associated. - The Subnet that hosts the Private Endpoint can have an NSG associated, but you must disable the network policies enforcement for the Private Endpoint: see [Disable network policies for private endpoints][disablesecuritype]. As a result, you can't filter by any NSG the access to your Private Endpoint.-- When you enable Private Endpoint to your Web App, the [access restrictions][accessrestrictions] configuration of the Web App isn't evaluated.
+- By default, when you enable Private Endpoint to your Web App, the [access restrictions][accessrestrictions] configuration of the Web App isn't evaluated.
- You can eliminate the data exfiltration risk from the VNet by removing all NSG rules where destination is tag Internet or Azure services. When you deploy a Private Endpoint for a Web App, you can only reach this specific Web App through the Private Endpoint. If you have another Web App, you must deploy another dedicated Private Endpoint for this other Web App. In the Web HTTP logs of your Web App, you'll find the client source IP. This feature is implemented using the TCP Proxy protocol, forwarding the client IP property up to the Web App. For more information, see [Getting connection Information using TCP Proxy v2][tcpproxy].
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
Title: Integrate your app with an Azure virtual network
description: Integrate your app in Azure App Service with Azure virtual networks. Previously updated : 01/26/2022 Last updated : 03/04/2022 # Integrate your app with an Azure virtual network
-This article describes the Azure App Service VNet integration feature and how to set it up with apps in [App Service](./overview.md). With [Azure virtual networks](../virtual-network/virtual-networks-overview.md) (VNets), you can place many of your Azure resources in a non-internet-routable network. The App Service VNet integration feature enables your apps to access resources in or through a virtual network. Virtual network integration doesn't enable your apps to be accessed privately.
+This article describes the Azure App Service virtual network integration feature and how to set it up with apps in [App Service](./overview.md). With [Azure virtual networks](../virtual-network/virtual-networks-overview.md), you can place many of your Azure resources in a non-internet-routable network. The App Service virtual network integration feature enables your apps to access resources in or through a virtual network. Virtual network integration doesn't enable your apps to be accessed privately.
App Service has two variations:
After your app integrates with your virtual network, it uses the same DNS server
There are some limitations with using regional virtual network integration:
-* The feature is available from all App Service scale units in Premium v2 and Premium v3. It's also available in Standard but only from newer App Service scale units. If you're on an older scale unit, you can only use the feature from a Premium v2 App Service plan. If you want to make sure you can use the feature in a Standard App Service plan, create your app in a Premium v3 App Service plan. Those plans are only supported on our newest scale units. You can scale down if you want after the plan is created.
+* The feature is available from all App Service deployments in Premium v2 and Premium v3. It's also available in Standard but only from newer App Service deployments. If you're on an older deployment, you can only use the feature from a Premium v2 App Service plan. If you want to make sure you can use the feature in a Standard App Service plan, create your app in a Premium v3 App Service plan. Those plans are only supported on our newest deployments. You can scale down if you want after the plan is created.
* The feature can't be used by Isolated plan apps that are in an App Service Environment. * You can't reach resources across peering connections with classic virtual networks. * The feature requires an unused subnet that's an IPv4 `/28` block or larger in an Azure Resource Manager virtual network.
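To illustrate that last subnet requirement, here's a minimal sketch of creating a dedicated `/28` subnet; the resource group, virtual network name, and address range are placeholders, and the delegation to `Microsoft.Web/serverFarms` reflects what regional virtual network integration expects.

```azurecli
# Create an unused /28 subnet for regional virtual network integration.
# Resource group, virtual network name, and address range are placeholders.
az network vnet subnet create \
  --resource-group <resource-group> \
  --vnet-name <vnet-name> \
  --name vnet-integration-subnet \
  --address-prefixes 10.0.1.0/28 \
  --delegations Microsoft.Web/serverFarms
```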
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
Previously updated : 01/31/2022 Last updated : 03/04/2022
Define access policies to use the user-assigned managed identity with your Key V
1. Select the Key Vault that contains your certificate. 1. If you're using the permission model **Vault access policy**: Select **Access Policies**, select **+ Add Access Policy**, select **Get** for **Secret permissions**, and choose your user-assigned managed identity for **Select principal**. Then select **Save**.
- If you're using the permission model **Azure role-based access control**: Select **Access control (IAM)** and [Add a role assignment](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#assign-a-role-to-a-user-assigned-managed-identity) for the user-assigned managed identity to the Azure Key Vault for the role **Key Vault Secrets User**.
+ If you're using **Azure role-based access control**, follow the article [Assign a managed identity access to a resource](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) and assign the user-assigned managed identity the **Key Vault Secrets User** role on the Azure Key Vault.
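If you prefer the command line over the portal, a minimal sketch of the equivalent role assignment is shown below; the principal ID and Key Vault resource ID are placeholders.

```azurecli
# Assign the Key Vault Secrets User role to the user-assigned managed identity,
# scoped to the Key Vault. The principal ID and vault resource ID are placeholders.
az role assignment create \
  --assignee-object-id "<identity-principal-id>" \
  --assignee-principal-type ServicePrincipal \
  --role "Key Vault Secrets User" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<key-vault-name>"
```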
### Verify Firewall Permissions to Key Vault
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
To finish preparing the Run As account:
> In case of unrestricted access, a user with VM Contributor rights or having permissions to run commands against the hybrid worker machine can use the Automation Account Run As certificate from the hybrid worker machine, using other sources like Azure cmdlets which could potentially allow a malicious user access as a subscription contributor. This could jeopardize the security of your Azure environment. </br> </br> > We recommend that you divide the tasks within the team and grant the required permissions/access to users as per their job. Do not provide unrestricted permissions to the machine hosting the hybrid runbook worker role.
+## Start a runbook on a Hybrid Runbook Worker
+
+[Start a runbook in Azure Automation](start-runbooks.md) describes different methods for starting a runbook. Starting a runbook on a Hybrid Runbook Worker uses a **Run on** option that allows you to specify the name of a Hybrid Runbook Worker group. When a group is specified, one of the workers in that group retrieves and runs the runbook. If your runbook does not specify this option, Azure Automation runs the runbook as usual.
+
+When you start a runbook in the Azure portal, you're presented with the **Run on** option, for which you can select **Azure** or **Hybrid Worker**. Select **Hybrid Worker** to choose the Hybrid Runbook Worker group from a dropdown.
+++
+When starting a runbook using PowerShell, use the `RunOn` parameter with the [Start-AzAutomationRunbook](/powershell/module/Az.Automation/Start-AzAutomationRunbook) cmdlet. The following example uses Windows PowerShell to start a runbook named **Test-Runbook** on a Hybrid Runbook Worker group named MyHybridGroup.
+
+```azurepowershell-interactive
+Start-AzAutomationRunbook -AutomationAccountName "MyAutomationAccount" -Name "Test-Runbook" -RunOn "MyHybridGroup"
+```
## Work with signed runbooks on a Windows Hybrid Runbook Worker
The signed runbook is called **\<runbook name>.asc**.
You can now upload the signed runbook to Azure Automation and execute it like a regular runbook.
-## Start a runbook on a Hybrid Runbook Worker
-
-[Start a runbook in Azure Automation](start-runbooks.md) describes different methods for starting a runbook. Starting a runbook on a Hybrid Runbook Worker uses a **Run on** option that allows you to specify the name of a Hybrid Runbook Worker group. When a group is specified, one of the workers in that group retrieves and runs the runbook. If your runbook does not specify this option, Azure Automation runs the runbook as usual.
-
-When you start a runbook in the Azure portal, you're presented with the **Run on** option for which you can select **Azure** or **Hybrid Worker**. If you select **Hybrid Worker**, you can choose the Hybrid Runbook Worker group from a dropdown.
-When starting a runbook using PowerShell, use the `RunOn` parameter with the [Start-AzAutomationRunbook](/powershell/module/Az.Automation/Start-AzAutomationRunbook) cmdlet. The following example uses Windows PowerShell to start a runbook named **Test-Runbook** on a Hybrid Runbook Worker group named MyHybridGroup.
-
-```azurepowershell-interactive
-Start-AzAutomationRunbook -AutomationAccountName "MyAutomationAccount" -Name "Test-Runbook" -RunOn "MyHybridGroup"
-```
## Logging To help troubleshoot issues with your runbooks running on a hybrid runbook worker, logs are stored locally in the following location:
azure-arc Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/point-in-time-restore.md
Title: Restore a database in Azure Arc-enabled SQL Managed Instance to a previous point in time
-description: Explains how to restore a database to a specific point in time on Azure Arc-enabled SQL Managed Instance.
+ Title: Restore a database in Azure Arc-enabled SQL Managed Instance to a previous point-in-time
+description: Explains how to restore a database to a specific point-in-time on Azure Arc-enabled SQL Managed Instance.
Previously updated : 11/03/2021 Last updated : 03/01/2022
-# Perform a point in time Restore
+# Perform a point-in-time Restore
Use the point-in-time restore (PITR) to create a database as a copy of another database from some time in the past that is within the retention period. This article describes how to do a point-in-time restore of a database in Azure Arc-enabled SQL managed instance.
Currently, point-in-time restore can restore a database:
Azure Arc-enabled SQL managed instance has built-in automatic backups feature enabled. Whenever you create or restore a new database, Azure Arc-enabled SQL managed instance initiates a full backup immediately and schedules differential and transaction log backups automatically. SQL managed instance stores these backups in the storage class specified during the deployment.
-Point-in-time restore enables a database to be restored to a specific point in time, within the retention period. To restore a database to a specific point in time, Azure Arc-enabled data services applies the backup files in a specific order. For example:
+Point-in-time restore enables a database to be restored to a specific point-in-time, within the retention period. To restore a database to a specific point-in-time, Azure Arc-enabled data services applies the backup files in a specific order. For example:
1. Full backup 2. Differential backup 3. One or more transaction log backups Currently, full backups are taken once a week, differential backups are taken every 12 hours and transaction log backups every 5 minutes.
Currently, full backups are taken once a week, differential backups are taken ev
The default retention period for a new Azure Arc-enabled SQL managed instance is seven days, and can be adjusted with values of 0, or 1-35 days. The retention period can be set during deployment of the SQL managed instance by specifying the `--retention-days` property. Backup files older than the configured retention period are automatically deleted.
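As a sketch of setting the retention period at deployment time, assuming `--retention-days` is accepted by `az sql mi-arc create` as described above (the instance name and namespace are placeholders, and other required create parameters are omitted):

```azurecli
# Deploy an Arc-enabled SQL managed instance with a 14-day backup retention period.
# Instance name and namespace are placeholders; --retention-days at create time is
# assumed from the description above.
az sql mi-arc create \
  --name sqlmi1 \
  --k8s-namespace arc \
  --use-k8s \
  --retention-days 14
```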
-## Create a database from a point in time using az
+## Create a database from a point-in-time using az CLI
```azurecli az sql midb-arc restore --managed-instance <SQL managed instance> --name <source DB name> --dest-name <Name for new db> --k8s-namespace <namespace of managed instance> --time "YYYY-MM-DDTHH:MM:SSZ" --use-k8s
az sql midb-arc restore --managed-instance <SQL managed instance> --name <source
az sql midb-arc restore --managed-instance sqlmi1 --name Testdb1 --dest-name mynewdb --k8s-namespace arc --time "2021-10-29T01:42:14.00Z" --use-k8s --dry-run ```
+## Create a database from a point-in-time using kubectl
-## Create a database from a point in time using Azure Data Studio
+1. To perform a point-in-time restore with Kubernetes native tools, you can use `kubectl`. Create a task spec yaml file. For example:
-You can also restore a database to a point in time from Azure Data Studio as follows:
+ ```yaml
+ apiVersion: tasks.sql.arcdata.microsoft.com/v1
+ kind: SqlManagedInstanceRestoreTask
+ metadata:
+ name: myrestoretask20220304
+ namespace: test
+ spec:
+ source:
+ name: miarc1
+ database: testdb
+ restorePoint: "2021-10-12T18:35:33Z"
+ destination:
+ name: miarc1
+ database: testdb-pitr
+ dryRun: false
+ ```
+
+1. Edit the properties as follows:
+
+ 1. `name:` Unique string for each custom resource (CR). Required by Kubernetes.
+ 1. `namespace:` Kubernetes namespace where the Azure Arc-enabled SQL managed instance is.
+ 1. `source: ... name:` Name of the source instance.
+ 1. `source: ... database:` Name of source database where the restore would be applied from.
+ 1. `restorePoint:` Point-in-time for the restore operation in UTC datetime.
+    1. `destination: ... name:` Name of the destination Arc-enabled SQL managed instance. Currently, point-in-time restore is only supported within the same Arc-enabled SQL managed instance, so this should be the same as the source SQL managed instance.
+ 1. `destination: ... database:` Name of the new database where the restore would be applied to.
+
+1. Create a task to start the point-in-time restore. The following example initiates the task defined in `myrestoretask20220304.yaml`.
++
+ ```console
+ kubectl apply -f myrestoretask20220304.yaml
+ ```
+
+1. Check restore task status as follows:
+
+ ```console
+ kubectl get sqlmirestoretask -n <namespace>
+ ```
+
+Restore task status will be updated about every 10 seconds based on the PITR progress. The status progresses from `Waiting` to `Restoring` to `Completed` or `Failed`.
+
+## Create a database from a point-in-time using Azure Data Studio
+
+You can also restore a database to a point-in-time from Azure Data Studio as follows:
1. Launch Azure Data studio 2. Ensure you have the required Arc extensions as described in [Tools](install-client-tools.md). 3. Connect to the Azure Arc data controller
az sql mi-arc edit --name sqlmi --k8s-namespace arc --use-k8s --retention-days
You can disable the automated backups for a specific instance of Azure Arc-enabled SQL managed instance by setting the `--retention-days` property to 0, as follows. > [!WARNING]
-> If you disable Automatic Backups for an Azure Arc-enabled SQL managed instance, then any Automatic Backups configured will be deleted and you lose the ability to do a point in time restore. You can change the `retention-days` property to re-initiate automatic backups if needed.
+> If you disable Automatic Backups for an Azure Arc-enabled SQL managed instance, then any Automatic Backups configured will be deleted and you lose the ability to do a point-in-time restore. You can change the `retention-days` property to re-initiate automatic backups if needed.
### Disable Automatic backups for **Direct** connected SQL managed instance
azure-cache-for-redis Cache Best Practices Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-development.md
While you can connect from outside of Azure, it is not recommended *especially w
Azure Cache for Redis requires TLS encrypted communications by default. TLS versions 1.0, 1.1 and 1.2 are currently supported. However, TLS 1.0 and 1.1 are on a path to deprecation industry-wide, so use TLS 1.2 if at all possible.
-If your client library or tool doesn't support TLS, then enabling unencrypted connections is possible through the [Azure portal](cache-configure.md#access-ports) or [management APIs](/rest/api/redis/2021-06-01/redis/update). In cases where encrypted connections aren't possible, we recommend placing your cache and client application into a virtual network. For more information about which ports are used in the virtual network cache scenario, see this [table](cache-how-to-premium-vnet.md#outbound-port-requirements).
+If your client library or tool doesn't support TLS, then enabling unencrypted connections is possible through the [Azure portal](cache-configure.md#access-ports) or [management APIs](/rest/api/redis/redis/update). In cases where encrypted connections aren't possible, we recommend placing your cache and client application into a virtual network. For more information about which ports are used in the virtual network cache scenario, see this [table](cache-how-to-premium-vnet.md#outbound-port-requirements).
### Azure TLS Certificate Change
azure-functions Bring Dependency To Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/bring-dependency-to-functions.md
First, you need to create an Azure Storage Account. In the account, you also nee
After you created the storage account and file share, use the [az webapp config storage-account add](/cli/azure/webapp/config/storage-account#az_webapp_config_storage_account_add) command to attach the file share to your functions app, as shown in the following example.
-```console
+```azurecli
az webapp config storage-account add \
    --name < Function-App-Name > \
    --resource-group < Resource-Group > \
azure-functions Functions Identity Based Connections Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-identity-based-connections-tutorial.md
Next you will update your function app to use its system-assigned identity when
> [!IMPORTANT] > The `AzureWebJobsStorage` configuration is used by some triggers and bindings, and those extensions must be able to use identity-based connections, too. Apps that use blob triggers or event hub triggers may need to update those extensions. Because no functions have been defined for this app, there isn't a concern yet. To learn more about this requirement, see [Connecting to host storage with an identity (Preview)](./functions-reference.md#connecting-to-host-storage-with-an-identity-preview). >
-> Similarly, `AzureWebJobsStorage` is used for deployment artifacts when using server-side build in Linux Consumption. When you enable identity-based connections for `AzureWebJobsStorage` in Linux Consmption, you will need to deploy via [an external deployment package](run-functions-from-deployment-package.md).
+> Similarly, `AzureWebJobsStorage` is used for deployment artifacts when using server-side build in Linux Consumption. When you enable identity-based connections for `AzureWebJobsStorage` in Linux Consumption, you will need to deploy via [an external deployment package](run-functions-from-deployment-package.md).
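If you prefer scripting this change instead of the portal steps that follow, a rough CLI equivalent might look like the sketch below. The `AzureWebJobsStorage__accountName` setting name follows the identity-based connection convention; the app, resource group, and storage account names are placeholders.

```azurecli
# Remove the connection-string setting and point the host at the storage account by name,
# so it authenticates with the app's system-assigned identity instead of a key.
az functionapp config appsettings delete --name <APP_NAME> --resource-group <RESOURCE_GROUP> --setting-names AzureWebJobsStorage
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings AzureWebJobsStorage__accountName=<STORAGE_ACCOUNT_NAME>
```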
1. In the [Azure portal](https://portal.azure.com), navigate to your function app.
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
For example, a JavaScript function app is constrained by the default memory limi
And for plans with more than 4GB memory, ensure the Bitness Platform Setting is set to `64 Bit` under [General Settings](../app-service/configure-common.md#configure-general-settings).
-## Region Max Scale Out
+## Region max scale out
Below are the currently supported maximum scale-out values for a single plan in each region and OS configuration.
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
For Windows function apps, target the version in Azure by setting the `WEBSITE_N
For Linux function apps, run the following Azure CLI command to update the Node version.
-```bash
+```azurecli
az functionapp config set --linux-fx-version "node|14" --name "<MY_APP_NAME>" --resource-group "<MY_RESOURCE_GROUP_NAME>"
```
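For the Windows case mentioned above, the app setting involved is `WEBSITE_NODE_DEFAULT_VERSION`; a hedged CLI sketch (app and resource group names are placeholders) might be:

```azurecli
# Pin a Windows function app to Node.js 14 via the app setting referenced above
az functionapp config appsettings set --settings WEBSITE_NODE_DEFAULT_VERSION=~14 --name "<MY_APP_NAME>" --resource-group "<MY_RESOURCE_GROUP_NAME>"
```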
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md
Use an `HttpResponseContext` object to return a response, as shown in the follow
}, { "type": "http",
- "direction": "out"
+ "direction": "out",
+ "name": "Response"
} ] }
param($req, $TriggerMetadata)
$name = $req.Query.Name
-Push-OutputBinding -Name res -Value ([HttpResponseContext]@{
+Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [System.Net.HttpStatusCode]::OK
    Body = "Hello $name!"
})
azure-functions Functions Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scale.md
# Azure Functions hosting options
-When you create a function app in Azure, you must choose a hosting plan for your app. There are three basic hosting plans available for Azure Functions: [Consumption plan](consumption-plan.md), [Premium plan](functions-premium-plan.md), and [Dedicated (App Service) plan](dedicated-plan.md). All hosting plans are generally available (GA) on both Linux and Windows virtual machines.
+When you create a function app in Azure, you must choose a hosting plan for your app. There are three basic hosting plans available for Azure Functions: [Consumption plan], [Premium plan], and [Dedicated (App Service) plan][Dedicated plan]. All hosting plans are generally available (GA) on both Linux and Windows virtual machines.
The hosting plan you choose dictates the following behaviors:
The following is a summary of the benefits of the three main hosting plans for F
| Plan | Benefits |
| --- | --- |
-|**[Consumption plan](consumption-plan.md)**| Scale automatically and only pay for compute resources when your functions are running.<br/><br/>On the Consumption plan, instances of the Functions host are dynamically added and removed based on the number of incoming events.<br/><br/> Γ£ö Default hosting plan.<br/>Γ£ö Pay only when your functions are running.<br/>Γ£ö Scales automatically, even during periods of high load.|
-|**[Premium plan](functions-premium-plan.md)**|Automatically scales based on demand using pre-warmed workers which run applications with no delay after being idle, runs on more powerful instances, and connects to virtual networks. <br/><br/>Consider the Azure Functions Premium plan in the following situations: <br/><br/>Γ£ö Your function apps run continuously, or nearly continuously.<br/>Γ£ö You have a high number of small executions and a high execution bill, but low GB seconds in the Consumption plan.<br/>Γ£ö You need more CPU or memory options than what is provided by the Consumption plan.<br/>Γ£ö Your code needs to run longer than the maximum execution time allowed on the Consumption plan.<br/>Γ£ö You require features that aren't available on the Consumption plan, such as virtual network connectivity.<br/>Γ£ö You want to provide a custom Linux image on which to run your functions. |
-|**[Dedicated plan](dedicated-plan.md)** |Run your functions within an App Service plan at regular [App Service plan rates](https://azure.microsoft.com/pricing/details/app-service/windows/).<br/><br/>Best for long-running scenarios where [Durable Functions](durable/durable-functions-overview.md) can't be used. Consider an App Service plan in the following situations:<br/><br/>Γ£ö You have existing, underutilized VMs that are already running other App Service instances.<br/>Γ£ö Predictive scaling and costs are required.|
+|**[Consumption plan]**| Scale automatically and only pay for compute resources when your functions are running.<br/><br/>On the Consumption plan, instances of the Functions host are dynamically added and removed based on the number of incoming events.<br/><br/> ✔ Default hosting plan.<br/>✔ Pay only when your functions are running.<br/>✔ Scales automatically, even during periods of high load.|
+|**[Premium plan]**|Automatically scales based on demand using pre-warmed workers which run applications with no delay after being idle, runs on more powerful instances, and connects to virtual networks. <br/><br/>Consider the Azure Functions Premium plan in the following situations: <br/><br/>✔ Your function apps run continuously, or nearly continuously.<br/>✔ You have a high number of small executions and a high execution bill, but low GB seconds in the Consumption plan.<br/>✔ You need more CPU or memory options than what is provided by the Consumption plan.<br/>✔ Your code needs to run longer than the maximum execution time allowed on the Consumption plan.<br/>✔ You require features that aren't available on the Consumption plan, such as virtual network connectivity.<br/>✔ You want to provide a custom Linux image on which to run your functions. |
+|**[Dedicated plan]** |Run your functions within an App Service plan at regular [App Service plan rates](https://azure.microsoft.com/pricing/details/app-service/windows/).<br/><br/>Best for long-running scenarios where [Durable Functions](durable/durable-functions-overview.md) can't be used. Consider an App Service plan in the following situations:<br/><br/>✔ You have existing, underutilized VMs that are already running other App Service instances.<br/>✔ Predictive scaling and costs are required.|
The comparison tables in this article also include the following hosting options, which provide the highest amount of control and isolation in which to run your function apps.

| Hosting option | Details |
| --- | --- |
-|**[ASE](dedicated-plan.md)** | App Service Environment (ASE) is an App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale.<br/><br/>ASEs are appropriate for application workloads that require: <br/><br/>Γ£ö Very high scale.<br/>Γ£ö Full compute isolation and secure network access.<br/>Γ£ö High memory usage.|
-| **Kubernetes**<br/>([Direct](functions-kubernetes-keda.md) or<br/>[Azure Arc](../app-service/overview-arc-integration.md)) | Kubernetes provides a fully isolated and dedicated environment running on top of the Kubernetes platform.<br/><br/> Kubernetes is appropriate for application workloads that require: <br/>Γ£ö Custom hardware requirements.<br/>Γ£ö Isolation and secure network access.<br/>Γ£ö Ability to run in hybrid or multi-cloud environment.<br/>Γ£ö Run alongside existing Kubernetes applications and services.|
+|**[ASE][Dedicated plan]** | App Service Environment (ASE) is an App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale.<br/><br/>ASEs are appropriate for application workloads that require: <br/><br/>✔ Very high scale.<br/>✔ Full compute isolation and secure network access.<br/>✔ High memory usage.|
+| **Kubernetes**<br/>([Direct][Kubernetes] or<br/>[Azure Arc](../app-service/overview-arc-integration.md)) | Kubernetes provides a fully isolated and dedicated environment running on top of the Kubernetes platform.<br/><br/> Kubernetes is appropriate for application workloads that require: <br/>✔ Custom hardware requirements.<br/>✔ Isolation and secure network access.<br/>✔ Ability to run in hybrid or multi-cloud environment.<br/>✔ Run alongside existing Kubernetes applications and services.|
The remaining tables in this article compare the plans on various features and behaviors. For a cost comparison between dynamic hosting plans (Consumption and Premium), see the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/). For pricing of the various Dedicated plan options, see the [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/). ## Operating system/runtime
-The following table shows operating system and language support for the hosting plans.
+The following table shows operating system and [language support](supported-languages.md) for the hosting plans.
-| | Linux<sup>1</sup><br/>Code-only | Windows<sup>2</sup><br/>Code-only | Linux<sup>1,3</sup><br/>Docker container |
+| | Linux<sup>1,2</sup><br/>code-only | Windows code-only | Linux<sup>1,2,3</sup><br/>Docker container |
| | | | |
-| **[Consumption plan](consumption-plan.md)** | .NET Core 3.1<br/>.NET 5.0<br/>JavaScript<br/>Java<br/>Python<br/>TypeScript | .NET Core 3.1<br/>.NET 5.0<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>TypeScript | No support |
-| **[Premium plan](functions-premium-plan.md)** | .NET Core 3.1<br/>.NET 5.0<br/>JavaScript<br/>Java<br/>Python<br/>TypeScript |.NET Core<br/>.NET 5.0<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>TypeScript |.NET Core<br/>Node.js<br/>Java<br/>PowerShell Core<br/>Python<br/>TypeScript |
-| **[Dedicated plan](dedicated-plan.md)** | .NET Core 3.1<br/>.NET 5.0<br/>JavaScript<br/>Java<br/>Python<br/>TypeScript |.NET Core 3.1<br/>.NET 5.0<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>TypeScript |.NET Core 3.1<br/>.NET 5.0<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>Python<br/>TypeScript |
-| **[ASE](dedicated-plan.md)** | .NET Core 3.1<br/>.NET 5.0<br/>JavaScript<br/>Java<br/>Python<br/>TypeScript |.NET Core 3.1<br/>.NET 5.0<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>TypeScript |.NET Core 3.1<br/>.NET 5.0<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>Python<br/>TypeScript |
-| **[Kubernetes (direct)](functions-kubernetes-keda.md)** | n/a | n/a |.NET Core 3.1<br/>.NET 5.0<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>Python<br/>TypeScript |
-| **[Azure Arc (Preview)](../app-service/overview-arc-integration.md)** | .NET Core 3.1<br/>.NET 5.0<br/>JavaScript<br/>Java<br/>Python<br/>TypeScript | n/a |.NET Core 3.1<br/>.NET 5.0<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>Python<br/>TypeScript |
+| **[Consumption plan]** | C#<br/>JavaScript<br/>Java<br/>Python<br/>PowerShell Core<br/>TypeScript | C#<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>TypeScript | No support |
+| **[Premium plan]** | C#<br/>JavaScript<br/>Java<br/>Python<br/>PowerShell Core<br/>TypeScript |C#<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>TypeScript |C#<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>Python<br/>TypeScript |
+| **[Dedicated plan]** | C#<br/>JavaScript<br/>Java<br/>Python<br/>TypeScript |C#<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>TypeScript |C#<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>Python<br/>TypeScript |
+| **[ASE][Dedicated plan]** | C#<br/>JavaScript<br/>Java<br/>Python<br/>TypeScript |C#<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>TypeScript |C#<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>Python<br/>TypeScript |
+| **[Kubernetes (direct)][Kubernetes]** | n/a | n/a |C#<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>Python<br/>TypeScript |
+| **[Azure Arc (Preview)](../app-service/overview-arc-integration.md)** | C#<br/>JavaScript<br/>Java<br/>Python<br/>TypeScript | n/a |C#<br/>JavaScript<br/>Java<br/>PowerShell Core<br/>Python<br/>TypeScript |
<sup>1</sup> Linux is the only supported operating system for the Python runtime stack. <br/>
-<sup>2</sup> Windows is the only supported operating system for the PowerShell runtime stack.<br/>
+<sup>2</sup> PowerShell support on Linux is currently in preview.<br/>
<sup>3</sup> Linux is the only supported operating system for Docker containers.<br/> [!INCLUDE [Timeout Duration section](../../includes/functions-timeout-duration.md)]
The following table shows operating system and language support for the hosting
The following table compares the scaling behaviors of the various hosting plans.
-| Plan | Scale out | Max # instances |
+| Plan | Scale out | Max # instances |
| | | |
-| **[Consumption plan](consumption-plan.md)** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of incoming trigger events. | 200 |
-| **[Premium plan](functions-premium-plan.md)** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of events that its functions are triggered on. |100|
-| **[Dedicated plan](dedicated-plan.md)**<sup>1</sup> | Manual/autoscale |10-20|
-| **[ASE](dedicated-plan.md)**<sup>1</sup> | Manual/autoscale |100 |
-| **[Kubernetes](functions-kubernetes-keda.md)** | Event-driven autoscale for Kubernetes clusters using [KEDA](https://keda.sh). | Varies&nbsp;by&nbsp;cluster&nbsp;&nbsp;|
+| **[Consumption plan]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of incoming trigger events. | **Windows:** 200<br/>**Linux:** 100<sup>1</sup> |
+| **[Premium plan]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of events that its functions are triggered on. | **Windows:** 100<br/>**Linux:** 20-40<sup>2</sup>|
+| **[Dedicated plan]**<sup>3</sup> | Manual/autoscale |10-20|
+| **[ASE][Dedicated plan]**<sup>3</sup> | Manual/autoscale |100 |
+| **[Kubernetes]** | Event-driven autoscale for Kubernetes clusters using [KEDA](https://keda.sh). | Varies&nbsp;by&nbsp;cluster&nbsp;&nbsp;|
-<sup>1</sup> For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
+<sup>1</sup> During scale out, there's currently a limit of 500 instances per subscription per hour for Linux apps on a Consumption plan. <br/>
+<sup>2</sup> In some regions, Linux apps on a Premium plan can scale to 40 instances. For more information, see the [Premium plan article](functions-premium-plan.md#region-max-scale-out). <br/>
+<sup>3</sup> For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
## Cold start behavior

| Plan | Details |
| -- | -- |
-| **[Consumption&nbsp;plan](consumption-plan.md)** | Apps may scale to zero when idle, meaning some requests may have additional latency at startup. The consumption plan does have some optimizations to help decrease cold start time, including pulling from pre-warmed placeholder functions that already have the function host and language processes running. |
-| **[Premium plan](functions-premium-plan.md)** | Perpetually warm instances to avoid any cold start. |
-| **[Dedicated plan](dedicated-plan.md)** | When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isn't really an issue. |
-| **[ASE](dedicated-plan.md)** | When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isn't really an issue. |
-| **[Kubernetes](functions-kubernetes-keda.md)** | Depending on KEDA configuration, apps can be configured to avoid a cold start. If configured to scale to zero, then a cold start is experienced for new events.
+| **[Consumption plan]** | Apps may scale to zero when idle, meaning some requests may have additional latency at startup. The consumption plan does have some optimizations to help decrease cold start time, including pulling from pre-warmed placeholder functions that already have the function host and language processes running. |
+| **[Premium plan]** | Perpetually warm instances to avoid any cold start. |
+| **[Dedicated plan]** | When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isn't really an issue. |
+| **[ASE][Dedicated plan]** | When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isn't really an issue. |
+| **[Kubernetes]** | Depending on KEDA configuration, apps can be configured to avoid a cold start. If configured to scale to zero, then a cold start is experienced for new events.
## Service limits
The following table compares the scaling behaviors of the various hosting plans.
| Plan | Details |
| --- | --- |
-| **[Consumption plan](consumption-plan.md)** | Pay only for the time your functions run. Billing is based on number of executions, execution time, and memory used. |
-| **[Premium plan](functions-premium-plan.md)** | Premium plan is based on the number of core seconds and memory used across needed and pre-warmed instances. At least one instance per plan must be kept warm at all times. This plan provides the most predictable pricing. |
-| **[Dedicated plan](dedicated-plan.md)** | You pay the same for function apps in an App Service Plan as you would for other App Service resources, like web apps.|
-| **[App Service Environment (ASE)](dedicated-plan.md)** | There's a flat monthly rate for an ASE that pays for the infrastructure and doesn't change with the size of the ASE. There's also a cost per App Service plan vCPU. All apps hosted in an ASE are in the Isolated pricing SKU. |
-| **[Kubernetes](functions-kubernetes-keda.md)**| You pay only the costs of your Kubernetes cluster; no additional billing for Functions. Your function app runs as an application workload on top of your cluster, just like a regular app. |
+| **[Consumption plan]** | Pay only for the time your functions run. Billing is based on number of executions, execution time, and memory used. |
+| **[Premium plan]** | Premium plan is based on the number of core seconds and memory used across needed and pre-warmed instances. At least one instance per plan must be kept warm at all times. This plan provides the most predictable pricing. |
+| **[Dedicated plan]** | You pay the same for function apps in an App Service Plan as you would for other App Service resources, like web apps.|
+| **[App Service Environment (ASE)][Dedicated plan]** | There's a flat monthly rate for an ASE that pays for the infrastructure and doesn't change with the size of the ASE. There's also a cost per App Service plan vCPU. All apps hosted in an ASE are in the Isolated pricing SKU. |
+| **[Kubernetes]**| You pay only the costs of your Kubernetes cluster; no additional billing for Functions. Your function app runs as an application workload on top of your cluster, just like a regular app. |
## Next steps
+ [Deployment technologies in Azure Functions](functions-deployment-technologies.md)
+ [Azure Functions developer guide](functions-reference.md)+
+[Consumption plan]: consumption-plan.md
+[Premium plan]: functions-premium-plan.md
+[Dedicated plan]: dedicated-plan.md
+[Kubernetes]: functions-kubernetes-keda.md
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
A pre-upgrade validator is available to help identify potential issues when migr
To migrate an app from 3.x to 4.x, set the `FUNCTIONS_EXTENSION_VERSION` application setting to `~4` with the following Azure CLI command:
-```bash
+```azurecli
az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -n <APP_NAME> -g <RESOURCE_GROUP_NAME>

# For Windows function apps only, also enable .NET 6.0 that is needed by the runtime
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
description: Overview of the Azure Monitor agent, which collects monitoring data
Previously updated : 09/21/2021 Last updated : 3/3/2022
The Azure Monitor agent doesn't require any keys but instead requires a [system-
## Networking The Azure Monitor agent supports Azure service tags (both AzureMonitor and AzureResourceManager tags are required). It supports connecting via **direct proxies, Log Analytics gateway, and private links** as described below.
+### Firewall requirements
+|Endpoint |Purpose |Port |Direction |Bypass HTTPS inspection|
+|--|--|--|--|--|
+|global.handler.control.monitor.azure.com |Access control service|Port 443 |Outbound|Yes |
+|`<virtual-machine-region-name>`.handler.control.monitor.azure.com |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
+|`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data |Port 443 |Outbound|Yes |
+
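One quick way to sanity-check that these endpoints are reachable from a machine is a simple TCP test. This is a troubleshooting sketch, not an official validation step; substitute the region and workspace ID for the other rows in the table.

```console
# Verify outbound TCP 443 connectivity to the control-plane endpoint listed above
nc -vz global.handler.control.monitor.azure.com 443
```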
+If you're using private links on the agent, you must also add the [DCE endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).
+ ### Proxy configuration If the machine connects through a proxy server to communicate over the internet, review requirements below to understand the network configuration required.
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType Azur
`Stop-Service -Name <gateway-name>` `Start-Service -Name <gateway-name>`
-## Private link configuration
-To configure the agent to use private links for network communications with Azure Monitor, you can use [Azure Monitor Private Links Scopes (AMPLS)](../logs/private-link-security.md) and [data collection endpoints](azure-monitor-agent-data-collection-endpoint.md) to enable required network isolation.
+### Private link configuration
+To configure the agent to use private links for network communications with Azure Monitor, follow instructions to [enable network isolation](./azure-monitor-agent-data-collection-endpoint.md#enable-network-isolation-for-the-azure-monitor-agent) using [data collection endpoints](azure-monitor-agent-data-collection-endpoint.md).
## Next steps
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
# Export telemetry from Application Insights
-Want to keep your telemetry for longer than the standard retention period? Or process it in some specialized way? Continuous Export is ideal for this. The events you see in the Application Insights portal can be exported to storage in Microsoft Azure in JSON format. From there, you can download your data and write whatever code you need to process it.
+Want to keep your telemetry for longer than the standard retention period? Or process it in some specialized way? Continuous Export is ideal for this purpose. The events you see in the Application Insights portal can be exported to storage in Microsoft Azure in JSON format. From there, you can download your data and write whatever code you need to process it.
> [!IMPORTANT]
-> Continuous export has been deprecated. [Migrate to a workspace-based Application Insights resource](convert-classic-resource.md) to use [diagnostic settings](#diagnostic-settings-based-export) for exporting telemetry.
+> Continuous export has been deprecated. When [migrating to a workspace-based Application Insights resource](convert-classic-resource.md), you must use [diagnostic settings](#diagnostic-settings-based-export) for exporting telemetry.
> [!NOTE] > Continuous export is only supported for classic Application Insights resources. [Workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry).
Before you set up continuous export, there are some alternatives you might want
* The [Data access REST API](https://dev.applicationinsights.io/) lets you access your telemetry programmatically. * You can also access setup [continuous export via PowerShell](/powershell/module/az.applicationinsights/new-azapplicationinsightscontinuousexport).
-After Continuous Export copies your data to storage (where it can stay for as long as you like), it's still available in Application Insights for the usual [retention period](./data-retention-privacy.md).
+After continuous export copies your data to storage, where it may stay as long as you like, it's still available in Application Insights for the usual [retention period](./data-retention-privacy.md).
## Supported Regions
Continuous Export **does not support** the following Azure storage features/conf
3. Create or select an [Azure storage account](../../storage/common/storage-introduction.md) where you want to store the data. For more information on storage pricing options, visit the [official pricing page](https://azure.microsoft.com/pricing/details/storage/).
- Click Add, Export Destination, Storage account, and then either create a new store or choose an existing store.
+ Select Add, Export Destination, Storage account, and then either create a new store or choose an existing store.
> [!Warning] > By default, the storage location will be set to the same geographical region as your Application Insights resource. If you store in a different region, you may incur transfer charges.
Continuous Export **does not support** the following Azure storage features/conf
There can be a delay of about an hour before data appears in the storage.
-Once the first export is complete you will find a structure similar to the following in your Azure Blob storage container: (This will vary depending on the data you are collecting.)
+Once the first export is complete, you'll find the following structure in your Azure Blob storage container: (This structure will vary depending on the data you're collecting.)
|Name | Description |
|:-|:-|
Once the first export is complete you will find a structure similar to the follo
| [Messages](export-data-model.md#trace-messages) | Sent by [TrackTrace](./api-custom-events-metrics.md#tracktrace), and by the [logging adapters](./asp-net-trace-logs.md).
| [Metrics](export-data-model.md#metrics) | Generated by metric API calls.
| [PerformanceCounters](export-data-model.md) | Performance Counters collected by Application Insights.
-| [Requests](export-data-model.md#requests)| Sent by [TrackRequest](./api-custom-events-metrics.md#trackrequest). The standard modules use this to reports server response time, measured at the server.|
+| [Requests](export-data-model.md#requests)| Sent by [TrackRequest](./api-custom-events-metrics.md#trackrequest). The standard modules use requests to report server response time, measured at the server.|
### To edit continuous export
-Click on continuous export and select the storage account to edit.
+Select continuous export and select the storage account to edit.
### To stop continuous export
-To stop the export, click Disable. When you click Enable again, the export will restart with new data. You won't get the data that arrived in the portal while export was disabled.
+To stop the export, select Disable. When you select Enable again, the export will restart with new data. You won't get the data that arrived in the portal while export was disabled.
To stop the export permanently, delete it. Doing so doesn't delete your data from storage.
To stop the export permanently, delete it. Doing so doesn't delete your data fro
* To add or change exports, you need Owner, Contributor, or Application Insights Contributor access rights. [Learn about roles][roles]. ## <a name="analyze"></a> What events do you get?
-The exported data is the raw telemetry we receive from your application, except that we add location data, which we calculate from the client IP address.
+The exported data is the raw telemetry we receive from your application with added location data from the client IP address.
-Data that has been discarded by [sampling](./sampling.md) is not included in the exported data.
+Data that has been discarded by [sampling](./sampling.md) isn't included in the exported data.
-Other calculated metrics are not included. For example, we don't export average CPU utilization, but we do export the raw telemetry from which the average is computed.
+Other calculated metrics aren't included. For example, we don't export average CPU utilization, but we do export the raw telemetry from which the average is computed.
The data also includes the results of any [availability web tests](./monitor-web-app-availability.md) that you have set up.
The data also includes the results of any [availability web tests](./monitor-web
> ## <a name="get"></a> Inspect the data
-You can inspect the storage directly in the portal. Click home in the leftmost menu, at the top where it says "Azure services" select **Storage accounts**, select the storage account name, on the overview page select **Blobs** under services, and finally select the container name.
+You can inspect the storage directly in the portal. Select **Home** in the leftmost menu. At the top, where it says "Azure services", select **Storage accounts**, and then select the storage account name. On the overview page, select **Blobs** under services, and finally select the container name.
To inspect Azure storage in Visual Studio, open **View**, **Cloud Explorer**. (If you don't have that menu command, you need to install the Azure SDK: Open the **New Project** dialog, expand Visual C#/Cloud and choose **Get Microsoft Azure SDK for .NET**.)
private IEnumerable<T> DeserializeMany<T>(string folderName)
For a larger code sample, see [using a worker role][exportasa]. ## <a name="delete"></a>Delete your old data
-You are responsible for managing your storage capacity and deleting the old data if necessary.
+You're responsible for managing your storage capacity and deleting the old data if necessary.
## If you regenerate your storage key... If you change the key to your storage, continuous export will stop working. You'll see a notification in your Azure account.
-Open the Continuous Export tab and edit your export. Edit the Export Destination, but just leave the same storage selected. Click OK to confirm.
+Open the Continuous Export tab and edit your export. Edit the Export Destination, but just leave the same storage selected. Select OK to confirm.
The continuous export will restart.
The continuous export will restart.
* [Export to SQL using Stream Analytics][exportasa] * [Stream Analytics sample 2](../../stream-analytics/app-insights-export-stream-analytics.md)
-On larger scales, consider [HDInsight](https://azure.microsoft.com/services/hdinsight/) - Hadoop clusters in the cloud. HDInsight provides a variety of technologies for managing and analyzing big data, and you could use it to process data that has been exported from Application Insights.
+On larger scales, consider [HDInsight](https://azure.microsoft.com/services/hdinsight/) - Hadoop clusters in the cloud. HDInsight provides various technologies for managing and analyzing big data. You can use it to process data that has been exported from Application Insights.
## Q & A * *But all I want is a one-time download of a chart.*
- Yes, you can do that. At the top of the tab, click **Export Data**.
+ Yes, you can do that. At the top of the tab, select **Export Data**.
* *I set up an export, but there's no data in my store.* Did Application Insights receive any telemetry from your app since you set up the export? You'll only receive new data.
On larger scales, consider [HDInsight](https://azure.microsoft.com/services/hdin
No, sorry. Our export engine currently only works with Azure storage at this time. * *Is there any limit to the amount of data you put in my store?*
- No. We'll keep pushing data in until you delete the export. We'll stop if we hit the outer limits for blob storage, but that's pretty huge. It's up to you to control how much storage you use.
+ No. We'll keep pushing data in until you delete the export. We'll stop if we hit the outer limits for blob storage, but that's huge. It's up to you to control how much storage you use.
* *How many blobs should I see in the storage?* * For every data type you selected to export, a new blob is created every minute (if data is available).
- * In addition, for applications with high traffic, additional partition units are allocated. In this case, each unit creates a blob every minute.
+ * In addition, for applications with high traffic, extra partition units are allocated. In this case, each unit creates a blob every minute.
* *I regenerated the key to my storage or changed the name of the container, and now the export doesn't work.*
- Edit the export and open the export destination tab. Leave the same storage selected as before, and click OK to confirm. Export will restart. If the change was within the past few days, you won't lose data.
+ Edit the export and open the export destination tab. Leave the same storage selected as before, and select OK to confirm. Export will restart. If the change was within the past few days, you won't lose data.
* *Can I pause the export?*
- Yes. Click Disable.
+ Yes. Select Disable.
## Code samples
On larger scales, consider [HDInsight](https://azure.microsoft.com/services/hdin
## Diagnostic settings based export
-Diagnostic settings based export uses a different schema than continuous export. It also supports features that continuous export does not like:
+Diagnostic settings based export uses a different schema than continuous export. It also supports features that continuous export doesn't, such as:
-* Azure storage accounts with vnet, firewalls, and private links.
-* Export to event hub.
+* Azure storage accounts with virtual networks, firewalls, and private links.
+* Export to Event Hubs.
-To migrate to diagnostic settings based export:
+To migrate to diagnostic settings-based export:
1. Disable current continuous export.
2. [Migrate application to workspace-based](convert-classic-resource.md).
azure-monitor Container Insights Enable Aks Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks-policy.md
Monitoring Addon require following roles on the managed identity used by Azure P
2. Create the policy definition with the following command:
- ``` sh
+ ```azurecli
    az cloud set -n <AzureCloud | AzureChinaCloud | AzureUSGovernment> # set the Azure cloud
    az login # login to cloud environment
    az account set -s <subscriptionId>
Monitoring Addon require following roles on the managed identity used by Azure P
- Create the policy assignment with the following command:
- ``` sh
+ ```azurecli
az policy assignment create --name aks-monitoring-addon --policy "(Preview)AKS-Monitoring-Addon" --assign-identity --identity-scope /subscriptions/<subscriptionId> --role Contributor --scope /subscriptions/<subscriptionId> --location <location> -p "{ \"workspaceResourceId\": { \"value\": \"/subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/microsoft.operationalinsights/workspaces/<workspaceName>\" } }"
```
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
Run the following commands to locate the full Azure Resource Manager identifier
3. The following example displays the list of workspaces in your subscriptions in the default JSON format.
- ```
+ ```azurecli
    az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json
    ```
az k8s-extension show --name azuremonitor-containers --cluster-name <cluster-nam
The following command only deletes the extension instance, but doesn't delete the Log Analytics workspace. The data within the Log Analytics resource is left intact.
-```bash
+```azurecli
az k8s-extension delete --name azuremonitor-containers --cluster-type connectedClusters --cluster-name <cluster-name> --resource-group <resource-group>
```
azure-monitor Stream Monitoring Data Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/stream-monitoring-data-event-hubs.md
Last updated 07/15/2020
# Stream Azure monitoring data to an event hub or external partner
-Azure Monitor provides a complete full stack monitoring solution for applications and services in Azure, in other clouds, and on-premises. In addition to using Azure Monitor for analyzing that data and leveraging it for different monitoring scenarios, you may need to send it to other monitoring tools in your environment. In most cases, the most effective method to stream monitoring data to external tools is using [Azure Event Hubs](../../event-hubs/index.yml). This article provides a brief description on how to do this and then lists some of the partners where you can send data. Some have special integration with Azure Monitor and may be hosted on Azure.
+Azure Monitor provides full stack monitoring for applications and services in Azure, in other clouds, and on-premises. In most cases, the most effective method to stream monitoring data to external tools is using [Azure Event Hubs](../../event-hubs/index.yml). This article provides a brief description on how to stream data and then lists some of the partners where you can send it. Some have special integration with Azure Monitor and may be hosted on Azure.
## Create an Event Hubs namespace Before you configure streaming for any data source, you need to [create an Event Hubs namespace and event hub](../../event-hubs/event-hubs-create.md). This namespace and event hub is the destination for all of your monitoring data. An Event Hubs namespace is a logical grouping of event hubs that share the same access policy, much like a storage account has individual blobs within that storage account. Consider the following details about the event hubs namespace and event hubs that you use for streaming monitoring data: * The number of throughput units allows you to increase throughput scale for your event hubs. Only one throughput unit is typically necessary. If you need to scale up as your log usage increases, you can manually increase the number of throughput units for the namespace or enable auto inflation.
-* The number of partitions allows you to parallelize consumption across many consumers. A single partition can support up to 20MBps or approximately 20,000 messages per second. Depending on the tool consuming the data, it may or may not support consuming from multiple partitions. Four partitions is reasonable to start with if you're not sure about the number of partitions to set.
-* You set message retention on your event hub to at least 7 days. If your consuming tool goes down for more than a day, this ensures that the tool can pick up where it left off for events up to 7 days old.
-* You should use the default consumer group for your event hub. There is no need to create other consumer groups or use a separate consumer group unless you plan to have two different tools consume the same data from the same event hub.
+* The number of partitions allows you to parallelize consumption across many consumers. A single partition can support up to 20 MBps or approximately 20,000 messages per second. Depending on the tool consuming the data, it may or may not support consuming from multiple partitions. Four partitions are reasonable to start with if you're not sure about the number of partitions to set.
+* You set message retention on your event hub to at least seven days. If your consuming tool goes down for more than a day, this retention ensures that the tool can pick up where it left off for events up to seven days old.
+* You should use the default consumer group for your event hub. There's no need to create other consumer groups or use a separate consumer group unless you plan to have two different tools consume the same data from the same event hub.
* For the Azure Activity log, you pick an Event Hubs namespace, and Azure Monitor creates an event hub within that namespace called _insights-logs-operational-logs_. For other log types, you can either choose an existing event hub or have Azure Monitor create an event hub per log category. * Outbound port 5671 and 5672 must typically be opened on the computer or VNET consuming data from the event hub. ## Monitoring data available
-[Sources of monitoring data for Azure Monitor](../agents/data-sources.md) describes the different tiers of data for Azure applications and the kinds of monitoring data available for each. The following table lists each of these tiers and a description of how that data can be streamed to an event hub. Follow the links provided for further detail.
+[Sources of monitoring data for Azure Monitor](../agents/data-sources.md) describes the data tiers for Azure applications and the kinds of data available for each. The following table lists each of these tiers and a description of how that data can be streamed to an event hub. Follow the links provided for further detail.
| Tier | Data | Method |
|:---|:---|:---|
-| [Azure tenant](../agents/data-sources.md#azure-tenant) | Azure Active Directory audit logs | Configure a tenant diagnostic setting on your AAD tenant. See [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) for details. |
+| [Azure tenant](../agents/data-sources.md#azure-tenant) | Azure Active Directory audit logs | Configure a tenant diagnostic setting on your Azure AD tenant. See [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) for details. |
| [Azure subscription](../agents/data-sources.md#azure-subscription) | Azure Activity Log | Create a log profile to export Activity Log events to Event Hubs. See [Stream Azure platform logs to Azure Event Hubs](../essentials/resource-logs.md#send-to-azure-event-hubs) for details. |
| [Azure resources](../agents/data-sources.md#azure-resources) | Platform metrics<br> Resource logs |Both types of data are sent to an event hub using a resource diagnostic setting. See [Stream Azure resource logs to an event hub](../essentials/resource-logs.md#send-to-azure-event-hubs) for details. |
| [Operating system (guest)](../agents/data-sources.md#operating-system-guest) | Azure virtual machines | Install the [Azure Diagnostics Extension](../agents/diagnostics-extension-overview.md) on Windows and Linux virtual machines in Azure. See [Streaming Azure Diagnostics data in the hot path by using Event Hubs](../agents/diagnostics-extension-stream-event-hubs.md) for details on Windows VMs and [Use Linux Diagnostic Extension to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md#protected-settings) for details on Linux VMs. |
-| [Application code](../agents/data-sources.md#application-code) | Application Insights | Application Insights doesn't provide a direct method to stream data to event hubs. You can [set up continuous export](../app/export-telemetry.md) of the Application Insights data to a storage account and then use a Logic App to send the data to an event hub as described in [Manual streaming with Logic App](#manual-streaming-with-logic-app). |
+| [Application code](../agents/data-sources.md#application-code) | Application Insights | Use diagnostic settings to stream to event hubs. This is only available with workspace-based Application Insights resources. For help setting up workspace-based Application Insights resources, see [Workspace-based Application Insights resources](../app/create-workspace-resource.md#workspace-based-application-insights-resources) and [Migrate to workspace-based Application Insights resources](../app/convert-classic-resource.md#migrate-to-workspace-based-application-insights-resources).|
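For the resource-level rows above, the diagnostic setting can also be created from the CLI. The following is a hedged sketch: the resource ID, event hub, and authorization rule are placeholders, and the exact log categories available depend on the resource type and CLI version.

```azurecli
# Route a resource's logs and metrics to an event hub via a diagnostic setting
az monitor diagnostic-settings create \
    --name stream-to-eventhub \
    --resource <resource-id> \
    --event-hub <event-hub-name> \
    --event-hub-rule <event-hub-namespace-authorization-rule-id> \
    --logs '[{"categoryGroup": "allLogs", "enabled": true}]' \
    --metrics '[{"category": "AllMetrics", "enabled": true}]'
```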
## Manual streaming with Logic App For data that you can't directly stream to an event hub, you can write to Azure storage and then use a time-triggered Logic App that [pulls data from blob storage](../../connectors/connectors-create-api-azureblobstorage.md#add-action) and [pushes it as a message to the event hub](../../connectors/connectors-create-api-azure-event-hubs.md#add-action).
Routing your monitoring data to an event hub with Azure Monitor enables you to e
| Tool | Hosted in Azure | Description |
|:---|:---|:---|
| IBM QRadar | No | The Microsoft Azure DSM and Microsoft Azure Event Hub Protocol are available for download from [the IBM support website](https://www.ibm.com/support). |
-| Splunk | No | [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/) is an open source project available in Splunkbase. <br><br> If you cannot install an add-on in your Splunk instance, if for example you're using a proxy or running on Splunk Cloud, you can forward these events to the Splunk HTTP Event Collector using [Azure Function For Splunk](https://github.com/Microsoft/AzureFunctionforSplunkVS), which is triggered by new messages in the event hub. |
+| Splunk | No | [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/) is an open source project available in Splunkbase. <br><br> If you can't install an add-on in your Splunk instance, if for example you're using a proxy or running on Splunk Cloud, you can forward these events to the Splunk HTTP Event Collector using [Azure Function For Splunk](https://github.com/Microsoft/AzureFunctionforSplunkVS), which is triggered by new messages in the event hub. |
| SumoLogic | No | Instructions for setting up SumoLogic to consume data from an event hub are available at [Collect Logs for the Azure Audit App from Event Hub](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure-Audit/02Collect-Logs-for-Azure-Audit-from-Event-Hub). | | ArcSight | No | The ArcSight Azure Event Hub smart connector is available as part of [the ArcSight smart connector collection](https://community.microfocus.com/cyberres/arcsight/f/arcsight-product-announcements/163662/announcing-general-availability-of-arcsight-smart-connectors-7-10-0-8114-0). | | Syslog server | No | If you want to stream Azure Monitor data directly to a syslog server, you can use a [solution based on an Azure function](https://github.com/miguelangelopereira/azuremonitor2syslog/).
azure-monitor Custom Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-logs-overview.md
With the DCR based custom logs API in Azure Monitor, you can send data to a Log
> [!NOTE] > The custom logs API should not be confused with [custom logs](../agents/data-sources-custom-logs.md) data source with the legacy Log Analytics agent.++ ## Basic operation Your application sends data to a [data collection endpoint](../essentials/data-collection-endpoint-overview.md) which is a unique connection point for your subscription. The payload of your API call includes the source data formatted in JSON. The call specifies a [data collection rule](../essentials/data-collection-rule-overview.md) that understands the format of the source data, potentially filters and transforms it for the target table, and then directs it to a specific table in a specific workspace. You can modify the target table and workspace by modifying the data collection rule without any change to the REST API call or source data.
+> [!NOTE]
+> See [Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs](custom-logs-migrate.md) to migrate solutions from the [Data Collector API](data-collector-api.md).
## Authentication Authentication for the custom logs API is performed at the data collection endpoint which uses standard Azure Resource Manager authentication. A common strategy is to use an Application ID and Application Key as described in [Tutorial: Add ingestion-time transformation to Azure Monitor Logs (preview)](tutorial-custom-logs.md).
The endpoint URI uses the following format, where the `Data Collection Endpoint`
{Data Collection Endpoint URI}/dataCollectionRules/{DCR Immutable ID}/streams/{Stream Name}?api-version=2021-11-01-preview
```
+> [!NOTE]
+> You can retrieve the immutable ID from the JSON view of the DCR. See [Collect information from DCR](tutorial-custom-logs.md#collect-information-from-dcr).
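As a hedged illustration of the call shape only: the endpoint URI, DCR immutable ID, stream name, bearer token, and JSON fields below are all placeholders, and the real stream name and columns come from your data collection rule.

```console
# POST a JSON array of records to the endpoint format described above
curl -X POST "<data-collection-endpoint-URI>/dataCollectionRules/<dcr-immutable-id>/streams/<stream-name>?api-version=2021-11-01-preview" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '[{"TimeGenerated": "2022-03-04T15:00:00Z", "RawData": "example record"}]'
```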
+ ### Headers The call can use the following headers:
azure-monitor Ingestion Time Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/ingestion-time-transformations.md
See the following tutorials for a complete walkthrough of configuring ingestion-
## Limits -- Transformation queries use a subset of KQL. See [Supported KSQL features](../essentials/data-collection-rule-transformations.md#supported-kql-features) for details.
+- Transformation queries use a subset of KQL. See [Supported KQL features](../essentials/data-collection-rule-transformations.md#supported-kql-features) for details.
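For context, an ingestion-time transformation is a single KQL statement over the virtual `source` table. The following is a hypothetical example; the `RawData` column and the filter are made up for illustration and must match your own table's schema.

```kusto
// Drop verbose rows and derive a column before the data lands in the destination table
source
| where RawData !has "DEBUG"
| extend ClientIP = tostring(split(RawData, ",")[0])
```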
## Next steps
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na Previously updated : 12/09/2021 Last updated : 03/02/2022 # Create an SMB volume for Azure NetApp Files
You can set permissions for a file or folder by using the **Security** tab of th
* [Mount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
-* [Configure ADDS LDAP over TLS for Azure NetApp Files](configure-ldap-over-tls.md)
+* [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
* [Enable Continuous Availability on existing SMB volumes](enable-continuous-availability-existing-SMB.md) * [SMB encryption](azure-netapp-files-smb-performance.md#smb-encryption) * [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md)
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
na Previously updated : 10/04/2021 Last updated : 03/02/2022 # Create an NFS volume for Azure NetApp Files
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* [Configure NFSv4.1 default domain for Azure NetApp Files](azure-netapp-files-configure-nfsv41-domain.md) * [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md)
-* [Configure ADDS LDAP over TLS for Azure NetApp Files](configure-ldap-over-tls.md)
+* [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
* [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) * [Mount a volume for Windows or Linux VMs](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Configure export policy for an NFS volume](azure-netapp-files-configure-export-policy.md)
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-extended-groups.md
Title: Configure ADDS LDAP with extended groups for Azure NetApp Files NFS volume access | Microsoft Docs
+ Title: Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes | Microsoft Docs
description: Describes the considerations and steps for enabling LDAP with extended groups when you create an NFS volume by using Azure NetApp Files. documentationcenter: ''
na Previously updated : 01/27/2022 Last updated : 03/03/2022
-# Configure ADDS LDAP with extended groups for NFS volume access
+# Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes
When you [create an NFS volume](azure-netapp-files-create-volumes.md), you have the option to enable the LDAP with extended groups feature (the **LDAP** option) for the volume. This feature enables Active Directory LDAP users and extended groups (up to 1024 groups) to access files and directories in the volume. You can use the LDAP with extended groups feature with both NFSv4.1 and NFSv3 volumes.
-This article explains the considerations and steps for enabling LDAP with extended groups when you create an NFS volume.
+Azure NetApp Files supports fetching of extended groups from the LDAP name service rather than from the RPC header. Azure NetApp Files interacts with LDAP by querying for attributes such as usernames, numeric IDs, groups, and group memberships for NFS protocol operations.
+
+When it's determined that LDAP will be used for operations such as name lookup and fetching extended groups, the following process occurs:
+
+1. Azure NetApp Files uses an LDAP client configuration to make a connection attempt to the ADDS/AADDS LDAP server that is specified in the [Azure NetApp Files AD configuration](create-active-directory-connections.md).
+1. If the TCP connection over the defined ADDS/AADDS LDAP service port is successful, then the Azure NetApp Files LDAP client attempts to "bind" (log in) to the ADDS/AADDS LDAP server (domain controller) by using the defined credentials in the LDAP client configuration.
+1. If the bind is successful, then the Azure NetApp Files LDAP client uses the RFC 2307bis LDAP schema to make an LDAP search query to the ADDS/AADDS LDAP server (domain controller).
+The following information is passed to the server in the query:
+ * [Base/user DN](configure-ldap-extended-groups.md#ldap-search-scope) (to narrow search scope)
+ * Search scope type (subtree)
+ * Object class (`user`, `posixAccount` for users, and `posixGroup` for groups)
+ * UID or username
+ * Requested attributes (`uid`, `uidNumber`, `gidNumber` for users, or `gidNumber` for groups)
+1. If the user or group isn't found, the request fails, and access is denied.
+1. If the request is successful, then user and group attributes are [cached for future use](configure-ldap-extended-groups.md#considerations). This operation improves the performance of subsequent LDAP queries associated with the cached user or group attributes. It also reduces the load on the ADDS/AADDS LDAP server.
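The search described in step 3 is conceptually similar to the following OpenLDAP `ldapsearch` query. This is only an illustration of the scope, filter, and attributes involved; the host name, bind account, and base DN below are placeholders, not values used by Azure NetApp Files.

```bash
# Placeholder domain controller, bind DN, and search base for illustration only.
ldapsearch -H ldap://dc1.contoso.com:389 \
    -D "CN=ldapbind,OU=ServiceAccounts,DC=contoso,DC=com" -W \
    -b "DC=contoso,DC=com" -s sub \
    "(&(objectClass=user)(uid=user1))" \
    uid uidNumber gidNumber
```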
## Considerations
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 02/15/2022 Last updated : 03/02/2022 # Create and manage Active Directory connections for Azure NetApp Files
This setting is configured in the **Active Directory Connections** under **NetAp
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status. * **LDAP over TLS**
- See [Configure ADDS LDAP over TLS](configure-ldap-over-tls.md) for information about this option.
+ See [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) for information about this option.
* **LDAP Search Scope**, **User DN**, **Group DN**, and **Group Membership Filter** See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for information about these options.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
* [Create a dual-protocol volume](create-volumes-dual-protocol.md) * [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) * [Install a new Active Directory forest using Azure CLI](/windows-server/identity/ad-ds/deploy/virtual-dc/adds-on-azure-vm)
-* [Configure ADDS LDAP over TLS](configure-ldap-over-tls.md)
+* [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
* [ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 01/28/2022 Last updated : 03/02/2022
Azure NetApp Files is updated regularly. This article provides a summary about t
The following features are now GA. You no longer need to register the features before using them. * [Dual-protocol (NFSv4.1 and SMB) volume](create-volumes-dual-protocol.md)
- * [ADDS LDAP over TLS](configure-ldap-over-tls.md)
+ * [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
* [SMB3 Protocol Encryption](azure-netapp-files-create-volumes-smb.md#smb3-encryption) ## November 2021
Azure NetApp Files is updated regularly. This article provides a summary about t
Azure NetApp Files now supports billing tags to help you cross-reference cost with business units or other internal consumers. Billing tags are assigned at the capacity pool level and not volume level, and they appear on the customer invoice.
-* [ADDS LDAP over TLS](configure-ldap-over-tls.md) (Preview)
+* [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) (Preview)
By default, LDAP communications between client and server applications are not encrypted. This means that it is possible to use a network-monitoring device or software to view the communications between an LDAP client and server computers. This scenario might be problematic in non-isolated or shared VNets when an LDAP simple bind is used, because the credentials (username and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted. LDAP over TLS (also known as LDAPS) is a protocol that uses TLS to secure communication between LDAP clients and LDAP servers. Azure NetApp Files now supports the secure communication between an Active Directory Domain Server (ADDS) using LDAP over TLS. Azure NetApp Files can now use LDAP over TLS for setting up authenticated sessions between the Active Directory-integrated LDAP servers. You can enable the LDAP over TLS feature for NFS, SMB, and dual-protocol volumes. By default, LDAP over TLS is disabled on Azure NetApp Files.
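As a quick, illustrative check only (the domain controller name below is a placeholder), a standard `openssl` client can confirm that a domain controller is answering LDAP over TLS on port 636 and show the certificate it presents:

```bash
# Placeholder host name; replace with an ADDS/AADDS domain controller reachable from your network.
openssl s_client -connect dc1.contoso.com:636 -showcerts </dev/null
```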
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-cli.md
To pass in an object, for example, to set tags, use JSON. For example, your Bice
In this case, you can pass in a JSON string to set the parameter as shown in the following Bash script:
-```bash
+```azurecli
tags='{"Owner":"Contoso","Cost Center":"2345-324"}' az deployment group create --name addstorage --resource-group myResourceGroup \ --template-file $bicepFile \
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-cli.md
To pass in an object, for example, to set tags, use JSON. For example, your temp
In this case, you can pass in a JSON string to set the parameter as shown in the following Bash script:
-```bash
+```azurecli
tags='{"Owner":"Contoso","Cost Center":"2345-324"}' az deployment group create --name addstorage --resource-group myResourceGroup \ --template-file $templateFile \
azure-sql-edge Tutorial Deploy Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-deploy-azure-resources.md
Deploy the Azure resources required by this Azure SQL Edge tutorial. These can b
15. Update the connection string in the IoT Edge configuration file on the Edge device. The following commands use Azure CLI for deployments.
- ```powershell
+ ```azurecli
$script = "/etc/iotedge/configedge.sh '" + $connString + "'" az vm run-command invoke -g $ResourceGroup -n $EdgeDeviceId --command-id RunShellScript --script $script ``` 16. Create an Azure Machine Learning workspace within the resource group.
- ```powershell
+ ```azurecli
az ml workspace create -w $MyWorkSpace -g $ResourceGroup ```
azure-sql Database Import Export Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/database-import-export-private-link.md
+
+ Title: Import or export an Azure SQL Database using Private link
+description: Import or export an Azure SQL Database using Private Link without allowing Azure services to access the server.
++++
+ms.devlang:
++++ Last updated : 2/16/2022+
+# Import or export an Azure SQL Database using Private Link without allowing Azure services to access the server
++
+Running Import or Export via Azure PowerShell or the Azure portal requires you to set [Allow Access to Azure Services](network-access-controls-overview.md) to ON; otherwise, the Import/Export operation fails with an error. Often, users want to perform Import or Export by using a private endpoint without allowing access to all Azure services.
+
+## What is Import-Export Private Link?
+
+Import-Export Private Link is a service-managed private endpoint created by Microsoft that is used exclusively by the Import-Export, database, and Azure Storage services for all communications. The private endpoint has to be manually approved by the user in the Azure portal for both the server and the storage account.
++
+To use Private Link with Import-Export, the user database and the Azure Storage blob container must be hosted in the same type of Azure cloud. For example, both must be in Azure Commercial, or both in Azure Government. Hosting across cloud types isn't supported.
+
+This article explains how to import or export an Azure SQL Database using [Private Link](private-endpoint-overview.md) with *Allow Azure Services* set to *OFF* on the Azure SQL server.
+
+> [!NOTE]
+> Import-Export using Private Link for Azure SQL Database is currently in preview.
+
+> [!IMPORTANT]
+> Import or Export of a database from [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md) or from a database in the [Hyperscale service tier](service-tier-hyperscale.md) using PowerShell isn't currently supported.
+++
+## Configure Import-Export Private Link
+Import-Export Private Link can be configured via the Azure portal, PowerShell, or the REST API.
+
+### Configure Import-Export Private link using Azure portal
+
+#### Create Import Private Link
+1. Go to the server into which you would like to import the database. Select **Import database** from the toolbar on the **Overview** page.
+2. On the **Import Database** page, select the **Use Private Link** option.
+3. Enter the storage account, server credentials, and database details, and then select **OK**.
+
+#### Create Export Private Link
+1. Go to the database that you would like to export. Select **Export database** from the toolbar on the **Overview** page.
+2. On the **Export Database** page, select the **Use Private Link** option.
+3. Enter the storage account, server sign-in credentials, and database details, and then select **OK**.
+
+#### Approve Private End Points
+
+##### Approve Private Endpoints in Private Link Center
+1. Go to the Private Link Center.
+2. Navigate to the **Private endpoints** section.
+3. Approve the private endpoints that you created by using the Import-Export service.
+
+##### Approve Private End Point connection on Azure SQL Database
+1. Go to the server that hosts the database.
+2. Open the **Private endpoint connections** page in the **Security** section on the left.
+3. Select the private endpoint you want to approve.
+4. Select **Approve** to approve the connection.
++
+##### Approve Private End Point connection on Azure Storage
+1. Go to the storage account that hosts the blob container that holds the BACPAC file.
+2. Open the **Private endpoint connections** page in the **Security** section on the left.
+3. Select the Import-Export private endpoints you want to approve.
+4. Select **Approve** to approve the connection.
++
+After the private endpoints are approved on both the Azure SQL server and the storage account, the Import or Export jobs are kicked off. Until then, the jobs remain on hold.
+
+You can check the status of Import or Export jobs on the **Import-Export History** page, under the **Data Management** section of the Azure SQL server page.
+++
+### Configure Import-Export Private Link using PowerShell
+
+#### Import a Database using Private link in PowerShell
+Use the [New-AzSqlDatabaseImport](/PowerShell/module/az.sql/new-azsqldatabaseimport) cmdlet to submit an import database request to Azure. Depending on the database size, the import may take some time to complete. The DTU-based provisioning model supports select database max size values for each tier. When importing a database, [use one of these supported values](/sql/t-sql/statements/create-database-transact-sql).
+
+```PowerShell
+$importRequest = New-AzSqlDatabaseImport -ResourceGroupName "<resourceGroupName>" `
+ -ServerName "<serverName>" -DatabaseName "<databaseName>" `
+ -DatabaseMaxSizeBytes "<databaseSizeInBytes>" -StorageKeyType "StorageAccessKey" `
+ -StorageKey $(Get-AzStorageAccountKey -ResourceGroupName $resourceGroupName `
+ -StorageAccountName "<storageAccountName>").Value[0] `
+ -StorageUri "https://myStorageAccount.blob.core.windows.net/importsample/sample.bacpac" `
+ -Edition "Standard" -ServiceObjectiveName "P6" -UseNetworkIsolation $true `
+ -StorageAccountResourceIdForPrivateLink "/subscriptions/<subscriptionId>/resourcegroups/<resource_group_name>/providers/Microsoft.Storage/storageAccounts/<storage_account_name>" `
+ -SqlServerResourceIdForPrivateLink "/subscriptions/<subscriptionId>/resourceGroups/<resource_group_name>/providers/Microsoft.Sql/servers/<server_name>" `
+ -AdministratorLogin "<userID>" `
+ -AdministratorLoginPassword $(ConvertTo-SecureString -String "<password>" -AsPlainText -Force)
+
+```
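Because the import can take a while, you may want to poll its progress. A minimal sketch, assuming the request object returned above exposes its operation status link, is to pass that link to the `Get-AzSqlDatabaseImportExportStatus` cmdlet:

```PowerShell
# Check the progress of the import operation submitted above.
Get-AzSqlDatabaseImportExportStatus -OperationStatusLink $importRequest.OperationStatusLink
```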
+
+#### Export a Database using Private Link in PowerShell
+Use the [New-AzSqlDatabaseExport](/PowerShell/module/az.sql/new-azsqldatabaseexport) cmdlet to submit an export database request to the Azure SQL Database service. Depending on the size of your database, the export operation may take some time to complete.
+
+```PowerShell
+$exportRequest = New-AzSqlDatabaseExport -ResourceGroupName "<resourceGroupName>" `
+ -ServerName "<serverName>" -DatabaseName "<databaseName>" `
+ -DatabaseMaxSizeBytes "<databaseSizeInBytes>" -StorageKeyType "StorageAccessKey" `
+ -StorageKey $(Get-AzStorageAccountKey -ResourceGroupName $resourceGroupName `
+ -StorageAccountName "<storageAccountName>").Value[0] `
+ -StorageUri "https://myStorageAccount.blob.core.windows.net/importsample/sample.bacpac" `
+ -Edition "Standard" -ServiceObjectiveName "P6" -UseNetworkIsolation $true `
+ -StorageAccountResourceIdForPrivateLink "/subscriptions/<subscriptionId>/resourcegroups/<resource_group_name>/providers/Microsoft.Storage/storageAccounts/<storage_account_name>" `
+ -SqlServerResourceIdForPrivateLink "/subscriptions/<subscriptionId>/resourceGroups/<resource_group_name>/providers/Microsoft.Sql/servers/<server_name>" `
+ -AdministratorLogin "<userID>" `
+ -AdministratorLoginPassword $(ConvertTo-SecureString -String "<password>" -AsPlainText -Force)
+```
+++
+### Create Import-Export Private Link using REST API
+Existing APIs to perform Import and Export jobs have been enhanced to support Private Link. For details, see the [Import Database API](/rest/api/sql/2021-08-01-preview/servers/import-database.md).
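As a rough sketch only: the request body mirrors the PowerShell parameters shown earlier, with the server and storage account resource IDs supplied for network isolation. The property names below (in particular `networkIsolation` and its children) are inferred from those parameters and should be confirmed against the linked API reference before use.

```json
{
  "databaseName": "<databaseName>",
  "edition": "Standard",
  "serviceObjectiveName": "P6",
  "maxSizeBytes": "<databaseSizeInBytes>",
  "storageKeyType": "StorageAccessKey",
  "storageKey": "<storageAccountKey>",
  "storageUri": "https://myStorageAccount.blob.core.windows.net/importsample/sample.bacpac",
  "administratorLogin": "<userID>",
  "administratorLoginPassword": "<password>",
  "networkIsolation": {
    "sqlServerResourceId": "/subscriptions/<subscriptionId>/resourceGroups/<resource_group_name>/providers/Microsoft.Sql/servers/<server_name>",
    "storageAccountResourceId": "/subscriptions/<subscriptionId>/resourcegroups/<resource_group_name>/providers/Microsoft.Storage/storageAccounts/<storage_account_name>"
  }
}
```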
+
+## Next steps
+- [Import or Export Azure SQL Database without allowing Azure services to access the server](database-import-export-azure-services-off.md)
+- [Import a database from a BACPAC file](database-import.md)
azure-vmware Attach Disk Pools To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md
az extension add --name vmware
Create and attach an iSCSI datastore in the Azure VMware Solution private cloud cluster using `Microsoft.StoragePool` provided iSCSI target. The disk pool attaches to a virtual network through a delegated subnet, which is done with the Microsoft.StoragePool/diskPools resource provider. If the subnet isn't delegated, the deployment fails.
-```bash
+```azurecli
#Initialize input parameters resourceGroupName='<yourRGName>' name='<desiredDataStoreName>'
azure-vmware Concepts Network Design Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-network-design-considerations.md
+
+ Title: Concepts - Network design considerations
+description: Learn about network design considerations for Azure VMware Solution
+ Last updated : 03/04/2022++
+# Azure VMware Solution network design considerations
+
+Azure VMware Solution offers a VMware private cloud environment accessible to users and applications from on-premises and Azure-based environments or resources. The connectivity is delivered through networking services such as Azure ExpressRoute and VPN connections. There are several networking considerations to review before setting up your Azure VMware Solution environment. This article provides solutions for use cases you may encounter when configuring your networking with Azure VMware Solution.
+
+## Azure VMware Solution compatibility with AS-Path Prepend
+
+Azure VMware Solution is incompatible with AS-Path Prepend for redundant ExpressRoute configurations and doesn't honor the outbound path selection from Azure towards on-premises. If you're running two or more ExpressRoute paths between on-premises and Azure, and the conditions in the following checklist are true, you may experience impaired connectivity or no connectivity between your on-premises networks and Azure VMware Solution. The connectivity issue is caused when Azure VMware Solution doesn't see the AS-Path Prepend and uses ECMP to send traffic towards your environment over both ExpressRoute circuits, which causes issues with stateful firewall inspection.
+
+**Checklist of conditions that are true:**
+- Both or all circuits are connected to Azure VMware Solution with ExpressRoute Global Reach.
+- The same netblocks are being advertised from two or more circuits.
+- Stateful firewalls are in the network path.
+- You're using AS-Path Prepend to force Azure to prefer one path over others.
+
+**Solution**
+
+If you're using BGP AS-Path Prepend to dedicate a circuit from Azure towards on-premises, open a [Customer Support Request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) with Azure VMware Solution to designate a primary circuit from Azure. You'll need to identify which circuit you'd like to be primary for a given network advertisement. Azure support staff will implement the AS-Path Prepend manually within the Azure VMware Solution environment to match your on-premises configuration for route selection. That action doesn't affect redundancy, because the other paths are still available if the primary one fails.
+
+## Management VMs and default routes from on-premises
+
+> [!IMPORTANT]
+> Azure VMware Solution management VMs don't honor a default route from on-premises.
+
+If you're routing back to your on-premises networks using only a default route advertised towards Azure, the vCenter and NSX Manager VMs won't honor that route.
+
+**Solution**
+
+To reach vCenter and NSX Manager, you need to provide more specific routes from on-premises so that traffic has a return path to those networks.
+
+## Next steps
+
+Now that you've covered Azure VMware Solution network design considerations, you might consider learning more.
+
+- [Network interconnectivity concepts - Azure VMware Solution](concepts-networking.md)
+
+## Recommended content
+
+- [Tutorial - Configure networking for your VMware private cloud in Azure - Azure VMware Solution](tutorial-network-checklist.md)
+++
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md
For an end-to-end overview of this procedure, view the [Azure VMware Solution: C
## Create a service mesh >[!IMPORTANT]
->Make sure port UDP 500 is open between your on-premises VMware HCX Connector 'uplink' network profile addresses and the Azure VMware Solution HCX Cloud 'uplink' network profile addresses. (4500 UDP was previously required in legacy versions of HCX. See https://ports.vmware.com for latest information)
+>Make sure port UDP 4500 is open between your on-premises VMware HCX Connector 'uplink' network profile addresses and the Azure VMware Solution HCX Cloud 'uplink' network profile addresses. (UDP 500 was previously required in legacy versions of HCX. See https://ports.vmware.com for the latest information.)
1. Under **Infrastructure**, select **Interconnect** > **Service Mesh** > **Create Service Mesh**.
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
Use the following commands to create these items.
1. If you haven't done so already, sign in to Azure:
- ```bash
+ ```azurecli
az login ``` 1. Create a resource group or you can skip by re-using the one of Azure Web PubSub service:
- ```bash
+ ```azurecli
az group create -n WebPubSubFunction -l <REGION> ``` 1. Create a general-purpose storage account in your resource group and region:
- ```bash
+ ```azurecli
az storage account create -n <STORAGE_NAME> -l <REGION> -g WebPubSubFunction ```
Use the following commands to create these items.
# [JavaScript](#tab/javascript)
- ```bash
+ ```azurecli
az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 3 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME> ``` > [!NOTE]
Use the following commands to create these items.
# [C#](#tab/csharp)
- ```bash
+ ```azurecli
az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 3 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME> ```
azure-web-pubsub Tutorial Serverless Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-notification.md
Use the following commands to create these item.
1. If you haven't done so already, sign in to Azure:
- ```bash
+ ```azurecli
az login ``` 1. Create a resource group or you can skip by re-using the one of Azure Web PubSub service:
- ```bash
+ ```azurecli
az group create -n WebPubSubFunction -l <REGION> ``` 1. Create a general-purpose storage account in your resource group and region:
- ```bash
+ ```azurecli
az storage account create -n <STORAGE_NAME> -l <REGION> -g WebPubSubFunction ```
Use the following commands to create these item.
# [JavaScript](#tab/javascript)
- ```bash
+ ```azurecli
az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 3 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME> ``` > [!NOTE]
Use the following commands to create these item.
# [C#](#tab/csharp)
- ```bash
+ ```azurecli
az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 3 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME> ```
backup Backup Managed Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks-cli.md
Before you create a Backup vault, choose the storage redundancy of the data with
```azurecli-interactive az dataprotection backup-vault create -g testBkpVaultRG --vault-name TestBkpVault -l westus --type SystemAssigned --storage-settings datastore-type="VaultStore" type="LocallyRedundant"
+```
+```json
{ "eTag": null, "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault",
To understand the inner components of a Backup policy for Azure Disk Backup, ret
```azurecli-interactive az dataprotection backup-policy get-default-policy-template --datasource-type AzureDisk
+```
+```json
{ "datasourceTypes": [ "Microsoft.Compute/disks"
az dataprotection backup-policy get-default-policy-template --datasource-type Az
} ] }- ``` The policy template consists of a trigger (which decides what triggers the backup) and a lifecycle (which decides when to delete/copy/move the backup). In Azure Disk Backup, the default values for trigger are a scheduled trigger for every 4 hours (PT4H) and to retain each backup for seven days.
The policy template consists of a trigger (which decides what triggers the backu
"R/2020-04-05T13:00:00+00:00/PT4H" ] }-
+}
``` **Default retention lifecycle:**
Once the template is downloaded as a JSON file, you can edit it for scheduling a
```azurecli-interactive az dataprotection backup-policy get-default-policy-template --datasource-type AzureDisk > policy.json az dataprotection backup-policy create -g testBkpVaultRG --vault-name TestBkpVault -n mypolicy --policy policy.json
+```
+```json
{ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupPolicies/mypolicy", "name": "mypolicy",
Use the edited JSON file to create a backup instance of the Azure Managed Disk.
```azurecli-interactive az dataprotection backup-instance create -g testBkpVaultRG --vault-name TestBkpVault --backup-instance backup_instance.json
+```
-
+```json
{ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault/backupInstances/diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166", "name": "diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166",
List all backup instances within a vault using [az dataprotection backup-instanc
```azurecli-interactive az dataprotection backup-instance list-from-resourcegraph --datasource-type AzureDisk --datasource-id /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk
+```
-
+```json
[ { "datasourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk",
az dataprotection backup-instance list-from-resourcegraph --datasource-type Azur
"zones": null } ]-- ``` You can specify a retention rule while triggering backup. To view the retention rules in policy, look through the policy JSON for retention rules. In the below example, the rule with the name _default_ is displayed and we'll use that rule for the on-demand backup.
-```JSON
+```json
{ "isDefault": true, "lifecycles": [
Track all the jobs using the [az dataprotection job list](/cli/azure/dataprotect
You can also use Az.ResourceGraph to track all jobs across all Backup vaults. Use the [az dataprotection job list-from-resourcegraph](/cli/azure/dataprotection/job#az_dataprotection_job_list_from_resourcegraph) command to get the relevant job that can be across any Backup vault.
-```azurepowershell-interactive
+```azurecli
az dataprotection job list-from-resourcegraph --datasource-type AzureDisk --status Completed ```
backup Restore Blobs Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-blobs-storage-account-cli.md
First, we need to fetch the relevant backup instance ID. List all backup instanc
```azurecli-interactive az dataprotection backup-instance list-from-resourcegraph --datasource-type AzureBlob --datasource-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA"
+```
+```output
[ { "datasourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA",
Once the instance is identified, fetch the relevant recovery range using the [az
```azurecli-interactive az dataprotection restorable-time-range find --start-time 2021-05-30T00:00:00 --end-time 2021-05-31T00:00:00 --source-data-store-type OperationalStore -g testBkpVaultRG --vault-name TestBkpVault --backup-instances CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036
+```
+```output
{ "id": "CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036", "name": null,
Using this option, you can restore all block blobs in the storage account by rol
```azurecli-interactive az dataprotection backup-instance restore initialize-for-data-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --target-resource-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA" --point-in-time 2021-06-02T18:53:44.4465407Z
+```
+```output
{ "object_type": "AzureBackupRecoveryTimeBasedRestoreRequest", "recovery_point_time": "2021-06-02T18:53:44.4465407Z.0000000Z",
az dataprotection backup-instance restore initialize-for-data-recovery --datasou
}, "source_data_store_type": "OperationalStore" }
+```
-
+```azurecli-interactive
az dataprotection backup-instance restore initialize-for-data-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --target-resource-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/CLITestSA" --point-in-time 2021-06-02T18:53:44.4465407Z > restore.json ```
Using this option, you can browse and select up to 10 containers to restore. To
```azurecli-interactive az dataprotection backup-instance restore initialize-for-item-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --backup-instance-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupInstances/CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036" --point-in-time 2021-06-02T18:53:44.4465407Z --container-list container1 container2
+```
+```output
{ "object_type": "AzureBackupRecoveryTimeBasedRestoreRequest", "recovery_point_time": "2021-06-02T18:53:44.4465407Z.0000000Z",
az dataprotection backup-instance restore initialize-for-item-recovery --datasou
}, "source_data_store_type": "OperationalStore" }
+```
-
+```azurecli-interactive
az dataprotection backup-instance restore initialize-for-item-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --backup-instance-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupInstances/CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036" --point-in-time 2021-06-02T18:53:44.4465407Z --container-list container1 container2 > restore.json ```
To restore selected containers, use the [az dataprotection backup-instance resto
```azurecli-interactive az dataprotection backup-instance restore initialize-for-item-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --backup-instance-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupInstances/CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036" --point-in-time 2021-06-02T18:53:44.4465407Z --from-prefix-pattern container1/text1 container2/text4 --to-prefix-pattern container1/text4 container2/text41
+```
+```output
{ "object_type": "AzureBackupRecoveryTimeBasedRestoreRequest", "recovery_point_time": "2021-06-02T18:53:44.4465407Z.0000000Z",
az dataprotection backup-instance restore initialize-for-item-recovery --datasou
}, "source_data_store_type": "OperationalStore" }
+```
--
+```azurecli-interactive
az dataprotection backup-instance restore initialize-for-item-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --backup-instance-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupInstances/CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036" --point-in-time 2021-06-02T18:53:44.4465407Z --from-prefix-pattern container1/text1 container2/text4 --to-prefix-pattern container1/text4 container2/text41 > restore.json ```
Track all the jobs using the [az dataprotection job list](/cli/azure/dataprotect
You can also use Az.ResourceGraph to track all jobs across all Backup vaults. Use the [az dataprotection job list-from-resourcegraph](/cli/azure/dataprotection/job#az_dataprotection_job_list_from_resourcegraph) command to get the relevant job which can be across any Backup vault.
-```azurepowershell-interactive
+```azurecli-interactive
az dataprotection job list-from-resourcegraph --datasource-type AzureBlob --operation Restore ```
backup Restore Managed Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-managed-disks-cli.md
List all backup instances within a vault using [az dataprotection backup-instanc
```azurecli-interactive az dataprotection backup-instance list-from-resourcegraph --datasource-type AzureDisk --datasource-id /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk
+```
-
+```output
[ { "datasourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/diskrg/providers/Microsoft.Compute/disks/CLITestDisk",
az dataprotection backup-instance list-from-resourcegraph --datasource-type Azur
"zones": null } ]-- ``` Once the instance is identified, fetch the relevant recovery point using the [az dataprotection recovery-point list](/cli/azure/dataprotection/recovery-point#az_dataprotection_recovery_point_list) command. ```azurecli-interactive az dataprotection recovery-point list --backup-instance-name diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166 -g testBkpVaultRG --vault-name TestBkpVault
+```
+```output
{ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/testBkpVaultRG/providers/Microsoft.DataProtection/BackupVaults/TestBkpVault/backupInstances/diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166/recoveryPoints/5081ad8f1e6c4548ae89536d0d45c493", "name": "5081ad8f1e6c4548ae89536d0d45c493",
Track all jobs using the [az dataprotection job list](/cli/azure/dataprotection/
You can also use Az.ResourceGraph to track all jobs across all Backup vaults. Use the [az dataprotection job list-from-resourcegraph](/cli/azure/dataprotection/job#az_dataprotection_job_list_from_resourcegraph) command to get the relevant job that can be across any Backup vault.
-```azurepowershell-interactive
+```azurecli-interactive
az dataprotection job list-from-resourcegraph --datasource-type AzureDisk --operation Restore ```
baremetal-infrastructure High Availability Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/oracle/high-availability-features.md
You can use this feature instead of a time-delayed redo on the standby database.
The Oracle Database keeps flashback logs in the fast recovery area (FRA). These logs are separate from the redo logs and require more space within the FRA. By default, 24 hours of flashback logs are kept, but you can change this setting per your requirements.
-## Oracle Real Application Clusters
-
-[Oracle Real Application Clusters (RAC)](https://docs.oracle.com/en/database/oracle/oracle-database/19/racad/introduction-to-oracle-rac.html#GUID-5A1B02A2-A327-42DD-A1AD-20610B2A9D92) allows multiple interconnected servers to appear as one database service to end users and applications. This feature removes many points of failure and is a recognized high availability active/active solution for Oracle databases.
-
-As shown in the following figure from Oracle's [High Availability Overview and Best Practices](https://docs.oracle.com/en/database/oracle/oracle-database/19/haovw/ha-features.html), a single RAC database is presented to the application layer. The applications connect to the SCAN listener, which directs traffic to a specific database instance. RAC controls access from multiple instances to maintain data consistency across separate compute nodes.
-
-![Diagram showing an overview of the architecture of Oracle RAC.](media/oracle-high-availability/oracle-real-application-clusters.png)
-
-If one instance fails, the service continues on all other remaining instances. Each database deployed on the solution will be in a RAC configuration of n+1, where n is the minimum processing power required to support the service.
-
-Oracle Database services are used to allow connections to fail over between nodes when an instance fails transparently. Such failures may be planned or unplanned. Working with Oracle RAC Fast Application Notification, when an instance is made unavailable, the service is moved to a surviving node. The service moves to a node specified in the service configuration as either preferred or available.
-
-Another key feature of Oracle Database services is only starting a service depending on its role. This feature is used when there's a Data Guard failover. All patterns deployed using Data Guard are required to link a database service to a Data Guard role.
-
-For example, two services could be created, MY\_DB\_APP and MY\_DB\_AS. The MY\_DB\_APP service is started only when the database instance is started with the Data Guard role of PRIMARY. MY\_DB\_AS is only started when the Data Guard role is PHYSICAL\_STANDBY. This configuration allows for applications to point to the \_APP service, while also reporting, which can be offloaded to Active Standby and pointed to the \_AS service.
- ## Oracle Data Guard With Data Guard, you can maintain a copy of a database on separate physical hardware. Ideally, that hardware should be geographically removed from the primary database. Data Guard places no limit on the distance, although distance has a bearing on modes of protection. Increased distance adds latency between sites, which can cause some options (such as synchronous replication) to become untenable.
baremetal-infrastructure Oracle Baremetal Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/oracle/oracle-baremetal-architecture.md
This topology supports a single instance of Oracle Database with Oracle Data Gua
[![Diagram showing the architecture of a single instance of Oracle Database with Oracle Data Guard.](media/oracle-baremetal-architecture/single-instance-architecture.png)](media/oracle-baremetal-architecture/single-instance-architecture.png#lightbox)
-## Oracle Real Application Clusters (RAC) One Node
-
-This topology supports a RAC configuration with shared storage and GRID cluster. Database instances run only on one node (active-passive configuration).
-
-Features include:
--- Active-passive with Oracle RAC One Node-
- - Automatic fail-over
-
- - Fast restart on second node
--- Real-time fail-over and scalability with Oracle RAC--- Zero downtime rolling maintenance-
-[![Diagram showing the architecture of an Oracle RAC One Node active-passive configuration.](media/oracle-baremetal-architecture/one-node-rac-architecture.png)](media/oracle-baremetal-architecture/one-node-rac-architecture.png#lightbox)
-
-## RAC
-
-This topology supports an Oracle RAC configuration with shared storage and Grid cluster while multiple instances per database run concurrently (active-active configuration).
--- Performance is easy to scale through online provisioning of added servers. -- Users are active on all servers, and all servers share access to the same Oracle Database. -- All types of database maintenance can be performed either online or in rolling fashion for minimal or zero downtime. -- Oracle Active Data Guard (ADG) standby systems can easily serve a dual-purpose as test systems. -
-This configuration allows you to test all changes on an exact copy of the production database before they're applied to the production environment.
-
-> [!NOTE]
-> If you intend to use Active Data Guard Far Sync (synchronous mode), you'll need to consider the regional zones where this feature is supported. For geographical distributed regions only, we recommend using Data Guard with asynchronous mode.
-
-[![Diagram showing the architecture of an Oracle RAC active-active configuration.](media/oracle-baremetal-architecture/rac-architecture.png)](media/oracle-baremetal-architecture/rac-architecture.png#lightbox)
- ## Next steps Learn about provisioning your BareMetal instances for Oracle workloads.
baremetal-infrastructure Oracle Baremetal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/oracle/oracle-baremetal-overview.md
These instances are for running mission critical applications requiring an Oracl
Other features of BareMetal Infrastructure for Oracle include: - Oracle certified UCS blades - UCSB200-M5, UCSB460-M4, UCSB480-M5-- Oracle Real Application Clusters (RAC) node-to-node (multi-cast) communication using private virtual LAN (VLAN) -40 Gb. - Microsoft-managed hardware - Redundant storage, network, power, management - Monitoring for Infra, repairs, and replacement
baremetal-infrastructure Oracle Baremetal Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/oracle/oracle-baremetal-provision.md
Last updated 04/14/2021
In this article, we'll look at how to provision your BareMetal Infrastructure instances for Oracle workloads.
-The first step to provision your BareMetal instances is to work with your Microsoft CSA. They'll help you based on your specific workload needs and the architecture you're deploying, whether single instance, One Node RAC, or RAC. For more information on these topologies, see [Architecture of BareMetal Infrastructure for Oracle](oracle-baremetal-architecture.md).
+The first step to provision your BareMetal instances is to work with your Microsoft CSA. They'll help you based on your specific workload needs and the architecture you're deploying. For more information on these topologies, see [Architecture of BareMetal Infrastructure for Oracle](oracle-baremetal-architecture.md).
## Prerequisites
baremetal-infrastructure Oracle Baremetal Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/oracle/oracle-baremetal-storage.md
Last updated 04/14/2021
In this article, we'll give an overview of the storage offered by the BareMetal Infrastructure for Oracle workloads.
-BareMetal Infrastructure for Oracle offers NetApp Network File System (NFS) storage. NFS storage does not require Oracle Real Application Clusters (RAC) certification. For more information, see [Oracle RAC Technologies Matrix for Linux Clusters](https://www.oracle.com/database/technologies/tech-generic-linux-new.html).
-
+BareMetal Infrastructure for Oracle offers NetApp Network File System (NFS) storage.
This storage offering includes Tier 3 support from an OEM partner, using either A700s or A800s storage controllers. BareMetal Infrastructure storage offers these premium storage capabilities:
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md
Title: 'Connect to a VM using the native client and Azure Bastion'
+ Title: 'Connect to a VM using a native client and Azure Bastion'
-description: Learn how to connect to a VM from a Windows computer by using Bastion and the native client.
+description: Learn how to connect to a VM from a Windows computer by using Bastion and a native client.
Previously updated : 02/07/2022 Last updated : 03/03/2022
-# Connect to a VM using the native client (Preview)
+# Connect to a VM using a native client (Preview)
-This article helps you configure Bastion, and then connect to a VM in the VNet using the native client (SSH or RDP) on your local workstation. This feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+This article helps you configure your Bastion deployment, and then connect to a VM in the VNet using a native client (SSH or RDP) on your local computer. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include a local SSH key pair and Azure Active Directory (Azure AD). Additionally, with this feature you can upload and download files, depending on the connection type and client.
+
+Your capabilities on the VM when connecting via a native client are dependent on what is enabled on the native client. Controlling access to features such as file transfer via Bastion isn't supported.
> [!NOTE]
-> * This configuration requires the Standard SKU tier for Azure Bastion.
-> * You can now upload and download files using the native client. To learn more, refer to [Upload and download files using the native client](vm-upload-download-native.md).
-> * The user's capabilities on the VM using a native client are dependent on what is enabled on the native client. Controlling access to features such as file transfer via the Bastion is not supported.
+> This configuration requires the Standard SKU tier for Azure Bastion.
+
+There are two different sets of connection instructions.
+
+* Connect to a VM from the [native client on a Windows local computer](#connect). This lets you do the following:
+
+ * Connect using SSH or RDP.
+ * [Upload and download files](vm-upload-download-native.md#rdp) over RDP.
+ * If you want to connect using SSH and need to upload files to your target VM, use the **az network bastion tunnel** command instead.
+
+* Connect to a VM using the [**az network bastion tunnel** command](#connect-tunnel). This lets you do the following:
+
+ * Use native clients on *non*-Windows local computers (example: a Linux PC).
+ * Use the native client of your choice. (This includes the Windows native client.)
+ * Connect using SSH or RDP.
+ * Set up concurrent VM sessions with Bastion.
+ * [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM from your local computer. File download from the target VM to the local client is currently not supported for this command.
+
+**Preview limitations**
Currently, this feature has the following limitation:
-* Signing in using an SSH private key stored in Azure Key Vault isn't supported with this feature. Download your private key to a file on your local machine before signing in to your Linux VM using an SSH key pair.
+* Signing in using an SSH private key stored in Azure Key Vault isn't supported with this feature. Before signing in to your Linux VM using an SSH key pair, download your private key to a file on your local machine.
## <a name="prereq"></a>Prerequisites
-Before you begin, verify that you've met the following criteria:
+Before you begin, verify that you have the following prerequisites:
* The latest version of the CLI commands (version 2.32 or later) is installed. For information about installing the CLI commands, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Get Started with Azure CLI](/cli/azure/get-started-with-azure-cli). * An Azure virtual network. * A virtual machine in the virtual network.
+* The VM's Resource ID. The Resource ID can be easily located in the Azure portal. Go to the Overview page for your VM and select the *JSON View* link to open the Resource JSON. Copy the Resource ID at the top of the page to your clipboard to use later when connecting to your VM.
* If you plan to sign in to your virtual machine using your Azure AD credentials, make sure your virtual machine is set up using one of the following methods:
- * Enable Azure AD sign-in for a [Windows VM](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md) or [Linux VM](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
+ * [Enable Azure AD sign-in for a Windows VM](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md) or [Linux VM](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
* [Configure your Windows VM to be Azure AD-joined](../active-directory/devices/concept-azure-ad-join.md). * [Configure your Windows VM to be hybrid Azure AD-joined](../active-directory/devices/concept-azure-ad-join-hybrid.md). ## <a name="configure"></a>Configure Bastion
-Follow the instructions that pertain to your environment.
+You can either [modify an existing Bastion deployment](#modify-host), or [deploy Bastion](#configure-new) to a virtual network.
-### <a name="modify-host"></a>To modify an existing bastion host
+### <a name="modify-host"></a>To modify an existing Bastion deployment
-If you have already configured Bastion for your VNet, modify the following settings:
+If you have already deployed Bastion to your VNet, modify the following configuration settings:
1. Navigate to the **Configuration** page for your Bastion resource. Verify that the SKU is **Standard**. If it isn't, change it to **Standard** from the dropdown. 1. Check the box for **Native Client Support** and apply your changes. :::image type="content" source="./media/connect-native-client-windows/update-host.png" alt-text="Settings for updating an existing host with Native Client Support box selected." lightbox="./media/connect-native-client-windows/update-host-expand.png":::
-### <a name="configure-new"></a>To configure a new bastion host
+### <a name="configure-new"></a>To deploy Bastion to a VNet
-If you don't already have a bastion host configured, see [Create a bastion host](tutorial-create-host-portal.md#createhost). When configuring the bastion host, specify the following settings:
+If you haven't already deployed Bastion to your VNet, [deploy Bastion](tutorial-create-host-portal.md#createhost). When configuring Bastion, specify the following settings:
-1. On the **Basics** tab, for **Instance Details -> Tier** select **Standard** to create a bastion host using the Standard SKU.
+1. On the **Basics** tab, for **Instance Details -> Tier** select **Standard** to deploy Bastion using the Standard SKU.
:::image type="content" source="./media/connect-native-client-windows/standard.png" alt-text="Settings for a new bastion host with Standard SKU selected." lightbox="./media/connect-native-client-windows/standard.png"::: 1. On the **Advanced** tab, check the box for **Native Client Support**.
Verify that the following roles and ports are configured in order to connect.
* Reader role on the virtual machine. * Reader role on the NIC with private IP of the virtual machine. * Reader role on the Azure Bastion resource.
-* Virtual Machine Administrator Login or Virtual Machine User Login role, if you're using the Azure AD sign-in method. You only need to do this if you're enabling Azure AD login using the process described in this article: [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md) or [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
+* Virtual Machine Administrator Login or Virtual Machine User Login role, if you're using the Azure AD sign-in method. You only need to do this if you're enabling Azure AD login using the processes outlined in one of these articles:
+
+ * [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md)
+ * [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md)
### Ports
To connect to a Windows VM using native client support, you must have the follow
* Inbound port: RDP (3389) *or* * Inbound port: Custom value (you'll then need to specify this custom port when you connect to the VM via Azure Bastion)
-## <a name="connect"></a>Connect to a VM from a Windows local workstation
+## <a name="connect"></a>Connect - Windows native client
+
+This section helps you connect to your virtual machine from the native client on a local Windows computer. If you want to upload and download files after connecting, you must use an RDP connection. For more information about file transfers, see [Upload and download files](vm-upload-download-native.md).
+
+Use the example that corresponds to the type of target VM to which you want to connect.
-This section helps you connect to your virtual machine from a Windows local workstation. Use the steps that correspond to the type of VM you want to connect to.
+* [Windows VM](#connect-windows)
+* [Linux VM](#connect-linux)
-1. Sign in to your Azure account and select your subscription containing your Bastion resource.
+### <a name="connect-windows"></a>Connect to a Windows VM
+
+1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource.
- ```azurecli-interactive
+ ```azurecli
az login az account list az account set --subscription "<subscription ID>" ```
-1. Use the example options that correspond to the type of VM you want to connect to ([Linux VM](#connect-linux) or [Windows VM](#connect-windows)).
+1. Sign in to your target Windows VM using one of the following example options. If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command.
-### <a name="connect-linux"></a>Connect to a Linux VM
+ **RDP:**
-1. Sign in to your target Linux VM using one of the following example options.
+ To connect via RDP, use the following command. You'll then be prompted to input your credentials. You can use either a local username and password, or your Azure AD credentials. For more information, see [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md).
- > [!NOTE]
- > If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command.
- >
+ ```azurecli
+ az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>"
+ ```
- * **Azure AD:** If you're signing in to an Azure AD login-enabled VM, use the following command. For more information, see [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
+ **SSH:**
- ```azurecli-interactive
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "AAD"
- ```
+ The SSH CLI extension is currently in preview. You can install the extension by running `az extension add --name ssh`. To sign in using an SSH key pair, use the following example.
- * **SSH:** If you're signing in using an SSH key pair, use the following command.
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+ ```
- ```azurecli-interactive
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
- ```
+1. Once you sign in to your target VM, the native client on your computer will open up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
+
+### <a name="connect-linux"></a>Connect to a Linux VM
- * **Username/password:** If you're signing in using a local username and password, use the following command. You'll then be prompted for the password for the target VM.
+1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource.
- ```azurecli-interactive
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "password" --username "<Username>"
- ```
-
- > [!NOTE]
- > VM sessions using the **az network bastion ssh** command do not support file transfer. To use file transfer with SSH over Bastion, see the [az network bastion tunnel](#connect-tunnel) section.
- >
+ ```azurecli
+ az login
+ az account list
+ az account set --subscription "<subscription ID>"
+ ```
-### <a name="connect-windows"></a>Connect to a Windows VM
+1. Sign in to your target Linux VM using one of the following example options. If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command.
+
+ **Azure AD:**
+
+ If youΓÇÖre signing in to an Azure AD login-enabled VM, use the following command. For more information, see [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
-1. Sign in to your target Windows VM using one of the following example options.
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "AAD"
+ ```
- > [!NOTE]
- > If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command.
- >
+ **SSH:**
- * **RDP:** To connect via RDP, use the following command. You'll then be prompted to input your credentials. You can use either a local username and password, or your Azure AD credentials. For more information, see [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md).
+ The SSH CLI extension is currently in preview. You can install the extension by running `az extension add --name ssh`. To sign in using an SSH key pair, use the following example.
- ```azurecli-interactive
- az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>"
- ```
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+ ```
- * **SSH:** To sign in using an SSH key pair, use the following command. The SSH CLI extension is currently in Preview. The extension can be installed by running, "az extension add --name ssh".
+ **Username/password:**
- ```azurecli-interactive
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+ If you're signing in using a local username and password, use the following command. You'll then be prompted for the password for the target VM.
+
+ ```azurecli
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "password" --username "<Username>"
```
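   For example (a sketch using a hypothetical port value), if the target VM listens for SSH on a custom port such as 2222, add **--resource-port** to any of the commands above:

   ```azurecli
   az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "password" --username "<Username>" --resource-port "2222"
   ```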
-1. Once you sign in to your target VM, the native client on your workstation will open up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
+1. Once you sign in to your target VM, the native client on your computer will open up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
+
+## <a name="connect-tunnel"></a>Connect - other native clients
-## <a name="connect-tunnel"></a>Connect to a VM using the *az network bastion tunnel* command
+This section helps you connect to your virtual machine from native clients on *non*-Windows local computers (example: a Linux PC) using the **az network bastion tunnel** command. You can also connect using this method from a Windows computer. This is helpful when you require an SSH connection and want to upload files to your VM.
-This section helps you connect to your virtual machine using the *az network bastion tunnel* command, which allows you to:
-* Use native clients on *non*-Windows local workstations (ex: a Linux PC).
-* Use a native client of your choice.
-* Set up concurrent VM sessions with Bastion.
-* Upload files to your target VM from your local workstation.
+This connection supports file upload from the local computer to the target VM. For more information, see [Upload files](vm-upload-download-native.md).
-1. Sign in to your Azure account, and select your subscription containing your Bastion resource.
+1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource.
- ```azurecli-interactive
+ ```azurecli
   az login
   az account list
   az account set --subscription "<subscription ID>"
   ```
-2. Open the tunnel to your target VM using the following command.
+1. Open the tunnel to your target VM using the following command.
- ```azurecli-interactive
+ ```azurecli
   az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
   ```
-3. 1. Connect to your target VM using SSH or RDP, the native client of your choice, and the local machine port you specified in Step 2. For example, you can use the following command if you have the OpenSSH client installed on your local computer:
- ```azurecli-interactive
- ssh <username>@127.0.0.1 -p <LocalMachinePort>
- ```
+1. Connect to your target VM using SSH or RDP, the native client of your choice, and the local machine port you specified in Step 2.
+ For example, you can use the following command if you have the OpenSSH client installed on your local computer:
+
+ ```azurecli
+ ssh <username>@127.0.0.1 -p <LocalMachinePort>
+ ```
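   If you prefer an RDP client instead, a minimal sketch (assuming the Windows Remote Desktop client, `mstsc`, and that the tunnel forwards your VM's RDP port) is:

   ```console
   mstsc /v:127.0.0.1:<LocalMachinePort>
   ```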
## Next steps
-Read the [Bastion FAQ](bastion-faq.md).
+[Upload and download files](vm-upload-download-native.md)
bastion Vm Upload Download Native https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-upload-download-native.md
Previously updated : 01/31/2022 Last updated : 03/03/2022 # Customer intent: I want to upload or download files using Bastion.
-# Upload and download files using the native client: Azure Bastion (Preview)
+# Upload and download files using the native client (Preview)
-Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or SSH client. To learn more about native client support, refer to [Connect to a VM using the native client](connect-native-client-windows.md). You can use either SSH or RDP to upload files to a VM from your local computer. To download files from a VM, you must use RDP.
+Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or SSH client. To learn more about native client support, refer to [Connect to a VM using the native client](connect-native-client-windows.md).
-> [!NOTE]
-> * Uploading and downloading files is supported using the native client only. You can't upload and download files using PowerShell or via the Azure portal.
-> * This feature requires the Standard SKU. The Basic SKU doesn't support using the native client.
->
+* File transfers are supported using the native client only. You can't upload or download files using PowerShell or via the Azure portal.
+* To both [upload and download files](#rdp), you must use the Windows native client and RDP.
+* You can [upload files](#tunnel-command) to a VM using the native client of your choice and either RDP or SSH.
+* This feature requires the Standard SKU. The Basic SKU doesn't support using the native client.
-## Upload and download files using the *az network bastion rdp* command
+## Prerequisites
-This section helps you transfer files between your local Windows computer and your target VM over RDP. The *az network bastion rdp* command uses the native client MSTSC to connect to the target VM. Once connected to the target VM, you can transfer files using right-click, then **Copy** and **Paste**.
+* Install Azure CLI (version 2.32 or later) to run the commands in this article. For information about installing the CLI commands, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Get Started with Azure CLI](/cli/azure/get-started-with-azure-cli).
+* Get the Resource ID for the VM to which you want to connect. The Resource ID can be easily located in the Azure portal. Go to the Overview page for your VM and select the *JSON View* link to open the Resource JSON. Copy the Resource ID at the top of the page to your clipboard to use later when connecting to your VM.
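  As a CLI alternative (a sketch assuming the `az vm show` command with placeholders for your VM's resource group and name), you can also return the Resource ID directly:

  ```azurecli
  az vm show --resource-group "<ResourceGroupName>" --name "<VMName>" --query id --output tsv
  ```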
-1. Sign in to your Azure account and select the subscription containing your Bastion resource.
+## <a name="rdp"></a>Upload and download files - RDP
- ```azurecli-interactive
- az login
- az account list
- az account set --subscription "<subscription ID>"
- ```
+The steps in this section apply when connecting to a target VM from a Windows local computer using the native Windows client and RDP. The **az network bastion rdp** command uses the native client MSTSC. Once connected to the target VM, you can upload and download files using **right-click**, then **Copy** and **Paste**. To learn more about this command and how to connect, see [Connect to a VM using a native client](connect-native-client-windows.md).
+
+> [!NOTE]
+> File transfer over SSH is not supported using this method. Instead, use the [az network bastion tunnel command](#tunnel-command) to upload files over SSH.
+>
+
+1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource.
+
+ ```azurecli
+ az login
+ az account list
+ az account set --subscription "<subscription ID>"
+ ```
1. Sign in to your target VM via RDP using the following command. You can use either a local username and password, or your Azure AD credentials. To learn more about how to use Azure AD to sign in to your Azure Windows VMs, see [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md).
- ```azurecli-interactive
+ ```azurecli
   az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>"
   ```

1. Once you sign in to your target VM, the native client on your computer will open up with your VM session. You can now transfer files between your VM and local machine using right-click, then **Copy** and **Paste**.
-## Upload files using the *az network bastion tunnel* command
+## <a name="tunnel-command"></a>Upload files - SSH and RDP
-This section helps you upload files from your local computer to your target VM over SSH or RDP using the *az network bastion tunnel* command. The *az network tunnel command* allows you to use a native client of your choice on *non*-Windows local workstations. To learn more about the tunnel command, refer to [Connect to a VM using the *az network bastion tunnel* command](connect-native-client-windows.md#connect-tunnel).
+The steps in this section apply to native clients other than Windows, and to Windows native clients that connect over SSH to upload files.
+This section helps you upload files from your local computer to your target VM over SSH or RDP using the **az network bastion tunnel** command. This command doesn't support file download from the target VM to your local computer. To learn more about the tunnel command and how to connect, see [Connect to a VM using a native client](connect-native-client-windows.md).
> [!NOTE]
-> File download over SSH is not currently supported.
+> This command can be used to upload files from your local computer to the target VM. File download is not supported.
>
-1. Sign in to your Azure account and select the subscription containing your Bastion resource.
+1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource.
- ```azurecli-interactive
- az login
- az account list
- az account set --subscription "<subscription ID>"
- ```
+ ```azurecli
+ az login
+ az account list
+ az account set --subscription "<subscription ID>"
+ ```
1. Open the tunnel to your target VM using the following command:
- ```azurecli-interactive
+ ```azurecli
   az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
   ```

1. Open a second command prompt to connect to your target VM through the tunnel. In this second command prompt window, you can upload files from your local machine to your target VM using the following command:
- ```azurecli-interactive
+ ```azurecli
   scp -P <LocalMachinePort> <local machine file path> <username>@127.0.0.1:<target VM file path>
   ```
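   For example (purely hypothetical values for the local port, file, and username):

   ```console
   scp -P 50022 ./myfile.txt azureuser@127.0.0.1:/home/azureuser/myfile.txt
   ```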
-1. Connect to your target VM using SSH, the native client of your choice, and the local machine port you specified in Step 3. For example, you can use the following command if you have the OpenSSH client installed on your local computer:
+1. Connect to your target VM using SSH or RDP, the native client of your choice, and the local machine port you specified in Step 3.
+
+ For example, you can use the following command if you have the OpenSSH client installed on your local computer:
- ```azurecli-interactive
+ ```azurecli
   ssh <username>@127.0.0.1 -p <LocalMachinePort>
   ```

## Next steps

-- Read the [Bastion FAQ](bastion-faq.md)
+* Read the [Bastion FAQ](bastion-faq.md)
chaos-studio Chaos Studio Quickstart Dns Outage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-quickstart-dns-outage.md
First you register a fault provider on the subscription where your network secur
1. Launch a [Cloud Shell](https://shell.azure.com/). 1. Replace **$SUBSCRIPTION_ID** with the Azure subscription ID containing the network security group you wish to use in your experiment and run the following command to ensure the provider will be registered on the correct subscription.
- ```bash
+ ```azurecli
   az account set --subscription $SUBSCRIPTION_ID
   ```

1. Drag and drop the **AzureNetworkSecurityGroupChaos.json** into the cloud shell window to upload the file.
1. Replace **$SUBSCRIPTION_ID** used in the prior step and execute the following command to register the AzureNetworkSecurityGroupChaos fault provider.
- ```bash
+ ```azurecli
az rest --method put --url "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/providers/microsoft.chaos/chaosProviderConfigurations/AzureNetworkSecurityGroupChaos?api-version=2021-06-21-preview" --body @AzureNetworkSecurityGroupChaos.json --resource "https://management.azure.com" ```
Follow these steps if you're not going to continue to using any faults related t
1. Launch a [Cloud Shell](https://shell.azure.com/). 1. Replace **$SUBSCRIPTION_ID** with the Azure subscription ID where the network security group fault provider was provisioned and run the following command.
- ```bash
+ ```azurecli
az rest --method delete --url "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/providers/microsoft.chaos/chaosProviderConfigurations/AzureNetworkSecurityGroupChaos?api-version=2021-06-21-preview" --resource "https://management.azure.com" ```
chaos-studio Chaos Studio Samples Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-samples-rest-api.md
The Squall REST APIs can be used to start and stop experiments, query target sta
#### Enumerate details about the Microsoft.Chaos Resource Provider
-```bash
+```azurecli
az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Chaos?api-version={apiVersion}" --resource "https://management.azure.com"
```

#### List all the operations of the Chaos Studio Resource Provider
-```bash
+```azurecli
az rest --method get --url "https://management.azure.com/providers/Microsoft.Chaos/operations?api-version={apiVersion}" --resource "https://management.azure.com"
```

#### List Chaos Provider Configurations
-```bash
+```azurecli
az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/microsoft.chaos/chaosProviderConfigurations/?api-version={apiVersion}" --resource "https://management.azure.com" --verbose
```

#### Create Chaos Provider Configuration
-```bash
+```azurecli
az rest --method put --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/microsoft.chaos/chaosProviderConfigurations/{chaosProviderType}?api-version={apiVersion}" --body @{providerSettings.json} --resource "https://management.azure.com" ```
az rest --method put --url "https://management.azure.com/subscriptions/{subscrip
#### List All the Targets or Agents Under a Subscription
-```bash
+```azurecli
az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Chaos/chaosTargets/?api-version={apiVersion}" --url-parameter "chaosProviderType={chaosProviderType}" --resource "https://management.azure.com" ```
az rest --method get --url "https://management.azure.com/subscriptions/{subscrip
#### List all experiments in a resource group
-```bash
+```azurecli
az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Chaos/chaosExperiments?api-version={apiVersion}" --resource "https://management.azure.com"
```

#### Get an experiment configuration details by name
-```bash
+```azurecli
az rest --method get --url "https://management.azure.com/{experimentId}?api-version={apiVersion}" --resource "https://management.azure.com"
```

#### Create or update an experiment
-```bash
+```azurecli
az rest --method put --url "https://management.azure.com/{experimentId}?api-version={apiVersion}" --body @{experimentName.json} --resource "https://management.azure.com"
```

#### Delete an experiment
-```bash
+```azurecli
az rest --method delete --url "https://management.azure.com/{experimentId}?api-version={apiVersion}" --resource "https://management.azure.com" --verbose
```

#### Start an experiment
-```bash
+```azurecli
az rest --method get --url "https://management.azure.com/{experimentId}/start?api-version={apiVersion}" --resource "https://management.azure.com"
```

#### Get statuses (History) of an experiment
-```bash
+```azurecli
az rest --method get --url "https://management.azure.com/{experimentId}/statuses?api-version={apiVersion}" --resource "https://management.azure.com"
```

#### Get status of an experiment
-```bash
+```azurecli
az rest --method get --url "https://management.azure.com/{experimentId}/status?api-version={apiVersion}" --resource "https://management.azure.com"
```

#### Cancel (Stop) an experiment
-```bash
+```azurecli
az rest --method get --url "https://management.azure.com/{experimentId}/cancel?api-version={apiVersion}" --resource "https://management.azure.com"
```

#### List the details of the last two experiment executions
-```bash
+```azurecli
az rest --method get --url "https://management.azure.com/{experimentId}/executiondetails?api-version={apiVersion}" --resource "https://management.azure.com"
```

#### List the details of a specific experiment execution
-```bash
+```azurecli
az rest --method get --url "https://management.azure.com/{experimentId}/executiondetails/{executionDetailsId}?api-version={apiVersion}" --resource "https://management.azure.com"
```
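In these samples, `{experimentId}` appears to be the experiment's full resource path. Based on the list URL shown earlier, a hypothetical fully expanded status call might look like the following sketch:

```azurecli
az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Chaos/chaosExperiments/{experimentName}/status?api-version={apiVersion}" --resource "https://management.azure.com"
```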
chaos-studio Chaos Studio Targets Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-targets-capabilities.md
An experiment can only inject faults on onboarded targets with the corresponding
## Listing capability names and parameters

For reference, a list of capability names, fault URNs, and parameters is available [in our fault library](chaos-studio-fault-library.md), but you can use the HTTP response from creating a capability, or do a GET on an existing capability, to get this information on demand. For example, doing a GET on a VM shutdown capability:
-```bash
+```azurecli
az rest --method get --url "https://management.azure.com/subscriptions/fd9ccc83-faf6-4121-9aff-2a2d685ca2a2/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachine/capabilities/shutdown-1.0?api-version=2021-08-11-preview" ```
Will return the following JSON:
The `properties.urn` property is used to define the fault you want to run in a chaos experiment. To understand the schema for this fault's parameters, you can GET the schema referenced by `properties.parametersSchema`.
-```bash
+```azurecli
az rest --method get --url "https://schema-tc.eastus.chaos-prod.azure.com/targetTypes/Microsoft-VirtualMachine/capabilityTypes/Shutdown-1.0/parametersSchema.json" ```
chaos-studio Chaos Studio Tutorial Aks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-portal.md
Before you can run Chaos Mesh faults in Chaos Studio, you need to install Chaos
1. Run the following commands in an [Azure Cloud Shell](../cloud-shell/overview.md) window where you have the active subscription set to be the subscription where your AKS cluster is deployed. Replace `$RESOURCE_GROUP` and `$CLUSTER_NAME` with the resource group and name of your cluster resource.
-```bash
+```azurecli
az aks get-credentials -g $RESOURCE_GROUP -n $CLUSTER_NAME
+```
+
+```bash
helm repo add chaos-mesh https://charts.chaos-mesh.org
helm repo update
kubectl create ns chaos-testing
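# (Sketch) A typical next step installs the Chaos Mesh chart into the chaos-testing
# namespace; these flags assume containerd as the AKS container runtime.
helm install chaos-mesh chaos-mesh/chaos-mesh --namespace=chaos-testing --set chaosDaemon.runtime=containerd --set chaosDaemon.socketPath=/run/containerd/containerd.sock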
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
The container provides REST-based query prediction endpoint APIs.
For the latest preview:
-Use the same Swagger path as 3.2 but a different port if you have already deployed 3.2 at the 5000 port.
+Use the same Swagger path as 3.2, but a different port if you have already deployed 3.2 on port 5000.
-# [Version 3.2](#tab/version-3-2)
-
-Use the host, `http://localhost:5000`, for container APIs. You can view the Swagger path at: `http://localhost:5000/swagger/vision-v3.2-read/swagger.json`.
-
-# [Version 2.0-preview](#tab/version-2)
-
-Use the host, `http://localhost:5000`, for container APIs. You can view the Swagger path at: `http://localhost:5000/swagger/vision-v2.0-preview-read/swagger.json`.
+Use the host, `http://localhost:5000`, for container APIs. You can view the Swagger path at: `http://localhost:5000/swagger/`.
cognitive-services Computer Vision Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-resource-container-config.md
The container also has the following container-specific configuration settings:
|No|Queue:Azure:QueueVisibilityTimeoutInMilliseconds | v3.x containers only. The time for a message to be invisible when another worker is processing it. | |No|Storage::DocumentStore::MongoDB|v2.0 containers only. Enables MongoDB for permanent result storage. | |No|Storage:ObjectStore:AzureBlob:ConnectionString| v3.x containers only. Azure blob storage connection string. |
-|No|Storage:TimeToLiveInDays| v3.x containers only. Result expiration period in days. The setting specifies when the system should clear recognition results. The default is 2 days (48 hours), which means any result live for longer than that period is not guaranteed to be successfully retrieved. |
+|No|Storage:TimeToLiveInDays| v3.x containers only. Result expiration period in days. The setting specifies when the system should clear recognition results. The default is 2 days, which means any result that lives longer than that period is not guaranteed to be successfully retrieved. The value is an integer and must be between 1 and 7 days.|
+|No|StorageTimeToLiveInMinutes| v3.2-model-2021-09-30-preview and newer containers. Result expiration period in minutes. The setting specifies when the system should clear recognition results. The default is 2 days (2880 minutes), which means any result that lives longer than that period is not guaranteed to be successfully retrieved. The value is an integer and must be between 60 minutes and 7 days (10080 minutes).|
|No|Task:MaxRunningTimeSpanInMinutes| v3.x containers only. Maximum running time for a single request. The default is 60 minutes. |
-|No|EnableSyncNTPServer| v3.x containers only. Enables the NTP server synchronization mechanism, which ensures synchronization between the system time and expected task runtime. Note that this requires external network traffic. The default is `true`. |
-|No|NTPServerAddress| v3.x containers only. NTP server for the time sync-up. The default is `time.windows.com`. |
-|No|Mounts::Shared| v3.x containers only. Local folder for storing recognition result. The default is `/share`. For running container without using Azure blob storage, we recommend mounting a volume to this folder to ensure you have enough space for the recognition results. |
+|No|EnableSyncNTPServer| v3.x containers only, except for v3.2-model-2021-09-30-preview and newer containers. Enables the NTP server synchronization mechanism, which ensures synchronization between the system time and expected task runtime. Note that this requires external network traffic. The default is `true`. |
+|No|NTPServerAddress| v3.x containers only, except for v3.2-model-2021-09-30-preview and newer containers. NTP server for the time sync-up. The default is `time.windows.com`. |
+|No|Mounts:Shared| v3.x containers only. Local folder for storing recognition result. The default is `/share`. For running container without using Azure blob storage, we recommend mounting a volume to this folder to ensure you have enough space for the recognition results. |
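As an illustrative sketch only (the image name/tag and exact values are assumptions; substitute the Read container image and settings appropriate for your deployment), container-specific settings like these are typically passed as `name=value` arguments to `docker run`:

```bash
docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
  mcr.microsoft.com/azure-cognitive-services/vision/read:3.2 \
  Eula=accept \
  Billing="<endpoint-uri>" \
  ApiKey="<api-key>" \
  Storage:TimeToLiveInDays=5
```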
## ApiKey configuration setting
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters
```bash
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
```
-```bash
+```azurecli
sudo az login ```
-```bash
+```azurecli
sudo az account set --subscription "<name or ID of Azure Subscription>" ```
-```bash
+```azurecli
sudo az group create --name "<resource-group-name>" --location "<your-region>"
```

See [Region Support](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services) for available regions.
-```bash
+```azurecli
sudo az iot hub create --name "<iothub-group-name>" --sku S1 --resource-group "<resource-group-name>" ```
-```bash
+```azurecli
sudo az iot hub device-identity create --hub-name "<iothub-name>" --device-id "<device-name>" --edge-enabled ```
Next, register the host computer as an IoT Edge device in your IoT Hub instance,
You need to connect the IoT Edge device to your Azure IoT Hub. You need to copy the connection string from the IoT Edge device you created earlier. Alternatively, you can run the below command in the Azure CLI.
-```bash
+```azurecli
sudo az iot hub device-identity connection-string show --device-id my-edge-device --hub-name test-iot-hub-123 ```
Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters
```bash
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
```
-```bash
+```azurecli
sudo az login ```
-```bash
+```azurecli
sudo az account set --subscription "<name or ID of Azure Subscription>" ```
-```bash
+```azurecli
sudo az group create --name "<resource-group-name>" --location "<your-region>"
```

See [Region Support](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services) for available regions.
-```bash
+```azurecli
sudo az iot hub create --name "<iothub-name>" --sku S1 --resource-group "<resource-group-name>" ```
-```bash
+```azurecli
sudo az iot hub device-identity create --hub-name "<iothub-name>" --device-id "<device-name>" --edge-enabled ```
Next, register the VM as an IoT Edge device in your IoT Hub instance, using a [c
You need to connect the IoT Edge device to your Azure IoT Hub. You need to copy the connection string from the IoT Edge device you created earlier. Alternatively, you can run the below command in the Azure CLI.
-```bash
+```azurecli
sudo az iot hub device-identity connection-string show --device-id my-edge-device --hub-name test-iot-hub-123 ```
cognitive-services Customize Pronunciation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/customize-pronunciation.md
Title: Customize pronunciation
+ Title: Create structured text data
description: Use phonemes to customize pronunciation of words in Speech-to-Text. Previously updated : 10/19/2021 Last updated : 03/01/2022
-# Customize pronunciation
+# Create structured text data
Custom speech allows you to provide different pronunciations for specific words using the Universal Phone Set. The Universal Phone Set (UPS) is a machine-readable phone set that is based on the International Phonetic Alphabet (IPA). The IPA is used by linguists worldwide and is accepted as a standard. UPS pronunciations consist of a string of UPS phones, each separated by whitespace. The phone set is case-sensitive. UPS phone labels are all defined using ASCII character strings.
-For steps on implementing UPS, see [Structured text data for training phone sets](how-to-custom-speech-test-and-train.md#structured-text-data-for-training-public-preview)
+For steps on implementing UPS, see [Structured text data for training phone sets](how-to-custom-speech-test-and-train.md#structured-text-data-for-training-public-preview). Structured text data is not the same as [pronunciation files](how-to-custom-speech-test-and-train.md#pronunciation-data-for-training), and they cannot be used together.
-This structured text data is not the same as [pronunciation files](how-to-custom-speech-test-and-train.md#pronunciation-data-for-training), and they cannot be used together.
-
-## Languages Supported
-
-Use the table below to navigate to the UPS for the respective language.
-
-| Language | Locale |
-|-||
-| [English (United States)](phone-sets.md) | `en-US` |
+See the sections in this article for the Universal Phone Set for each locale.
+## en-US
## Next steps
-* [Provide UPS pronunciation to Custom Speech](how-to-custom-speech-test-and-train.md#structured-text-data-for-training-public-preview)
+- [Upload your data](how-to-custom-speech-upload-data.md)
+- [Inspect your data](how-to-custom-speech-inspect-data.md)
+- [Train your model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Human Labeled Transcriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-human-labeled-transcriptions.md
In Japanese (ja-JP), there's a maximum length of 90 characters for each sentence
## Next Steps

-- [Prepare and test your data](./how-to-custom-speech-test-and-train.md)
- [Inspect your data](how-to-custom-speech-inspect-data.md)
- [Evaluate your data](how-to-custom-speech-evaluate-data.md)
- [Train your model](how-to-custom-speech-train-model.md)
-- [Deploy your model](./how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
If you face the problem described in the previous paragraph, you can quickly dec
In regions with dedicated hardware for training, the Speech service will use up to 20 hours of audio for training. In other regions, it will only use up to 8 hours of audio.
-## Upload data
-
-To upload your data:
-
-1. Go to [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. After you create a project, go to the **Speech datasets** tab. Select **Upload data** to start the wizard and create your first dataset.
-1. Select a speech data type for your dataset, and upload your data.
-
-1. Specify whether the dataset will be used for **Training** or **Testing**.
-
- There are many types of data that can be uploaded and used for **Training** or **Testing**. Each dataset that you upload must be correctly formatted before uploading, and it must meet the requirements for the data type that you choose. Requirements are listed in the following sections.
-
-1. After your dataset is uploaded, you can either:
-
- * Go to the **Train custom models** tab to train a custom model.
- * Go to the **Test models** tab to visually inspect quality with audio-only data or evaluate accuracy with audio + human-labeled transcription data.
-
-### Upload data by using Speech-to-text REST API v3.0
-
-You can use [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) to automate any operations related to your custom models. In particular, you can use the REST API to upload a dataset.
-
-To create and upload a dataset, use a [Create Dataset](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) request.
-
-A dataset that you create by using the Speech-to-text REST API v3.0 will *not* be connected to any of the Speech Studio projects, unless you specify a special parameter in the request body (see the code block later in this section). Connection with a Speech Studio project is *not* required for any model customization operations, if you perform them by using the REST API.
-
-When you log on to Speech Studio, its user interface will notify you when any unconnected object is found (like datasets uploaded through the REST API without any project reference). The interface will also offer to connect such objects to an existing project.
-
-To connect the new dataset to an existing project in Speech Studio during its upload, use [Create Dataset](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) and fill out the request body according to the following format:
-
-```json
-{
- "kind": "Acoustic",
- "contentUrl": "https://contoso.com/mydatasetlocation",
- "locale": "en-US",
- "displayName": "My speech dataset name",
- "description": "My speech dataset description",
- "project": {
- "self": "https://westeurope.api.cognitive.microsoft.com/speechtotext/v3.0/projects/c1c643ae-7da5-4e38-9853-e56e840efcb2"
- }
-}
-```
-
-You can obtain the project URL that's required for the `project` element by using the [Get Projects](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request.
- ## Audio + human-labeled transcript data for training or testing You can use audio + human-labeled transcript data for both training and testing purposes. You must provide human-labeled transcriptions (word by word) for comparison:
Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a>
## Next steps
+* [Upload your data](how-to-custom-speech-upload-data.md)
* [Inspect your data](how-to-custom-speech-inspect-data.md)
-* [Evaluate your data](how-to-custom-speech-evaluate-data.md)
* [Train a custom model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
+
+ Title: "Upload data for Custom Speech - Speech service"
+
+description: Learn about how to upload data to test or train a Custom Speech model.
++++++ Last updated : 03/03/2022+++
+# Upload data for Custom Speech
+
+You need audio or text data for testing the accuracy of Microsoft speech recognition or training your custom models. For information about the data types supported for testing or training your model, see [Prepare data for Custom Speech](how-to-custom-speech-test-and-train.md).
+
+## Upload data in Speech Studio
+
+To upload your data:
+
+1. Go to [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. After you create a project, go to the **Speech datasets** tab. Select **Upload data** to start the wizard and create your first dataset.
+1. Select a speech data type for your dataset, and upload your data.
+
+1. Specify whether the dataset will be used for **Training** or **Testing**.
+
+ There are many types of data that can be uploaded and used for **Training** or **Testing**. Each dataset that you upload must be correctly formatted before uploading, and it must meet the requirements for the data type that you choose. Requirements are listed in the following sections.
+
+1. After your dataset is uploaded, you can either:
+
+ * Go to the **Train custom models** tab to train a custom model.
+ * Go to the **Test models** tab to visually inspect quality with audio-only data or evaluate accuracy with audio + human-labeled transcription data.
+
+### Upload data by using Speech-to-text REST API v3.0
+
+You can use [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) to automate any operations related to your custom models. In particular, you can use the REST API to upload a dataset.
+
+To create and upload a dataset, use a [Create Dataset](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) request.
+
+A dataset that you create by using the Speech-to-text REST API v3.0 will *not* be connected to any of the Speech Studio projects, unless you specify a special parameter in the request body (see the code block later in this section). Connection with a Speech Studio project is *not* required for any model customization operations, if you perform them by using the REST API.
+
+When you log on to Speech Studio, its user interface will notify you when any unconnected object is found (like datasets uploaded through the REST API without any project reference). The interface will also offer to connect such objects to an existing project.
+
+To connect the new dataset to an existing project in Speech Studio during its upload, use [Create Dataset](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) and fill out the request body according to the following format:
+
+```json
+{
+ "kind": "Acoustic",
+ "contentUrl": "https://contoso.com/mydatasetlocation",
+ "locale": "en-US",
+ "displayName": "My speech dataset name",
+ "description": "My speech dataset description",
+ "project": {
+ "self": "https://westeurope.api.cognitive.microsoft.com/speechtotext/v3.0/projects/c1c643ae-7da5-4e38-9853-e56e840efcb2"
+ }
+}
+```
+
+You can obtain the project URL that's required for the `project` element by using the [Get Projects](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request.
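If you script the upload, a minimal sketch of the Create Dataset call (assuming a Speech resource in the `westeurope` region shown above, the v3.0 `datasets` endpoint, and the request body saved as `dataset.json`) might look like this:

```bash
curl -X POST "https://westeurope.api.cognitive.microsoft.com/speechtotext/v3.0/datasets" \
  -H "Ocp-Apim-Subscription-Key: <your-speech-resource-key>" \
  -H "Content-Type: application/json" \
  -d @dataset.json
```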
+
+## Next steps
+
+* [Inspect your data](how-to-custom-speech-inspect-data.md)
+* [Evaluate your data](how-to-custom-speech-evaluate-data.md)
+* [Train a custom model](how-to-custom-speech-train-model.md)
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/long-audio-api.md
Title: Long Audio API - Speech service
+ Title: Synthesize long-form text to speech - Speech service
description: Learn how the Long Audio API is designed for asynchronous synthesis of long-form text to speech.
Last updated 01/24/2022
-# Long Audio API
+# Synthesize long-form text to speech
The Long Audio API provides asynchronous synthesis of long-form text to speech. For example: audio books, news articles, and documents. There's no need to deploy a custom voice endpoint. Unlike the Text-to-speech API used by the Speech SDK, the Long Audio API can create synthesized audio longer than 10 minutes. This makes it ideal for publishers and audio content platforms to create long audio content like audio books in a batch.
confidential-computing Quick Create Confidential Vm Arm Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm-amd.md
To create and deploy a confidential VM using an ARM template through the Azure C
1. Sign in to your Azure account in the Azure CLI.
- ```powershell-interactive
+ ```azurecli
   az login
   ```

1. Set your Azure subscription. Replace `<subscription-id>` with your subscription identifier. Make sure to use a subscription that meets the [prerequisites](#prerequisites).
- ```powershell-interactive
+ ```azurecli
az account set --subscription <subscription-id> ```
To create and deploy a confidential VM using an ARM template through the Azure C
If the resource group you specified doesn't exist, create a resource group with that name.
- ```powershell-interactive
+ ```azurecli
   az group create -n $resourceGroup -l $region
   ```

1. Deploy your VM to Azure using ARM template with custom parameter file
- ```powershell-interactive
+ ```azurecli
az deployment group create ` -g $resourceGroup ` -n $deployName `
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
QUEUE_CONNECTION_STRING=`az storage account show-connection-string -g $RESOURCE_
# [PowerShell](#tab/powershell)
-```powershell
+```azurecli
$QUEUE_CONNECTION_STRING=(az storage account show-connection-string -g $RESOURCE_GROUP --name $STORAGE_ACCOUNT_NAME --query connectionString --out json) -replace '"','' ```
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
Next, install the Azure Container Apps extension for the Azure CLI.
```azurecli az extension add \
- --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.2-py2.py3-none-any.whl
+ --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.4-py2.py3-none-any.whl
```

# [PowerShell](#tab/powershell)

```azurecli
az extension add `
- --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.2-py2.py3-none-any.whl
+ --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.4-py2.py3-none-any.whl
```
Next, retrieve the Log Analytics Client ID and client secret.
Make sure to run each query separately to give enough time for the request to complete.
-```bash
+```azurecli
LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az monitor log-analytics workspace show --query customerId -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE -o tsv | tr -d '[:space:]'` ```
-```bash
+```azurecli
LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=`az monitor log-analytics workspace get-shared-keys --query primarySharedKey -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE -o tsv | tr -d '[:space:]'` ```
$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=(Get-AzOperationalInsightsWorkspace -Resource
$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=(Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $RESOURCE_GROUP -Name $LOG_ANALYTICS_WORKSPACE).PrimarySharedKey >
-```powershell
+```azurecli
$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=(az monitor log-analytics workspace get-shared-keys --query primarySharedKey -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE --out tsv) ```
Get the storage account key with the following command:
# [Bash](#tab/bash)
-```bash
+```azurecli
STORAGE_ACCOUNT_KEY=`az storage account keys list --resource-group $RESOURCE_GROUP --account-name $STORAGE_ACCOUNT --query '[0].value' --out tsv` ```
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
Get the storage account key with the following command:
# [Bash](#tab/bash)
-```bash
+```azurecli
STORAGE_ACCOUNT_KEY=`az storage account keys list --resource-group $RESOURCE_GROUP --account-name $STORAGE_ACCOUNT --query '[0].value' --out tsv` ```
container-apps Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/monitor.md
Set the name of your resource group and Log Analytics workspace, and then retrie
# [Bash](#tab/bash)
-```bash
+```azurecli
RESOURCE_GROUP="my-containerapps" LOG_ANALYTICS_WORKSPACE="containerapps-logs"
LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az monitor log-analytics workspace show --que
# [PowerShell](#tab/powershell)
-```powershell
+```azurecli
$RESOURCE_GROUP="my-containerapps" $LOG_ANALYTICS_WORKSPACE="containerapps-logs"
container-registry Container Registry Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication.md
Output displays the access token, abbreviated here:
```

For registry authentication, we recommend that you store the token credential in a safe location and follow recommended practices to manage [docker login](https://docs.docker.com/engine/reference/commandline/login/) credentials. For example, store the token value in an environment variable:
-```bash
+```azurecli
TOKEN=$(az acr login --name <acrName> --expose-token --output tsv --query accessToken)
```
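As a follow-up sketch (assuming the null-GUID username that appears with `--expose-token` elsewhere in these samples), the token can then be supplied to `docker login` on stdin:

```bash
docker login <acrName>.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password-stdin <<< "$TOKEN"
```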
container-registry Container Registry Check Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-check-health.md
az acr check-health --ignore-errors
# Check environment and target registry; skip confirmation to pull image
az acr check-health --name myregistry --ignore-errors --yes
-```
+```
Sample output:
-```console
-$ az acr check-health --name myregistry --ignore-errors --yes
+```azurecli
+az acr check-health --name myregistry --ignore-errors --yes
+```
+```output
Docker daemon status: available
Docker version: Docker version 18.09.2, build 6247962
Docker pull of 'mcr.microsoft.com/mcr/hello-world:latest' : OK
container-registry Container Registry Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-customer-managed-keys.md
az keyvault set-policy \
Alternatively, use [Azure RBAC for Key Vault](../key-vault/general/rbac-guide.md) to assign permissions to the identity to access the key vault. For example, assign the Key Vault Crypto Service Encryption role to the identity using the [az role assignment create](/cli/azure/role/assignment#az_role_assignment_create) command:
-```azurecli
+```azurecli
az role assignment create --assignee $identityPrincipalID \
  --role "Key Vault Crypto Service Encryption User" \
  --scope $keyvaultID
az keyvault key create \
In the command output, take note of the key's ID, `kid`. You use this ID in the next step:
-```JSON
+```output
[...] "key": { "crv": null,
You can also use a Resource Manager template to create a registry and enable enc
The following template creates a new container registry and a user-assigned managed identity. Copy the following contents to a new file and save it using a filename such as `CMKtemplate.json`.
-```JSON
+```json
{ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0",
Follow the steps in the previous sections to create the following resources:
Run the following [az deployment group create][az-deployment-group-create] command to create the registry using the preceding template file. Where indicated, provide a new registry name and managed identity name, as well as the key vault name and key ID you created.
-```bash
+```azurecli
az deployment group create \
  --resource-group <resource-group-name> \
  --template-file CMKtemplate.json \
container-registry Container Registry Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-delete.md
After identifying stale manifest digests, you can run the following Bash script
> [!WARNING] > Use the following sample script with caution--deleted image data is UNRECOVERABLE. If you have systems that pull images by manifest digest (as opposed to image name), you should not run these scripts. Deleting the manifest digests will prevent those systems from pulling the images from your registry. Instead of pulling by manifest, consider adopting a *unique tagging* scheme, a [recommended best practice](container-registry-image-tag-version.md).
-```bash
+```azurecli
#!/bin/bash

# WARNING! This script deletes data!
container-registry Container Registry Helm Repos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-helm-repos.md
Successfully packaged chart and saved it to: /my/path/hello-world-0.1.0.tgz
Run `helm registry login` to authenticate with the registry. You may pass [registry credentials](container-registry-authentication.md) appropriate for your scenario, such as service principal credentials, user identity, or a repository-scoped token. - Authenticate with an Azure Active Directory [service principal with pull and push permissions](container-registry-auth-service-principal.md#create-a-service-principal) (AcrPush role) to the registry.
- ```bash
+ ```azurecli
SERVICE_PRINCIPAL_NAME=<acr-helm-sp> ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv) PASSWORD=$(az ad sp create-for-rbac --name $SERVICE_PRINCIPAL_NAME \
Run `helm registry login` to authenticate with the registry. You may pass [regi
   USER_NAME=$(az ad sp list --display-name $SERVICE_PRINCIPAL_NAME --query "[].appId" --output tsv)
   ```

- Authenticate with your [individual Azure AD identity](container-registry-authentication.md?tabs=azure-cli#individual-login-with-azure-ad) to push and pull Helm charts using an AD token.
- ```bash
+ ```azurecli
   USER_NAME="00000000-0000-0000-0000-000000000000"
   PASSWORD=$(az acr login --name $ACR_NAME --expose-token --output tsv --query accessToken)
   ```

- Authenticate with a [repository scoped token](container-registry-repository-scoped-permissions.md) (Preview).
- ```bash
+ ```azurecli
USER_NAME="helm-token" PASSWORD=$(az acr token create -n $USER_NAME \ -r $ACR_NAME \
container-registry Container Registry Java Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-java-quickstart.md
Finally, you'll update your project configuration and use the command prompt to
1. Navigate to the complete project directory for your Spring Boot application and run the following command to build the image and push the image to the registry:
- ```bash
+ ```azurecli
az acr login && mvn compile jib:build ```
container-registry Container Registry Oras Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oras-artifacts.md
az acr create \
In the command output, note the `zoneRedundancy` property for the registry. When enabled, the registry is zone redundant, and ORAS Artifact enabled:
-```JSON
+```output
{ [...] "zoneRedundancy": "Enabled",
Run `oras login` to authenticate with the registry. You may pass [registry cred
- Authenticate with your [individual Azure AD identity](container-registry-authentication.md?tabs=azure-cli#individual-login-with-azure-ad) to use an AD token.
- ```bash
+ ```azurecli
USER_NAME="00000000-0000-0000-0000-000000000000" PASSWORD=$(az acr login --name $ACR_NAME --expose-token --output tsv --query accessToken) ``` - Authenticate with a [repository scoped token](container-registry-repository-scoped-permissions.md) (Preview) to use non-AD based tokens.
- ```bash
+ ```azurecli
USER_NAME="oras-token" PASSWORD=$(az acr token create -n $USER_NAME \ -r $ACR_NAME \
Run `oras login` to authenticate with the registry. You may pass [registry cred
- Authenticate with an Azure Active Directory [service principal with pull and push permissions](container-registry-auth-service-principal.md#create-a-service-principal) (AcrPush role) to the registry.
- ```bash
+ ```azurecli
SERVICE_PRINCIPAL_NAME="oras-sp" ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv) PASSWORD=$(az ad sp create-for-rbac --name $SERVICE_PRINCIPAL_NAME \
The signature is untagged, but tracked as a `oras.artifact.manifest` reference t
Support for the ORAS Artifacts specification enables deleting the graph of artifacts associated with the root artifact. Use the [az acr repository delete][az-acr-repository-delete] command to delete the signature, SBoM and the signature of the SBoM.
-```bash
+```azurecli
az acr repository delete \ -n $ACR_NAME \ -t ${REPO}:$TAG -y
container-registry Container Registry Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-private-link.md
xxxx.westeurope.cloudapp.azure.com. 10 IN A 20.45.122.144
Also verify that you can perform registry operations from the virtual machine in the network. Make an SSH connection to your virtual machine, and run [az acr login][az-acr-login] to login to your registry. Depending on your VM configuration, you might need to prefix the following commands with `sudo`.
-```bash
+```azurecli
az acr login --name $REGISTRY_NAME
```
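To go a step further, you could also try a pull through the private endpoint, for example (a sketch using placeholders for a repository and tag that exist in your registry):

```bash
docker pull $REGISTRY_NAME.azurecr.io/<repository>:<tag>
```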
container-registry Container Registry Tutorial Private Base Image Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-private-base-image-update.md
az acr task list-runs --registry $ACR_NAME --output table
If you completed the previous tutorial (and didn't delete the registry), you should see output similar to the following. Take note of the number of task runs, and the latest RUN ID, so you can compare the output after you update the base image in the next section.
-```console
-$ az acr task list-runs --registry $ACR_NAME --output table
+```azurecli
+az acr task list-runs --registry $ACR_NAME --output table
+```
+```output
RUN ID    TASK          PLATFORM    STATUS     TRIGGER    STARTED               DURATION
--------  ------------  ----------  ---------  ---------  --------------------  --------
ca12      baseexample2  linux       Succeeded  Manual     2020-11-21T00:00:56Z  00:00:36
az acr task list-runs --registry $ACR_NAME --output table
Output is similar to the following. The TRIGGER for the last-executed build should be "Image Update", indicating that the task was kicked off by your quick task of the base image.
-```console
-$ az acr task list-runs --registry $ACR_NAME --output table
+```azurecli
+az acr task list-runs --registry $ACR_NAME --output table
+```
+```output
PLATFORM STATUS TRIGGER STARTED DURATION -- -- - -- - ca13 baseexample2 linux Succeeded Image Update 2020-11-21T00:06:00Z 00:00:43
container-registry Container Registry Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-vnet.md
az network vnet list \
Output:
-```console
+```output
[ { "Name": "myDockerVMVNET",
az network vnet subnet show \
Output:
-```
+```output
/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myDockerVMVNET/subnets/myDockerVMSubnet ```
az acr network-rule add \
After waiting a few minutes for the configuration to update, verify that the VM can access the container registry. Make an SSH connection to your VM, and run the [az acr login][az-acr-login] command to login to your registry.
-```bash
+```azurecli
az acr login --name mycontainerregistry ```
Docker successfully pulls the image to the VM.
This example demonstrates that you can access the private container registry through the network access rule. However, the registry can't be accessed from a login host that doesn't have a network access rule configured. If you attempt to login from another host using the `az acr login` command or `docker login` command, output is similar to the following:
-```Console
+```output
Error response from daemon: login attempt to https://xxxxxxx.azurecr.io/v2/ failed with status: 403 Forbidden ```
cosmos-db Diagnostic Queries Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/diagnostic-queries-cassandra.md
Title: Troubleshoot issues with advanced diagnostics queries for Cassandra API
-description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for the Cassandra API.
+description: Learn how to use Azure Log Analytics to improve the performance and health of your Azure Cosmos DB Cassandra API account.
- + Last updated 06/12/2021
> * [Gremlin API](../queries-gremlin.md)
-In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB Cassandra API account by using diagnostic logs sent to **resource-specific** tables.
For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
- Provides better discoverability of the schemas. - Improves performance across both ingestion latency and query times.
-## Common queries
-Common queries are shown in the resource-specific and Azure Diagnostics tables.
-
-### Top N(10) Request Unit (RU) consuming requests or queries in a specific time frame
-
-# [Resource-specific](#tab/resource-specific)
-
- ```Kusto
- let topRequestsByRUcharge = CDBDataPlaneRequests
- | where TimeGenerated > ago(24h)
- | project RequestCharge , TimeGenerated, ActivityId;
- CDBCassandraRequests
- | project PIICommandText, ActivityId, DatabaseName , CollectionName
- | join kind=inner topRequestsByRUcharge on ActivityId
- | project DatabaseName , CollectionName , PIICommandText , RequestCharge, TimeGenerated
- | order by RequestCharge desc
- | take 10
- ```
-
-# [Azure Diagnostics](#tab/azure-diagnostics)
-
- ```Kusto
- let topRequestsByRUcharge = AzureDiagnostics
- | where Category == "DataPlaneRequests" and TimeGenerated > ago(1h)
- | project requestCharge_s , TimeGenerated, activityId_g;
- AzureDiagnostics
- | where Category == "CassandraRequests"
- | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
- | join kind=inner topRequestsByRUcharge on activityId_g
- | project databasename_s , collectionname_s , piiCommandText_s , requestCharge_s, TimeGenerated
- | order by requestCharge_s desc
- | take 10
- ```
--
-### Requests throttled (statusCode = 429) in a specific time window
-
-# [Resource-specific](#tab/resource-specific)
-
- ```Kusto
- let throttledRequests = CDBDataPlaneRequests
- | where StatusCode == "429"
- | project OperationName , TimeGenerated, ActivityId;
- CDBCassandraRequests
- | project PIICommandText, ActivityId, DatabaseName , CollectionName
- | join kind=inner throttledRequests on ActivityId
- | project DatabaseName , CollectionName , PIICommandText , OperationName, TimeGenerated
- ```
-
-# [Azure Diagnostics](#tab/azure-diagnostics)
-
- ```Kusto
- let throttledRequests = AzureDiagnostics
- | where Category == "DataPlaneRequests"
- | where statusCode_s == "429"
- | project OperationName , TimeGenerated, activityId_g;
- AzureDiagnostics
- | where Category == "CassandraRequests"
- | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
- | join kind=inner throttledRequests on activityId_g
- | project databasename_s , collectionname_s , piiCommandText_s , OperationName, TimeGenerated
- ```
--
-### Queries with large response lengths (payload size of the server response)
-
-# [Resource-specific](#tab/resource-specific)
-
- ```Kusto
- let operationsbyUserAgent = CDBDataPlaneRequests
- | project OperationName, DurationMs, RequestCharge, ResponseLength, ActivityId;
- CDBCassandraRequests
- //specify collection and database
- //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
- | join kind=inner operationsbyUserAgent on ActivityId
- | summarize max(ResponseLength) by PIICommandText
- | order by max_ResponseLength desc
- ```
-
-# [Azure Diagnostics](#tab/azure-diagnostics)
-
- ```Kusto
- let operationsbyUserAgent = AzureDiagnostics
- | where Category=="DataPlaneRequests"
- | project OperationName, duration_s, requestCharge_s, responseLength_s, activityId_g;
- AzureDiagnostics
- | where Category == "CassandraRequests"
- //specify collection and database
- //| where databasename_s == "DBNAME" and collectioname_s == "COLLECTIONNAME"
- | join kind=inner operationsbyUserAgent on activityId_g
- | summarize max(responseLength_s1) by piiCommandText_s
- | order by max_responseLength_s1 desc
- ```
--
-### RU consumption by physical partition (across all replicas in the replica set)
-
-# [Resource-specific](#tab/resource-specific)
-
- ```Kusto
- CDBPartitionKeyRUConsumption
- | where TimeGenerated >= now(-1d)
- //specify collection and database
- //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
- | render columnchart
- ```
-
-# [Azure Diagnostics](#tab/azure-diagnostics)
-
- ```Kusto
- AzureDiagnostics
- | where TimeGenerated >= now(-1d)
- | where Category == 'PartitionKeyRUConsumption'
- //specify collection and database
- //| where databasename_s == "DBNAME" and collectioname_s == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
- | render columnchart
- ```
--
-### RU consumption by logical partition (across all replicas in the replica set)
-
-# [Resource-specific](#tab/resource-specific)
- ```Kusto
- CDBPartitionKeyRUConsumption
- | where TimeGenerated >= now(-1d)
- //specify collection and database
- //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
- | render columnchart
- ```
-
-# [Azure Diagnostics](#tab/azure-diagnostics)
-
- ```Kusto
- AzureDiagnostics
- | where TimeGenerated >= now(-1d)
- | where Category == 'PartitionKeyRUConsumption'
- //specify collection and database
- //| where databasename_s == "DBNAME" and collectioname_s == "COLLECTIONNAME"
- // filter by operation type
- //| where operationType_s == 'Create'
- | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
- | render columnchart
- ```
-
-## Next steps
-* For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](../cosmosdb-monitor-resource-logs.md).
-* For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
+## Prerequisites
+
+- Create a [Cassandra API account](create-account-java.md).
+- Create a [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
+- Create a [diagnostic setting](../cosmosdb-monitor-resource-logs.md).
+
+> [!WARNING]
+> When creating a diagnostic setting for the Cassandra API account, ensure that "DataPlaneRequests" remains unselected. In addition, for the destination table, ensure "Resource specific" is chosen, as it offers significant cost savings over "Azure diagnostics".
+
+> [!NOTE]
+> When full-text diagnostics are enabled, the returned queries will contain PII data.
+> Instead of logging only the skeleton of the query with obfuscated parameters, this feature also logs the values of the parameters themselves.
+> This can help you diagnose whether queries on a specific primary key (or set of primary keys) are consuming far more RUs than queries on other primary keys.
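For example, here is a minimal sketch of how you might inspect the full-text command values once this feature is enabled. The `user_id` filter is a hypothetical primary key column; substitute a value from your own schema.
```kusto
// Sketch: highest-charge requests whose full-text CQL references a hypothetical primary key column
CDBCassandraRequests
| where TimeGenerated > ago(1h)
| where PIICommandText contains "user_id"
| project TimeGenerated, DatabaseName, CollectionName, PIICommandText, RequestCharge
| order by RequestCharge desc
| take 10
```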
+
+## Log Analytics queries with different scenarios
++
+### RU consumption
+- What application queries are causing high RU consumption?
+```kusto
+CDBCassandraRequests
+| where DatabaseName startswith "azure"
+| project TimeGenerated, RequestCharge, OperationName,
+requestType=split(split(PIICommandText,'"')[3], ' ')[0]
+| summarize max(RequestCharge) by bin(TimeGenerated, 10m), tostring(requestType);
+```
+
+- Monitoring RU Consumption per operation on logical partition keys.
+```kusto
+CDBPartitionKeyRUConsumption
+| where DatabaseName startswith "azure"
+| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
+| order by TotalRequestCharge;
+
+CDBPartitionKeyRUConsumption
+| where DatabaseName startswith "azure"
+| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by OperationName, PartitionKey
+| order by TotalRequestCharge;
++
+CDBPartitionKeyRUConsumption
+| where DatabaseName startswith "azure"
+| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by bin(TimeGenerated, 1m), PartitionKey, PartitionKeyRangeId
+| render timechart;
+```
+
+- What are the top queries impacting RU consumption?
+```kusto
+let topRequestsByRUcharge = CDBDataPlaneRequests
+| where TimeGenerated > ago(24h)
+| project RequestCharge , TimeGenerated, ActivityId;
+CDBCassandraRequests
+| project ActivityId, DatabaseName, CollectionName, queryText=split(split(PIICommandText,'"')[3], ' ')[0]
+| join kind=inner topRequestsByRUcharge on ActivityId
+| project DatabaseName, CollectionName, tostring(queryText), RequestCharge, TimeGenerated
+| order by RequestCharge desc
+| take 10;
+```
+- RU Consumption based on variations in payload sizes for read and write operations.
+```kusto
+// This query is looking at read operations
+CDBDataPlaneRequests
+| where OperationName in ("Read", "Query")
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), OperationName
+
+// This query is looking at write operations
+CDBDataPlaneRequests
+| where OperationName in ("Create", "Upsert", "Delete", "Execute")
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), OperationName
+
+// Write operations over a time period.
+CDBDataPlaneRequests
+| where OperationName in ("Create", "Update", "Delete", "Execute")
+| summarize maxResponseLength=max(ResponseLength) by bin(TimeGenerated, 1m), OperationName
+| render timechart;
+```
+
+- RU consumption by physical and logical partition.
+```kusto
+CDBPartitionKeyRUConsumption
+| where DatabaseName == "uprofile" and AccountName startswith "azure"
+| summarize totalRequestCharge=sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId;
+```
+
+- Is there high RU consumption because of a hot partition?
+```kusto
+CDBPartitionKeyStatistics
+| where AccountName startswith "azure"
+| where TimeGenerated > now(-8h)
+| summarize StorageUsed = sum(SizeKb) by PartitionKey
+| order by StorageUsed desc
+```
+
+- How does the partition key affect RU consumption?
+```kusto
+let storageUtilizationPerPartitionKey =
+CDBPartitionKeyStatistics
+| project AccountName=tolower(AccountName), PartitionKey, SizeKb;
+CDBCassandraRequests
+| project AccountName=tolower(AccountName),RequestCharge, ErrorCode, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName
+| where DatabaseName != "<empty>"
+| join kind=inner storageUtilizationPerPartitionKey on $left.AccountName==$right.AccountName
+| where ErrorCode != -1 //successful
+| project AccountName, PartitionKey,ErrorCode,RequestCharge,SizeKb, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName;
+```
+
+### Latency
+- Number of server-side timeouts (status code 408) seen in the time window.
+```kusto
+CDBDataPlaneRequests
+| where TimeGenerated >= now(-6h)
+| where AccountName startswith "azure"
+| where StatusCode == 408
+| summarize count() by bin(TimeGenerated, 10m)
+| render timechart
+```
+
+- Do we observe spikes in server-side latencies in the specified time window?
+```kusto
+CDBDataPlaneRequests
+| where TimeGenerated > now(-6h)
+| where AccountName startswith "azure"
+| summarize max(DurationMs) by bin(TimeGenerated, 10m)
+| render timechart
+```
+
+- Query operations that are getting throttled.
+```kusto
+CDBCassandraRequests
+| project RequestLength, ResponseLength,
+RequestCharge, DurationMs, TimeGenerated, OperationName,
+query=split(split(PIICommandText,'"')[3], ' ')[0]
+| summarize max(DurationMs) by bin(TimeGenerated, 10m), RequestCharge, tostring(query),
+RequestLength, OperationName
+| order by RequestLength, RequestCharge;
+```
+
+### Throttling
+- Is your application experiencing any throttling?
+```kusto
+CDBCassandraRequests
+| where RetriedDueToRateLimiting != false and RateLimitingDelayMs > 0;
+```
+- What queries are causing your application to be throttled (status code 429) within a specified time period?
+```kusto
+let throttledRequests = CDBDataPlaneRequests
+| where StatusCode==429
+| project OperationName , TimeGenerated, ActivityId;
+CDBCassandraRequests
+| project PIICommandText, ActivityId, DatabaseName , CollectionName
+| join kind=inner throttledRequests on ActivityId
+| project DatabaseName , CollectionName , CassandraCommands=split(split(PIICommandText,'"')[3], ' ')[0] , OperationName, TimeGenerated;
+```
++
+## Next steps
+- Enable [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) on your Cassandra API account.
+- Review the [error code definitions](error-codes-solution.md).
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/change-feed.md
Learn more about [change feed design patterns](change-feed-design-patterns.md).
This feature is currently supported by the following Azure Cosmos DB APIs and client SDKs.
-| **Client drivers** | **SQL API** | **Azure Cosmos DB's API for Cassandra** | **Azure Cosmos DB's API for MongoDB** | **Gremlin API**|**Table API** |
+| **Client drivers** | **SQL API** | **Azure Cosmos DB API for Cassandra** | **Azure Cosmos DB API for MongoDB** | **Gremlin API**|**Table API** |
| --- | --- | --- | --- | --- | --- |
| .NET | Yes | Yes | Yes | Yes | No |
| Java | Yes | Yes | Yes | Yes | No |
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
Last updated 12/08/2021
# Choose an API in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
+📺 <B><a href="https://aka.ms/cosmos-db-video-explore-cosmos-db-apis" target="_blank">Video: Explore Azure Cosmos DB APIs</a></b>
+ Azure Cosmos DB is a fully managed NoSQL database for modern app development. Azure Cosmos DB takes database administration off your hands with automatic management, updates, and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand. ## APIs in Azure Cosmos DB
Core(SQL) API is native to Azure Cosmos DB.
API for MongoDB, Cassandra, Gremlin, and Table implement the wire protocol of open-source database engines. These APIs are best suited if the following conditions are true: * If you have existing MongoDB, Cassandra, or Gremlin applications.
-* If you donΓÇÖt want to rewrite your entire data access layer.
+* If you don't want to rewrite your entire data access layer.
* If you want to use the open-source developer ecosystem, client-drivers, expertise, and resources for your database. * If you want to use the Azure Cosmos DB key features such as global distribution, elastic scaling of storage and throughput, performance, low latency, ability to run transactional and analytical workload, and use a fully managed platform. * If you are developing modernized apps on a multi-cloud environment.
If you are migrating from other databases such as Oracle, DynamoDB, HBase etc. a
## API for MongoDB
-This API stores data in a document structure, via BSON format. It is compatible with MongoDB wire protocol; however, it does not use any native MongoDB related code. This API is a great choice if you want to use the broader MongoDB ecosystem and skills, without compromising on using Azure Cosmos DBΓÇÖs features such as scaling, high availability, geo-replication, multiple write locations, automatic and transparent shard management, transparent replication between operational and analytical stores, and more.
+This API stores data in a document structure, via BSON format. It is compatible with MongoDB wire protocol; however, it does not use any native MongoDB related code. This API is a great choice if you want to use the broader MongoDB ecosystem and skills, without compromising on using Azure Cosmos DB features such as scaling, high availability, geo-replication, multiple write locations, automatic and transparent shard management, transparent replication between operational and analytical stores, and more.
You can use your existing MongoDB apps with API for MongoDB by just changing the connection string. You can move any existing data using native MongoDB tools such as mongodump & mongorestore or using our Azure Database Migration tool. Tools, such as the MongoDB shell, [MongoDB Compass](mongodb/connect-using-compass.md), and [Robo3T](mongodb/connect-using-robomongo.md), can run queries and work with data as they do with native MongoDB.
API for MongoDB is compatible with the 4.0, 3.6, and 3.2 MongoDB server versions
## Cassandra API
-This API stores data in column-oriented schema. Apache Cassandra offers a highly distributed, horizontally scaling approach to storing large volumes of data while offering a flexible approach to a column-oriented schema. Cassandra API in Azure Cosmos DB aligns with this philosophy to approaching distributed NoSQL databases. Cassandra API is wire protocol compatible with the Apache Cassandra. You should consider Cassandra API if you want to benefit the elasticity and fully managed nature of Azure Cosmos DB and still use most of the native Apache Cassandra features, tools, and ecosystem. This means on Cassandra API you donΓÇÖt need to manage the OS, Java VM, garbage collector, read/write performance, nodes, clusters, etc.
+This API stores data in column-oriented schema. Apache Cassandra offers a highly distributed, horizontally scaling approach to storing large volumes of data while offering a flexible approach to a column-oriented schema. Cassandra API in Azure Cosmos DB aligns with this philosophy to approaching distributed NoSQL databases. Cassandra API is wire protocol compatible with the Apache Cassandra. You should consider Cassandra API if you want to benefit the elasticity and fully managed nature of Azure Cosmos DB and still use most of the native Apache Cassandra features, tools, and ecosystem. This means on Cassandra API you don't need to manage the OS, Java VM, garbage collector, read/write performance, nodes, clusters, etc.
You can use Apache Cassandra client drivers to connect to the Cassandra API. The Cassandra API enables you to interact with data using the Cassandra Query Language (CQL), and tools like CQL shell, Cassandra client drivers that you're already familiar with. Cassandra API currently only supports OLTP scenarios. Using Cassandra API, you can also use the unique features of Azure Cosmos DB such as change feed. To learn more, see [Cassandra API](cassandra-introduction.md) article. ## Gremlin API
-This API allows users to make graph queries and stores data as edges and vertices. Use this API for scenarios involving dynamic data, data with complex relations, data that is too complex to be modeled with relational databases, and if you want to use the existing Gremlin ecosystem and skills. Azure Cosmos DB's Gremlin API combines the power of graph database algorithms with highly scalable, managed infrastructure. It provides a unique, flexible solution to most common data problems associated with lack of flexibility and relational approaches. Gremlin API currently only supports OLTP scenarios.
+This API allows users to make graph queries and stores data as edges and vertices. Use this API for scenarios involving dynamic data, data with complex relations, data that is too complex to be modeled with relational databases, and if you want to use the existing Gremlin ecosystem and skills. The Azure Cosmos DB Gremlin API combines the power of graph database algorithms with highly scalable, managed infrastructure. It provides a unique, flexible solution to most common data problems associated with lack of flexibility and relational approaches. Gremlin API currently only supports OLTP scenarios.
-Azure Cosmos DB's Gremlin API is based on the [Apache TinkerPop](https://tinkerpop.apache.org/) graph computing framework. Gremlin API uses the same Graph query language to ingest and query data. It uses the Azure Cosmos DBΓÇÖs partition strategy to do the read/write operations from the Graph database engine. Gremlin API has a wire protocol support with the open-source Gremlin, so you can use the open-source Gremlin SDKs to build your application. Azure Cosmos DB Gremlin API also works with Apache Spark and [GraphFrames](https://github.com/graphframes/graphframes) for complex analytical graph scenarios. To learn more, see [Gremlin API](graph-introduction.md) article.
+The Azure Cosmos DB Gremlin API is based on the [Apache TinkerPop](https://tinkerpop.apache.org/) graph computing framework. Gremlin API uses the same Graph query language to ingest and query data. It uses the Azure Cosmos DB partition strategy to do the read/write operations from the Graph database engine. Gremlin API has a wire protocol support with the open-source Gremlin, so you can use the open-source Gremlin SDKs to build your application. Azure Cosmos DB Gremlin API also works with Apache Spark and [GraphFrames](https://github.com/graphframes/graphframes) for complex analytical graph scenarios. To learn more, see [Gremlin API](graph-introduction.md) article.
## Table API
-This API stores data in key/value format. If you are currently using Azure Table storage, you may see some limitations in latency, scaling, throughput, global distribution, index management, low query performance. Table API overcomes these limitations and itΓÇÖs recommended to migrate your app if you want to use the benefits of Azure Cosmos DB. Table API only supports OLTP scenarios.
+This API stores data in key/value format. If you are currently using Azure Table storage, you may see some limitations in latency, scaling, throughput, global distribution, index management, low query performance. Table API overcomes these limitations and it's recommended to migrate your app if you want to use the benefits of Azure Cosmos DB. Table API only supports OLTP scenarios.
Applications written for Azure Table storage can migrate to the Table API with little code changes and take advantage of premium capabilities. To learn more, see [Table API](table/introduction.md) article.
Trying to do capacity planning for a migration to Azure Cosmos DB API for SQL or
## Next steps * [Get started with Azure Cosmos DB SQL API](create-sql-api-dotnet.md)
-* [Get started with Azure Cosmos DB's API for MongoDB](mongodb/create-mongodb-nodejs.md)
+* [Get started with Azure Cosmos DB API for MongoDB](mongodb/create-mongodb-nodejs.md)
* [Get started with Azure Cosmos DB Cassandra API](cassandr) * [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md) * [Get started with Azure Cosmos DB Table API](create-table-dotnet.md)
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Read more about Cosmos DB's core concepts [global distribution](distribute-data-
Get started with Azure Cosmos DB with one of our quickstarts: * [Get started with Azure Cosmos DB SQL API](create-sql-api-dotnet.md)
-* [Get started with Azure Cosmos DB's API for MongoDB](mongodb/create-mongodb-nodejs.md)
+* [Get started with Azure Cosmos DB API for MongoDB](mongodb/create-mongodb-nodejs.md)
* [Get started with Azure Cosmos DB Cassandra API](cassandr) * [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md) * [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md)
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
Last updated 02/17/2022
# Consistency levels in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
+📺 <B><a href="https://aka.ms/cosmos-db-video-consistency-levels" target="_blank">Video: Explore Consistency levels</a></b>
+ Distributed databases that rely on replication for high availability, low latency, or both, must make a fundamental tradeoff between the read consistency, availability, latency, and throughput as defined by the [PACELC theorem](https://en.wikipedia.org/wiki/PACELC_theorem). The linearizability of the strong consistency model is the gold standard of data programmability. But it adds a steep price from higher write latencies due to data having to replicate and commit across large distances. Strong consistency may also suffer from reduced availability (during failures) because data cannot replicate and commit in every region. Eventual consistency offers higher availability and better performance, but it's more difficult to program applications because data may not be completely consistent across all regions. Most commercially available distributed NoSQL databases available in the market today provide only strong and eventual consistency. Azure Cosmos DB offers five well-defined levels. From strongest to weakest, the levels are:
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
# Continuous backup with point-in-time restore in Azure Cosmos DB [!INCLUDE[appliesto-sql-mongodb-api](includes/appliesto-sql-mongodb-api.md)]
+📺 <B><a href="https://aka.ms/cosmos-db-video-continuous-backup-restore-intro" target="_blank">Video: Learn more about continuous backup and point-in-time restore</a></b>
+ Azure Cosmos DB's point-in-time restore feature helps in multiple scenarios such as the following: * To recover from an accidental write or delete operation within a container.
You can add these configurations to the restored account after the restore is co
## Restorable timestamp for live accounts
-To restore Azure Cosmos DB live accounts that are not deleted, it is a best practice to always identify the [latest restorable timestamp](get-latest-restore-timestamp.md) for the container. You can then use this timestamp to restore the account to it's latest version.
+To restore Azure Cosmos DB live accounts that are not deleted, it is a best practice to always identify the [latest restorable timestamp](get-latest-restore-timestamp.md) for the container. You can then use this timestamp to restore the account to its latest version.
## Restore scenarios
For example, if you have 1-TB of data in two regions then:
See [How do customer-managed keys affect continuous backups?](./how-to-setup-cmk.md#how-do-customer-managed-keys-affect-continuous-backups) to learn: - How to configure your Azure Cosmos DB account when using customer-managed keys in conjunction with continuous backups.-- How do customer-managed keys affect restores.
+- How do customer-managed keys affect restores?
## Current limitations
cosmos-db Cosmosdb Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cosmosdb-monitor-resource-logs.md
Platform metrics and the Activity logs are collected automatically, whereas you
|Category |API | Definition | Key Properties | ||||| |DataPlaneRequests | All APIs | Logs back-end requests as data plane operations which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` |
- |MongoRequests | Mongo | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
- |CassandraRequests | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
- |GremlinRequests | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
+ |MongoRequests | Mongo | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB API for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
+ |CassandraRequests | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB API for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
+ |GremlinRequests | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB API for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
|QueryRuntimeStatistics | SQL | This table details query operations executed against a SQL API account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` | |PartitionKeyStatistics | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: <br/><ul><li> At least 1% of the documents in the physical partition have same logical partition key. </li><li> Out of all the keys in the physical partition, the top 3 keys with largest storage size are captured by the PartitionKeyStatistics log. </li></ul> If the previous conditions are not met, the partition key statistics data is not available. It's okay if the above conditions are not met for your account, which typically indicates you have no logical partition storage skew. <br/><br/>Note: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes are not uniform in the physical partition, the estimated partition key size may not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` | |PartitionKeyRUConsumption | SQL API | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for SQL API accounts only and for point read/write and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` | |ControlPlaneRequests | All APIs | Logs details on control plane operations i.e. creating an account, adding or removing a region, updating account replication settings etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` |
- |TableApiRequests | Table API | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB's API for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
+ |TableApiRequests | Table API | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB API for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
4. Once you select your **Categories details**, then send your Logs to your preferred destination. If you're sending Logs to a **Log Analytics Workspace**, make sure to select **Resource specific** as the Destination table.
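For example, a hedged sketch of how the same request data is addressed in each destination; the table and column names follow the diagnostics queries shown in this digest and may differ in your workspace.
```kusto
// Resource-specific destination table
CDBDataPlaneRequests
| where TimeGenerated > ago(1h)
| project TimeGenerated, OperationName, StatusCode, RequestCharge

// Azure Diagnostics (legacy) destination: one table, filtered by Category
AzureDiagnostics
| where TimeGenerated > ago(1h)
| where Category == "DataPlaneRequests"
| project TimeGenerated, OperationName, statusCode_s, requestCharge_s
```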
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/free-tier.md
New-AzCosmosDBAccount -ResourceGroupName "MyResourcegroup" `
After you create a free tier account, you can start building apps with Azure Cosmos DB with the following articles: * [Build a console app using the .NET V4 SDK](create-sql-api-dotnet-v4.md) to manage Azure Cosmos DB resources.
-* [Build a .NET web app using Azure Cosmos DB's API for MongoDB](mongodb/create-mongodb-dotnet.md)
+* [Build a .NET web app using Azure Cosmos DB API for MongoDB](mongodb/create-mongodb-dotnet.md)
* [Download a notebook from the gallery](publish-notebook-gallery.md#download-a-notebook-from-the-gallery) and analyze your data. * Learn more about [Understanding your Azure Cosmos DB bill](understand-your-bill.md)
cosmos-db How Pricing Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-pricing-works.md
Last updated 12/07/2021
# Pricing model in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
+📺 <B><a href="https://aka.ms/cosmos-db-video-overview-pricing-options" target="_blank">Video: Overview of Azure Cosmos DB pricing options</a></b>
+ The pricing model of Azure Cosmos DB simplifies the cost management and planning. With Azure Cosmos DB, you pay for the operations you perform against the database and for the storage consumed by your data. - **Database operations**: The way you get charged for your database operations depends on the type of Azure Cosmos account you are using.
cosmos-db Index Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md
In Azure Cosmos DB, every container has an indexing policy that dictates how the
In some situations, you may want to override this automatic behavior to better suit your requirements. You can customize a container's indexing policy by setting its *indexing mode*, and include or exclude *property paths*. > [!NOTE]
-> The method of updating indexing policies described in this article only applies to Azure Cosmos DB's SQL (Core) API. Learn about indexing in [Azure Cosmos DB's API for MongoDB](mongodb/mongodb-indexing.md)
+> The method of updating indexing policies described in this article only applies to Azure Cosmos DB's SQL (Core) API. Learn about indexing in [Azure Cosmos DB API for MongoDB](mongodb/mongodb-indexing.md)
## Indexing mode
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Title: Introduction to Azure Cosmos DB
-description: Learn about Azure Cosmos DB. This globally-distributed multi-model database is built for low latency, elastic scalability, high availability, and offers native support for NoSQL data.
+description: Learn about Azure Cosmos DB. This globally distributed multi-model database is built for low latency, elastic scalability, high availability, and offers native support for NoSQL data.
Last updated 08/26/2021
# Welcome to Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
+📺 <B><a href="https://aka.ms/cosmos-db-video-what-is-cosmos-db" target="_blank">Video: What is Cosmos DB?</a></b>
+ Today's applications are required to be highly responsive and always online. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users. Applications need to respond in real time to large changes in usage at peak hours, store ever increasing volumes of data, and make this data available to users in milliseconds. Azure Cosmos DB is a fully managed NoSQL database for modern app development. Single-digit millisecond response times, and automatic and instant scalability, guarantee speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security. App development is faster and more productive thanks to turnkey multi region data distribution anywhere in the world, open source APIs and SDKs for popular languages. As a fully managed service, Azure Cosmos DB takes database administration off your hands with automatic management, updates and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
Build fast with open source APIs, multiple SDKs, schemaless data and no-ETL anal
- Choose from multiple database APIs including the native Core (SQL) API, API for MongoDB, Cassandra API, Gremlin API, and Table API. - Build apps on Core (SQL) API using the languages of your choice with SDKs for .NET, Java, Node.js and Python. Or your choice of drivers for any of the other database APIs. - Change feed makes it easy to track and manage changes to database containers and create triggered events with Azure Functions.-- Azure Cosmos DBΓÇÖs schema-less service automatically indexes all your data, regardless of the data model, to deliver blazing fast queries.
+- Azure Cosmos DB's schema-less service automatically indexes all your data, regardless of the data model, to deliver blazing fast queries.
### Mission-critical ready
Guarantee business continuity, 99.999% availability, and enterprise-level securi
End-to-end database management, with serverless and automatic scaling matching your application and TCO needs -- Fully-managed database service. Automatic, no touch, maintenance, patching, and updates, saving developers time and money.
+- Fully managed database service. Automatic, no touch, maintenance, patching, and updates, saving developers time and money.
- Cost-effective options for unpredictable or sporadic workloads of any size or scale, enabling developers to get started easily without having to plan or manage capacity. - Serverless model offers spiky workloads automatic and responsive service to manage traffic bursts on demand. - Autoscale provisioned throughput automatically and instantly scales capacity for unpredictable workloads, while maintaining [SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db).
Get started with Azure Cosmos DB with one of our quickstarts:
- Learn [how to choose an API](choose-api.md) in Azure Cosmos DB - [Get started with Azure Cosmos DB SQL API](create-sql-api-dotnet.md)-- [Get started with Azure Cosmos DB's API for MongoDB](mongodb/create-mongodb-nodejs.md)
+- [Get started with Azure Cosmos DB API for MongoDB](mongodb/create-mongodb-nodejs.md)
- [Get started with Azure Cosmos DB Cassandra API](cassandr) - [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md) - [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md) - [A whitepaper on next-gen app development with Azure Cosmos DB](https://azure.microsoft.com/resources/microsoft-azure-cosmos-db-flexible-reliable-cloud-nosql-at-any-scale/) - Trying to do capacity planning for a migration to Azure Cosmos DB?
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) > [!div class="nextstepaction"]
cosmos-db Mongodb Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/mongodb-indexing.md
Title: Manage indexing in Azure Cosmos DB's API for MongoDB
-description: This article presents an overview of Azure Cosmos DB indexing capabilities using Azure Cosmos DB's API for MongoDB
+ Title: Manage indexing in Azure Cosmos DB API for MongoDB
+description: This article presents an overview of Azure Cosmos DB indexing capabilities using Azure Cosmos DB API for MongoDB
ms.devlang: javascript
-# Manage indexing in Azure Cosmos DB's API for MongoDB
+# Manage indexing in Azure Cosmos DB API for MongoDB
[!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
-Azure Cosmos DB's API for MongoDB takes advantage of the core index-management capabilities of Azure Cosmos DB. This article focuses on how to add indexes using Azure Cosmos DB's API for MongoDB. Indexes are specialized data structures that make querying your data roughly an order of magnitude faster.
+📺 <B><a href="https://aka.ms/cosmos-db-video-indexing-best-practices-mongodb-api" target="_blank">Video: Explore indexing best practices for the Azure Cosmos DB API for MongoDB</a></b>
+
+Azure Cosmos DB API for MongoDB takes advantage of the core index-management capabilities of Azure Cosmos DB. This article focuses on how to add indexes using Azure Cosmos DB API for MongoDB. Indexes are specialized data structures that make querying your data roughly an order of magnitude faster.
## Indexing for MongoDB server version 3.6 and higher
-Azure Cosmos DB's API for MongoDB server version 3.6+ automatically indexes the `_id` field and the shard key (only in sharded collections). The API automatically enforces the uniqueness of the `_id` field per shard key.
+Azure Cosmos DB API for MongoDB server version 3.6+ automatically indexes the `_id` field and the shard key (only in sharded collections). The API automatically enforces the uniqueness of the `_id` field per shard key.
The API for MongoDB behaves differently from the Azure Cosmos DB SQL API, which indexes all fields by default. ### Editing indexing policy
-We recommend editing your indexing policy in the Data Explorer within the Azure portal.
-. You can add single field and wildcard indexes from the indexing policy editor in the Data Explorer:
+We recommend editing your indexing policy in the Data Explorer within the Azure portal. You can add single field and wildcard indexes from the indexing policy editor in the Data Explorer:
:::image type="content" source="./media/mongodb-indexing/indexing-policy-editor.png" alt-text="Indexing policy editor":::
Azure Cosmos DB creates multikey indexes to index content stored in arrays. If y
### Geospatial indexes
-Many geospatial operators will benefit from geospatial indexes. Currently, Azure Cosmos DB's API for MongoDB supports `2dsphere` indexes. The API does not yet support `2d` indexes.
+Many geospatial operators will benefit from geospatial indexes. Currently, Azure Cosmos DB API for MongoDB supports `2dsphere` indexes. The API does not yet support `2d` indexes.
Here's an example of creating a geospatial index on the `location` field:
Here's an example of creating a geospatial index on the `location` field:
### Text indexes
-Azure Cosmos DB's API for MongoDB does not currently support text indexes. For text search queries on strings, you should use [Azure Cognitive Search](../../search/search-howto-index-cosmosdb.md) integration with Azure Cosmos DB.
+Azure Cosmos DB API for MongoDB does not currently support text indexes. For text search queries on strings, you should use [Azure Cognitive Search](../../search/search-howto-index-cosmosdb.md) integration with Azure Cosmos DB.
## Wildcard indexes
Wildcard indexes do not support any of the following index types or properties:
* TTL * Unique
-**Unlike in MongoDB**, in Azure Cosmos DB's API for MongoDB you **can't** use wildcard indexes for:
+**Unlike in MongoDB**, in Azure Cosmos DB API for MongoDB you **can't** use wildcard indexes for:
* Creating a wildcard index that includes multiple specific fields
globaldb:PRIMARY> db.coll.createIndex( { "university" : 1, "student_id" : 1 }, {
In the preceding example, omitting the ```"university":1``` clause returns an error with the following message:
-*cannot create unique index over {student_id : 1.0} with shard key pattern { university : 1.0 }*
+`cannot create unique index over {student_id : 1.0} with shard key pattern { university : 1.0 }`
### TTL indexes
The preceding command deletes any documents in the ```db.coll``` collection that
## Track index progress
-Version 3.6+ of Azure Cosmos DB's API for MongoDB support the `currentOp()` command to track index progress on a database instance. This command returns a document that contains information about in-progress operations on a database instance. You use the `currentOp` command to track all in-progress operations in native MongoDB. In Azure Cosmos DB's API for MongoDB, this command only supports tracking the index operation.
+Version 3.6+ of Azure Cosmos DB API for MongoDB support the `currentOp()` command to track index progress on a database instance. This command returns a document that contains information about in-progress operations on a database instance. You use the `currentOp` command to track all in-progress operations in native MongoDB. In Azure Cosmos DB API for MongoDB, this command only supports tracking the index operation.
Here are some examples that show how to use the `currentOp` command to track index progress:
If you're using version 3.2, this section outlines key differences with versions
### Dropping default indexes (version 3.2)
-Unlike the 3.6+ versions of Azure Cosmos DB's API for MongoDB, version 3.2 indexes every property by default. You can use the following command to drop these default indexes for a collection (```coll```):
+Unlike the 3.6+ versions of Azure Cosmos DB API for MongoDB, version 3.2 indexes every property by default. You can use the following command to drop these default indexes for a collection (```coll```):
```JavaScript > db.coll.dropIndexes()
If you want to create a wildcard index, [upgrade to version 4.0 or 3.6](upgrade-
* [Expire data in Azure Cosmos DB automatically with time to live](../time-to-live.md) * To learn about the relationship between partitioning and indexing, see how to [Query an Azure Cosmos container](../how-to-query-container.md) article. * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
cosmos-db Monitor Normalized Request Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-normalized-request-units.md
Title: Monitor normalized RU/s for an Azure Cosmos container or an account
-description: Learn how to monitor the normalized request unit usage of an operation in Azure Cosmos DB. Owners of an Azure Cosmos DB account can understand which operations are consuming more request units.
+description: Learn how to monitor the normalized request unit usage of an operation in Azure Cosmos DB. Owners of an Azure Cosmos DB account can understand which operations are consuming more request units.
- Previously updated : 02/17/2022+ Last updated : 03/03/2022 # How to monitor normalized RU/s for an Azure Cosmos container or an account [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your account and create dashboards. The Azure Cosmos DB metrics are collected by default, this feature does not require you to enable or configure anything explicitly.
+Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your account and create dashboards. The Azure Cosmos DB metrics are collected by default, this feature doesn't require you to enable or configure anything explicitly.
-The **Normalized RU Consumption** metric is used to see how well saturated the partition key ranges are with respect to the traffic. Azure Cosmos DB distributes the throughput equally across all the partition key ranges. This metric provides a per second view of the maximum throughput utilization for partition key range. Use this metric to calculate the RU/s usage across partition key range for given container. By using this metric, if you see high percentage of request units utilization across all partition key ranges in Azure monitor, you should increase the throughput to meet the needs of your workload.
-Example - Normalized utilization is defined as the max of the RU/s utilization across all partition key ranges. For example, suppose your max throughput is 24,000 RU/s and you have three partition key ranges, P_1, P_2, and P_3 each capable of scaling to 8,000 RU/s. In a given second, if P_1 has used 6000 RUs, P_2 7000 RUs, and P_3 5000 RUs the normalized utilization is MAX(6000 RU / 8000 RU, 7000 RU / 8000 RU, 5000 RU / 8000 RU) = 0.875.
+## Metric Definition
+The **Normalized RU Consumption** metric is a value between 0% and 100% that helps measure the utilization of provisioned throughput on a database or container. The metric is emitted at 1-minute intervals and is defined as the maximum RU/s utilization across all partition key ranges in the time interval. Each partition key range maps to one physical partition and is assigned to hold data for a range of possible hash values. In general, the higher the normalized RU percentage, the more you've utilized your provisioned throughput. The metric can also be used to view the utilization of individual partition key ranges on a database or container.
+
+For example, suppose you set an [autoscale max throughput](provision-throughput-autoscale.md#how-autoscale-provisioned-throughput-works) of 20,000 RU/s on a container (it scales between 2000 - 20,000 RU/s) and you have two partition key ranges (physical partitions), *P1* and *P2*. Because Azure Cosmos DB distributes the provisioned throughput equally across all the partition key ranges, *P1* and *P2* can each scale between 1000 - 10,000 RU/s. Suppose that in a 1-minute interval, in a given second, *P1* consumed 6000 request units and *P2* consumed 8000 request units. The normalized RU consumption of *P1* is 60% and of *P2* is 80%. The overall normalized RU consumption of the entire container is MAX(60%, 80%) = 80%.
+
+If you're interested in seeing the request unit consumption at a per second interval, along with operation type, you can use the opt-in feature [Diagnostic Logs](cosmosdb-monitor-resource-logs.md) and query the **PartitionKeyRUConsumption** table. To get a high-level overview of the operations and status code your application is performing on the Azure Cosmos DB resource, you can use the built-in Azure Monitor **Total Requests** (SQL API), **Mongo Requests**, **Gremlin Requests**, or **Cassandra Requests** metric. Later you can filter on these requests by the 429 status code and split them by **Operation Type**.
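As a rough sketch, assuming the **PartitionKeyRUConsumption** diagnostic category is enabled and routed to the resource-specific **CDBPartitionKeyRUConsumption** table, a per-second breakdown by operation type might look like the following:
```kusto
// Per-second RU consumption split by operation type (sketch; adjust the time window as needed)
CDBPartitionKeyRUConsumption
| where TimeGenerated > ago(1h)
| summarize totalRequestCharge = sum(todouble(RequestCharge)) by bin(TimeGenerated, 1s), OperationName
| render timechart
```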
## What to expect and do when normalized RU/s is higher
-When the normalized RU/s consumption reaches 100% for given partition key range, and if a client still makes requests in that time window of 1 second to that specific partition key range - it receives a rate limited error. The client should respect the suggested wait time and retry the request. The SDK makes it easy to handle this situation by retrying preconfigured times by waiting appropriately. It is not necessary that you see the RU rate limiting error just because the normalized RU has reached 100%. That's because the normalized RU is a single value that represents the max usage over all partition key ranges, one partition key range may be busy but the other partition key ranges can serve the requests without issues. For example, a single operation such as a stored procedure that consumes all the RU/s on a partition key range will lead to a short spike in the normalized RU/s consumption. In such cases, there will not be any immediate rate limiting errors if the request rate is low or requests are made to other partitions on different partition key ranges.
+When the normalized RU consumption reaches 100% for a given partition key range, and a client still makes requests to that specific partition key range within that 1-second window, it receives a rate-limited error (429).
+
+This doesn't necessarily mean there's a problem with your resource. By default, the Azure Cosmos DB client SDKs and data import tools such as Azure Data Factory and the bulk executor library automatically retry requests on 429s. They typically retry up to 9 times. As a result, while you may see 429s in the metrics, these errors may not even have been returned to your application.
+
+In general, for a production workload, if you see between 1% and 5% of requests with 429s, and your end-to-end latency is acceptable, this is a healthy sign that the RU/s are being fully utilized. In this case, the normalized RU consumption metric reaching 100% only means that in a given second, at least one partition key range used all its provisioned throughput. This is acceptable because the overall rate of 429s is still low. No further action is required.
+
+To determine what percent of your requests to your database or container resulted in 429s, from your Azure Cosmos DB account blade, navigate to **Insights** > **Requests** > **Total Requests by Status Code**. Filter to a specific database and container. For Gremlin API, use the **Gremlin Requests** metric.
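If diagnostic logs are enabled, a hedged Kusto sketch over the resource-specific **CDBDataPlaneRequests** table can approximate the same ratio:
```kusto
// Approximate percentage of throttled (429) requests in the last hour (sketch)
CDBDataPlaneRequests
| where TimeGenerated > ago(1h)
| summarize totalRequests = count(), throttledRequests = countif(StatusCode == 429)
| extend throttledPercent = round(100.0 * throttledRequests / totalRequests, 2)
```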
+
+If the normalized RU consumption metric is consistently 100% across multiple partition key ranges and the rate of 429s is greater than 5%, it's recommended to increase the throughput. You can find out which operations are heavy and what their peak usage is by using the [Azure monitor metrics and Azure monitor diagnostic logs](sql/troubleshoot-request-rate-too-large.md#step-3-determine-what-requests-are-returning-429s). Follow the [best practices for scaling provisioned throughput (RU/s)](scaling-provisioned-throughput-best-practices.md).
+
+It isn't always the case that you'll see a 429 rate limiting error just because the normalized RU has reached 100%. That's because the normalized RU is a single value that represents the max usage over all partition key ranges. One partition key range may be busy but the other partition key ranges can serve requests without issues. For example, a single operation such as a stored procedure that consumes all the RU/s on a partition key range will lead to a short spike in the normalized RU consumption metric. In such cases, there won't be any immediate rate limiting errors if the overall request rate is low or requests are made to other partitions on different partition key ranges.
+
+Learn more about how to [interpret and debug 429 rate limiting errors](sql/troubleshoot-request-rate-too-large.md).
+
+## How to monitor for hot partitions
+The normalized RU consumption metric can be used to monitor if your workload has a hot partition. A hot partition arises when one or a few logical partition keys consume a disproportionate amount of the total RU/s due to higher request volume. This can be caused by a partition key design that doesn't evenly distribute requests. It results in many requests being directed to a small subset of logical partitions (and therefore a small subset of partition key ranges) that become "hot." Because all data for a logical partition resides on one partition key range and total RU/s is evenly distributed among all the partition key ranges, a hot partition can lead to 429s and inefficient use of throughput.
+
+#### How to identify if there's a hot partition
+
+To verify if there's a hot partition, navigate to **Insights** > **Throughput** > **Normalized RU Consumption (%) By PartitionKeyRangeID**. Filter to a specific database and container.
+
+Each PartitionKeyRangeId maps to one physical partition. If there's one PartitionKeyRangeId that has significantly higher normalized RU consumption than others (for example, one is consistently at 100%, but others are at 30% or less), this can be a sign of a hot partition.
++
+To identify the logical partitions that are consuming the most RU/s, as well as recommended solutions, see the article [Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions](sql/troubleshoot-request-rate-too-large.md#how-to-identify-the-hot-partition).
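As a rough, hedged sketch, assuming the **PartitionKeyRUConsumption** diagnostic category is enabled, the top RU-consuming logical partition keys and their partition key ranges can be approximated with:
```kusto
// Top logical partition keys by total request charge (sketch)
CDBPartitionKeyRUConsumption
| where TimeGenerated > ago(1h)
| summarize totalRequestCharge = sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
| top 10 by totalRequestCharge desc
```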
+
+## Normalized RU Consumption and autoscale
+
+The normalized RU consumption metric will show as 100% if at least one partition key range uses all its allocated RU/s in any given second in the time interval. One common question is: why is the normalized RU consumption at 100%, but Azure Cosmos DB didn't scale the RU/s to the maximum throughput with autoscale?
+
+> [!NOTE]
+> The information below describes the current implementation of autoscale and may be subject to change in the future.
+
+When you use autoscale, Azure Cosmos DB only scales the RU/s to the maximum throughput when the normalized RU consumption is 100% for a sustained, continuous period of time in a 5-second interval. This is done to ensure the scaling logic is cost friendly to the user, as it ensures that single, momentary spikes don't lead to unnecessary scaling and higher cost. When there are momentary spikes, the system typically scales up to a value higher than the previously scaled-to RU/s, but lower than the max RU/s.
-The Azure Monitor metrics help you to find the operations per status code for SQL API by using the **Total Requests** metric. Later you can filter on these requests by the 429 status code and split them by **Operation Type**.
+For example, suppose you have a container with autoscale max throughput of 20,000 RU/s (scales between 2000 - 20,000 RU/s) and 2 partition key ranges. Each partition key range can scale between 1000 - 10,000 RU/s. Because autoscale provisions all required resources upfront, you can use up to 20,000 RU/s at any time. Let's say you have an intermittent spike of traffic, where for a single second, the usage of one of the partition key ranges is 10,000 RU/s. For subsequent seconds, the usage goes back down to 1000 RU/s. Because the normalized RU consumption metric shows the highest utilization in the time period across all partitions, it will show 100%. However, because the utilization was only 100% for 1 second, autoscale won't automatically scale to the max.
-To find the requests, which are rate limited, the recommended way is to get this information through diagnostic logs.
+As a result, even though autoscale didn't scale to the maximum, you were still able to use the total RU/s available. To verify your RU/s consumption, you can use the opt-in feature Diagnostic Logs to query for the overall RU/s consumption at a per second level across all partition key ranges.
-If there is continuous peak of 100% normalized RU/s consumption or close to 100% across multiple partition key ranges, it's recommended to increase the throughput. You can find out which operations are heavy and their peak usage by utilizing the Azure monitor metrics and Azure monitor diagnostic logs.
+```kusto
+CDBPartitionKeyRUConsumption
+| where TimeGenerated >= (todatetime('2022-01-28T20:35:00Z')) and TimeGenerated <= todatetime('2022-01-28T20:40:00Z')
+| where DatabaseName == "MyDatabase" and CollectionName == "MyContainer"
+| summarize sum(RequestCharge) by bin(TimeGenerated, 1sec), PartitionKeyRangeId
+| render timechart
+```
+In general, for a production workload using autoscale, if you see between 1% and 5% of requests with 429s, and your end-to-end latency is acceptable, this is a healthy sign that the RU/s are being fully utilized. Even if the normalized RU consumption occasionally reaches 100% and autoscale does not scale up to the max RU/s, this is OK because the overall rate of 429s is low. No action is required.
-In summary, the **Normalized RU Consumption** metric is used to see which partition key range is more warm in terms of usage. So it gives you the skew of throughput towards a partition key range. You can later follow up to see the **PartitionKeyRUConsumption** log in Azure Monitor logs to get information about which logical partition keys are hot in terms of usage. This will point to change in either the partition key choice, or the change in application logic. To resolve the rate limiting, distribute the load of data say across multiple partitions or just increase in the throughput as it is required.
+> [!TIP]
+> If you are using autoscale and find that normalized RU consumption is consistently 100% and you are consistently scaled to the max RU/s, this is a sign that using manual throughput may be more cost-effective. To determine whether autoscale or manual throughput is best for your workload, see [how to choose between standard (manual) and autoscale provisioned throughput](how-to-choose-offer.md). Azure Cosmos DB also sends [Azure Advisor recommendations](../advisor/advisor-reference-cost-recommendations.md#configure-manual-throughput-instead-of-autoscale-on-your-azure-cosmos-db-database-or-container) based on your workload patterns to recommend either manual or autoscale throughput.
## View the normalized request unit consumption metric
In summary, the **Normalized RU Consumption** metric is used to see which parti
:::image type="content" source="./media/monitor-normalized-request-units/normalized-request-unit-usage-metric.png" alt-text="Choose a metric from the Azure portal" border="true":::
-### Filters for normalized request unit consumption
+### Filters for normalized RU consumption metric
-You can also filter metrics and the chart displayed by a specific **CollectionName**, **DatabaseName**, **PartitionKeyRangeID**, and **Region**. To filter the metrics, select **Add filter** and choose the required property such as **CollectionName** and corresponding value you are interested in. The graph then displays the Normalized RU Consumption units consumed for the container for the selected period.
+You can also filter metrics and the chart displayed by a specific **CollectionName**, **DatabaseName**, **PartitionKeyRangeID**, and **Region**. To filter the metrics, select **Add filter** and choose the required property such as **CollectionName** and corresponding value you are interested in. The graph then displays the normalized RU consumption metric for the container for the selected period.
You can group metrics by using the **Apply splitting** option. For shared throughput databases, the normalized RU metric shows data at the database granularity only; it doesn't show any data per collection. So for a shared throughput database, you won't see any data when you apply splitting by collection name.
The normalized request unit consumption metric for each container is displayed a
* Monitor Azure Cosmos DB data by using [diagnostic settings](cosmosdb-monitor-resource-logs.md) in Azure. * [Audit Azure Cosmos DB control plane operations](audit-control-plane-logs.md)
+* [Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions](sql/troubleshoot-request-rate-too-large.md)
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
Last updated 02/08/2022
# Partitioning and horizontal scaling in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
+📺 <B><a href="https://aka.ms/cosmos-db-video-data-partitioning-best-practices" target="_blank">Video: Data partitioning best practices</a></b>
+ Azure Cosmos DB uses partitioning to scale individual containers in a database to meet the performance needs of your application. In partitioning, the items in a container are divided into distinct subsets called *logical partitions*. Logical partitions are formed based on the value of a *partition key* that is associated with each item in a container. All the items in a logical partition have the same partition key value. For example, a container holds items. Each item has a unique value for the `UserID` property. If `UserID` serves as the partition key for the items in the container and there are 1,000 unique `UserID` values, 1,000 logical partitions are created for the container.
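To make the partition key declaration concrete, the following is a minimal sketch using the Python SDK (`azure-cosmos`); the account endpoint, key, and resource names are placeholders, and the other Azure Cosmos DB SDKs expose equivalent options.

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint and key; replace with your account's values.
client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")

database = client.create_database_if_not_exists(id="AppDatabase")

# Each item's UserID value determines which logical partition it belongs to.
container = database.create_container_if_not_exists(
    id="UserEvents",
    partition_key=PartitionKey(path="/UserID"),
    offer_throughput=400,  # manual throughput; autoscale is also supported
)
```

With 1,000 distinct `UserID` values, this container ends up with 1,000 logical partitions, exactly as described above.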
A container is scaled by distributing data and throughput across physical partit
The number of physical partitions in your container depends on the following:
-* The number of throughput provisioned (each individual physical partition can provide a throughput of up to 10,000 request units per second). The 10,000 RU/s limit for physical partitions implies that logical partitions also have a 10,000 RU/s limit, as each logical partition is only mapped to one physical partition.
+* The amount of throughput provisioned (each individual physical partition can provide a throughput of up to 10,000 request units per second). The 10,000 RU/s limit for physical partitions implies that logical partitions also have a 10,000 RU/s limit, as each logical partition is only mapped to one physical partition.
* The total data storage (each individual physical partition can store up to 50GB data).
You can see your container's physical partitions in the **Storage** section of t
In the above screenshot, a container has `/foodGroup` as the partition key. Each of the three bars in the graph represents a physical partition. In the image, **partition key range** is the same as a physical partition. The selected physical partition contains the three largest logical partitions: `Beef Products`, `Vegetable and Vegetable Products`, and `Soups, Sauces, and Gravies`.
-If you provision a throughput of 18,000 request units per second (RU/s), then each of the three physical partition can utilize 1/3 of the total provisioned throughput. Within the selected physical partition, the logical partition keys `Beef Products`, `Vegetable and Vegetable Products`, and `Soups, Sauces, and Gravies` can, collectively, utilize the physical partition's 6,000 provisioned RU/s. Because provisioned throughput is evenly divided across your container's physical partitions, it's important to choose a partition key that evenly distributes throughput consumption by [choosing the right logical partition key](#choose-partitionkey).
+If you provision a throughput of 18,000 request units per second (RU/s), then each of the three physical partitions can utilize 1/3 of the total provisioned throughput. Within the selected physical partition, the logical partition keys `Beef Products`, `Vegetable and Vegetable Products`, and `Soups, Sauces, and Gravies` can, collectively, utilize the physical partition's 6,000 provisioned RU/s. Because provisioned throughput is evenly divided across your container's physical partitions, it's important to choose a partition key that evenly distributes throughput consumption by [choosing the right logical partition key](#choose-partitionkey).
## Managing logical partitions
The following image shows how logical partitions are mapped to physical partitio
## <a id="choose-partitionkey"></a>Choosing a partition key
-A partition key has two components: **partition key path** and the **partition key value**. For example, consider an item { "userId" : "Andrew", "worksFor": "Microsoft" } if you choose "userId" as the partition key, the following are the two partition key components:
+A partition key has two components: **partition key path** and the **partition key value**. For example, consider an item `{ "userId" : "Andrew", "worksFor": "Microsoft" }`. If you choose "userId" as the partition key, the following are the two partition key components (illustrated in the sketch after this list):
-* The partition key path (For example: "/userId"). The partition key path accepts alphanumeric and underscore(_) characters. You can also use nested objects by using the standard path notation(/).
+* The partition key path (For example: "/userId"). The partition key path accepts alphanumeric and underscore (_) characters. You can also use nested objects by using the standard path notation (/).
* The partition key value (For example: "Andrew"). The partition key value can be of string or numeric types.
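To illustrate the distinction, here is a hedged sketch with the Python SDK: the path ("/userId") is fixed when the container is created, while the value ("Andrew") travels with each item and is supplied for point reads. `database` is assumed to be an existing `DatabaseProxy`, and the container name is illustrative.

```python
from azure.cosmos import PartitionKey

# The partition key *path* is declared once, on the container.
container = database.create_container_if_not_exists(
    id="Employees",
    partition_key=PartitionKey(path="/userId"),
)

# The partition key *value* is whatever each item carries at that path.
container.upsert_item({"id": "1", "userId": "Andrew", "worksFor": "Microsoft"})

# Point reads supply both the item id and the partition key value.
item = container.read_item(item="1", partition_key="Andrew")
```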
Some things to consider when selecting the *item ID* as the partition key includ
* Learn how to [provision throughput on an Azure Cosmos database](how-to-provision-database-throughput.md). * See the learn module on how to [Model and partition your data in Azure Cosmos DB.](/learn/modules/model-partition-data-azure-cosmos-db/) * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Request Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/request-units.md
# Request Units in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
+📺 <B><a href="https://aka.ms/cosmos-db-video-what-is-request-unit" target="_blank">Video: What is a Request Unit?</a></b>
+ Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation. The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). A request unit is a performance currency that abstracts the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB.
To manage and plan capacity, Azure Cosmos DB ensures that the number of RUs for
The type of Azure Cosmos account you're using determines the way consumed RUs get charged. There are 3 modes in which you can create an account:
-1. **Provisioned throughput mode**: In this mode, you provision the number of RUs for your application on a per-second basis in increments of 100 RUs per second. To scale the provisioned throughput for your application, you can increase or decrease the number of RUs at any time in increments or decrements of 100 RUs. You can make your changes either programmatically or by using the Azure portal. You are billed on an hourly basis for the amount of RUs per second you have provisioned. To learn more, see the [Provisioned throughput](set-throughput.md) article.
+1. **Provisioned throughput mode**: In this mode, you provision the number of RUs for your application on a per-second basis in increments of 100 RUs per second. To scale the provisioned throughput for your application, you can increase or decrease the number of RUs at any time in increments or decrements of 100 RUs. You can make your changes either programmatically (see the sketch after this list) or by using the Azure portal. You are billed on an hourly basis for the number of RUs per second you have provisioned. To learn more, see the [Provisioned throughput](set-throughput.md) article.
You can provision throughput at two distinct granularities: * **Containers**: For more information, see [Provision throughput on an Azure Cosmos container](how-to-provision-container-throughput.md). * **Databases**: For more information, see [Provision throughput on an Azure Cosmos database](how-to-provision-database-throughput.md).
-2. **Serverless mode**: In this mode, you don't have to provision any throughput when creating resources in your Azure Cosmos account. At the end of your billing period, you get billed for the amount of Request Units that has been consumed by your database operations. To learn more, see the [Serverless throughput](serverless.md) article.
+2. **Serverless mode**: In this mode, you don't have to provision any throughput when creating resources in your Azure Cosmos account. At the end of your billing period, you get billed for the number of Request Units that have been consumed by your database operations. To learn more, see the [Serverless throughput](serverless.md) article.
-3. **Autoscale mode**: In this mode, you can automatically and instantly scale the throughput (RU/s) of your database or container based on it's usage, without impacting the availability, latency, throughput, or performance of the workload. This mode is well suited for mission-critical workloads that have variable or unpredictable traffic patterns, and require SLAs on high performance and scale. To learn more, see the [autoscale throughput](provision-throughput-autoscale.md) article.
+3. **Autoscale mode**: In this mode, you can automatically and instantly scale the throughput (RU/s) of your database or container based on its usage, without impacting the availability, latency, throughput, or performance of the workload. This mode is well suited for mission-critical workloads that have variable or unpredictable traffic patterns, and require SLAs on high performance and scale. To learn more, see the [autoscale throughput](provision-throughput-autoscale.md) article.
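All three modes can also be managed programmatically. The following is a rough sketch with the Python SDK; `container` and `database` are assumed to exist, the method names assume a recent `azure-cosmos` 4.x release, and the autoscale constructor arguments in particular should be verified against your SDK version.

```python
from azure.cosmos import PartitionKey, ThroughputProperties

# Provisioned (manual) throughput: read the current value and change it in 100 RU/s steps.
current = container.get_throughput()
container.replace_throughput(current.offer_throughput + 100)

# Autoscale: create a container with a maximum throughput; it scales between 10% and 100% of that value.
autoscale_container = database.create_container_if_not_exists(
    id="AutoscaleContainer",
    partition_key=PartitionKey(path="/id"),
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=4000),
)
```

In serverless mode there is no provisioned throughput to manage, so neither call applies.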
## Request Unit considerations
While you estimate the number of RUs consumed by your workload, consider the fol
* The size of the result set * Projections
- The same query on the same data will always costs the same number of RUs on repeated executions.
+ The same query on the same data will always cost the same number of RUs on repeated executions.
* **Script usage**: As with queries, stored procedures and triggers consume RUs based on the complexity of the operations that are performed. As you develop your application, inspect the [request charge header](./optimize-cost-reads-writes.md#measuring-the-ru-charge-of-a-request) to better understand how much RU capacity each operation consumes.
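For instance, here is a small, hedged sketch of reading the charge with the Python SDK, where the value is surfaced through the `x-ms-request-charge` response header (other SDKs expose it as a `RequestCharge` property on the response); `container` is assumed to be an existing `ContainerProxy`.

```python
# Any operation's charge can be read from the headers of the last response.
item = container.read_item(item="1", partition_key="Andrew")
print("Point read charge:", container.client_connection.last_response_headers["x-ms-request-charge"])

# For queries, the header reflects the most recently fetched page of results.
results = list(container.query_items(
    query="SELECT * FROM c WHERE c.userId = @u",
    parameters=[{"name": "@u", "value": "Andrew"}],
    enable_cross_partition_query=True,
))
print("Query charge (last page):", container.client_connection.last_response_headers["x-ms-request-charge"])
```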
Your choice of [consistency model](consistency-levels.md) also affects the throu
- Learn how to [optimize query cost in Azure Cosmos DB](./optimize-cost-reads-writes.md). - Learn how to [use metrics to monitor throughput](use-metrics.md). - Trying to do capacity planning for a migration to Azure Cosmos DB?
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Change Feed Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-design-patterns.md
Last updated 08/26/2021
# Change feed design patterns in Azure Cosmos DB [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
+📺 <B><a href="https://aka.ms/cosmos-db-video-deploy-event-sourcing-solution-with-azure-functions-dotnet" target="_blank">Video: Deploy an event sourcing solution with Azure Functions + .NET in 7 minutes</a></b>
+ The Azure Cosmos DB change feed enables efficient processing of large datasets with a high volume of writes. Change feed also offers an alternative to querying an entire dataset to identify what has changed. This document focuses on common change feed design patterns, design tradeoffs, and change feed limitations. Azure Cosmos DB is well-suited for IoT, gaming, retail, and operational logging applications. A common design pattern in these applications is to use changes to the data to trigger additional actions. Examples of additional actions include:
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/modeling-data.md
Last updated 02/15/2022
# Data modeling in Azure Cosmos DB [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
+📺 <B><a href="https://aka.ms/cosmos-db-video-data-modeling-best-practices" target="_blank">Video: Data modeling best practices</a></b>
++ While schema-free databases, like Azure Cosmos DB, make it super easy to store and query unstructured and semi-structured data, you should spend some time thinking about your data model to get the most out of the service in terms of performance, scalability, and cost. How is data going to be stored? How is your application going to retrieve and query data? Is your application read-heavy, or write-heavy?
cosmos-db Sql Api Sdk Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet-core.md
ms.devlang: csharp Previously updated : 11/11/2021 Last updated : 03/04/2022
Below is a list of any known issues affecting the [recommended minimum version](#
| Issue | Impact | Mitigation | Tracking link | | | | | |
-| When using Direct mode with an account with multiple write locations, the SDK might not detect when a region is added to the account. The background process that [refreshes the account information](troubleshoot-sdk-availability.md#adding-a-region-to-an-account) fails to start. |If a new region is added to the account which is part of the PreferredLocations on a higher order than the current region, the SDK won't detect the new available region. |Restart the application. |https://github.com/Azure/azure-cosmos-dotnet-v2/issues/852 |
+| When using Direct mode with an account with multiple write locations, the SDK might not detect when a region is added to the account. The background process that [refreshes the account information](troubleshoot-sdk-availability.md#adding-a-region-to-an-account) fails to start. |If a new region is added to the account which is part of the PreferredLocations on a higher order than the current region, the SDK won't detect the new available region. |Upgrade to 2.17.0. |https://github.com/Azure/azure-cosmos-dotnet-v2/issues/852 |
## See Also
cosmos-db Sql Api Sdk Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet.md
ms.devlang: csharp Previously updated : 11/11/2021 Last updated : 03/04/2022
Below is a list of any known issues affecting the [recommended minimum version](#
| Issue | Impact | Mitigation | Tracking link | | | | | |
-| When using Direct mode with an account with multiple write locations, the SDK might not detect when a region is added to the account. The background process that [refreshes the account information](troubleshoot-sdk-availability.md#adding-a-region-to-an-account) fails to start. |If a new region is added to the account which is part of the PreferredLocations on a higher order than the current region, the SDK won't detect the new available region. |Restart the application. |https://github.com/Azure/azure-cosmos-dotnet-v2/issues/852 |
+| When using Direct mode with an account with multiple write locations, the SDK might not detect when a region is added to the account. The background process that [refreshes the account information](troubleshoot-sdk-availability.md#adding-a-region-to-an-account) fails to start. |If a new region is added to the account which is part of the PreferredLocations on a higher order than the current region, the SDK won't detect the new available region. |Upgrade to 2.17.0. |https://github.com/Azure/azure-cosmos-dotnet-v2/issues/852 |
## FAQ
cosmos-db Troubleshoot Request Rate Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-request-rate-too-large.md
description: Learn how to diagnose and fix request rate too large exceptions.
Previously updated : 02/28/2022 Last updated : 03/03/2022
# Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-This article contains known causes and solutions for various 429 status code errors for the SQL API. If you are using the API for MongoDB, see the [Troubleshoot common issues in API for MongoDB](../mongodb/error-codes-solutions.md) article for how to debug status code 16500.
+This article contains known causes and solutions for various 429 status code errors for the SQL API. If you're using the API for MongoDB, see the [Troubleshoot common issues in API for MongoDB](../mongodb/error-codes-solutions.md) article for how to debug status code 16500.
A "Request rate too large" exception, also known as error code 429, indicates that your requests against Azure Cosmos DB are being rate limited.
When you use provisioned throughput, you set the throughput measured in request
In a given second, if the operations consume more than the provisioned request units, Azure Cosmos DB will return a 429 exception. Each second, the number of request units available to use is reset.
-Before taking an action to change the RU/s, it's important to understand the root cause of rate limiting and address the underlying issue.
+Before taking an action to change the RU/s, it's important to understand the root cause of rate limiting and address the underlying issue.
+> [!TIP]
+> The guidance in this article applies to databases and containers using provisioned throughput - both autoscale and manual throughput.
There are different error messages that correspond to different types of 429 exceptions: - [Request rate is large. More Request Units may be needed, so no changes were made.](#request-rate-is-large)-- [The request did not complete due to a high rate of metadata requests.](#rate-limiting-on-metadata-requests)-- [The request did not complete due to a transient service error.](#rate-limiting-due-to-transient-service-error)
+- [The request didn't complete due to a high rate of metadata requests.](#rate-limiting-on-metadata-requests)
+- [The request didn't complete due to a transient service error.](#rate-limiting-due-to-transient-service-error)
## Request rate is large
-This is the most common scenario. It occurs when the request units consumed by operations on data exceed the provisioned number of RU/s.
+This is the most common scenario. It occurs when the request units consumed by operations on data exceed the provisioned number of RU/s. If you're using manual throughput, this occurs when you've consumed more RU/s than the manual throughput provisioned. If you're using autoscale, this occurs when you've consumed more than the maximum RU/s provisioned. For example, if you have a resource provisioned with manual throughput of 400 RU/s, you will see 429s when you consume more than 400 request units in a single second. If you have a resource provisioned with autoscale max RU/s of 4000 RU/s (scales between 400 RU/s - 4000 RU/s), you will see 429s when you consume more than 4000 request units in a single second.
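From the application's point of view, a 429 that survives the SDK's built-in retries surfaces as an error with status code 429. The sketch below shows one hedged way to handle it with the Python SDK; the `x-ms-retry-after-ms` header lookup and the backoff numbers are assumptions to adapt, not prescribed values.

```python
import time
from azure.cosmos import exceptions

def upsert_with_backoff(container, doc, attempts=5):
    """Upsert a document, backing off when a 429 is surfaced after the SDK's own retries."""
    for attempt in range(attempts):
        try:
            return container.upsert_item(doc)
        except exceptions.CosmosHttpResponseError as e:
            if e.status_code != 429 or attempt == attempts - 1:
                raise
            # Honor the service's suggested wait if present, else use a small exponential backoff.
            headers = getattr(e, "headers", None) or {}
            retry_ms = float(headers.get("x-ms-retry-after-ms", 100 * (2 ** attempt)))
            time.sleep(retry_ms / 1000.0)
```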
### Step 1: Check the metrics to determine the percentage of requests with 429 error
-Seeing 429 error messages doesn't necessarily mean there is a problem with your database or container.
+Seeing 429 error messages doesn't necessarily mean there is a problem with your database or container. A small percentage of 429s is normal whether you are using manual or autoscale throughput, and is a sign that you are maximizing the RU/s you've provisioned.
#### How to investigate
-Determine what percent of your requests to your database or container resulted in 429s, compared to the overall count of successful requests. From your Azure Cosmos DB account blade, navigate to **Insights** > **Requests** > **Total Requests by Status Code**. Filter to a specific database and container.
+Determine what percent of your requests to your database or container resulted in 429s, compared to the overall count of successful requests. From your Azure Cosmos DB account blade, navigate to **Insights** > **Requests** > **Total Requests by Status Code**. Filter to a specific database and container.
-By default, the Azure Cosmos DB client SDKs and data import tools such as Azure Data Factory and bulk executor library automatically retry requests on 429s. They retry typically up to 9 times. As a result, while you may see 429s in the metrics, these errors may not even have been returned to your application.
+By default, the Azure Cosmos DB client SDKs and data import tools such as Azure Data Factory and the bulk executor library automatically retry requests on 429s. They typically retry up to nine times. As a result, while you may see 429s in the metrics, these errors may not even have been returned to your application.
:::image type="content" source="media/troubleshoot-request-rate-too-large/insights-429-requests.png" alt-text="Total Requests by Status Code chart that shows number of 429 and 2xx requests."::: - #### Recommended solution
-In general, for a production workload, if you see between 1-5% of requests with 429s, and your end to end latency is acceptable, this is a healthy sign that the RU/s are being fully utilized. No action is required. Otherwise, move to the next troubleshooting steps.
+In general, for a production workload, **if you see between 1-5% of requests with 429s, and your end to end latency is acceptable, this is a healthy sign that the RU/s are being fully utilized**. No action is required. Otherwise, move to the next troubleshooting steps.
+
+If you're using autoscale, it's possible to see 429s on your database or container, even if the RU/s was not scaled to the maximum RU/s. See the section [Request rate is large with autoscale](#request-rate-is-large-with-autoscale) for an explanation.
-### Step 2: Determine if there is a hot partition
-A hot partition arises when one or a few logical partition keys consume a disproportionate amount of the total RU/s due to higher request volume. This can be caused by a partition key design that doesn't evenly distribute requests. It results in many requests being directed to a small subset of logical (which implies physical) partitions that become "hot." Because all data for a logical partition resides on one physical partition and total RU/s is evenly distributed among the physical partitions, a hot partition can lead to 429s and inefficient use of throughput.
+One common question that arises is, **"Why am I seeing 429s in the Azure Monitor metrics, but none in my own application monitoring?"** If Azure Monitor metrics show you have 429s, but you haven't seen any in your own application, this is because by default, the Cosmos client SDKs [automatically retried internally on the 429s](xref:Microsoft.Azure.Cosmos.CosmosClientOptions.MaxRetryAttemptsOnRateLimitedRequests) and the request succeeded in subsequent retries. As a result, the 429 status code is not returned to the application. In these cases, the overall rate of 429s is typically very low and can be safely ignored, assuming the overall rate is between 1-5% and end-to-end latency is acceptable to your application.
+
+### Step 2: Determine if there's a hot partition
+A hot partition arises when one or a few logical partition keys consume a disproportionate amount of the total RU/s due to higher request volume. This can be caused by a partition key design that doesn't evenly distribute requests. It results in many requests being directed to a small subset of logical (which implies physical) partitions that become "hot." Because all data for a logical partition resides on one physical partition and total RU/s is evenly distributed among the physical partitions, a hot partition can lead to 429s and inefficient use of throughput.
Here are some examples of partitioning strategies that lead to hot partitions:-- You have a container storing IoT device data for a write-heavy workload that is partitioned by `date`. All data for a single date will reside on the same logical and physical partition. Because all the data written each day has the same date, this would result in a hot partition every day.
- - Instead, for this scenario, a partition key like `id` (either a GUID or device id), or a [synthetic partition key](./synthetic-partition-keys.md) combining `id` and `date` would yield a higher cardinality of values and better distribution of request volume.
-- You have a multi-tenant scenario with a container partitioned by `tenantId`. If one tenant is significantly more active than the others, it results in a hot partition. For example, if the largest tenant has 100,000 users, but most tenants have fewer than 10 users, you will have a hot partition when partitioned by `tenantID`.
- - For this previous scenario, consider having a dedicated container for the largest tenant, partitioned by a more granular property such as `UserId`.
-
+- You have a container storing IoT device data for a write-heavy workload that is partitioned by `date`. All data for a single date will reside on the same logical and physical partition. Because all the data written each day has the same date, this would result in a hot partition every day.
+ - Instead, for this scenario, a partition key like `id` (either a GUID or device ID), or a [synthetic partition key](./synthetic-partition-keys.md) combining `id` and `date`, would yield a higher cardinality of values and better distribution of request volume (see the sketch after this list).
+- You have a multi-tenant scenario with a container partitioned by `tenantId`. If one tenant is much more active than the others, it results in a hot partition. For example, if the largest tenant has 100,000 users, but most tenants have fewer than 10 users, you will have a hot partition when partitioned by `tenantId`.
+ - For this previous scenario, consider having a dedicated container for the largest tenant, partitioned by a more granular property such as `UserId`.
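The following is a small, hedged Python sketch of the synthetic partition key idea from the IoT example above: the write path composes the device id and the date into one value, so a day's writes spread across many logical partitions. The property and container names are illustrative, and `container` is assumed to be created with `PartitionKey(path="/partitionKey")`.

```python
import uuid
from datetime import date

def build_iot_reading(device_id: str, temperature: float) -> dict:
    """Compose a synthetic partition key from the device id and the current date."""
    today = date.today().isoformat()
    return {
        "id": str(uuid.uuid4()),
        "deviceId": device_id,
        "date": today,
        "temperature": temperature,
        # High-cardinality synthetic key: one day's readings land on many logical partitions.
        "partitionKey": f"{device_id}-{today}",
    }

container.upsert_item(build_iot_reading("thermostat-42", 21.5))
```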
+ #### How to identify the hot partition To verify if there's a hot partition, navigate to **Insights** > **Throughput** > **Normalized RU Consumption (%) By PartitionKeyRangeID**. Filter to a specific database and container.
-Each PartitionKeyRangeId maps to one physical partition. If there's one PartitionKeyRangeId that has significantly higher Normalized RU consumption than others (for example, one is consistently at 100%, but others are at 30% or less), this can be a sign of a hot partition. Learn more about the [Normalized RU Consumption metric](../monitor-normalized-request-units.md).
+Each PartitionKeyRangeId maps to one physical partition. If there's one PartitionKeyRangeId that has much higher Normalized RU consumption than others (for example, one is consistently at 100%, but others are at 30% or less), this can be a sign of a hot partition. Learn more about the [Normalized RU Consumption metric](../monitor-normalized-request-units.md).
:::image type="content" source="media/troubleshoot-request-rate-too-large/split-norm-utilization-by-pkrange-hot-partition.png" alt-text="Normalized RU Consumption by PartitionKeyRangeId chart with a hot partition.":::
-To see which logical partition keys are consuming the most RU/s,
-use [Azure Diagnostic Logs](../cosmosdb-monitor-resource-logs.md). This sample query sums up the total request units consumed per second on each logical partition key.
+To see which logical partition keys are consuming the most RU/s,
+use [Azure Diagnostic Logs](../cosmosdb-monitor-resource-logs.md). This sample query sums up the total request units consumed per second on each logical partition key.
> [!IMPORTANT]
-> Enabling diagnostic logs incurs a separate charge for the Log Analytics service, which is billed based on the volume of data ingested. It is recommended you turn on diagnostic logs for a limited amount of time for debugging, and turn off when no longer required. See [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for details.
-
-```kusto
-AzureDiagnostics
-| where TimeGenerated >= ago(24hour)
-| where Category == "PartitionKeyRUConsumption"
-| where collectionName_s == "CollectionName"
-| where isnotempty(partitionKey_s)
-// Sum total request units consumed by logical partition key for each second
-| summarize sum(todouble(requestCharge_s)) by partitionKey_s, operationType_s, bin(TimeGenerated, 1s)
-| order by sum_requestCharge_s desc
-```
-This sample output shows that in a particular minute, the logical partition key with value "Contoso" consumed around 12,000 RU/s, while the logical partition key with value "Fabrikam" consumed less than 600 RU/s. If this pattern was consistent during the time period where rate limiting occurred, this would indicate a hot partition.
+> Enabling diagnostic logs incurs a separate charge for the Log Analytics service, which is billed based on the volume of data ingested. It's recommended you turn on diagnostic logs for a limited amount of time for debugging, and turn off when no longer required. See [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for details.
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= ago(24hour)
+ | where CollectionName == "CollectionName"
+ | where isnotempty(PartitionKey)
+ // Sum total request units consumed by logical partition key for each second
+ | summarize sum(RequestCharge) by PartitionKey, OperationName, bin(TimeGenerated, 1s)
+ | order by sum_RequestCharge desc
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= ago(24hour)
+ | where Category == "PartitionKeyRUConsumption"
+ | where collectionName_s == "CollectionName"
+ | where isnotempty(partitionKey_s)
+ // Sum total request units consumed by logical partition key for each second
+ | summarize sum(todouble(requestCharge_s)) by partitionKey_s, operationType_s, bin(TimeGenerated, 1s)
+ | order by sum_requestCharge_s desc
+ ```
++
+This sample output shows that in a particular minute, the logical partition key with value "Contoso" consumed around 12,000 RU/s, while the logical partition key with value "Fabrikam" consumed less than 600 RU/s. If this pattern was consistent during the time period where rate limiting occurred, this would indicate a hot partition.
:::image type="content" source="media/troubleshoot-request-rate-too-large/hot-logical-partition-key-results.png" alt-text="Logical partition keys consuming the most request units per second.":::
If there's high percent of rate limited requests and there's an underlying hot p
- Short-term, you can temporarily increase the RU/s to allow more throughput to the hot partition. This isn't recommended as a long-term strategy, as it leads to overprovisioning RU/s and higher cost. > [!TIP]
-> When you increase the throughput, the scale-up operation will either complete instantaneously or require up to 5-6 hours to complete, depending on the number of RU/s you want to scale up to. If you want to know the highest number of RU/s you can set without triggering the asynchronous scale-up operation (which requires Azure Cosmos DB to provision more physical partitions), multiply the number of distinct PartitionKeyRangeIds by 10,0000 RU/s. For example, if you have 30,000 RU/s provisioned and 5 physical partitions (6000 RU/s allocated per physical partition), you can increase to 50,000 RU/s (10,000 RU/s per physical partition) in an instantaneous scale-up operation. Increasing to >50,000 RU/s would require an asynchronous scale-up operation.
+> When you increase the throughput, the scale-up operation will either complete instantaneously or require up to 5-6 hours to complete, depending on the number of RU/s you want to scale up to. If you want to know the highest number of RU/s you can set without triggering the asynchronous scale-up operation (which requires Azure Cosmos DB to provision more physical partitions), multiply the number of distinct PartitionKeyRangeIds by 10,000 RU/s. For example, if you have 30,000 RU/s provisioned and 5 physical partitions (6000 RU/s allocated per physical partition), you can increase to 50,000 RU/s (10,000 RU/s per physical partition) in an instantaneous scale-up operation. Increasing to >50,000 RU/s would require an asynchronous scale-up operation. Learn more about [best practices for scaling provisioned throughput (RU/s)](../scaling-provisioned-throughput-best-practices.md).
### Step 3: Determine what requests are returning 429s #### How to investigate requests with 429s
-Use [Azure Diagnostic Logs](../cosmosdb-monitor-resource-logs.md) to identify which requests are returning 429s and how many RUs they consumed. This sample query aggregates at the minute level.
+Use [Azure Diagnostic Logs](../cosmosdb-monitor-resource-logs.md) to identify which requests are returning 429s and how many RUs they consumed. This sample query aggregates at the minute level.
> [!IMPORTANT] > Enabling diagnostic logs incurs a separate charge for the Log Analytics service, which is billed based on volume of data ingested. It is recommended you turn on diagnostic logs for a limited amount of time for debugging, and turn off when no longer required. See [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for details.
-```kusto
-AzureDiagnostics
-| where TimeGenerated >= ago(24h)
-| where Category == "DataPlaneRequests"
-| summarize throttledOperations = dcountif(activityId_g, statusCode_s == 429), totalOperations = dcount(activityId_g), totalConsumedRUPerMinute = sum(todouble(requestCharge_s)) by databaseName_s, collectionName_s, OperationName, requestResourceType_s, bin(TimeGenerated, 1min)
-| extend averageRUPerOperation = 1.0 * totalConsumedRUPerMinute / totalOperations
-| extend fractionOf429s = 1.0 * throttledOperations / totalOperations
-| order by fractionOf429s desc
-```
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ CDBDataPlaneRequests
+ | where TimeGenerated >= ago(24h)
+ | summarize throttledOperations = dcountif(ActivityId, StatusCode == 429), totalOperations = dcount(ActivityId), totalConsumedRUPerMinute = sum(RequestCharge) by DatabaseName, CollectionName, OperationName, RequestResourceType, bin(TimeGenerated, 1min)
+ | extend averageRUPerOperation = 1.0 * totalConsumedRUPerMinute / totalOperations
+ | extend fractionOf429s = 1.0 * throttledOperations / totalOperations
+ | order by fractionOf429s desc
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= ago(24h)
+ | where Category == "DataPlaneRequests"
+ | summarize throttledOperations = dcountif(activityId_g, statusCode_s == 429), totalOperations = dcount(activityId_g), totalConsumedRUPerMinute = sum(todouble(requestCharge_s)) by databaseName_s, collectionName_s, OperationName, requestResourceType_s, bin(TimeGenerated, 1min)
+ | extend averageRUPerOperation = 1.0 * totalConsumedRUPerMinute / totalOperations
+ | extend fractionOf429s = 1.0 * throttledOperations / totalOperations
+ | order by fractionOf429s desc
+ ```
++ For example, this sample output shows that each minute, 30% of Create Document requests were being rate limited, with each request consuming an average of 17 RUs. :::image type="content" source="media/troubleshoot-request-rate-too-large/throttled-requests-diagnostic-logs-results.png" alt-text="Requests with 429 in Diagnostic Logs."::: #### Recommended solution ##### Use the Azure Cosmos DB capacity planner
-You can leverage the [Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) to understand what is the best provisioned throughput based on your workload (volume and type of operations and size of documents). You can customize further the calculations by providing sample data to get a more accurate estimation.
+You can use the [Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) to estimate the best provisioned throughput for your workload (volume and type of operations and size of documents). You can further customize the calculations by providing sample data to get a more accurate estimation.
##### 429s on create, replace, or upsert document requests - In the SQL API, all properties are indexed by default. Tune the [indexing policy](../index-policy.md) to index only the properties needed.
-This will lower the Request Units required per create document operation, which will reduce the likelihood of seeing 429s or allow you to achieve higher operations per second for the same amount of provisioned RU/s.
+This will lower the Request Units required per create document operation, which will reduce the likelihood of seeing 429s or allow you to achieve higher operations per second for the same amount of provisioned RU/s.
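As a hedged illustration, this Python sketch creates a container whose indexing policy indexes only the paths the workload actually queries on; the included paths shown are placeholders, `database` is assumed to be an existing `DatabaseProxy`, and the same policy JSON shape is accepted by the portal and the other SDKs.

```python
from azure.cosmos import PartitionKey

# Index only /userId and /orderDate; exclude everything else to cut the RU cost of writes.
lean_indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [
        {"path": "/userId/?"},
        {"path": "/orderDate/?"},
    ],
    "excludedPaths": [
        {"path": "/*"},
    ],
}

container = database.create_container_if_not_exists(
    id="Orders",
    partition_key=PartitionKey(path="/userId"),
    indexing_policy=lean_indexing_policy,
)
```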
##### 429s on query document requests - Follow the guidance to [troubleshoot queries with high RU charge](troubleshoot-query-performance.md#querys-ru-charge-is-too-high) ##### 429s on execute stored procedures-- [Stored procedures](stored-procedures-triggers-udfs.md) are intended for operations that require write transactions across a partition key value. It isn't recommended to use stored procedures for a large number of read or query operations. For best performance, these read or query operations should be done on the client-side, using the Cosmos SDKs.
+- [Stored procedures](stored-procedures-triggers-udfs.md) are intended for operations that require write transactions across a partition key value. It is not recommended to use stored procedures for a large number of read or query operations. For best performance, these read or query operations should be done on the client-side, using the Cosmos SDKs.
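For reference, here is a hedged sketch of doing those reads on the client with the Python SDK instead of a stored procedure: a point read (id plus partition key value) is the cheapest option, and a parameterized query scoped to one partition covers the rest. The item id, partition key value, and property names are illustrative.

```python
# Cheapest option: a point read addressed by id and partition key value.
order = container.read_item(item="order-1001", partition_key="Andrew")

# Otherwise, run the query client-side rather than inside a stored procedure.
recent_orders = list(container.query_items(
    query="SELECT * FROM c WHERE c.userId = @userId AND c.orderDate >= @since",
    parameters=[
        {"name": "@userId", "value": "Andrew"},
        {"name": "@since", "value": "2022-01-01"},
    ],
    partition_key="Andrew",  # scope the query to a single logical partition
))
```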
+
+## Request rate is large with autoscale
+All the guidance in this article applies to both manual and autoscale throughput.
+
+When using autoscale, a common question that arises is, **"Is it still possible to see 429s with autoscale?"**
+
+Yes. There are two main scenarios where this can occur.
+
+**Scenario 1**: When the overall consumed RU/s exceeds the max RU/s of the database or container, the service will throttle requests accordingly. This is analogous to exceeding the overall manual provisioned throughput of a database or container.
+
+**Scenario 2**: If there is a hot partition, that is, a logical partition key value that has a disproportionately higher number of requests compared to other partition key values, it is possible for the underlying physical partition to exceed its RU/s budget. As a best practice, to avoid hot partitions, choose a good partition key that results in an even distribution of both storage and throughput. This is similar to when there is a hot partition when using manual throughput.
+
+For example, if you select the 20,000 RU/s max throughput option and have 200 GB of storage with four physical partitions, each physical partition can be autoscaled up to 5000 RU/s. If there was a hot partition on a particular logical partition key, you will see 429s when the underlying physical partition it resides in exceeds 5000 RU/s, that is, exceeds 100% normalized utilization.
+
+Follow the guidance in [Step 1](#step-1-check-the-metrics-to-determine-the-percentage-of-requests-with-429-error), [Step 2](#step-2-determine-if-theres-a-hot-partition), and [Step 3](#step-3-determine-what-requests-are-returning-429s) to debug these scenarios.
+
+Another common question that arises is, **Why is normalized RU consumption 100%, but autoscale didn't scale to the max RU/s?**
+This typically occurs for workloads that have temporary or intermittent spikes of usage. When you use autoscale, Azure Cosmos DB only scales the RU/s to the maximum throughput when the normalized RU consumption is 100% for a sustained, continuous period of time in a 5-second interval. This is done to keep the scaling logic cost friendly to the user, as it ensures that single, momentary spikes don't lead to unnecessary scaling and higher cost. When there are momentary spikes, the system typically scales up to a value higher than the previously scaled-to RU/s, but lower than the max RU/s. Learn more about how to [interpret the normalized RU consumption metric with autoscale](../monitor-normalized-request-units.md#normalized-ru-consumption-and-autoscale).
## Rate limiting on metadata requests Metadata rate limiting can occur when you are performing a high volume of metadata operations on databases and/or containers. Metadata operations include: - Create, read, update, or delete a container or database - List databases or containers in a Cosmos account-- Query for offers to see the current provisioned throughput
+- Query for offers to see the current provisioned throughput
There's a system-reserved RU limit for these operations, so increasing the provisioned RU/s of the database or container will have no impact and isn't recommended. See [limits on metadata operations](../concepts-limits.md#metadata-request-limits). #### How to investigate
-Navigate to **Insights** > **System** > **Metadata Requests By Status Code**. Filter to a specific database and container if desired.
+Navigate to **Insights** > **System** > **Metadata Requests By Status Code**. Filter to a specific database and container if desired.
:::image type="content" source="media/troubleshoot-request-rate-too-large/metadata-throttling-insights.png" alt-text="Metadata requests by status code chart in Insights."::: #### Recommended solution-- If your application needs to perform metadata operations, consider implementing a backoff policy to send these requests at a lower rate.
+- If your application needs to perform metadata operations, consider implementing a backoff policy to send these requests at a lower rate.
- Use static Cosmos DB client instances. When the DocumentClient or CosmosClient is initialized, the Cosmos DB SDK fetches metadata about the account, including information about the consistency level, databases, containers, partitions, and offers. This initialization may consume a high number of RUs, and should be performed infrequently. Use a single DocumentClient instance and use it for the lifetime of your application.
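A hedged Python sketch of both recommendations follows: a single module-level `CosmosClient` reused for the application's lifetime, plus metadata calls wrapped in a simple exponential backoff so they are sent at a lower rate. The environment variable names and backoff numbers are illustrative.

```python
import os
import time
from azure.cosmos import CosmosClient, exceptions

# Create the client once and reuse it; re-creating it per request repeats the
# account metadata fetch and eats into the system-reserved metadata RU budget.
COSMOS_CLIENT = CosmosClient(os.environ["COSMOS_ENDPOINT"], credential=os.environ["COSMOS_KEY"])

def list_databases_with_backoff(max_attempts=5):
    """Run a metadata operation, backing off between attempts if it gets rate limited."""
    for attempt in range(max_attempts):
        try:
            return list(COSMOS_CLIENT.list_databases())
        except exceptions.CosmosHttpResponseError as e:
            if e.status_code != 429 or attempt == max_attempts - 1:
                raise
            time.sleep(0.5 * (2 ** attempt))
```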
cosmos-db Tutorial Global Distribution Sql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-global-distribution-sql-api.md
Previously updated : 08/26/2021 Last updated : 04/03/2022
The following code shows how to set preferred locations by using the Java SDK:
+## Spark 3 Connector
+
+You can define the preferred regional list using the `spark.cosmos.preferredRegionsList` [configuration](https://github.com/Azure/azure-sdk-for-jav), for example:
+
+```scala
+val sparkConnectorConfig = Map(
+ "spark.cosmos.accountEndpoint" -> cosmosEndpoint,
+ "spark.cosmos.accountKey" -> cosmosMasterKey,
+ "spark.cosmos.preferredRegionsList" -> "[West US, East US, North Europe]"
+ // other settings
+)
+```
+ ## REST Once a database account has been made available in multiple regions, clients can query its availability by performing a GET request on this URI `https://{databaseaccount}.documents.azure.com/`
cosmos-db Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-cases.md
Last updated 05/21/2019
# Common Azure Cosmos DB use cases [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
+📺 <B><a href="https://aka.ms/cosmos-db-video-top-cosmos-db-use-cases" target="_blank">Video: Top Azure Cosmos DB use cases</a></b>
+ This article provides an overview of several common use cases for Azure Cosmos DB. The recommendations in this article serve as a starting point as you develop your application with Cosmos DB. After reading this article, you'll be able to answer the following questions:
After reading this article, you'll be able to answer the following questions:
## Introduction
-[Azure Cosmos DB](../cosmos-db/introduction.md) is Microsoft's fast NoSQL database with open APIs for any scale. The service is designed to allow customers to elastically (and independently) scale throughput and storage across any number of geographical regions. Azure Cosmos DB is the first globally distributed database service in the market today to offer comprehensive [service level agreements](https://azure.microsoft.com/support/legal/sla/cosmos-db/) encompassing throughput, latency, availability, and consistency.
+[Azure Cosmos DB](../cosmos-db/introduction.md) is the Azure solution for a fast NoSQL database, with open APIs for any scale. The service is designed to allow customers to elastically (and independently) scale throughput and storage across any number of geographical regions. Azure Cosmos DB is the first globally distributed database service in the market today to offer comprehensive [service level agreements](https://azure.microsoft.com/support/legal/sla/cosmos-db/) encompassing throughput, latency, availability, and consistency.
Azure Cosmos DB is a globally distributed, multi-model database that is used in a wide range of applications and use cases. It is a good choice for any [serverless](https://azure.com/serverless) application that needs low order-of-millisecond response times, and needs to scale rapidly and globally. It supports multiple data models (key-value, documents, graphs and columnar) and many APIs for data access including [Azure Cosmos DB's API for MongoDB](mongodb/mongodb-introduction.md), [SQL API](./introduction.md), [Gremlin API](graph-introduction.md), and [Tables API](table/introduction.md) natively, and in an extensible manner.
data-factory Connector Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-office-365.md
Previously updated : 09/30/2021 Last updated : 03/04/2022 # Copy data from Office 365 into Azure using Azure Data Factory or Synapse Analytics
For now, within a single copy activity you can only **copy data from Office 365
>- Ensure the Azure Integration Runtime region used for copy activity as well as the destination is in the same region where the Office 365 tenant users' mailbox is located. Refer [here](concepts-integration-runtime.md#integration-runtime-location) to understand how the Azure IR location is determined. Refer to [table here](/graph/data-connect-datasets#regions) for the list of supported Office regions and corresponding Azure regions. >- Service Principal authentication is the only authentication mechanism supported for Azure Blob Storage, Azure Data Lake Storage Gen1, and Azure Data Lake Storage Gen2 as destination stores.
+> [!Note]
+> Please use Azure integration runtime in both source and sink linked services. The self-hosted integration runtime and the managed virtual network integration runtime are not supported.
+ ## Prerequisites To copy data from Office 365 into Azure, you need to complete the following prerequisite steps:
data-factory Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery.md
If you're using Git integration with your data factory and have a CI/CD pipeline
- **Resource naming**. Due to ARM template constraints, issues in deployment may arise if your resources contain spaces in the name. The Azure Data Factory team recommends using '_' or '-' characters instead of spaces for resources. For example, 'Pipeline_1' would be a preferable name over 'Pipeline 1'. -- **Exposure control and feature flags**. When working on a team, there are instances where you may merge changes, but don't want them to be run in elevated environments such as PROD and QA. To handle this scenario, the ADF team recommends [the DevOps concept of using feature flags](/azure/devops/operate/progressive-experimentation-feature-flags?view=azure-devops&preserve-view=true). In ADF, you can combine [global parameters](author-global-parameters.md) and the [if condition activity](control-flow-if-condition-activity.md) to hide sets of logic based upon these environment flags.
+- **Exposure control and feature flags**. When working on a team, there are instances where you may merge changes, but don't want them to be run in elevated environments such as PROD and QA. To handle this scenario, the ADF team recommends [the DevOps concept of using feature flags](/devops/operate/progressive-experimentation-feature-flags). In ADF, you can combine [global parameters](author-global-parameters.md) and the [if condition activity](control-flow-if-condition-activity.md) to hide sets of logic based upon these environment flags.
To learn how to set up a feature flag, see the below video tutorial:
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-troubleshoot-guide.md
Previously updated : 01/28/2022 Last updated : 03/03/2022
The following table applies to Azure Batch.
:::image type="content" source="./media/connector-troubleshoot-guide/system-trust-store-setting.png" alt-text="Uncheck Use System Trust Store":::
-## Web Activity
+### HDI activity stuck in preparing for cluster
+
+If the HDI activity is stuck in preparing for cluster, follow the guidelines below:
+
+1. Make sure the timeout is greater than the durations described below, wait for the execution to complete or time out, and then wait for the Time To Live (TTL) period before submitting new jobs.
+
+ *The default maximum time to spin up a cluster is 2 hours, and any init script can add up to another 2 hours.*
+
+2. Make sure the storage and HDI are provisioned in the same region.
+3. Make sure that the service principal used for accessing the HDI cluster is valid.
+4. If the issue still persists, as a workaround, delete the HDI linked service and re-create it with a new name.
+
+## Web Activity
### Error code: 2128
data-factory Load Office 365 Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-office-365-data.md
Previously updated : 07/05/2021 Last updated : 03/04/2022
This article shows you how to use the Data Factory _load data from Office 365 in
3. In the Activities tool box > Move & Transform category > drag and drop the **Copy activity** from the tool box to the pipeline designer surface. Specify "CopyFromOffice365ToBlob" as activity name.
+> [!Note]
+> Please use Azure integration runtime in both source and sink linked services. The self-hosted integration runtime and the managed virtual network integration runtime are not supported.
+ ### Configure source 1. Go to the pipeline > **Source tab**, click **+ New** to create a source dataset.
This article shows you how to use the Data Factory _load data from Office 365 in
:::image type="content" source="./media/load-office-365-data/edit-source-properties.png" alt-text="Config Office 365 dataset schema"::: + ### Configure sink 1. Go to the pipeline > **Sink tab**, and select **+ New** to create a sink dataset.
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
The Azure Data Factory service is improved on an ongoing basis. To stay up to da
This page is updated monthly, so revisit it regularly.
+## February 2022
+<br>
+<table>
+<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td rowspan=4><b>Data Flow</b></td><td>Parameterized linked services supported in mapping data flows</td><td>You can now use your parameterized linked services in mapping data flows to make your data flow pipelines generic and flexible.<br><a href="parameterize-linked-services.md?tabs=data-factory">Learn more</a></td></tr>
+
+<tr><td>Azure SQL DB incremental source extract available in data flow (Public Preview)</td><td>A new option has been added on mapping data flow Azure SQL DB sources called <i>Enable incremental extract (preview)</i>. Now you can automatically pull only the rows that have changed on your SQL DB sources using data flows.<br><a href="connector-azure-sql-database.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
+
+<tr><td>Four new connectors available for mapping data flows (Public Preview)</td><td>Azure Data Factory now supports the four following new connectors (Public Preview) for mapping data flows: Quickbase Connector, Smartsheet Connector, TeamDesk Connector, and Zendesk Connector.<br><a href="connector-overview.md?tabs=data-factory">Learn more</a></td></tr>
+
+<tr><td>Azure Cosmos DB (SQL API) for mapping data flow now supports inline mode</td><td>Azure Cosmos DB (SQL API) for mapping data flow can now use inline datasets.<br><a href="connector-azure-cosmos-db.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
+
+<tr><td rowspan=2><b>Data Movement</b></td><td>Get metadata-driven data ingestion pipelines on the ADF Copy Data Tool within 10 minutes (GA)</td><td>You can build large-scale data copy pipelines with a metadata-driven approach in the Copy Data Tool (GA) within 10 minutes.<br><a href="copy-data-tool-metadata-driven.md">Learn more</a></td></tr>
+
+<tr><td>Azure Data Factory Google AdWords Connector API Upgrade Available</td><td>The Azure Data Factory Google AdWords connector now supports the new AdWords API version. No action is required for new connector users, because the upgrade is enabled by default.<br><a href="connector-troubleshoot-google-adwords.md#migrate-to-the-new-version-of-google-ads-api">Learn more</a></td></tr>
+
+<tr><td><b>Region Expansion</b></td><td>Azure Data Factory is now available in West US3 and Jio India West</td><td>Azure Data Factory is now available in two new regions: West US3 and Jio India West. You can co-locate your ETL workflow in these new regions if you use them to store and manage your modern data warehouse. You can also use these regions for business continuity and disaster recovery (BCDR) purposes in case you need to fail over from another region within the geo.<br><a href="https://azure.microsoft.com/global-infrastructure/services/?products=data-factory&regions=all">Learn more</a></td></tr>
+
+<tr><td><b>Security</b></td><td>Connect to an Azure DevOps account in another Azure Active Directory tenant</td><td>You can connect your Azure Data Factory to an Azure DevOps Account in a different Azure Active Directory tenant for source control purposes.<br><a href="cross-tenant-connections-to-azure-devops.md">Learn more</a></td></tr>
+</table>
++ ## January 2022 <br> <table>
databox-online Azure Stack Edge Gpu Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md
Previously updated : 02/23/2022 Last updated : 03/03/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
You can add or delete virtual networks associated with your virtual switches. To
1. Select a virtual switch for which you want to create a virtual network. 1. Provide a **Name** for your virtual network.
- 1. Enter a **VLAN ID** as a unique number in 1-4094 range.
+ 1. Enter a **VLAN ID** as a unique number in the 1-4094 range. The VLAN ID that you provide must also be present in the trunk configuration of your physical switch. For more information about trunk configuration, refer to the instructions from your physical switch manufacturer.
1. Specify the **Subnet mask** and **Gateway** for your virtual LAN network as per the physical network configuration. 1. Select **Apply**.
You can add or delete virtual networks associated with your virtual switches. To
1. Select a virtual switch for which you want to create a virtual network. 1. Provide a **Name** for your virtual network.
- 1. Enter a **VLAN ID** as a unique number in 1-4094 range.
+ 1. Enter a **VLAN ID** as a unique number in the 1-4094 range. The VLAN ID that you provide must also be present in the trunk configuration of your physical switch. For more information about trunk configuration, refer to the instructions from your physical switch manufacturer.
1. Specify the **Subnet mask** and **Gateway** for your virtual LAN network as per the physical network configuration. 1. Select **Apply**.
databox-online Azure Stack Edge Pro 2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-overview.md
Last updated 03/03/2022
-#Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro GPU is and how it works so I can use it to process and transform data before sending to Azure.
+#Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro 2 is and how it works so I can use it to process and transform data before sending to Azure.
# What is Azure Stack Edge Pro 2?
The Azure Stack Edge Pro 2 solution consists of Azure Stack Edge resource, Azure
## Region availability
-The Azure Stack Edge Pro GPU physical device, Azure resource, and target storage account to which you transfer data don't all have to be in the same region.
+The Azure Stack Edge Pro 2 physical device, Azure resource, and target storage account to which you transfer data don't all have to be in the same region.
- **Resource availability** - For this release, the resource is available in East US, West EU, and South East Asia regions. - **Device availability** - You should be able to see Azure Stack Edge Pro 2 as one of the available SKUs when placing the order.
- For a list of all the countries/regions where the Azure Stack Edge Pro GPU device is available, go to **Availability** section in the **Azure Stack Edge Pro** tab for [Azure Stack Edge Pro GPU pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/#azureStackEdgePro).
+ For a list of all the countries/regions where the Azure Stack Edge Pro 2 device is available, go to the **Availability** section in the **Azure Stack Edge Pro** tab for [Azure Stack Edge Pro 2 pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/#azureStackEdgePro).
-- **Destination Storage accounts** - The storage accounts that store the data are available in all Azure regions. The regions where the storage accounts store Azure Stack Edge Pro GPU data should be located close to where the device is located for optimum performance. A storage account located far from the device results in long latencies and slower performance.
+- **Destination Storage accounts** - The storage accounts that store the data are available in all Azure regions. The regions where the storage accounts store Azure Stack Edge Pro 2 data should be located close to where the device is located for optimum performance. A storage account located far from the device results in long latencies and slower performance.
Azure Stack Edge service is a non-regional service. For more information, see [Regions and Availability Zones in Azure](../availability-zones/az-overview.md). Azure Stack Edge service doesn't have a dependency on a specific Azure region, making it resilient to zone-wide outages and region-wide outages.
To understand how to choose a region for the Azure Stack Edge service, device, a
## Billing and pricing
-These devices can be ordered via the Azure Edge Hardware center. These devices are billed as a monthly service through the Azure portal. For more information, see [Azure Stack Edge Pro 2 pricing](azure-stack-edge-placeholder.md).
+These devices can be ordered via the Azure Edge Hardware center. These devices are billed as a monthly service through the Azure portal. For more information, see [Azure Stack Edge Pro 2 pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/#azureStackEdgePro).
## Next steps -- Review the [Azure Stack Edge Pro 2 system requirements](azure-stack-edge-placeholder.md).
+- Review the [Azure Stack Edge Pro 2 system requirements](azure-stack-edge-pro-2-system-requirements.md).
-- Understand the [Azure Stack Edge Pro 2 limits](azure-stack-edge-placeholder.md).
+- Understand the [Azure Stack Edge Pro 2 limits](azure-stack-edge-pro-2-limits.md).
-- Deploy [Azure Stack Edge Pro 2](azure-stack-edge-placeholder.md) in Azure portal.
+- Deploy [Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-prep.md) in Azure portal.
devtest-labs Devtest Lab Add Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-vm.md
Title: Create and add a virtual machine to a lab
-description: Learn how to use the Azure portal to add a virtual machine to a lab in Azure DevTest Labs. You can choose a base that is either a custom image or a formula.
+description: Learn how to use the Azure portal to add a virtual machine (VM) to a lab in Azure DevTest Labs. Configure basic settings, artifacts, and advanced settings.
Previously updated : 10/20/2021 Last updated : 03/03/2022
-# Create and add virtual machines to a lab in Azure DevTest Labs
+# Create lab virtual machines in Azure DevTest Labs
-This article walks you through on how to create and add Azure virtual machines (VMs) to a lab in your existing DevTest Labs using the Azure portal.
+This article describes how to create Azure virtual machines (VMs) in Azure DevTest Labs by using the Azure portal.
-## Create and add virtual machines
+## Prerequisite
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+You need at least [user](devtest-lab-add-devtest-user.md#devtest-labs-user) access to a lab in DevTest Labs. For more information about creating labs, see [Create a lab in the Azure portal](devtest-lab-create-lab.md).
-1. Navigate to your lab in **DevTest Labs**.
+<a name="create-and-add-virtual-machines"></a>
+## Configure basic settings
-1. On the **Overview** page, select **+ Add**.
+1. In the [Azure portal](https://portal.azure.com), go to the **Overview** page for the lab.
+
+1. On the lab **Overview** page, select **Add**.
:::image type="content" source="./media/devtest-lab-add-vm/portal-lab-add-vm.png" alt-text="Lab overview page showing add button.":::
-1. On the **Choose a base** page, select a marketplace image for the VM. This guide uses **Windows 11 Pro**. Certain options may differ if you use a different image.
+1. On the **Choose a base** page, select an image for the VM. You can choose Marketplace images, custom images, or formulas that the lab owner made available. The following instructions use Windows 11 Pro. Some bases might have different settings.
-1. From the **Basics Settings** tab, provide the following information:
+1. On the **Basic Settings** tab of the **Create lab resource** screen, provide the following information:
- |Property |Description |
- |||
- |Virtual&nbsp;machine&nbsp;name| The text box is pre-filled with a unique autogenerated name. The name corresponds to the user name within your email address followed by a unique three-digit number. Leave as-is, or enter a unique name of your choosing.|
- |User Name| The text box is pre-filled with a unique autogenerated name. The name corresponds to the user name within your email address. Leave as-is, or enter a name of your choosing. The user is granted **administrator** privileges on the virtual machine.|
- |Use a saved secret| For this walk-through, leave the box unchecked. You can save secrets in Azure Key Vault first and then use it here. For more information, see [Store secrets in a key vault](devtest-lab-store-secrets-in-key-vault.md). If you prefer to use a saved secret, check the box and then select the secret from the **Secret** drop-down list.|
- |Password|Enter a password between 8 and 123 characters long.|
- |Save as default password| Select the checkbox to save the password in the Azure Key Vault associated with the lab.|
- |Virtual machine size| Keep the default value or select **Change Size** to select different physical components. This walk-through uses **Standard_B2**.|
- |OS disk type|Keep the default value or select a different option from the drop-down list.|
- |Artifacts| Select **Add or Remove Artifacts**. Select and configure the artifacts that you want to add to the base image. Each lab includes artifacts from the Public DevTest Labs Artifact Repository and artifacts that you've created and added to your own Artifact Repository. For expanded instructions, see [Add artifacts during installation](#add-artifacts-during-installation), further below.|
+ - **Virtual machine name**: Keep the autogenerated name, or enter another unique VM name.
+ - **User name**: Keep the user name, or enter another user name to grant administrator privileges on the VM.
+ - **Use a saved secret**: Select this checkbox to use a secret from Azure Key Vault instead of a password to access the VM. If you select this option, under **Secret**, select the secret to use from the dropdown list. For more information, see [Store secrets in a key vault](devtest-lab-store-secrets-in-key-vault.md).
+ - **Password**: If you don't choose to use a secret, enter a VM password between 8 and 123 characters long.
+ - **Save as default password**: Select this checkbox to save the password in the Key Vault associated with the lab.
+ - **Virtual machine size**: Keep the default value for the base, or select **Change Size** to select different sizes.
+ - **OS disk type**: Keep the default value for the base, or select a different option from the dropdown list.
+ - **Artifacts**: This field shows the number of artifacts already configured for this VM base. Optionally, select **Add or Remove Artifacts** to select and configure artifacts to add to the VM.
:::image type="content" source="./media/devtest-lab-add-vm/portal-lab-vm-basic-settings.png" alt-text="Virtual machine basic settings page.":::
-1. Select the **Advanced Settings** tab and provide the following information:
-
- |Property |Description |
- |||
- |Virtual network| Leave as-is or select a different network from the drop-down list.|
- |Subnet&nbsp;Selector| Leave as-is or select a different subnet from the drop-down list.|
- |IP address| For this walk-through, leave the default value **Shared**.|
- |Expiration date| Leave as is for no expiration date, or select the calendar icon to set an expiration date.|
- |Make this machine claimable| To make the VM claimable by a lab user, select **Yes**. Marking the machine as claimable means that it won't be assigned ownership at the time of creation. This walk-through selects **Yes**.|
- |Number of instances| For this walk-through, enter **2**. The number of virtual machine instances to be created.|
- |Automation | Optional. Selecting **View ARM Template** will open the template in a new page. You can copy and save the template to create the same virtual machine later. Once saved, you can use the Azure Resource Manager template to [deploy new VMs with Azure PowerShell](../azure-resource-manager/templates/overview.md).|
-
- :::image type="content" source="./media/devtest-lab-add-vm/portal-lab-vm-advanced-settings.png" alt-text="Virtual machine advanced settings page.":::
-
-1. Return to the **Basic Settings** tab and then select **Create**.
-
-1. From the **DevTest Lab** page, under **My Lab**, select **Claimable virtual machines**.
+<a name="add-artifacts-during-installation"></a>
+## Add optional artifacts
- :::image type="content" source="./media/devtest-lab-add-vm/portal-lab-vm-creation-status.png" alt-text="Lab VM creation status page.":::
+Artifacts are tools, actions, or software you can add to lab VMs. You can add artifacts to VMs from the [DevTest Labs public artifact repository](https://github.com/Azure/azure-devtestlab/Artifacts), or from private artifact repositories connected to the lab. For more information, see [Add artifacts to DevTest Labs VMs](add-artifact-vm.md).
-1. After a few minutes, select **Refresh** if your virtual machines don't appear. Installation times will vary based on the selected hardware, base image, and artifact(s). The installation for the configurations used in this walk-through was approximately 25 minutes.
+To add or modify artifacts during VM creation:
-## Add artifacts during installation
+1. On the **Basic Settings** tab of the **Create lab resource** screen, select **Add or Remove Artifacts**.
-These steps are expanded instructions from the prior section. The steps begin after you've selected **Add or Remove Artifacts** from the **Basic Settings** tab. For more information on artifacts, see [Learn how to author your own artifacts for use with DevTest Labs](devtest-lab-artifact-author.md).
+1. On the **Add artifacts** page, select the arrow next to each artifact you want to add to the VM.
-1. From the **Add artifacts** page, identify an artifact, and then select **>** (greater-than symbol). Then select **OK**.
+1. On each **Add artifact** pane, enter any required and optional parameter values, and then select **OK**. The artifact appears under **Selected artifacts**, and the number of configured artifacts updates.
:::image type="content" source="./media/devtest-lab-add-vm/portal-add-artifact-during.png" alt-text="Add artifact to virtual machine.":::
-1. Select another artifact: **Install PowerShell Module**. This artifact requires additional information, specifically, the name of a PowerShell module. Enter **Az**, and then select **OK**.
+1. When you're done adding artifacts, select **OK** on the **Add artifacts** page.
-1. Continue adding artifacts as needed for your VM.
+## Configure optional advanced settings
-1. Select **...** (ellipsis) from one of your selected artifacts and note the various options, including the ability to change the install order.
+Optionally, select the **Advanced Settings** tab on the **Create lab resource** screen, and change any of the following values:
-1. When you're done adding artifacts, select **OK** to return to the **Basic Settings** tab.
+- **Virtual network**: Select a network from the dropdown list. For more information, see [Add a virtual network](devtest-lab-configure-vnet.md).
+- **Subnet**: If necessary, select a different subnet from the dropdown list.
+- **IP address**: Leave at **Shared**, or select **Public** or **Private**. For more information, see [Understand shared IP addresses](devtest-lab-shared-ip.md).
+- **Expiration date**: Leave at **Will not expire**, or [set an expiration date](devtest-lab-use-resource-manager-template.md#set-vm-expiration-date) and time for the VM.
+- **Make this machine claimable**: Leave at **No** to keep yourself as the owner of the VM. Select **Yes** to make the VM claimable by any lab user after creation. For more information, see [Create and manage claimable VMs](devtest-lab-add-claimable-vm.md).
+- **Number of instances**: To create more than one VM with this configuration, enter the number of VMs to create.
+- **View ARM Template**: Select to view and save the VM configuration as an Azure Resource Manager (ARM) template. You can use the ARM template to [deploy new VMs with Azure PowerShell](../azure-resource-manager/templates/overview.md).
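+
+As a rough illustration of what the exported template contains, the following is a trimmed-down sketch of a lab VM resource. The resource type `Microsoft.DevTestLab/labs/virtualmachines` is the one used for lab VMs, but the API version, names, image values, and parameter and variable references shown here are assumptions for illustration; treat the template you download from the portal as the source of truth.
+
+```json
+{
+  "type": "Microsoft.DevTestLab/labs/virtualmachines",
+  "apiVersion": "2018-09-15",
+  "name": "MyLab/MyLabVm",
+  "location": "[resourceGroup().location]",
+  "properties": {
+    "size": "Standard_B2s",
+    "userName": "labadmin",
+    "password": "[parameters('password')]",
+    "labVirtualNetworkId": "[variables('labVirtualNetworkId')]",
+    "labSubnetName": "[variables('labSubnetName')]",
+    "allowClaim": true,
+    "galleryImageReference": {
+      "offer": "Windows-11",
+      "publisher": "MicrosoftWindowsDesktop",
+      "sku": "win11-21h2-pro",
+      "osType": "Windows",
+      "version": "latest"
+    }
+  }
+}
+```
+
+You can deploy a template like this with the `New-AzResourceGroupDeployment` cmdlet in Azure PowerShell.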
-## Add artifacts after installation
-You can also add artifacts after the VM has been created.
+## Complete the VM deployment
-1. From the **DevTest Lab** page, under **My Lab**, select **All resources**. **All resources** will list both claimed and unclaimed VMs.
+After you configure all settings, on the **Basic Settings** tab of the **Create lab resource** screen, select **Create**.
- :::image type="content" source="./media/devtest-lab-add-vm/portal-lab-vm-all-resources.png" alt-text="Lab showing status of all resources.":::
+During VM deployment, you can select the **Notifications** icon at the top of the screen to see progress. Creating a VM takes a while.
-1. Select your VM once the **Status** shows **Available**.
+When the deployment is complete, if you kept yourself as VM owner, the VM appears under **My virtual machines** on the lab **Overview** page. To connect to the VM, select it from the list, and then select **Connect** on the VM's **Overview** page.
-1. From your **virtual machine** page, select **Start** to start the VM.
+Or, if you chose **Make this machine claimable** during VM creation, select **Claimable virtual machines** in the left navigation to see the VM listed on the **Claimable virtual machines** page. Select **Refresh** if your VM doesn't appear. To take ownership of a VM in the claimable list, see [Use a claimable VM](devtest-lab-add-claimable-vm.md#use-a-claimable-vm).
-1. A few moments after the page shows **Running**, then under **Operations**, select **Artifacts**.
-
- :::image type="content" source="./media/devtest-lab-add-vm/portal-lab-vm-overview.png" alt-text="Lab VM overview showing start button.":::
-
-1. Select **Apply artifacts** to open the **Add artifacts** page.
-
-1. From here, the steps are basically the same as from [Add artifacts during installation](#add-artifacts-during-installation), above.
+<a name="add-artifacts-after-installation"></a>
## Next steps
-* Once the VM has been created, you can connect to the VM by selecting **Connect** on the VM's pane.
-* Learn how to [create custom artifacts for your DevTest Labs VM](devtest-lab-artifact-author.md).
-* Explore the [DevTest Labs Azure Resource Manager QuickStart template gallery](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/QuickStartTemplates).
+- [Add artifacts to VMs after creation](add-artifact-vm.md#add-artifacts-to-vms-from-the-azure-portal).
+- Create DevTest Labs VMs by using [PowerShell](devtest-lab-vm-powershell.md), [Azure CLI](devtest-lab-vmcli.md), an [ARM template](devtest-lab-use-resource-manager-template.md), or from a [shared image gallery](add-vm-use-shared-image.md).
+- Explore the DevTest Labs public repositories of [artifacts](https://github.com/Azure/azure-devtestlab/Artifacts), [environments](https://github.com/Azure/azure-devtestlab/Environments), and [QuickStart ARM templates](https://github.com/Azure/azure-devtestlab/samples/DevTestLabs/QuickStartTemplates).
+
devtest-labs Devtest Lab Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-concepts.md
Title: Azure DevTest Labs concepts
-description: Learn the basic concepts of DevTest Labs, and how it can make it easy to create, manage, and monitor Azure virtual machines
+description: Learn definitions of some basic DevTest Labs concepts related to labs, virtual machines (VMs), and environments.
Previously updated : 10/29/2021 Last updated : 03/03/2022 # DevTest Labs concepts
-This article lists key DevTest Labs concepts and definitions:
+This article lists key [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab) concepts and definitions. DevTest Labs is a service for easily creating, using, and managing Azure VMs and other resources.
-## Lab
-A lab is the infrastructure that encompasses a group of resources, such as Virtual Machines (VMs), that lets you better manage those resources by specifying limits and quotas.
+## Labs
-## Virtual machine
-An Azure VM is one type of [on-demand, scalable computing resource](/azure/architecture/guide/technology-choices/compute-decision-tree) that Azure offers. Azure VMs give you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it.
+A lab is the infrastructure that encompasses a group of resources such as virtual machines (VMs). In a lab, you can:
-[Overview of Windows virtual machines in Azure](../virtual-machines/windows/overview.md) gives you information to consider before you create a VM, how you create it, and how you manage it.
+- Add and configure users.
+- Create ready-made VMs for lab users to claim and use.
+- Let users create and configure their own lab VMs and environments.
+- Connect artifact and template repositories to the lab.
+- Specify allowed VM limits, sizes, and configurations.
+- Set auto-shutdown and auto-startup policies.
+- Track and manage lab costs.
-## Claimable VM
-An Azure Claimable VM is a virtual machine available to any lab user with permissions. Lab admins can prepare VMs with specific base images and artifacts and then save them to a shared pool. Lab users can claim a VM from the pool when they need one with that specific configuration.
+### Policies
-A VM that is claimable isn't initially assigned to any particular user, but will show up in every user's list under "Claimable virtual machines". After a VM is claimed by a user, it's moved up to **My virtual machines** and is no longer claimable by any other user.
+Policies help control lab costs and reduce waste. For example, policies can automatically shut down lab VMs based on a defined schedule, or limit the number or sizes of VMs per user or lab. For more information, see [Manage lab policies to control costs](devtest-lab-set-lab-policy.md).
-## Environment
-In DevTest Labs, an environment refers to a collection of Azure resources in a lab. [Create an environment](./devtest-lab-create-environment-from-arm.md) discusses how to create multi-VM environments from your Azure Resource Manager templates.
+### Repositories
-## Base images
-Base images are VM images with all the tools and settings preinstalled and configured. You can create a VM by picking an existing base and adding an artifact to install your test agent. The use of base images reduces VM creation time.
+Lab users can use artifacts and templates from public and private Git repositories to create lab VMs and environments. The [DevTest Labs public GitHub repositories](https://github.com/Azure/azure-devtestlab) offer many ready-to-use artifacts and Azure Resource Manager (ARM) templates.
-## Artifacts
-Artifacts are used to deploy and configure your application after a VM is provisioned. Artifacts can be:
+Lab owners can also create custom artifacts and ARM templates, store them in private Git repositories, and connect the repositories to their labs. Lab users and automated processes can then use the templates and artifacts. You can add the same repositories to multiple labs in your organization, promoting consistency, reuse, and sharing.
-* Tools that you want to install on the VM - such as agents, Fiddler, and Visual Studio.
-* Actions that you want to run on the VM - such as cloning a repo.
-* Applications that you want to test.
+For more information, see [Add an artifact repository to a lab](add-artifact-repository.md) and [Add template repositories to labs](devtest-lab-use-resource-manager-template.md#add-template-repositories-to-labs).
-Artifacts are [Azure Resource Manager](../azure-resource-manager/management/overview.md) JSON files that contain instructions to deploy and apply configurations.
+### Roles
-## Artifact repositories
-Artifact repositories are git repositories where artifacts are checked in. Artifact repositories can be added to multiple labs in your organization enabling reuse and sharing.
+[Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview) defines DevTest Labs access and roles. DevTest Labs has three roles that define lab member permissions: Owner, Contributor, and DevTest Labs User.
-## Formulas
-Formulas provide a mechanism for fast VM provisioning. A formula in DevTest Labs is a list of default property values used to create a lab VM.
-With formulas, VMs with the same set of properties - such as base image, VM size, virtual network, and artifacts - can be created without needing to specify those
-properties each time. When creating a VM from a formula, the default values can be used as-is or modified.
+- Lab Owners can do all lab tasks, such as reading or writing to lab resources, managing users, setting policies and configurations, and adding repositories and base images.
+ - Because Azure subscription owners have access to all resources in a subscription, which include labs, virtual networks, and VMs, a subscription owner automatically inherits the lab Owner role.
+ - Lab Owners can also create custom DevTest Labs roles. For more information, see [Grant user permissions to specific lab policies](devtest-lab-grant-user-permissions-to-specific-lab-policies.md).
-## Policies
-Policies help in controlling cost in your lab. For example, you can create a policy to automatically shut down VMs based on a defined schedule.
+- Contributors can do everything that owners can, except manage users.
-## Caps
-Caps is a mechanism to minimize waste in your lab. For example, you can set a cap to restrict the number of VMs that can be created per user, or in a lab.
+- DevTest Labs Users can view all lab resources and policies, and create and modify their own VMs and environments.
+ - Users automatically have Owner permissions on their own VMs.
+ - Users can't modify lab policies, or change any VMs that other users own.
-## Security levels
-Security access is determined by Azure role-based access control (Azure RBAC). To understand how access works, it helps to understand the differences between a permission, a role, and a scope as defined by Azure RBAC.
+For more information about access and roles, see [Add lab owners, contributors, and users](devtest-lab-add-devtest-user.md).
-|Term | Description |
-|||
-|Permission|A defined access to a specific action (for example, read-access to all virtual machines).|
-|Role| A set of permissions that can be grouped and assigned to a user. For example, the *subscription owner* role has access to all resources within a subscription.|
-|Scope| A level within the hierarchy of an Azure resource, such as a resource group, a single lab, or the entire subscription.|
+## Virtual machines
+An Azure VM is one type of [on-demand, scalable computing resource](/azure/architecture/guide/technology-choices/compute-decision-tree) that Azure offers. Azure VMs give you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. For more information about VMs, see [Windows virtual machines in Azure](../virtual-machines/windows/overview.md).
-Within the scope of DevTest Labs, there are two types of roles to define user permissions: lab owner and lab user.
+### Artifacts
+Artifacts are tools, actions, or software you can add to lab VMs during or after VM creation. For example, artifacts can be:
-|Role | Description |
-|||
-|Lab&nbsp;Owner| Has access to any resources within the lab. A lab owner can modify policies, read and write any VMs, change the virtual network, and so on.|
-|Lab User | Can view all lab resources, such as VMs, policies, and virtual networks, but can't modify policies or any VMs created by other users.|
+- Tools to install on the VM, like agents, Fiddler, or Visual Studio.
+- Actions to take on the VM, such as cloning a repository or joining a domain.
+- Applications that you want to test.
-To see how to create custom roles in DevTest Labs, refer to the article [Grant user permissions to specific lab policies](devtest-lab-grant-user-permissions-to-specific-lab-policies.md).
+For more information, see [Add artifacts to DevTest Labs VMs](add-artifact-vm.md).
-Since scopes are hierarchical, when a user has permissions at a certain scope, they also have permissions at every lower-level scope. Subscription owners have access to all resources in a subscription, which include virtual machines, virtual networks, and labs. A subscription owner automatically inherits the role of lab owner. However, the opposite isn't true; a lab owner has access to a lab, which is a lower scope than the subscription level. So, a lab owner can't see virtual machines or virtual networks or any resources that are outside of the lab.
+Lab owners can specify mandatory artifacts to be installed on all lab VMs during VM creation. For more information, see [Specify mandatory artifacts for DevTest Labs VMs](devtest-lab-mandatory-artifacts.md).
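+
+For context, each artifact is defined by a JSON definition file plus any install scripts it references. The following is a minimal sketch of such a definition, assuming a hypothetical `install.ps1` script and a made-up `packageName` parameter; check the public artifact repository for the exact schema and real examples.
+
+```json
+{
+  "$schema": "https://raw.githubusercontent.com/Azure/azure-devtestlab/master/schemas/2016-11-28/dtlArtifacts.json",
+  "title": "Install sample tool",
+  "description": "Installs a sample tool on a Windows lab VM.",
+  "targetOsType": "Windows",
+  "parameters": {
+    "packageName": {
+      "type": "string",
+      "displayName": "Package name",
+      "description": "Name of the package to install."
+    }
+  },
+  "runCommand": {
+    "commandToExecute": "[concat('powershell.exe -ExecutionPolicy Bypass -File install.ps1 -PackageName ', parameters('packageName'))]"
+  }
+}
+```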
-## Azure Resource Manager templates
-The concepts discussed in this article can be configured by using Azure Resource Manager (ARM) templates. ARM templates let you define the infrastructure/configuration of your Azure solution and repeatedly deploy it in a consistent state.
+### Base images
-[Template format](../azure-resource-manager/templates/syntax.md#template-format) describes the structure of an Azure Resource Manager template and the properties that are available in the different sections of a template.
+A base image is a VM image that can have software and settings preinstalled and configured. Base images reduce VM creation time and complexity. Lab owners can choose which base images to make available in their labs. Lab users can create VMs by choosing from the available bases. For more information, see [Create and add virtual machines to a lab](devtest-lab-add-vm.md).
-## Next steps
+### Claimable VMs
+
+Lab owners or admins can prepare VMs with specific base images and artifacts, and save them to a shared pool. These claimable VMs appear in the lab's **Claimable virtual machines** list. Any lab user can claim a VM from the claimable pool when they need a VM with that configuration.
+
+After a lab user claims a VM, the VM moves to that user's **My virtual machines** list, and the user becomes the owner of the VM. The VM is no longer claimable or configurable by other users. For more information, see [Create and manage claimable VMs](devtest-lab-add-claimable-vm.md).
+
+### Custom images and formulas
+
+In DevTest Labs, custom images and formulas are mechanisms for fast VM creation and provisioning.
+
+- A custom image is a VM image created from an existing VM or virtual hard drive (VHD), which can have software and other artifacts installed. Lab users can create identical VMs from the custom image. For more information, see [Create a custom image from a VM](devtest-lab-create-custom-image-from-vm-using-portal.md).
+
+- A formula is a list of default property values for creating a lab VM, such as base image, VM size, virtual network, and artifacts. You can create VMs with the same properties without having to specify those properties each time. When you create a VM from a formula, you can use the default values as-is or modify them. For more information, see [Manage Azure DevTest Labs formulas](devtest-lab-manage-formulas.md).
+
+For more information about the differences between custom images and formulas, see [Compare custom images and formulas](devtest-lab-comparing-vm-base-image-types.md).
+
+## Environments
+
+In DevTest Labs, an environment is a collection of Azure platform-as-a-service (PaaS) resources, such as an Azure Web App or a SharePoint farm. You can create environments in labs by using ARM templates. For more information, see [Use ARM templates to create DevTest Labs environments](devtest-lab-create-environment-from-arm.md). For more information about ARM template structure and properties, see [Template format](../azure-resource-manager/templates/syntax.md#template-format).
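+
+For reference, every ARM template, including the ones used for DevTest Labs environments, shares the same top-level structure described in the template format article. A minimal, empty skeleton looks like the following; an environment template fills in these sections with its parameters and PaaS resources.
+
+```json
+{
+  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+  "contentVersion": "1.0.0.0",
+  "parameters": {},
+  "variables": {},
+  "resources": [],
+  "outputs": {}
+}
+```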
+
-[Create a lab in DevTest Labs](devtest-lab-create-lab.md)
devtest-labs Devtest Lab Create Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-lab.md
Title: 'Quickstart: Create a lab in Azure portal'
-description: In this quickstart, you create a lab using the Azure portal and Azure DevTest Labs.
+ Title: 'Quickstart: Create a lab in the Azure portal'
+description: Learn how to quickly create a lab in Azure DevTest Labs by using the Azure portal.
Previously updated : 11/04/2021 Last updated : 03/03/2022
-# Quickstart: Create a lab in Azure DevTest Labs in Azure portal
+# Quickstart: Create a lab in the Azure portal
-Get started with Azure DevTest Labs by using the Azure portal to create a lab. Azure DevTest Labs encompasses a group of resources, such as Azure virtual machines (VMs) and networks. This infrastructure lets you better manage those resources by specifying limits and quotas. This quickstart walks you through the process of creating a lab using the Azure portal.
+This quickstart walks you through creating a lab in Azure DevTest Labs by using the Azure portal. [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab) is a service for easily creating, using, and managing infrastructure-as-a-service (IaaS) virtual machines (VMs) and platform-as-a-service (PaaS) environments in a lab context.
-## Prerequisites
+## Prerequisite
-An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You must be the owner of the subscription to create the lab.
+- At least [Contributor](/azure/role-based-access-control/built-in-roles#contributor) access to an Azure subscription. If you don't have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Sign in to the Azure portal
+## Create a lab
-By selecting the following link, you'll be transferred to the Azure portal page that allows you to start creating a new lab in Azure DevTest Labs.
+1. In the [Azure portal](https://portal.azure.com), search for and select *devtest labs*.
+1. On the **DevTest Labs** page, select **Create**. The **Create DevTest Lab** page appears.
+1. On the **Basic Settings** tab, provide the following information:
+ - **Subscription**: Change the subscription if you want to use a different subscription for the lab.
+ - **Resource group**: Select an existing resource group from the dropdown list, or select **Create new** to create a new resource group so it's easy to delete later.
+ - **Lab Name**: Enter a name for the lab.
+ - **Location**: If you're creating a new resource group, select an Azure region for the resource group and lab.
+ - **Public environments**: Leave **On** for access to the [DevTest Labs public environment repository](https://github.com/Azure/azure-devtestlab/Environments). Set to **Off** to disable access. For more information, see [Enable public environments when you create a lab](devtest-lab-create-environment-from-arm.md#enable-public-environments-when-you-create-a-lab).
-[Get started with Azure DevTest Labs in minutes](https://go.microsoft.com/fwlink/?LinkID=627034&clcid=0x409)
+ :::image type="content" source="./media/devtest-lab-create-lab/portal-create-basic-settings.png" alt-text="Screenshot of the Basic Settings tab in the Create DevTest Labs form.":::
-## Create a DevTest Labs resource
-
-The **Create Devtest Lab** page contains five tabs. The first tab is **Basic Settings**.
+1. Optionally, select the [Auto-shutdown](#auto-shutdown-tab), [Networking](#networking-tab), or [Tags](#tags-tab) tabs at the top of the page, and customize those settings. You can also apply or change most of these settings after lab creation.
+1. After you complete all settings, select **Review + create** at the bottom of the page.
+1. If the settings are valid, **Succeeded** appears at the top of the **Review + create** page. Review the settings, and then select **Create**.
> [!TIP]
-> At the bottom of each page, you will find a link that allows you to **download a template for automation**.
-
-### Basic Settings tab
-
-Provide the following information:
-
-|Property | Description |
-|||
-|Subscription| From the drop-down list, select the Azure subscription to be used for the lab.|
-|Resource&nbsp;group| From the drop-down list, select your existing resource group, or select **Create new**.|
-|Lab name| Enter a unique name within your subscription for the lab.|
-|Location| From the drop-down list, select a location that's used for the lab.|
-|Public environments| Public environment repository contains a list of curated Azure Resource Manager templates that enable lab users to create PaaS resources within Labs. For more information, see [Configure and use public environments](devtest-lab-configure-use-public-environments.md).|
--
+> Select **Download a template for automation** at the bottom of the page to view and download the lab configuration as an Azure Resource Manager (ARM) template. You can use the ARM template to create more labs.
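+
+As a rough illustration, the downloaded template centers on a `Microsoft.DevTestLab/labs` resource. The following sketch shows only the general shape; the API version, tag, and property values are assumptions for illustration, so prefer the template you download from the portal.
+
+```json
+{
+  "type": "Microsoft.DevTestLab/labs",
+  "apiVersion": "2018-09-15",
+  "name": "[parameters('labName')]",
+  "location": "[resourceGroup().location]",
+  "tags": {
+    "environment": "test"
+  },
+  "properties": {
+    "labStorageType": "Premium"
+  }
+}
+```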
### Auto-shutdown tab
-Auto-shutdown allows you to automatically shut down all machines in a lab at a scheduled time each day. The auto-shutdown feature is mainly a cost-saving feature. To change the auto-shutdown settings after creating the lab, see [Manage all policies for a lab in Azure DevTest Labs](./devtest-lab-set-lab-policy.md#set-auto-shutdown).
+Auto-shutdown helps save lab costs by shutting down all lab VMs at a certain time of day. To configure auto-shutdown:
-Provide the following information:
+1. On the **Create DevTest Lab** page, select the **Auto-shutdown** tab.
+1. Fill out the following information:
+ - **Enabled**: Select **On** to enable auto shutdown.
+ - **Scheduled shutdown** and **Time zone**: Specify the daily time and time zone to shut down all lab VMs.
+ - **Send notification before auto-shutdown**: Select **Yes** or **No** for the option to post or send a notification 30 minutes before the auto-shutdown time.
+ - **Webhook URL** and **Email address**: If you choose to send notifications, enter a webhook URL endpoint or semicolon-separated list of email addresses where you want the notification to post or be sent. For more information, see [Configure auto shutdown for labs and VMs](devtest-lab-auto-shutdown.md).
-|Property | Description |
-|||
-|Enabled| Select **On** to enable this policy, or **Off** to disable it.|
-|Scheduled&nbsp;shutdown| Enter a time to shut down all VMs in the current lab.|
-|Time zone| Select a time zone from the drop-down list.|
-|Send notification before auto-shutdown? | Select **Yes** or **No** to send a notification 30 minutes before the specified auto-shutdown time. If you choose **Yes**, enter a webhook URL endpoint or email address specifying where you want the notification to be posted or sent. The user receives notification and is given the option to delay the shutdown.|
-|Webhook URL| A notification will be posted to the specified webhook endpoint when the auto-shutdown is about to happen.|
-|Email address| Enter a set of semicolon-delimited email addresses to receive alert notification emails.|
-
+ :::image type="content" source="./media/devtest-lab-create-lab/portal-create-auto-shutdown.png" alt-text="Screenshot of the Auto-shutdown tab in the Create DevTest Labs form.":::
### Networking tab
-A default network will be created for you (that can be changed/configured later), or an existing virtual network can be selected.
+Azure DevTest Labs creates a new default virtual network for each lab. If you have another virtual network, you can choose to use it for the new lab instead of the default. For more information, see [Add a virtual network in Azure DevTest Labs](devtest-lab-configure-vnet.md).
-Provide the following information:
+To configure networking:
-|Property | Description |
-|||
-|Virtual&nbsp;Network| Keep the default or select an existing one from the drop-down list. Virtual networks are logically isolated from each other in Azure. By default, virtual machines in the same virtual network can access each other.|
-|Subnet| Keep the default or select an existing one from the drop-down list. A subnet is a range of IP addresses in your virtual network, which can be used to isolate virtual machines from each other or from the Internet.|
+1. On the **Create DevTest Lab** page, select the **Networking** tab.
+1. For **Virtual Network**, select a different virtual network from the dropdown list. For **Subnet**, if necessary, select a subnet from the dropdown list.
+1. For **Isolate lab resources**, select **Yes** to completely isolate lab resources to the selected network. For more information, see [Network isolation in DevTest Labs](network-isolation.md).
### Tags tab
-Tags are useful to help you manage and organize lab resources by category. For more information, see [Add tags to a lab](devtest-lab-add-tag.md).
-
-Provide the following information:
-
-|Property | Description |
-|||
-|Name| Tag names are case-insensitive and are limited to 512 characters.|
-|Value| Tag values are case-sensitive and are limited to 256 characters.|
+You can assign tags that apply to all lab resources. Tags can help you manage and track resources. For more information, see [Add tags to a lab](devtest-lab-add-tag.md).
+To add tags:
-### Review + create tab
+1. On the **Create DevTest Lab** page, select the **Tags** tab.
+1. Under **Name** and **Value**, select or enter one or more case-sensitive name-value pairs to help identify resources.
-The **Review + create** tab validates all of your configurations. If all settings are valid, **Succeeded** will appear at the top. Review your settings and then select **Create**. You can monitor the status of the lab creation process by watching the **Notifications** area at the top-right of the portal page.
+## Verify lab creation
-## Post creation
+After you select **Create**, you can monitor the lab creation process in **Notifications** at top right in the portal.
-1. After the creation process finishes, from the deployment notification, select **Go to resource**.
- :::image type="content" source="./media/devtest-lab-create-lab/creation-notification.png" alt-text="Screenshot of DevTest Labs deployment notification.":::
+When the deployment finishes, select **Go to resource**. The lab's **Overview** page appears.
-1. The lab's **Overview** page looks similar to the following image:
- :::image type="content" source="./media/devtest-lab-create-lab/lab-home-page.png" alt-text="Screenshot of DevTest Labs overview page.":::
+You can now add and configure VMs, environments, users, and policies for the lab.
## Clean up resources
-Delete resources to avoid charges for running the lab on Azure. If you plan to go through the next article to add a VM to the lab, you can clean up the resources after you finish that article. Otherwise, follow these steps:
+When you're done using the lab, delete it and its resources to avoid further charges.
-1. Return to the home page for the lab you created.
-
-1. From the top menu, select **Delete**.
+1. On the lab **Overview** page, select **Delete** from the top menu.
:::image type="content" source="./media/devtest-lab-create-lab/portal-lab-delete.png" alt-text="Screenshot of lab delete button.":::
-1. On the **Are you sure you want to delete it** page, enter the lab name in the text box and then select **Delete**.
+1. On the **Are you sure you want to delete it** page, enter the lab name, and then select **Delete**.
+
+ During the deletion process, you can select **Notifications** at the top of your screen to view progress. Deleting a lab can take a while.
-1. During the deletion, you can select **Notifications** at the top of your screen to view progress. Deleting the lab takes a while. Continue to the next step once the lab is deleted.
+If you created a resource group for the lab, you can now delete the resource group. You can't delete a resource group that has a lab in it. Deleting the resource group that contained the lab deletes all resources in the resource group.
-1. If you created the lab in an existing resource group, then all of the lab resources have been removed. If you created a new resource group for this tutorial, it's now empty and can be deleted. It wouldn't have been possible to have deleted the resource group earlier while the lab was still in it.
+1. Select the resource group that contained the lab from your subscription's **Resource groups** list.
+1. At the top of the page, select **Delete resource group**.
+1. On the **Are you sure you want to delete "\<resource group name>"** page, enter the resource group name, and then select **Delete**.
## Next steps
-In this quickstart, you created a lab. To learn how to add a VM, advance to the next article:
+
+To learn how to add VMs to your lab, go on to the next article:
> [!div class="nextstepaction"] > [Create and add virtual machines to a lab in Azure DevTest Labs](devtest-lab-add-vm.md)
devtest-labs Devtest Lab Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-overview.md
Title: What is Azure DevTest Labs?
-description: Learn how DevTest Labs can make it easy to create, manage, and monitor Azure virtual machines
+description: Learn how DevTest Labs makes it easy to create, manage, and monitor Azure virtual machines and environments.
Previously updated : 10/20/2021 Last updated : 03/03/2022 # What is Azure DevTest Labs?
-Azure DevTest Labs is a service that enables developers to efficiently self-manage virtual machines (VMs) and Platform as a service (PaaS) resources without waiting for approvals. DevTest Labs creates labs consisting of pre-configured bases or Azure Resource Manager templates. These labs have all the necessary tools and software that you can use to create environments.
-By using DevTest Labs, you can test the latest versions of your applications by doing the following tasks:
+[Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab) is a service for easily creating, using, and managing infrastructure-as-a-service (IaaS) virtual machines (VMs) and platform-as-a-service (PaaS) environments in labs. Labs offer preconfigured bases and artifacts for creating VMs, and Azure Resource Manager (ARM) templates for creating environments like Azure Web Apps or SharePoint farms.
-- Quickly create Windows and Linux environments by using reusable templates and artifacts.-- Easily integrate your deployment pipeline with DevTest Labs to create on-demand environments.-- Scale up your load testing by creating multiple test agents and pre-prepared environments for training and demos.
+Lab owners can create preconfigured VMs that have tools and software lab users need. Lab users can claim preconfigured VMs, or create and configure their own VMs and environments. Lab policies and other methods track and control lab usage and costs.
-To learn more about the key concepts of DevTest Labs, see [DevTest Labs concepts](devtest-lab-concepts.md).
+### Common DevTest Labs scenarios
-## Cost control and governance
-DevTest Labs makes it easier to control costs by allowing you to do the following tasks:
+Common [DevTest Labs scenarios](devtest-lab-guidance-get-started.md) include development VMs, test environments, and classroom or training labs. DevTest Labs promotes efficiency, consistency, and cost control by keeping all resource usage within the lab context.
-- [Set policies on your labs](devtest-lab-set-lab-policy.md), such as number of VMs per user or per lab. -- Create [policies to automatically shut down](devtest-lab-set-lab-policy.md) and start VMs.-- Track costs on VMs and PaaS resources spun up inside labs to stay within [your budget](devtest-lab-configure-cost-management.md). Receive notice of high-projected costs for labs so you can take necessary actions.-- Stay within the context of your labs so you don't spin up resources outside of them.
+## Custom VM bases, artifacts, and templates
-## Quickly get to ready-to-test
-DevTest Labs lets you create pre-provisioned environments to develop and test applications. Just [claim the environment](devtest-lab-add-claimable-vm.md) of your application's last good build and start working. Or use containers for even faster, leaner environment creation.
+DevTest Labs can use custom images, formulas, artifacts, and templates to create and manage labs, VMs, and environments. The [DevTest Labs public GitHub repository](https://github.com/Azure/azure-devtestlab) has many ready-to-use VM artifacts and ARM templates for creating labs, environments, or sandbox resource groups. Lab owners can also create [custom images](devtest-lab-create-custom-image-from-vm-using-portal.md), [formulas](devtest-lab-manage-formulas.md), and ARM templates to use for creating and managing labs, [VMs](devtest-lab-use-resource-manager-template.md#view-edit-and-save-arm-templates-for-vms), and [environments](devtest-lab-create-environment-from-arm.md).
-## Create once, use everywhere
-Capture and share PaaS [environment templates](devtest-lab-create-environment-from-arm.md) and [artifacts](add-artifact-repository.md) within your team or organizationΓÇöall in source controlΓÇöto easily create developer and test environments.
+Lab owners can store artifacts and ARM templates in private Git repositories, and connect the [artifact repositories](add-artifact-repository.md) and [template repositories](devtest-lab-use-resource-manager-template.md#add-template-repositories-to-labs) to their labs so lab users can access them directly from the Azure portal. Add the same repositories to multiple labs in your organization to promote consistency, reuse, and sharing.
-## Worry-free self-service
-DevTest Labs enables your developers and testers to quickly and easily [create IaaS VMs](devtest-lab-add-vm.md) and [PaaS resources](devtest-lab-create-environment-from-arm.md) by using a set of pre-configured resources.
+## Development, test, and training scenarios
-## Use IaaS and PaaS resources
-Spin up resources, such as Azure Service Fabric clusters, or SharePoint farms, by using Resource Manager templates. The templates come from the [public environment repository](devtest-lab-configure-use-public-environments.md) or [connect the lab to your own Git repository](devtest-lab-create-environment-from-arm.md#configure-your-own-template-repositories). You can also spin up an empty resource group (sandbox) by using a Resource Manager template to explore Azure within the context of a lab.
+DevTest Labs users can quickly and easily create [IaaS VMs](devtest-lab-add-vm.md) and [PaaS environments](devtest-lab-create-environment-from-arm.md) from preconfigured bases, artifacts, and templates. Developers, testers, and trainers can:
-## Integrate with your existing toolchain
-Use pre-made plug-ins or the API to create development/testing environments directly from your preferred [continuous integration (CI) tool](devtest-lab-integrate-ci-cd.md), integrated development environment (IDE), or automated release pipeline. You can also use the comprehensive command-line tool.
+- Create Windows and Linux training and demo environments, or sandbox resource groups for exploring Azure, by using reusable ARM templates and artifacts.
+- Test app versions and scale up load testing by creating multiple test agents and environments.
+- Create development or testing environments from [continuous integration and deployment (CI/CD)](devtest-lab-integrate-ci-cd.md) tools, integrated development environments (IDEs), or automated release pipelines. Integrate deployment pipelines with DevTest Labs to create environments on demand.
+- Use the [Azure CLI](devtest-lab-vmcli.md) command-line tool to manage VMs and environments.
+
+## Lab policies and procedures to control costs
+
+Lab owners can take several measures to reduce waste and control lab costs.
+
+- [Set lab policies](devtest-lab-set-lab-policy.md) like allowed number or sizes of VMs per user or lab.
+- [Set auto-shutdown](devtest-lab-auto-shutdown.md) and [auto-startup](devtest-lab-auto-startup-vm.md) schedules to shut down and start up lab VMs at specific times of day.
+- [Monitor costs](devtest-lab-configure-cost-management.md) to track lab and resource usage and estimate trends.
+- [Set VM expiration dates](devtest-lab-use-resource-manager-template.md#set-vm-expiration-date), or [delete labs or lab VMs](devtest-lab-delete-lab-vm.md) when no longer needed.
## Next steps
-See the following articles:
-- To learn more about DevTest Labs, see [DevTest Labs concepts](devtest-lab-concepts.md).-- For a walkthrough with step-by-step instructions, see [Tutorial: Set up a lab by using Azure DevTest Labs](tutorial-create-custom-lab.md).
+- [DevTest Labs concepts](devtest-lab-concepts.md)
+- [Quickstart: Create a lab in Azure DevTest Labs](devtest-lab-create-lab.md)
+- [DevTest Labs FAQ](devtest-lab-faq.yml)
+
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
Previously updated : 01/31/2022 Last updated : 03/04/2022
As more applications move to the cloud, the performance of the network elements
This feature significantly increases the throughput of Azure Firewall Premium. For more details, see [Azure Firewall performance](firewall-performance.md).
-To enable the Azure Firewall Premium Performance boost feature, run the following commands in Azure PowerShell. Stop and start the firewall for the feature to take effect immediatately. Otherwise, the firewall/s is updated with the feature within several days.
+To enable the Azure Firewall Premium Performance boost feature, run the following commands in Azure PowerShell. Stop and start the firewall for the feature to take effect immediately. Otherwise, the firewalls are updated with the feature within several days.
-Currently, the performance boost feature isn't recommended for SecureHub Firewalls. Refer back to this article for the latest updates as we work to change this recommendation. Also, this setting has no effect on Standard Firewalls.
+The Premium performance boost feature can be enabled on both the [hub virtual network](../firewall-manager/vhubs-and-vnets.md) firewall and the [secured virtual hub](../firewall-manager/vhubs-and-vnets.md) firewall. This feature has no effect on Standard Firewalls.
Run the following Azure PowerShell commands to configure the Azure Firewall Premium performance boost:
frontdoor Rules Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/rules-match-conditions.md
Previously updated : 01/16/2022 Last updated : 03/03/2022 zone_pivot_groups: front-door-tiers
In this example, we match all requests that have been detected as coming from a
# [JSON](#tab/json) +
+```json
+{
+ "name": "IsDevice",
+ "parameters": {
+ "operator": "Equal",
+ "negateCondition": false,
+ "matchValues": [
+ "Mobile"
+ ],
+ "typeName": "DeliveryRuleIsDeviceConditionParameters"
+ }
+}
+```
+++ ```json { "name": "IsDevice",
In this example, we match all requests that have been detected as coming from a
} ``` + # [Bicep](#tab/bicep) +
+```bicep
+{
+ name: 'IsDevice'
+ parameters: {
+ operator: 'Equal'
+ negateCondition: false
+ matchValues: [
+ 'Mobile'
+ ]
+ typeName: 'DeliveryRuleIsDeviceConditionParameters'
+ }
+}
+```
+++ ```bicep { name: 'IsDevice'
In this example, we match all requests that have been detected as coming from a
} ``` + ::: zone pivot="front-door-standard-premium"
In this example, we match all requests that have been sent by using the HTTP 2.0
"matchValues": [ "2.0" ],
- "@odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleHttpVersionConditionParameters"
+ "typeName": "DeliveryRuleHttpVersionConditionParameters"
} } ```
In this example, we match all requests that have been sent by using the HTTP 2.0
matchValues: [ '2.0' ]
- '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleHttpVersionConditionParameters'
+ typeName: 'DeliveryRuleHttpVersionConditionParameters'
} } ```
In this example, we match all requests that have include a cookie named `deploym
"1" ], "transforms": [],
- "@odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleCookiesConditionParameters"
+ "typeName": "DeliveryRuleCookiesConditionParameters"
} } ```
In this example, we match all requests that have include a cookie named `deploym
matchValues: [ '1' ]
- '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleCookiesConditionParameters'
+ typeName: 'DeliveryRuleCookiesConditionParameters'
} } ```
In this example, we match all POST requests where a `customerName` argument is p
# [JSON](#tab/json) +
+```json
+{
+ "name": "PostArgs",
+ "parameters": {
+ "selector": "customerName",
+ "operator": "BeginsWith",
+ "negateCondition": false,
+ "matchValues": [
+ "J",
+ "K"
+ ],
+ "transforms": [
+ "Uppercase"
+ ],
+ "typeName": "DeliveryRulePostArgsConditionParameters"
+  }
+}
+```
+++ ```json { "name": "PostArgs",
In this example, we match all POST requests where a `customerName` argument is p
} ``` + # [Bicep](#tab/bicep) +
+```bicep
+{
+ name: 'PostArgs'
+ parameters: {
+ selector: 'customerName'
+ operator: 'BeginsWith'
+ negateCondition: false
+ matchValues: [
+ 'J'
+ 'K'
+ ]
+ transforms: [
+ 'Uppercase'
+ ]
+ typeName: 'DeliveryRulePostArgsConditionParameters'
+ }
+}
+```
+++ ```bicep { name: 'PostArgs'
In this example, we match all POST requests where a `customerName` argument is p
} ``` + ## Query string
In this example, we match all requests where the query string contains the strin
# [JSON](#tab/json) +
+```json
+{
+ "name": "QueryString",
+ "parameters": {
+ "operator": "Contains",
+ "negateCondition": false,
+ "matchValues": [
+ "language=en-US"
+ ],
+ "typeName": "DeliveryRuleQueryStringConditionParameters"
+ }
+}
+```
+++ ```json { "name": "QueryString",
In this example, we match all requests where the query string contains the strin
} ``` + # [Bicep](#tab/bicep) +
+```bicep
+{
+ name: 'QueryString'
+ parameters: {
+ operator: 'Contains'
+ negateCondition: false
+ matchValues: [
+ 'language=en-US'
+ ]
+ typeName: 'DeliveryRuleQueryStringConditionParameters'
+ }
+}
+```
+++ ```bicep { name: 'QueryString'
In this example, we match all requests where the query string contains the strin
} ``` + ## Remote address
In this example, we match all requests where the request has not originated from
# [JSON](#tab/json) +
+```json
+{
+ "name": "RemoteAddress",
+ "parameters": {
+ "operator": "GeoMatch",
+ "negateCondition": true,
+ "matchValues": [
+ "US"
+ ],
+ "typeName": "DeliveryRuleRemoteAddressConditionParameters"
+ }
+}
+```
+++ ```json { "name": "RemoteAddress",
In this example, we match all requests where the request has not originated from
} ``` + # [Bicep](#tab/bicep) +
+```bicep
+{
+ name: 'RemoteAddress'
+ parameters: {
+ operator: 'GeoMatch'
+ negateCondition: true
+ matchValues: [
+ 'US'
+ ]
+ typeName: 'DeliveryRuleRemoteAddressConditionParameters'
+ }
+}
+```
+++ ```bicep { name: 'RemoteAddress'
In this example, we match all requests where the request has not originated from
} ``` + ## Request body
In this example, we match all requests where the request body contains the strin
# [JSON](#tab/json) +
+```json
+{
+ "name": "RequestBody",
+ "parameters": {
+ "operator": "Contains",
+ "negateCondition": false,
+ "matchValues": [
+ "ERROR"
+ ],
+ "transforms": [
+ "Uppercase"
+ ],
+ "typeName": "DeliveryRuleRequestBodyConditionParameters"
+ }
+}
+```
+++ ```json { "name": "RequestBody",
In this example, we match all requests where the request body contains the strin
} ``` + # [Bicep](#tab/bicep) +
+```bicep
+{
+ name: 'RequestBody'
+ parameters: {
+ operator: 'Contains'
+ negateCondition: false
+ matchValues: [
+ 'ERROR'
+ ]
+ transforms: [
+ 'Uppercase'
+ ]
+ typeName: 'DeliveryRuleRequestBodyConditionParameters'
+ }
+}
+```
+++ ```bicep { name: 'RequestBody'
In this example, we match all requests where the request body contains the strin
} ``` + ## Request file name
In this example, we match all requests where the request file name is `media.mp4
# [JSON](#tab/json) +
+```json
+{
+ "name": "UrlFileName",
+ "parameters": {
+ "operator": "Equal",
+ "negateCondition": false,
+ "matchValues": [
+ "media.mp4"
+ ],
+ "transforms": [
+ "Lowercase"
+ ],
+ "typeName": "DeliveryRuleUrlFilenameConditionParameters"
+ }
+}
+```
+++ ```json { "name": "UrlFileName",
In this example, we match all requests where the request file name is `media.mp4
} ``` + # [Bicep](#tab/bicep) + ```bicep { name: 'UrlFileName'
In this example, we match all requests where the request file name is `media.mp4
transforms: [ 'Lowercase' ]
- '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleUrlFilenameConditionParameters'
+ typeName: 'DeliveryRuleUrlFilenameConditionParameters'
} } ``` -
-## Request file extension
+
+```bicep
+{
+ name: 'UrlFileName'
+ parameters: {
+ operator: 'Equal'
+ negateCondition: false
+ matchValues: [
+ 'media.mp4'
+ ]
+ transforms: [
+ 'Lowercase'
+ ]
+ '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleUrlFilenameConditionParameters'
+ }
+}
+```
++++
+## Request file extension
The **request file extension** match condition identifies requests that include the specified file extension in the file name in the request URL. You can specify multiple values to match, which will be combined using OR logic.
In this example, we match all requests where the request file extension is `pdf`
# [JSON](#tab/json) +
+```json
+{
+ "name": "UrlFileExtension",
+ "parameters": {
+ "operator": "Equal",
+ "negateCondition": false,
+ "matchValues": [
+ "pdf",
+ "docx"
+ ],
+ "transforms": [
+ "Lowercase"
+ ],
+ "typeName": "DeliveryRuleUrlFileExtensionMatchConditionParameters"
+  }
+}
+```
+++ ```json { "name": "UrlFileExtension",
In this example, we match all requests where the request file extension is `pdf`
} ``` + # [Bicep](#tab/bicep) +
+```bicep
+{
+ name: 'UrlFileExtension'
+ parameters: {
+ operator: 'Equal'
+ negateCondition: false
+ matchValues: [
+ 'pdf'
+ 'docx'
+ ]
+ transforms: [
+ 'Lowercase'
+ ]
+ typeName: 'DeliveryRuleUrlFileExtensionMatchConditionParameters'
+ }
+}
+```
+++ ```bicep { name: 'UrlFileExtension'
In this example, we match all requests where the request file extension is `pdf`
} ``` ++ ## Request header
In this example, we match all requests where the request contains a header named
# [JSON](#tab/json) +
+```json
+{
+ "name": "RequestHeader",
+ "parameters": {
+ "selector": "MyCustomHeader",
+ "operator": "Any",
+ "negateCondition": false,
+ "typeName": "DeliveryRuleRequestHeaderConditionParameters"
+ }
+}
+```
+++ ```json { "name": "RequestHeader",
In this example, we match all requests where the request contains a header named
} ``` + # [Bicep](#tab/bicep) +
+```bicep
+{
+ name: 'RequestHeader'
+ parameters: {
+    selector: 'MyCustomHeader'
+ operator: 'Any'
+ negateCondition: false
+ typeName: 'DeliveryRuleRequestHeaderConditionParameters'
+ }
+}
+```
+++ ```bicep { name: 'RequestHeader'
In this example, we match all requests where the request contains a header named
} ``` + ## Request method
In this example, we match all requests where the request uses the `DELETE` metho
# [JSON](#tab/json) +
+```json
+{
+ "name": "RequestMethod",
+ "parameters": {
+ "operator": "Equal",
+ "negateCondition": false,
+ "matchValues": [
+ "DELETE"
+ ],
+ "typeName": "DeliveryRuleRequestMethodConditionParameters"
+ }
+}
+```
+++ ```json { "name": "RequestMethod",
In this example, we match all requests where the request uses the `DELETE` metho
} ``` + # [Bicep](#tab/bicep) + ```bicep { name: 'RequestMethod'
In this example, we match all requests where the request uses the `DELETE` metho
operator: 'Equal' negateCondition: false matchValues: [
- 'DELETE
+ 'DELETE'
+ ]
+ typeName: 'DeliveryRuleRequestMethodConditionParameters'
+ }
+}
+```
+++
+```bicep
+{
+ name: 'RequestMethod'
+ parameters: {
+ operator: 'Equal'
+ negateCondition: false
+ matchValues: [
+ 'DELETE'
] '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleRequestMethodConditionParameters' } } ``` + ## Request path
In this example, we match all requests where the request file path begins with `
# [JSON](#tab/json) +
+```json
+{
+ "name": "UrlPath",
+ "parameters": {
+ "operator": "BeginsWith",
+ "negateCondition": false,
+ "matchValues": [
+ "files/secure/"
+ ],
+ "transforms": [
+ "Lowercase"
+ ],
+ "typeName": "DeliveryRuleUrlPathMatchConditionParameters"
+ }
+}
+```
+++ ```json { "name": "UrlPath",
In this example, we match all requests where the request file path begins with `
} ``` + # [Bicep](#tab/bicep) +
+```bicep
+{
+ name: 'UrlPath'
+ parameters: {
+ operator: 'BeginsWith'
+ negateCondition: false
+ matchValues: [
+ 'files/secure/'
+ ]
+ transforms: [
+ 'Lowercase'
+ ]
+ typeName: 'DeliveryRuleUrlPathMatchConditionParameters'
+ }
+}
+```
+++ ```bicep { name: 'UrlPath'
In this example, we match all requests where the request file path begins with `
} ``` + ## Request protocol
In this example, we match all requests where the request uses the `HTTP` protoco
# [JSON](#tab/json) +
+```json
+{
+ "name": "RequestScheme",
+ "parameters": {
+ "operator": "Equal",
+ "negateCondition": false,
+ "matchValues": [
+ "HTTP"
+ ],
+ "typeName": "DeliveryRuleRequestSchemeConditionParameters"
+ }
+}
+```
+++ ```json { "name": "RequestScheme",
In this example, we match all requests where the request uses the `HTTP` protoco
} ``` + # [Bicep](#tab/bicep) +
+```bicep
+{
+ name: 'RequestScheme'
+ parameters: {
+ operator: 'Equal'
+ negateCondition: false
+ matchValues: [
+ 'HTTP'
+ ]
+ typeName: 'DeliveryRuleRequestSchemeConditionParameters'
+ }
+}
+```
+++ ```bicep { name: 'RequestScheme'
In this example, we match all requests where the request uses the `HTTP` protoco
operator: 'Equal' negateCondition: false matchValues: [
- 'HTTP
+ 'HTTP'
] '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleRequestSchemeConditionParameters' } } ``` + ## Request URL
In this example, we match all requests where the request URL begins with `https:
# [JSON](#tab/json) +
+```json
+{
+ "name": "RequestUri",
+ "parameters": {
+ "operator": "BeginsWith",
+ "negateCondition": false,
+ "matchValues": [
+ "https://api.contoso.com/customers/123"
+ ],
+ "transforms": [
+ "Lowercase"
+ ],
+ "typeName": "DeliveryRuleRequestUriConditionParameters"
+ }
+}
+```
+++ ```json { "name": "RequestUri",
In this example, we match all requests where the request URL begins with `https:
} ``` + # [Bicep](#tab/bicep) +
+```bicep
+{
+ name: 'RequestUri'
+ parameters: {
+ operator: 'BeginsWith'
+ negateCondition: false
+ matchValues: [
+ 'https://api.contoso.com/customers/123'
+ ]
+ transforms: [
+ 'Lowercase'
+ ]
+ typeName: 'DeliveryRuleRequestUriConditionParameters'
+ }
+}
+```
+++ ```bicep { name: 'RequestUri'
In this example, we match all requests where the request URL begins with `https:
} ``` + ## Operator list
frontdoor Concept Rule Set Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-rule-set-actions.md
Previously updated : 03/31/2021 Last updated : 03/03/2022
In this example, we override the cache expiration to 6 hours, for matched reques
"cacheBehavior": "SetIfMissing", "cacheType": "All", "cacheDuration": "0.06:00:00",
- "@odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleCacheExpirationActionParameters"
+ "typeName": "DeliveryRuleCacheExpirationActionParameters"
} } ```
In this example, we override the cache expiration to 6 hours, for matched reques
cacheBehavior: 'SetIfMissing' cacheType: All cacheDuration: '0.06:00:00'
- '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleCacheExpirationActionParameters'
+ typeName: 'DeliveryRuleCacheExpirationActionParameters'
} } ```
In this example, we modify the cache key to include a query string parameter nam
"parameters": { "queryStringBehavior": "Include", "queryParameters": "customerId",
- "@odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleCacheKeyQueryStringBehaviorActionParameters"
+ "typeName": "DeliveryRuleCacheKeyQueryStringBehaviorActionParameters"
} } ```
In this example, we modify the cache key to include a query string parameter nam
parameters: { queryStringBehavior: 'Include' queryParameters: 'customerId'
- '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleCacheKeyQueryStringBehaviorActionParameters'
+ typeName: 'DeliveryRuleCacheKeyQueryStringBehaviorActionParameters'
} } ```
In this example, we append the value `AdditionalValue` to the `MyRequestHeader`
"headerAction": "Append", "headerName": "MyRequestHeader", "value": "AdditionalValue",
- "@odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleHeaderActionParameters"
+ "typeName": "DeliveryRuleHeaderActionParameters"
} } ```
In this example, we append the value `AdditionalValue` to the `MyRequestHeader`
headerAction: 'Append' headerName: 'MyRequestHeader' value: 'AdditionalValue'
- '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleHeaderActionParameters'
+ typeName: 'DeliveryRuleHeaderActionParameters'
} } ```
In this example, we delete the header with the name `X-Powered-By` from the resp
"parameters": { "headerAction": "Delete", "headerName": "X-Powered-By",
- "@odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleHeaderActionParameters"
+ "typeName": "DeliveryRuleHeaderActionParameters"
} } ```
In this example, we delete the header with the name `X-Powered-By` from the resp
parameters: { headerAction: 'Delete' headerName: 'X-Powered-By'
- '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleHeaderActionParameters'
+ typeName: 'DeliveryRuleHeaderActionParameters'
} } ```
In this example, we redirect the request to `https://contoso.com/exampleredirect
"customHostname": "contoso.com", "customPath": "/exampleredirection", "customQueryString": "clientIp={client_ip}",
- "@odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleUrlRedirectActionParameters"
+ "typeName": "DeliveryRuleUrlRedirectActionParameters"
} } ```
In this example, we redirect the request to `https://contoso.com/exampleredirect
customHostname: 'contoso.com' customPath: '/exampleredirection' customQueryString: 'clientIp={client_ip}'
- '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleUrlRedirectActionParameters'
+ typeName: 'DeliveryRuleUrlRedirectActionParameters'
} } ```
In this example, we rewrite all requests to the path `/redirection`, and don't p
"sourcePattern": "/", "destination": "/redirection", "preserveUnmatchedPath": false,
- "@odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleUrlRewriteActionParameters"
+ "typeName": "DeliveryRuleUrlRewriteActionParameters"
} } ```
In this example, we rewrite all requests to the path `/redirection`, and don't p
sourcePattern: '/' destination: '/redirection' preserveUnmatchedPath: false
- '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleUrlRewriteActionParameters'
+ typeName: 'DeliveryRuleUrlRewriteActionParameters'
} } ```
In this example, we route all matched requests to an origin group named `SecondO
"originGroup": { "id": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.Cdn/profiles/<profile-name>/originGroups/SecondOriginGroup" },
- "@odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleOriginGroupOverrideActionParameters"
+ "typeName": "DeliveryRuleOriginGroupOverrideActionParameters"
} } ```
In this example, we route all matched requests to an origin group named `SecondO
originGroup: { id: '/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.Cdn/profiles/<profile-name>/originGroups/SecondOriginGroup' }
- '@odata.type': '#Microsoft.Azure.Cdn.Models.DeliveryRuleOriginGroupOverrideActionParameters'
+ typeName: 'DeliveryRuleOriginGroupOverrideActionParameters'
} } ```
hdinsight Hdinsight Custom Ambari Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-custom-ambari-db.md
When you host your Apache Ambari DB in an external database, remember the follow
- You're responsible for the additional costs of the Azure SQL DB that holds Ambari. - Back up your custom Ambari DB periodically. Azure SQL Database generates backups automatically, but the backup retention time-frame varies. For more information, see [Learn about automatic SQL Database backups](../azure-sql/database/automated-backups-overview.md).
+- Don't change the custom Ambari DB password after the HDInsight cluster reaches the **Running** state. Changing the password at that point isn't supported.
## Deploy clusters with a custom Ambari DB
hdinsight Interactive Query Troubleshoot Hive Logs Diskspace Full Headnodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-hive-logs-diskspace-full-headnodes.md
Previously updated : 05/21/2021 Last updated : 03/04/2022 # Scenario: Apache Hive logs are filling up the disk space on the head nodes in Azure HDInsight
In a HDI 4.0 Apache Hive/LLAP cluster, unwanted logs are taking up the entire di
## Cause
-Automatic hive log deletion is not configured in the advanced hive-log4j2 configurations. The default size limit of 60GB takes too much space for the customer's usage pattern.
+Automatic Hive log deletion is not configured in the advanced hive-log4j2 configurations. The default size limit of 60 GB takes too much space for the customer's usage pattern. By default, the maximum amount of log data kept per day is given by the equation `MB logs/day = appender.RFA.strategy.max * 10MB`.
## Resolution 1. Go to the Hive component summary on the Ambari portal and select the **Configs** tab. 2. Go to the `Advanced hive-log4j2` section in **Advanced settings**.-
+ - Optionally, you can lower the value of `appender.RFA.strategy.max` to decrease the total megabytes of logs kept in a day.
3. Make sure you have these settings. If you don't see any related settings, append these settings: ``` # automatically delete hive log
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md
Title: FHIR Rest API capabilities for Azure API for FHIR
+ Title: FHIR REST API capabilities for Azure API for FHIR
description: This article describes the RESTful interactions and capabilities for Azure API for FHIR.
Last updated 01/05/2022
-# FHIR Rest API capabilities for Azure API for FHIR
+# FHIR REST API capabilities for Azure API for FHIR
In this article, we'll cover some of the nuances of the RESTful interactions of Azure API for FHIR.
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-rest-api-capabilities.md
Title: FHIR Rest API capabilities for Azure Healthcare APIs FHIR service
+ Title: FHIR REST API capabilities for Azure Healthcare APIs FHIR service
description: This article describes the RESTful interactions and capabilities for Azure Healthcare APIs FHIR service.
Last updated 01/03/2022
-# FHIR Rest API capabilities for Azure Healthcare APIs FHIR service
+# FHIR REST API capabilities for Azure Healthcare APIs FHIR service
In this article, we'll cover some of the nuances of the RESTful interactions of Azure Healthcare APIs FHIR service (hereby called the FHIR service).
healthcare-apis Get Started With Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started-with-iot.md
You can follow all the steps, or skip some if you have an existing environment.
## Create a workspace in your Azure subscription
-You can create a workspace from the [Azure portal](../healthcare-apis-quickstart.md) or using PowerShell, Azure CLI and Rest API]. You can find scripts from the [Healthcare APIs samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
+You can create a workspace from the [Azure portal](../healthcare-apis-quickstart.md) or by using PowerShell, the Azure CLI, or the REST API. You can find scripts in the [Healthcare APIs samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
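As a rough sketch of the REST route, a workspace can be created with a raw ARM call through `az rest`; the resource identifiers and the API version shown here are assumptions and should be checked against the current Microsoft.HealthcareApis reference:

```azurecli-interactive
# Sketch: create a Healthcare APIs workspace with a raw ARM call via az rest
# (subscription, resource group, workspace name, and api-version are placeholders/assumptions)
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/workspaces/<workspace-name>?api-version=2021-06-01-preview" \
  --body '{"location": "eastus2"}'
```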
> [!NOTE] > There are limits to the number of workspaces and the number of IoT connector instances you can create in each Azure subscription.
iot-central Howto Manage Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md
The response to this request looks like the following example:
} ```
+### List device groups
+
+Use the following request to retrieve a list of device groups from your application:
+
+```http
+GET https://{subdomain}.{baseDomain}/api/deviceGroups?api-version=1.1-preview
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "value": [
+ {
+ "id": "1dbb2610-04f5-47f8-81ca-ba38a24a6cf3",
+ "displayName": "Thermostat - All devices",
+ "organizations": [
+ "seattle"
+ ]
+ },
+ {
+ "id": "b37511ca-1beb-4781-ae09-c2d73c9104bf",
+ "displayName": "Cascade 500 - All devices",
+ "organizations": [
+ "redmond"
+ ]
+ },
+ {
+ "id": "788d08c6-2d11-4372-a994-71f63e108cef",
+ "displayName": "RS40 Occupancy Sensor - All devices"
+ }
+ ]
+}
+```
+
+The `organizations` field is only used when an application has an organization hierarchy defined. To learn more about organizations, see [Manage IoT Central organizations](howto-edit-device-template.md).
+ ### Use ODATA filters You can use ODATA filters to filter the results returned by the list devices API.
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md
In a typical manufacturing process for creating secure devices, root CA certific
* A customer buying a root CA and deriving a signing certificate for the manufacturer to sign the devices they make on that customer's behalf.
-In any case, the manufacturer uses an intermediate CA certificate at the end of this chain to sign the device CA certificate placed on the end device. Generally, these intermediate certificates are closely guarded at the manufacturing plant. They undergo strict processes, both physical and electronic for their usage.
+In any case, the manufacturer uses an intermediate CA certificate at the end of this chain to sign the edge CA certificate placed on the end device. Generally, these intermediate certificates are closely guarded at the manufacturing plant. They undergo strict processes, both physical and electronic, for their usage.
<!--1.1--> :::moniker range="iotedge-2018-06"
You can see the hierarchy of certificate depth represented in the screenshot:
|--|--| | Root CA Certificate | Azure IoT Hub CA Cert Test Only | | Intermediate CA Certificate | Azure IoT Hub Intermediate Cert Test Only |
-| Device CA Certificate | iotgateway.ca ("iotgateway" was passed in as the CA cert name to the convenience scripts) |
+| Edge CA Certificate | iotgateway.ca ("iotgateway" was passed in as the CA cert name to the convenience scripts) |
| IoT Edge Hub Server Certificate | iotedgegw.local (matches the 'hostname' from the config file) | :::moniker-end
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
> [!NOTE] > Key Vault resource provider supports two resource types: **vaults** and **managed HSMs**. Access control described in this article only applies to **vaults**. To learn more about access control for managed HSM, see [Managed HSM access control](../managed-hsm/access-control.md).
+> [!NOTE]
+> Azure App Service certificate configuration does not support the Key Vault RBAC permission model.
++ Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources. Azure RBAC allows users to manage Key, Secrets, and Certificates permissions. It provides one place to manage all permissions across all key vaults.
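For example, a minimal sketch of granting a principal read access to secrets in one vault is a role assignment scoped to that vault (all identifiers below are placeholders):

```azurecli-interactive
# Sketch: assign the built-in "Key Vault Secrets User" role at the vault scope
# (subscription, resource group, vault name, and principal ID are placeholders)
az role assignment create \
  --role "Key Vault Secrets User" \
  --assignee "<object-id-or-user-principal-name>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
```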
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Previously updated : 11/30/2021 Last updated : 03/04/2022 ms.devlang: azurecli
The hosts in the following tables are owned by Microsoft, and provide services r
| **Required for** | **Hosts** | **Protocol** | **Ports** | | -- | -- | -- | -- |
-| Microsoft Container Registry | mcr.microsoft.com | TCP | 443 |
+| Microsoft Container Registry | mcr.microsoft.com</br>\*.data.mcr.microsoft.com | TCP | 443 |
| Azure Machine Learning pre-built images | viennaglobal.azurecr.io | TCP | 443 | > [!TIP]
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data.md
Datastores currently support storing connection information to the storage servi
> [!TIP] > **For unsupported storage solutions**, and to save data egress cost during ML experiments, [move your data](#move) to a supported Azure storage solution.
-| Storage&nbsp;type | Authentication&nbsp;type | [Azure&nbsp;Machine&nbsp;Learning studio](https://ml.azure.com/) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; Python SDK](/python/api/overview/azure/ml/intro) | [Azure&nbsp;Machine&nbsp;Learning CLI](reference-azure-machine-learning-cli.md) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; Rest API](/rest/api/azureml/) | VS Code
+| Storage&nbsp;type | Authentication&nbsp;type | [Azure&nbsp;Machine&nbsp;Learning studio](https://ml.azure.com/) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; Python SDK](/python/api/overview/azure/ml/intro) | [Azure&nbsp;Machine&nbsp;Learning CLI](reference-azure-machine-learning-cli.md) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; REST API](/rest/api/azureml/) | VS Code
|||||| [Azure&nbsp;Blob&nbsp;Storage](../storage/blobs/storage-blobs-overview.md)| Account key <br> SAS token | ✓ | ✓ | ✓ |✓ |✓ [Azure&nbsp;File&nbsp;Share](../storage/files/storage-files-introduction.md)| Account key <br> SAS token | ✓ | ✓ | ✓ |✓|✓
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
Previously updated : 10/21/2021 Last updated : 02/28/2022
You can show your current defaults using `--list-defaults/-l`:
> [!TIP] > Combining with `--output/-o` allows for more readable output formats.
+## Secure communications
+
+The `ml` CLI extension (sometimes called 'CLI v2') for Azure Machine Learning sends operational data (YAML parameters and metadata) over the public internet. All the `ml` CLI extension commands communicate with the Azure Resource Manager. This communication is secured using HTTPS/TLS 1.2.
+
+> [!NOTE]
+> With the previous extension (`azure-cli-ml`, sometimes called 'CLI v1'), only some of the commands communicate with the Azure Resource Manager. Specifically, commands that create, update, delete, list, or show Azure resources. Operations such as submitting a training job communicate directly with the Azure Machine Learning workspace. If your workspace is [secured with a private endpoint](how-to-configure-private-link.md), that is enough to secure commands provided by the `azure-cli-ml` extension.
+
+> [!TIP]
+> Data stored in a data store that is secured in a virtual network is _not_ sent over the public internet. For example, training data stored in the workspace's default storage account isn't sent over the public internet when that storage account is in the virtual network.
+
+You can increase the security of CLI communications with Azure Resource Manager by using Azure Private Link. The following links provide information on using a Private Link for managing Azure resources:
+
+1. [Secure your Azure Machine Learning workspace inside a virtual network using a private endpoint](how-to-configure-private-link.md).
+2. [Create a Private Link for managing Azure resources](/azure/azure-resource-manager/management/create-private-link-access-portal).
+3. [Create a private endpoint](/azure/azure-resource-manager/management/create-private-link-access-portal#create-private-endpoint) for the Private Link created in the previous step.
+
+> [!IMPORTANT]
+> To configure the private link for Azure Resource Manager, you must be the _subscription owner_ for the Azure subscription, and an _owner_ or _contributor_ of the root management group. For more information, see [Create a private link for managing Azure resources](/azure/azure-resource-manager/management/create-private-link-access-portal).
+ ## Next steps - [Train models using CLI (v2)](how-to-train-cli.md)
managed-instance-apache-cassandra Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-cli.md
This quickstart demonstrates how to use the Azure CLI commands to create a clust
> [!NOTE] > The value of the `delegatedManagementSubnetId` variable you will supply below is exactly the same as the value of `--scope` that you supplied in the command above:
+ > [!NOTE]
+ > Cassandra 4.0 is in public preview and not recommended for production use cases.
++ ```azurecli-interactive resourceGroupName='<Resource_Group_Name>' clusterName='<Cluster_Name>' location='eastus2' delegatedManagementSubnetId='/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.Network/virtualNetworks/<VNet name>/subnets/<subnet name>' initialCassandraAdminPassword='myPassword'
+ cassandraVersion='3.11' # set to 4.0 for a Cassandra 4.0 cluster
az managed-cassandra cluster create \ --cluster-name $clusterName \
This quickstart demonstrates how to use the Azure CLI commands to create a clust
--location $location \ --delegated-management-subnet-id $delegatedManagementSubnetId \ --initial-cassandra-admin-password $initialCassandraAdminPassword \
+   --cassandra-version $cassandraVersion \
--debug ```
managed-instance-apache-cassandra Dba Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/dba-commands.md
+
+ Title: How to run DBA commands for Azure Managed Instance for Apache Cassandra
+description: Learn how to run DBA commands
+++ Last updated : 03/02/2022+++
+# DBA commands for Azure Managed Instance for Apache Cassandra
+
+Azure Managed Instance for Apache Cassandra provides automated deployment, scaling, and [management operations](management-operations.md) for open-source Apache Cassandra data centers. The automation in the service should be sufficient for many use cases. However, this article describes how to run DBA commands manually when the need arises.
+
+> [!IMPORTANT]
+> Nodetool commands are in public preview.
+> This feature is provided without a service level agreement, and it's not recommended for production workloads.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+<!-- ## DBA command support
+Azure Managed Instance for Apache Cassandra allows you to run `nodetool` and `sstable` commands via Azure CLI, for routine DBA administration. Not all commands are supported and there are some limitations. For supported commands, see the sections below. -->
+
+## DBA command support
+Azure Managed Instance for Apache Cassandra allows you to run `nodetool` commands via Azure CLI, for routine DBA administration. Not all commands are supported and there are some limitations. For supported commands, see the sections below.
+
+>[!WARNING]
+> Some of these commands can destabilize the Cassandra cluster and should only be run carefully and after being tested in non-production environments. Where possible, use a `--dry-run` option first. Microsoft cannot offer any SLA or support for issues caused by running commands that alter the default database configuration and/or tables.
+++
+## How to run a nodetool command
+Azure Managed Instance for Apache Cassandra provides the following Azure CLI command to run DBA commands:
+
+```azurecli-interactive
+ az managed-cassandra cluster invoke-command --resource-group <rg> --cluster-name <cluster> --host <ip of data node> --command-name nodetool --arguments "<nodetool-subcommand>"="" "parameter1"=""
+```
+
+The `nodetool` subcommand itself is passed in the `--arguments` section with an empty value. `Nodetool` flags without a value take the form `"<flag>"=""`. If the flag has a value, it takes the form `"<flag>"="value"`.
+
+Here's an example of how to run a `nodetool` command without flags, in this case the `nodetool status` command:
+
+```azurecli-interactive
+ az managed-cassandra cluster invoke-command --resource-group <rg> --cluster-name <cluster> --host <ip of data node> --command-name nodetool --arguments "status"=""
+```
+
+Here's an example of how to run a `nodetool` command with a flag, in this case the `nodetool compact` command:
+
+```azurecli-interactive
+ az managed-cassandra cluster invoke-command --resource-group <rg> --cluster-name <cluster> --host <ip of data node> --command-name nodetool --arguments "compact"="" "-st"="65678794"
+```
+
+Both commands return JSON of the following form:
+
+```json
+ {
+ "commandErrorOutput": "",
+ "commandOutput": "<result>",
+ "exitCode": 0
+ }
+```
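Because the result is wrapped in this JSON envelope, the raw `nodetool` output can be pulled out with the CLI's global `--query` and `--output` flags, for example:

```azurecli-interactive
# Print only the commandOutput field (the nodetool output itself) as plain text
az managed-cassandra cluster invoke-command --resource-group <rg> --cluster-name <cluster> --host <ip of data node> --command-name nodetool --arguments "status"="" --query commandOutput --output tsv
```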
+
+<!-- ## How to run an sstable command
+
+The `sstable` commands require read/write access to the cassandra data directory and the cassandra database to be stopped. To accomodate this, two additional parameters `--cassandra-stop-start true` and `--readwrite true` need to be given:
+
+```azurecli-interactive
+ az managed-cassandra cluster invoke-command --resource-group <test-rg> --cluster-name <test-cluster> --host <ip> --cassandra-stop-start true --readwrite true --command-name sstableutil --arguments "system"="peers"
+```
+
+```json
+ {
+ "commandErrorOutput": "",
+ "commandOutput": "Listing files...\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-CompressionInfo.db\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-Data.db\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-Digest.crc32\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-Filter.db\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-Index.db\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-Statistics.db\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-Summary.db\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-TOC.txt\n",
+ "exitCode": 0
+ }
+``` -->
+
+<!-- ## List of supported sstable commands
+
+For more information on each command, see https://cassandra.apache.org/doc/latest/cassandra/tools/sstable/index.html
+
+* `sstableverify`
+* `sstablescrub`
+* `sstablemetadata`
+* `sstablelevelreset`
+* `sstableutil`
+* `sstablesplit`
+* `sstablerepairedset`
+* `sstableofflinerelevel`
+* `sstableexpiredblockers` -->
+
+## List of supported nodetool commands
+
+For more information on each command, see https://cassandra.apache.org/doc/latest/cassandra/tools/nodetool/nodetool.html
+
+* `status`
+* `cleanup`
+* `clearsnapshot`
+* `compact`
+* `compactionhistory`
+* `compactionstats`
+* `describecluster`
+* `describering`
+* `disableautocompaction`
+* `disablehandoff`
+* `disablehintsfordc`
+* `drain`
+* `enableautocompaction`
+* `enablehandoff`
+* `enablehintsfordc`
+* `failuredetector`
+* `flush`
+* `garbagecollect`
+* `gcstats`
+* `getcompactionthreshold`
+* `getcompactionthroughput`
+* `getconcurrentcompactors`
+* `getendpoints`
+* `getinterdcstreamthroughput`
+* `getlogginglevels`
+* `getsstables`
+* `getstreamthroughput`
+* `gettimeout`
+* `gettraceprobability`
+* `gossipinfo`
+* `info`
+* `invalidatecountercache`
+* `invalidatekeycache`
+* `invalidaterowcache`
+* `listsnapshots`
+* `netstats`
+* `pausehandoff`
+* `proxyhistograms`
+* `rangekeysample`
+* `rebuild`
+* `rebuild_index` - for arguments use `"keyspace"="table indexname..."`
+* `refresh`
+* `refreshsizeestimates`
+* `reloadlocalschema`
+* `replaybatchlog`
+* `resetlocalschema`
+* `resumehandoff`
+* `ring`
+* `scrub`
+* `setcachecapacity` - for arguments use `"key-cache-capacity"="<row-cache-capacity> <counter-cache-capacity>"`
+* `setcachekeystosave` - for arguments use `"key-cache-keys-to-save"="<row-cache-keys-to-save> <counter-cache-keys-to-save>"`
+* `setcompactionthreshold` - for arguments use `"<keyspace>"="<table> <minthreshold> <maxthreshold>"`
+* `setcompactionthroughput`
+* `setconcurrentcompactors`
+* `sethintedhandoffthrottlekb`
+* `setinterdcstreamthroughput`
+* `setstreamthroughput`
+* `settimeout`
+* `settraceprobability`
+* `statusbackup`
+* `statusbinary`
+* `statusgossip`
+* `statushandoff`
+* `stop`
+* `tablehistograms`
+* `tablestats`
+* `toppartitions`
+* `tpstats`
+* `truncatehints`
+* `verify`
+* `version`
+* `viewbuildstatus`
+
+## Next steps
+
+* [Create a managed instance cluster from the Azure portal](create-cluster-portal.md)
+* [Manage Azure Managed Instance for Apache Cassandra resources using Azure CLI](manage-resources-cli.md)
+* [Management operations in Azure Managed Instance for Apache Cassandra](management-operations.md)
managed-instance-apache-cassandra Management Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/management-operations.md
Azure Managed Instance for Apache Cassandra provides an [SLA](https://azure.micr
> > In the event that we investigate a support case and discover that the root cause of the issue is at the Apache Cassandra configuration level (and not any underlying platform level aspects we maintain), the case may be closed. Where possible, we will also provide recommendations and guidance on remediation. We therefore recommend you [enable metrics](visualize-prometheus-grafana.md) and/or become familiar with our [Azure monitor integration](monitor-clusters.md ) in order to prevent common application/configuration level issues in Apache Cassandra, such as the above.
+>[!WARNING]
+> Azure Managed Instance for Apache Cassandra also lets you run `nodetool` and `sstable` commands for routine DBA administration - see [DBA commands for Azure Managed Instance for Apache Cassandra](dba-commands.md). Some of these commands can destabilize the Cassandra cluster and should only be run carefully and after being tested in non-production environments. Where possible, use a `--dry-run` option first. Microsoft cannot offer any SLA or support for issues caused by running commands that alter the default database configuration and/or tables.
+ ## Backup and restore Snapshot backups are enabled by default and taken every 4 hours with [Medusa](https://github.com/thelastpickle/cassandra-medusa). Backups are stored in an internal Azure Blob Storage account and are retained for up to 2 days (48 hours). There is no cost for backups. To restore from a backup, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
marketplace Gtm Your Marketplace Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/gtm-your-marketplace-benefits.md
Last updated 12/14/2021--++ # Your commercial marketplace benefits
marketplace Plan Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-saas-offer.md
For SaaS apps that run in your (the publisher's) Azure subscription, infrastru
SaaS app offers that are sold through Microsoft support monthly or annual billing based on a flat fee, per user, or consumption charges using the [metered billing service](./partner-center-portal/saas-metered-billing.md). The commercial marketplace operates on an agency model, whereby publishers set prices, Microsoft bills customers, and Microsoft pays revenue to publishers while withholding an agency fee.
-The following example shows a sample breakdown of costs and payouts to demonstrate the agency model. In this example, Microsoft bills $100.00 to the customer for your software license and pays out $80.00 to the publisher.
+The following example shows a sample breakdown of costs and payouts to demonstrate the agency model. In this example, Microsoft bills $100.00 to the customer for your software license and pays out $97.00 to the publisher.
| Your license cost | $100 per month | | | - |
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/what-is-new.md
Previously updated : 01/11/2022 Last updated : 02/22/2022 # What's new in the Microsoft commercial marketplace
Learn about important updates in the commercial marketplace program of Partner C
| Category | Description | Date | | | | |
-| Analytics | Added a [Revenue Dashboard](revenue-dashboard.md) to Partner Center, including a revenue report, [sample queries](analytics-sample-queries.md#revenue-report-queries), and [FAQs](/azure/marketplace/analytics-faq#revenue) page. | 2021-12-08 |
+| Offers | An ISV can now specify time-bound margins for CSP partners to incentivize them to sell the ISV's offer to their customers. When the partner makes a sale to a customer, Microsoft will pay the ISV the wholesale price. See [ISV to CSP Partner private offers](/azure/marketplace/isv-csp-reseller) and [the FAQs](/azure/marketplace/isv-csp-faq). | 2022-02-15 |
+| Analytics | We added a new [Customer Retention Dashboard](/azure/marketplace/customer-retention-dashboard) that provides vital insights into customer retention and engagement. See the [FAQ article](/azure/marketplace/analytics-faq). | 2022-02-15 |
+| Analytics | We added a Quality of Service (QoS) report query to the [List of system queries](/azure/marketplace/analytics-system-queries) used in the Create Report API. | 2022-01-27 |
+| Offers | Added a [Revenue Dashboard](revenue-dashboard.md) to Partner Center, including a revenue report, [sample queries](analytics-sample-queries.md#revenue-report-queries), and [FAQs](/azure/marketplace/analytics-faq#revenue) page. | 2021-12-08 |
| Offers | Container and container apps offers can now use the Microsoft [Standard Contract](standard-contract.md). | 2021-11-02 | | Offers | Private plans for [SaaS offers](plan-saas-offer.md) are now available on AppSource. | 2021-10-06 | | Offers | In [Set up an Azure Marketplace subscription for hosted test drives](test-drive-azure-subscription-setup.md), for **Set up for Dynamics 365 apps on Dataverse and Power Apps**, we added a new method to remove users from your Azure tenant. | 2021-10-01 |
Learn about important updates in the commercial marketplace program of Partner C
| Offers | While [private plans](private-plans.md) were previously only available on the Azure portal, they are now also available on Microsoft AppSource. | 2021-09-10 | | Analytics | Publishers of Azure application offers can view offer deployment health in the Quality of service (QoS) reports. QoS helps publishers understand the reasons for offer deployment failures and provides actionable insights for their remediation. For details, see [Quality of service (QoS) dashboard](quality-of-service-dashboard.md). | 2021-09-07 | | Policy | The SaaS customer [refund window](/marketplace/refund-policies) is now [72 hours](/azure/marketplace/marketplace-faq-publisher-guide) for all offers. | 2021-09-01 |
-| Offers | Additional properties at the plan level are now available for Azure Virtual Machine offers. See the [virtual machine technical configuration properties](azure-vm-plan-technical-configuration.md#properties) article for more information. | 2021-07-26 |
-| Fees | Microsoft has reduced its standard store service fee to 3%. See [Commercial marketplace transact capabilities](marketplace-commercial-transaction-capabilities-and-considerations.md#examples-of-pricing-and-store-fees) and Common questions about payouts and taxes, "[How do I find the current Store Service Fee and the payout rate?](/partner-center/payout-faq)". | 2021-07-14 |
| ## Tax updates | Category | Description | Date | | | | |
+| Taxation | - Kenya, Moldova, Tajikistan, and Uzbekistan were moved from the Publisher/Developer managed list to the [End-customer taxation with differences in marketplace](/partner-center/tax-details-marketplace) list to show the difference in treatment between the two Marketplaces. <br> - Rwanda and Qatar were added to the [Publisher/Developer managed countries](/partner-center/tax-details-marketplace) list. <br> - Barbados was moved from the [Publisher/Developer managed countries](/partner-center/tax-details-marketplace) list to [Microsoft Managed country](/partner-center/tax-details-marketplace) list. | 2022-02-10 |
| Payouts | We've updated the external tax form page, including instructions on how to reconcile 1099-k forms; see questions about tax forms at [Understand IRS tax forms issued by Microsoft](/partner-center/understand-irs-tax-forms). | 2022-01-06 | | Taxation | Nigeria and Thailand are now [Microsoft-managed countries](/partner-center/tax-details-marketplace) in Azure Marketplace. | 2021-09-13 |
-| Taxation | End-customer taxation in Australia is managed by Microsoft, except for customer purchases made through an enterprise agreement, which are managed by the publisher. | 2021-07-01 |
-| Taxation | Updated [tax details page](/partner-center/tax-details-marketplace) country list to include the following: <br><br> - Argentina <br> - Bulgaria <br> - Hong Kong SAR <br> - Korea (South) <br>- Pakistan <br> - Palestinian Authority <br> - Panama <br> - Paraguay <br> - Peru <br> - Philippines <br> - Saint Kitts and Nevis <br> - Senegal <br> - Sri Lanka <br> - Tajikistan <br> - Tanzania <br> - Thailand <br> - Trinidad and Tobago <br> - Tunisia <br> - Turkmenistan <br> - Uganda <br> - Uzbekistan <br> - Zimbabwe | 2021-07-01 |
-| Taxation | Nigeria moved from the "shared publisher/developer-managed countries" list to the "end-customer taxation with differences in Marketplaces". | 2021-07-01 |
| ## Documentation updates | Category | Description | Date | | | - | - |
-| Payouts | Updated the payment schedule on [Payout schedules and processes](/partner-center/payout-policy-details), including terminology and graphics. | 2022-01-19 |
+| Payouts | We updated the payment schedule for [Payout schedules and processes](/partner-center/payout-policy-details). | 2022-01-19 |
+| Analytics | Added questions and answers to the [Commercial marketplace analytics FAQ](/azure/marketplace/analytics-faq), such as enrolling in the commercial marketplace, where to create a marketplace offer, getting started with programmatic access to commercial marketplace analytics reports, and more. | 2022-01-07 |
| Offers | Added a new article, [Troubleshooting Private Plans in the commercial marketplace](azure-private-plan-troubleshooting.md). | 2021-12-13 | | Offers | We have updated the names of [Dynamics 365](./marketplace-dynamics-365.md#licensing-options) offer types: <br><br> - Dynamics 365 for Customer Engagement &amp; PowerApps is now **Dynamics 365 apps on Dataverse and Power Apps** <br> - Dynamics 365 for operations is now **Dynamics 365 Operations Apps** <br> - Dynamics 365 business central is now **Dynamics 365 Business Central** | 2021-12-03 | | Policy | We've created an [FAQ topic](/legal/marketplace/mpa-faq) to answer publisher questions about the Microsoft Publisher Agreement. | 2021-09-27 |
media-services Face Redaction Event Based Python Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/face-redaction-event-based-python-quickstart.md
The bash script below is used for configuring the Resources after they have been
- Create the Azure Media Services Transform using a REST API call. This transform will be called in the Azure Function. > [!NOTE]
-> Currently, neither the Azure Media Services v3 Python SDK, nor Azure CLI did support the creation of a FaceRedaction Transform. We therefore the Rest API method to create the transform job.
+> Currently, neither the Azure Media Services v3 Python SDK nor the Azure CLI supports the creation of a FaceRedaction transform. We therefore use the REST API method to create the transform job.
[!code-bash[Main](../../../media-services-v3-python/VideoAnalytics/FaceRedactorEventBased/AzureServicesProvisioning/configure_resources.azcli)]
media-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/release-notes.md
You also now have an account level feature flag to allow/block public internet a
### .NET SDK (Microsoft.Azure.Management.Media) 5.0.0 release available in NuGet
-The [Microsoft.Azure.Management.Media](https://www.nuget.org/packages/Microsoft.Azure.Management.Media/5.0.0) .NET SDK version 5.0.0 is now released on NuGet. This version is generated to work with the [2021-06-01 stable](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2021-06-01) version of the Open API (Swagger) ARM Rest API.
+The [Microsoft.Azure.Management.Media](https://www.nuget.org/packages/Microsoft.Azure.Management.Media/5.0.0) .NET SDK version 5.0.0 is now released on NuGet. This version is generated to work with the [2021-06-01 stable](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2021-06-01) version of the Open API (Swagger) ARM REST API.
For details on changes from the 4.0.0 release see the [change log](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/mediaservices/Microsoft.Azure.Management.Medi).
media-services Postman Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/previous/postman-collection.md
This article contains a definition of the **Postman** collection that contains g
}, { "name": "Functions",
- "description": "Rest API Functions\nhttps://msdn.microsoft.com/library/azure/jj683097.aspx\n",
+ "description": "REST API Functions\nhttps://msdn.microsoft.com/library/azure/jj683097.aspx\n",
"item": [ { "name": "CreateFileInfos Function",
migrate Add Server Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/add-server-credentials.md
Last updated 03/18/2021
Follow this article to learn how to add multiple server credentials on the appliance configuration manager to perform software inventory (discover installed applications), agentless dependency analysis and discover web apps, and SQL Server instances and databases.
-The [Azure Migrate appliance](migrate-appliance.md) is a lightweight appliance used by Azure Migrate: Discovery and assessment to discover on-premises servers running in VMware environment and send server configuration and performance metadata to Azure. The appliance can also be used to perform software inventory, agentless dependency analysis and discover of web app, and SQL Server instances and databases.
+The [Azure Migrate appliance](migrate-appliance.md) is a lightweight appliance used by Azure Migrate: Discovery and assessment to discover on-premises servers and send server configuration and performance metadata to Azure. The appliance can also be used to perform software inventory, agentless dependency analysis, and discovery of web apps, and SQL Server instances and databases.
-If you want to use these features, you can provide server credentials by following the steps below. The appliance will attempt to automatically map the credentials to the servers to perform the discovery features.
+> [!Note]
+> Currently the discovery of web apps and SQL Server instances and databases is only available in appliance used for discovery and assessment of servers running in VMware environment.
-## Add credentials for servers running in VMware environment
+If you want to use these features, you can provide server credentials by following the steps below. For servers running on vCenter Server(s) and Hyper-V host(s)/cluster(s), the appliance will attempt to automatically map the credentials to the servers to perform the discovery features.
+
+## Add server credentials
### Types of server credentials supported
The types of server credentials supported are listed in the table below:
Type of credentials | Description |
-**Domain credentials** | You can add **Domain credentials** by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> To provide domain credentials, you need to specify the **Domain name** which must be provided in the FQDN format (for example, prod.corp.contoso.com). <br/><br/> You also need to specify a friendly name for credentials, username, and password. <br/><br/> The domain credentials added will be automatically validated for authenticity against the Active Directory of the domain. This is to prevent any account lockouts when the appliance attempts to map the domain credentials against discovered servers. <br/><br/>For the appliance to validate the domain credentials with the domain controller, it should be able to resolve the domain name. Ensure that you have provided the correct domain name while adding the credentials else the validation will fail.<br/><br/> The appliance will not attempt to map the domain credentials that have failed validation. You need to have at least one successfully validated domain credential or at least one non-domain credential to proceed with software inventory.<br/><br/>The domain credentials mapped automatically against the Windows servers will be used to perform software inventory and can also be used to discover web apps, and SQL Server instances and databases _(if you have configured Windows authentication mode on your SQL Servers)_.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.
+**Domain credentials** | You can add **Domain credentials** by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> To provide domain credentials, you need to specify the **Domain name** which must be provided in the FQDN format (for example, prod.corp.contoso.com). <br/><br/> You also need to specify a friendly name for credentials, username, and password. <br/><br/> The domain credentials added will be automatically validated for authenticity against the Active Directory of the domain. This is to prevent any account lockouts when the appliance attempts to map the domain credentials against discovered servers. <br/><br/>For the appliance to validate the domain credentials with the domain controller, it should be able to resolve the domain name. Ensure that you have provided the correct domain name while adding the credentials else the validation will fail.<br/><br/> The appliance will not attempt to map the domain credentials that have failed validation. You need to have at least one successfully validated domain credential or at least one non-domain credential to start the discovery.<br/><br/>The domain credentials mapped automatically against the Windows servers will be used to perform software inventory and can also be used to discover web apps, and SQL Server instances and databases _(if you have configured Windows authentication mode on your SQL Servers)_.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.
**Non-domain credentials (Windows/Linux)** | You can add **Windows (Non-domain)** or **Linux (Non-domain)** by selecting the required option from the drop-down in the **Add credentials** modal. <br/><br/> You need to specify a friendly name for credentials, username, and password. **SQL Server Authentication credentials** | You can add **SQL Server Authentication** credentials by selecting the option from the drop-down in the **Add credentials** modal. <br/><br/> You need to specify a friendly name for credentials, username, and password. <br/><br/> You can add this type of credentials to discover SQL Server instances and databases running in your VMware environment, if you have configured SQL Server authentication mode on your SQL Servers.<br/> [Learn more](/dotnet/framework/data/adonet/sql/authentication-in-sql-server) about the types of authentication modes supported on SQL Servers.<br/><br/> You need to provide at least one successfully validated domain credential or at least one Windows (Non-domain) credential so that the appliance can complete the software inventory to discover SQL installed on the servers before it uses the SQL Server authentication credentials to discover the SQL Server instances and databases.
+> [!Note]
+> Currently, SQL Server authentication credentials can only be provided in the appliance used for discovery and assessment of servers running in a VMware environment.
+ Check the permissions required on the Windows/Linux credentials to perform software inventory, agentless dependency analysis, and discovery of web apps, SQL Server instances, and databases.

### Required permissions
Feature | Windows credentials | Linux credentials
## Next steps
-Review the tutorials for [discovery of servers running in your VMware environment](tutorial-discover-vmware.md)
+Review the tutorials for discovery of servers running in your [VMware environment](tutorial-discover-vmware.md) or [Hyper-V environment](tutorial-discover-hyper-v.md), or for [discovery of physical servers](tutorial-discover-physical.md).
migrate Common Questions Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-appliance.md
The appliance needs access to Azure URLs. [Review](migrate-appliance.md#url-acce
See the following articles for information about data that the Azure Migrate appliance collects on servers:
-- **Servers in VMware environment**: [Review](migrate-appliance.md#collected-datavmware) collected data.
-- **Servers in Hyper-V environment**: [Review](migrate-appliance.md#collected-datahyper-v) collected data.
-- **Physical or virtual servers**: [Review](migrate-appliance.md#collected-dataphysical) collected data.
+- **Servers in VMware environment**: [Review](discovered-metadata.md#collected-metadata-for-vmware-servers) collected data.
+- **Servers in Hyper-V environment**: [Review](discovered-metadata.md#collected-metadata-for-hyper-v-servers) collected data.
+- **Physical or virtual servers**: [Review](discovered-metadata.md#collected-data-for-physical-servers) collected data.
## How is data stored?
migrate Concepts Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-assessment-calculation.md
If you're assessing servers by using a CSV file, you don't need an appliance. In
## What data does the appliance collect?
-If you're using the Azure Migrate appliance for assessment, learn about the metadata and performance data that's collected for [VMware](migrate-appliance.md#collected-datavmware) and [Hyper-V](migrate-appliance.md#collected-datahyper-v).
+If you're using the Azure Migrate appliance for assessment, learn about the metadata and performance data that's collected for [VMware](discovered-metadata.md#collected-metadata-for-vmware-servers) and [Hyper-V](discovered-metadata.md#collected-metadata-for-hyper-v-servers).
## How does the appliance calculate performance data?
migrate Concepts Azure Vmware Solution Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-vmware-solution-assessment-calculation.md
If you're assessing servers by using a CSV file, you don't need an appliance. In
## What data does the appliance collect?
-If you're using the Azure Migrate appliance for assessment, learn about the metadata and performance data that's collected for [VMware](migrate-appliance.md#collected-datavmware).
+If you're using the Azure Migrate appliance for assessment, learn about the metadata and performance data that's collected for [VMware](discovered-metadata.md#collected-metadata-for-vmware-servers).
## How does the appliance calculate performance data?
Here's what's included in an AVS assessment:
| **Reserved Instances (RIs)** | This property helps you specify Reserved Instances in AVS if purchased and the term of the Reserved Instance. Your cost estimates will take the option chosen into account. [Learn more](../azure-vmware/reserved-instance.md) <br/><br/> If you select reserved instances, you can't specify "Discount (%)". |
| **Node type** | Specifies the [AVS Node type](../azure-vmware/concepts-private-clouds-clusters.md) to be used in Azure. The default node type is AV36. More node types might be available in future. Azure Migrate will recommend a required number of nodes for the VMs to be migrated to AVS. |
| **FTT Setting, RAID Level** | Specifies the valid combination of Failures to Tolerate and RAID combinations. The selected FTT option combined with the RAID level and the on-premises VM disk requirement will determine the total vSAN storage required in AVS. Total available storage after calculations also includes a) space reserved for management objects such as vCenter and b) 25% storage slack required for vSAN operations. |
-| **Sizing criterion** | Sets the criteria to be used to determine memory, cpu and storage requirements for AVS nodes. You can opt for*performance-based* sizing or *as on-premises* without considering the performance history. To simply lift and shift choose as on-premises. To obtain usage based sizing choose performance based. |
+| **Sizing criterion** | Sets the criteria to be used to determine memory, CPU, and storage requirements for AVS nodes. You can opt for *performance-based* sizing or *as on-premises* sizing without considering the performance history. To simply lift and shift, choose *as on-premises*. To obtain usage-based sizing, choose *performance-based*. |
| **Performance history** | Sets the duration to consider in evaluating the performance data of servers. This property is applicable only when the sizing criteria is *performance-based*. |
| **Percentile utilization** | Specifies the percentile value of the performance sample set to be considered for right-sizing. This property is applicable only when the sizing is performance-based. |
| **Comfort factor** | Azure Migrate considers a buffer (comfort factor) during assessment. This buffer is applied on top of server utilization data for VMs (CPU, memory, and disk). The comfort factor accounts for issues such as seasonal usage, short performance history, and likely increases in future usage. For example, a 10-core VM with 20% utilization normally results in a 2-core VM. However, with a comfort factor of 2.0x, the result is a 4-core VM instead. |
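
As a rough illustration of the comfort-factor logic described in the table above, here is a minimal sketch. The function name and inputs are hypothetical, and this is not the exact sizing logic Azure Migrate uses.

```python
# Minimal sketch of the comfort-factor arithmetic (illustrative only, not Azure Migrate's code).
def right_sized_cores(allocated_cores: int, utilization: float, comfort_factor: float) -> float:
    """Effective cores = allocated cores x observed utilization x comfort factor."""
    return allocated_cores * utilization * comfort_factor

# A 10-core VM at 20% utilization normally right-sizes to 2 cores;
# a 2.0x comfort factor doubles that to 4 cores.
print(right_sized_cores(10, 0.20, 1.0))  # 2.0
print(right_sized_cores(10, 0.20, 2.0))  # 4.0
```
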
Here's what's included in an AVS assessment:
| **Currency** | Shows the billing currency for your account. |
| **Discount (%)** | Lists any subscription-specific discount you receive on top of the Azure offer. The default setting is 0%. |
| **Azure Hybrid Benefit** | Specifies whether you have software assurance and are eligible for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/). Although it has no impact on Azure VMware Solution pricing due to the node-based price, customers can still apply the on-premises OS or SQL licenses (Microsoft based) in AVS using Azure Hybrid Benefit. Other software OS vendors will have to provide their own licensing terms, such as RHEL for example. |
-| **vCPU Oversubscription** | Specifies the ratio of number of virtual cores tied to one physical core in the AVS node. The default value in the calculations is 4 vCPU:1 physical core in AVS. API users can set this value as an integer. Note that vCPU Oversubscription > 4:1 may impact workloads depending on their CPU usage. When sizing we always assume 100% utilization of the cores chosen. |
-| **Memory overcommit factor** | Specifies the ratio of memory over commit on the cluster. A value of 1 represents 100% memory use, 0.5 for example is 50%, and 2 would be using 200% of available memory. You can only add values from 0.5 to 10 up to one decimal place. |
+| **vCPU Oversubscription** | Specifies the ratio of number of virtual cores tied to one physical core in the AVS node. The default value in the calculations is 4 vCPU:1 physical core in AVS. API users can set this value as an integer. Note that vCPU Oversubscription > 4:1 may impact workloads depending on their CPU usage. When sizing, we always assume 100% utilization of the cores chosen. |
+| **Memory overcommit factor** | Specifies the ratio of memory overcommit on the cluster. A value of 1 represents 100% memory use, 0.5, for example, is 50%, and 2 would be using 200% of available memory. You can only add values from 0.5 to 10, up to one decimal place. |
| **Dedupe and compression factor** | Specifies the anticipated dedupe and compression factor for your workloads. The actual value can be obtained from your on-premises vSAN or storage configuration; it varies by workload. A value of 3 would mean 3x, so for a 300 GB disk only 100 GB of storage would be used. A value of 1 would mean no dedupe or compression. You can only add values from 1 to 10, up to one decimal place. |

## Azure VMware Solution (AVS) suitability analysis
-AVS assessments assess each on-premises VMs for its suitability for AVS by reviewing the server properties. It also assigns each assessed server to one of the following suitability categories:
+An AVS assessment evaluates each on-premises VM for its suitability for AVS by reviewing the server properties. It also assigns each assessed server to one of the following suitability categories:
- **Ready for AVS**: The server can be migrated as-is to Azure (AVS) without any changes. It will start in AVS with full AVS support.
- **Ready with conditions**: There might be some compatibility issues, for example, an internet protocol or a deprecated OS in VMware, that need to be remediated before migrating to Azure VMware Solution. To fix any readiness problems, follow the remediation guidance the assessment suggests.
After a server is marked as ready for AVS, AVS Assessment makes node sizing reco
- If the assessment uses *performance-based sizing*, Azure Migrate considers the performance history of the server to make the appropriate sizing recommendation for AVS. This method is especially helpful if you've over-allocated the on-premises VM, but utilization is low and you want to right-size the VM in AVS to save costs. This method will help you optimize the sizes during migration.

> [!NOTE]
->If you import serves by using a CSV file, the performance values you specify (CPU utilization, Memory utilization, Storage in use, Disk IOPS and throughput) are used if you choose performance-based sizing. You will not be able to provide performance history and percentile information.
+>If you import servers by using a CSV file, the performance values you specify (CPU utilization, memory utilization, storage in use, disk IOPS, and throughput) are used if you choose performance-based sizing. You will not be able to provide performance history and percentile information.
-- If you don't want to consider the performance data for VM sizing and want to take the on-premises servers as-is to AVS, you can set the sizing criteria to* as on-premises*. Then, the assessment will size the VMs based on the on-premises configuration without considering the utilization data.
+- If you don't want to consider the performance data for VM sizing and want to take the on-premises servers as-is to AVS, you can set the sizing criteria to *as on-premises*. Then, the assessment will size the VMs based on the on-premises configuration without considering the utilization data.
### FTT Sizing Parameters
For performance-based sizing, Azure Migrate appliance profiles the on-premises e
After the effective utilization value is determined, the storage, network, and compute sizing is handled as follows.
-**Storage sizing**: Azure Migrate uses the total on-premises VM disk space as a calculation parameter to determine AVS vSAN storage requirements in addition to the customer-selected FTT setting. FTT - Failures to tolerate as well as requiring a minimum no of nodes per FTT option will determine the total vSAN storage required combined with the VM disk requirement. If you import serves by using a CSV file, storage utilization is taken into consideration when you create a performance based assessment. If you create an as-on-premises assessment, the logic only looks at allocated storage per VM.
+**Storage sizing**: Azure Migrate uses the total on-premises VM disk space as a calculation parameter to determine AVS vSAN storage requirements, in addition to the customer-selected FTT setting. The FTT (Failures to Tolerate) option, along with the minimum number of nodes required per FTT option, determines the total vSAN storage required, combined with the VM disk requirement. If you import servers by using a CSV file, storage utilization is taken into consideration when you create a performance-based assessment. If you create an as-on-premises assessment, the logic only looks at allocated storage per VM.
**Network sizing**: Azure VMware Solution assessments currently do not take any network settings into consideration for node sizing. While migrating to Azure VMware Solution, minimums and maximums as per VMware NSX-T standards are used.
If you use *as on-premises sizing*, AVS assessment doesn't consider the performa
### CPU utilization on AVS nodes
-CPU utilization assumes 100% usage of the available cores. To reduce the no of nodes required one can increase the oversubscription from 4:1 to say 6:1 based on workload characteristics and on-premises experience. Unlike for disk, AVS does not place any limits on CPU utilization, it's up to customers to ensure their cluster performs optimally so if "running hot" is required adjust accordingly. To allow more room for growth, reduce the oversubscription or increase the value for growth factor.
+CPU utilization assumes 100% usage of the available cores. To reduce the number of nodes required, you can increase the oversubscription from 4:1 to, say, 6:1, based on workload characteristics and on-premises experience. Unlike for disk, AVS does not place any limits on CPU utilization. It's up to customers to ensure their cluster performs optimally, so if "running hot" is required, adjust accordingly. To allow more room for growth, reduce the oversubscription or increase the value for growth factor.
CPU utilization also already accounts for management overhead from vCenter, NSX manager and other smaller resources.
Memory utilization also already accounts for management overhead from vCenter, N
Storage utilization is calculated based on the following sequence:
-1. Size required for VM's (either allocated as is or performance based used space)
+1. Size required for VMs (either allocated as is or performance based used space)
2. Apply growth factor, if any
3. Add management overhead and apply FTT ratio
4. Apply dedupe and compression factor
5. Apply required 25% slack for vSAN
6. Result: available storage for VMs out of total storage, including management overhead
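
To make the sequence above concrete, the following sketch applies each step to a hypothetical disk footprint. The factor values (growth factor, FTT/RAID multiplier, dedupe ratio) are example inputs only, not the exact constants used by AVS assessments.

```python
# Illustrative walk-through of the vSAN storage sizing sequence (not the exact Azure Migrate logic).
def required_vsan_storage_gb(vm_used_gb: float,
                             growth_factor: float = 1.0,       # e.g. 1.2 for 20% growth
                             management_overhead_gb: float = 0.0,
                             ftt_raid_multiplier: float = 2.0,  # FTT=1 with RAID-1 mirrors data 2x
                             dedupe_compression: float = 1.0,   # 3.0 means a 3x reduction
                             vsan_slack: float = 0.25) -> float:
    size = vm_used_gb * growth_factor            # step 2: apply growth factor
    size = size + management_overhead_gb         # step 3: add management overhead...
    size = size * ftt_raid_multiplier            # ...and apply the FTT/RAID ratio
    size = size / dedupe_compression             # step 4: apply dedupe and compression
    size = size / (1 - vsan_slack)               # step 5: keep 25% slack free for vSAN
    return size

# 300 GB of VM data, FTT=1/RAID-1, 3x dedupe -> roughly 266.7 GB of raw vSAN capacity needed.
print(round(required_vsan_storage_gb(300, dedupe_compression=3.0), 1))
```
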
-The available storage on a 3 node cluster will be based on the default storage policy which is Raid-1 and uses thick provisioning. When calculating for erasure coding or Raid-5 for example, a minimum of 4 nodes is required. Note that in Azure VMware Solution, the storage policy for customer workload can be changed by the admin or Run Command(Currently in Preview). [Learn more] (./azure-vmware/configure-storage-policy.md)
+The available storage on a 3-node cluster will be based on the default storage policy, which is RAID-1 and uses thick provisioning. When calculating for erasure coding or RAID-5, for example, a minimum of 4 nodes is required. Note that in Azure VMware Solution, the storage policy for customer workloads can be changed by the admin or by Run Command (currently in preview). [Learn more](../azure-vmware/configure-storage-policy.md)
### Limiting factor
-The limiting factor shown in assessments could be CPU or memory or storage resources based on the utilization on nodes. It is the resource which is limiting or determining the number of hosts/nodes required to accommodate the resources. For example, in an assessment if it was found that after migrating 8 VMware VMs to Azure VMware Solution, 50% of CPU resources will be utilized, 14% of memory is utilized and 18% of storage will be utilized on the 3 Av36 nodes and thus CPU is the limiting factor.
+The limiting factor shown in assessments could be CPU, memory, or storage resources, based on the utilization on nodes. It is the resource that limits or determines the number of hosts/nodes required to accommodate the workload. For example, if an assessment finds that after migrating 8 VMware VMs to Azure VMware Solution, 50% of CPU resources, 14% of memory, and 18% of storage will be utilized on the 3 AV36 nodes, then CPU is the limiting factor.
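
A simplified way to reason about the limiting factor is to compute the nodes needed for each resource independently and take the largest requirement. The per-node capacities and workload numbers below are hypothetical placeholders; the real calculation also enforces AVS minimum node counts and the utilization rules described above.

```python
import math

# Hypothetical per-node capacities and aggregate workload requirements (illustrative values only).
node_capacity = {"cpu_cores": 36, "memory_gb": 576, "storage_gb": 15000}
workload_need = {"cpu_cores": 54, "memory_gb": 240, "storage_gb": 8000}

nodes_per_resource = {
    resource: math.ceil(workload_need[resource] / node_capacity[resource])
    for resource in node_capacity
}
limiting_factor = max(nodes_per_resource, key=nodes_per_resource.get)

print(nodes_per_resource)   # e.g. {'cpu_cores': 2, 'memory_gb': 1, 'storage_gb': 1}
print(limiting_factor)      # 'cpu_cores' -> CPU is the limiting factor
```
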
## Confidence ratings
Here are a few reasons why an assessment could get a low confidence rating:
- VMs are powered on for the duration of the assessment
- Outbound connections on port 443 are allowed
- - For Hyper-V VMs dynamic memory is enabled
+ - For Hyper-V VMs, dynamic memory is enabled
Please 'Recalculate' the assessment to reflect the latest changes in confidence rating.
- Some VMs were created during the time for which the assessment was calculated. For example, assume you created an assessment for the performance history of the last month, but some VMs were created only a week ago. In this case, the performance data for the new VMs will not be available for the entire duration and the confidence rating would be low.
migrate Discovered Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discovered-metadata.md
+
+ Title: Discovered metadata
+description: Provides details of the metadata discovered by Azure Migrate appliance.
++
+ms.
+ Last updated : 02/21/2022++
+# Metadata discovered by Azure Migrate appliance
+
+This article provides details of the metadata discovered by Azure Migrate appliance.
+
+The [Azure Migrate appliance](migrate-appliance.md) is a lightweight appliance that the Azure Migrate: Discovery and assessment tool uses to discover servers running in your environment and send server configuration and performance metadata to Azure.
+
+Metadata discovered by the Azure Migrate appliance helps you assess server readiness for migration to Azure, right-size servers, and plan costs. Microsoft doesn't use this data in any license compliance audit.
+
+## Collected metadata for VMware servers
+
+The appliance collects configuration and performance metadata, data about installed applications, roles, and features (software inventory), and dependency data (if agentless dependency analysis is enabled) from servers running in your VMware environment.
+
+Here's the full list of server metadata that the appliance collects and sends to Azure:
+
+**DATA** | **COUNTER**
+ |
+**Server details** |
+Server ID | vm.Config.InstanceUuid
+Server name | vm.Config.Name
+vCenter Server ID | VMwareClient.Instance.Uuid
+Server description | vm.Summary.Config.Annotation
+License product name | vm.Client.ServiceContent.About.LicenseProductName
+Operating system type | vm.SummaryConfig.GuestFullName
+Boot type | vm.Config.Firmware
+Number of cores | vm.Config.Hardware.NumCPU
+Memory (MB) | vm.Config.Hardware.MemoryMB
+Number of disks | vm.Config.Hardware.Device.ToList().FindAll(x => is VirtualDisk).count
+Disk size list | vm.Config.Hardware.Device.ToList().FindAll(x => is VirtualDisk)
+Network adapters list | vm.Config.Hardware.Device.ToList().FindAll(x => is VirtualEthernet).count
+CPU utilization | cpu.usage.average
+Memory utilization |mem.usage.average
+**Per disk details** |
+Disk key value | disk.Key
+Disk unit number | disk.UnitNumber
+Disk controller key value | disk.ControllerKey.Value
+Gigabytes provisioned | virtualDisk.DeviceInfo.Summary
+Disk name | Value generated using disk.UnitNumber, disk.Key, disk.ControllerKey.Value
+Read operations per second | virtualDisk.numberReadAveraged.average
+Write operations per second | virtualDisk.numberWriteAveraged.average
+Read throughput (MB per second) | virtualDisk.read.average
+Write throughput (MB per second) | virtualDisk.write.average
+**Per NIC details** |
+Network adapter name | nic.Key
+MAC address | ((VirtualEthernetCard)nic).MacAddress
+IPv4 addresses | vm.Guest.Net
+IPv6 addresses | vm.Guest.Net
+Read throughput (MB per second) | net.received.average
+Write throughput (MB per second) | net.transmitted.average
+**Inventory path details** |
+Name | container.GetType().Name
+Type of child object | container.ChildType
+Reference details | container.MoRef
+Parent details | Container.Parent
+Folder details per server | ((Folder)container).ChildEntity.Type
+Datacenter details per server | ((Datacenter)container).VmFolder
+Datacenter details per host folder | ((Datacenter)container).HostFolder
+Cluster details per host | ((ClusterComputeResource)container).Host
+Host details per server | ((HostSystem)container).VM
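
If you want to inspect the same vSphere properties yourself, a sketch like the following reads a few of them using the open-source pyVmomi client. The host, user, and password are placeholders; this only illustrates the property paths and is not the appliance's implementation.

```python
# Sketch: reading a few of the vSphere properties listed above with pyVmomi (pip install pyvmomi).
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

context = ssl._create_unverified_context()   # lab-only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="readonly@vsphere.local",
                  pwd="password", sslContext=context)
content = si.RetrieveContent()

# Walk all virtual machines under the root folder.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if not vm.config:                         # skip VMs whose config isn't accessible
        continue
    print(vm.config.instanceUuid,             # Server ID
          vm.config.name,                     # Server name
          vm.summary.config.guestFullName,    # Operating system type
          vm.config.hardware.numCPU,          # Number of cores
          vm.config.hardware.memoryMB)        # Memory (MB)
Disconnect(si)
```
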
+
+### Performance metadata
+
+Here's the performance data that an appliance collects for a server running on VMware and sends to Azure:
+
+**Data** | **Counter** | **Assessment impact**
+ | |
+CPU utilization | cpu.usage.average | Recommended server size/cost
+Memory utilization | mem.usage.average | Recommended server size/cost
+Disk read throughput (MB per second) | virtualDisk.read.average | Calculation for disk size, storage cost, server size
+Disk writes throughput (MB per second) | virtualDisk.write.average | Calculation for disk size, storage cost, server size
+Disk read operations per second | virtualDisk.numberReadAveraged.average | Calculation for disk size, storage cost, server size
+Disk writes operations per second | virtualDisk.numberWriteAveraged.average | Calculation for disk size, storage cost, server size
+NIC read throughput (MB per second) | net.received.average | Calculation for server size
+NIC writes throughput (MB per second) | net.transmitted.average |Calculation for server size
+
+## Collected metadata for Hyper-V servers
+
+The appliance collects configuration and performance metadata, data about installed applications, roles, and features (software inventory), and dependency data (if agentless dependency analysis is enabled) from servers running in your Hyper-V environment.
+
+Here's the full list of server metadata that the appliance collects and sends to Azure.
+
+**Data** | **WMI class** | **WMI class property**
+ | |
+**Server details** |
+Serial number of BIOS | Msvm_BIOSElement | BIOSSerialNumber
+Server type (Gen 1 or 2) | Msvm_VirtualSystemSettingData | VirtualSystemSubType
+Server display name | Msvm_VirtualSystemSettingData | ElementName
+Server version | Msvm_ProcessorSettingData | VirtualQuantity
+Memory (bytes) | Msvm_MemorySettingData | VirtualQuantity
+Maximum memory that can be consumed by server | Msvm_MemorySettingData | Limit
+Dynamic memory enabled | Msvm_MemorySettingData | DynamicMemoryEnabled
+Operating system name/version/FQDN | Msvm_KvpExchangeComponent | GuestIntrinsicExchangeItems Name Data
+Server power status | Msvm_ComputerSystem | EnabledState
+**Per disk details** |
+Disk identifier | Msvm_VirtualHardDiskSettingData | VirtualDiskId
+Virtual hard disk type | Msvm_VirtualHardDiskSettingData | Type
+Virtual hard disk size | Msvm_VirtualHardDiskSettingData | MaxInternalSize
+Virtual hard disk parent | Msvm_VirtualHardDiskSettingData | ParentPath
+**Per NIC details** |
+IP addresses (synthetic NICs) | Msvm_GuestNetworkAdapterConfiguration | IPAddresses
+DHCP enabled (synthetic NICs) | Msvm_GuestNetworkAdapterConfiguration | DHCPEnabled
+NIC ID (synthetic NICs) | Msvm_SyntheticEthernetPortSettingData | InstanceID
+NIC MAC address (synthetic NICs) | Msvm_SyntheticEthernetPortSettingData | Address
+NIC ID (legacy NICs) | Msvm_EmulatedEthernetPortSettingData | InstanceID
+NIC MAC ID (legacy NICs) | Msvm_EmulatedEthernetPortSettingData | Address
+
+### Performance data
+
+Here's the server performance data that the appliance collects and sends to Azure.
+
+**Performance counter class** | **Counter** | **Assessment impact**
+ | |
+Hyper-V Hypervisor Virtual Processor | % Guest Run Time | Recommended server size/cost
+Hyper-V Dynamic Memory Server | Current Pressure (%)<br/> Guest Visible Physical Memory (MB) | Recommended server size/cost
+Hyper-V Virtual Storage Device | Read Bytes/Second | Calculation for disk size, storage cost, server size
+Hyper-V Virtual Storage Device | Write Bytes/Second | Calculation for disk size, storage cost, server size
+Hyper-V Virtual Network Adapter | Bytes Received/Second | Calculation for server size
+Hyper-V Virtual Network Adapter | Bytes Sent/Second | Calculation for server size
+
+- CPU utilization is the sum of all usage, for all virtual processors attached to a server.
+- Memory utilization is (Current Pressure * Guest Visible Physical Memory) / 100.
+- Disk and network utilization values are collected from the listed Hyper-V performance counters.
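
For example, the memory formula above works out as follows with made-up counter values:

```python
# Memory utilization (MB) = (Current Pressure * Guest Visible Physical Memory) / 100,
# using made-up sample counter values (illustrative only).
current_pressure_pct = 60          # Hyper-V Dynamic Memory VM: Current Pressure (%)
guest_visible_memory_mb = 8192     # Guest Visible Physical Memory (MB)

memory_in_use_mb = (current_pressure_pct * guest_visible_memory_mb) / 100
print(memory_in_use_mb)            # 4915.2 MB in use
```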
+
+## Collected data for Physical servers
+
+The appliance collects configuration and performance metadata, data about installed applications, roles, and features (software inventory), and dependency data (if agentless [dependency analysis](concepts-dependency-visualization.md) is enabled) from physical servers or servers running on other clouds like AWS, GCP, etc.
+
+### Windows server metadata
+
+Here's the full list of Windows server metadata that the appliance collects and sends to Azure.
+
+**Data** | **WMI class** | **WMI class property**
+ | |
+FQDN | Win32_ComputerSystem | Domain, Name, PartOfDomain
+Processor core count | Win32_Processor | NumberOfCores
+Memory allocated | Win32_ComputerSystem | TotalPhysicalMemory
+BIOS serial number | Win32_ComputerSystemProduct | IdentifyingNumber
+BIOS GUID | Win32_ComputerSystemProduct | UUID
+Boot type | Win32_DiskPartition | Check for partition with Type = **GPT:System** for EFI/BIOS
+OS name | Win32_OperatingSystem | Caption
+OS version |Win32_OperatingSystem | Version
+OS architecture | Win32_OperatingSystem | OSArchitecture
+Disk count | Win32_DiskDrive | Model, Size, DeviceID, MediaType, Name
+Disk size | Win32_DiskDrive | Size
+NIC list | Win32_NetworkAdapterConfiguration | Description, Index
+NIC IP address | Win32_NetworkAdapterConfiguration | IPAddress
+NIC MAC address | Win32_NetworkAdapterConfiguration | MACAddress
+
+### Windows server performance data
+
+Here's the Windows server performance data that the appliance collects and sends to Azure.
+
+**Data** | **WMI class** | **WMI class property**
+ | |
+CPU usage | Win32_PerfFormattedData_PerfOS_Processor | PercentIdleTime
+Memory usage | Win32_PerfFormattedData_PerfOS_Memory | AvailableMBytes
+NIC count | Win32_PerfFormattedData_Tcpip_NetworkInterface | Get the network device count.
+Data received per NIC | Win32_PerfFormattedData_Tcpip_NetworkInterface | BytesReceivedPerSec
+Data transmitted per NIC | Win32_PerfFormattedData_Tcpip_NetworkInterface | BytesSentPersec
+Disk count | Win32_PerfFormattedData_PerfDisk_PhysicalDisk | Count of disks
+Disk details | Win32_PerfFormattedData_PerfDisk_PhysicalDisk | DiskWritesPerSec, DiskWriteBytesPerSec, DiskReadsPerSec, DiskReadBytesPerSec.
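
To sanity-check these counters on a server, you could read them locally with the third-party `wmi` Python package (an assumption for illustration; the appliance does not use this code):

```python
# Sketch: reading the same Windows performance counters with the "wmi" package (pip install wmi).
import wmi

c = wmi.WMI()

# CPU usage is derived from PercentIdleTime of the _Total processor instance.
for cpu in c.Win32_PerfFormattedData_PerfOS_Processor(Name="_Total"):
    print("CPU usage %:", 100 - int(cpu.PercentIdleTime))

# Available memory in MB.
for mem in c.Win32_PerfFormattedData_PerfOS_Memory():
    print("Available MB:", mem.AvailableMBytes)

# Per-NIC throughput counters.
for nic in c.Win32_PerfFormattedData_Tcpip_NetworkInterface():
    print(nic.Name, nic.BytesReceivedPersec, nic.BytesSentPersec)
```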
+
+### Linux server metadata
+
+Here's the full list of Linux server metadata that the appliance collects and sends to Azure.
+
+**Data** | **Commands**
+ |
+FQDN | cat /proc/sys/kernel/hostname, hostname -f
+Processor core count | cat /proc/cpuinfo \| awk '/^processor/{print $3}' \| wc -l
+Memory allocated | cat /proc/meminfo \| grep MemTotal \| awk '{printf "%.0f", $2/1024}'
+BIOS serial number | lshw \| grep "serial:" \| head -n1 \| awk '{print $2}' <br/> /usr/sbin/dmidecode -t 1 \| grep 'Serial' \| awk '{ $1="" ; $2=""; print}'
+BIOS GUID | cat /sys/class/dmi/id/product_uuid
+Boot type | [ -d /sys/firmware/efi ] && echo EFI \|\| echo BIOS
+OS name/version | We access these files for the OS version and name:<br/><br/> /etc/os-release<br/> /usr/lib/os-release <br/> /etc/enterprise-release <br/> /etc/redhat-release<br/> /etc/oracle-release<br/> /etc/SuSE-release<br/> /etc/lsb-release <br/> /etc/debian_version
+OS architecture | uname -m
+Disk count | fdisk -l \| egrep 'Disk.*bytes' \| awk '{print $2}' \| cut -f1 -d ':'
+Boot disk | df /boot \| sed -n 2p \| awk '{print $1}'
+Disk size | fdisk -l \| egrep 'Disk.*bytes' \| egrep $disk: \| awk '{print $5}'
+NIC list | ip -o -4 addr show \| awk '{print $2}'
+NIC IP address | ip addr show $nic \| grep inet \| awk '{print $2}' \| cut -f1 -d "/"
+NIC MAC address | ip addr show $nic \| grep ether \| awk '{print $2}'
+
+### Linux server performance data
+
+Here's the Linux server performance data that the appliance collects and sends to Azure.
+
+| **Data** | **Commands** |
+| | |
+| CPU usage | cat /proc/stat/ \| grep 'cpu' /proc/stat |
+| Memory usage | free \| grep Mem \| awk '{print $3/$2 * 100.0}' |
+| NIC count | lshw -class network \| grep eth[0-60] \| wc -l |
+| Data received per NIC | cat /sys/class/net/eth$nic/statistics/rx_bytes |
+| Data transmitted per NIC | cat /sys/class/net/eth$nic/statistics/tx_bytes |
+| Disk count | fdisk -l \| egrep 'Disk.\*bytes' \| awk '{print $2}' \| cut -f1 -d ':' |
+| Disk details | cat /proc/diskstats |
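
The same data can be read directly from `/proc` and `/sys` if you want to verify what the appliance would see. The sketch below is illustrative, uses `MemAvailable` as an approximation of the `free`-based formula above, and assumes a NIC named `eth0`:

```python
# Sketch: reading Linux performance data from /proc and /sys (illustrative, not the appliance's code).
def memory_usage_percent():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])   # values are reported in kB
    used_kb = info["MemTotal"] - info["MemAvailable"]
    return used_kb / info["MemTotal"] * 100.0

def nic_bytes(nic="eth0"):                              # "eth0" is a placeholder NIC name
    with open(f"/sys/class/net/{nic}/statistics/rx_bytes") as f:
        rx = int(f.read())
    with open(f"/sys/class/net/{nic}/statistics/tx_bytes") as f:
        tx = int(f.read())
    return rx, tx

print(f"Memory usage: {memory_usage_percent():.1f}%")
print("rx_bytes, tx_bytes:", nic_bytes())
```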
+
+## Software inventory data
+
+The appliance collects data about installed applications, roles, and features (software inventory) from servers running in your VMware or Hyper-V environment, and from physical servers or servers running on other clouds like AWS, GCP, etc.
+
+### Windows server applications data
+
+Here's the software inventory data that the appliance collects from each discovered Windows server:
+
+**Data** | **Registry Location** | **Key**
+ | |
+Application Name | HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* <br/> HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | DisplayName
+Version | HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* <br/> HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | DisplayVersion
+Provider | HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* <br/> HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | Publisher
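
To see what this looks like on a server, you could enumerate the same registry locations with Python's standard `winreg` module (illustrative only; not the appliance's code):

```python
# Sketch: listing installed applications from the Uninstall registry keys (run on Windows).
import winreg

UNINSTALL_PATHS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def _value_or_none(key, value_name):
    try:
        return winreg.QueryValueEx(key, value_name)[0]
    except FileNotFoundError:
        return None

def installed_applications():
    for path in UNINSTALL_PATHS:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as root:
            subkey_count = winreg.QueryInfoKey(root)[0]
            for i in range(subkey_count):
                with winreg.OpenKey(root, winreg.EnumKey(root, i)) as app:
                    name = _value_or_none(app, "DisplayName")
                    if not name:
                        continue        # many keys have no DisplayName
                    yield name, _value_or_none(app, "DisplayVersion"), _value_or_none(app, "Publisher")

for name, version, publisher in installed_applications():
    print(name, version, publisher)
```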
+
+### Windows server features data
+
+Here's the features data that the appliance collects from each discovered Windows server:
+
+**Data** | **PowerShell cmdlet** | **Property**
+ | |
+Name | Get-WindowsFeature | Name
+Feature Type | Get-WindowsFeature | FeatureType
+Parent | Get-WindowsFeature | Parent
+
+### Windows server operating system data
+
+Here's the operating system data that the appliance collects from each discovered Windows server:
+
+**Data** | **WMI class** | **WMI Class Property**
+ | |
+Name | Win32_operatingsystem | Caption
+Version | Win32_operatingsystem | Version
+Architecture | Win32_operatingsystem | OSArchitecture
+
+### SQL Server metadata
+
+Here's the SQL Server data that the appliance collects from each discovered Windows server:
+
+**Data** | **Registry Location** | **Key**
+ | |
+Name | HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL | installedInstance
+Edition | HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\\\<InstanceName>\Setup | Edition
+Service Pack | HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\\\<InstanceName>\Setup | SP
+Version | HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\\\<InstanceName>\Setup | Version
+
+### Linux server application data
+
+Here's the software inventory data that the appliance collects from each discovered Linux server. Based on the operating system of the server, one or more of the commands are run.
+
+**Data** | **Commands**
+ |
+Name | rpm, dpkg-query, snap
+Version | rpm, dpkg-query, snap
+Provider | rpm, dpkg-query, snap
+
+### Linux server operating system data
+
+Here's the operating system data that the appliance collects from each discovered Linux server:
+
+**Data** | **Commands**
+ |
+Name <br/> version | Gathered from one or more of the following files:<br/> <br/>/etc/os-release <br> /usr/lib/os-release <br> /etc/enterprise-release <br> /etc/redhat-release <br> /etc/oracle-release <br> /etc/SuSE-release <br> /etc/lsb-release <br> /etc/debian_version
+Architecture | uname
+
+## SQL Server instances and databases data
+
+The Azure Migrate appliance used for discovery of VMware VMs can also collect data on SQL Server instances and databases.
+
+> [!Note]
+> Currently this feature is only available for servers running in your VMware environment.
+
+### SQL database metadata
+
+**Database Metadata** | **Views/ SQL Server properties**
+ |
+Unique identifier of the database | sys.databases
+Server defined database ID | sys.databases
+Name of the database | sys.databases
+Compatibility level of database | sys.databases
+Collation name of database | sys.databases
+State of the database | sys.databases
+Size of the database (in MBs) | sys.master_files
+Drive letter of location containing data files | SERVERPROPERTY, and Software\Microsoft\MSSQLServer\MSSQLServer
+List of database files | sys.databases, sys.master_files
+Service broker is enabled or not | sys.databases
+Database is enabled for change data capture or not | sys.databases
+
+### SQL Server metadata
+
+**Server Metadata** | **Views/ SQL server properties**
+ |
+Server name |SERVERPROPERTY
+FQDN | Connection string derived from discovery of installed applications
+Install ID | sys.dm_server_registry
+Server version | SERVERPROPERTY
+Server edition | SERVERPROPERTY
+Server host platform (Windows/Linux) | SERVERPROPERTY
+Product level of the server (RTM SP CTP) | SERVERPROPERTY
+Default Backup path | SERVERPROPERTY
+Default path of the data files | SERVERPROPERTY, and Software\Microsoft\MSSQLServer\MSSQLServer
+Default path of the log files | SERVERPROPERTY, and Software\Microsoft\MSSQLServer\MSSQLServer
+No. of cores on the server | sys.dm_os_schedulers, sys.dm_os_sys_info
+Server collation name | SERVERPROPERTY
+No. of cores on the server with VISIBLE ONLINE status | sys.dm_os_schedulers
+Unique Server ID | sys.dm_server_registry
+HA enabled or not | SERVERPROPERTY
+Buffer Pool Extension enabled or not | sys.dm_os_buffer_pool_extension_configuration
+Failover cluster configured or not | SERVERPROPERTY
+Server using Windows Authentication mode only | SERVERPROPERTY
+Server installs PolyBase | SERVERPROPERTY
+No. of logical CPUs on the system | sys.dm_server_registry, sys.dm_os_sys_info
+Ratio of the no of logical or physical cores that are exposed by one physical processor package | sys.dm_os_schedulers, sys.dm_os_sys_info
+No of physical CPUs on the system | sys.dm_os_schedulers, sys.dm_os_sys_info
+Date and time server last started | sys.dm_server_registry
+Max server memory use (in MBs) | sys.dm_os_process_memory
+Total no. of users across all databases | sys.databases, sys.logins
+Total size of all user databases | sys.databases
+Size of temp database | sys.master_files, sys.configurations, sys.dm_os_sys_info
+No. of logins | sys.logins
+List of linked servers | sys.servers
+List of agent job | [msdb].[dbo].[sysjobs], [sys].[syslogins], [msdb].[dbo].[syscategories]
+
+### Performance metadata
+
+**Performance** | **Views/ SQL server properties** | **Assessment Impact**
+ | |
+SQL Server CPU utilization| sys.dm_os_ring_buffers| Recommended SKU size (CPU dimension)
+SQL logical CPU count| sys.dm_os_sys_info| Recommended SKU size (CPU dimension)
+SQL physical memory in use| sys.dm_os_process_memory| Unused
+SQL memory utilization percentage| sys.dm_os_process_memory | Unused
+Database CPU utilization| sys.dm_exec_query_stats, sys.dm_exec_plan_attributes| Recommended SKU size (CPU dimension)
+Database memory in use (buffer pool)| sys.dm_os_buffer_descriptors| Recommended SKU size (Memory dimension)
+File read/write IO| sys.dm_io_virtual_file_stats, sys.master_files| Recommended SKU size (IO dimension)
+File num of reads/writes| sys.dm_io_virtual_file_stats, sys.master_files| Recommended SKU size (Throughput dimension)
+File IO stall read/write (ms)| sys.dm_io_virtual_file_stats, sys.master_files| Recommended SKU size (IO latency dimension)
+File size| sys.master_files| Recommended SKU size (Storage dimension)
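
If you want to inspect a few of these views manually, a sketch like the following queries the same DMVs with `pyodbc` and a placeholder connection string; it is not how the appliance itself connects:

```python
# Sketch: querying a couple of the views listed above with pyodbc (pip install pyodbc).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=master;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Logical CPU count and physical memory visible to SQL Server.
cursor.execute("SELECT cpu_count, physical_memory_kb FROM sys.dm_os_sys_info")
print(cursor.fetchone())

# Databases and their state, as surfaced through sys.databases.
cursor.execute("SELECT name, database_id, compatibility_level, state_desc FROM sys.databases")
for row in cursor.fetchall():
    print(row.name, row.database_id, row.compatibility_level, row.state_desc)

conn.close()
```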
+
+## ASP.NET web apps data
+
+The Azure Migrate appliance used for discovery of VMware VMs can also collect data on ASP.NET web applications.
+
+> [!Note]
+> Currently this feature is only available for servers running in your VMware environment.
+
+Here's the web apps configuration data that the appliance collects from each Windows server discovered in your VMware environment.
+
+**Entity** | **Data**
+ |
+Web apps | Application Name <br/>Configuration Path <br/>Frontend Bindings <br/>Enabled Frameworks <br/>Hosting Web Server<br/>Sub-Applications and virtual applications <br/>Application Pool name <br/>Runtime version <br/>Managed pipeline mode
+Web server | Server Name <br/>Server Type (currently only IIS) <br/>Configuration Location <br/>Version <br/>FQDN <br/>Credentials used for discovery <br/>List of Applications
+
+## Application dependency data
+
+The Azure Migrate appliance can collect data about inter-server dependencies for servers running in your VMware or Hyper-V environment, and for physical servers or servers running on other clouds like AWS, GCP, etc.
+
+### Windows server dependencies data
+
+Here's the connection data that the appliance collects from each Windows server that has been enabled for agentless dependency analysis from the portal:
+
+**Data** | **Commands**
+ |
+Local port | netstat
+Local IP address | netstat
+Remote port | netstat
+Remote IP address | netstat
+TCP connection state | netstat
+Process ID | netstat
+Number of active connections | netstat
+
+**Data** | **WMI class** | **WMI class property**
+ | |
+Process name | Win32_Process | ExecutablePath
+Process arguments | Win32_Process | CommandLine
+Application name | Win32_Process | VersionInfo.ProductName parameter of ExecutablePath property
+
+### Linux server dependencies data
+
+Here's the connection data that the appliance collects from each Linux server that has been enabled for agentless dependency analysis.
+
+**Data** | **Commands**
+ |
+Local port | netstat
+Local IP address | netstat
+Remote port | netstat
+Remote IP address | netstat
+TCP connection state | netstat
+Number of active connections | netstat
+Process ID | netstat
+Process name | ps
+Process arguments | ps
+Application name | dpkg or rpm
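
An equivalent way to gather this kind of connection and process data yourself is with the cross-platform `psutil` package (an assumption for illustration; the appliance relies on `netstat` and `ps` as listed above):

```python
# Sketch: collecting connection and process data similar to the netstat/ps output above.
import psutil

for conn in psutil.net_connections(kind="tcp"):
    if not conn.raddr:                 # skip listening sockets with no remote endpoint
        continue
    process_name = process_args = None
    if conn.pid:
        try:
            proc = psutil.Process(conn.pid)
            process_name = proc.name()
            process_args = " ".join(proc.cmdline())
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
    print(conn.laddr.ip, conn.laddr.port,
          conn.raddr.ip, conn.raddr.port,
          conn.status, conn.pid, process_name, process_args)
```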
+
+## Next steps
+
+- [Learn how](how-to-set-up-appliance-vmware.md) to set up the appliance for VMware.
+- [Learn how](how-to-set-up-appliance-hyper-v.md) to set up the appliance for Hyper-V.
+- [Learn how](how-to-set-up-appliance-physical.md) to set up the appliance for physical servers.
migrate How To Create Group Machine Dependencies Agentless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-group-machine-dependencies-agentless.md
Last updated 6/08/2020
This article describes how to set up agentless dependency analysis using Azure Migrate: Discovery and assessment tool. [Dependency analysis](concepts-dependency-visualization.md) helps you to identify and understand dependencies across servers for assessment and migration to Azure.
-> [!IMPORTANT]
->Agentless dependency analysis is currently available only for servers running in your VMware environment, discovered with the Azure Migrate:Discovery and assessment tool.
-
## Current limitations
- In the dependency analysis view, you currently cannot add or remove a server from a group.
This article describes how to set up agentless dependency analysis using Azure M
## Before you start
-- Ensure that you have [created a project](./create-manage-projects.md) with the Azure Migrate:Discovery and assessment tool added to it.
-- Review [VMware requirements](migrate-support-matrix-vmware.md#vmware-requirements) to perform dependency analysis.
-- Review [appliance requirements](migrate-support-matrix-vmware.md#azure-migrate-appliance-requirements) before setting up the appliance.
-- [Review dependency analysis requirements](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless) before enabling dependency analysis on servers.
+- Ensure that you have [created a project](./create-manage-projects.md) with the Azure Migrate: Discovery and assessment tool added to it.
+- Review the requirements based on your environment and the appliance you are setting up to perform software inventory:
-## Deploy and configure the Azure Migrate appliance
+ Environment | Requirements
+ |
+ Servers running in VMware environment | Review [VMware requirements](migrate-support-matrix-vmware.md#vmware-requirements) <br/> Review [appliance requirements](migrate-appliance.md#appliancevmware)<br/> Review [port access requirements](migrate-support-matrix-vmware.md#port-access-requirements) <br/> Review [agentless dependency analysis requirements](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless)
+ Servers running in Hyper-V environment | Review [Hyper-V host requirements](migrate-support-matrix-hyper-v.md#hyper-v-host-requirements) <br/> Review [appliance requirements](migrate-appliance.md#appliancehyper-v)<br/> Review [port access requirements](migrate-support-matrix-hyper-v.md#port-access)<br/> Review [agentless dependency analysis requirements](migrate-support-matrix-hyper-v.md#dependency-analysis-requirements-agentless)
+ Physical servers or servers running on other clouds | Review [server requirements](migrate-support-matrix-physical.md#physical-server-requirements) <br/> Review [appliance requirements](migrate-appliance.md#appliancephysical)<br/> Review [port access requirements](migrate-support-matrix-physical.md#port-access)<br/> Review [agentless dependency analysis requirements](migrate-support-matrix-physical.md#dependency-analysis-requirements-agentless)
+- Review the Azure URLs that the appliance will need to access in the [public](migrate-appliance.md#public-cloud-urls) and [government clouds](migrate-appliance.md#government-cloud-urls).
-1. [Review](migrate-appliance.md#appliancevmware) the requirements for deploying the Azure Migrate appliance.
-2. Review the Azure URLs that the appliance will need to access in the [public](migrate-appliance.md#public-cloud-urls) and [government clouds](migrate-appliance.md#government-cloud-urls).
-3. [Review data](migrate-appliance.md#collected-datavmware) that the appliance collects during discovery and assessment.
-4. [Note](migrate-support-matrix-vmware.md#port-access-requirements) port access requirements for the appliance.
-5. [Deploy the Azure Migrate appliance](how-to-set-up-appliance-vmware.md) to start discovery. To deploy the appliance, you download and import an OVA template into VMware to create a server running in your vCenter Server. After deploying the appliance, you need to register it with the project and configure it to initiate the discovery.
-6. As you configure the appliance, you need to specify the following in the appliance configuration
- - The details of the vCenter Server to which you want to connect.
- - vCenter Server credentials scoped to discover the servers in your VMware environment.
- - Server credentials, which can be domain/ Windows(non-domain)/ Linux(non-domain) credentials. [Learn more](add-server-credentials.md) about how to provide credentials and how we handle them.
-## Verify permissions
+## Deploy and configure the Azure Migrate appliance
-- You need to [create a vCenter Server read-only account](./tutorial-discover-vmware.md#prepare-vmware) for discovery and assessment. The read-only account needs privileges enabled for **Virtual Machines** > **Guest Operations**, in order to interact with the servers to collect dependency data.
-- You need a user account so that Azure Migrate can access the server to collect dependency data. [Learn](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless) about account requirements for Windows and Linux servers.
+1. Deploy the Azure Migrate appliance to start discovery. To deploy the appliance, you can use the [deployment method](migrate-appliance.md#deployment-methods) as per your environment. After deploying the appliance, you need to register it with the project and configure it to initiate the discovery.
+2. As you configure the appliance, you need to specify the following in the appliance configuration
+ - The details of the source environment (vCenter Server(s)/Hyper-V host(s) or cluster(s)/physical servers) which you want to discover.
+ - Server credentials, which can be domain/ Windows (non-domain)/ Linux (non-domain) credentials. [Learn more](add-server-credentials.md) about how to provide credentials and how the appliance handles them.
+ - Verify the permissions required to perform agentless dependency analysis. For Windows servers, you need to provide a domain or non-domain (local) account with administrative permissions. For Linux servers, you need to provide a root user account, or an account with these permissions on the /bin/netstat and /bin/ls files: CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE.<br/><br/> You can set these capabilities using the following commands: <br/> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br/> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat
### Add credentials and initiate discovery
1. Open the appliance configuration manager, complete the prerequisite checks and registration of the appliance.
2. Navigate to the **Manage credentials and discovery sources** panel.
-1. In **Step 1: Provide vCenter Server credentials**, click on **Add credentials** to provide credentials for the vCenter Server account that the appliance will use to discover servers running on the vCenter Server.
-1. In **Step 2: Provide vCenter Server details**, click on **Add discovery source** to select the friendly name for credentials from the drop-down, specify the **IP address/FQDN** of the vCenter Server instance
-1. In **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis and discovery of SQL Server instances and databases**, click **Add credentials** to provide multiple server credentials to initiate software inventory.
-1. Click on **Start discovery**, to kick off vCenter Server discovery.
+1. In **Step 1: Provide credentials for discovery source**, click on **Add credentials** to provide credentials for the discovery source that the appliance will use to discover servers running in your environment.
+1. In **Step 2: Provide discovery source details**, click on **Add discovery source** to select the friendly name for credentials from the drop-down, specify the **IP address/FQDN** of the discovery source.
+1. In **Step 3: Provide server credentials to perform software inventory and agentless dependency analysis**, click **Add credentials** to provide multiple server credentials to perform software inventory.
+1. Click on **Start discovery**, to initiate discovery.
+
+ After the server discovery is complete, appliance initiates the discovery of installed applications, roles and features (software inventory) on the servers. During software inventory, the discovered servers are validated to check if they meet the prerequisites and can be enabled for agentless dependency analysis.
+
+ > [!Note]
+ > You can enable agentless dependency analysis for discovered servers from Azure Migrate project. Only the servers where the validation succeeds can be selected to enable agentless dependency analysis.
- After the vCenter Server discovery is complete, appliance initiates the discovery of installed applications, roles and features (software inventory).During software inventory, the added servers credentials will be iterated against servers and validated for agentless dependency analysis. You can enable agentless dependency analysis for servers from the portal. Only the servers where the validation succeeds can be selected to enable agentless dependency analysis.
+ After servers have been enabled for agentless dependency analysis from portal, appliance gathers the dependency data every 5 mins from the server and sends an aggregated data point every 6 hours to Azure. Review the [data](discovered-metadata.md#application-dependency-data) collected by appliance during agentless dependency analysis.
## Start dependency discovery
Select the servers on which you want to enable dependency discovery.
1. In the **Add servers** page, select the servers where you want to enable dependency analysis. You can enable dependency mapping only on those servers where validation succeeded. The next validation cycle will run 24 hours after the last validation timestamp.
1. After selecting the servers, click **Add servers**. You can visualize dependencies around six hours after enabling dependency analysis on servers. If you want to simultaneously enable multiple servers for dependency analysis, you can use [PowerShell](#start-or-stop-dependency-analysis-using-powershell) to do so.
You can visualize dependencies around six hours after enabling dependency analys
1. Expand the **Client** group to list the servers with a dependency on the selected server.
1. Expand the **Port** group to list the servers that have a dependency from the selected server.
1. To navigate to the map view of any of the dependent servers, click on the server name > **Load server map**.
8. Expand the selected server to view process-level details for each dependency.

> [!NOTE]
> Process information for a dependency is not always available. If it's not available, the dependency is depicted with the process marked as "Unknown process".
You can visualize dependencies around six hours after enabling dependency analys
6. Click **Export dependency**. The dependency data is exported and downloaded in a CSV format. The downloaded file contains the dependency data across all servers enabled for dependency analysis.

### Dependency information
migrate How To Discover Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-discover-applications.md
Title: Discover software inventory on on-premises servers with Azure Migrate description: Learn how to discover software inventory on on-premises servers with Azure Migrate Discovery and assessment.--++ ms. Last updated 03/18/2021
Last updated 03/18/2021
# Discover installed software inventory, web apps, and SQL Server instances and databases
-This article describes how to discover installed software inventory, web apps, and SQL Server instances and databases on servers running in your VMware environment, using Azure Migrate: Discovery and assessment tool.
+This article describes how to discover installed software inventory, web apps, and SQL Server instances and databases on servers running in your on-premises environment, using the Azure Migrate: Discovery and assessment tool.
Performing software inventory helps identify and tailor a migration path to Azure for your workloads. Software inventory uses the Azure Migrate appliance to perform discovery, using server credentials. It is completely agentless; no agents are installed on the servers to collect this data.
+> [!Note]
+> Currently, the discovery of ASP.NET web apps and SQL Server instances and databases is only available with the appliance used for discovery of servers running in your VMware environment. These features are not available for servers running in your Hyper-V environment, or for physical servers or servers running on other clouds like AWS, GCP, etc.
## Before you start
- Ensure that you have [created a project](./create-manage-projects.md) with the Azure Migrate: Discovery and assessment tool added to it.
-- Review [VMware requirements](migrate-support-matrix-vmware.md#vmware-requirements) to perform software inventory.
-- Review [appliance requirements](migrate-support-matrix-vmware.md#azure-migrate-appliance-requirements) before setting up the appliance.
-- Review [application discovery requirements](migrate-support-matrix-vmware.md#software-inventory-requirements) before initiating software inventory on servers.
-
-## Deploy and configure the Azure Migrate appliance
+- Review the requirements based on your environment and the appliance you are setting up to perform software inventory:
-1. [Review](migrate-appliance.md#appliancevmware) the requirements for deploying the Azure Migrate appliance.
-2. Review the Azure URLs that the appliance will need to access in the [public](migrate-appliance.md#public-cloud-urls) and [government clouds](migrate-appliance.md#government-cloud-urls).
-3. [Review data](migrate-appliance.md#collected-datavmware) that the appliance collects during discovery and assessment.
-4. [Note](migrate-support-matrix-vmware.md#port-access-requirements) port access requirements for the appliance.
-5. [Deploy the Azure Migrate appliance](how-to-set-up-appliance-vmware.md) to start discovery. To deploy the appliance, you download and import an OVA template into VMware to create a server running in your vCenter Server. After deploying the appliance, you need to register it with the project and configure it to initiate the discovery.
-6. As you configure the appliance, you need to specify the following in the appliance configuration
- - The details of the vCenter Server to which you want to connect.
- - vCenter Server credentials scoped to discover the servers in your VMware environment.
- - Server credentials, which can be domain/ Windows(non-domain)/ Linux(non-domain) credentials. [Learn more](add-server-credentials.md) about how to provide credentials and how we handle them.
+ Environment | Requirements
+ |
+ Servers running in VMware environment | Review [VMware requirements](migrate-support-matrix-vmware.md#vmware-requirements) <br/> Review [appliance requirements](migrate-appliance.md#appliancevmware)<br/> Review [port access requirements](migrate-support-matrix-vmware.md#port-access-requirements) <br/> Review [software inventory requirements](migrate-support-matrix-vmware.md#software-inventory-requirements)
+ Servers running in Hyper-V environment | Review [Hyper-V host requirements](migrate-support-matrix-hyper-v.md#hyper-v-host-requirements) <br/> Review [appliance requirements](migrate-appliance.md#appliancehyper-v)<br/> Review [port access requirements](migrate-support-matrix-hyper-v.md#port-access)<br/> Review [software inventory requirements](migrate-support-matrix-hyper-v.md#software-inventory-requirements)
+ Physical servers or servers running on other clouds | Review [server requirements](migrate-support-matrix-physical.md#physical-server-requirements) <br/> Review [appliance requirements](migrate-appliance.md#appliancephysical)<br/> Review [port access requirements](migrate-support-matrix-physical.md#port-access)<br/> Review [software inventory requirements](migrate-support-matrix-physical.md#software-inventory-requirements)
+- Review the Azure URLs that the appliance will need to access in the [public](migrate-appliance.md#public-cloud-urls) and [government clouds](migrate-appliance.md#government-cloud-urls).
-## Verify permissions
+## Deploy and configure the Azure Migrate appliance
-- You need to [create a vCenter Server read-only account](./tutorial-discover-vmware.md#prepare-vmware) for discovery and assessment. The read-only account needs privileges enabled for **Virtual Machines** > **Guest Operations**, in order to interact with the servers to perform software inventory.-- You can add multiple domain and non-domain (Windows/Linux) credentials on the appliance configuration manager for application discovery. You need a guest user account for Windows servers, and a regular/normal user account (non-sudo access) for all Linux servers.[Learn more](add-server-credentials.md) about how to provide credentials and how we handle them.
+1. Deploy the Azure Migrate appliance to start discovery. To deploy the appliance, you can use the [deployment method](migrate-appliance.md#deployment-methods) as per your environment. After deploying the appliance, you need to register it with the project and configure it to initiate the discovery.
+2. As you configure the appliance, you need to specify the following in the appliance configuration
+ - The details of the source environment (vCenter Server(s)/Hyper-V host(s) or cluster(s)/physical servers) which you want to discover.
+ - Server credentials, which can be domain/ Windows (non-domain)/ Linux (non-domain) credentials. [Learn more](add-server-credentials.md) about how to provide credentials and how the appliance handles them.
+ - Verify the permissions required to perform software inventory. You need a guest user account for Windows servers, and a regular/normal user account (non-sudo access) for all Linux servers.
### Add credentials and initiate discovery
1. Open the appliance configuration manager, complete the prerequisite checks and registration of the appliance.
2. Navigate to the **Manage credentials and discovery sources** panel.
-1. In **Step 1: Provide vCenter Server credentials**, click on **Add credentials** to provide credentials for the vCenter Server account that the appliance will use to discover servers running on the vCenter Server.
-1. In **Step 2: Provide vCenter Server details**, click on **Add discovery source** to select the friendly name for credentials from the drop-down, specify the **IP address/FQDN** of the vCenter Server instance
-1. In **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis, discovery of SQL Server instances and databases and discovery of ASP.NET web apps in your VMware environment.**, click **Add credentials** to provide multiple server credentials to initiate software inventory.
-1. Click on **Start discovery**, to kick off vCenter Server discovery.
+1. In **Step 1: Provide credentials for discovery source**, click on **Add credentials** to provide credentials for the discovery source that the appliance will use to discover servers running in your environment.
+1. In **Step 2: Provide discovery source details**, click on **Add discovery source** to select the friendly name for credentials from the drop-down and specify the **IP address/FQDN** of the discovery source.
+1. In **Step 3: Provide server credentials to perform software inventory and agentless dependency analysis**, click **Add credentials** to provide multiple server credentials to perform software inventory.
+1. Click on **Start discovery** to initiate discovery.
- After the vCenter Server discovery is complete, appliance initiates the discovery of installed applications, roles, and features (software inventory). The duration depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate portal.
+ After the server discovery is complete, the appliance initiates the discovery of installed applications, roles, and features (software inventory) on the servers. The duration depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate portal. After the initial discovery is complete, software inventory data is collected and sent to Azure once every 24 hours. Review the [data](discovered-metadata.md#software-inventory-data) collected by the appliance during software inventory.
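Before you select **Start discovery**, you can optionally confirm from the appliance server that the discovery source is reachable. The following is a minimal sketch, assuming a vCenter Server source on its default TCP port 443 (Hyper-V hosts use WinRM port 5985 instead); the host name is a placeholder.

```powershell
# Minimal sketch: from the appliance server, confirm the discovery source is
# reachable before starting discovery. The host name and port are placeholders;
# vCenter Server listens on TCP 443 by default, Hyper-V hosts use WinRM port 5985.
Test-NetConnection -ComputerName "vcenter.contoso.local" -Port 443 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded
```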
## Review and export the inventory
Once connected, appliance gathers configuration and performance data of SQL Serv
## Discover ASP.NET web apps
-Software inventory identifies web server role existing on discovered servers. If a server is found to have web server role enabled, Azure Migrate will perform web apps discovery on the server.
-User can add both domain and non-domain credentials on appliance. Make sure that the account used has local admin privileges on source servers. Azure Migrate automatically maps credentials to the respective servers, so one doesnΓÇÖt have to map them manually. Most importantly, these credentials are never sent to Microsoft and remain on the appliance running in source environment.
-After the appliance is connected, it gathers configuration data for IIS web server and ASP.NET web apps. Web apps configuration data is updated once every 24 hours.
+- Software inventory identifies the web server role on discovered servers. If a server has the web server role enabled, Azure Migrate performs web apps discovery on the server (see the sketch after the note below).
+- You can add both domain and non-domain credentials on the appliance. Make sure that the account used has local admin privileges on the source servers. Azure Migrate automatically maps credentials to the respective servers, so you don't have to map them manually. These credentials are never sent to Microsoft and remain on the appliance running in the source environment.
+- After the appliance is connected, it gathers configuration data for the IIS web server and ASP.NET web apps. Web apps configuration data is updated once every 24 hours.
+
+> [!Note]
+> Currently, the discovery of ASP.NET web apps and SQL Server instances and databases is only available with the appliance used for discovery of servers running in your VMware environment.
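If you want to confirm, outside the appliance, why a particular Windows server was picked up for web apps discovery, you can check whether the Web Server (IIS) role is installed on it. The following is a minimal illustrative sketch run on the server itself; it is not what the appliance executes.

```powershell
# Minimal sketch: run on a discovered Windows server to check whether the
# Web Server (IIS) role is installed, which is what triggers web apps discovery.
Get-WindowsFeature -Name Web-Server | Select-Object Name, InstallState

# List the IIS sites that web apps discovery would report on
# (requires the IIS WebAdministration module).
Import-Module WebAdministration
Get-Website | Select-Object Name, State, PhysicalPath
```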
## Next steps
migrate How To Set Up Appliance Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-hyper-v.md
Title: Set up an Azure Migrate appliance for Hyper-V description: Learn how to set up an Azure Migrate appliance to assess and migrate servers on Hyper-V.--++ ms. Last updated 03/13/2021
The [Azure Migrate appliance](migrate-appliance.md) is a lightweight appliance
You can deploy the appliance using a couple of methods: -- Set up on a server on Hyper-V using a downloaded VHD. This method described in this article.
+- Set up on a server on Hyper-V using a downloaded VHD. This method is described in the current article.
- Set up on a server on Hyper-V or physical server with a PowerShell installer script. [This method](deploy-appliance-script.md) should be used if you can't set up a server using a VHD, or if you're in Azure Government. After creating the appliance, you check that it can connect to Azure Migrate: Discovery and assessment, configure it for the first time, and register it with the project.
+> [!NOTE]
+> If you have already created a project, you can use the same project to register additional appliances to discover and assess more servers. [Learn more](create-manage-projects.md#find-a-project).
+ ## Appliance deployment (VHD) To set up the appliance using a VHD template:
To set up the appliance using a VHD template:
In **2: Download Azure Migrate appliance**, select the .VHD file and click on **Download**.
- ![Selections for Discover servers](./media/tutorial-assess-hyper-v/servers-discover.png)
+ :::image type="content" source="./media/tutorial-assess-hyper-v/servers-discover.png" alt-text="Screenshot of selections for Discover servers.":::
- ![Selections for Generate Key](./media/tutorial-assess-hyper-v/generate-key-hyperv.png)
+ :::image type="content" source="./media/tutorial-assess-hyper-v/generate-key-hyper-v-inline-1.png" alt-text="Screenshots of selections for Generate Key." lightbox="./media/tutorial-assess-hyper-v/generate-key-hyper-v-expanded-1.png":::
### Verify security
Import the downloaded file, and create an appliance.
1. Extract the zipped VHD file to a folder on the Hyper-V host that will host the appliance. Three folders are extracted. 2. Open Hyper-V Manager. In **Actions**, click **Import Virtual Machine**.
- ![Deploy VHD](./media/how-to-set-up-appliance-hyper-v/deploy-vhd.png)
- ![Screenshot of the procedure to deploy the VHD.](./media/how-to-set-up-appliance-hyper-v/deploy-vhd.png)
2. In the Import Virtual Machine Wizard > **Before you begin**, click **Next**. 3. In **Locate Folder**, specify the folder containing the extracted VHD. Then click **Next**.
Set up the appliance for the first time.
1. Paste the **project key** copied from the portal. If you do not have the key, go to **Azure Migrate: Discovery and assessment> Discover> Manage existing appliances**, select the appliance name you provided at the time of key generation and copy the corresponding key. 1. You will need a device code to authenticate with Azure. Clicking on **Login** will open a modal with the device code as shown below.
- ![Modal showing the device code](./media/tutorial-discover-vmware/device-code.png)
+ ![Modal showing the device code.](./media/tutorial-discover-vmware/device-code.png)
1. Click on **Copy code & Login** to copy the device code and open an Azure Login prompt in a new browser tab. If it doesn't appear, make sure you've disabled the pop-up blocker in the browser. 1. On the new tab, paste the device code and sign-in by using your Azure username and password.
If you're running VHDs on SMBs, you must enable delegation of credentials from t
Connect from the appliance to Hyper-V hosts or clusters, and start discovery.
+### Provide Hyper-V host/cluster details
+ 1. In **Step 1: Provide Hyper-V host credentials**, click on **Add credentials** to specify a friendly name for credentials, add **Username** and **Password** for a Hyper-V host/cluster that the appliance will use to discover servers. Click on **Save**. 1. If you want to add multiple credentials at once, click on **Add more** to save and add more credentials. Multiple credentials are supported for the discovery of servers on Hyper-V. 1. In **Step 2: Provide Hyper-V host/cluster details**, click on **Add discovery source** to specify the Hyper-V host/cluster **IP address/FQDN** and the friendly name for credentials to connect to the host/cluster. 1. You can either **Add single item** at a time or **Add multiple items** in one go. There is also an option to provide Hyper-V host/cluster details through **Import CSV**.
- ![Selections for adding discovery source](./media/tutorial-assess-hyper-v/add-discovery-source-hyperv.png)
+ ![Screenshot of selections for adding discovery source.](./media/tutorial-assess-hyper-v/add-discovery-source-hyperv.png)
- If you choose **Add single item**, you need to specify a friendly name for credentials and the Hyper-V host/cluster **IP address/FQDN**, and click on **Save**. - If you choose **Add multiple items** _(selected by default)_, you can add multiple records at once by specifying Hyper-V host/cluster **IP address/FQDN** with the friendly name for credentials in the text box. **Verify** the added records and click on **Save**.
Connect from the appliance to Hyper-V hosts or clusters, and start discovery.
- You can't remove a specific host from a cluster. You can only remove the entire cluster. - You can add a cluster, even if there are issues with specific hosts in the cluster. 1. You can **revalidate** the connectivity to hosts/clusters anytime before starting the discovery.
-1. Click on **Start discovery**, to kick off server discovery from the successfully validated hosts/clusters. After the discovery has been successfully initiated, you can check the discovery status against each host/cluster in the table.
-This starts discovery. It takes approximately 2 minutes per host for metadata of discovered servers to appear in the Azure portal.
+### Provide server credentials
+
+In **Step 3: Provide server credentials to perform software inventory and agentless dependency analysis.**, you can provide multiple server credentials. If you don't want to use any of these appliance features, you can skip this step and proceed with discovery of servers running on Hyper-V hosts/clusters. You can change this option at any time.
++
+If you want to use these features, provide server credentials by completing the following steps. The appliance attempts to automatically map the credentials to the servers to perform the discovery features.
+
+To add server credentials:
+
+1. Select **Add Credentials**.
+1. In the dropdown menu, select **Credentials type**.
+
+ You can provide domain, Windows (non-domain), or Linux (non-domain) credentials. Learn how to [provide credentials](add-server-credentials.md) and how the appliance handles them.
+1. For each type of credentials, enter:
+ * A friendly name.
+ * A username.
+ * A password.
+ Select **Save**.
+
+ If you choose to use domain credentials, you also must enter the FQDN for the domain. The FQDN is required to validate the authenticity of the credentials with the Active Directory instance in that domain.
+1. Review the [required permissions](add-server-credentials.md#required-permissions) on the account for discovery of installed applications and agentless dependency analysis.
+1. To add multiple credentials at once, select **Add more** to save credentials, and then add more credentials.
+ When you select **Save** or **Add more**, the appliance validates the domain credentials with the domain's Active Directory instance for authentication. Validation is performed after each addition to avoid account lockouts, because during discovery the appliance iterates through the credentials to map them to the respective servers.
+
+To check validation of the domain credentials:
+
+In the configuration manager, in the credentials table, see the **Validation status** for domain credentials. Only domain credentials are validated.
+
+If validation fails, you can select the **Failed** status to see the validation error. Fix the issue, and then select **Revalidate credentials** to reattempt validation of the credentials.
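Independently of the appliance, you can also sanity-check a domain credential against Active Directory before adding it, which helps avoid failed validations and account lockouts. The following is a minimal sketch; the domain FQDN and username are placeholders.

```powershell
# Minimal sketch: validate a domain credential against Active Directory before
# adding it on the appliance. The domain FQDN and username are placeholders.
Add-Type -AssemblyName System.DirectoryServices.AccountManagement
$contextType = [System.DirectoryServices.AccountManagement.ContextType]::Domain
$context = New-Object System.DirectoryServices.AccountManagement.PrincipalContext($contextType, "contoso.local")
$isValid = $context.ValidateCredentials("svc-migrate", (Read-Host -Prompt "Password for svc-migrate"))
"Credential valid against contoso.local: $isValid"
```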
++
+### Start discovery
+
+Click on **Start discovery** to kick off server discovery from the successfully validated host(s)/cluster(s). After the discovery has been successfully initiated, you can check the discovery status against each host/cluster in the table.
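If validation of a host or cluster fails, a quick check from the appliance server is whether WinRM is reachable, since the appliance communicates with Hyper-V hosts over WinRM port 5985 (HTTP). The following is a minimal sketch; the host name is a placeholder.

```powershell
# Minimal sketch: if a Hyper-V host or cluster fails validation, check from the
# appliance server that WinRM (port 5985, HTTP) is reachable. Host name is a placeholder.
Test-NetConnection -ComputerName "hyperv-host01.contoso.local" -Port 5985 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded

# Confirm that the WinRM service on the host answers WS-Management requests.
Test-WSMan -ComputerName "hyperv-host01.contoso.local"
```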
+
+## How discovery works
+
+* It takes approximately 2 minutes per host for metadata of discovered servers to appear in the Azure portal.
+* If you have provided server credentials, software inventory (discovery of installed applications) is automatically initiated when the discovery of servers running on Hyper-V host(s)/cluster(s) is finished. Software inventory occurs once every 12 hours.
+* During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
## Verify servers in the portal
migrate How To Set Up Appliance Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-physical.md
The Azure Migrate appliance is a lightweight appliance, used by Azure Migrate: D
[Learn more](migrate-appliance.md) about the Azure Migrate appliance.
+After creating the appliance, you check that it can connect to Azure Migrate: Discovery and assessment, configure it for the first time, and register it with the project.
+
+> [!NOTE]
+> If you have already created a project, you can use the same project to register additional appliances to discover and assess more servers. [Learn more](create-manage-projects.md#find-a-project).
## Appliance deployment steps
To set up the appliance you:
1. After the successful creation of the Azure resources, a **project key** is generated. 1. Copy the key as you will need it to complete the registration of the appliance during its configuration.
- ![Selections for Generate Key](./media/tutorial-assess-physical/generate-key-physical-1.png)
+ :::image type="content" source="./media/tutorial-assess-physical/generate-key-physical-1-inline.png" alt-text="Screenshots of selections for Generate Key." lightbox="./media/tutorial-assess-physical/generate-key-physical-1-expanded.png":::
### Download the installer script
Set up the appliance for the first time.
1. Paste the **project key** copied from the portal. If you do not have the key, go to **Azure Migrate: Discovery and assessment> Discover> Manage existing appliances**, select the appliance name you provided at the time of key generation and copy the corresponding key. 1. You will need a device code to authenticate with Azure. Clicking on **Login** will open a modal with the device code as shown below.
- ![Modal showing the device code](./media/tutorial-discover-vmware/device-code.png)
+ ![Modal showing the device code.](./media/tutorial-discover-vmware/device-code.png)
1. Click on **Copy code & Login** to copy the device code and open an Azure Login prompt in a new browser tab. If it doesn't appear, make sure you've disabled the pop-up blocker in the browser. 1. On the new tab, paste the device code and sign-in by using your Azure username and password.
Now, connect from the appliance to the physical servers to be discovered, and st
- Currently, Azure Migrate doesn't support SSH private key files generated by PuTTY. - Azure Migrate supports the OpenSSH format of the SSH private key file, as shown below (a key-generation sketch also follows the **Start discovery** step later in this article):
- ![SSH private key supported format](./media/tutorial-discover-physical/key-format.png)
+ ![Screenshot of SSH private key supported format.](./media/tutorial-discover-physical/key-format.png)
1. If you want to add multiple credentials at once, click on **Add more** to save and add more credentials. Multiple credentials are supported for physical server discovery. 1. In **Step 2: Provide physical or virtual server details**, click on **Add discovery source** to specify the server **IP address/FQDN** and the friendly name for credentials to connect to the server. 1. You can either **Add single item** at a time or **Add multiple items** in one go. There is also an option to provide server details through **Import CSV**.
- ![Selections for adding discovery source](./media/tutorial-assess-physical/add-discovery-source-physical.png)
+ ![Screenshot of selections for adding discovery source.](./media/tutorial-assess-physical/add-discovery-source-physical.png)
- If you choose **Add single item**, you can choose the OS type, specify a friendly name for credentials, add the server **IP address/FQDN**, and click on **Save**. - If you choose **Add multiple items**, you can add multiple records at once by specifying server **IP address/FQDN** with the friendly name for credentials in the text box. **Verify** the added records and click on **Save**.
Now, connect from the appliance to the physical servers to be discovered, and st
- If validation fails for a server, review the error by clicking on **Validation failed** in the Status column of the table. Fix the issue, and validate again. - To remove a server, click on **Delete**. 1. You can **revalidate** the connectivity to servers anytime before starting the discovery.
-1. Click on **Start discovery**, to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
+1. Before initiating discovery, you can choose to turn off the slider so that software inventory and agentless dependency analysis aren't performed on the added servers. You can change this option at any time.
++
+### Start discovery
+
Click on **Start discovery** to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
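As noted in the credentials step earlier, the appliance accepts SSH private keys in OpenSSH format but not PuTTY .ppk files. The following is a minimal sketch for generating an OpenSSH-format key pair for Linux server discovery, assuming the OpenSSH client tools (ssh-keygen) are available; file paths and the key comment are placeholders.

```powershell
# Minimal sketch: generate an OpenSSH-format key pair for Linux server discovery
# (PuTTY .ppk files are not supported). File path and comment are placeholders;
# assumes the OpenSSH client tools (ssh-keygen) are available on the machine.
ssh-keygen -t rsa -b 4096 -f "$HOME\azure_migrate_id_rsa" -C "azure-migrate-discovery"

# Copy the public key to each Linux server for the discovery account, then add
# the private key file on the appliance along with the account's username.
Get-Content "$HOME\azure_migrate_id_rsa.pub"

# If you already have a PuTTY .ppk key, the Unix build of puttygen can convert it:
#   puttygen existing_key.ppk -O private-openssh -o azure_migrate_id_rsa
```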
+## How discovery works
-This starts discovery. It takes approximately 2 minutes per server for metadata of discovered server to appear in the Azure portal.
+* It takes approximately 2 minutes to complete the discovery of 100 servers and for their metadata to appear in the Azure portal.
+* [Software inventory](how-to-discover-applications.md) (discovery of installed applications) is automatically initiated when the discovery of servers is finished.
+* The time taken for discovery of installed applications depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal.
+* During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
## Verify servers in the portal
migrate Migrate Appliance Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-appliance-architecture.md
- Title: Azure Migrate appliance architecture
-description: Provides an overview of the Azure Migrate appliance used in server discovery, assessment, and migration.
--
-ms.
- Previously updated : 03/18/2021---
-# Azure Migrate appliance architecture
-
-This article describes the Azure Migrate appliance architecture and processes. The Azure Migrate appliance is a lightweight appliance that's deployed on premises, to discover servers for migration to Azure.
-
-## Deployment scenarios
-
-The Azure Migrate appliance is used in the following scenarios.
-
-**Scenario** | **Tool** | **Used to**
- | |
-**Discovery and assessment of servers running in VMware environment** | Azure Migrate: Discovery and assessment | Discover servers running in your VMware environment<br/><br/> Perform discovery of installed software inventory, ASP.NET web apps, SQL Server instances and databases, and agentless dependency analysis.<br/><br/> Collect server configuration and performance metadata for assessments.
-**Agentless migration of servers running in VMware environment** | Azure Migrate:Server Migration | Discover servers running in your VMware environment.<br/><br/> Replicate servers without installing any agents on them.
-**Discovery and assessment of servers running in Hyper-V environment** | Azure Migrate: Discovery and assessment | Discover servers running in your Hyper-V environment.<br/><br/> Collect server configuration and performance metadata for assessments.
-**Discovery and assessment of physical or virtualized servers on-premises** | Azure Migrate: Discovery and assessment | Discover physical or virtualized servers on-premises.<br/><br/> Collect server configuration and performance metadata for assessments.
-
-## Deployment methods
-
-The appliance can be deployed using the following methods:
--- The appliance can be deployed using a template for servers running in VMware or Hyper-V environment ([OVA template for VMware](how-to-set-up-appliance-vmware.md) or [VHD for Hyper-V](how-to-set-up-appliance-hyper-v.md)).-- If you don't want to use a template, you can deploy the appliance for VMware or Hyper-V environment using a [PowerShell installer script](deploy-appliance-script.md).-- In Azure Government, you should deploy the appliance using a PowerShell installer script. Refer to the steps of deployment [here](deploy-appliance-script-government.md).-- For physical or virtualized servers on-premises or any other cloud, you always deploy the appliance using a PowerShell installer script.Refer to the steps of deployment [here](how-to-set-up-appliance-physical.md).-- Download links are available in the tables below.-
-## Appliance services
-
-The appliance has the following
--- **Appliance configuration manager**: This is a web application, which can be configured with source details to start the discovery and assessment of servers.-- **Discovery agent**: The agent collects server configuration metadata, which can be used to create as on-premises assessments.-- **Assessment agent**: The agent collects server performance metadata, which can be used to create performance-based assessments.-- **Auto update service**: The service keeps all the agents running on the appliance up-to-date. It automatically runs once every 24 hours.-- **DRA agent**: Orchestrates server replication, and coordinates communication between replicated servers and Azure. Used only when replicating servers to Azure using agentless migration.-- **Gateway**: Sends replicated data to Azure. Used only when replicating servers to Azure using agentless migration.-- **SQL discovery and assessment agent**: sends the configuration and performance metadata of SQL Server instances and databases to Azure.-- **Web apps discovery and assessment agent**: sends the web apps configuration data to Azure.-
-> [!Note]
-> The last 4 services are only available in the appliance used for discovery and assessment of servers running in your VMware environment.
-
-## Discovery and collection process
--
-The appliance communicates with the discovery sources using the following process.
-
-**Process** | **VMware appliance** | **Hyper-V appliance** | **Physical appliance**
-|||
-**Start discovery** | The appliance communicates with the vCenter server on TCP port 443 by default. If the vCenter server listens on a different port, you can configure it in the appliance configuration manager. | The appliance communicates with the Hyper-V hosts on WinRM port 5985 (HTTP). | The appliance communicates with Windows servers over WinRM port 5985 (HTTP) with Linux servers over port 22 (TCP).
-**Gather configuration and performance metadata** | The appliance collects the metadata of servers running on vCenter Server(s) using vSphere APIs by connecting on port 443 (default port) or any other port each vCenter Server listens on. | The appliance collects the metadata of servers running on Hyper-V hosts using a Common Information Model (CIM) session with hosts on port 5985.| The appliance collects metadata from Windows servers using Common Information Model (CIM) session with servers on port 5985 and from Linux servers using SSH connectivity on port 22.
-**Send discovery data** | The appliance sends the collected data to Azure Migrate: Discovery and assessment and Azure Migrate: Server Migration over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits. | The appliance sends the collected data to Azure Migrate: Discovery and assessment over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits. | The appliance sends the collected data to Azure Migrate: Discovery and assessment over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits.
-**Data collection frequency** | Configuration metadata is collected and sent every 15 minutes. <br/><br/> Performance metadata is collected every 50 minutes to send a data point to Azure. <br/><br/> Software inventory data is sent to Azure once every 24 hours. <br/><br/> Agentless dependency data is collected every 5 minutes, aggregated on appliance and sent to Azure every 6 hours. <br/><br/> The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds. <br/><br/> The web apps configuration data is updated once every 24 hours. Performance data is not captured for web apps.| Configuration metadata is collected and sent every 30 minutes. <br/><br/> Performance metadata is collected every 30 seconds and is aggregated to send a data point to Azure every 15 minutes.| Configuration metadata is collected and sent every 3 hours. <br/><br/> Performance metadata is collected every 5 minutes to send a data point to Azure.
-**Assess and migrate** | You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool.<br/><br/>In addition, you can also start migrating servers running in your VMware environment using Azure Migrate: Server Migration tool to orchestrate agentless server replication.| You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool. | You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool.
-
-## Next steps
-
-[Review](migrate-appliance.md) the appliance support matrix.
migrate Migrate Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-appliance.md
Last updated 03/18/2021 -
+
# Azure Migrate appliance This article summarizes the prerequisites and support requirements for the Azure Migrate appliance.
The Azure Migrate appliance is used in the following scenarios.
**Scenario** | **Tool** | **Used to** | |
-**Discovery and assessment of servers running in VMware environment** | Azure Migrate: Discovery and assessment | Discover servers running in your VMware environment<br><br> Perform discovery of installed software inventory, ASP.NET web apps, SQL Server instances and databases, and agentless dependency analysis.<br><br> Collect server configuration and performance metadata for assessments.
-**Agentless migration of servers running in VMware environment** | Azure Migrate: Server Migration | Discover servers running in your VMware environment. <br><br> Replicate servers without installing any agents on them.
-**Discovery and assessment of servers running in Hyper-V environment** | Azure Migrate: Discovery and assessment | Discover servers running in your Hyper-V environment.<br><br> Collect server configuration and performance metadata for assessments.
-**Discovery and assessment of physical or virtualized servers on-premises** | Azure Migrate: Discovery and assessment | Discover physical or virtualized servers on-premises.<br><br> Collect server configuration and performance metadata for assessments.
+**Discovery and assessment of servers running in VMware environment** | Azure Migrate: Discovery and assessment | Discover servers running in your VMware environment<br/><br/> Perform discovery of installed software inventory, ASP.NET web apps, SQL Server instances and databases, and agentless dependency analysis.<br/><br/> Collect server configuration and performance metadata for assessments.
+**Agentless migration of servers running in VMware environment** | Azure Migrate: Server Migration | Discover servers running in your VMware environment. <br/><br/> Replicate servers without installing any agents on them.
+**Discovery and assessment of servers running in Hyper-V environment** | Azure Migrate: Discovery and assessment | Discover servers running in your Hyper-V environment.<br/><br/> Collect server configuration and performance metadata for assessments.<br/><br/> Perform discovery of installed software inventory and agentless dependency analysis.
+**Discovery and assessment of physical or virtualized servers on-premises** | Azure Migrate: Discovery and assessment | Discover physical or virtualized servers on-premises.<br/><br/> Collect server configuration and performance metadata for assessments.<br/><br/> Perform discovery of installed software inventory and agentless dependency analysis.
+ ## Deployment methods
The appliance can be deployed using a couple of methods:
- For physical or virtualized servers on-premises or any other cloud, you always deploy the appliance using a PowerShell installer script. Refer to the steps of deployment [here](how-to-set-up-appliance-physical.md). - Download links are available in the tables below.
+## Appliance services
+
+The appliance has the following services:
+
+- **Appliance configuration manager**: This is a web application, which can be configured with source details to start the discovery and assessment of servers.
+- **Discovery agent**: The agent collects server configuration metadata, which can be used to create as-on-premises assessments.
+- **Assessment agent**: The agent collects server performance metadata, which can be used to create performance-based assessments.
+- **Auto update service**: The service keeps all the agents running on the appliance up-to-date. It automatically runs once every 24 hours.
+- **DRA agent**: Orchestrates server replication, and coordinates communication between replicated servers and Azure. Used only when replicating servers to Azure using agentless migration.
+- **Gateway**: Sends replicated data to Azure. Used only when replicating servers to Azure using agentless migration.
+- **SQL discovery and assessment agent**: Sends the configuration and performance metadata of SQL Server instances and databases to Azure.
+- **Web apps discovery and assessment agent**: Sends the web apps configuration data to Azure.
+
+> [!Note]
+> The last 4 services are only available in the appliance used for discovery and assessment of servers running in your VMware environment.
+ ## Appliance - VMware The following table summarizes the Azure Migrate appliance requirements for VMware.
The following table summarizes the Azure Migrate appliance requirements for VMwa
**Supported deployment** | Deploy as new server running on vCenter Server using OVA template.<br><br> Deploy on an existing server running Windows Server 2016 using PowerShell installer script. **OVA template** | Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2140333)<br><br> Download size is 11.9 GB.<br><br> The downloaded appliance template comes with a Windows Server 2016 evaluation license, which is valid for 180 days.<br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance using OVA template, or you activate the operating system license of the appliance server. **OVA verification** | [Verify](tutorial-discover-vmware.md#verify-security) the OVA template downloaded from project by checking the hash values.
-**PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-vmware) on how to deploy an appliance using the PowerShell installer script.<br><br>
-**Hardware and network requirements** | The appliance should run on server with Windows Server 2016, 32-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br> The appliance requires internet access, either directly or through a proxy.<br><br> If you deploy the appliance using OVA template, you need enough resources on the vCenter Server to create a server that meets the hardware requirements.<br><br> If you run the appliance on an existing server, make sure that it is running Windows Server 2016, and meets hardware requirements.<br>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
-**VMware requirements** | If you deploy the appliance as a server on vCenter Server, it must be deployed on a vCenter Server running 5.5, 6.0, 6.5, 6.7 or 7.0 and an ESXi host running version 5.5 or later.<br><br>
+**PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-vmware) on how to deploy an appliance using the PowerShell installer script.<br/><br/>
+**Hardware and network requirements** | The appliance should run on a server with Windows Server 2016, 32-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance requires internet access, either directly or through a proxy.<br/><br/> If you deploy the appliance using the OVA template, you need enough resources on the vCenter Server to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that it is running Windows Server 2016 and meets the hardware requirements.<br/>_(Currently the deployment of the appliance is only supported on Windows Server 2016.)_
+**VMware requirements** | If you deploy the appliance as a server on vCenter Server, it must be deployed on a vCenter Server running 5.5, 6.0, 6.5, 6.7 or 7.0 and an ESXi host running version 5.5 or later.<br/><br/>
**VDDK (agentless migration)** | To use the appliance for agentless migration of servers, the VMware vSphere VDDK must be installed on the appliance server. ## Appliance - Hyper-V
The following table summarizes the Azure Migrate appliance requirements for VMwa
**Supported deployment** | Deploy as server running on a Hyper-V host using a VHD template.<br><br> Deploy on an existing server running Windows Server 2016 using PowerShell installer script. **VHD template** | Zip file that includes a VHD. Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2140422).<br><br> Download size is 8.91 GB.<br><br> The downloaded appliance template comes with a Windows Server 2016 evaluation license, which is valid for 180 days. If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance server. **VHD verification** | [Verify](tutorial-discover-hyper-v.md#verify-security) the VHD template downloaded from project by checking the hash values.
-**PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-hyper-v) on how to deploy an appliance using the PowerShell installer script.<br>
-**Hardware and network requirements** | The appliance should run on server with Windows Server 2016, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br><br> If you run the appliance as a server running on a Hyper-V host, you need enough resources on the host to create a server that meets the hardware requirements.<br><br> If you run the appliance on an existing server, make sure that it is running Windows Server 2016, and meets hardware requirements.<br>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
-**Hyper-V requirements** | If you deploy the appliance with the VHD template, the appliance provided by Azure Migrate is Hyper-V VM version 5.0.<br><br> The Hyper-V host must be running Windows Server 2012 R2 or later.
+**PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-hyper-v) on how to deploy an appliance using the PowerShell installer script.<br/>
+**Hardware and network requirements** | The appliance should run on a server with Windows Server 2016, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br/><br/> If you run the appliance as a server running on a Hyper-V host, you need enough resources on the host to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that it is running Windows Server 2016 and meets the hardware requirements.<br/>_(Currently the deployment of the appliance is only supported on Windows Server 2016.)_
+**Hyper-V requirements** | If you deploy the appliance with the VHD template, the appliance provided by Azure Migrate is Hyper-V VM version 5.0.<br/><br/> The Hyper-V host must be running Windows Server 2012 R2 or later.
+ ## Appliance - Physical
The following table summarizes the Azure Migrate appliance requirements for VMwa
**Supported deployment** | Deploy on an existing server running Windows Server 2016 using PowerShell installer script. **PowerShell script** | Download the script (AzureMigrateInstaller.ps1) in a zip file from the project or from [here](https://go.microsoft.com/fwlink/?linkid=2140334). [Learn more](tutorial-discover-physical.md).<br><br> Download size is 85.8 MB. **Script verification** | [Verify](tutorial-discover-physical.md#verify-security) the PowerShell installer script downloaded from project by checking the hash values.
-**Hardware and network requirements** | The appliance should run on server with Windows Server 2016, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage.<br> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br><br> If you run the appliance on an existing server, make sure that it is running Windows Server 2016, and meets hardware requirements.<br>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
+**Hardware and network requirements** | The appliance should run on a server with Windows Server 2016, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage.<br/> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br/><br/> If you run the appliance on an existing server, make sure that it is running Windows Server 2016 and meets the hardware requirements.<br/>_(Currently the deployment of the appliance is only supported on Windows Server 2016.)_
+ ## URL access
aka.ms/* | Allow access to these links; used to download and install the latest
download.microsoft.com/download | Allow downloads from Microsoft download center. *.servicebus.chinacloudapi.cn | Communication between the appliance and the Azure Migrate service. *.discoverysrv.cn2.windowsazure.cn *.cn2.prod.migration.windowsazure.cn | Connect to Azure Migrate service URLs.
-*.cn2.hypervrecoverymanager.windowsazure.cn | **Used for VMware agentless migration.** Connect to Azure Migrate service URLs.
+*.cn2.hypervrecoverymanager.windowsazure.cn | **Used for VMware agentless migration.** <br><br> Connect to Azure Migrate service URLs.
*.blob.core.chinacloudapi.cn | **Used for VMware agentless migration.**<br><br>Upload data to storage for migration. *.applicationinsights.azure.cn | Upload appliance logs used for internal monitoring.
-## Collected data - VMware
-
-The appliance collects configuration metadata, performance metadata, and server dependencies data (if agentless [dependency analysis](concepts-dependency-visualization.md) is used).
-
-### Metadata
-
-Metadata discovered by the Azure Migrate appliance helps you to figure out whether servers are ready for migration to Azure, right-size servers, plans costs, and analyze application dependencies. Microsoft doesn't use this data in any license compliance audit.
-
-Here's the full list of server metadata that the appliance collects and sends to Azure.
-
-**DATA** | **COUNTER**
- |
-**Server details** |
-Server ID | vm.Config.InstanceUuid
-Server name | vm.Config.Name
-vCenter Server ID | VMwareClient.Instance.Uuid
-Server description | vm.Summary.Config.Annotation
-License product name | vm.Client.ServiceContent.About.LicenseProductName
-Operating system type | vm.SummaryConfig.GuestFullName
-Boot type | vm.Config.Firmware
-Number of cores | vm.Config.Hardware.NumCPU
-Memory (MB) | vm.Config.Hardware.MemoryMB
-Number of disks | vm.Config.Hardware.Device.ToList().FindAll(x => is VirtualDisk).count
-Disk size list | vm.Config.Hardware.Device.ToList().FindAll(x => is VirtualDisk)
-Network adapters list | vm.Config.Hardware.Device.ToList().FindAll(x => is VirtualEthernet).count
-CPU utilization | cpu.usage.average
-Memory utilization |mem.usage.average
-**Per disk details** |
-Disk key value | disk.Key
-Dikunit number | disk.UnitNumber
-Disk controller key value | disk.ControllerKey.Value
-Gigabytes provisioned | virtualDisk.DeviceInfo.Summary
-Disk name | Value generated using disk.UnitNumber, disk.Key, disk.ControllerKey.VAlue
-Read operations per second | virtualDisk.numberReadAveraged.average
-Write operations per second | virtualDisk.numberWriteAveraged.average
-Read throughput (MB per second) | virtualDisk.read.average
-Write throughput (MB per second) | virtualDisk.write.average
-**Per NIC details** |
-Network adapter name | nic.Key
-MAC address | ((VirtualEthernetCard)nic).MacAddress
-IPv4 addresses | vm.Guest.Net
-IPv6 addresses | vm.Guest.Net
-Read throughput (MB per second) | net.received.average
-Write throughput (MB per second) | net.transmitted.average
-**Inventory path details** |
-Name | container.GetType().Name
-Type of child object | container.ChildType
-Reference details | container.MoRef
-Parent details | Container.Parent
-Folder details per server | ((Folder)container).ChildEntity.Type
-Datacenter details per server | ((Datacenter)container).VmFolder
-Datacenter details per host folder | ((Datacenter)container).HostFolder
-Cluster details per host | ((ClusterComputeResource)container).Host
-Host details per server | ((HostSystem)container).VM
-
-### Performance data
-
-Here's the performance data that an appliance collects for a server running on VMware and sends to Azure.
-
-**Data** | **Counter** | **Assessment impact**
- | |
-CPU utilization | cpu.usage.average | Recommended server size/cost
-Memory utilization | mem.usage.average | Recommended server size/cost
-Disk read throughput (MB per second) | virtualDisk.read.average | Calculation for disk size, storage cost, server size
-Disk writes throughput (MB per second) | virtualDisk.write.average | Calculation for disk size, storage cost, server size
-Disk read operations per second | virtualDisk.numberReadAveraged.average | Calculation for disk size, storage cost, server size
-Disk writes operations per second | virtualDisk.numberWriteAveraged.average | Calculation for disk size, storage cost, server size
-NIC read throughput (MB per second) | net.received.average | Calculation for server size
-NIC writes throughput (MB per second) | net.transmitted.average |Calculation for server size
-
-### Installed software inventory
-
-The appliance collects data about installed software inventory on servers.
-
-#### Windows server software inventory data
-
-Here's the software inventory data that the appliance collects from each Windows server discovered in your VMware environment.
-
-**Data** | **Registry Location** | **Key**
- | |
-Application Name | HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* <br> HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | DisplayName
-Version | HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* <br> HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | DisplayVersion
-Provider | HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* <br> HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | Publisher
-
-#### Windows server features data
-
-Here's the features data that the appliance collects from each Windows server discovered in your VMware environment.
-
-**Data** | **PowerShell cmdlet** | **Property**
- | |
-Name | Get-WindowsFeature | Name
-Feature Type | Get-WindowsFeature | FeatureType
-Parent | Get-WindowsFeature | Parent
-
-#### SQL Server metadata
-
-Here's the SQL Server data that the appliance collects from each Windows server discovered in your VMware environment.
-
-**Data** | **Registry Location** | **Key**
- | |
-Name | HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL | installedInstance
-Edition | HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\\\<InstanceName>\Setup | Edition
-Service Pack | HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\\\<InstanceName>\Setup | SP
-Version | HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\\\<InstanceName>\Setup | Version
-
-#### ASP.NET web apps data
-
-Here's the web apps configuration data that the appliance collects from each Windows server discovered in your VMware environment.
-
-**Entity** | **Data**
- |
-Web apps | Application Name <br>Configuration Path <br>Frontend Bindings <br>Enabled Frameworks <br>Hosting Web Server<br>Sub-Applications and virtual applications <br>Application Pool name <br>Runtime version <br>Managed pipeline mode
-Web server | Server Name <br>Server Type (currently only IIS) <br>Configuration Location <br>Version <br>FQDN <br>Credentials used for discovery <br>List of Applications
-
-#### Windows server operating system data
-
-Here's the operating system data that the appliance collects from each Windows server discovered in your VMware environment.
-
-**Data** | **WMI class** | **WMI Class Property**
- | |
-Name | Win32_operatingsystem | Caption
-Version | Win32_operatingsystem | Version
-Architecture | Win32_operatingsystem | OSArchitecture
-
-#### Linux server software inventory data
-
-Here's the software inventory data that the appliance collects from each Linux server discovered in your VMware environment. Based on the operating system of the server, one or more of the commands are run.
-
-**Data** | **Commands**
- |
-Name | rpm, dpkg-query, snap
-Version | rpm, dpkg-query, snap
-Provider | rpm, dpkg-query, snap
-
-#### Linux server operating system data
-
-Here's the operating system data that the appliance collects from each Linux server discovered in your VMware environment.
-
-**Data** | **Commands**
- |
-Name <br> version | Gathered from one or more of the following files:<br> <br>/etc/os-release <br> /usr/lib/os-release <br> /etc/enterprise-release <br> /etc/redhat-release <br> /etc/oracle-release <br> /etc/SuSE-release <br> /etc/lsb-release <br> /etc/debian_version
-Architecture | uname
-
-### SQL Server instances and databases data
-
-Appliance collects data on SQL Server instances and databases.
-
-#### SQL database metadata
-
-**Database Metadata** | **Views/ SQL Server properties**
- |
-Unique identifier of the database | sys.databases
-Server defined database ID | sys.databases
-Name of the database | sys.databases
-Compatibility level of database | sys.databases
-Collation name of database | sys.databases
-State of the database | sys.databases
-Size of the database (in MBs) | sys.master_files
-Drive letter of location containing data files | SERVERPROPERTY, and Software\Microsoft\MSSQLServer\MSSQLServer
-List of database files | sys.databases, sys.master_files
-Service broker is enabled or not | sys.databases
-Database is enabled for change data capture or not | sys.databases
-
-#### SQL Server metadata
-
-**Server Metadata** | **Views/ SQL server properties**
- |
-Server name |SERVERPROPERTY
-FQDN | Connection string derived from discovery of installed applications
-Install ID | sys.dm_server_registry
-Server version | SERVERPROPERTY
-Server edition | SERVERPROPERTY
-Server host platform (Windows/Linux) | SERVERPROPERTY
-Product level of the server (RTM SP CTP) | SERVERPROPERTY
-Default Backup path | SERVERPROPERTY
-Default path of the data files | SERVERPROPERTY, and Software\Microsoft\MSSQLServer\MSSQLServer
-Default path of the log files | SERVERPROPERTY, and Software\Microsoft\MSSQLServer\MSSQLServer
-No. of cores on the server | sys.dm_os_schedulers, sys.dm_os_sys_info
-Server collation name | SERVERPROPERTY
-No. of cores on the server with VISIBLE ONLINE status | sys.dm_os_schedulers
-Unique Server ID | sys.dm_server_registry
-HA enabled or not | SERVERPROPERTY
-Buffer Pool Extension enabled or not | sys.dm_os_buffer_pool_extension_configuration
-Failover cluster configured or not | SERVERPROPERTY
-Server using Windows Authentication mode only | SERVERPROPERTY
-Server installs PolyBase | SERVERPROPERTY
-No. of logical CPUs on the system | sys.dm_server_registry, sys.dm_os_sys_info
-Ratio of the no of logical or physical cores that are exposed by one physical processor package | sys.dm_os_schedulers, sys.dm_os_sys_info
-No of physical CPUs on the system | sys.dm_os_schedulers, sys.dm_os_sys_info
-Date and time server last started | sys.dm_server_registry
-Max server memory use (in MBs) | sys.dm_os_process_memory
-Total no. of users across all databases | sys.databases, sys.logins
-Total size of all user databases | sys.databases
-Size of temp database | sys.master_files, sys.configurations, sys.dm_os_sys_info
-No. of logins | sys.logins
-List of linked servers | sys.servers
-List of agent job | [msdb].[dbo].[sysjobs], [sys].[syslogins], [msdb].[dbo].[syscategories]
-
-### Performance metadata
-
-**Performance** | **Views/ SQL server properties** | **Assessment Impact**
- | |
-SQL Server CPU utilization| sys.dm_os_ring_buffers| Recommended SKU size (CPU dimension)
-SQL logical CPU count| sys.dm_os_sys_info| Recommended SKU size (CPU dimension)
-SQL physical memory in use| sys.dm_os_process_memory| Unused
-SQL memory utilization percentage| sys.dm_os_process_memory | Unused
-Database CPU utilization| sys.dm_exec_query_stats, sys.dm_exec_plan_attributes| Recommended SKU size (CPU dimension)
-Database memory in use (buffer pool)| sys.dm_os_buffer_descriptors| Recommended SKU size (Memory dimension)
-File read/write IO| sys.dm_io_virtual_file_stats, sys.master_files| Recommended SKU size (IO dimension)
-File num of reads/writes| sys.dm_io_virtual_file_stats, sys.master_files| Recommended SKU size (Throughput dimension)
-File IO stall read/write (ms)| sys.dm_io_virtual_file_stats, sys.master_files| Recommended SKU size (IO latency dimension)
-File size| sys.master_files| Recommended SKU size (Storage dimension)
--
-### Application dependency data
-
-Agentless dependency analysis collects the connection and process data.
-
-#### Windows server dependencies data
-
-Here's the connection data that the appliance collects from each Windows server, enabled for agentless dependency analysis.
-
-**Data** | **Commands**
- |
-Local port | netstat
-Local IP address | netstat
-Remote port | netstat
-Remote IP address | netstat
-TCP connection state | netstat
-Process ID | netstat
-Number of active connections | netstat
-
-Here's the connection data that the appliance collects from each Windows server, enabled for agentless dependency analysis.
-
-**Data** | **WMI class** | **WMI class property**
- | |
-Process name | Win32_Process | ExecutablePath
-Process arguments | Win32_Process | CommandLine
-Application name | Win32_Process | VersionInfo.ProductName parameter of ExecutablePath property
-
-#### Linux server dependencies data
-
-Here's the connection data that the appliance collects from each Linux server, enabled for agentless dependency analysis.
-
-**Data** | **Commands**
- |
-Local port | netstat
-Local IP address | netstat
-Remote port | netstat
-Remote IP address | netstat
-TCP connection state | netstat
-Number of active connections | netstat
-Process ID | netstat
-Process name | ps
-Process arguments | ps
-Application name | dpkg or rpm
+### Azure China URLs
-## Collected data - Hyper-V
-The appliance collects configuration and performance metadata from servers running in Hyper-V environment.
-
-### Metadata
-Metadata discovered by the Azure Migrate appliance helps you to figure out whether servers are ready for migration to Azure, right-size servers, and plans costs. Microsoft doesn't use this data in any license compliance audit.
-
-Here's the full list of server metadata that the appliance collects and sends to Azure.
-
-**Data** | **WMI class** | **WMI class property**
- | |
-**Server details** |
-Serial number of BIOS | Msvm_BIOSElement | BIOSSerialNumber
-Server type (Gen 1 or 2) | Msvm_VirtualSystemSettingData | VirtualSystemSubType
-Server display name | Msvm_VirtualSystemSettingData | ElementName
-Server version | Msvm_ProcessorSettingData | VirtualQuantity
-Memory (bytes) | Msvm_MemorySettingData | VirtualQuantity
-Maximum memory that can be consumed by server | Msvm_MemorySettingData | Limit
-Dynamic memory enabled | Msvm_MemorySettingData | DynamicMemoryEnabled
-Operating system name/version/FQDN | Msvm_KvpExchangeComponent | GuestIntrinsicExchangeItems Name Data
-Server power status | Msvm_ComputerSystem | EnabledState
-**Per disk details** |
-Disk identifier | Msvm_VirtualHardDiskSettingData | VirtualDiskId
-Virtual hard disk type | Msvm_VirtualHardDiskSettingData | Type
-Virtual hard disk size | Msvm_VirtualHardDiskSettingData | MaxInternalSize
-Virtual hard disk parent | Msvm_VirtualHardDiskSettingData | ParentPath
-**Per NIC details** |
-IP addresses (synthetic NICs) | Msvm_GuestNetworkAdapterConfiguration | IPAddresses
-DHCP enabled (synthetic NICs) | Msvm_GuestNetworkAdapterConfiguration | DHCPEnabled
-NIC ID (synthetic NICs) | Msvm_SyntheticEthernetPortSettingData | InstanceID
-NIC MAC address (synthetic NICs) | Msvm_SyntheticEthernetPortSettingData | Address
-NIC ID (legacy NICs) | MsvmEmulatedEthernetPortSetting Data | InstanceID
-NIC MAC ID (legacy NICs) | MsvmEmulatedEthernetPortSetting Data | Address
-
-### Performance data
-
-Here's the server performance data that the appliance collects and sends to Azure.
-
-**Performance counter class** | **Counter** | **Assessment impact**
- | |
-Hyper-V Hypervisor Virtual Processor | % Guest Run Time | Recommended server size/cost
-Hyper-V Dynamic Memory Server | Current Pressure (%)<br> Guest Visible Physical Memory (MB) | Recommended server size/cost
-Hyper-V Virtual Storage Device | Read Bytes/Second | Calculation for disk size, storage cost, server size
-Hyper-V Virtual Storage Device | Write Bytes/Second | Calculation for disk size, storage cost, server size
-Hyper-V Virtual Network Adapter | Bytes Received/Second | Calculation for server size
-Hyper-V Virtual Network Adapter | Bytes Sent/Second | Calculation for server size
--- CPU utilization is the sum of all usage, for all virtual processors attached to a server.-- Memory utilization is (Current Pressure * Guest Visible Physical Memory) / 100.-- Disk and network utilization values are collected from the listed Hyper-V performance counters.-
-## Collected data - Physical
+**URL** | **Details**
+ | |
+*.portal.azure.cn | Navigate to the Azure portal.
+graph.chinacloudapi.cn | Sign in to your Azure subscription.
+login.microsoftonline.cn | Used for access control and identity management by Azure Active Directory.
+management.chinacloudapi.cn | Used for resource deployments and management operations.
+*.services.visualstudio.com | Upload appliance logs used for internal monitoring.
+*.vault.chinacloudapi.cn | Manage secrets in the Azure Key Vault.
+aka.ms/* | Allow access to these links; used to download and install the latest updates for appliance services.
+download.microsoft.com/download | Allow downloads from Microsoft download center.
+*.servicebus.chinacloudapi.cn | Communication between the appliance and the Azure Migrate service.
+*.discoverysrv.cn2.windowsazure.cn <br/> *.cn2.prod.migration.windowsazure.cn | Connect to Azure Migrate service URLs.
+*.cn2.hypervrecoverymanager.windowsazure.cn | **Used for VMware agentless migration.**<br/><br/> Connect to Azure Migrate service URLs.
+*.blob.core.chinacloudapi.cn | **Used for VMware agentless migration.**<br/><br/>Upload data to storage for migration.
+*.applicationinsights.azure.cn | Upload appliance logs used for internal monitoring.
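To spot-check outbound connectivity from the appliance server to the endpoints above, you can test a few of the non-wildcard URLs over port 443. The following is a minimal sketch; note that it checks direct TCP reachability only, so results may differ if the appliance connects through a proxy.

```powershell
# Minimal sketch: spot-check outbound HTTPS connectivity from the appliance
# server to a few of the non-wildcard endpoints listed in the table above.
$endpoints = @(
    "login.microsoftonline.cn",
    "management.chinacloudapi.cn",
    "graph.chinacloudapi.cn",
    "download.microsoft.com"
)
foreach ($endpoint in $endpoints) {
    $result = Test-NetConnection -ComputerName $endpoint -Port 443 -WarningAction SilentlyContinue
    "{0,-35} reachable on 443: {1}" -f $endpoint, $result.TcpTestSucceeded
}
```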
-The appliance collects configuration and performance metadata from physical or virtual servers running on-premises.
-### Metadata
+## Discovery and collection process
-Metadata discovered by the Azure Migrate appliance helps you to figure out whether servers are ready for migration to Azure, right-size servers, and plans costs. Microsoft doesn't use this data in any license compliance audit.
-### Windows server metadata
+The appliance communicates with the discovery sources using the following process.
-Here's the full list of Windows server metadata that the appliance collects and sends to Azure.
+**Process** | **VMware appliance** | **Hyper-V appliance** | **Physical appliance**
+|||
+**Start discovery** | The appliance communicates with the vCenter server on TCP port 443 by default. If the vCenter server listens on a different port, you can configure it in the appliance configuration manager. | The appliance communicates with the Hyper-V hosts on WinRM port 5985 (HTTP). | The appliance communicates with Windows servers over WinRM port 5985 (HTTP) and with Linux servers over port 22 (TCP).
+**Gather configuration and performance metadata** | The appliance collects the metadata of servers running on vCenter Server(s) using vSphere APIs by connecting on port 443 (default port) or any other port each vCenter Server listens on. | The appliance collects the metadata of servers running on Hyper-V hosts using a Common Information Model (CIM) session with hosts on port 5985.| The appliance collects metadata from Windows servers using Common Information Model (CIM) session with servers on port 5985 and from Linux servers using SSH connectivity on port 22.
+**Send discovery data** | The appliance sends the collected data to Azure Migrate: Discovery and assessment and Azure Migrate: Server Migration over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits. | The appliance sends the collected data to Azure Migrate: Discovery and assessment over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits. | The appliance sends the collected data to Azure Migrate: Discovery and assessment over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits.
+**Data collection frequency** | Configuration metadata is collected and sent every 15 minutes. <br/><br/> Performance metadata is collected every 50 minutes to send a data point to Azure. <br/><br/> Software inventory data is sent to Azure once every 24 hours. <br/><br/> Agentless dependency data is collected every 5 minutes, aggregated on appliance and sent to Azure every 6 hours. <br/><br/> The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds. <br/><br/> The web apps configuration data is updated once every 24 hours. Performance data is not captured for web apps.| Configuration metadata is collected and sent every 30 minutes. <br/><br/> Performance metadata is collected every 30 seconds and is aggregated to send a data point to Azure every 15 minutes.<br/><br/> Software inventory data is sent to Azure once every 24 hours. <br/><br/> Agentless dependency data is collected every 5 minutes, aggregated on appliance and sent to Azure every 6 hours.| Configuration metadata is collected and sent every 3 hours. <br/><br/> Performance metadata is collected every 5 minutes to send a data point to Azure.<br/><br/> Software inventory data is sent to Azure once every 24 hours. <br/><br/> Agentless dependency data is collected every 5 minutes, aggregated on appliance and sent to Azure every 6 hours.
+**Assess and migrate** | You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool.<br/><br/>You can also start migrating servers running in your VMware environment using Azure Migrate: Server Migration tool to orchestrate agentless server replication.| You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool. | You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool.
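
As a quick way to confirm that the appliance can reach each discovery source on the ports listed above, a hedged sketch like the following can be run from the appliance server; the host names are placeholders:

````
# Sketch: verify connectivity from the appliance to the discovery sources (placeholder host names)
Test-NetConnection -ComputerName "<vcenter-server>" -Port 443    # VMware: vSphere APIs
Test-NetConnection -ComputerName "<hyperv-host>" -Port 5985      # Hyper-V: WinRM (HTTP)
Test-NetConnection -ComputerName "<linux-server>" -Port 22       # Physical/Linux: SSH
````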
-**Data** | **WMI class** | **WMI class property**
- | |
-FQDN | Win32_ComputerSystem | Domain, Name, PartOfDomain
-Processor core count | Win32_Processor | NumberOfCores
-Memory allocated | Win32_ComputerSystem | TotalPhysicalMemory
-BIOS serial number | Win32_ComputerSystemProduct | IdentifyingNumber
-BIOS GUID | Win32_ComputerSystemProduct | UUID
-Boot type | Win32_DiskPartition | Check for partition with Type = **GPT:System** for EFI/BIOS
-OS name | Win32_OperatingSystem | Caption
-OS version |Win32_OperatingSystem | Version
-OS architecture | Win32_OperatingSystem | OSArchitecture
-Disk count | Win32_DiskDrive | Model, Size, DeviceID, MediaType, Name
-Disk size | Win32_DiskDrive | Size
-NIC list | Win32_NetworkAdapterConfiguration | Description, Index
-NIC IP address | Win32_NetworkAdapterConfiguration | IPAddress
-NIC MAC address | Win32_NetworkAdapterConfiguration | MACAddress
-
-### Linux server metadata
-
-Here's the full list of Linux server metadata that the appliance collects and sends to Azure.
-
-**Data** | **Commands**
- |
-FQDN | cat /proc/sys/kernel/hostname, hostname -f
-Processor core count | cat /proc/cpuinfo \| awk '/^processor/{print $3}' \| wc -l
-Memory allocated | cat /proc/meminfo \| grep MemTotal \| awk '{printf "%.0f", $2/1024}'
-BIOS serial number | lshw \| grep "serial:" \| head -n1 \| awk '{print $2}' <br> /usr/sbin/dmidecode -t 1 \| grep 'Serial' \| awk '{ $1="" ; $2=""; print}'
-BIOS GUID | cat /sys/class/dmi/id/product_uuid
-Boot type | [ -d /sys/firmware/efi ] && echo EFI \|\| echo BIOS
-OS name/version | We access these files for the OS version and name:<br><br> /etc/os-release<br> /usr/lib/os-release <br> /etc/enterprise-release <br> /etc/redhat-release<br> /etc/oracle-release<br> /etc/SuSE-release<br> /etc/lsb-release <br> /etc/debian_version
-OS architecture | uname -m
-Disk count | fdisk -l \| egrep 'Disk.*bytes' \| awk '{print $2}' \| cut -f1 -d ':'
-Boot disk | df /boot \| sed -n 2p \| awk '{print $1}'
-Disk size | fdisk -l \| egrep 'Disk.*bytes' \| egrep $disk: \| awk '{print $5}'
-NIC list | ip -o -4 addr show \| awk '{print $2}'
-NIC IP address | ip addr show $nic \| grep inet \| awk '{print $2}' \| cut -f1 -d "/"
-NIC MAC address | ip addr show $nic \| grep ether \| awk '{print $2}'
-
-### Windows performance data
-
-Here's the Windows server performance data that the appliance collects and sends to Azure.
-
-**Data** | **WMI class** | **WMI class property**
- | |
-CPU usage | Win32_PerfFormattedData_PerfOS_Processor | PercentIdleTime
-Memory usage | Win32_PerfFormattedData_PerfOS_Memory | AvailableMBytes
-NIC count | Win32_PerfFormattedData_Tcpip_NetworkInterface | Get the network device count.
-Data received per NIC | Win32_PerfFormattedData_Tcpip_NetworkInterface | BytesReceivedPerSec
-Data transmitted per NIC | Win32_PerfFormattedData_Tcpip_NetworkInterface | BytesSentPersec
-Disk count | Win32_PerfFormattedData_PerfDisk_PhysicalDisk | Count of disks
-Disk details | Win32_PerfFormattedData_PerfDisk_PhysicalDisk | DiskWritesPerSec, DiskWriteBytesPerSec, DiskReadsPerSec, DiskReadBytesPerSec.
-
-### Linux performance data
-
-Here's the Linux server performance data that the appliance collects and sends to Azure.
-
-| **Data** | **Commands** |
-| | |
-| CPU usage | cat /proc/stat/ \| grep 'cpu' /proc/stat |
-| Memory usage | free \| grep Mem \| awk '{print $3/$2 * 100.0}' |
-| NIC count | lshw -class network \| grep eth[0-60] \| wc -l |
-| Data received per NIC | cat /sys/class/net/eth$nic/statistics/rx_bytes |
-| Data transmitted per NIC | cat /sys/class/net/eth$nic/statistics/tx_bytes |
-| Disk count | fdisk -l \| egrep 'Disk.\*bytes' \| awk '{print $2}' \| cut -f1 -d ':' |
-| Disk details | cat /proc/diskstats |
## Appliance upgrades
The appliance is upgraded as the Azure Migrate services running on the appliance
2. Navigate to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance**. 3. To turn off auto-update, create an **AutoUpdate** registry key with a DWORD value of 0, as shown in the sketch after the screenshot below.
- ![Set registry key](./media/migrate-appliance/registry-key.png)
+ ![Screenshot of process to set the registry key.](./media/migrate-appliance/registry-key.png)
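
A scripted equivalent of the registry step above might look like the following sketch, assuming an elevated PowerShell prompt on the appliance server:

````
# Sketch: turn off appliance auto-update by creating the AutoUpdate DWORD value set to 0
$path = "HKLM:\SOFTWARE\Microsoft\AzureAppliance"
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name "AutoUpdate" -Value 0 -PropertyType DWord -Force | Out-Null
````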
### Turn on auto-update
To turn on from Appliance Configuration Manager, after discovery is complete:
1. On the appliance configuration manager, go to **Set up prerequisites** panel 2. In the latest updates check, click on **View appliance services** and click on the link to turn on auto-update.
- ![Turn on auto updates](./media/migrate-appliance/autoupdate-off.png)
+ ![Image of turn on auto updates screen.](./media/migrate-appliance/autoupdate-off.png)
### Check the appliance services version
To check in the Appliance configuration
1. On the appliance configuration manager, go to **Set up prerequisites** panel 2. In the latest updates check, click on **View appliance services**.
- ![Check version](./media/migrate-appliance/versions.png)
+ ![Screenshot of screen to check the version.](./media/migrate-appliance/versions.png)
To check in the Control Panel: 1. On the appliance, click **Start** > **Control Panel** > **Programs and Features** 2. Check the appliance services versions in the list.
- ![Check version in Control Panel](./media/migrate-appliance/programs-features.png)
+ ![Screenshot of process to check version in Control Panel.](./media/migrate-appliance/programs-features.png)
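
If you prefer a command-line check over Control Panel, a rough sketch is shown below; the display-name filter is an assumption and may need adjusting for your appliance:

````
# Sketch: list installed appliance components and their versions from the Uninstall registry keys
# (the display-name filter is an assumption; adjust it as needed)
$paths = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*",
         "HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*"
Get-ItemProperty $paths -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName -like "*Azure*" } |
    Select-Object DisplayName, DisplayVersion
````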
### Manually update an older version
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
Title: Support for Hyper-V assessment in Azure Migrate description: Learn about support for Hyper-V assessment with Azure Migrate Discovery and assessment--++ ms. Last updated 03/18/2021
The following table summarizes port requirements for assessment.
**Device** | **Connection** | **Appliance** | Inbound connections on TCP port 3389 to allow remote desktop connections to the appliance.<br/><br/> Inbound connections on port 44368 to remotely access the appliance management app using the URL: ``` https://<appliance-ip-or-name>:44368 ```<br/><br/> Outbound connections on ports 443 (HTTPS), to send discovery and performance metadata to Azure Migrate.
-**Hyper-V host/cluster** | Inbound connection on WinRM port 5985 (HTTP) or 5986 (HTTPS) to pull metadata and performance data for servers on Hyper-V, using a Common Information Model (CIM) session.
+**Hyper-V host/cluster** | Inbound connection on WinRM port 5985 (HTTP) to pull metadata and performance data for servers on Hyper-V, using a Common Information Model (CIM) session.
+**Servers** | For Windows servers, the appliance needs access on port 5985 (HTTP), and for Linux servers, it needs access on port 22 (TCP) to perform software inventory and agentless dependency analysis.
+
+## Software inventory requirements
+
+In addition to discovering servers, Azure Migrate: Discovery and assessment can perform software inventory on servers. Software inventory provides the list of applications, roles and features running on Windows and Linux servers, discovered using Azure Migrate. It helps you to identify and plan a migration path tailored for your on-premises workloads.
+
+Support | Details
+ |
+**Supported servers** | You can perform software inventory on up to 5,000 servers running across Hyper-V host(s)/cluster(s) added to each Azure Migrate appliance.
+**Operating systems** | All Windows and Linux versions with [Hyper-V integration services](/virtualization/hyper-v-on-windows/about/supported-guest-os.md) enabled.
+**Server requirements** | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.<br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers to pull the application data: list, tail, awk, grep, locate, head, sed, ps, print, sort, uniq. Based on OS type and the type of package manager being used, here are some additional commands: rpm/snap/dpkg, yum/apt-cache, mssql-server.
+**Server access** | You can add multiple domain and non-domain (Windows/Linux) credentials in the appliance configuration manager for software inventory.<br /><br /> You must have a guest user account for Windows servers and a standard user account (non-`sudo` access) for all Linux servers.
+**Port access** | For Windows servers, the appliance needs access on port 5985 (HTTP), and for Linux servers, it needs access on port 22 (TCP).
+**Discovery** | Software inventory is performed by directly connecting to the servers using the server credentials added on the appliance. <br/><br/> The appliance gathers the information about the software inventory from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> Software inventory is agentless. No agent is installed on the servers.
+
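The server requirements above call for PowerShell remoting on Windows servers; a minimal, hedged check run directly on a target Windows server (elevated prompt assumed) could be:

````
# Sketch: run on the target Windows server to confirm WinRM/PowerShell remoting is available
Enable-PSRemoting -Force     # configures WinRM if it is not already enabled
Test-WSMan                   # succeeds if the local WinRM service is responding
````
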
+## Dependency analysis requirements (agentless)
+
+[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers, which can be easily visualized with a map view in the Azure Migrate project and used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
+
+Support | Details
+ |
+**Supported servers** | You can enable agentless dependency analysis on up to 1000 servers (across multiple Hyper-V hosts/clusters), discovered per appliance.
+**Operating systems** | All Windows and Linux versions with [Hyper-V integration services](/virtualization/hyper-v-on-windows/about/supported-guest-os.md) enabled.
+**Server requirements** | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers: touch, chmod, cat, ps, grep, echo, sha256sum, awk, netstat, ls, sudo, dpkg, rpm, sed, getcap, which, date.
+**Windows server access** | A user account (local or domain) with administrator permissions on servers.
+**Linux server access** | A root user account, or an account that has these permissions on /bin/netstat and /bin/ls files: <br />CAP_DAC_READ_SEARCH<br /> CAP_SYS_PTRACE<br /><br /> Set these capabilities by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
+**Port access** | For Windows servers, the appliance needs access on port 5985 (HTTP), and for Linux servers, it needs access on port 22 (TCP).
+**Discovery method** | Agentless dependency analysis is performed by directly connecting to the servers using the server credentials added on the appliance. <br/><br/> The appliance gathers the dependency information from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> No agent is installed on the servers to pull dependency data.
## Agent-based dependency analysis requirements
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md
To assess physical servers, you create a project, and add the Azure Migrate: Dis
**Type of servers:** Bare metal servers, virtualized servers running on-premises or other clouds like AWS, GCP, Xen etc. >[!Note]
-> Currently, Azure Migrate does not support the discovery of para-virtualized servers.
+> Currently, Azure Migrate does not support the discovery of para-virtualized servers.
**Operating system:** All Windows and Linux operating systems can be assessed for migration.
Set up an account that the appliance can use to access the physical servers.
**Windows servers** -- For Windows servers, use a domain account for domain-joined servers, and a local account for servers that are not domain-joined. -- The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
+For Windows servers, use a domain account for domain-joined servers, and a local account for servers that are not domain-joined. The user account can be created in one of two ways:
+
+### Option 1
+
+- Create an account that has administrator privileges on the servers. This account can be used to pull configuration and performance data through a CIM connection, perform software inventory (discovery of installed applications), and enable agentless dependency analysis using PowerShell remoting.
+
+> [!Note]
+> If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Windows servers, it is recommended to use Option 1.
+
+### Option 2
+- The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
- If the Remote Management Users group isn't present, then add the user account to the group: **WinRMRemoteWMIUsers_**.-- The account needs these permissions for the appliance to create a CIM connection with the server and pull the required configuration and performance metadata from the WMI classes listed [here.](migrate-appliance.md#collected-dataphysical)-- In some cases, adding the account to these groups may not return the required data from WMI classes as the account might be filtered by [UAC](/windows/win32/wmisdk/user-account-control-and-wmi). To overcome the UAC filtering, the user account needs to have the necessary permissions on the CIMV2 namespace and sub-namespaces on the target server. You can follow the steps [here](troubleshoot-appliance.md#access-is-denied-error-occurs-when-you-connect-to-physical-servers-during-validation) to enable the required permissions.
-The account needs these permissions for the appliance to create a CIM connection with the server and pull the required configuration and performance metadata from the WMI classes listed here.
-In some cases, adding the account to these groups may not return the required data from WMI classes as the account might be filtered by [UAC](/windows/win32/wmisdk/user-account-control-and-wmi). To overcome the UAC filtering, the user account needs to have the necessary permissions on the CIMV2 namespace and sub-namespaces on the target server. You can follow the steps [here](troubleshoot-appliance.md) to enable the required permissions.
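
To check that an account set up with the Windows options above actually works for the appliance's CIM-based collection, a hedged sketch run from the appliance could look like the following; the server IP address is a placeholder:

````
# Sketch: validate that the provided account can open a CIM session (WinRM, port 5985) and query WMI
$cred = Get-Credential                                   # the account added on the appliance
$sess = New-CimSession -ComputerName "<server-ip>" -Credential $cred
Get-CimInstance -CimSession $sess -ClassName Win32_OperatingSystem | Select-Object Caption, Version
Remove-CimSession $sess
````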
> [!Note] > For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers. **Linux servers** -- You need a root account on the servers that you want to discover. Alternately, you can provide a user account with sudo permissions.
+For Linux servers, based on the features you want to use, you can create a user account in one of three ways:
+
+### Option 1
+- You need a root account on the servers that you want to discover. This account can be used to pull configuration and performance metadata, perform software inventory (discovery of installed applications) and enable agentless dependency analysis.
+
+> [!Note]
+> If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Linux servers, it is recommended to use Option 1.
+
+### Option 2
+- To discover the configuration and performance metadata from Linux servers, you can provide a user account with sudo permissions.
- The support to add a user account with sudo access is provided by default with the new appliance installer script downloaded from the portal after July 20, 2021. - For older appliances, you can enable the capability by following these steps: 1. On the server running the appliance, open the Registry Editor.
Set up an account that the appliance can use to access the physical servers.
:::image type="content" source="./media/tutorial-discover-physical/issudo-reg-key.png" alt-text="Screenshot that shows how to enable sudo support."::: -- To discover the configuration and performance metadata from target server, you need to enable sudo access for the commands listed [here](migrate-appliance.md#linux-server-metadata). Make sure that you have enabled 'NOPASSWD' for the account to run the required commands without prompting for a password every time sudo command is invoked.
+- You need to enable sudo access for the commands listed [here](discovered-metadata.md#linux-server-metadata). Make sure that you have enabled 'NOPASSWD' for the account so that the required commands can run without prompting for a password every time a sudo command is invoked.
- The following Linux OS distributions are supported for discovery by Azure Migrate using an account with sudo access: Operating system | Versions
Set up an account that the appliance can use to access the physical servers.
Debian | 7, 10 Amazon Linux | 2.0.2021 CoreOS Container | 2345.3.0
+ > [!Note]
+ > A 'sudo' account is currently not supported for performing software inventory (discovery of installed applications) and enabling agentless dependency analysis.
+### Option 3
- If you cannot provide a root account or a user account with sudo access, you can set the 'isSudo' registry key to value '0' in the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance registry (see the sketch after this list) and provide a non-root account with the required capabilities using the following commands:
- **Command** | **Purpose**
- | |
- `setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/fdisk` <br></br> `setcap CAP_DAC_READ_SEARCH+eip /sbin/fdisk` _(if /usr/sbin/fdisk is not present)_ | To collect disk configuration data
- `setcap "cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_setuid,<br>cap_setpcap,cap_net_bind_service,cap_net_admin,cap_sys_chroot,cap_sys_admin,<br>cap_sys_resource,cap_audit_control,cap_setfcap=+eip" /sbin/lvm` | To collect disk performance data
- `setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/dmidecode` | To collect BIOS serial number
- `chmod a+r /sys/class/dmi/id/product_uuid` | To collect BIOS GUID
+ **Command** | **Purpose**
+ | |
+ setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/fdisk <br></br> setcap CAP_DAC_READ_SEARCH+eip /sbin/fdisk _(if /usr/sbin/fdisk is not present)_ | To collect disk configuration data
+ setcap "cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_setuid,<br> cap_setpcap,cap_net_bind_service,cap_net_admin,cap_sys_chroot,cap_sys_admin,<br> cap_sys_resource,cap_audit_control,cap_setfcap=+eip" /sbin/lvm | To collect disk performance data
+ setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/dmidecode | To collect BIOS serial number
+ chmod a+r /sys/class/dmi/id/product_uuid | To collect BIOS GUID
+
+ To perform agentless dependency analysis on the server, ensure that you also set the required permissions on /bin/netstat and /bin/ls files by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
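
A scripted sketch of the registry change mentioned in Option 3, run on the appliance server, might look like the following; a DWORD type is assumed here, mirroring the AutoUpdate key described earlier:

````
# Sketch: allow a non-root Linux account by setting isSudo to 0 on the appliance
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\AzureAppliance" -Name "isSudo" -Value 0 -PropertyType DWord -Force | Out-Null
````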
## Azure Migrate appliance requirements
The following table summarizes port requirements for assessment.
**Appliance** | Inbound connections on TCP port 3389, to allow remote desktop connections to the appliance.<br/><br/> Inbound connections on port 44368, to remotely access the appliance management app using the URL: ``` https://<appliance-ip-or-name>:44368 ```<br/><br/> Outbound connections on ports 443 (HTTPS), to send discovery and performance metadata to Azure Migrate. **Physical servers** | **Windows:** Inbound connection on WinRM port 5985 (HTTP) to pull configuration and performance metadata from Windows servers. <br/><br/> **Linux:** Inbound connections on port 22 (TCP), to pull configuration and performance metadata from Linux servers. |
+## Software inventory requirements
+
+In addition to discovering servers, Azure Migrate: Discovery and assessment can perform software inventory on servers. Software inventory provides the list of applications, roles and features running on Windows and Linux servers, discovered using Azure Migrate. It helps you to identify and plan a migration path tailored for your on-premises workloads.
+
+Support | Details
+ |
+**Supported servers** | You can perform software inventory on up to 1,000 servers discovered from each Azure Migrate appliance.
+**Operating systems** | Servers running all Windows and Linux versions that meet the server requirements and have the required access permissions are supported.
+**Server requirements** | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.<br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers to pull the application data: list, tail, awk, grep, locate, head, sed, ps, print, sort, uniq. Based on OS type and the type of package manager being used, here are some additional commands: rpm/snap/dpkg, yum/apt-cache, mssql-server.
+**Windows server access** | A guest user account for Windows servers
+**Linux server access** | A standard user account (non-`sudo` access) for all Linux servers
+**Port access** | For Windows servers, the appliance needs access on port 5985 (HTTP), and for Linux servers, it needs access on port 22 (TCP).
+**Discovery** | Software inventory is performed by directly connecting to the servers using the server credentials added on the appliance. <br/><br/> The appliance gathers the information about the software inventory from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> Software inventory is agentless. No agent is installed on the servers.
+
+## Dependency analysis requirements (agentless)
+
+[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers, which can be easily visualized with a map view in the Azure Migrate project and used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
+
+Support | Details
+ |
+**Supported servers** | You can enable agentless dependency analysis on up to 1000 servers, discovered per appliance.
+**Operating systems** | Servers running all Windows and Linux versions that meet the server requirements and have the required access permissions are supported.
+**Server requirements** | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers: touch, chmod, cat, ps, grep, echo, sha256sum, awk, netstat, ls, sudo, dpkg, rpm, sed, getcap, which, date
+**Windows server access** | A user account (local or domain) with administrator permissions on servers.
+**Linux server access** | A root user account, or an account that has these permissions on /bin/netstat and /bin/ls files: <br />CAP_DAC_READ_SEARCH<br /> CAP_SYS_PTRACE<br /><br /> Set these capabilities by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
+**Port access** | For Windows servers, the appliance needs access on port 5985 (HTTP), and for Linux servers, it needs access on port 22 (TCP).
+**Discovery method** | Agentless dependency analysis is performed by directly connecting to the servers using the server credentials added on the appliance. <br/><br/> The appliance gathers the dependency information from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> No agent is installed on the servers to pull dependency data.
++ ## Agent-based dependency analysis requirements [Dependency analysis](concepts-dependency-visualization.md) helps you to identify dependencies between on-premises servers that you want to assess and migrate to Azure. The table summarizes the requirements for setting up agent-based dependency analysis. Currently only agent-based dependency analysis is supported for physical servers.
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
Device | Connection
## Software inventory requirements
-In addition to discovering servers, Azure Migrate: Discovery and assessment can perform software inventory on servers. Software inventory allows you to identify and plan a migration path tailored for your on-premises workloads.
+In addition to discovering servers, Azure Migrate: Discovery and assessment can perform software inventory on servers. Software inventory provides the list of applications, roles and features running on Windows and Linux servers, discovered using Azure Migrate. It allows you to identify and plan a migration path tailored for your on-premises workloads.
Support | Details |
-**Supported servers** | Currently supported only for servers in your VMware environment. You can perform software inventory on up to 10,000 servers from each Azure Migrate appliance.
+**Supported servers** | You can perform software inventory on up to 10,000 servers running across vCenter Server(s) added to each Azure Migrate appliance.
**Operating systems** | Servers running all Windows and Linux versions are supported.
-**VM requirements** | For software inventory, VMware Tools must be installed and running on your servers. <br /><br /> The VMware Tools version must be version 10.2.1 or later.<br /><br /> Windows servers must have PowerShell version 2.0 or later installed.
-**Discovery** | Software inventory is performed from vCenter Server by using VMware Tools installed on the servers. The appliance gathers the information about the software inventory from the server running vCenter Server through vSphere APIs. Software inventory is agentless. No agent is installed on the server, and the appliance doesn't connect directly to the servers. WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.
-**vCenter Server user account** | To interact with the servers for software inventory, the vCenter Server read-only account that's used for assessment must have privileges for guest operations on VMware VMs.
+**Server requirements** | For software inventory, VMware Tools must be installed and running on your servers. The VMware Tools version must be 10.2.1 or later.<br /><br /> Windows servers must have PowerShell version 2.0 or later installed.<br/><br/>WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.
+**vCenter Server account** | To interact with the servers for software inventory, the vCenter Server read-only account that's used for assessment must have privileges for guest operations on VMware VMs.
**Server access** | You can add multiple domain and non-domain (Windows/Linux) credentials in the appliance configuration manager for software inventory.<br /><br /> You must have a guest user account for Windows servers and a standard user account (non-`sudo` access) for all Linux servers. **Port access** | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running servers on which you want to perform software inventory. The server running vCenter Server returns an ESXi host connection to download the file that contains the details of the software inventory.
+**Discovery** | Software inventory is performed from vCenter Server by using VMware Tools installed on the servers.<br/><br/> The appliance gathers the information about the software inventory from the server running vCenter Server through vSphere APIs.<br/><br/> Software inventory is agentless. No agent is installed on the server, and the appliance doesn't connect directly to the servers.
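
One way to spot-check the VMware Tools requirement before starting software inventory is a short PowerCLI sketch run from the appliance; PowerCLI setup is shown in the mitigation verification steps of the troubleshooting articles, and the vCenter address and VM name below are placeholders:

````
# Sketch: confirm VMware Tools is running on a VM and report the reported Tools version
Connect-VIServer -Server "<vcenter-fqdn>" -Credential (Get-Credential)
$vm = Get-VM -Name "<vm-name>"
$vm.ExtensionData.Guest.ToolsRunningStatus   # expect guestToolsRunning
$vm.Guest.ToolsVersion                       # should correspond to VMware Tools 10.2.1 or later
````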
## SQL Server instance and database discovery requirements
Support | Details
## Dependency analysis requirements (agentless)
-[Dependency analysis](concepts-dependency-visualization.md) helps you identify dependencies between on-premises servers that you want to assess and migrate to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
+[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers, which can be easily visualized with a map view in the Azure Migrate project and used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
Support | Details |
-**Supported servers** | You can enable agentless dependency analysis on up to 1000 servers (across multiple vCenter Servers), discovered per appliance. Currently supported only for servers in your VMware environment.
+**Supported servers** | You can enable agentless dependency analysis on up to 1000 servers (across multiple vCenter Servers), discovered per appliance.
**Windows servers** | Windows Server 2019<br />Windows Server 2016<br /> Windows Server 2012 R2<br /> Windows Server 2012<br /> Windows Server 2008 R2 (64-bit)<br />Microsoft Windows Server 2008 (32-bit) **Linux servers** | Red Hat Enterprise Linux 7, 6, 5<br /> Ubuntu Linux 16.04, 14.04<br /> Debian 8, 7<br /> Oracle Linux 7, 6<br /> CentOS 7, 6, 5<br /> SUSE Linux Enterprise Server 11 and later
-**Server requirements** | VMware Tools (10.2.1 and later) must be installed and running on servers you want to analyze.<br /><br /> Servers must have PowerShell version 2.0 or later installed.
-**Discovery method** | Dependency information between servers is gathered by using VMware Tools installed on the server running vCenter Server. The appliance gathers the information from the server by using vSphere APIs. No agent is installed on the server, and the appliance doesn't connect directly to servers. WMI should be enabled and available on Windows servers.
-**vCenter account** | The read-only account used by Azure Migrate for assessment must have privileges for guest operations on VMware VMs.
-**Windows server permissions** | A user account (local or domain) with administrator permissions on servers.
-**Linux account** | A root user account, or an account that has these permissions on /bin/netstat and /bin/ls files: <br />CAP_DAC_READ_SEARCH<br /> CAP_SYS_PTRACE<br /><br /> Set these capabilities by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
+**Server requirements** | VMware Tools (10.2.1 and later) must be installed and running on servers you want to analyze.<br /><br /> Servers must have PowerShell version 2.0 or later installed.<br /><br /> WMI should be enabled and available on Windows servers.
+**vCenter Server account** | The read-only account used by Azure Migrate for assessment must have privileges for guest operations on VMware VMs.
+**Windows server access** | A user account (local or domain) with administrator permissions on servers.
+**Linux server access** | A root user account, or an account that has these permissions on /bin/netstat and /bin/ls files: <br />CAP_DAC_READ_SEARCH<br /> CAP_SYS_PTRACE<br /><br /> Set these capabilities by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
**Port access** | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running the servers that have dependencies you want to discover. The server running vCenter Server returns an ESXi host connection to download the file containing the dependency data.
+**Discovery method** | Dependency information between servers is gathered by using VMware Tools installed on the server running vCenter Server.<br /><br /> The appliance gathers the information from the server by using vSphere APIs.<br /><br /> No agent is installed on the server, and the appliance doesn't connect directly to servers.
## Dependency analysis requirements (agent-based)
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix.md
There are two versions of the Azure Migrate service:
## Next steps - [Assess VMware VMs](./tutorial-assess-vmware-azure-vm.md) for migration.-- [Assess Hyper-V VMs](tutorial-assess-hyper-v.md) for migration.
+- [Assess Hyper-V VMs](tutorial-assess-hyper-v.md) for migration.
migrate Troubleshoot Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-dependencies.md
If your Azure Migrate project has private endpoint connectivity, the request to
## Common agentless dependency analysis errors
-Azure Migrate supports agentless dependency analysis by using Azure Migrate: Discovery and assessment. Agentless dependency analysis is currently supported for VMware only. [Learn more](how-to-create-group-machine-dependencies-agentless.md) about the requirements for agentless dependency analysis.
+Azure Migrate supports agentless dependency analysis by using Azure Migrate: Discovery and assessment. [Learn more](how-to-create-group-machine-dependencies-agentless.md) about how to perform agentless dependency analysis.
-The list of agentless dependency analysis errors is summarized in the following table.
+For VMware VMs, agentless dependency analysis is performed by connecting to the servers via the vCenter Server using the VMware APIs. For Hyper-V VMs and physical servers, agentless dependency analysis is performed by directly connecting to Windows servers using PowerShell remoting on port 5985 (HTTP) and to Linux servers using SSH connectivity on port 22 (TCP).
+
+The table below summarizes all errors encountered when gathering dependency data through VMware APIs or by directly connecting to servers:
> [!Note] > The same errors can also be encountered with software inventory because it follows the same methodology as agentless dependency analysis to collect the required data. | **Error** | **Cause** | **Action** | |--|--|--|
+| **60001**:UnableToConnectToPhysicalServer | Either the [prerequisites](./migrate-support-matrix-physical.md) to connect to the server have not been met or there are network issues in connecting to the server, for instance some proxy settings.| - Ensure that the server meets the prerequisites and [port access requirements](./migrate-support-matrix-physical.md). <br/> - Add the IP addresses of the remote machines (discovered servers) to the WinRM TrustedHosts list on the Azure Migrate appliance, and retry the operation. This is to allow remote inbound connections on servers - _Windows:_ WinRM port 5985 (HTTP) and _Linux:_ SSH port 22 (TCP). <br/>- Ensure that you have chosen the correct authentication method on the appliance to connect to the server. <br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).|
+| **60002**:InvalidServerCredentials| Unable to connect to server. Either you have provided incorrect credentials on the appliance or the credentials previously provided have expired.| - Ensure that you have provided the correct credentials for the server on the appliance. You can check that by trying to connect to the server using those credentials.<br/> - If the credentials added are incorrect or have expired, edit the credentials on the appliance and revalidate the added servers. If the validation succeeds, the issue is resolved.<br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).|
+| **60005**:SSHOperationTimeout | The operation took longer than expected either due to network latency issues or due to the lack of latest updates on the server.| - Ensure that the impacted server has the latest kernel and OS updates installed.<br/>- Ensure that there is no network latency between the appliance and the server. It is recommended to have the appliance and source server on the same domain to avoid latency issues.<br/> - Connect to the impacted server from the appliance and run the commands [documented here](./troubleshoot-appliance.md) to check if they return null or empty data.<br/>- If the issue persists, submit a Microsoft support case providing the appliance machine ID (available in the footer of the appliance configuration manager). |
| **9000**: VMware tools status on the server can't be detected. | VMware tools might not be installed on the server or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.1 are installed and running on the server. | | **9001**: VMware tools aren't installed on the server. | VMware tools might not be installed on the server or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.1 are installed and running on the server. | | **9002**: VMware tools aren't running on the server. | VMware tools might not be installed on the server or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.0 are installed and running on the server. |
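
For error 60001 above, the remediation mentions adding discovered server IPs to the WinRM TrustedHosts list on the appliance; a hedged one-liner for that, with the IP address as a placeholder, is:

````
# Sketch: append a discovered server's IP to the appliance's WinRM TrustedHosts list
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "<discovered-server-ip>" -Concatenate -Force
````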
The error usually appears for servers running Windows Server 2008 or lower.
### Remediation Install the required PowerShell version (2.0 or later) at this location on the server: ($SYSTEMROOT)\System32\WindowsPowershell\v1.0\powershell.exe. [Learn more](/powershell/scripting/windows-powershell/install/installing-windows-powershell) about how to install PowerShell in Windows Server.
-After you install the required PowerShell version, verify if the error was resolved by following the steps on [this website](troubleshoot-dependencies.md#mitigation-verification-by-using-vmware-powercli).
+After you install the required PowerShell version, verify if the error was resolved by following the steps on [this website](troubleshoot-dependencies.md#mitigation-verification).
## Error 9022: GetWMIObjectAccessDenied
Make sure that the user account provided in the appliance has access to the WMI
1. Ensure you grant execute permissions, and select **This namespace and subnamespaces** in the **Applies to** dropdown list. 1. Select **Apply** to save the settings and close all dialogs.
-After you get the required access, verify if the error was resolved by following the steps on [this website](troubleshoot-dependencies.md#mitigation-verification-by-using-vmware-powercli).
+After you get the required access, verify if the error was resolved by following the steps on [this website](troubleshoot-dependencies.md#mitigation-verification).
## Error 9032: InvalidRequest
There can be multiple reasons for this issue. One reason is when the username pr
### Remediation - Make sure the username of the server credentials doesn't have invalid XML characters and is in the username@domain.com format. This format is popularly known as the UPN format.-- After you edit the credentials on the appliance, verify if the error was resolved by following the steps on [this website](troubleshoot-dependencies.md#mitigation-verification-by-using-vmware-powercli).
+- After you edit the credentials on the appliance, verify if the error was resolved by following the steps on [this website](troubleshoot-dependencies.md#mitigation-verification).
## Error 10002: ScriptExecutionTimedOutOnVm
There can be multiple reasons for this issue. One reason is when the username pr
- Ensure that you can log in to the affected server by using the same credential provided in the appliance. - You can try using another user account (for the same domain, in case the server is domain joined) for that server instead of the administrator account. - The issue can happen when Global Catalog <-> Domain Controller communication is broken. Check for this problem by creating a new user account in the domain controller and providing the same in the appliance. You might also need to restart the domain controller.-- After you take the remediation steps, verify if the error was resolved by following the steps on [this website](troubleshoot-dependencies.md#mitigation-verification-by-using-vmware-powercli).
+- After you take the remediation steps, verify if the error was resolved by following the steps on [this website](troubleshoot-dependencies.md#mitigation-verification).
## Error 10012: CredentialNotProvided
This error occurs when you've provided a domain credential with the wrong domain
### Remediation - Go to the appliance configuration manager to add a server credential or edit an existing one as explained in the cause.-- After you take the remediation steps, verify if the error was resolved by following the steps on [this website](troubleshoot-dependencies.md#mitigation-verification-by-using-vmware-powercli).
+- After you take the remediation steps, verify if the error was resolved by following the steps on [this website](troubleshoot-dependencies.md#mitigation-verification).
-## Mitigation verification by using VMware PowerCLI
+## Mitigation verification
After you use the mitigation steps for the preceding errors, verify if the mitigation worked by running a few verification commands from the appliance server. If the commands succeed, it means that the issue is resolved. Otherwise, check and follow the remediation steps again.
+### For VMware VMs _(using VMware pipe)_
1. Run the following commands to set up PowerCLI on the appliance server: ```` Install-Module -Name VMware.PowerCLI -AllowClobber
After you use the mitigation steps for the preceding errors, verify if the mitig
Invoke-VMScript -VM $vm -ScriptText "netstat -atnp | awk '{print $4,$5,$7}'" -GuestCredential $credential ````
-1. After you verify that the mitigation worked, go to the **Azure Migrate project** > **Discovery and assessment** > **Overview** > **Manage** > **Appliances**, select the appliance name, and select **Refresh services** to start a fresh discovery cycle.
+
+### For Hyper-V VMs and physical servers _(using direct connect pipe)_
+For Windows servers:
+
+1. Connect to the Windows server by running the following command:
+ ````
+ $Server = New-PSSession -ComputerName <IPAddress of Server> -Credential <user_name>
+ ````
+ and input the server credentials in the prompt.
+
+2. Run the following commands to validate agentless dependency analysis and check that you get a successful output:
+ ````
+ Invoke-Command -Session $Server -ScriptBlock {Get-WmiObject Win32_Process}
+ Invoke-Command -Session $Server -ScriptBlock {netstat -ano -p tcp}
+ ````
+
+For Linux servers:
+
+1. Install the OpenSSH client
+ ````
+ Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
+ ````
+2. Install the OpenSSH server
+ ````
+ Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
+ ````
+3. Start and configure OpenSSH Server
+ ````
+ Start-Service sshd
+ Set-Service -Name sshd -StartupType 'Automatic'
+ ````
+4. Connect to OpenSSH Server
+ ````
+ ssh username@servername
+ ````
+5. Run the following commands to validate agentless dependency analysis and check that you get a successful output:
+ ````
+ ps -o pid,cmd | grep -v ]$
+ netstat -atnp | awk '{print $4,$5,$7}'
+ ````
+
+After you verify that the mitigation worked, go to the **Azure Migrate project** > **Discovery and assessment** > **Overview** > **Manage** > **Appliances**, select the appliance name, and select **Refresh services** to start a fresh discovery cycle.
## My Log Analytics workspace isn't listed when you try to configure the workspace in Azure Migrate for agent-based dependency analysis Azure Migrate currently supports creation of OMS workspace in East US, Southeast Asia, and West Europe regions. If the workspace is created outside of Azure Migrate in any other region, it currently can't be associated with a project.
migrate Troubleshoot Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-discovery.md
Ensure the user downloading the inventory from the portal has Contributor privil
## Common software inventory errors
-Azure Migrate supports software inventory by using Azure Migrate: Discovery and assessment. Software inventory is currently supported for VMware only. [Learn more](how-to-discover-applications.md) about the requirements for software inventory.
+Azure Migrate supports software inventory by using Azure Migrate: Discovery and assessment. [Learn more](how-to-discover-applications.md) about how software inventory is performed.
-The list of software inventory errors is summarized in the following table.
+For VMware VMs, software inventory is performed by connecting to the servers via the vCenter Server using the VMware APIs. For Hyper-V VMs and physical servers, software inventory is performed by directly connecting to Windows servers using PowerShell remoting on port 5985 (HTTP) and to Linux servers using SSH connectivity on port 22 (TCP).
+
+The table below summarizes all errors encountered when gathering software inventory data through VMware APIs or by directly connecting to servers:
> [!Note] > The same errors can also be encountered with agentless dependency analysis because it follows the same methodology as software inventory to collect the required data. | **Error** | **Cause** | **Action** | |--|--|--|
+| **60001**:UnableToConnectToPhysicalServer | Either the [prerequisites](./migrate-support-matrix-physical.md) to connect to the server have not been met or there are network issues in connecting to the server, for instance some proxy settings.| - Ensure that the server meets the prerequisites and [port access requirements](./migrate-support-matrix-physical.md). <br/> - Add the IP addresses of the remote machines (discovered servers) to the WinRM TrustedHosts list on the Azure Migrate appliance, and retry the operation. This is to allow remote inbound connections on servers - _Windows:_ WinRM port 5985 (HTTP) and _Linux:_ SSH port 22 (TCP). <br/>- Ensure that you have chosen the correct authentication method on the appliance to connect to the server. <br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).|
+| **60002**:InvalidServerCredentials| Unable to connect to server. Either you have provided incorrect credentials on the appliance or the credentials previously provided have expired.| - Ensure that you have provided the correct credentials for the server on the appliance. You can check that by trying to connect to the server using those credentials.<br/> - If the credentials added are incorrect or have expired, edit the credentials on the appliance and revalidate the added servers. If the validation succeeds, the issue is resolved.<br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).|
+| **60005**:SSHOperationTimeout | The operation took longer than expected either due to network latency issues or due to the lack of latest updates on the server.| - Ensure that the impacted server has the latest kernel and OS updates installed.<br/>- Ensure that there is no network latency between the appliance and the server. It is recommended to have the appliance and source server on the same domain to avoid latency issues.<br/> - Connect to the impacted server from the appliance and run the commands [documented here](./troubleshoot-appliance.md) to check if they return null or empty data.<br/>- If the issue persists, submit a Microsoft support case providing the appliance machine ID (available in the footer of the appliance configuration manager). |
| **9000**: VMware tools status on the server can't be detected. | VMware tools might not be installed on the server, or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.1 are installed and running on the server. | | **9001**: VMware tools aren't installed on the server. | VMware tools might not be installed on the server, or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.1 are installed and running on the server. | | **9002**: VMware tools aren't running on the server. | VMware tools might not be installed on the server, or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.0 are installed and running on the server. |
The error usually appears for servers running Windows Server 2008 or lower.
### Remediation Install the required PowerShell version (2.0 or later) at this location on the server: ($SYSTEMROOT)\System32\WindowsPowershell\v1.0\powershell.exe. [Learn more](/powershell/scripting/windows-powershell/install/installing-windows-powershell) about how to install PowerShell in Windows Server.
-After you install the required PowerShell version, verify if the error was resolved by following the steps on [this website](troubleshoot-discovery.md#mitigation-verification-by-using-vmware-powercli).
+After you install the required PowerShell version, verify if the error was resolved by following the steps on [this website](troubleshoot-discovery.md#mitigation-verification).
## Error 9022: GetWMIObjectAccessDenied
Make sure that the user account provided in the appliance has access to WMI name
1. Ensure you grant execute permissions, and select **This namespace and subnamespaces** in the **Applies to** dropdown list. 1. Select **Apply** to save the settings and close all dialogs.
-After you get the required access, verify if the error was resolved by following the steps on [this website](troubleshoot-discovery.md#mitigation-verification-by-using-vmware-powercli).
+After you get the required access, verify if the error was resolved by following the steps on [this website](troubleshoot-discovery.md#mitigation-verification).
## Error 9032: InvalidRequest
There can be multiple reasons for this issue. One reason is when the username pr
### Remediation - Make sure the username of the server credentials doesn't have invalid XML characters and is in the username@domain.com format. This format is popularly known as the UPN format.-- After you edit the credentials on the appliance, verify if the error was resolved by following the steps on [this website](troubleshoot-discovery.md#mitigation-verification-by-using-vmware-powercli).
+- After you edit the credentials on the appliance, verify if the error was resolved by following the steps on [this website](troubleshoot-discovery.md#mitigation-verification).
## Error 10002: ScriptExecutionTimedOutOnVm
There can be multiple reasons for this issue. One reason is when the username pr
- Ensure that you can sign in to the affected server by using the same credential provided in the appliance. - You can try by using another user account (for the same domain, in case the server is domain joined) for that server instead of an administrator account. - The issue can happen when Global Catalog <-> Domain Controller communication is broken. Check for this problem by creating a new user account in the domain controller and providing the same in the appliance. You might also need to restart the domain controller.-- After you take the remediation steps, verify if the error was resolved by following the steps on [this website](troubleshoot-discovery.md#mitigation-verification-by-using-vmware-powercli).
+- After you take the remediation steps, verify if the error was resolved by following the steps on [this website](troubleshoot-discovery.md#mitigation-verification).
## Error 10012: CredentialNotProvided
This error occurs when you've provided a domain credential with the wrong domain
### Remediation - Go to the appliance configuration manager to add a server credential or edit an existing one as explained in the cause.-- After you take the remediation steps, verify if the error was resolved by following the steps on [this website](troubleshoot-discovery.md#mitigation-verification-by-using-vmware-powercli).
+- After you take the remediation steps, verify if the error was resolved by following the steps on [this website](troubleshoot-discovery.md#mitigation-verification).
-## Mitigation verification by using VMware PowerCLI
+## Mitigation verification
After you use the mitigation steps for the preceding errors, verify if the mitigation worked by running a few verification commands from the appliance server. If the commands succeed, it means that the issue is resolved. Otherwise, check and follow the remediation steps again.
+### For VMware VMs _(using VMware pipe)_
1. Run the following commands to set up PowerCLI on the appliance server: ```` Install-Module -Name VMware.PowerCLI -AllowClobber
After you use the mitigation steps for the preceding errors, verify if the mitig
```` Invoke-VMScript -VM $vm -ScriptText "ls" -GuestCredential $credential ````
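For reference, the VMware checks boil down to the following sketch, run from a PowerShell console on the appliance server; the vCenter Server address, VM name, and credentials are placeholders:
````
Install-Module -Name VMware.PowerCLI -AllowClobber
# Optional: ignore self-signed vCenter Server certificates
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false
# Connect with the vCenter Server credentials provided in the appliance
Connect-VIServer -Server <vCenter Server address> -Credential (Get-Credential)
$vm = Get-VM -Name <VM name>
# Guest OS credentials provided in the appliance for this server
$credential = Get-Credential
Invoke-VMScript -VM $vm -ScriptText "ls" -GuestCredential $credential
````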
-1. For agentless dependency analysis, run the following commands to see if you get a successful output:
- - For Windows servers:
+### For Hyper-V VMs and physical servers _(using direct connect pipe)_
+For Windows servers:
- ````
- Invoke-VMScript -VM $vm -ScriptText "powershell.exe 'Get-WmiObject Win32_Process'" -GuestCredential $credential
-
- Invoke-VMScript -VM $vm -ScriptText "powershell.exe 'netstat -ano -p tcp'" -GuestCredential $credential
- ````
- - For Linux servers:
- ````
- Invoke-VMScript -VM $vm -ScriptText "ps -o pid,cmd | grep -v ]$" -GuestCredential $credential
+1. Connect to the Windows server by running the command:
+ ````
+ $Server = New-PSSession -ComputerName <IPAddress of Server> -Credential <user_name>
+ ````
+ and enter the server credentials at the prompt.
+
+2. Run the following commands to validate software inventory and check that you get a successful output:
+ ````
+ Invoke-Command -Session $Server -ScriptBlock {Get-WMIObject win32_operatingsystem}
+ Invoke-Command -Session $Server -ScriptBlock {Get-WindowsFeature}
+ ````
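If you prefer to prompt for the credentials explicitly and clean up the session afterward, an equivalent sketch follows; the IP address remains a placeholder:
````
$cred = Get-Credential
$Server = New-PSSession -ComputerName <IPAddress of Server> -Credential $cred
Invoke-Command -Session $Server -ScriptBlock {Get-WmiObject Win32_OperatingSystem}
Remove-PSSession $Server
````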
- Invoke-VMScript -VM $vm -ScriptText "netstat -atnp | awk '{print $4,$5,$7}'" -GuestCredential $credential
- ````
-1. After you verify that the mitigation worked, go to the **Azure Migrate project** > **Discovery and assessment** > **Overview** > **Manage** > **Appliances**, select the appliance name, and select **Refresh services** to start a fresh discovery cycle.
+For Linux servers:
+
+1. Install the OpenSSH client
+ ````
+ Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
+ ````
+2. Install the OpenSSH server
+ ````
+ Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
+ ````
+3. Start and configure OpenSSH Server
+ ````
+ Start-Service sshd
+ Set-Service -Name sshd -StartupType 'Automatic'
+ ````
+4. Connect to OpenSSH Server
+ ````
+ ssh username@servername
+ ````
+5. Run the following command to validate software inventory and check that you get a successful output:
+ ````
+ ls
+ ````
+After you verify that the mitigation worked, go to the **Azure Migrate project** > **Discovery and assessment** > **Overview** > **Manage** > **Appliances**, select the appliance name, and select **Refresh services** to start a fresh discovery cycle.
## Discovered SQL Server instances and databases not in the portal After you've initiated discovery on the appliance, it might take up to 24 hours to start showing the inventory data in the portal.
migrate Tutorial Discover Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-aws.md
Title: Discover AWS instances with Azure Migrate Discovery and assessment description: Learn how to discover AWS instances with Azure Migrate Discovery and assessment.--++ ms. Last updated 03/11/2021
Before you start this tutorial, check you have these prerequisites in place.
**Requirement** | **Details** | **Appliance** | You need an EC2 VM on which to run the Azure Migrate appliance. The machine should have:<br/><br/> - Windows Server 2016 installed.<br/> _Running the appliance on a machine with Windows Server 2019 isn't supported_.<br/><br/> - 16-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.
-**Windows instances** | Allow inbound connections on WinRM port 5985 (HTTP), so that the appliance can pull configuration and performance metadata.
-**Linux instances** | Allow inbound connections on port 22 (TCP).<br/><br/> The instances should use `bash` as the default shell, otherwise discovery will fail.
+**Windows instances** | Allow inbound connections on WinRM port 5985 (HTTP) for discovery of Windows servers.
+**Linux instances** | Allow inbound connections on port 22 (TCP) for discovery of Linux servers.<br/><br/> The instances should use `bash` as the default shell, otherwise discovery will fail.
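If the in-guest Windows firewall blocks WinRM, one way to open it is sketched below; this assumes Windows Defender Firewall inside the EC2 instance and doesn't replace the EC2 security group rules, which must also allow inbound TCP 5985 (Windows) and TCP 22 (Linux):
````
# Run inside the Windows instance from an elevated PowerShell prompt
Enable-PSRemoting -Force
New-NetFirewallRule -DisplayName "Azure Migrate WinRM HTTP-In" -Direction Inbound -Protocol TCP -LocalPort 5985 -Action Allow
````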
## Prepare an Azure user account To create a project and register the Azure Migrate appliance, you need an account with: * Contributor or Owner permissions on an Azure subscription.
-* Permissions to register Azure Active Directory (AAD) apps.
+* Permissions to register Azure Active Directory apps.
If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner to assign the permissions as follows: 1. In the Azure portal, search for "subscriptions", and under **Services**, select **Subscriptions**.
- ![Search box to search for the Azure subscription](./media/tutorial-discover-aws/search-subscription.png)
+ ![Image of Search box to search for the Azure subscription.](./media/tutorial-discover-aws/search-subscription.png)
2. In the **Subscriptions** page, select the subscription in which you want to create a project. 3. In the subscription, select **Access control (IAM)** > **Check access**. 4. In **Check access**, search for the relevant user account. 5. In **Add a role assignment**, click **Add**.
- ![Search for a user account to check access and assign a role](./media/tutorial-discover-aws/azure-account-access.png)
+ ![Screenshot of process to search for a user account to check access and assign a role.](./media/tutorial-discover-aws/azure-account-access.png)
6. In **Add role assignment**, select the Contributor or Owner role, and select the account (azmigrateuser in our example). Then click **Save**.
- ![Opens the Add Role assignment page to assign a role to the account](./media/tutorial-discover-aws/assign-role.png)
+ ![Screenshot of the Add Role assignment page to assign a role to the account.](./media/tutorial-discover-aws/assign-role.png)
-1. To register the appliance, your Azure account needs **permissions to register AAD apps.**
+1. To register the appliance, your Azure account needs **permissions to register Azure Active Directory apps.**
1. In Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**. 1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
- ![Verify in User Settings that users can register Active Directory apps](./media/tutorial-discover-aws/register-apps.png)
+ ![Image to Verify in User Settings that users can register Active Directory apps](./media/tutorial-discover-aws/register-apps.png)
-1. In case the 'App registrations' settings is set to 'No', request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of AAD App. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+1. If the 'App registrations' setting is set to 'No', ask the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of an Azure Active Directory app. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
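If you prefer to script the Contributor or Owner role assignment from step 6 above, a hedged Az PowerShell sketch follows; the sign-in name and subscription ID are placeholders:
````
Connect-AzAccount
New-AzRoleAssignment -SignInName "azmigrateuser@contoso.com" -RoleDefinitionName "Contributor" -Scope "/subscriptions/<subscription-id>"
````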
## Prepare AWS instances
Set up a new project.
5. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one. 6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
- ![Boxes for project name and region](./media/tutorial-discover-aws/new-project.png)
+ ![Screenshot for project name and region.](./media/tutorial-discover-aws/new-project.png)
+ 7. Select **Create**. 8. Wait a few minutes for the project to deploy. The **Azure Migrate: Discovery and assessment** tool is added by default to the new project.
-![Page showing Server Assessment tool added by default](./media/tutorial-discover-aws/added-tool.png)
+![Page showing Server Assessment tool added by default.](./media/tutorial-discover-aws/added-tool.png)
> [!NOTE] > If you have already created a project, you can use the same project to register additional appliances to discover and assess more servers. [Learn more](create-manage-projects.md#find-a-project)
Set up the appliance for the first time.
1. Paste the **project key** copied from the portal. If you do not have the key, go to **Azure Migrate: Discovery and assessment> Discover> Manage existing appliances**, select the appliance name you provided at the time of key generation and copy the corresponding key. 1. You will need a device code to authenticate with Azure. Clicking on **Login** will open a modal with the device code as shown below.
- ![Modal showing the device code](./media/tutorial-discover-vmware/device-code.png)
+ ![Modal showing the device code.](./media/tutorial-discover-vmware/device-code.png)
1. Click on **Copy code & Login** to copy the device code and open an Azure Login prompt in a new browser tab. If it doesn't appear, make sure you've disabled the pop-up blocker in the browser. 1. On the new tab, paste the device code and sign-in by using your Azure username and password.
Now, connect from the appliance to the physical servers to be discovered, and st
* Currently Azure Migrate does not support SSH private key file generated by PuTTY. * Azure Migrate supports OpenSSH format of the SSH private key file as shown below:
- ![SSH private key supported format](./media/tutorial-discover-physical/key-format.png)
+ ![Image of SSH private key supported format.](./media/tutorial-discover-physical/key-format.png)
1. If you want to add multiple credentials at once, click on **Add more** to save and add more credentials. Multiple credentials are supported for physical servers discovery.
Now, connect from the appliance to the physical servers to be discovered, and st
- If validation fails for a server, review the error by clicking on **Validation failed** in the Status column of the table. Fix the issue, and validate again. - To remove a server, click on **Delete**. 1. You can **revalidate** the connectivity to servers anytime before starting the discovery.
-1. Click on **Start discovery**, to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
+1. Before initiating discovery, you can choose to disable the slider to not perform software inventory and agentless dependency analysis on the added servers. You can change this option at any time.
+
+ :::image type="content" source="./media/tutorial-discover-physical/disable-slider.png" alt-text="Screenshot that shows where to disable the slider.":::
+
+### Start discovery
+
+Click on **Start discovery** to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
+## How discovery works
-This starts discovery. It takes approximately 2 minutes per server for metadata of discovered server to appear in the Azure portal.
+* It takes approximately 2 minutes to complete discovery of 100 servers and for their metadata to appear in the Azure portal.
+* [Software inventory](how-to-discover-applications.md) (discovery of installed applications) is automatically initiated when the discovery of servers is finished.
+* The time taken for discovery of installed applications depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal.
+* During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
## Verify servers in the portal
After discovery finishes, you can verify that the servers appear in the portal.
## Next steps - [Assess physical servers](tutorial-migrate-aws-virtual-machines.md) for migration to Azure VMs.-- [Review the data](migrate-appliance.md#collected-dataphysical) that the appliance collects during discovery.
+- [Review the data](discovered-metadata.md#collected-data-for-physical-servers) that the appliance collects during discovery.
migrate Tutorial Discover Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-gcp.md
Title: Discover servers on GCP instances with Azure Migrate Discovery and assessment description: Learn how to discover servers on GCP with Azure Migrate Discovery and assessment.--++ ms. Last updated 03/13/2021
Before you start this tutorial, check you have these prerequisites in place.
**Requirement** | **Details** | **Appliance** | You need a server on GCP on which to run the Azure Migrate appliance. The machine should have:<br/><br/> - Windows Server 2016 installed.<br/> _Running the appliance on a machine with Windows Server 2019 isn't supported_.<br/><br/> - 16-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.
-**Windows server instances** | Allow inbound connections on WinRM port 5985 (HTTP), so that the appliance can pull configuration and performance metadata.
-**Linux server instances** | Allow inbound connections on port 22 (TCP).
+**Windows server instances** | Allow inbound connections on WinRM port 5985 (HTTP) for discovery of Windows servers.
+**Linux server instances** | Allow inbound connections on port 22 (TCP) for discovery of Linux servers.
## Prepare an Azure user account To create a project and register the Azure Migrate appliance, you need an account with: * Contributor or Owner permissions on an Azure subscription.
-* Permissions to register Azure Active Directory (AAD) apps.
+* Permissions to register Azure Active Directory apps.
If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner to assign the permissions as follows: 1. In the Azure portal, search for "subscriptions", and under **Services**, select **Subscriptions**.
- ![Search box to search for the Azure subscription](./media/tutorial-discover-gcp/search-subscription.png)
+ ![Screenshot of Search box to search for the Azure subscription.](./media/tutorial-discover-gcp/search-subscription.png)
2. In the **Subscriptions** page, select the subscription in which you want to create a project. 3. In the subscription, select **Access control (IAM)** > **Check access**. 4. In **Check access**, search for the relevant user account. 5. In **Add a role assignment**, click **Add**.
- ![Search for a user account to check access and assign a role](./media/tutorial-discover-gcp/azure-account-access.png)
+ ![Image to Search for a user account to check access and assign a role.](./media/tutorial-discover-gcp/azure-account-access.png)
6. In **Add role assignment**, select the Contributor or Owner role, and select the account (azmigrateuser in our example). Then click **Save**.
- ![Opens the Add Role assignment page to assign a role to the account](./media/tutorial-discover-gcp/assign-role.png)
+ ![Screenshot of Add Role assignment page to assign a role to the account.](./media/tutorial-discover-gcp/assign-role.png)
-1. To register the appliance, your Azure account needs **permissions to register AAD apps.**
+1. To register the appliance, your Azure account needs **permissions to register Azure Active Directory apps.**
1. In Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**. 1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
- ![Verify in User Settings that users can register Active Directory apps](./media/tutorial-discover-gcp/register-apps.png)
+ ![Verify in User Settings that users can register Active Directory apps.](./media/tutorial-discover-gcp/register-apps.png)
-1. In case the 'App registrations' settings is set to 'No', request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of AAD App. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+1. If the 'App registrations' setting is set to 'No', ask the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of an Azure Active Directory app. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## Prepare GCP instances
Set up a new project.
4. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one. 5. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
- ![Boxes for project name and region](./media/tutorial-discover-gcp/new-project.png)
+ ![Screenshot to enter project name and region.](./media/tutorial-discover-gcp/new-project.png)
6. Select **Create**. 7. Wait a few minutes for the project to deploy. The **Azure Migrate: Discovery and assessment** tool is added by default to the new project.
-![Page showing Server Assessment tool added by default](./media/tutorial-discover-gcp/added-tool.png)
+![Page showing Server Assessment tool added by default.](./media/tutorial-discover-gcp/added-tool.png)
> [!NOTE] > If you have already created a project, you can use the same project to register additional appliances to discover and assess more servers. [Learn more](create-manage-projects.md#find-a-project)
Set up the appliance for the first time.
1. Paste the **project key** copied from the portal. If you do not have the key, go to **Azure Migrate: Discovery and assessment> Discover> Manage existing appliances**, select the appliance name you provided at the time of key generation and copy the corresponding key. 1. You will need a device code to authenticate with Azure. Clicking on **Login** will open a modal with the device code as shown below.
- ![Modal showing the device code](./media/tutorial-discover-vmware/device-code.png)
+ ![Modal showing the device code.](./media/tutorial-discover-vmware/device-code.png)
1. Click on **Copy code & Login** to copy the device code and open an Azure Login prompt in a new browser tab. If it doesn't appear, make sure you've disabled the pop-up blocker in the browser. 1. On the new tab, paste the device code and sign-in by using your Azure username and password.
Now, connect from the appliance to the GCP servers to be discovered, and start t
- Currently Azure Migrate does not support SSH private key file generated by PuTTY. - Azure Migrate supports OpenSSH format of the SSH private key file as shown below:
- ![SSH private key supported format](./media/tutorial-discover-physical/key-format.png)
+ ![Image of SSH private key supported format.](./media/tutorial-discover-physical/key-format.png)
2. If you want to add multiple credentials at once, click on **Add more** to save and add more credentials.
Now, connect from the appliance to the GCP servers to be discovered, and start t
- If validation fails for a server, review the error by clicking on **Validation failed** in the Status column of the table. Fix the issue, and validate again. - To remove a server, click on **Delete**. 6. You can **revalidate** the connectivity to servers anytime before starting the discovery.
-7. Click on **Start discovery**, to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
+1. Before initiating discovery, you can choose to disable the slider to not perform software inventory and agentless dependency analysis on the added servers. You can change this option at any time.
+ :::image type="content" source="./media/tutorial-discover-physical/disable-slider.png" alt-text="Screenshot that shows where to disable the slider.":::
-This starts discovery. It takes approximately 2 minutes per server for metadata of discovered server to appear in the Azure portal.
+### Start discovery
+
+Click on **Start discovery** to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
+
+## How discovery works
+
+* It takes approximately 2 minutes to complete discovery of 100 servers and for their metadata to appear in the Azure portal.
+* [Software inventory](how-to-discover-applications.md) (discovery of installed applications) is automatically initiated when the discovery of servers is finished.
+* The time taken for discovery of installed applications depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal.
+* During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
## Verify servers in the portal
After discovery finishes, you can verify that the servers appear in the portal.
## Next steps * [Assess GCP servers](tutorial-assess-gcp.md) for migration to Azure VMs.
-* [Review the data](migrate-appliance.md#collected-dataphysical) that the appliance collects during discovery.
+* [Review the data](discovered-metadata.md#collected-data-for-physical-servers) that the appliance collects during discovery.
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
Title: Discover servers on Hyper-V with Azure Migrate Discovery and assessment description: Learn how to discover on-premises servers on Hyper-V with the Azure Migrate Discovery and assessment tool.--++ ms. Last updated 11/12/2021
Before you start this tutorial, check you have these prerequisites in place.
| **Hyper-V host** | Hyper-V hosts on which servers are located can be standalone, or in a cluster.<br/><br/> The host must be running Windows Server 2019, Windows Server 2016, or Windows Server 2012 R2.<br/><br/> Verify inbound connections are allowed on WinRM port 5985 (HTTP), so that the appliance can connect to pull server metadata and performance data, using a Common Information Model (CIM) session. **Appliance deployment** | Hyper-V host needs resources to allocate a server for the appliance:<br/><br/> - 16 GB of RAM, 8 vCPUs, and around 80 GB of disk storage.<br/><br/> - An external virtual switch, and internet access on the appliance, directly or via a proxy.
-**Servers** | Servers can be running any Windows or Linux operating system.
+**Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](migrate-support-matrix-hyper-v.md#dependency-analysis-requirements-agentless).<br /><br /> For discovery of installed applications and for agentless dependency analysis, Windows servers must have PowerShell version 2.0 or later installed.
## Prepare an Azure user account To create a project and register the Azure Migrate appliance, you need an account with: - Contributor or Owner permissions on an Azure subscription.-- Permissions to register Azure Active Directory(AAD) apps.
+- Permissions to register Azure Active Directory apps.
If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner to assign the permissions as follows: 1. In the Azure portal, search for "subscriptions", and under **Services**, select **Subscriptions**.
- ![Search box to search for the Azure subscription](./media/tutorial-discover-hyper-v/search-subscription.png)
+ ![Screenshot of Search box to search for the Azure subscription.](./media/tutorial-discover-hyper-v/search-subscription.png)
2. In the **Subscriptions** page, select the subscription in which you want to create a project. 3. In the subscription, select **Access control (IAM)** > **Check access**. 4. In **Check access**, search for the relevant user account. 5. In **Add a role assignment**, click **Add**.
- ![Search for a user account to check access and assign a role](./media/tutorial-discover-hyper-v/azure-account-access.png)
+ ![Screenshot of Search for a user account to check access and assign a role.](./media/tutorial-discover-hyper-v/azure-account-access.png)
6. In **Add role assignment**, select the Contributor or Owner role, and select the account (azmigrateuser in our example). Then click **Save**.
- ![Opens the Add Role assignment page to assign a role to the account](./media/tutorial-discover-hyper-v/assign-role.png)
+ ![Screenshot of the Add Role assignment page to assign a role to the account.](./media/tutorial-discover-hyper-v/assign-role.png)
-1. To register the appliance, your Azure account needs **permissions to register AAD apps.**
-1. In Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
+1. To register the appliance, your Azure account needs **permissions to register Azure Active Directory apps.**
+1. In the Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
- ![Verify in User Settings that users can register Active Directory apps](./media/tutorial-discover-hyper-v/register-apps.png)
+ ![Verify in User Settings that users can register Active Directory apps.](./media/tutorial-discover-hyper-v/register-apps.png)
-9. In case the 'App registrations' settings is set to 'No', request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of AAD App. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+9. If the 'App registrations' setting is set to 'No', ask the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of an Azure Active Directory app. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## Prepare Hyper-V hosts
Hash value is:
| SHA256 | 0ad60e7299925eff4d1ae9f1c7db485dc9316ef45b0964148a3c07c80761ade2
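To compare the downloaded file against the published hash, you can compute it locally on the appliance server; the file name is a placeholder for the zip file you downloaded:
````
Get-FileHash -Path .\<downloaded zip file> -Algorithm SHA256
````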
+### Create an account to access servers
+
+The user account on your servers must have the required permissions to initiate discovery of installed applications and enable agentless dependency analysis. You can provide the user account information in the appliance configuration manager. The appliance doesn't install agents on the servers.
+
+* For Windows servers, create an account (local or domain) that has administrator permissions on the servers.
+* For Linux servers, provide the root user account details or create an account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files.
+
+> [!NOTE]
+> You can add multiple server credentials in the Azure Migrate appliance configuration manager to initiate discovery of installed applications and enable agentless dependency analysis. You can add multiple domain, Windows (non-domain), or Linux (non-domain) credentials. Learn how to [add server credentials](add-server-credentials.md).
+ ## Set up a project Set up a new project.
Set up a new project.
5. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one. 6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
- ![Boxes for project name and region](./media/tutorial-discover-hyper-v/new-project.png)
+ ![Screenshot of project name and region.](./media/tutorial-discover-hyper-v/new-project.png)
7. Select **Create**. 8. Wait a few minutes for the project to deploy. The **Azure Migrate: Discovery and assessment** tool is added by default to the new project.
-![Page showing Azure Migrate: Discovery and assessment tool added by default](./media/tutorial-discover-hyper-v/added-tool.png)
+![Page showing Azure Migrate: Discovery and assessment tool added by default.](./media/tutorial-discover-hyper-v/added-tool.png)
> [!NOTE] > If you have already created a project, you can use the same project to register additional appliances to discover and assess more servers. [Learn more](create-manage-projects.md#find-a-project)
Import the downloaded file, and create an appliance.
### Verify appliance access to Azure
-Make sure that the appliance can connect to Azure URLs for [public](migrate-appliance.md#public-cloud-urls) and [government](migrate-appliance.md#government-cloud-urls) clouds.
+Make sure that the appliance can connect to Azure URLs for [public](migrate-support-matrix.md#public-cloud) and [government](migrate-support-matrix.md#azure-government) clouds.
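A quick spot check of outbound connectivity from the appliance server; the URL shown is only one example, so check the full lists linked above:
````
Test-NetConnection -ComputerName login.microsoftonline.com -Port 443
````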
### 4. Configure the appliance
Set up the appliance for the first time.
1. Paste the **project key** copied from the portal. If you do not have the key, go to **Azure Migrate: Discovery and assessment> Discover> Manage existing appliances**, select the appliance name you provided at the time of key generation and copy the corresponding key. 1. You will need a device code to authenticate with Azure. Clicking on **Login** will open a modal with the device code as shown below.
- ![Modal showing the device code](./media/tutorial-discover-vmware/device-code.png)
+ ![Modal showing the device code.](./media/tutorial-discover-vmware/device-code.png)
1. Click on **Copy code & Login** to copy the device code and open an Azure Login prompt in a new browser tab. If it doesn't appear, make sure you've disabled the pop-up blocker in the browser. 1. On the new tab, paste the device code and sign-in by using your Azure username and password.
If you're running VHDs on SMBs, you must enable delegation of credentials from t
Connect from the appliance to Hyper-V hosts or clusters, and start server discovery.
+### Provide Hyper-V host/cluster details
+ 1. In **Step 1: Provide Hyper-V host credentials**, click on **Add credentials** to specify a friendly name for credentials, add **Username** and **Password** for a Hyper-V host/cluster that the appliance will use to discover servers. Click on **Save**. 1. If you want to add multiple credentials at once, click on **Add more** to save and add more credentials. Multiple credentials are supported for discovery of servers in Hyper-V environment. 1. In **Step 2: Provide Hyper-V host/cluster details**, click on **Add discovery source** to specify the Hyper-V host/cluster **IP address/FQDN** and the friendly name for credentials to connect to the host/cluster.
Connect from the appliance to Hyper-V hosts or clusters, and start server discov
- You can't remove a specific host from a cluster. You can only remove the entire cluster. - You can add a cluster, even if there are issues with specific hosts in the cluster. 1. You can **revalidate** the connectivity to hosts/clusters anytime before starting the discovery.
-1. Click on **Start discovery**, to kick off server discovery from the successfully validated hosts/clusters. After the discovery has been successfully initiated, you can check the discovery status against each host/cluster in the table.
-This starts discovery. It takes approximately 2 minutes per host for metadata of discovered servers to appear in the Azure portal.
+### Provide server credentials
+
+In **Step 3: Provide server credentials to perform software inventory and agentless dependency analysis.**, you can provide multiple server credentials. If you don't want to use any of these appliance features, you can disable the slider and proceed with discovery of servers running on Hyper-V hosts/clusters. You can change this option at any time.
++
+If you want to use these features, provide server credentials by completing the following steps. The appliance attempts to automatically map the credentials to the servers to perform the discovery features.
+
+To add server credentials:
+
+1. Select **Add Credentials**.
+1. In the dropdown menu, select **Credentials type**.
+
+ You can provide domain, Windows (non-domain), or Linux (non-domain) credentials. Learn how to [provide credentials](add-server-credentials.md) and how we handle them.
+1. For each type of credentials, enter:
+ * A friendly name.
+ * A username.
+ * A password.
+ Select **Save**.
+
+ If you choose to use domain credentials, you also must enter the FQDN for the domain. The FQDN is required to validate the authenticity of the credentials with the Active Directory instance in that domain.
+1. Review the [required permissions](add-server-credentials.md#required-permissions) on the account for discovery of installed applications and agentless dependency analysis.
+1. To add multiple credentials at once, select **Add more** to save credentials, and then add more credentials.
+ When you select **Save** or **Add more**, the appliance validates the domain credentials with the domain's Active Directory instance for authentication. Validation occurs after each addition to avoid account lockouts, because during discovery the appliance iterates through the credentials to map them to the respective servers.
+
+To check validation of the domain credentials:
+
+In the configuration manager, in the credentials table, see the **Validation status** for domain credentials. Only domain credentials are validated.
+
+If validation fails, you can select the **Failed** status to see the validation error. Fix the issue, and then select **Revalidate credentials** to reattempt validation of the credentials.
++
+### Start discovery
+
+Click on **Start discovery** to kick off server discovery from the successfully validated host(s)/cluster(s). After the discovery has been successfully initiated, you can check the discovery status against each host/cluster in the table.
+
+## How discovery works
+
+* It takes approximately 2 minutes per host for metadata of discovered servers to appear in the Azure portal.
+* If you have provided server credentials, [software inventory](how-to-discover-applications.md) (discovery of installed applications) is automatically initiated when the discovery of servers running on Hyper-V host(s)/cluster(s) is finished.
+* The time taken for discovery of installed applications depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal.
+* During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
## Verify servers in the portal
After discovery finishes, you can verify that the servers appear in the portal.
## Next steps - [Assess servers on Hyper-V environment](tutorial-assess-hyper-v.md) for migration to Azure VMs.-- [Review the data](migrate-appliance.md#collected-datahyper-v) that the appliance collects during discovery.
+- [Review the data](discovered-metadata.md#collected-metadata-for-hyper-v-servers) that the appliance collects during discovery.
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites
-Before you start this tutorial, check you have these prerequisites in place.
+Before you start this tutorial, ensure you have these prerequisites in place.
**Requirement** | **Details** |
-**Appliance** | You need a server on which to run the Azure Migrate appliance. The server should have:<br/><br/> - Windows Server 2016 installed.<br/> _(Currently the deployment of appliance is only supported on Windows Server 2016.)_<br/><br/> - 16-GB RAM, 8 vCPUs, around 80 GB of disk storage<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.<br/><br/> - Outbound internet connectivity to the required [URLs](migrate-appliance.md#url-access) from the appliance.
-**Windows servers** | Allow inbound connections on WinRM port 5985 (HTTP), so that the appliance can pull configuration and performance metadata.
-**Linux servers** | Allow inbound connections on port 22 (TCP).
+**Appliance** | You need a server to run the Azure Migrate appliance. The server should have:<br/><br/> - Windows Server 2016 installed.<br/> _(Currently the deployment of appliance is only supported on Windows Server 2016.)_<br/><br/> - 16-GB RAM, 8 vCPUs, around 80 GB of disk storage<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.<br/><br/> - Outbound internet connectivity to the required [URLs](migrate-appliance.md#url-access) from the appliance.
+**Windows servers** | Allow inbound connections on WinRM port 5985 (HTTP) for discovery of Windows servers.
+**Linux servers** | Allow inbound connections on port 22 (TCP) for discovery of Linux servers.
> [!NOTE] > It is unsupported to install the Azure Migrate Appliance on a server that has the [replication appliance](migrate-replication-appliance.md) or mobility service agent installed. Ensure that the appliance server has not been previously used to set up the replication appliance or has the mobility service agent installed on the server.
Before you start this tutorial, check you have these prerequisites in place.
To create a project and register the Azure Migrate appliance, you need an account with: - Contributor or Owner permissions on an Azure subscription.-- Permissions to register Azure Active Directory (AAD) apps.
+- Permissions to register Azure Active Directory apps.
If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner to assign the permissions as follows: 1. In the Azure portal, search for "subscriptions", and under **Services**, select **Subscriptions**.
- ![Search box to search for the Azure subscription](./media/tutorial-discover-physical/search-subscription.png)
+ ![Screenshot of search box to search for the Azure subscription.](./media/tutorial-discover-physical/search-subscription.png)
-2. In the **Subscriptions** page, select the subscription in which you want to create project.
+2. In the **Subscriptions** page, select the subscription in which you want to create the project.
3. In the subscription, select **Access control (IAM)** > **Check access**. 4. In **Check access**, search for the relevant user account. 5. In **Add a role assignment**, click **Add**.
- ![Search for a user account to check access and assign a role](./media/tutorial-discover-physical/azure-account-access.png)
+ ![Screenshot of searching for a user account to check access and assign a role.](./media/tutorial-discover-physical/azure-account-access.png)
6. In **Add role assignment**, select the Contributor or Owner role, and select the account (azmigrateuser in our example). Then click **Save**.
- ![Opens the Add Role assignment page to assign a role to the account](./media/tutorial-discover-physical/assign-role.png)
+ ![Screenshot of the Add Role assignment page to assign a role to the account.](./media/tutorial-discover-physical/assign-role.png)
-1. To register the appliance, your Azure account needs **permissions to register AAD apps.**
+1. To register the appliance, your Azure account needs **permissions to register Azure Active Directory apps.**
1. In Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**. 1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
- ![Verify in User Settings that users can register Active Directory apps](./media/tutorial-discover-physical/register-apps.png)
+ ![Verify in User Settings that users can register Active Directory apps.](./media/tutorial-discover-physical/register-apps.png)
-9. In case the 'App registrations' settings is set to 'No', request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of AAD App. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+9. If the 'App registrations' setting is set to 'No', ask the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of an Azure Active Directory app. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## Prepare physical servers
Set up an account that the appliance can use to access the physical servers.
**Windows servers** -- For Windows servers, use a domain account for domain-joined servers, and a local account for servers that are not domain-joined. -- The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users. -- If Remote management Users group isn't present, then add user account to the group: **WinRMRemoteWMIUsers_**.-- The account needs these permissions for appliance to create a CIM connection with the server and pull the required configuration and performance metadata from the WMI classes listed here.
+For Windows servers, use a domain account for domain-joined servers, and a local account for servers that are not domain-joined. The user account can be created in one of two ways:
+
+### Option 1
+
+- Create an account that has administrator privileges on the servers. This account can be used to pull configuration and performance data through CIM connection and perform software inventory (discovery of installed applications) and enable agentless dependency analysis using PowerShell remoting.
+
+> [!Note]
+> If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Windows servers, it is recommended to use Option 1.
+
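To spot-check that an account created under Option 1 can open a CIM connection and run remote PowerShell against a server, here is a minimal sketch; the server address is a placeholder:
````
$cred = Get-Credential
# CIM connection used to pull configuration and performance data
$session = New-CimSession -ComputerName <server IP or FQDN> -Credential $cred
Get-CimInstance -CimSession $session -ClassName Win32_OperatingSystem
# PowerShell remoting used for software inventory and dependency analysis
Invoke-Command -ComputerName <server IP or FQDN> -Credential $cred -ScriptBlock { Get-Process | Select-Object -First 5 }
````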
+### Option 2
+- The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
+- If Remote management Users group isn't present, then add the user account to the group: **WinRMRemoteWMIUsers_**.
+- The account needs these permissions for the appliance to create a CIM connection with the server and pull the required configuration and performance metadata from the WMI classes listed here.
- In some cases, adding the account to these groups may not return the required data from WMI classes as the account might be filtered by [UAC](/windows/win32/wmisdk/user-account-control-and-wmi). To overcome the UAC filtering, user account needs to have necessary permissions on CIMV2 Namespace and sub-namespaces on the target server. You can follow the steps [here](troubleshoot-appliance.md) to enable the required permissions. > [!Note]
Set up an account that the appliance can use to access the physical servers.
**Linux servers** -- You need a root account on the servers that you want to discover. Alternately, you can provide a user account with sudo permissions.
+For Linux servers, you can create a user account in one of three ways:
+
+### Option 1
+- You need a root account on the servers that you want to discover. This account can be used to pull configuration and performance metadata and perform software inventory (discovery of installed applications) and enable agentless dependency analysis using SSH connectivity.
+
+> [!Note]
+> If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Linux servers, it is recommended to use Option 1.
+
+### Option 2
+- To discover the configuration and performance metadata from Linux servers, you can provide a user account with sudo permissions.
- The support to add a user account with sudo access is provided by default with the new appliance installer script downloaded from portal after July 20,2021. - For older appliances, you can enable the capability by following these steps: 1. On the server running the appliance, open the Registry Editor.
Set up an account that the appliance can use to access the physical servers.
:::image type="content" source="./media/tutorial-discover-physical/issudo-reg-key.png" alt-text="Screenshot that shows how to enable sudo support."::: -- To discover the configuration and performance metadata from target server, you need to enable sudo access for the commands listed [here](migrate-appliance.md#linux-server-metadata). Make sure that you have enabled 'NOPASSWD' for the account to run the required commands without prompting for a password every time sudo command is invoked.
+- You need to enable sudo access for the commands listed [here](discovered-metadata.md#linux-server-metadata). Make sure that you have enabled 'NOPASSWD' for the account to run the required commands without prompting for a password every time the sudo command is invoked.
- The following Linux OS distributions are supported for discovery by Azure Migrate using an account with sudo access: Operating system | Versions
Set up an account that the appliance can use to access the physical servers.
Amazon Linux | 2.0.2021 CoreOS Container | 2345.3.0
+### Option 3
- If you cannot provide root account or user account with sudo access, then you can set 'isSudo' registry key to value '0' in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance registry and provide a non-root account with the required capabilities using the following commands: **Command** | **Purpose** | | setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/fdisk <br></br> setcap CAP_DAC_READ_SEARCH+eip /sbin/fdisk _(if /usr/sbin/fdisk is not present)_ | To collect disk configuration data
-setcap "cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_setuid,<br>cap_setpcap,cap_net_bind_service,cap_net_admin,cap_sys_chroot,cap_sys_admin,<br>cap_sys_resource,cap_audit_control,cap_setfcap=+eip" /sbin/lvm | To collect disk performance data
+setcap "cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_setuid,<br> cap_setpcap,cap_net_bind_service,cap_net_admin,cap_sys_chroot,cap_sys_admin,<br> cap_sys_resource,cap_audit_control,cap_setfcap=+eip" /sbin/lvm | To collect disk performance data
setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/dmidecode | To collect BIOS serial number chmod a+r /sys/class/dmi/id/product_uuid | To collect BIOS GUID
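One way to set the registry value named in Option 3 from an elevated PowerShell prompt on the appliance server is sketched below; you may need to create the value first if it doesn't already exist:
````
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\AzureAppliance' -Name 'isSudo' -Value 0
````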
Set up a new project.
5. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one. 6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
- ![Boxes for project name and region](./media/tutorial-discover-physical/new-project.png)
+ ![Screenshot of project name and region.](./media/tutorial-discover-physical/new-project.png)
7. Select **Create**. 8. Wait a few minutes for the project to deploy. The **Azure Migrate: Discovery and assessment** tool is added by default to the new project.
-![Page showing Server Assessment tool added by default](./media/tutorial-discover-physical/added-tool.png)
+ ![Page showing Server Assessment tool added by default.](./media/tutorial-discover-physical/added-tool.png)
> [!NOTE]
-> If you have already created a project, you can use the same project to register additional appliances to discover and assess more no of servers.[Learn more](create-manage-projects.md#find-a-project)
+> If you have already created a project, you can use the same project to register additional appliances to discover and assess more servers. [Learn more](create-manage-projects.md#find-a-project).
## Set up the appliance Azure Migrate appliance performs server discovery and sends server configuration and performance metadata to Azure Migrate. The appliance can be set up by executing a PowerShell script that can be downloaded from the project.
-To set up the appliance you:
+To set up the appliance, you:
1. Provide an appliance name and generate a project key in the portal.
-2. Download a zipped file with Azure Migrate installer script from the Azure portal.
+2. Download a zipped file with the Azure Migrate installer script from the Azure portal.
3. Extract the contents from the zipped file. Launch the PowerShell console with administrative privileges. 4. Execute the PowerShell script to launch the appliance configuration manager. 5. Configure the appliance for the first time, and register it with the project using the project key.
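Steps 2 through 4 above come down to a few commands on the appliance server; here is a minimal sketch, assuming the downloaded zip extracts to a folder named AzureMigrateInstaller as shown later in this article:
````
# Run from an elevated PowerShell prompt on the appliance server
Expand-Archive -Path .\<downloaded zip file> -DestinationPath .\AzureMigrateInstaller
Set-Location .\AzureMigrateInstaller
# Only needed if script execution is restricted on the server
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
.\AzureMigrateInstaller.ps1
````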
To set up the appliance you:
1. After the successful creation of the Azure resources, a **project key** is generated. 1. Copy the key as you will need it to complete the registration of the appliance during its configuration.
- [ ![Selections for Generate Key.](./media/tutorial-assess-physical/generate-key-physical-inline-1.png)](./media/tutorial-assess-physical/generate-key-physical-expanded-1.png#lightbox)
+ [ ![Selections for Generate Key.](./media/tutorial-assess-physical/generate-key-physical-inline-1.png)](./media/tutorial-assess-physical/generate-key-physical-expanded-1.png#lightbox)
### 2. Download the installer script
-In **2: Download Azure Migrate appliance**, click on **Download**.
+In **2: Download Azure Migrate appliance**, click **Download**.
### Verify security
Check that the zipped file is secure, before you deploy it.
`PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1`
-5. Select from the scenario, cloud and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover and assess **physical servers** _(or servers running on other clouds like AWS, GCP, Xen etc.)_ to an Azure Migrate project with **default _(public endpoint)_ connectivity** on **Azure public cloud**.
+5. Select from the scenario, cloud, and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover and assess **physical servers** _(or servers running on other clouds like AWS, GCP, Xen etc.)_ to an Azure Migrate project with **default _(public endpoint)_ connectivity** on **Azure public cloud**.
- :::image type="content" source="./media/tutorial-discover-physical/script-physical-default-inline.png" alt-text="Screenshot that shows how to set up appliance with desired configuration" lightbox="./media/tutorial-discover-physical/script-physical-default-expanded.png":::
+ :::image type="content" source="./media/tutorial-discover-physical/script-physical-default-inline.png" alt-text="Screenshot that shows how to set up appliance with desired configuration." lightbox="./media/tutorial-discover-physical/script-physical-default-expanded.png":::
6. The installer script does the following: - Installs agents and a web application.
- - Install Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.
- - Download and installs an IIS rewritable module.
+ - Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.
+ - Downloads and installs the IIS URL Rewrite module.
- Updates a registry key (HKLM) with persistent setting details for Azure Migrate. - Creates the following files under the path: - **Config Files:** `%ProgramData%\Microsoft Azure\Config`
Set up the appliance for the first time.
### Register the appliance with Azure Migrate
-1. Paste the **project key** copied from the portal. If you do not have the key, go to **Azure Migrate: Discovery and assessment> Discover> Manage existing appliances**, select the appliance name you provided at the time of key generation and copy the corresponding key.
+1. Paste the **project key** copied from the portal. If you do not have the key, go to **Azure Migrate: Discovery and assessment** > **Discover** > **Manage existing appliances**, select the appliance name you provided at the time of key generation and copy the corresponding key.
1. You will need a device code to authenticate with Azure. Clicking on **Login** will open a modal with the device code as shown below.
- ![Modal showing the device code](./media/tutorial-discover-vmware/device-code.png)
+ :::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Modal showing the device code.":::
1. Click on **Copy code & Login** to copy the device code and open an Azure Login prompt in a new browser tab. If it doesn't appear, make sure you've disabled the pop-up blocker in the browser.
-1. On the new tab, paste the device code and sign-in by using your Azure username and password. Sign-in with a PIN isn't supported.
+1. On the new tab, paste the device code and sign-in using your Azure username and password. Sign-in with a PIN isn't supported.
1. In case you close the login tab accidentally without logging in, you need to refresh the browser tab of the appliance configuration manager to enable the Login button again.
-1. After you successfully logged in, go back to the previous tab with the appliance configuration manager.
-1. If the Azure user account used for logging has the right [permissions]() on the Azure resources created during key generation, the appliance registration will be initiated.
+1. After you have successfully logged in, go back to the previous tab with the appliance configuration manager.
+1. If the Azure user account used for logging in has the right permissions on the Azure resources created during key generation, the appliance registration will be initiated.
1. After appliance is successfully registered, you can see the registration details by clicking on **View details**. ## Start continuous discovery
Now, connect from the appliance to the physical servers to be discovered, and st
- Currently Azure Migrate does not support SSH private key file generated by PuTTY. - Azure Migrate supports OpenSSH format of the SSH private key file as shown below:
- ![SSH private key supported format](./media/tutorial-discover-physical/key-format.png)
+ ![Screenshot of SSH private key supported format.](./media/tutorial-discover-physical/key-format.png)
1. If you want to add multiple credentials at once, click on **Add more** to save and add more credentials. Multiple credentials are supported for physical servers discovery. 1. In **Step 2: Provide physical or virtual server details**, click on **Add discovery source** to specify the server **IP address/FQDN** and the friendly name for credentials to connect to the server.
Now, connect from the appliance to the physical servers to be discovered, and st
- If you choose **Add single item**, you can choose the OS type, specify friendly name for credentials, add server **IP address/FQDN** and click on **Save**.
- - If you choose **Add multiple items**, you can add multiple records at once by specifying server **IP address/FQDN** with the friendly name for credentials in the text box. Verify** the added records and click on **Save**.
+ - If you choose **Add multiple items**, you can add multiple records at once by specifying server **IP address/FQDN** with the friendly name for credentials in the text box. **Verify** the added records and click on **Save**.
- If you choose **Import CSV** _(selected by default)_, you can download a CSV template file, populate the file with the server **IP address/FQDN** and friendly name for credentials. You then import the file into the appliance, **verify** the records in the file and click on **Save**.
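For the **Import CSV** option, the file is simply one row per server plus a header. The following sketch is hypothetical: the column names are inferred from the fields described above and may not match the exact headers of the template downloaded from the appliance, so always start from that template.

```python
# Hypothetical sketch: generate a CSV of discovery sources to import into the
# appliance. Column names are assumptions based on the fields described above;
# use the template downloaded from the appliance for the authoritative headers.
import csv

servers = [
    ("10.0.0.4", "linux-root-creds"),
    ("websrv01.contoso.local", "windows-domain-creds"),
]

with open("discovery-sources.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["IP address/FQDN", "Friendly name of credentials"])
    writer.writerows(servers)
```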
-1. On clicking Save, appliance will try validating the connection to the servers added and show the **Validation status** in the table against each server.
+1. On clicking **Save**, the appliance will try to validate the connection to the added servers and show the **Validation status** in the table against each server.
- If validation fails for a server, review the error by clicking on **Validation failed** in the Status column of the table. Fix the issue, and validate again.
- To remove a server, click on **Delete**.
1. You can **revalidate** the connectivity to servers anytime before starting the discovery.
-1. Click on **Start discovery**, to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
+1. Before initiating discovery, you can choose to turn off the slider so that software inventory and agentless dependency analysis aren't performed on the added servers. You can change this option at any time.
+
+ :::image type="content" source="./media/tutorial-discover-physical/disable-slider.png" alt-text="Screenshot that shows where to disable the slider.":::
+
+### Start discovery
+
+Click on **Start discovery** to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
+
+## How discovery works
-It takes approximately 2 minutes to complete discovery of 100 servers and their metadata to appear in the Azure portal.
+* It takes approximately 2 minutes to complete discovery of 100 servers and for their metadata to appear in the Azure portal.
+* [Software inventory](how-to-discover-applications.md) (discovery of installed applications) is automatically initiated when the discovery of servers is finished.
+* The time taken for discovery of installed applications depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal.
+* During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
## Verify servers in the portal
After the discovery has been initiated, you can delete any of the added servers
## Next steps

- [Assess physical servers](tutorial-assess-physical.md) for migration to Azure VMs.
-- [Review the data](migrate-appliance.md#collected-dataphysical) that the appliance collects during discovery.
+- [Review the data](discovered-metadata.md#collected-data-for-physical-servers) that the appliance collects during discovery.
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
To start vCenter Server discovery, select **Start discovery**. After the discove
- Learn how to [assess servers to migrate to Azure VMs](./tutorial-assess-vmware-azure-vm.md).
- Learn how to [assess servers running SQL Server to migrate to Azure SQL](./tutorial-assess-sql.md).
- Learn how to [assess web apps to migrate to Azure App Service](./tutorial-assess-webapps.md).
-- Review [data the Azure Migrate appliance collects](migrate-appliance.md#collected-datavmware) during discovery.
+- Review [data the Azure Migrate appliance collects](discovered-metadata.md#collected-metadata-for-vmware-servers) during discovery.
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
## Update (February 2022)

- Azure Migrate is now supported in Azure China. [Learn more](/azure/china/overview-operations#azure-operations-in-china).
+- Public preview of at-scale software inventory and agentless dependency analysis for Hyper-V virtual machines and bare metal servers, or servers running on other clouds such as AWS and GCP.
+
## Update (December 2021)

- Support to discover, assess, and migrate VMs from multiple vCenter Servers using a single Azure Migrate appliance. [Learn more](tutorial-discover-vmware.md#start-continuous-discovery).
mysql Select Right Deployment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/select-right-deployment-type.md
When making your decision, consider the following two options:
- [Flexible Server](flexible-server/overview.md) - Azure Database for MySQL Flexible Server is a fully managed, production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Flexible Server provides better cost-optimization controls with the ability to stop/start the server and a burstable compute tier, ideal for workloads that do not need full compute capacity continuously. Flexible Server also supports reserved instances, allowing you to save up to 63% in cost, ideal for production workloads with predictable compute capacity requirements. The service supports community versions of MySQL 5.7 and 8.0 and is generally available today in a wide variety of [Azure regions](flexible-server/overview.md#azure-regions). Flexible Server is best suited for all new developments and for migration of production workloads to the Azure Database for MySQL service.
- - [Single Server](single-server-overview.md) is a fully managed database service designed for minimal customization. The single server platform is designed to handle most of the database management functions such as patching, backups, high availability, security with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability on single availability zone. It supports community version of MySQL 5.6 (retired), 5.7 and 8.0. The service is generally available today in wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/). Single servers are best suited **only for existing applications already leveraging single server**. For all new developments or migrations, Flexible Server would be the recommended deployment option. To learn about the differences between Flexible Server and Single Server deployment options, refer [select the right deployment option for you](select-right-deployment-type.md) documentation.
+ - [Single Server](single-server-overview.md) is a fully managed database service designed for minimal customization. The single server platform is designed to handle most database management functions, such as patching, backups, high availability, and security, with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability in a single availability zone. It supports community versions of MySQL 5.6 (retired), 5.7, and 8.0. The service is generally available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/). Single Server is best suited **only for existing applications already leveraging single server**. For all new developments or migrations, Flexible Server is the recommended deployment option.
- **MySQL on Azure VMs**. This option falls into the industry category of IaaS. With this service, you can run MySQL Server inside a managed virtual machine on the Azure cloud platform. All recent versions and editions of MySQL can be installed in the virtual machine.
network-watcher Network Watcher Nsg Auditing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-auditing-powershell.md
The scenario covered in this article gets the security group view for a virtual
In this scenario, you will:

- Retrieve a known good rule set
-- Retrieve a virtual machine with Rest API
+- Retrieve a virtual machine with REST API
- Get security group view for virtual machine
- Evaluate Response
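The article itself walks through these steps with PowerShell; as a rough orientation only, the sketch below calls the Network Watcher `securityGroupView` REST operation directly from Python. The `api-version`, payload shape, and long-running-operation handling are assumptions to verify against the current Network Watcher REST reference.

```python
# Hedged sketch (the article uses PowerShell): request the security group view for
# a VM via the Network Watcher REST API. The api-version and response handling are
# assumptions; check the Network Watcher REST reference for your environment.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, WATCHER = "<subscription-id>", "<network-watcher-rg>", "<network-watcher-name>"
vm_id = (f"/subscriptions/{SUB}/resourceGroups/<vm-rg>"
         "/providers/Microsoft.Compute/virtualMachines/<vm-name>")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       f"/providers/Microsoft.Network/networkWatchers/{WATCHER}/securityGroupView"
       "?api-version=2021-05-01")  # assumed version

resp = requests.post(url, json={"targetResourceId": vm_id},
                     headers={"Authorization": f"Bearer {token}"})
if resp.status_code == 202:
    # Long-running operation: poll the URL in the 'Location' header for the result.
    print("Accepted; poll", resp.headers.get("Location"))
else:
    resp.raise_for_status()
    print(resp.json())  # effective rules to compare against the known good rule set
```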
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Title: Azure Red Hat OpenShift 4 cluster support policy description: Understand support policy requirements for Red Hat OpenShift 4--++ Last updated 03/05/2021
+#Customer intent: I need to understand the Azure Red Hat OpenShift support policies for OpenShift 4.0.
-# Azure Red Hat OpenShift support policy
+# Azure Red Hat OpenShift 4.0 support policy
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cluster's supportability. Azure Red Hat OpenShift 4 allows cluster administrators to make changes to internal cluster components, but not all changes are supported. The support policy below shares what modifications violate the policy and void support from Microsoft and Red Hat.
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
## Cluster configuration requirements

* All OpenShift Cluster operators must remain in a managed state. The list of cluster operators can be returned by running `oc get clusteroperators`.
-* The cluster must have a minimum of three worker nodes and three manager nodes. Do not have taints that prevent OpenShift components to be scheduled. Do not scale the cluster workers to zero, or attempt a graceful cluster shutdown.
+* The cluster must have a minimum of three worker nodes and three manager nodes. Don't have taints that prevent OpenShift components from being scheduled. Don't scale the cluster workers to zero, or attempt a graceful cluster shutdown.
* Don't remove or modify the cluster Prometheus and Alertmanager services.
* Don't remove Service Alertmanager rules.
-* Don't remove or modify network security groups.
+* Security groups can't be modified. Any attempt to modify security groups will be reverted.
* Don't remove or modify Azure Red Hat OpenShift service logging (mdsd pods). * Don't remove or modify the 'arosvc.azurecr.io' cluster pull secret. * All cluster virtual machines must have direct outbound internet access, at least to the Azure Resource Manager (ARM) and service logging (Geneva) endpoints. No form of HTTPS proxying is supported.
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
* Don't set any unsupportedConfigOverrides options. Setting these options prevents minor version upgrades.
* The Azure Red Hat OpenShift service accesses your cluster via Private Link Service. Don't remove or modify service access.
* Non-RHCOS compute nodes aren't supported. For example, you can't use a RHEL compute node.
-* Don't place policies within your subscription or management group that prevent SREs from performing normal maintenance against the ARO cluster, such as requiring tags on the ARO RP-managed cluster resource group.
+* Don't place policies within your subscription or management group that prevent SREs from performing normal maintenance against the Azure Red Hat OpenShift cluster. For example, don't require tags on the Azure Red Hat OpenShift RP-managed cluster resource group.
## Supported virtual machine sizes

Azure Red Hat OpenShift 4 supports worker node instances on the following virtual machine sizes:
-### Master nodes
+### Control plane nodes
|Series|Size|vCPU|Memory: GiB|
|-|-|-|-|
Azure Red Hat OpenShift 4 supports worker node instances on the following virtua
|Fsv2|Standard_F32s_v2|32|64|

### Day 2 worker node
-The following instance types are supported as a day 2 operation by configuring machinesets. For information on how to create a machineset, see [Creating a machineset in Azure](https://docs.openshift.com/container-platform/4.8/machine_management/creating_machinesets/creating-machineset-azure.html).
+The following instance types are supported as a day 2 operation by configuring machine sets. For information on how to create a machine set, see [Creating a machine set on Azure](https://docs.openshift.com/container-platform/4.8/machine_management/creating_machinesets/creating-machineset-azure.html).
|Series|Size|vCPU|Memory: GiB|
purview Asset Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/asset-insights.md
In Azure Purview, you can register and scan source types. Once the scan is compl
> [!NOTE]
> After you have scanned your source types, give Asset Insights 3-8 hours to reflect the new assets. The delay may be due to high traffic in the deployment region or the size of your workload. For further information, please contact the field support team.
-1. Navigate to your Azure Purview resource in the Azure portal.
+1. Navigate to your Azure Purview account in the Azure portal.
1. On the **Overview** page, in the **Get Started** section, select the **Open Azure Purview Studio** tile.
purview Concept Best Practices Asset Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-asset-lifecycle.md
This process describes the high-level steps and suggested roles to capture and m
| 1 | [Azure Purview collections architecture and best practices](concept-best-practices-collections.md) |
| 2 | [How to create and manage collections](how-to-create-and-manage-collections.md) |
| 3 & 4 | [Understand Azure Purview access and permissions](catalog-permissions.md)
-| 5 | [Azure Purview connector overview](purview-connector-overview.md) <br> [Azure Purview private endpoint networking](catalog-private-link.md) |
+| 5 | [Azure Purview supported sources](purview-connector-overview.md) <br> [Azure Purview private endpoint networking](catalog-private-link.md) |
| 6 | [How to manage multi-cloud data sources](manage-data-sources.md) |
| 7 | [Best practices for scanning data sources in Azure Purview](concept-best-practices-scanning.md) |
| 8, 9 & 10 | [Search the data catalog](how-to-search-catalog.md) <br> [Browse the data catalog](how-to-browse-catalog.md)
purview Concept Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-security.md
As a general rule, you can use the following options to set up integration runti
|Multi-cloud | Azure runtime or self-hosted integration runtime based on data source types | Supported credential options vary based on data source types |
|Power BI tenant | Azure Runtime | Azure Purview Managed Identity |
-Use [this guide](azure-purview-connector-overview.md) to read more about each connector and their supported authentication options.
+Use [this guide](azure-purview-connector-overview.md) to read more about each source and their supported authentication options.
## Other recommendations
purview How To Create And Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-create-and-manage-collections.md
Collections in Azure Purview can be used to organize assets and sources by your
* Your own [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
### Check permissions
-In order to create and manage collections in Azure Purview, you will need to be a **Collection Admin** within Azure Purview. We can check these permissions in the [Azure Purview Studio](https://web.purview.azure.com/resource/). You can find Studio in the overview page of the Azure Purview resource in [Azure portal](https://portal.azure.com).
+In order to create and manage collections in Azure Purview, you will need to be a **Collection Admin** within Azure Purview. We can check these permissions in the [Azure Purview Studio](https://web.purview.azure.com/resource/). You can find Studio in the overview page of the Azure Purview account in [Azure portal](https://portal.azure.com).
1. Select Data Map > Collections from the left pane to open collection management page. :::image type="content" source="./media/how-to-create-and-manage-collections/find-collections.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the Collections tab selected." border="true":::
-1. Select your root collection. This is the top collection in your collection list and will have the same name as your Azure Purview resource. In the following example, it's called Contoso Azure Purview. Alternatively, if collections already exist you can select any collection where you want to create a subcollection.
+1. Select your root collection. This is the top collection in your collection list and will have the same name as your Azure Purview account. In the following example, it's called Contoso Azure Purview. Alternatively, if collections already exist you can select any collection where you want to create a subcollection.
:::image type="content" source="./media/how-to-create-and-manage-collections/select-root-collection.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the root collection highlighted." border="true":::
In order to create and manage collections in Azure Purview, you will need to be
:::image type="content" source="./media/how-to-create-and-manage-collections/role-assignments.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
-1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the Azure Purview resource, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant your permission.
+1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the Azure Purview account, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant your permission.
:::image type="content" source="./media/how-to-create-and-manage-collections/collection-admins.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::
All assigned roles apply to sources, assets, and other objects within the collec
### Restrict inheritance
-Collection permissions are inherited automatically from the parent collection. For example, any permissions on the root collection (the collection at the top of the list that has the same name as your Azure Purview resource), will be inherited by all collections below it. You can restrict inheritance from a parent collection at any time, using the restrict inherited permissions option.
+Collection permissions are inherited automatically from the parent collection. For example, any permissions on the root collection (the collection at the top of the list that has the same name as your Azure Purview account), will be inherited by all collections below it. You can restrict inheritance from a parent collection at any time, using the restrict inherited permissions option.
Once you restrict inheritance, you will need to add users directly to the restricted collection to grant them access.
purview How To Integrate With Azure Security Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-integrate-with-azure-security-products.md
Microsoft Sentinel is a scalable, cloud-native, solution for both security infor
Integrate Azure Purview with Microsoft Sentinel to gain visibility into where on your network sensitive information is stored, in a way that helps you prioritize at-risk data for protection, and understand the most critical incidents and threats to investigate in Microsoft Sentinel.
-1. Start by ingesting your Azure Purview logs into Microsoft Sentinel through a data connector.
+1. Start by ingesting your Azure Purview logs into Microsoft Sentinel through a data source.
1. Then use a Microsoft Sentinel workbook to view data such as assets scanned, classifications found, and labels applied by Azure Purview.
1. Use analytics rules to create alerts for changes within data sensitivity.
purview Quickstart Create Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-create-collection.md
Collections are Azure Purview's tool to manage ownership and access control acro
* Your own [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
## Check permissions
-In order to create and manage collections in Azure Purview, you will need to be a **Collection Admin** within Azure Purview. We can check these permissions in the [Azure Purview Studio](use-azure-purview-studio.md). You can find the studio by going to your Azure Purview resource in the [Azure portal](https://portal.azure.com), and selecting the **Open Azure Purview Studio** tile on the overview page.
+In order to create and manage collections in Azure Purview, you will need to be a **Collection Admin** within Azure Purview. We can check these permissions in the [Azure Purview Studio](use-azure-purview-studio.md). You can find the studio by going to your Azure Purview account in the [Azure portal](https://portal.azure.com), and selecting the **Open Azure Purview Studio** tile on the overview page.
1. Select Data Map > Collections from the left pane to open collection management page. :::image type="content" source="./media/quickstart-create-collection/find-collections.png" alt-text="Screenshot of Azure Purview studio opened to the Data Map, with the Collections tab selected." border="true":::
-1. Select your root collection. This is the top collection in your collection list and will have the same name as your Azure Purview resource. In our example below, it's called Contoso Azure Purview.
+1. Select your root collection. This is the top collection in your collection list and will have the same name as your Azure Purview account. In our example below, it's called Contoso Azure Purview.
:::image type="content" source="./media/quickstart-create-collection/select-root-collection.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the root collection highlighted." border="true":::
In order to create and manage collections in Azure Purview, you will need to be
:::image type="content" source="./media/quickstart-create-collection/role-assignments.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
-1. To create a collection, you will need to be in the collection admin list under role assignments. If you created the Azure Purview resource, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
+1. To create a collection, you will need to be in the collection admin list under role assignments. If you created the Azure Purview account, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
:::image type="content" source="./media/quickstart-create-collection/collection-admins.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::

## Create a collection in the portal
-To create your collection, we'll start in the [Azure Purview Studio](use-azure-purview-studio.md). You can find the studio by going to your Azure Purview resource in the Azure portal and selecting the **Open Azure Purview Studio** tile on the overview page.
+To create your collection, we'll start in the [Azure Purview Studio](use-azure-purview-studio.md). You can find the studio by going to your Azure Purview account in the Azure portal and selecting the **Open Azure Purview Studio** tile on the overview page.
1. Select Data Map > Collections from the left pane to open collection management page.
purview Reference Azure Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-azure-purview-glossary.md
An individual or group in charge of managing a data asset.
## Pattern rule A configuration that overrides how Azure Purview groups assets as resource sets and displays them within the catalog. ## Azure Purview instance
-A single Azure Purview resource.
+A single Azure Purview account.
## Registered source A source that has been added to an Azure Purview instance and is now managed as a part of the Data catalog. ## Related terms
purview Register Scan Adls Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen1.md
This article outlines the process to register an Azure Data Lake Storage Gen1 da
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
This article outlines the process to register an Azure Data Lake Storage Gen2 da
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Amazon S3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-amazon-s3.md
You'll need the name of your Amazon S3 bucket to copy it in to Azure Purview whe
![Retrieve and copy the S3 bucket URL.](./media/register-scan-amazon-s3/retrieve-bucket-url-amazon.png)
- Paste your bucket name in a secure file, and add an `s3://` prefix to it to create the value you'll need to enter when configuring your bucket as an Azure Purview resource.
+ Paste your bucket name in a secure file, and add an `s3://` prefix to it to create the value you'll need to enter when configuring your bucket as an Azure Purview account.
For example: `s3://purview-tutorial-bucket`
For example:
![Retrieve your AWS account ID.](./media/register-scan-amazon-s3/aws-locate-account-id.png)
-## Add a single Amazon S3 bucket as an Azure Purview resource
+## Add a single Amazon S3 bucket as an Azure Purview account
Use this procedure if you only have a single S3 bucket that you want to register to Azure Purview as a data source, or if you have multiple buckets in your AWS account, but do not want to register all of them to Azure Purview.
Use this procedure if you only have a single S3 bucket that you want to register
Continue with [Create a scan for one or more Amazon S3 buckets.](#create-a-scan-for-one-or-more-amazon-s3-buckets).
-## Add an AWS account as an Azure Purview resource
+## Add an AWS account as an Azure Purview account
Use this procedure if you have multiple S3 buckets in your Amazon account, and you want to register all of them as Azure Purview data sources.
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
For file types such as csv, tsv, psv, ssv, the schema is extracted when the foll
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Azure Cosmos Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-cosmos-database.md
This article outlines the process to register an Azure Cosmos database (SQL API)
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-data-explorer.md
This article outlines how to register Azure Data Explorer, and how to authentica
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Azure Files Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-files-storage-source.md
For file types such as csv, tsv, psv, ssv, the schema is extracted when the foll
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
This article outlines how to register multiple Azure sources and how to authenti
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Azure Mysql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-mysql-database.md
This article outlines how to register a database in Azure Database for MySQL, an
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Azure Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-postgresql.md
This article outlines how to register an Azure Database for PostgreSQL deployed
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Azure Sql Database Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database-managed-instance.md
This article outlines how to register and Azure SQL Database Managed Instance, a
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
This article outlines the process to register an Azure SQL data source in Azure
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-synapse-analytics.md
This article outlines how to register dedicated SQL pools(formerly SQL DW), and
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Cassandra Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-cassandra-source.md
When setting up scan, you can choose to scan an entire Cassandra instance, or sc
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-db2.md
When setting up scan, you can choose to scan an entire Db2 database, or scope th
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Erwin Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-erwin-source.md
When setting up scan, you can choose to scan an entire erwin Mart server, or sco
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Google Bigquery Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-google-bigquery-source.md
When setting up scan, you can choose to scan an entire Google BigQuery project,
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-hive-metastore-source.md
When setting up scan, you can choose to scan an entire Hive metastore database,
* You must have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* You must have an active [Azure Purview resource](create-catalog-portal.md).
+* You must have an active [Azure Purview account](create-catalog-portal.md).
* You need Data Source Administrator and Data Reader permissions to register a source and manage it in Azure Purview Studio. For more information about permissions, see [Access control in Azure Purview](catalog-permissions.md).
purview Register Scan Looker Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-looker-source.md
When setting up scan, you can choose to scan an entire Looker server, or scope t
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mysql.md
When setting up scan, you can choose to scan an entire MySQL server, or scope th
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-on-premises-sql-server.md
The supported SQL Server versions are 2005 and above. SQL Server Express LocalDB
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
When setting up scan, you can choose to scan an entire Oracle server, or scope t
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-postgresql.md
When setting up scan, you can choose to scan an entire PostgreSQL database, or s
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
This article outlines how to register a Power BI tenant, and how to authenticate
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- An active [Azure Purview resource](create-catalog-portal.md).
+- An active [Azure Purview account](create-catalog-portal.md).
- You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-salesforce.md
When setting up scan, you can choose to scan an entire Salesforce organization,
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-hana.md
When setting up scan, you can choose to scan an entire SAP HANA database, or sco
* You must have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* You must have an active [Azure Purview resource](create-catalog-portal.md).
+* You must have an active [Azure Purview account](create-catalog-portal.md).
* You need Data Source Administrator and Data Reader permissions to register a source and manage it in Azure Purview Studio. For more information about permissions, see [Access control in Azure Purview](catalog-permissions.md).
purview Register Scan Sapecc Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sapecc-source.md
When scanning SAP ECC source, Azure Purview supports:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-saps4hana-source.md
When scanning SAP S/4HANA source, Azure Purview supports:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-snowflake.md
When setting up scan, you can choose to scan one or more Snowflake database(s) e
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
Required. Add any relevant/source-specific prerequisites for connecting with thi
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-teradata-source.md
When setting up scan, you can choose to scan an entire Teradata server, or scope
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Azure Purview resource](create-catalog-portal.md).
+* An active [Azure Purview account](create-catalog-portal.md).
* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-resource-group.md
The limit for Azure Purview policies that can be enforced by Storage accounts is
Check out the blog, demo, and related tutorials:

* [Blog: resource group-level governance can significantly reduce effort](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-resource-group-level-governance-can/ba-p/3096314)
-* [Demo of data owner access policies for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
+* [Demo of data owner access policies for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
* [Fine-grain data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-storage.md
This section contains a reference of how actions in Azure Purview data policies
Check out the blog, demo, and related tutorials:

* [What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
-* [Demo of access policy for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
+* [Demo of access policy for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
* [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
purview Tutorial Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-register-scan-on-premises-sql-server.md
In this tutorial, you'll learn how to:
## Sign in to Azure Purview Studio
-To interact with Azure Purview, you'll connect to the [Azure Purview Studio](https://web.purview.azure.com/resource/) through the Azure portal. You can find the studio by going to your Azure Purview resource in the [Azure portal](https://portal.azure.com), and selecting the **Open Azure Purview Studio** tile on the overview page.
+To interact with Azure Purview, you'll connect to the [Azure Purview Studio](https://web.purview.azure.com/resource/) through the Azure portal. You can find the studio by going to your Azure Purview account in the [Azure portal](https://portal.azure.com), and selecting the **Open Azure Purview Studio** tile on the overview page.
:::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/open-purview-studio.png" alt-text="Screenshot of Azure Purview window in Azure portal, with Azure Purview Studio button highlighted." border="true":::
To create and manage collections in Azure Purview, you'll need to be a **Collect
:::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/find-collections.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the Collections tab selected." border="true":::
-1. Select your root collection. The root collection is the top collection in your collection list and will have the same name as your Azure Purview resource. In our example below, it is called Azure Purview Account.
+1. Select your root collection. The root collection is the top collection in your collection list and will have the same name as your Azure Purview account. In our example below, it is called Azure Purview Account.
:::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/select-root-collection.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the root collection highlighted." border="true":::
To create and manage collections in Azure Purview, you'll need to be a **Collect
:::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/role-assignments.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
-1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the Azure Purview resource, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
+1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the Azure Purview account, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
:::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/collection-admins.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::
sentinel Configure Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-data-transformation.md
+
+ Title: Transform or customize data at ingestion time in Microsoft Sentinel (preview)
+description: Learn about how to configure Azure Monitor's ingestion-time data transformation for use with Microsoft Sentinel.
+++ Last updated : 02/27/2022++
+# Transform or customize data at ingestion time in Microsoft Sentinel (preview)
+
+This article describes how to configure [ingestion-time data transformation and custom log ingestion](data-transformation.md) for use in Microsoft Sentinel.
+
+Ingestion-time data transformation provides customers with more control over the ingested data. Supplementing the pre-configured, hardcoded workflows that create standardized tables, ingestion-time transformation adds the capability to filter and enrich the output tables, even before running any queries. Custom log ingestion uses the Custom Log API to normalize custom-format logs so they can be ingested into certain standard tables, or alternatively, to create customized output tables with user-defined schemas for ingesting these custom logs.
+
+These two mechanisms are configured using Data Collection Rules (DCRs), either in the Log Analytics portal, or via API or ARM template. This article will help you choose which kind of DCR you need for your particular data connector, and direct you to the instructions for each scenario.
+
+## Prerequisites
+
+Before you start configuring DCRs for data transformation:
+
+- **Learn more about data transformation and DCRs in Azure Monitor and Microsoft Sentinel**. For more information, see:
+
+ - [Data collection rules in Azure Monitor](../azure-monitor/essentials/data-collection-rule-overview.md)
+ - [Custom logs API in Azure Monitor Logs (Preview)](../azure-monitor/logs/custom-logs-overview.md)
+ - [Ingestion-time transformations in Azure Monitor Logs (preview)](../azure-monitor/logs/ingestion-time-transformations.md)
+ - [Data transformation in Microsoft Sentinel (preview)](data-transformation.md)
+
+- **Verify data connector support**. Make sure that your data connectors are supported for data transformation.
+
+ In our [data connector reference](data-connectors-reference.md) article, check the section for your data connector to understand which types of DCRs are supported. Continue in this article to understand how the DCR type you select affects the rest of the ingestion and transformation process.
+
+## Determine your requirements
+
+| If you are ingesting | Ingestion-time transformation is... | Use this DCR type |
+| -- | - | -- |
+| **Custom data** through <br>the **DCR-based API** | <li>Required<li>Included in the DCR that defines the data model | Standard DCR |
+| **Built-in data types** <br>(Syslog, CommonSecurityLog, WindowsEvent, SecurityEvent) <br>using the **Azure Monitor Agent (AMA)** | <li>Optional<li>If desired, included in the DCR that defines the AMA configuration | Standard DCR |
+| **Built-in data types** <br>(Syslog, CommonSecurityLog, WindowsEvent, SecurityEvent) <br>using the legacy **Log Analytics Agent (MMA)** | <li>Optional<li>If desired, added to the DCR attached to the Workspace where this data is being ingested | Workspace transformation DCR |
+| **Built-in data types** <br>from most other sources | <li>Optional<li>If desired, added to the DCR attached to the Workspace where this data is being ingested | Workspace transformation DCR |
+| | |
+++
+## Configure your data transformation
+
+Use the following procedures from the Log Analytics and Azure Monitor documentation to configure your data transformation DCRs:
+
+[Direct ingestion through the DCR-based Custom Logs API](../azure-monitor/logs/custom-logs-overview.md):
+- Walk through a tutorial for [ingesting custom logs using the Azure portal](../azure-monitor/logs/tutorial-custom-logs.md).
+- Walk through a tutorial for [ingesting custom logs using Azure Resource Manager (ARM) templates and REST API](../azure-monitor/logs/tutorial-custom-logs-api.md).
+
+[Ingestion-time data transformation](../azure-monitor/logs/ingestion-time-transformations.md):
+- Walk through a tutorial for [configuring ingestion-time transformation using the Azure portal](../azure-monitor/logs/tutorial-ingestion-time-transformations.md).
+- Walk through a tutorial for [configuring ingestion-time transformation using Azure Resource Manager (ARM) templates and REST API](../azure-monitor/logs/tutorial-ingestion-time-transformations-api.md).
+
+[More on data collection rules](../azure-monitor/essentials/data-collection-rule-overview.md):
+- [Structure of a data collection rule in Azure Monitor (preview)](../azure-monitor/essentials/data-collection-rule-structure.md)
+- [Data collection rule transformations in Azure Monitor (preview)](../azure-monitor/essentials/data-collection-rule-transformations.md)
++
+When you're done, come back to Microsoft Sentinel to verify that your data is being ingested based on your newly configured transformation. It may take up to 60 minutes for the data transformation configurations to apply.
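+
+As a quick check, query the destination table for recent rows once the transformation has had time to apply. This is a minimal sketch; `Syslog` stands in for whichever destination table your DCR writes to.
+
+```kusto
+// Confirm that data is arriving in the destination table (replace Syslog with your table).
+Syslog
+| where TimeGenerated > ago(1h)
+| summarize IngestedRows = count() by bin(TimeGenerated, 5m)
+| order by TimeGenerated desc
+```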
++
+## Migrate to ingestion-time data transformation
+
+If you currently have custom Microsoft Sentinel data connectors, or built-in API-based data connectors, you may want to migrate to ingestion-time data transformation.
+
+Use one of the following methods:
+
+- Configure a DCR to define, from scratch, the custom ingestion from your data source to a new table. You might use this option if you want to use a new schema that doesn't have the current column suffixes, and doesn't require query-time KQL functions to standardize your data.
+
+ After you've verified that your data is properly ingested into the new table (see the verification sketch after this list), you can delete the legacy table and your legacy custom data connector.
+
+- Continue using the custom table created by your custom data connector. You might use this option if you have a lot of custom security content created for your existing table. In such cases, see [Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs](../azure-monitor/logs/custom-logs-migrate.md) in the Azure Monitor documentation.
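+
+One way to verify a migration before deleting anything is to compare recent ingestion volume in the legacy and new tables side by side. This is a minimal sketch under assumed names: `MyVendor_CL` stands for a hypothetical legacy custom table and `MyVendorEvents` for a hypothetical new DCR-based table.
+
+```kusto
+// Compare hourly row counts in the (hypothetical) legacy and new tables over the last day.
+union withsource=SourceTable MyVendor_CL, MyVendorEvents
+| where TimeGenerated > ago(24h)
+| summarize Rows = count() by SourceTable, bin(TimeGenerated, 1h)
+| order by TimeGenerated desc
+```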
+
+## Next steps
+
+For more information about data transformation and DCRs, see:
+
+- [Custom data ingestion and transformation in Microsoft Sentinel (preview)](data-transformation.md)
+- [Ingestion-time transformations in Azure Monitor Logs (preview)](../azure-monitor/logs/ingestion-time-transformations.md)
+- [Custom logs API in Azure Monitor Logs (Preview)](../azure-monitor/logs/custom-logs-overview.md)
+- [Data collection rule transformations in Azure Monitor (preview)](../azure-monitor/essentials/data-collection-rule-transformations.md)
+- [Structure of a data collection rule in Azure Monitor (preview)](../azure-monitor/essentials/data-collection-rule-structure.md)
+- [Configure data collection for the Azure Monitor agent](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md)
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
This article describes how to deploy data connectors in Microsoft Sentinel, list
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) <br><br>**Before deployment**: [Enable the Security Graph API (Optional)](#enable-the-security-graph-api-optional). <br>**After deployment**: [Assign necessary permissions to your Function App](#assign-necessary-permissions-to-your-function-app)| | **Log Analytics table(s)** | agari_bpalerts_log_CL<br>agari_apdtc_log_CL<br>agari_apdpolicy_log_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/Sentinel-agari-functionapp | | **API credentials** | <li>Client ID<li>Client Secret<li>(Optional: Graph Tenant ID, Graph Client ID, Graph Client Secret) | | **Vendor documentation/<br>installation instructions** | <li>[Quick Start](https://developers.agari.com/agari-platform/docs/quick-start)<li>[Agari Developers Site](https://developers.agari.com/agari-platform) |
The Agari connector uses an environment variable to store log access timestamps.
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** <br><br>[Configure CEF log forwarding for AI Analyst](#configure-cef-log-forwarding-for-ai-analyst) |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | [Darktrace](https://customerportal.darktrace.com/) | | | |
Configure Darktrace to forward Syslog messages in CEF format to your Azure works
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** <br><br>[Configure CEF log forwarding for AI Vectra Detect](#configure-cef-log-forwarding-for-ai-vectra-detect)|
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | [Vectra AI](https://www.vectra.ai/support) | | | |
For more information, see the Cognito Detect Syslog Guide, which can be download
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog**, with a Kusto function parser |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | AkamaiSIEMEvent | | **Kusto function URL:** | https://aka.ms/Sentinel-akamaisecurityevents-parser | | **Vendor documentation/<br>installation instructions** | [Configure Security Information and Event Management (SIEM) integration](https://developer.akamai.com/tools/integrations/siem)<br>[Set up a CEF connector](https://developer.akamai.com/tools/integrations/siem/siem-cef-connector). |
For more information, see the Cognito Detect Syslog Guide, which can be download
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | alcide_kaudit_activity_1_CL - Alcide kAudit activity logs<br>alcide_kaudit_detections_1_CL - Alcide kAudit detections<br>alcide_kaudit_selections_count_1_CL - Alcide kAudit activity counts<br>alcide_kaudit_selections_details_1_CL - Alcide kAudit activity details |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [Alcide kAudit installation guide](https://awesomeopensource.com/project/alcideio/kaudit?categoryPage=29#before-installing-alcide-kaudit) | | **Supported by** | [Alcide](https://www.alcide.io/company/contact-us/) | | | |
For more information, see the Cognito Detect Syslog Guide, which can be download
| | | | **Data ingestion method** | [**Log Analytics agent - custom logs**](connect-custom-logs.md) <br><br>[Extra configuration for Alsid](#extra-configuration-for-alsid)| | **Log Analytics table(s)** | AlsidForADLog_CL |
+| **DCR support** | Not currently supported |
| **Kusto function alias:** | afad_parser | | **Kusto function URL:** | https://aka.ms/Sentinel-alsidforad-parser | | **Supported by** | [Alsid](https://www.alsid.com/contact-us/) |
For more information, see the Cognito Detect Syslog Guide, which can be download
| | | | **Data ingestion method** | **Azure service-to-service integration: <br>[Connect Microsoft Sentinel to Amazon Web Services to ingest AWS service log data](connect-aws.md?tabs=ct)** (Top connector article) | | **Log Analytics table(s)** | AWSCloudTrail |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft | | | |
For more information, see the Cognito Detect Syslog Guide, which can be download
| | | | **Data ingestion method** | **Azure service-to-service integration: <br>[Connect Microsoft Sentinel to Amazon Web Services to ingest AWS service log data](connect-aws.md?tabs=s3)** (Top connector article) | | **Log Analytics table(s)** | AWSCloudTrail<br>AWSGuardDuty<br>AWSVPCFlow |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft | | | |
For more information, see the Cognito Detect Syslog Guide, which can be download
| | | | **Data ingestion method** | [**Log Analytics agent - custom logs**](connect-custom-logs.md) | | **Log Analytics table(s)** | ApacheHTTPServer_CL |
+| **DCR support** | Not currently supported |
| **Kusto function alias:** | ApacheHTTPServer | | **Kusto function URL:** | https://aka.ms/Sentinel-apachehttpserver-parser | | **Custom log sample file:** | access.log or error.log |
For more information, see the Cognito Detect Syslog Guide, which can be download
| | | | **Data ingestion method** | [**Log Analytics agent - custom logs**](connect-custom-logs.md) | | **Log Analytics table(s)** | Tomcat_CL |
+| **DCR support** | Not currently supported |
| **Kusto function alias:** | TomcatEvent | | **Kusto function URL:** | https://aka.ms/Sentinel-ApacheTomcat-parser | | **Custom log sample file:** | access.log or error.log |
For more information, see the Cognito Detect Syslog Guide, which can be download
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog**, with a Kusto function parser |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | ArubaClearPass | | **Kusto function URL:** | https://aka.ms/Sentinel-arubaclearpass-parser | | **Vendor documentation/<br>installation instructions** | Follow Aruba's instructions to [configure ClearPass](https://www.arubanetworks.com/techdocs/ClearPass/6.7/PolicyManager/Content/CPPM_UserGuide/Admin/syslogExportFilters_add_syslog_filter_general.htm). |
For more information, see the Cognito Detect Syslog Guide, which can be download
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) | | **Log Analytics table(s)** | Confluence_Audit_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/Sentinel-confluenceauditapi-functionapp | | **API credentials** | <li>ConfluenceAccessToken<li>ConfluenceUsername<li>ConfluenceHomeSiteName | | **Vendor documentation/<br>installation instructions** | <li>[API Documentation](https://developer.atlassian.com/cloud/confluence/rest/api-group-audit/)<li>[Requirements and instructions for obtaining credentials](https://developer.atlassian.com/cloud/confluence/rest/intro/#auth)<li>[View the audit log](https://support.atlassian.com/confluence-cloud/docs/view-the-audit-log/) |
For more information, see the Cognito Detect Syslog Guide, which can be download
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) | | **Log Analytics table(s)** | Jira_Audit_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/Sentinel-jiraauditapi-functionapp | | **API credentials** | <li>JiraAccessToken<li>JiraUsername<li>JiraHomeSiteName | | **Vendor documentation/<br>installation instructions** | <li>[API Documentation - Audit records](https://developer.atlassian.com/cloud/jira/platform/rest/v3/api-group-audit-records/)<li>[Requirements and instructions for obtaining credentials](https://developer.atlassian.com/cloud/jira/platform/rest/v3/intro/#authentication) |
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Data ingestion method** | **Azure service-to-service integration: <br>[Connect Azure Active Directory data to Microsoft Sentinel](connect-azure-active-directory.md)** (Top connector article) | | **License prerequisites/<br>Cost information** | <li>Azure Active Directory P1 or P2 license for sign-in logs<li>Any Azure AD license (Free/O365/P1/P2) for other log types<br>Other charges may apply | | **Log Analytics table(s)** | SigninLogs<br>AuditLogs<br>AADNonInteractiveUserSignInLogs<br>AADServicePrincipalSignInLogs<br>AADManagedIdentitySignInLogs<br>AADProvisioningLogs<br>ADFSSignInLogs |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft | | | |
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-azure-windows-microsoft-services.md#api-based-connections)** | | **License prerequisites/<br>Cost information** | [Azure AD Premium P2 subscription](https://azure.microsoft.com/pricing/details/active-directory/)<br>Other charges may apply | | **Log Analytics table(s)** | SecurityAlert |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft | | | |
For more information, see the Cognito Detect Syslog Guide, which can be download
| | | | **Data ingestion method** | **Azure service-to-service integration: <br>[Diagnostic settings-based connections, managed by Azure Policy](connect-azure-windows-microsoft-services.md?tabs=AP#diagnostic-settings-based-connections)**<br><br>[Upgrade to the new Azure Activity connector](#upgrade-to-the-new-azure-activity-connector) | | **Log Analytics table(s)** | AzureActivity |
+| **DCR support** | Not currently supported |
| **Supported by** | Microsoft | | | |
Before setting up the new Azure Activity log connector, you must disconnect the
| **Data ingestion method** | **Azure service-to-service integration: <br>[Diagnostic settings-based connections](connect-azure-windows-microsoft-services.md?tabs=SA#diagnostic-settings-based-connections)** | | **License prerequisites/<br>Cost information** | <li>You must have a configured [Azure DDoS Standard protection plan](../ddos-protection/manage-ddos-protection.md#create-a-ddos-protection-plan).<li>You must have a configured [virtual network with Azure DDoS Standard enabled](../ddos-protection/manage-ddos-protection.md#enable-ddos-protection-for-a-new-virtual-network)<br>Other charges may apply | | **Log Analytics table(s)** | AzureDiagnostics |
+| **DCR support** | Not currently supported |
| **Recommended diagnostics** | DDoSProtectionNotifications<br>DDoSMitigationFlowLogs<br>DDoSMitigationReports | | **Supported by** | Microsoft | | | |
See [Microsoft Defender for Cloud](#microsoft-defender-for-cloud).
| | | | **Data ingestion method** | **Azure service-to-service integration: <br>[Diagnostic settings-based connections](connect-azure-windows-microsoft-services.md?tabs=SA#diagnostic-settings-based-connections)** | | **Log Analytics table(s)** | AzureDiagnostics |
+| **DCR support** | Not currently supported |
| **Recommended diagnostics** | AzureFirewallApplicationRule<br>AzureFirewallNetworkRule<br>AzureFirewallDnsProxy | | **Supported by** | Microsoft | | | |
See [Microsoft Defender for Cloud](#microsoft-defender-for-cloud).
| | | | **Data ingestion method** | [**Azure service-to-service integration**](connect-azure-windows-microsoft-services.md) | | **Log Analytics table(s)** | InformationProtectionLogs_CL |
+| **DCR support** | Not currently supported |
| **Supported by** | Microsoft | | | |
For more information, see the [Azure Information Protection documentation](/azur
| | | | **Data ingestion method** | **Azure service-to-service integration: <br>[Diagnostic settings-based connections, managed by Azure Policy](connect-azure-windows-microsoft-services.md?tabs=AP#diagnostic-settings-based-connections)** | | **Log Analytics table(s)** | KeyVaultData |
+| **DCR support** | Not currently supported |
| **Supported by** | Microsoft | | | |
For more information, see the [Azure Information Protection documentation](/azur
| | | | **Data ingestion method** | **Azure service-to-service integration: <br>[Diagnostic settings-based connections, managed by Azure Policy](connect-azure-windows-microsoft-services.md?tabs=AP#diagnostic-settings-based-connections)** | | **Log Analytics table(s)** | kube-apiserver<br>kube-audit<br>kube-audit-admin<br>kube-controller-manager<br>kube-scheduler<br>cluster-autoscaler<br>guard |
+| **DCR support** | Not currently supported |
| **Supported by** | Microsoft | | | |
For more information, see the [Azure Information Protection documentation](/azur
| | | | **Data ingestion method** | **Azure service-to-service integration: <br>[Diagnostic settings-based connections](connect-azure-windows-microsoft-services.md?tabs=AP#diagnostic-settings-based-connections)**<br><br>For more information, see [Tutorial: Integrate Microsoft Sentinel and Azure Purview](purview-solution.md). | | **Log Analytics table(s)** | PurviewDataSensitivityLogs |
+| **DCR support** | Not currently supported |
| **Supported by** | Microsoft | | | |
For more information, see the [Azure Information Protection documentation](/azur
| | | | **Data ingestion method** | **Azure service-to-service integration: <br>[Diagnostic settings-based connections, managed by Azure Policy](connect-azure-windows-microsoft-services.md?tabs=AP#diagnostic-settings-based-connections)** <br><br>Also available in the [Azure SQL and Microsoft Sentinel for SQL PaaS solutions](sentinel-solutions-catalog.md#azure)| | **Log Analytics table(s)** | SQLSecurityAuditEvents<br>SQLInsights<br>AutomaticTuning<br>QueryStoreWaitStatistics<br>Errors<br>DatabaseWaitStatistics<br>Timeouts<br>Blocks<br>Deadlocks<br>Basic<br>InstanceAndAppAdvanced<br>WorkloadManagement<br>DevOpsOperationsAudit |
+| **DCR support** | Not currently supported |
| **Supported by** | Microsoft | | | |
For more information, see the [Azure Information Protection documentation](/azur
| **Data ingestion method** | **Azure service-to-service integration: <br>[Diagnostic settings-based connections](connect-azure-windows-microsoft-services.md?tabs=SA#diagnostic-settings-based-connections)**<br><br>[Notes about storage account diagnostic settings configuration](#notes-about-storage-account-diagnostic-settings-configuration) | | **Log Analytics table(s)** | StorageBlobLogs<br>StorageQueueLogs<br>StorageTableLogs<br>StorageFileLogs | | **Recommended diagnostics** | **Account resource**<li>Transaction<br>**Blob/Queue/Table/File resources**<br><li>StorageRead<li>StorageWrite<li>StorageDelete<li>Transaction |
+| **DCR support** | Not currently supported |
| **Supported by** | Microsoft | | | |
You will only see the storage types that you actually have defined resources for
| | | | **Data ingestion method** | **Azure service-to-service integration: <br>[Diagnostic settings-based connections](connect-azure-windows-microsoft-services.md?tabs=SA#diagnostic-settings-based-connections)** | | **Log Analytics table(s)** | AzureDiagnostics |
+| **DCR support** | Not currently supported |
| **Recommended diagnostics** | **Application Gateway**<br><li>ApplicationGatewayAccessLog<li>ApplicationGatewayFirewallLog<br>**Front Door**<li>FrontdoorAccessLog<li>FrontdoorWebApplicationFirewallLog<br>**CDN WAF policy**<li>WebApplicationFirewallLogs | | **Supported by** | Microsoft | | | |
You will only see the storage types that you actually have defined resources for
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md) |
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | CGFWFirewallActivity | | **Kusto function URL:** | https://aka.ms/Sentinel-barracudacloudfirewall-function | | **Vendor documentation/<br>installation instructions** | https://aka.ms/Sentinel-barracudacloudfirewall-connector |
See Barracuda instructions - note the assigned facilities for the different type
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | BetterMTDDeviceLog_CL<br>BetterMTDIncidentLog_CL<br>BetterMTDAppLog_CL<br>BetterMTDNetflowLog_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [BETTER MTD Documentation](https://mtd-docs.bmobi.net/integrations/azure-sentinel/setup-integration)<br><br>Threat Policy setup, which defines the incidents that are reported to Microsoft Sentinel:<br><ol><li>In **Better MTD Console**, select **Policies** on the side bar.<li>Select the **Edit** button of the Policy that you are using.<li>For each Incident type that you want to be logged, go to **Send to Integrations** field and select **Sentinel**. | | **Supported by** | [Better Mobile](mailto:support@better.mobi) | | | | - ## Beyond Security beSECURE | Connector attribute | Description | | | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | beSECURE_ScanResults_CL<br>beSECURE_ScanEvents_CL<br>beSECURE_Audit_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | Access the **Integration** menu:<br><ol><li>Select the **More** menu option.<li>Select **Server**<li>Select **Integration**<li>Enable Microsoft Sentinel<li>Paste the **Workspace ID** and **Primary Key** values in the beSECURE configuration.<li>Select **Modify**. | | **Supported by** | [Beyond Security](https://beyondsecurity.freshdesk.com/support/home) | | | |
See Barracuda instructions - note the assigned facilities for the different type
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md) |
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | CylancePROTECT | | **Kusto function URL:** | https://aka.ms/Sentinel-cylanceprotect-parser | | **Vendor documentation/<br>installation instructions** | [Cylance Syslog Guide](https://docs.blackberry.com/content/dam/docs-blackberry-com/release-pdfs/en/cylance-products/syslog-guides/Cylance%20Syslog%20Guide%20v2.0%20rev12.pdf) |
See Barracuda instructions - note the assigned facilities for the different type
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog**, with a Kusto function parser |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | SymantecDLP | | **Kusto function URL:** | https://aka.ms/Sentinel-symantecdlp-parser | | **Vendor documentation/<br>installation instructions** | [Configuring the Log to a Syslog Server action](https://help.symantec.com/cs/DLP15.7/DLP/v27591174_v133697641/Configuring-the-Log-to-a-Syslog-Server-action?locale=EN_US) |
See Barracuda instructions - note the assigned facilities for the different type
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** <br><br>Available from the [Check Point solution](sentinel-solutions-catalog.md#check-point)|
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Log Exporter - Check Point Log Export](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk122323) | | **Supported by** | [Check Point](https://www.checkpoint.com/support-services/contact-support/) | | | |
See Barracuda instructions - note the assigned facilities for the different type
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** <br><br>Available in the [Cisco ASA solution](sentinel-solutions-catalog.md#cisco)|
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Cisco ASA Series CLI Configuration Guide](https://www.cisco.com/c/en/us/support/docs/security/pix-500-series-security-appliances/63884-config-asa-00.html) | | **Supported by** | Microsoft | | | |
See Barracuda instructions - note the assigned facilities for the different type
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** <br><br>[Extra configuration for Cisco Firepower eStreamer](#extra-configuration-for-cisco-firepower-estreamer)|
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
+| **Vendor documentation/<br>installation instructions** | [eStreamer eNcore for Sentinel Operations Guide](https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSentinelOperationsGuide_409.html) | | **Supported by** | [Cisco](https://www.cisco.com/c/en/us/support/index.html) | | |
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md)<br><br> Available in the [Cisco ISE solution](sentinel-solutions-catalog.md#cisco)|
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | CiscoMeraki | | **Kusto function URL:** | https://aka.ms/Sentinel-ciscomeraki-parser | | **Vendor documentation/<br>installation instructions** | [Meraki Device Reporting documentation](https://documentation.meraki.com/General_Administration/Monitoring_and_Reporting/Meraki_Device_Reporting_-_Syslog%2C_SNMP_and_API) |
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) <br><br> Available in the [Cisco Umbrella solution](sentinel-solutions-catalog.md#cisco)| | **Log Analytics table(s)** | Cisco_Umbrella_dns_CL<br>Cisco_Umbrella_proxy_CL<br>Cisco_Umbrella_ip_CL<br>Cisco_Umbrella_cloudfirewall_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/Sentinel-CiscoUmbrellaConn-functionapp | | **API credentials** | <li>AWS Access Key ID<li>AWS Secret Access Key<li>AWS S3 Bucket Name | | **Vendor documentation/<br>installation instructions** | <li>[Logging to Amazon S3](https://docs.umbrella.com/deployment-umbrella/docs/log-management#section-logging-to-amazon-s-3) |
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md) |
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | CiscoUCS | | **Kusto function URL:** | https://aka.ms/Sentinel-ciscoucs-function | | **Vendor documentation/<br>installation instructions** | [Set up Syslog for Cisco UCS - Cisco](https://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-manager/110265-setup-syslog-for-ucs.html#configsremotesyslog) |
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | CitrixAnalytics_SAlerts_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [Connect Citrix to Microsoft Sentinel](https://docs.citrix.com/en-us/security-analytics/getting-started-security/siem-integration/azure-sentinel-integration.html) | | **Supported by** | [Citrix Systems](https://www.citrix.com/support/) | | | |
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | To configure WAF, see [Support WIKI - WAF Configuration with NetScaler](https://support.citrix.com/article/CTX234174).<br><br>To configure CEF logs, see [CEF Logging Support in the Application Firewall](https://support.citrix.com/article/CTX136146).<br><br>To forward the logs to proxy, see [Configuring Citrix ADC appliance for audit logging](https://docs.citrix.com/en-us/citrix-adc/current-release/system/audit-logging/configuring-audit-logging.html). | | **Supported by** | [Citrix Systems](https://www.citrix.com/support/) | | | |
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | CognniIncidents_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | **Connect to Cognni**<br><ol><li>Go to [Cognni integrations page](https://intelligence.cognni.ai/integrations).<li>Select **Connect** on the Microsoft Sentinel box.<li>Paste **workspaceId** and **sharedKey** (Primary Key) to the fields on Cognni's integrations screen.<li>Select the **Connect** button to complete the configuration. | | **Supported by** | [Cognni](https://cognni.ai/contact-support/) | | |
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Security Information and Event Management (SIEM) Applications](https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/Latest/en/Content/PASIMP/DV-Integrating-with-SIEM-Applications.htm) | | **Supported by** | [CyberArk](https://www.cyberark.com/customer-support/) | | | |
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | CyberpionActionItems_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [Get a Cyberpion subscription](https://azuremarketplace.microsoft.com/en/marketplace/apps/cyberpion1597832716616.cyberpion)<br>[Integrate Cyberpion security alerts into Microsoft Sentinel](https://www.cyberpion.com/resource-center/integrations/azure-sentinel/) | | **Supported by** | [Cyberpion](https://www.cyberpion.com/) | | | |
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-azure-windows-microsoft-services.md#api-based-connections)** <br><br> Also available as part of the [Microsoft Sentinel 4 Dynamics 365 solution](sentinel-solutions-catalog.md#azure)| | **License prerequisites/<br>Cost information** | <li>[Microsoft Dynamics 365 production license](/office365/servicedescriptions/microsoft-dynamics-365-online-service-description). Not available for sandbox environments.<li>Microsoft 365 Enterprise [E3 or E5](/power-platform/admin/enable-use-comprehensive-auditing#requirements) subscription is required to do Activity Logging.<br>Other charges may apply | | **Log Analytics table(s)** | Dynamics365Activity |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft | | | |
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md)<br><br>[Create an API user](#create-an-api-user) | | **Log Analytics table(s)** | ESETEnterpriseInspector_CL |
+| **DCR support** | Not currently supported |
| **API credentials** | <li>EEI Username<li>EEI Password<li>Base URL | | **Vendor documentation/<br>installation instructions** | <li>[ESET Enterprise Inspector REST API documentation](https://help.eset.com/eei/1.5/en-US/api.html) | | **Connector deployment instructions** | [Single-click deployment](connect-azure-functions-template.md?tabs=ARM) via Azure Resource Manager (ARM) template |
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| | | | **Data ingestion method** | [**Syslog**](connect-syslog.md)<br><br>[Configure the ESET SMC logs to be collected](#configure-the-eset-smc-logs-to-be-collected) <br>[Configure OMS agent to pass Eset SMC data in API format](#configure-oms-agent-to-pass-eset-smc-data-in-api-format)<br>[Change OMS agent configuration to catch tag oms.api.eset and parse structured data](#change-oms-agent-configuration-to-catch-tag-omsapieset-and-parse-structured-data)<br>[Disable automatic configuration and restart agent](#disable-automatic-configuration-and-restart-agent)| | **Log Analytics table(s)** | eset_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [ESET Syslog server documentation](https://help.eset.com/esmc_admin/70/en-US/admin_server_settings_syslog.html) | | **Supported by** | [ESET](https://support.eset.com/en) | | | |
For more information, see the Eset documentation.
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md) |
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | ExabeamEvent | | **Kusto function URL:** | https://aka.ms/Sentinel-Exabeam-parser | | **Vendor documentation/<br>installation instructions** | [Configure Advanced Analytics system activity notifications](https://docs.exabeam.com/en/advanced-analytics/i54/advanced-analytics-administration-guide/113254-configure-advanced-analytics.html#UUID-7ce5ff9d-56aa-93f0-65de-c5255b682a08) |
For more information, see the Eset documentation.
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [ExtraHop Detection SIEM Connector](https://aka.ms/asi-syslog-extrahop-forwarding) | | **Supported by** | [ExtraHop](https://www.extrahop.com/support/) | | | |
For more information, see the Eset documentation.
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | F5Telemetry_LTM_CL<br>F5Telemetry_system_CL<br>F5Telemetry_ASM_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [Integrating the F5 BIG-IP with Microsoft Sentinel](https://aka.ms/F5BigIp-Integrate) | | **Supported by** | [F5 Networks](https://support.f5.com/csp/home) | | | |+ ## F5 Networks (ASM) | Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Configuring Application Security Event Logging](https://aka.ms/asi-syslog-f5-forwarding) | | **Supported by** | [F5 Networks](https://support.f5.com/csp/home) | | | |
For more information, see the Eset documentation.
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Forcepoint CASB and Microsoft Sentinel](https://forcepoint.github.io/docs/casb_and_azure_sentinel/) | | **Supported by** | [Forcepoint](https://support.forcepoint.com/) | | | |
For more information, see the Eset documentation.
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Forcepoint Cloud Security Gateway and Microsoft Sentinel](https://forcepoint.github.io/docs/csg_and_sentinel/) | | **Supported by** | [Forcepoint](https://support.forcepoint.com/) | | | |
For more information, see the Eset documentation.
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | ForcepointDLPEvents_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [Forcepoint Data Loss Prevention and Microsoft Sentinel](https://forcepoint.github.io/docs/dlp_and_azure_sentinel/) | | **Supported by** | [Forcepoint](https://support.forcepoint.com/) | | | |
For more information, see the Eset documentation.
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Forcepoint Next-Gen Firewall and Microsoft Sentinel](https://forcepoint.github.io/docs/ngfw_and_azure_sentinel/) | | **Supported by** | [Forcepoint](https://support.forcepoint.com/) | | | |
For more information, see the Eset documentation.
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Install this first! ForgeRock Common Audit (CAUD) for Microsoft Sentinel](https://github.com/javaservlets/SentinelAuditEventHandler) | | **Supported by** | [ForgeRock](https://www.forgerock.com/support) | | | |
For more information, see the Eset documentation.
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** <br><br>[Send Fortinet logs to the log forwarder](#send-fortinet-logs-to-the-log-forwarder) <br><br>Available in the [Fortinet Fortigate solution](sentinel-solutions-catalog.md#fortinet-fortigate)|
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Fortinet Document Library](https://aka.ms/asi-syslog-fortinet-fortinetdocumentlibrary)<br>Choose your version and use the *Handbook* and *Log Message Reference* PDFs. | | **Supported by** | [Fortinet](https://support.fortinet.com/) | | | |
end
| | | | **Data ingestion method** |[**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md)<br><br>Only available after installing the [Continuous Threat Monitoring for GitHub](sentinel-solutions-catalog.md#github) solution. | | **Log Analytics table(s)** | GitHubAuditLogPolling_CL |
+| **DCR support** | Not currently supported |
| **API credentials** | GitHub access token | | **Connector deployment instructions** | [Extra configuration for the GitHub connector](#extra-configuration-for-the-github-connector) | | **Supported by** | Microsoft |
end
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md)<br><br>[Extra configuration for the Google Reports API](#extra-configuration-for-the-google-reports-api) | | **Log Analytics table(s)** | GWorkspace_ReportsAPI_admin_CL<br>GWorkspace_ReportsAPI_calendar_CL<br>GWorkspace_ReportsAPI_drive_CL<br>GWorkspace_ReportsAPI_login_CL<br>GWorkspace_ReportsAPI_mobile_CL<br>GWorkspace_ReportsAPI_token_CL<br>GWorkspace_ReportsAPI_user_accounts_CL<br> |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/Sentinel-GWorkspaceReportsAPI-functionapp | | **API credentials** | <li>GooglePickleString | | **Vendor documentation/<br>installation instructions** | <li>[API Documentation](https://developers.google.com/admin-sdk/reports/v1/reference/activities)<li>Get credentials at [Perform Google Workspace Domain-Wide Delegation of Authority](https://developers.google.com/admin-sdk/reports/v1/guides/delegation)<li>[Convert token.pickle file to pickle string](https://aka.ms/sentinel-GWorkspaceReportsAPI-functioncode) |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Illusive Networks Admin Guide](https://support.illusivenetworks.com/hc/en-us/sections/360002292119-Documentation-by-Version) | | **Supported by** | [Illusive Networks](https://www.illusivenetworks.com/technical-support/) | | | |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** <br><br>Available in the [Imperva Cloud WAF solution](sentinel-solutions-catalog.md#imperva)|
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Steps for Enabling Imperva WAF Gateway Alert Logging to Microsoft Sentinel](https://community.imperva.com/blogs/craig-burlingame1/2020/11/13/steps-for-enabling-imperva-waf-gateway-alert) | | **Supported by** | [Imperva](https://www.imperva.com/support/technical-support/) | | | |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md)<br><br> Available in the [InfoBlox Threat Defense solution](sentinel-solutions-catalog.md#infoblox) |
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | InfobloxNIOS | | **Kusto function URL:** | https://aka.ms/sentinelgithubparsersinfoblox | | **Vendor documentation/<br>installation instructions** | [NIOS SNMP and Syslog Deployment Guide](https://www.infoblox.com/wp-content/uploads/infoblox-deployment-guide-slog-and-snmp-configuration-for-nios.pdf) |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md) |
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | JuniperSRX | | **Kusto function URL:** | https://aka.ms/Sentinel-junipersrx-parser | | **Vendor documentation/<br>installation instructions** | [Configure Traffic Logging (Security Policy Logs) for SRX Branch Devices](https://kb.juniper.net/InfoCenter/index?page=content&id=KB16509&actp=METADATA)<br>[Configure System Logging](https://kb.juniper.net/InfoCenter/index?page=content&id=kb16502) |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) <br><br>Only available after installing the [Lookout Mobile Threat Defense for Microsoft Sentinel](sentinel-solutions-catalog.md#lookout) solution | | **Log Analytics table(s)** | Lookout_CL |
+| **DCR support** | Not currently supported |
| **API credentials** | <li>Lookout Application Key | | **Vendor documentation/<br>installation instructions** | <li>[Installation Guide](https://esupport.lookout.com/s/article/Lookout-with-Azure-Sentinel) (sign-in required)<li>[API Documentation](https://esupport.lookout.com/s/article/Mobile-Risk-API-Guide) (sign-in required)<li>[Lookout Mobile Endpoint Security](https://www.lookout.com/products/mobile-endpoint-security) | | **Supported by** | [Lookout](https://www.lookout.com/support) |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Data ingestion method** | **Azure service-to-service integration:<br>[Connect data from Microsoft 365 Defender to Microsoft Sentinel](connect-microsoft-365-defender.md)** (Top connector article) | | **License prerequisites/<br>Cost information** | [Valid license for Microsoft 365 Defender](/microsoft-365/security/mtp/prerequisites) | **Log Analytics table(s)** | **Alerts:**<br>SecurityAlert<br>SecurityIncident<br>**Defender for Endpoint events:**<br>DeviceEvents<br>DeviceFileEvents<br>DeviceImageLoadEvents<br>DeviceInfo<br>DeviceLogonEvents<br>DeviceNetworkEvents<br>DeviceNetworkInfo<br>DeviceProcessEvents<br>DeviceRegistryEvents<br>DeviceFileCertificateInfo<br>**Defender for Office 365 events:**<br>EmailAttachmentInfo<br>EmailUrlInfo<br>EmailEvents<br>EmailPostDeliveryEvents |
+| **DCR support** | Not currently supported |
| **Supported by** | Microsoft | | | |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-azure-windows-microsoft-services.md#api-based-connections)** | | **License prerequisites/<br>Cost information** | [Valid license for Microsoft Defender for Endpoint deployment](/microsoft-365/security/defender-endpoint/production-deployment) | **Log Analytics table(s)** | SecurityAlert |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft | | | |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| | | | **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-azure-windows-microsoft-services.md#api-based-connections)** | | **Log Analytics table(s)** | SecurityAlert |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft | | | |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| | | | **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-azure-windows-microsoft-services.md#api-based-connections)** | | **Log Analytics table(s)** | SecurityAlert |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft | | | |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-azure-windows-microsoft-services.md#api-based-connections)** | | **License prerequisites/<br>Cost information** | You must have a valid license for [Office 365 ATP Plan 2](/microsoft-365/security/office-365-security/office-365-atp#office-365-atp-plan-1-and-plan-2) | **Log Analytics table(s)** | SecurityAlert |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft | | | |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-azure-windows-microsoft-services.md#api-based-connections)** | | **License prerequisites/<br>Cost information** | Your Office 365 deployment must be on the same tenant as your Microsoft Sentinel workspace.<br>Other charges may apply. | | **Log Analytics table(s)** | OfficeActivity |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft | | | |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md), with [ASIM parsers](normalization-about-parsers.md) based on Kusto functions |
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft | | | |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog**, with a Kusto function parser |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | Morphisec | | **Kusto function URL** | https://aka.ms/Sentinel-Morphiescutpp-parser | | **Supported by** | [Morphisec](https://support.morphisec.com/support/home) |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) | | **Log Analytics table(s)** | Netskope_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/Sentinel-netskope-functioncode | | **API credentials** | <li>Netskope API Token | | **Vendor documentation/<br>installation instructions** | <li>[Netskope Cloud Security Platform](https://www.netskope.com/platform)<li>[Netskope API Documentation](https://innovatechcloud.goskope.com/docs/Netskope_Help/en/rest-api-v1-overview.html)<li>[Obtain an API Token](https://innovatechcloud.goskope.com/docs/Netskope_Help/en/rest-api-v2-overview.html) |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| | | | **Data ingestion method** | [**Log Analytics agent - custom logs**](connect-custom-logs.md) | | **Log Analytics table(s)** | NGINX_CL |
+| **DCR support** | Not currently supported |
| **Kusto function alias:** | NGINXHTTPServer | | **Kusto function URL** | https://aka.ms/Sentinel-NGINXHTTP-parser | | **Vendor documentation/<br>installation instructions** | [Module ngx_http_log_module](https://nginx.org/en/docs/http/ngx_http_log_module.html) |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | BSMmacOS_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [NXLog Microsoft Sentinel User Guide](https://nxlog.co/documentation/nxlog-user-guide/sentinel.html) | | **Supported by** | [NXLog](https://nxlog.co/community-forum) | | | |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | DNS_Logs_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [NXLog Microsoft Sentinel User Guide](https://nxlog.co/documentation/nxlog-user-guide/sentinel.html) | | **Supported by** | [NXLog](https://nxlog.co/community-forum) | | | |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | LinuxAudit_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [NXLog Microsoft Sentinel User Guide](https://nxlog.co/documentation/nxlog-user-guide/sentinel.html) | | **Supported by** | [NXLog](https://nxlog.co/community-forum) | | | |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) | | **Log Analytics table(s)** | Okta_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/sentineloktaazurefunctioncodev2 | | **API credentials** | <li>API Token | | **Vendor documentation/<br>installation instructions** | <li>[Okta System Log API Documentation](https://developer.okta.com/docs/reference/api/system-log/)<li>[Create an API token](https://developer.okta.com/docs/guides/create-an-api-token/create-the-token/)<li>[Connect Okta SSO to Microsoft Sentinel](#okta-single-sign-on-preview) |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog**, with a Kusto lookup and enrichment function<br><br>[Configure Onapsis to send CEF logs to the log forwarder](#configure-onapsis-to-send-cef-logs-to-the-log-forwarder) |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | incident_lookup | | **Kusto function URL** | https://aka.ms/Sentinel-Onapsis-parser | | **Supported by** | [Onapsis](https://onapsis.force.com/s/login/) |
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [One Identity Safeguard for Privileged Sessions Administration Guide](https://aka.ms/sentinel-cef-oneidentity-forwarding) | | **Supported by** | [One Identity](https://support.oneidentity.com/) | | | |
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| | | | **Data ingestion method** | [**Log Analytics agent - custom logs**](connect-custom-logs.md) | | **Log Analytics table(s)** | OracleWebLogicServer_CL |
+| **DCR support** | Not currently supported |
| **Kusto function alias:** | OracleWebLogicServerEvent | | **Kusto function URL:** | https://aka.ms/Sentinel-OracleWebLogicServer-parser | | **Vendor documentation/<br>installation instructions** | [Oracle WebLogic Server documentation](https://docs.oracle.com/en/middleware/standalone/weblogic-server/14.1.1.0/https://docsupdatetracker.net/index.html) |
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | OrcaAlerts_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [Microsoft Sentinel integration](https://orcasecurity.zendesk.com/hc/en-us/articles/360043941992-Azure-Sentinel-configuration) | | **Supported by** | [Orca Security](http://support.orca.security/) | | | |
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog**, with a Kusto function parser |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | OSSECEvent | | **Kusto function URL:** | https://aka.ms/Sentinel-OSSEC-parser | | **Vendor documentation/<br>installation instructions** | [OSSEC documentation](https://www.ossec.net/docs/)<br>[Sending alerts via syslog](https://www.ossec.net/docs/docs/manual/output/syslog-output.html) |
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** <br><br>Also available in the [Palo Alto PAN-OS and Prisma solutions](sentinel-solutions-catalog.md#palo-alto)|
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Common Event Format (CEF) Configuration Guides](https://aka.ms/asi-syslog-paloalto-forwarding)<br>[Configure Syslog Monitoring](https://aka.ms/asi-syslog-paloalto-configure) | | **Supported by** | [Palo Alto Networks](https://www.paloaltonetworks.com/company/contact-support) | | | |
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | Perimeter81_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [Perimeter 81 documentation](https://support.perimeter81.com/docs/360012680780) | | **Supported by** | [Perimeter 81](https://support.perimeter81.com/) | | | |
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) <br><br>Also available in the [Proofpoint POD solution](sentinel-solutions-catalog.md#proofpoint) | | **Log Analytics table(s)** | ProofpointPOD_message_CL<br>ProofpointPOD_maillog_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/Sentinel-proofpointpod-functionapp | | **API credentials** | <li>ProofpointClusterID<li>ProofpointToken | | **Vendor documentation/<br>installation instructions** | <li>[Sign in to the Proofpoint Community](https://proofpointcommunities.force.com/community/s/article/How-to-request-a-Community-account-and-gain-full-customer-access?utm_source=login&utm_medium=recommended&utm_campaign=public)<li>[Proofpoint API documentation and instructions](https://proofpointcommunities.force.com/community/s/article/Proofpoint-on-Demand-Pod-Log-API) |
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) <br><br>Also available in the [Proofpoint TAP solution](sentinel-solutions-catalog.md#proofpoint) | | **Log Analytics table(s)** | ProofPointTAPClicksPermitted_CL<br>ProofPointTAPClicksBlocked_CL<br>ProofPointTAPMessagesDelivered_CL<br>ProofPointTAPMessagesBlocked_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/sentinelproofpointtapazurefunctioncode | | **API credentials** | <li>API Username<li>API Password | | **Vendor documentation/<br>installation instructions** | <li>[Proofpoint SIEM API Documentation](https://help.proofpoint.com/Threat_Insight_Dashboard/API_Documentation/SIEM_API) |
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md) |
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | PulseConnectSecure | | **Kusto function URL:** | https://aka.ms/sentinelgithubparserspulsesecurevpn | | **Vendor documentation/<br>installation instructions** | [Configuring Syslog](https://docs.pulsesecure.net/WebHelp/Content/PCS/PCS_AdminGuide_8.2/Configuring%20Syslog.htm) |
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md)<br><br>[Extra configuration for the Qualys VM KB](#extra-configuration-for-the-qualys-vm-kb) <br><br>Also available in the [Qualys VM solution](sentinel-solutions-catalog.md#qualys)| | **Log Analytics table(s)** | QualysKB_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/Sentinel-qualyskb-functioncode | | **API credentials** | <li>API Username<li>API Password | | **Vendor documentation/<br>installation instructions** | <li>[QualysVM API User Guide](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf) |
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md)<br><br>[Extra configuration for the Qualys VM](#extra-configuration-for-the-qualys-vm) <br>[Manual deployment - after configuring the Function App](#manual-deploymentafter-configuring-the-function-app)| | **Log Analytics table(s)** | QualysHostDetection_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/sentinelqualysvmazurefunctioncode | | **API credentials** | <li>API Username<li>API Password | | **Vendor documentation/<br>installation instructions** | <li>[QualysVM API User Guide](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf) |
If a longer timeout duration is required, consider upgrading to an [App Service
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) | | **Log Analytics table(s)** | SalesforceServiceCloud_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/Sentinel-SalesforceServiceCloud-functionapp | | **API credentials** | <li>Salesforce API Username<li>Salesforce API Password<li>Salesforce Security Token<li>Salesforce Consumer Key<li>Salesforce Consumer Secret | | **Vendor documentation/<br>installation instructions** | [Salesforce REST API Developer Guide](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/quickstart.htm)<br>Under **Set up authorization**, use **Session ID** method instead of OAuth. |
If a longer timeout duration is required, consider upgrading to an [App Service
| | | | **Data ingestion method** | **Azure service-to-service integration: <br>[Log Analytics agent-based connections](connect-azure-windows-microsoft-services.md?tabs=LAA#windows-agent-based-connections) (Legacy)** | | **Log Analytics table(s)** | SecurityEvents |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft | | | |
For more information, see:
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) <br><br>[Extra configuration for SentinelOne](#extra-configuration-for-sentinelone)| | **Log Analytics table(s)** | SentinelOne_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/Sentinel-SentinelOneAPI-functionapp | | **API credentials** | <li>SentinelOneAPIToken<li>SentinelOneUrl (`https://<SOneInstanceDomain>.sentinelone.net`) | | **Vendor documentation/<br>installation instructions** | <li>https://`<SOneInstanceDomain>`.sentinelone.net/api-doc/overview<li>See instructions below |
Follow the instructions to obtain the credentials.
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Log > Syslog](http://help.sonicwall.com/help/sw/eng/7020/26/2/3/content/Log_Syslog.120.2.htm)<br>Select facility local4 and ArcSight as the Syslog format. | | **Supported by** | [SonicWall](https://www.sonicwall.com/support/) | | | |
Follow the instructions to obtain the credentials.
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | SophosCloudOptix_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [Integrate with Microsoft Sentinel](https://docs.sophos.com/pcg/optix/help/en-us/pcg/optix/tasks/IntegrateAzureSentinel.html), skipping the first step.<br>[Sophos query samples](https://docs.sophos.com/pcg/optix/help/en-us/pcg/optix/concepts/ExampleAzureSentinelQueries.html) | | **Supported by** | [Sophos](https://secure2.sophos.com/en-us/support.aspx) | | | |
Follow the instructions to obtain the credentials.
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md) |
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | SophosXGFirewall | | **Kusto function URL:** | https://aka.ms/sentinelgithubparserssophosfirewallxg | | **Vendor documentation/<br>installation instructions** | [Add a syslog server](https://docs.sophos.com/nsg/sophos-firewall/18.5/Help/en-us/webhelp/onlinehelp/nsg/tasks/SyslogServerAdd.html) |
Follow the instructions to obtain the credentials.
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | secRMM_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [secRMM Microsoft Sentinel Administrator Guide](https://www.squadratechnologies.com/StaticContent/ProductDownload/secRMM/9.9.0.0/secRMMAzureSentinelAdministratorGuide.pdf) | | **Supported by** | [Squadra Technologies](https://www.squadratechnologies.com/Contact.aspx) | | | |
Follow the instructions to obtain the credentials.
| | | | **Data ingestion method** | [**Log Analytics agent - custom logs**](connect-custom-logs.md) | | **Log Analytics table(s)** | SquidProxy_CL |
+| **DCR support** | Not currently supported |
| **Kusto function alias:** | SquidProxy | | **Kusto function URL** | https://aka.ms/Sentinel-squidproxy-parser | | **Custom log sample file:** | access.log or cache.log |
Follow the instructions to obtain the credentials.
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md) | | **Log Analytics table(s)** | SymantecICDx_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [Configuring Microsoft Sentinel (Log Analytics) Forwarders](https://techdocs.broadcom.com/us/en/symantec-security-software/integrated-cyber-defense/integrated-cyber-defense-exchange/1-4-3/Forwarders/configuring-forwarders-v131944722-d2707e17438.html) | | **Supported by** | [Broadcom Symantec](https://support.broadcom.com/security) | | | |
Follow the instructions to obtain the credentials.
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md) |
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | SymantecProxySG | | **Kusto function URL:** | https://aka.ms/sentinelgithubparserssymantecproxysg | | **Vendor documentation/<br>installation instructions** | [Sending Access Logs to a Syslog server](https://knowledge.broadcom.com/external/article/166529/sending-access-logs-to-a-syslog-server.html) |
Follow the instructions to obtain the credentials.
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md) |
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | SymantecVIP | | **Kusto function URL:** | https://aka.ms/sentinelgithubparserssymantecvip | | **Vendor documentation/<br>installation instructions** | [Configuring syslog](https://help.symantec.com/cs/VIP_EG_INSTALL_CONFIG/VIP/v134652108_v128483142/Configuring-syslog?locale=EN_US) |
Follow the instructions to obtain the credentials.
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Secure Syslog/CEF Logging](https://thy.center/ss/link/syslog) | | **Supported by** | [Thycotic](https://thycotic.force.com/support/s/) | | | |
Follow the instructions to obtain the credentials.
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog**, with a Kusto function parser |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | TrendMicroDeepSecurity | | **Kusto function URL** | https://aka.ms/TrendMicroDeepSecurityFunction | | **Vendor documentation/<br>installation instructions** | [Forward Deep Security events to a Syslog or SIEM server](https://aka.ms/Sentinel-trendMicro-connectorInstructions) |
Follow the instructions to obtain the credentials.
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog**, with a Kusto function parser |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | TrendMicroTippingPoint | | **Kusto function URL** | https://aka.ms/Sentinel-trendmicrotippingpoint-function | | **Vendor documentation/<br>installation instructions** | Send Syslog messages in ArcSight CEF format v4.2. |
Follow the instructions to obtain the credentials.
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) | | **Log Analytics table(s)** | TrendMicro_XDR_CL |
+| **DCR support** | Not currently supported |
| **API credentials** | <li>API Token | | **Vendor documentation/<br>installation instructions** | <li>[Trend Micro Vision One API](https://automation.trendmicro.com/xdr/home)<li>[Obtaining API Keys for Third-Party Access](https://docs.trendmicro.com/en-us/enterprise/trend-micro-xdr-help/ObtainingAPIKeys) | | **Connector deployment instructions** | [Single-click deployment](connect-azure-functions-template.md?tabs=ARM) via Azure Resource Manager (ARM) template |
Follow the instructions to obtain the credentials.
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) | | **Log Analytics table(s)** | CarbonBlackEvents_CL<br>CarbonBlackAuditLogs_CL<br>CarbonBlackNotifications_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/sentinelcarbonblackazurefunctioncode | | **API credentials** | **API access level** (for *Audit* and *Event* logs):<li>API ID<li>API Key<br><br>**SIEM access level** (for *Notification* events):<li>SIEM API ID<li>SIEM API Key | | **Vendor documentation/<br>installation instructions** | <li>[Carbon Black API Documentation](https://developer.carbonblack.com/reference/carbon-black-cloud/cb-defense/latest/rest-api/)<li>[Creating an API Key](https://developer.carbonblack.com/reference/carbon-black-cloud/authentication/#creating-an-api-key) |
Follow the instructions to obtain the credentials.
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md) |
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | VMwareESXi | | **Kusto function URL:** | https://aka.ms/Sentinel-vmwareesxi-parser | | **Vendor documentation/<br>installation instructions** | [Enabling syslog on ESXi 3.5 and 4.x](https://kb.vmware.com/s/article/1016621)<br>[Configure Syslog on ESXi Hosts](https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.monitoring.doc/GUID-9F67DB52-F469-451F-B6C8-DAE8D95976E7.html) |
Follow the instructions to obtain the credentials.
| Connector attribute | Description | | | | | **Data ingestion method** | [**Syslog**](connect-syslog.md) |
-| **Log Analytics table(s)** | Syslog |
+| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Kusto function alias:** | WatchGuardFirebox | | **Kusto function URL:** | https://aka.ms/Sentinel-watchguardfirebox-parser | | **Vendor documentation/<br>installation instructions** | [Microsoft Sentinel Integration Guide](https://www.watchguard.com/help/docs/help-center/en-US/Content/Integration-Guides/General/Microsoft%20Azure%20Sentinel.html) |
Follow the instructions to obtain the credentials.
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | Contact [WireX support](https://wirexsystems.com/contact-us/) in order to configure your NFP solution to send Syslog messages in CEF format. | | **Supported by** | [WireX Systems](mailto:support@wirexsystems.com) | | | |
Follow the instructions to obtain the credentials.
| | | | **Data ingestion method** | **Azure service-to-service integration: <br>[Log Analytics agent-based connections](connect-azure-windows-microsoft-services.md?tabs=LAA#windows-agent-based-connections) (Legacy)** | | **Log Analytics table(s)** | DnsEvents<br>DnsInventory |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Supported by** | Microsoft | | | |
For more information, see [Gather insights about your DNS infrastructure with th
| **Prerequisites** | You must have Windows Event Collection (WEC) enabled and running.<br>Install the Azure Monitor Agent on the WEC machine. | | **xPath queries prefix** | "ForwardedEvents!*" | | **Log Analytics table(s)** | WindowsEvents |
+| **DCR support** | Standard DCR |
| **Supported by** | Microsoft | | | |
We recommend installing the [Advanced Security Information Model (ASIM)](normali
| **Data ingestion method** | **Azure service-to-service integration: <br>[Azure Monitor Agent-based connections](connect-azure-windows-microsoft-services.md?tabs=AMA#windows-agent-based-connections)** | | **xPath queries prefix** | "Security!*" | | **Log Analytics table(s)** | SecurityEvents |
+| **DCR support** | Standard DCR |
| **Supported by** | Microsoft | | | |
Microsoft Sentinel can apply machine learning (ML) to Security events data to id
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md)<br><br>[Configure Webhooks](#configure-webhooks) <br>[Add Callback URL to Webhook configuration](#add-callback-url-to-webhook-configuration)| | **Log Analytics table(s)** | Workplace_Facebook_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/Sentinel-WorkplaceFacebook-functionapp | | **API credentials** | <li>WorkplaceAppSecret<li>WorkplaceVerifyToken | | **Vendor documentation/<br>installation instructions** | <li>[Configure Webhooks](https://developers.facebook.com/docs/workplace/reference/webhooks)<li>[Configure permissions](https://developers.facebook.com/docs/workplace/reference/permissions) |
For more information, see [Connect Zimperium to Microsoft Sentinel](#zimperium-m
| | | | **Data ingestion method** | [**Microsoft Sentinel Data Collector API**](connect-rest-api-template.md)<br><br>[Configure and connect Zimperium MTD](#configure-and-connect-zimperium-mtd) | | **Log Analytics table(s)** | ZimperiumThreatLog_CL<br>ZimperiumMitigationLog_CL |
+| **DCR support** | Not currently supported |
| **Vendor documentation/<br>installation instructions** | [Zimperium customer support portal](https://support.zimperium.com/) (sign-in required) | | **Supported by** | [Zimperium](https://www.zimperium.com/support) | | | |
For more information, see [Connect Zimperium to Microsoft Sentinel](#zimperium-m
| | | | **Data ingestion method** | [**Azure Functions and the REST API**](connect-azure-functions-template.md) | | **Log Analytics table(s)** | Zoom_CL |
+| **DCR support** | Not currently supported |
| **Azure Function App code** | https://aka.ms/Sentinel-ZoomAPI-functionapp | | **API credentials** | <li>ZoomApiKey<li>ZoomApiSecret | | **Vendor documentation/<br>installation instructions** | <li>[Get credentials using JWT With Zoom](https://marketplace.zoom.us/docs/guides/auth/jwt) |
For more information, see [Connect Zimperium to Microsoft Sentinel](#zimperium-m
| Connector attribute | Description | | | | | **Data ingestion method** | **[Common Event Format (CEF)](connect-common-event-format.md) over Syslog** |
-| **Log Analytics table(s)** | CommonSecurityLog |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) |
| **Vendor documentation/<br>installation instructions** | [Zscaler and Microsoft Sentinel Deployment Guide](https://aka.ms/ZscalerCEFInstructions) | | **Supported by** | [Zscaler](https://help.zscaler.com/submit-ticket-links) | | | |
For more information, see [Connect Zimperium to Microsoft Sentinel](#zimperium-m
| | | | **Data ingestion method** | [**Log Analytics agent - custom logs**](connect-custom-logs.md)<br><br>[Extra configuration for Zscaler Private Access](#extra-configuration-for-zscaler-private-access) | | **Log Analytics table(s)** | ZPA_CL |
+| **DCR support** | Not currently supported |
| **Kusto function alias:** | ZPAEvent | | **Kusto function URL** | https://aka.ms/Sentinel-zscalerprivateaccess-parser | | **Vendor documentation/<br>installation instructions** | [Zscaler Private Access documentation](https://help.zscaler.com/zpa)<br>Also, see below |
sentinel Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-transformation.md
+
+ Title: Custom data ingestion and transformation in Microsoft Sentinel (preview)
+description: Learn about how Azure Monitor's custom log ingestion and data transformation features can help you get any data into Microsoft Sentinel and shape it the way you want.
+++ Last updated : 02/27/2022++
+# Custom data ingestion and transformation in Microsoft Sentinel (preview)
+
+Azure Monitor's Log Analytics serves as the platform behind the Microsoft Sentinel workspace. All logs ingested into Microsoft Sentinel are stored in Log Analytics by default. From Microsoft Sentinel, you can access the stored logs and run Kusto Query Language (KQL) queries to detect threats and monitor your network activity.
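+
+For example, once data is stored, a minimal query-time KQL sketch like the following (assuming the standard **Syslog** table is populated in your workspace) summarizes recent events by facility:
+
+```kusto
+// Minimal sketch: count Syslog events per facility over the last 24 hours.
+Syslog
+| where TimeGenerated > ago(1d)
+| summarize EventCount = count() by Facility
+| order by EventCount desc
+```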
+
+Log Analytics' custom data ingestion process gives you a high level of control over the data that gets ingested. It uses [**data collection rules (DCRs)**](../azure-monitor/essentials/data-collection-rule-overview.md) to collect your data and manipulate it even before it's stored in your workspace. This allows you to filter and enrich standard tables and to create highly customizable tables for storing data from sources that produce unique log formats.
+
+Microsoft Sentinel gives you two tools to control this process:
+
+- The [**custom logs API**](../azure-monitor/logs/custom-logs-overview.md) allows you to send custom-format logs from any data source to your Log Analytics workspace, and store those logs either in certain specific standard tables, or in custom-formatted tables that you create. You have full control over the creation of these custom tables, down to specifying the column names and types. You create [**Data collection rules (DCRs)**](../azure-monitor/essentials/data-collection-rule-overview.md) to define, configure, and apply transformations to these data flows.
+
+- [**Ingestion-time data transformation**](../azure-monitor/logs/ingestion-time-transformations.md) uses DCRs to apply basic KQL queries to incoming standard logs (and certain types of custom logs) before they're stored in your workspace. These transformations can filter out irrelevant data, enrich existing data with analytics or external data, or mask sensitive or personal information.
+
+These two tools will be explained in more detail below.
+
+## Use cases and sample scenarios
+
+### Filtering
+
+Ingestion-time transformation provides you with the ability to filter out irrelevant data even before it's first stored in your workspace.
+
+You can filter at the record (row) level, by specifying criteria for which records to include, or at the field (column) level, by removing the content for specific fields. Filtering out irrelevant data can:
+
+- Help to reduce costs, as you reduce storage requirements
+- Improve performance, as fewer query-time adjustments are needed
+
+Ingestion-time data transformation supports [multiple-workspace scenarios](extend-sentinel-across-workspaces-tenants.md). You would create separate DCRs for each workspace.
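+
+As an illustrative sketch only (the filter criteria are assumptions, not recommendations from this article), a workspace transformation for the **Syslog** table could drop debug-level records and remove a column it doesn't need at ingestion time:
+
+```kusto
+// Hypothetical ingestion-time filter applied to the Syslog input stream.
+// 'source' is the incoming data stream that the DCR transformation operates on.
+source
+| where SeverityLevel != "debug"   // record-level (row) filtering
+| project-away ProcessName         // field-level (column) filtering
+```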
+
+### Enrichment and tagging
+
+Ingestion-time transformation also lets you improve analytics by enriching your data with extra columns added by the configured KQL transformation. Extra columns might include parsed or calculated data from existing columns, or data taken from data structures created on-the-fly.
+
+For example, you could add extra information such as external HR data, an expanded event description, or classifications that depend on the user, location, or activity type.
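+
+A hedged sketch of this idea, assuming a **SecurityEvent**-style input stream (the derived column names are purely illustrative), could use `extend` to add classification columns during the transformation:
+
+```kusto
+// Hypothetical enrichment: derive a coarse activity classification and add a static tag.
+source
+| extend ActivityType = iff(EventID == 4625, "FailedSignIn", "Other")
+| extend IngestionTag = "transformed-at-ingestion"
+```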
+
+### Masking
+
+Ingestion-time transformations can also be used to mask or remove personal information. For example, you might use data transformation to mask all but the last digits of a social security number or credit card number, or you could replace other types of personal data with nonsense, standard text, or dummy data. Mask your personal information at ingestion time to increase security across your network.
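+
+As a sketch under assumed column names (no specific schema is implied by this article, and the functions allowed in a transformation are subject to the KQL limitations noted under [Known issues](#known-issues)), masking all but the last four digits of a hypothetical `SSN` column could look like this:
+
+```kusto
+// Hypothetical masking: keep only the last four digits of an assumed SSN column.
+source
+| extend SSN = strcat("***-**-", substring(SSN, strlen(SSN) - 4, 4))
+```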
+
+## Data ingestion flow in Microsoft Sentinel
+
+The following image shows where ingestion-time data transformation enters the data ingestion flow into Microsoft Sentinel.
+
+Microsoft Sentinel collects data into the Log Analytics workspace from multiple sources. Data from built-in data connectors is processed in Log Analytics using some combination of hardcoded workflows and ingestion-time transformations, while data ingested directly through the custom logs API endpoint is processed by a standard DCR, and then stored in either standard or custom tables.
++
+## DCR support in Microsoft Sentinel
+
+In Log Analytics, data collection rules (DCRs) determine the data flow for different input streams. A data flow includes: the data stream to be transformed (standard or custom), the destination workspace, the KQL transformation, and the output table. For standard input streams, the output table is the same as the input stream.
+
+Support for DCRs in Microsoft Sentinel includes:
+
+- *Standard DCRs*, currently supported only for AMA-based connectors and workflows using the new [custom logs API](../azure-monitor/logs/custom-logs-overview.md).
+
+ Each connector or log source workflow can have its own dedicated *standard DCR*, though multiple connectors or sources can share a common *standard DCR* as well.
+
+- *Workspace transformation DCRs*, for workflows that don't currently support standard DCRs.
+
+ A single *workspace transformation DCR* serves all the supported workflows in a workspace that aren't served by standard DCRs. A workspace can have only one *workspace transformation DCR*, but that DCR contains separate transformations for each input stream. Also, *workspace transformation DCR*s are supported only for a [specific set of tables](../azure-monitor/logs/tables-feature-support.md).
+
+Microsoft Sentinel's support for ingestion-time transformation depends on the type of data connector you're using. For more in-depth information on custom logs, ingestion-time transformation, and data collection rules, see the articles linked in the [Next steps](#next-steps) section at the end of this article.
+
+### DCR support for Microsoft Sentinel data connectors
+
+The following table describes DCR support for Microsoft Sentinel data connector types:
+
+| Data connector type | DCR support |
+| - | -- |
+| **Direct ingestion via [Custom Logs API](../azure-monitor/logs/custom-logs-overview.md)** | Standard DCRs |
+| [**AMA standard logs**](connect-azure-windows-microsoft-services.md?tabs=AMA#windows-agent-based-connections), such as: <li>[Windows Security Events via AMA](data-connectors-reference.md#windows-security-events-via-ama)<li>[Windows Forwarded Events](data-connectors-reference.md#windows-forwarded-events-preview)<li>[CEF data](connect-common-event-format.md)<li>[Syslog data](connect-syslog.md) | Standard DCRs |
+| [**MMA standard logs**](connect-azure-windows-microsoft-services.md?tabs=LAA#windows-agent-based-connections), such as <li>[Syslog data](connect-syslog.md)<li>[CommonSecurityLog](connect-azure-windows-microsoft-services.md) | Workspace transformation DCRs |
+| [**Diagnostic settings-based connections**](connect-azure-windows-microsoft-services.md#diagnostic-settings-based-connections) | Workspace transformation DCRs, based on the [supported output tables](../azure-monitor/logs/tables-feature-support.md) for specific data connectors |
+| **Built-in, service-to-service data connectors**, such as:<li>[Microsoft Office 365](connect-azure-windows-microsoft-services.md#api-based-connections)<li>[Azure Active Directory](connect-azure-active-directory.md)<li>[Amazon S3](connect-aws.md) | Workspace transformation DCRs, based on the [supported output tables](../azure-monitor/logs/tables-feature-support.md) for specific data connectors |
+| **Built-in, API-based data connectors**, such as: <li>[Codeless data connectors](create-codeless-connector.md)<li>[Azure Functions-based data connectors](connect-azure-functions-template.md) | Not currently supported |
+| | |
+
+## Data transformation support for custom data connectors
+
+If you've created custom data connectors for Microsoft Sentinel, you can use DCRs to configure how the data will be parsed and stored in Log Analytics in your workspace.
+
+Only the following tables are currently supported for custom log ingestion:
+- [**WindowsEvent**](/azure/azure-monitor/reference/tables/windowsevent)
+- [**SecurityEvent**](/azure/azure-monitor/reference/tables/securityevent)
+- [**CommonSecurityLog**](/azure/azure-monitor/reference/tables/commonsecuritylog)
+- [**Syslog**](/azure/azure-monitor/reference/tables/syslog)
+- **ASIMDnsActivityLog**
+
+## Known issues
+
+Ingestion-time data transformation currently has the following known issues for Microsoft Sentinel data connectors:
+
+- Data transformations using *workspace transformation DCRs* are supported only per table, and not per connector.
+
+ There can only be one workspace transformation DCR for an entire workspace. Within that DCR, each table can use a separate input stream with its own transformation. However, if you have two different MMA-based data connectors sending data to the *Syslog* table, they will both have to use the same input stream configuration in the DCR.
+
+- The following configurations are supported only via API:
+
+ - Standard DCRs for AMA-based connectors like [Windows Security Events](data-connectors-reference.md#windows-security-events-via-ama) and [Windows Forwarded Events](data-connectors-reference.md#windows-forwarded-events-preview).
+
+ - Standard DCRs for custom log ingestion to a standard table.
+
+- It may take up to 60 minutes for the data transformation configurations to apply.
+
+- KQL syntax: Not all operators are supported. For more information, see [**KQL limitations** and **Supported KQL features**](../azure-monitor/essentials/data-collection-rule-transformations.md#kql-limitations) in the Azure Monitor documentation.
+
+## Next steps
+
+[Get started configuring ingestion-time data transformation in Microsoft Sentinel](configure-data-transformation.md).
+
+Learn more about Microsoft Sentinel data connector types. For more information, see:
+
+- [Microsoft Sentinel data connectors](connect-data-sources.md)
+- [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
+
+For more in-depth information on ingestion-time transformation, the Custom Logs API, and data collection rules, see the following articles in the Azure Monitor documentation:
+
+- [Ingestion-time transformations in Azure Monitor Logs (preview)](../azure-monitor/logs/ingestion-time-transformations.md)
+- [Custom logs API in Azure Monitor Logs (Preview)](../azure-monitor/logs/custom-logs-overview.md)
+- [Data collection rules in Azure Monitor](../azure-monitor/essentials/data-collection-rule-overview.md)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
Title: What's new in Microsoft Sentinel description: This article describes new features in Microsoft Sentinel from the past few months.--++ Last updated 03/01/2022
For more information, see:
## February 2022
+- [New custom log ingestion and data transformation at ingestion time (Public preview)](#new-custom-log-ingestion-and-data-transformation-at-ingestion-time-public-preview)
- [View MITRE support coverage (Public preview)](#view-mitre-support-coverage-public-preview)-- [View Azure Purview data in Microsoft Sentinel](#view-azure-purview-data-in-microsoft-sentinel-public-preview)
+- [View Azure Purview data in Microsoft Sentinel (Public preview)](#view-azure-purview-data-in-microsoft-sentinel-public-preview)
- [Manually run playbooks based on the incident trigger (Public preview)](#manually-run-playbooks-based-on-the-incident-trigger-public-preview) - [Search across long time spans in large datasets (public preview)](#search-across-long-time-spans-in-large-datasets-public-preview) - [Restore archived logs from search (public preview)](#restore-archived-logs-from-search-public-preview)
+### New custom log ingestion and data transformation at ingestion time (Public preview)
+
+Microsoft Sentinel supports two new features for data ingestion and transformation. These features, provided by Log Analytics, act on your data even before it's stored in your workspace.
+
+The first of these features is the [**custom logs API**](../azure-monitor/logs/custom-logs-overview.md). It allows you to send custom-format logs from any data source to your Log Analytics workspace, and store those logs either in certain specific standard tables, or in custom-formatted tables that you create. The actual ingestion of these logs can be done by direct API calls. You use Log Analytics [**data collection rules (DCRs)**](../azure-monitor/essentials/data-collection-rule-overview.md) to define and configure these workflows.
+
+The second feature is [**ingestion-time data transformation**](../azure-monitor/logs/ingestion-time-transformations.md) for standard logs. It uses [**DCRs**](../azure-monitor/essentials/data-collection-rule-overview.md) to filter out irrelevant data, to enrich or tag your data, or to hide sensitive or personal information. Data transformation can be configured at ingestion time for the following types of built-in data connectors:
+
+- AMA-based data connectors (based on the new Azure Monitor Agent)
+- MMA-based data connectors (based on the legacy Log Analytics Agent)
+- Data connectors that use Diagnostic settings
+- Service-to-service data connectors
+
+For more information, see:
+
+- [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
+- [Data transformation in Microsoft Sentinel (preview)](data-transformation.md)
+- [Configure ingestion-time data transformation for Microsoft Sentinel (preview)](configure-data-transformation.md).
+ ### View MITRE support coverage (Public preview) Microsoft Sentinel now provides a new **MITRE** page, which highlights the MITRE tactic and technique coverage you currently have, and can configure, for your organization.
For example:
:::image type="content" source="media/whats-new/mitre-coverage.png" alt-text="Screenshot of the MITRE coverage page with both active and simulated indicators selected."::: For more information, see [Understand security coverage by the MITRE ATT&CK® framework](mitre-coverage.md).-- [Search across long time spans in large datasets (public preview)](#search-across-long-time-spans-in-large-datasets-public-preview)-- [Restore archived logs from search (public preview)](#restore-archived-logs-from-search-public-preview) ### View Azure Purview data in Microsoft Sentinel (Public Preview)
service-fabric Quickstart Classic Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-classic-cluster-portal.md
+
+ Title: Deploy a Service Fabric cluster using the Azure portal
+description: Learn how to create a Service Fabric cluster using the Azure portal
+++++ Last updated : 03/02/2022++
+# Quickstart: Deploy a Service Fabric cluster using the Azure portal
+
+Test out Service Fabric clusters in this quickstart by creating a **three-node cluster**.
+
+Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers. A Service Fabric cluster is a network-connected set of virtual machines into which your microservices are deployed and managed.
+
+In this quickstart, you learn how to:
+
+* Use Azure Key Vault to create a client certificate for your cluster
+* Deploy a Service Fabric cluster
+* View your cluster in Service Fabric Explorer
+
+This article describes how to deploy a Service Fabric cluster for testing in Azure using the **Azure portal**. There is also a quickstart for [Azure Resource Manager templates](quickstart-cluster-template.md).
+
+The three-node cluster created in this quickstart is only intended for instructional purposes. The cluster will use a self-signed certificate for authentication and will operate in the bronze reliability tier, so it's not suitable for production workloads. For more information about reliability tiers, see [Reliability characteristics of the cluster](service-fabric-cluster-capacity.md#reliability-characteristics-of-the-cluster).
+
+## Prerequisites
+
+* An Azure subscription. If you don't already have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+* A resource group to manage all the resources you use in this quickstart. We use the example resource group name **ServiceFabricResources** throughout this quickstart.
+
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
+
+ 1. Select **Resource groups** under **Azure services**.
+
+ 1. Choose **+ Create**, select your Azure subscription, enter a name for your resource group, and pick your preferred region from the dropdown menu.
+
+ 1. Select **Review + create** and, once the validation passes, choose **Create**.
+
+## Create a client certificate
+
+Service Fabric clusters use a client certificate as a key for access control.
+
+In this quickstart, we use a client certificate called **ExampleCertificate** from an Azure Key Vault named **QuickstartSFKeyVault**.
+
+To create your own Azure Key Vault:
+
+1. In the [Azure portal](https://portal.azure.com), select **Key vaults** under **Azure services** and select **+ Create**. Alternatively, select **Create a resource**, enter **Key Vault** in the `Search services and marketplace` box, choose **Key Vault** from the results, and select **Create**.
+
+1. On the **Create a key vault** page, provide the following information:
+ - `Subscription`: Choose your Azure subscription.
+ - `Resource group`: Choose the resource group you created in the prerequisites or create a new one if you didn't already. For this quickstart, we use **ServiceFabricResources**.
+ - `Name`: Enter a unique name. For this quickstart, we use **QuickstartSFKeyVault**.
+ - `Region`: Choose your preferred region from the dropdown menu.
+ - Leave the other options as their defaults.
+
+1. Select **Review + create** and, once the validation passes, choose **Create**.
+
+To generate and retrieve your client certificate:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your Azure Key Vault.
+
+1. Under **Settings** in the pane on the left, select **Certificates**.
+
+ ![Select the Certificates tab under Settings in the left pane.](./media/quickstart-classic-cluster-portal/key-vault-settings-certificates.png)
+
+1. Choose **+ Generate/Import**.
+
+1. On the **Create a certificate** page, provide the following information:
+ - `Method of Certificate Creation`: Choose **Generate**.
+ - `Certificate Name`: Use a unique name. For this quickstart, we use **ExampleCertificate**.
+ - `Type of Certificate Authority (CA)`: Choose **Self-signed certificate**.
+ - `Subject`: Use a unique domain name. For this quickstart, we use **CN=ExampleDomain**.
+ - Leave the other options as their defaults.
+
+1. Select **Create**.
+
+1. Your certificate will appear under **In progress, failed or cancelled**. You may need to refresh the list for it to appear under **Completed**. Once it's completed, select it and choose the version under **CURRENT VERSION**.
+
+1. Select **Download in PFX/PEM format** and select **Download**. The certificate's name will be formatted as `yourkeyvaultname-yourcertificatename-yyyymmdd.pfx`.
+
+ ![Select Download in PFX/PEM format to retrieve your certificate so you can import it into your computer's certificate store.](./media/quickstart-classic-cluster-portal/download-pfx.png)
+
+1. Import the certificate to your computer's certificate store so that you may use it to access your Service Fabric cluster later.
+
+ >[!NOTE]
+ >The private key included in this certificate doesn't have a password. If your certificate store prompts you for a private key password, leave the field blank.
+
+Before you create your Service Fabric cluster, you need to make sure Azure Virtual Machines can retrieve certificates from your Azure Key Vault. To do so:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your Azure Key Vault.
+
+1. Under **Settings** in the pane on the left, select **Access policies**.
+
+ ![Select the Access policies tab under Settings in the left pane.](./media/quickstart-classic-cluster-portal/key-vault-settings-access-policies.png)
+
+1. Toggle **Azure Virtual Machines for deployment** under **Enable access to:**.
+
+1. Save your changes.
+
+## Create your Service Fabric cluster
+
+In this quickstart, we use a Service Fabric cluster named **quickstartsfcluster**.
+
+1. In the [Azure portal](https://portal.azure.com), select **Create a resource**, enter **Service Fabric** in the `Search services and marketplace` box, choose **Service Fabric Cluster** from the results, and select **Create**.
+
+1. On the **Create Service Fabric cluster** page, provide the following information:
+ - `Subscription`: Choose your Azure subscription.
+ - `Resource group`: Choose the resource group you created in the prerequisites or create a new one if you didn't already. For this quickstart, we use **ServiceFabricResources**.
+ - `Cluster name`: Enter a unique name. For this quickstart, we use **quickstartsfcluster**.
+ - `Location`: Choose your preferred region from the dropdown menu. This must be the same region as your Azure Key Vault.
+ - `Operating system`: Choose **WindowsServer 2019-Datacenter-with-Containers** from the dropdown menu.
+ - `Username`: Enter a username for your cluster's administrator account.
+ - `Password`: Enter a password for your cluster's administrator account.
+ - `Confirm password`: Reenter the password you chose.
+ - `Initial VM scale set capacity`: Adjust the slider to **3**. You will see a warning that choosing fewer than 5 for the initial VM scale set capacity will put your cluster on the bronze reliability tier. The bronze tier is acceptable for the purposes of this quickstart but isn't recommended for production workloads.
+ - `Key vault and primary certificate`: Choose **Select a certificate**, pictured below. Select your Azure Key Vault from the **Key vault** dropdown menu and your certificate from the **Certificate** dropdown menu, pictured below.
+ - Leave the other options as their defaults.
+
+ ![Choose Select a certificate in the Authentication method section of the settings.](./media/quickstart-classic-cluster-portal/create-a-service-fabric-classic-cluster-security.png)
+
+ ![Select your Azure Key Vault and certificate from the dropdown menus.](./media/quickstart-classic-cluster-portal/select-a-certificate-from-azure-key-vault.png)
+
+ If you didn't already change your Azure Key Vault's access policies, you may get text prompting you to do so after you select your key vault and certificate. If so, choose **Edit access policies for yourkeyvaultname**, select **Click to show advanced access policies**, toggle **Azure Virtual Machines for deployment**, and save your changes. Click **Create Service Fabric cluster** to return to the creation page.
+
+1. Select **Review + create** and, once the validation passes, choose **Create**.
+
+Now, your cluster's deployment is in progress. The deployment will likely take around 20 minutes to complete.
+
+>[!NOTE]
+> The Azure portal may tell you the deployment succeeded before the deployment has completed. You will know it has completed when your cluster's **Overview** page shows three nodes with an OK **Health state**.
+
+## Validate the deployment
+
+Once the deployment completes, you're ready to view your new Service Fabric cluster.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your cluster.
+
+1. On your cluster's **Overview** page, find the **Service Fabric Explorer** link and select it.
+
+ ![Select the SF Explorer link on your cluster's Overview page.](./media/quickstart-classic-cluster-portal/service-fabric-explorer-address.png)
+
+ >[!NOTE]
+ >You may get a warning that your connection to your cluster isn't private. Select **Advanced** and choose **continue to yourclusterfqdn (unsafe)**.
+
+1. When prompted for a certificate, choose the certificate you created, downloaded, and stored for this quickstart and select **OK**. If you completed those steps successfully, the certificate should be in the list of certificates.
+
+1. You'll arrive at the Service Fabric Explorer display for your cluster, pictured below.
+
+ ![View your cluster's page in the Service Fabric Explorer.](./media/quickstart-classic-cluster-portal/service-fabric-explorer.png)
+
+Your Service Fabric cluster consists of three nodes. These nodes are WindowsServer 2019-Datacenter virtual machines with 2 vCPUs and 8 GiB of RAM. This configuration is determined by the **VM Size** selected under **Node types** on the **Create Service Fabric cluster** page.
+
+## Clean up resources
+
+When no longer needed, delete the resource group for your Service Fabric cluster. To delete your resource group:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your resource group.
+
+1. Select **Delete resource group**.
+
+1. In the `TYPE THE RESOURCE GROUP NAME:` box, type the name of your resource group and select **Delete**.
+
+## Next steps
+
+In this quickstart, you deployed a Service Fabric cluster. To learn more about how to scale a cluster, see:
+
+> [!div class="nextstepaction"]
+> [Scale a Service Fabric cluster in Azure](service-fabric-tutorial-scale-cluster.md)
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
Enable replication. This procedure assumes that the primary Azure region is East
- **Deployment model**: Azure deployment model of the source machines. - **Source subscription**: The subscription to which your source VMs belong. This can be any subscription within the same Azure Active Directory tenant where your recovery services vault exists. - **Resource Group**: The resource group to which your source virtual machines belong. All the VMs under the selected resource group are listed for protection in the next step.
+ - **Disaster Recovery between Availability Zones**: Select yes if you want to perform zonal disaster recovery on virtual machines.
+ - **Availability Zones**: Select the availability zone where the source virtual machines are pinned.
- ![Screenshot that highlights the fields needed to configure replication.](./media/site-recovery-replicate-azure-to-azure/enabledrwizard1.png)
+ ![Screenshot that highlights the fields needed to configure replication.](./media/azure-to-azure-how-to-enable-replication/enabled-rwizard-1.png)
3. In **Virtual Machines > Select virtual machines**, click and select each VM that you want to replicate. You can only select machines for which replication can be enabled. Then click **OK**.
- ![Screenshot that highlights where you select virtual machines.](./media/site-recovery-replicate-azure-to-azure/virtualmachine_selection.png)
+ ![Screenshot that highlights where you select virtual machines.](./media/azure-to-azure-how-to-enable-replication/virtual-machine-selection.png)
4. In **Settings**, you can optionally configure target site settings:
Enable replication. This procedure assumes that the primary Azure region is East
- You can customize the resource group settings. - The location of the target resource group can be any Azure region, except the region in which the source VMs are hosted. - **Target virtual network**: By default, Site Recovery creates a new virtual network in the target region with an "asr" suffix in the name. This is mapped to your source network, and used for any future protection. [Learn more](./azure-to-azure-network-mapping.md) about network mapping.
- - **Target storage accounts (source VM doesn't use managed disks)**: By default, Site Recovery creates a new target storage account mimicking your source VM storage configuration. In case storage account already exists, it's reused.
- **Replica-managed disks (source VM uses managed disks)**: Site Recovery creates new replica-managed disks in the target region to mirror the source VM's managed disks with the same storage type (Standard or premium) as the source VM's managed disk. - **Cache Storage accounts**: Site Recovery needs extra storage account called cache storage in the source region. All the changes happening on the source VMs are tracked and sent to cache storage account before replicating them to the target location. This storage account should be Standard. - **Target availability sets**: By default, Site Recovery creates a new availability set in the target region with the "asr" suffix in the name, for VMs that are part of an availability set in the source region. If the availability set created by Site Recovery already exists, it's reused.
Enable replication. This procedure assumes that the primary Azure region is East
- One day of retention for recovery points. - No app-consistent snapshots.
- ![Enable replication](./media/site-recovery-replicate-azure-to-azure/enabledrwizard3.PNG)
+ ![Screenshot that displays the enable replication parameters.](./media/azure-to-azure-how-to-enable-replication/enabled-rwizard-3.PNG)
### Enable replication for added disks
If you add disks to an Azure VM for which replication is enabled, the following
- If you choose not to enable replication for the disk, you can select to dismiss the warning.
- ![New disk added](./media/azure-to-azure-how-to-enable-replication/newdisk.png)
+ ![Screenshot that displays how to enable replication for an added disk.](./media/azure-to-azure-how-to-enable-replication/newdisk.png)
To enable replication for an added disk, do the following:
To enable replication for an added disk, do the following:
2. Click **Disks**, and then select the data disk for which you want to enable replication (these disks have a **Not protected** status). 3. In **Disk Details**, click **Enable replication**.
- ![Enable replication for added disk](./media/azure-to-azure-how-to-enable-replication/enabled-added.png)
+ ![Screenshot that displays replication enabled for a newly added disk.](./media/azure-to-azure-how-to-enable-replication/enabled-added.png)
After the enable replication job runs, and the initial replication finishes, the replication health warning for the disk issue is removed. - ## Customize target resources You can modify the default target settings used by Site Recovery.
You can modify the default target settings used by Site Recovery.
2. Click **Customize:** to modify default settings: - In **Target resource group**, select the resource group from the list of all the resource groups in the target location of the subscription. - In **Target virtual network**, select the network from a list of all the virtual networks in the target location.
- - In **Availability set**, you can add availability set settings to the VM, if they're part of an availability set in the source region.
- - In **Target Storage accounts**, select the account you want to use.
+ - In **Cache storage**, select the storage account you want to use from the list of available cache storage accounts.
+ - In **Target availability type**, select the availability type from a list of all the availability types in the target location.
+ - In **Target proximity placement group**, select the proximity placement group from a list of all the proximity placement groups in the target location.
- ![Screenshot that shows how to customize target subscription settings.](./media/site-recovery-replicate-azure-to-azure/customize.PNG)
+ ![Screenshot that shows how to customize target subscription settings.](./media/azure-to-azure-how-to-enable-replication/customize.PNG)
3. Click **Customize:** to modify replication settings. 4. In **Multi-VM consistency**, select the VMs that you want to replicate together. - All the machines in a replication group will have shared crash consistent and app-consistent recovery points when failed over.
You can modify the default target settings used by Site Recovery.
- If you enable multi-VM consistency, machines in the replication group communicate with each other over port 20004. - Ensure there's no firewall appliance blocking the internal communication between the VMs over port 20004. - If you want Linux VMs to be part of a replication group, ensure the outbound traffic on port 20004 is manually opened according to guidance for the specific Linux version.
-![Screenshot that shows the Multi-VM consistency settings.](./media/site-recovery-replicate-azure-to-azure/multivmsettings.PNG)
+![Screenshot that shows the Multi-VM consistency settings.](./media/azure-to-azure-how-to-enable-replication/multi-vm-settings.PNG)
-5. Click **Create target resource** > **Enable Replication**.
-6. After the VMs are enabled for replication, you can check the status of VM health under **Replicated items**
+5. Click **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings.
+ ![Screenshot that shows the Capacity Reservation settings.](./media/azure-to-azure-how-to-enable-replication/capacity-reservation-edit-button.png)
+6. Click **Create target resource** > **Enable Replication**.
+7. After the VMs are enabled for replication, you can check the status of VM health under **Replicated items**.
>[!NOTE] >
site-recovery Azure To Azure How To Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-reprotect.md
You can customize the following properties of the target VM during reprotection.
||| |Target resource group | Modify the target resource group in which the VM is created. As the part of reprotection, the target VM is deleted. You can choose a new resource group under which to create the VM after failover. | |Target virtual network | The target network can't be changed during the reprotect job. To change the network, redo the network mapping. |
+|Capacity reservation | Configure a capacity reservation for the VM. You can create a new capacity reservation group to reserve capacity or select an existing capacity reservation group. [Learn more](azure-to-azure-how-to-enable-replication.md#enable-replication) about capacity reservation. |
|Target storage (Secondary VM doesn't use managed disks) | You can change the storage account that the VM uses after failover. | |Replica managed disks (Secondary VM uses managed disks) | Site Recovery creates replica managed disks in the primary region to mirror the secondary VM's managed disks. | |Cache storage | You can specify a cache storage account to be used during replication. By default, a new cache storage account is created, if it doesn't exist. |
static-web-apps Deploy Nuxtjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nuxtjs.md
When you build a Nuxt.js site using `npm run build`, the app is built as a tradi
> [!NOTE] > This folder is listed in the _.gitignore_ file because it should be generated by CI/CD when you deploy.
-## Push your static website to GitHub
-
-Azure Static Web Apps deploys your app from a GitHub repository and keeps doing so for every pushed commit to a designated branch. Use the following commands sync your changes to GitHub.
-
-1. Stage all changed files:
-
- ```bash
- git add .
- ```
-
-1. Commit all changes
-
- ```bash
- git commit -m "Update build config"
- ```
-
-1. Push your changes to GitHub.
-
- ```bash
- git push origin main
- ```
- ## Deploy your static website The following steps show how to link the app you just pushed to GitHub to Azure Static Web Apps. Once in Azure, you can deploy the application to a production environment.
The following steps show how to link the app you just pushed to GitHub to Azure
1. In the _App location_, enter **./** in the box. 1. Leave the _Api location_ box empty.
-1. In the _Output location_ box, enter **out**.
+1. In the _Output location_ box, enter **dist**.
### Review and create
storage Blob Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-cli.md
+
+ Title: Manage block blobs with Azure CLI
+
+description: Manage blobs with Azure CLI
++++ Last updated : 03/02/2022++
+# Manage block blobs with Azure CLI
+
+Blob storage supports block blobs, append blobs, and page blobs. Block blobs are optimized for uploading large amounts of data efficiently. Block blobs are ideal for storing images, documents, and other types of data not subjected to random read and write operations. This article explains how to work with block blobs.
+
+## Prerequisites
+++
+- This article requires version 2.0.46 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+### Authorize access to Blob storage
+
+You can authorize access to Blob storage from the Azure CLI either with Azure AD credentials or by using a storage account access key. Using Azure AD credentials is recommended, and this article's examples use Azure AD exclusively.
+
+Azure CLI commands for data operations against Blob storage support the `--auth-mode` parameter, which enables you to specify how to authorize a given operation. Set the `--auth-mode` parameter to *login* to authorize with Azure AD credentials. Only Blob storage data operations support the `--auth-mode` parameter. Management operations, such as creating a resource group or storage account, automatically use Azure AD credentials for authorization. For more information, see [Choose how to authorize access to blob data with Azure CLI](authorize-data-operations-cli.md).
+
+Run the `az login` command to open a browser and connect to your Azure subscription.
+
+```azurecli-interactive
+
+az login
+
+```
+
+### Create a container
+
+All blob data is stored within containers, so you'll need at least one container resource before you can upload data. If needed, use the following example to create a storage container. For more information, see [Managing blob containers using Azure CLI](blob-containers-cli.md).
+
+```azurecli-interactive
+
+#!/bin/bash
+storageAccount="<storage-account>"
+containerName="demo-container"
+
+# Create a container object
+az storage container create \
+ --name $containerName \
+ --account-name $storageAccount \
+ --auth-mode login
+
+```
+
+When you use the examples included in this article, you'll need to replace the placeholder values in brackets with your own values. For more information about signing into Azure with Azure CLI, see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+
+## Upload blobs
+
+Azure CLI offers commands that perform operations on one resource or on multiple resources, depending on your requirements.
+
+To upload a file to a block blob, pass the required parameter values to the `az storage blob upload` command. Supply the source path and file name with the `--file` parameter, and the name of the destination container with the `--container-name` parameter. You'll also need to supply the `--account-name` parameter. This command creates a new blob or overwrites the original blob if it already exists.
+
+You can use the `az storage blob upload-batch` command to recursively upload multiple blobs to a storage container. You can use Unix filename pattern matching to specify a range of files to upload with the `--pattern` parameter. The supported patterns are `*`, `?`, `[seq]`, and `[!seq]`. To learn more, refer to the Python documentation on [Unix filename pattern matching](https://docs.python.org/3.7/library/fnmatch.html).
+
+In the following example, the first operation uses the `az storage blob upload` command to upload a single, named file. The source file and destination storage container are specified with the `--file` and `--container-name` parameters.
+
+The second operation demonstrates the use of the `az storage blob upload-batch` command to upload multiple files. The `--if-unmodified-since` condition uses the `lastModified` variable, which is set to a timestamp 10 days in the past, so a destination blob is overwritten only if it hasn't been modified since that date. The value supplied for this parameter must be provided in UTC format.
+
+```azurecli-interactive
+
+#!/bin/bash
+storageAccount="<storage-account>"
+containerName="demo-container"
+lastModified=`date -d "10 days ago" '+%Y-%m-%dT%H:%MZ'`
+
+path="C:\\temp\\"
+filename="demo-file.txt"
+imageFiles="*.png"
+file="$path$filename"
+
+#Upload a single named file
+az storage blob upload \
+ --file $file \
+ --container-name $containerName \
+ --account-name $storageAccount \
+ --auth-mode login
+
+#Upload multiple image files recursively
+az storage blob upload-batch \
+ --destination $containerName \
+ --source $path \
+ --pattern "$imageFiles" \
+ --account-name $storageAccount \
+ --auth-mode login \
+ --if-unmodified-since $lastModified
+
+```
+
+## List blobs
+
+By default, the `az storage blob list` command lists all blobs stored within a container. You can use various approaches to refine the scope of your search. There's no restriction on the number of containers or blobs a storage account may have. To potentially avoid retrieving thousands of blobs, it's a good idea to limit the amount of data returned.
+
+Use the `--prefix` parameter to select either a single known file or a range of files whose names begin with a defined string.
+
+By default, only blobs are returned in a listing operation. In some scenarios, you may want to pass a value for the `--include` parameter to return additional types of objects, such as soft-deleted blobs, snapshots, and versions. These values can be combined to return multiple object types in a single listing.
+
+The `--num-results` parameter can be used to limit the number of unfiltered blobs returned from a container. The service returns up to 5,000 results per request, which helps ensure that manageable amounts of data are retrieved and that performance isn't impacted. If the number of blobs exceeds either the `--num-results` value or the service limit, a continuation token is returned. This token allows you to use multiple requests to retrieve any number of blobs. More information is available on [Enumerating blob resources](/rest/api/storageservices/enumerating-blob-resources).
+
+The following example shows two approaches to listing blobs. The first approach lists a single, named blob within a specific container by passing the blob's name to the `--prefix` parameter. The second approach uses the `--prefix` parameter to list blobs whose names begin with *img-louis* across a set of containers. The container search is restricted to five containers by using the `--num-results` parameter.
+
+For additional information, see the [az storage blob list](/cli/azure/storage/blob#az-storage-blob-list) reference.
+
+```azurecli-interactive
+
+#!/bin/bash
+storageAccount="<storage-account>"
+blobName="demo-file.txt"
+containerName="demo-container"
+blobPrefix="img-louis"
+numResults=5
+
+#Approach 1: List a single, named blob in a container
+az storage blob list \
+ --container $containerName \
+ --account-name $storageAccount \
+ --prefix $blobName \
+ --auth-mode login
+
+#Approach 2: Use the --prefix parameter to list blobs in all containers
+
+containerList=$( \
+ az storage container list \
+ --query "[].name" \
+ --num-results $numResults \
+ --account-name $storageAccount \
+ --auth-mode login \
+ --output tsv
+)
+for row in $containerList
+do
+ tmpName=$(echo $row | sed -e 's/\r//g')
+ echo $tmpName
+ az storage blob list \
+ --prefix $blobPrefix \
+ --container $tmpName \
+ --account-name $storageAccount \
+ --auth-mode login
+done
+```
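+
+As a small illustration of combining `--include` values, the following sketch lists blobs together with their snapshots and metadata. It reuses the variables from the previous example and assumes that the `s` (snapshots) and `m` (metadata) flags are available in your CLI version.
+
+```azurecli-interactive
+#List blobs along with their snapshots and metadata
+az storage blob list \
+    --container-name $containerName \
+    --include sm \
+    --account-name $storageAccount \
+    --auth-mode login \
+    --output table
+```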
+
+## Download a blob
+
+Depending on your use case, you'll use either the `az storage blob download` or `az storage blob download-batch` command to download blobs. To download an individual blob, call the `az storage blob download` command directly and pass values for the `--container-name`, `--file`, and `--name` parameters. By default, the blob is downloaded relative to the current working directory, but you can specify an alternate location with the `--file` parameter. The operation fails with an error if the specified path doesn't exist.
+
+To recursively download multiple blobs from a storage container, use the `az storage blob download-batch` command. This command supports Unix filename pattern matching with the `--pattern` parameter. The supported patterns are `*`, `?`, `[seq]`, and `[!seq]`. To learn more, refer to the Python documentation on [Unix filename pattern matching](https://docs.python.org/3.7/library/fnmatch.html).
+
+The following sample code provides an example of both single and multiple download approaches. It also offers a simplified approach to searching all containers for specific files using a wildcard. Because some environments may have many thousands of resources, using the `--num-results` parameter is recommended.
+
+For additional information, see the [az storage blob download](/cli/azure/storage/blob#az-storage-blob-download) and [az storage blob download-batch](/cli/azure/storage/blob#az-storage-blob-download-batch) references.
+
+```azurecli-interactive
+#!/bin/bash
+#Set variables
+storageAccount="<storage-account>"
+containerName="demo-container"
+
+destinationPath="C:\\temp\\downloads\\"
+destinationFilename="downloadedBlob.txt"
+file="$destinationPath$destinationFilename"
+sourceBlobName="demo-file.txt"
+
+#Download a single named blob
+
+az storage blob download \
+ --container $containerName \
+ --file $file \
+ --name $sourceBlobName \
+ --account-name $storageAccount \
+ --auth-mode login
+
+#Download multiple blobs using a pattern value
+
+az storage blob download-batch \
+ --destination $destinationPath \
+ --source $containerName \
+ --pattern "images/*.png" \
+ --account-name $storageAccount \
+ --auth-mode login
+
+#Use a loop to download matching blobs in a list of containers
+
+containerList=$( \
+ az storage container list \
+ --query "[].name" \
+ --num-results 5 \
+ --account-name $storageAccount \
+ --auth-mode login \
+ --output tsv
+)
+for row in $containerList
+do
+ tmpName=$(echo $row | sed -e 's/\r//g')
+ echo $tmpName
+
+ az storage blob download-batch \
+ --destination $destinationPath \
+ --source $tmpName \
+ --pattern "*louis*.*" \
+ --account-name $storageAccount \
+ --auth-mode login
+done
+
+```
+
+## Manage blob properties and metadata
+
+A blob exposes both system properties and user-defined metadata. System properties exist on each Blob Storage resource. Some properties are read-only, while others can be read or set. Under the covers, some system properties map to certain standard HTTP headers.
+
+User-defined metadata consists of one or more name-value pairs that you specify for a Blob Storage resource. You can use metadata to store additional values with the resource. Metadata values are for your own purposes, and don't affect how the resource behaves.
+
+### Reading blob properties
+
+To read blob properties or metadata, you must first retrieve the blob from the service. Use the `az storage blob show` command to retrieve a blob's properties and metadata, but not its content. The following example retrieves a blob and lists its properties.
+
+For additional information, see the [az storage blob show](/cli/azure/storage/blob#az-storage-blob-show) reference.
+
+```azurecli-interactive
+#!/bin/bash
+#Set variables
+storageAccount="<storage-account>"
+containerName="demo-container"
+
+az storage blob show \
+ --container demo-container \
+ --name demo-file.txt \
+ --account-name $storageAccount \
+ --auth-mode login
+```
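+
+If you only need one value from the output, you can add a JMESPath query. The following sketch returns just the blob's content type; the property path shown here reflects the CLI's camel-cased JSON output and may vary slightly between CLI versions.
+
+```azurecli-interactive
+#Return only the blob's content type
+az storage blob show \
+    --container-name $containerName \
+    --name $blobName \
+    --account-name $storageAccount \
+    --auth-mode login \
+    --query "properties.contentSettings.contentType" \
+    --output tsv
+```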
+
+### Read and write blob metadata
+
+Blob metadata is an optional set of name/value pairs associated with a blob. As shown in the previous example, there's no metadata associated with a blob initially, though it can be added when necessary. To read, use the `az storage blob metadata show` command. To update blob metadata, you'll use `az storage blob metadata update` and supply an array of key-value pairs. For more information, see the [az storage blob metadata](/cli/azure/storage/blob/metadata) reference.
+
+The following example sets several metadata values on a blob and then retrieves and displays those values.
+
+```azurecli-interactive
+#!/bin/bash
+storageAccount="<storage-account>"
+containerName="demo-container"
+blobName="blue-moon.mp3"
+
+metadata=("Written=1934" "Recorded=1958")
+metadata+=("Lyricist=Lorenz Hart")
+metadata+=("Composer=Richard Rogers")
+metadata+=("Artist=Tony Bennett")
+
+#Update metadata
+az storage blob metadata update \
+ --container-name $containerName \
+ --name $blobName \
+ --metadata "${metadata[@]}" \
+ --account-name $storageAccount \
+ --auth-mode login
+
+#Retrieve updated blob metadata
+az storage blob metadata show \
+ --container-name $containerName \
+ --name $blobName \
+ --account-name $storageAccount \
+ --auth-mode login
+```
+
+## Copy operations for blobs
+
+There are many scenarios in which blobs of different types may be copied. Examples in this article are limited to block blobs. Azure CLI offers commands that perform operations on one resource or on multiple resources, depending on your requirements.
+
+To copy a specific blob, use the `az storage blob copy start` command and specify values for source and destination containers and blobs. It's also possible to provide a uniform resource identifier (URI), share, or shared access signature (SAS) as the source.
+
+You can also specify the conditions under which the blob will be copied. These conditions can be set for either the source or destination blob. You can reference the last modified date, tag data, or ETag value. You may, for example, choose to copy blobs that haven't been recently modified to a separate container. For more information, see [Specifying conditional headers for Blob service operations](/rest/api/storageservices/specifying-conditional-headers-for-blob-service-operations).
+
+You can use the `az storage blob copy start-batch` command to recursively copy multiple blobs between storage containers within the same storage account. This command requires values for the `--source-container` and `--destination-container` parameters, and can copy all files between the source and destination. Like other CLI batch commands, this command supports Unix filename pattern matching with the `--pattern` parameter. The supported patterns are `*`, `?`, `[seq]`, and `[!seq]`. To learn more, refer to the Python documentation on [Unix filename pattern matching](https://docs.python.org/3.7/library/fnmatch.html).
+
+> [!NOTE]
+> Consider the use of AzCopy for ease and performance, especially when copying blobs between storage accounts. AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. Find out more about how to [Get started with AzCopy](/azure/storage/common/storage-use-azcopy-v10).
+
+For more information, see the [az storage blob copy](/cli/azure/storage/blob/copy) reference.
+
+The following example copies the **secret-town-road.jpg** blob from the **photos** container to the **locations** container within the same storage account. The result verifies the success of the copy operation. A sketch of a batch copy operation follows the single-copy example.
+
+```azurecli-interactive
+#!/bin/bash
+storageAccount="<storage-account>"
+sourceContainer="photos"
+blobName="secret-town-road.jpg"
+destContainer="locations"
+
+az storage blob copy start \
+ --destination-container $destContainer \
+ --destination-blob $blobName \
+ --source-container $sourceContainer \
+ --source-blob $blobName \
+ --account-name $storageAccount \
+ --auth-mode login
+```
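+
+The following sketch shows a batch copy under the same assumptions. It copies every *.jpg* blob from the **photos** container to the **locations** container by using `az storage blob copy start-batch` with the `--pattern` parameter.
+
+```azurecli-interactive
+#Copy all .jpg blobs from the source container to the destination container
+az storage blob copy start-batch \
+    --destination-container $destContainer \
+    --source-container $sourceContainer \
+    --pattern "*.jpg" \
+    --account-name $storageAccount \
+    --auth-mode login
+```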
+
+## Snapshot blobs
+
+A snapshot is a read-only version of a blob that's taken at a point in time. Any leases associated with the base blob don't affect the snapshot, and you can't acquire a lease on a snapshot. Read more about [Blob snapshots](snapshots-overview.md). For more information, see the [az storage blob snapshot](/cli/azure/storage/blob#az-storage-blob-snapshot) reference.
+
+The following sample code creates a snapshot of an existing blob.
+
+```azurecli-interactive
+#!/bin/bash
+storageAccount="<storage-account>"
+containerName="demo-container"
+blobName="demo-file.txt"
+
+az storage blob snapshot \
+ --container-name $containerName \
+ --name $blobName \
+ --account-name $storageAccount \
+ --auth-mode login
+
+```
+
+## Set blob tier
+
+When you change a blob's tier, you move the blob and all of its data to the target tier. You can change the tier between **Hot**, **Cool**, and **Archive** with the `az storage blob set-tier` command.
+
+Depending on your requirements, you can also use the *Copy Blob* operation to copy a blob from one tier to another. The *Copy Blob* operation creates a new blob in the desired tier, while the source blob remains in the original tier.
+
+Changing tiers from **Cool** or **Hot** to **Archive** takes place almost immediately. After a blob is moved to the **Archive** tier, it's considered to be offline and can't be read or modified. Before you can read or modify an archived blob's data, you'll need to rehydrate it to an online tier. Read more about [Blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+
+For additional information, see the [az storage blob set-tier](/cli/azure/storage/blob#az-storage-blob-set-tier) reference.
+
+The following sample code sets the tier to **Hot** for a single, named blob within the `demo-container` container.
+
+```azurecli-interactive
+#!/bin/bash
+storageAccount="<storage-account>"
+containerName="demo-container"
+
+az storage blob set-tier \
+ --container-name $containerName \
+ --name Blue-Moon.mp3 \
+ --tier Hot \
+ --account-name $storageAccount \
+ --auth-mode login
+```
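+
+Moving a blob out of the **Archive** tier is the rehydration operation described earlier. As a minimal sketch, the following command rehydrates an archived blob to the **Hot** tier. The `--rehydrate-priority` parameter is assumed to be available in your CLI version; `Standard` rehydration can take several hours, while `High` completes faster at a higher cost.
+
+```azurecli-interactive
+#Rehydrate an archived blob to the Hot tier
+az storage blob set-tier \
+    --container-name $containerName \
+    --name Blue-Moon.mp3 \
+    --tier Hot \
+    --rehydrate-priority Standard \
+    --account-name $storageAccount \
+    --auth-mode login
+```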
+
+## Operations using blob tags
+
+Blob index tags make data management and discovery easier. Blob index tags are user-defined key-value index attributes that you can apply to your blobs. Once configured, you can categorize and find objects within an individual container or across all containers. Blob resources can be dynamically categorized by updating their index tags without requiring a change in container organization. This approach offers a flexible way to cope with changing data requirements. You can use both metadata and index tags simultaneously. For more information on index tags, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
+
+> [!IMPORTANT]
+> Support for blob index tags is in preview status.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+> [!TIP]
+> The code sample provided below uses pattern matching to obtain text from an XML file having a known structure. The example is used to illustrate a simplified approach for adding blob tags using basic Bash functionality. The use of an actual data parsing tool is always recommended when consuming data for production workloads.
+
+The following example illustrates how to add blob index tags to a series of blobs. The example reads data from an XML file and uses it to create index tags on several blobs. To use the sample code, create a local *bloblist.xml* file in your *C:\temp* directory. The XML data is provided below.
+
+For additional information, see the [az storage blob tag](/cli/azure/storage/blob/tag) reference.
+
+```xml
+<Venue Name="House of Prime Rib" Type="Restaurant">
+ <Files>
+ <File path="transactions/12027121.csv" />
+ <File path="campaigns/radio-campaign.docx" />
+ <File path="photos/bannerphoto.png" />
+ <File path="archive/completed/2020review.pdf" />
+ <File path="logs/2020/01/01/logfile.txt" />
+ </Files>
+</Venue>
+```
+
+The sample code iterates the lines within the XML file. It locates the *Venue* element and creates variables for the *Name* and *Type* values. It then iterates through the remaining lines and creates tags for each blob referenced by a `File` node.
+
+```azurecli-interactive
+#!/bin/bash
+storageAccount="<storage-account>"
+containerName="demo-container"
+
+while read line
+do
+
+#Set Tag values
+if echo "$line" | grep -q "<Venue";then
+ name=`echo "$line" | cut -d'"' -f 2`
+ type=`echo "$line" | cut -d'"' -f 4`
+ tags=("name=$name")
+ tags+=("type=$type")
+fi
+
+#Add tags to blobs
+if echo "$line" | grep -q "<File ";then
+ blobName=`echo "$line" | cut -d'"' -f 2`
+
+ az storage blob tag set \
+ --container-name $containerName \
+ --name $blobName \
+ --account-name $storageAccount \
+ --auth-mode login \
+ --tags "{$tags[@]}"
+fi
+
+done < /mnt/c/temp/bloblist.xml
+
+```
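+
+Once tags are set, you can use them to find matching blobs. The following sketch assumes that the `az storage blob filter` command and its `--tag-filter` parameter are available in your CLI version (the command was in preview at the time of writing); the exact quoting of the filter expression may differ in your shell.
+
+```azurecli-interactive
+#Find all blobs tagged with type = 'Restaurant' across the storage account
+az storage blob filter \
+    --tag-filter "\"type\" = 'Restaurant'" \
+    --account-name $storageAccount \
+    --auth-mode login
+```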
+
+## Delete blobs
+
+You can delete either a single blob or series of blobs with the `az storage blob delete` and `az storage blob delete-batch` commands. When deleting multiple blobs, you can use conditional operations, loops, or other automation as shown in the examples below.
+
+> [!WARNING]
+> Running the following examples may permanently delete blobs. Microsoft recommends enabling container soft delete to protect containers and blobs from accidental deletion. For more info, see [Soft delete for containers](soft-delete-container-overview.md).
+
+The following sample code provides an example of both single and multiple delete operations. The first example deletes a single, named blob. The second example illustrates the use of Bash logic to delete multiple blobs whose names end in an even number. The third example uses the `delete-batch` command to delete all blobs beginning with *bennett-*, except *bennett-2*.
+
+For more information, see the [az storage blob delete](/cli/azure/storage/blob#az-storage-blob-delete) and [az storage blob delete-batch](/cli/azure/storage/blob#az-storage-blob-delete-batch) reference.
+
+```azurecli-interactive
+#!/bin/bash
+storageAccount="<storage-account>"
+containerName="demo-container"
+
+blobName="demo-file.txt"
+blobPrefix="sinatra-"
+
+#Delete a single, named blob
+az storage blob delete \
+ --container-name $containerName \
+ --name $blobName \
+ --account-name $storageAccount \
+ --auth-mode login
+
+#Iterate a blob list, deleting blobs whose names end with even numbers
+
+## Get a list of blobs matching the prefix
+blobList=$(az storage blob list \
+ --query "[].name" \
+ --prefix $blobPrefix \
+ --container-name $containerName \
+ --account-name $storageAccount \
+ --auth-mode login \
+ --output tsv)
+
+## Delete blobs whose names end in an even number
+for row in $blobList
+do
+ #Get the blob's number
+ tmpBlob=$(echo $row | sed -e 's/\r//g')
+ tmpName=$(echo ${row%.*} | sed -e 's/\r//g')
+
+ if [ `expr ${tmpName: ${#blobPrefix}} % 2` == 0 ]
+ then
+
+ echo "Deleting $tmpBlob"
+ az storage blob delete \
+ --container-name $containerName \
+ --name $tmpBlob \
+ --account-name $storageAccount \
+ --auth-mode login
+
+ fi
+done
+
+#Delete multiple blobs using delete-batch
+az storage blob delete-batch \
+ --source $containerName \
+ --pattern "bennett-[!2].*" \
+ --account-name $storageAccount \
+ --auth-mode login
+```
+
+If your storage account's soft delete data protection option is enabled, you can use a listing operation to return blobs deleted within the associated retention period. To learn more about soft delete, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article.
+
+Use the following example to retrieve a list of blobs deleted within the container's associated retention period. The result displays a list of recently deleted blobs.
+
+```azurecli-interactive
+#!/bin/bash
+storageAccount="<storage-account>"
+containerName="demo-container"
+
+blobPrefix="sinatra-"
+
+#Retrieve a list of all deleted blobs
+az storage blob list \
+ --container-name $containerName \
+ --include d \
+ --output table \
+ --account-name $storageAccount \
+ --auth-mode login \
+ --query "[?deleted].{name:name,deleted:properties.deletedTime}"
+
+#Retrieve a list of all blobs matching specific prefix
+az storage blob list \
+ --container-name $containerName \
+ --prefix $blobPrefix \
+ --output table \
+ --include d \
+ --account-name $storageAccount \
+ --auth-mode login \
+ --query "[].{name:name,deleted:deleted}"
+```
+
+## Restore a soft-deleted blob
+
+As mentioned in the [Delete blobs](#delete-blobs) section, you can configure the soft delete data protection option on your storage account. When enabled, it's possible to restore blobs deleted within the associated retention period.
+
+The following examples restore soft-deleted blobs with the `az storage blob undelete` method. The first example uses the `--name` parameter to restore a single named blob. The second example uses a loop to restore the remainder of the deleted blobs. Before you can follow this example, you'll need to enable soft delete on at least one of your storage accounts.
+
+To learn more about the soft delete data protection option, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article or the [az storage blob undelete](/cli/azure/storage/blob#az-storage-blob-undelete) reference.
+
+```azurecli-interactive
+#!/bin/bash
+storageAccount="<storage-account>"
+containerName="demo-container"
+
+blobName="demo-file.txt"
+
+#Restore a single, named blob
+az storage blob undelete \
+ --container-name $containerName \
+ --name $blobName \
+ --account-name $storageAccount \
+ --auth-mode login
+
+#Retrieve all deleted blobs
+blobList=$( \
+ az storage blob list \
+ --container-name $containerName \
+ --include d \
+ --output tsv \
+ --account-name $storageAccount \
+ --auth-mode login \
+ --query "[?deleted].[name]" \
+)
+
+#Iterate list of deleted blobs and restore
+for row in $blobList
+do
+ tmpName=$(echo $row | sed -e 's/\r//g')
+ echo "Restoring $tmpName"
+ az storage blob undelete \
+ --container-name $containerName \
+ --name $tmpName \
+ --account-name $storageAccount \
+ --auth-mode login
+done
+```
+
+## Next steps
+
+- [Choose how to authorize access to blob data with Azure CLI](/azure/storage/blobs/authorize-data-operations-cli)
+- [Run PowerShell commands with Azure AD credentials to access blob data](/azure/storage/blobs/authorize-data-operations-powershell)
+- [Manage blob containers using CLI](blob-containers-cli.md)
storage Data Lake Storage Migrate Gen1 To Gen2 Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md
Once the migration is complete, both in "Copy data" and "Complete migration" opt
#### Gen1 doesn't have containers and Gen2 has them - what should I expect?
-When we copy the data over to your Gen2-enabled account, we automatically create a container named `Gen1`. If you choose to copy only data, then you can rename that container after the data copy is complete. If you perform a complete migration, and you plan to use the application compatibility layer, then you should avoid changing the container name. When you no longer want to use the compatibility layer, you can change the name of the container.
+When we copy the data over to your Gen2-enabled account, we automatically create a container named `Gen1`. Containers can't be renamed in Gen2, so after the migration you can copy the data from the `Gen1` container to a new container as needed.
#### What should I consider in terms of migration performance?
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
Some data stays idle in the cloud and is rarely, if ever, accessed. The followin
### Expire data based on age
-Some data is expected to expire days or months after creation. You can configure a lifecycle management policy to expire data by deletion based on data age. The following example shows a policy that deletes all block blobs older than 365 days.
+Some data is expected to expire days or months after creation. You can configure a lifecycle management policy to expire data by deletion based on data age. The following example shows a policy that deletes all block blobs that have not been modified in the last 365 days.
```json {
storage Secure File Transfer Protocol Host Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md
Previously updated : 01/14/2022 Last updated : 03/04/2022
Blob storage now supports the SSH File Transfer Protocol (SFTP). This support pr
When you connect to Blob Storage by using an SFTP client, you might be prompted to trust a host key. During the public preview, you can verify the host key by finding that key in the list presented in this article. > [!IMPORTANT]
-> SFTP support is currently in PREVIEW and is available in [these regions](secure-file-transfer-protocol-support.md#regional-availability).
+> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts.
> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
When you connect to Blob Storage by using an SFTP client, you might be prompted
## See also - [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md)
+- [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md)
+- [Known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
Previously updated : 02/03/2022 Last updated : 03/04/2022
This article describes limitations and known issues of SFTP support for Azure Blob Storage. > [!IMPORTANT]
-> SFTP support is currently in PREVIEW and is available in [these regions](secure-file-transfer-protocol-support.md#regional-availability).
+> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts.
> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
This article describes limitations and known issues of SFTP support for Azure Bl
- To resolve the `Failed to update SFTP settings for account 'accountname'. Error: The value 'True' is not allowed for property isSftpEnabled.` error, ensure that the following pre-requisites are met at the storage account level:
- - The account needs to be a GPv2 or Block Blob Storage account.
+ - The account needs to be a general-purpose v2 or premium block blob account.
- The account needs to have hierarchical namespace enabled on it.
- - The account needs to be in a [supported regions](secure-file-transfer-protocol-support.md#regional-availability).
-
- Customer's subscription needs to be signed up for the preview. Request to join via 'Preview features' in the Azure portal. Requests are automatically approved. - To resolve the `Home Directory not accessible error.` error, check that:
This article describes limitations and known issues of SFTP support for Azure Bl
## See also - [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md)-- [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP) (preview)](secure-file-transfer-protocol-support-how-to.md)
+- [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md)
+- [Host keys for SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-host-keys.md)
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
Previously updated : 02/16/2022 Last updated : 03/04/2022
You can securely connect to the Blob Storage endpoint of an Azure Storage accoun
To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) in Azure Blob Storage](secure-file-transfer-protocol-support.md). > [!IMPORTANT]
-> SFTP support is currently in PREVIEW and is available in [these regions](secure-file-transfer-protocol-support.md#regional-availability).
+> SFTP support is currently in PREVIEW and is available on general-purpose v2 and premium block blob accounts.
> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
See the documentation of your SFTP client for guidance about how to connect and
- [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md) - [Known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
+- [Host keys for SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-host-keys.md)
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Previously updated : 02/16/2022 Last updated : 03/04/2022
Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage accounts via an SFTP endpoint, allowing you to leverage SFTP for file access, file transfer, as well as file management. > [!IMPORTANT]
-> SFTP support currently is in PREVIEW and is available in only [these regions](secure-file-transfer-protocol-support.md#regional-availability).
+> SFTP support currently is in PREVIEW and is available on general-purpose v2 and premium block blob accounts.
> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
SFTP support for Azure Blob Storage currently limits its cryptographic algorithm
SFTP clients commonly found to not support algorithms listed above include Apache SFTP server, Axway, Moveit, Five9, Workday, Mule, Kemp, Salesforce, XFB.
-## Known issues and limitations
-
-See the [Known issues](secure-file-transfer-protocol-known-issues.md) article for a complete list of issues and limitations with the current release of SFTP support.
+## Connecting with SFTP
-## Regional availability
+To get started, enable SFTP support, create a local user, and assign permissions for that local user. Then, you can use any SFTP client to securely connect and then transfer files. For step-by-step guidance, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md).
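+
+For example, once a local user exists, connecting from an OpenSSH-style client looks roughly like the following sketch. The account name `contoso` and local user name `contosouser` are placeholders; the username passed to the client takes the form `<storage-account>.<local-user>`.
+
+```bash
+# Connect to the Blob Storage SFTP endpoint with a local user
+sftp contoso.contosouser@contoso.blob.core.windows.net
+```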
-SFTP support is available in the following regions:
+## Known issues and limitations
-- North Central US-- East US 2-- Canada East-- Canada Central-- North Europe-- Australia East-- Switzerland North-- Germany West Central-- East Asia-- France Central-- West Europe
+See the [Known issues](secure-file-transfer-protocol-known-issues.md) article for a complete list of issues and limitations with the current release of SFTP support.
## Pricing and billing
Transaction and storage costs are based on factors such as storage account type
## See also -- [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP) (preview)](secure-file-transfer-protocol-support-how-to.md)-- [Known issues with SSH File Transfer Protocol (SFTP) in Azure Blob Storage (preview)](secure-file-transfer-protocol-known-issues.md)
+- [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md)
+- [Known issues with SSH File Transfer Protocol (SFTP) in Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
+- [Host keys for SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-host-keys.md)