Updates from: 10/24/2022 01:08:17
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
Title: Use additional context in Microsoft Authenticator notifications (Preview) - Azure Active Directory
+ Title: Use additional context in Microsoft Authenticator notifications - Azure Active Directory
description: Learn how to use additional context in MFA notifications
# Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# How to use additional context in Microsoft Authenticator notifications (Preview) - Authentication methods policy
+# How to use additional context in Microsoft Authenticator notifications - Authentication methods policy
This topic covers how to improve the security of user sign-in by adding the application name and geographic location of the sign-in to Microsoft Authenticator passwordless and push notifications.
You can enable and disable application name and geographic location separately.
Identify your single target group for each of the features. Then use the following API endpoint to change the displayAppInformationRequiredState or displayLocationInformationRequiredState properties under featureSettings to **enabled** and include or exclude the groups you want: ```http
-https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
``` #### MicrosoftAuthenticatorAuthenticationMethodConfiguration properties
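As a rough sketch (not taken verbatim from the article) of what a PATCH request to the endpoint above might look like when enabling both features for a single group, the body below uses a placeholder group object ID; verify the exact shape against the MicrosoftAuthenticatorAuthenticationMethodConfiguration properties covered in this section.
```http
PATCH https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
Content-Type: application/json

{
    "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
    "featureSettings": {
        "displayAppInformationRequiredState": {
            "state": "enabled",
            "includeTarget": { "targetType": "group", "id": "<placeholder-group-object-id>" }
        },
        "displayLocationInformationRequiredState": {
            "state": "enabled",
            "includeTarget": { "targetType": "group", "id": "<placeholder-group-object-id>" }
        }
    }
}
```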
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
//Change the Query to PATCH and Run query {
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
```json {
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
To verify, run GET again and verify the ObjectID: ```http
-GET https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
+GET https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
``` #### Example of how to disable application name and only enable geographic location
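For the mixed case named in the heading above, a minimal sketch of the request body (again with a placeholder group object ID, and assuming the same featureSettings schema as above) could look like the following; the truncated excerpts below come from the article's full example.
```http
PATCH https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
Content-Type: application/json

{
    "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
    "featureSettings": {
        "displayAppInformationRequiredState": {
            "state": "disabled",
            "includeTarget": { "targetType": "group", "id": "<placeholder-group-object-id>" }
        },
        "displayLocationInformationRequiredState": {
            "state": "enabled",
            "includeTarget": { "targetType": "group", "id": "<placeholder-group-object-id>" }
        }
    }
}
```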
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
```json {
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
```json {
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
```json {
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
Only users who are enabled for Microsoft Authenticator under Microsoft Authentic
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
To turn off additional context, you'll need to PATCH **displayAppInformationRequ
```json {
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodConfigurations/$entity",
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodConfigurations/$entity",
"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration", "id": "MicrosoftAuthenticator", "state": "enabled",
To turn off additional context, you'll need to PATCH **displayAppInformationRequ
} } },
- "includeTargets@odata.context": "https://graph.microsoft.com/beta/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
+ "includeTargets@odata.context": "https://graph.microsoft.com/v1.0/$metadata#authenticationMethodsPolicy/authenticationMethodConfigurations('MicrosoftAuthenticator')/microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration/includeTargets",
"includeTargets": [ { "targetType": "group",
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
Title: Use number matching in multifactor authentication (MFA) notifications (Preview) - Azure Active Directory
+ Title: Use number matching in multifactor authentication (MFA) notifications - Azure Active Directory
description: Learn how to use number matching in MFA notifications Previously updated : 10/07/2022 Last updated : 10/21/2022 # Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# How to use number matching in multifactor authentication (MFA) notifications (Preview) - Authentication methods policy
+# How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy
This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security. >[!NOTE]
->Number matching is a key security upgrade to traditional second factor notifications in the Authenticator app that will be enabled by default for all tenants a few months after general availability (GA).<br>
+>Number matching is a key security upgrade to traditional second factor notifications in the Authenticator app that will be enabled for all users of the Microsoft Authenticator app starting February 28, 2023.<br>
>We highly recommend enabling number matching in the near-term for improved sign-in security. ## Prerequisites
This topic covers how to enable number matching in Microsoft Authenticator push
>[!NOTE] >The policy schema for Microsoft Graph APIs has been improved. The older policy schema is now deprecated. Make sure you use the new schema to help prevent errors. -- If your organization is using ADFS adapter or NPS extensions, upgrade to the latest versions for a consistent experience.
+- If your organization is using AD FS adapter or NPS extensions, upgrade to the latest versions for a consistent experience.
## Number matching
Number matching is available for the following scenarios. When enabled, all scen
>[!NOTE] >For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
-Number matching will be available in Azure Government two weeks after General Availability. Number matching isn't supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
+Number matching is available for sign in for Azure Government. It is available for combined registration two weeks after General Availability. Number matching isn't supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
### Multifactor authentication
During self-service password reset, the Authenticator app notification will show
### Combined registration
-When a user goes through combined registration to set up the Authenticator app, the user is asked to approve a notification as part of adding the account. For users who are enabled for number matching, this notification will show a number that they need to type in their Authenticator app notification.
+When a user goes through combined registration to set up the Authenticator app, the user is asked to approve a notification as part of adding the account. For users who are enabled for number matching, this notification will show a number that they need to type in their Authenticator app notification. Number matching will be available for combined registration in Azure Government two weeks after General Availability.
### AD FS adapter
To enable number matching in the Azure AD portal, complete the following steps:
:::image type="content" border="true" source="./media/how-to-mfa-number-match/enable-settings-number-match.png" alt-text="Screenshot of how to enable Microsoft Authenticator settings for Push authentication mode.":::
-1. On the **Configure** tab, for **Require number matching for push notifications (Preview)**, change **Status** to **Enabled**, choose who to include or exclude from number matching, and click **Save**.
+1. On the **Configure** tab, for **Require number matching for push notifications**, change **Status** to **Enabled**, choose who to include or exclude from number matching, and click **Save**.
:::image type="content" border="true" source="./media/how-to-mfa-number-match/number-match.png" alt-text="Screenshot of how to enable number matching.":::
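As an alternative to the portal steps above, the same setting can also be driven through the Authentication methods policy in Microsoft Graph. The sketch below is an assumption-based illustration only: the `numberMatchingRequiredState` property name, the v1.0 endpoint, and the special `all_users` target ID should be verified against the Graph API section of the article before use.
```http
PATCH https://graph.microsoft.com/v1.0/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
Content-Type: application/json

{
    "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
    "featureSettings": {
        "numberMatchingRequiredState": {
            "state": "enabled",
            "includeTarget": { "targetType": "group", "id": "all_users" }
        }
    }
}
```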
+## FAQ
+
+### Can I opt out of number matching?
+
+Yes, currently you can disable number matching. We highly recommend that you enable number matching for all users in your tenant to protect yourself from MFA fatigue attacks. Microsoft will enable number matching for all tenants by Feb 28, 2023. After protection is enabled by default, users can't opt out of number matching in Microsoft Authenticator push notifications.
+
+### What about my Apple Watch?
+
+Apple Watch will remain unsupported for number matching. We recommend you uninstall the Microsoft Authenticator Apple Watch app because you will have to approve notifications on your phone.
+
+### What happens if a user runs an older version of Microsoft Authenticator?
+
+If a user is running an older version of Microsoft Authenticator that doesn't support number matching, authentication won't work if number matching is enabled. Users need to upgrade to the latest version of Microsoft Authenticator to use it for sign-in.
+ ## Next steps [Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
The set of optional claims available by default for applications to use are list
| `fwd` | IP address.| JWT | | Adds the original IPv4 address of the requesting client (when inside a VNET) | | `groups`| Optional formatting for group claims |JWT, SAML| |For details see [Group claims](#configuring-groups-optional-claims) below. For more information about group claims, see [How to configure group claims](../hybrid/how-to-connect-fed-group-claims.md). Used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well. | `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | Value is `app` when the token is an app-only token. This claim is the most accurate way for an API to determine if a token is an app token or an app+user token.|
-| `login_hint` | Login hint | JWT | MSA, Azure AD | An opaque, reliable login hint claim. This claim is the best value to use for the `login_hint` OAuth parameter in all flows to get SSO. It can be passed between applications to help them silently SSO as well - application A can sign in a user, read the `login_hint` claim, and then send the claim and the current tenant context to application B in the query string or fragment when the user selects on a link that takes them to application B. To avoid race conditions and reliability issues, the `login_hint` claim *doesn't* include the current tenant for the user, and defaults to the user's home tenant when used. If you're operating in a guest scenario where the user is from another tenant, you must provide a tenant identifier in the sign-in request, and pass the same to apps you partner with. This claim is intended for use with your SDK's existing `login_hint` functionality, however that it exposed. |
+| `login_hint` | Login hint | JWT | MSA, Azure AD | An opaque, reliable login hint claim that's base64 encoded. Do not modify this value. This claim is the best value to use for the `login_hint` OAuth parameter in all flows to get SSO. It can be passed between applications to help them silently SSO as well - application A can sign in a user, read the `login_hint` claim, and then send the claim and the current tenant context to application B in the query string or fragment when the user selects a link that takes them to application B. To avoid race conditions and reliability issues, the `login_hint` claim *doesn't* include the current tenant for the user, and defaults to the user's home tenant when used. If you're operating in a guest scenario where the user is from another tenant, you must provide a tenant identifier in the sign-in request, and pass the same to apps you partner with. This claim is intended for use with your SDK's existing `login_hint` functionality, however that is exposed. |
| `sid` | Session ID, used for per-session user sign-out. | JWT | Personal and Azure AD accounts. | | | `tenant_ctry` | Resource tenant's country/region | JWT | | Same as `ctry` except set at a tenant level by an admin. Must also be a standard two-letter value. | | `tenant_region_scope` | Region of the resource tenant | JWT | | |
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
These samples show how to write a single-page application secured with Microsoft
> [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample(s) <br/>on GitHub | Auth<br/> libraries | Auth flow | > | - | -- | - | -- |
-> | Angular | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call .NET Core web API](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Call .NET Core web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/7-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploy to Azure Storage and App Service](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/4-Deployment/README.md)| MSAL Angular | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) |
+> | Angular | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call .NET Core web API](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Call .NET Core web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/blob/main/6-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploy to Azure Storage and App Service](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/4-Deployment/README.md)| MSAL Angular | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) |
> | Blazor WebAssembly | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-OIDC/MyOrg/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-OIDC/B2C/README.md)<br/>&#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-graph-user/Call-MSGraph/README.md)<br/>&#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/Deploy-to-Azure/README.md) | MSAL.js | Implicit Flow | > | JavaScript | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/3-Authorization-II/1-call-api/README.md)<br/>&#8226; [Call Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/3-Authorization-II/2-call-api-b2c/README.md)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/1-call-api-graph/README.md)<br/>&#8226; [Call Node.js web API via OBO and CA](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/2-call-api-api-c)| MSAL.js | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) <br/>&#8226; Conditional Access | > | React | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Call Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/6-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/>&#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploy to Azure Storage and App Service](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/1-deploy-storage/README.md)<br/>&#8226; [Deploy to Azure Static Web Apps](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/2-deploy-static/README.md)| MSAL React | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) <br/>&#8226; Conditional Access |
active-directory B2b Quickstart Add Guest Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md
# Quickstart: Add a guest user and send an invitation
-With Azure AD [B2B collaboration](what-is-b2b.md), you can invite anyone to collaborate with your organization using their own work, school, or social account. In this quickstart, you'll learn how to add a new guest user to your Azure AD directory in the Azure portal. You'll also send an invitation and see what the guest user's invitation redemption process looks like. In addition to this quickstart, you can learn more about adding guest users [in the Azure portal](add-users-administrator.md), via [PowerShell](b2b-quickstart-invite-powershell.md), or [in bulk](tutorial-bulk-invite.md).
+With Azure AD [B2B collaboration](what-is-b2b.md), you can invite anyone to collaborate with your organization using their own work, school, or social account.
+
+In this quickstart, you'll learn how to add a new guest user to your Azure AD directory in the Azure portal. You'll also send an invitation and see what the guest user's invitation redemption process looks like. In addition to this quickstart, you can learn more about adding guest users [in the Azure portal](add-users-administrator.md), via [PowerShell](b2b-quickstart-invite-powershell.md), or [in bulk](tutorial-bulk-invite.md).
If you don’t have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
To complete the scenario in this quickstart, you need:
1. Under **Azure services**, select **Azure Active Directory** (or use the search box to find and select **Azure Active Directory**).
- ![Screenshot showing where to select the Azure Active Directory service.](media/quickstart-add-users-portal/azure-active-directory-service.png)
+ :::image type="content" source="media/quickstart-add-users-portal/azure-active-directory-service.png" alt-text="Screenshot showing where to select the Azure Active Directory service.":::
1. Under **Manage**, select **Users**.
- ![Screenshot showing where to select the Users option](media/quickstart-add-users-portal/quickstart-users-portal-user.png)
+ :::image type="content" source="media/quickstart-add-users-portal/quickstart-users-portal-user.png" alt-text="Screenshot showing where to select the Users option.":::
-1. Select **New guest user**.
+1. Under **New user**, select **Invite external user**.
- ![Screenshot showing where to select the New guest user option.](media/quickstart-add-users-portal/new-guest-user.png)
+ :::image type="content" source="media/quickstart-add-users-portal/new-guest-user.png" alt-text="Screenshot showing where to select the New guest user option.":::
1. On the **New user** page, select **Invite user** and then add the guest user's information.
To complete the scenario in this quickstart, you need:
- **Groups**: You can add the guest user to one or more existing groups, or you can do it later. - **Roles**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role.
- ![Screenshot showing the new user page.](media/quickstart-add-users-portal/invite-user.png)
+ :::image type="content" source="media/quickstart-add-users-portal/invite-user.png" alt-text="Screenshot showing the new user page.":::
1. Select **Invite** to automatically send the invitation to the guest user. A notification appears in the upper right with the message **Successfully invited user**. 1. After you send the invitation, the user account is automatically added to the directory as a guest.
- ![Screenshot showing the new guest user in the directory.](media/quickstart-add-users-portal/new-guest-user-directory.png)
+ :::image type="content" source="media/quickstart-add-users-portal/new-guest-user-directory.png" alt-text="Screenshot showing the new guest user in the directory.":::
+ ## Accept the invitation
Now sign in as the guest user to see the invitation.
1. In your inbox, open the email from "Microsoft Invitations on behalf of Contoso."
- ![Screenshot showing the B2B invitation email](media/quickstart-add-users-portal/quickstart-users-portal-email-small.png)
+ :::image type="content" source="media/quickstart-add-users-portal/quickstart-users-portal-email-small.png" alt-text="Screenshot showing the B2B invitation email.":::
+ 1. In the email body, select **Accept invitation**. A **Review permissions** page opens in the browser.
- ![Screenshot showing the Review permissions page.](media/quickstart-add-users-portal/consent-screen.png)
+ :::image type="content" source="media/quickstart-add-users-portal/consent-screen.png" alt-text="Screenshot showing the Review permissions page.":::
1. Select **Accept**.
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
The following table shows the scheduling (trigger) relevant attributes and the m
|Attribute|Type|Supported in HR Inbound Provisioning|Support in Azure AD Connect Cloud Sync|Support in Azure AD Connect Sync| |--|--|--|--|--| |employeeHireDate|DateTimeOffset|Yes|Yes|Yes|
-|employeeLeaveDateTime|DateTimeOffset|Yes|Not currently|Not currently|
+|employeeLeaveDateTime|DateTimeOffset|Yes|Yes|Not currently|
> [!NOTE]
-> To take advantaged of leaver scenarios, you can set the employeeLeaveDateTime manually for cloud-only users. For more information, see: [Configure the employeeLeaveDateTime property for a user](/graph/tutorial-lifecycle-workflows-set-employeeleavedatetime)
+> Manually setting the employeeLeaveDateTime for cloud-only users requires special permissions. For more information, see: [Configure the employeeLeaveDateTime property for a user](/graph/tutorial-lifecycle-workflows-set-employeeleavedatetime)
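For illustration, manually stamping the attribute on a cloud-only user might look like the sketch below; the beta endpoint, the sample date, and the user object ID placeholder are assumptions to check against the linked tutorial, which also covers the permissions required.
```http
PATCH https://graph.microsoft.com/beta/users/<user-object-id>
Content-Type: application/json

{
    "employeeLeaveDateTime": "2023-01-31T23:59:59Z"
}
```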
This document explains how to set up synchronization from on-premises Azure AD Connect cloud sync and Azure AD Connect for the required attributes.
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
For Microsoft Graph the parameters for the **Send welcome email to new hire** ta
```Example for usage within the workflow { "category": "joiner",
+ "continueOnError": true,
"description": "Send welcome email to new hire", "displayName": "Send Welcome Email", "isEnabled": true,
- "continueOnError": true,
"taskDefinitionId": "70b29d51-b59a-4773-9280-8841dfd3f2ea", "arguments": [] }
For Microsoft Graph the parameters for the **Remove user from selected groups**
```Example for usage within the workflow { "category": "leaver",
- "continueOnError": true,
"displayName": "Remove user from selected groups", "description": "Remove user from membership of selected Azure AD groups", "isEnabled": true,
For Microsoft Graph the parameters for the **Remove users from all groups** task
"displayName": "Remove user from all groups", "description": "Remove user from all Azure AD groups memberships", "isEnabled": true,
- "continueOnError": true,
"taskDefinitionId": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc", "arguments": [] }
For Microsoft Graph the parameters for the **Remove User from Teams** task are a
"displayName": "Remove user from selected Teams", "description": "Remove user from membership of selected Teams", "isEnabled": true,
- "continueOnError": true,
"taskDefinitionId": "06aa7acb-01af-4824-8899-b14e5ed788d6", "arguments": [ {
For Microsoft Graph the parameters for the **Remove users from all teams** task
"description": "Remove user from all Teams", "displayName": "Remove user from all Teams memberships", "isEnabled": true,
- "continueOnError": true,
"taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024", "arguments": [] }
For Microsoft Graph the parameters for the **Remove all license assignment from
"displayName": "Remove all licenses for user", "description": "Remove all licenses assigned to the user", "isEnabled": true,
- "continueOnError": true,
"taskDefinitionId": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e", "arguments": [] }
For Microsoft Graph the parameters for the **Delete User** task are as follows:
"displayName": "Delete user account", "description": "Delete user account in Azure AD", "isEnabled": true,
- "continueOnError": true,
"taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff", "arguments": [] }
For Microsoft Graph the parameters for the **Send email before user last day** t
"displayName": "Send email before userΓÇÖs last day", "description": "Send offboarding email to userΓÇÖs manager before the last day of work", "isEnabled": true,
- "continueOnError": true,
"taskDefinitionId": "52853a3e-f4e5-4eb8-bb24-1ac09a1da935", "arguments": [] }
For Microsoft Graph the parameters for the **Send email on user last day** task
"displayName": "Send email on userΓÇÖs last day", "description": "Send offboarding email to userΓÇÖs manager on the last day of work", "isEnabled": true,
- "continueOnError": true,
"taskDefinitionId": "9c0a1eaf-5bda-4392-9d9e-6e155bb57411", "arguments": [] }
For Microsoft Graph the parameters for the **Send offboarding email to users man
"displayName": "Send offboarding email to userΓÇÖs manager after the last day of work", "description": "Send email after userΓÇÖs last day", "isEnabled": true,
- "continueOnError": true,
"taskDefinitionId": "6f22ddd4-b3a5-47a4-a846-0d7c201a49ce", "arguments": [] }
active-directory How To Connect Fed O365 Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-o365-certs.md
Update Microsoft 365 with the new token signing certificates to be used for the
> [!NOTE] > If you need to support multiple top-level domains, such as contoso.com and fabrikam.com, you must use the **SupportMultipleDomain** switch with any cmdlets. For more information, see [Support for Multiple Top Level Domains](how-to-connect-install-multiple-domains.md). >-
+> If your tenant is federated with more than one domain, `Update-MsolFederatedDomain` needs to be run for all of the domains listed in the output from `Get-MsolDomain -Authentication Federated`. This ensures that all of the federated domains are updated with the new token signing certificate.
+>You can achieve this by running:
+>`Get-MsolDomain -Authentication Federated | % { Update-MsolFederatedDomain -DomainName $_.Name -SupportMultipleDomain }`
## Repair Azure AD trust by using Azure AD Connect <a name="connectrenew"></a> If you configured your AD FS farm and Azure AD trust by using Azure AD Connect, you can use Azure AD Connect to detect if you need to take any action for your token signing certificates. If you need to renew the certificates, you can use Azure AD Connect to do so.
active-directory Overview Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-recommendations.md
Title: What is Azure Active Directory recommendations (preview)? | Microsoft Docs description: Provides a general overview of Azure Active Directory recommendations. Previously updated : 08/22/2022 Last updated : 10/13/2022 # Customer intent: As an Azure AD administrator, I want guidance so that I can keep my Azure AD tenant in a healthy state. # What is Azure Active Directory recommendations (preview)? This feature is supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Ideally, you want your Azure Active Directory (Azure AD) tenant to be in a secure and healthy state. However, trying to keep your knowledge regarding the management of the various components in your tenant up to date can become overwhelming.
-
-This is where Azure AD recommendations can help you.
+Keeping track of all the settings and resources in your tenant can be overwhelming. The Azure AD recommendations (preview) feature helps monitor the status of your tenant so you don't have to. Azure AD recommendations helps ensure your tenant is in a secure and healthy state while also helping you maximize the value of the features available in Azure AD.
The Azure AD recommendations feature provides you personalized insights with actionable guidance to: - Help you identify opportunities to implement best practices for Azure AD-related features. - Improve the state of your Azure AD tenant.
+- Optimize the configurations for your scenarios.
-This article gives you an overview of how you can use Azure AD recommendations.
--
+This article gives you an overview of how you can use Azure AD recommendations. As an administrator, you should review your tenant's recommendations, and their associated resources periodically.
## What it is
-The [Azure Advisor](../../advisor/advisor-overview.md) is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. It analyzes your resource configuration and usage telemetry and then recommends solutions that can help you improve the cost effectiveness, performance, Reliability (formerly called High availability), and security of your Azure resources.
+Azure AD recommendations is the Azure AD specific implementation of [Azure Advisor](../../advisor/advisor-overview.md), which is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. Azure Advisor analyzes your resource configuration and usage telemetry to recommend solutions that can help you improve the cost effectiveness, performance, reliability, and security of your Azure resources.
-Azure AD recommendations:
--- Is the Azure AD specific implementation of Azure Advisor. -- Supports you with the roll-out and management of Microsoft's best practices for Azure AD tenants to keep your tenant in a secure and healthy state.
+*Azure AD recommendations* uses similar data to support you with the roll-out and management of Microsoft's best practices for Azure AD tenants to keep your tenant in a secure and healthy state. Azure AD recommendations provide a holistic view into your tenant's security, health, and usage.
-## Recommendation object
-
-Azure AD tracks the status of a recommendation in a related object. This object includes attributes that are used to characterize the recommendation and a body to store the actionable guidance.
--
-Each object is characterized by:
--- **Title** - A short summary of what the recommendation is about.--- **Priority** - Possible values are: low, medium, high--- **Status** - Possible values are: Active, Dismissed, Postponed, CompletedByUser, CompletedBySystem.-
- - A recommendation is marked as CompletedByUser if you mark the recommendation as complete.
-
- - A recommendation is marked as CompletedBySystem if a recommendation that did once apply is no longer applicable to you because you have taken the necessary steps.
-
--- **Impacted Resources** - A definition of the scope of a recommendation. Possible values are either a list of the impacted resources or **Tenant level**. --- **Updated at** - The timestamp of the last status update.--
-![Reporting](./media/overview-recommendations/recommendations-object.png)
---
-The body of a recommendation object contains the actionable guidance:
--- **Description** - An explanation of what it is that Azure AD has detected and related background information.--- **Value** - An explanation of why completing the recommendation will benefit you, and the value of the associated feature. --- **Action Plan** - Detailed instructions to step-by-step implement a recommendation.--- ## How it works
-On a daily basis, Azure AD analyzes the configuration of your tenant. During an analysis, Azure AD compares the data of the known recommendations with the actual configuration. If a recommendation is flagged as applicable to your tenant, the recommendation status and its corresponding resources are marked as active.
--
-In the recommendations or resource list, you can use the **Status** information to determine your action item.
-
-As an administrator, you should review your tenant's recommendations, and their associated resources periodically.
--- **Dismiss**--- **Mark complete** --- **Postpone**--- **Reactivate**--
-### Dismiss
-
-If you don't like a recommendation, or if you have another reason for not applying it, you can dismiss it. In this case, Azure AD asks you for a reason for dismissing a recommendation.
-
-![Help us provide better recommendations](./media/overview-recommendations/provide-better-recommendations.png)
-
+On a daily basis, Azure AD analyzes the configuration of your tenant. During this analysis, Azure AD compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the **Recommendations** section of the Azure AD Overview area. Recommendations are listed in order of priority so you can quickly determine where to focus first.
-### Mark as complete
+Recommendations contain a description, a summary of the value of addressing the recommendation, and a step-by-step action plan. If applicable, impacted resources that are associated with the recommendation are listed, so you can resolve each affected area. If a recommendation doesn't have any associated resources, the impacted resource type is *Tenant level*, so your step-by-step action plan impacts the entire tenant and not just a specific resource.
-Use this state to indicate that you have:
+![Screenshot of the Overview page of the tenant with the Recommendations option highlighted.](./media/overview-recommendations/recommendations-preview-option-tenant-overview.png)
-- Completed the recommendation.-- Taken action for an individual resource.
+## Recommendation details
-A recommendation or resource that has been marked as complete is again evaluated when Azure AD compares the available recommendations with your current configuration.
+Each recommendation provides the same set of details that explain what the recommendation is, why it's important, and how to fix it.
+The **Status** of a recommendation can be updated manually or automatically. If all resources are addressed according to the action plan, the status will automatically change to *Completed* the next time the recommendations service runs. The recommendation service runs every 24-48 hours, depending on the recommendation.
-### Postpone
+![Screenshot of the Mark as options.](./media/overview-recommendations/recommendations-object.png)
-Postpone a recommendation or resource to address it in the future. The recommendation or resource will be marked as Active again when the date that the recommendation or resource is postponed to occurs.
+The **Priority** of a recommendation could be low, medium, or high. These values are determined by several factors, such as security implications, health concerns, or potential breaking changes.
-### Reactivate
-Accidentally dismissed, completed, or postponed a recommendation or resource. Mark it as active again to keep it top of mind.
+![Screenshot of a recommendation's status, priority, and impacted resource type.](./media/overview-recommendations/recommendation-status-risk.png)
+- **High**: Must do. Not acting will result in severe security implications or potential downtime.
+- **Medium**: Should do. No severe risk if action isn't taken.
+- **Low**: Might do. No security risks or health concerns if action isn't taken.
-## Common tasks
+The **Impacted resources** for a recommendation could be things like applications or users. This detail gives you an idea of what type of resources you'll need to address. The impacted resource could also be at the tenant level, so you may need to make a global change.
-### Enable recommendations
+The **Status description** tells you the date the recommendation status changed and if it was changed by the system or a user.
-To enable your Azure AD recommendations:
+The recommendation's **Value** is an explanation of why completing the recommendation will benefit you, and the value of the associated feature.
-1. Navigate to the **[Preview features](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/PreviewHub)** page.
-2. Set the **State** to **On**.
+The **Action plan** provides step-by-step instructions to implement a recommendation. The action plan may include links to relevant documentation or direct you to other pages in the Azure AD portal.
- ![Enable Azure AD recommendations](./media/overview-recommendations/enable-azure-ad-recommendations.png)
---
-### Manage recommendations
-
-To manage your Azure AD recommendations:
-
-1. Navigate to the [Azure AD overview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page.
-
-2. On the Azure AD overview page, in the toolbar, click **Recommendations (Preview)**.
-
- ![Manage Azure AD recommendations](./media/overview-recommendations/manage-azure-ad-recommendations.png)
---
-### Update the status of a resource
-
-To update the status of a resource, you have to right click a resource to bring up the edit menu.
--
-## Who can access it?
+## What you should know
-The Azure AD recommendations feature supports all editions of Azure AD. In other words, there is no specific subscription or license required to use this feature.
+The following roles provide *read-only* access to recommendations:
-To (re-) view your recommendations, you need to be:
+- Reports Reader
+- Security Reader
+- Global Reader
-- Global reader
+The following roles provide *update and read-only* access to recommendations:
-- Security reader
+- Global Administrator
+- Security Administrator
+- Security Operator
+- Cloud apps Administrator
+- Apps Administrator
-- Reports reader
+Any role can enable the Azure AD recommendations preview, but you'll need one of the roles listed above to view or update recommendations. Azure AD only displays the recommendations that apply to your tenant, so you may not see all supported recommendations listed.
+Some recommendations have a list of impacted resources associated. This list of resources gives you more context on how the recommendation applies to you and/or which resources you need to address. Actions taken on a recommendation are collected in the audit log; currently, the only action recorded is completing a recommendation. To view these logs, go to **Azure AD** > **Audit logs** and filter the service to "Azure AD recommendations."
-To manage your recommendations, you need to be:
+The table below provides the impacted resources and links to the available documentation.
-- Global admin
+| Recommendation | Impacted resources |
+|- |- |
+| [Convert per-user MFA to Conditional Access MFA](recommendation-turn-off-per-user-mfa.md) | Users |
+| [Integrate 3rd party applications](recommendation-integrate-third-party-apps.md) | Tenant level |
+| [Migrate applications from AD FS to Azure AD](recommendation-migrate-apps-from-adfs-to-azure-ad.md) | Users |
+| [Migrate to Microsoft Authenticator](recommendation-migrate-to-authenticator.md) | Users |
+| [Minimize MFA prompts from known devices](recommendation-migrate-apps-from-adfs-to-azure-ad.md) | Users |
-- Security admin
+## How to access Azure AD recommendations (preview)
-- Security operator
+To enable the Azure AD recommendations preview:
-- Cloud app admin
+1. Sign in to the [Azure portal](https://portal.azure.com/).
-- App admin
+1. Go to **Azure AD** > **Preview features** and enable **Azure AD recommendations.**
+ - Recommendations may take a few minutes to sync.
+ - While anyone can enable the preview feature, you'll need a [specific role](overview-recommendations.md#what-you-should-know) to view or update a recommendation.
+ ![Screenshot of the Enable Azure AD recommendations option](./media/overview-recommendations/enable-azure-ad-recommendations.png)
+After the preview is enabled, you can view the available recommendations from the Azure AD administration portal. The Azure AD recommendations feature appears on the **Overview** page of your tenant.
+## How to use Azure AD recommendations (preview)
-## What you should know
+1. Go to **Azure AD** > **Recommendations**.
-- On the recommendations page, you might not see all supported recommendations. This is because Azure AD only displays the recommendations that apply to your tenant.
+1. Select a recommendation from the list to view the details, status, and action plan.
-- Some recommendations have a list of impacted resources associated. This list of resources gives you more context on how the recommendation applies to you and/or which resources you need to address.
+ ![Screenshot of the list of recommendations.](./media/overview-recommendations/recommendations-list.png)
-**Right now:**
+1. Follow the **Action plan**.
-- You can update the status of a recommendation with a read only roles (global reader, security reader, reports reader). This is a known issue that will be fixed.
+1. If applicable, right-click on a resource in a recommendation, select **Mark as**, then select a status.
-- The only action recorded in the audit log is completing recommendations.
+ ![Screenshot of the status options for a resource.](./media/overview-recommendations/resource-mark-as-option.png)
-- Audit logs do not capture actions taken by reader roles.
+1. If you need to manually change the status of a recommendation, select **Mark as** from the top of the page and select a status.
+ - Mark a recommendation as **Completed** if all impacted resources have been addressed.
+ - Active resources may still appear in the list of resources for manually completed recommendations. If the resource is completed, the service will update the status the next time the service runs.
+ - If the service identifies an active resource for a manually completed recommendation the next time the service runs, the recommendation will automatically change back to **Active**.
+ - Mark a recommendation as **Dismissed** if you think the recommendation is irrelevant or the data is wrong.
+ - Azure AD will ask for a reason why you dismissed the recommendation so we can improve the service.
+ - Mark a recommendation as **Postponed** if you want to address the recommendation at a later time.
+ - The recommendation will become **Active** when the selected date occurs.
+ - You can reactivate a completed or postponed recommendation to keep it top of mind and reassess the resources.
+Continue to monitor the recommendations in your tenant for changes.
## Next steps
active-directory Linkedinsalesnavigator Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinsalesnavigator-provisioning-tutorial.md
The first step is to retrieve your LinkedIn access token. If you are an Enterpri
4. Click **+ Add new SCIM configuration** and follow the procedure by filling in each field. > [!NOTE]
- > When auto­assign licenses is not enabled, it means that only user data is synced.
+ > When auto-assign licenses is not enabled, it means that only user data is synced.
![Screenshot shows the LinkedIn Account Center Global Settings.](./media/linkedinsalesnavigator-provisioning-tutorial/linkedin_1.PNG) > [!NOTE]
- > When auto­license assignment is enabled, you need to note the application instance and license type. Licenses are assigned on a first come, first serve basis until all the licenses are taken.
+ > When auto-license assignment is enabled, you need to note the application instance and license type. Licenses are assigned on a first come, first serve basis until all the licenses are taken.
![Screenshot shows the S C I M Setup page.](./media/linkedinsalesnavigator-provisioning-tutorial/linkedin_2.PNG)
The first step is to retrieve your LinkedIn access token. If you are an Enterpri
* In the **Secret Token** field, enter the access token you generated in step 1 and click **Test Connection** .
- * You should see a success notification on the upper­right side of
+ * You should see a success notification on the upper-right side of
your portal. 12. Enter the email address of a person or group who should receive provisioning error notifications in the **Notification Email** field, and check the checkbox below.
aks Use Wasi Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-wasi-node-pools.md
az extension update --name aks-preview
### Limitations
-* Currently, there are only containerd shims available for [spin][spin] and [slight][slight] applications, which use the [wasmtime][wasmtime] runtime. In addition to wasmtime runtime applications, you can also run containers on WASI/WASM node pools.
+* Currently, there are only containerd shims available for [spin][spin] and [slight][slight] applications, which use the [wasmtime][wasmtime] runtime. In addition to wasmtime runtime applications, you can also run containers on WASM/WASI node pools.
* You can run containers and wasm modules on the same node, but you can't run containers and wasm modules on the same pod. * The WASM/WASI node pools can't be used for system node pool. * The *os-type* for WASM/WASI node pools must be Linux.
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
This policy can be used in the following policy [sections](./api-management-howt
## <a name="ValidateJWT"></a> Validate JWT
-The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from a specified HTTP header, extracted from a specified query parameter, or matching a specific value. The JSON Web Key Set (JWKS) is cached and is not fetched on each request. Automatic metadata refresh occurs once per hour. If retrieval fails, it will be refreshed in five minutes.
+The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from a specified HTTP header, extracted from a specified query parameter, or matching a specific value.
> [!IMPORTANT] > The `validate-jwt` policy requires that the `exp` registered claim is included in the JWT token, unless `require-expiration-time` attribute is specified and set to `false`.
This example shows how to use the [Validate JWT](api-management-access-restricti
| issuer-signing-keys | A list of Base64-encoded security keys used to validate signed tokens. If multiple security keys are present, then each key is tried until either all are exhausted (in which case validation fails) or one succeeds (useful for token rollover). Key elements have an optional `id` attribute used to match against `kid` claim. <br/><br/>Alternatively supply an issuer signing key using:<br/><br/> - `certificate-id` in format `<key certificate-id="mycertificate" />` to specify the identifier of a certificate entity [uploaded](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-certificate-entity#Add) to API Management<br/>- RSA modulus `n` and exponent `e` pair in format `<key n="<modulus>" e="<exponent>" />` to specify the RSA parameters in base64url-encoded format | No | | decryption-keys | A list of Base64-encoded keys used to decrypt the tokens. If multiple security keys are present, then each key is tried until either all keys are exhausted (in which case validation fails) or a key succeeds. Key elements have an optional `id` attribute used to match against `kid` claim.<br/><br/>Alternatively supply a decryption key using:<br/><br/> - `certificate-id` in format `<key certificate-id="mycertificate" />` to specify the identifier of a certificate entity [uploaded](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-certificate-entity#Add) to API Management | No | | issuers | A list of acceptable principals that issued the token. If multiple issuer values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. | No |
-| openid-config | The element used for specifying a compliant Open ID configuration endpoint from which signing keys and issuer can be obtained. | No |
+| openid-config | Add one or more of these elements to specify a compliant OpenID configuration endpoint from which signing keys and issuer can be obtained.<br/><br/>Configuration including the JSON Web Key Set (JWKS) is pulled from the endpoint every 1 hour and cached. If the token being validated references a validation key (using `kid` claim) that is missing in cached configuration, or if retrieval fails, API Management pulls from the endpoint at most once per 5 min. These intervals are subject to change without notice. | No |
| required-claims | Contains a list of claims expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all` every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any` at least one claim must be present in the token for validation to succeed. | No | ### Attributes
This example shows how to use the [Validate JWT](api-management-access-restricti
| require-scheme | The name of the token scheme, e.g. "Bearer". When this attribute is set, the policy will ensure that specified scheme is present in the Authorization header value. | No | N/A | | require-signed-tokens | Boolean. Specifies whether a token is required to be signed. | No | true | | separator | String. Specifies a separator (e.g. ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
-| url | Open ID configuration endpoint URL from where Open ID configuration metadata can be obtained. The response should be according to specs as defined at URL:`https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata`. For Azure Active Directory use the following URL: `https://login.microsoftonline.com/{tenant-name}/.well-known/openid-configuration` substituting your directory tenant name, e.g. `contoso.onmicrosoft.com`. | Yes | N/A |
+| url | OpenID configuration endpoint URL from which OpenID configuration metadata can be obtained. The response should conform to the specs defined at `https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata`. For Azure Active Directory, use the following URL: `https://login.microsoftonline.com/{tenant-name}/.well-known/openid-configuration`, substituting your directory tenant name, e.g. `contoso.onmicrosoft.com`. | Yes | N/A |
| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A | ### Usage
api-management How To Deploy Self Hosted Gateway Kubernetes Opentelemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md
Now that we have the chart repository configured, we can deploy the OpenTelemetr
protocol: TCP ```
-This allows us to use a standalone collector with the Prometheus exporter being exposed on port `8889`. To expose the Prometheus metrics, we are asking the Helm chart to configure a ┬┤LoadBalancer` service.
+This allows us to use a standalone collector with the Prometheus exporter being exposed on port `8889`. To expose the Prometheus metrics, we are asking the Helm chart to configure a `LoadBalancer` service.
> [!NOTE] > We are disabling the compact Jaeger port given it uses UDP and `LoadBalancer` service does not allow you to have multiple protocols at the same time.
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
Without a valid access token, a self-hosted gateway can't access and download co
When you're automating token refresh, use [this management API operation](/rest/api/apimanagement/current-ga/gateway/generate-token) to generate a new token. For information on managing Kubernetes secrets, see the [Kubernetes website](https://kubernetes.io/docs/concepts/configuration/secret).
-## Namespace
-Kubernetes [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) help with dividing a single cluster among multiple teams, projects, or applications. Namespaces provide a scope for resources and names. They can be associated with a resource quota and access control policies.
-
-The Azure portal provides commands to create self-hosted gateway resources in the **default** namespace. This namespace is automatically created, exists in every cluster, and can't be deleted.
-Consider [creating and deploying](https://www.kubernetesbyexample.com/) a self-hosted gateway into a separate namespace in production.
-
-## Number of replicas
-The minimum number of replicas suitable for production is three, preferably combined with [high-available scheduling of the instances](#high-availability).
-
-By default, a self-hosted gateway is deployed with a **RollingUpdate** deployment [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy). Review the default values and consider explicitly setting the [maxUnavailable](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable) and [maxSurge](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge) fields, especially when you're using a high replica count.
- ## Autoscaling While we provide [guidance on the minimum number of replicas](#number-of-replicas) for the self-hosted gateway, we recommend that you use autoscaling for the self-hosted gateway to meet the demand of your traffic more proactively.
Kubernetes Event-driven Autoscaling (KEDA) provides a few ways that can help wit
- You can scale based on metrics from a Kubernetes ingress if they're available in [Prometheus](https://keda.sh/docs/latest/scalers/prometheus/) or [Azure Monitor](https://keda.sh/docs/latest/scalers/azure-monitor/) by using an out-of-the-box scaler - You can install [HTTP add-on](https://github.com/kedacore/http-add-on), which is available in beta, and scales based on the number of requests per second.
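As an illustration, a KEDA `ScaledObject` can drive the gateway deployment from a Prometheus metric. The sketch below is a minimal example only; the deployment name, namespace, Prometheus address, query, and threshold are assumptions for your own environment, not values prescribed by this article.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: apim-gateway-scaler          # hypothetical name
  namespace: apim-gateway            # hypothetical namespace
spec:
  scaleTargetRef:
    name: apim-gateway               # hypothetical self-hosted gateway Deployment name
  minReplicaCount: 3                 # keep the recommended production minimum
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090    # assumed Prometheus endpoint
        metricName: requests_per_second                         # illustrative metric name
        query: sum(rate(nginx_ingress_controller_requests[2m])) # example request-rate query
        threshold: "100"
```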
-## Container resources
-By default, the YAML file provided in the Azure portal doesn't specify container resource requests.
-
-It's impossible to reliably predict and recommend the amount of per-container CPU and memory resources and the number of replicas required for supporting a specific workload. Many factors are at play, such as:
+## Configuration backup
-- Specific hardware that the cluster is running on.-- Presence and type of virtualization.-- Number and rate of concurrent client connections.-- Request rate.-- Kind and number of configured policies.-- Payload size and whether payloads are buffered or streamed.-- Backend service latency.
+Configure a local storage volume for the self-hosted gateway container, so it can persist a backup copy of the latest downloaded configuration. If connectivity is down, the storage volume can use the backup copy upon restart. The volume mount path must be `/apim/config` and must be owned by group ID `1001`. See an example on [GitHub](https://github.com/Azure/api-management-self-hosted-gateway/blob/master/examples/self-hosted-gateway-with-configuration-backup.yaml).
+To learn about storage in Kubernetes, see the [Kubernetes website](https://kubernetes.io/docs/concepts/storage/volumes/).
+To change ownership for a mounted path, see the `securityContext.fsGroup` setting on the [Kubernetes website](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod).
-We recommend setting resource requests to two cores and 2 GiB as a starting point. Perform a load test and scale up/out or down/in based on the results.
+> [!NOTE]
+> To learn about self-hosted gateway behavior in the presence of a temporary Azure connectivity outage, see [Self-hosted gateway overview](self-hosted-gateway-overview.md#connectivity-to-azure).
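As a rough illustration, the fragment below shows the pieces involved: a volume mounted at `/apim/config` and `fsGroup` set to `1001`. It's a minimal sketch that assumes a hypothetical persistent volume claim; refer to the linked GitHub example for the complete, supported manifest.

```yaml
spec:
  template:
    spec:
      securityContext:
        fsGroup: 1001                        # mounted path must be owned by group ID 1001
      containers:
        - name: apim-gateway                 # illustrative container name
          volumeMounts:
            - name: config-backup
              mountPath: /apim/config        # required mount path for the configuration backup
      volumes:
        - name: config-backup
          persistentVolumeClaim:
            claimName: apim-gateway-config   # hypothetical PVC providing local storage
```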
## Container image tag The YAML file provided in the Azure portal uses the **latest** tag. This tag always references the most recent version of the self-hosted gateway container image.
You can [download a full list of available tags](https://mcr.microsoft.com/v2/az
> > Learn more on how to [install an API Management self-hosted gateway on Kubernetes with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md).
-## DNS policy
-DNS name resolution plays a critical role in a self-hosted gateway's ability to connect to dependencies in Azure and dispatch API calls to backend services.
+## Container resources
+By default, the YAML file provided in the Azure portal doesn't specify container resource requests.
-The YAML file provided in the Azure portal applies the default [ClusterFirst](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) policy. This policy causes name resolution requests not resolved by the cluster DNS to be forwarded to the upstream DNS server that's inherited from the node.
+It's impossible to reliably predict and recommend the amount of per-container CPU and memory resources and the number of replicas required for supporting a specific workload. Many factors are at play, such as:
-To learn about name resolution in Kubernetes, see the [Kubernetes website](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service). Consider customizing [DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) or [DNS configuration](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config) as appropriate for your setup.
+- Specific hardware that the cluster is running on.
+- Presence and type of virtualization.
+- Number and rate of concurrent client connections.
+- Request rate.
+- Kind and number of configured policies.
+- Payload size and whether payloads are buffered or streamed.
+- Backend service latency.
-## External traffic policy
-The YAML file provided in the Azure portal sets `externalTrafficPolicy` field on the [Service](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#service-v1-core) object to `Local`. This preserves caller IP address (accessible in the [request context](api-management-policy-expressions.md#ContextVariables)) and disables cross node load balancing, eliminating network hops caused by it. Be aware, that this setting might cause asymmetric distribution of traffic in deployments with unequal number of gateway pods per node.
+We recommend setting resource requests to two cores and 2 GiB as a starting point. Perform a load test and scale up/out or down/in based on the results.
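For illustration, a container resources fragment matching that starting point might look like the sketch below; the container name is illustrative. You may also want to add `limits` once load testing shows stable usage.

```yaml
containers:
  - name: apim-gateway          # illustrative container name
    resources:
      requests:
        cpu: "2"                # two cores as a starting point
        memory: 2Gi             # 2 GiB as a starting point
```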
## Custom domain names and SSL certificates
In this scenario, if the SSL certificate that's used by the Management endpoint
> [!NOTE] > With the self-hosted gateway v2, API Management provides a new configuration endpoint: `<apim-service-name>.configuration.azure-api.net`. Currently, API Management doesn't enable configuring a custom domain name for the v2 configuration endpoint. If you need custom hostname mapping for this endpoint, you may be able to configure an override in the container's local hosts file, for example, using a [`hostAliases`](https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/#adding-additional-entries-with-hostaliases) element in a Kubernetes container spec.
-## Configuration backup
-
-Configure a local storage volume for the self-hosted gateway container, so it can persist a backup copy of the latest downloaded configuration. If connectivity is down, the storage volume can use the backup copy upon restart. The volume mount path must be `/apim/config` and must be owned by group ID `1001`. See an example on [GitHub](https://github.com/Azure/api-management-self-hosted-gateway/blob/master/examples/self-hosted-gateway-with-configuration-backup.yaml).
-To learn about storage in Kubernetes, see the [Kubernetes website](https://kubernetes.io/docs/concepts/storage/volumes/).
-To change ownership for a mounted path, see the `securityContext.fsGroup` setting on the [Kubernetes website](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod).
-
-> [!NOTE]
-> To learn about self-hosted gateway behavior in the presence of a temporary Azure connectivity outage, see [Self-hosted gateway overview](self-hosted-gateway-overview.md#connectivity-to-azure).
-
-## Local logs and metrics
-The self-hosted gateway sends telemetry to [Azure Monitor](api-management-howto-use-azure-monitor.md) and [Azure Application Insights](api-management-howto-app-insights.md) according to configuration settings in the associated API Management service.
-When [connectivity to Azure](self-hosted-gateway-overview.md#connectivity-to-azure) is temporarily lost, the flow of telemetry to Azure is interrupted and the data is lost for the duration of the outage.
-Consider [setting up local monitoring](how-to-configure-local-metrics-logs.md) to ensure the ability to observe API traffic and prevent telemetry loss during Azure connectivity outages.
-
-## HTTP(S) proxy
-
-The self-hosted gateway provides support for HTTP(S) proxy by using the traditional `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` environment variables.
-
-Once configured, the self-hosted gateway will automatically use the proxy for all outbound HTTP(S) requests to the backend services.
+## DNS policy
+DNS name resolution plays a critical role in a self-hosted gateway's ability to connect to dependencies in Azure and dispatch API calls to backend services.
-Starting with version 2.1.5 or above, the self-hosted gateway provides observability related to request proxying:
+The YAML file provided in the Azure portal applies the default [ClusterFirst](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) policy. This policy causes name resolution requests not resolved by the cluster DNS to be forwarded to the upstream DNS server that's inherited from the node.
-- [API Inspector](api-management-howto-api-inspector.md) will show additional steps when HTTP(S) proxy is being used and its related interactions.-- Verbose logs are provided to provide indication of the request proxy behavior.
+To learn about name resolution in Kubernetes, see the [Kubernetes website](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service). Consider customizing [DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) or [DNS configuration](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config) as appropriate for your setup.
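As one possible customization, the pod spec fragment below overrides the DNS policy entirely; the name server and search domain are placeholders for your own environment, not recommended values.

```yaml
spec:
  template:
    spec:
      dnsPolicy: "None"              # replaces the default ClusterFirst behavior
      dnsConfig:
        nameservers:
          - 10.0.0.10                # placeholder DNS server reachable from the cluster
        searches:
          - contoso.internal         # placeholder search domain
        options:
          - name: ndots
            value: "2"
```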
-> [!Warning]
-> Ensure that the [infrastructure requirements](self-hosted-gateway-overview.md#fqdn-dependencies) have been met and that the self-hosted gateway can still connect to them or certain functionality will not work properly.
+## External traffic policy
+The YAML file provided in the Azure portal sets the `externalTrafficPolicy` field on the [Service](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#service-v1-core) object to `Local`. This preserves the caller IP address (accessible in the [request context](api-management-policy-expressions.md#ContextVariables)) and disables cross-node load balancing, eliminating the network hops it would otherwise cause. Be aware that this setting might cause asymmetric distribution of traffic in deployments with an unequal number of gateway pods per node.
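For reference, a Service using this setting would look similar to the sketch below; the name, selector, and ports are illustrative rather than the exact values from the portal-provided YAML.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: apim-gateway               # illustrative name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserve the caller IP and avoid cross-node hops
  selector:
    app: apim-gateway              # illustrative label selector
  ports:
    - name: https
      port: 443
      targetPort: 8081             # assumed HTTPS port of the gateway container
```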
## High availability The self-hosted gateway is a crucial component in the infrastructure and has to be highly available. However, failure will and can happen.
Pods can experience disruption due to [various](https://kubernetes.io/docs/conce
Consider using [Pod Disruption Budgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets) to enforce a minimum number of pods to be available at any given time.
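A minimal Pod Disruption Budget for a three-replica gateway could look like the sketch below; the names and labels are assumptions.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: apim-gateway-pdb           # hypothetical name
spec:
  minAvailable: 2                  # keep at least two pods during voluntary disruptions
  selector:
    matchLabels:
      app: apim-gateway            # illustrative label selector
```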
+## HTTP(S) proxy
+
+The self-hosted gateway provides support for HTTP(S) proxy by using the traditional `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` environment variables.
+
+Once configured, the self-hosted gateway will automatically use the proxy for all outbound HTTP(S) requests to the backend services.
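For illustration, the container fragment below sets those variables; the proxy URL and bypass list are placeholders for your own network.

```yaml
containers:
  - name: apim-gateway                                     # illustrative container name
    env:
      - name: HTTP_PROXY
        value: "http://proxy.contoso.internal:3128"        # placeholder proxy URL
      - name: HTTPS_PROXY
        value: "http://proxy.contoso.internal:3128"        # placeholder proxy URL
      - name: NO_PROXY
        value: "localhost,127.0.0.1,.svc,.cluster.local"   # placeholder bypass list
```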
+
+Starting with version 2.1.5 or above, the self-hosted gateway provides observability related to request proxying:
+
+- [API Inspector](api-management-howto-api-inspector.md) will show additional steps when HTTP(S) proxy is being used and its related interactions.
+- Verbose logs are provided to indicate the request proxy behavior.
+
+> [!Warning]
+> Ensure that the [infrastructure requirements](self-hosted-gateway-overview.md#fqdn-dependencies) have been met and that the self-hosted gateway can still connect to those dependencies, or certain functionality won't work properly.
+
+## Local logs and metrics
+The self-hosted gateway sends telemetry to [Azure Monitor](api-management-howto-use-azure-monitor.md) and [Azure Application Insights](api-management-howto-app-insights.md) according to configuration settings in the associated API Management service.
+When [connectivity to Azure](self-hosted-gateway-overview.md#connectivity-to-azure) is temporarily lost, the flow of telemetry to Azure is interrupted and the data is lost for the duration of the outage.
+
+Consider [setting up local monitoring](how-to-configure-local-metrics-logs.md) to ensure the ability to observe API traffic and prevent telemetry loss during Azure connectivity outages.
+
+## Namespace
+Kubernetes [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) help with dividing a single cluster among multiple teams, projects, or applications. Namespaces provide a scope for resources and names. They can be associated with a resource quota and access control policies.
+
+The Azure portal provides commands to create self-hosted gateway resources in the **default** namespace. This namespace is automatically created, exists in every cluster, and can't be deleted.
+Consider [creating and deploying](https://www.kubernetesbyexample.com/) a self-hosted gateway into a separate namespace in production.
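A dedicated namespace can be created with a manifest as simple as the sketch below (the name is illustrative); deploy the gateway resources into it instead of **default**.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: apim-gateway               # illustrative namespace name
```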
+
+## Number of replicas
+The minimum number of replicas suitable for production is three, preferably combined with [high-available scheduling of the instances](#high-availability).
+
+By default, a self-hosted gateway is deployed with a **RollingUpdate** deployment [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy). Review the default values and consider explicitly setting the [maxUnavailable](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable) and [maxSurge](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge) fields, especially when you're using a high replica count.
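As a sketch, a Deployment fragment that makes the replica count and rolling update settings explicit might look like the following; the specific values are a reasonable starting point, not a prescription.

```yaml
spec:
  replicas: 3                      # production minimum discussed above
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1            # at most one replica taken down during an update
      maxSurge: 1                  # at most one extra replica created during an update
```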
+ ## Security The self-hosted gateway is able to run as non-root in Kubernetes, allowing customers to run the gateway securely.
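A container security context along the lines of the sketch below enforces non-root execution; the user ID shown is an assumption, not a documented requirement of the gateway image.

```yaml
containers:
  - name: apim-gateway                    # illustrative container name
    securityContext:
      runAsNonRoot: true                  # refuse to start if the image would run as root
      runAsUser: 1001                     # assumed non-root UID
      allowPrivilegeEscalation: false
```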
app-service App Service Web Configure Tls Mutual Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-configure-tls-mutual-auth.md
To set up your app to require client certificates:
1. Set **Client certificate mode** to **Require**. Click **Save** at the top of the page.
+### [Azure CLI](#tab/azurecli)
To do the same with Azure CLI, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive az webapp update --set clientCertEnabled=true --name <app-name> --resource-group <group-name> ```
+### [Bicep](#tab/bicep)
+
+For Bicep, modify the properties `clientCertEnabled`, `clientCertMode`, and `clientCertExclusionPaths`. A sample Bicep snippet is provided below:
+
+```bicep
+resource appService 'Microsoft.Web/sites@2020-06-01' = {
+ name: webSiteName
+ location: location
+ kind: 'app'
+ properties: {
+ serverFarmId: appServicePlan.id
+ siteConfig: {
+ linuxFxVersion: linuxFxVersion
+ }
+ clientCertEnabled: true
+ clientCertMode: 'Required'
+ clientCertExclusionPaths: '/sample1;/sample2'
+ }
+}
+```
+
+### [ARM](#tab/arm)
+
+For ARM templates, modify the properties `clientCertEnabled`, `clientCertMode`, and `clientCertExclusionPaths`. A sample ARM template snippet is provided below:
+
+```json
+{
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2020-06-01",
+ "name": "[parameters('webAppName')]",
+ "location": "[parameters('location')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', variables('appServicePlanPortalName'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('appServicePlanPortalName'))]",
+ "siteConfig": {
+ "linuxFxVersion": "[parameters('linuxFxVersion')]"
+ },
+ "clientCertEnabled": true,
+ "clientCertMode": "Required",
+ "clientCertExclusionPaths": "/sample1;/sample2"
+ }
+}
+```
++ ## Exclude paths from requiring authentication
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
It's not possible to edit app settings in bulk by using a JSON file with Azure P
--
+### Configure arrays in app settings
+
+You can also configure arrays in app settings as shown in the table below.
+
+|App setting name | App setting value |
+|--|-|
+|MY_ENV_VAR | ['entry1', 'entry2', 'entry3'] |
+ ## Configure connection strings In the [Azure portal], search for and select **App Services**, and then select your app. In the app's left menu, select **Configuration** > **Application settings**.
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
The following features are supported for Windows containers:
::: zone pivot="container-linux"
-This guide shows how to mount Azure Storage as a network share in a built-in Linux container or a custom Linux container in App Service. See the video [how to mount Azure Storage as a local share](https://www.youtube.com/watch?v=OJkvpWYr57Y). The benefits of custom-mounted storage include:
+This guide shows how to mount Azure Storage as a network share in a built-in Linux container or a custom Linux container in App Service. See the video [how to mount Azure Storage as a local share](https://www.youtube.com/watch?v=OJkvpWYr57Y). To use Azure Storage in an ARM template, see [Bring your own storage](https://github.com/Azure/app-service-linux-docs/blob/master/BringYourOwnStorage/BYOS_azureFiles.json). The benefits of custom-mounted storage include:
- Configure persistent storage for your App Service app and manage the storage separately. - Make static content like video and images readily available for your App Service app.
app-service Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-local-git.md
When you push commits to your App Service repository, App Service deploys the fi
You can also change the `DEPLOYMENT_BRANCH` app setting in the Azure Portal, by selecting **Configuration** under **Settings** and adding a new Application Setting with a name of `DEPLOYMENT_BRANCH` and value of `main`.
+> [!NOTE]
+> You can also change the `DEPLOYMENT_BRANCH` app setting in the Azure portal by selecting **Deployment Center** under **Deployment** and modifying the **Branch** value.
## Troubleshoot deployment
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
Title: Use the migration feature to migrate your App Service Environment to App
description: Learn how to migrate your App Service Environment to App Service Environment v3 using the migration feature Previously updated : 9/15/2022 Last updated : 10/21/2022 zone_pivot_groups: app-service-cli-portal
An App Service Environment v1 and v2 can be automatically migrated to an [App Se
Ensure you understand how migrating to an App Service Environment v3 will affect your applications. Review the [migration process](migrate.md#overview-of-the-migration-process-using-the-migration-feature) to understand the process timeline and where and when you'll need to get involved. Also review the [FAQs](migrate.md#frequently-asked-questions), which may answer some questions you currently have.
+Ensure there are no locks on your virtual network, resource group, or subscription. Locks will block platform operations during migration.
+ ::: zone pivot="experience-azcli" The recommended experience for the migration feature is using the [Azure portal](how-to-migrate.md?pivots=experience-azp). If you decide to use the Azure CLI to carry out the migration, you should follow the steps described here in order and as written since you'll be making Azure REST API calls. The recommended way for making these API calls is by using the [Azure CLI](/cli/azure/). For information about other methods, see [Getting Started with Azure REST](/rest/api/azure/).
App Service Environment v3 requires the subnet it's in to have a single delegati
az network vnet subnet update --resource-group $VNET_RG -name <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments ```
-## 6. Prepare your configurations
+## 6. Confirm there are no locks on the virtual network
+
+Virtual network locks will block platform operations during migration. If your virtual network has locks, you'll need to remove them before migrating. The locks can be readded if needed once migration is complete. Locks can exist at three different scopes: subscription, resource group, and resource. When you apply a lock at a parent scope, all resources within that scope inherit the same lock. If you have locks applied at the subscription or resource group scope, they'll need to be removed during the migration. For more information on locks and lock inheritance, see [Lock your resources to protect your infrastructure](../../azure-resource-manager/management/lock-resources.md).
+
+Use the following command to check if your virtual network has any locks.
+
+```azurecli
+az lock list --resource-group $VNET_RG --resource <vnet-name> --resource-type Microsoft.Network/virtualNetworks
+```
+
+Delete any existing locks using the following command.
+
+```azurecli
+az lock delete --resource-group $VNET_RG --name <lock-name> --resource <vnet-name> --resource-type Microsoft.Network/virtualNetworks
+```
+
+For related commands to check if your subscription or resource group has locks, see [Azure CLI reference for locks](../../azure-resource-manager/management/lock-resources.md#azure-cli).
+
+## 7. Prepare your configurations
You can make your new App Service Environment v3 zone redundant if your existing environment is in a [region that supports zone redundancy](./overview.md#regions). This can be done by setting the `zoneRedundant` property to "true". Zone redundancy is an optional configuration. This configuration can only be set during the creation of your new App Service Environment v3 and can't be removed at a later time. For more information, see [Choose your App Service Environment v3 configurations](./migrate.md#choose-your-app-service-environment-v3-configurations). If you don't want to configure zone redundancy, don't include the `zoneRedundant` parameter.
If you're using a system assigned managed identity for your custom domain suffix
} ```
-## 7. Migrate to App Service Environment v3
+## 8. Migrate to App Service Environment v3
Only start this step once you've completed all pre-migration actions listed previously and understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. This step takes up to three hours for v2 to v3 migrations and up to six hours for v1 to v3 migrations depending on environment size. During that time, there will be about one hour of application downtime. Scaling, deployments, and modifications to your existing App Service Environment will be blocked during this step.
From the [Azure portal](https://portal.azure.com), navigate to the **Migration**
On the migration page, the platform will validate if migration is supported for your App Service Environment. If your environment isn't supported for migration, a banner will appear at the top of the page and include an error message with a reason. See the [troubleshooting](migrate.md#troubleshooting) section for descriptions of the error messages you may see if you aren't eligible for migration. If your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state, you won't be able to use the migration feature. If your environment [won't be supported for migration with the migration feature](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md). If migration is supported for your App Service Environment, you'll be able to proceed to the next step in the process. The migration page will guide you through the series of steps to complete the migration. ## 2. Generate IP addresses for your new App Service Environment v3
Under **Get new IP addresses**, confirm you understand the implications and star
When the previous step finishes, you'll be shown the IP addresses for your new App Service Environment v3. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. Don't move on to the next step until you confirm that you have made these updates. ## 4. Delegate your App Service Environment subnet App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You'll need to confirm your subnet is delegated properly and/or update the delegation if needed before migrating. A link to your subnet is given so that you can confirm and update as needed. +
+## 5. Confirm there are no locks on the virtual network
-## 5. Choose your configurations
+Virtual network locks will block platform operations during migration. If your virtual network has locks, you'll need to remove them before migrating. The locks can be readded if needed once migration is complete. Locks can exist at three different scopes: subscription, resource group, and resource. When you apply a lock at a parent scope, all resources within that scope inherit the same lock. If you have locks applied at the subscription or resource group scope, they'll need to be removed during the migration. For more information on locks and lock inheritance, see [Lock your resources to protect your infrastructure](../../azure-resource-manager/management/lock-resources.md).
+
+For details on how to check if your subscription or resource group has locks, see [Configure locks](../../azure-resource-manager/management/lock-resources.md#configure-locks).
++
+## 6. Choose your configurations
You can make your new App Service Environment v3 zone redundant if your existing environment is in a [region that supports zone redundancy](./overview.md#regions). Zone redundancy is an optional configuration. This configuration can only be set during the creation of your new App Service Environment v3 and can't be removed at a later time. For more information, see [Choose your App Service Environment v3 configurations](./migrate.md#choose-your-app-service-environment-v3-configurations). Select **Enabled** if you'd like to configure zone redundancy.
After you add your custom domain suffix details, the "Migrate" button will be en
:::image type="content" source="./media/migration/custom-domain-suffix.png" alt-text="Screenshot that shows the configuration details have been added and environment is ready for migration.":::
-## 6. Migrate to App Service Environment v3
+## 7. Migrate to App Service Environment v3
Once you've completed all of the above steps, you can start migration. Make sure you understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. This step takes up to three hours for v2 to v3 migrations and up to six hours for v1 to v3 migrations depending on environment size. Scaling and modifications to your existing App Service Environment will be blocked during this step.
If your migration included a custom domain suffix, for App Service Environment v
> [App Service Environment v3 Networking](networking.md) > [!div class="nextstepaction"]
-> [Custom domain suffix](./how-to-custom-domain-suffix.md)
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Once the new IPs are created, you'll have the new default outbound to the intern
App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Migration won't succeed if the App Service Environment's subnet isn't delegated or it's delegated to a different resource.
+### Ensure there are no locks on your resources
+
+Virtual network locks will block platform operations during migration. If your virtual network has locks, you'll need to remove them before migrating. The locks can be readded if needed once migration is complete. Locks can exist at three different scopes: subscription, resource group, and resource. When you apply a lock at a parent scope, all resources within that scope inherit the same lock. If you have locks applied at the subscription or resource group scope, they'll need to be removed during the migration. For more information on locks and lock inheritance, see [Lock your resources to protect your infrastructure](../../azure-resource-manager/management/lock-resources.md).
### Choose your App Service Environment v3 configurations Your App Service Environment v3 can be deployed across availability zones in the regions that support it. This architecture is known as [zone redundancy](../../availability-zones/migrate-app-service-environment.md). Zone redundancy can only be configured during App Service Environment creation. If you want your new App Service Environment v3 to be zone redundant, enable the configuration during the migration process. Any App Service Environment that is using the migration feature to migrate can be configured as zone redundant as long as you're using a [region that supports zone redundancy for App Service Environment v3](./overview.md#regions). If your existing environment is using a region that doesn't support zone redundancy, the configuration option will be disabled and you won't be able to configure it. The migration feature doesn't support changing regions. If you'd like to use a different region, use one of the [manual migration options](migration-alternatives.md).
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
Azure App Service is a fully managed platform as a service (PaaS) offering for d
* **DevOps optimization** - Set up [continuous integration and deployment](deploy-continuous-deployment.md) with Azure DevOps, GitHub, BitBucket, Docker Hub, or Azure Container Registry. Promote updates through [test and staging environments](deploy-staging-slots.md). Manage your apps in App Service by using [Azure PowerShell](/powershell/azure/) or the [cross-platform command-line interface (CLI)](/cli/azure/install-azure-cli). * **Global scale with high availability** - Scale [up](manage-scale-up.md) or [out](../azure-monitor/autoscale/autoscale-get-started.md) manually or automatically. Host your apps anywhere in Microsoft's global datacenter infrastructure, and the App Service [SLA](https://azure.microsoft.com/support/legal/sla/app-service/) promises high availability. * **Connections to SaaS platforms and on-premises data** - Choose from more than 50 [connectors](../connectors/apis-list.md) for enterprise systems (such as SAP), SaaS services (such as Salesforce), and internet services (such as Facebook). Access on-premises data using [Hybrid Connections](app-service-hybrid-connections.md) and [Azure Virtual Networks](./overview-vnet-integration.md).
-* **Security and compliance** - App Service is [ISO, SOC, and PCI compliant](https://www.microsoft.com/en-us/trustcenter). Authenticate users with [Azure Active Directory](configure-authentication-provider-aad.md), [Google](configure-authentication-provider-google.md), [Facebook](configure-authentication-provider-facebook.md), [Twitter](configure-authentication-provider-twitter.md), or [Microsoft account](configure-authentication-provider-microsoft.md). Create [IP address restrictions](app-service-ip-restrictions.md) and [manage service identities](overview-managed-identity.md).
+* **Security and compliance** - App Service is [ISO, SOC, and PCI compliant](https://www.microsoft.com/trustcenter). Authenticate users with [Azure Active Directory](configure-authentication-provider-aad.md), [Google](configure-authentication-provider-google.md), [Facebook](configure-authentication-provider-facebook.md), [Twitter](configure-authentication-provider-twitter.md), or [Microsoft account](configure-authentication-provider-microsoft.md). Create [IP address restrictions](app-service-ip-restrictions.md) and [manage service identities](overview-managed-identity.md). [Prevent subdomain takeovers](reference-dangling-subdomain-prevention.md).
* **Application templates** - Choose from an extensive list of application templates in the [Azure Marketplace](https://azure.microsoft.com/marketplace/), such as WordPress, Joomla, and Drupal. * **Visual Studio and Visual Studio Code integration** - Dedicated tools in Visual Studio and Visual Studio Code streamline the work of creating, deploying, and debugging. * **API and mobile features** - App Service provides turn-key CORS support for RESTful API scenarios, and simplifies mobile app scenarios by enabling authentication, offline data sync, push notifications, and more.
Create your first web app.
> [HTML](quickstart-html.md) > [!div class="nextstepaction"]
-> [Custom container (Windows or Linux)](tutorial-custom-container.md)
+> [Custom container (Windows or Linux)](tutorial-custom-container.md)
app-service Reference Dangling Subdomain Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-dangling-subdomain-prevention.md
-# What is a subdomain takeover?
+# Mitigating subdomain takeovers in Azure App Service
Subdomain takeovers are a common threat for organizations that regularly create and delete many resources. A subdomain takeover can occur when you have a DNS record that points to a deprovisioned Azure resource. Such DNS records are also known as "dangling DNS" entries. Subdomain takeovers enable malicious actors to redirect traffic intended for an organization's domain to a site performing malicious activity.
The risks of subdomain takeover include:
- Phishing campaigns - Further risks of classic attacks such as XSS, CSRF, CORS bypass
-Learn more about Subdomain Takeover at [Dangling DNS and subdomain takeover](../security/fundamentals/subdomain-takeover.md).
+Learn more about Subdomain Takeover at [Dangling DNS and subdomain takeover](/azure/security/fundamentals/subdomain-takeover).
-Azure App Service provides [Name Reservation](#how-name-reservation-service-works) Service and [domain verification tokens](#domain-verification-token) to prevent subdomain takeovers.
-## How Name Reservation Service works
+Azure App Service provides [Name Reservation Service](#how-app-service-prevents-subdomain-takeovers) and [domain verification tokens](#how-you-can-prevent-subdomain-takeovers) to prevent subdomain takeovers.
+## How App Service prevents subdomain takeovers
-Upon deletion of an App Service app, the corresponding DNS is reserved. During the reservation period, re-use of the DNS will be forbidden except for subscriptions belonging to tenant of the subscription originally owning the DNS.
+Upon deletion of an App Service app, the corresponding DNS is reserved. During the reservation period, reuse of the DNS is forbidden except for subscriptions belonging to the tenant of the subscription that originally owned the DNS.
-After the reservation expires, the DNS is free to be claimed by any subscription. By Name Reservation Service, the customer is afforded some time to either clean up any associations/pointers to said DNS or re-claim the DNS in Azure. The DNS name being reserved can be derived by appending 'azurewebsites.net'. Name Reservation Service is enabled by default on Azure App Service and doesn't require additional configuration.
+After the reservation expires, the DNS is free to be claimed by any subscription. With the Name Reservation Service, the customer is afforded some time to either clean up any associations/pointers to said DNS or reclaim the DNS in Azure. The DNS name being reserved can be derived by appending 'azurewebsites.net'. Name Reservation Service is enabled by default on Azure App Service and doesn't require more configuration.
#### Example scenario
Subscription 'A' and subscription 'B' are the only subscriptions belonging to te
During the reservation period, only subscription 'A' or subscription 'B' will be able to claim the DNS name 'test.azurewebsites.net' by creating a web app named 'test'. No other subscriptions will be allowed to claim it. After the reservation period is complete, any subscription in Azure can now claim 'test.azurewebsites.net'.
-## Domain verification token
+## How you can prevent subdomain takeovers
When creating DNS entries for Azure App Service, create an asuid.{subdomain} TXT record with the Domain Verification ID. When such a TXT record exists, no other Azure Subscription can validate the Custom Domain or take it over unless they add their token verification ID to the DNS entries. These records prevent the creation of another App Service app using the same name from your CNAME entry. Without the ability to prove ownership of the domain name, threat actors can't receive traffic or control the content.
-DNS records should be updated before the site deletion to ensure bad actors can't take over the domain between the period of deletion and re-creation. Be aware that the DNS records take time to propagate.
+DNS records should be updated before the site deletion to ensure bad actors can't take over the domain between the period of deletion and re-creation.
-To get a domain verification ID, see the [Map a custom domain tutorial](app-service-web-tutorial-custom-domain.md#2-get-a-domain-verification-id)
+To get a domain verification ID, see the [Map a custom domain tutorial](app-service-web-tutorial-custom-domain.md#2-get-a-domain-verification-id)
app-service Tutorial Java Tomcat Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md
description: Secure Azure Database for PostgreSQL connectivity with managed iden
ms.devlang: java Last updated 09/26/2022--++ # Tutorial: Connect to a PostgreSQL Database from Java Tomcat App Service without secrets using a managed identity
* [Git](https://git-scm.com/) * [Java JDK](/azure/developer/java/fundamentals/java-support-on-azure) * [Maven](https://maven.apache.org)
-* [Azure CLI](/cli/azure/overview). This quickstart requires that you are running the latest [edge build of Azure CLI](https://github.com/Azure/azure-cli/blob/dev/doc/try_new_features_before_release.md). [Download and install the edge builds](https://github.com/Azure/azure-cli#edge-builds) for your platform.
+* [Azure CLI](/cli/azure/install-azure-cli) version 2.41.0 or higher.
## Clone the sample app and prepare the repo
applied-ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-migration-guide.md
recommendations: false
> > Form Recognizer REST API v3.0 introduces breaking changes in the REST API request and analyze response JSON.
+## Migrating from a v3.0 preview API version
+
+Preview APIs are periodically deprecated. If you're using a preview API version, plan on updating your application to target the GA API version once available. To migrate from the 2021-09-30-preview or the 2022-01-30-preview API versions to the 2022-08-31 (GA) API version using the SDK, update to the [current version of the language-specific SDK](sdk-overview.md).
+
+The 2022-08-31 API has a few updates from the preview API versions:
+* Field rename: boundingBox to polygon to support non-quadrilateral polygon regions.
+* Field deleted: entities removed from the result of the general document model.
+* Field rename: documentLanguage.languageCode to locale.
+* Added support for the HEIF format.
+* Added paragraph detection, with role classification for layout and general document models.
+* Added support for parsed address fields.
+
+## Migrating from v2.1
+ Form Recognizer v3.0 introduces several new features and capabilities: * [Form Recognizer REST API](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) has been redesigned for better usability.
Base64 encoding is also supported in Form Recognizer v3.0:
} ```
-### Additional supported parameters
+### Additionally supported parameters
Parameters that continue to be supported:
In this migration guide, you've learned how to upgrade your existing Form Recogn
* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) * [What is Form Recognizer?](overview.md)
-* [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
+* [Form Recognizer quickstart](./quickstarts/try-sdk-rest-api.md)
+
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Last updated 10/20/2022
monikerRange: '>=form-recog-2.1.0' recommendations: false+ <!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD036 -->
recommendations: false
Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
+>[!NOTE]
+> With the release of the 2022-08-31 GA API, the associated preview APIs are being deprecated. If you are using the 2021-09-30-preview or the 2022-01-30-preview API versions, please update your applications to target the 2022-08-31 API version. There are a few minor changes involved; for more information, _see_ the [migration guide](v3-migration-guide.md).
+ ## October 2022 With the latest preview release, Form Recognizer's Read (OCR), Layout, and Custom template models support 134 new languages. These language additions include Greek, Latvian, Serbian, Thai, Ukrainian, and Vietnamese, along with several Latin and Cyrillic languages. Form Recognizer now has a total of 299 supported languages across the most recent GA and new preview versions. Refer to the [supported languages](language-support.md) page to see all supported languages. Use the REST API parameter `api-version=2022-06-30-preview` when using the API or the corresponding SDK to support the new languages in your applications.
+### Region expansion for training custom neural models
+
+Training custom neural models is now supported in the following additional regions:
+* East US
+* East US 2
+* US Gov Arizona
+ ## September 2022 ### Region expansion for training custom neural models
The updated Layout API table feature adds header recognition with column headers
* Client defaults to the latest supported service version, currently v2.1. You can specify version 2.0 in the **FormRecognizerClientOptions** object's **Version** property.
-* **StartRecognizeIdentityDocuments**. Renamed methods and method parameters using **Identity** to replace _Id_ keyword for all related identity documents recognition API functionalities.
+* **StartRecognizeIdentityDocuments**. Renamed methods and method parameters using **Identity** to replace _ID_ keyword for all related identity documents recognition API functionalities.
* **FormReadingOrder**. *ReadingOrder* renamed to **FormReadingOrder**.
The updated Layout API table feature adds header recognition with column headers
#### **Breaking changes (May)**
-* **begin_recognize_identity_documents** and **begin_recognize_identity_documents_from_url**. Renamed methods and method parameters using **Identity** to replace _Id_ keyword.
+* **begin_recognize_identity_documents** and **begin_recognize_identity_documents_from_url**. Renamed methods and method parameters using **Identity** to replace _ID_ keyword.
* **FieldValueType**. Renamed value type *country* to **countryRegion**. Removed value type *gender*.
automation Automation Dsc Config Data At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-config-data-at-scale.md
description: This article tells how to configure data at scale for Azure Automat
keywords: dsc,powershell,configuration,setup Previously updated : 08/08/2019 Last updated : 10/21/2022+ # Configure data at scale for Azure Automation State Configuration
-> Applies To: Windows PowerShell 5.1
+**Applies to:** :heavy_check_mark: Windows PowerShell 5.1
-Managing hundreds or thousands of servers can be a challenge.
-Customers have provided feedback that the most difficult aspect is actually managing
-[configuration data](/powershell/dsc/configurations/configdata).
-Organizing information across logical constructs like location, type, and environment.
+> [!IMPORTANT]
+> This article refers to a solution that is maintained by the Open Source community. Support is only available in the form of GitHub collaboration, and not from Microsoft.
-> [!NOTE]
-> This article refers to a solution that is maintained by the Open Source community.
-> Support is only available in the form of GitHub collaboration, not from Microsoft.
+Managing many servers is a challenge, and the most difficult aspect is managing [configuration data](/powershell/dsc/configurations/configdata): organizing information across logical constructs like location, type, and environment.
## Community project: Datum
-A community maintained solution named
-[Datum](https://github.com/gaelcolas/Datum)
-has been created to resolve this challenge.
-Datum builds on great ideas from other configuration management platforms
-and implements the same type of solution for PowerShell DSC.
-Information is
-[organized in to text files](https://github.com/gaelcolas/Datum#3-intended-usage)
-based on logical ideas.
-Examples would be:
+[Datum](https://github.com/gaelcolas/Datum) is a community-maintained solution created to resolve this challenge. Datum builds on great ideas from other configuration management platforms and implements the same type of solution for PowerShell DSC. Information is [organized into text files](https://github.com/gaelcolas/Datum#3-intended-usage) based on logical ideas.
+
+Listed below are a few examples:
- Settings that should apply globally - Settings that should apply to all servers in a location - Settings that should apply to all database servers - Individual server settings
-This information is organized in the file format you prefer (JSON, Yaml, or PSD1).
-Then cmdlets are provided to generate configuration data files by
-[consolidating the information](https://github.com/gaelcolas/Datum#datum-tree)
-from each file in to single view of a server or server role.
-
-Once the data files have been generated,
-you can use them with
-[DSC Configuration scripts](/powershell/dsc/configurations/write-compile-apply-configuration)
-to generate MOF files
-and
-[upload the MOF files to Azure Automation](./tutorial-configure-servers-desired-state.md#create-and-upload-a-configuration-to-azure-automation).
-Then register your servers from either
-[on-premises](./automation-dsc-onboarding.md#enable-physicalvirtual-linux-machines)
-or [in Azure](./automation-dsc-onboarding.md#enable-azure-vms)
-to pull configurations.
-
-To try out Datum, visit the
-[PowerShell Gallery](https://www.powershellgallery.com/packages/datum/)
-and download the solution or click "Project Site"
-to view the
-[documentation](https://github.com/gaelcolas/Datum#2-getting-started--concepts).
+
+## Configure data at scale
+
+Follow these steps to configure data at scale for Azure Automation State Configuration:
+
+1. Organize the information in your preferred file format, for example *JSON*, *YAML*, or *PSD1*.
+1. Use the provided cmdlets to generate configuration data files by [consolidating the information](https://github.com/gaelcolas/Datum#datum-tree) from each file into a single view of a server or server role.
+1. After you generate the data files, you can use them with [DSC Configuration scripts](/powershell/dsc/configurations/write-compile-apply-configuration) to generate *MOF* files and [upload the MOF files to Azure Automation](./tutorial-configure-servers-desired-state.md#create-and-upload-a-configuration-to-azure-automation).
+1. Register your servers from either [on-premises](./automation-dsc-onboarding.md#enable-physicalvirtual-linux-machines)
+or [in Azure](./automation-dsc-onboarding.md#enable-azure-vms) to pull configurations.
+
+To download the solution, go to [PowerShell Gallery](https://www.powershellgallery.com/packages/datum/) or select **Project site** to view the [documentation](https://github.com/gaelcolas/Datum#2-getting-started--concepts).
+ ## Next steps
automation Automation Dsc Create Composite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-create-composite.md
description: This article tells how to convert configurations to composite resou
keywords: dsc,powershell,configuration,setup Previously updated : 08/08/2019 Last updated : 10/21/2022
azure-arc Support Matrix For Arc Enabled Vmware Vsphere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md
Title: Support matrix for Arc-enabled VMware vSphere (preview)
-description: In this article, you'll learn about the support matrix for Arc-enabled VMware vSphere including vCenter Server versions supported, network requirements etc.
+ Title: Support matrix for Azure Arc-enabled VMware vSphere (preview)
+description: Learn about the support matrix for Arc-enabled VMware vSphere including vCenter Server versions supported, network requirements, and more.
Previously updated : 09/30/2022 Last updated : 10/21/2022 # Customer intent: As a VI admin, I want to understand the support matrix for Arc-enabled VMware vSphere.
-# Support matrix for Arc-enabled VMware vSphere (preview)
+# Support matrix for Azure Arc-enabled VMware vSphere (preview)
-This article documents the prerequisites and support requirements for using the [Arc-enabled VMware vSphere (preview)](overview.md) to manage your VMware vSphere VMs through Azure Arc.
+This article documents the prerequisites and support requirements for using [Azure Arc-enabled VMware vSphere (preview)](overview.md) to manage your VMware vSphere VMs through Azure Arc.
-To use Arc-enabled VMware vSphere, you must deploy an Azure Arc resource bridge in your VMware vSphere environment. The resource bridge provides an ongoing connection between your VMware vCenter Server and Azure. Once you've connected your VMware vCenter Server to Azure, components on the resource bridge discover your vCenter inventory. You can enable them in Azure and start performing virtual hardware and guest OS operations on them using Azure Arc.
+To use Arc-enabled VMware vSphere, you must deploy an Azure Arc resource bridge (preview) in your VMware vSphere environment. The resource bridge provides an ongoing connection between your VMware vCenter Server and Azure. Once you've connected your VMware vCenter Server to Azure, components on the resource bridge discover your vCenter inventory. You can enable them in Azure and start performing virtual hardware and guest OS operations on them using Azure Arc.
+## VMware vSphere requirements
-## VMware vSphere Requirements
+The following requirements must be met in order to use Azure Arc-enabled VMware vSphere.
### Supported vCenter Server versions -- vCenter Server version 6.7 or 7.
+Azure Arc-enabled VMware vSphere (preview) works with vCenter Server versions 6.7 and 7.
+
+> [!NOTE]
+> Azure Arc-enabled VMware vSphere (preview) currently supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, it is not recommended to use Arc-enabled VMware vSphere with it at this point.
### Required vSphere account privileges You need a vSphere account that can:-- Read all inventory. +
+- Read all inventory.
- Deploy and update VMs to all the resource pools (or clusters), networks, and VM templates that you want to use with Azure Arc. This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere (preview) and the deployment of the Azure Arc resource bridge (preview) VM.
-### Resource bridge resource requirements
+### Resource bridge resource requirements
For Arc-enabled VMware vSphere, the resource bridge has the following minimum virtual hardware requirements:
The following firewall URL exceptions are needed for the Azure Arc resource brid
| **Service** | **Port** | **URL** | **Direction** | **Notes**| | | | | | |
-| Microsoft container registry | 443 | https://mcr.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images for installation. |
-| Azure Arc Identity service | 443 | https://*.his.arc.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Manages identity and access control for Azure resources |
-| Azure Arc configuration service | 443 | https://*.dp.kubernetesconfiguration.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Used for Kubernetes cluster configuration. |
-| Cluster connect service | 443 | https://*.servicebus.windows.net | Appliance VM IP and control plane endpoint need outbound connection. | Provides cloud-enabled communication to connect on-premises resources with the cloud. |
-| Guest Notification service | 443 | `https://guestnotificationservice.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Used to connect on-premises resources to Azure. |
-| SFS API endpoint | 443 | msk8s.api.cdp.microsoft.com | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. |
-| Resource bridge (appliance) Dataplane service | 443 | https://*.dp.prod.appliances.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Communicate with resource provider in Azure. |
-| Resource bridge (appliance) container image download | 443 | *.blob.core.windows.net, `https://ecpacr.azurecr.io` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
-| Resource bridge (appliance) image download | 80 | *.dl.delivery.mp.microsoft.com | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Download the Arc resource bridge OS images. |
+| Microsoft container registry | 443 | `https://mcr.microsoft.com` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images for installation. |
+| Azure Arc Identity service | 443 | `https://*.his.arc.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Manages identity and access control for Azure resources |
+| Azure Arc configuration service | 443 | `https://*.dp.kubernetesconfiguration.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Used for Kubernetes cluster configuration. |
+| Cluster connect service | 443 | `https://*.servicebus.windows.net` | Appliance VM IP and control plane endpoint need outbound connection. | Provides cloud-enabled communication to connect on-premises resources with the cloud. |
+| Guest Notification service | 443 | `https://guestnotificationservice.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Used to connect on-premises resources to Azure. |
+| SFS API endpoint | 443 | `msk8s.api.cdp.microsoft.com` | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. |
+| Resource bridge (appliance) Data plane service | 443 | `https://*.dp.prod.appliances.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Communicate with resource provider in Azure. |
+| Resource bridge (appliance) container image download | 443 | `*.blob.core.windows.net`, `https://ecpacr.azurecr.io` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
+| Resource bridge (appliance) image download | 80 | `*.dl.delivery.mp.microsoft.com` | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Download the Arc resource bridge OS images. |
| Azure Arc for K8s container image download | 443 | `https://azurearcfork8sdev.azurecr.io` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
-| ADHS telemetry service | 443 | adhs.events.data.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. Runs inside the appliance/mariner OS. | Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming off Mariner, which would mean any K8s control plane. |
-| Microsoft events data service | 443 | v20.events.data.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming off Windows like Windows Server or HCI. |
+| ADHS telemetry service | 443 | `adhs.events.data.microsoft.com` | Appliance VM IP and control plane endpoint need outbound connection. Runs inside the appliance/mariner OS. | Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming off Mariner, which would mean any K8s control plane. |
+| Microsoft events data service | 443 | `v20.events.data.microsoft.com` | Appliance VM IP and control plane endpoint need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming off Windows like Windows Server or HCI. |
| vCenter Server | 443 | URL of the vCenter server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the vCenter server to communicate with the Appliance VM and the control plane.|
-## Azure permissions required
+## Azure role/permission requirements
-Following are the minimum Azure roles required for various operations:
+The minimum Azure roles required for operations related to Arc-enabled VMware vSphere are as follows:
| **Operation** | **Minimum role required** | **Scope** | | | | |
Following are the minimum Azure roles required for various operations:
| VM Provisioning | Azure Arc VMware VM Contributor | On the subscription or resource group where you want to provision VMs | | VM Operations | Azure Arc VMware VM Contributor | On the subscription or resource group that contains the VM, or on the VM itself |
-Any roles with higher permissions such as *Owner/Contributor* role on the same scope, will also allow you to perform all the operations listed above.
+Any roles with higher permissions on the same scope, such as Owner or Contributor, will also allow you to perform the operations listed above.
## Guest management (Arc agent) requirements
-With Arc-enabled VMware vSphere, you can install the Arc connected machine agent on your VMs at scale and use Azure management services on the VMs. There are additional requirements for this capability:
+With Arc-enabled VMware vSphere, you can install the Arc connected machine agent on your VMs at scale and use Azure management services on the VMs. There are additional requirements for this capability.
-To enable guest management (install the Arc connected machine agent), ensure
+To enable guest management (install the Arc connected machine agent), ensure the following:
-- VM is powered on-- VM has VMware tools installed and running-- Resource bridge has access to the host on which the VM is running-- VM is running a [supported operating system](#supported-operating-systems)
+- VM is powered on.
+- VM has VMware tools installed and running.
+- Resource bridge has access to the host on which the VM is running.
+- VM is running a [supported operating system](#supported-operating-systems).
- VM has internet connectivity directly or through proxy. If the connection is through a proxy, ensure [these URLs](#networking-requirements) are allow-listed.
+Additionally, make sure the following requirements are met to enable guest management.
+ ### Supported operating systems
-The officially supported versions of the Windows and Linux operating system for the Azure Connected Machine agent are listed [here](../servers/prerequisites.md#supported-operating-systems). Only x86-64 (64-bit) architectures are supported. x86 (32-bit) and ARM-based architectures, including x86-64 emulation on arm64, aren't supported operating environments.
+Make sure you are using a version of the Windows or Linux [operating systems that are officially supported for the Azure Connected Machine agent](../servers/prerequisites.md#supported-operating-systems). Only x86-64 (64-bit) architectures are supported. x86 (32-bit) and ARM-based architectures, including x86-64 emulation on arm64, aren't supported operating environments.
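As a simple illustration (not part of the official guidance), a pre-flight check like the following Python sketch can confirm that a machine reports a 64-bit x86 architecture before you attempt to install the agent:

```python
# Minimal pre-flight sketch: verify the machine reports an x86-64 architecture,
# since 32-bit and ARM-based environments aren't supported for the agent.
import platform

arch = platform.machine().lower()
if arch in ("x86_64", "amd64"):
    print(f"Architecture {arch} is supported.")
else:
    raise SystemExit(f"Architecture {arch} isn't supported for the Connected Machine agent.")
```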
### Software requirements Windows operating systems:
-* NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).
-* Windows PowerShell 5.1 is required. [Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616).
+- .NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).
+- Windows PowerShell 5.1 is required. [Download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).
Linux operating systems:
-* systemd
-* wget (to download the installation script)
+- systemd
+- wget (to download the installation script)
### Networking requirements
The following firewall URL exceptions are needed for the Azure Arc agents:
| **URL** | **Description** | | | |
-| aka.ms | Used to resolve the download script during installation |
-| download.microsoft.com | Used to download the Windows installation package |
-| packages.microsoft.com | Used to download the Linux installation package |
-| login.windows.net | Azure Active Directory |
-| login.microsoftonline.com | Azure Active Directory |
-| pas.windows.net | Azure Active Directory |
-| management.azure.com | Azure Resource Manager - to create or delete the Arc server resource |
-| *.his.arc.azure.com | Metadata and hybrid identity services |
-| *.guestconfiguration.azure.com | Extension management and guest configuration services |
-| guestnotificationservice.azure.com, *.guestnotificationservice.azure.com | Notification service for extension and connectivity scenarios |
-| azgn*.servicebus.windows.net | Notification service for extension and connectivity scenarios |
-| *.servicebus.windows.net | For Windows Admin Center and SSH scenarios |
-| *.blob.core.windows.net | Download source for Azure Arc-enabled servers extensions |
-| dc.services.visualstudio.com | Agent telemetry |
-
+| `aka.ms` | Used to resolve the download script during installation |
+| `packages.microsoft.com` | Used to download the Linux installation package |
+| `download.microsoft.com` | Used to download the Windows installation package |
+| `login.windows.net` | Azure Active Directory |
+| `login.microsoftonline.com` | Azure Active Directory |
+| `pas.windows.net` | Azure Active Directory |
+| `management.azure.com` | Azure Resource Manager - to create or delete the Arc server resource |
+| `*.his.arc.azure.com` | Metadata and hybrid identity services |
+| `*.guestconfiguration.azure.com` | Extension management and guest configuration services |
+| `guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | Notification service for extension and connectivity scenarios |
+| `azgn*.servicebus.windows.net` | Notification service for extension and connectivity scenarios |
+| `*.servicebus.windows.net` | For Windows Admin Center and SSH scenarios |
+| `*.blob.core.windows.net` | Download source for Azure Arc-enabled servers extensions |
+| `dc.services.visualstudio.com` | Agent telemetry |
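As a quick sanity check of these exceptions (for example, when the VM's traffic goes through a proxy), you could probe the endpoints with a small script such as the sketch below. The host list and proxy handling are illustrative assumptions: wildcard entries are replaced with representative hosts, so treat the result as a rough connectivity check rather than a full validation.

```python
# Illustrative sketch: probe the Arc agent endpoints over HTTPS to confirm
# outbound (or proxied) connectivity from the VM.
import os
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://aka.ms",
    "https://download.microsoft.com",
    "https://packages.microsoft.com",
    "https://login.windows.net",
    "https://login.microsoftonline.com",
    "https://pas.windows.net",
    "https://management.azure.com",
    "https://guestnotificationservice.azure.com",
    "https://dc.services.visualstudio.com",
]

# Honor an optional proxy, mirroring the allow-list guidance above.
proxy = os.environ.get("HTTPS_PROXY")
handlers = [urllib.request.ProxyHandler({"https": proxy})] if proxy else []
opener = urllib.request.build_opener(*handlers)

for url in ENDPOINTS:
    try:
        opener.open(url, timeout=10)
        print(f"reachable: {url}")
    except urllib.error.HTTPError:
        # An HTTP status (for example 403/404) still proves the endpoint is reachable.
        print(f"reachable: {url}")
    except Exception as exc:
        print(f"blocked or unreachable: {url} ({exc})")
```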
## Next steps
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
There are several ways to execute your function code based on changes to blobs i
| Filters | [Blob name pattern](#blob-name-patterns) | [Event filters](../storage/blobs/storage-blob-event-overview.md#filtering-events) | n/a | [Event filters](../storage/blobs/storage-blob-event-overview.md#filtering-events) | | Requires [event subscription](../event-grid/concepts.md#event-subscriptions) | No | Yes | No | Yes | | Supports high-scale² | No | Yes | Yes | Yes |
-| Description | Default trigger behavior, which relies on polling the container for updates. For more information, see the [examples in this article](#example). | Consumes blob storage events from an event subscription. Requires a `Source` parameter value of `EventGrid`. For more information, see [Tutorial: Trigger Azure Functions on blob containers using an event subscription](./functions-event-grid-blob-trigger.md). | Blob name string is manually added to a storage queue when a blob is added to the container. This value is passed directly by a Queue Storage trigger to a Blob Storage input binding on the same function. | Provides the flexibility of triggering on events besides those coming from a storage container. Use when need to also have non-storage events trigger your function. For more information, see [How to work with Event Grid triggers and bindings in Azure Functions](event-grid-how-tos.md). |
+| Description | Default trigger behavior, which relies on polling the container for updates. For more information, see the [examples in this article](#example). | Consumes blob storage events from an event subscription. Requires a `Source` parameter value of `EventGrid`. For more information, see [Tutorial: Trigger Azure Functions on blob containers using an event subscription](./functions-event-grid-blob-trigger.md). | Blob name string is manually added to a storage queue when a blob is added to the container. This value is passed directly by a Queue Storage trigger to a Blob Storage input binding on the same function. | Provides the flexibility of triggering on events besides those coming from a storage container. Use when you also need non-storage events to trigger your function. For more information, see [How to work with Event Grid triggers and bindings in Azure Functions](event-grid-how-tos.md). |
-¹Blob Storage input and output bindings support blob-only accounts.
-²High scale can be loosely defined as containers that have more than 100,000 blobs in them or storage accounts that have more than 100 blob updates per second.
+<sup>1</sup> Blob Storage input and output bindings support blob-only accounts.
+
+<sup>2</sup> High scale can be loosely defined as containers that have more than 100,000 blobs in them or storage accounts that have more than 100 blob updates per second.
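For reference, a minimal default (polling-based) blob trigger in the Python v2 programming model might look like the following sketch; the container path and connection setting name are placeholder assumptions.

```python
# Minimal polling-based blob trigger (Python v2 programming model).
# "samples-workitems" and "AzureWebJobsStorage" are placeholder values.
import logging
import azure.functions as func

app = func.FunctionApp()

@app.blob_trigger(arg_name="myblob",
                  path="samples-workitems/{name}",
                  connection="AzureWebJobsStorage")
def blob_trigger(myblob: func.InputStream):
    # Fires when the container is polled and a new or updated blob is found.
    logging.info("Processed blob: name=%s, size=%s bytes", myblob.name, myblob.length)
```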
For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Title: Overview for Microsoft Azure Maps
description: Learn about services and capabilities in Microsoft Azure Maps and how to use them in your applications. Previously updated : 11/29/2021 Last updated : 10/21/2022
Maps Creator service is a suite of web services that developers can use to creat
Maps Creator provides three core
-* [Dataset service](/rest/api/maps/v2/dataset). Use the Dataset service to create a dataset from a converted Drawing package data. For information about Drawing package requirements, see Drawing package requirements.
+* [Dataset service][Dataset service]. Use the Dataset service to create a dataset from a converted Drawing package data. For information about Drawing package requirements, see Drawing package requirements.
-* [Conversion service](/rest/api/maps/v2/dataset). Use the Conversion service to convert a DWG design file into Drawing package data for indoor maps.
+* [Conversion service][Conversion service]. Use the Conversion service to convert a DWG design file into Drawing package data for indoor maps.
-* [Tileset service](/rest/api/maps/v2/tileset). Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset.
+* [Tileset service][Tileset]. Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset.
-* [Feature State service](/rest/api/maps/v2/feature-state). Use the Feature State service to support dynamic map styling. Dynamic map styling allows applications to reflect real-time events on spaces provided by IoT systems.
+* [Custom styling service][Custom styling] (preview). Use the [style service][style] or [visual style editor][style editor] to customize the visual elements of an indoor map.
-* [WFS service](/rest/api/maps/v2/feature-state). Use the WFS service to query your indoor map data. The WFS service follows the [Open Geospatial Consortium API](http://docs.opengeospatial.org/is/17-069r3/17-069r3.html) standards for querying a single dataset.
+* [Feature State service][FeatureState]. Use the Feature State service to support dynamic map styling. Dynamic map styling allows applications to reflect real-time events on spaces provided by IoT systems.
+* [WFS service][WFS]. Use the WFS service to query your indoor map data. The WFS service follows the [Open Geospatial Consortium API](http://docs.opengeospatial.org/is/17-069r3/17-069r3.html) standards for querying a single dataset.
+
+<!-- * [Wayfinding service][wayfinding-preview] (preview). Use the [wayfinding API][wayfind] to generate a path between two points within a facility. Use the [routeset API][routeset] to create the data that the wayfinding service needs to generate paths.
+-->
### Elevation service The Azure Maps Elevation service is a web service that developers can use to retrieve elevation data from anywhere on the Earth's surface.
Try a sample app that showcases Azure Maps:
Stay up to date on Azure Maps: [Azure Maps blog](https://azure.microsoft.com/blog/topics/azure-maps/)+
+[Dataset service]: creator-indoor-maps.md#datasets
+[Conversion service]: creator-indoor-maps.md#convert-a-drawing-package
+[Tileset]: creator-indoor-maps.md#tilesets
+[Custom styling]: creator-indoor-maps.md#custom-styling-preview
+[style]: /rest/api/maps/v20220901preview/style
+[style editor]: https://azure.github.io/Azure-Maps-Style-Editor
+[FeatureState]: creator-indoor-maps.md#feature-statesets
+[WFS]: creator-indoor-maps.md#web-feature-service-api
+<!--[wayfinding-preview]: creator-indoor-maps.md# -->
azure-maps Routing Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md
description: Learn what level of coverage Azure Maps provides in various regions for routing, routing with traffic, and truck routing. Previously updated : 02/08/2022 Last updated : 10/21/2022
The following tables provide coverage information for Azure Maps routing.
| British Virgin Islands | ✓ | | | | Canada | ✓ | ✓ | ✓ | | Cayman Islands | ✓ | | |
-| Chile | ✓ | ✓ | ✓ |
+| Chile | ✓ | ✓ | |
| Colombia | ✓ | ✓ | | | Costa Rica | ✓ | | | | Cuba | ✓ | | |
The following tables provide coverage information for Azure Maps routing.
| Guam | ✓ | | | | Hong Kong SAR | ✓ | ✓ | | | India | ✓ | ✓ | |
-| Indonesia | ✓ | ✓ | |
+| Indonesia | ✓ | ✓ | ✓ |
| Kiribati | ✓ | | | | Laos | ✓ | | | | Macao SAR | ✓ | ✓ | |
The following tables provide coverage information for Azure Maps routing.
| Philippines | ✓ | ✓ | ✓ | | Pitcairn Islands | ✓ | | | | Samoa | ✓ | | |
-| Singapore | ✓ | ✓ | |
+| Singapore | ✓ | ✓ | ✓ |
| Solomon Islands | ✓ | | | | Sri Lanka | ✓ | | | | Taiwan | ✓ | ✓ | ✓ |
The following tables provide coverage information for Azure Maps routing.
| Montenegro | ✓ | | ✓ | | Netherlands | ✓ | ✓ | ✓ | | North Macedonia | ✓ | | |
-| Norway | ✓ | ✓ | |
+| Norway | ✓ | ✓ | ✓ |
| Poland | ✓ | ✓ | ✓ | | Portugal | ✓ | ✓ | ✓ | | Romania | ✓ | ✓ | ✓ |
The following tables provide coverage information for Azure Maps routing.
| Seychelles | ✓ | | | | Sierra Leone | ✓ | | | | Somalia | ✓ | | |
-| South Africa | ✓ | ✓ | |
+| South Africa | ✓ | ✓ | ✓ |
| South Sudan | ✓ | | | | St. Helena | ✓ | | | | Sudan | ✓ | | |
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
This section shows you how to download the auto-instrumentation jar file.
#### Download the jar file
-Download the [applicationinsights-agent-3.4.1.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.1/applicationinsights-agent-3.4.1.jar) file.
+Download the [applicationinsights-agent-3.4.2.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.2/applicationinsights-agent-3.4.2.jar) file.
> [!WARNING] >
Download the [applicationinsights-agent-3.4.1.jar](https://github.com/microsoft/
#### Point the JVM to the jar file
-Add `-javaagent:"path/to/applicationinsights-agent-3.4.1.jar"` to your application's JVM args.
+Add `-javaagent:"path/to/applicationinsights-agent-3.4.2.jar"` to your application's JVM args.
> [!TIP] > For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
Add `-javaagent:"path/to/applicationinsights-agent-3.4.1.jar"` to your applicati
APPLICATIONINSIGHTS_CONNECTION_STRING=<Copy connection string from Application Insights Resource Overview> ```
- - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.4.1.jar` with the following content:
+ - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.4.2.jar` with the following content:
```json {
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
There are two options for enabling Application Insights Java with Spring Boot: J
## Enabling with JVM argument
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.1.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.2.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.4.1.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.4.2.jar" -jar <myapp.jar>
``` ### Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.1.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.2.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.1.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.2.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.1.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.2.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.1.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.2.jar" -jar <myapp.jar>
``` ### Configuration
To enable Application Insights Java programmatically, you must add the following
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.4.1</version>
+ <version>3.4.2</version>
</dependency> ``` And invoke the `attach()` method of the `com.microsoft.applicationinsights.attach.ApplicationInsights` class in the first line of your `main()` method.
+> [!WARNING]
+>
+> The invocation must be at the beginning of the `main` method.
+ > [!WARNING] > > JRE is not supported. > [!WARNING] >
-> Read-only file system is not supported.
-
-> [!WARNING]
->
-> The invocation must be at the beginning of the `main` method.
+> The temporary directory of the operating system should be writable.
Example:
public class SpringBootApp {
### Configuration
-> [!NOTE]
-> Spring's `application.properties` or `application.yaml` files are not supported as
-> as sources for Application Insights Java configuration.
- Programmatic enablement supports all the same [configuration options](./java-standalone-config.md) as the JVM argument enablement, with the following differences below. #### Configuration file location By default, when enabling Application Insights Java programmatically, the configuration file `applicationinsights.json`
-will be read from the classpath.
+will be read from the classpath (`src/main/resources`, `src/test/resources`).
+
+From 3.4.2, you can configure the name of a JSON file in the classpath with the `applicationinsights.runtime-attach.configuration.classpath.file` system property.
+For example, with `-Dapplicationinsights.runtime-attach.configuration.classpath.file=applicationinsights-dev.json`, Application Insights uses the `applicationinsights-dev.json` file for configuration.
+
+> [!NOTE]
+> Spring's `application.properties` or `application.yaml` files are not supported
+> as sources for Application Insights Java configuration.
See [configuration file path configuration options](./java-standalone-config.md#configuration-file-path)
-to change this location.
+to change the location for a file outside the classpath.
#### Self-diagnostic log file location
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Read the Spring Boot documentation [here](../app/java-in-process-agent.md).
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.1.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.2.jar"
``` ### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.1.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.1.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.2.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.1.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.2.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.1.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.2.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.1.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.2.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.1.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.2.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.1.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.2.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7 ### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.1.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.4.2.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.1.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.2.jar -Xms1303m -Xmx1303m ..."
... ``` ### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.1.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.2.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.4.1.jar` to the existing `jv
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.4.1.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.2.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`
``` --exec--javaagent:path/to/applicationinsights-agent-3.4.1.jar
+-javaagent:path/to/applicationinsights-agent-3.4.2.jar
``` ## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.4.1.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.2.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.4.1.jar>
+ -javaagent:path/to/applicationinsights-agent-3.4.2.jar
</jvm-options> ... </java-config>
Java and Process Management > Process definition > Java Virtual Machine
``` In "Generic JVM arguments" add the following JVM argument: ```--javaagent:path/to/applicationinsights-agent-3.4.1.jar
+-javaagent:path/to/applicationinsights-agent-3.4.2.jar
``` After that, save and restart the application server.
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.1.jar
+-javaagent:path/to/applicationinsights-agent-3.4.2.jar
``` ## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.1.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.2.jar`.
You can specify your own configuration file path using either * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.1.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.2.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the json configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
You can also set the connection string using the environment variable `APPLICATI
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.1.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.2.jar` is located.
```json {
Furthermore, sampling is trace ID based, to help ensure consistent sampling deci
### Rate-Limited Sampling
-Starting from 3.4.1, rate-limited sampling is available, and is now the default.
+Starting from 3.4.2, rate-limited sampling is available, and is now the default.
If no sampling has been configured, the default is now rate-limited sampling configured to capture at most (approximately) 5 requests per second, along with all the dependencies and logs on those requests.
Starting from version 3.2.0, if you want to set a custom dimension programmatica
## Connection string overrides (preview)
-This feature is in preview, starting from 3.4.1.
+This feature is in preview, starting from 3.4.2.
Connection string overrides allow you to override the [default connection string](#connection-string), for example: * Set one connection string for one http path prefix `/myapp1`.
These are the valid `level` values that you can specify in the `applicationinsig
> | project timestamp, message, itemType > ```
+### Log markers for Logback and Log4j 2 (preview)
+
+Log markers are disabled by default.
+
+You can enable the `Marker` property for Logback and Log4j 2:
+
+```json
+{
+ "preview": {
+ "captureLogbackMarker": true
+ }
+}
+```
+
+```json
+{
+ "preview": {
+ "captureLog4jMarker": true
+ }
+}
+```
+
+This feature is in preview, starting from 3.4.2.
### Code properties for Logback (preview)
You can enable code properties (_FileName_, _ClassName_, _MethodName_, _LineNumb
> > This feature could add a performance overhead.
-This feature is in preview, starting from 3.4.1.
+This feature is in preview, starting from 3.4.2.
### LoggingLevel
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
Literal values in JDBC queries are masked by default in order to avoid accidentally capturing sensitive data.
-Starting from 3.4.1, this behavior can be disabled if desired, e.g.
+Starting from 3.4.2, this behavior can be disabled if desired, e.g.
```json {
Starting from 3.4.1, this behavior can be disabled if desired, e.g.
Literal values in Mongo queries are masked by default in order to avoid accidentally capturing sensitive data.
-Starting from 3.4.1, this behavior can be disabled if desired, e.g.
+Starting from 3.4.2, this behavior can be disabled if desired, e.g.
```json {
and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.4.1.jar` is located.
+`applicationinsights-agent-3.4.2.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
auto-instrumentation which is provided by the 3.x Java agent.
| 2.x dependency | Action | Remarks | |-|--||
-| `applicationinsights-core` | Update the version to `3.4.1` or later | |
-| `applicationinsights-web` | Update the version to `3.4.1` or later, and remove the Application Insights web filter your `web.xml` file. | |
-| `applicationinsights-web-auto` | Replace with `3.4.1` or later of `applicationinsights-web` | |
+| `applicationinsights-core` | Update the version to `3.4.2` or later | |
+| `applicationinsights-web` | Update the version to `3.4.2` or later, and remove the Application Insights web filter from your `web.xml` file. | |
+| `applicationinsights-web-auto` | Replace with `3.4.2` or later of `applicationinsights-web` | |
| `applicationinsights-logging-log4j1_2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 1.2 is auto-instrumented in the 3.x Java agent. | | `applicationinsights-logging-log4j2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 2 is auto-instrumented in the 3.x Java agent. | | `applicationinsights-logging-logback` | Remove the dependency and remove the Application Insights appender from your logback configuration. | No longer needed since Logback is auto-instrumented in the 3.x Java agent. |
-| `applicationinsights-spring-boot-starter` | Replace with `3.4.1` or later of `applicationinsights-web` | The cloud role name will no longer default to `spring.application.name`, see the [3.x configuration docs](./java-standalone-config.md#cloud-role-name) for configuring the cloud role name. |
+| `applicationinsights-spring-boot-starter` | Replace with `3.4.2` or later of `applicationinsights-web` | The cloud role name will no longer default to `spring.application.name`, see the [3.x configuration docs](./java-standalone-config.md#cloud-role-name) for configuring the cloud role name. |
## Step 2: Add the 3.x Java agent Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.1.jar
+-javaagent:path/to/applicationinsights-agent-3.4.2.jar
``` If you were using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the above.
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 10/11/2021 Last updated : 10/21/2022 ms.devlang: csharp, javascript, python
-# Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (preview)
+# Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applications (preview)
-The Azure Monitor OpenTelemetry Exporter is a component that sends traces (and eventually all application telemetry) to Azure Monitor Application Insights. To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
+The Azure Monitor OpenTelemetry Exporter is a component that sends traces and metrics (and eventually all application telemetry) to Azure Monitor Application Insights. To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
-This article describes how to enable and configure the OpenTelemetry-based Azure Monitor Preview offering. After you finish the instructions in this article, you'll be able to send OpenTelemetry traces to Azure Monitor Application Insights.
+This article describes how to enable and configure the OpenTelemetry-based Azure Monitor Preview offerings. After you finish the instructions in this article, you'll be able to send OpenTelemetry traces and metrics to Azure Monitor Application Insights.
> [!IMPORTANT]
-> Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications is currently in preview.
+> The Azure Monitor OpenTelemetry-based Offerings for .NET, Node.js, and Python applications are currently in preview.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ## Limitations of the preview release ### [.NET](#tab/net)
-Carefully consider whether this preview is right for you. It *enables distributed tracing only* and _excludes_:
+Consider whether this preview is right for you. It *enables distributed tracing and metrics* and _excludes_: - [Live Metrics](live-stream.md) - Logging API (like console logs and logging libraries) - [Profiler](profiler-overview.md) - [Snapshot Debugger](snapshot-debugger.md) - [Azure Active Directory authentication](azure-ad-authentication.md) - Autopopulation of Cloud Role Name and Cloud Role Instance in Azure environments - Autopopulation of User ID and Authenticated User ID when you use the [Application Insights JavaScript SDK](javascript.md) - Autopopulation of User IP (to determine location attributes)
- [Live Metrics](live-stream.md) - Logging API (like console logs and logging libraries) - [Profiler](profiler-overview.md) - [Snapshot Debugger](snapshot-debugger.md) - [Azure Active Directory authentication](azure-ad-authentication.md) - Autopopulation of Cloud Role Name and Cloud Role Instance in Azure environments - Autopopulation of User ID and Authenticated User ID when you use the [Application Insights JavaScript SDK](javascript.md) - Autopopulation of User IP (to determine location attributes)
Carefully consider whether this preview is right for you. It *enables distribute
- Ability to manually set User ID or Authenticated User ID - Propagating Operation Name to Dependency Telemetry - [Instrumentation libraries](#instrumentation-libraries) support on Azure Functions
+ - [Status](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#set-status) supports status code (unset, ok, error) and status description. "Status Description" is ignored by Azure Monitor Exporters.
-If you require a full-feature experience, use the existing Application Insights [ASP.NET](asp-net.md) or [ASP.NET Core](asp-net-core.md) SDK until the OpenTelemetry-based offering matures.
+If you require a full-feature experience, use the existing Application Insights [ASP.NET](asp-net.md) or [ASP.NET Core](asp-net-core.md) SDK until the OpenTelemetry-based offering matures.
### [Node.js](#tab/nodejs)
-Carefully consider whether this preview is right for you. It *enables distributed tracing only* and _excludes_:
+Consider whether this preview is right for you. It *enables distributed tracing and metrics* and _excludes_: - [Live Metrics](live-stream.md) - Logging API (like console logs and logging libraries) - Autopopulation of Cloud Role Name and Cloud Role Instance in Azure environments - Autopopulation of User ID and Authenticated User ID when you use the [Application Insights JavaScript SDK](javascript.md) - Autopopulation of User IP (to determine location attributes) - Ability to override [Operation Name](correlation.md#data-model-for-telemetry-correlation) - Ability to manually set User ID or Authenticated User ID - Propagating Operation Name to Dependency Telemetry
- [Live Metrics](live-stream.md) - Logging API (like console logs and logging libraries) - Autopopulation of Cloud Role Name and Cloud Role Instance in Azure environments - Autopopulation of User ID and Authenticated User ID when you use the [Application Insights JavaScript SDK](javascript.md) - Autopopulation of User IP (to determine location attributes) - Ability to override [Operation Name](correlation.md#data-model-for-telemetry-correlation) - Ability to manually set User ID or Authenticated User ID - Propagating Operation Name to Dependency Telemetry
+ - [Status](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#set-status) only supports status code (unset, ok, error) and status description. "Status Description" is ignored by Azure Monitor Exporters.
If you require a full-feature experience, use the existing [Application Insights Node.js SDK](nodejs.md) until the OpenTelemetry-based offering matures.
If you require a full-feature experience, use the existing [Application Insights
### [Python](#tab/python)
-Carefully consider whether this preview is right for you. It *enables distributed tracing only* and _excludes_:
+Consider whether this preview is right for you. It *enables distributed tracing and metrics* and _excludes_: - [Live Metrics](live-stream.md) - Logging API (like console logs and logging libraries) - [Azure Active Directory authentication](azure-ad-authentication.md) - Autopopulation of Cloud Role Name and Cloud Role Instance in Azure environments - Autopopulation of User ID and Authenticated User ID when you use the [Application Insights JavaScript SDK](javascript.md) - Autopopulation of User IP (to determine location attributes)
- [Live Metrics](live-stream.md) - Logging API (like console logs and logging libraries) - [Azure Active Directory authentication](azure-ad-authentication.md) - Autopopulation of Cloud Role Name and Cloud Role Instance in Azure environments - Autopopulation of User ID and Authenticated User ID when you use the [Application Insights JavaScript SDK](javascript.md) - Autopopulation of User IP (to determine location attributes)
Carefully consider whether this preview is right for you. It *enables distribute
- Ability to manually set User ID or Authenticated User ID - Propagating Operation Name to Dependency Telemetry - [Instrumentation libraries](#instrumentation-libraries) support on Azure Functions
+ - [Status](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#set-status) only supports status code (unset, ok, error) and status description. "Status Description" is ignored by Azure Monitor Exporters.
If you require a full-feature experience, use the existing [Application Insights Python-OpenCensus SDK](opencensus-python.md) until the OpenTelemetry-based offering matures.
Follow the steps in this section to instrument your application with OpenTelemet
### [.NET](#tab/net) -- Application using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet) or [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) that's at least .NET Framework 4.6.1
+- Application using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet) or [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) that's at least .NET Framework 4.6.2
### [Node.js](#tab/nodejs)
Follow the steps in this section to instrument your application with OpenTelemet
### [Python](#tab/python) -- Python Application using version 3.6+
+- Python Application using version 3.7+
Install these packages:
- [@opentelemetry/sdk-trace-base](https://www.npmjs.com/package/@opentelemetry/sdk-trace-base) - [@opentelemetry/sdk-trace-node](https://www.npmjs.com/package/@opentelemetry/sdk-trace-node)
+- [@opentelemetry/sdk-metrics](https://www.npmjs.com/package/@opentelemetry/sdk-metrics)
- [@azure/monitor-opentelemetry-exporter](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter) ```sh npm install @opentelemetry/sdk-trace-base npm install @opentelemetry/sdk-trace-node
+npm install @opentelemetry/sdk-metrics
npm install @azure/monitor-opentelemetry-exporter ```
npm install @opentelemetry/instrumentation-http
Install the latest [azure-monitor-opentelemetry-exporter](https://pypi.org/project/azure-monitor-opentelemetry-exporter/) PyPI package: ```sh
-pip install azure-monitor-opentelemetry-exporter
+pip install azure-monitor-opentelemetry-exporter --pre
```
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
-exporter = AzureMonitorTraceExporter.from_connection_string(
- "<Your Connection String>"
-)
+exporter = AzureMonitorTraceExporter(connection_string="<Your Connection String>")
trace.set_tracer_provider(TracerProvider()) tracer = trace.get_tracer(__name__)
Run your application and open your **Application Insights Resource** tab in the
> [!IMPORTANT] > If you have two or more services that emit telemetry to the same Application Insights resource, you're required to [set Cloud Role Names](#set-the-cloud-role-name-and-the-cloud-role-instance) to represent them properly on the Application Map.
-As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You have the option to disable nonessential data collection. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md).
+As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You may disable nonessential data collection. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md).
## Set the Cloud Role Name and the Cloud Role Instance
var resourceBuilder = ResourceBuilder.CreateDefault().AddAttributes(resourceAttr
// Done setting role name and role instance // Set ResourceBuilder on the provider.
-using var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .SetResourceBuilder(resourceBuilder)
- .AddSource("OTel.AzureMonitor.Demo")
- .AddAzureMonitorTraceExporter(o =>
- {
- o.ConnectionString = "<Your Connection String>";
- })
- .Build();
+var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .SetResourceBuilder(resourceBuilder)
+ .AddSource("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorTraceExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
```
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
```typescript ... import { NodeTracerProvider, NodeTracerConfig } from "@opentelemetry/sdk-trace-node";
+import { MeterProvider, MeterProviderOptions } from "@opentelemetry/sdk-metrics";
import { Resource } from "@opentelemetry/resources"; import { SemanticResourceAttributes } from "@opentelemetry/semantic-conventions"; // - // Setting role name and role instance // -
-const config: NodeTracerConfig = {
- resource: new Resource({
- [SemanticResourceAttributes.SERVICE_NAME]: "my-helloworld-service",
- [SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace",
- [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance",
- }),
- };
+const testResource = new Resource({
+ [SemanticResourceAttributes.SERVICE_NAME]: "my-helloworld-service",
+ [SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace",
+ [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance",
+});
+const tracerProviderConfig: NodeTracerConfig = {
+ resource: testResource
+};
+const meterProviderConfig: MeterProviderOptions = {
+ resource: testResource
+};
+ // - // Done setting role name and role instance // -
-const provider = new NodeTracerProvider(config);
+const tracerProvider = new NodeTracerProvider(tracerProviderConfig);
+const meterProvider = new MeterProvider(meterProviderConfig);
... ```
trace.set_tracer_provider(
For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md).
-## Sampling
+## Enable Sampling
-Sampling is supported in OpenTelemetry, but it isn't supported in the Azure Monitor OpenTelemetry Exporter at this time.
+You may want to enable sampling to reduce your data ingestion volume, which reduces your cost. Azure Monitor provides a custom *fixed-rate* sampler that populates events with a "sampling ratio", which Application Insights converts to "ItemCount". The *fixed-rate* sampler ensures accurate experiences and event counts. The sampler is designed to preserve your traces across services, and it's interoperable with older Application Insights SDKs. The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces will be sent. For more information, see [Learn More about sampling](sampling.md#brief-summary).
-> [!WARNING]
-> Enabling sampling in OpenTelemetry makes standard and log-based metrics extremely inaccurate, which adversely affects all Application Insights experiences. Also, enabling sampling alongside the existing Application Insights SDKs results in broken traces.
+> [!NOTE]
+> Metrics are unaffected by sampling.
+
+#### [.NET](#tab/net)
+
+In this example, we utilize the `ApplicationInsightsSampler`, which offers compatibility with Application Insights SDKs.
+
+```dotnetcli
+dotnet add package --prerelease OpenTelemetry.Extensions.AzureMonitor
+```
+
+```csharp
+var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource("OTel.AzureMonitor.Demo")
+ .SetSampler(new ApplicationInsightsSampler(0.1F))
+ .AddAzureMonitorTraceExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+```
+
+#### [Node.js](#tab/nodejs)
+
+```typescript
+...
+import { BasicTracerProvider, SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";
+import { ApplicationInsightsSampler, AzureMonitorTraceExporter } from "@azure/monitor-opentelemetry-exporter";
+
+// Sampler expects a sample rate of between 0 and 1 inclusive
+// A rate of 0.1 means approximately 10% of your traces are sent
+const aiSampler = new ApplicationInsightsSampler(0.75);
+const provider = new BasicTracerProvider({
+ sampler: aiSampler
+});
+const exporter = new AzureMonitorTraceExporter({
+ connectionString:
+ process.env["APPLICATIONINSIGHTS_CONNECTION_STRING"] || "<your connection string>",
+});
+provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
+provider.register();
+```
+
+#### [Python](#tab/python)
+
+In this example, we utilize the `ApplicationInsightsSampler`, which offers compatibility with Application Insights SDKs.
+
+```python
+from opentelemetry import trace
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+from azure.monitor.opentelemetry.exporter import (
+ ApplicationInsightsSampler,
+ AzureMonitorTraceExporter,
+)
+
+# Sampler expects a sample rate of between 0 and 1 inclusive
+# 0.1 means approximately 10% of your traces are sent
+sampler = ApplicationInsightsSampler(0.1)
+trace.set_tracer_provider(TracerProvider(sampler=sampler))
+tracer = trace.get_tracer(__name__)
+exporter = AzureMonitorTraceExporter(connection_string="<your-connection-string>")
+span_processor = BatchSpanProcessor(exporter)
+trace.get_tracer_provider().add_span_processor(span_processor)
+
+for i in range(100):
+ # Approximately 90% of these spans should be sampled out
+ with tracer.start_as_current_span("hello"):
+ print("Hello, World!")
+```
+++
+> [!TIP]
+> If you're not sure where to set the sampling rate, start at 5% (i.e., 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance blades. A higher rate generally results in higher accuracy. However, ANY sampling will affect accuracy so we recommend alerting on [OpenTelemetry metrics](#metrics), which are unaffected by sampling.
## Instrumentation libraries
-<!-- Microsoft has tested and validated that the following instrumentation libraries will work with the **Preview** Release. -->
+ The following libraries are validated to work with the preview release. > [!WARNING] > Instrumentation libraries are based on experimental OpenTelemetry specifications. Microsoft's *preview* support commitment is to ensure that the following libraries emit data to Azure Monitor Application Insights, but it's possible that breaking changes or experimental mapping will block some data elements.
-### HTTP
+### Distributed Tracing
#### [.NET](#tab/net) -- [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc7/src/OpenTelemetry.Instrumentation.AspNet/README.md) version:
- [1.0.0-rc7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNet/1.0.0-rc7)
+Requests
+- [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md) (1) version:
+ [1.0.0-rc9.6](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNet/1.0.0-rc9.6)
- [ASP.NET
- Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md) version:
- [1.0.0-rc7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNetCore/1.0.0-rc7)
+ Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md) (1) version:
+ [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNetCore/1.0.0-rc9.7)
+
+Dependencies
- [HTTP
- clients](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc7/src/OpenTelemetry.Instrumentation.Http/README.md) version:
- [1.0.0-rc7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http/1.0.0-rc7)
+ clients](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md) (1) version:
+ [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http/1.0.0-rc9.7)
+- [SQL
+ client](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.SqlClient/README.md) (1) version:
+ [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.SqlClient/1.0.0-rc9.7)
#### [Node.js](#tab/nodejs)
+Requests/Dependencies
- [http/https](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http/README.md) version:
- [0.26.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-http/v/0.26.0)
+ [0.33.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-http/v/0.33.0)
+
+Dependencies
+- [mysql](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mysql) version:
+ [0.25.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-mysql/v/0.25.0)
#### [Python](#tab/python) -- [Django](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-django) version:
- [0.24b0](https://pypi.org/project/opentelemetry-instrumentation-django/0.24b0/)
-- [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) version:
- [0.24b0](https://pypi.org/project/opentelemetry-instrumentation-flask/0.24b0/)
-- [Requests](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-requests) version:
- [0.24b0](https://pypi.org/project/opentelemetry-instrumentation-requests/0.24b0/)
+Requests
+- [Django](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-django) (1) version:
+ [0.34b0](https://pypi.org/project/opentelemetry-instrumentation-django/0.34b0/)
+- [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) (1) version:
+ [0.34b0](https://pypi.org/project/opentelemetry-instrumentation-flask/0.34b0/)
+
+Dependencies
+- [Psycopg2](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-psycopg2) version:
+ [0.34b0](https://pypi.org/project/opentelemetry-instrumentation-psycopg2/0.34b0/)
+- [Requests](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-requests) (1) version:
+ [0.34b0](https://pypi.org/project/opentelemetry-instrumentation-requests/0.34b0/)
-### Database
+(1) Supports automatic reporting (as SpanEvent) of unhandled exceptions
++
+### Metrics
#### [.NET](#tab/net) -- [SQL
- client](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc7/src/OpenTelemetry.Instrumentation.SqlClient/README.md) version:
- [1.0.0-rc7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.SqlClient/1.0.0-rc7)
+- [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md) version:
+ [1.0.0-rc9.6](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNet/1.0.0-rc9.6)
+- [ASP.NET
+ Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md) version:
+ [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNetCore/1.0.0-rc9.7)
+- [HTTP
+ clients](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md) version:
+ [1.0.0-rc9.7](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http/1.0.0-rc9.7)
+- [Runtime](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.Runtime-1.0.0/src/OpenTelemetry.Instrumentation.Runtime/README.md) version: [1.0.0](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime/1.0.0)
#### [Node.js](#tab/nodejs) -- [mysql](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mysql) version:
- [0.25.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-mysql/v/0.25.0)
+- [http/https](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http/README.md) version:
+ [0.33.0](https://www.npmjs.com/package/@opentelemetry/instrumentation-http/v/0.33.0)
#### [Python](#tab/python) -- [Psycopg2](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-psycopg2) version:
- [0.24b0](https://pypi.org/project/opentelemetry-instrumentation-psycopg2/0.24b0/)
+- [Django](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-django) version:
+ [0.34b0](https://pypi.org/project/opentelemetry-instrumentation-django/0.34b0/)
+- [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) version:
+ [0.34b0](https://pypi.org/project/opentelemetry-instrumentation-flask/0.34b0/)
+- [Requests](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-requests) version:
+ [0.34b0](https://pypi.org/project/opentelemetry-instrumentation-requests/0.34b0/)
-> [!NOTE]
-> The *preview* offering only includes instrumentations that handle HTTP and database requests. To learn more, see [OpenTelemetry Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/tree/main/specification/trace/semantic_conventions).
+> [!TIP]
+> The OpenTelemetry-based offerings currently emit all metrics as [Custom Metrics](#add-custom-metrics) in Metrics Explorer. Whatever you set as the meter name becomes the metrics namespace.
## Modify telemetry
These attributes might include adding a custom property to your telemetry. You m
> [!TIP] > The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the httpRequestMessage itself. They can select anything from it and store it as an attribute.
-#### Add a custom property
+#### Add a custom property to a Trace
Any [attributes](#add-span-attributes) you add to spans are exported as custom properties. They populate the _customDimensions_ field in the requests or the dependencies tables in Application Insights. ##### [.NET](#tab/net) 1. Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries:
- - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc7/src/OpenTelemetry.Instrumentation.AspNet/README.md#enrich)
- - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)
- - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc7/src/OpenTelemetry.Instrumentation.Http/README.md#enrich)
+ - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md#enrich)
+ - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)
+ - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md#enrich)
1. Use a custom processor:
const provider = new NodeTracerProvider();
const azureExporter = new AzureMonitorTraceExporter(); provider.addSpanProcessor(new SpanEnrichingProcessor()); provider.addSpanProcessor(new SimpleSpanProcessor(azureExporter));- ``` ##### [Python](#tab/python)
class SpanEnrichingProcessor(SpanProcessor):
span._name = "Updated-" + span.name span._attributes["CustomDimension1"] = "Value1" span._attributes["CustomDimension2"] = "Value2"- ```
You can populate the _client_IP_ field for requests by setting the `http.client_
##### [.NET](#tab/net)
-Use the add [custom property example](#add-a-custom-property), but replace the following lines of code in `ActivityEnrichingProcessor.cs`:
+Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code in `ActivityEnrichingProcessor.cs`:
```C# // only applicable in case of activity.Kind == Server
activity.SetTag("http.client_ip", "<IP Address>");
##### [Node.js](#tab/nodejs)
-Use the add [custom property example](#add-a-custom-property), but replace the following lines of code:
+Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code:
```typescript ...
class SpanEnrichingProcessor implements SpanProcessor{
##### [Python](#tab/python)
-Use the add [custom property example](#add-a-custom-property), but replace the following lines of code in `SpanEnrichingProcessor.py`:
+Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code in `SpanEnrichingProcessor.py`:
```python span._attributes["http.client_ip"] = "<IP Address>"
You can populate the _user_Id_ or _user_Authenticatedid_ field for requests by s
##### [.NET](#tab/net)
-Use the add [custom property example](#add-custom-property), but replace the following lines of code:
+Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code:
```C# Placeholder
Placeholder
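
A rough, hypothetical sketch for .NET, assuming the same custom-processor approach shown for the client IP example and the `enduser.id` attribute used in the Python tab, might set the tag in `ActivityEnrichingProcessor.cs` like this:

```csharp
// Hypothetical sketch: set the OpenTelemetry enduser.id attribute so that
// Application Insights populates the user_Id / user_AuthenticatedId field.
activity.SetTag("enduser.id", "<User ID>");
```
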
##### [Node.js](#tab/nodejs)
-Use the add [custom property example](#add-custom-property), but replace the following lines of code:
+Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code:
```typescript ...
class SpanEnrichingProcessor implements SpanProcessor{
##### [Python](#tab/python)
-Use the add [custom property example](#add-custom-property), but replace the following lines of code:
+Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code:
```python span._attributes["enduser.id"] = "<User ID>"
You might use the following ways to filter out telemetry before it leaves your a
#### [.NET](#tab/net) 1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries:
- - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc7/src/OpenTelemetry.Instrumentation.AspNet/README.md#filter)
- - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
- - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc7/src/OpenTelemetry.Instrumentation.Http/README.md#filter)
+ - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.6/src/OpenTelemetry.Instrumentation.AspNet/README.md#filter)
+ - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
+ - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.7/src/OpenTelemetry.Instrumentation.Http/README.md#filter)
1. Use a custom processor:
You might use the following ways to filter out telemetry before it leaves your a
1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source will be exported.
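
As a minimal sketch of that behavior (the source name and connection string below are illustrative, and the exporter registration follows the pattern used elsewhere in this article), only activities created from sources you explicitly add are exported:

```csharp
using System.Diagnostics;
using Azure.Monitor.OpenTelemetry.Exporter;
using OpenTelemetry;
using OpenTelemetry.Trace;

// Only "MyCompany.MyProduct.MyLibrary" is added, so activities created from any
// other ActivitySource aren't exported.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("MyCompany.MyProduct.MyLibrary")
    .AddAzureMonitorTraceExporter(o => o.ConnectionString = "<Your Connection String>")
    .Build();

var includedSource = new ActivitySource("MyCompany.MyProduct.MyLibrary");
var excludedSource = new ActivitySource("SomeOtherLibrary");

using (var activity = includedSource.StartActivity("Exported")) { }  // exported
using (var activity = excludedSource.StartActivity("Dropped")) { }   // not exported
```
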
- <!
- ### Get the trace ID or span ID
- You might use X or Y to get the trace ID or span ID. Adding a trace ID or span ID to existing logging telemetry enables better correlation when you debug and diagnose issues.
-
- > [!NOTE]
- > If you manually create spans for log-based metrics and alerting, you'll need to update them to use the metrics API (after it's released) to ensure accuracy.
-
- ```C#
- Placeholder
- ```
-
- For more information, see [GitHub Repo](link).
- >
- #### [Node.js](#tab/nodejs) 1. Exclude the URL option provided by many HTTP instrumentation libraries.
You might use the following ways to filter out telemetry before it leaves your a
The following example shows how to exclude a certain URL from being tracked by using the [HTTP/HTTPS instrumentation library](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http): ```typescript
- ...
+ import { IncomingMessage } from "http";
+ import { RequestOptions } from "https";
+ import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { HttpInstrumentation, HttpInstrumentationConfig } from "@opentelemetry/instrumentation-http";
-
- ...
+ import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
++ const httpInstrumentationConfig: HttpInstrumentationConfig = {
- ignoreIncomingPaths: [new RegExp(/dc.services.visualstudio.com/i)]
+ ignoreIncomingRequestHook: (request: IncomingMessage) => {
+ // Ignore OPTIONS incoming requests
+ if (request.method === 'OPTIONS') {
+ return true;
+ }
+ return false;
+ },
+ ignoreOutgoingRequestHook: (options: RequestOptions) => {
+ // Ignore outgoing requests with /test path
+ if (options.path === '/test') {
+ return true;
+ }
+ return false;
+ }
}; const httpInstrumentation = new HttpInstrumentation(httpInstrumentationConfig);
+ const provider = new NodeTracerProvider();
provider.register(); registerInstrumentations({ instrumentations: [
You might use the following ways to filter out telemetry before it leaves your a
``` 1. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set `TraceFlag` to `DEFAULT`.
-Use the add [custom property example](#add-a-custom-property), but replace the following lines of code:
+Use the add [custom property example](#add-a-custom-property-to-a-trace), but replace the following lines of code:
```typescript ...
Use the add [custom property example](#add-a-custom-property), but replace the f
+## Custom telemetry
+
+This section explains how to collect custom telemetry from your application.
+
+### Add Custom Metrics
+
+> [!NOTE]
+> Custom Metrics are under preview in Azure Monitor Application Insights. Custom metrics without dimensions are available by default. To view and alert on dimensions, you need to [opt-in](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
+
+You may want to collect metrics beyond what is collected by [instrumentation libraries](#instrumentation-libraries).
+
+The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios. You'll need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library.
+
+The following table shows the recommended [aggregation types](/essentials/metrics-aggregation-explained.md#aggregation-types) for each of the OpenTelemetry Metric Instruments.
+
+| OpenTelemetry Instrument | Azure Monitor Aggregation Type |
+|||
+| Counter | Sum |
+| Asynchronous Counter | Sum |
+| Histogram | Average, Sum, Count (Max, Min for Python and Node.js only) |
+| Asynchronous Gauge | Average |
+| UpDownCounter (Python and Node.js only) | Sum |
+| Asynchronous UpDownCounter (Python and Node.js only) | Sum |
+
+> [!CAUTION]
+> Aggregation types beyond what's shown in the table typically aren't meaningful.
+
+The [OpenTelemetry Specification](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#instrument)
+describes the instruments and provides examples of when you might use each one.
+
+> [!TIP]
+> The histogram is the most versatile and most closely equivalent to the prior Application Insights Track Metric API. Azure Monitor currently flattens the histogram instrument into our five supported aggregation types, and support for percentiles is underway. Although less versatile, other OpenTelemetry instruments have a lesser impact on your application's performance.
+
+#### Histogram Example
+
+#### [.NET](#tab/net)
+
+```csharp
+using System.Diagnostics.Metrics;
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;
+
+public class Program
+{
+ private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
+
+ public static void Main()
+ {
+ using var meterProvider = Sdk.CreateMeterProviderBuilder()
+ .AddMeter("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorMetricExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+
+ Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice");
+
+ var rand = new Random();
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "green"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+
+ System.Console.WriteLine("Press Enter key to exit.");
+ System.Console.ReadLine();
+ }
+}
+```
+
+#### [Node.js](#tab/nodejs)
+
+ ```typescript
+ import {
+ MeterProvider,
+ PeriodicExportingMetricReader,
+ PeriodicExportingMetricReaderOptions
+ } from "@opentelemetry/sdk-metrics";
+ import { AzureMonitorMetricExporter } from "@azure/monitor-opentelemetry-exporter";
+
+ const provider = new MeterProvider();
+ const exporter = new AzureMonitorMetricExporter({
+ connectionString:
+ process.env["APPLICATIONINSIGHTS_CONNECTION_STRING"] || "<your connection string>",
+ });
+ const metricReaderOptions: PeriodicExportingMetricReaderOptions = {
+ exporter: exporter,
+ };
+ const metricReader = new PeriodicExportingMetricReader(metricReaderOptions);
+ provider.addMetricReader(metricReader);
+ const meter = provider.getMeter("OTel.AzureMonitor.Demo");
+ let histogram = meter.createHistogram("histogram");
+ histogram.record(1, { "testKey": "testValue" });
+ histogram.record(30, { "testKey": "testValue2" });
+ histogram.record(100, { "testKey2": "testValue" });
+```
+
+#### [Python](#tab/python)
+
+```python
+from opentelemetry import metrics
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
+
+from azure.monitor.opentelemetry.exporter import AzureMonitorMetricExporter
+
+exporter = AzureMonitorMetricExporter(connection_string="<your-connection-string>")
+reader = PeriodicExportingMetricReader(exporter)
+metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
+meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_histogram_demo")
+
+histogram = meter.create_histogram("histogram")
+histogram.record(1.0, {"test_key": "test_value"})
+histogram.record(100.0, {"test_key2": "test_value"})
+histogram.record(30.0, {"test_key": "test_value2"})
+
+input()
+```
+++
+#### Counter Example
+
+#### [.NET](#tab/net)
+
+```csharp
+using System.Diagnostics.Metrics;
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;
+
+public class Program
+{
+ private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
+
+ public static void Main()
+ {
+ using var meterProvider = Sdk.CreateMeterProviderBuilder()
+ .AddMeter("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorMetricExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+
+ Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter");
+
+ myFruitCounter.Add(1, new("name", "apple"), new("color", "red"));
+ myFruitCounter.Add(2, new("name", "lemon"), new("color", "yellow"));
+ myFruitCounter.Add(1, new("name", "lemon"), new("color", "yellow"));
+ myFruitCounter.Add(2, new("name", "apple"), new("color", "green"));
+ myFruitCounter.Add(5, new("name", "apple"), new("color", "red"));
+ myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow"));
+
+ System.Console.WriteLine("Press Enter key to exit.");
+ System.Console.ReadLine();
+ }
+}
+```
+
+#### [Node.js](#tab/nodejs)
+
+```typescript
+ import {
+ MeterProvider,
+ PeriodicExportingMetricReader,
+ PeriodicExportingMetricReaderOptions
+ } from "@opentelemetry/sdk-metrics";
+ import { AzureMonitorMetricExporter } from "@azure/monitor-opentelemetry-exporter";
+
+ const provider = new MeterProvider();
+ const exporter = new AzureMonitorMetricExporter({
+ connectionString:
+ process.env["APPLICATIONINSIGHTS_CONNECTION_STRING"] || "<your connection string>",
+ });
+ const metricReaderOptions: PeriodicExportingMetricReaderOptions = {
+ exporter: exporter,
+ };
+ const metricReader = new PeriodicExportingMetricReader(metricReaderOptions);
+ provider.addMetricReader(metricReader);
+ const meter = provider.getMeter("OTel.AzureMonitor.Demo");
+ let counter = meter.createCounter("counter");
+ counter.add(1, { "testKey": "testValue" });
+ counter.add(5, { "testKey2": "testValue" });
+ counter.add(3, { "testKey": "testValue2" });
+```
+
+#### [Python](#tab/python)
+
+```python
+from opentelemetry import metrics
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
+
+from azure.monitor.opentelemetry.exporter import AzureMonitorMetricExporter
+
+exporter = AzureMonitorMetricExporter(connection_string="<your-connection-string>")
+reader = PeriodicExportingMetricReader(exporter)
+metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
+meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_counter_demo")
+
+counter = meter.create_counter("counter")
+counter.add(1.0, {"test_key": "test_value"})
+counter.add(5.0, {"test_key2": "test_value"})
+counter.add(3.0, {"test_key": "test_value2"})
+
+input()
+```
+++
+#### Gauge Example
+
+#### [.NET](#tab/net)
+
+```csharp
+using System.Collections.Generic;
+using System.Diagnostics;
+using System.Diagnostics.Metrics;
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;
+
+public class Program
+{
+ private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
+
+ public static void Main()
+ {
+ using var meterProvider = Sdk.CreateMeterProviderBuilder()
+ .AddMeter("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorMetricExporter(o =>
+ {
+ o.ConnectionString = "<Your Connection String>";
+ })
+ .Build();
+
+ var process = Process.GetCurrentProcess();
+
+ ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process));
+
+ System.Console.WriteLine("Press Enter key to exit.");
+ System.Console.ReadLine();
+ }
+
+ private static IEnumerable<Measurement<int>> GetThreadState(Process process)
+ {
+ foreach (ProcessThread thread in process.Threads)
+ {
+ yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id));
+ }
+ }
+}
+```
+
+#### [Node.js](#tab/nodejs)
+
+```typescript
+ import {
+ MeterProvider,
+ PeriodicExportingMetricReader,
+ PeriodicExportingMetricReaderOptions
+ } from "@opentelemetry/sdk-metrics";
+ import { ObservableResult } from "@opentelemetry/api";
+ import { AzureMonitorMetricExporter } from "@azure/monitor-opentelemetry-exporter";
+
+ const provider = new MeterProvider();
+ const exporter = new AzureMonitorMetricExporter({
+ connectionString:
+ process.env["APPLICATIONINSIGHTS_CONNECTION_STRING"] || "<your connection string>",
+ });
+ const metricReaderOptions: PeriodicExportingMetricReaderOptions = {
+ exporter: exporter,
+ };
+ const metricReader = new PeriodicExportingMetricReader(metricReaderOptions);
+ provider.addMetricReader(metricReader);
+ const meter = provider.getMeter("OTel.AzureMonitor.Demo");
+ let gauge = meter.createObservableGauge("gauge");
+ gauge.addCallback((observableResult: ObservableResult) => {
+ let randomNumber = Math.floor(Math.random() * 100);
+ observableResult.observe(randomNumber, {"testKey": "testValue"});
+ });
+```
+
+#### [Python](#tab/python)
+
+```python
+from typing import Iterable
+
+from opentelemetry import metrics
+from opentelemetry.metrics import CallbackOptions, Observation
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
+
+from azure.monitor.opentelemetry.exporter import AzureMonitorMetricExporter
+
+exporter = AzureMonitorMetricExporter(connection_string="<your-connection-string>")
+reader = PeriodicExportingMetricReader(exporter)
+metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
+meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_gauge_demo")
+
+def observable_gauge_generator(options: CallbackOptions) -> Iterable[Observation]:
+ yield Observation(9, {"test_key": "test_value"})
+
+def observable_gauge_sequence(options: CallbackOptions) -> Iterable[Observation]:
+ observations = []
+ for i in range(10):
+ observations.append(
+ Observation(9, {"test_key": i})
+ )
+ return observations
+
+gauge = meter.create_observable_gauge("gauge", [observable_gauge_generator])
+gauge2 = meter.create_observable_gauge("gauge2", [observable_gauge_sequence])
+
+input()
+```
+++
+### Add Custom Exceptions
+
+Select instrumentation libraries automatically report exceptions to Application Insights.
+However, you may want to manually report exceptions beyond what instrumentation libraries report.
+For instance, exceptions caught by your code aren't ordinarily reported. You may wish to report them
+to draw attention in relevant experiences, including the failures section and end-to-end transaction views.
+
+#### [.NET](#tab/net)
+
+```csharp
+using (var activity = activitySource.StartActivity("ExceptionExample"))
+{
+ try
+ {
+ throw new Exception("Test exception");
+ }
+ catch (Exception ex)
+ {
+ activity?.SetStatus(ActivityStatusCode.Error);
+ activity?.RecordException(ex);
+ }
+}
+```
+
+#### [Node.js](#tab/nodejs)
+
+```typescript
+import * as opentelemetry from "@opentelemetry/api";
+import { BasicTracerProvider, SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";
+import { AzureMonitorTraceExporter } from "@azure/monitor-opentelemetry-exporter";
+
+const provider = new BasicTracerProvider();
+const exporter = new AzureMonitorTraceExporter({
+ connectionString:
+ process.env["APPLICATIONINSIGHTS_CONNECTION_STRING"] || "<your connection string>",
+});
+provider.addSpanProcessor(new SimpleSpanProcessor(exporter as any));
+provider.register();
+const tracer = opentelemetry.trace.getTracer("example-basic-tracer-node");
+let span = tracer.startSpan("hello");
+try{
+ throw new Error("Test Error");
+}
+catch(error){
+ span.recordException(error);
+}
+```
+
+#### [Python](#tab/python)
+
+The OpenTelemetry Python SDK is implemented so that exceptions raised within a span are automatically captured and recorded, as shown in the following example.
+
+```python
+from opentelemetry import trace
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
+
+exporter = AzureMonitorTraceExporter(connection_string="<your-connection-string>")
+
+trace.set_tracer_provider(TracerProvider())
+tracer = trace.get_tracer("otel_azure_monitor_exception_demo")
+span_processor = BatchSpanProcessor(exporter)
+trace.get_tracer_provider().add_span_processor(span_processor)
+
+# Exception events
+try:
+ with tracer.start_as_current_span("hello") as span:
+ # This exception will be automatically recorded
+ raise Exception("Custom exception message.")
+except Exception:
+ print("Exception raised")
+
+```
+
+If you would like to record exceptions manually, you can disable that option when you create the span, as shown below.
+
+```python
+...
+with tracer.start_as_current_span("hello", record_exception=False) as span:
+ try:
+ raise Exception("Custom exception message.")
+ except Exception as ex:
+ # Manually record exception
+ span.record_exception(ex)
+...
+
+```
+++

## Enable the OTLP Exporter

You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside your Azure Monitor Exporter to send your telemetry to two locations.
You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside yo
trace.set_tracer_provider(TracerProvider()) tracer = trace.get_tracer(__name__)
- exporter = AzureMonitorTraceExporter.from_connection_string(
- "<Your Connection String>"
- )
+ exporter = AzureMonitorTraceExporter(connection_string="<your-connection-string>")
otlp_exporter = OTLPSpanExporter(endpoint="http://localhost:4317") span_processor = BatchSpanProcessor(otlp_exporter) trace.get_tracer_provider().add_span_processor(span_processor)
You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside yo
+## Configuration
+
+### Offline Storage and Automatic Retries
+
+To improve reliability and resiliency, Azure Monitor OpenTelemetry-based offerings write to offline/local storage by default when an application loses its connection with Application Insights. It saves the application telemetry for 48 hours and periodically tries to send it again. In addition to exceeding the allowable time, telemetry will occasionally be dropped in high-load applications when the maximum file size is exceeded or the SDK doesn't have an opportunity to clear out the file. If we need to choose, the product will save more recent events over old ones. In some cases, you may wish to disable this feature to optimize application performance. [Learn More](data-retention-privacy.md#does-the-sdk-create-temporary-local-storage)
+
+#### [.NET](#tab/net)
+
+By default, the AzureMonitorExporter uses one of the following locations for offline storage (listed in order of precedence):
+
+- Windows
+ - %LOCALAPPDATA%\Microsoft\AzureMonitor
+ - %TEMP%\Microsoft\AzureMonitor
+- Non-Windows
+ - %TMPDIR%/Microsoft/AzureMonitor
+ - /var/tmp/Microsoft/AzureMonitor
+ - /tmp/Microsoft/AzureMonitor
+
+To override the default directory, you should set `AzureMonitorExporterOptions.StorageDirectory`.
+
+For example:
+```csharp
+var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddAzureMonitorTraceExporter(o => {
+ o.ConnectionString = "<Your Connection String>";
+ o.StorageDirectory = "C:\\SomeDirectory";
+ })
+ .Build();
+```
+
+To disable this feature, you should set `AzureMonitorExporterOptions.DisableOfflineStorage = true`.
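+
+For example, here's a minimal sketch that reuses the exporter registration from the preceding example with offline storage disabled:
+
+```csharp
+var tracerProvider = Sdk.CreateTracerProviderBuilder()
+    .AddAzureMonitorTraceExporter(o => {
+        o.ConnectionString = "<Your Connection String>";
+        // Telemetry is no longer persisted locally for retry when the connection
+        // to Application Insights is unavailable.
+        o.DisableOfflineStorage = true;
+    })
+    .Build();
+```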
+
+#### [Node.js](#tab/nodejs)
+
+By default, the AzureMonitorExporter uses one of the following locations for offline storage.
+
+- Windows
+ - %TEMP%\Microsoft\AzureMonitor
+- Non-Windows
+ - %TMPDIR%/Microsoft/AzureMonitor
+ - /var/tmp/Microsoft/AzureMonitor
+
+To override the default directory, you should set `storageDirectory`.
+
+For example:
+```typescript
+const exporter = new AzureMonitorTraceExporter({
+ connectionString:
+ process.env["APPLICATIONINSIGHTS_CONNECTION_STRING"] || "<your connection string>",
+ storageDirectory: "C:\\SomeDirectory",
+ disableOfflineStorage: false
+});
+```
+
+To disable this feature, you should set `disableOfflineStorage = true`.
+
+#### [Python](#tab/python)
+
+By default, the Azure Monitor exporters will use the following path:
+
+`<tempfile.gettempdir()>/Microsoft/AzureMonitor/opentelemetry-python-<your-instrumentation-key>`
+
+To override the default directory, you should set `storage_directory` to the directory you want.
+
+For example:
+```python
+...
+exporter = AzureMonitorTraceExporter(connection_string="your-connection-string", storage_directory="C:\\SomeDirectory")
+...
+
+```
+
+To disable this feature, you should set `disable_offline_storage` to `True`. Defaults to `False`.
+
+For example:
+```python
+...
+exporter = AzureMonitorTraceExporter(connection_string="your-connection-string", disable_offline_storage=True)
+...
+
+```
+```
+++

## Troubleshooting

This section provides help with troubleshooting.
provider.register();
#### [Python](#tab/python)
-The Azure Monitor Exporter uses the Python standard logging [library](https://docs.python.org/3/library/logging.html) for its own internal logging. OpenTelemetry API and Azure Monitor Exporter logs are usually logged at the severity level of WARNING or ERROR for irregular activity. The INFO severity level is used for regular or successful activity. By default, the Python logging library sets the severity level to WARNING, so you must change the severity level to see logs under this severity setting. The following example shows how to output logs of *all* severity levels to the console *and* a file:
+The Azure Monitor Exporter uses the Python standard logging [library](https://docs.python.org/3/library/logging.html) for its own internal logging. OpenTelemetry API and Azure Monitor Exporter logs are logged at the severity level of WARNING or ERROR for irregular activity. The INFO severity level is used for regular or successful activity. By default, the Python logging library sets the severity level to WARNING, so you must change the severity level to see logs under this severity setting. The following example shows how to output logs of *all* severity levels to the console *and* a file:
```python ...
To provide feedback:
### [Node.js](#tab/nodejs) - To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter).-- To install the NPM package, check for updates, or view release notes, see the [Azure Monitor Exporter NPM Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter) page.
+- To install the npm package, check for updates, or view release notes, see the [Azure Monitor Exporter npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry-exporter) page.
- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter/samples). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js). - To enable usage experiences, [enable web or browser user monitoring](javascript.md).
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
N/A
# [Azure CLI](#tab/azure-cli) ```azurecli
+az account set --subscription "storage-account-subscription-id"
+
$storageAccountId = '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage name>'

az account set --subscription "workspace-subscription-id"
az monitor log-analytics workspace linked-storage create --type Query --resour
# [PowerShell](#tab/powershell) ```powershell
-$storageAccount.Id = Get-AzStorageAccount -ResourceGroupName "resource-group-name" -Name "storage-account-name"
+Select-AzSubscription "StorageAccount-subscription-id"
+
+$storageAccountId = (Get-AzStorageAccount -ResourceGroupName "resource-group-name" -Name "storage-account-name").id
Select-AzSubscription "workspace-subscription-id"
-New-AzOperationalInsightsLinkedStorageAccount -ResourceGroupName "resource-group-name" -WorkspaceName "workspace-name" -DataSourceType Query -StorageAccountIds $storageAccount.Id
+New-AzOperationalInsightsLinkedStorageAccount -ResourceGroupName "resource-group-name" -WorkspaceName "workspace-name" -DataSourceType Query -StorageAccountIds $storageAccountId
``` # [REST](#tab/rest)
N/A
# [Azure CLI](#tab/azure-cli) ```azurecli
+az account set --subscription "storage-account-subscription-id"
+
$storageAccountId = '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage name>'

az account set --subscription "workspace-subscription-id"
az monitor log-analytics workspace linked-storage create --type Alerts --resou
# [PowerShell](#tab/powershell) ```powershell
-$storageAccount.Id = Get-AzStorageAccount -ResourceGroupName "resource-group-name" -Name "storage-account-name"
+Select-AzSubscription "StorageAccount-subscription-id"
+
+$storageAccountId = (Get-AzStorageAccount -ResourceGroupName "resource-group-name" -Name "storage-account-name").id
Select-AzSubscription "workspace-subscription-id"
-New-AzOperationalInsightsLinkedStorageAccount -ResourceGroupName "resource-group-name" -WorkspaceName "workspace-name" -DataSourceType Alerts -StorageAccountIds $storageAccount.Id
+New-AzOperationalInsightsLinkedStorageAccount -ResourceGroupName "resource-group-name" -WorkspaceName "workspace-name" -DataSourceType Alerts -StorageAccountIds $storageAccountId
``` # [REST](#tab/rest)
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
Log Analytics workspace data export continuously exports data that is sent to yo
- You can define up to 10 enabled rules in your workspace. More rules are allowed when disabled. - Destinations must be in the same region as the Log Analytics workspace. - Storage Account must be unique across rules in workspace.-- Tables names can be no longer than 60 characters when exporting to Storage Account and 47 characters to Event Hubs. Tables with longer names will not be exported.
+- Table names can be up to 60 characters long when exporting to a Storage Account, and up to 47 characters to Event Hubs. Tables with longer names won't be exported.
- Data export isn't supported in China currently. ## Data completeness
If you have configured your Storage Account to allow access from selected networ
Data export rule defines the destination and tables for which data is exported. You can create 10 rules in 'enable' state in your workspace, more rules are allowed in 'disable' state. Storage Account must be unique across rules in workspace. Multiple rules can use the same Event Hubs namespace when sending to separate Event Hubs. > [!NOTE]
-> - You can include tables that aren't yet supported in export, and no data will be exported for these until the tables are supported.
-> - The legacy custom log won't be supported in export. The next generation of custom log available in preview early 2022 can be exported.
+> - You can include tables that aren't yet supported in rules, but no data will be exported for them until the tables are supported.
> - Export to Storage Account - a separate container is created in Storage Account for each table. > - Export to Event Hubs - if Event Hubs name isn't provided, a separate Event Hubs is created for each table. The [number of supported Event Hubs in 'Basic' and 'Standard' namespaces tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When exporting more than 10 tables to these tiers, either split the tables between several export rules to different Event Hubs namespaces, or provide an Event Hubs name in the rule to export all tables to it.
azure-netapp-files Performance Linux Concurrency Session Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-concurrency-session-slots.md
In NFSv4.1, sessions define the relationship between the client and the server.
Although Linux clients default to 64 maximum requests per session, the value of `max_session_slots` is tunable. A reboot is required for changes to take effect. Use the `systool -v -m nfs` command to see the current maximum in use by the client. For the command to work, at least one NFSv4.1 mount must be in place:
-```
+```shell
$ systool -v -m nfs { Module = "nfs"
-…
+...
Parameters:
-…
- max_session_slots = "64"
-…
+...
+ max_session_slots = "64"
+...
} ```
azure-netapp-files Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md
na Previously updated : 10/20/2022 Last updated : 10/21/2022 # Use availability zones for high availability in Azure NetApp Files
-Azure [availability zones](../availability-zones/az-overview.md#availability-zones) are physically separate locations within each supporting Azure region that are tolerant to local failures. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved because of redundancy and logical isolation of Azure services. To ensure resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions.
+Azure [availability zones](../availability-zones/az-overview.md#availability-zones) are physically separate locations within each supporting Azure region that are tolerant to local failures. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved because of redundancy and logical isolation of Azure services. To ensure resiliency, a minimum of three separate availability zones are present in all [availability zone-enabled regions](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
>[!IMPORTANT] > Availability zones are referred to as _logical zones_. Each data center is assigned to a physical zone. Physical zones are mapped to logical zones in your Azure subscription, and the mapping will be different with different subscriptions. Azure subscriptions are automatically assigned this mapping when a subscription is created. Azure NetApp Files aligns with the generic logical-to-physical availability zone mapping for all Azure services for the subscription. Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Azure availability zones let you design and operate applications and databases that automatically transition between zones without interruption. You can design resilient solutions by using Azure services that use availability zones.
-The use of high availability (HA) architectures with availability zones are now a default and best practice recommendation in [Azure's Well-Architected Framework](/architecture/framework/resiliency/app-design#use-availability-zones-within-a-region). Enterprise applications and resources are increasingly deployed into multiple availability zones to achieve this level of high availability (HA) or failure domain (zone) isolation.
+The use of high availability (HA) architectures with availability zones is now a default and best practice recommendation in [Azure's Well-Architected Framework](/architecture/framework/resiliency/app-design#use-availability-zones-within-a-region). Enterprise applications and resources are increasingly deployed into multiple availability zones to achieve this level of high availability (HA) or failure domain (zone) isolation.
-Azure NetApp Files lets you deploy volumes in availability zones. The Azure NetApp Files [availability zone volume placement](manage-availability-zone-volume-placement.md) feature lets you deploy volumes in the logical availability zone of your choice, in alignment with Azure compute and other services in the same zone.
-Azure NetApp Files deployments will occur in the availability of zone of choice if the Azure NetApp Files is present in that availability zone and if it has sufficient capacity. All VMs within the region in (peered) VNets can access all Azure NetApp Files resources.
+Azure NetApp Files' [availability zone volume placement](manage-availability-zone-volume-placement.md) feature lets you deploy volumes in availability zones of your choice, in alignment with Azure compute and other services in the same zone.
+
+All Virtual Machines within the region in (peered) VNets can access all Azure NetApp Files resources (blue arrows). Virtual Machines accessing Azure NetApp Files volumes in the same zone (green arrows) share the availability zone failure domain.
+
+Azure NetApp Files deployments will occur in the availability zone of choice if Azure NetApp Files is present in that availability zone and has sufficient capacity.
>[!IMPORTANT] >Azure NetApp Files availability zone volume placement provides zonal placement. It doesn't provide proximity placement towards compute. As such, it doesn't guarantee the lowest latency. VM-to-storage latencies are within the availability zone latency envelopes.
azure-relay Relay Hybrid Connections Dotnet Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-dotnet-api-overview.md
The following code reads individual lines of text from the stream until a cancel
```csharp // Create a CancellationToken, so that we can cancel the while loop var cancellationToken = new CancellationToken();
-// Create a StreamReader from the 'hybridConnectionStream`
+// Create a StreamReader from the hybridConnectionStream
var streamReader = new StreamReader(hybridConnectionStream); while (!cancellationToken.IsCancellationRequested)
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) | | Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) | | Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-10-01/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-10-01/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-10-01/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) | | Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) | | Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) | | Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) | | Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-10-01/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-10-01/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-10-01/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) | | Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) | | Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-security-integration.md
Title: Integrate Microsoft Defender for Cloud with Azure VMware Solution
description: Learn how to protect your Azure VMware Solution VMs with Azure's native security tools from the workload protection dashboard. Previously updated : 06/14/2021 Last updated : 10/18/2022 # Integrate Microsoft Defender for Cloud with Azure VMware Solution
-Microsoft Defender for Cloud provides advanced threat protection across your Azure VMware Solution and on-premises virtual machines (VMs). It assesses the vulnerability of Azure VMware Solution VMs and raises alerts as needed. These security alerts can be forwarded to Azure Monitor for resolution. You can define security policies in Microsoft Defender for Cloud. For more information, see [Working with security policies](../security-center/tutorial-security-policy.md).
+Microsoft Defender for Cloud provides advanced threat protection across your Azure VMware Solution and on-premises virtual machines (VMs). It assesses the vulnerability of Azure VMware Solution VMs and raises alerts as needed. These security alerts can be forwarded to Azure Monitor for resolution. You can define security policies in Microsoft Defender for Cloud. For more information, see [Working with security policies](../security-center/tutorial-security-policy.md).
Microsoft Defender for Cloud offers many features, including:+ - File integrity monitoring - Fileless attack detection-- Operating system patch assessment
+- Operating system patch assessment
- Security misconfigurations assessment - Endpoint protection assessment The diagram shows the integrated monitoring architecture of integrated security for Azure VMware Solution VMs.
-
+ :::image type="content" source="media/azure-security-integration/azure-integrated-security-architecture.png" alt-text="Diagram showing the architecture of Azure Integrated Security." border="false":::
-**Log Analytics agent** collects log data from Azure, Azure VMware Solution, and on-premises VMs. The log data is sent to Azure Monitor Logs and stored in a **Log Analytics Workspace**. Each workspace has its own data repository and configuration to store data. Once the logs are collected, **Microsoft Defender for Cloud** assesses the vulnerability status of Azure VMware Solution VMs and raises an alert for any critical vulnerability. Once assessed, Microsoft Defender for Cloud forwards the vulnerability status to Microsoft Sentinel to create an incident and map with other threats. Microsoft Defender for Cloud is connected to Microsoft Sentinel using Microsoft Defender for Cloud Connector.
+**Log Analytics agent** collects log data from Azure, Azure VMware Solution, and on-premises VMs. The log data is sent to Azure Monitor Logs and stored in a **Log Analytics Workspace**. Each workspace has its own data repository and configuration to store data. Once the logs are collected, **Microsoft Defender for Cloud** assesses the vulnerability status of Azure VMware Solution VMs and raises an alert for any critical vulnerability. Once assessed, Microsoft Defender for Cloud forwards the vulnerability status to Microsoft Sentinel to create an incident and map with other threats. Microsoft Defender for Cloud is connected to Microsoft Sentinel using Microsoft Defender for Cloud Connector.
## Prerequisites
The diagram shows the integrated monitoring architecture of integrated security
- [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) to collect data from various sources. -- [Enable Microsoft Defender for Cloud in your subscription](../security-center/security-center-get-started.md).
+- [Enable Microsoft Defender for Cloud in your subscription](../security-center/security-center-get-started.md).
>[!NOTE] >Microsoft Defender for Cloud is a pre-configured tool that doesn't require deployment, but you'll need to enable it. -- [Enable Microsoft Defender for Cloud](../security-center/enable-azure-defender.md). -
+- [Enable Microsoft Defender for Cloud](../security-center/enable-azure-defender.md).
## Add Azure VMware Solution VMs to Defender for Cloud
The diagram shows the integrated monitoring architecture of integrated security
2. Under Resources, select **Servers** and then **+Add**.
- :::image type="content" source="media/azure-security-integration/add-server-to-azure-arc.png" alt-text="Screenshot showing Azure Arc Servers page for adding an Azure VMware Solution VM to Azure.":::
+ :::image type="content" source="media/azure-security-integration/add-server-to-azure-arc.png" alt-text="Screenshot showing Azure Arc Servers page for adding an Azure VMware Solution VM to Azure." lightbox="media/azure-security-integration/add-server-to-azure-arc.png":::
3. Select **Generate script**.
-
- :::image type="content" source="media/azure-security-integration/add-server-using-script.png" alt-text="Screenshot of Azure Arc page showing option for adding a server using interactive script.":::
-
+
+ :::image type="content" source="media/azure-security-integration/add-server-using-script.png" alt-text="Screenshot of Azure Arc page showing option for adding a server using interactive script." lightbox="media/azure-security-integration/add-server-using-script.png":::
+ 4. On the **Prerequisites** tab, select **Next**.
-5. On the **Resource details** tab, fill in the following details and then select **Next: Tags**.
+5. On the **Resource details** tab, fill in the following details and then select **Next: Tags**:
- Subscription- - Resource group-
- - Region
-
+ - Region
- Operating system- - Proxy Server details
-
+ 6. On the **Tags** tab, select **Next**. 7. On the **Download and run script** tab, select **Download**.
The diagram shows the integrated monitoring architecture of integrated security
## View recommendations and passed assessments
-Recommendations and assessments provide you with the security health details of your resource.
+Recommendations and assessments provide you with the security health details of your resource.
1. In Microsoft Defender for Cloud, select **Inventory** from the left pane. 2. For Resource type, select **Servers - Azure Arc**.
-
- :::image type="content" source="media/azure-security-integration/select-resource-in-security-center.png" alt-text="Screenshot showing the Microsoft Defender for Cloud Inventory page with the Servers - Azure Arc selected under Resource type.":::
+
+ :::image type="content" source="media/azure-security-integration/select-resource-in-security-center.png" alt-text="Screenshot showing the Microsoft Defender for Cloud Inventory page with the Servers - Azure Arc selected under Resource type." lightbox="media/azure-security-integration/select-resource-in-security-center.png":::
3. Select the name of your resource. A page opens showing the security health details of your resource. 4. Under **Recommendation list**, select the **Recommendations**, **Passed assessments**, and **Unavailable assessments** tabs to view these details.
- :::image type="content" source="media/azure-security-integration/view-recommendations-assessments.png" alt-text="Screenshot showing the Microsoft Defender for Cloud security recommendations and assessments.":::
+ :::image type="content" source="media/azure-security-integration/view-recommendations-assessments.png" alt-text="Screenshot showing the Microsoft Defender for Cloud security recommendations and assessments." lightbox="media/azure-security-integration/view-recommendations-assessments.png":::
## Deploy a Microsoft Sentinel workspace
-Microsoft Sentinel provides security analytics, alert detection, and automated threat response across an environment. It's a cloud-native, security information event management (SIEM) solution that's built on top of a Log Analytics Workspace.
+Microsoft Sentinel provides security analytics, alert detection, and automated threat response across an environment. It's a cloud-native, security information event management (SIEM) solution that's built on top of a Log Analytics workspace.
Since Microsoft Sentinel is built on top of a Log Analytics workspace, you'll only need to select the workspace you want to use.
Since Microsoft Sentinel is built on top of a Log Analytics workspace, you'll on
2. Under Configuration, select **Data connectors**.
-3. Under the Connector Name column, select **Security Events** from the list, and then select **Open connector page**.
-
-4. On the connector page, select the events you wish to stream and then select **Apply Changes**.
-
- :::image type="content" source="media/azure-security-integration/select-events-you-want-to-stream.png" alt-text="Screenshot of Security Events page in Microsoft Sentinel where you can select which events to stream.":::
-
+3. Under the Connector Name column, select **Security Events** from the list, then select **Open connector page**.
+4. On the connector page, select the events you wish to stream, then select **Apply Changes**.
+ :::image type="content" source="media/azure-security-integration/select-events-you-want-to-stream.png" alt-text="Screenshot of Security Events page in Microsoft Sentinel where you can select which events to stream."lightbox="media/azure-security-integration/select-events-you-want-to-stream.png":::
## Connect Microsoft Sentinel with Microsoft Defender for Cloud
Since Microsoft Sentinel is built on top of a Log Analytics workspace, you'll on
2. Under Configuration, select **Data connectors**.
-3. Select **Microsoft Defender for Cloud** from the list and then select **Open connector page**.
+3. Select **Microsoft Defender for Cloud** from the list, then select **Open connector page**.
- :::image type="content" source="media/azure-security-integration/connect-security-center-with-azure-sentinel.png" alt-text="Screenshot of Data connectors page in Microsoft Sentinel showing selection to connect Microsoft Defender for Cloud with Microsoft Sentinel.":::
+ :::image type="content" source="media/azure-security-integration/connect-security-center-with-azure-sentinel.png" alt-text="Screenshot of Data connectors page in Microsoft Sentinel showing selection to connect Microsoft Defender for Cloud with Microsoft Sentinel."lightbox="media/azure-security-integration/connect-security-center-with-azure-sentinel.png":::
4. Select **Connect** to connect the Microsoft Defender for Cloud with Microsoft Sentinel.
Since Microsoft Sentinel is built on top of a Log Analytics workspace, you'll on
## Create rules to identify security threats
-After connecting data sources to Microsoft Sentinel, you can create rules to generate alerts for detected threats. In the following example, we'll create a rule for attempts to sign in to Windows server with the wrong password.
+After connecting data sources to Microsoft Sentinel, you can create rules to generate alerts for detected threats. In the following example, we'll create a rule for attempts to sign in to Windows Server with the wrong password.
1. On the Microsoft Sentinel overview page, under Configurations, select **Analytics**.
After connecting data sources to Microsoft Sentinel, you can create rules to gen
4. On the **General** tab, enter the required information and then select **Next: Set rule logic**.
   - Name
   - Description
   - Tactics
   - Severity
   - Status
-5. On the **Set rule logic** tab, enter the required information, and then select **Next**.
+5. On the **Set rule logic** tab, enter the required information, then select **Next**.
- Rule query (here showing our example query)
After connecting data sources to Microsoft Sentinel, you can create rules to gen
|summarize count () by IpAddress,Computer |where count_ > 3 ```
-
- - Map entities
+ - Map entities
   - Query scheduling
   - Alert threshold
   - Event grouping
   - Suppression

6. On the **Incident settings** tab, enable **Create incidents from alerts triggered by this analytics rule** and select **Next: Automated response**.

   :::image type="content" source="../sentinel/media/tutorial-detect-threats-custom/general-tab.png" alt-text="Screenshot showing the Analytic rule wizard for creating a new rule in Microsoft Sentinel.":::
After connecting data sources to Microsoft Sentinel, you can create rules to gen
8. On the **Review and create** tab, review the information, and select **Create**. >[!TIP]
->After the third failed attempt to sign in to Windows server, the created rule triggers an incident for every unsuccessful attempt.
+>After the third failed attempt to sign in to Windows Server, the created rule triggers an incident for every unsuccessful attempt.
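Before enabling the rule, you can test the rule query from step 5 directly against the workspace. The sketch below assumes the Az.OperationalInsights module, that the Security Events connector streams into the `SecurityEvent` table, and that the full query filters on event ID 4625 (failed sign-in); only the `summarize`/`where` portion of the query appears above, so treat the first two query lines as an assumption.

```powershell
$query = @"
SecurityEvent
| where EventID == 4625
| summarize count() by IpAddress, Computer
| where count_ > 3
"@
# Replace the placeholder with the workspace (customer) ID of your Log Analytics workspace.
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
```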
## View alerts
You can view generated incidents with Microsoft Sentinel. You can also assign in
3. Select an incident and then assign it to a team for resolution.
- :::image type="content" source="media/azure-security-integration/assign-incident.png" alt-text="Screenshot of Microsoft Sentinel Incidents page with incident selected and option to assign the incident for resolution.":::
+ :::image type="content" source="media/azure-security-integration/assign-incident.png" alt-text="Screenshot of Microsoft Sentinel Incidents page with incident selected and option to assign the incident for resolution."lightbox="media/azure-security-integration/assign-incident.png":::
>[!TIP] >After resolving the issue, you can close it.
You can create queries or use the available pre-defined query in Microsoft Senti
1. On the Microsoft Sentinel overview page, under Threat management, select **Hunting**. A list of pre-defined queries is displayed. >[!TIP]
- >You can also create a new query by selecting **New Query**.
+ >You can also create a new query by selecting **New Query**.
> >:::image type="content" source="../sentinel/media/hunting/save-query.png" alt-text="Screenshot of Microsoft Sentinel Hunting page with + New Query highlighted.":::
You can create queries or use the available pre-defined query in Microsoft Senti
4. Select **View Results** to check the results. -- ## Next steps Now that you've covered how to protect your Azure VMware Solution VMs, you may want to learn about:
azure-vmware Concepts Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-hub-and-spoke.md
Title: Concept - Integrate an Azure VMware Solution deployment in a hub and spok
description: Learn about integrating an Azure VMware Solution deployment in a hub and spoke architecture on Azure. Previously updated : 10/26/2020 Last updated : 10/20/2022 # Integrate Azure VMware Solution in a hub and spoke architecture
-This article provides recommendations for integrating an Azure VMware Solution deployment in an existing or a new [Hub and Spoke architecture](/azure/architecture/reference-architectures/hybrid-networking/#hub-spoke-network-topology) on Azure.
+This article provides recommendations for integrating an Azure VMware Solution deployment in an existing or a new [Hub and Spoke architecture](/azure/architecture/reference-architectures/hybrid-networking/#hub-spoke-network-topology) on Azure.
The Hub and Spoke scenario assumes a hybrid cloud environment with workloads on:
* Native Azure using IaaS or PaaS services
-* Azure VMware Solution
+* Azure VMware Solution
* vSphere on-premises ## Architecture
The architecture has the following main components:
- **ExpressRoute gateway:** Enables the communication between Azure VMware Solution private cloud, shared services on Hub virtual network, and workloads running on Spoke virtual networks. -- **ExpressRoute Global Reach:** Enables the connectivity between on-premises and Azure VMware Solution private cloud. The connectivity between Azure VMware Solution and the Azure fabric is through ExpressRoute Global Reach only.
+- **ExpressRoute Global Reach:** Enables the connectivity between on-premises and Azure VMware Solution private cloud. The connectivity between Azure VMware Solution and the Azure fabric is through ExpressRoute Global Reach only.
- **S2S VPN considerations:** Connectivity to Azure VMware Solution private cloud using Azure S2S VPN is supported as long as it meets the [minimum network requirements](https://docs.vmware.com/en/VMware-HCX/4.4/hcx-user-guide/GUID-8128EB85-4E3F-4E0C-A32C-4F9B15DACC6D.html) for VMware HCX.
The architecture has the following main components:
- **Spoke virtual network**
- - **IaaS Spoke:** Hosts Azure IaaS based workloads, including VM availability sets and virtual machine scale sets, and the corresponding network components.
+ - **IaaS Spoke:** Hosts Azure IaaS based workloads, including VM availability sets and Virtual Machine Scale Sets, and the corresponding network components.
- **PaaS Spoke:** Hosts Azure PaaS services using private addressing thanks to [Private Endpoint](../private-link/private-endpoint-overview.md) and [Private Link](../private-link/private-link-overview.md).
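Spokes typically reach the shared services and the ExpressRoute gateway in the hub through virtual network peering with gateway transit; the article doesn't prescribe specific resource names, so everything below is a hedged sketch with placeholders, assuming the Az.Network module.

```powershell
$hub   = Get-AzVirtualNetwork -ResourceGroupName "rg-hub"   -Name "vnet-hub"
$spoke = Get-AzVirtualNetwork -ResourceGroupName "rg-spoke" -Name "vnet-iaas-spoke"

# Hub side allows the spoke to use the hub's ExpressRoute gateway.
Add-AzVirtualNetworkPeering -Name "hub-to-spoke" -VirtualNetwork $hub `
    -RemoteVirtualNetworkId $spoke.Id -AllowGatewayTransit

# Spoke side uses the remote (hub) gateway to reach Azure VMware Solution and on-premises.
Add-AzVirtualNetworkPeering -Name "spoke-to-hub" -VirtualNetwork $spoke `
    -RemoteVirtualNetworkId $hub.Id -UseRemoteGateways -AllowForwardedTraffic
```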
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
Title: Configure external identity source for vCenter Server
description: Learn how to configure Active Directory over LDAP or LDAPS for vCenter Server as an external identity source. Previously updated : 04/22/2022 Last updated : 10/21/2022 # Configure external identity source for vCenter Server -- [!INCLUDE [vcenter-access-identity-description](includes/vcenter-access-identity-description.md)] >[!NOTE]
Last updated 04/22/2022
In this article, you learn how to: > [!div class="checklist"]
+>
> * Export the certificate for LDAPS authentication > * Upload the LDAPS certificate to blob storage and generate a SAS URL > * Configure NSX-T DNS for resolution to your Active Directory Domain
In this article, you learn how to:
> * Remove AD group from the cloudadmin role > * Remove existing external identity sources -- ## Prerequisites - Connectivity from your Active Directory network to your Azure VMware Solution private cloud must be operational. - For AD authentication with LDAPS:
- - You will need access to the Active Directory Domain Controller(s) with Administrator permissions.
- - Your Active Directory Domain Controller(s) must have LDAPS enabled with a valid certificate. The certificate could be issued by an [Active Directory Certificate Services Certificate Authority (CA)](https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx) or a [Third-party/Public CA](/troubleshoot/windows-server/identity/enable-ldap-over-ssl-3rd-certification-authority).
+ - You'll need access to the Active Directory Domain Controller(s) with Administrator permissions.
+ - Your Active Directory Domain Controller(s) must have LDAPS enabled with a valid certificate. The certificate could be issued by an [Active Directory Certificate Services Certificate Authority (CA)](https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx) or a [Third-party/Public CA](/troubleshoot/windows-server/identity/enable-ldap-over-ssl-3rd-certification-authority).
+ - You need to have a valid certificate. To create a certificate, follow the steps shown in [create a certificate for secure LDAP](https://learn.microsoft.com/azure/active-directory-domain-services/tutorial-configure-ldaps#create-a-certificate-for-secure-ldap). Make sure the certificate meets the requirements listed after those certificate creation steps.
>[!NOTE] >Self-signed certificates are not recommended for production environments. - [Export the certificate for LDAPS authentication](#export-the-certificate-for-ldaps-authentication) and upload it to an Azure Storage account as blob storage. Then, you'll need to [grant access to Azure Storage resources using shared access signature (SAS)](../storage/common/storage-sas-overview.md).
In this article, you learn how to:
- Ensure Azure VMware Solution has DNS resolution configured to your on-premises AD. Enable DNS Forwarder from Azure portal. See [Configure DNS forwarder for Azure VMware Solution](configure-dns-azure-vmware-solution.md) for further information. >[!NOTE]
->For further information about LDAPS and certificate issuance, consult with your security or identity management team.
+>For more information about LDAPS and certificate issuance, consult your security or identity management team.
## Export the certificate for LDAPS authentication
-First, verify that the certificate used for LDAPS is valid.
+First, verify that the certificate used for LDAPS is valid. If you don't already have a certificate, follow the steps to [create a certificate for secure LDAP](https://learn.microsoft.com/azure/active-directory-domain-services/tutorial-configure-ldaps#create-a-certificate-for-secure-ldap) before you continue.
1. Sign in to a domain controller with administrator permissions where LDAPS is enabled.
First, verify that the certificate used for LDAPS is valid.
1. Expand the **Personal** folder under the **Certificates (Local Computer)** management console and select the **Certificates** folder to list the installed certificates. :::image type="content" source="media/run-command/ldaps-certificate-personal-certficates.png" alt-text="Screenshot showing displaying the list of certificates." lightbox="media/run-command/ldaps-certificate-personal-certficates.png":::
-
+ 1. Double-click the certificate used for LDAPS. The **Certificate** General properties are displayed. Ensure the **Valid from** and **to** dates are current and that the certificate has a **private key** that corresponds to the certificate.
- :::image type="content" source="media/run-command/ldaps-certificate-personal-general.png" alt-text="Screenshot showing the properties of the certificate." lightbox="media/run-command/ldaps-certificate-personal-general.png":::
-
+ :::image type="content" source="media/run-command/ldaps-certificate-personal-general.png" alt-text="Screenshot showing the properties of the certificate." lightbox="media/run-command/ldaps-certificate-personal-general.png":::
+ 1. On the same window, select the **Certification Path** tab and verify that the **Certification path** is valid; it should include the certificate chain of the root CA (and optionally intermediate certificates), and the **Certificate Status** should be OK. :::image type="content" source="media/run-command/ldaps-certificate-cert-path.png" alt-text="Screenshot showing the certificate chain." lightbox="media/run-command/ldaps-certificate-cert-path.png":::
-
+ 1. Close the window. Now proceed to export the certificate 1. Still on the Certificates console, right select the LDAPS certificate and select **All Tasks** > **Export**. The Certificate Export Wizard prompt is displayed, select the **Next** button.
-1. In the **Export Private Key** section, select the 2nd option, **No, do not export the private key** and select the **Next** button.
-1. In the **Export File Format** section, select the 2nd option, **Base-64 encoded X.509(.CER)** and then select the **Next** button.
+1. In the **Export Private Key** section, select the second option, **No, do not export the private key** and select the **Next** button.
+1. In the **Export File Format** section, select the second option, **Base-64 encoded X.509(.CER)** and then select the **Next** button.
1. In the **File to Export** section, select the **Browse...** button and select a folder location where to export the certificate, enter a name then select the **Save** button. >[!NOTE]
Now proceed to export the certificate
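You can also script the export instead of using the Certificate Export Wizard. A minimal sketch, run on the domain controller, where `<Thumbprint>` is a placeholder for the LDAPS certificate's thumbprint shown in the Certificates console; `Export-Certificate` writes a DER-encoded file, and `certutil -encode` converts it to the Base-64 .CER format described above.

```powershell
$cert = Get-Item "Cert:\LocalMachine\My\<Thumbprint>"
Export-Certificate -Cert $cert -FilePath "C:\Temp\ldaps-der.cer" -Type CERT
certutil -encode "C:\Temp\ldaps-der.cer" "C:\Temp\ldaps.cer"
```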
## Upload the LDAPS certificate to blob storage and generate a SAS URL -- Upload the certificate file (.cer format) you just exported to an Azure Storage account as blob storage. Then [grant access to Azure Storage resources using shared access signature (SAS)](../storage/common/storage-sas-overview.md).
+- Upload the certificate file (.cer format) you just exported to an Azure Storage account as blob storage. Then [grant access to Azure Storage resources using shared access signature (SAS)](../storage/common/storage-sas-overview.md).
- If multiple certificates are required, upload each certificate individually and for each certificate, generate a SAS URL. > [!IMPORTANT]
-> Make sure to copy each SAS URL string(s), because they will no longer be available once you leave the page.
+> Make sure to copy each SAS URL string(s), because they will no longer be available once you leave the page.
> [!TIP] > Another alternative method for consolidating certificates is saving the certificate chains in a single file as mentioned in [this VMware KB article](https://kb.vmware.com/s/article/2041378), and generate a single SAS URL for the file that contains all the certificates. ## Configure NSX-T DNS for resolution to your Active Directory Domain
-A DNS Zone needs to be created and added to the DNS Service, follow the instructions in [Configure a DNS forwarder in the Azure portal](./configure-dns-azure-vmware-solution.md) to complete these two steps.
+A DNS Zone needs to be created and added to the DNS Service, follow the instructions in [Configure a DNS forwarder in the Azure portal](./configure-dns-azure-vmware-solution.md) to complete these two steps.
After completion, verify that your DNS Service has your DNS zone included. :::image type="content" source="media/run-command/ldaps-dns-zone-service-configured.png" alt-text="Screenshot showing the DNS Service that includes the required DNS zone." lightbox="media/run-command/ldaps-dns-zone-service-configured.png"::: Your Azure VMware Solution Private cloud should now be able to resolve your on-premises Active Directory domain name properly. - ## Add Active Directory over LDAP with SSL
-In your Azure VMware Solution private cloud you'll run the `New-LDAPSIdentitySource` cmdlet to add an AD over LDAP with SSL as an external identity source to use with SSO into vCenter Server.
+In your Azure VMware Solution private cloud, you'll run the `New-LDAPSIdentitySource` cmdlet to add an AD over LDAP with SSL as an external identity source to use with SSO into vCenter Server.
1. Browse to your Azure VMware Solution private cloud and then select **Run command** > **Packages** > **New-LDAPSIdentitySource**.
In your Azure VMware Solution private cloud you'll run the `New-LDAPSIdentitySou
| **SecondaryURL** | Secondary fall-back URL if there's primary failure. For example, **ldaps://yourbackupldapserver.avslab.local:636**. | | **DomainAlias** | For Active Directory identity sources, the domain's NetBIOS name. Add the NetBIOS name of the AD domain as an alias of the identity source. Typically the **avsldap\** format. | | **DomainName** | The FQDN of the domain, for example **avslab.local**. |
- | **Name** | User-friendly name of the external identity source, for example, **avslab.local**. This is how it will be displayed in vCenter. |
+ | **Name** | User-friendly name of the external identity source, for example, **avslab.local**. This name is how the identity source is displayed in vCenter Server. |
| **Retain up to** | Retention period of the cmdlet output. The default value is 60 days. | | **Specify name for execution** | Alphanumeric name, for example, **addexternalIdentity**. | | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
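For orientation only, the table values map onto the cmdlet roughly as shown below. In Azure VMware Solution you supply these through the Run command form rather than invoking the cmdlet yourself, and the exact parameter names (including `PrimaryUrl` and `SSLCertificatesSasUrl`) are assumptions mirroring the form fields; all values are placeholders for the **avslab.local** example.

```powershell
# Illustrative values only; in practice these are supplied through the Run command form.
New-LDAPSIdentitySource `
    -Name "avslab.local" `
    -DomainName "avslab.local" `
    -DomainAlias "avsldap" `
    -PrimaryUrl "ldaps://yourldapserver.avslab.local:636" `
    -SecondaryUrl "ldaps://yourbackupldapserver.avslab.local:636" `
    -SSLCertificatesSasUrl "<SAS-URL-generated-earlier>" `
    -Credential (Get-Credential)
```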
In your Azure VMware Solution private cloud you'll run the `New-LDAPSIdentitySou
## Add Active Directory over LDAP >[!NOTE]
->We don't recommend this method. Instead, use the [Add Active Directory over LDAP with SSL](#add-active-directory-over-ldap-with-ssl) method.
+>We recommend you use the [Add Active Directory over LDAP with SSL](#add-active-directory-over-ldap-with-ssl) method.
-You'll run the `New-LDAPIdentitySource` cmdlet to add AD over LDAP as an external identity source to use with SSO into vCenter Server.
+You'll run the `New-LDAPIdentitySource` cmdlet to add AD over LDAP as an external identity source to use with SSO into vCenter Server.
1. Select **Run command** > **Packages** > **New-LDAPIdentitySource**. 1. Provide the required values or change the default values, and then select **Run**.
-
+ | **Field** | **Value** | | | | | **Name** | User-friendly name of the external identity source, for example, **avslab.local**. This is how it will be displayed in vCenter. |
You'll run the `Get-ExternalIdentitySources` cmdlet to list all external identit
1. Provide the required values or change the default values, and then select **Run**. :::image type="content" source="media/run-command/run-command-get-external-identity-sources.png" alt-text="Screenshot showing how to list external identity source. ":::
-
+ | **Field** | **Value** | | | | | **Retain up to** |Retention period of the cmdlet output. The default value is 60 days. |
You'll run the `Get-ExternalIdentitySources` cmdlet to list all external identit
| **Timeout** | The period after which a cmdlet exits if taking too long to finish. | 1. Check **Notifications** or the **Run Execution Status** pane to see the progress.
-
+ :::image type="content" source="media/run-command/run-packages-execution-command-status.png" alt-text="Screenshot showing how to check the run commands notification or status." lightbox="media/run-command/run-packages-execution-command-status.png":::
+## Assign more vCenter Server Roles to Active Directory Identities
-## Assign additional vCenter Server Roles to Active Directory Identities
-After you've added an external identity over LDAP or LDAPS you can assign vCenter Server Roles to Active Directory security groups based on your organization's security controls.
+After you've added an external identity over LDAP or LDAPS, you can assign vCenter Server Roles to Active Directory security groups based on your organization's security controls.
1. After you sign in to vCenter Server with cloudadmin privileges, you can select an item from the inventory, select **ACTIONS** menu and select **Add Permission**.
-
+ :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-1.png" alt-text="Screenshot displaying hot to add permission assignment." lightbox="media/run-command/ldaps-vcenter-permission-assignment-1.png"::: 1. In the Add Permission prompt:
After you've added an external identity over LDAP or LDAPS you can assign vCente
1. *Role*. Select the desired role to assign. 1. *Propagate to children*. Optionally select the checkbox if permissions should be propagated down to children resources. :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-2.png" alt-text="Screenshot displaying assign the permission." lightbox="media/run-command/ldaps-vcenter-permission-assignment-3.png":::
-
+ 1. Switch to the **Permissions** tab and verify the permission assignment was added. :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-3.png" alt-text="Screenshot displaying the add completion of permission assignment." lightbox="media/run-command/ldaps-vcenter-permission-assignment-3.png"::: 1. Users should now be able to sign in to vCenter Server using their Active Directory credentials. ## Remove AD group from the cloudadmin role
-You'll run the `Remove-GroupFromCloudAdmins` cmdlet to remove a specified AD group from the cloudadmin role.
+You'll run the `Remove-GroupFromCloudAdmins` cmdlet to remove a specified AD group from the cloudadmin role.
1. Select **Run command** > **Packages** > **Remove-GroupFromCloudAdmins**.
You'll run the `Remove-GroupFromCloudAdmins` cmdlet to remove a specified AD gro
1. Check **Notifications** or the **Run Execution Status** pane to see the progress. - ## Remove existing external identity sources You'll run the `Remove-ExternalIdentitySources` cmdlet to remove all existing external identity sources in bulk.
You'll run the `Remove-ExternalIdentitySources` cmdlet to remove all existing ex
1. Check **Notifications** or the **Run Execution Status** pane to see the progress. - ## Next steps Now that you've learned about how to configure LDAP and LDAPS, you can learn more about: - [How to configure storage policy](configure-storage-policy.md) - Each VM deployed to a vSAN datastore is assigned at least one VM storage policy. You can assign a VM storage policy in an initial deployment of a VM or when you do other VM operations, such as cloning or migrating.- - [Azure VMware Solution identity concepts](concepts-identity.md) - Use vCenter Server to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the cloudadmin role for vCenter Server and restricted administrator rights for NSX-T Manager. - [Configure external identity source for NSX-T](configure-external-identity-source-nsx-t.md) - [Azure VMware Solution identity concepts](concepts-identity.md) - [VMware product documentation](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-DB5A44F1-6E1D-4E5C-8B50-D6161FFA5BD2.html) -
azure-vmware Fix Deployment Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/fix-deployment-failures.md
Title: Support for Azure VMware Solution deployment or provisioning failure
description: Get information from your Azure VMware Solution private cloud to file a service request for an Azure VMware Solution deployment or provisioning failure. Previously updated : 10/28/2020 Last updated : 10/20/2022 # Open a support request for an Azure VMware Solution deployment or provisioning failure
-This article shows you how to open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) and provide key information for an Azure VMware Solution deployment or provisioning failure.
+This article shows you how to open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) and provide key information for an Azure VMware Solution deployment or provisioning failure.
When you have a failure on your private cloud, you need to open a support request in the Azure portal. To open a support request, first get some key information in the Azure portal: - Correlation ID-- Azure ExpressRoute circuit ID - Error messages
+- Azure ExpressRoute circuit ID
## Get the correlation ID
-
+ When you create a private cloud or any resource in Azure, a correlation ID for the resource is automatically generated for the resource. Include the private cloud correlation ID in your support request to more quickly open and resolve the request. In the Azure portal, you can get the correlation ID for a resource in two ways:
-* **Overview** pane
-* Deployment logs
-
- ### Get the correlation ID from the resource overview
+- **Overview** pane
+- Deployment logs
+
+### Get the correlation ID from the resource overview
Here's an example of the operation details of a failed private cloud deployment, with the correlation ID selected: To access deployment results in a private cloud **Overview** pane:
Copy and save the private cloud deployment correlation ID to include in the serv
### Get the correlation ID from the deployment log
-You can get the correlation ID for a failed deployment by searching the deployment activity log in the Azure portal.
+You can get the correlation ID for a failed deployment by searching the deployment activity log located in the Azure portal.
To access the deployment log: 1. In the Azure portal, select your private cloud, and then select the notifications icon.
- :::image type="content" source="media/fix-deployment-failures/open-notifications.png" alt-text="Screenshot that shows the notifications icon in the Azure portal.":::
+ :::image type="content" source="media/fix-deployment-failures/open-notifications.png" alt-text="Screenshot that shows the notifications icon in the Azure portal."lightbox="media/fix-deployment-failures/open-notifications.png":::
1. In the **Notifications** pane, select **More events in the activity log**:
- :::image type="content" source="media/fix-deployment-failures/more-events-in-activity-log.png" alt-text="Screenshot that shows the More events in the activity log link selected in the Notifications pane.":::
+ :::image type="content" source="media/fix-deployment-failures/more-events-in-activity-log.png" alt-text="Screenshot that shows the More events in the activity log link selected in the Notifications pane."lightbox="media/fix-deployment-failures/more-events-in-activity-log.png":::
-1. To find the failed deployment and its correlation ID, search for the name of the resource or other information that you used to create the resource.
+1. To find the failed deployment and its correlation ID, search for the name of the resource or other information that you used to create the resource.
The following example shows search results for a private cloud resource named pc03.
-
- :::image type="content" source="media/fix-deployment-failures/find-past-deployments.png" alt-text="Screenshot that shows search results for an example private cloud resource and the Create or update a PrivateCloud pane.":::
-
+
+ :::image type="content" source="media/fix-deployment-failures/find-past-deployments.png" alt-text="Screenshot that shows search results for an example private cloud resource and the Create or update a PrivateCloud pane."lightbox="media/fix-deployment-failures/find-past-deployments.png":::
+ 1. In the search results in the **Activity log** pane, select the operation name of the failed deployment.
-1. In the **Create or update a PrivateCloud** pane, select the **JSON** tab, and then look for `correlationId` in the log that is shown. Copy the `correlationId` value to include it in your support request.
-
+1. In the **Create or update a PrivateCloud** pane, select the **JSON** tab, and then look for `correlationId` in the log that is shown. Copy the `correlationId` value to include it in your support request.
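You can also retrieve the correlation ID from PowerShell instead of the portal. A minimal sketch, assuming the Az.Monitor module and a placeholder resource group name:

```powershell
# Lists recent activity log entries for the private cloud's resource group.
# Look for the failed "Create or update a PrivateCloud" operation and copy its CorrelationId.
Get-AzActivityLog -ResourceGroupName "rg-avs" -StartTime (Get-Date).AddDays(-1) |
    Select-Object EventTimestamp, OperationName, Status, CorrelationId
```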
+ ## Copy error messages To help resolve your deployment issue, include any error messages that are shown in the Azure portal. Select a warning message to see a summary of errors:
-
+ :::image type="content" source="media/fix-deployment-failures/summary-of-errors.png" alt-text="Screenshot that shows error details on the Summary tab of the Errors pane, with the copy icon selected."::: To copy the error message, select the copy icon. Save the copied message to include in your support request.
-
+ ## Get the ExpressRoute ID (URI)
-
+ Perhaps you're trying to scale or peer an existing private cloud with the private cloud ExpressRoute circuit, and it fails. In that scenario, you need the ExpressRoute ID to include in your support request. To copy the ExpressRoute ID: 1. In the Azure portal, select your private cloud.
-1. In the left menu, under **Manage**, select **Connectivity**.
+1. In the left menu, under **Manage**, select **Connectivity**.
1. In the right pane, select the **ExpressRoute** tab. 1. Select the copy icon for **ExpressRoute ID** and save the value to use in your support request.
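You can read the same value from PowerShell. A hedged sketch, assuming the Az.VMware module; the assumption is that the ExpressRoute ID appears among the private cloud's circuit properties, and the resource names are placeholders.

```powershell
# Shows the private cloud's circuit-related properties, which include the ExpressRoute ID.
Get-AzVMwarePrivateCloud -ResourceGroupName "rg-avs" -Name "pc03" |
    Format-List -Property *Circuit*
```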
-
-
++ ## Pre-validation failures
-If your private cloud pre-validation check failed (before deployment), a correlation ID won't have been generated. In this scenario, you can provide the following information in your support request:
+If your private cloud pre-validation check failed (before deployment), a correlation ID won't have been generated. In this scenario, you can provide the following information in your support request:
- Error and failure messages. These messages can be helpful in many failures, for example, for quota-related issues. It's important to copy these messages and include them in the support request, as described in this article. - Information you used to create the Azure VMware Solution private cloud, including:
If your private cloud pre-validation check failed (before deployment), a correla
## Create your support request
-For general information about creating a support request, see [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+For general information about creating a support request, see [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
To create a support request for an Azure VMware Solution deployment or provisioning failure: 1. In the Azure portal, select the **Help** icon, and then select **New support request**.
- :::image type="content" source="media/fix-deployment-failures/open-support-request.png" alt-text="Screenshot of the New support request pane in the Azure portal.":::
+ :::image type="content" source="media/fix-deployment-failures/open-support-request.png" alt-text="Screenshot of the New support request pane in the Azure portal."lightbox="media/fix-deployment-failures/open-support-request.png":::
1. Enter or select the required information:
To create a support request for an Azure VMware Solution deployment or provision
1. Paste your Correlation ID or ExpressRoute ID where this information is requested. If you don't see a specific text box for these values, paste them in the **Provide details about the issue** text box.
- 1. Paste any error details, including the error or failure messages you copied, in the **Provide details about the issue** text box.
+ 1. Paste any error details, including the error or failure messages you copied, in the **Provide details about the issue** text box.
1. Review your entries, and then select **Create** to create your support request.
backup Backup Azure Dpm Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-dpm-introduction.md
Title: Prepare the DPM server to back up workloads description: In this article, learn how to prepare for System Center Data Protection Manager (DPM) backups to Azure, using the Azure Backup service. Previously updated : 06/11/2020 Last updated : 10/21/2020+++ # Prepare to back up workloads to Azure with System Center DPM
Unsupported file types | <li>Servers on case-sensitive file systems<li> hard lin
Local storage | Each machine you want to back up must have local free storage that's at least 5% of the size of the data that's being backed up. For example, backing up 100 GB of data requires a minimum of 5 GB of free space in the scratch location.
Vault storage | There's no limit to the amount of data you can back up to an Azure Backup vault, but the size of a data source (for example a virtual machine or database) shouldn't exceed 54,400 GB.
Azure ExpressRoute | You can back up your data over Azure ExpressRoute with public peering (available for old circuits) and Microsoft peering. Backup over private peering isn't supported.<br/><br/> **With public peering**: Ensure access to the following domains/addresses:<br/><br/> URLs:<br> `www.msftncsi.com` <br> .Microsoft.com <br> .WindowsAzure.com <br> .microsoftonline.com <br> .windows.net <br>`www.msftconnecttest.com`<br><br>IP addresses<br> 20.190.128.0/18 <br> 40.126.0.0/18<br> <br/>**With Microsoft peering**, select the following services/regions and relevant community values:<br/><br/>- Azure Active Directory (12076:5060)<br/><br/>- Microsoft Azure Region (according to the location of your Recovery Services vault)<br/><br/>- Azure Storage (according to the location of your Recovery Services vault)<br/><br/>For more information, see [ExpressRoute routing requirements](../expressroute/expressroute-routing.md).<br/><br/>**Note**: Public peering is deprecated for new circuits.
-Azure Backup agent | If DPM is running on System Center 2012 SP1, install Rollup 2 or later for DPM SP1. This is required for agent installation.<br/><br/> This article describes how to deploy the latest version of the Azure Backup agent, also known as the Microsoft Azure Recovery Service (MARS) agent. If you have an earlier version deployed, update to the latest version to ensure that backup works as expected.
+Azure Backup agent | If DPM is running on System Center 2012 SP1, install Rollup 2 or later for DPM SP1. This is required for agent installation.<br/><br/> This article describes how to deploy the latest version of the Azure Backup agent, also known as the Microsoft Azure Recovery Service (MARS) agent. If you have an earlier version deployed, update to the latest version to ensure that backup works as expected. <br><br> [Ensure your server is running on TLS 1.2](transport-layer-security.md).
Before you start, you need an Azure account with the Azure Backup feature enabled. If you don't have an account, you can create a free trial account in just a couple of minutes. Read about [Azure Backup pricing](https://azure.microsoft.com/pricing/details/backup/).
backup Backup Azure Mabs Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mabs-troubleshoot.md
Title: Troubleshoot Azure Backup Server
description: Troubleshoot installation, registration of Azure Backup Server, and backup and restore of application workloads. Previously updated : 08/26/2022 Last updated : 10/21/2022
We recommend you perform the following validation, before you start troubleshoot
- Ensure Microsoft Azure Recovery Services is running (in Service console). If necessary, restart and retry the operation - [Ensure 5-10% free volume space is available on scratch folder location](./backup-azure-file-folder-backup-faq.yml#what-s-the-minimum-size-requirement-for-the-cache-folder-) - If registration is failing, then ensure the server on which you're trying to install Azure Backup Server isn't already registered with another vault-- If Push install fails, check if DPM agent is already present. If yes, then uninstall the agent and retry the installation
+- If Push install fails, check if DPM agent is already present. If yes, then uninstall the agent and retry the installation.
+- [Ensure your server is running on TLS 1.2](transport-layer-security.md).
- [Ensure no other process or antivirus software is interfering with Azure Backup](./backup-azure-troubleshoot-slow-backup-performance-issue.md#cause-another-process-or-antivirus-software-interfering-with-azure-backup)<br> - Ensure that the SQL Agent service is running and set to automatic in the MABS server<br>
Reg query "HKLM\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Setup"
| Pushing agent(s) to protected servers | The credentials that are specified for the server are invalid. | **If the recommended action that's shown in the product doesn't work, take the following steps**: <br> Try to install the protection agent manually on the production server as specified in [this article](/system-center/dpm/deploy-dpm-protection-agent).| | Azure Backup Agent was unable to connect to the Azure Backup service (ID: 100050) | The Azure Backup Agent was unable to connect to the Azure Backup service. | **If the recommended action that's shown in the product doesn't work, take the following steps**: <br>1. Run the following command from an elevated prompt: **psexec -i -s "c:\Program Files\Internet Explorer\iexplore.exe**. This opens the Internet Explorer window. <br/> 2. Go to **Tools** > **Internet Options** > **Connections** > **LAN settings**. <br/> 3. Change the settings to use a proxy server. Then provide the proxy server details.<br/> 4. If your machine has limited internet access, ensure that firewall settings on the machine or proxy allow these [URLs](install-mars-agent.md#verify-internet-access) and [IP address](install-mars-agent.md#verify-internet-access).| | Azure Backup Agent installation failed | The Microsoft Azure Recovery Services installation failed. All changes that were made to the system by the Microsoft Azure Recovery Services installation were rolled back. (ID: 4024) | Manually install Azure Agent.
+| Server registration status verification with Microsoft Azure Backup failed. | The server registration status could not be verified with Microsoft Azure Backup. Verify that you are connected to the internet and that the proxy settings are configured correctly. | You'll encounter this issue when the MARS agent can't contact Azure services. To resolve this issue: <br><br> - Ensure network connectivity and proxy settings. <br><br> - Ensure that you are running the latest MARS agent. <br><br> - [Ensure your server is running on TLS 1.2](transport-layer-security.md). |
## Configuring protection group
backup Backup Azure Mars Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mars-troubleshoot.md
Title: Troubleshoot the Azure Backup agent description: In this article, learn how to troubleshoot the installation and registration of the Azure Backup agent. Previously updated : 08/26/2022 Last updated : 10/21/2022
We recommend that you check the following before you start troubleshooting Micro
- If you're trying to reregister your server to a vault: - Ensure the agent is uninstalled on the server and that it's deleted from the portal. - Use the same passphrase that was initially used to register the server.
+- [Ensure your server is running on TLS 1.2](transport-layer-security.md).
- For offline backups, ensure Azure PowerShell 3.7.0 is installed on both the source and the copy computer before you start the backup. - If the Backup agent is running on an Azure virtual machine, see [this article](./backup-azure-troubleshoot-slow-backup-performance-issue.md#cause-backup-agent-running-on-an-azure-virtual-machine).
We recommend that you check the following before you start troubleshooting Micro
| Error | Possible cause | Recommended actions | | | | |
-| <br /><ul><li>The Microsoft Azure Recovery Service Agent was unable to connect to Microsoft Azure Backup. (ID: 100050) Check your network settings and ensure that you are able to connect to the internet.<li>(407) Proxy Authentication Required. |A proxy is blocking the connection. | <ul><li>In Internet Explorer, go to **Tools** > **Internet options** > **Security** > **Internet**. Select **Custom Level** and scroll down to the **File download** section. Select **Enable**.<p>You might also have to add [URLs and IP addresses](install-mars-agent.md#verify-internet-access) to your trusted sites in Internet Explorer.<li>Change the settings to use a proxy server. Then provide the proxy server details.<li> If your machine has limited internet access, ensure that firewall settings on the machine or proxy allow these [URLs and IP addresses](install-mars-agent.md#verify-internet-access). <li>If you have antivirus software installed on the server, exclude these files from the antivirus scan: <ul><li>CBEngine.exe (instead of dpmra.exe).<li>CSC.exe (related to .NET Framework). There's a CSC.exe for every .NET Framework version installed on the server. Exclude CSC.exe files for all versions of .NET Framework on the affected server. <li>The scratch folder or cache location. <br>The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch.<li>The bin folder at C:\Program Files\Microsoft Azure Recovery Services Agent\Bin.
-
+| - The Microsoft Azure Recovery Service Agent was unable to connect to Microsoft Azure Backup. (ID: 100050) Check your network settings and ensure that you are able to connect to the internet. <br><br> - (407) Proxy Authentication Required. | A proxy is blocking the connection. | - On Internet Explorer, go to **Tools** > **Internet options** > **Security** > **Internet**. Select **Custom Level** and scroll down to the **File download** section. Select **Enable**. <br> You might also have to add [URLs and IP addresses](install-mars-agent.md#verify-internet-access) to your trusted sites in Internet Explorer. <br><br> - Change the settings to use a proxy server. Then provide the proxy server details. <br><br><br> - If your machine has limited internet access, ensure that firewall settings on the machine or proxy allow these [URLs and IP addresses](install-mars-agent.md#verify-internet-access). <br><br> - If you have antivirus software installed on the server, exclude these files from the antivirus scan: <br> - CBEngine.exe (instead of dpmra.exe). <br> - CSC.exe (related to .NET Framework). There's a CSC.exe for every .NET Framework version installed on the server. Exclude CSC.exe files for all versions of .NET Framework on the affected server. <br><br> - The scratch folder or cache location. <br> The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch. <br><br> - The bin folder at C:\Program Files\Microsoft Azure Recovery Services Agent\Bin. |
+| The server registration status could not be verified with Microsoft Azure Backup. Verify that you are connected to the internet and that the proxy settings are configured correctly. | The MARS agent isn't able to contact Azure services. | - Ensure network connectivity and proxy settings. <br><br> - Ensure that you are running the latest MARS agent. <br><br> - [Ensure your server is running on TLS 1.2](transport-layer-security.md). |
## The specified vault credential file cannot be used as it is not downloaded from the vault associated with this server | Error | Possible cause | Recommended actions |
We recommend that you check the following before you start troubleshooting Micro
| Error | Possible causes | Recommended actions | | | | |
-| <br />Failed to set the encryption key for secure backups. Activation did not succeed completely but the encryption passphrase was saved to the following file. |<li>The server is already registered with another vault.<li>During configuration, the passphrase was corrupted.| Unregister the server from the vault and register it again with a new passphrase.
+| Failed to set the encryption key for secure backups. Activation did not succeed completely but the encryption passphrase was saved to the following file. | - The server is already registered with another vault. <br><br> - During configuration, the passphrase was corrupted.| Unregister the server from the vault and register it again with a new passphrase. |
## The activation did not complete successfully | Error | Possible causes | Recommended actions | ||||
-|<br />The activation did not complete successfully. The current operation failed due to an internal service error [0x1FC07]. Retry the operation after some time. If the issue persists, please contact Microsoft support. | <li> The scratch folder is located on a volume that doesn't have enough space. <li> The scratch folder has been incorrectly moved. <li> The OnlineBackup.KEK file is missing. | <li>Upgrade to the [latest version](https://aka.ms/azurebackup_agent) of the MARS agent.<li>Move the scratch folder or cache location to a volume with free space that's between 5% and 10% of the total size of the backup data. To correctly move the cache location, refer to the steps in [Common questions about backing up files and folders](./backup-azure-file-folder-backup-faq.yml).<li> Ensure that the OnlineBackup.KEK file is present. <br>*The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch*. |
+| The activation did not complete successfully. The current operation failed due to an internal service error [0x1FC07]. Retry the operation after some time. If the issue persists, please contact Microsoft support. | - The scratch folder is located on a volume that doesn't have enough space. <br><br> - The scratch folder has been incorrectly moved. <br><br> - The OnlineBackup.KEK file is missing. | - Upgrade to the [latest version](https://aka.ms/azurebackup_agent) of the MARS agent. <br><br> - Move the scratch folder or cache location to a volume with free space that's between 5% and 10% of the total size of the backup data. To correctly move the cache location, refer to the steps in [Common questions about backing up files and folders](./backup-azure-file-folder-backup-faq.yml). <br><br> - Ensure that the OnlineBackup.KEK file is present. <br>*The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch*. |
## Encryption passphrase not correctly configured | Error | Possible causes | Recommended actions | ||||
-| <br />Error 34506. The encryption passphrase stored on this computer is not correctly configured. | <li> The scratch folder is located on a volume that doesn't have enough space. <li> The scratch folder has been incorrectly moved. <li> The OnlineBackup.KEK file is missing. | <li>Upgrade to the [latest version](https://aka.ms/azurebackup_agent) of the MARS Agent.<li>Move the scratch folder or cache location to a volume with free space that's between 5% and 10% of the total size of the backup data. To correctly move the cache location, refer to the steps in [Common questions about backing up files and folders](./backup-azure-file-folder-backup-faq.yml).<li> Ensure that the OnlineBackup.KEK file is present. <br>*The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch*. <li> If you've recently moved your scratch folder, ensure that the path of your scratch folder location matches the values of the registry key entries shown below: <br><br> **Registry path**: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Config` <br> **Registry Key**: ScratchLocation <br> **Value**: *New cache folder location* <br><br>**Registry path**: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Config\CloudBackupProvider` <br> **Registry Key**: ScratchLocation <br> **Value**: *New cache folder location* |
+| Error 34506. The encryption passphrase stored on this computer is not correctly configured. | - The scratch folder is located on a volume that doesn't have enough space. <br><br> - The scratch folder has been incorrectly moved. <br><br> - The OnlineBackup.KEK file is missing. | - Upgrade to the [latest version](https://aka.ms/azurebackup_agent) of the MARS Agent. <br><br> - Move the scratch folder or cache location to a volume with free space that's between 5% and 10% of the total size of the backup data. To correctly move the cache location, refer to the steps in [Common questions about backing up files and folders](./backup-azure-file-folder-backup-faq.yml). <br><br> - Ensure that the OnlineBackup.KEK file is present. <br>*The default location for the scratch folder or the cache path is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch*. <br><br> - If you've recently moved your scratch folder, ensure that the path of your scratch folder location matches the values of the registry key entries shown below: <br><br> **Registry path**: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Config` <br> **Registry Key**: ScratchLocation <br> **Value**: *New cache folder location* <br><br>**Registry path**: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Config\CloudBackupProvider` <br> **Registry Key**: ScratchLocation <br> **Value**: *New cache folder location* |
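To confirm both registry values after moving the scratch folder, you can read them back from PowerShell; a minimal sketch using the registry paths listed in the table above (the value returned should be the new cache folder location you chose):

```powershell
$paths = @(
    "HKLM:\SOFTWARE\Microsoft\Windows Azure Backup\Config",
    "HKLM:\SOFTWARE\Microsoft\Windows Azure Backup\Config\CloudBackupProvider"
)
foreach ($path in $paths) {
    # Both keys should point at the same new cache folder location.
    Get-ItemProperty -Path $path -Name ScratchLocation |
        Select-Object PSPath, ScratchLocation
}
```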
## Backups don't run according to schedule
Set-ExecutionPolicy Unrestricted
Error | Possible causes | Recommended actions | |
-The current operation failed due to an internal service error "Resource not provisioned in service stamp". Please retry the operation after some time. (ID: 230006) | The protected server was renamed. | <li> Rename the server back to the original name as registered with the vault. <br> <li> Re-register the server to the vault with the new name.
+The current operation failed due to an internal service error "Resource not provisioned in service stamp". Please retry the operation after some time. (ID: 230006) | The protected server was renamed. | - Rename the server back to the original name as registered with the vault. <br><br> - Re-register the server to the vault with the new name.
## Job could not be started as another job was in progress
backup Backup Azure Scdpm Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-scdpm-troubleshooting.md
Title: Troubleshoot System Center Data Protection Manager description: In this article, discover solutions for issues that you might encounter while using System Center Data Protection Manager. Previously updated : 01/30/2019 Last updated : 10/21/2022+++ # Troubleshoot System Center Data Protection Manager
This error occurs during the encryption process when recovering Data Protection
> > When you're recovering data, always provide the same encryption passphrase that's associated with the Data Protection Manager/Azure Backup server. >+
+## Error: The server registration status could not be verified with Microsoft Azure Backup. Verify that you are connected to the internet and that the proxy settings are configured correctly.
+
+To resolve this issue:
+
+- Ensure network connectivity and proxy settings.
+- Ensure that you are running the latest MARS agent.
+- [Ensure your server is running on TLS 1.2](transport-layer-security.md).
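The linked transport-layer-security article is the authoritative reference; as a hedged sketch, the .NET strong-cryptography values below are the registry settings commonly applied so that the agent negotiates TLS 1.2 on older Windows Server builds.

```powershell
# Apply on the DPM/MABS/MARS server, then restart the machine for the settings to take effect.
$keys = @(
    "HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319",
    "HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319"
)
foreach ($key in $keys) {
    Set-ItemProperty -Path $key -Name "SchUseStrongCrypto" -Value 1 -Type DWord
}
```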
backup Backup Mabs Sharepoint Azure Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-sharepoint-azure-stack.md
Title: Back up a SharePoint farm on Azure Stack description: Use Azure Backup Server to back up and restore your SharePoint data on Azure Stack. This article provides the information to configure your SharePoint farm so that desired data can be stored in Azure. You can restore protected SharePoint data from disk or from Azure.- Previously updated : 06/07/2020+ Last updated : 10/20/2022++++
-# Back up a SharePoint farm on Azure Stack
+# Back up a SharePoint farm on Azure Stack using Microsoft Azure Backup Server
-You back up a SharePoint farm on Azure Stack to Microsoft Azure by using Microsoft Azure Backup Server (MABS) in much the same way that you back up other data sources. Azure Backup provides flexibility in the backup schedule to create daily, weekly, monthly, or yearly backup points and gives you retention policy options for various backup points. It also provides the capability to store local disk copies for quick recovery-time objectives (RTO) and to store copies to Azure for economical, long-term retention.
+This article describes how to back up and restore SharePoint data using Microsoft Azure Backup Server (MABS).
-## SharePoint supported versions and related protection scenarios
+Microsoft Azure Backup Server (MABS) enables you to back up a SharePoint farm (on Azure Stack) to Microsoft Azure, which gives an experience similar to back up of other data sources. Azure Backup provides flexibility in the backup schedule to create daily, weekly, monthly, or yearly backup points, and gives you retention policy options for various backup points. It also provides the capability to store local disk copies for quick recovery-time objectives (RTO) and to store copies to Azure for economical, long-term retention.
+
+In this article, you'll learn about:
+
+> [!div class="checklist"]
+> - SharePoint supported scenarios
+> - Prerequisites
+> - Configure backup
+> - Monitor operations
+> - Restore a SharePoint item from disk by using MABS
+> - Restore a SharePoint database from Azure by using MABS
+> - Switch the Front-End Web Server
+
+## SharePoint supported scenarios
+
+You need to confirm the following supported scenarios before you back up a SharePoint farm to Azure.
+
+### Supported scenarios
Azure Backup for MABS supports the following scenarios:
Azure Backup for MABS supports the following scenarios:
| | | | | | SharePoint |SharePoint 2016, SharePoint 2013, SharePoint 2010 |SharePoint deployed as an Azure Stack virtual machine <br> -- <br> SQL Always On | Protect SharePoint Farm recovery options: Recovery farm, database, and file or list item from disk recovery points. Farm and database recovery from Azure recovery points. |
-## Before you start
-
-There are a few things you need to confirm before you back up a SharePoint farm to Azure.
-
-### What's not supported
+### Unsupported scenarios
* MABS that protects a SharePoint farm doesn't protect search indexes or application service databases. You'll need to configure the protection of these databases separately. * MABS doesn't provide backup of SharePoint SQL Server databases that are hosted on scale-out file server (SOFS) shares.
-### Prerequisites
+## Prerequisites
-Before you continue, make sure that you've met all the [prerequisites for using Microsoft Azure Backup](backup-azure-dpm-introduction.md#prerequisites-and-limitations) to protect workloads. Some tasks for prerequisites include: create a backup vault, download vault credentials, install Azure Backup Agent, and register the Azure Backup Server with the vault.
+Before you continue, ensure that you've met all the [prerequisites for using Microsoft Azure Backup](backup-azure-dpm-introduction.md#prerequisites-and-limitations) to protect workloads. The tasks in prerequisites also include: create a backup vault, download vault credentials, install Azure Backup Agent, and register the Azure Backup Server with the vault.
Additional prerequisites and limitations:
-* By default when you protect SharePoint, all content databases (and the SharePoint_Config and SharePoint_AdminContent* databases) will be protected. If you want to add customizations such as search indexes, templates or application service databases, or the user profile service you'll need to configure these for protection separately. Be sure that you enable protection for all folders that include these types of features or customization files.
+* By default when you protect SharePoint, all content databases (and the SharePoint_Config and SharePoint_AdminContent* databases) are protected.
-* You can't protect SharePoint databases as a SQL Server data source. You can recover individual databases from a farm backup.
+ To add customizations (such as search indexes, templates or application service databases, or the user profile service), you need to configure these for protection separately. Ensure that you enable protection for all folders that include these types of features or customization files.
+
+* MABS runs as **Local System**, and to back up SQL Server databases, it needs sysadmin privileges on that account for the SQL server. On the SQL Server that you want to back up, set `NT AUTHORITY\SYSTEM` to **sysadmin**.
-* Remember that MABS runs as **Local System**, and to back up SQL Server databases it needs sysadmin privileges on that account for the SQL server. On the SQL Server you want to back up, set NT AUTHORITY\SYSTEM to **sysadmin**.
+* For every 10 million items in the farm, there must be at least 2 GB of space on the volume where the MABS folder is located. This space is required for catalog generation.
-* For every 10 million items in the farm, there must be at least 2 GB of space on the volume where the MABS folder is located. This space is required for catalog generation. To enable you to use MABS to perform a specific recovery of items (site collections, sites, lists, document libraries, folders, individual documents, and list items), catalog generation creates a list of the URLs contained within each content database. You can view the list of URLs in the recoverable item pane in the Recovery task area of the MABS Administrator Console.
+ To enable you to use MABS to perform a specific recovery of items (site collections, sites, lists, document libraries, folders, individual documents, and list items), catalog generation creates a list of the URLs contained within each content database. You can view the list of URLs in the recoverable item pane in the Recovery task area of the MABS Administrator Console.
* In the SharePoint farm, if you have SQL Server databases that are configured with SQL Server aliases, install the SQL Server client components on the front-end Web server that MABS will protect.
+### Limitations
+
+* You can't protect SharePoint databases as a SQL Server data source. You can recover individual databases from a farm backup.
+
* Protecting application store items isn't supported with SharePoint 2013.

* MABS doesn't support protecting remote FILESTREAM. The FILESTREAM should be part of the database.

## Configure backup
-To back up the SharePoint farm, configure protection for SharePoint by using ConfigureSharePoint.exe and then create a protection group in MABS.
+To back up the SharePoint farm, configure protection for SharePoint by using ConfigureSharePoint.exe, and then create a protection group in MABS.
+
+Follow these steps:
-1. **Run ConfigureSharePoint.exe** - This tool configures the SharePoint VSS Writer service \(WSS\) and provides the protection agent with credentials for the SharePoint farm. After you've deployed the protection agent, the ConfigureSharePoint.exe file can be found in the `<MABS Installation Path\>\bin` folder on the front\-end Web server. If you have multiple WFE servers, you only need to install it on one of them. Run as follows:
+1. **Run ConfigureSharePoint.exe** - This tool configures the SharePoint VSS Writer service \(WSS\) and provides the protection agent with credentials for the SharePoint farm. After you've deployed the protection agent, the ConfigureSharePoint.exe file can be found in the `<MABS Installation Path\>\bin` folder on the front\-end Web server.
- * On the WFE server, at a command prompt navigate to `\<MABS installation location\>\\bin\\` and run `ConfigureSharePoint \[\-EnableSharePointProtection\] \[\-EnableSPSearchProtection\] \[\-ResolveAllSQLAliases\] \[\-SetTempPath <path>\]`, where:
+ If you have multiple WFE servers, you only need to install it on one of them.
+
+ Run as follows:
+
+ 1. On the WFE server, on a command prompt, go to `\<MABS installation location\>\\bin\\` and run `ConfigureSharePoint \[\-EnableSharePointProtection\] \[\-EnableSPSearchProtection\] \[\-ResolveAllSQLAliases\] \[\-SetTempPath <path>\]`, where:
* **EnableSharePointProtection** enables protection of the SharePoint farm, enables the VSS writer, and registers the identity of the DCOM application WssCmdletsWrapper to run as a user whose credentials are entered with this option. This account should be a farm admin and also local admin on the front\-end Web Server.
To back up the SharePoint farm, configure protection for SharePoint by using Con
* **SetTempPath** sets the environment variable TEMP and TMP to the specified path. Item level recovery fails if a large site collection, site, list, or item is being recovered and there's insufficient space in the farm admin Temporary folder. This option allows you to change the folder path of the temporary files to a volume that has sufficient space to store the site collection or site being recovered.
- * Enter the farm administrator credentials. This account should be a member of the local Administrator group on the WFE server. If the farm administrator isn't a local admin, grant the following permissions on the WFE server:
+ 1. Enter the farm administrator credentials. This account should be a member of the local Administrator group on the WFE server. If the farm administrator isn't a local admin, grant the following permissions on the WFE server:
* Grant the **WSS_Admin_WPG** group full control to the MABS folder (`%Program Files%\Data Protection Manager\DPM\`).
To back up the SharePoint farm, configure protection for SharePoint by using Con
1. In **Select Protection Group Type**, select **Servers**.
-1. In **Select Group Members**, expand the server that holds the WFE role. If there's more than one WFE server, select the one on which you installed ConfigureSharePoint.exe.
+1. In **Select Group Members**, expand the server that holds the WFE role.
+
+ If there's more than one WFE server, select the one on which you installed ConfigureSharePoint.exe.
When you expand the computer running SharePoint, MABS queries VSS to see what data MABS can protect. If the SharePoint database is remote, MABS connects to it. If SharePoint data sources don't appear, check that the VSS writer is running on the computer that's running SharePoint and on any remote instance of SQL Server. Then, ensure that the MABS agent is installed both on the computer running SharePoint and on the remote instance of SQL Server. Also, ensure that SharePoint databases aren't being protected elsewhere as SQL Server databases.
To back up the SharePoint farm, configure protection for SharePoint by using Con
1. In **Select short\-term goals**, specify how you want to back up to short\-term storage on disk. In **Retention range** you specify how long you want to keep the data on disk. In **Synchronization frequency**, you specify how often you want to run an incremental backup to disk. If you don't want to set a backup interval, you can check just before a recovery point so that MABS will run an express full backup just before each recovery point is scheduled.
-1. In the Review disk allocation page, review the storage pool disk space allocated for the protection group.
+1. On the **Review disk allocation** page, review the storage pool disk space allocated for the protection group.
**Total Data size** is the size of the data you want to back up, and **Disk space to be provisioned on MABS** is the space that MABS recommends for the protection group. MABS chooses the ideal backup volume, based on the settings. However, you can edit the backup volume choices in the **Disk allocation details**. For the workloads, select the preferred storage in the dropdown menu. Your edits change the values for **Total Storage** and **Free Storage** in the **Available Disk Storage** pane. Underprovisioned space is the amount of storage MABS suggests you add to the volume, to continue with backups smoothly in the future.
To back up the SharePoint farm, configure protection for SharePoint by using Con
1. On the **Summary** page, review your settings. After you select **Create Group**, initial replication of the data occurs. When it finishes, the protection group status will show as **OK** on the **Status** page. Backup then takes place in line with the protection group settings.
-## Monitoring
+## Monitor operations
-After the protection group's been created, the initial replication occurs and MABS starts backing up and synchronizing the SharePoint data. MABS monitors the initial synchronization and subsequent backups. You can monitor the SharePoint data in a couple of ways:
+After the protection group is created, the initial replication occurs and MABS starts backing up and synchronizing the SharePoint data. MABS monitors the initial synchronization and subsequent backups. You can monitor the SharePoint data in a couple of ways:
* Using default MABS monitoring, you can set up notifications for proactive monitoring by publishing alerts and configuring notifications. You can send notifications by e-mail for critical, warning, or informational alerts, and for the status of instantiated recoveries.
After the protection group's been created, the initial replication occurs and MA
## Restore a SharePoint item from disk by using MABS
-In the following example, the *Recovering SharePoint item* has been accidentally deleted and needs to be recovered.
-![MABS SharePoint Protection4](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection5.png)
+In the following example, the *Recovering SharePoint item* is accidentally deleted and needs to be recovered.
+![Screenshot showing the MABS SharePoint Protection diagram.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection5.png)
+
+Follow these steps:
+
+1. Open the **MABS Administrator Console**.
+
+ All SharePoint farms that are protected by MABS are shown in the **Protection** tab.
-1. Open the **MABS Administrator Console**. All SharePoint farms that are protected by MABS are shown in the **Protection** tab.
+ ![Screenshot showing the list of SharePoint farms that are protected by MABS.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection4.png)
- ![MABS SharePoint Protection3](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection4.png)
-2. To begin to recover the item, select the **Recovery** tab.
+2. To recover the item, select the **Recovery** tab.
- ![MABS SharePoint Protection5](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection6.png)
-3. You can search SharePoint for *Recovering SharePoint item* by using a wildcard-based search within a recovery point range.
+ ![Screenshot showing how to initiate the recovery process.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection6.png)
+
+3. Search SharePoint for *Recovering SharePoint item* by using a wildcard-based search within a recovery point range.
+
+ ![Screenshot showing how to search for SharePoint recovery items using a wildcard-based search.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection7.png)
- ![MABS SharePoint Protection6](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection7.png)
4. Select the appropriate recovery point from the search results, right-click the item, and then select **Recover**.
-5. You can also browse through various recovery points and select a database or item to recover. Select **Date > Recovery time**, and then select the correct **Database > SharePoint farm > Recovery point > Item**.
- ![MABS SharePoint Protection7](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection8.png)
-6. Right-click the item, and then select **Recover** to open the **Recovery Wizard**. Select **Next**.
+ You can also browse through various recovery points and select a database or item to recover.
+
+5. Select **Date > Recovery time**, and then select the correct **Database > SharePoint farm > Recovery point > Item**.
- ![Review Recovery Selection](./media/backup-azure-backup-sharepoint/review-recovery-selection.png)
+ ![Screenshot showing how to select a recovery point for restore operation.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection8.png)
+
+6. Right-click the item, select **Recover** to open the **Recovery Wizard**, and then select **Next**.
+
+ ![Screenshot showing how to continue with the restore process.](./media/backup-azure-backup-sharepoint/review-recovery-selection.png)
7. Select the type of recovery that you want to perform, and then select **Next**.
- ![Recovery Type](./media/backup-azure-backup-sharepoint/select-recovery-type.png)
+ ![Screenshot showing how to select the recovery type.](./media/backup-azure-backup-sharepoint/select-recovery-type.png)
   > [!NOTE]
   > The selection of **Recover to original** in the example recovers the item to the original SharePoint site.
In the following example, the *Recovering SharePoint item* has been accidentally
* Select **Recover without using a recovery farm** if the SharePoint farm hasn't changed and is the same as the recovery point that's being restored.
* Select **Recover using a recovery farm** if the SharePoint farm has changed since the recovery point was created.
- ![Recovery Process](./media/backup-azure-backup-sharepoint/recovery-process.png)
+ ![Screenshot showing how to perform the recovery process.](./media/backup-azure-backup-sharepoint/recovery-process.png)
9. Provide a staging SQL Server instance location to recover the database temporarily, and provide a staging file share on MABS and the server that's running SharePoint to recover the item.
- ![Staging Location1](./media/backup-azure-backup-sharepoint/staging-location1.png)
+ ![Screenshot showing the staging location 1.](./media/backup-azure-backup-sharepoint/staging-location1.png)
MABS attaches the content database that's hosting the SharePoint item to the temporary SQL Server instance. From the content database, it recovers the item and puts it on the staging file location on MABS. The recovered item that's on the staging location now needs to be exported to the staging location on the SharePoint farm.
- ![Staging Location2](./media/backup-azure-backup-sharepoint/staging-location2.png)
+ ![Screenshot showing the staging location 2.](./media/backup-azure-backup-sharepoint/staging-location2.png)
10. Select **Specify recovery options**, and apply security settings to the SharePoint farm or apply the security settings of the recovery point. Select **Next**.

    ![Recovery Options](./media/backup-azure-backup-sharepoint/recovery-options.png)
In the following example, the *Recovering SharePoint item* has been accidentally
    ![Recovery summary](./media/backup-azure-backup-sharepoint/recovery-summary.png)

12. Now select the **Monitoring** tab in the **MABS Administrator Console** to view the **Status** of the recovery.
- ![Recovery Status](./media/backup-azure-backup-sharepoint/recovery-monitoring.png)
+ ![Screenshot showing the recovery status.](./media/backup-azure-backup-sharepoint/recovery-monitoring.png)
   > [!NOTE]
   > The file is now restored. You can refresh the SharePoint site to check the restored file.
In the following example, the *Recovering SharePoint item* has been accidentally
## Restore a SharePoint database from Azure by using MABS
-1. To recover a SharePoint content database, browse through various recovery points (as shown previously), and select the recovery point that you want to restore.
+To recover a SharePoint content database, follow these steps:
+
+1. Browse through various recovery points (as shown previously), and select the recovery point that you want to restore.
- ![MABS SharePoint Protection8](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection9.png)
+ ![Screenshot showing how to browse through various recovery points.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection9.png)
2. Double-click the SharePoint recovery point to show the available SharePoint catalog information. > [!NOTE]
In the following example, the *Recovering SharePoint item* has been accidentally
> 3. Select **Re-catalog**.
- ![MABS SharePoint Protection10](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection12.png)
+ ![Screenshot showing how to select Re-catalog.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection12.png)
The **Cloud Recatalog** status window opens.
- ![MABS SharePoint Protection11](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection13.png)
+ ![Screenshot showing the Cloud Recatalog status window.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection13.png)
After cataloging is finished, the status changes to *Success*. Select **Close**.
- ![MABS SharePoint Protection12](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection14.png)
+ ![Screenshot showing the status changed to Success.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection14.png)
4. Select the SharePoint object shown in the MABS **Recovery** tab to get the content database structure. Right-click the item, and then select **Recover**.
- ![MABS SharePoint Protection13](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection15.png)
+ ![Screenshot showing how to select the SharePoint object shown in the MABS Recovery tab.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection15.png)
5. At this point, follow the recovery steps earlier in this article to recover a SharePoint content database from disk.
-## Switching the Front-End Web Server
+## Switch the Front-End Web Server
If you have more than one front-end web server and want to switch the server that MABS uses to protect the farm, follow these instructions:
The following procedure uses the example of a server farm with two front-end Web
> [!NOTE]
> If the front-end Web server that MABS uses to protect the farm is unavailable, use the following procedure to change the front-end Web server by starting at step 4.
-### To change the front-end Web server that MABS uses to protect the farm
+### Change the front-end Web server that MABS uses to protect the farm
1. Stop the SharePoint VSS Writer service on *Server1* by running the following command at a command prompt:
The following procedure uses the example of a server farm with two front-end Web
1. Select the protection group that the server farm belongs to, and then select **Modify protection group**.
-1. In the Modify Group Wizard, on the **Select Group Members** page, expand *Server2* and select the server farm, and then complete the wizard.
+1. In the Modify Group Wizard, on the **Select Group Members** page, expand *Server2*, select the server farm, and then complete the wizard.
A consistency check will start.
backup Install Mars Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/install-mars-agent.md
Title: Install the Microsoft Azure Recovery Services (MARS) agent description: Learn how to install the Microsoft Azure Recovery Services (MARS) agent to back up Windows machines. Previously updated : 08/26/2022 Last updated : 10/21/2022
The data that's available for backup depends on where the agent is installed.
* Make sure that you have an Azure account if you need to back up a server or client to Azure. If you don't have an account, you can create a [free one](https://azure.microsoft.com/free/) in just a few minutes.
* Verify internet access on the machines that you want to back up.
* Ensure the user installing and configuring the MARS agent has local administrator privileges on the server to be protected.
+* [Ensure your server is running on TLS 1.2](transport-layer-security.md).
* To prevent errors during vault registration, ensure that the latest MARS agent version is used. If not, we recommend that you download it [from here](https://aka.ms/azurebackup_agent) or [from the Azure portal as mentioned in this section](#download-the-mars-agent).

[!INCLUDE [How to create a Recovery Services vault](../../includes/backup-create-rs-vault.md)]
bastion Bastion Connect Vm Ssh Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-linux.md
description: Learn how to use Azure Bastion to connect to Linux VM using SSH.
Previously updated : 08/18/2022 Last updated : 10/18/2022
In order to make a connection, the following roles are required:
* Reader role on the virtual machine
* Reader role on the NIC with private IP of the virtual machine
* Reader role on the Azure Bastion resource
+* Reader role on the virtual network of the target virtual machine (if the Bastion deployment is in a peered virtual network)
### Ports
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 04/26/2022 Last updated : 10/21/2022 # Azure Bastion FAQ
Azure Bastion isn't supported with Azure Private DNS Zones in national clouds.
### <a name="dns"></a>Does Azure Bastion support Private Link?
-No, Azure Bastion does not currently support private link.
+No, Azure Bastion doesn't currently support private link.
### <a name="subnet"></a>Can I have an Azure Bastion subnet of size /27 or smaller (/28, /29, etc.)?
Review any error messages and [raise a support request in the Azure portal](../a
Azure Bastion is deployed within VNets or peered VNets, and is associated to an Azure region. You're responsible for deploying Azure Bastion to a Disaster Recovery (DR) site VNet. In the event of an Azure region failure, perform a failover operation for your VMs to the DR region. Then, use the Azure Bastion host that's deployed in the DR region to connect to the VMs that are now deployed there.
+### <a name="zone-redundant"></a>Does Bastion support zone redundancies?
+
+Currently, by default, new Bastion deployments don't support zone redundancies. Previously deployed bastions may or may not be zone-redundant. The exceptions are Bastion deployments in Korea Central and Southeast Asia, which do support zone redundancies.
+
## <a name="vm"></a>VM features and connection FAQs

### <a name="roles"></a>Are any roles required to access a virtual machine?
Azure Bastion offers support for file transfer between your target VM and local
### <a name="aadj"></a>Does Bastion hardening work with AADJ VM extension-joined VMs?
-This feature doesn't work with AADJ VM extension-joined machines using Azure AD users. For more information, see [Windows Azure VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#requirements).
+This feature doesn't work with AADJ VM extension-joined machines using Azure AD users. For more information, see [Log in to a Windows virtual machine in Azure by using Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#requirements).
### <a name="rdscal"></a>Does Azure Bastion require an RDS CAL for administrative purposes on Azure-hosted VMs?
bastion Bastion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-overview.md
# Customer intent: As someone with a basic network background, but is new to Azure, I want to understand the capabilities of Azure Bastion so that I can securely connect to my Azure virtual machines. Previously updated : 08/05/2022 Last updated : 10/21/2022
Azure Bastion is deployed to a virtual network and supports virtual network peer
RDP and SSH are some of the fundamental means through which you can connect to your workloads running in Azure. Exposing RDP/SSH ports over the Internet isn't desired and is seen as a significant threat surface. This is often due to protocol vulnerabilities. To contain this threat surface, you can deploy bastion hosts (also known as jump-servers) at the public side of your perimeter network. Bastion host servers are designed and configured to withstand attacks. Bastion servers also provide RDP and SSH connectivity to the workloads sitting behind the bastion, as well as further inside the network.
+Currently, by default, new Bastion deployments don't support zone redundancies. Previously deployed bastions may or may not be zone-redundant. The exceptions are Bastion deployments in Korea Central and Southeast Asia, which do support zone redundancies.
+
:::image type="content" source="./media/bastion-overview/architecture.png" alt-text="Diagram showing the Azure Bastion architecture.":::

This figure shows the architecture of an Azure Bastion deployment. In this diagram:
bastion Bastion Vm Full Screen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-vm-full-screen.md
Previously updated : 08/30/2021 Last updated : 10/21/2022 # Customer intent: I want to manage my VM experience using Azure Bastion.
Select the **Fullscreen** button to switch the session to a full screen experien
## Next steps
-For more VM features, see [About VM connections and features](vm-about.md).
+For more VM features, see [About VM connections and features](vm-about.md).
chaos-studio Chaos Studio Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-permissions-security.md
# Permissions and security in Azure Chaos Studio
-Azure Chaos Studio enables you to improve service resilience by systematically injecting faults into your Azure resources. Fault injection is a powerful way to improve service resilience, but it can also be dangerous. Causing failures in your application can have more impact than originally intended and open opportunities for malicious actors to infiltrate your applications. Chaos Studio has a robust permission model that prevents faults from being run unintentionally or by a bad actor. In this article, you will learn how you can secure resources that are targeted for fault injection using Chaos Studio.
+Azure Chaos Studio enables you to improve service resilience by systematically injecting faults into your Azure resources. Fault injection is a powerful way to improve service resilience, but it can also be dangerous. Causing failures in your application can have more impact than originally intended and open opportunities for malicious actors to infiltrate your applications. Chaos Studio has a robust permission model that prevents faults from being run unintentionally or by a bad actor. In this article, you'll learn how you can secure resources that are targeted for fault injection using Chaos Studio.
## How can I restrict the ability to inject faults with Chaos Studio?
Chaos Studio has three levels of security that help you to control how and when
First, a chaos experiment is an Azure resource that is deployed to a region, resource group, and subscription. Users must have appropriate Azure Resource Manager permissions to create, update, start, cancel, delete, or view an experiment. Each permission is an ARM operation that can be granularly assigned to an identity or assigned as part of a role with wildcard permissions. For example, the Contributor role in Azure has */write permission at the assigned scope, which will include Microsoft.Chaos/experiments/write permission. When attempting to control ability to inject faults against a resource, the most important operation to restrict is Microsoft.Chaos/experiments/start/action, since this operation starts a chaos experiment that will inject faults.
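As an illustration of that first level of control, the following is a minimal sketch of an Azure custom role definition that lets a group create and view experiments while deliberately withholding the start operation. The role name, the assignable scope, and the `Microsoft.Chaos/experiments/read` operation are assumptions for this example; only `Microsoft.Chaos/experiments/write` and `Microsoft.Chaos/experiments/start/action` are named in this article, so verify the exact operation names against the Microsoft.Chaos resource provider before using it.

```json
{
  "Name": "Chaos Experiment Author (cannot start)",
  "IsCustom": true,
  "Description": "Sketch only: can create, update, and view chaos experiments, but cannot start them.",
  "Actions": [
    "Microsoft.Chaos/experiments/read",
    "Microsoft.Chaos/experiments/write"
  ],
  "NotActions": [
    "Microsoft.Chaos/experiments/start/action"
  ],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
  ]
}
```

Assigning a role like this to experiment authors lets you reserve the start operation for a smaller, more trusted group.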
-Second, a chaos experiment has a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) that executes faults on a resource. When you create an experiment, the system-assigned managed identity is created in your Azure Active Directory tenant with no permissions. Before running your chaos experiment, you must grant its identity [appropriate permissions](chaos-studio-fault-providers.md) to all target resources. If the experiment identity does not have appropriate permission to a resource, it will not be able to execute a fault against that resource.
+Second, a chaos experiment has a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) that executes faults on a resource. When you create an experiment, the system-assigned managed identity is created in your Azure Active Directory tenant with no permissions. Before running your chaos experiment, you must grant its identity [appropriate permissions](chaos-studio-fault-providers.md) to all target resources. If the experiment identity doesn't have appropriate permission to a resource, it will not be able to execute a fault against that resource.
-Finally, each resource must be onboarded to Chaos Studio as [a target with corresponding capabilities enabled](chaos-studio-targets-capabilities.md). If a target or the capability for the fault being executed does not exist, the experiment fails without impacting the resource.
+Finally, each resource must be onboarded to Chaos Studio as [a target with corresponding capabilities enabled](chaos-studio-targets-capabilities.md). If a target or the capability for the fault being executed doesn't exist, the experiment fails without impacting the resource.
## Agent authentication
-When running agent-based faults, you need to install the Chaos Studio agent on your virtual machine or virtual machine scale set. The agent uses a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate to Chaos Studio and an *agent profile* to establish relationship to a specific VM resource. When onboarding a virtual machine or virtual machine scale set for agent-based faults, you first create an agent target. The agent target must have a reference to the user-assigned managed identity that will be used for authentication. The agent target contains an *agent profile ID*, which is provided as configuration when installing the agent. Agent profiles are unique to each target and targets are unique per resource.
+When running agent-based faults, you need to install the Chaos Studio agent on your virtual machine or Virtual Machine Scale Set. The agent uses a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate to Chaos Studio and an *agent profile* to establish relationship to a specific VM resource. When onboarding a virtual machine or Virtual Machine Scale Set for agent-based faults, you first create an agent target. The agent target must have a reference to the user-assigned managed identity that will be used for authentication. The agent target contains an *agent profile ID*, which is provided as configuration when installing the agent. Agent profiles are unique to each target and targets are unique per resource.
## ARM operations and roles
To assign these permissions granularly, you can [create a custom role](../role-b
## Network security

All user interactions with Chaos Studio happen through Azure Resource Manager. If a user starts an experiment, the experiment may interact with endpoints other than Resource Manager depending on the fault.
-* Service-direct faults - Most service-direct faults are executed through Azure Resource Manager. Target resources do not require any allowlisted network endpoints.
+* Service-direct faults - Most service-direct faults are executed through Azure Resource Manager. Target resources don't require any allowlisted network endpoints.
* Service-direct AKS Chaos Mesh faults - Service-direct faults for Azure Kubernetes Service that use Chaos Mesh require that the AKS cluster has a publicly exposed Kubernetes API server. [You can learn how to limit AKS network access to a set of IP ranges here.](../aks/api-server-authorized-ip-ranges.md)
* Agent-based faults - Agent-based faults require agent access to the Chaos Studio agent service. A virtual machine or virtual machine scale set must have outbound access to the agent service endpoint for the agent to connect successfully. The agent service endpoint is `https://acs-prod-<region>.chaosagent.trafficmanager.net`, replacing `<region>` with the region where your virtual machine is deployed, for example, `https://acs-prod-eastus.chaosagent.trafficmanager.net` for a virtual machine in East US.
-Azure Chaos Studio does not support Service Tags or Private Link.
+Azure Chaos Studio doesn't support Private Link.
## Data encryption
cognitive-services Concept Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-ocr.md
Last updated 09/12/2022
-# Reading text (preview)
+# Computer Vision v4.0 Read OCR (preview)
-Version 4.0 of Image Analysis offers the ability to extract text from images. Contextual information like line number and position is also returned. Text reading is also available through the [OCR service](overview-ocr.md), but the latest model version is available through Image Analysis. This version is optimized for image inputs as opposed to documents.
+The new Computer Vision v4.0 Image Analysis REST API preview offers the ability to extract printed or handwritten text from images in a unified performance-enhanced synchronous API that makes it easy to get all image insights including OCR results in a single API operation. The Read OCR engine is built on top of multiple deep learning models supported by universal script-based models for [global language support](./language-support.md).
[!INCLUDE [read-editions](./includes/read-editions.md)]
-## Reading text example
+## Use the V4.0 REST API preview
-The following JSON response illustrates what the Analyze API returns when reading text in the given image.
+The text extraction feature is part of the [v4.0 Analyze Image REST API](https://aka.ms/vision-4-0-ref). Include `Read` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"readResult"` section.
+
+For an example, copy the following command into a text editor and replace the `<key>` with your API key and optionally, your API endpoint URL. Then open a command prompt window and run the command.
+
+```bash
+ curl.exe -H "Ocp-Apim-Subscription-Key: <key>" -H "Content-Type: application/json" "https://westcentralus.api.cognitive.microsoft.com/computervision/imageanalysis:analyze?features=Read&model-version=latest&language=en&api-version=2022-10-12-preview" -d "{'url':'https://upload.wikimedia.org/wikipedia/commons/thumb/3/3c/Salto_del_Angel-Canaima-Venezuela08.JPG/800px-Salto_del_Angel-Canaima-Venezuela08.JPG'}"
+
+```
+
+## Text extraction output
+
+The following JSON response illustrates what the v4.0 Analyze Image API returns when extracting text from the given image.
![Photo of a sticky note with writing on it.](./Images/handwritten-note.jpg)
The following JSON response illustrates what the Analyze API returns when readin
} ```
-## Use the API
-
-The text reading feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Read` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"readResult"` section.
- ## Next steps
-Follow the [quickstart](./quickstarts-sdk/image-analysis-client-library.md) to read text from an image using the Analyze API.
+Follow the v4.0 REST API sections in the [Image Analysis quickstart](./quickstarts-sdk/image-analysis-client-library.md) to extract text from an image using the Analyze API.
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
You can analyze images to provide insights about their visual features and chara
### Extract text from images (preview)
-Version 4.0 preview of Image Analysis offers the ability to extract text from images. Contextual information like line number and position is also returned. Text reading is also available through the main [OCR service](overview-ocr.md), but in Image Analysis this feature is optimized for image inputs as opposed to documents. [Reading text in images](concept-ocr.md)
+Version 4.0 preview of Image Analysis offers the ability to extract text from images. Compared with the async Computer Vision 3.2 GA Read, the new version offers the familiar Read OCR engine in a unified performance-enhanced synchronous API that makes it easy to get all image insights including OCR in a single API operation. [Extract text from images](concept-ocr.md)
+ ### Detect people in images (preview)
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
Title: What is Optical character recognition?
-description: The optical character recognition (OCR) service extracts visible text in an image and returns it as structured strings.
+description: The optical character recognition (OCR) service extracts print and handwritten text from images.
# What is Optical character recognition?
-Optical character recognition (OCR) allows you to extract printed or handwritten text from images, such as posters, street signs and product labels, as well as from documents like articles, reports, forms, and invoices. Microsoft's **Read** OCR technology is built on top of multiple deep learning models supported by universal script-based models for global language support. This allows them to extract printed and handwritten text in [several languages](./language-support.md), including mixed languages and writing styles. **Read** is available as cloud service and on-premises container for deployment flexibility. With the latest preview, it's also available as a synchronous API for single, non-document, image-only scenarios with performance enhancements that make it easier to implement OCR-assisted user experiences.
+Optical character recognition (OCR) allows you to extract printed or handwritten text from images, such as posters, street signs and product labels, as well as from documents like articles, reports, forms, and invoices.
## How is OCR related to intelligent document processing (IDP)?

OCR typically refers to the foundational technology focusing on extracting text, while delegating the extraction of structure, relationships, key-values, entities, and other document-centric insights to an intelligent document processing service like [Form Recognizer](../../applied-ai-services/form-recognizer/overview.md). Form Recognizer includes a document-optimized version of **Read** as its OCR engine while delegating to other models for higher-end insights. If you are extracting text from scanned and digital documents, use [Form Recognizer Read OCR](../../applied-ai-services/form-recognizer/concept-read.md).
+## Read OCR engine
+Microsoft's **Read** OCR engine is composed of multiple advanced machine-learning based models supporting [global languages](./language-support.md). This allows them to extract printed and handwritten text including mixed languages and writing styles. **Read** is available as cloud service and on-premises container for deployment flexibility. With the latest preview, it's also available as a synchronous API for single, non-document, image-only scenarios with performance enhancements that make it easier to implement OCR-assisted user experiences.
+ [!INCLUDE [read-editions](includes/read-editions.md)]
-## Start with Vision Studio
+## How to use OCR
-Try out OCR by using Vision Studio.
+Try out OCR by using Vision Studio. Then follow one of the links to the Read edition in the later sections that best meets your requirements.
> [!div class="nextstepaction"] > [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
cognitive-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-audio-data.md
Previously updated : 09/11/2022 Last updated : 10/21/2022 ms.devlang: csharp # Locate audio files for batch transcription
-Batch transcription is used to transcribe a large amount of audio in storage. Batch transcription can read audio files from a public URI (such as "https://crbn.us/hello.wav") or a [shared access signature (SAS)](../../storage/common/storage-sas-overview.md) URI.
+Batch transcription is used to transcribe a large amount of audio in storage. Batch transcription can access audio files from inside or outside of Azure.
-You should provide multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe. The batch transcription service can handle a large number of submitted transcriptions. The service transcribes the files concurrently, which reduces the turnaround time.
+When source audio files are stored outside of Azure, they can be accessed via a public URI (such as "https://crbn.us/hello.wav"). Files should be directly accessible; URIs that require authentication or that invoke interactive scripts before the file can be accessed aren't supported.
+
+Audio files that are stored in Azure Blob storage can be accessed via one of two methods:
+- [Trusted Azure services security mechanism](#trusted-azure-services-security-mechanism)
+- [Shared access signature (SAS)](#sas-url-for-batch-transcription) URI.
+
+You can specify one or multiple audio files when creating a transcription. We recommend that you provide multiple files per request or point to an Azure Blob storage container with the audio files to transcribe. The batch transcription service can handle a large number of submitted transcriptions. The service transcribes the files concurrently, which reduces the turnaround time.
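For instance, a transcription request that points the service at publicly accessible audio files uses the `contentUrls` property, as in the following minimal sketch. The second URL is a hypothetical placeholder, and `locale` and `displayName` are the required properties described in [Create a batch transcription](batch-transcription-create.md).

```json
{
  "contentUrls": [
    "https://crbn.us/hello.wav",
    "https://contoso.example.com/recordings/meeting.wav"
  ],
  "locale": "en-US",
  "displayName": "My Transcription"
}
```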
## Supported audio formats
The batch transcription API supports the following formats:
For stereo audio streams, the left and right channels are split during the transcription. A JSON result file is created for each input audio file. To create an ordered final transcript, use the timestamps that are generated per utterance.
-## Azure Blob Storage example
+## Azure Blob Storage upload
+
+When audio files are located in an [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md) account, you can request transcription of individual audio files or an entire Azure Blob Storage container. You can also [write transcription results](batch-transcription-create.md#destination-container-url) to a Blob container.
+
+> [!NOTE]
+> For blob and container limits, see [batch transcription quotas and limits](speech-services-quotas-and-limits.md#batch-transcription).
+
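The results option linked above is configured on the transcription request itself rather than on the source container. As a hedged sketch based on the linked [destination container URL](batch-transcription-create.md#destination-container-url) section, the request's `properties` would carry a writable container SAS URL, for example:

```json
{
  "properties": {
    "destinationContainerUrl": "https://<storage_account_name>.blob.core.windows.net/<results_container_name>?SAS_TOKEN"
  }
}
```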
+# [Azure portal](#tab/portal)
-Batch transcription can read audio files from a public URI (such as "https://crbn.us/hello.wav") or a [shared access signature (SAS)](../../storage/common/storage-sas-overview.md) URI. You can provide individual audio files, or an entire Azure Blob Storage container. You can also read or write transcription results in a container. This example shows how to transcribe audio files in [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md).
+Follow these steps to create a storage account and upload wav files from your local directory to a new container.
-The [SAS URI](../../storage/common/storage-sas-overview.md) must have `r` (read) and `l` (list) permissions. The storage container must have at most 5GB of audio data and a maximum number of 10,000 blobs. The maximum size for a blob is 2.5GB.
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. <a href="https://portal.azure.com/#create/Microsoft.StorageAccount-ARM" title="Create a Storage account resource" target="_blank">Create a Storage account resource</a> in the Azure portal. Use the same subscription and resource group as your Speech resource.
+1. Select the Storage account.
+1. In the **Data storage** group in the left pane, select **Containers**.
+1. Select **+ Container**.
+1. Enter a name for the new container and select **Create**.
+1. Select the new container.
+1. Select **Upload**.
+1. Choose the files to upload and select **Upload**.
-Follow these steps to create a storage account, upload wav files from your local directory to a new container, and generate a SAS URL that you can use for batch transcriptions.
+# [Azure CLI](#tab/azure-cli)
-1. Set the `RESOURCE_GROUP` environment variable to the name of an existing resource group where the new storage account will be created.
+Follow these steps to create a storage account and upload wav files from your local directory to a new container.
+
+1. Set the `RESOURCE_GROUP` environment variable to the name of an existing resource group where the new storage account will be created. Use the same subscription and resource group as your Speech resource.
```azurecli-interactive set RESOURCE_GROUP=<your existing resource group name>
Follow these steps to create a storage account, upload wav files from your local
az storage blob upload-batch -d <mycontainer> -s . --pattern *.wav ```
-1. Generate a SAS URL with read (r) and list (l) permissions for the container with the [`az storage container generate-sas`](/cli/azure/storage/container#az-storage-container-generate-sas) command. Replace `<mycontainer>` with the name of your container.
++
+## Trusted Azure services security mechanism
+
+This section explains how to set up and limit access to your batch transcription source audio files in an Azure Storage account using the [trusted Azure services security mechanism](../../storage/common/storage-network-security.md#trusted-access-based-on-a-managed-identity).
+
+> [!NOTE]
+> With the trusted Azure services security mechanism, you need to use [Azure Blob storage](../../storage/blobs/storage-blobs-overview.md) to store audio files. Usage of [Azure Files](../../storage/files/storage-files-introduction.md) is not supported.
+
+If you perform all actions in this section, your Storage account will be in the following configuration:
+- Access to all external network traffic is prohibited.
+- Access to Storage account using Storage account key is prohibited.
+- Access to Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited.
+- Access to the selected Speech resource is allowed using the resource [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
+
+In effect, your Storage account becomes completely "locked" and can't be used in any scenario apart from transcribing audio files that were already present when the new configuration was applied. Consider this configuration a model for securing your audio data, and customize it according to your needs.
+
+For example, you may allow traffic from selected public IP addresses and Azure Virtual networks. You may also set up access to your Storage account using [private endpoints](../../storage/common/storage-private-endpoints.md) (see also [this tutorial](../../private-link/tutorial-private-endpoint-storage-portal.md)), re-enable access using the Storage account key, allow access to other Azure trusted services, and so on.
+
+> [!NOTE]
+> Using [private endpoints for Speech](speech-services-private-link.md) isn't required to secure the storage account. You can use a private endpoint for batch transcription API requests, while separately accessing the source audio files from a secure storage account, or the other way around.
+
+By following the steps below, you'll severely restrict access to the storage account. Then you'll assign the minimum permissions that the Speech resource's managed identity needs to access the Storage account.
+
+### Enable system assigned managed identity for the Speech resource
+
+Follow these steps to enable system assigned managed identity for the Speech resource that you will use for batch transcription.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Speech resource.
+1. In the **Resource Management** group in the left pane, select **Identity**.
+1. On the **System assigned** tab, select **On** for the status.
+
+ > [!IMPORTANT]
+ > User assigned managed identity won't meet requirements for the batch transcription storage account scenario. Be sure to enable system assigned managed identity.
+
+1. Select **Save**.
+
+Now the managed identity for your Speech resource can be granted access to your storage account.
+
+### Restrict access to the storage account
+
+Follow these steps to restrict access to the storage account.
+
+> [!IMPORTANT]
+> Upload audio files in a Blob container before locking down the storage account access.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Storage account.
+1. In the **Settings** group in the left pane, select **Configuration**.
+1. Select **Disabled** for **Allow Blob public access**.
+1. Select **Disabled** for **Allow storage account key access**.
+1. Select **Save**.
+
+For more information, see [Prevent anonymous public read access to containers and blobs](/azure/storage/blobs/anonymous-read-access-prevent) and [Prevent Shared Key authorization for an Azure Storage account](/azure/storage/common/shared-key-authorization-prevent).
+
+### Configure Azure Storage firewall
+
+Having restricted access to the Storage account, you need to grant access to specific managed identities. Follow these steps to add access for the Speech resource.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Storage account.
+1. In the **Security + networking** group in the left pane, select **Networking**.
+1. In the **Firewalls and virtual networks** tab, select **Enabled from selected virtual networks and IP addresses**.
+1. Deselect all check boxes.
+1. Make sure **Microsoft network routing** is selected.
+1. Under the **Resource instances** section, select **Microsoft.CognitiveServices/accounts** as the resource type and select your Speech resource as the instance name.
+1. Select **Save**.
+
+ > [!NOTE]
+ > It may take up to 5 min for the network changes to propagate.
+
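If you configure the storage account with an ARM template instead of the portal, the same firewall state is expressed through the account's network rules. The following fragment is a sketch based on the published `Microsoft.Storage/storageAccounts` schema; the tenant, subscription, resource group, and resource names are placeholders, and you should confirm the property names against the API version you deploy with.

```json
{
  "properties": {
    "networkAcls": {
      "defaultAction": "Deny",
      "bypass": "None",
      "resourceAccessRules": [
        {
          "tenantId": "<tenant-id>",
          "resourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<speech-resource-name>"
        }
      ]
    }
  }
}
```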
+Although by now the network access is permitted, the Speech resource can't yet access the data in the Storage account. You need to assign a specific access role for Speech resource managed identity.
+
+### Assign resource access role
+
+Follow these steps to assign the **Storage Blob Data Reader** role to the managed identity of your Speech resource.
+
+> [!IMPORTANT]
+> You need to be assigned the *Owner* role of the Storage account or higher scope (like Subscription) to perform the operation in the next steps. This is because only the *Owner* role can assign roles to others. See details [here](../../role-based-access-control/built-in-roles.md).
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Storage account.
+1. Select **Access Control (IAM)** menu in the left pane.
+1. Select **Add role assignment** in the **Grant access to this resource** tile.
+1. Select **Storage Blob Data Reader** under **Role** and then select **Next**.
+1. Select **Managed identity** under **Members** > **Assign access to**.
+1. Assign the managed identity of your Speech resource and then select **Review + assign**.
+
+ :::image type="content" source="media/storage/storage-identity-access-management-role.png" alt-text="Screenshot of the managed role assignment review.":::
+
+1. After confirming the settings, select **Review + assign**.
+
+Now the Speech resource managed identity has access to the Storage account and can access the audio files for batch transcription.
+
+With system assigned managed identity, you'll use a plain Storage Account URL (no SAS or other additions) when you [create a batch transcription](batch-transcription-create.md) request. For example:
+
+```json
+{
+ "contentContainerUrl": "https://<storage_account_name>.blob.core.windows.net/<container_name>"
+}
+```
+
+You could otherwise specify individual files in the container. For example:
+
+```json
+{
+ "contentUrls": [
+ "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name_1>",
+ "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name_2>"
+ ]
+}
+```
+
+## SAS URL for batch transcription
+
+A shared access signature (SAS) is a URI that grants restricted access to an Azure Storage container. Use it when you want to grant access to your batch transcription files for a specific time range without sharing your storage account key.
+
+> [!TIP]
+> If the container with batch transcription source files should only be accessed by your Speech resource, use the [trusted Azure services security mechanism](#trusted-azure-services-security-mechanism) instead.
+
+# [Azure portal](#tab/portal)
+
+Follow these steps to generate a SAS URL that you can use for batch transcriptions.
+
+1. Complete the steps in [Azure Blob Storage upload](#azure-blob-storage-upload) to create a Storage account and upload audio files to a new container.
+1. Select the new container.
+1. In the **Settings** group in the left pane, select **Shared access tokens**.
+1. Select **+ Container**.
+1. Select **Read** and **List** for **Permissions**.
+
+ :::image type="content" source="media/storage/storage-container-shared-access-signature.png" alt-text="Screenshot of the container SAS URI permissions.":::
+
+1. Enter the start and expiry times for the SAS URI, or leave the defaults.
+1. Select **Generate SAS token and URL**.
+
+# [Azure CLI](#tab/azure-cli)
+
+Follow these steps to generate a SAS URL that you can use for batch transcriptions.
+
+1. Complete the steps in [Azure Blob Storage upload](#azure-blob-storage-upload) to create a Storage account and upload audio files to a new container.
+1. Generate a SAS URL with read (r) and list (l) permissions for the container with the [`az storage container generate-sas`](/cli/azure/storage/container#az-storage-container-generate-sas) command. Choose a new expiry date and replace `<mycontainer>` with the name of your container.
```azurecli-interactive
- az storage container generate-sas -n <mycontainer> --expiry 2022-09-09 --permissions rl --https-only
+ az storage container generate-sas -n <mycontainer> --expiry 2022-10-10 --permissions rl --https-only
    ```

    The previous command returns a SAS token. Append the SAS token to your container blob URL to create a SAS URL. For example: `https://<storage_account_name>.blob.core.windows.net/<container_name>?SAS_TOKEN`.

++

You will use the SAS URL when you [create a batch transcription](batch-transcription-create.md) request. For example:

```json
You will use the SAS URL when you [create a batch transcription](batch-transcrip
} ```
+You could otherwise specify individual files in the container. You must generate and use a different SAS URL with read (r) permissions for each file. For example:
+
+```json
+{
+ "contentUrls": [
+ "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name_1>?SAS_TOKEN_1",
+ "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name_2>?SAS_TOKEN_2"
+ ]
+}
+```
+
## Next steps

- [Batch transcription overview](batch-transcription.md)
cognitive-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-create.md
Previously updated : 09/11/2022 Last updated : 10/21/2022 zone_pivot_groups: speech-cli-rest
With batch transcriptions, you submit the [audio data](batch-transcription-audio
::: zone pivot="speech-cli"
-To create a transcription and connect it to an existing project, use the `spx batch transcription create` command. Construct the request parameters according to the following instructions:
+To create a transcription, use the `spx batch transcription create` command. Construct the request parameters according to the following instructions:
-- Set the required `content` parameter. You can specify either a semi-colon delimited list of individual files, or the SAS URL for an entire container. This property will not be returned in the response. For more information about Azure blob storage and SAS URLs, see [Azure storage for audio files](batch-transcription-audio-data.md#azure-blob-storage-example).
+- Set the required `content` parameter. You can specify either a semi-colon delimited list of individual files, or the URL for an entire container. This property will not be returned in the response. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).
- Set the required `language` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
- Set the required `name` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
You should receive a response body in the following format:
```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/7f4232d5-9873-47a7-a6f7-4a3f00d00dc0",
"model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/7f4232d5-9873-47a7-a6f7-4a3f00d00dc0/files"
}, "properties": { "diarizationEnabled": false,
- "wordLevelTimestampsEnabled": true,
- "displayFormWordLevelTimestampsEnabled": false,
+ "wordLevelTimestampsEnabled": false,
"channels": [ 0, 1
You should receive a response body in the following format:
"punctuationMode": "DictatedAndAutomatic", "profanityFilterMode": "Masked" },
- "lastActionDateTime": "2022-09-10T18:39:07Z",
+ "lastActionDateTime": "2022-10-21T14:21:59Z",
"status": "NotStarted",
- "createdDateTime": "2022-09-10T18:39:07Z",
+ "createdDateTime": "2022-10-21T14:21:59Z",
"locale": "en-US",
- "displayName": "My Transcription"
+ "displayName": "My Transcription",
+ "description": ""
} ```
spx help batch transcription
To create a transcription, use the [CreateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) operation of the [Speech-to-text REST API](rest-speech-to-text.md#transcriptions). Construct the request body according to the following instructions: -- You must set either the `contentContainerUrl` or `contentUrls` property. This property will not be returned in the response. For more information about Azure blob storage and SAS URLs, see [Azure storage for audio files](batch-transcription-audio-data.md#azure-blob-storage-example).
+- You must set either the `contentContainerUrl` or `contentUrls` property. This property will not be returned in the response. For more information about Azure Blob Storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).
- Set the required `locale` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later. - Set the required `displayName` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later.
+- Optionally you can set the `wordLevelTimestampsEnabled` property to `true` to enable word-level timestamps in the transcription results. The default value is `false`. For more information, see [request configuration options](#request-configuration-options).
Make an HTTP POST request using the URI as shown in the following [CreateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
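
For illustration, such a request might look like the following sketch; the key, region, and container SAS URL are placeholders:

```azurecli-interactive
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
  "contentContainerUrl": "https://<storage_account_name>.blob.core.windows.net/<container_name>?SAS_TOKEN",
  "locale": "en-US",
  "displayName": "My Transcription",
  "properties": {
    "wordLevelTimestampsEnabled": true
  }
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"
```
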
You should receive a response body in the following format:
```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/db474955-ab85-4c6c-ba6e-3bfe63d041ba",
"model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/db474955-ab85-4c6c-ba6e-3bfe63d041ba/files"
}, "properties": { "diarizationEnabled": false, "wordLevelTimestampsEnabled": true,
- "displayFormWordLevelTimestampsEnabled": false,
"channels": [ 0, 1
You should receive a response body in the following format:
"punctuationMode": "DictatedAndAutomatic", "profanityFilterMode": "Masked" },
- "lastActionDateTime": "2022-09-10T18:39:07Z",
+ "lastActionDateTime": "2022-10-21T14:18:06Z",
"status": "NotStarted",
- "createdDateTime": "2022-09-10T18:39:07Z",
+ "createdDateTime": "2022-10-21T14:18:06Z",
"locale": "en-US", "displayName": "My Transcription" }
Here are some property options that you can use to configure a transcription whe
| Property | Description | |-|-| |`channels`|An array of channel numbers to process. Channels `0` and `1` are transcribed by default. |
-|`contentContainerUrl`| You can submit individual audio files, or a whole storage container. You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Azure storage for audio files](batch-transcription-audio-data.md#azure-blob-storage-example).|
-|`contentUrls`| You can submit individual audio files, or a whole storage container. You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Azure storage for audio files](batch-transcription-audio-data.md#azure-blob-storage-example).|
-|`destinationContainerUrl`|The result can be stored in an Azure container. Specify the [ad hoc SAS](../../storage/common/storage-sas-overview.md) with write permissions. SAS with stored access policies isn't supported. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted.|
+|`contentContainerUrl`| You can submit individual audio files, or a whole storage container. You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).|
+|`contentUrls`| You can submit individual audio files, or a whole storage container. You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).|
+|`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information, see [Destination container URL](#destination-container-url).|
|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The feature isn't available with stereo recordings.<br/><br/>**Note**: This property is only available with speech-to-text REST API version 3.1.| |`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.| |`model`|You can set the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Using custom models](#using-custom-models).|
Here are some property options that you can use to configure a transcription whe
Batch transcription uses the default base model for the locale that you specify. You don't need to set any properties to use the default base model.
-Optionally, you can set the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model.
-
+Optionally, you can modify the previous [create transcription example](#create-a-batch-transcription) by setting the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model.
::: zone pivot="speech-cli"
spx batch transcription create --name "My Transcription" --language "en-US" --co
```azurecli-interactive curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
- "contentContainerUrl": "https://YourStorageAccountName.blob.core.windows.net/YourContainerName?YourSASToken",
+ "contentUrls": [
+ "https://crbn.us/hello.wav",
+ "https://crbn.us/whatstheweatherlike.wav"
+ ],
"locale": "en-US",
+ "displayName": "My Transcription",
"model": { "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/1aae1070-7972-47e9-a977-87e3b05c457d" },
- "displayName": "My Transcription",
"properties": { "wordLevelTimestampsEnabled": true, },
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
::: zone-end
-To use a Custom Speech model for batch transcription, you need the model's URI. You can retrieve the model location when you create or get a model. The top-level `self` property in the response body is the model's URI. For an example, see the JSON response example in the [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model) guide. A [deployed custom endpoint](how-to-custom-speech-deploy-model.md) isn't needed for the batch transcription service.
+To use a Custom Speech model for batch transcription, you need the model's URI. You can retrieve the model location when you create or get a model. The top-level `self` property in the response body is the model's URI. For an example, see the JSON response example in the [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model) guide. A [custom model deployment endpoint](how-to-custom-speech-deploy-model.md) isn't needed for the batch transcription service.
Batch transcription requests for expired models will fail with a 4xx error. Set the `model` property to a base model or custom model that hasn't yet expired, or omit the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md). +
+## Destination container URL
+
+The transcription result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. In that case, when the transcription job is deleted, the transcription result data is also deleted.
+
+You can store the results of a batch transcription in a writable Azure Blob Storage container by using the `destinationContainerUrl` option in the [batch transcription creation request](#create-a-transcription-job). Note, however, that this option only accepts an [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). The Storage account of the destination container must allow all external traffic.
+
+The [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism) isn't supported for storing transcription results from a Speech resource. If you want to store the transcription results in an Azure Blob Storage container via the Trusted Azure services security mechanism, consider using [Bring-your-own-storage (BYOS)](speech-encryption-of-data-at-rest.md#bring-your-own-storage-byos-for-customization-and-logging). You can secure access to the BYOS-associated Storage account exactly as described in the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism) guide, except that the BYOS Speech resource needs the **Storage Blob Data Contributor** role assignment. The results of batch transcription performed by the BYOS Speech resource are automatically stored in the **TranscriptionData** folder of the **customspeech-artifacts** blob container.
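+
+For illustration, a creation request that writes the results to your own container might look like the following sketch; the file and destination container SAS URLs are placeholders:
+
+```json
+{
+  "contentUrls": [
+    "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name>?SAS_TOKEN"
+  ],
+  "locale": "en-US",
+  "displayName": "My Transcription",
+  "properties": {
+    "destinationContainerUrl": "https://<storage_account_name>.blob.core.windows.net/<destination_container_name>?DESTINATION_SAS_TOKEN"
+  }
+}
+```
+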
+ ## Next steps - [Batch transcription overview](batch-transcription.md)
cognitive-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-get.md
Previously updated : 09/11/2022 Last updated : 10/21/2022 zone_pivot_groups: speech-cli-rest
Depending in part on the request parameters set when you created the transcripti
|`displayPhraseElements`|A list of results with display text for each word of the phrase. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with speech-to-text REST API version 3.1.| |`duration`|The audio duration, ISO 8601 encoded duration.| |`durationInTicks`|The audio duration in ticks (1 tick is 100 nanoseconds).|
-|`itn`|The inverse text normalized (ITN) form of the recognized text. Abbreviations such as "doctor smith" to "dr smith", phone numbers, and other transformations are applied.|
+|`itn`|The inverse text normalized (ITN) form of the recognized text. Abbreviations such as "Doctor Smith" to "Dr Smith", phone numbers, and other transformations are applied.|
|`lexical`|The actual words recognized.| |`locale`|The locale identified from the input audio. The `languageIdentification` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with speech-to-text REST API version 3.1.| |`maskedITN`|The ITN form with profanity masking applied.|
cognitive-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription.md
Previously updated : 09/11/2022 Last updated : 10/21/2022 ms.devlang: csharp
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
Previously updated : 09/16/2022 Last updated : 10/23/2022 ms.devlang: cpp, csharp, java, javascript, python
Visemes vary by language and locale. Each locale has a set of visemes that corre
|6|`j`, `i`, `ɪ` |<img src="media/text-to-speech/viseme-id-6.jpg" width="200" height="200" alt="The mouth position when viseme ID is 6">| |7|`w`, `u`|<img src="media/text-to-speech/viseme-id-7.jpg" width="200" height="200" alt="The mouth position when viseme ID is 7">| |8|`o`|<img src="media/text-to-speech/viseme-id-8.jpg" width="200" height="200" alt="The mouth position when viseme ID is 8">|
-|9|Not supported|<img src="media/text-to-speech/viseme-id-9.jpg" width="200" height="200" alt="The mouth position when viseme ID is 9">|
-|10|Not supported|<img src="media/text-to-speech/viseme-id-10.jpg" width="200" height="200" alt="The mouth position when viseme ID is 10">|
-|11|Not supported|<img src="media/text-to-speech/viseme-id-11.jpg" width="200" height="200" alt="The mouth position when viseme ID is 11">|
+|9|`aʊ`|<img src="media/text-to-speech/viseme-id-9.jpg" width="200" height="200" alt="The mouth position when viseme ID is 9">|
+|10|`ɔɪ`|<img src="media/text-to-speech/viseme-id-10.jpg" width="200" height="200" alt="The mouth position when viseme ID is 10">|
+|11|`aɪ`|<img src="media/text-to-speech/viseme-id-11.jpg" width="200" height="200" alt="The mouth position when viseme ID is 11">|
|12|`h`|<img src="media/text-to-speech/viseme-id-12.jpg" width="200" height="200" alt="The mouth position when viseme ID is 12">| |13|`ɹ`|<img src="media/text-to-speech/viseme-id-13.jpg" width="200" height="200" alt="The mouth position when viseme ID is 13">| |14|`l`|<img src="media/text-to-speech/viseme-id-14.jpg" width="200" height="200" alt="The mouth position when viseme ID is 14">|
Here's an example of the viseme output.
# [2D SVG](#tab/2dsvg)
-The SVG output is a xml string that contains the animation.
+The SVG output is an XML string that contains the animation.
Render the SVG animation along with the synthesized speech to see the mouth movement. ```xml
cognitive-services How To Use Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-logging.md
Logging to file is an optional feature for the Speech SDK. During development, logging provides additional information and diagnostics from the Speech SDK's core components. It can be enabled by setting the property `Speech_LogFilename` on a speech configuration object to the location and name of the log file. Logging is handled by a static class in the Speech SDK's native library. You can turn on logging for any Speech SDK recognizer or synthesizer instance. All instances in the same process write log entries to the same log file.
-> [!NOTE]
-> Logging is available in all supported Speech SDK programming languages, with the exception of JavaScript.
- ## Sample The log file name is specified on a configuration object. Taking the `SpeechConfig` as an example and assuming that you have created an instance called `config`:
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/quickstart.md
Previously updated : 08/15/2022 Last updated : 10/21/2022 ms.devlang: csharp, java, javascript, python
container-apps Tutorial Java Quarkus Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md
Title: 'Tutorial: Access data with managed identity in Java using Service Connector' description: Secure Azure Database for PostgreSQL connectivity with managed identity from a sample Java Quarkus app, and deploy it to Azure Container Apps. ms.devlang: java-+ -+ Last updated 09/26/2022
What you will learn:
## 1. Prerequisites
-* [Azure CLI](/cli/azure/overview). This quickstart requires that you are running the latest [edge build of Azure CLI](https://github.com/Azure/azure-cli/blob/dev/doc/try_new_features_before_release.md). [Download and install the edge builds](https://github.com/Azure/azure-cli#edge-builds) for your platform.
+* [Azure CLI](/cli/azure/install-azure-cli) version 2.41.0 or higher.
* [Git](https://git-scm.com/) * [Java JDK](/azure/developer/java/fundamentals/java-support-on-azure) * [Maven](https://maven.apache.org)
cosmos-db Performance Tips Java Sdk V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-java-sdk-v4.md
Azure Cosmos DB is a fast and flexible distributed database that scales seamless
So if you're asking "How can I improve my database performance?" consider the following options: ## Networking-
-* **Connection mode: Use Direct mode**
-
-Java SDK default connection mode is direct. You can configure the connection mode in the client builder using the *directMode()* or *gatewayMode()* methods, as shown below. To configure either mode with default settings, call either method without arguments. Otherwise, pass a configuration settings class instance as the argument (*DirectConnectionConfig* for *directMode()*, *GatewayConnectionConfig* for *gatewayMode()*.). To learn more about different connectivity options, see the [connectivity modes](sdk-connection-modes.md) article.
-
-# [Async](#tab/api-async)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceClientConnectionModeAsync)]
-
-# [Sync](#tab/api-sync)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceClientConnectionModeSync)]
-
-
-
-The *directMode()* method has an additional override, for the following reason. Control plane operations such as database and container CRUD *always* utilize Gateway mode; when the user has configured Direct mode for data plane operations, control plane operations use default Gateway mode settings. This suits most users. However, users who want Direct mode for data plane operations as well as tunability of control plane Gateway mode parameters can use the following *directMode()* override:
-
-# [Async](#tab/api-async)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Async API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceClientDirectOverrideAsync)]
-
-# [Sync](#tab/api-sync)
-
-Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-
-[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceClientDirectOverrideSync)]
-
-
- <a name="collocate-clients"></a> * **Collocate clients in same Azure region for performance** <a id="same-region"></a>
Limitations: accelerated networking must be supported on the VM OS, and can only
Please see the [Windows](../../virtual-network/create-vm-accelerated-networking-powershell.md) and [Linux](../../virtual-network/create-vm-accelerated-networking-cli.md) instructions for more details.
+## Tuning direct and gateway connection configuration
+
+To optimize Direct and Gateway mode connection configurations, see how to [tune connection configurations for Java SDK v4](tune-connection-configurations-java-sdk-v4.md).
+ ## SDK usage * **Install the most recent SDK**
Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
-* **Tuning ConnectionPolicy**
-
-By default, Direct mode Azure Cosmos DB requests are made over TCP when using Azure Cosmos DB Java SDK v4. Internally Direct mode uses a special architecture to dynamically manage network resources and get the best performance.
-
-In Azure Cosmos DB Java SDK v4, Direct mode is the best choice to improve database performance with most workloads.
-
-* ***Overview of Direct mode***
-<a id="direct-connection"></a>
--
-The client-side architecture employed in Direct mode enables predictable network utilization and multiplexed access to Azure Cosmos DB replicas. The diagram above shows how Direct mode routes client requests to replicas in the Azure Cosmos DB backend. The Direct mode architecture allocates up to 130 **Channels** on the client side per DB replica. A Channel is a TCP connection preceded by a request buffer, which is 30 requests deep. The Channels belonging to a replica are dynamically allocated as needed by the replica's **Service Endpoint**. When the user issues a request in Direct mode, the **TransportClient** routes the request to the proper service endpoint based on the partition key. The **Request Queue** buffers requests before the Service Endpoint.
-
-* ***Configuration options for Direct mode***
-
-If non-default Direct mode behavior is desired, create a *DirectConnectionConfig* instance and customize its properties, then pass the customized property instance to the *directMode()* method in the Azure Cosmos DB client builder.
-
-These configuration settings control the behavior of the underlying Direct mode architecture discussed above.
-
-As a first step, use the following recommended configuration settings below. These *DirectConnectionConfig* options are advanced configuration settings which can affect SDK performance in unexpected ways; we recommend users avoid modifying them unless they feel very comfortable in understanding the tradeoffs and it is absolutely necessary. Please contact the [Azure Cosmos DB team](mailto:CosmosDBPerformanceSupport@service.microsoft.com) if you run into issues on this particular topic.
-
-| Configuration option | Default |
-| :: | :--: |
-| idleConnectionTimeout | "PT0" |
-| maxConnectionsPerEndpoint | "130" |
-| connectTimeout | "PT5S" |
-| idleEndpointTimeout | "PT1H" |
-| maxRequestsPerConnection | "30" |
- * **Scale out your client-workload** If you are testing at high throughput levels, the client application may become the bottleneck due to the machine capping out on CPU or network utilization. If you reach this point, you can continue to push the Azure Cosmos DB account further by scaling out your client applications across multiple servers.
Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
The latter is supported but will add latency to your application; the SDK must parse the item and extract the partition key. + ## Query operations For query operations see the [performance tips for queries](performance-tips-query-sdk.md?pivots=programming-language-java).
x-ms-retry-after-ms :100
The SDKs all implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is being accessed concurrently by multiple clients, the next retry will succeed.
-If you have more than one client cumulatively operating consistently above the request rate, the default retry count currently set to 9 internally by the client may not suffice; in this case, the client throws a *CosmosClientException* with status code 429 to the application. The default retry count can be changed by using setRetryOptions on the ConnectionPolicy instance. By default, the *CosmosClientException* with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This occurs even when the current retry count is less than the max retry count, be it the default of 9 or a user-defined value.
+If you have more than one client cumulatively operating consistently above the request rate, the default retry count currently set to 9 internally by the client may not suffice; in this case, the client throws a *CosmosClientException* with status code 429 to the application. The default retry count can be changed by using `setMaxRetryAttemptsOnThrottledRequests()` on the `ThrottlingRetryOptions` instance. By default, the *CosmosClientException* with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This occurs even when the current retry count is less than the max retry count, be it the default of 9 or a user-defined value.
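
For illustration, a minimal sketch of raising the retry budget (assumes the com.azure:azure-cosmos dependency; the endpoint, key, and retry values are placeholders or example values):

```java
import java.time.Duration;

import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.ThrottlingRetryOptions;

public class RetryTuningExample {
    public static void main(String[] args) {
        // Raise the throttled-request retry budget above the defaults (9 retries, 30 seconds cumulative wait).
        ThrottlingRetryOptions retryOptions = new ThrottlingRetryOptions();
        retryOptions.setMaxRetryAttemptsOnThrottledRequests(15);  // example value
        retryOptions.setMaxRetryWaitTime(Duration.ofSeconds(60)); // example value

        CosmosClient client = new CosmosClientBuilder()
            .endpoint("<your-account-endpoint>") // placeholder
            .key("<your-account-key>")           // placeholder
            .throttlingRetryOptions(retryOptions)
            .buildClient();

        client.close();
    }
}
```
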
While the automated retry behavior helps to improve resiliency and usability for most applications, it might be at odds with your goals when doing performance benchmarks, especially when measuring latency. The client-observed latency will spike if the experiment hits the server throttle and causes the client SDK to silently retry. To avoid latency spikes during performance experiments, measure the charge returned by each operation and ensure that requests are operating below the reserved request rate. For more information, see [Request units](../request-units.md).
cosmos-db Sdk Connection Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-connection-modes.md
There are two factors that dictate the number of TCP connections the SDK will op
Each established connection can serve a configurable number of concurrent operations. If the volume of concurrent operations exceeds this threshold, new connections will be open to serve them, and it's possible that for a physical partition, the number of open connections exceeds the steady state number. This behavior is expected for workloads that might have spikes in their operational volume. For the .NET SDK this configuration is set by [CosmosClientOptions.MaxRequestsPerTcpConnection](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.maxrequestspertcpconnection), and for the Java SDK you can customize using [DirectConnectionConfig.setMaxRequestsPerConnection](/java/api/com.azure.cosmos.directconnectionconfig.setmaxrequestsperconnection).
-By default, connections are permanently maintained to benefit the performance of future operations (opening a connection has computational overhead). There might be some scenarios where you might want to close connections that are unused for some time understanding that this might affect future operations slightly. For the .NET SDK this configuration is set by [CosmosClientOptions.IdleTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout), and for the Java SDK you can customize using [DirectConnectionConfig.setIdleConnectionTimeout](/java/api/com.azure.cosmos.directconnectionconfig.setidleconnectiontimeout). It isn't recommended to set these configurations to low values as it might cause connections to be frequently closed and affect overall performance.
+By default, connections are permanently maintained to benefit the performance of future operations (opening a connection has computational overhead). There might be scenarios where you want to close connections that are unused for some time, understanding that this might affect future operations slightly. For the .NET SDK this configuration is set by [CosmosClientOptions.IdleTcpConnectionTimeout](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout), and for the Java SDK you can customize using [DirectConnectionConfig.setIdleConnectionTimeout](/java/api/com.azure.cosmos.directconnectionconfig.setidleconnectiontimeout). It isn't recommended to set these configurations to low values as it might cause connections to be frequently closed and affect overall performance.
### Language specific implementation details For further implementation details regarding a language see: * [.NET SDK implementation information](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/docs/SdkDesign.md)
-* [Java SDK direct mode information](performance-tips-java-sdk-v4.md#direct-connection)
+* [Java SDK direct mode information](tune-connection-configurations-java-sdk-v4.md#direct-connection-mode)
## Next steps
cosmos-db Tune Connection Configurations Java Sdk V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tune-connection-configurations-java-sdk-v4.md
+
+ Title: Connection configurations for Azure Cosmos DB Java SDK v4
+description: Learn how to tune connection configurations to improve Azure Cosmos DB database performance for Java SDK v4
+++
+ms.devlang: java
+ Last updated : 04/22/2022++++
+# Tune connection configurations for Azure Cosmos DB Java SDK v4
+
+> [!div class="op_single_selector"]
+> * [Java SDK v4](performance-tips-java-sdk-v4.md)
+> * [Async Java SDK v2](performance-tips-async-java.md)
+> * [Sync Java SDK v2](performance-tips-java.md)
+> * [.NET SDK v3](performance-tips-dotnet-sdk-v3.md)
+> * [.NET SDK v2](performance-tips.md)
+>
+
+> [!IMPORTANT]
+> The performance tips in this article are for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
+
+Azure Cosmos DB is a fast and flexible distributed database that scales seamlessly with guaranteed latency and throughput. You do not have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call or SDK method call. However, because Azure Cosmos DB is accessed via network calls there are connection configurations you can tune to achieve peak performance when using Azure Cosmos DB Java SDK v4.
+
+## Connection configuration
+
+> [!NOTE]
+> In Azure Cosmos DB Java SDK v4, *Direct mode* is the best choice to improve database performance with most workloads.
+
+To learn more about different connectivity options, see the [connectivity modes](sdk-connection-modes.md) article.
+
+### Direct connection mode
+
+The Java SDK's default connection mode is Direct. In Direct mode, Azure Cosmos DB requests are made over TCP when using Azure Cosmos DB Java SDK v4. Internally, Direct mode uses a special architecture to dynamically manage network resources and get the best performance. The client-side architecture employed in Direct mode enables predictable network utilization and multiplexed access to Azure Cosmos DB replicas. To learn more about the architecture, see the [Direct mode connection architecture](sdk-connection-modes.md#direct-mode).
+
+You can configure the connection mode in the client builder using the *directMode()* method as shown below. To configure Direct mode with default settings, call the `directMode()` method without arguments. To customize Direct mode connection settings, pass a *DirectConnectionConfig* instance to the `directMode()` method.
+
+# [Async](#tab/api-async)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceClientConnectionModeAsync)]
+
+# [Sync](#tab/api-sync)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceClientConnectionModeSync)]
+
+
+
+#### Customizing direct connection mode
+
+If non-default Direct mode behavior is desired, create a *DirectConnectionConfig* instance and customize its properties, then pass the customized property instance to the *directMode()* method in the Azure Cosmos DB client builder.
+
+These configuration settings control the behavior of the underlying Direct mode architecture discussed above.
+
+As a first step, use the recommended configuration settings in the following table. These *DirectConnectionConfig* options are advanced configuration settings that can affect SDK performance in unexpected ways; we recommend that users avoid modifying them unless they're very comfortable with the tradeoffs and it's absolutely necessary. Contact the [Azure Cosmos DB team](mailto:CosmosDBPerformanceSupport@service.microsoft.com) if you run into issues on this particular topic.
+
+| Configuration option | Default | Recommended | Details |
+| :: | :--: | :: | :--: |
+| idleConnectionTimeout | "PT0" (ZERO) | "PT0" (ZERO) | This represents the idle connection timeout duration for a *single connection* to an endpoint/backend node (representing a replica). By default, SDK doesn't automatically close idle connections to the backend nodes. |
+| idleEndpointTimeout | "PT1H" | "PT1H" | This represents the idle connection timeout duration for the *connection pool* for an endpoint/backend node (representing a replica). By default, if there are no incoming requests to a specific endpoint/backend node, SDK will close all the connections in the connection pool to that endpoint/backend node after 1 hour to save network resources and I/O cost. For sparse or sporadic traffic pattern, we recommend setting this value to a higher number like 6 hours, 12 hours or even 24 hours, so that SDK will not have to open the connections frequently. However, this will utilize the network resources and will have higher number of connections kept open at any given time. If this is set to ZERO, idle endpoint check will be disabled. |
+| maxConnectionsPerEndpoint | "130" | "130" | This represents the upper bound size of the *connection pool* for an endpoint/backend node (representing a replica). SDK creates connections to endpoint/backend node on-demand and based on incoming concurrent requests. By default, if required, SDK will create maximum 130 connections to an endpoint/backend node. (NOTE: SDK doesn't create these 130 connections upfront). |
+| maxRequestsPerConnection | "30" | "30" | This represents the upper bound size of the maximum number of requests that can be queued on a *single connection* for a specific endpoint/backend node (representing a replica). SDK queues requests to a single connection to an endpoint/backend node on-demand and based on incoming concurrent requests. By default, if required, SDK will queue maximum 30 requests to a single connection for a specific endpoint/backend node. (NOTE: SDK doesn't queue these 30 requests to a single connection upfront). |
+| connectTimeout | "PT5S" | "~PT1S" | This represents the connection establishment timeout duration for a *single connection* to be established with an endpoint/backend node. By default SDK will wait for maximum 5 seconds for connection establishment before throwing an error. TCP connection establishment uses [multi-step handshake](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Protocol_operation) which increases latency of the connection establishment time, hence, customers are recommended to set this value according to their network bandwidth and environment settings. NOTE: This recommendation of ~PT1S is only for applications deployed in colocated regions of their Cosmos DB accounts. |
+| networkRequestTimeout | "PT5S" | "PT5S" | This represents the network timeout duration for a *single request*. SDK will wait maximum for this duration to consume a service response after the request has been written to the network connection. SDK only allows values between 5 seconds (min) and 10 seconds (max). Setting a value too high can result in fewer retries and reduce chances of success by retries. |
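+
+For illustration, a minimal sketch of customizing these settings (assumes the com.azure:azure-cosmos dependency; the endpoint, key, and adjusted timeouts are placeholders or example values):
+
+```java
+import java.time.Duration;
+
+import com.azure.cosmos.CosmosAsyncClient;
+import com.azure.cosmos.CosmosClientBuilder;
+import com.azure.cosmos.DirectConnectionConfig;
+
+public class DirectModeTuningExample {
+    public static void main(String[] args) {
+        // Start from the defaults and adjust only what the workload needs.
+        DirectConnectionConfig directConfig = DirectConnectionConfig.getDefaultConfig();
+        directConfig.setIdleEndpointTimeout(Duration.ofHours(6)); // example value for sparse or sporadic traffic
+        directConfig.setConnectTimeout(Duration.ofSeconds(1));    // ~PT1S is only appropriate for colocated regions
+
+        CosmosAsyncClient client = new CosmosClientBuilder()
+            .endpoint("<your-account-endpoint>") // placeholder
+            .key("<your-account-key>")           // placeholder
+            .directMode(directConfig)
+            .buildAsyncClient();
+
+        client.close();
+    }
+}
+```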
++
+### Gateway Connection mode
+
+Control plane operations such as database and container CRUD *always* utilize Gateway mode. Even when the user has configured Direct mode for data plane operations, control plane and metadata operations use default Gateway mode settings. This suits most users. However, users who want Direct mode for data plane operations as well as tunability of control plane Gateway mode parameters can use the following *directMode()* override:
+
+# [Async](#tab/api-async)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Async API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/async/SampleDocumentationSnippetsAsync.java?name=PerformanceClientDirectOverrideAsync)]
+
+# [Sync](#tab/api-sync)
+
+Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
+
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/documentationsnippets/sync/SampleDocumentationSnippets.java?name=PerformanceClientDirectOverrideSync)]
+
+
+
+#### Customizing gateway connection mode
+
+If non-default Gateway mode behavior is desired, create a *GatewayConnectionConfig* instance and customize its properties, then pass the customized property instance to the above *directMode()* override method or *gatewayMode()* method in the Azure Cosmos DB client builder.
+
+As a first step, use the recommended configuration settings in the following table. These *GatewayConnectionConfig* options are advanced configuration settings that can affect SDK performance in unexpected ways; we recommend that users avoid modifying them unless they're very comfortable with the tradeoffs and it's absolutely necessary. Contact the [Azure Cosmos DB team](mailto:CosmosDBPerformanceSupport@service.microsoft.com) if you run into issues on this particular topic.
+
+| Configuration option | Default | Recommended | Details |
+| :: | :--: | :: | :--: |
+| maxConnectionPoolSize | "1000" | "1000" | This represents the upper bound of the connection pool size for the underlying HTTP client, which is the maximum number of connections that SDK will create for requests going to Gateway mode. SDK reuses these connections when sending requests to the Gateway. |
+| idleConnectionTimeout | "PT60S" | "PT60S" | This represents the idle connection timeout duration for a *single connection* to the Gateway. After this time, the connection will be automatically closed and will be released back to connection pool for reusability. |
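+
+For illustration, a minimal sketch that tunes Gateway settings while keeping Direct mode for data plane operations (assumes the com.azure:azure-cosmos dependency; the endpoint and key are placeholders):
+
+```java
+import java.time.Duration;
+
+import com.azure.cosmos.CosmosClient;
+import com.azure.cosmos.CosmosClientBuilder;
+import com.azure.cosmos.DirectConnectionConfig;
+import com.azure.cosmos.GatewayConnectionConfig;
+
+public class GatewayTuningExample {
+    public static void main(String[] args) {
+        GatewayConnectionConfig gatewayConfig = GatewayConnectionConfig.getDefaultConfig();
+        gatewayConfig.setMaxConnectionPoolSize(1000);
+        gatewayConfig.setIdleConnectionTimeout(Duration.ofSeconds(60));
+
+        // Direct mode for data plane operations, with tuned Gateway settings for control plane operations.
+        CosmosClient client = new CosmosClientBuilder()
+            .endpoint("<your-account-endpoint>") // placeholder
+            .key("<your-account-key>")           // placeholder
+            .directMode(DirectConnectionConfig.getDefaultConfig(), gatewayConfig)
+            .buildClient();
+
+        client.close();
+    }
+}
+```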
++
+## Next steps
+
+To learn more about performance tips for Java SDK, see [Performance tips for Azure Cosmos DB Java SDK v4](performance-tips-java-sdk-v4.md).
+
+To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
+
+Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cost-management-billing Ea Transfers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-transfers.md
When you request an account transfer with a support request, provide the followi
Other points to keep in mind before an account transfer: - Approval from a full EA Administrator, not a read-only EA administrator, is required for the target and source enrollment.
- - If you have only UPN (User Principal Name) entities configured as full EA administrators without access to e-mail, you must perform one of the following actions:
- - Create a temporary full EA administrator account in the EA portal
- &mdash; Or &mdash;
- - Provide EA portal screenshot evidence of a user account associated with the UPN account
+ - If you have only UPN (User Principal Name) entities configured as full EA administrators without access to e-mail, you must **either** create a temporary full EA administrator account in the EA portal **or** provide EA portal screenshot evidence of a user account associated with the UPN account.
- You should consider an enrollment transfer if an account transfer doesn't meet your requirements. - Your account transfer moves all services and subscriptions related to the specific accounts. - Your transferred account appears inactive under the source enrollment and appears active under the target enrollment when the transfer is complete.
cost-management-billing Mca Setup Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-setup-account.md
Before you start the setup, we recommend you do the following actions:
## Access required to complete the setup
-To complete the setup, you need the following access:
+To complete the setup, you need both of these roles:
-- Owner of the billing account that was created when the Microsoft Customer Agreement was signed. To learn more about billing accounts, see [Your billing account](../understand/mca-overview.md#your-billing-account).
-ΓÇö And ΓÇö
+- Owner of the billing account that was created when the Microsoft Customer Agreement was signed. To learn more about billing accounts, see [Your billing account](../understand/mca-overview.md#your-billing-account).
- Enterprise administrator on the enrollment that is renewed. ### Start migration and get permission needed to complete setup
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md
Previously updated : 09/01/2022 Last updated : 10/23/2022 # Copy and transform data in Azure Blob Storage by using Azure Data Factory or Azure Synapse Analytics
In this case, all files that were sourced under `/data/sales` are moved to `/bac
**Filter by last modified:** You can filter which files you process by specifying a date range of when they were last modified. All datetimes are in UTC.
-**Enable change data capture:** If true, you will get new or changed files only from the last run. Initial load of full snapshot data will always be gotten in the first run, followed by capturing new or changed files only in next runs. For more details, see [Change data capture](#change-data-capture-preview).
+**Enable change data capture:** If true, you will get new or changed files only from the last run. A full snapshot of the data is always loaded in the first run, followed by capturing new or changed files only in subsequent runs.
:::image type="content" source="media/data-flow/enable-change-data-capture.png" alt-text="Screenshot showing Enable change data capture.":::
To learn details about the properties, check [Delete activity](delete-activity.m
] ```
-## Change data capture (preview)
+## Change data capture
-Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture (Preview)** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice.
+Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice. Please refer to [Change Data Capture](concepts-change-data-capture.md) for details.
-Make sure you keep the pipeline and activity name unchanged, so that the checkpoint can always be recorded from the last run to get changes from there. If you change your pipeline name or activity name, the checkpoint will be reset, and you will start from the beginning in the next run.
-
-When you debug the pipeline, the **Enable change data capture (Preview)** works as well. Be aware that the checkpoint will be reset when you refresh your browser during the debug run. After you are satisfied with the result from debug run, you can publish and trigger the pipeline. It will always start from the beginning regardless of the previous checkpoint recorded by debug run.
-
-In the monitoring section, you always have the chance to rerun a pipeline. When you are doing so, the changes are always gotten from the checkpoint record in your selected pipeline run.
## Next steps
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
Settings specific to Azure SQL Database are available in the **Source Options**
**Incremental date column**: When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table.
-**Enable native change data capture(Preview)**: Use this option to tell ADF to only process delta data captured by [SQL change data capture technology](https://learn.microsoft.com/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, the delta data including row insert, update and deletion will be loaded automatically without any incremental date column required. You need to [enable change data capture](https://learn.microsoft.com/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on Azure SQL DB before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture).
+**Enable native change data capture(Preview)**: Use this option to tell ADF to only process delta data captured by [SQL change data capture technology](/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, the delta data including row insert, update and deletion will be loaded automatically without any incremental date column required. You need to [enable change data capture](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on Azure SQL DB before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture).
**Start reading from beginning**: Setting this option with incremental extract will instruct ADF to read all rows on first execution of a pipeline with incremental extract turned on.
derivedColumn1 sink(allowSchemaDrift: true,
### Known limitation:
-* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](https://learn.microsoft.com/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
+* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
## Next steps
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-managed-instance.md
The below table lists the properties supported by Azure SQL Managed Instance sou
| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel | | Enable incremental extract | Use this option to tell ADF to only process rows that have changed since the last time that the pipeline executed. | No | - |- | | Incremental date column | When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table. | No | - |- |
-| Enable native change data capture(Preview) | Use this option to tell ADF to only process delta data captured by [SQL change data capture technology](https://learn.microsoft.com/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, the delta data including row insert, update and deletion will be loaded automatically without any incremental date column required. You need to [enable change data capture](https://learn.microsoft.com/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on Azure SQL MI before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture). | No | - |- |
+| Enable native change data capture(Preview) | Use this option to tell ADF to only process delta data captured by [SQL change data capture technology](/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, the delta data including row insert, update and deletion will be loaded automatically without any incremental date column required. You need to [enable change data capture](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on Azure SQL MI before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture). | No | - |- |
| Start reading from beginning | Setting this option with incremental extract will instruct ADF to read all rows on first execution of a pipeline with incremental extract turned on. | No | - |- |
derivedColumn1 sink(allowSchemaDrift: true,
### Known limitation:
-* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](https://learn.microsoft.com/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
+* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
## Next steps
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sql-server.md
The below table lists the properties supported by SQL Server source. You can edi
| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel | | Enable incremental extract | Use this option to tell ADF to only process rows that have changed since the last time that the pipeline executed. | No | - |- | | Incremental date column | When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table. | No | - |- |
-| Enable native change data capture(Preview) | Use this option to tell ADF to only process delta data captured by [SQL change data capture technology](https://learn.microsoft.com/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, the delta data including row insert, update and deletion will be loaded automatically without any incremental date column required. You need to [enable change data capture](https://learn.microsoft.com/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on SQL Server before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture). | No | - |- |
+| Enable native change data capture(Preview) | Use this option to tell ADF to only process delta data captured by [SQL change data capture technology](/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, the delta data including row insert, update and deletion will be loaded automatically without any incremental date column required. You need to [enable change data capture](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on SQL Server before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture). | No | - |- |
| Start reading from beginning | Setting this option with incremental extract will instruct ADF to read all rows on first execution of a pipeline with incremental extract turned on. | No | - |- |
derivedColumn1 sink(allowSchemaDrift: true,
### Known limitation:
-* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](https://learn.microsoft.com/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
+* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
## Troubleshoot connection issues
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
Follow these steps to get started:
5. Save and run. If you used the YAML, it gets triggered every time the main branch is updated.
+> [!NOTE]
+> The generated artifacts already contain pre- and post-deployment scripts for the triggers, so it isn't necessary to add these manually.
+ ## Next steps Learn more about continuous integration and delivery in Data Factory:
data-factory Control Flow Append Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-append-variable-activity.md
Previously updated : 09/09/2021 Last updated : 10/23/2022 # Append Variable activity in Azure Data Factory and Synapse Analytics
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-expression-language-functions.md
Corporation
### Escaping single quote character
-Expression functions use single quote for string value parameters. Use two single quotes to escape a ' character in string functions. For example, expression `@concat('Baba', '''s ', 'book store')` will return below result.
+Expression functions use single quote for string value parameters. Use two single quotes to escape a `'` character in string functions. For example, expression `@concat('Baba', '''s ', 'book store')` will return below result.
```
Baba's book store
```
data-factory Control Flow Until Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-until-activity.md
# Until activity in Azure Data Factory and Synapse Analytics [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-The Until activity provides the same functionality that a do-until looping structure provides in programming languages. It executes a set of activities in a loop until the condition associated with the activity evaluates to true. You can specify a timeout value for the until activity.
+The Until activity provides the same functionality that a do-until looping structure provides in programming languages. It executes a set of activities in a loop until the condition associated with the activity evaluates to true. If an inner activity fails, the Until activity does not stop. You can specify a timeout value for the Until activity.
## Create an Until activity with UI To use an Until activity in a pipeline, complete the following steps:
-1. Search for _Until_ in the pipeline Activities pane, and drag a Set Variable activity to the pipeline canvas.
+1. Search for _Until_ in the pipeline Activities pane, and drag an Until activity to the pipeline canvas.
1. Select the Until activity on the canvas if it is not already selected, and its **Settings** tab, to edit its details. :::image type="content" source="media/control-flow-until-activity/until-activity.png" alt-text="Shows the Settings tab of the Until activity in the pipeline canvas.":::
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-troubleshoot-guide.md
The following table applies to Azure Batch.
### SSL error when linked service using HDInsight ESP cluster -- **Message**: `Failed to connect to HDInsight cluster: 'ERROR [HY000] [Microsoft][DriverSupport] (1100) SSL certificate verification failed because the certificate is missing or incorrect.`
+- **Message**: `Failed to connect to HDInsight cluster: 'ERROR [HY000] [Microsoft][DriverSupport] (1100) SSL certificate verification failed because the certificate is missing or incorrect.'`
- **Cause**: The issue is most likely related with System Trust Store.
If the HDI activity is stuck in preparing for cluster, follow the guidelines below:
### Error code: 2108 -- **Message**: `Error calling the endpoint '<URL>'. Response status code: 'NA - Unknown'. More details: Exception message: 'NA - Unknown [ClientSideException] Invalid Url:<URL>. Please verify Url or integration runtime is valid and retry. Localhost URLs are allowed only with SelfHosted Integration Runtime`
+- **Message**: `Error calling the endpoint '<URL>'. Response status code: 'NA - Unknown'. More details: Exception message: 'NA - Unknown [ClientSideException] Invalid Url: <URL>. Please verify Url or integration runtime is valid and retry. Localhost URLs are allowed only with SelfHosted Integration Runtime'`
- **Cause**: Unable to reach the URL provided. This can occur because there was a network connection issue, the URL was unresolvable, or a localhost URL was being used on an Azure integration runtime.
data-factory Data Flow Troubleshoot Connector Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-connector-format.md
You can try to use copy activities to unblock this issue.
#### Symptoms
-Your Azure SQL Database can work well in the data copy, dataset preview-data, and test-connection in the linked service, but it fails when the same Azure SQL Database is used as a source or sink in the data flow with error like `Cannot connect to SQL database: 'jdbc:sqlserver://powerbasenz.database.windows.net;..., Please check the linked service configuration is correct, and make sure the SQL database firewall allows the integration runtime to access`
+Your Azure SQL Database can work well in the data copy, dataset preview-data, and test-connection in the linked service, but it fails when the same Azure SQL Database is used as a source or sink in the data flow with error like `Cannot connect to SQL database: 'jdbc:sqlserver://powerbasenz.database.windows.net;..., Please check the linked service configuration is correct, and make sure the SQL database firewall allows the integration runtime to access.'`
#### Cause
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
New-AzResource -ApiVersion "${apiVersion}" -ResourceId "${integrationRuntimeReso
The following services have native private endpoint support. They can be connected through private link from a Data Factory managed virtual network:
+- Azure Databricks
- Azure Functions (Premium plan)
- Azure Key Vault
- Azure Machine Learning
data-factory Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-using-azure-monitor.md
Previously updated : 09/02/2021 Last updated : 10/22/2022 # Monitor and Alert Data Factory by using Azure Monitor
Last updated 09/02/2021
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)] Cloud applications are complex and have many moving parts. Monitors provide data to help ensure that your applications stay up and running in a healthy state. Monitors also help you avoid potential problems and troubleshoot past ones. You can use monitoring data to gain deep insights about your applications. This knowledge helps you improve application performance and maintainability. It also helps you automate actions that otherwise require manual intervention. Azure Monitor provides base-level infrastructure metrics and logs for most Azure services. Azure diagnostic logs are emitted by a resource and provide rich, frequent data about the operation of that resource. Azure Data Factory (ADF) can write diagnostic logs in Azure Monitor. For more information, see [Azure Monitor overview](../azure-monitor/overview.md). ## Keeping Azure Data Factory metrics and pipeline-run data
Data Factory stores pipeline-run data for only 45 days. Use Azure Monitor if you
* **Event Hub**: Stream the logs to Azure Event Hubs. The logs become input to a partner service/custom analytics solution like Power BI.
* **Log Analytics**: Analyze the logs with Log Analytics. The Data Factory integration with Azure Monitor is useful in the following scenarios:
  * You want to write complex queries on a rich set of metrics that are published by Data Factory to Monitor. You can create custom alerts on these queries via Monitor.
- * You want to monitor across data factories. You can route data from multiple data factories to a single Monitor workspace.
-
-You can also use a storage account or event-hub namespace that isn't in the subscription of the resource that emits logs. The user who configures the setting must have appropriate Azure role-based access control (Azure RBAC) access to both subscriptions.
-
+ - You want to monitor across data factories. You can route data from multiple data factories to a single Monitor workspace.
+* **Partner Solution**: Diagnostic logs can be sent to partner solutions through integration. For potential partner integrations, see the [partner solutions overview](/azure/partner-solutions/overview).
+ You can also use a storage account or event-hub namespace that isn't in the subscription of the resource that emits logs. The user who configures the setting must have appropriate Azure role-based access control (Azure RBAC) access to both subscriptions.
## Next steps - [Azure Data Factory metrics and alerts](monitor-metrics-alerts.md) - [Monitor and manage pipelines programmatically](monitor-programmatically.md)
data-factory Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/plan-manage-costs.md
Title: Plan to manage costs for Azure Data Factory description: Learn how to plan for and manage costs for Azure Data Factory by using cost analysis in the Azure portal.--++ Previously updated : 08/18/2022 Last updated : 10/21/2022 # Plan to manage costs for Azure Data Factory
Cost analysis in Cost Management supports most Azure account types, but not all
## Estimate costs before using Azure Data Factory
-Use the [ADF pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=data-factory) to get an estimate of the cost of running your ETL workload in Azure Data Factory. .To use the calculator, you have to input details such as number of activity runs, number of data integration unit hours, type of compute used for Data Flow, core count, instance count, execution duration, and etc.
+Use the [ADF pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=data-factory) to get an estimate of the cost of running your ETL workload in Azure Data Factory. To use the calculator, you have to input details such as the number of activity runs, number of data integration unit hours, type of compute used for Data Flow, core count, instance count, and execution duration.
-One of the commonly asked questions for the pricing calculator is what values should be used as inputs. During the proof-of-concept phase, you can conduct trial runs using sample datasets to understand the consumption for various ADF meters. Then based on the consumption for the sample dataset, you can project out the consumption for the full dataset and operationalization schedule.
+One of the commonly asked questions for the pricing calculator is what values should be used as inputs. During the proof-of-concept phase, you can conduct trial runs using sample datasets to understand the consumption for various ADF meters. Then based on the consumption for the sample dataset, you can project out the consumption for the full dataset and operational schedule.
> [!NOTE] > The prices used in this example below are hypothetical and are not intended to imply actual pricing.
You can pay for Azure Data Factory charges with your Azure Prepayment credit. Ho
## Monitor costs
-Azure Data Factory costs can be monitored at the factory, pipeline-run and activity-run levels.
+Azure Data Factory costs can be monitored at the factory, pipeline, pipeline-run and activity-run levels.
### Monitor costs at factory level with Cost Analysis
In certain cases, you may want a granular breakdown of cost of operations within
You need to opt in for _each_ factory that you want detailed billing for. To turn on the per-pipeline detailed billing feature: 1. Go to the Azure Data Factory portal
-1. Under _Monitor_ tab, select _Factory setting_ in _General_ section
+1. Under _Manage_ tab, select _Factory setting_ in _General_ section
1. Select _Show billing report by pipeline_ 1. Publish the change
Once the feature is enabled, each pipeline will have a separate entry in the billing report.
Using the graphing tools of Cost Analysis, you get similar charts and trend lines as shown [above](#monitor-costs-at-factory-level-with-cost-analysis), but for individual pipelines. You also get the summary view by factory name, as factory name is included in the billing report, allowing for proper filtering when necessary.
+The change _only_ affects how bills are emitted going forward, and does not change past charges. Allow some time for the change to propagate to the billing report; typically, the change is reflected within one day.
+ > [!WARNING] > By opting in to the per-pipeline billing setting, there will be one entry for each pipeline in your factory. Be particularly aware if you have a large number of pipelines in the factory, as this may significantly lengthen and complicate your billing report.
data-factory Quickstart Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-get-started.md
Title: Get started to try out your first data factory pipeline
-description: Get started with your first data factory demo to copy data from one blob storage to another.
+ Title: Get started and try out your first data factory pipeline
+description: Get started with your first data factory to copy data from one blob storage to another.
databox-online Azure Stack Edge Mini R Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-technical-specifications-compliance.md
The following routers and switches are compatible with the 10 Gbps SFP+ network
|[VoyagerESR 2.0](https://www.klasgroup.com/products-gov/voyager-tdc/) |Cisco ESS3300 Switch component |
|[VoyagerSW26G](https://klastelecom.com/products/voyagersw26g/) | |
|[VoyagerVM 3.0](https://klastelecom.com/products/voyager-vm-3-0/) | |
-|[TDC Switch](https://klastelecom.com/voyager-tdc/) | |
+|[TDC Switch](https://www.klasgroup.com/products-gov/voyager-tdc/) | |
|[TRX R2](https://klastelecom.com/products/trx-r2/) (8-Core) <!--Better link: https://www.klasgroup.com/products/voyagersw12gg/? On current link target, an "R6" link opens this page.--> | |
|[SW12GG](https://www.klasgroup.com/products/voyagersw12gg/) | |
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Overview of Cloud Security Posture Management (CSPM)
description: Learn more about the new Defender CSPM plan and the other enhanced security features that can be enabled for your multicloud environment through the Defender Cloud Security Posture Management (CSPM) plan. Previously updated : 10/18/2022 Last updated : 10/23/2022 # Cloud Security Posture Management (CSPM)
Defender for Cloud continually assesses your resources, subscriptions, and organ
|Aspect|Details|
|-|:-|
|Release state:| Foundational CSPM capabilities: GA <br> Defender Cloud Security Posture Management (CSPM): Preview |
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts <br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP projects|
+|Clouds:| **Foundational CSPM capabilities** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts <br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP projects <br> <br> **Defender Cloud Security Posture Management (CSPM)** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts <br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP projects |
## Defender CSPM plan options
defender-for-cloud Protect Network Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/protect-network-resources.md
Title: Protecting your network resources in Microsoft Defender for Cloud description: This document addresses recommendations in Microsoft Defender for Cloud that help you protect your Azure network resources and stay in compliance with security policies. Previously updated : 11/09/2021 Last updated : 10/23/2022 # Protect your network resources
To open the Network map:
1. Select **Network map**.
- :::image type="content" source="./media/protect-network-resources/opening-network-map.png" alt-text="Opening the network map from the Workload protections." lightbox="./media/protect-network-resources/opening-network-map.png":::
1. Select the **Layers** menu and choose **Topology**.
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To locate the unique numeric ID in the GCP portal, navigate to **IAM & Admin** >
1. (Optional) If you changed any of the names of any of the resources, update the names in the appropriate fields.
-1. (**Servers/SQL only**) Select **Azure-Arc for servers onboarding**
-
- :::image type="content" source="media/quickstart-onboard-gcp/unique-numeric-id.png" alt-text="Screenshot showing the Azure-Arc for servers onboarding section of the screen." lightbox="media/quickstart-onboard-gcp/unique-numeric-id.png":::
-
- Enter the service account unique ID, which is generated automatically after running the GCP Cloud Shell.
- 1. Select the **Next: Review and generate >**. 1. Ensure the information presented is correct.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
We deprecated the following policies to corresponding policies that already exist:
| To be deprecated | Changing to |
|--|--|
|`Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'` | `App Service apps should have 'Client Certificates (Incoming client certificates)' enabled` |
-| `Ensure that 'Python version' is the latest, if used as a part of the API app` | `App Service apps that use Python should use the latest 'Python version` |
+| `Ensure that 'Python version' is the latest, if used as a part of the API app` | `App Service apps that use Python should use the latest 'Python version'` |
| `CORS should not allow every resource to access your API App` | `App Service apps should not have CORS configured to allow every resource to access your apps` |
| `Managed identity should be used in your API App` | `App Service apps should use managed identity` |
| `Remote debugging should be turned off for API Apps` | `App Service apps should have remote debugging turned off` |
| `Ensure that 'PHP version' is the latest, if used as a part of the API app` | `App Service apps that use PHP should use the latest 'PHP version'`|
| `FTPS only should be required in your API App` | `App Service apps should require FTPS only` |
-| `Ensure that 'Java version' is the latest, if used as a part of the API app` | `App Service apps that use Java should use the latest 'Java version` |
+| `Ensure that 'Java version' is the latest, if used as a part of the API app` | `App Service apps that use Java should use the latest 'Java version'` |
| `Latest TLS version should be used in your API App` | `App Service apps should use the latest TLS version` | ## June 2022
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
Title: Microsoft Defender for Cloud's servers features according to OS, machine type, and cloud description: Learn about the availability of Microsoft Defender for Cloud's servers features according to OS, machine type, and cloud deployment. Previously updated : 03/08/2022 Last updated : 10/23/2022
For information about when recommendations are generated for each of these solut
| - [Deployment of agents and extensions](monitoring-components.md) | GA | GA | GA |
| - [Asset inventory](./asset-inventory.md) | GA | GA | GA |
| - [Azure Monitor Workbooks reports in Microsoft Defender for Cloud's workbooks gallery](./custom-dashboards-azure-workbooks.md) | GA | GA | GA |
-| - [Integration with Microsoft Defender for Cloud Apps](./other-threat-protections.md#display-recommendations-in-microsoft-defender-for-cloud-apps) | GA | Not Available | Not Available |
+| - [Integration with Microsoft Defender for Cloud Apps](./other-threat-protections.md#display-recommendations-in-microsoft-defender-for-cloud-apps) | GA | GA | Not Available |
| **Microsoft Defender plans and extensions** | | | |
| - [Microsoft Defender for Servers](./defender-for-servers-introduction.md) | GA | GA | GA |
| - [Microsoft Defender for App Service](./defender-for-app-service-introduction.md) | GA | Not Available | Not Available |
For information about when recommendations are generated for each of these solut
| - [Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA |
| - [File Integrity Monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA |
| - [Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA |
-| - [Adaptive network hardening](./adaptive-network-hardening.md) | GA | Not Available | Not Available |
+| - [Adaptive network hardening](./adaptive-network-hardening.md) | GA | GA | Not Available |
| - [Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA |
| - [Integrated Qualys vulnerability scanner](./deploy-vulnerability-assessment-vm.md) | GA | Not Available | Not Available |
| - [Regulatory compliance dashboard & reports](./regulatory-compliance-dashboard.md) <sup>[8](#footnote8)</sup> | GA | GA | GA |
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 10/20/2022 Last updated : 10/23/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--|
-| None | None |
+| [Deprecation of AWS Lambda recommendation](#deprecation-of-aws-lambda-recommendation) | November 2023 |
+
+### Deprecation of AWS Lambda recommendation
+
+**Estimated date for change: November 2023**
+
+The following recommendation is set to be deprecated: [`Lambda functions should have a dead-letter queue configured`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/AwsRecommendationDetailsBlade/assessmentKey/dcf10b98-798f-4734-9afd-800916bf1e65/showSecurityCenterCommandBar~/false).
## Next steps
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
Policy engine alerts describe detected deviations from learned baseline behavior
| Title | Description | Severity | Category | |--|--|--|--| | Beckhoff Software Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| Database Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | Authentication |
+| Database Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. <br><br> Threshold: 2 login failures in 5 minutes | Major | Authentication |
| Emerson ROC Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change | | External address within the network communicated with Internet | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access | | Field Device Discovered Unexpectedly | A new source device was detected on the network but hasn't been authorized. | Major | Discovery |
Anomaly engine alerts describe detected anomalies in network activity.
| Title | Description | Severity | Category | |--|--|--|--|
-| Abnormal Exception Pattern in Slave | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. | Minor | Abnormal Communication Behavior |
+| Abnormal Exception Pattern in Slave | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. <br><br> Threshold: 20 exceptions in 1 hour | Minor | Abnormal Communication Behavior |
| * Abnormal HTTP Header Length | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior | | * Abnormal Number of Parameters in HTTP Header | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior | | Abnormal Periodic Behavior In Communication Channel | A change in the frequency of communication between the source and destination devices was detected. | Minor | Abnormal Communication Behavior |
-| Abnormal Termination of Applications | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. | Major | Abnormal Communication Behavior |
+| Abnormal Termination of Applications | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 20 stop commands in 3 hours | Major | Abnormal Communication Behavior |
| Abnormal Traffic Bandwidth | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies | | Abnormal Traffic Bandwidth Between Devices | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
-| Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical | Scan |
-| ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. | Critical | Scan |
-| ARP Spoofing | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning | Abnormal Communication Behavior |
-| Excessive Login Attempts | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical | Authentication |
-| Excessive Number of Sessions | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical | Abnormal Communication Behavior |
-| Excessive Restart Rate of an Outstation | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. | Major | Restart/ Stop Commands |
-| Excessive SMB login attempts | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical | Authentication |
-| ICMP Flooding | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. | Warning | Abnormal Communication Behavior |
+| Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 2 minutes | Critical | Scan |
+| ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. <br><br> Threshold: 40 scans in 6 minutes | Critical | Scan |
+| ARP Spoofing | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
+| Excessive Login Attempts | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 20 login attempts in 1 minute | Critical | Authentication |
+| Excessive Number of Sessions | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 50 sessions in 1 minute | Critical | Abnormal Communication Behavior |
+| Excessive Restart Rate of an Outstation | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 10 restarts in 1 hour | Major | Restart/ Stop Commands |
+| Excessive SMB login attempts | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 10 login attempts in 10 minutes | Critical | Authentication |
+| ICMP Flooding | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
|* Illegal HTTP Header Content | The source device initiated an invalid request. | Critical | Abnormal HTTP Communication Behavior |
-| Inactive Communication Channel | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of installed program and verify that it's configured properly. | Warning | Unresponsive |
-| Long Duration Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical | Scan |
-| Password Guessing Attempt Detected | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. | Critical | Authentication |
-| PLC Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical | Scan |
-| Port Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. | Critical | Scan |
-| Unexpected message length | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal Communication Behavior |
+| Inactive Communication Channel | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of installed program and verify that it's configured properly. <br><br> Threshold: 1 minute | Warning | Unresponsive |
+| Long Duration Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 10 minutes | Critical | Scan |
+| Password Guessing Attempt Detected | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 100 attempts in 1 minute | Critical | Authentication |
+| PLC Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 10 scans in 2 minutes | Critical | Scan |
+| Port Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 25 scans in 2 minutes | Critical | Scan |
+| Unexpected message length | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. <br><br> Threshold: text length - 32768 | Critical | Abnormal Communication Behavior |
| Unexpected Traffic for Standard Port | Traffic was detected on a device using a port reserved for another protocol. | Major | Abnormal Communication Behavior | ## Protocol violation engine alerts
Protocol engine alerts describe detected deviations in the packet structure, or
| Title | Description | Severity | Category | |--|--|--|--|
-| Excessive Malformed Packets In a Single Session | An abnormal number of malformed packets sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. | Major | Illegal Commands |
+| Excessive Malformed Packets In a Single Session | An abnormal number of malformed packets sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. <br><br> Threshold: 2 malformed packets in 10 minutes | Major | Illegal Commands |
| Firmware Update | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning | Firmware Change | | Function Code Not Supported by Outstation | The destination device received an invalid request. | Major | Illegal Commands | | Illegal BACNet message | The source device initiated an invalid request. | Major | Illegal Commands |
Malware engine alerts describe detected malicious network activity.
| Malicious Domain Name Request | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | | Malware Test File Detected - EICAR AV Success | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly. Demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major | Suspicion of Malicious Activity | | Suspicion of Conficker Malware | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
-| Suspicion of Denial Of Service Attack | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. | Critical | Suspicion of Malicious Activity |
+| Suspicion of Denial Of Service Attack | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. <br><br> Threshold: 3000 syn attempts in 1 minute | Critical | Suspicion of Malicious Activity |
| Suspicion of Malicious Activity | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major | Suspicion of Malicious Activity | | Suspicion of Malicious Activity (BlackEnergy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | | Suspicion of Malicious Activity (DarkComet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
Malware engine alerts describe detected malicious network activity.
| Suspicion of Malicious Activity (Havex) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | | Suspicion of Malicious Activity (Karagany) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | | Suspicion of Malicious Activity (LightsOut) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Name Queries) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| Suspicion of Malicious Activity (Name Queries) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Major | Suspicion of Malicious Activity |
| Suspicion of Malicious Activity (Poison Ivy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | | Suspicion of Malicious Activity (Regin) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | | Suspicion of Malicious Activity (Stuxnet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
Operational engine alerts describe detected operational incidents, or malfunctio
| BACNet Operation Failed | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | | Bad MMS Device State | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, partially operational, or not operational at all. | Major | Operational Issues | | Change of Device Configuration | A configuration change was detected on a source device. | Minor | Configuration Changes |
-| Continuous Event Buffer Overflow at Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major | Buffer Overflow |
+| Continuous Event Buffer Overflow at Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. <br><br> Threshold: 3 occurrences in 10 minutes | Major | Buffer Overflow |
| Controller Reset | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning | Restart/ Stop Commands | | Controller Stop | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | | Device Failed to Receive a Dynamic IP Address | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident | Major | Command Failures |
-| Device is Suspected to be Disconnected (Unresponsive) | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. | Major | Unresponsive |
+| Device is Suspected to be Disconnected (Unresponsive) | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: 8 attempts in 5 minutes | Major | Unresponsive |
| EtherNet/IP CIP Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | | EtherNet/IP Encapsulation Protocol Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | | Event Buffer Overflow in Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major | Buffer Overflow |
-| Expected Backup Operation Did Not Occur | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. | Major | Backup |
+| Expected Backup Operation Did Not Occur | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. <br><br> Threshold: 100 seconds | Major | Backup |
| GE SRTP Command Failure | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures | | GE SRTP Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands | | GOOSE Control Block Requires Further Configuration | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major | Configuration Changes |
Operational engine alerts describe detected operational incidents, or malfunctio
| OPC UA Server Raised an Event That Requires User's Attention | An OPC UA server sent an event notification to a client. This type of event requires user attention | Major | Operational Issues | | OPC UA Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures | | Outstation Restarted | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Warning | Restart/ Stop Commands |
-| Outstation Restarts Frequently | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. | Minor | Restart/ Stop Commands |
+| Outstation Restarts Frequently | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. <br><br> Threshold: 2 restarts in 10 minutes | Minor | Restart/ Stop Commands |
| Outstation's Configuration Changed | A configuration change was detected on a source device. | Major | Configuration Changes | | Outstation's Corrupted Configuration Detected | This DNP3 source device (outstation) reported a corrupted configuration. | Major | Configuration Changes | | Profinet DCP Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
Operational engine alerts describe detected operational incidents, or malfunctio
| Sampled Values Message Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes | | Slave Device Unrecoverable Failure | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Command Failures | | Suspicion of Hardware Problems in Outstation | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Operational Issues |
-| Suspicion of Unresponsive MODBUS Device | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. | Minor | Unresponsive |
+| Suspicion of Unresponsive MODBUS Device | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: Minimum of 1 valid response for a minimum of 3 requests within 5 minutes | Minor | Unresponsive |
| Traffic Detected on Sensor Interface | A sensor resumed detecting network traffic on a network interface. | Warning | Sensor Traffic | \* The alert is disabled by default, but can be enabled again. To enable the alert, navigate to the Support page, find the alert and select **Enable**. You need administrative level permissions to access the Support page.
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
+
+ Title: "Known issues, limitations, and troubleshooting"
+
+description: Known issues, limitations and troubleshooting guide for Azure SQL Migration extension for Azure Data Studio
+++++++++ Last updated : 10/19/2022++
+# Known issues, limitations, and troubleshooting
+
+This article lists the known issues and limitations associated with the Azure SQL Migration extension for Azure Data Studio.
+
+### Error code: 2007 - CutoverFailedOrCancelled
+- **Message**: `Cutover failed or cancelled for database <DatabaseName>. Error details: The restore plan is broken because firstLsn <First LSN> of log backup <URL of backup in Azure Storage container>' is not <= lastLsn <last LSN> of Full backup <URL of backup in Azure Storage container>'. Restore to point in time.`
+
+- **Cause**: The error might occur due to the backups being placed incorrectly in the Azure Storage container. If the backups are placed in the network file share, this error could also occur due to network connectivity issues.
+
+- **Recommendation**: Ensure the database backups in your Azure Storage container are correct. If you're using network file share, there might be network-related issues and lags that are causing this error. Wait for the process to be completed.
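If you want to verify the log chain yourself before retrying, one way (a sketch; the storage URLs are placeholders and a credential for the container is assumed to already exist) is to compare the FirstLSN and LastLSN values reported by `RESTORE HEADERONLY` for each backup file:

```sql
-- FirstLSN of the log backup must be <= LastLSN of the preceding full backup,
-- otherwise the restore plan is broken (as in the error above).
RESTORE HEADERONLY
FROM URL = N'https://<storageaccount>.blob.core.windows.net/<container>/MyDb_full.bak';

RESTORE HEADERONLY
FROM URL = N'https://<storageaccount>.blob.core.windows.net/<container>/MyDb_log1.trn';
```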
+
+### Error code: 2009 - MigrationRestoreFailed
+- **Message**: `Migration for Database 'DatabaseName' failed with error cannot find server certificate with thumbprint.`
+
+- **Cause**: The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) hasn't been migrated to the target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine before migrating data.
+
+- **Recommendation**: Migrate the TDE certificate to the target instance and retry the process. See [Migrate a certificate of a TDE-protected database to Azure SQL Managed Instance](/azure/azure-sql/managed-instance/tde-certificate-migrate) and [Move a TDE Protected Database to Another SQL Server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server) for more information.
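For SQL Server on Azure Virtual Machine targets, the certificate move described in the linked articles boils down to exporting the certificate from the source `master` database and re-creating it on the target. The sketch below uses hypothetical names, paths, and passwords; for Azure SQL Managed Instance targets, follow the managed instance article instead, since file-based certificate import isn't available there.

```sql
-- On the source SQL Server: export the TDE certificate and its private key.
USE master;
GO
BACKUP CERTIFICATE MyTdeCert
    TO FILE = N'C:\certs\MyTdeCert.cer'
    WITH PRIVATE KEY (
        FILE = N'C:\certs\MyTdeCert.pvk',
        ENCRYPTION BY PASSWORD = N'<StrongPassword>');
GO

-- On the target instance: re-create the certificate from the exported files
-- (a database master key must already exist in the target master database).
USE master;
GO
CREATE CERTIFICATE MyTdeCert
    FROM FILE = N'C:\certs\MyTdeCert.cer'
    WITH PRIVATE KEY (
        FILE = N'C:\certs\MyTdeCert.pvk',
        DECRYPTION BY PASSWORD = N'<StrongPassword>');
GO
```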
++
+- **Message**: `Migration for Database <DatabaseName> failed with error 'Non retriable error occurred while restoring backup with index 1 - 3169 The database was backed up on a server running version %ls. That version is incompatible with this server, which is running version %ls. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server.`
+
+- **Cause**: Unable to restore a SQL Server backup to an earlier version of SQL Server than the version at which the backup was created.
+
+- **Recommendation**: See [Issues that affect database restoration between different SQL Server versions](/support/sql/admin/backup-restore-operations) for troubleshooting steps.
++
+- **Message**: `Migration for Database <DatabaseName> failed with error 'The managed instance has reached its storage limit. The storage usage for the managed instance can't exceed 32768 MBs.`
+
+- **Cause**: The Azure SQL Managed Instance has reached its resource limits.
+
+- **Recommendation**: See [Overview of Azure SQL Managed Instance resource limits](/azure/azure-sql/managed-instance/resource-limits) for more information.
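To see how close the target managed instance is to its storage limit before retrying, a quick check (a sketch; assumes the `sys.server_resource_stats` view available on Azure SQL Managed Instance) is:

```sql
-- Compare recent storage usage against the reserved storage on the managed instance.
SELECT TOP (1) start_time, storage_space_used_mb, reserved_storage_mb
FROM sys.server_resource_stats
ORDER BY start_time DESC;
```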
++
+- **Message**: `Migration for Database <DatabaseName> failed with error 'Non retriable error occurred while restoring backup with index 1 - 3634 The operating system returned the error '1450(Insufficient system resources exist to complete the requested service.)`
+
+- **Cause**: One of the symptoms listed in [OS errors 1450 and 665 are reported for database files during DBCC CHECKDB or Database Snapshot Creation](/support/sql/admin/1450-and-665-errors-running-dbcc-checkdb#symptoms) can be the cause.
+
+- **Recommendation**: See [OS errors 1450 and 665 are reported for database files during DBCC CHECKDB or Database Snapshot Creation](/support/sql/admin/1450-and-665-errors-running-dbcc-checkdb#symptoms) for troubleshooting steps.
++
+- **Message**: `The restore plan is broken because firstLsn <First LSN> of log backup <URL of backup in Azure Storage container>' isn't <= lastLsn <last LSN> of Full backup <URL of backup in Azure Storage container>'. Restore to point in time.`
+
+- **Cause**: The error might occur due to the backups being placed incorrectly in the Azure Storage container. If the backups are placed in the network file share, this error could also occur due to network connectivity issues.
+
+- **Recommendation**: Ensure the database backups in your Azure Storage container are correct. If you're using network file share, there might be network related issues and lags that are causing this error. Wait for the process to complete.
++
+- **Message**: `Migration for Database <DatabaseName> failed with error 'Full backup <URL of backup in Azure Storage container> is missing checksum. Provide full backup with checksum.'.`
+
+- **Cause**: The database backups haven't been taken with checksum enabled.
+
+- **Recommendation**: See [Enable or disable backup checksums during backup or restore (SQL Server)](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server) for taking backups with checksum enabled.
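As an illustration of the recommendation above, a full backup taken with checksums enabled looks like the following sketch (the database name and URL are placeholders, and a credential for the container is assumed to already exist):

```sql
-- Take a full backup with checksums so the migration service can validate the backup.
BACKUP DATABASE [MyDatabase]
TO URL = N'https://<storageaccount>.blob.core.windows.net/<container>/MyDatabase_full.bak'
WITH CHECKSUM, COMPRESSION;
GO
```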
++
+- **Message**: `Migration for Database <Database Name> failed with error 'Non retriable error occurred while restoring backup with index 1 - 3234 Logical file <Name> isn't part of database <Database GUID>. Use RESTORE FILELISTONLY to list the logical file names. RESTORE DATABASE is terminating abnormally.'.`
+
+- **Cause**: You've specified a logical file name that isn't in the database backup.
+
+- **Recommendation**: Run RESTORE FILELISTONLY to check the logical file names in your backup. See [RESTORE Statements - FILELISTONLY (Transact-SQL)](/sql/t-sql/statements/restore-statements-filelistonly-transact-sql) for more information on RESTORE FILELISTONLY.
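A minimal sketch of the suggested check (the URL is a placeholder; a credential for the container is assumed to already exist):

```sql
-- List the logical file names contained in the backup so that any file mapping
-- in the migration settings can reference them exactly.
RESTORE FILELISTONLY
FROM URL = N'https://<storageaccount>.blob.core.windows.net/<container>/MyDatabase_full.bak';
```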
++
+- **Message**: `Migration for Database <Database Name> failed with error 'Azure SQL target resource failed to connect to storage account. Make sure the target SQL VNet is allowed under the Azure Storage firewall rules.'`
+
+- **Cause**: Azure Storage firewall isn't configured to allow access to Azure SQL target.
+
+- **Recommendation**: See [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security) for more information on Azure Storage firewall setup.
+
+ > [!NOTE]
+ > For more information on general troubleshooting steps for Azure SQL Managed Instance errors, see [Known issues with Azure SQL Managed Instance](/azure/azure-sql/managed-instance/doc-changes-updates-known-issues)
+
+### Error code: 2012 - TestConnectionFailed
+- **Message**: `Failed to test connections using provided Integration Runtime.`
+
+- **Cause**: Connection to the Self-Hosted Integration Runtime has failed.
+
+- **Recommendation**: See [Troubleshoot Self-Hosted Integration Runtime](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md) for general troubleshooting steps for Integration Runtime connectivity errors.
++
+### Error code: 2014 - IntegrationRuntimeIsNotOnline
+- **Message**: `Integration Runtime <IR Name> in resource group <Resource Group Name> Subscription <SubscriptionID> isn't online.`
+
+- **Cause**: The Self-Hosted Integration Runtime isn't online.
+
+- **Recommendation**: Make sure the Self-hosted Integration Runtime is registered and online. To perform the registration, you can use scripts from [Automating self-hosted integration runtime installation using local PowerShell scripts](../data-factory/self-hosted-integration-runtime-automation-scripts.md). Also, see [Troubleshoot self-hosted integration runtime](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md) for general troubleshooting steps for Integration Runtime connectivity errors.
++
+### Error code: 2030 - AzureSQLManagedInstanceNotReady
+- **Message**: `Azure SQL Managed Instance <Instance Name> isn't ready.`
+
+- **Cause**: Azure SQL Managed Instance not in ready state.
+
+- **Recommendation**: Wait until the Azure SQL Managed Instance has finished deploying and is ready, then retry the process.
++
+### Error code: 2033 - SqlDataCopyFailed
+- **Message**: `Migration for Database <Database> failed in state <state>.`
+
+- **Cause**: ADF pipeline for data movement failed.
+
+- **Recommendation**: Check the MigrationStatusDetails page for more detailed error information.
++
+### Error code: 2038 - MigrationCompletedDuringCancel
+- **Message**: `Migration cannot be canceled as Migration was completed during the cancel process. Target server: <Target server> Target database: <Target database>.`
+
+- **Cause**: A cancellation request was received, but the migration was completed successfully before the cancellation was completed.
+
+- **Recommendation**: No action required; the migration succeeded.
++
+### Error code: 2039 - MigrationRetryNotAllowed
+- **Message**: `Migration isn't in a retriable state. Migration must be in state WaitForRetry. Current state: <State>, Target server: <Target Server>, Target database: <Target database>.`
+
+- **Cause**: A retry request was received when the migration wasn't in a state that allows a retry.
+
+- **Recommendation**: No action required; the migration is ongoing or completed.
++
+### Error code: 2040 - MigrationTimeoutWaitingForRetry
+- **Message**: `Migration retry timeout limit of 8 hours reached. Target server: <Target Server>, Target database: <Target Database>.`
+
+- **Cause**: Migration was idle in a failed, but retriable state for 8 hours and was automatically canceled.
+
+- **Recommendation**: No action is required; the migration was canceled.
++
+### Error code: 2041 - DataCopyCompletedDuringCancel
+- **Message**: `Data copy finished successfully before canceling completed. Target schema is in bad state. Target server: <Target Server>, Target database: <Target Database>.`
+
+- **Cause**: A cancel request was received and the data copy completed successfully, but the target database schema hasn't been returned to its original state.
+
+- **Recommendation**: If desired, the target database can be returned to its original state by running the first query below and executing every statement it returns, then doing the same with the second query. A scripted example follows the queries.
+
+```sql
+SELECT [ROLLBACK] FROM [dbo].[__migration_status]
+WHERE STEP in (3,4,6);
+
+SELECT [ROLLBACK] FROM [dbo].[__migration_status]
+WHERE STEP in (5,7,8) ORDER BY STEP DESC;
+```
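+
+A minimal bash/sqlcmd sketch of that procedure; the server, database, and credentials are placeholders, and it assumes each returned rollback statement fits on a single line. Run the same loop again with the second query (steps 5, 7, 8 ordered by `STEP DESC`) to finish:
+
+```bash
+# Placeholders: <target-server>, <target-database>, <user>, <password>.
+# Fetch the ROLLBACK statements for steps 3, 4, and 6 and execute each one;
+# repeat with the second query afterwards.
+sqlcmd -S "<target-server>.database.windows.net" -d "<target-database>" \
+       -U "<user>" -P "<password>" -h -1 -W \
+       -Q "SET NOCOUNT ON; SELECT [ROLLBACK] FROM [dbo].[__migration_status] WHERE STEP IN (3,4,6);" |
+while IFS= read -r stmt; do
+  # Skip blank lines and run each rollback statement against the target database.
+  [ -n "$stmt" ] && sqlcmd -S "<target-server>.database.windows.net" \
+                           -d "<target-database>" -U "<user>" -P "<password>" -Q "$stmt"
+done
+```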
+
+
+### Error code: 2042 - PreCopyStepsCompletedDuringCancel
+- **Message**: `Pre Copy steps finished successfully before canceling completed. Target database Foreign keys and temporal tables have been altered. Schema migration may be required again for future migrations. Target server: <Target Server>, Target database: <Target Database>.`
+
+- **Cause**: A cancel request was received and the steps to prepare the target database for copy completed successfully, but the target database schema hasn't been returned to its original state.
+
+- **Recommendation**: If desired, the target database can be returned to its original state by running the query below and executing every statement it returns.
+
+```sql
+SELECT [ROLLBACK] FROM [dbo].[__migration_status]
+WHERE STEP in (3,4,6);
+```
+
+
+### Error code: 2043 - CreateContainerFailed
+- **Message**: `Create container <ContainerName> failed with error Error calling the endpoint '<URL>'. Response status code: 'NA - Unknown'. More details: Exception message: 'NA - Unknown [ClientSideException] Invalid Url:<URL>.`
+
+- **Cause**: The request failed due to an underlying issue such as network connectivity, a DNS failure, a server certificate validation, or a timeout.
+
+- **Recommendation**: See [Troubleshoot Azure Data Factory and Synapse pipelines](../data-factory/data-factory-troubleshoot-guide.md#error-code-2108) for troubleshooting steps.
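+
+  As a quick first check, you can separate DNS, certificate, and timeout problems from the self-hosted integration runtime host (or any machine on the same network); the storage account name below is a placeholder:
+
+```bash
+# Placeholder: <storageaccount>. Run from the self-hosted integration runtime host.
+# DNS resolution of the blob endpoint:
+nslookup <storageaccount>.blob.core.windows.net
+# TLS handshake and basic reachability; even an authentication error response proves connectivity.
+curl -v --max-time 30 "https://<storageaccount>.blob.core.windows.net/"
+```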
++
+## Azure SQL Database Migration limitations
+
+The Azure SQL Database offline migration (Preview) uses Azure Data Factory (ADF) pipelines for data movement and is therefore subject to ADF limitations. A corresponding data factory is created when the Database Migration Service is created, so factory limits apply per service.
+
+- 100,000 tables per database limit.
+- 10,000 concurrent database migrations per service.
+- Migration speed heavily depends on the target Azure SQL Database SKU and the self-hosted Integration Runtime host.
+- Azure SQL Database migration scales poorly with the number of tables because of ADF overhead in starting activities. If a database has thousands of tables, each one incurs a couple of seconds of startup time, even if it contains only one row with 1 bit of data.
+- Azure SQL Database table names with double-byte characters currently aren't supported for migration. The mitigation is to rename such tables before migration (a quick check for them follows this list); they can be changed back to their original names after successful migration.
+- Tables with large blob columns may fail to migrate due to timeout.
+- Database names with SQL Server reserved words aren't valid.
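+
+A hedged pre-migration check for the table-count and double-byte-name limitations above; connection values are placeholders, and the `CAST` comparison is a heuristic that flags names containing characters that can't be represented in the database's single-byte code page:
+
+```bash
+# Placeholders: <source-server>, <source-database>, <user>, <password>.
+# Count the tables (the limit is 100,000 per database) and list table names
+# containing double-byte characters, which need to be renamed before migration.
+sqlcmd -S "<source-server>" -d "<source-database>" -U "<user>" -P "<password>" -Q "
+  SELECT COUNT(*) AS table_count FROM sys.tables;
+  SELECT name AS double_byte_table
+  FROM sys.tables
+  WHERE name <> CAST(CAST(name AS varchar(128)) AS nvarchar(128));"
+```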
+
+## Azure SQL Managed Instance and SQL Server on Azure Virtual Machine known issues and limitations
+- If migrating multiple databases to **Azure SQL Managed Instance** using the same Azure Blob Storage container, you must place backup files for different databases in separate folders inside the container.
+- If migrating a single database to **Azure SQL Managed Instance**, the database backups must be placed in a flat-file structure inside a database folder; nested folders aren't supported.
+- Overwriting existing databases using DMS in your target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine isn't supported.
+- Configuring high availability and disaster recovery on your target to match source topology isn't supported by DMS.
+- The following server objects aren't supported:
+ - Logins
+ - SQL Server Agent jobs
+ - Credentials
+ - SSIS packages
+ - Server roles
+ - Server audit
+- SQL Server 2008 and earlier aren't supported as target versions when migrating to SQL Server on Azure Virtual Machines.
+- If you're using SQL Server 2012 or SQL Server 2014, you need to store your source database backup files on an Azure Storage Blob Container instead of using the network share option. Store the backup files as page blobs, since block blobs are only supported in SQL Server 2016 and later (see the upload sketch after this list).
+- You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL migration extension in Azure Data Studio and can be reused for further database migrations.
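+
+For the SQL Server 2012/2014 case above, a minimal Azure CLI sketch for uploading a backup file as a page blob; the account, container, and file names are placeholders, and the blob path illustrates keeping one folder per database. Note that page blobs generally expect sizes in 512-byte multiples:
+
+```bash
+# Placeholders: <storageaccount>, <container>, <database>, and local file path.
+# Uses your signed-in Azure identity (--auth-mode login).
+az storage blob upload \
+  --account-name <storageaccount> \
+  --container-name <container> \
+  --name <database>/full.bak \
+  --file /backups/full.bak \
+  --type page \
+  --auth-mode login
+```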
+
+## Next steps
+
+- For an overview and installation of the Azure SQL migration extension, see [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension)
+- For more information on known limitations with Log Replay Service, see [Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview)](/azure/azure-sql/managed-instance/log-replay-service-migrate#limitations)
+- For more information on SQL Server on Virtual machine resource limits, see [Checklist: Best practices for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist)
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
In this tutorial, you learn how to:
> * Launch the Migrate to Azure SQL wizard in Azure Data Studio. > * Run an assessment of your source SQL Server database(s) > * Collect performance data from your source SQL Server
-> * Get a recommendation of the Azure SQL Managed Instance SKU best suited for your workload
+> * Get a recommendation of the SQL Server on Azure Virtual Machine SKU best suited for your workload
> * Specify details of your source SQL Server, backup location and your target SQL Server on Azure Virtual Machine > * Create a new Azure Database Migration Service and install the self-hosted integration runtime to access source server and backups. > * Start and monitor the progress for your migration through to completion
To complete this tutorial, you need to:
* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) * [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace * Have an Azure account that is assigned to one of the built-in roles listed below:
- - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
- - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Contributor for the target SQL Server on Azure Virtual Machine (and Storage Account to upload your database backup files from SMB network share).
+ - Reader role for the Azure Resource Groups containing the target SQL Server on Azure Virtual Machine or the Azure storage account.
- Owner or Contributor role for the Azure subscription. - As an alternative to using the above built-in roles you can assign a custom role as defined in [this article.](resource-custom-roles-sql-db-virtual-machine-ads.md) > [!IMPORTANT]
To complete this tutorial, you need to:
| Arrived | Backup file arrived in the source backup location and validated | | Uploading | Integration runtime is currently uploading the backup file to Azure storage| | Uploaded | Backup file is uploaded to Azure storage |
- | Restoring | Azure Database Migration Service is currently restoring the backup file to Azure SQL Managed Instance|
- | Restored | Backup file is successfully restored on Azure SQL Managed Instance |
+ | Restoring | Azure Database Migration Service is currently restoring the backup file to SQL Server on Azure Virtual Machine|
+ | Restored | Backup file is successfully restored on SQL Server on Azure Virtual Machine |
| Canceled | Migration process was canceled | | Ignored | Backup file was ignored as it does not belong to a valid database backup chain |
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
In this tutorial, you learn how to:
> * Launch the Migrate to Azure SQL wizard in Azure Data Studio. > * Run an assessment of your source SQL Server database(s) > * Collect performance data from your source SQL Server
-> * Get a recommendation of the Azure SQL Managed Instance SKU best suited for your workload
+> * Get a recommendation of the SQL Server on Azure Virtual Machine SKU best suited for your workload
> * Specify details of your source SQL Server, backup location and your target SQL Server on Azure Virtual Machine > * Create a new Azure Database Migration Service and install the self-hosted integration runtime to access source server and backups. > * Start and monitor the progress for your migration.
To complete this tutorial, you need to:
* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) * [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace * Have an Azure account that is assigned to one of the built-in roles listed below:
- - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
- - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Contributor for the target SQL Server on Azure Virtual Machine (and Storage Account to upload your database backup files from SMB network share).
+ - Reader role for the Azure Resource Groups containing the target SQL Server on Azure Virtual Machine or the Azure storage account.
- Owner or Contributor role for the Azure subscription. - As an alternative to using the above built-in roles you can assign a custom role as defined in [this article.](resource-custom-roles-sql-db-virtual-machine-ads.md) > [!IMPORTANT]
Resource group, Azure storage account, Blob container from the corresponding dro
| Arrived | Backup file arrived in the source backup location and validated | | Uploading | Integration runtime is currently uploading the backup file to Azure storage| | Uploaded | Backup file is uploaded to Azure storage |
- | Restoring | Azure Database Migration Service is currently restoring the backup file to Azure SQL Managed Instance|
- | Restored | Backup file is successfully restored on Azure SQL Managed Instance |
+ | Restoring | Azure Database Migration Service is currently restoring the backup file to SQL Server on Azure Virtual Machine|
+ | Restored | Backup file is successfully restored on SQL Server on Azure Virtual Machine |
| Canceled | Migration process was canceled | | Ignored | Backup file was ignored as it doesn't belong to a valid database backup chain |
energy-data-services Tutorial Seismic Ddms Sdutil https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-seismic-ddms-sdutil.md
Linux
- [64-bit Python 3.8.3](https://www.python.org/ftp/python/3.8.3/Python-3.8.3.tgz)
-Unix
+Unix/Mac
- [64-bit Python 3.8.3](https://www.python.org/ftp/python/3.8.3/Python-3.8.3.tgz) - Apple Xcode C++ Build Tools
-Other requirements are addressed in the [installation](#installation) section below.
-
-## Installation
-
-Follow the directions in the sdutil documentation for [running sdutil in Azure environments](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env).
-
-The utility requires other modules noted in [requirements.txt](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/blob/azure/stable/requirements.txt). You could either install the modules as is or install them in virtualenv to keep your host clean from package conflicts. If you don't want to install them in a virtual environment, jump directly to step 3.
+The utility requires other modules noted in [requirements.txt](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/blob/azure/stable/requirements.txt). You could either install the modules as is or install them in virtualenv to keep your host clean from package conflicts. If you don't want to install them in a virtual environment, skip the four virtual environment commands below. Additionally, if you are using Mac instead of Ubuntu or WSL - Ubuntu 20.04, either use `homebrew` instead of `apt-get` as your package manager, or manually install `apt-get`.
```bash # check if virtualenv is already installed virtualenv --version
- # if not install it via pip
+ # if not install it via pip or apt-get
pip install virtualenv
+ # or sudo apt-get install python3-venv for WSL
# create a virtual environment for sdutil virtualenv sdutilenv
+ # or python3 -m venv sdutilenv for WSL
  # activate the virtual environment  Windows: sdutilenv/Scripts/activate
The utility requires other modules noted in [requirements.txt](https://community
Install required dependencies: ```bash
- # run it from the extracted sdutil folder
+ # run this from the extracted sdutil folder
pip install -r requirements.txt ```
Install required dependencies:
### Configuration
-1. Clone the [sdutil repository](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable) and open in your favorite editor.
+1. Clone the [sdutil repository](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable) from the community Azure Stable branch and open in your favorite editor.
2. Replace the contents of `config.yaml` in the `sdlib` folder with the following yaml and fill in the three templatized values (two instances of `<meds-instance-url>` and one `<put refresh token here...>`):
- ```yaml
- seistore:
- service: '{"azure": {"azureGlabEnv":{"url": "https://<meds-instance-url>/seistore-svc/api/v3", "appkey": ""}}}'
- url: 'https://<meds-instance-url>/seistore-svc/api/v3'
- cloud_provider: 'azure'
- env: 'glab'
- auth-mode: 'JWT Token'
- ssl_verify: False
- auth_provider:
- azure: '{
- "provider": "azure",
- "authorize_url": "https://login.microsoftonline.com/",
- "oauth_token_host_end": "/oauth2/token",
- "scope_end":"/.default openid profile offline_access",
- "redirect_uri":"http://localhost:8080",
- "login_grant_type": "refresh_token",
- "refresh_token": "<put refresh token here from auth_token.http authorize request>"
- }'
- azure:
- empty: 'none'
- ```
-
- > [!NOTE]
- > Follow the directions in [How to Generate a Refresh Token](how-to-generate-refresh-token.md) to obtain a token if not already present.
+ ```yaml
+ seistore:
+ service: '{"azure": {"azureGlabEnv":{"url": "https://<meds-instance-url>/seistore-svc/api/v3", "appkey": ""}}}'
+ url: 'https://<meds-instance-url>/seistore-svc/api/v3'
+ cloud_provider: 'azure'
+ env: 'glab'
+ auth-mode: 'JWT Token'
+ ssl_verify: False
+ auth_provider:
+ azure: '{
+ "provider": "azure",
+ "authorize_url": "https://login.microsoftonline.com/",
+ "oauth_token_host_end": "/oauth2/token",
+ "scope_end":"/.default openid profile offline_access",
+ "redirect_uri":"http://localhost:8080",
+ "login_grant_type": "refresh_token",
+ "refresh_token": "<put refresh token here from auth_token.http authorize request>"
+ }'
+ azure:
+ empty: 'none'
+ ```
+
+ > [!NOTE]
+ > Follow the directions in [How to Generate a Refresh Token](how-to-generate-refresh-token.md) to obtain a token if not already present.
3. Export or set below environment variables
- ```bash
- export AZURE_TENANT_ID=check-env-provisioning-team-as-specific-to-cluster
- export AZURE_CLIENT_ID=check-env-provisioning-team-as-specific-to-cluster
- export AZURE_CLIENT_SECRET=check-env-provisioning-team-as-specific-to-cluster
- ```
+ ```bash
+ export AZURE_TENANT_ID=<your-tenant-id>
+ export AZURE_CLIENT_ID=<your-client-id>
+ export AZURE_CLIENT_SECRET=<your-client-secret>
+ ```
### Running the Tool 1. Run the utility from the extracted utility folder by typing:
- ```bash
- python sdutil
- ```
-
- If no arguments are specified, this menu will be displayed:
-
- ```code
- Seismic Store Utility
-
- > python sdutil [command]
+ ```bash
+ python sdutil
+ ```
- available commands:
+ If no arguments are specified, this menu will be displayed:
- * auth : authentication utilities
- * unlock : remove a lock on a seismic store dataset
- * version : print the sdutil version
- * rm : delete a subproject or a space separated list of datasets
- * mv : move a dataset in seismic store
- * config : manage the utility configuration
- * mk : create a subproject resource
- * cp : copy data to(upload)/from(download)/in(copy) seismic store
- * stat : print information like size, creation date, legal tag(admin) for a space separated list of tenants, subprojects or datasets
- * patch : patch a seismic store subproject or dataset
- * app : application authorization utilities
- * ls : list subprojects and datasets
- * user : user authorization utilities
- ```
+ ```code
+ Seismic Store Utility
+
+ > python sdutil [command]
+
+ available commands:
+
+ * auth : authentication utilities
+ * unlock : remove a lock on a seismic store dataset
+ * version : print the sdutil version
+ * rm : delete a subproject or a space separated list of datasets
+ * mv : move a dataset in seismic store
+ * config : manage the utility configuration
+ * mk : create a subproject resource
+ * cp : copy data to(upload)/from(download)/in(copy) seismic store
+ * stat : print information like size, creation date, legal tag(admin) for a space separated list of tenants, subprojects or datasets
+ * patch : patch a seismic store subproject or dataset
+ * app : application authorization utilities
+ * ls : list subprojects and datasets
+ * user : user authorization utilities
+ ```
2. If this is your first time using the tool, you must run the sdutil config init command to initialize the configuration.
- ```bash
- python sdutil config init
- ```
+ ```bash
+ python sdutil config init
+ ```
3. Before you start using the utility and performing any operations, you must sign in the system. When you run the following sign in command, sdutil will open a sign in page in a web browser.
- ```bash
- python sdutil auth login
- ```
+ ```bash
+ python sdutil auth login
+ ```
- Once you've successfully logged in, your credentials will be valid for a week. You don't need to sign in again unless the credentials expired (after one week), in this case the system will require you to sign in again.
+   Once you've successfully signed in, your credentials will be valid for a week. You don't need to sign in again unless the credentials expire (after one week); in that case, the system will require you to sign in again.
- > [!NOTE]
- > If you aren't getting the "sign in Successful!" message, make sure your three environment variables are set and you've followed all steps in the "Configuration" section above.
+ > [!NOTE]
+ > If you aren't getting the "sign in Successful!" message, make sure your three environment variables are set and you've followed all steps in the "Configuration" section above.
## Seistore Resources
Run the changelog script (`./changelog-generator.sh`) to automatically generate
./scripts/changelog-generator.sh ```
-## Setup and usage for Microsoft Energy Data Services
+## Usage for Microsoft Energy Data Services
-Below steps are for Windows Subsystem for Linux - Ubuntu 20.04
-Microsoft Energy Data Services instance is using OSDU&trade; M12 Version of sdutil
+Microsoft Energy Data Services instances use the OSDU&trade; M12 version of sdutil. Follow the steps below if you would like to use sdutil with the SDMS API of your MEDS instance.
-1. Download the source code from community [sdutil](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable/) Azure Stable branch.
+1. Ensure you have followed the [installation](#prerequisites) and [configuration](#configuration) steps from above. This includes downloading the SDUTIL source code, configuring your python virtual environment, editing the `config.yaml` file and setting your three environment variables.
-2. In case python virtual env isn't installed, use below commands. Otherwise, skip to next section
+2. Run the commands below to sign in, list, upload, and download files in the seismic store.
- ```bash
- sudo apt-get update
- sudo apt-get install python3-venv
- ```
-
-3. Create a new virtual environment and install package
-
- ```bash
- #create new virtual env with name : sdutilenv
- python3 -m venv sdutilenv
-
- #activate the virtual end
- source sdutilenv/bin/Activate
-
- #install python package for sdutil
- pip install -r requirements.txt
- ```
-
-4. Replace the contents of `config.yaml` in the `sdlib` folder with the following yaml and fill in the three templatized values (tow `<meds-instance-url>` and `<put refresh token here...>`):
-
- ```yaml
- seistore:
- service: '{"azure": {"azureGlabEnv":{"url": "https://<meds-instance-url>/seistore-svc/api/v3", "appkey": ""}}}'
- url: 'https://<meds-instance-url>/seistore-svc/api/v3'
- cloud_provider: 'azure'
- env: 'glab'
- auth-mode: 'JWT Token'
- ssl_verify: False
- auth_provider:
- azure: '{
- "provider": "azure",
- "authorize_url": "https://login.microsoftonline.com/",
- "oauth_token_host_end": "/oauth2/token",
- "scope_end":"/.default openid profile offline_access",
- "redirect_uri":"http://localhost:8080",
- "login_grant_type": "refresh_token",
- "refresh_token": "<put refresh token here from auth_token.http authorize request>"
- }'
- azure:
- empty: 'none'
- ```
-
- > [!NOTE]
- > Follow the directions in [How to Generate a Refresh Token](how-to-generate-refresh-token.md) to obtain a token if not already present.
-
-5. Export or set below environment variables
-
- ```bash
- export AZURE_TENANT_ID=check-env-provisioning-team-as-specific-to-cluster
- export AZURE_CLIENT_ID=check-env-provisioning-team-as-specific-to-cluster
- export AZURE_CLIENT_SECRET=check-env-provisioning-team-as-specific-to-cluster
- ```
+ 1. Initialize
-6. Run below commands to sign in, list, upload and download files in the seismic store.
+ ```code
+ (sdutilenv) > python sdutil config init
+ [one] Azure
+ Select the cloud provider: **enter 1**
+ Insert the Azure (azureGlabEnv) application key: **just press enter--no need to provide a key**
- - Initialize
+ sdutil successfully configured to use Azure (azureGlabEnv)
- ```code
- (sdutilenv) > python sdutil config init
- [one] Azure
- Select the cloud provider: **enter 1**
- Insert the Azure (azureGlabEnv) application key: **just press enter--no need to provide a key**
-
- sdutil successfully configured to use Azure (azureGlabEnv)
-
- Should display sign in success message. Credentials expiry set to 1 hour.
- ```
+ Should display sign in success message. Credentials expiry set to 1 hour.
+ ```
- - Sign in
+ 2. Sign in
- ```bash
- python sdutil config init
- python sdutil auth login
- ```
+ ```bash
+ python sdutil config init
+ python sdutil auth login
+ ```
- - List files in your seismic store
+ 3. List files in your seismic store
- ```bash
- python sdutil ls sd://<tenant> # e.g. sd://<instance-name>-<datapartition>
- python sdutil ls sd://<tenant>/<subproject> # e.g. sd://<instance-name>-<datapartition>/test
- ```
+ ```bash
+ python sdutil ls sd://<tenant> # e.g. sd://<instance-name>-<datapartition>
+ python sdutil ls sd://<tenant>/<subproject> # e.g. sd://<instance-name>-<datapartition>/test
+ ```
- - Upload a file from your local machine to the seismic store
+ 4. Upload a file from your local machine to the seismic store
- ```bash
- python sdutil cp local-dir/file-name-at-source.txt sd://<datapartition>/test/file-name-at-destination.txt
- ```
+ ```bash
+ python sdutil cp local-dir/file-name-at-source.txt sd://<datapartition>/test/file-name-at-destination.txt
+ ```
- - Download a file from the seismic store to your local machine
+ 5. Download a file from the seismic store to your local machine
- ```bash
- python sdutil cp sd://<datapartition>/test/file-name-at-ddms.txt local-dir/file-name-at-destination.txt
- ```
+ ```bash
+ python sdutil cp sd://<datapartition>/test/file-name-at-ddms.txt local-dir/file-name-at-destination.txt
+ ```
- > [!NOTE]
- > Don't use `cp` command to download VDS files. The VDS conversion results in multiple files, therefore the `cp` command won't be able to download all of them in one command. Use either the [SEGYExport](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/SEGYExport/README.html) or [VDSCopy](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/VDSCopy/README.html) tool instead. These tools use a series of REST calls accessing a [naming scheme](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/connection.html) to retrieve information about all the resulting VDS files.
+ > [!NOTE]
+ > Don't use `cp` command to download VDS files. The VDS conversion results in multiple files, therefore the `cp` command won't be able to download all of them in one command. Use either the [SEGYExport](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/SEGYExport/README.html) or [VDSCopy](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/VDSCopy/README.html) tool instead. These tools use a series of REST calls accessing a [naming scheme](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/connection.html) to retrieve information about all the resulting VDS files.
OSDU&trade; is a trademark of The Open Group.
hdinsight Apache Hadoop Connect Excel Hive Odbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-connect-excel-hive-odbc-driver.md
The following steps show you how to create a Hive ODBC Data Source.
| Port |Use **443**. (This port has been changed from 563 to 443.) | | Database |Use **default**. | | Mechanism |Select **Windows Azure HDInsight Service** |
- | User Name |Enter HDInsight cluster HTTP user username. The default username is **admin**. |
+ | User Name |Enter HDInsight cluster HTTP user username. The default username is `admin`. |
| Password |Enter HDInsight cluster user password. Select the checkbox **Save Password (Encrypted)**.| 1. Optional: Select **Advanced Options...**
hdinsight Apache Hadoop Hive Pig Udf Dotnet Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-hive-pig-udf-dotnet-csharp.md
You can also run a Pig job that uses your Pig UDF application.
The `DEFINE` statement creates an alias of `streamer` for the *PigUDF.exe* application, and `CACHE` loads it from default storage for the cluster. Later, `streamer` is used with the `STREAM` operator to process the single lines contained in `LOG` and return the data as a series of columns. > [!NOTE]
- > The application name that is used for streaming must be surrounded by the \` (backtick) character when aliased, and by the ' (single quote) character when used with `SHIP`.
+ > The application name that is used for streaming must be surrounded by the `` ` `` (backtick) character when aliased, and by the `'` (single quote) character when used with `SHIP`.
4. After entering the last line, the job should start. It returns output similar to the following text:
hdinsight Apache Hadoop Linux Create Cluster Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md
In this section, you create a Hadoop cluster in HDInsight using the Azure portal
|Region | From the drop-down list, select a region where the cluster is created. Choose a location closer to you for better performance. | |Cluster type| Select **Select cluster type**. Then select **Hadoop** as the cluster type.| |Version|From the drop-down list, select a **version**. Use the default version if you don't know what to choose.|
- |Cluster login username and password | The default login name is **admin**. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ' " ` \). Make sure you **do not provide** common passwords such as "Pass@word1".|
- |Secure Shell (SSH) username | The default username is **sshuser**. You can provide another name for the SSH username. |
+ |Cluster login username and password | The default login name is **admin**. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ```' ` "```). Make sure you **do not provide** common passwords such as "Pass@word1".|
+ |Secure Shell (SSH) username | The default username is `sshuser`. You can provide another name for the SSH username. |
|Use cluster login password for SSH| Select this check box to use the same password for SSH user as the one you provided for the cluster login user.| :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-cluster-basics.png" alt-text="HDInsight Linux get started provide cluster basic values" border="true":::
In this section, you create a Hadoop cluster in HDInsight using the Azure portal
:::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-linux-get-started-open-cluster-dashboard.png" alt-text="Screenshot showing HDInsight Linux get started cluster dashboard." border="true":::
-2. Enter the Hadoop username and password that you specified while creating the cluster. The default username is **admin**.
+2. Enter the Hadoop username and password that you specified while creating the cluster. The default username is `admin`.
3. Open **Hive View** as shown in the following screenshot:
hdinsight Apache Hadoop Linux Tutorial Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-tutorial-get-started.md
Two Azure resources are defined in the template:
|Location|The value will autopopulate with the location used for the resource group.| |Cluster Name|Enter a globally unique name. For this template, use only lowercase letters, and numbers.| |Cluster Type | Select **hadoop**. |
- |Cluster Login User Name|Provide the username, default is **admin**.|
- |Cluster Login Password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ' " ` ). |
- |Ssh User Name|Provide the username, default is **sshuser**|
+ |Cluster Login User Name|Provide the username, default is `admin`.|
+ |Cluster Login Password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ```' ` "```). |
+ |Ssh User Name|Provide the username, default is `sshuser`.|
|Ssh Password|Provide the password.| Some properties have been hardcoded in the template. You can configure these values from the template. For more explanation of these properties, see [Create Apache Hadoop clusters in HDInsight](../hdinsight-hadoop-provision-linux-clusters.md).
- > [!NOTE]
- > The values you provide must be unique and should follow the naming guidelines. The template does not perform validation checks. If the values you provide are already in use, or do not follow the guidelines, you get an error after you have submitted the template.
+ > [!NOTE]
+ > The values you provide must be unique and should follow the naming guidelines. The template does not perform validation checks. If the values you provide are already in use, or do not follow the guidelines, you get an error after you have submitted the template.
:::image type="content" source="./media/apache-hadoop-linux-tutorial-get-started/hdinsight-linux-get-started-arm-template-on-portal.png " alt-text="HDInsight Linux gets started Resource Manager template on portal" border="true":::
hdinsight Apache Hbase Rest Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-rest-sdk.md
HBase stores data in tables. A table consists of a *Rowkey*, the primary key, an
The data is physically stored in *HFiles*. A single HFile contains data for one table, one region, and one column family. Rows in HFile are stored sorted on Rowkey. Each HFile has a *B+ Tree* index for speedy retrieval of the rows.
-To create a new table, specify a `TableSchema` and columns. The following code checks whether the table 'RestSDKTable` already exists - if not, the table is created.
+To create a new table, specify a `TableSchema` and columns. The following code checks whether the table `RestSDKTable` already exists - if not, the table is created.
```csharp if (!client.ListTablesAsync().Result.name.Contains("RestSDKTable"))
hdinsight Apache Hbase Tutorial Get Started Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-tutorial-get-started-linux.md
The following procedure uses an Azure Resource Manager template to create an HBa
|Resource group|Create an Azure Resource management group or use an existing one.| |Location|Specify the location of the resource group. | |ClusterName|Enter a name for the HBase cluster.|
- |Cluster login name and password|The default login name is **admin**.|
- |SSH username and password|The default username is **sshuser**.|
+ |Cluster login name and password|The default login name is `admin`.|
+ |SSH username and password|The default username is `sshuser`.|
- Other parameters are optional.
+ Other parameters are optional.
Each cluster has an Azure Storage account dependency. After you delete a cluster, the data stays in the storage account. The cluster default storage account name is the cluster name with "store" appended. It's hardcoded in the template variables section.
hdinsight Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/quickstart-resource-manager-template.md
Two Azure resources are defined in the template:
|Resource group|From the drop-down list, select your existing resource group, or select **Create new**.| |Location|The value will autopopulate with the location used for the resource group.| |Cluster Name|Enter a globally unique name. For this template, use only lowercase letters, and numbers.|
- |Cluster Login User Name|Provide the username, default is **admin**.|
- |Cluster Login Password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ' " ` ). |
- |Ssh User Name|Provide the username, default is sshuser|
+ |Cluster Login User Name|Provide the username, default is `admin`.|
+ |Cluster Login Password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ```' ` "```). |
+ |Ssh User Name|Provide the username, default is `sshuser`.|
|Ssh Password|Provide the password.| :::image type="content" source="./media/quickstart-resource-manager-template/resource-manager-template-hbase.png" alt-text="Deploy Resource Manager template HBase" border="true":::
hdinsight Hdinsight Administer Use Portal Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-administer-use-portal-linux.md
The password is changed on all nodes in the cluster.
> [!NOTE] > SSH passwords cannot contain the following characters:
-> ```
-> " ' ` / \ < % ~ | $ & !
-> ```
+>
+> ``` " ' ` / \ < % ~ | $ & ! ```
| Field | Value | | | |
hdinsight Apache Hive Query Odbc Driver Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-query-odbc-driver-powershell.md
The following steps show you how to create an Apache Hive ODBC data source.
| Port |Use **443**.| | Database |Use **default**. | | Mechanism |Select **Windows Azure HDInsight Service** |
- | User Name |Enter HDInsight cluster HTTP user username. The default username is **admin**. |
+ | User Name |Enter HDInsight cluster HTTP user username. The default username is `admin`. |
| Password |Enter HDInsight cluster user password. Select the checkbox **Save Password (Encrypted)**.|
-1. Optional: Select **Advanced Options**.
+1. Optional: Select **Advanced Options**.
| Parameter | Description | | | |
hdinsight Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/quickstart-resource-manager-template.md
Two Azure resources are defined in the template:
|Resource group|From the drop-down list, select your existing resource group, or select **Create new**.| |Location|The value will autopopulate with the location used for the resource group.| |Cluster Name|Enter a globally unique name. For this template, use only lowercase letters, and numbers.|
- |Cluster Login User Name|Provide the username, default is **admin**.|
- |Cluster Login Password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ' " ` ). |
+ |Cluster Login User Name|Provide the username, default is `admin`.|
+ |Cluster Login Password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ```' ` "``` ). |
|Ssh User Name|Provide the username, default is sshuser| |Ssh Password|Provide the password.|
hdinsight Apache Kafka Azure Container Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-azure-container-services.md
At this point, Kafka and Azure Kubernetes Service are in communication through t
3. Edit the `index.js` file and change the following lines: * `var topic = 'mytopic'`: Replace `mytopic` with the name of the Kafka topic used by this application.
- * `var brokerHost = '176.16.0.13:9092`: Replace `176.16.0.13` with the internal IP address of one of the broker hosts for your cluster.
+ * `var brokerHost = '176.16.0.13:9092'`: Replace `176.16.0.13` with the internal IP address of one of the broker hosts for your cluster.
To find the internal IP address of the broker hosts (workernodes) in the cluster, see the [Apache Ambari REST API](../hdinsight-hadoop-manage-ambari-rest-api.md#get-the-internal-ip-address-of-cluster-nodes) document. Pick IP address of one of the entries where the domain name begins with `wn`.
hdinsight Apache Kafka Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-get-started.md
To create an Apache Kafka cluster on HDInsight, use the following steps:
|Region | From the drop-down list, select a region where the cluster is created. Choose a region closer to you for better performance. | |Cluster type| Select **Select cluster type** to open a list. From the list, select **Kafka** as the cluster type.| |Version|The default version for the cluster type will be specified. Select from the drop-down list if you wish to specify a different version.|
- |Cluster login username and password | The default login name is **admin**. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ' " ` \). Make sure you **do not provide** common passwords such as "Pass@word1".|
- |Secure Shell (SSH) username | The default username is **sshuser**. You can provide another name for the SSH username. |
+ |Cluster login username and password | The default login name is `admin`. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lowercase letter, one non-alphanumeric character (except characters ```' ` "```). Make sure you **do not provide** common passwords such as `Pass@word1`.|
+ |Secure Shell (SSH) username | The default username is `sshuser`. You can provide another name for the SSH username. |
|Use cluster login password for SSH| Select this check box to use the same password for SSH user as the one you provided for the cluster login user.| :::image type="content" source="./media/apache-kafka-get-started/azure-portal-cluster-basics.png" alt-text="Azure portal create cluster basics" border="true":::
hdinsight Apache Kafka Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-resource-manager-template.md
Two Azure resources are defined in the template:
|Resource group|From the drop-down list, select your existing resource group, or select **Create new**.| |Location|The value will autopopulate with the location used for the resource group.| |Cluster Name|Enter a globally unique name. For this template, use only lowercase letters, and numbers.|
- |Cluster Login User Name|Provide the username, default is **admin**.|
- |Cluster Login Password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ' " ` ). |
- |Ssh User Name|Provide the username, default is **sshuser**|
+ |Cluster Login User Name|Provide the username, default is `admin`.|
+ |Cluster Login Password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ```' ` "```). |
+ |Ssh User Name|Provide the username, default is `sshuser`.|
|Ssh Password|Provide the password.| :::image type="content" source="./media/apache-kafka-quickstart-resource-manager-template/resource-manager-template-kafka.png" alt-text="A screenshot of the template properties" border="false":::
hdinsight Apache Spark Jupyter Spark Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-sql.md
Two Azure resources are defined in the template:
|Resource group|From the drop-down list, select your existing resource group, or select **Create new**.| |Location|The value will autopopulate with the location used for the resource group.| |Cluster Name|Enter a globally unique name. For this template, use only lowercase letters, and numbers.|
- |Cluster Login User Name|Provide the username, default is **admin**.|
- |Cluster Login Password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ' " ` ). |
- |Ssh User Name|Provide the username, default is **sshuser**|
+ |Cluster Login User Name|Provide the username, default is `admin`.|
+ |Cluster Login Password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ```' ` "```). |
+ |Ssh User Name|Provide the username, default is `sshuser`.|
|Ssh Password|Provide the password.| :::image type="content" source="./media/apache-spark-jupyter-spark-sql/resource-manager-template-spark.png " alt-text="Create Spark cluster in HDInsight using Azure Resource Manager template" border="true":::
healthcare-apis Events Consume Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-consume-logic-apps.md
Previously updated : 07/06/2022- Last updated : 10/21/2022+ # Consume events with Logic Apps
-This tutorial shows how to use Azure Logic Apps to process Azure Health Data Services Fast Healthcare Interoperability Resources (FHIR&#174;) events. Logic Apps create and run automated workflows to process event data from other applications. You will learn how to register a FHIR event with your Logic App, meet a specified event criteria, and perform a service operation.
+This tutorial shows how to use Azure Logic Apps to process Azure Health Data Services Fast Healthcare Interoperability Resources (FHIR&#174;) events. Logic Apps creates and runs automated workflows to process event data from other applications. You'll learn how to register a FHIR event with your Logic App, meet a specified event criteria, and perform a service operation.
Here's an example of a Logic App workflow:
Follow these steps to create a Logic App workflow to consume FHIR events:
## Prerequisites
-Before you begin this tutorial, you need to have deployed a FHIR service and enabled events. For more information about deploying events, see [Deploy Events in the Azure portal](./events-deploy-portal.md).
+Before you begin this tutorial, you need to have deployed a FHIR service and enabled events. For more information about deploying events, see [Deploy Events in the Azure portal](events-deploy-portal.md).
## Creating a Logic App
Follow these steps:
1. Go to the Azure portal. 2. Search for "Logic App".
-3. Click "Add".
+3. Select "Add".
4. Specify Basic details. 5. Specify Hosting. 6. Specify Monitoring.
Follow these steps:
You now need to fill out the details of your Logic App. Specify information for these five categories. They are in separate tabs: - Tab 1 - Basics - Tab 2 - Hosting
Continue specifying your Logic App by clicking "Next: Tags".
Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups.
-This example will not use tagging.
+This example won't use tagging.
### Review + create - Tab 5
Your proposed Logic app will display the following details:
- Plan - Monitoring
-If you're satisfied with the proposed configuration, click "Create". If not, click "Previous" to go back and specify new details.
+If you're satisfied with the proposed configuration, select "Create". If not, select "Previous" to go back and specify new details.
First you'll see an alert telling you that deployment is initializing. Next you'll see a new page telling you that the deployment is in progress.
-If there are no errors, you will finally see a notification telling you that your deployment is complete.
+If there are no errors, you'll finally see a notification telling you that your deployment is complete.
#### Your Logic App dashboard Azure creates a dashboard when your Logic App is complete. The dashboard will show you the status of your app. You can return to your dashboard by clicking Overview in the Logic App menu. Here's a Logic App dashboard: You can do the following activities from your dashboard.
Before you begin, you'll need to have a Logic App configured and running correct
Once your Logic App is running, you can create and configure a workflow. To initialize a workflow, follow these steps: 1. Start at the Azure portal.
-2. Click "Logic Apps" in Azure services.
+2. Select "Logic Apps" in Azure services.
3. Select the Logic App you created.
-4. Click "Workflows" in the Workflow menu on the left.
-5. Click "Add" to add a workflow.
+4. Select "Workflows" in the Workflow menu on the left.
+5. Select "Add" to add a workflow.
### Configuring a new workflow
-You will see a new panel on the right for creating a workflow.
+You'll see a new panel on the right for creating a workflow.
You can specify the details of the new workflow in the panel on the right.
To set up a new workflow, fill in these details:
Specify a new name for your workflow. Indicate whether you want the workflow to be stateful or stateless. Stateful is for business processes and stateless is for processing IoT events.
-When you have specified the details, click "Create" to begin designing your workflow.
+When you've specified the details, select "Create" to begin designing your workflow.
### Designing the workflow
-In your new workflow, click the name of the enabled workflow.
+In your new workflow, select the name of the enabled workflow.
You can write code to design a workflow for your application, but for this tutorial, choose the Designer option on the Developer menu.
-Next, click "Choose an operation" to display the "Add a Trigger" blade on the right. Then search for "Azure Event Grid" and click the "Azure" tab below. The Event Grid is not a Logic App Built-in.
+Next, select "Choose an operation" to display the "Add a Trigger" blade on the right. Then search for "Azure Event Grid" and select the "Azure" tab below. The Event Grid isn't a Logic App Built-in.
-When you see the "Azure Event Grid" icon, click on it to display the Triggers and Actions available from Event Grid. For more information about Event Grid, see [What is Azure Event Grid?](./../../event-grid/overview.md).
+When you see the "Azure Event Grid" icon, select it to display the Triggers and Actions available from Event Grid. For more information about Event Grid, see [What is Azure Event Grid?](./../../event-grid/overview.md).
-Click "When a resource event occurs" to set up a trigger for the Azure Event Grid.
+Select "When a resource event occurs" to set up a trigger for the Azure Event Grid.
To tell Event Grid how to respond to the trigger, you must specify parameters and add actions.
Fill in the details for subscription, resource type, and resource name. Then you
- Resource deleted - Resource updated
-For more information about event types, see [What FHIR resource events does Events support?](./events-faqs.md).
+For more information about event types, see [What FHIR resource events does Events support?](events-faqs.md).
### Adding an HTTP action
-Once you have specified the trigger events, you must add more details. Click the "+" below the "When a resource event occurs" button.
+Once you've specified the trigger events, you must add more details. Select the "+" below the "When a resource event occurs" button.
-You need to add a specific action. Click "Choose an operation" to continue. Then, for the operation, search for "HTTP" and click on "Built-in" to select an HTTP operation. The HTTP action will allow you to query the FHIR service.
+You need to add a specific action. Select "Choose an operation" to continue. Then, for the operation, search for "HTTP" and select "Built-in" to choose an HTTP operation. The HTTP action allows you to query the FHIR service.
The options in this example are: - Method is "Get"-- URL is "concat('https://', triggerBody()?['subject'], '/_history/', triggerBody()?['dataVersion'])".
+- URL is `"concat('https://', triggerBody()?['subject'], '/_history/', triggerBody()?['dataVersion'])"`.
- Authentication type is "Managed Identity".-- Audience is "concat('https://', triggerBody()?['data']['resourceFhirAccount'])"
+- Audience is `"concat('https://', triggerBody()?['data']['resourceFhirAccount'])"`.
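+
+As a rough illustration of what this HTTP action does (it isn't part of the Logic App itself), the equivalent request from the command line would look something like the following; the FHIR service URL, resource type, ID, and version are placeholders:
+
+```bash
+# Placeholders: FHIR service URL, resource type/ID, and version from the event.
+FHIR_URL="https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"
+# Acquire a token for the FHIR service audience with your signed-in identity.
+TOKEN=$(az account get-access-token --resource "$FHIR_URL" --query accessToken -o tsv)
+# Version-specific read of the resource that raised the event (subject + /_history/ + dataVersion).
+curl -H "Authorization: Bearer $TOKEN" "$FHIR_URL/Patient/<resource-id>/_history/<version>"
+```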
### Allow FHIR Reader access to your Logic App At this point, you need to give the FHIR Reader access to your app, so it can verify that the event details are correct. Follow these steps to give it access:
-1. The first step is to go back to your Logic App and click the Identity menu item.
+1. The first step is to go back to your Logic App and select the Identity menu item.
2. In the System assigned tab, make sure the Status is "On".
-3. Click on Azure role assignments. Click "Add role assignment".
+3. Select Azure role assignments, and then select "Add role assignment".
-4. Specify the following:
+4. Specify the following options:
- Scope = Subscription - Subscription = your subscription - Role = FHIR Data Reader.
-When you have specified the first four steps, add the role assignment by Managed identity, using Subscription, Managed identity (Logic App Standard), and select your Logic App by clicking the name and then clicking the Select button. Finally, click "Review + assign" to assign the role.
+When you've specified the first four steps, add the role assignment by Managed identity, using Subscription, Managed identity (Logic App Standard), and select your Logic App by clicking the name and then clicking the Select button. Finally, select "Review + assign" to assign the role.
### Add a condition
-After you have given FHIR Reader access to your app, go back to the Logic App workflow Designer. Then add a condition to determine whether the event is one you want to process. Click the "+" below HTTP to "Choose an operation". On the right, search for the word "condition". Click on "Built-in" to display the Control icon. Next click Actions and choose Condition.
+After you've given FHIR Reader access to your app, go back to the Logic App workflow Designer. Then add a condition to determine whether the event is one you want to process. Select the "+" below HTTP to "Choose an operation". On the right, search for the word "condition". Select "Built-in" to display the Control icon. Next, select Actions and choose Condition.
When the condition is ready, you can specify what actions happen if the condition is true or false. ### Choosing a condition criteria
-In order to specify whether you want to take action for the specific event, begin specifying the criteria by clicking on "Condition" in the workflow on the left. You will then see a set of condition choices on the right.
+In order to specify whether you want to take action for the specific event, begin specifying the criteria by clicking on "Condition" in the workflow on the left. You'll then see a set of condition choices on the right.
Under the "And" box, add these two conditions:
The expression for getting the resourceType is `body('HTTP')?['resourceType']`.
You can select Event Type from the Dynamic Content.
-Here is an example of the Condition criteria:
+Here's an example of the Condition criteria:
#### Save your workflow
-When you have entered the condition criteria, save your workflow.
+When you've entered the condition criteria, save your workflow.
#### Workflow dashboard
-To check the status of your workflow, click Overview in the workflow menu. Here is a dashboard for a workflow:
+To check the status of your workflow, select Overview in the workflow menu. Here's a dashboard for a workflow:
You can do the following operations from your workflow dashboard:
To test your new workflow, do the following steps:
3. The event should be shaded in green if the action was successful. 4. If it failed, the event will be shaded in red.
-Here is an example of a workflow trigger success operation:
+Here's an example of a workflow trigger success operation:
:::image type="content" source="media/events-logic-apps/events-logic-success.png" alt-text="Screenshot showing workflow success indicated by green highlighting of the workflow name." lightbox="./media/events-logic-apps/events-logic-success.png"::: ## Next steps
-For more information about FHIR events, see
+For more information about FHIR events, see:
->[!div class="nextstepaction"]
->[What are Events?](./events-overview.md)
+> [!div class="nextstepaction"]
+> [What are Events?](./events-overview.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-deploy-portal.md
Previously updated : 07/06/2022 Last updated : 10/21/2022 # Deploy Events using the Azure portal
-In this quickstart, you'll learn how to deploy the Azure Health Data Services Events feature in the Azure portal to send Fast Healthcare Interoperability Resources (FHIR&#174;) event messages.
+In this Quickstart, you'll learn how to deploy the Azure Health Data Services Events feature in the Azure portal to send Fast Healthcare Interoperability Resources (FHIR&#174;) event messages.
## Prerequisites
It's important that you have the following prerequisites completed before you be
* **Name**: Provide a name for your Events subscription. * **System Topic Name**: Provide a name for your System Topic.
- >[!NOTE]
+ > [!NOTE]
> The first time you set up the Events feature, you will be required to enter a new **System Topic Name**. Once the system topic for the workspace is created, the **System Topic Name** will be used for any additional Events subscriptions that you create within the workspace. * **Event types**: Type of FHIR events to send messages for (for example: create, updated, and deleted).
It's important that you have the following prerequisites completed before you be
:::image type="content" source="media/events-deploy-in-portal/events-new-subscription-created.png" alt-text="Screenshot of a successfully deployed events subscription." lightbox="media/events-deploy-in-portal/events-new-subscription-created.png":::
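If you'd rather script the subscription than fill in the portal fields above, the following Azure CLI sketch shows the general shape. It assumes the system topic already exists and that an event hub is the destination; all names are placeholders, and the event type strings are assumptions to verify against the Events documentation.

```azurecli
# Sketch: create an Events subscription on an existing Event Grid system topic.
# Placeholder names; the FHIR event type strings are assumptions.
az eventgrid system-topic event-subscription create \
    --name <events-subscription-name> \
    --resource-group <resource-group> \
    --system-topic-name <system-topic-name> \
    --endpoint-type eventhub \
    --endpoint "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace>/eventhubs/<event-hub>" \
    --included-event-types Microsoft.HealthcareApis.FhirResourceCreated Microsoft.HealthcareApis.FhirResourceUpdated Microsoft.HealthcareApis.FhirResourceDeleted
```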
- >[!TIP]
- >For more information about providing access using an Azure Managed identity, see [Assign a system-managed identity to an Event Grid system topic](../../event-grid/enable-identity-system-topics.md) and [Event delivery with a managed identity](../../event-grid/managed-service-identity.md)
+ > [!TIP]
+ > For more information about providing access using an Azure Managed identity, see [Assign a system-managed identity to an Event Grid system topic](../../event-grid/enable-identity-system-topics.md) and [Event delivery with a managed identity](../../event-grid/managed-service-identity.md)
>
- >For more information about managed identities, see [What are managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md)
+ > For more information about managed identities, see [What are managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md)
>
- >For more information about Azure role-based access control (Azure RBAC), see [What is Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
+ > For more information about Azure role-based access control (Azure RBAC), see [What is Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
## Next steps In this article, you've learned how to deploy Events in the Azure portal.
-To learn how to display the Events metrics, see
+To learn how to use the Events metrics, see
->[!div class="nextstepaction"]
->[How to display Events metrics](./events-display-metrics.md)
+> [!div class="nextstepaction"]
+> [How to use Events metrics](events-use-metrics.md)
To learn how to export Event Grid system diagnostic logs and metrics, see
->[!div class="nextstepaction"]
->[How to export Events diagnostic logs and metrics](./events-display-metrics.md)
+> [!div class="nextstepaction"]
+> [How to enable Events diagnostic logs and metrics](events-enable-diagnostic-settings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Disable Delete Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-disable-delete-workspace.md
Previously updated : 09/22/2022 Last updated : 10/21/2022
-# Disable Events and delete workspaces
+# How to disable Events and delete workspaces
In this article, you'll learn how to disable the Events feature and delete workspaces in the Azure Health Data Services.
As an example:
For more information about troubleshooting Events, see the Events troubleshooting guide:
->[!div class="nextstepaction"]
->[Troubleshoot Events](./events-troubleshooting-guide.md)
+> [!div class="nextstepaction"]
+> [Troubleshoot Events](events-troubleshooting-guide.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-enable-diagnostic-settings.md
+
+ Title: Enable Events diagnostic settings for diagnostic logs and metrics export - Azure Health Data Services
+description: This article provides resources on how to enable Events diagnostic settings for diagnostic logs and metrics exporting.
+++++ Last updated : 10/21/2022+++
+# How to enable diagnostic settings for Events
+
+In this article, you'll find resources that help you enable the Events diagnostic settings for Azure Event Grid system topics.
+
+After they're enabled, Event Grid system topics diagnostic logs and metrics will be exported to the destination of your choosing for audit, analysis, troubleshooting, or backup.
+
+## Resources
+
+|Description|Resource|
+|-|--|
+|Learn how to enable the Event Grid system topics diagnostic logging and metrics export feature.|[Enable diagnostic logs for Event Grid system topics](../../event-grid/enable-diagnostic-logs-topic.md#enable-diagnostic-logs-for-event-grid-system-topics)|
+|View a list of currently captured Event Grid system topics diagnostic logs.|[Event Grid system topic diagnostic logs](../../azure-monitor/essentials/resource-logs-categories.md#microsofteventgridsystemtopics)|
+|View a list of currently captured Event Grid system topics metrics.|[Event Grid system topic metrics](../../azure-monitor/essentials/metrics-supported.md#microsofteventgridsystemtopics)|
+|More information about how to work with diagnostics logs.|[Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md)|
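As a rough illustration of what the linked guidance configures, the following Azure CLI sketch routes a system topic's logs and metrics to a Log Analytics workspace. The resource IDs are placeholders, and the `DeliveryFailures` log category is an assumption to check against the categories listed above.

```azurecli
# Sketch: export Event Grid system topic diagnostic logs and metrics to Log Analytics.
# Placeholder resource IDs; the "DeliveryFailures" category is an assumption.
az monitor diagnostic-settings create \
    --name events-diagnostics \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/systemTopics/<system-topic-name>" \
    --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<log-analytics-workspace>" \
    --logs '[{"category": "DeliveryFailures", "enabled": true}]' \
    --metrics '[{"category": "AllMetrics", "enabled": true}]'
```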
+
+> [!NOTE]
+> It might take up to 15 minutes for the first Events diagnostic logs and metrics to display in the destination of your choice.
+
+## Next steps
+
+To learn how to use Events metrics in the Azure portal, see
+
+> [!div class="nextstepaction"]
+> [How to use Events metrics](events-use-metrics.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-use-metrics.md
+
+ Title: Use Events metrics in Azure Health Data Services
+description: This article explains how to use Events metrics
+++++ Last updated : 10/21/2022+++
+# How to use Events metrics
+
+In this article, you'll learn how to use Events metrics in the Azure portal.
+
+> [!TIP]
+> To learn more about Azure Monitor and metrics, see [Azure Monitor Metrics overview](/azure/azure-monitor/essentials/data-platform-metrics)
+
+> [!NOTE]
+> For the purposes of this article, an Azure Event Hubs event hub was used as the Events message endpoint.
+
+## Use metrics
+
+1. Within your Azure Health Data Services workspace, select the **Events** button.
+
+ :::image type="content" source="media\events-display-metrics\events-metrics-workspace-select.png" alt-text="Screenshot of select the events button from the workspace." lightbox="media\events-display-metrics\events-metrics-workspace-select.png":::
+
+2. The Events page displays the combined metrics for all Events Subscriptions. For example, we have one subscription named **fhir-events** and one processed message. Select the subscription in the lower left-hand corner to view the metrics for that subscription.
+
+ :::image type="content" source="media\events-display-metrics\events-metrics-main.png" alt-text="Screenshot of events you would like to display metrics for." lightbox="media\events-display-metrics\events-metrics-main.png":::
+
+3. From this page, you'll notice that the subscription named **fhir-events** has one processed message. To view the Event Hubs metrics, select the name of the Event Hubs (for this example, **azuredocsfhirservice**) from the lower right-hand corner of the page.
+
+ :::image type="content" source="media\events-display-metrics\events-metrics-subscription.png" alt-text="Screenshot of select the metrics button." lightbox="media\events-display-metrics\events-metrics-subscription.png":::
+
+4. From this page, you'll notice that the Event Hubs received the incoming message presented in the previous Events subscription metrics pages.
+
+ :::image type="content" source="media\events-display-metrics\events-metrics-event-hub.png" alt-text="Screenshot of displaying event hubs metrics." lightbox="media\events-display-metrics\events-metrics-event-hub.png":::
+
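As an alternative to stepping through the portal blades above, you can pull similar numbers from the command line. This is only a sketch: the resource IDs are placeholders, and the metric names are assumptions based on the standard Event Grid and Event Hubs metrics.

```azurecli
# Sketch: query delivered events on the system topic and incoming messages on the event hub namespace.
# Placeholder resource IDs; metric names are assumptions.
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/systemTopics/<system-topic-name>" \
    --metric "DeliverySuccessCount" \
    --interval PT1H

az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace>" \
    --metric "IncomingMessages" \
    --interval PT1H
```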
+## Next steps
+
+To learn how to export Events Azure Event Grid system diagnostic logs and metrics, see
+
+> [!div class="nextstepaction"]
+> [Enable diagnostic settings for Events](events-enable-diagnostic-settings.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
import-export Storage Import Export Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-service.md
Previously updated : 03/14/2022 Last updated : 10/20/2022 # What is Azure Import/Export service?
The Azure Import/Export service supports copying data to and from all Azure stor
|North Central US | Australia Southeast | Brazil South | UK South | |South Central US | Japan West |Korea Central | Germany Central | |West Central US | Japan East | US Gov Virginia | Germany Northeast |
-|South Africa West | South Africa North |
+|South Africa West | South Africa North | UAE |
## Security considerations
industry Install Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/install-azure-farmbeats.md
This article describes how to install Azure FarmBeats in your Azure subscription
Azure FarmBeats is a business-to-business offering available in Azure Marketplace. It enables aggregation of agriculture data sets across providers and generation of actionable insights. Azure FarmBeats does this by enabling you to build artificial intelligence (AI) or machine learning (ML) models based on fused data sets. The two main components of Azure FarmBeats are: -- **Datahub**: An API layer that enables aggregation, normalization, and contextualization of various agriculture data sets across different providers.
+- **Data hub**: An API layer that enables aggregation, normalization, and contextualization of various agriculture data sets across different providers.
-- **Accelerator**: Web application that is built on top of Datahub. It jump-starts your model development and visualization. The accelerator uses Azure FarmBeats APIs to demonstrate visualization of ingested sensor data as charts and visualization of model output as maps.
+- **Accelerator**: Web application that is built on top of Data hub. It jump-starts your model development and visualization. The accelerator uses Azure FarmBeats APIs to demonstrate visualization of ingested sensor data as charts and visualization of model output as maps.
## General information
When you install Azure FarmBeats, the following resources are provisioned in you
| Azure Resources Installed | Azure FarmBeats component | |||
-| Application Insights | Datahub & Accelerator |
-| App Service | Datahub & Accelerator |
-| App Service Plan | Datahub & Accelerator |
-| API Connection | Datahub |
-| Azure Cache for Redis | Datahub |
-| Azure Cosmos DB | Datahub |
-| Azure Data Factory V2 | Datahub & Accelerator |
-| Azure Batch account | Datahub |
-| Azure Key Vault | Datahub & Accelerator |
+| Application Insights | Data hub & Accelerator |
+| App Service | Data hub & Accelerator |
+| App Service Plan | Data hub & Accelerator |
+| API Connection | Data hub |
+| Azure Cache for Redis | Data hub |
+| Azure Cosmos DB | Data hub |
+| Azure Data Factory V2 | Data hub & Accelerator |
+| Azure Batch account | Data hub |
+| Azure Key Vault | Data hub & Accelerator |
| Azure Maps Account | Accelerator |
-| Event Hub Namespace | Datahub |
-| Logic App | Datahub |
-| Storage Account | Datahub & Accelerator |
-| Time Series Insights | Datahub |
+| Event Hub Namespace | Data hub |
+| Logic App | Data hub |
+| Storage Account | Data hub & Accelerator |
+| Time Series Insights | Data hub |
### Costs incurred The cost of Azure FarmBeats is an aggregate of the cost of the underlying Azure services. Pricing information for Azure services can be calculated using the [Pricing Calculator](https://azure.microsoft.com/pricing/calculator). The actual cost of the total installation will vary based on the usage. The steady state cost for the two components is: -- Datahub - less than $10 per day
+- Data hub - less than $10 per day
- Accelerator - less than $2 per day ### Regions supported
Azure FarmBeats require Azure Active Directory application creation and registra
Run the following steps in a Cloud Shell instance using the PowerShell environment. First-time users will be prompted to select a subscription and create a storage account. Complete the setup as instructed.
-1. Download the [AAD app generator script](https://aka.ms/FarmBeatsAADScript)
+1. Download the AAD app generator script
```azurepowershell-interactive wget -q https://aka.ms/FarmBeatsAADScript -O ./create_aad_script.ps1
Your registration process is complete. Make a note of your **Sentinel Username**
## Install
-You are now ready to install FarmBeats. Follow the steps below to start the installation:
+You're now ready to install FarmBeats. Follow the steps below to start the installation:
1. Sign in to the Azure portal. Select your account in the top-right corner and switch to the Azure AD tenant where you want to install Azure FarmBeats.
You are now ready to install FarmBeats. Follow the steps below to start the inst
![Dependencies Tab](./media/install-azure-farmbeats/create-azure-farmbeats-dependencies.png)
-8. Once the entered details are validated, select **OK**. The Terms of use page appear. Review the terms and select **Create** to start the installation. You will be redirected to the page where you can follow the installation progress.
+8. Once the entered details are validated, select **OK**. The Terms of use page appears. Review the terms and select **Create** to start the installation. You'll be redirected to the page where you can follow the installation progress.
Once the installation is complete, you can verify the installation and start using FarmBeats portal by navigating to the website name you provided during installation: https://\<FarmBeats-website-name>.azurewebsites.net. You should see FarmBeats user interface with an option to create Farms.
-**Datahub** can be found at https://\<FarmBeats-website-name>-api.azurewebsites.net/swagger. Here you will see the different FarmBeats API objects and perform REST operations on the APIs.
+**Data hub** can be found at https://\<FarmBeats-website-name>-api.azurewebsites.net/swagger. Here you'll see the different FarmBeats API objects and perform REST operations on the APIs.
## Upgrade
To upgrade FarmBeats to the latest version, run the following steps in a Cloud S
First-time users will be prompted to select a subscription and create a storage account. Complete the setup as instructed.
-1. Download the [upgrade script](https://aka.ms/FarmBeatsUpgradeScript)
+1. Download the upgrade script
```azurepowershell-interactive wget -q https://aka.ms/FarmBeatsUpgradeScript -O ./upgrade-farmbeats.ps1
The path to input.json file is optional. If not specified, the script will ask f
## Uninstall
-To uninstall Azure FarmBeats Datahub or Accelerator, complete the following steps:
+To uninstall Azure FarmBeats Data hub or Accelerator, complete the following steps:
1. Log in to the Azure portal and **delete the resource groups** in which these components are installed.
To uninstall Azure FarmBeats Datahub or Accelerator, complete the following step
## Next steps
-You have learned how to install Azure FarmBeats in your Azure subscription. Now, learn how to [add users](manage-users-in-azure-farmbeats.md#manage-users) to your Azure FarmBeats instance.
+You've learned how to install Azure FarmBeats in your Azure subscription. Now, learn how to [add users](manage-users-in-azure-farmbeats.md#manage-users) to your Azure FarmBeats instance.
iot-develop Quickstart Devkit Microchip Atsame54 Xpro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro.md
ms.devlang: c Previously updated : 10/18/2021 Last updated : 10/21/2022+ zone_pivot_groups: iot-develop-toolset #- id: iot-develop-toolset ## Owner: timlt
zone_pivot_groups: iot-develop-toolset
# prompt: Choose a build environment # - id: iot-toolset-mplab # Title: MPLAB- #Customer intent: As a device builder, I want to see a working IoT device sample connecting to IoT Hub and sending properties and telemetry, and responding to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to.
Termite is now ready to receive output from the Microchip E54.
1. Expand the sample, then expand the **Sample** folder and open the sample_config.h file.
-1. Near the top of the file uncomment the `#define ENABLE_DPS_SAMPLE` directive.
+1. Near the top of the file, uncomment the `#define ENABLE_DPS_SAMPLE` directive.
```c #define ENABLE_DPS_SAMPLE
Termite is now ready to receive output from the Microchip E54.
1. Expand the **sample_azure_iot_embedded_sdk_pnp** project, then expand the **Header Files** folder and open the sample_config.h file.
-1. Near the top of the file uncomment the `#define ENABLE_DPS_SAMPLE` directive.
+1. Near the top of the file, uncomment the `#define ENABLE_DPS_SAMPLE` directive.
```c #define ENABLE_DPS_SAMPLE
iot-develop Quickstart Devkit Mxchip Az3166 Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-mxchip-az3166-iot-hub.md
ms.devlang: c Previously updated : 06/09/2021- Last updated : 10/21/2022+ # Connect an MXCHIP AZ3166 devkit to IoT Hub
iot-develop Quickstart Devkit Mxchip Az3166 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-mxchip-az3166.md
ms.devlang: c Previously updated : 06/02/2021- Last updated : 10/21/2022+ # Quickstart: Connect an MXCHIP AZ3166 devkit to IoT Central
iot-develop Quickstart Devkit Nxp Mimxrt1050 Evkb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-nxp-mimxrt1050-evkb.md
ms.devlang: c Previously updated : 06/04/2021- Last updated : 10/21/2022+ # Quickstart: Connect an NXP MIMXRT1050-EVKB Evaluation kit to IoT Central
iot-develop Quickstart Devkit Nxp Mimxrt1060 Evk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-nxp-mimxrt1060-evk.md
ms.devlang: c Previously updated : 11/16/2021- Last updated : 10/21/2022+ zone_pivot_groups: iot-develop-nxp-toolset # Owner: timlt
In this section, you use IAR EW IDE to modify a configuration file for Azure IoT
1. Expand the project, then expand the **Sample** subfolder and open the *sample_config.h* file.
-1. Near the top of the file uncomment the `#define ENABLE_DPS_SAMPLE` directive.
+1. Near the top of the file, uncomment the `#define ENABLE_DPS_SAMPLE` directive.
```c #define ENABLE_DPS_SAMPLE
Keep the terminal open to monitor device output in the following steps.
* MCUXpresso IDE (MCUXpresso), version 11.3.1 or later. Download and install a [free copy of MCUXPresso](https://www.nxp.com/design/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-integrated-development-environment-ide:MCUXpresso-IDE).
-* Download the [MIMXRT1060-EVK SDK 2.9.0 or later](https://mcuxpresso.nxp.com/en/builder). After you sign in, the website lets you build a custom SDK archive to download. After you select the EVK MIMXRT1060 board and click the option to build the SDK, you can download the zip archive. The only SDK component to include is the preselected **SDMMC Stack**.
+* Download the [MIMXRT1060-EVK SDK 2.9.0 or later](https://mcuxpresso.nxp.com/en/builder). After you sign in, the website lets you build a custom SDK archive to download. After you select the EVK MIMXRT1060 board and select the option to build the SDK, you can download the zip archive. The only SDK component to include is the preselected **SDMMC Stack**.
* Download the NXP MIMXRT1060-EVK MCUXpresso sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
In this section, you prepare your environment, and use MCUXpresso to build and r
:::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk/mcu-load-project.png" alt-text="Screenshot showing a loaded project in MCUXpresso.":::
-1. Near the top of the file uncomment the `#define ENABLE_DPS_SAMPLE` directive.
+1. Near the top of the file, uncomment the `#define ENABLE_DPS_SAMPLE` directive.
```c #define ENABLE_DPS_SAMPLE
In this section, you prepare your environment, and use MCUXpresso to build and r
> [!NOTE] > The terminal window appears in the lower half of the IDE and might initially display garbage characters until you download and run the sample.
-1. Select the **Start Debugging project [project name]** toolbar button. This downloads the project to the device, and runs it.
+1. Select the **Start Debugging project [project name]** toolbar button. This action downloads the project to the device, and runs it.
1. After the code hits a break in the IDE, select the **Resume (F8)** toolbar button.
If you experience issues building the device code, flashing the device, or conne
For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md). :::zone-end :::zone pivot="iot-toolset-iar-ewarm"
-If you need help debugging the application, see the selections under **Help** in **IAR EW for ARM**.
+If you need help with debugging the application, see the selections under **Help** in **IAR EW for ARM**.
:::zone-end :::zone pivot="iot-toolset-iar-ewarm"
-If you need help debugging the application, in MCUXpresso open the **Help > MCUXPresso IDE User Guide** and see the content on Azure RTOS debugging.
+If you need help with debugging the application, in MCUXpresso open the **Help > MCUXPresso IDE User Guide** and see the content on Azure RTOS debugging.
:::zone-end ## Clean up resources
iot-develop Quickstart Devkit Renesas Rx65n 2Mb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-2mb.md
ms.devlang: c Previously updated : 06/04/2021- Last updated : 10/21/2022+ # Quickstart: Connect a Renesas Starter Kit+ for RX65N-2MB to IoT Central
iot-develop Quickstart Devkit Stm B L475e https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e.md
ms.devlang: c Previously updated : 06/02/2021- Last updated : 10/21/2022+ # Quickstart: Connect an STMicroelectronics B-L475E-IOT01A Discovery kit to IoT Central
In this quickstart, you use Azure RTOS to connect the STMicroelectronics [B-L475E-IOT01A](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html) Discovery kit (from now on, the STM DevKit) to Azure IoT.
-You will complete the following tasks:
+You'll complete the following tasks:
* Install a set of embedded development tools for programming the STM DevKit in C * Build an image and flash it onto the STM DevKit
To connect the STM DevKit to Azure, you'll modify a configuration file for Wi-Fi
### Flash the image
-1. On the STM DevKit MCU, locate the **Reset** button (1), the Micro USB port (2), which is labeled **USB STLink**, and the board part number (3). You will refer to these items in the next steps. All of them are highlighted in the following picture:
+1. On the STM DevKit MCU, locate the **Reset** button (1), the Micro USB port (2), which is labeled **USB STLink**, and the board part number (3). You'll refer to these items in the next steps. All of them are highlighted in the following picture:
:::image type="content" source="media/quickstart-devkit-stm-b-l475e/stm-devkit-board-475.png" alt-text="Locate key components on the STM DevKit board":::
To view telemetry in IoT Central portal:
## Call a direct method on the device
-You can also use IoT Central to call a direct method that you have implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that enables you to turn an LED on or off.
+You can also use IoT Central to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that enables you to turn an LED on or off.
To call a method in IoT Central portal:
iot-develop Quickstart Devkit Stm B L4s5i https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l4s5i.md
ms.devlang: c Previously updated : 06/02/2021- Last updated : 10/21/2022+ zone_pivot_groups: iot-develop-stm32-toolset # Owner: timlt
To view telemetry in IoT Central portal:
## Call a direct method on the device
-You can also use IoT Central to call a direct method that you have implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout.
+You can also use IoT Central to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout.
To call a method in IoT Central portal:
Select the **About** tab from the device page.
## Download the STM32Cube IDE
-You can download a free version of STM32Cube IDE, but you will need to create an account. Follow the instructions on the ST website. THe STM32Cube IDE can be downloaded from this website:
+You can download a free version of STM32Cube IDE, but you'll need to create an account. Follow the instructions on the ST website. The STM32Cube IDE can be downloaded from this website:
https://www.st.com/en/development-tools/stm32cubeide.html
-The sample distribution zip file contains the following sub-folders that you will use later:
+The sample distribution zip file contains the following subfolders that you'll use later:
|Folder|Contents| |-|--|
To connect the device to Azure, you'll modify a configuration file for Azure IoT
### Build the project
-In STM32CubeIDE, select ***Project > Build All*** to build sample projects and its dependent libraries. You will observe compilation and linking of the sample project.
+In STM32CubeIDE, select ***Project > Build All*** to build the sample project and its dependent libraries. You'll observe compilation and linking of the sample project.
Download and run the project
-1. On the STM DevKit MCU, locate the **Reset** button (1), the Micro USB port (2), which is labeled **USB STLink**, and the board part number (3). You will refer to these items in the next steps. All of them are highlighted in the following picture:
+1. On the STM DevKit MCU, locate the **Reset** button (1), the Micro USB port (2), which is labeled **USB STLink**, and the board part number (3). You'll refer to these items in the next steps. All of them are highlighted in the following picture:
:::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/stm-b-l4s5i.png" alt-text="Locate key components on the STM DevKit board":::
To view telemetry in IoT Central portal:
## Call a direct method on the device
-You can also use IoT Central to call a direct method that you have implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout.
+You can also use IoT Central to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout.
To call a method in IoT Central portal:
To view telemetry in IoT Central portal:
## Call a direct method on the device
-You can also use IoT Central to call a direct method that you have implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that enables you to turn an LED on or off.
+You can also use IoT Central to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that enables you to turn an LED on or off.
To call a method in IoT Central portal:
To remove the entire Azure IoT Central sample application and all its devices an
## Next steps
-In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You also used the IoT Central portal to create Azure resources, connect the STM DevKit securely to Azure, view customer content, and send messages.
+In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You also used the IoT Central portal to create Azure resources, connect the STM DevKit securely to Azure, view device data, and send messages.
As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
iot-hub-device-update Device Update Control Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-control-access.md
Title: Understand Device Update for IoT Hub authentication and authorization | M
description: Understand how Device Update for IoT Hub uses Azure RBAC to provide authentication and authorization for users and service APIs. Previously updated : 2/11/2021 Last updated : 10/21/2022
Below actions will be blocked with upcoming release, if these permissions are no
4. Click **Next**. For **Assign access to**, select **User, group, or service principal**. Click **+ Select Members**, search for '**Azure Device Update**' 5. Click **Next** -> **Review + Assign**
+To validate that you've set permissions correctly:
+1. Go to the **IoT Hub** connected to your Device Update instance. Click **Access control (IAM)**
+2. Click **Check access**
+3. Select **User, group, or service principal** and search for '**Azure Device Update**'
+4. After clicking on '**Azure Device Update**', verify that the **IoT Hub Data Contributor** role is listed under **Role assignments**
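If you prefer to confirm the assignment from a script rather than the portal check above, a hedged Azure CLI sketch follows. The service principal display name and the resource names are placeholders; on older CLI versions the principal's object ID may be exposed as `objectId` rather than `id`.

```azurecli
# Sketch: list the roles held by the Azure Device Update principal on the IoT hub.
# Placeholder names; adjust the --query property for your CLI version.
PRINCIPAL_ID=$(az ad sp list --display-name "Azure Device Update" --query "[0].id" -o tsv)

az role assignment list \
    --assignee "$PRINCIPAL_ID" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Devices/IotHubs/<iot-hub-name>" \
    --query "[].roleDefinitionName" -o tsv
```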
## Authenticate to Device Update REST APIs
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
More about Azure Key Vault management guidelines, see:
| Key Vault Secrets User | Read secret contents including secret portion of a certificate with private key. Only works for key vaults that use the 'Azure role-based access control' permission model. | 4633458b-17de-408a-b874-0445c86b69e6 | > [!NOTE]
-> There is no 'Key Vault Certificate User` because applications require secrets portion of certificate with private key. 'Key Vault Secrets User` role should be used for applications to retrieve certificate.
-
+> There is no `Key Vault Certificate User` role because applications require the secret portion of a certificate, which includes the private key. Applications should use the `Key Vault Secrets User` role to retrieve a certificate.
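For example, granting an application access to read the secret portion of a certificate might look like the following sketch. The client ID, subscription, resource group, and vault names are placeholders; the vault must use the Azure RBAC permission model.

```azurecli
# Sketch: assign the Key Vault Secrets User role to an application's service principal.
# Placeholder identifiers throughout.
az role assignment create \
    --role "Key Vault Secrets User" \
    --assignee "<application-client-id>" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
```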
For more information about Azure built-in roles definitions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
key-vault Tutorial Net Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-virtual-machine.md
Add these lines, updating the URI to reflect the `vaultUri` of your key vault. B
Console.Write("Input the value of your secret > "); string secretValue = Console.ReadLine();
- Console.Write("Creating a secret in " + keyVaultName + " called '" + secretName + "' with the value '" + secretValue + "` ...");
+ Console.Write("Creating a secret in " + keyVaultName + " called '" + secretName + "' with the value '" + secretValue + "' ...");
client.SetSecret(secretName, secretValue);
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-java.md
public class App {
String secretValue = con.readLine();
- System.out.print("Creating a secret in " + keyVaultName + " called '" + secretName + "' with value '" + secretValue + "` ... ");
+ System.out.print("Creating a secret in " + keyVaultName + " called '" + secretName + "' with value '" + secretValue + "' ... ");
secretClient.setSecret(new KeyVaultSecret(secretName, secretValue));
lab-services Class Type Ethical Hacking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-ethical-hacking.md
Kali is a Linux distribution that includes tools for penetration testing and sec
1. On the **Configure Networking** page, leave the connection as **Not Connected**. You'll set up the network adapter later. 1. On the **Connect Virtual Hard Disk** page, select **Use an existing virtual hard disk**. Browse to the location for the **Kali-Linux-{version}-vmware-amd64.vhdk** file created in the previous step, and select **Next**. 1. On the **Completing the New Virtual Machine Wizard** page, select **Finish**.
- 1. Once the virtual machine is created, select it in the Hyper-V Manager. Don't turn on the machine yet.
+ 1. Once the virtual machine is created, select it in the Hyper-V Manager. Don't turn on the machine yet.
1. Choose **Action** -> **Settings**. 1. On the **Settings for Kali-Linux** dialog, select **Add Hardware**. 1. Select **Legacy Network Adapter**, and select **Add**. 1. On the **Legacy Network Adapter** page, select **LabServicesSwitch** for the **Virtual Switch** setting, and select **OK**. LabServicesSwitch was created when preparing the template machine for Hyper-V in the **Prepare Template for Nested Virtualization** section.
- 1. The Kali-Linux image is now ready for use. From **Hyper-V Manager**, choose **Action** -> **Start**, then choose **Action** -> **Connect** to connect to the virtual machine. The default username is **kali** and the password is **kali**.
+ 1. The Kali-Linux image is now ready for use. From **Hyper-V Manager**, choose **Action** -> **Start**, then choose **Action** -> **Connect** to connect to the virtual machine. The default username is `kali` and the password is `kali`.
### Set up a nested VM with Metasploitable Image
The Rapid7 Metasploitable image is an image purposely configured with security v
:::image type="content" source="./media/class-type-ethical-hacking/network-adapter-page.png" alt-text="Screenshot of settings dialog for Hyper V VM."::: 1. On the **Legacy Network Adapter** page, select **LabServicesSwitch** for the **Virtual Switch** setting, and select **OK**. LabServicesSwitch was created when preparing the template machine for Hyper-V in the **Prepare Template for Nested Virtualization** section. :::image type="content" source="./media/class-type-ethical-hacking/legacy-network-adapter-page.png" alt-text="Screenshot of Legacy Network adapter settings page for Hyper V VM.":::
- 1. The Metasploitable image is now ready for use. From **Hyper-V Manager**, choose **Action** -> **Start**, then choose **Action** -> **Connect** to connect to the virtual machine. The default username is **msfadmin** and the password is **msfadmin**.
+ 1. The Metasploitable image is now ready for use. From **Hyper-V Manager**, choose **Action** -> **Start**, then choose **Action** -> **Connect** to connect to the virtual machine. The default username is `msfadmin` and the password is `msfadmin`.
The template is now updated and has images needed for an ethical hacking penetration testing class, an image with tools to do the penetration testing and another image with security vulnerabilities to discover. The template image can now be [published](how-to-create-manage-template.md#publish-the-template-vm) to the class.
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
When you stop the debugging session for a workflow run that uses locally run web
> [!NOTE] > After your workflow starts running, the terminal window might show errors like this example: >
-> `message='Http request failed with unhandled exception of type 'InvalidOperationException' and message: 'System.InvalidOperationException: Synchronous operations are disallowed. Call ReadAsync or set AllowSynchronousIO to true instead.`
+> `message='Http request failed with unhandled exception of type 'InvalidOperationException' and message: 'System.InvalidOperationException: Synchronous operations are disallowed. Call ReadAsync or set AllowSynchronousIO to true instead.'`
> > In this case, open the **local.settings.json** file in your project's root folder, and make sure that the property is set to `true`: >
logic-apps Export From Ise To Standard Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-ise-to-standard-logic-app.md
Consider the following recommendations when you select logic apps for export:
The export tool downloads your project to your selected folder location, expands the project in Visual Studio Code, and deploys any managed connections, if you selected that option.
- ![Screenshot showing the 'Export status` section with export progress.](media/export-from-ise-to-standard-logic-app/export-status.png)
+ ![Screenshot showing the 'Export status' section with export progress.](media/export-from-ise-to-standard-logic-app/export-status.png)
1. After this process completes, Visual Studio Code opens a new workspace. You can now safely close the export window.
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 07/30/2022 Last updated : 10/21/2022 # Limits and configuration reference for Azure Logic Apps
The following tables list the values for the number of artifacts limited to each
| Assemblies | 10 | 25 | 1,000 | | Certificates | 25 | 2 | 1,000 | | Batch configurations | 5 | 1 | 50 |
-||||
+| RosettaNet partner interface process (PIP) | 10 | 1 | 500 |
<a name="artifact-capacity-limits"></a>
logic-apps Logic Apps Workflow Definition Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-workflow-definition-language.md
property or a value in an array.
| Operator | Task | |-||
-| ' | To use a string literal as input or in expressions and functions, wrap the string only with single quotation marks, for example, `'<myString>'`. Do not use double quotation marks (""), which conflict with the JSON formatting around an entire expression. For example: <p>**Yes**: length('Hello') </br>**No**: length("Hello") <p>When you pass arrays or numbers, you don't need wrapping punctuation. For example: <p>**Yes**: length([1, 2, 3]) </br>**No**: length("[1, 2, 3]") |
-| [] | To reference a value at a specific position (index) in an array, use square brackets. For example, to get the second item in an array: <p>`myArray[1]` |
-| . | To reference a property in an object, use the dot operator. For example, to get the `name` property for a `customer` JSON object: <p>`"@parameters('customer').name"` |
-| ? | To reference null properties in an object without a runtime error, use the question mark operator. For example, to handle null outputs from a trigger, you can use this expression: <p>`@coalesce(trigger().outputs?.body?.<someProperty>, '<property-default-value>')` |
+| `'` | To use a string literal as input or in expressions and functions, wrap the string only with single quotation marks, for example, `'<myString>'`. Do not use double quotation marks (`""`), which conflict with the JSON formatting around an entire expression. For example: <p>**Yes**: length('Hello') </br>**No**: length("Hello") <p>When you pass arrays or numbers, you don't need wrapping punctuation. For example: <p>**Yes**: length([1, 2, 3]) </br>**No**: length("[1, 2, 3]") |
+| `[]` | To reference a value at a specific position (index) in an array, use square brackets. For example, to get the second item in an array: <p>`myArray[1]` |
+| `.` | To reference a property in an object, use the dot operator. For example, to get the `name` property for a `customer` JSON object: <p>`"@parameters('customer').name"` |
+| `?` | To reference null properties in an object without a runtime error, use the question mark operator. For example, to handle null outputs from a trigger, you can use this expression: <p>`@coalesce(trigger().outputs?.body?.<someProperty>, '<property-default-value>')` |
||| <a name="functions"></a>
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
This breaking change comes from the June release of `azureml-inference-server-ht
+ AutoML training now supports numpy version 1.19 + Fix AutoML reset index logic for ensemble models in automl_setup_model_explanations API + In AutoML, use lightgbm surrogate model instead of linear surrogate model for sparse case after latest lightgbm version upgrade
- + All internal intermediate artifacts that are produced by AutoML are now stored transparently on the parent run (instead of being sent to the default workspace blob store). Users should be able to see the artifacts that AutoML generates under the 'outputs/` directory on the parent run.
+ + All internal intermediate artifacts that are produced by AutoML are now stored transparently on the parent run (instead of being sent to the default workspace blob store). Users should be able to see the artifacts that AutoML generates under the `outputs/` directory on the parent run.
## 2022-01-24
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-deploy-model-custom-output.md
Follow the next steps to create a deployment using the previous scoring script:
Then, create the deployment with the following command: ```azurecli
- az ml batch-endpoint create -f endpoint.yml
+ DEPLOYMENT_NAME="classifier-xgboost-parquet"
+ az ml batch-deployment create -f endpoint.yml
``` # [Azure ML SDK for Python](#tab/sdk)
Follow the next steps to create a deployment using the previous scoring script:
retry_settings=BatchRetrySettings(max_retries=3, timeout=300), logging_level="info", )
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```python
ml_client.batch_deployments.begin_create_or_update(deployment) ```
For testing our endpoint, we are going to use a sample of unlabeled data located
# [Azure ML CLI](#tab/cli) ```azurecli
- JOB_NAME = $(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name')
+ JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --deployment-name $DEPLOYMENT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name')
``` > [!NOTE]
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-image-processing-batch.md
One the scoring script is created, it's time to create a batch deployment for it
Then, create the deployment with the following command: ```azurecli
- az ml batch-endpoint create -f endpoint.yml
+ DEPLOYMENT_NAME="imagenet-classifier-resnetv2"
+ az ml batch-deployment create -f deployment.yml
``` # [Azure ML SDK for Python](#tab/sdk)
One the scoring script is created, it's time to create a batch deployment for it
logging_level="info", ) ```
+
+ Then, create the deployment with the following command:
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+1. Although you can invoke a specific deployment inside of an endpoint, you'll usually want to invoke the endpoint itself and let the endpoint decide which deployment to use: the "default" deployment. Because you can change the default deployment later, you can change the model that serves requests without changing the contract with the user who invokes the endpoint. Use the following instruction to update the default deployment:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ endpoint.defaults.deployment_name = deployment.name
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
-1. At this point, our batch endpoint is ready to be used.
+1. At this point, our batch endpoint is ready to be used.
## Testing out the deployment
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-mlflow-batch.md
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
Then, create the deployment with the following command: ```bash
+ DEPLOYMENT_NAME="classifier-xgboost-mlflow"
az ml batch-endpoint create -f endpoint.yml ```
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
> [!NOTE] > `scoring_script` and `environment` auto generation only supports `pyfunc` model flavor. To use a different flavor, see [Using MLflow models with a scoring script](#using-mlflow-models-with-a-scoring-script).
-6. At this point, our batch endpoint is ready to be used.
+6. Although you can invoke a specific deployment inside of an endpoint, you'll usually want to invoke the endpoint itself and let the endpoint decide which deployment to use: the "default" deployment. Because you can change the default deployment later, you can change the model that serves requests without changing the contract with the user who invokes the endpoint. Use the following instruction to update the default deployment:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ endpoint.defaults.deployment_name = deployment.name
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+7. At this point, our batch endpoint is ready to be used.
## Testing out the deployment
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-nlp-processing-batch.md
One the scoring script is created, it's time to create a batch deployment for it
Then, create the deployment with the following command: ```bash
- az ml batch-endpoint create -f endpoint.yml
+ DEPLOYMENT_NAME="text-summarization-hfbart"
+ az ml batch-deployment create -f endpoint.yml
``` # [Azure ML SDK for Python](#tab/sdk)
One the scoring script is created, it's time to create a batch deployment for it
retry_settings=BatchRetrySettings(max_retries=3, timeout=3000), logging_level="info", )
+ ```
+
+ Then, create the deployment with the following command:
+ ```python
ml_client.batch_deployments.begin_create_or_update(deployment) ```
One the scoring script is created, it's time to create a batch deployment for it
> [!IMPORTANT] > You will notice in this deployment a high value in `timeout` in the parameter `retry_settings`. The reason for it is due to the nature of the model we are running. This is a very expensive model and inference on a single row may take up to 60 seconds. The `timeout` parameters controls how much time the Batch Deployment should wait for the scoring script to finish processing each mini-batch. Since our model runs predictions row by row, processing a long file may take time. Also notice that the number of files per batch is set to 1 (`mini_batch_size=1`). This is again related to the nature of the work we are doing. Processing one file at a time per batch is expensive enough to justify it. You will notice this being a pattern in NLP processing.
-3. At this point, our batch endpoint is ready to be used.
+3. Although you can invoke a specific deployment inside of an endpoint, you'll usually want to invoke the endpoint itself and let the endpoint decide which deployment to use: the "default" deployment. Because you can change the default deployment later, you can change the model that serves requests without changing the contract with the user who invokes the endpoint. Use the following instruction to update the default deployment:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+ az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ endpoint.defaults.deployment_name = deployment.name
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+4. At this point, our batch endpoint is ready to be used.
+ ## Considerations when deploying models that process text
machine-learning How To Use Event Grid Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-use-event-grid-batch.md
The workflow will work in the following way:
4. It will trigger the batch endpoint (default deployment) using the newly created file as input. > [!IMPORTANT]
-> The proposed Logic App will create a batch deployment job for each file that triggers the event of *blog created*. However, keep in mind that batch deployments distribute the work at the file level. Since this execution is specifying only one file, then, there will not be any parallelization happening in the deployment. Instead, you will be taking advantage of the capability of batch deployments of executing multiple scoring jobs under the same compute cluster. If you need to run jobs on folders, we recommend you to switch to [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
+> When you use a Logic App connected to Event Grid to invoke a batch deployment, a job is generated for each file that triggers a *blob created* event. However, keep in mind that batch deployments distribute the work at the file level. Because each execution specifies only one file, there's no parallelization within the deployment. Instead, you take advantage of the capability of batch deployments to execute multiple scoring jobs on the same compute cluster. If you need to run jobs on entire folders in an automatic fashion, we recommend that you switch to [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
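For reference, the call that the Logic App ends up making is equivalent to invoking the endpoint with a single-file input. The following Azure CLI sketch shows that shape; the storage account, container, and file names are placeholders, and the `--input-type` flag is an assumption to confirm against your CLI version.

```azurecli
# Sketch: invoke the batch endpoint's default deployment with one blob as input.
# Placeholder names; point the URI at the blob that raised the event.
az ml batch-endpoint invoke \
    --name <endpoint-name> \
    --resource-group <resource-group> \
    --workspace-name <workspace-name> \
    --input "https://<storage-account>.blob.core.windows.net/<container>/<file>.csv" \
    --input-type uri_file
```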
## Prerequisites
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
Once you have a model you like, you register it with the workspace. You then use
## Taxonomy
-A taxonomy of the workspace is illustrated in the following diagram:
-
-[![Workspace taxonomy](./media/concept-workspace/azure-machine-learning-taxonomy.png)](./media/concept-workspace/azure-machine-learning-taxonomy.png#lightbox)
-
-The diagram shows the following components of a workspace:
- + A workspace can contain [Azure Machine Learning compute instances](concept-compute-instance.md), cloud resources configured with the Python environment necessary to run Azure Machine Learning. + [User roles](how-to-assign-roles.md) enable you to share your workspace with other users, teams, or projects.
machine-learning How To Add Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-add-users.md
Title: Add users to your data labeling project title.suffix: Azure Machine Learning description: Add users to your data labeling project so that they can label data, but not see the rest of your workspace.---+++
machine-learning How To Create Image Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md
Title: Set up image labeling project description: Create a project to label images with the data labeling tool. Enable ML assisted labeling, or human in the loop labeling, to aid with the task.--+++
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
In this article, you can learn:
## Prerequisites * An AKS cluster is up and running in Azure.
+ * If you have not previously used cluster extensions, you need to [register the KubernetesConfiguration service provider](../aks/dapr.md#register-the-kubernetesconfiguration-service-provider).
* Or an Arc Kubernetes cluster is up and running. Follow instructions in [connect existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md). * If the cluster is an Azure RedHat OpenShift Service (ARO) cluster or OpenShift Container Platform (OCP) cluster, you must satisfy other prerequisite steps as documented in the [Reference for configuring Kuberenetes cluster](./reference-kubernetes.md#prerequisites-for-aro-or-ocp-clusters) article. * The Kubernetes cluster must have minimum of 4 vCPU cores and 8-GB memory. * Cluster running behind an outbound proxy server or firewall needs extra [network configurations](./how-to-access-azureml-behind-firewall.md#kubernetes-compute) * Install or upgrade Azure CLI to version 2.24.0 or higher. * Install or upgrade Azure CLI extension `k8s-extension` to version 1.2.3 or higher.
+
## Limitations
You can use AzureML CLI command `k8s-extension create` to deploy AzureML extensi
|`nodeSelector` | By default, the deployed kubernetes resources are randomly deployed to one or more nodes of the cluster, and daemonset resources are deployed to ALL nodes. If you want to restrict the extension deployment to specific nodes with label `key1=value1` and `key2=value2`, use `nodeSelector.key1=value1`, `nodeSelector.key2=value2` correspondingly. | Optional| Optional | Optional | |`installNvidiaDevicePlugin` | `True` or `False`, default `False`. [NVIDIA Device Plugin](https://github.com/NVIDIA/k8s-device-plugin#nvidia-device-plugin-for-kubernetes) is required for ML workloads on NVIDIA GPU hardware. By default, AzureML extension deployment won't install NVIDIA Device Plugin regardless Kubernetes cluster has GPU hardware or not. User can specify this setting to `True`, to install it, but make sure to fulfill [Prerequisites](https://github.com/NVIDIA/k8s-device-plugin#prerequisites). | Optional |Optional |Optional | |`installPromOp`|`True` or `False`, default `True`. AzureML extension needs prometheus operator to manage prometheus. Set to `False` to reuse the existing prometheus operator. For more information about reusing the existing prometheus operator, refer to [reusing the prometheus operator](./how-to-troubleshoot-kubernetes-extension.md#prometheus-operator)| Optional| Optional | Optional |
- |`installVolcano`| `True` or `False`, default `True`. AzureML extension needs volcano scheduler to schedule the job. Set to `False` to reuse existing volcano scheduler. For more information about reusing the existing vocano scheduler, refer to [reusing volcano scheduler](./how-to-troubleshoot-kubernetes-extension.md#volcano-scheduler) | Optional| N/A | Optional |
+ |`installVolcano`| `True` or `False`, default `True`. AzureML extension needs volcano scheduler to schedule the job. Set to `False` to reuse existing volcano scheduler. For more information about reusing the existing volcano scheduler, refer to [reusing volcano scheduler](./how-to-troubleshoot-kubernetes-extension.md#volcano-scheduler) | Optional| N/A | Optional |
|`installDcgmExporter` |`True` or `False`, default `False`. Dcgm-exporter can expose GPU metrics for AzureML workloads, which can be monitored in Azure portal. Set `installDcgmExporter` to `True` to install dcgm-exporter. But if you want to utilize your own dcgm-exporter, refer to [DCGM exporter](./how-to-troubleshoot-kubernetes-extension.md#dcgm-exporter) |Optional |Optional |Optional |
Update, list, show and delete an AzureML extension.
- [Step 2: Attach Kubernetes cluster to workspace](how-to-attach-kubernetes-to-workspace.md) - [Create and manage instance types](./how-to-manage-kubernetes-instance-types.md) - [AzureML inference router and connectivity requirements](./how-to-kubernetes-inference-routing-azureml-fe.md)-- [Secure AKS inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)
+- [Secure AKS inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md
Last updated 09/06/2022- # Secure an Azure Machine Learning inferencing environment with virtual networks
machine-learning How To Secure Kubernetes Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-kubernetes-online-endpoint.md
+
+ Title: Configure secure online endpoint with TLS/SSL
+description: Learn about how to use TLS/SSL to configure secure Kubernetes online endpoint
++++++ Last updated : 10/10/2022++++
+# Configure secure online endpoint with TLS/SSL
+
+This article shows you how to secure a Kubernetes online endpoint that's created through Azure Machine Learning.
+
+You use [HTTPS](https://en.wikipedia.org/wiki/HTTPS) to restrict access to online endpoints and secure the data that clients submit. HTTPS helps secure communications between a client and an online endpoint by encrypting communications between the two. Encryption uses [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security). TLS is sometimes still referred to as *Secure Sockets Layer* (SSL), which was the predecessor of TLS.
+
+> [!TIP]
+> * Specifically, Kubernetes online endpoints support TLS version 1.2 for AKS and Arc Kubernetes.
+> * TLS version 1.3 for Azure Machine Learning Kubernetes Inference is unsupported.
+
+TLS and SSL both rely on *digital certificates*, which help with encryption and identity verification. For more information on how digital certificates work, see the Wikipedia topic [Public key infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure).
+
+> [!WARNING]
+> If you don't use HTTPS for your online endpoints, data that's sent to and from the service might be visible to others on the internet.
+>
+> HTTPS also enables the client to verify the authenticity of the server that it's connecting to. This feature protects clients against [**man-in-the-middle**](https://en.wikipedia.org/wiki/Man-in-the-middle_attack) attacks.
+
+This is the general process to secure an online endpoint:
+
+1. Get a [domain name.](#get-a-domain-name)
+
+1. Get a [digital certificate.](#get-a-tlsssl-certificate)
+
+1. [Configure TLS/SSL in AzureML Extension.](#configure-tlsssl-in-azureml-extension)
+
+1. [Update your DNS with FQDN to point to the online endpoint.](#update-your-dns-with-fqdn)
+
+> [!IMPORTANT]
+> You need to purchase your own domain name and TLS/SSL certificate, and then configure them in the AzureML extension. For more detailed information, see the following sections of this article.
+
+## Get a domain name
+
+If you don't already own a domain name, purchase one from a *domain name registrar*. The process and price differ among registrars. The registrar provides tools to manage the domain name. You use these tools to map a fully qualified domain name (FQDN) (such as `www.contoso.com`) to the IP address that hosts your online endpoint.
+
+For more information on how to get the IP address of your online endpoints, see the [Update your DNS with FQDN](#update-your-dns-with-fqdn) section of this article.
+
+## Get a TLS/SSL certificate
+
+There are many ways to get a TLS/SSL certificate (digital certificate). The most common is to purchase one from a *certificate authority* (CA). Regardless of where you get the certificate, you need the following files:
+
+- A **certificate**. The certificate must contain the full certificate chain, and it must be "PEM-encoded."
+- A **key**. The key must also be PEM-encoded.
+
+> [!NOTE]
+> An SSL key in a PEM file that's protected with a passphrase isn't supported.
+
+When you request a certificate, you must provide the FQDN of the address that you plan to use for the online endpoint (for example, `www.contoso.com`). The address that's stamped into the certificate and the address that the clients use are compared to verify the identity of the online endpoint. If those addresses don't match, the client gets an error message.
+
+For more information on how to configure the FQDN-to-IP binding, see the [Update your DNS with FQDN](#update-your-dns-with-fqdn) section of this article.
+
+> [!TIP]
+> If the certificate authority can't provide the certificate and key as PEM-encoded files, you can use a utility such as [**OpenSSL**](https://www.openssl.org/) to change the format.
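+
+For example, a minimal OpenSSL sketch that extracts PEM-encoded files from a PKCS#12 (`.pfx`) bundle — the file names are placeholders, not from this article:
+
+```bash
+# Extract the certificate chain (PEM-encoded) from the PFX bundle.
+openssl pkcs12 -in mycert.pfx -nokeys -out cert.pem
+
+# Extract the private key and strip the passphrase (passphrase-protected PEM keys aren't supported).
+openssl pkcs12 -in mycert.pfx -nocerts -nodes -out key.pem
+```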
+
+> [!WARNING]
+> Use ***self-signed*** certificates only for development. Don't use them in production environments. Self-signed certificates can cause problems in your client applications. For more information, see the documentation for the network libraries that your client application uses.
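+
+If you do need a throwaway certificate for a development cluster, a hedged OpenSSL sketch (the CN and file names are placeholders):
+
+```bash
+# Create a self-signed certificate and key, valid for one year — development use only.
+openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
+  -keyout key.pem -out cert.pem -subj "/CN=www.contoso.com"
+```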
+
+## Configure TLS/SSL in AzureML Extension
+
+For a Kubernetes online endpoint that's set to use inference HTTPS for secure connections, you can enable TLS termination with deployment configuration settings when you [deploy the AzureML extension](how-to-deploy-managed-online-endpoints.md) in a Kubernetes cluster.
+
+At AzureML extension deployment time, the `allowInsecureConnections` config setting is `False` by default. To ensure a successful extension deployment, you need to specify either the `sslSecret` config setting or a combination of the `sslKeyPemFile` and `sslCertPemFile` config-protected settings. Otherwise, you can set `allowInsecureConnections=True` to support HTTP and disable TLS termination.
+
+> [!NOTE]
+> To support an HTTPS online endpoint, `allowInsecureConnections` must be set to `False`.
+
+To enable an HTTPS endpoint for real-time inference, you need to provide both PEM-encoded TLS/SSL certificate and key. There are two ways to specify the certificate and key at AzureML extension deployment time:
+1. Specify `sslSecret` config setting.
+1. Specify a combination of the `sslCertPemFile` and `sslKeyPemFile` config-protected settings.
+
+### Configure sslSecret
+
+The best practice is to save the certificate and key in a Kubernetes secret in the `azureml` namespace.
+
+To configure `sslSecret`, you need to save a Kubernetes Secret in your Kubernetes cluster in `azureml` namespace to store **cert.pem** (PEM-encoded TLS/SSL cert) and **key.pem** (PEM-encoded TLS/SSL key).
+
+Below is a sample YAML definition of a TLS/SSL secret:
+
+```yaml
+apiVersion: v1
+data:
+ cert.pem: <PEM-encoded SSL certificate>
+ key.pem: <PEM-encoded SSL key>
+kind: Secret
+metadata:
+ name: <secret name>
+ namespace: azureml
+type: Opaque
+```
+
+For more information on configuring an `sslSecret`, see the [sample YAML definition of a Kubernetes secret for TLS/SSL](reference-kubernetes.md#sample-yaml-definition-of-kubernetes-secret-for-tlsssl).
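+
+If you prefer not to hand-craft the YAML, a hedged `kubectl` sketch that creates an equivalent secret directly from the PEM files (the secret name and file paths are placeholders):
+
+```bash
+# kubectl base64-encodes the file contents and stores them under the keys cert.pem and key.pem.
+kubectl create secret generic <secret name> --namespace azureml \
+  --from-file=cert.pem=<path-to-cert-PEM> \
+  --from-file=key.pem=<path-to-key-PEM>
+```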
+
+After saving the secret in your cluster, you can specify `sslSecret` as the name of this Kubernetes secret by using the following CLI command (this command works only if you're using AKS):
+
+<!--CLI command-->
+```azurecli
+ az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config inferenceRouterServiceType=LoadBalancer sslSecret=<Kubernetes secret name> sslCname=<ssl cname> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+```
+
+### Configure sslCertPemFile and sslKeyPemFile
+
+You can specify the `sslCertPemFile` config to be the path to the TLS/SSL certificate file (PEM-encoded), and the `sslKeyPemFile` config to be the path to the TLS/SSL key file (PEM-encoded).
+
+The following example (assuming you're using AKS) demonstrates how to use the Azure CLI to specify .pem files to the AzureML extension that uses a TLS/SSL certificate that you purchased:
+
+<!--CLI command-->
+```azurecli
+ az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableInference=True inferenceRouterServiceType=LoadBalancer sslCname=<ssl cname> --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+```
+
+> [!NOTE]
+> 1. A PEM file with passphrase protection isn't supported.
+> 1. Both `sslCertPemFile` and `sslKeyPemFile` use config-protected parameters. Don't configure `sslSecret` and `sslCertPemFile`/`sslKeyPemFile` at the same time.
++
+## Update your DNS with FQDN
+
+For model deployment on a Kubernetes online endpoint with a custom certificate, you must update your DNS record to point to the IP address of the online endpoint. This IP address is provided by the AzureML inference router service (`azureml-fe`). For more information about `azureml-fe`, see [Managed AzureML inference router](how-to-kubernetes-inference-routing-azureml-fe.md).
+
+You can follow these steps to update the DNS record for your custom domain name:
+
+1. Get the online endpoint IP address from the scoring URI, which is usually in the format of `http://104.214.29.152:80/api/v1/service/<service-name>/score`. In this example, the IP address is 104.214.29.152.
+
+ <!-- where to find out your IP address-->
+   Once you've configured your custom domain name, the IP address in the scoring URI is replaced by that domain name. For Kubernetes clusters that use `LoadBalancer` as the inference router service, `azureml-fe` is exposed externally by using a cloud provider's load balancer and TLS/SSL termination, and the IP address of the Kubernetes online endpoint is the external IP of the `azureml-fe` service deployed in the cluster.
+
+   If you use AKS, you can easily get the IP address from the [Azure portal](https://portal.azure.com/#home). Go to your AKS resource page, navigate to **Services and ingresses**, and then find the **azureml-fe** service under the **azureml** namespace. You can find the IP address in the **External IP** column.
+
+   :::image type="content" source="media/how-to-secure-kubernetes-online-endpoint/get-ip-address-from-aks-ui.png" alt-text="Screenshot of getting the IP address of the azureml-fe service from the AKS page in the Azure portal.":::
+
+ In addition, you can run this Kubernetes command `kubectl describe svc azureml-fe -n azureml` in your cluster to get the IP address from the **LoadBalancer Ingress** parameter in the output.
+
+ > [!NOTE]
+   > For Kubernetes clusters that use either `nodePort` or `clusterIP` as the inference router service, you need to set up your own load-balancing solution and TLS/SSL termination for `azureml-fe`, and get the IP address of the `azureml-fe` service in cluster scope.
++
+1. Use the tools from your domain name registrar to update the DNS record for your domain name. The record maps the FQDN (for example, `www.contoso.com`) to the IP address. The record must point to the IP address of the online endpoint.
+
+ > [!TIP]
+   > Microsoft isn't responsible for updating the DNS for your custom DNS name or certificate. You must update it with your domain name registrar.
++
+1. After you update the DNS record, you can validate DNS resolution by using the `nslookup custom-domain-name` command (see the sketch at the end of this section). If the DNS record is correctly updated, the custom domain name points to the IP address of the online endpoint.
+
+ There can be a delay of minutes or hours before clients can resolve the domain name, depending on the registrar and the "time to live" (TTL) that's configured for the domain name.
+
+For more information on DNS resolution with Azure Machine Learning, see [How to use your workspace with a custom DNS server](how-to-custom-dns.md).
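+
+As a quick check of the whole mapping, a hedged sketch that reads the external IP of `azureml-fe` and then confirms the DNS record resolves to it (the domain name is a placeholder):
+
+```bash
+# External IP assigned to azureml-fe by the cloud provider's load balancer.
+kubectl get svc azureml-fe -n azureml -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
+
+# The answer should contain the same IP address once the DNS record has propagated.
+nslookup www.contoso.com
+```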
++
+## Update the TLS/SSL certificate
+
+TLS/SSL certificates expire and must be renewed. Typically this happens every year. Use the information in the following steps to update and renew your certificate for models deployed to Kubernetes (AKS and Arc Kubernetes).
+
+1. Use the documentation provided by the certificate authority to renew the certificate. This process creates new certificate files.
+
+1. Update your AzureML extension and specify the new certificate files with the `az k8s-extension update` command:
+
+ <!--Update sslSecret-->
+   If you used a Kubernetes Secret to configure TLS/SSL before, you need to first update the Kubernetes Secret with the new `cert.pem` and `key.pem` content in your Kubernetes cluster (see the sketch after these commands), and then run the extension update command to update the certificate:
+
+ ```azurecli
+ az k8s-extension update --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config inferenceRouterServiceType=LoadBalancer sslSecret=<Kubernetes secret name> sslCname=<ssl cname> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+ ```
+ <!--CLI command-->
+   If you directly configured the PEM files in the extension deployment command before, you need to run the extension update command and specify the new PEM file paths:
+
+ ```azurecli
+ az k8s-extension update --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+ ```
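+
+   For the first case, a hedged sketch of refreshing the existing Kubernetes Secret in place with the renewed PEM files (the secret name and file paths are placeholders):
+
+    ```bash
+    # Regenerate the secret manifest from the new files and apply it over the existing secret.
+    kubectl create secret generic <secret name> --namespace azureml \
+      --from-file=cert.pem=<path-to-new-cert-PEM> \
+      --from-file=key.pem=<path-to-new-key-PEM> \
+      --dry-run=client -o yaml | kubectl apply -f -
+    ```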
+
+## Disable TLS
+
+To disable TLS for a model deployed to Kubernetes, update the AzureML extension by setting `allowInsecureConnections` to `True`, removing the `sslCname` config setting, and removing the `sslSecret` or `sslCertPemFile`/`sslKeyPemFile` config settings. Run the following CLI command in your Kubernetes cluster (this example assumes you're using AKS) to perform the update:
+
+<!--CLI command-->
+```azurecli
+    az k8s-extension update --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+```
+
+> [!WARNING]
+> By default, AzureML extension deployment expects config settings for HTTPS support. HTTP support is only recommended for development or testing purposes, and it is conveniently provided through config setting `allowInsecureConnections=True`.
++
+## Next steps
+
+Learn how to:
+- [Consume a machine learning model deployed as an online endpoint](how-to-deploy-managed-online-endpoints.md#invoke-the-local-endpoint-to-score-data-by-using-your-model)
+- [How to secure Kubernetes inferencing environment](how-to-secure-kubernetes-inferencing-environment.md)
+- [How to use your workspace with a custom DNS server](how-to-custom-dns.md)
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-auto-ml.md
Expect errors such as:
* Module not found errors such as,
- `No module named 'sklearn.decomposition._truncated_svd`
+ `No module named 'sklearn.decomposition._truncated_svd'`
* Import errors such as, `ImportError: cannot import name 'RollingOriginValidator'`, * Attribute errors such as,
- `AttributeError: 'SimpleImputer' object has no attribute 'add_indicator`
+ `AttributeError: 'SimpleImputer' object has no attribute 'add_indicator'`
Resolutions depend on your `AutoML` SDK training version:
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
To use local deployment, add the `local=True` parameter to the command:
ml_client.begin_create_or_update(online_deployment, local=True) ```
-* `ml_client` and `online_deployment` are instances for `MLClient` class and `ManagedOnlineDeployment` class, respectively.
+* `ml_client` is an instance of the `MLClient` class, and `online_deployment` is an instance of either the `ManagedOnlineDeployment` class or the `KubernetesOnlineDeployment` class.
As a part of local deployment the following steps take place:
- Docker either builds a new container image or pulls an existing image from the local Docker cache. An existing image is used if there's one that matches the environment part of the specification file.
- Docker starts a new container with mounted local artifacts such as model and code files.
-For more, see [Deploy locally in Deploy and score a machine learning model with a managed online endpoint](how-to-deploy-managed-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints).
+For more, see [Deploy locally in Deploy and score a machine learning model](how-to-deploy-managed-online-endpoint-sdk-v2.md#create-local-endpoint-and-deployment).
++ ## Conda installation
Below is a list of common image build failure scenarios:
If the error message mentions `"container registry authorization failure"`, that means the container registry could not be accessed with the current credentials. This can be caused by desynchronization of a workspace resource's keys and it takes some time to automatically synchronize.
-However, you can [manually call for a synchronization of keys](https://learn.microsoft.com/cli/azure/ml/workspace#az-ml-workspace-sync-keys) which may resolve the authorization failure.
+However, you can [manually call for a synchronization of keys](/cli/azure/ml/workspace#az-ml-workspace-sync-keys) which may resolve the authorization failure.
-Container registries that are behind a virtual network may also encounter this error if set up incorrectly. You must verify that the virtual network been set up properly.
+Container registries that are behind a virtual network may also encounter this error if set up incorrectly. You must verify that the virtual network has been set up properly.
#### Generic image build failure
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-secrets-in-runs.md
Before following the steps in this article, make sure you have the following pre
```python from azure.identity import DefaultAzureCredential
- from azure.keyvault.secret import SecretClient
+ from azure.keyvault.secrets import SecretClient
credential = DefaultAzureCredential()
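# A hedged usage sketch — the vault URL and secret name below are placeholders, not from this article.
secret_client = SecretClient(vault_url="https://<vault-name>.vault.azure.net", credential=credential)
secret_value = secret_client.get_secret("<secret-name>").value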
Before following the steps in this article, make sure you have the following pre
## Next steps
-For an example of submitting a training job using the Azure Machine Learning Python SDK v2, see [Train models with the Python SDK v2](how-to-train-sdk.md).
+For an example of submitting a training job using the Azure Machine Learning Python SDK v2, see [Train models with the Python SDK v2](how-to-train-sdk.md).
machine-learning How To View Online Endpoints Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-view-online-endpoints-costs.md
Learn how to view costs for a managed online endpoint (preview). Costs for your endpoints will accrue to the associated workspace. You can see costs for a specific endpoint using tags. -- > [!IMPORTANT] > This article only applies to viewing costs for Azure Machine Learning managed online endpoints (preview). Managed online endpoints are different from other resources since they must use tags to track costs. For more information on viewing the costs of other Azure resources, see [Quickstart: Explore and analyze costs with cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md).
machine-learning Migrate To V2 Execution Hyperdrive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-hyperdrive.md
-+ Last updated 09/16/2022
machine-learning Migrate To V2 Resource Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-datastore.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
description="Datastore pointing to a blob container using SAS token.", account_name="mytestblobstore", container_name="data-container",
- credentials={
- "sas_token": "?xx=XXXX-XX-XX&xx=xxxx&xxx=xxx&xx=xxxxxxxxxxx&xx=XXXX-XX-XXXXX:XX:XXX&xx=XXXX-XX-XXXXX:XX:XXX&xxx=xxxxx&xxx=XXxXXXxxxxxXXXXXXXxXxxxXXXXXxxXXXXXxXXXXxXXXxXXxXX"
- },
+ credentials=SasTokenCredentials(
+ sas_token= "?xx=XXXX-XX-XX&xx=xxxx&xxx=xxx&xx=xxxxxxxxxxx&xx=XXXX-XX-XXXXX:XX:XXX&xx=XXXX-XX-XXXXX:XX:XXX&xxx=xxxxx&xxx=XXxXXXxxxxxXXXXXXXxXxxxXXXXXxxXXXXXxXXXXxXXXxXXxXX"
+ ),
) ml_client.create_or_update(store)
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-azure-machine-learning-architecture.md
--+++ Last updated 10/21/2021 #Customer intent: As a data scientist, I want to understand the big picture about how Azure Machine Learning works.
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-attach-compute-targets.md
Title: Train models with the Azure ML Python SDK (v1) (preview)
description: Add compute resources (compute targets) to your workspace to use for machine learning training and inference with SDK v1. --++
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-compute-cluster.md
--++ Last updated 05/02/2022
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-manage-compute-instance.md
--++ Last updated 05/02/2022
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace.md
description: Learn how to manage Azure Machine Learning workspaces in the Azure
--+++ Last updated 03/08/2022
machine-learning How To Move Data In Out Of Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-move-data-in-out-of-pipelines.md
pipeline = Pipeline(workspace=ws, steps=[step1, step2])
If you'd like to make your `OutputFileDatasetConfig` available for longer than the duration of your experiment, register it to your workspace to share and reuse across experiments. ```python
-step1_output_ds = step1_output_data.register_on_complete(name='processed_data',
- description = 'files from step1`)
+step1_output_ds = step1_output_data.register_on_complete(
+ name='processed_data',
+ description = 'files from step1'
+)
``` ## Delete `OutputFileDatasetConfig` contents when no longer needed
machine-learning Tutorial Train Deploy Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-train-deploy-notebook.md
+ Last updated 09/14/2022 #Customer intent: As a professional data scientist, I can build an image classification model with Azure Machine Learning by using Python in a Jupyter Notebook.
migrate How To Create Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-assessment.md
ms. Previously updated : 07/15/2019- Last updated : 10/21/2022+ # Create an Azure VM assessment
-This article describes how to create an Azure VM assessment for on-premises servers in you VMware, Hyper-V or physical/other cloud environment with Azure Migrate: Discovery and assessment.
+This article describes how to create an Azure VM assessment for on-premises servers in your VMware, Hyper-V or physical/other cloud environments with Azure Migrate: Discovery and assessment.
-[Azure Migrate](migrate-services-overview.md) helps you to migrate to Azure. Azure Migrate provides a centralized hub to track discovery, assessment, and migration of on-premises infrastructure, applications, and data to Azure. The hub provides Azure tools for assessment and migration, as well as third-party independent software vendor (ISV) offerings.
+[Azure Migrate](migrate-services-overview.md) helps you to migrate to Azure. Azure Migrate provides a centralized hub to track discovery, assessment, and migration of on-premises infrastructure, applications, and data to Azure. The hub provides Azure tools for assessment and migration, as well as third-party Independent Software Vendor (ISV) offerings.
## Before you start
This article describes how to create an Azure VM assessment for on-premises serv
## Azure VM Assessment overview
-There are two types of sizing criteria you can use to create an Azure VM assessment using Azure Migrate: Discovery and assessment.
+There are two types of sizing criteria that you can use to create an Azure VM assessment using Azure Migrate: Discovery and assessment.
**Assessment** | **Details** | **Data** | |
-**Performance-based** | Assessments based on collected performance data | **Recommended VM size**: Based on CPU and memory utilization data.<br/><br/> **Recommended disk type (standard or premium managed disk)**: Based on the IOPS and throughput of the on-premises disks.
-**As on-premises** | Assessments based on on-premises sizing. | **Recommended VM size**: Based on the on-premises VM size<br/><br> **Recommended disk type**: Based on the storage type setting you select for the assessment.
+**Performance-based** | Assessments based on collected performance data. | **Recommended VM size**: Based on CPU and memory utilization data.<br/><br/> **Recommended disk type (standard or premium managed disk)**: Based on the IOPS and throughput of the on-premises disks.
+**As on-premises** | Assessments based on on-premises sizing. | **Recommended VM size**: Based on the on-premises VM size.<br/><br> **Recommended disk type**: Based on the storage type setting you select for the assessment.
[Learn more](concepts-assessment-calculation.md) about assessments.
There are two types of sizing criteria you can use to create an Azure VM assessm
Run an assessment as follows:
-1. On the **Overview** page > **Windows, Linux and SQL Server**, click **Assess and migrate servers**.
+1. On the **Get started** page > **Servers, databases and web apps**, select **Discover, assess and migrate**.
- ![Location of Assess and migrate servers button](./media/tutorial-assess-vmware-azure-vm/assess.png)
+ ![Screenshot of Get started screen.](./media/tutorial-assess-vmware-azure-vm/assess.png)
-2. In **Azure Migrate: Discovery and assessment**, click **Assess** and select **Azure VM**
+2. In **Azure Migrate: Discovery and assessment**, select **Assess** and select **Azure VM**.
- ![Location of the Assess button](./media/tutorial-assess-vmware-azure-vm/assess-servers.png)
+ ![Screenshot of Assess VM selection.](./media/tutorial-assess-vmware-azure-vm/assess-servers.png)
-3. In **Assess servers** > **Assessment type**
+3. The **Create assessment** wizard appears with **Azure VM** as the **Assessment type**.
4. In **Discovery source**: - If you discovered servers using the appliance, select **Servers discovered from Azure Migrate appliance**. - If you discovered servers using an imported CSV file, select **Imported servers**.
-1. Click **Edit** to review the assessment properties.
+1. Select **Edit** to review the assessment properties.
- ![Location of the View all button to review assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-name.png)
+ ![Screenshot of View all button to review assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-name.png)
1. In **Assessment properties** > **Target Properties**: - In **Target location**, specify the Azure region to which you want to migrate. - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
- - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#azure-government)
+ - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#azure-government).
- In **Storage type**, - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput. - Alternatively, select the storage type you want to use for VM when you migrate it.
Run an assessment as follows:
- [Learn more](https://aka.ms/azurereservedinstances). 1. In **VM Size**: - In **Sizing criterion**, select if you want to base the assessment on server configuration data/metadata, or on performance-based data. If you use performance data:
- - In **Performance history**, indicate the data duration on which you want to base the assessment
+ - In **Performance history**, indicate the data duration on which you want to base the assessment.
- In **Percentile utilization**, specify the percentile value you want to use for the performance sample.
- - In **VM Series**, specify the Azure VM series you want to consider.
+ - In **VM Series**, specify the Azure VM series that you want to consider.
- If you're using performance-based assessment, Azure Migrate suggests a value for you.
- - Tweak settings as needed. For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series.
+ - Tweak the settings as needed. For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series.
- In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you use a comfort factor of two: **Component** | **Effective utilization** | **Add comfort factor (2.0)**
Run an assessment as follows:
- In **Offer**, specify the [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) if you're enrolled. The assessment estimates the cost for that offer. - In **Currency**, select the billing currency for your account. - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
- - In **VM Uptime**, specify the duration (days per month/hour per day) that VMs will run.
+ - In **VM Uptime**, specify the duration (days per month/hour per day) that the VMs will run.
- This is useful for Azure VMs that won't run continuously. - Cost estimates are based on the duration specified. - Default is 31 days per month/24 hours per day. - In **EA Subscription**, specify whether to take an Enterprise Agreement (EA) subscription discount into account for cost estimation. - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license. If you do and they're covered with active Software Assurance of Windows Server Subscriptions, you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
-1. Click **Save** if you make changes.
+1. Select **Save** if you make changes.
- ![Assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
+ ![Screenshot of Assessment properties.](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
-1. In **Assess Servers** > click **Next**.
+1. In **Assess Servers**, select **Next**.
1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment. 1. In **Select or create a group** > select **Create New** and specify a group name.
- ![Add VMs to a group](./media/tutorial-assess-vmware-azure-vm/assess-group.png)
-
+ ![Screenshot of adding VMs to a group.](./media/tutorial-assess-vmware-azure-vm/assess-group.png)
-1. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
+1. Select the appliance, and select the VMs you want to add to the group. Then select **Next**.
-1. In **Review + create assessment**, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment.
-1. After the assessment is created, view it in **Servers** > **Azure Migrate: Discovery and assessment** > **Assessments**.
+1. After the assessment is created, view it in **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** > **Assessments**.
-1. Click **Export assessment**, to download it as an Excel file.
+1. Select the name of the assessment that you want to view.
+1. Select **Export assessment** to download it as an Excel file.
> [!NOTE] > For performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. Ideally, after you start discovery, wait for the performance duration you specify (day/week/month) for a high-confidence rating.
An Azure VM assessment describes:
### View an Azure VM assessment
-1. In **Windows, Linux and SQL Server** > **Azure Migrate: Discovery and assessment**, click the number next to **Azure VM assessment**.
+1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to **Azure VM**.
2. In **Assessments**, select an assessment to open it. As an example (estimations and costs for example only):
- ![Assessment summary](./media/how-to-create-assessment/assessment-summary.png)
+ ![Screenshot of an Assessment summary.](./media/how-to-create-assessment/assessment-summary.png)
### Review Azure readiness 1. In **Azure readiness**, verify whether servers are ready for migration to Azure. 2. Review the server status:
- - **Ready for Azure**: Azure Migrate recommends a VM size and cost estimates for VMs in the assessment.
+ - **Ready**: Azure Migrate recommends a VM size and cost estimates for VMs in the assessment.
- **Ready with conditions**: Shows issues and suggested remediation.
- - **Not ready for Azure**: Shows issues and suggested remediation.
+ - **Not ready**: Shows issues and suggested remediation.
- **Readiness unknown**: Used when Azure Migrate can't assess readiness, due to data availability issues.
-3. Click on an **Azure readiness** status. You can view server readiness details, and drill down to see server details, including compute, storage, and network settings.
--
+3. Select an **Azure readiness** status. You can view server readiness details and drill down to see server details, including compute, storage, and network settings.
### Review cost details
This view shows the estimated compute and storage cost of running VMs in Azure.
- Cost estimates are based on the size recommendations for a server, and its disks and properties. - Estimated monthly costs for compute and storage are shown. - The cost estimation is for running the on-premises servers as IaaS VMs. Azure VM assessment doesn't consider PaaS or SaaS costs.- 2. You can review monthly storage cost estimates. This view shows aggregated storage costs for the assessed group, split over different types of storage disks. 3. You can drill down to see details for specific servers.
This view shows the estimated compute and storage cost of running VMs in Azure.
When you run performance-based assessments, a confidence rating is assigned to the assessment.
-![Confidence rating](./media/how-to-create-assessment/confidence-rating.png)
+![Screenshot of Confidence rating.](./media/how-to-create-assessment/confidence-rating.png)
- A rating from 1-star (lowest) to 5-star (highest) is awarded. - The confidence rating helps you estimate the reliability of the size recommendations provided by the assessment.
Confidence ratings for an assessment are as follows.
61%-80% | 4 Star 81%-100% | 5 Star --- ## Next steps - Learn how to use [dependency mapping](how-to-create-group-machine-dependencies.md) to create high confidence groups.
mysql 06 Test Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/mysql-on-premises-azure-db/06-test-plans.md
WHERE TABLE_SCHEMA = '{SchemaName}';
Execute the `count(*)` SQL statement against every table to get an accurate count of rows. Running this command can take a large amount of time on large tables. The following script generates a set of SQL statements that can be executed to get the exact counts:
-```
+```sql
SELECT CONCAT( 'SELECT "', table_name,
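-- A hedged sketch of how such a statement generator can be completed (the exact original
-- continues beyond the truncated line above; identifiers come from information_schema):
--   SELECT CONCAT('SELECT ''', table_name, ''' AS table_name, COUNT(*) AS exact_row_count FROM `',
--                 table_schema, '`.`', table_name, '`;') AS count_statement
--   FROM information_schema.tables
--   WHERE table_schema = '{SchemaName}';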
openshift Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/troubleshoot.md
Currently, the `RedHatOpenShift/OpenShiftClusters` resource that's automatically
## Creating a cluster results in error that no registered resource provider found
-If creating a cluster results in an error that `No registered resource provider found for location '<location>' and API version '2019-04-30' for type 'openShiftManagedClusters'. The supported api-versions are '2018-09-30-preview`, then you were part of the preview and now need to [purchase Azure virtual machine reserved instances](https://aka.ms/openshift/buy) to use the generally available product. A reservation reduces your spend by pre-paying for fully managed Azure services. Refer to [*What are Azure Reservations*](../cost-management-billing/reservations/save-compute-costs-reservations.md) to learn more about reservations and how they save you money.
+If creating a cluster results in an error that `No registered resource provider found for location '<location>' and API version '2019-04-30' for type 'openShiftManagedClusters'. The supported api-versions are '2018-09-30-preview'.`, then you were part of the preview and now need to [purchase Azure virtual machine reserved instances](https://aka.ms/openshift/buy) to use the generally available product. A reservation reduces your spend by pre-paying for fully managed Azure services. For more information about reservations and how they save you money, see [What are Azure Reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md)
## Next steps
purview Register Scan Cassandra Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-cassandra-source.md
Last updated 05/04/2022
-# Connect to and manage Cassandra in Microsoft Purview (Preview)
+# Connect to and manage Cassandra in Microsoft Purview
This article outlines how to register Cassandra, and how to authenticate and interact with Cassandra in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md).
purview Register Scan Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-db2.md
Previously updated : 05/04/2022 Last updated : 10/21/2022
-# Connect to and manage Db2 in Microsoft Purview (Preview)
+# Connect to and manage Db2 in Microsoft Purview
This article outlines how to register Db2, and how to authenticate and interact with Db2 in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md). - ## Supported capabilities |**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
purview Register Scan Erwin Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-erwin-source.md
Previously updated : 05/04/2022 Last updated : 10/21/2022
-# Connect to and manage erwin Mart servers in Microsoft Purview (Preview)
+# Connect to and manage erwin Mart servers in Microsoft Purview
This article outlines how to register erwin Mart servers, and how to authenticate and interact with erwin Mart Servers in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md). - ## Supported capabilities |**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
purview Register Scan Google Bigquery Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-google-bigquery-source.md
Previously updated : 05/04/2022 Last updated : 10/21/2022
-# Connect to and manage Google BigQuery projects in Microsoft Purview (Preview)
+# Connect to and manage Google BigQuery projects in Microsoft Purview
This article outlines how to register Google BigQuery projects, and how to authenticate and interact with Google BigQuery in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md). - ## Supported capabilities |**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
purview Register Scan Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mongodb.md
Previously updated : 10/15/2022 Last updated : 10/21/2022
-# Connect to and manage MongoDB in Microsoft Purview (Preview)
+# Connect to and manage MongoDB in Microsoft Purview
This article outlines how to register MongoDB, and how to authenticate and interact with MongoDB in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md). - ## Supported capabilities |**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
purview Register Scan Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mysql.md
Previously updated : 05/04/2022 Last updated : 10/21/2022
-# Connect to and manage MySQL in Microsoft Purview (Preview)
+# Connect to and manage MySQL in Microsoft Purview
This article outlines how to register MySQL, and how to authenticate and interact with MySQL in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md). - ## Supported capabilities |**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
purview Register Scan Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-postgresql.md
Previously updated : 05/04/2022 Last updated : 10/21/2022
-# Connect to and manage PostgreSQL in Microsoft Purview (Preview)
+# Connect to and manage PostgreSQL in Microsoft Purview
This article outlines how to register PostgreSQL, and how to authenticate and interact with PostgreSQL in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md). - ## Supported capabilities |**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-salesforce.md
Previously updated : 05/04/2022 Last updated : 10/21/2022
-# Connect to and manage Salesforce in Microsoft Purview (Preview)
+# Connect to and manage Salesforce in Microsoft Purview
This article outlines how to register Salesforce, and how to authenticate and interact with Salesforce in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md). - ## Supported capabilities |**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
purview Register Scan Sap Bw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-bw.md
Previously updated : 08/03/2022 Last updated : 10/21/2022
-# Connect to and manage SAP Business Warehouse in Microsoft Purview (Preview)
+# Connect to and manage SAP Business Warehouse in Microsoft Purview
This article outlines how to register SAP Business Warehouse (BW), and how to authenticate and interact with SAP BW in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md). - ## Supported capabilities |**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
purview Register Scan Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-hana.md
Previously updated : 05/04/2022 Last updated : 10/21/2022
-# Connect to and manage SAP HANA in Microsoft Purview (Preview)
+# Connect to and manage SAP HANA in Microsoft Purview
This article outlines how to register SAP HANA, and how to authenticate and interact with SAP HANA in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md). - ## Supported capabilities |**Metadata extraction**| **Full scan** |**Incremental scan**|**Scoped scan**|**Classification**|**Access policy**|**Lineage**| **Data Sharing**|
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-snowflake.md
Previously updated : 07/11/2022 Last updated : 10/21/2022
-# Connect to and manage Snowflake in Microsoft Purview (Preview)
+# Connect to and manage Snowflake in Microsoft Purview
This article outlines how to register Snowflake, and how to authenticate and interact with Snowflake in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md). - ## Supported capabilities |**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
search Search Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md
Many customers start with the free service. The free tier is limited to three in
Check the service overview page to find out how many indexes, indexers, and data sources you already have. ## Create and load an index
An indexer is a source-specific crawler that can read metadata and content from
1. In the wizard, select **Connect to your data** > **Samples** > **hotels-sample**. This data source is built-in. If you were creating your own data source, you would need to specify a name, type, and connection information. Once created, it becomes an "existing data source" that can be reused in other import operations.
- :::image type="content" source="media/search-get-started-portal/import-datasource-sample.png" alt-text="Screenshot of the select sample dataset page in the wizard.":::
+ :::image type="content" source="media/search-get-started-portal/import-datasource-sample.png" alt-text="Screenshot of the select sample dataset page in the wizard." border="true":::
1. Continue to the next page.
The wizard supports the creation of an [AI enrichment pipeline](cognitive-search
We'll skip this step for now, and move directly on to **Customize target index**.
- :::image type="content" source="media/search-get-started-portal/skip-cog-skill-step.png" alt-text="Screenshot of the Skip cognitive skill button in the wizard.":::
+ :::image type="content" source="media/search-get-started-portal/skip-cog-skill-step.png" alt-text="Screenshot of the Skip cognitive skill button in the wizard." border="true":::
> [!TIP] > You can step through an AI-indexing example in a [quickstart](cognitive-search-quickstart-blob.md) or [tutorial](cognitive-search-tutorial-blob.md).
We'll skip this step for now, and move directly on to **Customize target index**
For the built-in hotels sample index, a default index schema is defined for you. Except for a few advanced filter examples, queries in the documentation and samples that target the hotel-samples index will run on this index definition: Typically, in a code-based exercise, index creation is completed prior to loading data. The Import data wizard condenses these steps by generating a basic index for any data source it can crawl. Minimally, an index requires a name and a fields collection; one of the fields should be marked as the document key to uniquely identify each document. Additionally, you can specify language analyzers or suggesters if you want autocomplete or suggested queries.
-Fields have data types and attributes. The check boxes across the top are *index attributes* controlling how the field is used.
+Fields have a data type and attributes. The check boxes across the top are *attributes* controlling how the field is used.
+ **Retrievable** means that it shows up in search results list. You can mark individual fields as off limits for search results by clearing this checkbox, for example for fields used only in filter expressions.
-+ **Key** is the unique document identifier. It's always a string, and it's required.
++ **Key** is the unique document identifier. It's always a string, and it's required. Only one field can be the key.
+ **Filterable**, **Sortable**, and **Facetable** determine whether fields are used in a filter, sort, or faceted navigation structure.
+ **Searchable** means that a field is included in full text search. Strings are searchable. Numeric fields and Boolean fields are often marked as not searchable.
-Storage requirements don't vary as a result of your selection. For example, if you set the **Retrievable** attribute on multiple fields, storage requirements don't go up.
+[Storage requirements](search-what-is-an-index.md#example-demonstrating-the-storage-implications-of-attributes-and-suggesters) can vary as a result of attribute selection. For example, **filterable** requires more storage, but **Retrievable** doesn't.
By default, the wizard scans the data source for unique identifiers as the basis for the key field. *Strings* are attributed as **Retrievable** and **Searchable**. *Integers* are attributed as **Retrievable**, **Filterable**, **Sortable**, and **Facetable**.
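To make the attribute discussion concrete, here's a hedged sketch of what a minimal index definition with these attributes looks like in JSON (the field names are illustrative, not the full hotels-sample schema):

```json
{
  "name": "hotels-sample-index",
  "fields": [
    { "name": "HotelId", "type": "Edm.String", "key": true, "retrievable": true, "filterable": true },
    { "name": "HotelName", "type": "Edm.String", "searchable": true, "retrievable": true },
    { "name": "Rating", "type": "Edm.Double", "retrievable": true, "filterable": true, "sortable": true, "facetable": true }
  ]
}
```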
This object defines an executable process. You could put it on a recurring schedule.
Select **Submit** to create and simultaneously run the indexer.
- :::image type="content" source="media/search-get-started-portal/hotels-indexer.png" alt-text="Screenshot of the hotels indexer definition in the wizard.":::
+ :::image type="content" source="media/search-get-started-portal/hotels-indexer.png" alt-text="Screenshot of the hotels indexer definition in the wizard." border="true":::
## Monitor progress
The wizard should take you to the Indexers list where you can monitor progress.
It can take a few minutes for the portal to update the page, but you should see the newly created indexer in the list, with status indicating "in progress" or success, along with the number of documents indexed.
- :::image type="content" source="media/search-get-started-portal/indexers-inprogress.png" alt-text="Screenshot of the indexer progress message in the wizard.":::
+ :::image type="content" source="media/search-get-started-portal/indexers-inprogress.png" alt-text="Screenshot of the indexer progress message in the wizard." border="true":::
## Check results
The service overview page provides links to the resources created in your Azure
Wait for the portal page to refresh. After a few minutes, you should see the index with a document count and storage size.
- :::image type="content" source="media/search-get-started-portal/indexes-list.png" alt-text="Screenshot of the Indexes list on the service dashboard.":::
+ :::image type="content" source="media/search-get-started-portal/indexes-list.png" alt-text="Screenshot of the Indexes list on the service dashboard." border="true":::
From this list, you can select the *hotels-sample* index that you just created, view the index schema, and optionally add new fields.
The **Fields** tab shows the index schema. If you're writing queries and need to
Scroll to the bottom of the list to enter a new field. While you can always create a new field, in most cases, you can't change existing fields. Existing fields have a physical representation in your search service and are thus non-modifiable, not even in code. To fundamentally change an existing field, create a new index, dropping the original.
- :::image type="content" source="media/search-get-started-portal/sample-index-def.png" alt-text="Screenshot of the sample index definition in Azure portal.":::
+ :::image type="content" source="media/search-get-started-portal/sample-index-def.png" alt-text="Screenshot of the sample index definition in Azure portal." border="true":::
Other constructs, such as scoring profiles and CORS options, can be added at any time.
You now have a search index that can be queried using [**Search explorer**](sear
1. Select **Search explorer** on the command bar.
- :::image type="content" source="medi.png" alt-text="Screenshot of the Search Explorer command on the command bar.":::
+ :::image type="content" source="medi.png" alt-text="Screenshot of the Search Explorer command on the command bar." border="true":::
1. From **Index**, choose "hotels-sample-index".
- :::image type="content" source="media/search-get-started-portal/search-explorer-changeindex.png" alt-text="Screenshot of the Index and API selection lists in Search Explorer.":::
+ :::image type="content" source="media/search-get-started-portal/search-explorer-changeindex.png" alt-text="Screenshot of the Index and API selection lists in Search Explorer." border="true":::
1. In the search bar, paste in a query string from the examples below and select **Search**.
- :::image type="content" source="media/search-get-started-portal/search-explorer-query-string-example.png" alt-text="Screenshot of the query string text field and search button in Search Explorer.":::
+ :::image type="content" source="media/search-get-started-portal/search-explorer-query-string-example.png" alt-text="Screenshot of the query string text field and search button in Search Explorer." border="true":::
## Run more example queries
All of the queries in this section are designed for **Search Explorer** and the
This tutorial provided a quick introduction to Azure Cognitive Search using the Azure portal.
-You learned how to create a search index using the **Import data** wizard. You created your first [indexer](search-indexer-overview.md) and learned the basic workflow for index design.
+You learned how to create a search index using the **Import data** wizard. You created your first [indexer](search-indexer-overview.md) and learned the basic workflow for index design. See [Import data wizard in Azure Cognitive Search](search-import-data-portal.md) for more information about the wizard's benefits and limitations.
Using the **Search explorer** in the Azure portal, you learned some basic query syntax through hands-on examples that demonstrated key capabilities such as filters, hit highlighting, fuzzy search, and geospatial search.
search Search How To Load Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-load-search-index.md
Previously updated : 01/11/2022 Last updated : 10/21/2022 # Load data into a search index in Azure Cognitive Search
This article explains how to import, refresh, and manage content in a predefined
A search service imports and indexes text in JSON, used in full text search or knowledge mining scenarios. Text content is obtainable from alphanumeric fields in the external data source, metadata that's useful in search scenarios, or enriched content created by a [skillset](cognitive-search-working-with-skillsets.md) (skills can extract or infer textual descriptions from images and unstructured content).
-Once data is indexed, the physical data structures of the index are locked in. For guidance on what can and cannot be changed, see [Drop and rebuild an index](search-howto-reindex.md).
+Once data is indexed, the physical data structures of the index are locked in. For guidance on what can and can't be changed, see [Drop and rebuild an index](search-howto-reindex.md).
-Indexing is not a background process. A search service will balance indexing and query workloads, but if [query latency is too high](search-performance-analysis.md#impact-of-indexing-on-queries), you can either [add capacity](search-capacity-planning.md#add-or-reduce-replicas-and-partitions) or identify periods of low query activity for loading an index.
+Indexing isn't a background process. A search service will balance indexing and query workloads, but if [query latency is too high](search-performance-analysis.md#impact-of-indexing-on-queries), you can either [add capacity](search-capacity-planning.md#add-or-reduce-replicas-and-partitions) or identify periods of low query activity for loading an index.
## Load documents
You can prepare these documents yourself, but if content resides in a [supported
### [**Azure portal**](#tab/portal)
-Using Azure portal, the sole means for loading an index is the [Import Data wizard](search-import-data-portal.md). The wizard creates objects. If you want to load an existing index, you will need to use an alternative approach.
+Using the Azure portal, the only ways to load an index are to run an indexer or the [Import Data wizard](search-import-data-portal.md). Because the wizard creates its own objects, you'll need to use an alternative approach if you want to load an existing index.
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
-1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) and on the Overview page, click **Import data** on the command bar to create and populate a search index.
+1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) and on the Overview page, select **Import data** on the command bar to create and populate a search index. You can follow this link to review the workflow: [Quickstart: Create an Azure Cognitive Search index in the Azure portal](search-get-started-portal.md).
:::image type="content" source="medi.png" alt-text="Screenshot of the Import data command" border="true":::
-1. Follow this link to review the workflow: [Quickstart: Create an Azure Cognitive Search index in the Azure portal](search-get-started-portal.md).
+1. Alternatively, you can [reset and run an indexer](search-howto-run-reset-indexers.md), which is useful if you're adding fields incrementally. Reset forces the indexer to start over, picking up all fields from all source documents. The sketch below shows the same sequence done programmatically.
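Outside the portal, the same reset-and-run sequence can be scripted. The following is a minimal sketch using the `Azure.Search.Documents` client library; the endpoint, admin API key, and indexer name are placeholders, not values from your service.

```csharp
using System;
using Azure;
using Azure.Search.Documents.Indexes;

// Placeholder endpoint, admin API key, and indexer name -- replace with your own values.
var indexerClient = new SearchIndexerClient(
    new Uri("https://<service-name>.search.windows.net"),
    new AzureKeyCredential("<admin-api-key>"));

// Reset clears the indexer's change-tracking state so that the next run
// re-processes all source documents and picks up newly added fields.
await indexerClient.ResetIndexerAsync("<indexer-name>");
await indexerClient.RunIndexerAsync("<indexer-name>");

Console.WriteLine("Indexer reset and rerun requested.");
```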
### [**REST**](#tab/import-rest)
Using Azure portal, the sole means for loading an index is the [Import Data wiza
GET https://[service name].search.windows.net/indexes/hotel-sample-index/docs/1111?api-version=2020-06-30 ```
-When the document key or ID is new, **null** becomes the value for any field that is unspecified in the document. For actions on an existing document, updated values replace the previous values. Any fields that were not specified in a "merge" or "mergeUpload" are left intact in the search index.
+When the document key or ID is new, **null** becomes the value for any field that is unspecified in the document. For actions on an existing document, updated values replace the previous values. Any fields that weren't specified in a "merge" or "mergeUpload" are left intact in the search index.
### [**.NET SDK (C#)**](#tab/importcsharp)
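For illustration, here's a minimal sketch of the same merge semantics using the `Azure.Search.Documents` client library. The endpoint and API key are placeholders, and `HotelId` is assumed to be the key field of the hotel-sample-index; only the fields supplied in the merge are updated, while all other fields keep their existing values.

```csharp
using System;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

// Placeholder endpoint and admin API key -- replace with values from your service.
var searchClient = new SearchClient(
    new Uri("https://<service-name>.search.windows.net"),
    "hotel-sample-index",
    new AzureKeyCredential("<admin-api-key>"));

// "mergeOrUpload" updates only the fields included here; any field not listed
// keeps its current value in the index, matching the REST behavior described above.
var batch = IndexDocumentsBatch.MergeOrUpload(new[]
{
    new SearchDocument
    {
        ["HotelId"] = "1111",                        // assumed key field
        ["Description"] = "Updated description only"
    }
});

await searchClient.IndexDocumentsAsync(batch);
Console.WriteLine("Merged document 1111.");
```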
search Search Import Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-import-data-portal.md
Previously updated : 08/24/2022 Last updated : 10/21/2022 # Import data wizard in Azure Cognitive Search
The wizard is organized into four main steps:
1. Run the wizard to create objects, load data, set a schedule and other configuration options.
+The workflow is a pipeline, so it's one-way. You can't use the wizard to edit any of the objects it created, but you can use other portal tools, such as the index or indexer designer or the JSON editors, for allowed updates.
+ <a name="data-source-inputs"></a> ### Data source configuration in the wizard
The wizard samples your data source to detect the fields and field type. Dependi
Because sampling is an imprecise exercise, review the index for the following considerations:
-1. Is the field list accurate? If your data source contains fields that were not picked up in sampling, you can manually add any new fields that sampling missed, and remove any that don't add value to a search experience or that won't be used in a [filter expression](search-query-odata-filter.md) or [scoring profile](index-add-scoring-profiles.md).
+1. Is the field list accurate? If your data source contains fields that weren't picked up in sampling, you can manually add any new fields that sampling missed, and remove any that don't add value to a search experience or that won't be used in a [filter expression](search-query-odata-filter.md) or [scoring profile](index-add-scoring-profiles.md).
1. Is the data type appropriate for the incoming data? Azure Cognitive Search supports the [entity data model (EDM) data types](/rest/api/searchservice/supported-data-types). For Azure SQL data, there's a [mapping chart](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#TypeMapping) that lays out equivalent values. For more background, see [Field mappings and transformations](search-indexer-field-mappings.md).
Because sampling is an imprecise exercise, review the index for the following co
1. Set attributes to determine how that field is used in an index.
- Take your time with this step because attributes determine the physical expression of fields in the index. If you want to change attributes later, even programmatically, you will almost always need to drop and rebuild the index. Core attributes like **Searchable** and **Retrievable** have a [negligible impact on storage](search-what-is-an-index.md#index-size). Enabling filters and using suggesters increase storage requirements.
+ Take your time with this step because attributes determine the physical expression of fields in the index. If you want to change attributes later, even programmatically, you'll almost always need to drop and rebuild the index. Core attributes like **Searchable** and **Retrievable** have a [negligible impact on storage](search-what-is-an-index.md#index-size). Enabling filters and using suggesters increase storage requirements.
+ **Searchable** enables full-text search. Every field used in free-form queries or in query expressions must have this attribute. Inverted indexes are created for each field that you mark as **Searchable**.
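To see how these attributes look outside the wizard, here's a minimal sketch that creates an index with comparable attribute settings through the `Azure.Search.Documents` client library. The index name, field names, endpoint, and key are placeholders rather than values produced by the wizard.

```csharp
using System;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

// Placeholder endpoint and admin API key -- replace with values from your service.
var indexClient = new SearchIndexClient(
    new Uri("https://<service-name>.search.windows.net"),
    new AzureKeyCredential("<admin-api-key>"));

// Attributes are fixed when the index is created. Changing IsFilterable or
// IsSortable later generally means dropping and rebuilding the index.
var index = new SearchIndex("hotels-example")
{
    Fields =
    {
        new SimpleField("hotelId", SearchFieldDataType.String) { IsKey = true },
        new SearchableField("description"),    // Searchable: an inverted index is built for this field
        new SimpleField("rating", SearchFieldDataType.Int32)
        {
            IsFilterable = true,               // usable in $filter expressions; adds to storage
            IsSortable = true
        }
    }
};

await indexClient.CreateIndexAsync(index);
Console.WriteLine($"Created index '{index.Name}'.");
```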
search Search Normalizers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-normalizers.md
Custom normalizers are [defined within the index schema](/rest/api/searchservice
"char_filter_name_2" ], "tokenFilters":[
- "token_filter_name_1
+ "token_filter_name_1"
] } ],
Custom normalizers are [defined within the index schema](/rest/api/searchservice
{ "name":"char_filter_name_1", "@odata.type":"#char_filter_type",
- "option1":value1,
- "option2":value2,
+ "option1": "value1",
+ "option2": "value2",
... } ],
Custom normalizers are [defined within the index schema](/rest/api/searchservice
{ "name":"token_filter_name_1", "@odata.type":"#token_filter_type",
- "option1":value1,
- "option2":value2,
+ "option1": "value1",
+ "option2": "value2",
... } ]
The example below illustrates a custom normalizer definition with corresponding
"charFilters":[ "map_dash", "remove_whitespace"
- ],,
+ ],
"tokenFilters":[ "my_asciifolding", "elision",
search Search Query Odata Syntax Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-syntax-reference.md
sign ::= '+' | '-'
/* In practice integer literals are limited in length to the precision of the corresponding EDM data type. */
-integer_literal ::= digit+
+integer_literal ::= sign? digit+
float_literal ::= sign? whole_part fractional_part? exponent?
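As a quick illustration of the signed `integer_literal`, the following minimal sketch issues a filter that compares against a negative integer through the `Azure.Search.Documents` client library. The endpoint, key, index name, and `temperature` field are hypothetical placeholders.

```csharp
using System;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

// Placeholder endpoint, query API key, and index name.
var searchClient = new SearchClient(
    new Uri("https://<service-name>.search.windows.net"),
    "<index-name>",
    new AzureKeyCredential("<query-api-key>"));

// "-10" is a signed integer literal, as allowed by the grammar above.
// "temperature" is a hypothetical filterable Int32 field.
var options = new SearchOptions
{
    Filter = "temperature lt -10"
};

SearchResults<SearchDocument> results = await searchClient.SearchAsync<SearchDocument>("*", options);

await foreach (SearchResult<SearchDocument> result in results.GetResultsAsync())
{
    Console.WriteLine(result.Document["temperature"]);
}
```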
security Azure CA Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md
Previously updated : 08/24/2022 Last updated : 10/21/2022
security Tls Certificate Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/tls-certificate-changes.md
tags: azure-resource-manager
Previously updated : 04/28/2022 Last updated : 10/21/2022
Here are some ways to detect if your application was impacted:
- **Android**: Check the documentation for your device and version of Android. - **Other hardware devices, especially IoT**: Contact the device manufacturer. -- If you have an environment where firewall rules are set to allow outbound calls to only specific Certificate Revocation List (CRL) download and/or Online Certificate Status Protocol (OCSP) verification locations, you'll need to allow the following CRL and OCSP URLs:
+- If you have an environment where firewall rules are set to allow outbound calls to only specific Certificate Revocation List (CRL) download and/or Online Certificate Status Protocol (OCSP) verification locations, you'll need to allow the following CRL and OCSP URLs. For a complete list of CRL and OCSP URLs used in Azure, see the [Azure CA details article](azure-CA-details.md#certificate-downloads-and-revocation-lists).
- http://crl3&#46;digicert&#46;com - http://crl4&#46;digicert&#46;com - http://ocsp&#46;digicert&#46;com
- - http://www&#46;d-trust&#46;net
- - http://root-c3-ca2-2009&#46;ocsp&#46;d-trust&#46;net
- http://crl&#46;microsoft&#46;com - http://oneocsp&#46;microsoft&#46;com - http://ocsp&#46;msocsp&#46;com
- - http://www&#46;microsoft&#46;com/pkiops
## Next steps
sentinel Entity Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/entity-pages.md
Microsoft Sentinel currently offers the following entity pages:
> The **IP address entity page** (now in preview) contains **geolocation data** supplied by the **Microsoft Threat Intelligence service**. This service combines geolocation data from Microsoft solutions and third-party vendors and partners. The data is then available for analysis and investigation in the context of a security incident. For more information, see also [Enrich entities in Microsoft Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md). - Azure resource (**Preview**)
+- IoT device (**Preview**)
## Next steps
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
- [Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)](#microsoft-365-defender-now-integrates-azure-active-directory-identity-protection-aadip) - [Out of the box anomaly detection on the SAP audit log (Preview)](#out-of-the-box-anomaly-detection-on-the-sap-audit-log-preview)
+- [IoT device entity page (Preview)](#iot-device-entity-page-preview)
### Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)
-As of **October 24, 2022**, [Microsoft 365 Defender](/microsoft-365/security/defender/) will be integrating [Azure Active Directory Identity Protection (AADIP)](../active-directory/identity-protection/index.yml) alerts and incidents. Customers can choose between two levels of integration:
+As of **October 24, 2022**, [Microsoft 365 Defender](/microsoft-365/security/defender/) will be integrating [Azure Active Directory Identity Protection (AADIP)](../active-directory/identity-protection/index.yml) alerts and incidents. Customers can choose between three levels of integration:
-- **Selective alerts** (default) includes only alerts chosen by Microsoft security researchers, mostly of Medium and High severities.-- **All alerts** includes all AADIP alerts of any severity.-
-This integration can't be disabled.
+- **Show high-impact alerts only (Default)** includes only alerts about known malicious or highly suspicious activities that might require attention. These alerts are chosen by Microsoft security researchers and are mostly of Medium and High severities.
+- **Show all alerts** includes all AADIP alerts, including activity that might not be unwanted or malicious.
+- **Turn off all alerts** disables any AADIP alerts from appearing in your Microsoft 365 Defender incidents.
Microsoft Sentinel customers (who are also AADIP subscribers) with [Microsoft 365 Defender integration](microsoft-365-defender-sentinel-integration.md) enabled will automatically start receiving AADIP alerts and incidents in their Microsoft Sentinel incidents queue. Depending on your configuration, this may affect you as follows:
Microsoft Sentinel customers (who are also AADIP subscribers) with [Microsoft 36
| Preference | Action in Microsoft 365 Defender | Action in Microsoft Sentinel | | - | - | - |
- | **1** | Keep the default AADIP integration of **Selective alerts**. | Disable any [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
- | **2** | Choose the **All alerts** AADIP integration. | Create automation rules to automatically close incidents with unwanted alerts.<br><br>Disable any [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
- | **3** | Don't use Microsoft 365 Defender for AADIP alerts:<br>Choose either option for AADIP integration. | Create automation rules to close all incidents where <br>- the *incident provider* is `Microsoft 365 Defender` and <br>- the *alert provider* is `Azure Active Directory Identity Protection`. <br><br>Leave enabled those [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
+ | **1** | Keep the default AADIP integration of **Show high-impact alerts only**. | Disable any [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
+ | **2** | Choose the **Show all alerts** AADIP integration. | Create automation rules to automatically close incidents with unwanted alerts.<br><br>Disable any [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
+ | **3** | Don't use Microsoft 365 Defender for AADIP alerts:<br>Choose the **Turn off all alerts** option for AADIP integration. | Leave enabled those [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
- If you don't have your [AADIP connector](data-connectors-reference.md#azure-active-directory-identity-protection) enabled, you must enable it. Be sure **not** to enable incident creation on the connector page. If you don't enable the connector, you may receive AADIP incidents without any data in them.
Learn more:
- [Learn about the new feature (blog)](https://aka.ms/Sentinel4sapDynamicAnomalyAuditRuleBlog) - [Use the new rule for anomaly detection](sap/configure-audit-log-rules.md#anomaly-detection)
+### IoT device entity page (Preview)
+
+OT/IoT devices, including Programmable Logic Controllers (PLCs), Human-Machine Interfaces (HMIs), engineering workstations, network devices, and more, are becoming increasingly prevalent in organizations. Often, these devices are used as entry points for attacks, but they can also be used by attackers to move laterally.
+For SOCs, monitoring IoT/OT networks presents several challenges: security teams lack visibility into their OT networks, SOC analysts lack experience in managing OT incidents, and OT teams and SOC teams often don't communicate with each other.
+
+The new [IoT device entity page](entity-pages.md) is designed to help the SOC investigate incidents that involve IoT/OT devices in their environment, by bringing the full OT/IoT context from Microsoft Defender for IoT into Microsoft Sentinel. This helps SOC teams detect and respond more quickly to the entire attack timeline, across all domains.
+
+Learn more about [investigating IoT device entities in Microsoft Sentinel](iot-advanced-threat-monitoring.md).
+ ## September 2022 - [Create automation rule conditions based on custom details (Preview)](#create-automation-rule-conditions-based-on-custom-details-preview)
service-bus-messaging Service Bus Dotnet Get Started With Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues.md
In this quickstart, you will do the following steps:
1. Create a Service Bus namespace, using the Azure portal. 2. Create a Service Bus queue, using the Azure portal.
-3. Write a .NET Core console application to send a set of messages to the queue.
-4. Write a .NET Core console application to receive those messages from the queue.
+3. Write a .NET console application to send a set of messages to the queue.
+4. Write a .NET console application to receive those messages from the queue.
> [!NOTE] > This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For an overview of the .NET client library, see [Azure Service Bus client library for .NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/README.md). For more samples, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
In this quickstart, you will do the following steps:
If you're new to the service, see [Service Bus overview](service-bus-messaging-overview.md) before you do this quickstart. -- **Azure subscription**. To use Azure services, including Azure Service Bus, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/).-- **Microsoft Visual Studio 2019**. The Azure Service Bus client library makes use of new features that were introduced in C# 8.0. You can still use the library with previous C# language versions, but the new syntax won't be available. To make use of the full syntax, we recommend that you compile with the .NET Core SDK 3.0 or higher and language version set to `latest`. If you're using Visual Studio, versions before Visual Studio 2019 aren't compatible with the tools needed to build C# 8.0 projects.
+- **Azure subscription**. To use Azure services, including Azure Service Bus, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/dotnet).
+- **Visual Studio 2022**. The sample application makes use of new features that were introduced in C# 10. You can still use the Service Bus client library with previous C# language versions, but the syntax may vary. To use the latest syntax, we recommend that you install .NET 6.0 or higher and set the language version to `latest`. If you're using Visual Studio, versions before Visual Studio 2022 aren't compatible with the tools needed to build C# 10 projects.
[!INCLUDE [service-bus-create-queue-portal](./includes/service-bus-create-queue-portal.md)] ## Send messages to the queue
-This section shows you how to create a .NET Core console application to send messages to a Service Bus queue.
+This section shows you how to create a .NET console application to send messages to a Service Bus queue.
> [!NOTE] > This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For more samples on other and advanced scenarios, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples). ### Create a console application
-1. Start Visual Studio 2019.
+1. Start Visual Studio 2022.
1. Select **Create a new project**. 1. On the **Create a new project** dialog box, do the following steps: If you don't see this dialog box, select **File** on the menu, select **New**, and then select **Project**. 1. Select **C#** for the programming language.
This section shows you how to create a .NET Core console application to send mes
:::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/project-solution-names.png" alt-text="Image showing the solution and project names in the Configure your new project dialog box "::: 1. On the **Additional information** page, select **Create** to create the solution and the project.
-### Add the Service Bus NuGet package
+### Add the NuGet packages to the project
1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu. 1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package. If you plan to use the passwordless option shown later, also install the **Azure.Identity** package:
- ```cmd
+ ```powershell
Install-Package Azure.Messaging.ServiceBus ```
-### Add code to send messages to the queue
-1. In **Program.cs**, add the following `using` statements at the top of the namespace definition, before the class declaration.
+## Add code to send messages to the queue
- ```csharp
- using System.Threading.Tasks;
- using Azure.Messaging.ServiceBus;
- ```
-
-2. Within the `Program` class, declare the following properties, just before the `Main` method.
+1. Replace the contents of `Program.cs` with the following code. The important steps are outlined below, with additional information in the code comments.
- Replace `<NAMESPACE CONNECTION STRING>` with the primary connection string to your Service Bus namespace. And, replace `<QUEUE NAME>` with the name of your queue.
+ ### [Passwordless (Recommended)](#tab/passwordless)
- ```csharp
+ > [!IMPORTANT]
+ > Per the `TODO` comment, update the placeholder values in the code snippets with the values from the Service Bus you created.
- // connection string to your Service Bus namespace
- static string connectionString = "<NAMESPACE CONNECTION STRING>";
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the passwordless `DefaultAzureCredential` object. `DefaultAzureCredential` will automatically discover and use the credentials of your Visual Studio login to authenticate to Azure Service Bus.
+ * Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus queue.
+ * Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync) method.
+ * Adds messages to the batch using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage) method.
+ * Sends the batch of messages to the Service Bus queue using the [ServiceBusSender.SendMessagesAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.sendmessagesasync) method.
+ ```csharp
+ using Azure.Messaging.ServiceBus;
+ using Azure.Identity;
+
- // name of your Service Bus queue
- static string queueName = "<QUEUE NAME>";
- // the client that owns the connection and can be used to create senders and receivers
- static ServiceBusClient client;
-
+ // the client that owns the connection and can be used to create senders and receivers
+ ServiceBusClient client;
+
// the sender used to publish messages to the queue
- static ServiceBusSender sender;
-
+ ServiceBusSender sender;
+
// number of messages to be sent to the queue
- private const int numOfMessages = 3;
-
+ const int numOfMessages = 3;
+
+ // The Service Bus client types are safe to cache and use as a singleton for the lifetime
+ // of the application, which is best practice when messages are being published or read
+ // regularly.
+ //
+ // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
+ // If you use the default AmqpTcp, ensure that ports 5671 and 5672 are open.
+ var clientOptions = new ServiceBusClientOptions
+ {
+ TransportType = ServiceBusTransportType.AmqpWebSockets
+ };
+ //TODO: Replace the "<NAMESPACE-NAME>" and "<QUEUE-NAME>" placeholders.
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential(),
+ clientOptions);
+ sender = client.CreateSender("<QUEUE-NAME>");
+
+ // create a batch
+ using ServiceBusMessageBatch messageBatch = await sender.CreateMessageBatchAsync();
+
+ for (int i = 1; i <= numOfMessages; i++)
+ {
+ // try adding a message to the batch
+ if (!messageBatch.TryAddMessage(new ServiceBusMessage($"Message {i}")))
+ {
+ // if it is too large for the batch
+ throw new Exception($"The message {i} is too large to fit in the batch.");
+ }
+ }
+
+ try
+ {
+ // Use the producer client to send the batch of messages to the Service Bus queue
+ await sender.SendMessagesAsync(messageBatch);
+ Console.WriteLine($"A batch of {numOfMessages} messages has been published to the queue.");
+ }
+ finally
+ {
+ // Calling DisposeAsync on client types is required to ensure that network
+ // resources and other unmanaged objects are properly cleaned up.
+ await sender.DisposeAsync();
+ await client.DisposeAsync();
+ }
+
+ Console.WriteLine("Press any key to end the application");
+ Console.ReadKey();
```
+
+ ### [Connection string](#tab/connection-string)
-3. Replace code in the `Main` method with the following code. See code comments for details about the code. Here are the important steps from the code.
- 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the primary connection string to the namespace.
- 1. Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus queue.
- 1. Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync) method.
- 1. Add messages to the batch using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage).
- 1. Sends the batch of messages to the Service Bus queue using the [ServiceBusSender.SendMessagesAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.sendmessagesasync) method.
+ > [!IMPORTANT]
+ > Per the `TODO` comment, update the placeholder values in the code snippets with the values from the Service Bus you created.
- ```csharp
- static async Task Main()
- {
- // The Service Bus client types are safe to cache and use as a singleton for the lifetime
- // of the application, which is best practice when messages are being published or read
- // regularly.
- //
- // set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
- // If you use the default AmqpTcp, you will need to make sure that the ports 5671 and 5672 are open
-
- var clientOptions = new ServiceBusClientOptions() { TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient(connectionString, clientOptions);
- sender = client.CreateSender(queueName);
-
- // create a batch
- using ServiceBusMessageBatch messageBatch = await sender.CreateMessageBatchAsync();
-
- for (int i = 1; i <= numOfMessages; i++)
- {
- // try adding a message to the batch
- if (!messageBatch.TryAddMessage(new ServiceBusMessage($"Message {i}")))
- {
- // if it is too large for the batch
- throw new Exception($"The message {i} is too large to fit in the batch.");
- }
- }
-
- try
- {
- // Use the producer client to send the batch of messages to the Service Bus queue
- await sender.SendMessagesAsync(messageBatch);
- Console.WriteLine($"A batch of {numOfMessages} messages has been published to the queue.");
- }
- finally
- {
- // Calling DisposeAsync on client types is required to ensure that network
- // resources and other unmanaged objects are properly cleaned up.
- await sender.DisposeAsync();
- await client.DisposeAsync();
- }
-
- Console.WriteLine("Press any key to end the application");
- Console.ReadKey();
- }
- ```
-
-4. Here's what your Program.cs file should look like:
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string.
+ * Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus queue.
+ * Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync) method.
+ * Adds messages to the batch using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage) method.
+ * Sends the batch of messages to the Service Bus queue using the [ServiceBusSender.SendMessagesAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.sendmessagesasync) method.
```csharp
- using System;
- using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;+
+ // the client that owns the connection and can be used to create senders and receivers
+ ServiceBusClient client;
+
+ // the sender used to publish messages to the queue
+ ServiceBusSender sender;
- namespace QueueSender
+ // number of messages to be sent to the queue
+ const int numOfMessages = 3;
+
+ // The Service Bus client types are safe to cache and use as a singleton for the lifetime
+ // of the application, which is best practice when messages are being published or read
+ // regularly.
+ //
+ // set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
+ // If you use the default AmqpTcp, you will need to make sure that the ports 5671 and 5672 are open
+
+ // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
+ var clientOptions = new ServiceBusClientOptions()
+ {
+ TransportType = ServiceBusTransportType.AmqpWebSockets
+ };
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
+ sender = client.CreateSender("<QUEUE-NAME>");
+
+ // create a batch
+ using ServiceBusMessageBatch messageBatch = await sender.CreateMessageBatchAsync();
+
+ for (int i = 1; i <= numOfMessages; i++)
{
- class Program
+ // try adding a message to the batch
+ if (!messageBatch.TryAddMessage(new ServiceBusMessage($"Message {i}")))
{
- // connection string to your Service Bus namespace
- static string connectionString = "<NAMESPACE CONNECTION STRING>";
-
- // name of your Service Bus queue
- static string queueName = "<QUEUE NAME>";
-
- // the client that owns the connection and can be used to create senders and receivers
- static ServiceBusClient client;
-
- // the sender used to publish messages to the queue
- static ServiceBusSender sender;
-
- // number of messages to be sent to the queue
- private const int numOfMessages = 3;
-
- static async Task Main()
- {
- // The Service Bus client types are safe to cache and use as a singleton for the lifetime
- // of the application, which is best practice when messages are being published or read
- // regularly.
- //
- // set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
- // If you use the default AmqpTcp, you will need to make sure that the ports 5671 and 5672 are open
-
- var clientOptions = new ServiceBusClientOptions() { TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient(connectionString, clientOptions);
- sender = client.CreateSender(queueName);
-
- // create a batch
- using ServiceBusMessageBatch messageBatch = await sender.CreateMessageBatchAsync();
-
- for (int i = 1; i <= numOfMessages; i++)
- {
- // try adding a message to the batch
- if (!messageBatch.TryAddMessage(new ServiceBusMessage($"Message {i}")))
- {
- // if it is too large for the batch
- throw new Exception($"The message {i} is too large to fit in the batch.");
- }
- }
-
- try
- {
- // Use the producer client to send the batch of messages to the Service Bus queue
- await sender.SendMessagesAsync(messageBatch);
- Console.WriteLine($"A batch of {numOfMessages} messages has been published to the queue.");
- }
- finally
- {
- // Calling DisposeAsync on client types is required to ensure that network
- // resources and other unmanaged objects are properly cleaned up.
- await sender.DisposeAsync();
- await client.DisposeAsync();
- }
-
- Console.WriteLine("Press any key to end the application");
- Console.ReadKey();
- }
+ // if it is too large for the batch
+ throw new Exception($"The message {i} is too large to fit in the batch.");
}
- }
+ }
+
+ try
+ {
+ // Use the producer client to send the batch of messages to the Service Bus queue
+ await sender.SendMessagesAsync(messageBatch);
+ Console.WriteLine($"A batch of {numOfMessages} messages has been published to the queue.");
+ }
+ finally
+ {
+ // Calling DisposeAsync on client types is required to ensure that network
+ // resources and other unmanaged objects are properly cleaned up.
+ await sender.DisposeAsync();
+ await client.DisposeAsync();
+ }
+
+ Console.WriteLine("Press any key to end the application");
+ Console.ReadKey();
```
+
+
-5. Replace `<NAMESPACE CONNECTION STRING>` with the primary connection string to your Service Bus namespace. And, replace `<QUEUE NAME>` with the name of your queue.
6. Build the project, and ensure that there are no errors. 7. Run the program and wait for the confirmation message.
This section shows you how to create a .NET Core console application to send mes
1. On the **Overview** page, select the queue in the bottom-middle pane. :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/select-queue.png" alt-text="Image showing the Service Bus Namespace page in the Azure portal with the queue selected." lightbox="./media/service-bus-dotnet-get-started-with-queues/select-queue.png":::+ 1. Notice the values in the **Essentials** section.
- :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/sent-messages-essentials.png" alt-text="Image showing the number of messages received and the size of the queue" lightbox="./media/service-bus-dotnet-get-started-with-queues/sent-messages-essentials.png":::
+ :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/sent-messages-essentials.png" alt-text="Image showing the number of messages received and the size of the queue." lightbox="./media/service-bus-dotnet-get-started-with-queues/sent-messages-essentials.png":::
Notice the following values: - The **Active** message count value for the queue is now **3**. Each time you run this sender app without retrieving the messages, this value increases by 3.
This section shows you how to create a .NET Core console application to send mes
## Receive messages from the queue
-In this section, you'll create a .NET Core console application that receives messages from the queue.
+In this section, you'll create a .NET console application that receives messages from the queue.
> [!NOTE]
-> This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For more samples on other and advanced scenarios, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
+> This quickstart provides step-by-step instructions to implement a scenario of sending a batch of messages to a Service Bus queue and then receiving them. For more samples on other and advanced scenarios, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
### Create a project for the receiver
In this section, you'll create a .NET Core console application that receives mes
1. Enter **QueueReceiver** for the **Project name**, and select **Create**. 1. In the **Solution Explorer** window, right-click **QueueReceiver**, and select **Set as a Startup Project**.
-### Add the Service Bus NuGet package to the Receiver project
+### Add the NuGet packages to the project
+
+### [Passwordless (Recommended)](#tab/passwordless)
1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
-1. In the **Package Manager Console** window, confirm that **QueueReceiver** is selected for the **Default project**. If not, use the drop-down list to select **QueueReceiver**.
+1. Run the following command to install the **Azure.Messaging.ServiceBus** and **Azure.Identity** NuGet packages:
+
+ ```powershell
+ Install-Package Azure.Messaging.ServiceBus
+ Install-Package Azure.Identity
+ ```
+
+ :::image type="content" source="media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console.":::
- :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console":::
+### [Connection String](#tab/connection-string)
+
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
- ```cmd
+ ```powershell
Install-Package Azure.Messaging.ServiceBus ```
+ :::image type="content" source="media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console.":::
++++ ### Add the code to receive messages from the queue In this section, you'll add code to retrieve messages from the queue.
-1. In **Program.cs**, add the following `using` statements at the top of the namespace definition, before the class declaration.
+1. Within the `Program` class, add the following code:
+
+ ### [Passwordless (Recommended)](#tab/passwordless)
```csharp using System.Threading.Tasks;
+ using Azure.Identity;
using Azure.Messaging.ServiceBus;
+
+ // the client that owns the connection and can be used to create senders and receivers
+ ServiceBusClient client;
+
+ // the processor that reads and processes messages from the queue
+ ServiceBusProcessor processor;
```-
-2. Within the `Program` class, declare the following properties, just before the `Main` method.
-
- Replace `<NAMESPACE CONNECTION STRING>` with the primary connection string to your Service Bus namespace. And, replace `<QUEUE NAME>` with the name of your queue.
-
+
+ ### [Connection string](#tab/connection-string)
+
```csharp
- // connection string to your Service Bus namespace
- static string connectionString = "<NAMESPACE CONNECTION STRING>";
-
- // name of your Service Bus queue
- static string queueName = "<QUEUE NAME>";
-
+ using System.Threading.Tasks;
+ using Azure.Messaging.ServiceBus;
+
// the client that owns the connection and can be used to create senders and receivers
- static ServiceBusClient client;
-
+ ServiceBusClient client;
+
// the processor that reads and processes messages from the queue
- static ServiceBusProcessor processor;
+ ServiceBusProcessor processor;
```
+
+
-3. Add the following methods to the `Program` class to handle received messages and any errors.
+1. Append the following methods to the end of the `Program` class.
```csharp // handle received messages
- static async Task MessageHandler(ProcessMessageEventArgs args)
+ async Task MessageHandler(ProcessMessageEventArgs args)
{ string body = args.Message.Body.ToString(); Console.WriteLine($"Received: {body}");
In this section, you'll add code to retrieve messages from the queue.
} // handle any errors when receiving messages
- static Task ErrorHandler(ProcessErrorEventArgs args)
+ Task ErrorHandler(ProcessErrorEventArgs args)
{ Console.WriteLine(args.Exception.ToString()); return Task.CompletedTask; } ```
-4. Replace code in the `Main` method with the following code. See code comments for details about the code. Here are the important steps from the code.
- 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the primary connection string to the namespace.
- 1. Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
- 1. Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
- 1. Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
- 1. When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
-
- For more information, see code comments.
-
- ```csharp
- static async Task Main()
- {
- // The Service Bus client types are safe to cache and use as a singleton for the lifetime
- // of the application, which is best practice when messages are being published or read
- // regularly.
- //
- // set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
- // If you use the default AmqpTcp, you will need to make sure that the ports 5671 and 5672 are open
-
- var clientOptions = new ServiceBusClientOptions() { TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient(connectionString, clientOptions);
-
- // create a processor that we can use to process the messages
- processor = client.CreateProcessor(queueName, new ServiceBusProcessorOptions());
-
- try
- {
- // add handler to process messages
- processor.ProcessMessageAsync += MessageHandler;
-
- // add handler to process any errors
- processor.ProcessErrorAsync += ErrorHandler;
-
- // start processing
- await processor.StartProcessingAsync();
-
- Console.WriteLine("Wait for a minute and then press any key to end the processing");
- Console.ReadKey();
-
- // stop processing
- Console.WriteLine("\nStopping the receiver...");
- await processor.StopProcessingAsync();
- Console.WriteLine("Stopped receiving messages");
- }
- finally
- {
- // Calling DisposeAsync on client types is required to ensure that network
- // resources and other unmanaged objects are properly cleaned up.
- await processor.DisposeAsync();
- await client.DisposeAsync();
- }
- }
- ```
-
-5. Here's what your `Program.cs` should look like:
+1. Append the following code to the end of the `Program` class. The important steps are outlined below, with additional information in the code comments.
+
+ ### [Passwordless (Recommended)](#tab/passwordless)
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the `DefaultAzureCredential` object. `DefaultAzureCredential` will automatically discover and use the credentials of your Visual Studio login to authenticate to Azure Service Bus.
+ * Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
+ * Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
+ * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) method on the `ServiceBusProcessor` object.
+ * When the user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) method on the `ServiceBusProcessor` object.
+
+
+ ```csharp
+ // The Service Bus client types are safe to cache and use as a singleton for the lifetime
+ // of the application, which is best practice when messages are being published or read
+ // regularly.
+ //
+ // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
+ // If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
+
+ // TODO: Replace the <NAMESPACE-NAME> placeholder
+ var clientOptions = new ServiceBusClientOptions()
+ {
+ TransportType = ServiceBusTransportType.AmqpWebSockets
+ };
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential(),
+ clientOptions);
+
+ // create a processor that we can use to process the messages
+ // TODO: Replace the <QUEUE-NAME> placeholder
+ processor = client.CreateProcessor("<QUEUE-NAME>", new ServiceBusProcessorOptions());
+
+ try
+ {
+ // add handler to process messages
+ processor.ProcessMessageAsync += MessageHandler;
+
+ // add handler to process any errors
+ processor.ProcessErrorAsync += ErrorHandler;
+
+ // start processing
+ await processor.StartProcessingAsync();
+
+ Console.WriteLine("Wait for a minute and then press any key to end the processing");
+ Console.ReadKey();
+
+ // stop processing
+ Console.WriteLine("\nStopping the receiver...");
+ await processor.StopProcessingAsync();
+ Console.WriteLine("Stopped receiving messages");
+ }
+ finally
+ {
+ // Calling DisposeAsync on client types is required to ensure that network
+ // resources and other unmanaged objects are properly cleaned up.
+ await processor.DisposeAsync();
+ await client.DisposeAsync();
+ }
+ ```
+
+ ### [Connection string](#tab/connection-string)
+
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string.
+ * Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
+ * Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
+ * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) method on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
+ * When the user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) method on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
+
+ ```csharp
+ // The Service Bus client types are safe to cache and use as a singleton for the lifetime
+ // of the application, which is best practice when messages are being published or read
+ // regularly.
+ //
+ // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
+ // If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
+
+ // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
+ var clientOptions = new ServiceBusClientOptions()
+ {
+ TransportType = ServiceBusTransportType.AmqpWebSockets
+ };
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
+
+ // create a processor that we can use to process the messages
+ // TODO: Replace the <QUEUE-NAME> placeholder
+ processor = client.CreateProcessor("<QUEUE-NAME>", new ServiceBusProcessorOptions());
+
+ try
+ {
+ // add handler to process messages
+ processor.ProcessMessageAsync += MessageHandler;
+
+ // add handler to process any errors
+ processor.ProcessErrorAsync += ErrorHandler;
+
+ // start processing
+ await processor.StartProcessingAsync();
+
+ Console.WriteLine("Wait for a minute and then press any key to end the processing");
+ Console.ReadKey();
+
+ // stop processing
+ Console.WriteLine("\nStopping the receiver...");
+ await processor.StopProcessingAsync();
+ Console.WriteLine("Stopped receiving messages");
+ }
+ finally
+ {
+ // Calling DisposeAsync on client types is required to ensure that network
+ // resources and other unmanaged objects are properly cleaned up.
+ await processor.DisposeAsync();
+ await client.DisposeAsync();
+ }
+ ```
+
+
+
+1. The completed `Program` class should match the following code:
+
+ ### [Passwordless (Recommended)](#tab/passwordless)
+
```csharp
- using System;
using System.Threading.Tasks; using Azure.Messaging.ServiceBus;
+ using Azure.Identity;
- namespace QueueReceiver
+ // the client that owns the connection and can be used to create senders and receivers
+ ServiceBusClient client;
+
+ // the processor that reads and processes messages from the queue
+ ServiceBusProcessor processor;
+
+ // The Service Bus client types are safe to cache and use as a singleton for the lifetime
+ // of the application, which is best practice when messages are being published or read
+ // regularly.
+ //
+ // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
+ // If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
+
+ // TODO: Replace the <NAMESPACE-NAME> and <QUEUE-NAME> placeholders
+ var clientOptions = new ServiceBusClientOptions()
{
- class Program
- {
- // connection string to your Service Bus namespace
- static string connectionString = "<NAMESPACE CONNECTION STRING>";
-
- // name of your Service Bus queue
- static string queueName = "<QUEUE NAME>";
-
-
- // the client that owns the connection and can be used to create senders and receivers
- static ServiceBusClient client;
-
- // the processor that reads and processes messages from the queue
- static ServiceBusProcessor processor;
-
- // handle received messages
- static async Task MessageHandler(ProcessMessageEventArgs args)
- {
- string body = args.Message.Body.ToString();
- Console.WriteLine($"Received: {body}");
-
- // complete the message. messages is deleted from the queue.
- await args.CompleteMessageAsync(args.Message);
- }
-
- // handle any errors when receiving messages
- static Task ErrorHandler(ProcessErrorEventArgs args)
- {
- Console.WriteLine(args.Exception.ToString());
- return Task.CompletedTask;
- }
-
- static async Task Main()
- {
- // The Service Bus client types are safe to cache and use as a singleton for the lifetime
- // of the application, which is best practice when messages are being published or read
- // regularly.
- //
- // set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
- // If you use the default AmqpTcp, you will need to make sure that the ports 5671 and 5672 are open
-
- var clientOptions = new ServiceBusClientOptions() { TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient(connectionString, clientOptions);
-
- // create a processor that we can use to process the messages
- processor = client.CreateProcessor(queueName, new ServiceBusProcessorOptions());
-
- try
- {
- // add handler to process messages
- processor.ProcessMessageAsync += MessageHandler;
-
- // add handler to process any errors
- processor.ProcessErrorAsync += ErrorHandler;
-
- // start processing
- await processor.StartProcessingAsync();
-
- Console.WriteLine("Wait for a minute and then press any key to end the processing");
- Console.ReadKey();
-
- // stop processing
- Console.WriteLine("\nStopping the receiver...");
- await processor.StopProcessingAsync();
- Console.WriteLine("Stopped receiving messages");
- }
- finally
- {
- // Calling DisposeAsync on client types is required to ensure that network
- // resources and other unmanaged objects are properly cleaned up.
- await processor.DisposeAsync();
- await client.DisposeAsync();
- }
- }
- }
+ TransportType = ServiceBusTransportType.AmqpWebSockets
+ };
+ client = new ServiceBusClient("<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential(), clientOptions);
+
+ // create a processor that we can use to process the messages
+ // TODO: Replace the <QUEUE-NAME> placeholder
+ processor = client.CreateProcessor("<QUEUE-NAME>", new ServiceBusProcessorOptions());
+
+ try
+ {
+ // add handler to process messages
+ processor.ProcessMessageAsync += MessageHandler;
+
+ // add handler to process any errors
+ processor.ProcessErrorAsync += ErrorHandler;
+
+ // start processing
+ await processor.StartProcessingAsync();
+
+ Console.WriteLine("Wait for a minute and then press any key to end the processing");
+ Console.ReadKey();
+
+ // stop processing
+ Console.WriteLine("\nStopping the receiver...");
+ await processor.StopProcessingAsync();
+ Console.WriteLine("Stopped receiving messages");
+ }
+ finally
+ {
+ // Calling DisposeAsync on client types is required to ensure that network
+ // resources and other unmanaged objects are properly cleaned up.
+ await processor.DisposeAsync();
+ await client.DisposeAsync();
+ }
+
+ // handle received messages
+ async Task MessageHandler(ProcessMessageEventArgs args)
+ {
+ string body = args.Message.Body.ToString();
+ Console.WriteLine($"Received: {body}");
+
+ // complete the message. message is deleted from the queue.
+ await args.CompleteMessageAsync(args.Message);
+ }
+
+ // handle any errors when receiving messages
+ Task ErrorHandler(ProcessErrorEventArgs args)
+ {
+ Console.WriteLine(args.Exception.ToString());
+ return Task.CompletedTask;
} ```
+
+ ### [Connection string](#tab/connection-string)
+
+ ```csharp
+ using Azure.Messaging.ServiceBus;
+ using System;
+ using System.Threading.Tasks;
+
+ // the client that owns the connection and can be used to create senders and receivers
+ ServiceBusClient client;
+
+ // the processor that reads and processes messages from the queue
+ ServiceBusProcessor processor;
+
+ // The Service Bus client types are safe to cache and use as a singleton for the lifetime
+ // of the application, which is best practice when messages are being published or read
+ // regularly.
+ //
+ // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
+ // If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
+
+ // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
+ var clientOptions = new ServiceBusClientOptions()
+ {
+ TransportType = ServiceBusTransportType.AmqpWebSockets
+ };
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
+
+ // create a processor that we can use to process the messages
+ // TODO: Replace the <QUEUE-NAME> placeholder
+ processor = client.CreateProcessor("<QUEUE-NAME>", new ServiceBusProcessorOptions());
+
+ try
+ {
+ // add handler to process messages
+ processor.ProcessMessageAsync += MessageHandler;
+
+ // add handler to process any errors
+ processor.ProcessErrorAsync += ErrorHandler;
+
+ // start processing
+ await processor.StartProcessingAsync();
+
+ Console.WriteLine("Wait for a minute and then press any key to end the processing");
+ Console.ReadKey();
+
+ // stop processing
+ Console.WriteLine("\nStopping the receiver...");
+ await processor.StopProcessingAsync();
+ Console.WriteLine("Stopped receiving messages");
+ }
+ finally
+ {
+ // Calling DisposeAsync on client types is required to ensure that network
+ // resources and other unmanaged objects are properly cleaned up.
+ await processor.DisposeAsync();
+ await client.DisposeAsync();
+ }
+
+ // handle received messages
+ async Task MessageHandler(ProcessMessageEventArgs args)
+ {
+ string body = args.Message.Body.ToString();
+ Console.WriteLine($"Received: {body}");
+
+ // complete the message. message is deleted from the queue.
+ await args.CompleteMessageAsync(args.Message);
+ }
+
+ // handle any errors when receiving messages
+ Task ErrorHandler(ProcessErrorEventArgs args)
+ {
+ Console.WriteLine(args.Exception.ToString());
+ return Task.CompletedTask;
+ }
+ ```
+
+
-6. Replace `<NAMESPACE CONNECTION STRING>` with the primary connection string to your Service Bus namespace. And, replace `<QUEUE NAME>` with the name of your queue.
-7. Build the project, and ensure that there are no errors.
-8. Run the receiver application. You should see the received messages. Press any key to stop the receiver and the application.
+1. Build the project, and ensure that there are no errors.
+1. Run the receiver application. You should see the received messages. Press any key to stop the receiver and the application.
    ```console
    Wait for a minute and then press any key to end the processing
    Received: Message 1
    Received: Message 2
    Received: Message 3

    Stopping the receiver...
    Stopped receiving messages
    ```
-9. Check the portal again. Wait for a few minutes and refresh the page if you don't see `0` for **Active** messages.
+1. Check the portal again. Wait for a few minutes and refresh the page if you don't see `0` for **Active** messages.
    - The **Active** message count and **Current size** values are now **0**.
    - In the **Messages** chart in the bottom **Metrics** section, you can see that there are three incoming messages and three outgoing messages for the queue.

      :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/queue-messages-size-final.png" alt-text="Screenshot showing active messages and size after receive." lightbox="./media/service-bus-dotnet-get-started-with-queues/queue-messages-size-final.png":::
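The receiver code above creates the processor with default settings by passing `new ServiceBusProcessorOptions()`. If you need to tune how messages are received, `ServiceBusProcessorOptions` exposes several settings. The following sketch is illustrative only and isn't required for this quickstart; the values shown are examples, not recommendations:

```csharp
using Azure.Messaging.ServiceBus;

// Illustrative only: the quickstart uses the default options.
var options = new ServiceBusProcessorOptions
{
    // When false, your message handler must settle each message explicitly,
    // for example by calling CompleteMessageAsync. The default is true.
    AutoCompleteMessages = false,

    // Number of messages the handler processes concurrently. The default is 1.
    MaxConcurrentCalls = 2,

    // Number of messages to prefetch to reduce round trips. The default is 0.
    PrefetchCount = 10
};

// Pass the options when you create the processor, for example:
// processor = client.CreateProcessor("<QUEUE-NAME>", options);
```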
+
## Clean up resources Navigate to your Service Bus namespace in the Azure portal, and select **Delete** to delete the namespace and the queue in it.
service-bus-messaging Service Bus Dotnet How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md
In this quickstart, you'll do the following steps:
1. Create a Service Bus namespace, using the Azure portal. 2. Create a Service Bus topic, using the Azure portal. 3. Create a Service Bus subscription to that topic, using the Azure portal.
-4. Write a .NET Core console application to send a set of messages to the topic.
-5. Write a .NET Core console application to receive those messages from the subscription.
+4. Write a .NET console application to send a set of messages to the topic.
+5. Write a .NET console application to receive those messages from the subscription.
> [!NOTE] > This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus topic and receiving those messages from a subscription of the topic. For more samples on other and advanced scenarios, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples). ## Prerequisites
-If you're new to the service, see [Service Bus overview](service-bus-messaging-overview.md) before you do this quickstart.
-- **Azure subscription**. To use Azure services, including Azure Service Bus, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).-- **Microsoft Visual Studio 2019**. The Azure Service Bus client library makes use of new features that were introduced in C# 8.0. You can still use the library with previous C# language versions, but the new syntax won't be available. To make use of the full syntax, we recommend that you compile with the [.NET Core SDK](https://dotnet.microsoft.com/download) 3.0 or higher and [language version](/dotnet/csharp/language-reference/configure-language-version#override-a-default) set to `latest`. If you're using Visual Studio, versions before Visual Studio 2019 aren't compatible with the tools needed to build C# 8.0 projects. Visual Studio 2019, including the free Community edition, can be downloaded [here](https://visualstudio.microsoft.com/vs/).
+If you're new to the service, see [Service Bus overview](service-bus-messaging-overview.md) before you do this quickstart.
+- **Azure subscription**. To use Azure services, including Azure Service Bus, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/dotnet/).
+- **Visual Studio 2022**. The sample application makes use of new features that were introduced in C# 10. You can still use the Service Bus client library with previous C# language versions, but the syntax may vary. To use the latest syntax, we recommend that you install .NET 6.0 or higher and set the language version to `latest`. If you're using Visual Studio, versions before Visual Studio 2022 aren't compatible with the tools needed to build C# 10 projects.
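For reference, the updated samples in this quickstart are written as top-level statements, which the .NET 6 console template uses by default. A minimal sketch of that style follows; the output strings are illustrative:

```csharp
using System;
using System.Threading.Tasks;

// Top-level statements: no namespace, class, or Main method is declared.
// The compiler generates the entry point, and 'await' can be used directly.
Console.WriteLine("Starting...");
await Task.Delay(100);
Console.WriteLine("Done.");
```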
+ [!INCLUDE [service-bus-create-topic-subscription-portal](./includes/service-bus-create-topic-subscription-portal.md)]
If you're new to the service, see [Service Bus overview](service-bus-messaging-o
> Note down the connection string to the namespace, the topic name, and the subscription name. You'll use them later in this tutorial. ## Send messages to the topic
-This section shows you how to create a .NET Core console application to send messages to a Service Bus topic.
+This section shows you how to create a .NET console application to send messages to a Service Bus topic.
> [!NOTE] > This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus topic and receiving those messages from a subscription of the topic. For more samples on other and advanced scenarios, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples). ### Create a console application
-1. Start Visual Studio 2019.
-1. Select **Create a new project**.
-1. On the **Create a new project** dialog box, do the following steps: If you don't see this dialog box, select **File** on the menu, select **New**, and then select **Project**.
+1. Start Visual Studio 2022.
+1. Select **Create a new project**.
+1. On the **Create a new project** dialog box, do the following steps: If you don't see this dialog box, select **File** on the menu, select **New**, and then select **Project**.
1. Select **C#** for the programming language.
- 1. Select **Console** for the type of the application.
- 1. Select **Console Application** from the results list.
- 1. Then, select **Next**.
+ 1. Select **Console** for the type of the application.
+ 1. Select **Console Application** from the results list.
+ 1. Then, select **Next**.
:::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/new-send-project.png" alt-text="Image showing the Create a new project dialog box with C# and Console selected"::: 1. Enter **TopicSender** for the project name, **ServiceBusTopicQuickStart** for the solution name, and then select **Next**. 1. On the **Additional information** page, select **Create** to create the solution and the project.
-### Add the Service Bus NuGet package
+### Add the NuGet packages to the project
-1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
-1. Run the following command to install the [Azure.Messaging.ServiceBus](https://www.nuget.org/packages/Azure.Messaging.ServiceBus/) NuGet package:
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
+1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
- ```cmd
+ ```powershell
    Install-Package Azure.Messaging.ServiceBus
    ```
+
### Add code to send messages to the topic
-1. In **Program.cs**, add the following `using` statements at the top of the namespace definition, before the class declaration.
+1. Replace the contents of Program.cs with the following code. The important steps are outlined below, with additional information in the code comments.
+
+ ## [Passwordless (Recommended)](#tab/passwordless)
+ 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the passwordless `DefaultAzureCredential` object.
+ 1. Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the `ServiceBusClient` object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus topic.
+ 1. Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync) method.
+ 1. Adds messages to the batch by using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage) method.
+ 1. Sends the batch of messages to the Service Bus topic using the [ServiceBusSender.SendMessagesAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.sendmessagesasync) method.
+
    ```csharp
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;
- ```
-2. Within the `Program` class, declare the following properties, just before the `Main` method. Replace `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace. And, replace `<TOPIC NAME>` with the name of your Service Bus topic.
-
- ```csharp
- // connection string to your Service Bus namespace
- static string connectionString = "<NAMESPACE CONNECTION STRING>";
-
- // name of your Service Bus topic
- static string topicName = "<TOPIC NAME>";
+ using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers
- static ServiceBusClient client;
+ ServiceBusClient client;
// the sender used to publish messages to the topic
- static ServiceBusSender sender;
+ ServiceBusSender sender;
// number of messages to be sent to the topic
- private const int numOfMessages = 3;
- ```
-1. Replace code in the **Program.cs** with the following code. Here are the important steps from the code.
- 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace.
- 1. Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the `ServiceBusClient` object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus topic.
- 1. Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync).
- 1. Add messages to the batch using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage).
- 1. Sends the batch of messages to the Service Bus topic using the [ServiceBusSender.SendMessagesAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.sendmessagesasync) method.
-
- ```csharp
- static async Task Main()
- {
- // The Service Bus client types are safe to cache and use as a singleton for the lifetime
- // of the application, which is best practice when messages are being published or read
- // regularly.
- //
- // Create the clients that we'll use for sending and processing messages.
- client = new ServiceBusClient(connectionString);
- sender = client.CreateSender(topicName);
-
- // create a batch
- using ServiceBusMessageBatch messageBatch = await sender.CreateMessageBatchAsync();
-
- for (int i = 1; i <= numOfMessages; i++)
- {
- // try adding a message to the batch
- if (!messageBatch.TryAddMessage(new ServiceBusMessage($"Message {i}")))
- {
- // if it is too large for the batch
- throw new Exception($"The message {i} is too large to fit in the batch.");
- }
- }
+ const int numOfMessages = 3;
- try
- {
- // Use the producer client to send the batch of messages to the Service Bus topic
- await sender.SendMessagesAsync(messageBatch);
- Console.WriteLine($"A batch of {numOfMessages} messages has been published to the topic.");
- }
- finally
+ // The Service Bus client types are safe to cache and use as a singleton for the lifetime
+ // of the application, which is best practice when messages are being published or read
+ // regularly.
+
+ //TODO: Replace the "<NAMESPACE-NAME>" and "<TOPIC-NAME>" placeholders.
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential());
+ sender = client.CreateSender("<TOPIC-NAME>");
+
+ // create a batch
+ using ServiceBusMessageBatch messageBatch = await sender.CreateMessageBatchAsync();
+
+ for (int i = 1; i <= numOfMessages; i++)
+ {
+ // try adding a message to the batch
+ if (!messageBatch.TryAddMessage(new ServiceBusMessage($"Message {i}")))
{
- // Calling DisposeAsync on client types is required to ensure that network
- // resources and other unmanaged objects are properly cleaned up.
- await sender.DisposeAsync();
- await client.DisposeAsync();
+ // if it is too large for the batch
+ throw new Exception($"The message {i} is too large to fit in the batch.");
}
+ }
- Console.WriteLine("Press any key to end the application");
- Console.ReadKey();
+ try
+ {
+ // Use the producer client to send the batch of messages to the Service Bus topic
+ await sender.SendMessagesAsync(messageBatch);
+ Console.WriteLine($"A batch of {numOfMessages} messages has been published to the topic.");
+ }
+ finally
+ {
+ // Calling DisposeAsync on client types is required to ensure that network
+ // resources and other unmanaged objects are properly cleaned up.
+ await sender.DisposeAsync();
+ await client.DisposeAsync();
        }
+
+ Console.WriteLine("Press any key to end the application");
+ Console.ReadKey();
```
-1. Here's what your Program.cs file should look like:
-
- For more information, see code comments.
+ ## [Connection String](#tab/connection-string)
+
+ 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace.
+ 1. Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the `ServiceBusClient` object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus topic.
+ 1. Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync) method.
+ 1. Adds messages to the batch by using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage) method.
+ 1. Sends the batch of messages to the Service Bus topic using the [ServiceBusSender.SendMessagesAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.sendmessagesasync) method.
+
```csharp
- using System;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;
-
- namespace TopicSender
+
+ // the client that owns the connection and can be used to create senders and receivers
+ ServiceBusClient client;
+
+ // the sender used to publish messages to the topic
+ ServiceBusSender sender;
+
+ // number of messages to be sent to the topic
+ const int numOfMessages = 3;
+
+ // The Service Bus client types are safe to cache and use as a singleton for the lifetime
+ // of the application, which is best practice when messages are being published or read
+ // regularly.
+ //TODO: Replace the "<NAMESPACE-CONNECTION-STRING>" and "<TOPIC-NAME>" placeholders.
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>");
+ sender = client.CreateSender("<TOPIC-NAME>");
+
+ // create a batch
+ using ServiceBusMessageBatch messageBatch = await sender.CreateMessageBatchAsync();
+
+ for (int i = 1; i <= numOfMessages; i++)
{
- class Program
+ // try adding a message to the batch
+ if (!messageBatch.TryAddMessage(new ServiceBusMessage($"Message {i}")))
{
- // connection string to your Service Bus namespace
- static string connectionString = "<NAMESPACE CONNECTION STRING>";
-
- // name of your Service Bus topic
- static string topicName = "<TOPIC NAME>";
-
- // the client that owns the connection and can be used to create senders and receivers
- static ServiceBusClient client;
-
- // the sender used to publish messages to the topic
- static ServiceBusSender sender;
-
- // number of messages to be sent to the topic
- private const int numOfMessages = 3;
-
- static async Task Main()
- {
- // The Service Bus client types are safe to cache and use as a singleton for the lifetime
- // of the application, which is best practice when messages are being published or read
- // regularly.
- //
- // Create the clients that we'll use for sending and processing messages.
- client = new ServiceBusClient(connectionString);
- sender = client.CreateSender(topicName);
-
- // create a batch
- using ServiceBusMessageBatch messageBatch = await sender.CreateMessageBatchAsync();
-
- for (int i = 1; i <= numOfMessages; i++)
- {
- // try adding a message to the batch
- if (!messageBatch.TryAddMessage(new ServiceBusMessage($"Message {i}")))
- {
- // if it is too large for the batch
- throw new Exception($"The message {i} is too large to fit in the batch.");
- }
- }
-
- try
- {
- // Use the producer client to send the batch of messages to the Service Bus topic
- await sender.SendMessagesAsync(messageBatch);
- Console.WriteLine($"A batch of {numOfMessages} messages has been published to the topic.");
- }
- finally
- {
- // Calling DisposeAsync on client types is required to ensure that network
- // resources and other unmanaged objects are properly cleaned up.
- await sender.DisposeAsync();
- await client.DisposeAsync();
- }
-
- Console.WriteLine("Press any key to end the application");
- Console.ReadKey();
- }
+ // if it is too large for the batch
+ throw new Exception($"The message {i} is too large to fit in the batch.");
}
- }
+ }
+
+ try
+ {
+ // Use the producer client to send the batch of messages to the Service Bus topic
+ await sender.SendMessagesAsync(messageBatch);
+ Console.WriteLine($"A batch of {numOfMessages} messages has been published to the topic.");
+ }
+ finally
+ {
+ // Calling DisposeAsync on client types is required to ensure that network
+ // resources and other unmanaged objects are properly cleaned up.
+ await sender.DisposeAsync();
+ await client.DisposeAsync();
+ }
+
+ Console.WriteLine("Press any key to end the application");
+ Console.ReadKey();
```
-1. Replace `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace. And, replace `<TOPIC NAME>` with the name of your Service Bus topic.
+
+
+ 1. Build the project, and ensure that there are no errors.
+ 1. Run the program and wait for the confirmation message.
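Note that `TryAddMessage` returns `false` when a message doesn't fit in the batch, and the sample above throws in that case. If you'd rather send what fits and continue, the following hypothetical helper (`SendAllAsync` is not part of the quickstart) sketches one way to do that:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Illustrative only: send the current batch and start a new one when a message
// doesn't fit, instead of throwing as the quickstart sample does.
async Task SendAllAsync(ServiceBusSender sender, IEnumerable<string> bodies)
{
    ServiceBusMessageBatch batch = await sender.CreateMessageBatchAsync();
    foreach (string body in bodies)
    {
        if (!batch.TryAddMessage(new ServiceBusMessage(body)))
        {
            // the batch is full: send what fits and start a new batch
            await sender.SendMessagesAsync(batch);
            batch.Dispose();
            batch = await sender.CreateMessageBatchAsync();

            if (!batch.TryAddMessage(new ServiceBusMessage(body)))
            {
                throw new Exception("The message is too large to fit in an empty batch.");
            }
        }
    }

    if (batch.Count > 0)
    {
        await sender.SendMessagesAsync(batch);
    }
    batch.Dispose();
}
```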
This section shows you how to create a .NET Core console application to send mes
:::image type="content" source="./media/service-bus-dotnet-how-to-use-topics-subscriptions/subscription-page.png" alt-text="Messages received at the subscription" lightbox="./media/service-bus-dotnet-how-to-use-topics-subscriptions/subscription-page.png"::: ## Receive messages from a subscription
-In this section, you'll create a .NET Core console application that receives messages from the subscription to the Service Bus topic.
+In this section, you'll create a .NET console application that receives messages from the subscription to the Service Bus topic.
> [!NOTE] > This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus topic and receiving those messages from a subscription of the topic. For more samples on other and advanced scenarios, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
In this section, you'll create a .NET Core console application that receives mes
1. On the **Additional information** page, select **Create**. 1. In the **Solution Explorer** window, right-click **SubscriptionReceiver**, and select **Set as a Startup Project**.
-### Add the Service Bus NuGet package
+### Add the NuGet packages to the project
+
+### [Passwordless (Recommended)](#tab/passwordless)
-1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
-1. In the **Package Manager Console** window, confirm that **SubscriptionReceiver** is selected for the **Default project**. If not, use the drop-down list to select **SubscriptionReceiver**.
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
+1. Run the following command to install the **Azure.Messaging.ServiceBus** and **Azure.Identity** NuGet packages:
- :::image type="content" source="./media/service-bus-dotnet-how-to-use-topics-subscriptions/select-subscription-receiver-project.png" alt-text="Image showing the selection of SubscriptionReceiver project in the Package Manager Console window." lightbox="./media/service-bus-dotnet-how-to-use-topics-subscriptions/select-subscription-receiver-project.png":::
+ ```powershell
+ Install-Package Azure.Messaging.ServiceBus
+ Install-Package Azure.Identity
+ ```
+
+ :::image type="content" source="media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console.":::
+
+### [Connection String](#tab/connection-string)
+
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
- ```cmd
+ ```powershell
    Install-Package Azure.Messaging.ServiceBus
    ```
+ :::image type="content" source="media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console.":::
+
+
+
### Add code to receive messages from the subscription
-1. In **Program.cs**, add the following `using` statements at the top of the namespace definition, before the class declaration.
+
+In this section, you'll add code to retrieve messages from the subscription.
+
+1. Replace the existing contents of `Program.cs` with the following properties and methods:
+
+ ## [Passwordless (Recommended)](#tab/passwordless)
    ```csharp
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;
- ```
-2. Within the `Program` class, declare the following properties, just before the `Main` method. Replace the placeholders with correct values:
- - `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace
- - `<TOPIC NAME>` with the name of your Service Bus topic
- - `<SERVICE BUS - TOPIC SUBSCRIPTION NAME>` with the name of the subscription to the topic.
+ using Azure.Identity;
- ```csharp
- // connection string to your Service Bus namespace
- static string connectionString = "<NAMESPACE CONNECTION STRING>";
+ // the client that owns the connection and can be used to create senders and receivers
+ ServiceBusClient client;
- // name of the Service Bus topic
- static string topicName = "<SERVICE BUS TOPIC NAME>";
+ // the processor that reads and processes messages from the subscription
+ ServiceBusProcessor processor;
- // name of the subscription to the topic
- static string subscriptionName = "<SERVICE BUS - TOPIC SUBSCRIPTION NAME>";
+ // handle received messages
+ async Task MessageHandler(ProcessMessageEventArgs args)
+ {
+ string body = args.Message.Body.ToString();
+ Console.WriteLine($"Received: {body} from subscription.");
- // the client that owns the connection and can be used to create senders and receivers
- static ServiceBusClient client;
+ // complete the message. message is deleted from the subscription.
+ await args.CompleteMessageAsync(args.Message);
+ }
- // the processor that reads and processes messages from the subscription
- static ServiceBusProcessor processor;
+ // handle any errors when receiving messages
+ Task ErrorHandler(ProcessErrorEventArgs args)
+ {
+ Console.WriteLine(args.Exception.ToString());
+ return Task.CompletedTask;
+ }
```
-3. Add the following methods to the `Program` class to handle received messages and any errors.
+
+ ## [Connection String](#tab/connection-string)
```csharp
+ using System.Threading.Tasks;
+ using Azure.Messaging.ServiceBus;
+
+ // the client that owns the connection and can be used to create senders and receivers
+ ServiceBusClient client;
+
+ // the processor that reads and processes messages from the subscription
+ ServiceBusProcessor processor;
+ // handle received messages
- static async Task MessageHandler(ProcessMessageEventArgs args)
+ async Task MessageHandler(ProcessMessageEventArgs args)
{
+ // TODO: Replace the <TOPIC-SUBSCRIPTION-NAME> placeholder
string body = args.Message.Body.ToString();
- Console.WriteLine($"Received: {body} from subscription: {subscriptionName}");
+ Console.WriteLine($"Received: {body} from subscription: <TOPIC-SUBSCRIPTION-NAME>");
        // complete the message. message is deleted from the subscription.
        await args.CompleteMessageAsync(args.Message);
    }

    // handle any errors when receiving messages
- static Task ErrorHandler(ProcessErrorEventArgs args)
+ Task ErrorHandler(ProcessErrorEventArgs args)
    {
        Console.WriteLine(args.Exception.ToString());
        return Task.CompletedTask;
    }
    ```
-1. Replace code in the **Program.cs** with the following code. Here are the important steps from the code:
- 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace.
- 1. Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
- 1. Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the `ServiceBusProcessor` object.
- 1. Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the `ServiceBusProcessor` object.
- 1. When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the `ServiceBusProcessor` object.
-
+
+1. Append the following code to the end of `Program.cs`.
+
+ ## [Passwordless (Recommended)](#tab/passwordless)
+
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the passwordless `DefaultAzureCredential` object.
+ * Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus topic.
+ * Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the `ServiceBusProcessor` object.
+ * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the `ServiceBusProcessor` object.
+ * When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the `ServiceBusProcessor` object.
+
    For more information, see code comments.

    ```csharp
- static async Task Main()
+ // The Service Bus client types are safe to cache and use as a singleton for the lifetime
+ // of the application, which is best practice when messages are being published or read
+ // regularly.
+ //
+ // Create the clients that we'll use for sending and processing messages.
+ // TODO: Replace the <NAMESPACE-NAME> placeholder
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential());
+
+ // create a processor that we can use to process the messages
+ // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
+ processor = client.CreateProcessor("<TOPIC-NAME>", "<SUBSCRIPTION-NAME>", new ServiceBusProcessorOptions());
+
+ try
{
- // The Service Bus client types are safe to cache and use as a singleton for the lifetime
- // of the application, which is best practice when messages are being published or read
- // regularly.
- //
- // Create the clients that we'll use for sending and processing messages.
- client = new ServiceBusClient(connectionString);
+ // add handler to process messages
+ processor.ProcessMessageAsync += MessageHandler;
- // create a processor that we can use to process the messages
- processor = client.CreateProcessor(topicName, subscriptionName, new ServiceBusProcessorOptions());
+ // add handler to process any errors
+ processor.ProcessErrorAsync += ErrorHandler;
- try
- {
- // add handler to process messages
- processor.ProcessMessageAsync += MessageHandler;
+ // start processing
+ await processor.StartProcessingAsync();
- // add handler to process any errors
- processor.ProcessErrorAsync += ErrorHandler;
+ Console.WriteLine("Wait for a minute and then press any key to end the processing");
+ Console.ReadKey();
- // start processing
- await processor.StartProcessingAsync();
+ // stop processing
+ Console.WriteLine("\nStopping the receiver...");
+ await processor.StopProcessingAsync();
+ Console.WriteLine("Stopped receiving messages");
+ }
+ finally
+ {
+ // Calling DisposeAsync on client types is required to ensure that network
+ // resources and other unmanaged objects are properly cleaned up.
+ await processor.DisposeAsync();
+ await client.DisposeAsync();
+ }
+ ```
- Console.WriteLine("Wait for a minute and then press any key to end the processing");
- Console.ReadKey();
+ ## [Connection String](#tab/connection-string)
- // stop processing
- Console.WriteLine("\nStopping the receiver...");
- await processor.StopProcessingAsync();
- Console.WriteLine("Stopped receiving messages");
- }
- finally
- {
- // Calling DisposeAsync on client types is required to ensure that network
- // resources and other unmanaged objects are properly cleaned up.
- await processor.DisposeAsync();
- await client.DisposeAsync();
- }
- }
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace.
+ * Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus topic.
+ * Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the `ServiceBusProcessor` object.
+ * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the `ServiceBusProcessor` object.
+ * When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the `ServiceBusProcessor` object.
+
+ For more information, see code comments.
+
+ ```csharp
+ // The Service Bus client types are safe to cache and use as a singleton for the lifetime
+ // of the application, which is best practice when messages are being published or read
+ // regularly.
+ //
+ // Create the clients that we'll use for sending and processing messages.
+ // TODO: Replace the <CONNECTION-STRING-VALUE> placeholder
+ client = new ServiceBusClient("<CONNECTION-STRING-VALUE>");
+
+ // create a processor that we can use to process the messages
+ // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
+ processor = client.CreateProcessor("<TOPIC-NAME>", "<SUBSCRIPTION-NAME>", new ServiceBusProcessorOptions());
+
+ try
+ {
+ // add handler to process messages
+ processor.ProcessMessageAsync += MessageHandler;
+
+ // add handler to process any errors
+ processor.ProcessErrorAsync += ErrorHandler;
+
+ // start processing
+ await processor.StartProcessingAsync();
+
+ Console.WriteLine("Wait for a minute and then press any key to end the processing");
+ Console.ReadKey();
+
+ // stop processing
+ Console.WriteLine("\nStopping the receiver...");
+ await processor.StopProcessingAsync();
+ Console.WriteLine("Stopped receiving messages");
+ }
+ finally
+ {
+ // Calling DisposeAsync on client types is required to ensure that network
+ // resources and other unmanaged objects are properly cleaned up.
+ await processor.DisposeAsync();
+ await client.DisposeAsync();
+ }
    ```
+
+ 1. Here's what your `Program.cs` should look like:
+ ## [Passwordless (Recommended)](#tab/passwordless)
+
    ```csharp
    using System;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;
+ using Azure.Identity;
- namespace SubscriptionReceiver
+ // the client that owns the connection and can be used to create senders and receivers
+ ServiceBusClient client;
+
+ // the processor that reads and processes messages from the subscription
+ ServiceBusProcessor processor;
+
+ // handle received messages
+ async Task MessageHandler(ProcessMessageEventArgs args)
{
- class Program
- {
- // connection string to your Service Bus namespace
- static string connectionString = "<NAMESPACE CONNECTION STRING>";
-
- // name of the Service Bus topic
- static string topicName = "<SERVICE BUS TOPIC NAME>";
-
- // name of the subscription to the topic
- static string subscriptionName = "<SERVICE BUS - TOPIC SUBSCRIPTION NAME>";
-
- // the client that owns the connection and can be used to create senders and receivers
- static ServiceBusClient client;
-
- // the processor that reads and processes messages from the subscription
- static ServiceBusProcessor processor;
-
- // handle received messages
- static async Task MessageHandler(ProcessMessageEventArgs args)
- {
- string body = args.Message.Body.ToString();
- Console.WriteLine($"Received: {body} from subscription: {subscriptionName}");
-
- // complete the message. messages is deleted from the subscription.
- await args.CompleteMessageAsync(args.Message);
- }
-
- // handle any errors when receiving messages
- static Task ErrorHandler(ProcessErrorEventArgs args)
- {
- Console.WriteLine(args.Exception.ToString());
- return Task.CompletedTask;
- }
-
- static async Task Main()
- {
- // The Service Bus client types are safe to cache and use as a singleton for the lifetime
- // of the application, which is best practice when messages are being published or read
- // regularly.
- //
- // Create the clients that we'll use for sending and processing messages.
- client = new ServiceBusClient(connectionString);
-
- // create a processor that we can use to process the messages
- processor = client.CreateProcessor(topicName, subscriptionName, new ServiceBusProcessorOptions());
-
- try
- {
- // add handler to process messages
- processor.ProcessMessageAsync += MessageHandler;
-
- // add handler to process any errors
- processor.ProcessErrorAsync += ErrorHandler;
+ string body = args.Message.Body.ToString();
+ Console.WriteLine($"Received: {body} from subscription.");
+
+ // complete the message. message is deleted from the subscription.
+ await args.CompleteMessageAsync(args.Message);
+ }
+
+ // handle any errors when receiving messages
+ Task ErrorHandler(ProcessErrorEventArgs args)
+ {
+ Console.WriteLine(args.Exception.ToString());
+ return Task.CompletedTask;
+ }
- // start processing
- await processor.StartProcessingAsync();
+ // The Service Bus client types are safe to cache and use as a singleton for the lifetime
+ // of the application, which is best practice when messages are being published or read
+ // regularly.
+ //
+ // Create the clients that we'll use for sending and processing messages.
+ // TODO: Replace the <NAMESPACE-NAME> placeholder
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential());
+
+ // create a processor that we can use to process the messages
+ // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
+ processor = client.CreateProcessor("<TOPIC-NAME>", "<SUBSCRIPTION-NAME>", new ServiceBusProcessorOptions());
+
+ try
+ {
+ // add handler to process messages
+ processor.ProcessMessageAsync += MessageHandler;
+
+ // add handler to process any errors
+ processor.ProcessErrorAsync += ErrorHandler;
+
+ // start processing
+ await processor.StartProcessingAsync();
+
+ Console.WriteLine("Wait for a minute and then press any key to end the processing");
+ Console.ReadKey();
+
+ // stop processing
+ Console.WriteLine("\nStopping the receiver...");
+ await processor.StopProcessingAsync();
+ Console.WriteLine("Stopped receiving messages");
+ }
+ finally
+ {
+ // Calling DisposeAsync on client types is required to ensure that network
+ // resources and other unmanaged objects are properly cleaned up.
+ await processor.DisposeAsync();
+ await client.DisposeAsync();
+ }
+ ```
+
+ ## [Connection String](#tab/connection-string)
+
+ ```csharp
+ using System;
+ using System.Threading.Tasks;
+ using Azure.Messaging.ServiceBus;
- Console.WriteLine("Wait for a minute and then press any key to end the processing");
- Console.ReadKey();
+ // the client that owns the connection and can be used to create senders and receivers
+ ServiceBusClient client;
+
+ // the processor that reads and processes messages from the subscription
+ ServiceBusProcessor processor;
+
+ // handle received messages
+ async Task MessageHandler(ProcessMessageEventArgs args)
+ {
+ string body = args.Message.Body.ToString();
+ Console.WriteLine($"Received: {body} from subscription.");
+
+ // complete the message. message is deleted from the subscription.
+ await args.CompleteMessageAsync(args.Message);
+ }
+
+ // handle any errors when receiving messages
+ Task ErrorHandler(ProcessErrorEventArgs args)
+ {
+ Console.WriteLine(args.Exception.ToString());
+ return Task.CompletedTask;
+ }
- // stop processing
- Console.WriteLine("\nStopping the receiver...");
- await processor.StopProcessingAsync();
- Console.WriteLine("Stopped receiving messages");
- }
- finally
- {
- // Calling DisposeAsync on client types is required to ensure that network
- // resources and other unmanaged objects are properly cleaned up.
- await processor.DisposeAsync();
- await client.DisposeAsync();
- }
- }
- }
+ // The Service Bus client types are safe to cache and use as a singleton for the lifetime
+ // of the application, which is best practice when messages are being published or read
+ // regularly.
+ //
+ // Create the clients that we'll use for sending and processing messages.
+ // TODO: Replace the <CONNECTION-STRING-VALUE> placeholder
+ client = new ServiceBusClient("<CONNECTION-STRING-VALUE>");
+
+ // create a processor that we can use to process the messages
+ // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
+ processor = client.CreateProcessor("<TOPIC-NAME>", "<SUBSCRIPTION-NAME>", new ServiceBusProcessorOptions());
+
+ try
+ {
+ // add handler to process messages
+ processor.ProcessMessageAsync += MessageHandler;
+
+ // add handler to process any errors
+ processor.ProcessErrorAsync += ErrorHandler;
+
+ // start processing
+ await processor.StartProcessingAsync();
+
+ Console.WriteLine("Wait for a minute and then press any key to end the processing");
+ Console.ReadKey();
+
+ // stop processing
+ Console.WriteLine("\nStopping the receiver...");
+ await processor.StopProcessingAsync();
+ Console.WriteLine("Stopped receiving messages");
+ }
+ finally
+ {
+ // Calling DisposeAsync on client types is required to ensure that network
+ // resources and other unmanaged objects are properly cleaned up.
+ await processor.DisposeAsync();
+ await client.DisposeAsync();
    }
    ```
-1. Replace the placeholders with correct values:
- - `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace
- - `<TOPIC NAME>` with the name of your Service Bus topic
- - `<SERVICE BUS - TOPIC SUBSCRIPTION NAME>` with the name of the subscription to the topic.
+
+
+ 1. Build the project, and ensure that there are no errors.
+ 1. Run the receiver application. You should see the received messages. Press any key to stop the receiver and the application.
service-fabric Service Fabric Reliable Services Communication Remoting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-communication-remoting.md
Follow these steps:
To upgrade from V1 to V2 (interface compatible, known as V2_1), two-step upgrades are required. Follow the steps in this sequence. > [!NOTE]
-> When upgrading from V1 to V2, ensure the `Remoting` namespace is updated to use V2. Example: 'Microsoft.ServiceFabric.Services.Remoting.V2.FabricTransport.Client`
->
->
+> When upgrading from V1 to V2, ensure the `Remoting` namespace is updated to use V2. Example: `Microsoft.ServiceFabric.Services.Remoting.V2.FabricTransport.Client`
1. Upgrade the V1 service to V2_1 service by using the following attribute. This change makes sure that the service is listening on the V1 and the V2_1 listener.
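For illustration, the upgrade attribute referenced in this step typically takes the following form. This is a sketch based on the Service Fabric remoting V2 packages; verify the exact namespace and enum values against the version of Microsoft.ServiceFabric.Services.Remoting that you use:

```csharp
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.FabricTransport;

// The service listens on both the V1 and V2_1 listeners during the upgrade,
// so existing V1 clients keep working while callers move to V2_1.
[assembly: FabricTransportServiceRemotingProvider(
    RemotingListenerVersion = RemotingListenerVersion.V2_1 | RemotingListenerVersion.V1,
    RemotingClientVersion = RemotingClientVersion.V2_1)]
```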
spring-apps Concept Outbound Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-outbound-type.md
Title: Customize Azure Spring Cloud egress with a user-defined route
-description: Learn how to customize Azure Spring Cloud egress with a user-defined route.
+ Title: Customize Azure Spring Apps egress with a user-defined route
+description: Learn how to customize Azure Spring Apps egress with a user-defined route.
Previously updated : 09/25/2021- Last updated : 10/20/2022+
-# Customize Azure Spring Cloud egress with a user-defined route
+# Customize Azure Spring Apps egress with a user-defined route
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Java ✔️ C#
spring-apps Connect Managed Identity To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/connect-managed-identity-to-azure-sql.md
spring.datasource.url=jdbc:sqlserver://$AZ_DATABASE_NAME.database.windows.net:14
Configure your app deployed to Azure Spring Apps to connect to an SQL Database with a system-assigned managed identity using the `az spring connection create` command, as shown in the following example.
+> [!NOTE]
+> This command requires you to run [Azure CLI](/cli/azure/install-azure-cli) version 2.41.0 or higher.
+ ```azurecli-interactive az spring connection create sql \ --resource-group $SPRING_APP_RESOURCE_GROUP \
spring-apps Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/faq.md
Yes. For more information, see [Monitor app lifecycle events using Azure Activit
::: zone pivot="programming-language-java" ### What are the best practices for migrating existing Spring applications to Azure Spring Apps?
-For more information, see [Migrate Spring applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-cloud).
+For more information, see [Migrate Spring applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-apps).
::: zone-end ::: zone pivot="programming-language-csharp"
spring-apps How To Access Data Plane Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-access-data-plane-azure-ad-rbac.md
After the role is assigned, the assignee can access the Spring Cloud Config Serv
>[!NOTE] > If you're using Azure China, replace `*.azuremicroservices.io` with `*.microservices.azure.cn`. For more information, see the section [Check endpoints in Azure](/azure/china/resources-developer-guide#check-endpoints-in-azure) in the [Azure China developer guide](/azure/china/resources-developer-guide).
-1. Access the composed endpoint with the access token. Put the access token in a header to provide authorization: `--header 'Authorization: Bearer {TOKEN_FROM_PREVIOUS_STEP}`.
+1. Access the composed endpoint with the access token. Put the access token in a header to provide authorization: `--header 'Authorization: Bearer {TOKEN_FROM_PREVIOUS_STEP}'`.
For example:
- a. Access an endpoint like *'https://SERVICE_NAME.svc.azuremicroservices.io/config/actuator/health'* to see the health status of Config Server.
+ a. Access an endpoint like `https://SERVICE_NAME.svc.azuremicroservices.io/config/actuator/health` to see the health status of Config Server.
- b. Access an endpoint like *'https://SERVICE_NAME.svc.azuremicroservices.io/eureka/eureka/apps'* to see the registered apps in Spring Cloud Service Registry (Eureka here).
+ b. Access an endpoint like `https://SERVICE_NAME.svc.azuremicroservices.io/eureka/eureka/apps` to see the registered apps in Spring Cloud Service Registry (Eureka here).
- If the response is *401 Unauthorized*, check to see if the role is successfully assigned. It will take several minutes for the role to take effect or to verify that the access token has not expired.
+ If the response is `401 Unauthorized`, check to see if the role is successfully assigned. It can take several minutes for the role assignment to take effect. Also verify that the access token hasn't expired.
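For illustration, the same health check can also be made from code instead of a command-line tool. The following C# sketch is hypothetical; the service name and token are placeholders you replace with your own values:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

// Hypothetical example: call the Config Server health endpoint with a bearer token.
using var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<TOKEN_FROM_PREVIOUS_STEP>");

HttpResponseMessage response = await httpClient.GetAsync(
    "https://<SERVICE_NAME>.svc.azuremicroservices.io/config/actuator/health");

Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
```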
For more information about actuator endpoint, see [Production ready endpoints](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready-endpoints).
spring-apps How To Bind Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-mysql.md
With Azure Spring Apps, you can bind select Azure services to your applications
* An application deployed to Azure Spring Apps. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md). * An Azure Database for PostgreSQL Flexible Server instance.
-* [Azure CLI](/cli/azure/install-azure-cli).
+* [Azure CLI](/cli/azure/install-azure-cli) version 2.41.0 or higher.
## Prepare your Java project
With Azure Spring Apps, you can bind select Azure services to your applications
Configure your Spring app to connect to a MySQL Database Flexible Server with a system-assigned managed identity by using the `az spring connection create` command, as shown in the following example.
-> [!NOTE]
-> This command requires you to run the latest [edge build of Azure CLI](https://github.com/Azure/azure-cli/blob/dev/doc/try_new_features_before_release.md). [Download and install the edge builds](https://github.com/Azure/azure-cli#edge-builds) for your platform.
- ```azurecli az spring connection create mysql-flexible \ --resource-group $AZURE_SPRING_APPS_RESOURCE_GROUP \
spring-apps How To Bind Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-postgres.md
Last updated 09/26/2022-+ # Bind an Azure Database for PostgreSQL to your application in Azure Spring Apps
With Azure Spring Apps, you can bind select Azure services to your applications
* An application deployed to Azure Spring Apps. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md). * An Azure Database for PostgreSQL Flexible Server instance.
-* [Azure CLI](/cli/azure/install-azure-cli).
+* [Azure CLI](/cli/azure/install-azure-cli) version 2.41.0 or higher.
## Prepare your Java project
Use the following steps to bind your app.
Configure Azure Spring Apps to connect to the PostgreSQL Database Single Server with a system-assigned managed identity using the `az spring connection create` command.
-> [!NOTE]
-> This command requires you to run the latest [edge build of Azure CLI](https://github.com/Azure/azure-cli/blob/dev/doc/try_new_features_before_release.md). [Download and install the edge builds](https://github.com/Azure/azure-cli#edge-builds) for your platform.
- ```azurecli az spring connection create postgres \ --resource-group $SPRING_APP_RESOURCE_GROUP \
spring-apps How To Built In Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-built-in-persistent-storage.md
Last updated 10/28/2021 -+ # Use built-in persistent storage in Azure Spring Apps
spring-apps How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-config-server.md
eureka.instance.preferIpAddress
eureka.instance.instance-id server.port spring.cloud.config.tls.keystore
+spring.config.import
spring.application.name spring.jmx.enabled
+management.endpoints.jmx.exposure.include
``` > [!CAUTION]
spring-apps How To Custom Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-custom-persistent-storage.md
Title: How to enable your own persistent storage in Azure Spring Apps | Microsoft Docs
-description: How to bring your own storage as persistent storages in Azure Spring Apps
+description: Learn how to bring your own storage as persistent storages in Azure Spring Apps
This article shows you how to enable your own persistent storage in Azure Spring Apps.
-When you use the built-in persistent storage in Azure Spring Apps, artifacts generated by your application are uploaded into Azure Storage Accounts. Microsoft controls the encryption-at-rest and lifetime management policies for those artifacts.
+When you use the built-in persistent storage in Azure Spring Apps, artifacts generated by your application are uploaded into Azure Storage Accounts. Microsoft controls the encryption-at-rest and lifetime management policies for those artifacts.
-With Bring Your Own Storage, these artifacts are uploaded into a storage account that you control. That means you control the encryption-at-rest policy, the lifetime management policy and network access. You will, however, be responsible for the costs associated with that storage account.
+When you use your own persistent storage, artifacts generated by your application are uploaded into a storage account that you control. You control the encryption-at-rest policy, the lifetime management policy, and network access. You're responsible for the costs associated with that storage account.
## Prerequisites
-* An existing Azure Storage Account and a pre-created Azure File Share. If you need to create a storage account and file share in Azure, see [Create an Azure file share](../storage/files/storage-how-to-create-file-share.md).
-* The [Azure Spring Apps extension](/cli/azure/azure-cli-extensions-overview) for the Azure CLI
+- An existing Azure Storage Account and a pre-created Azure File Share. If you need to create a storage account and file share in Azure, see [Create an SMB Azure file share](../storage/files/storage-how-to-create-file-share.md).
+- [Azure CLI](/cli/azure/install-azure-cli), version 2.0.67 or higher.
> [!IMPORTANT]
-> If you deployed your Azure Spring Apps in your own virtual network and you want the storage account to be accessed only from the virtual network, consult the following guidance:
-> - [Use private endpoints for Azure Storage](../storage/common/storage-private-endpoints.md)
-> - [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md), especially the [Grant access from a virtual network using service endpoint](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) section
+> If you deployed your Azure Spring Apps in your own virtual network and you want the storage account to be accessed only from the virtual network, see [Use private endpoints for Azure Storage](../storage/common/storage-private-endpoints.md) and the [Grant access from a virtual network](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) section of [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
## Mount your own extra persistent storage to applications > [!NOTE]
-> Updating persistent storage will result in the restart of applications.
+> Updating persistent storage will restart your applications.
### [Portal](#tab/Azure-portal) Use the following steps to bind an Azure Storage account as a storage resource in your Azure Spring Apps and create an app with your own persistent storage.
-1. Go to the service **Overview** page, then select **Storage** in the left-hand navigation pane.
+1. Go to the service **Overview** page, and then select **Storage** in the left-hand navigation pane.
-1. On the **Storage** page, select **Add storage**, add the values in the following table, and then select **Apply**.
+1. On the **Storage** page, select **Add storage**.
+
+ :::image type="content" source="media/how-to-custom-persistent-storage/add-storage.png" alt-text="Screenshot of Azure portal showing the Storage page." lightbox="media/how-to-custom-persistent-storage/add-storage.png":::
+
+1. Enter the following information on the **Add storage** page, and then select **Apply**.
| Setting | Value | |--|--|
Use the following steps to bind an Azure Storage account as a storage resource i
| Account name | The name of the storage account. | | Account key | The storage account key. |
- :::image type="content" source="media/how-to-custom-persistent-storage/add-storage-resource.png" alt-text="Screenshot of Azure portal showing the Storage page and the 'Add storage' pane." lightbox="media/how-to-custom-persistent-storage/add-storage-resource.png":::
+ :::image type="content" source="media/how-to-custom-persistent-storage/add-storage-resource.png" alt-text="Screenshot of Azure portal showing the Add storage page." lightbox="media/how-to-custom-persistent-storage/add-storage-resource.png":::
-1. Go to the **Apps** page, then select an application to mount the persistent storage.
+1. Go to the **Apps** page, and then select an application to mount the persistent storage.
:::image type="content" source="media/how-to-custom-persistent-storage/select-app-mount-persistent-storage.png" alt-text="Screenshot of Azure portal Apps page." lightbox="media/how-to-custom-persistent-storage/select-app-mount-persistent-storage.png":::
-1. Select **Configuration**, then select **Persistent Storage**.
+1. Select **Configuration**, and then select **Persistent Storage**.
-1. Select **Add persistent storage**, add the values in the following table, and then select **Apply**.
+1. Select **Add persistent storage**. Add the values in the following table, and then select **Apply**.
| Setting | Value | |-|-|
Use the following steps to bind an Azure Storage account as a storage resource i
| Mount options | Optional | | Read only | Optional |
- :::image type="content" source="media/how-to-custom-persistent-storage/add-persistent-storage.png" alt-text="Screenshot of Azure portal 'Add persistent storage' form.":::
+ :::image type="content" source="media/how-to-custom-persistent-storage/add-persistent-storage.png" alt-text="Screenshot of Azure portal showing the Add persistent storage page." lightbox="media/how-to-custom-persistent-storage/add-persistent-storage.png":::
-1. Select **Save** to apply all the configuration changes.
+1. Select **Save** to apply the configuration changes.
- :::image type="content" source="media/how-to-custom-persistent-storage/save-persistent-storage-changes.png" alt-text="Screenshot of Azure portal Persistent Storage section of the Configuration page." lightbox="media/how-to-custom-persistent-storage/save-persistent-storage-changes.png":::
+ :::image type="content" source="media/how-to-custom-persistent-storage/save-persistent-storage-changes.png" alt-text="Screenshot of Azure portal showing the Persistent Storage tab of the Configuration page." lightbox="media/how-to-custom-persistent-storage/save-persistent-storage-changes.png":::
### [CLI](#tab/Azure-CLI)
-You can enable your own storage with the Azure CLI by using the following steps.
+Use the following steps to enable your own storage with the Azure CLI.
-1. Use the following command to bind your Azure Storage account as a storage resource in your Azure Spring Apps instance:
+1. Use the following command to bind your Azure Storage account as a storage resource in your Azure Spring Apps instance.
```azurecli az spring storage add \
You can enable your own storage with the Azure CLI by using the following steps.
--persistent-storage <path-to-JSON-file> ```
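   For reference, a fuller form of the binding command might look like the following sketch. The parameter names (`--storage-type`, `--account-name`, `--account-key`) are assumptions based on the `az spring storage` command group; verify them with `az spring storage add --help`.

   ```azurecli
   # Bind an Azure Storage account as a storage resource (parameter names assumed; verify with --help)
   az spring storage add \
       --resource-group <resource-group-name> \
       --service <Azure-Spring-Apps-instance-name> \
       --name <storage-resource-name> \
       --storage-type StorageAccount \
       --account-name <storage-account-name> \
       --account-key <storage-account-key>
   ```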
- Here's a sample of the JSON file that is passed to the `--persistent-storage` parameter in the create command:
+ The following example shows a JSON file that is passed to the `--persistent-storage` parameter.
```json {
You can enable your own storage with the Azure CLI by using the following steps.
} ```
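   As an illustration only, a file passed to the `--persistent-storage` parameter might resemble the following shape. The property names (`customPersistentDisks`, `mountPath`, `mountOptions`, `readOnly`) are assumptions and should be checked against the current Azure Spring Apps documentation.

   ```json
   {
       "customPersistentDisks": [
           {
               "storageName": "<storage-resource-name>",
               "customPersistentDiskProperties": {
                   "type": "AzureFileVolume",
                   "shareName": "<azure-file-share-name>",
                   "mountPath": "/your/mount/path",
                   "mountOptions": ["uid=0", "gid=0"],
                   "readOnly": false
               }
           }
       ]
   }
   ```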
-1. Optionally, add extra persistent storage to an existing app using the following command:
+1. Optionally, use the following command to add extra persistent storage to an existing app.
```azurecli az spring app append-persistent-storage \
You can enable your own storage with the Azure CLI by using the following steps.
--storage-name <storage-resource-name> ```
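   A fuller form of the append command might look like the following sketch; the parameter names other than `--storage-name` are assumptions, so confirm them with `az spring app append-persistent-storage --help`.

   ```azurecli
   # Attach extra persistent storage to an existing app (parameter names assumed; verify with --help)
   az spring app append-persistent-storage \
       --resource-group <resource-group-name> \
       --service <Azure-Spring-Apps-instance-name> \
       --app <app-name> \
       --persistent-storage-type AzureFileVolume \
       --share-name <azure-file-share-name> \
       --mount-path <mount-path> \
       --storage-name <storage-resource-name>
   ```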
-1. Optionally, list all existing persistent storage of a specific storage resource using the following command:
+1. Optionally, use the following command to list the existing persistent storage of a specific storage resource.
```azurecli az spring storage list-persistent-storage \
You can enable your own storage with the Azure CLI by using the following steps.
-## Use best practices
+## Best practices
-Use the following best practices when adding your own persistent storage to Azure Spring Apps.
+Use the following best practices when you add your own persistent storage to Azure Spring Apps.
-* To avoid potential latency issues, place the Azure Spring Apps instance and the Azure Storage Account in the same Azure region.
+- To avoid potential latency issues, place the Azure Spring Apps instance and the Azure Storage Account in the same Azure region.
-* In the Azure Storage Account, avoid regenerating the account key that's being used. The storage account contains two different keys. Use a step-by-step approach to ensure that the persistent storage remains available to the applications during key regeneration.
+- In the Azure Storage Account, avoid regenerating the account key that you're using. The storage account contains two different keys. Use a step-by-step approach to ensure that the persistent storage remains available to applications during key regeneration.
- For example, assuming that you used key1 to bind a storage account to Azure Spring Apps, you would use the following steps:
+ For example, use the following steps to ensure that the persistent storage remains available if you used *key1* to bind a storage account to Azure Spring Apps.
- 1. Regenerate key2.
- 1. Update the account key of the storage resource to use the regenerated key2.
- 1. Restart the applications that mount the persistent storage from this storage resource. (You can use `az spring storage list-persistent-storage` to list all related applications.)
- 1. Regenerate key1.
+ 1. Regenerate *key2*.
+ 1. Update the account key of the storage resource to use the regenerated *key2*.
+ 1. Restart the applications that mount the persistent storage from this storage resource. Use the `az spring storage list-persistent-storage` command to list all related applications.
+ 1. Regenerate *key1*.
-* If you delete an Azure Storage Account or Azure File Share, remove the corresponding storage resource or persistent storage in the applications to avoid possible errors.
+- If you delete an Azure Storage account or Azure file share, avoid possible errors by removing the corresponding storage resource or persistent storage in the applications.
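The key-regeneration steps above can be scripted roughly as shown in the following sketch. The commands and parameter names are assumptions (in particular, `az storage account keys renew` is assumed to use `secondary`/`primary` to refer to *key2*/*key1*, and `az spring storage update` is assumed to accept the same parameters as `az spring storage add`); verify each with `--help` before relying on it.

```azurecli
# 1. Regenerate key2 (the secondary key) on the storage account.
az storage account keys renew \
    --resource-group <resource-group-name> \
    --account-name <storage-account-name> \
    --key secondary

# 2. Point the storage resource at the regenerated key2.
az spring storage update \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <storage-resource-name> \
    --storage-type StorageAccount \
    --account-name <storage-account-name> \
    --account-key <new-key2-value>

# 3. Restart each app that mounts persistent storage from this resource.
az spring app restart \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name>

# 4. Finally, regenerate key1 (the primary key).
az storage account keys renew \
    --resource-group <resource-group-name> \
    --account-name <storage-account-name> \
    --key primary
```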
-## FAQs
+## FAQ
-The following are frequently asked questions (FAQ) about using your own persistent storage with Azure Spring Apps.
+This section addresses frequently asked questions about using your own persistent storage with Azure Spring Apps.
-* If I have built-in persistent storage enabled, and then I enabled my own storage as extra persistent storage, will my data be migrated into my Storage Account?
+- If I have built-in persistent storage enabled, and then I enabled my own storage as extra persistent storage, will my data be migrated into my Azure Storage account?
    *No. However, we plan to provide documentation soon to help you do the migration yourself.*
-* What are the reserved mount paths?
+- What are the reserved mount paths?
- *These mount paths are reserved by the Azure Spring Apps service:*
+ *The following mount paths are reserved by the Azure Spring Apps service:*
- * */tmp*
- * */persistent*
- * */secrets*
- * */app-insights/agents*
- * */etc/azure-spring-cloud/certs*
- * */app-insights/agents/settings*
- * */app-lifecycle/settings*
+ - */tmp*
+ - */persistent*
+ - */secrets*
+ - */app-insights/agents*
+ - */etc/azure-spring-cloud/certs*
+ - */app-insights/agents/settings*
+ - */app-lifecycle/settings*
-* What are the available mount options?
+- What are the available mount options?
- *We currently support the following mount options:*
+ *We currently support the following mount options:*
- * `uid`
- * `gid`
- * `file_mode`
- * `dir_mode`
+ - `uid`
+ - `gid`
+ - `file_mode`
+ - `dir_mode`
    *The `mountOptions` property is optional. The default values for the above mount options are `["uid=0", "gid=0", "file_mode=0777", "dir_mode=0777"]`.*
-* I'm using the service endpoint to configure the storage account to allow access only from my own virtual network. Why did I receive *Permission Denied* while trying to mount custom persistent storage to my applications?
+- I'm using the service endpoint to configure the storage account to allow access only from my own virtual network. Why did I receive a *Permission Denied* error when I tried to mount custom persistent storage to my applications?
- *A service endpoint provides network access on a subnet level only. Be sure you've added both subnets used by the Azure Spring Apps instance to the scope of the service endpoint.*
+ *A service endpoint provides network access on a subnet level only. Make sure you've added both subnets used by the Azure Spring Apps instance to the scope of the service endpoint.*
## Next steps
-* [How to use Logback to write logs to custom persistent storage](how-to-write-log-to-custom-persistent-storage.md).
-* [Scale an application in Azure Spring Apps](how-to-scale-manual.md).
+- [How to use Logback to write logs to custom persistent storage](how-to-write-log-to-custom-persistent-storage.md).
+- [Scale an application in Azure Spring Apps](how-to-scale-manual.md).
spring-apps How To Enterprise Marketplace Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-marketplace-offer.md
You can obtain and pay for a license to Tanzu components through an [Azure Marke
To purchase in Azure Marketplace, you must meet the following prerequisites: - Your Azure subscription must be registered to the `Microsoft.SaaS` resource provider. For more information, see the [Register resource provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) section of [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).-- Your Azure subscription must have an associated payment method. Azure credits or free MSDN subscriptions aren't supported. For more information, see the [Purchasing requirements](/marketplace/azure-marketplace-overview.md#purchasing-requirements) section of [What is Azure Marketplace?](/marketplace/azure-marketplace-overview.md)
+- Your Azure subscription must have an associated payment method. Azure credits or free MSDN subscriptions aren't supported. For more information, see the [Purchasing requirements](/marketplace/azure-marketplace-overview#purchasing-requirements) section of [What is Azure Marketplace?](/marketplace/azure-marketplace-overview)
- Your Azure subscription must belong to a billing account in a supported geographic location. For more information, see the [Supported geographic locations](#supported-geographic-locations) section of this article. - Your organization must allow Azure Marketplace purchases. For more information, see the [Enabling Azure Marketplace purchases](../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases) section of [Azure Marketplace](../cost-management-billing/manage/ea-azure-marketplace.md).-- Your organization must allow acquisition of any Azure Marketplace software application described in the [Purchase policy management](/marketplace/azure-purchasing-invoicing.md#purchase-policy-management) section of [Azure Marketplace purchasing](/marketplace/azure-purchasing-invoicing.md).
+- Your organization must allow acquisition of any Azure Marketplace software application described in the [Purchase policy management](/marketplace/azure-purchasing-invoicing#purchase-policy-management) section of [Azure Marketplace purchasing](/marketplace/azure-purchasing-invoicing).
- You must accept the legal terms and privacy statements during enterprise tier provisioning on Azure portal, or use the following CLI command to do so in advance. ```azurecli
spring-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/overview.md
The following quickstarts will help you get started:
The following documents will help you migrate existing Spring Boot apps to Azure Spring Apps:
-* [Migrate Spring Boot applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-cloud)
-* [Migrate Spring Cloud applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-cloud?pivots=sc-standard-tier)
+* [Migrate Spring Boot applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-apps)
+* [Migrate Spring Cloud applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-apps?pivots=sc-standard-tier)
The following quickstarts apply to Basic/Standard tier only. For Enterprise tier quickstarts, see the next section.
spring-apps Secure Communications End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/secure-communications-end-to-end.md
Azure Spring Apps is jointly built, operated, and supported by Microsoft and VMw
- [Deploy Spring microservices to Azure](/training/modules/azure-spring-cloud-workshop/) - [Azure Key Vault Certificates Spring Cloud Azure Starter (GitHub.com)](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/spring/spring-cloud-azure-starter-keyvault-certificates/pom.xml) - [Azure Spring Apps reference architecture](reference-architecture.md)-- Migrate your [Spring Boot](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-cloud), [Spring Cloud](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-cloud), and [Tomcat](/azure/developer/java/migration/migrate-tomcat-to-azure-spring-cloud) applications to Azure Spring Apps
+- Migrate your [Spring Boot](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-apps), [Spring Cloud](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-apps), and [Tomcat](/azure/developer/java/migration/migrate-tomcat-to-azure-spring-apps) applications to Azure Spring Apps
storage Blobfuse2 Commands Completion Bash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-bash.md
Previously updated : 08/02/2022 Last updated : 10/17/2022
Use the `blobfuse2 completion bash` command to generate the autocompletion script for BlobFuse2 for the bash shell.
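For example, a typical way to load the generated script into the current session is to source the command's output; this is the standard pattern for generated completion scripts, so treat it as a sketch and confirm the exact steps in the article's examples.

```bash
# Load BlobFuse2 completions for the current bash session
source <(blobfuse2 completion bash)
```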
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
## Syntax
storage Blobfuse2 Commands Completion Fish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-fish.md
Title: How to use the completion fish command to generate the autocompletion script for BlobFuse2 (preview) | Microsoft Docs
+ Title: How to use the 'blobfuse2 completion fish' command to generate the autocompletion script for BlobFuse2 (preview) | Microsoft Docs
-description: Learn how to use the completion fish command to generate the autocompletion script for BlobFuse2 (preview).
+description: Learn how to use the 'blobfuse2 completion fish' command to generate the autocompletion script for BlobFuse2 (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022
Use the `blobfuse2 completion fish` command to generate the autocompletion script for BlobFuse2 for the fish shell.
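As a hedged example, the generated script is typically piped into `source` in fish; confirm the exact steps in the article's examples.

```fish
# Load BlobFuse2 completions for the current fish session
blobfuse2 completion fish | source
```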
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
## Syntax
storage Blobfuse2 Commands Completion Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-powershell.md
Title: How to use the "completion powershell" command to generate the autocompletion script for BlobFuse2 (preview) | Microsoft Docs
+ Title: How to use the 'blobfuse2 completion powershell' command to generate the autocompletion script for BlobFuse2 (preview) | Microsoft Docs
-description: Learn how to use the "completion powershell" command to generate the autocompletion script for BlobFuse2 (preview).
+description: Learn how to use the 'blobfuse2 completion powershell' command to generate the autocompletion script for BlobFuse2 (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022
-# BlobFuse2 completion powershell command (preview)
+# blobfuse2 completion powershell (preview)
Use the `blobfuse2 completion powershell` command to generate the autocompletion script for BlobFuse2 for PowerShell.
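A typical (assumed) way to load the generated script into the current PowerShell session is shown below; confirm the exact steps in the article's examples.

```powershell
# Load BlobFuse2 completions for the current PowerShell session
blobfuse2 completion powershell | Out-String | Invoke-Expression
```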
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
## Syntax
storage Blobfuse2 Commands Completion Zsh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-zsh.md
Title: How to use the completion zsh command to generate the autocompletion script for BlobFuse2 (preview) | Microsoft Docs
+ Title: How to use the 'blobfuse2 completion zsh' command to generate the autocompletion script for BlobFuse2 (preview) | Microsoft Docs
-description: Learn how to use the completion zsh command to generate the autocompletion script for BlobFuse2 (preview).
+description: Learn how to use the 'blobfuse2 completion zsh' command to generate the autocompletion script for BlobFuse2 (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022
Use the `blobfuse2 completion zsh` command to generate the autocompletion script for BlobFuse2 for the zsh shell.
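A typical (assumed) way to load the generated script into the current zsh session is shown below; confirm the exact steps in the article's examples.

```zsh
# Load BlobFuse2 completions for the current zsh session (requires compinit to be enabled)
source <(blobfuse2 completion zsh)
```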
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
## Syntax
storage Blobfuse2 Commands Completion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion.md
Title: How to use the completion command to generate the autocompletion script for BlobFuse2 (preview) | Microsoft Docs
+ Title: How to use the 'blobfuse2 completion' command to generate the autocompletion script for BlobFuse2 (preview) | Microsoft Docs
-description: Learn how to use the completion command to generate the autocompletion script for BlobFuse2 (preview).
+description: Learn how to use the 'blobfuse2 completion' command to generate the autocompletion script for BlobFuse2 (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022
Use the `blobfuse2 completion` command to generate the autocompletion script for BlobFuse2 for a specified shell.
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
## Syntax
storage Blobfuse2 Commands Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-help.md
Title: How to use BlobFuse2 help to get help info for the BlobFuse2 command and subcommands (preview) | Microsoft Docs
+ Title: How to use 'blobfuse2 help' to get help info for the BlobFuse2 command and subcommands (preview) | Microsoft Docs
-description: Learn how to use BlobFuse2 help to get help info for the BlobFuse2 command and subcommands (preview).
+description: Learn how to use 'blobfuse2 help' to get help info for the BlobFuse2 command and subcommands (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022
Use the `blobfuse2 help` command to get help info for the BlobFuse2 command and subcommands.
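For example, the command can be run on its own or followed by a subcommand name; the invocations below are illustrative.

```bash
# Show general help, then help for the mount subcommand
blobfuse2 help
blobfuse2 help mount
```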
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
## Syntax
storage Blobfuse2 Commands Mount All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount-all.md
Title: How to use the BlobFuse2 mount all command to mount all blob containers in a storage account as a Linux file system (preview) | Microsoft Docs
+ Title: How to use the 'blobfuse2 mount all' command to mount all blob containers in a storage account as a Linux file system (preview) | Microsoft Docs
-description: Learn how to use the BlobFuse2 mount all command to mount all blob containers in a storage account as a Linux file system (preview).
+description: Learn how to use the 'blobfuse2 mount all' command to mount all blob containers in a storage account as a Linux file system (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022 # How to use the BlobFuse2 mount all command to mount all blob containers in a storage account as a Linux file system (preview)
-Use the `BlobFuse2 mount all` command to mount all blob containers in a storage account as a Linux file system. Each container will be mounted to a unique subdirectory under the path specified. The subdirectory names will correspond to the container names.
-
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+Use the `blobfuse2 mount all` command to mount all blob containers in a storage account as a Linux file system. Each container will be mounted to a unique subdirectory under the path specified. The subdirectory names will correspond to the container names.
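As a hedged example, assuming a BlobFuse2 configuration file named `config.yaml` that identifies the storage account, the command might be run as follows.

```bash
# Mount every container in the storage account under ~/bf2_mounts (one subdirectory per container)
blobfuse2 mount all ~/bf2_mounts --config-file=./config.yaml
```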
+ ## Syntax
storage Blobfuse2 Commands Mount List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount-list.md
Title: How to use the BlobFuse2 mount list command to display all BlobFuse2 mount points (preview) | Microsoft Docs
+ Title: How to use the 'blobfuse2 mount list' command to display all BlobFuse2 mount points (preview) | Microsoft Docs
-description: Learn how to use the BlobFuse2 mount list command to display all BlobFuse2 mount points. (preview)
+description: Learn how to use the 'blobfuse2 mount list' command to display all BlobFuse2 mount points (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022 # How to use the BlobFuse2 mount list command to display all BlobFuse2 mount points (preview)
-Use the `BlobFuse2 mount list` command to display all existing BlobFuse2 mount points.
-
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+Use the `blobfuse2 mount list` command to display all existing BlobFuse2 mount points.
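For example:

```bash
# List all current BlobFuse2 mount points
blobfuse2 mount list
```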
+ ## Syntax
storage Blobfuse2 Commands Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount.md
Title: How to use the BlobFuse2 mount command to mount a Blob Storage container as a file system in Linux, or to display and manage existing mount points (preview). | Microsoft Docs
+ Title: How to use the 'blobfuse2 mount' command to mount a Blob Storage container as a file system in Linux, or to display and manage existing mount points (preview). | Microsoft Docs
-description: Learn how to use the BlobFuse2 mount command to mount a Blob Storage container as a file system in Linux, or to display and manage existing mount points (preview).
+description: Learn how to use the 'blobfuse2 mount' command to mount a Blob Storage container as a file system in Linux, or to display and manage existing mount points (preview).
Previously updated : 10/01/2022 Last updated : 10/17/2022
Use the `blobfuse2 mount` command to mount a Blob Storage container as a file system in Linux, or to display existing mount points.
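As a hedged example, assuming a configuration file named `config.yaml`:

```bash
# Mount a single container (named in config.yaml) at ~/bf2_mount
blobfuse2 mount ~/bf2_mount --config-file=./config.yaml
```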
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
## Command Syntax
storage Blobfuse2 Commands Mountv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mountv1.md
Previously updated : 08/02/2022 Last updated : 10/17/2022
Use the `blobfuse2 mountv1` command to generate a configuration file for BlobFuse2 from a BlobFuse v1 configuration file.
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
## Syntax
storage Blobfuse2 Commands Secure Decrypt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-decrypt.md
Title: How to use the BlobFuse2 secure decrypt command to decrypt a BlobFuse2 configuration file (preview) | Microsoft Docs
+ Title: How to use the `blobfuse2 secure decrypt` command to decrypt a BlobFuse2 configuration file (preview) | Microsoft Docs
-description: Learn how to use the BlobFuse2 secure decrypt command to decrypt a BlobFuse2 configuration file (preview).
+description: Learn how to use the `blobfuse2 secure decrypt` command to decrypt a BlobFuse2 configuration file (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022 # How to use the BlobFuse2 secure decrypt command to decrypt a BlobFuse2 configuration file (preview)
-Use the `BlobFuse2 secure decrypt` command to decrypt a BlobFuse2 configuration file.
-
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+Use the `blobfuse2 secure decrypt` command to decrypt a BlobFuse2 configuration file.
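A hedged sketch of the command follows; the flag names (`--config-file`, `--passphrase`, `--output-file`) are assumptions to be confirmed against the Syntax section.

```bash
# Decrypt an encrypted configuration file back to plain YAML
blobfuse2 secure decrypt \
    --config-file=./config.yaml.enc \
    --passphrase=<passphrase> \
    --output-file=./config.yaml
```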
+ ## Syntax
storage Blobfuse2 Commands Secure Encrypt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-encrypt.md
Title: How to use the BlobFuse2 secure encrypt command to encrypt a BlobFuse2 configuration file (preview) | Microsoft Docs
+ Title: How to use the `blobfuse2 secure encrypt` command to encrypt a BlobFuse2 configuration file (preview) | Microsoft Docs
-description: Learn how to use the BlobFuse2 secure encrypt command to encrypt a BlobFuse2 configuration file. (preview)
+description: Learn how to use the `blobfuse2 secure encrypt` command to encrypt a BlobFuse2 configuration file (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022 # How to use the BlobFuse2 secure encrypt command to encrypt a BlobFuse2 configuration file (preview)
-Use the `BlobFuse2 secure encrypt` command to encrypt a BlobFuse2 configuration file.
-
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+Use the `blobfuse2 secure encrypt` command to encrypt a BlobFuse2 configuration file.
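A hedged sketch of the command follows; the flag names (`--config-file`, `--passphrase`, `--output-file`) are assumptions to be confirmed against the Syntax section.

```bash
# Encrypt a plain-text configuration file
blobfuse2 secure encrypt \
    --config-file=./config.yaml \
    --passphrase=<passphrase> \
    --output-file=./config.yaml.enc
```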
+ ## Syntax
storage Blobfuse2 Commands Secure Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-get.md
Title: How to use the BlobFuse2 secure get command to display the value of a parameter from an encrypted BlobFuse2 configuration file (preview) | Microsoft Docs
+ Title: How to use the 'blobfuse2 secure get' command to display the value of a parameter from an encrypted BlobFuse2 configuration file (preview) | Microsoft Docs
-description: Learn how to use the BlobFuse2 secure get command to display the value of a parameter from an encrypted BlobFuse2 configuration file (preview)
+description: Learn how to use the 'blobfuse2 secure get' command to display the value of a parameter from an encrypted BlobFuse2 configuration file (preview)
Previously updated : 08/02/2022 Last updated : 10/17/2022 # How to use the BlobFuse2 secure get command to display the value of a parameter from an encrypted BlobFuse2 configuration file (preview)
-Use the `BlobFuse2 secure get` command to display the value of a specified parameter from an encrypted BlobFuse2 configuration file.
-
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+Use the `blobfuse2 secure get` command to display the value of a specified parameter from an encrypted BlobFuse2 configuration file.
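A hedged sketch of the command follows; the flag names (`--config-file`, `--passphrase`, `--key`) and the example key path are assumptions to be confirmed against the Syntax section.

```bash
# Read a single setting from an encrypted configuration file
blobfuse2 secure get \
    --config-file=./config.yaml.enc \
    --passphrase=<passphrase> \
    --key=logging.log_level
```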
+ ## Syntax
storage Blobfuse2 Commands Secure Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-set.md
Title: How to use the BlobFuse2 secure set command to change the value of a parameter in an encrypted BlobFuse2 configuration file (preview) | Microsoft Docs
+ Title: How to use the 'blobfuse2 secure set' command to change the value of a parameter in an encrypted BlobFuse2 configuration file (preview) | Microsoft Docs
-description: Learn how to use the BlobFuse2 secure set command to change the value of a parameter in an encrypted BlobFuse2 configuration file (preview)
+description: Learn how to use the 'blobfuse2 secure set' command to change the value of a parameter in an encrypted BlobFuse2 configuration file (preview)
Previously updated : 08/02/2022 Last updated : 10/17/2022 # How to use the BlobFuse2 secure set command to change the value of a parameter in an encrypted BlobFuse2 configuration file (preview)
-Use the `BlobFuse2 secure set` command to change the value of a specified parameter in an encrypted BlobFuse2 configuration file.
-
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+Use the `blobfuse2 secure set` command to change the value of a specified parameter in an encrypted BlobFuse2 configuration file.
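A hedged sketch of the command follows; the flag names (`--config-file`, `--passphrase`, `--key`, `--value`) and the example values are assumptions to be confirmed against the Syntax section.

```bash
# Change a single setting in an encrypted configuration file
blobfuse2 secure set \
    --config-file=./config.yaml.enc \
    --passphrase=<passphrase> \
    --key=logging.log_level \
    --value=log_debug
```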
+ ## Syntax
storage Blobfuse2 Commands Secure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure.md
Title: How to use the BlobFuse2 secure command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file (preview) | Microsoft Docs
+ Title: How to use the 'blobfuse2 secure' command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file (preview) | Microsoft Docs
-description: Learn how to use the BlobFuse2 secure command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file (preview).
+description: Learn how to use the 'blobfuse2 secure' command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022
Use the `blobfuse2 secure` command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file.
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
## Command Syntax
storage Blobfuse2 Commands Unmount All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount-all.md
Title: How to use the BlobFuse2 unmount all command to unmount all blob containers in a storage account as a Linux file system (preview) | Microsoft Docs
+ Title: How to use the 'blobfuse2 unmount all' command to unmount all blob containers in a storage account as a Linux file system (preview) | Microsoft Docs
-description: Learn how to use the BlobFuse2 unmount all command to unmount all blob containers in a storage account as a Linux file system (preview).
+description: Learn how to use the 'blobfuse2 unmount all' command to unmount all blob containers in a storage account as a Linux file system (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022
Use the `blobfuse2 unmount all` command to unmount all existing BlobFuse2 mount points.
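For example:

```bash
# Unmount every existing BlobFuse2 mount point
blobfuse2 unmount all
```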
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
## Syntax
storage Blobfuse2 Commands Unmount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount.md
Title: How to use the BlobFuse2 unmount command to unmount an existing mount point (preview)| Microsoft Docs
+ Title: How to use the 'blobfuse2 unmount' command to unmount an existing mount point (preview) | Microsoft Docs
-description: How to use the BlobFuse2 unmount command to unmount an existing mount point. (preview)
+description: How to use the 'blobfuse2 unmount' command to unmount an existing mount point (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022
Use the `blobfuse2 unmount` command to unmount one or more existing BlobFuse2 mount points.
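For example, to unmount a single mount point (the path is illustrative):

```bash
# Unmount a single BlobFuse2 mount point
blobfuse2 unmount ~/bf2_mount
```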
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
## Syntax
storage Blobfuse2 Commands Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-version.md
Title: How to use the BlobFuse2 version command to get the current version and optionally check for a newer one (preview) | Microsoft Docs
+ Title: How to use the 'blobfuse2 version' command to get the current version and optionally check for a newer one (preview) | Microsoft Docs
-description: Learn how to use the BlobFuse2 version command to get the current version and optionally check for a newer one (preview).
+description: Learn how to use the 'blobfuse2 version' command to get the current version and optionally check for a newer one (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022
Use the `blobfuse2 version` command to display the current version of BlobFuse2, and optionally check for latest version.
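For example (the `--check` flag name is an assumption; verify it with `blobfuse2 version --help`):

```bash
# Display the installed version and check whether a newer release is available
blobfuse2 version --check
```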
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
## Syntax
storage Blobfuse2 Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands.md
Previously updated : 08/02/2022 Last updated : 10/17/2022
This reference shows how to use the BlobFuse2 command set to mount Azure blob storage containers as file systems on Linux, and how to manage them.
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
->
-> - [The BlobFuse v1 setup documentation](storage-how-to-mount-container-linux.md)
-> - [The BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
## Syntax
storage Blobfuse2 Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-configuration.md
Title: Configure settings for BlobFuse2 Preview
+ Title: Configure settings for BlobFuse2 (preview)
-description: Learn about your options for setting and changing configuration settings for BlobFuse2 Preview.
+description: Learn about your options for setting and changing configuration settings for BlobFuse2 (preview).
Previously updated : 09/29/2022 Last updated : 10/17/2022
-# Configure settings for BlobFuse2 Preview
+# Configure settings for BlobFuse2 (preview)
-You can use configuration settings to manage BlobFuse2 Preview in your deployment. Through configuration settings, you can set these aspects of how BlobFuse2 works in your environment:
+You can use configuration settings to manage BlobFuse2 in your deployment. Through configuration settings, you can set these aspects of how BlobFuse2 works in your environment:
- Access to a storage blob - Logging
You can use configuration settings to manage BlobFuse2 Preview in your deploymen
For a list of all BlobFuse2 settings and their descriptions, see the [base configuration file on GitHub](https://github.com/Azure/azure-storage-fuse/blob/main/setup/baseConfig.yaml).
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and currently is in preview. The preview version is provided without a service-level agreement. We recommend that you don't use the preview version for production workloads. In BlobFuse2 Preview, some features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> To use BlobFuse in a production environment, use the BlobFuse v1 general availability (GA) version. For information about the GA version, see:
->
-> - [Mount Azure Blob Storage as a file system by using BlobFuse v1](storage-how-to-mount-container-linux.md)
-> - [BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
+ To manage configuration settings for BlobFuse2, you have three options (in order of precedence): (1) [Configuration file](#configuration-file)
storage Blobfuse2 Health Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-health-monitor.md
Title: Use BlobFuse2 Preview Health Monitor to monitor mount activities and resource usage
+ Title: Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage
-description: Learn how to use BlobFuse2 Health Monitor to gain insights into BlobFuse2 Preview mount activities and resource usage.
+description: Learn how to use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage.
Previously updated : 09/26/2022 Last updated : 10/17/2022
-# Use Health Monitor to gain insights into BlobFuse2 Preview mounts
+# Use Health Monitor to gain insights into BlobFuse2 (preview) mounts
-This article provides references to help you deploy and use BlobFuse2 Preview Health Monitor to gain insights into BlobFuse2 mount activities and resource usage.
+This article provides references to help you deploy and use BlobFuse2 Health Monitor to gain insights into BlobFuse2 mount activities and resource usage.
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and currently is in preview. The preview version is provided without a service-level agreement. We recommend that you don't use the preview version for production workloads. In BlobFuse2 Preview, some features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> To use BlobFuse in a production environment, use the BlobFuse v1 general availability (GA) version. For information about the GA version, see:
->
-> - [Mount Azure Blob Storage as a file system by using BlobFuse v1](storage-how-to-mount-container-linux.md)
-> - [BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
You can use BlobFuse2 Health Monitor to:
storage Blobfuse2 How To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-how-to-deploy.md
Title: Mount an Azure Blob Storage container on Linux by using BlobFuse2 Preview
+ Title: Mount an Azure Blob Storage container on Linux by using BlobFuse2 (preview)
-description: Learn how to mount an Azure Blob Storage container on Linux by using BlobFuse2 Preview.
+description: Learn how to mount an Azure Blob Storage container on Linux by using BlobFuse2 (preview).
Previously updated : 10/01/2022 Last updated : 10/17/2022
-# Mount an Azure Blob Storage container on Linux by using BlobFuse2 Preview
+# Mount an Azure Blob Storage container on Linux by using BlobFuse2 (preview)
-[BlobFuse2 Preview](blobfuse2-what-is.md) is a virtual file system driver for Azure Blob Storage. BlobFuse2 allows you to access your existing Azure block blob data in your storage account through the Linux file system. For more information, see [What is BlobFuse2?](blobfuse2-what-is.md).
+[BlobFuse2 (preview)](blobfuse2-what-is.md) is a virtual file system driver for Azure Blob Storage. BlobFuse2 allows you to access your existing Azure block blob data in your storage account through the Linux file system. For more information, see [What is BlobFuse2?](blobfuse2-what-is.md).
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and currently is in preview. The preview version is provided without a service-level agreement. We recommend that you don't use the preview version for production workloads. In BlobFuse2 Preview, some features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> To use BlobFuse in a production environment, use the BlobFuse v1 general availability (GA) version. For information about the GA version, see:
->
-> - [Mount Azure Blob Storage as a file system by using BlobFuse v1](storage-how-to-mount-container-linux.md)
-> - [BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
This article shows you how to install and configure BlobFuse2, mount an Azure blob container, and access data in the container. The basic steps are:
storage Blobfuse2 Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-troubleshooting.md
Title: Troubleshoot issues in BlobFuse2 Preview
+ Title: Troubleshoot issues in BlobFuse2 (preview)
-description: Learn how to troubleshoot issues in BlobFuse2 Preview.
+description: Learn how to troubleshoot issues in BlobFuse2 (preview).
Previously updated : 08/02/2022 Last updated : 10/17/2022
-# Troubleshoot issues in BlobFuse2 Preview
+# Troubleshoot issues in BlobFuse2 (preview)
This article provides references to help you troubleshoot BlobFuse2 issues during the preview.
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and currently is in preview. The preview version is provided without a service-level agreement. We recommend that you don't use the preview version for production workloads. In BlobFuse2 Preview, some features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> To use BlobFuse in a production environment, use the BlobFuse v1 general availability (GA) version. For information about the GA version, see:
->
-> - [Mount Azure Blob Storage as a file system by using BlobFuse v1](storage-how-to-mount-container-linux.md)
-> - [BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
## BlobFuse2 troubleshooting guide
storage Blobfuse2 What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-what-is.md
Title: What is BlobFuse2 Preview?
+ Title: What is BlobFuse2 (preview)?
-description: Get an overview of BlobFuse2 Preview and how to use it, including migration options if you use BlobFuse v1.
+description: Get an overview of BlobFuse2 (preview) and how to use it, including migration options if you use BlobFuse v1.
Previously updated : 10/01/2022 Last updated : 10/17/2022
-# What is BlobFuse2 Preview?
+# What is BlobFuse2 (preview)?
-BlobFuse2 Preview is a virtual file system driver for Azure Blob Storage. Use BlobFuse2 to access your existing Azure block blob data in your storage account through the Linux file system. BlobFuse2 also supports storage accounts that have a hierarchical namespace enabled.
+BlobFuse2 (preview) is a virtual file system driver for Azure Blob Storage. Use BlobFuse2 to access your existing Azure block blob data in your storage account through the Linux file system. BlobFuse2 also supports storage accounts that have a hierarchical namespace enabled.
-> [!IMPORTANT]
-> BlobFuse2 is the next generation of BlobFuse and currently is in preview. The preview version is provided without a service-level agreement. We recommend that you don't use the preview version for production workloads. In BlobFuse2 Preview, some features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> To use BlobFuse in a production environment, use the BlobFuse v1 general availability (GA) version. For information about the GA version, see:
->
-> - [Mount Azure Blob Storage as a file system by using BlobFuse v1](storage-how-to-mount-container-linux.md)
-> - [BlobFuse v1 project on GitHub](https://github.com/Azure/azure-storage-fuse/tree/master)
## About the BlobFuse2 open source project
storage Migrate Azure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/migrate-azure-credentials.md
The following steps explain how to migrate an existing application to use passwo
For local development, make sure you're authenticated with the same Azure AD account you assigned the role to on your Blob Storage account. You can authenticate via the Azure CLI, Visual Studio, Azure PowerShell, or other tools such as IntelliJ. Next you will need to update your code to use passwordless connections.
Once your application is configured to use passwordless connections and runs loc
#### Create the managed identity using the Azure portal
-The following steps demonstrate how to create a system-assigned managed identity for various web hosting services. The managed identity can securely connect to other Azure Services using the app configurations you setup previously.
+The following steps demonstrate how to create a system-assigned managed identity for various web hosting services. The managed identity can securely connect to other Azure Services using the app configurations you set up previously.
### [Service Connector](#tab/service-connector)
az spring app identity assign \
### [Azure Container Apps](#tab/container-apps-identity)
-You can assign a managed identity to an Azure Container Apps instance with the [az containerapp identity assign](/cli/azure/containerapp/identity) command.
+You can assign a managed identity to an Azure Container Apps instance with the [az containerapp identity assign](/cli/azure/containerapp/identity) command.
```azurecli az containerapp identity assign \
storage Storage Files Active Directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-active-directory-overview.md
This section summarizes the supported Azure file shares authentication scenarios
Identity-based authentication for Azure Files offers several benefits over using Shared Key authentication: - **Extend the traditional identity-based file share access experience to the cloud with on-premises AD DS and Azure AD DS**
- If you plan to lift and shift your application to the cloud, replacing traditional file servers with Azure file shares, then you may want your application to authenticate with either on-premises AD DS or Azure AD DS credentials to access file data. Azure Files supports using both on-premises AD DS or Azure AD DS credentials to access Azure file shares over SMB from either on-premises AD DS or Azure AD DS domain-joined VMs.
+ If you plan to lift and shift your application to the cloud, replacing traditional file servers with Azure file shares, then you may want your application to authenticate with either on-premises AD DS or Azure AD DS credentials to access file data. Azure Files supports using either on-premises AD DS or Azure AD DS credentials to access Azure file shares over SMB from either on-premises AD DS or Azure AD DS domain-joined VMs.
- **Enforce granular access control on Azure file shares**
- You can grant permissions to a specific identity at the share, directory, or file level. For example, suppose that you have several teams using a single Azure file share for project collaboration. You can grant all teams access to non-sensitive directories, while limiting access to directories containing sensitive financial data to your Finance team only.
+ You can grant permissions to a specific identity at the share, directory, or file level. For example, suppose that you have several teams using a single Azure file share for project collaboration. You can grant all teams access to non-sensitive directories, while limiting access to directories containing sensitive financial data to your finance team only.
- **Back up Windows ACLs (also known as NTFS permissions) along with your data** You can use Azure file shares to back up your existing on-premises file shares. Azure Files preserves your ACLs along with your data when you back up a file share to Azure file shares over SMB.
Before you can enable identity-based authentication on Azure file shares, you mu
For on-premises AD DS authentication, you must set up your AD domain controllers and domain join your machines or VMs. You can host your domain controllers on Azure VMs or on-premises. Either way, your domain joined clients must have line of sight to the domain service, so they must be within the corporate network or virtual network (VNET) of your domain service.
-The following diagram depicts on-premises AD DS authentication to Azure file shares over SMB. The on-premises AD DS must be synced to Azure AD using Azure AD Connect sync. Only hybrid users that exist in both on-premises AD DS and Azure AD can be authenticated and authorized for Azure file share access. This is because the share level permission is configured against the identity represented in Azure AD where the directory/file level permission is enforced with that in AD DS. Make sure that you configure the permissions correctly against the same hybrid user.
+The following diagram depicts on-premises AD DS authentication to Azure file shares over SMB. The on-premises AD DS must be synced to Azure AD using Azure AD Connect sync or Azure AD Connect cloud sync. Only [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) that exist in both on-premises AD DS and Azure AD can be authenticated and authorized for Azure file share access. This is because the share-level permission is configured against the identity represented in Azure AD, whereas the directory/file-level permission is enforced with that in AD DS. Make sure that you configure the permissions correctly against the same hybrid user.
:::image type="content" source="media/storage-files-active-directory-overview/Files-on-premises-AD-DS-Diagram.png" alt-text="Diagram that depicts on-premises AD DS authentication to Azure file shares over SMB."::: ### Azure AD DS
-For Azure AD DS authentication, you should enable Azure AD Domain Services and domain join the VMs you plan to access file data from. Your domain-joined VM must reside in the same virtual network (VNET) as your Azure AD DS.
+For Azure AD DS authentication, you should enable Azure AD DS and domain-join the VMs you plan to access file data from. Your domain-joined VM must reside in the same virtual network (VNET) as your Azure AD DS.
The following diagram represents the workflow for Azure AD DS authentication to Azure file shares over SMB. It follows a similar pattern to on-premises AD DS authentication to Azure file shares. There are two major differences: -- First, you don't need to create the identity in Azure AD DS to represent the storage account. This is performed by the enablement process in the background.
+1. You don't need to create the identity in Azure AD DS to represent the storage account. This is performed by the enablement process in the background.
-- Second, all users that exist in Azure AD can be authenticated and authorized. The user can be cloud only or hybrid. The sync from Azure AD to Azure AD DS is managed by the platform without requiring any user configuration. However, the client must be domain joined to Azure AD DS, it cannot be Azure AD joined or registered.
+2. All users that exist in Azure AD can be authenticated and authorized. The user can be cloud-only or hybrid. The sync from Azure AD to Azure AD DS is managed by the platform without requiring any user configuration. However, the client must be domain-joined to Azure AD DS. It can't be Azure AD joined or registered.
:::image type="content" source="media/storage-files-active-directory-overview/Files-Azure-AD-DS-Diagram.png" alt-text="Diagram":::
storage Storage Files How To Mount Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md
description: Learn how to mount a Network File System (NFS) Azure file share on
Previously updated : 10/17/2022 Last updated : 10/21/2022
Azure file shares can be mounted in Linux distributions using either the Server
:::image type="content" source="media/storage-files-how-to-mount-nfs-shares/disable-secure-transfer.png" alt-text="Screenshot of storage account configuration screen with secure transfer disabled." lightbox="media/storage-files-how-to-mount-nfs-shares/disable-secure-transfer.png":::
-## Mount an NFS share
+## Mount an NFS share using the Azure portal
1. Once the file share is created, select the share and select **Connect from Linux**. 1. Enter the mount path you'd like to use, then copy the script.
Azure file shares can be mounted in Linux distributions using either the Server
You have now mounted your NFS share.
+## Mount an NFS share using /etc/fstab
+
+If you want the NFS file share to automatically mount every time the Linux server or VM boots, create a record in the **/etc/fstab** file for your Azure file share. Replace `YourStorageAccountName` and `FileShareName` with your information.
+
+```bash
+<YourStorageAccountName>.file.core.windows.net:/<YourStorageAccountName>/<FileShareName> /mount/<YourStorageAccountName>/<FileShareName> nfs vers=4,minorversion=1,sec=sys 0 0
+```
+
+For more information, enter the command `man fstab` from the Linux command line.
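To confirm the new record works without rebooting, one option is to create the mount point and then mount everything defined in /etc/fstab; the paths below are placeholders that should match the record above:

```bash
# Create the mount point referenced in the /etc/fstab record (placeholder paths)
sudo mkdir -p /mount/<YourStorageAccountName>/<FileShareName>

# Mount all file systems listed in /etc/fstab and confirm the NFS share is attached
sudo mount -a
df -h -t nfs4
```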
+ ### Validate connectivity
-If your mount failed, it's possible that your private endpoint was not set up correctly or is inaccessible. For details on confirming connectivity, see the [Verify connectivity](storage-files-networking-endpoints.md#verify-connectivity) section of the networking endpoints article.
+If your mount failed, it's possible that your private endpoint wasn't set up correctly or isn't accessible. For details on confirming connectivity, see [Verify connectivity](storage-files-networking-endpoints.md#verify-connectivity).
## Next steps
storage Storage Files Identity Ad Ds Assign Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-assign-permissions.md
The following table lists the share-level permissions and how they align with th
## Share-level permissions for specific Azure AD users or groups
-If you intend to use a specific Azure AD user or group to access Azure file share resources, that identity must be a [hybrid identity](../../active-directory/hybrid/whatis-hybrid-identity.md) that exists in both on-premises AD DS and Azure AD. For example, say you have a user in your AD that is user1@onprem.contoso.com and you have synced to Azure AD as user1@contoso.com using Azure AD Connect sync. For this user to access Azure Files, you must assign the share-level permissions to user1@contoso.com. The same concept applies to groups or service principals.
+If you intend to use a specific Azure AD user or group to access Azure file share resources, that identity must be a [hybrid identity](../../active-directory/hybrid/whatis-hybrid-identity.md) that exists in both on-premises AD DS and Azure AD. For example, say you have a user in your AD that is user1@onprem.contoso.com and you have synced to Azure AD as user1@contoso.com using Azure AD Connect sync or Azure AD Connect cloud sync. For this user to access Azure Files, you must assign the share-level permissions to user1@contoso.com. The same concept applies to groups and service principals.
> [!IMPORTANT] > **Assign permissions by explicitly declaring actions and data actions as opposed to using a wildcard (\*) character.** If a custom role definition for a data action contains a wildcard character, all identities assigned to that role are granted access for all possible data actions. This means that all such identities will also be granted any new data action added to the platform. The additional access and permissions granted through new actions or data actions may be unwanted behavior for customers using wildcard. To mitigate any unintended future impact, we highly recommend declaring actions and data actions explicitly as opposed to using the wildcard. In order for share-level permissions to work, you must: -- Sync the users **and** the groups from your local AD to Azure AD using Azure AD Connect sync-- Add AD synced groups to RBAC role so they can access your storage account
+- Sync the users **and** the groups from your local AD to Azure AD using either the on-premises [Azure AD Connect sync](../../active-directory/hybrid/whatis-azure-ad-connect.md) application or [Azure AD Connect cloud sync](../../active-directory/cloud-sync/what-is-cloud-sync.md), a lightweight agent that can be installed from the Azure Active Directory Admin Center.
+- Add AD synced groups to an RBAC role so they can access your storage account.
-Share-level permissions must be assigned to the Azure AD identity representing the same user or group in your AD DS to support AD DS authentication to your Azure file share. Authentication and authorization against identities that only exist in Azure AD, such as Azure Managed Identities (MSIs), are not supported with AD DS authentication.
+Share-level permissions must be assigned to the Azure AD identity representing the same user or group in your AD DS to support AD DS authentication to your Azure file share. Authentication and authorization against identities that only exist in Azure AD, such as Azure Managed Identities (MSIs), aren't supported with AD DS authentication.
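As a hedged illustration of assigning share-level permissions, a built-in Azure Files role (for example, **Storage File Data SMB Share Contributor**) can be assigned to the hybrid identity at the file share scope with the Azure CLI; every value below is a placeholder:

```azurecli
# Assign a built-in Azure Files role to a hybrid identity at the file share scope (placeholder values)
az role assignment create \
    --role "Storage File Data SMB Share Contributor" \
    --assignee user1@contoso.com \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"
```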
> [!TIP] > Optional: Customers who want to migrate SMB server share-level permissions to RBAC permissions can use the `Move-OnPremSharePermissionsToAzureFileShare` PowerShell cmdlet to migrate directory and file-level permissions from on-premises to Azure. This cmdlet evaluates the groups of a particular on-premises file share, then writes the appropriate users and groups to the Azure file share using the three RBAC roles. You provide the information for the on-premises share and the Azure file share when invoking the cmdlet.
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
To help you set up identity-based authentication for some common use cases, we p
Before you enable AD DS authentication for Azure file shares, make sure you've completed the following prerequisites: -- Select or create your [AD DS environment](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) and [sync it to Azure AD](../../active-directory/hybrid/how-to-connect-install-roadmap.md) with Azure AD Connect.
+- Select or create your [AD DS environment](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) and [sync it to Azure AD](../../active-directory/hybrid/how-to-connect-install-roadmap.md) using either the on-premises [Azure AD Connect sync](../../active-directory/hybrid/whatis-azure-ad-connect.md) application or [Azure AD Connect cloud sync](../../active-directory/cloud-sync/what-is-cloud-sync.md), a lightweight agent that can be installed from the Azure Active Directory Admin Center.
You can enable the feature on a new or existing on-premises AD DS environment. Identities used for access must be synced to Azure AD or use a default share-level permission. The Azure AD tenant and the file share that you're accessing must be associated with the same subscription.
Azure Files authentication with AD DS is available in [all Azure Public, China a
If you plan to enable any networking configurations on your file share, we recommend you read the [networking considerations](./storage-files-networking-overview.md) article and complete the related configuration before enabling AD DS authentication.
-Enabling AD DS authentication for your Azure file shares allows you to authenticate to your Azure file shares with your on-premises AD DS credentials. Further, it allows you to better manage your permissions to allow granular access control. Doing this requires synching identities from on-premises AD DS to Azure AD with AD Connect. You assign share-level permissions to hybrid identities synced to Azure AD while managing file/directory level access using Windows ACLs.
+Enabling AD DS authentication for your Azure file shares allows you to authenticate to your Azure file shares with your on-premises AD DS credentials. Further, it allows you to better manage your permissions to allow granular access control. Doing this requires syncing identities from on-premises AD DS to Azure AD using either the on-premises [Azure AD Connect sync](../../active-directory/hybrid/whatis-azure-ad-connect.md) application or [Azure AD Connect cloud sync](../../active-directory/cloud-sync/what-is-cloud-sync.md), a lightweight agent that can be installed from the Azure Active Directory Admin Center. You assign share-level permissions to hybrid identities synced to Azure AD while managing file/directory-level access using Windows ACLs.
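For directory- and file-level access, Windows ACLs can be set with standard Windows tooling once the share is mounted from a domain-joined machine. The following is only a sketch; the drive letter, folder, and account are placeholder assumptions:

```powershell
# Grant modify permissions that inherit to subfolders and files (placeholder values)
icacls "Z:\project-data" /grant "CONTOSO\user1:(OI)(CI)M"
```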
Follow these steps to set up Azure Files for AD DS authentication:
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
The Azure AD Kerberos functionality for hybrid identities is only available on t
To learn how to create and configure a Windows VM and log in by using Azure AD-based authentication, see [Log in to a Windows virtual machine in Azure by using Azure AD](../../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md).
-This feature doesn't currently support user accounts that you create and manage solely in Azure AD. User accounts must be [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which means you'll also need AD DS and Azure AD Connect. You must create these accounts in Active Directory and sync them to Azure AD. To assign Azure Role-Based Access Control (RBAC) permissions for the Azure file share to a user group, you must create the group in Active Directory and sync it to Azure AD.
+This feature doesn't currently support user accounts that you create and manage solely in Azure AD. User accounts must be [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which means you'll also need AD DS and either [Azure AD Connect](../../active-directory/hybrid/whatis-azure-ad-connect.md) or [Azure AD Connect cloud sync](../../active-directory/cloud-sync/what-is-cloud-sync.md). You must create these accounts in Active Directory and sync them to Azure AD. To assign Azure Role-Based Access Control (RBAC) permissions for the Azure file share to a user group, you must create the group in Active Directory and sync it to Azure AD.
You must disable multi-factor authentication (MFA) on the Azure AD app representing the storage account.
storage Storage Files Quick Create Use Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-linux.md
description: This tutorial covers how to use the Azure portal to deploy a Linux
Previously updated : 10/17/2022 Last updated : 10/21/2022 #Customer intent: As an IT admin new to Azure Files, I want to try out Azure file share using NFS and Linux so I can determine whether I want to subscribe to the service.
Now that you've created an NFS share, to use it you have to mount it on your Lin
1. You should see **Connect to this NFS share from Linux** along with sample commands to use NFS on your Linux distribution and a provided mounting script. > [!IMPORTANT]
- > The provided mounting script will mount the NFS share only until the Linux machine is rebooted. To automatically mount the share every time the machine reboots, [add an entry in /etc/fstab](storage-how-to-use-files-linux.md#static-mount-with-etcfstab). For more information, enter the command `man fstab` from the Linux command line.
+ > The provided mounting script will mount the NFS share only until the Linux machine is rebooted. To automatically mount the share every time the machine reboots, see [Mount an NFS share using /etc/fstab](storage-files-how-to-mount-nfs-shares.md#mount-an-nfs-share-using-etcfstab).
:::image type="content" source="media/storage-files-quick-create-use-linux/mount-nfs-share.png" alt-text="Screenshot showing how to connect to an N F S file share from Linux using a provided mounting script." lightbox="media/storage-files-quick-create-use-linux/mount-nfs-share.png" border="true":::
storage Storage How To Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-create-file-share.md
Title: Create an SMB Azure file share
-description: How to create an SMB Azure file share by using the Azure portal, PowerShell, or Azure CLI.
+description: How to create and delete an SMB Azure file share by using the Azure portal, PowerShell, or Azure CLI.
Previously updated : 10/20/2022 Last updated : 10/21/2022
az storage share-rm update \
## Delete a file share
-To delete an Azure file share, you can use the Azure portal, Azure PowerShell, or Azure CLI. Shares can be recovered within the [soft delete](storage-files-prevent-file-share-deletion.md) retention period.
+To delete an Azure file share, you can use the Azure portal, Azure PowerShell, or Azure CLI. SMB Azure file shares can be recovered within the [soft delete](storage-files-prevent-file-share-deletion.md) retention period.
# [Portal](#tab/azure-portal) 1. Open the [Azure portal](https://portal.azure.com), and navigate to the storage account that contains the file share you want to delete.
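If you prefer scripting the deletion, a minimal Azure CLI sketch looks like the following; the resource group, storage account, and share names are placeholders:

```azurecli
# Delete an SMB Azure file share (recoverable during the soft delete retention period)
az storage share-rm delete \
    --resource-group <resource-group> \
    --storage-account <storage-account-name> \
    --name <file-share-name>
```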
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
Title: Mount SMB Azure file share on Linux | Microsoft Docs
-description: Learn how to mount an Azure file share over SMB on Linux. See the list of prerequisites. Review SMB security considerations on Linux clients.
+ Title: Mount SMB Azure file share on Linux
+description: Learn how to mount an Azure file share over SMB on Linux and review SMB security considerations on Linux clients.
Previously updated : 05/16/2022 Last updated : 10/21/2022
# Mount SMB Azure file share on Linux [Azure Files](storage-files-introduction.md) is Microsoft's easy to use cloud file system. Azure file shares can be mounted in Linux distributions using the [SMB kernel client](https://wiki.samba.org/index.php/LinuxCIFS).
-The recommended way to mount an Azure file share on Linux is using SMB 3.1.1. By default, Azure Files requires encryption in transit, which is supported by SMB 3.0+. Azure Files also supports SMB 2.1, which doesn't support encryption in transit, but you may not mount Azure file shares with SMB 2.1 from another Azure region or on-premises for security reasons. Unless your application specifically requires SMB 2.1, use SMB 3.1.1.
+The recommended way to mount an Azure file share on Linux is using SMB 3.1.1. By default, Azure Files requires encryption in transit, which is supported by SMB 3.0+. Azure Files also supports SMB 2.1, which doesn't support encryption in transit, but you can't mount Azure file shares with SMB 2.1 from another Azure region or on-premises for security reasons. Unless your application specifically requires SMB 2.1, use SMB 3.1.1.
| Distribution | SMB 3.1.1 | SMB 3.0 | |-|--||
sudo mkdir -p $mntPath
Finally, create a record in the `/etc/fstab` file for your Azure file share. In the command below, the default 0755 Linux file and folder permissions are used, which means read, write, and execute for the owner (based on the file/directory Linux owner), read and execute for users in owner group, and read and execute for others on the system. You may wish to set alternate `uid` and `gid` or `dir_mode` and `file_mode` permissions on mount as desired. For more information on how to set permissions, see [UNIX numeric notation](https://en.wikipedia.org/wiki/File_system_permissions#Numeric_notation) on Wikipedia.
+> [!Tip]
+> If you want Docker containers running .NET Core applications to be able to write to the Azure file share, include **nobrl** in the CIFS mount options to avoid sending byte range lock requests to the server.
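For illustration only, an /etc/fstab record that adds **nobrl** to the CIFS mount options might look like the following; every value is a placeholder and the other options should match the ones your setup already uses:

```bash
//<storage-account>.file.core.windows.net/<share-name> /mount/<share-name> cifs nofail,credentials=/etc/smbcredentials/<storage-account>.cred,dir_mode=0755,file_mode=0755,serverino,nosharesock,nobrl 0 0
```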
+ ```bash httpEndpoint=$(az storage account show \ --resource-group $resourceGroupName \
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
Validate that permissions are configured correctly:
- **Active Directory Domain Services (AD DS)** see [Assign share-level permissions to an identity](./storage-files-identity-ad-ds-assign-permissions.md).
- Share-level permission assignments are supported for groups and users that have been synced from Active Directory Domain Services (AD DS) to Azure Active Directory (Azure AD) using Azure AD Connect. Confirm that groups and users being assigned share-level permissions are not unsupported "cloud-only" groups.
+ Share-level permission assignments are supported for groups and users that have been synced from AD DS to Azure Active Directory (Azure AD) using Azure AD Connect sync or Azure AD Connect cloud sync. Confirm that groups and users being assigned share-level permissions are not unsupported "cloud-only" groups.
- **Azure Active Directory Domain Services (Azure AD DS)** see [Assign share-level permissions to an identity](./storage-files-identity-auth-active-directory-domain-service-enable.md?tabs=azure-portal#assign-share-level-permissions-to-an-identity). <a id="error53-67-87"></a>
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
Title: Microsoft Teams on Azure Virtual Desktop - Azure
+ Title: Use Microsoft Teams on Azure Virtual Desktop - Azure
description: How to use Microsoft Teams on Azure Virtual Desktop. Previously updated : 06/27/2022 Last updated : 10/21/2022 # Use Microsoft Teams on Azure Virtual Desktop
->[!NOTE]
->Media optimization for Microsoft Teams is only available for the following two clients:
->
->- Windows Desktop client for Windows 10 or 11 machines, version 1.2.1026.0 or later.
->- macOS Remote Desktop client, version 10.7.7 or later.
- Microsoft Teams on Azure Virtual Desktop supports chat and collaboration. With media optimizations, it also supports calling and meeting functionality. To learn more about how to use Microsoft Teams in Virtual Desktop Infrastructure (VDI) environments, see [Teams for Virtualized Desktop Infrastructure](/microsoftteams/teams-for-vdi/).
-With media optimization for Microsoft Teams, the Remote Desktop client handles audio and video locally for Teams calls and meetings. You can still use Microsoft Teams on Azure Virtual Desktop with other clients without optimized calling and meetings. Teams chat and collaboration features are supported on all platforms. To redirect local devices in your remote session, check out [Customize Remote Desktop Protocol properties for a host pool](#customize-remote-desktop-protocol-properties-for-a-host-pool).
+With media optimization for Microsoft Teams, the Remote Desktop client handles audio and video locally for Teams calls and meetings by redirecting them to the local device. You can still use Microsoft Teams on Azure Virtual Desktop with other clients without optimized calling and meetings. Teams chat and collaboration features are supported on all platforms.
## Prerequisites
Before you can use Microsoft Teams on Azure Virtual Desktop, you'll need to do t
- [Prepare your network](/microsoftteams/prepare-network/) for Microsoft Teams. - Install the [Remote Desktop client](./user-documentation/connect-windows-7-10.md) on a Windows 10, Windows 10 IoT Enterprise, Windows 11, or macOS 10.14 or later device that meets the [hardware requirements for Microsoft Teams](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/).-- Connect to a Windows 10 or 11 Multi-session or Windows 10 or 11 Enterprise virtual machine (VM).
+- Connect to an Azure Virtual Desktop session host running Windows 10 or 11 Multi-session or Windows 10 or 11 Enterprise.
+- The latest version of the [Microsoft Visual C++ Redistributable](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
+
+Media optimization for Microsoft Teams is only available for the following two clients:
+
+- Windows Desktop client for Windows 10 or 11 machines, version 1.2.1026.0 or later.
+- macOS Remote Desktop client, version 10.7.7 or later.
-## Install the Teams desktop app
+For more information about which features Teams on Azure Virtual Desktop supports and minimum required client versions, see [Supported features for Teams on Azure Virtual Desktop](teams-supported-features.md).
-This section will show you how to install the Teams desktop app on your Windows 10 or 11 Multi-session or Windows 10 or 11 Enterprise VM image. To learn more, check out [Install or update the Teams desktop app on VDI](/microsoftteams/teams-for-vdi#install-or-update-the-teams-desktop-app-on-vdi).
+## Prepare to install the Teams desktop app
+
+This section will show you how to install the Teams desktop app on your Windows 10 or 11 Enterprise multi-session or Windows 10 or 11 Enterprise VM image. To learn more, check out [Install or update the Teams desktop app on VDI](/microsoftteams/teams-for-vdi#install-or-update-the-teams-desktop-app-on-vdi).
### Prepare your image for Teams
New-Item -Path "HKLM:\SOFTWARE\Microsoft\Teams" -Force
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Teams" -Name IsWVDEnvironment -PropertyType DWORD -Value 1 -Force ```
-### Install the Teams WebSocket Service
-
-Install the latest version of the [Remote Desktop WebRTC Redirector Service](https://aka.ms/msrdcwebrtcsvc/msi) on your VM image. If you encounter an installation error, install the [latest Microsoft Visual C++ Redistributable](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads) and try again.
-
-#### Latest WebSocket Service versions
-
-The following table lists the latest versions of the WebSocket Service:
-
-|Version |Release date |
-||--|
-|1.17.2205.23001|06/20/2022 |
-|1.4.2111.18001 |12/02/2021 |
-|1.1.2110.16001 |10/15/2021 |
-|1.0.2106.14001 |07/29/2021 |
-|1.0.2006.11001 |07/28/2020 |
-|0.11.0 |05/29/2020 |
-
-### Updates for version 1.17.2205.23001
--- Fixed an issue that made the WebRTC redirector service disconnect from Teams on Azure Virtual Desktop.-- Added keyboard shortcut detection for Shift+Ctrl+; that lets users turn on a diagnostic overlay during calls on Teams for Azure Virtual Desktop. This feature is supported in version 1.2.3313 or later of the Windows Desktop client. -- Added further stability and reliability improvements to the service.
+### Install the Remote Desktop WebRTC Redirector Service
-#### Updates for version 1.4.2111.18001
+The Remote Desktop WebRTC Redirector Service is required to run Teams on Azure Virtual Desktop. To install the service:
-- Fixed a mute notification problem.-- Multiple z-ordering fixes in Teams on Azure Virtual Desktop and Teams on Microsoft 365.-- Removed timeout that prevented the WebRTC redirector service from starting when the user connects.-- Fixed setup problems that prevented side-by-side installation from working.
+1. Sign in to a session host as a local administrator.
-#### Updates for version 1.1.2110.16001
+1. Download the [Remote Desktop WebRTC Redirector Service installer](https://aka.ms/msrdcwebrtcsvc/msi).
-- Fixed an issue that caused the screen to turn black while screen sharing. If you've been experiencing this issue, confirm that this update will resolve it by resizing the Teams window. If screen sharing starts working again after resizing, the update will resolve this issue.-- You can now control the meeting, ringtone, and notification volume from the host VM. You can only use this feature with version 1.2.2459 or later of [the Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew).-- The installer will now make sure that Teams is closed before installing updates.-- Fixed an issue that prevented users from returning to full screen mode after leaving the call window.
+1. Open the file that you downloaded to start the setup process.
-#### Updates for version 1.0.2106.14001
+1. Follow the prompts. Once it's completed, select **Finish**.
-Increased the connection reliability between the WebRTC redirector service and the WebRTC client plugin.
+You can find more information about the latest version of the WebSocket service at [What's new in the Remote Desktop WebRTC Redirector Service](whats-new-webrtc.md).
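If you prefer to install the service as part of an automated image build rather than interactively, the MSI also supports a standard silent installation; the file name below is an assumption based on the downloaded installer:

```powershell
# Silent installation of the downloaded installer (file name is a placeholder)
msiexec /i "MsRdcWebRTCSvc_HostSetup.msi" /quiet /norestart
```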
-#### Updates for version 1.0.2006.11001
+## Install Teams on Azure Virtual Desktop
-- Fixed an issue where minimizing the Teams app during a call or meeting caused incoming video to drop.-- Added support for selecting one monitor to share in multi-monitor desktop sessions.-
-### Install Microsoft Teams
-
-You can deploy the Teams desktop app using a per-machine or per-user installation. To install Microsoft Teams in your Azure Virtual Desktop environment:
+You can deploy the Teams desktop app using a per-machine or per-user installation. To install Teams on Azure Virtual Desktop:
1. Download the [Teams MSI package](/microsoftteams/teams-for-vdi#deploy-the-teams-desktop-app-to-the-vm) that matches your environment. We recommend using the 64-bit installer on a 64-bit operating system.
-2. Run one of the following commands to install the MSI to the host VM:
-
- - Per-user installation
-
- ```powershell
- msiexec /i <path_to_msi> /l*v <install_logfile_name>
- ```
-
- This process is the default installation, which installs Teams to the **%AppData%** user folder. Teams won't work properly with per-user installation on a non-persistent setup.
+1. Run one of the following commands to install the MSI to the host VM:
- - Per-machine installation
+ - For per-machine installation, run this command:
```powershell
- msiexec /i <path_to_msi> /l*v <install_logfile_name> ALLUSER=1
+ msiexec /i <path_to_msi> /l*v <install_logfile_name> ALLUSER=1 ALLUSERS=1
``` This process installs Teams to the `%ProgramFiles(x86)%` folder on a 64-bit operating system and to the `%ProgramFiles%` folder on a 32-bit operating system. At this point, the golden image setup is complete. Installing Teams per-machine is required for non-persistent setups.
- There are two flags that may be set when installing teams, **ALLUSER=1** and **ALLUSERS=1**. It's important to understand the difference between these parameters. The **ALLUSER=1** parameter is used only in VDI environments to specify a per-machine installation. The **ALLUSERS=1** parameter can be used in non-VDI and VDI environments. When you set this parameter, **Teams Machine-Wide Installer** appears in Program and Features in Control Panel as well as Apps & features in Windows Settings. All users with admin credentials on the machine can uninstall Teams.
+ During this process, you can set the *ALLUSER=1* and the *ALLUSERS=1* parameters. The following table lists the differences between these two parameters.
+
+ |Parameter|Purpose|
+ |||
+ |ALLUSER=1|Used in virtual desktop infrastructure (VDI) environments to specify per-machine installation.|
+ |ALLUSERS=1|Used in both non-VDI and VDI environments to make the Teams Machine-Wide Installer appear in Programs and Features under the Control Panel and in Apps & Features in Windows Settings. The installer lets all users with admin credentials uninstall Teams.|
+ When you install Teams with the MSI setting ALLUSER=1, automatic updates will be disabled. We recommend you make sure to update Teams at least once a month. To learn more about deploying the Teams desktop app, check out [Deploy the Teams desktop app to the VM](/microsoftteams/teams-for-vdi#deploy-the-teams-desktop-app-to-the-vm/).
+
> [!NOTE]
+ > We recommend you use per-machine installation for better centralized management for both pooled and personal host pool setups.
+ >
> Users and admins can't disable automatic launch for Teams during sign-in at this time.
-3. To uninstall the MSI from the host VM, run this command:
-
- ```powershell
- msiexec /passive /x <msi_name> /l*v <uninstall_logfile_name>
- ```
-
- This uninstalls Teams from the Program Files (x86) folder or Program Files folder, depending on the operating system environment.
+ - For per-user installation, run the following command:
- > [!NOTE]
- > When you install Teams with the MSI setting ALLUSER=1, automatic updates will be disabled. We recommend you make sure to update Teams at least once a month. To learn more about deploying the Teams desktop app, check out [Deploy the Teams desktop app to the VM](/microsoftteams/teams-for-vdi#deploy-the-teams-desktop-app-to-the-vm/).
+ ```powershell
+ msiexec /i <path_to_msi> /l*v <install_logfile_name> ALLUSERS=1
+ ```
->[!IMPORTANT]
->If you're using a version of the Remote Desktop client for macOS that's earlier than 10.7.7, in order to use our latest Teams optimization features, you'll need to update your client to version 10.7.7 or later, then go to **Microsoft Remote Desktop Preferences** > **General** and enable Teams optimizations. If you're using the client for the first time and already have version 10.7.7 or later installed, you won't need to do this, because Teams optimizations are enabled by default.
+ This process installs Teams to the **%AppData%** user folder.
+
+ >[!NOTE]
+ >Per-user installation only works on personal host pools. If your deployment uses pooled host pools, we recommend using per-machine installation instead.
-### Verify media optimizations loaded
+## Verify media optimizations loaded
After installing the WebSocket Service and the Teams desktop app, follow these steps to verify that Teams media optimizations loaded: 1. Quit and restart the Teams application.
-2. Select your user profile image, then select **About**.
+1. Select your user profile image, then select **About**.
-3. Select **Version**.
+1. Select **Version**.
If media optimizations loaded, the banner will show you **Azure Virtual Desktop Media optimized**. If the banner shows you **Azure Virtual Desktop Media not connected**, quit the Teams app and try again.
-4. Select your user profile image, then select **Settings**.
+1. Select your user profile image, then select **Settings**.
If media optimizations loaded, the audio devices and cameras available locally will be enumerated in the device menu. If the menu shows **Remote audio**, quit the Teams app and try again. If the devices still don't appear in the menu, check the Privacy settings on your local PC. Ensure that under **Settings** > **Privacy** > **App permissions - Microphone**, the setting **"Allow apps to access your microphone"** is toggled **On**. Disconnect from the remote session, then reconnect and check the audio and video devices again. To join calls and meetings with video, you must also grant permission for apps to access your camera. If optimizations don't load, uninstall then reinstall Teams and check again.
-## Known issues and limitations
-
-Using Teams in a virtualized environment is different from using Teams in a non-virtualized environment. For more information about the limitations of Teams in virtualized environments, check out [Teams for Virtualized Desktop Infrastructure](/microsoftteams/teams-for-vdi#known-issues-and-limitations).
-
-### Client deployment, installation, and setup
--- With per-machine installation, Teams on VDI isn't automatically updated the same way non-VDI Teams clients are. To update the client, you'll need to update the VM image by installing a new MSI.-- Media optimization for Teams is only supported for the Remote Desktop client on machines running Windows 10 or later or macOS 10.14 or later.-- Use of explicit HTTP proxies defined on the client endpoint device isn't supported.-- Zoom in/zoom out of chat windows isn't supported.-
-### Calls and meetings
--- The Teams desktop client in Azure Virtual Desktop environments doesn't support creating live events, but you can join live events. For now, we recommend you create live events from the [Teams web client](https://teams.microsoft.com) in your remote session instead. When watching a live event in the browser, [enable multimedia redirection (MMR) for Teams live events](multimedia-redirection.md#teams-live-events) for smoother playback.-- Calls or meetings don't currently support application sharing. Desktop sessions support desktop sharing.-- Due to WebRTC limitations, incoming and outgoing video stream resolution is limited to 720p.-- The Teams app doesn't support HID buttons or LED controls with other devices.-- New Meeting Experience (NME) isn't currently supported in VDI environments.-- Teams for Azure Virtual Desktop doesn't currently support uploading custom background images.-
-For Teams known issues that aren't related to virtualized environments, see [Support Teams in your organization](/microsoftteams/known-issues).
-
-### Known issues for Teams for macOS
--- You can't configure audio devices from the Teams app, and the client will automatically use the default client audio device. To switch audio devices, you'll need to configure your settings from the client audio preferences instead.-- Teams for Azure Virtual Desktop on macOS doesn't currently support background effects such as background blur and background images.-- Give control and take control aren't currently supported.-
-## Collect Teams logs
-
-If you encounter issues with the Teams desktop app in your Azure Virtual Desktop environment, collect client logs under **%appdata%\Microsoft\Teams\logs.txt** on the host VM.
-
-If you encounter issues with calls and meetings, collect Teams Web client logs with the key combination **Ctrl** + **Alt** + **Shift** + **1**. Logs will be written to **%userprofile%\Downloads\MSTeams Diagnostics Log DATE_TIME.txt** on the host VM.
-
-## Contact Microsoft Teams support
-
-To contact Microsoft Teams support, go to the [Microsoft 365 admin center](/microsoft-365/admin/contact-support-for-business-products).
- ## Customize Remote Desktop Protocol properties for a host pool Customizing a host pool's Remote Desktop Protocol (RDP) properties, such as multi-monitor experience or enabling microphone and audio redirection, lets you deliver an optimal experience for your users based on their needs.
Enabling device redirections isn't required when using Teams with media optimiza
- `camerastoredirect:s:*` redirects all cameras. To learn more, check out [Customize Remote Desktop Protocol properties for a host pool](customize-rdp-properties.md).+
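As a hedged sketch, the same RDP properties can also be applied from the command line with the desktopvirtualization Azure CLI extension; the resource group and host pool names are placeholders:

```azurecli
# Redirect local cameras and the microphone into the remote session (placeholder names)
az desktopvirtualization hostpool update \
    --resource-group <resource-group> \
    --name <host-pool-name> \
    --custom-rdp-property "audiocapturemode:i:1;camerastoredirect:s:*"
```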
+## Next steps
+
+See [Supported features for Teams on Azure Virtual Desktop](teams-supported-features.md) for more information about which features Teams on Azure Virtual Desktop supports and minimum required client versions.
+
+Learn about known issues, limitations, and how to log issues at [Troubleshoot Teams on Azure Virtual Desktop](troubleshoot-teams.md).
+
+Learn about the latest version of the WebSocket Service at [What's new in the WebSocket Service for Azure Virtual Desktop](whats-new-webrtc.md).
virtual-desktop Teams Supported Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-supported-features.md
+
+ Title: Supported features for Microsoft Teams on Azure Virtual Desktop - Azure
+description: Supported features for Microsoft Teams on Azure Virtual Desktop.
++ Last updated : 10/21/2022++++
+# Supported features for Microsoft Teams on Azure Virtual Desktop
+
+This article lists the features of Microsoft Teams that Azure Virtual Desktop currently supports and the minimum requirements to use each feature.
+
+## Supported features
+
+The following table lists whether the Windows Desktop client or macOS client supports specific features for Teams on Azure Virtual Desktop.
+
+|Feature|Windows Desktop client|macOS client|
+||||
+|Audio/video call|Yes|Yes|
+|Screen share|Yes|Yes|
+|Configure camera devices|Yes|Yes|
+|Configure audio devices|Yes|No|
+|Live captions|Yes|Yes|
+|Communication Access Real-time Translation (CART) transcriptions|Yes|Yes|
+|Give and take control |Yes|No|
+|Multiwindow|Yes|Yes|
+|Background blur|Yes|No|
+|Background images|Yes|No|
+|Screen share and video together|Yes|Yes|
+|Secondary ringer|Yes|No|
+|Dynamic e911|Yes|Yes|
+|Diagnostic overlay|Yes|No|
+|Noise suppression|Yes|No|
+
+## Minimum requirements
+
+The following table lists the minimum required versions for each Teams feature. For optimal user experience on Teams for Azure Virtual Desktop, we recommend using the latest supported versions of each client and the WebRTC service, which you can find in the following list:
+
+- [Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew)
+- [macOS](/windows-server/remote/remote-desktop-services/clients/mac-whatsnew)
+- [Teams WebRTC Service](https://aka.ms/msrdcwebrtcsvc/msi)
+- [Teams desktop app](/microsoftteams/teams-for-vdi#deploy-the-teams-desktop-app-to-the-vm)
+
+|Supported features|Windows Desktop client version |macOS client version|WebRTC Service version|Teams version|
+||||||
+|Audio/video call|1.2.1755 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
+|Screen share|1.2.1755 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
+|Configure camera devices|1.2.1755 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
+|Configure audio devices|1.2.1755 and later|Not supported|1.0.2006.11001 and later|Updates within 90 days of the current version|
+|Live captions|1.2.2322 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
+|CART transcriptions|1.2.2322 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
+|Give and take control |1.2.2924 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
+|Multiwindow|1.2.1755 and later|10.7.7 and later|1.0.2006.11001 and later|1.5.00.11865 and later|
+|Background blur|1.2.3004 and later|Not supported|1.0.2006.11001 and later|1.5.00.11865 and later|
+|Background images|1.2.3004 and later|Not supported|1.0.2006.11001 and later|1.5.00.11865 and later|
+|Screen share and video together|1.2.1755 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
+|Secondary ringer|1.2.3004 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
+|Dynamic e911|1.2.2600 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
+|Diagnostic overlay|1.2.3316 and later|Not supported|1.17.2205.23001 and later|Updates within 90 days of the current version|
+|Noise suppression|1.2.3316 and later|Not supported|1.0.2006.11001 and later|Updates within 90 days of the current version|
+
+## Next steps
+
+Learn more about how to set up Teams for Azure Virtual Desktop at [Use Microsoft Teams on Azure Virtual Desktop](teams-on-avd.md).
+
+Learn about known issues, limitations, and how to log issues at [Troubleshoot Teams on Azure Virtual Desktop](troubleshoot-teams.md).
+
+Learn about the latest version of the Remote Desktop WebRTC Redirector Service at [What's new in the Remote Desktop WebRTC Redirector Service](whats-new-webrtc.md).
virtual-desktop Troubleshoot Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-teams.md
+
+ Title: Troubleshoot Microsoft Teams on Azure Virtual Desktop - Azure
+description: Known issues and troubleshooting instructions for Teams on Azure Virtual Desktop.
++ Last updated : 10/21/2022++++
+# Troubleshoot Microsoft Teams for Azure Virtual Desktop
+
+This article describes known issues and limitations for Teams on Azure Virtual Desktop, as well as how to log issues and contact support.
+
+## Known issues and limitations
+
+Using Teams in a virtualized environment is different from using Teams in a non-virtualized environment. For more information about the limitations of Teams in virtualized environments, check out [Teams for Virtualized Desktop Infrastructure](/microsoftteams/teams-for-vdi#known-issues-and-limitations).
+
+### Client deployment, installation, and setup
+
+- With per-machine installation, Teams on VDI isn't automatically updated the same way non-VDI Teams clients are. To update the client, you'll need to update the VM image by installing a new MSI.
+- Media optimization for Teams is only supported for the Remote Desktop client on machines running Windows 10 or later or macOS 10.14 or later.
+- Use of explicit HTTP proxies defined on the client endpoint device isn't supported.
+- Zoom in/zoom out of chat windows isn't supported.
+
+### Calls and meetings
+
+- Due to WebRTC limitations, incoming and outgoing video stream resolution is limited to 720p.
+- The Teams app doesn't support HID buttons or LED controls with other devices.
+- This feature doesn't support uploading custom background images.
+- This feature doesn't support taking screenshots of incoming video from the virtual machine (VM). As a workaround, we recommend you minimize the session desktop window and take the screenshot from the client machine instead.
+
+For Teams known issues that aren't related to virtualized environments, see [Support Teams in your organization](/microsoftteams/known-issues).
+
+## Collect Teams logs
+
+If you encounter issues with the Teams desktop app in your Azure Virtual Desktop environment, collect client logs under **%appdata%\Microsoft\Teams\logs.txt** on the host VM.
+
+If you encounter issues with calls and meetings, you can start collecting Teams diagnostic logs with the key combination **Ctrl** + **Alt** + **Shift** + **1**. Logs will be written to **%userprofile%\Downloads\MSTeams Diagnostics Log DATE_TIME.txt** on the host VM.
+
+## Contact Microsoft Teams support
+
+To contact Microsoft Teams support, go to the [Microsoft 365 admin center](/microsoft-365/admin/contact-support-for-business-products).
+
+## Next steps
+
+Learn more about how to set up Teams on Azure Virtual Desktop at [Use Microsoft Teams on Azure Virtual Desktop](teams-on-avd.md).
+
+Learn more about the WebSocket Services for Teams on Azure Virtual Desktop at [What's new in the WebSocket Service](whats-new-webrtc.md).
virtual-desktop Whats New Webrtc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-webrtc.md
+
+ Title: What's new in the Remote Desktop WebRTC Redirector Service?
+description: New features and product updates for the Remote Desktop WebRTC Redirector Service for Azure Virtual Desktop.
++ Last updated : 10/21/2022+++++
+# What's new in the Remote Desktop WebRTC Redirector Service
+
+This article provides information about the latest updates to the Remote Desktop WebRTC Redirector Service for Teams for Azure Virtual Desktop, which you can download at [Remote Desktop WebRTC Redirector Service](https://aka.ms/msrdcwebrtcsvc/msi).
+
+## Latest versions of the Remote Desktop WebRTC Redirector Service
+
+The following sections describe what changed in each version of the Remote Desktop WebRTC Redirector Service.
+
+### Updates for version 1.17.2205.23001
+
+Date published: June 20, 2022
+
+Download: [MSI installer](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4YM8L)
+
+- Fixed an issue that made the WebRTC redirector service disconnect from Teams on Azure Virtual Desktop.
+- Added keyboard shortcut detection for Shift+Ctrl+; that lets users turn on a diagnostic overlay during calls on Teams for Azure Virtual Desktop. This feature is supported in version 1.2.3313 or later of the Windows Desktop client.
+- Added further stability and reliability improvements to the service.
+
+### Updates for version 1.4.2111.18001
+
+Date published: December 2, 2021
+
+Download: [MSI installer](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWQ1UW)
+
+- Fixed a mute notification problem.
+- Multiple z-ordering fixes in Teams on Azure Virtual Desktop and Teams on Microsoft 365.
+- Removed timeout that prevented the WebRTC redirector service from starting when the user connects.
+- Fixed setup problems that prevented side-by-side installation from working.
+
+### Updates for version 1.1.2110.16001
+
+Date published: October 15, 2021
+
+- Fixed an issue that caused the screen to turn black while screen sharing. If you've been experiencing this issue, confirm that this update will resolve it by resizing the Teams window. If screen sharing starts working again after resizing, the update will resolve this issue.
+- You can now control the meeting, ringtone, and notification volume from the host VM. You can only use this feature with version 1.2.2459 or later of [the Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew).
+- The installer will now make sure that Teams is closed before installing updates.
+- Fixed an issue that prevented users from returning to full screen mode after leaving the call window.
+
+### Updates for version 1.0.2106.14001
+
+Date published: July 29, 2021
+
+Increased the connection reliability between the WebRTC redirector service and the WebRTC client plugin.
+
+### Updates for version 1.0.2006.11001
+
+Date published: July 28, 2020
+
+- Fixed an issue where minimizing the Teams app during a call or meeting caused incoming video to drop.
+- Added support for selecting one monitor to share in multi-monitor desktop sessions.
+
+## Next steps
+
+Learn more about how to set up Teams on Azure Virtual Desktop at [Use Microsoft Teams on Azure Virtual Desktop](teams-on-avd.md).
+
+Learn about known issues, limitations, and how to log issues at [Troubleshoot Teams on Azure Virtual Desktop](troubleshoot-teams.md).
virtual-machines Create Upload Ubuntu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-ubuntu.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-Ubuntu now publishes official Azure VHDs for download at [https://cloud-images.ubuntu.com/](https://cloud-images.ubuntu.com/). If you need to build your own specialized Ubuntu image for Azure, rather than use the manual procedure below it is recommended to start with these known working VHDs and customize as needed. The latest image releases can always be found at the following locations:
+Ubuntu now publishes official Azure VHDs for download at [https://cloud-images.ubuntu.com/](https://cloud-images.ubuntu.com/). If you need to build your own specialized Ubuntu image for Azure, rather than use the manual procedure below, it's recommended to start with these known working VHDs and customize as needed. The latest image releases can always be found at the following locations:
-* Ubuntu 18.04/Bionic: [bionic-server-cloudimg-amd64-azure.vhd.zip](https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64-azure.vhd.zip)
-* Ubuntu 20.04/Focal: [focal-server-cloudimg-amd64-azure.vhd.zip](https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-azure.vhd.zip)
+* Ubuntu 18.04/Bionic: [bionic-server-cloudimg-amd64-azure.vhd.tar.gz](https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64-azure.vhd.tar.gz)
+* Ubuntu 20.04/Focal: [focal-server-cloudimg-amd64-azure.vhd.tar.gz](https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-azure.vhd.tar.gz)
## Prerequisites
-This article assumes that you have already installed an Ubuntu Linux operating system to a virtual hard disk. Multiple tools exist to create .vhd files, for example a virtualization solution such as Hyper-V. For instructions, see [Install the Hyper-V Role and Configure a Virtual Machine](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh846766(v=ws.11)).
+This article assumes that you've already installed an Ubuntu Linux operating system to a virtual hard disk. Multiple tools exist to create .vhd files, for example a virtualization solution such as Hyper-V. For instructions, see [Install the Hyper-V Role and Configure a Virtual Machine](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh846766(v=ws.11)).
**Ubuntu installation notes** * Please see also [General Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes) for more tips on preparing Linux for Azure.
-* The VHDX format is not supported in Azure, only **fixed VHD**. You can convert the disk to VHD format using Hyper-V Manager or the `Convert-VHD` cmdlet.
-* When installing the Linux system it is recommended that you use standard partitions rather than LVM (often the default for many installations). This will avoid LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another VM for troubleshooting. [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) may be used on data disks if preferred.
-* Do not configure a swap partition or swapfile on the OS disk. The cloud-init provisioning agent can be configured to create a swap file or a swap partition on the temporary resource disk. More information about this can be found in the steps below.
+* The VHDX format isn't supported in Azure, only **fixed VHD**. You can convert the disk to VHD format using Hyper-V Manager or the `Convert-VHD` cmdlet.
+* When installing the Linux system it's recommended that you use standard partitions rather than LVM (often the default for many installations). This will avoid LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another VM for troubleshooting. [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) may be used on data disks if preferred.
+* Don't configure a swap partition or swapfile on the OS disk. The cloud-init provisioning agent can be configured to create a swap file or a swap partition on the temporary resource disk. More information about this can be found in the steps below.
* All VHDs on Azure must have a virtual size aligned to 1MB. When converting from a raw disk to VHD you must ensure that the raw disk size is a multiple of 1MB before conversion. See [Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes) for more information. ## Manual steps
This article assumes that you have already installed an Ubuntu Linux operating s
3. Replace the current repositories in the image to use Ubuntu's Azure repository.
- Before editing `/etc/apt/sources.list`, it is recommended to make a backup:
+ Before editing `/etc/apt/sources.list`, it's recommended to make a backup:
```console # sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
This article assumes that you have already installed an Ubuntu Linux operating s
13. Click **Action -> Shut Down** in Hyper-V Manager.
-14. Azure only accepts fixed-size VHDs. If the VM's OS disk is not a fixed-size VHD, use the `Convert-VHD` PowerShell cmdlet and specify the `-VHDType Fixed` option. Please have a look at the docs for `Convert-VHD` here: [Convert-VHD](/powershell/module/hyper-v/convert-vhd).
+14. Azure only accepts fixed-size VHDs. If the VM's OS disk isn't a fixed-size VHD, use the `Convert-VHD` PowerShell cmdlet and specify the `-VHDType Fixed` option. Please have a look at the docs for `Convert-VHD` here: [Convert-VHD](/powershell/module/hyper-v/convert-vhd).
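For example, a minimal sketch of the conversion step, with placeholder paths:

```powershell
# Convert a VHDX (or dynamic VHD) to the fixed-size VHD format that Azure requires
Convert-VHD -Path .\ubuntu-os.vhdx -DestinationPath .\ubuntu-os-fixed.vhd -VHDType Fixed
```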
15. To bring a Generation 2 VM on Azure, follow these steps:
virtual-machines Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-powershell.md
Invoke-AzVMRunCommand `
-ScriptString 'sudo apt-get update && sudo apt-get install -y nginx' ```
-The `-ScriptString' parameter requires version `4.27.0` or later of the 'Az.Compute` module.
-
+The `-ScriptString` parameter requires version `4.27.0` or later of the `Az.Compute` module.
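To check which version is installed locally, one option (assuming the module was installed through PowerShellGet) is:

```powershell
# Show the installed Az.Compute version and update it if it's older than 4.27.0
Get-InstalledModule -Name Az.Compute | Select-Object Name, Version
Update-Module -Name Az.Compute
```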
## View the web server in action
virtual-machines Migration Classic Resource Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-overview.md
- Previously updated : 02/06/2020 Last updated : 10/21/2022
If your storage account does not have any associated disks or Virtual Machines d
The following screenshots show how to upgrade a Classic storage account to an Azure Resource Manager storage account using Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Navigate to your storage account.
-3. In the **Settings** section, click **Migrate to Azure Resource Manager**.
+2. Navigate to your classic storage account.
+3. In the **Settings** section, click **Migrate to ARM**.
4. Click on **Validate** to determine migration feasibility.
-5. If validation passes, click on **Prepare** to create a migrated storage account.
-6. Type **yes** to confirm migration and click **Commit** to finish the migration.
+ :::image type="content" source="./media/migration-classic-resource-manager/validate-storage-account.png" alt-text="Screenshot showing the page for migrating your classic storage account to Azure Resource Manager.":::
+1. If validation passes, click on **Prepare** to create a migrated storage account.
+1. Type **yes** to confirm migration and click **Commit** to finish the migration.
- ![Validate Storage Account](../../includes/media/storage-account-upgrade-classic/storage-migrate-resource-manager-1.png)
-
- ![Prepare Storage Account](../../includes/media/storage-account-upgrade-classic/storage-migrate-resource-manager-2.png)
-
- ![Finalize Storage Account Migration](../../includes/media/storage-account-upgrade-classic/storage-migrate-resource-manager-3.png)
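If you prefer scripting the same migration, a minimal sketch using the classic (Azure Service Management) PowerShell module might look like the following; the storage account name is a placeholder, and you'd sign in and select the subscription first:

```powershell
# Hypothetical account name; requires the classic Azure Service Management PowerShell module.
$storageAccountName = 'myclassicstorage'

# Validate, prepare, and then commit the migration (Commit can't be undone).
Move-AzureStorageAccount -Validate -StorageAccountName $storageAccountName
Move-AzureStorageAccount -Prepare  -StorageAccountName $storageAccountName
Move-AzureStorageAccount -Commit   -StorageAccountName $storageAccountName
```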
### Migration of unattached resources Storage Accounts with no associated disks or Virtual Machines data may be migrated independently.
The following configurations are not currently supported.
| Compute |Virtual machines that have alerts, Autoscale policies |The migration goes through and these settings are dropped. It is highly recommended that you evaluate your environment before you do the migration. Alternatively, you can reconfigure the alert settings after migration is complete. | | Compute |XML VM extensions (BGInfo 1.*, Visual Studio Debugger, Web Deploy, and Remote Debugging) |This is not supported. It is recommended that you remove these extensions from the virtual machine to continue migration or they will be dropped automatically during the migration process. | | Compute |Boot diagnostics with Premium storage |Disable Boot Diagnostics feature for the VMs before continuing with migration. You can re-enable boot diagnostics in the Resource Manager stack after the migration is complete. Additionally, blobs that are being used for screenshot and serial logs should be deleted so you are no longer charged for those blobs. |
-| Compute | Cloud services that contain more than one availability set or multiple availability sets. |This is currently not supported. Please move the Virtual Machines to the same availability set before migrating. |
+| Compute | Cloud services that contain more than one availability set or multiple availability sets. |This is currently not supported. Move the Virtual Machines to the same availability set before migrating. |
| Compute | VM with Microsoft Defender for Cloud extension | Microsoft Defender for Cloud automatically installs extensions on your Virtual Machines to monitor their security and raise alerts. These extensions usually get installed automatically if the Microsoft Defender for Cloud policy is enabled on the subscription. To migrate the Virtual Machines, disable the Defender for Cloud policy on the subscription, which will remove the Defender for Cloud monitoring extension from the Virtual Machines. | | Compute | VM with backup or snapshot extension | These extensions are installed on a Virtual Machine configured with the Azure Backup service. While the migration of these VMs is not supported, follow the guidance in [Frequently asked questions about classic to Azure Resource Manager migration](./migration-classic-resource-manager-faq.yml) to keep backups that were taken prior to migration. | | Compute | VM with Azure Site Recovery extension | These extensions are installed on a Virtual Machine configured with the Azure Site Recovery service. While the migration of storage used with Site Recovery will work, current replication will be impacted. You need to disable and enable VM replication after storage migration. |
-| Network |Virtual networks that contain virtual machines and web/worker roles |This is currently not supported. Please move the Web/Worker roles to their own Virtual Network before migrating. Once the classic Virtual Network is migrated, the migrated Azure Resource Manager Virtual Network can be peered with the classic Virtual Network to achieve similar configuration as before.|
+| Network |Virtual networks that contain virtual machines and web/worker roles |This is currently not supported. Move the Web/Worker roles to their own Virtual Network before migrating. Once the classic Virtual Network is migrated, the migrated Azure Resource Manager Virtual Network can be peered with the classic Virtual Network to achieve similar configuration as before.|
| Network | Classic Express Route circuits |This is currently not supported. These circuits need to be migrated to Azure Resource Manager before beginning IaaS migration. To learn more, see [Moving ExpressRoute circuits from the classic to the Resource Manager deployment model](../expressroute/expressroute-move.md).| | Azure App Service |Virtual networks that contain App Service environments |This is currently not supported. | | Azure HDInsight |Virtual networks that contain HDInsight services |This is currently not supported. |
virtual-machines Prepare For Upload Vhd Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/prepare-for-upload-vhd-image.md
Make sure the following settings are configured correctly for remote access:
> [!NOTE] > If you receive an error message when running
-> `Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services -Name <string> -Value <object>`,
+> `Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services' -Name <string> -Value <object>`,
> you can safely ignore it. It means the domain isn't setting that configuration through a Group Policy Object.
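For context, the command this note refers to is a registry write along the lines of the sketch below; the value name `fDenyTSConnections` is the standard RDP deny-connections setting and is used here only as an illustration:

```powershell
# Allows RDP at the Group Policy registry path (0 = don't deny connections).
# If no GPO has created this key, the command errors, which is the case the note says is safe to ignore.
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services' `
                 -Name 'fDenyTSConnections' -Value 0 -Type DWord
```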
virtual-network Public Ip Upgrade Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-portal.md
Title: Upgrade a public IP address - Azure portal
-description: In this article, learn how to upgrade a basic SKU public IP address using the Azure portal.
+ Title: 'Upgrade a public IP address - Azure portal'
+description: In this article, you learn how to upgrade a basic SKU public IP address using the Azure portal.
Previously updated : 05/20/2021 Last updated : 10/21/2022
In this article, you'll learn how to upgrade a static Basic SKU public IP addres
## Prerequisites
-* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-* A static basic SKU public IP address in your subscription. For more information, see [Create public IP address - Azure portal](./create-public-ip-portal.md#create-a-basic-sku-public-ip-address).
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* A static basic SKU public IP address in your subscription. For more information, see [Create a basic SKU public IP address using the Azure portal](./create-public-ip-portal.md?tabs=option-1-create-public-ip-basic#create-a-basic-sku-public-ip-address).
## Upgrade public IP address In this section, you'll sign in to the Azure portal and upgrade your static Basic SKU public IP to the Standard SKU.
-In order to upgrade a public IP, it must not be associated with any resource (see [this page](/azure/virtual-network/virtual-network-public-ip-address#view-modify-settings-for-or-delete-a-public-ip-address) for more information about how to disassociate public IPs).
+In order to upgrade a public IP, it must not be associated with any resource. For more information, see [View, modify settings for, or delete a public IP address](./virtual-network-public-ip-address.md#view-modify-settings-for-or-delete-a-public-ip-address).
>[!IMPORTANT] >Public IPs upgraded from Basic to Standard SKU continue to have no [availability zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones). This means they cannot be associated with an Azure resource that is either zone-redundant or tied to a pre-specified zone in regions where this is offered.
In order to upgrade a public IP, it must not be associated with any resource (se
5. Select the upgrade banner at the top of the overview section in **myBasicPublicIP**.
- :::image type="content" source="./media/public-ip-upgrade-portal/upgrade-ip-portal.png" alt-text="Upgrade basic IP address in Azure portal" border="true":::
+ :::image type="content" source="./media/public-ip-upgrade-portal/upgrade-ip-portal.png" alt-text="Screenshot showing the upgrade banner in Azure portal used to upgrade basic IP address." border="true":::
> [!NOTE]
- > The basic public IP you are upgrading must have the static allocation type. You'll receive a warning that the IP can't be upgraded if you try to upgrade a dynamically allocated IP address.
+ > The basic public IP you are upgrading must have static assignment. You'll receive a warning that the IP can't be upgraded if you try to upgrade a dynamically allocated IP address. Change the IP address assignment to static before upgrading.
-6. Select the **I acknowledge** check box. Select **Upgrade**.
+6. Select the **I acknowledge** check box, and then select **Upgrade**.
> [!WARNING]
- > Upgrading a basic public IP to standard sku can't be reversed. Public IPs upgraded from basic to standard SKU continue to have no guaranteed [availability zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones).
+ > Upgrading a basic public IP to standard SKU can't be reversed. Public IPs upgraded from basic to standard SKU continue to have no guaranteed [availability zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones).
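If you'd rather script the upgrade than use the portal, a minimal sketch with the `Az.Network` PowerShell module (resource group and IP names are placeholders) could look like this; the address must already be static and disassociated:

```powershell
# Hypothetical names; the basic public IP must be static and not attached to any resource.
$publicIp = Get-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' -Name 'myBasicPublicIP'
$publicIp.Sku.Name = 'Standard'
Set-AzPublicIpAddress -PublicIpAddress $publicIp

# Confirm the SKU after the upgrade.
(Get-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' -Name 'myBasicPublicIP').Sku.Name
```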
## Verify upgrade
In this section, you'll verify that the public IP address now uses the Standard SKU.
5. Verify that the SKU is listed as **Standard** in the **Overview** section.
- :::image type="content" source="./media/public-ip-upgrade-portal/verify-upgrade-ip.png" alt-text="Verify public IP address is standard SKU." border="true":::
+ :::image type="content" source="./media/public-ip-upgrade-portal/verify-upgrade-ip.png" alt-text="Screenshot showing public IP address is standard SKU." border="true":::
## Next steps
-In this article, you upgrade a basic SKU public IP address to standard SKU.
+In this article, you upgraded a basic SKU public IP address to standard SKU.
For more information on public IP addresses in Azure, see:
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
For guides on how to enable NSG flow logs, see [Enabling NSG Flow Logs](../../ne
Each NAT gateway can provide up to 50 Gbps of throughput. This data throughput includes data processed both outbound and inbound through a NAT gateway resource. You can split your deployments into multiple subnets and assign each subnet or group of subnets a NAT gateway to scale out.
-NAT gateway can support up to 50,000 concurrent connections per public IP address **to the same destination endpoint** over the internet for TCP and UDP. NAT gateway can process 1M packets per second and scale up to 5M packets per second.
+NAT gateway can support up to 50,000 concurrent connections per public IP address **to the same destination endpoint** over the internet for TCP and UDP. The total number of connections that NAT gateway can support at any given time is up to 2 million. NAT gateway can process 1M packets per second and scale up to 5M packets per second.
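To illustrate the scale-out pattern of giving each subnet (or group of subnets) its own NAT gateway, a hedged Az PowerShell sketch might look like the following; every resource name and address prefix is a placeholder:

```powershell
# Hypothetical names and prefixes; attaches a dedicated NAT gateway to one subnet.
$pip = New-AzPublicIpAddress -ResourceGroupName 'myRG' -Name 'nat-pip-1' -Location 'eastus2' `
    -Sku Standard -AllocationMethod Static
$nat = New-AzNatGateway -ResourceGroupName 'myRG' -Name 'nat-gw-1' -Location 'eastus2' `
    -Sku Standard -PublicIpAddress $pip -IdleTimeoutInMinutes 4

$vnet = Get-AzVirtualNetwork -ResourceGroupName 'myRG' -Name 'myVnet'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'subnet-1' `
    -AddressPrefix '10.0.1.0/24' -NatGateway $nat
$vnet | Set-AzVirtualNetwork
```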
Review the following section for details and the [troubleshooting article](./troubleshoot-nat.md) for specific problem resolution guidance.
virtual-wan User Groups About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-about.md
description: Learn about using user groups to assign IP addresses from specific
Previously updated : 09/20/2022 Last updated : 10/21/2022
-# About user groups and IP address pools for P2S User VPNs (preview)
+# About user groups and IP address pools for P2S User VPNs - Preview
You can configure P2S User VPNs to assign users IP addresses from specific address pools based on their identity or authentication credentials by creating **User Groups**. This article describes the different configurations and parameters the Virtual WAN P2S VPN gateway uses to determine user groups and assign IP addresses.
virtual-wan User Groups Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-create.md
description: Learn how to configure user groups and assign IP addresses from spe
Previously updated : 09/22/2022 Last updated : 10/21/2022
-# Configure user groups and IP address pools for P2S User VPNs (preview)
+# Configure user groups and IP address pools for P2S User VPNs - Preview
You can configure P2S User VPNs to assign users IP addresses from specific address pools based on their identity or authentication credentials by creating **User Groups**. This article helps you configure user groups, group members, and prioritize groups. For more information about working with user groups, see [About user groups](user-groups-about.md).
virtual-wan User Groups Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-radius.md
description: Learn how to configure RADIUS/NPS for user groups to assign IP addr
Previously updated : 09/22/2022 Last updated : 10/21/2022
-# RADIUS - Configure NPS for vendor-specific attributes - P2S user groups (preview)
+# RADIUS - Configure NPS for vendor-specific attributes - P2S user groups - Preview
The following section describes how to configure Windows Server Network Policy Server (NPS) to authenticate users and respond to Access-Request messages with the Vendor Specific Attribute (VSA) used for user group support in Virtual WAN point-to-site VPN. The following steps assume that your Network Policy Server is already registered to Active Directory. The steps may vary depending on the vendor/version of your NPS server.
The following steps describe setting up a single Network Policy on the NPS server.
:::image type="content" source="./media/user-groups-radius/configure-settings.png" alt-text="Screenshot of the Configure Settings page." lightbox="./media/user-groups-radius/configure-settings.png":::
- 1. On the **Add Vendor Specific Attribute** page, scroll to select **Vendor-Specific**.
+1. On the **Add Vendor Specific Attribute** page, scroll to select **Vendor-Specific**.
:::image type="content" source="./media/user-groups-radius/vendor-specific.png" alt-text="Screenshot of the Add Vendor Specific Attribute page with Vendor-Specific selected." lightbox="./media/user-groups-radius/vendor-specific.png":::
virtual-wan Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md
You can also find the latest Azure Virtual WAN updates and subscribe to the RSS
| Type |Area |Name |Description | Date added | Limitations | | ||||||
+|Feature| Routing |[Virtual hub routing preference](about-virtual-hub-routing-preference.md)|Hub routing preference gives you more control over your infrastructure by allowing you to select how your traffic is routed when a virtual hub router learns multiple routes across S2S VPN, ER and SD-WAN NVA connections. |October 2022||
+|Feature| Routing|[Bypass next hop IP for workloads within a spoke VNet connected to the virtual WAN hub generally available](how-to-virtual-hub-routing.md)|Bypassing next hop IP for workloads within a spoke VNet connected to the virtual WAN hub lets you deploy and access other resources in the VNet with your NVA without any additional configuration.|October 2022||
+| Feature| Network Virtual Appliances (NVA)/Integrated Third-party solutions in Virtual WAN hubs| [Fortinet SD-WAN](https://docs.fortinet.com/document/fortigate-public-cloud/7.2.2/azure-vwan-sd-wan-deployment-guide/12818/deployment-overview)| General availability of Fortinet SD-WAN solution in Virtual WAN. Next-Generation Firewall use cases in preview.| October 2022| SD-WAN solution generally available, Next Generation Firewall use cases in preview.|
|Feature |ExpressRoute | [ExpressRoute circuit page now shows vWAN connection](virtual-wan-expressroute-portal.md)|| August 2022|| |Feature | Site-to-site VPN | [BGP dashboard](monitor-bgp-dashboard.md)| Using the BGP dashboard, you can monitor BGP peers, advertised routes, and learned routes. The BGP dashboard is available for site-to-site VPNs that are configured to use BGP. |August 2022| | |Feature|Branch connectivity/Site-to-site VPN|[Multi-APIPA BGP](virtual-wan-site-to-site-portal.md)| Ability to specify multiple custom BGP IPs for VPN gateway instances in vWAN. |June 2022|Currently only available via portal. (Not available yet in PowerShell)|
web-application-firewall Waf Front Door Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-create-portal.md
First, create a basic WAF policy with managed Default Rule Set (DRS) by using th
:::image type="content" source="../media/waf-front-door-create-portal/basic.png" alt-text="Screenshot of the Create a W A F policy page, with a Review + create button and list boxes for the subscription, resource group, and policy name.":::
-1. Select **Association**, and then select **+ Associate a Front door profile**, enter the following settings, and then select **Add**:
+1. Select the **Association** tab, select **+ Associate a Front door profile**, enter the following settings, and then select **Add**:
| Setting | Value | | | |
- | Front door profile | Select your Front Door profile name. |
+ | Front Door profile | Select your Front Door profile name. |
| Domains | Select the domains you want to associate the WAF policy to, then select **Add**. | :::image type="content" source="../media/waf-front-door-create-portal/associate-profile.png" alt-text="Screenshot of the associate a Front Door profile page.":::
First, create a basic WAF policy with managed Default Rule Set (DRS) by using th
### Change mode When you create a WAF policy, by default, it is in **Detection** mode. In **Detection** mode, WAF doesn't block any requests; instead, requests matching the WAF rules are logged in the WAF logs.
-To see WAF in action, you can change the mode settings from **Detection** to **Prevention**. In **Prevention** mode, requests that match rules that are defined in Default Rule Set (DRS) are blocked and logged at WAF logs.
+To see WAF in action, you can change the mode settings from **Detection** to **Prevention**. In **Prevention** mode, requests that match defined rules are blocked and logged in the WAF logs.
- :::image type="content" source="../media/waf-front-door-create-portal/policy.png" alt-text="Screenshot of the Policy settings section. The Mode toggle is set to Prevention.":::
+ :::image type="content" source="../media/waf-front-door-create-portal/policy.png" alt-text="Screenshot of the Overview page of Front Door WAF policy that shows how to switch to prevention mode.":::
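If you want to flip the mode from a script instead of the portal, a minimal sketch using the `Az.FrontDoor` module (policy and resource group names are placeholders) could be:

```powershell
# Hypothetical names; switches an existing Front Door WAF policy from Detection to Prevention.
Update-AzFrontDoorWafPolicy -ResourceGroupName 'myResourceGroup' -Name 'myWafPolicy' -Mode Prevention
```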
### Custom rules
Below is an example of configuring a custom rule to block a request if the query
### Default Rule Set (DRS)
-Azure-managed Default Rule Set (DRS) is enabled by default. Current default version is Microsoft_DefaultRuleSet_2.0. From **Managed rules** page, select **Assign** to assign a different DRS.
+Azure-managed Default Rule Set (DRS) is enabled by default for the Premium and Classic tiers of Front Door. The current default rule set for Premium Front Door is Microsoft_DefaultRuleSet_2.0, and Microsoft_DefaultRuleSet_1.1 is the current default rule set for Classic Front Door. From the **Managed rules** page, select **Assign** to assign a different DRS.
To disable an individual rule, select the **check box** in front of the rule number, and select **Disable** at the top of the page. To change action types for individual rules within the rule set, select the check box in front of the rule number, and then select **Change action** at the top of the page. +
+> [!NOTE]
+> Managed rules are only supported in Front Door Premium tier and Front Door Classic tier policies.
## Clean up resources
-When no longer needed, remove the resource group and all related resources.
+When no longer needed, delete the resource group and all related resources.
## Next steps
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
The following rule groups and rules are available when using Web Application Fir
|932110|Remote Command Execution: Windows Command Injection| |932115|Remote Command Execution: Windows Command Injection| |932120|Remote Command Execution: Windows PowerShell Command Found|
-|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) Found|
+|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)) Found|
|932140|Remote Command Execution: Windows FOR/IF Command Found| |932150|Remote Command Execution: Direct Unix Command Execution| |932160|Remote Command Execution: Unix Shell Code Found|
The following rule groups and rules are available when using Web Application Fir
|941180|Node-Validator Blacklist Keywords| |941190|XSS Using style sheets| |941200|XSS using VML frames|
-|941210|XSS using obfuscated JavaScript|
+|941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889))|
|941220|XSS using obfuscated VB Script| |941230|XSS using 'embed' tag| |941240|XSS using 'import' or 'implementation' attribute|
The following rule groups and rules are available when using Web Application Fir
|932110|Remote Command Execution: Windows Command Injection| |932115|Remote Command Execution: Windows Command Injection| |932120|Remote Command Execution: Windows PowerShell Command Found|
-|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) Found|
+|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)) Found|
|932140|Remote Command Execution: Windows FOR/IF Command Found| |932150|Remote Command Execution: Direct Unix Command Execution| |932160|Remote Command Execution: Shellshock (CVE-2014-6271)|
The following rule groups and rules are available when using Web Application Fir
|941180|Node-Validator Blacklist Keywords| |941190|IE XSS Filters - Attack Detected.| |941200|IE XSS Filters - Attack Detected.|
-|941210|IE XSS Filters - Attack Detected.|
+|941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)) found.|
|941220|IE XSS Filters - Attack Detected.| |941230|IE XSS Filters - Attack Detected.| |941240|IE XSS Filters - Attack Detected.|
The following rule groups and rules are available when using Web Application Fir
|932110|Remote Command Execution: Windows Command Injection| |932115|Remote Command Execution: Windows Command Injection| |932120|Remote Command Execution: Windows PowerShell Command Found|
-|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) Found|
+|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)) Found|
|932140|Remote Command Execution: Windows FOR/IF Command Found| |932150|Remote Command Execution: Direct Unix Command Execution| |932160|Remote Command Execution: Unix Shell Code Found|
The following rule groups and rules are available when using Web Application Fir
|941180|Node-Validator Blacklist Keywords| |941190|XSS Using style sheets| |941200|XSS using VML frames|
-|941210|XSS using obfuscated JavaScript|
+|941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889))|
|941220|XSS using obfuscated VB Script| |941230|XSS using 'embed' tag| |941240|XSS using 'import' or 'implementation' attribute|
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
The following rule groups and rules are available when using Web Application Fir
|932110|Remote Command Execution: Windows Command Injection| |932115|Remote Command Execution: Windows Command Injection| |932120|Remote Command Execution: Windows PowerShell Command Found|
-|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) Found|
+|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)) Found|
|932140|Remote Command Execution: Windows FOR/IF Command Found| |932150|Remote Command Execution: Direct Unix Command Execution| |932160|Remote Command Execution: Unix Shell Code Found|
The following rule groups and rules are available when using Web Application Fir
|941180|Node-Validator Blacklist Keywords| |941190|XSS Using style sheets| |941200|XSS using VML frames|
-|941210|XSS using obfuscated JavaScript|
+|941210|XSS using obfuscated JavaScript or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889))|
|941220|XSS using obfuscated VB Script| |941230|XSS using 'embed' tag| |941240|XSS using 'import' or 'implementation' attribute|
The following rule groups and rules are available when using Web Application Fir
|932110|Remote Command Execution: Windows Command Injection| |932115|Remote Command Execution: Windows Command Injection| |932120|Remote Command Execution = Windows PowerShell Command Found|
-|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) Found|
+|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)) Found|
|932140|Remote Command Execution = Windows FOR/IF Command Found| |932150|Remote Command Execution: Direct Unix Command Execution| |932160|Remote Command Execution = Unix Shell Code Found|
The following rule groups and rules are available when using Web Application Fir
|941180|Node-Validator Blocklist Keywords| |941190|XSS using style sheets| |941200|XSS using VML frames|
-|941210|XSS using obfuscated JavaScript|
+|941210|XSS using obfuscated JavaScript or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889))|
|941220|XSS using obfuscated VB Script| |941230|XSS using 'embed' tag| |941240|XSS using 'import' or 'implementation' attribute|
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description| ||| |932120|Remote Command Execution = Windows PowerShell Command Found|
-|932130|**Application Gateway WAF v2**: Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) Found<br><br>**Application Gateway WAF v1**: Remote Command Execution: Unix Shell Expression|
+|932130|**Application Gateway WAF v2**: Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)) Found<br><br>**Application Gateway WAF v1**: Remote Command Execution: Unix Shell Expression|
|932140|Remote Command Execution = Windows FOR/IF Command Found| |932160|Remote Command Execution = Unix Shell Code Found| |932170|Remote Command Execution = Shellshock (CVE-2014-6271)|
The following rule groups and rules are available when using Web Application Fir
|941180|Node-Validator Blocklist Keywords| |941190|XSS using style sheets| |941200|XSS using VML frames|
-|941210|XSS using obfuscated JavaScript|
+|941210|XSS using obfuscated JavaScript or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889))|
|941220|XSS using obfuscated VB Script| |941230|XSS using 'embed' tag| |941240|XSS using 'import' or 'implementation' attribute|
web-application-firewall Application Gateway Web Application Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-web-application-firewall-portal.md
Previously updated : 10/20/2022 Last updated : 10/21/2022 #Customer intent: As an IT administrator, I want to use the Azure portal to set up an application gateway with Web Application Firewall so I can protect my applications.
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
### Frontends tab
-1. On the **Frontends** tab, verify **Frontend IP address type** is set to **Public**. <br>You can configure the Frontend IP to be Public or Private as per your use case. In this example, you'll choose a Public Frontend IP.
+1. On the **Frontends** tab, verify **Frontend IP address type** is set to **Public**. <br>You can configure the Frontend IP to be **Public** or **Both** as per your use case. In this example, you'll choose a Public Frontend IP.
> [!NOTE]
- > For the Application Gateway v2 SKU, you can only choose **Public** frontend IP configuration. Private frontend IP configuration is currently not enabled for this v2 SKU.
+ > For the Application Gateway v2 SKU, the **Public** and **Both** Frontend IP address types are supported today. A **Private**-only frontend IP configuration is not currently supported.
2. Choose **Add new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**.
web-application-firewall Custom Waf Rules Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/custom-waf-rules-overview.md
Currently, must be **MatchRule**.
Must be one of the variables: -- RemoteAddr – IP Address/Range of the remote computer connection
+- RemoteAddr – IPv4 Address/Range of the remote computer connection
- RequestMethod – HTTP Request method (GET, POST, PUT, DELETE, and so on.) - QueryString – Variable in the URI - PostArgs – Arguments sent in the POST body. Custom Rules using this match variable are only applied if the 'Content-Type' header is set to 'application/x-www-form-urlencoded' and 'multipart/form-data'. Additional content type of `application/json` is supported with CRS version 3.2 or greater, bot protection rule set, and geo-match custom rules.
Describes the field of the matchVariable collection. For example, if the matchVa
Must be one of the following operators: -- IPMatch - only used when Match Variable is *RemoteAddr*
+- IPMatch - only used when Match Variable is *RemoteAddr*, and only supports IPv4
- Equal – input is the same as the MatchValue - Any – It should not have a MatchValue. It is recommended for Match Variable with a valid Selector. - Contains