Updates from: 11/19/2022 02:13:13
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Hr Attribute Retrieval Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-attribute-retrieval-issues.md
**Suggested workarounds** * **Option 1: Using Workday Provisioning Groups**: Check if the calculated field value can be represented as a provisioning group in Workday. Using the same logic that is used for the calculated field, your Workday Admin may be able to assign a Provisioning Group to the user. Refer to this Workday doc (requires Workday login): [Set Up Account Provisioning Groups](https://doc.workday.com/reader/3DMnG~27o049IYFWETFtTQ/keT9jI30zCzj4Nu9pJfGeQ). Once configured, this Provisioning Group assignment can be [retrieved in the provisioning job](../app-provisioning/workday-integration-reference.md#example-3-retrieving-provisioning-group-assignments) and used in attribute mappings and scoping filters.
-* **Option 2: Using Workday Custom IDs**: Check if the calculated field value can be represented as a Custom ID on the Worker Profile. Use `Maintain Custom ID Type` task in Workday to define a new type and populate values in this custom ID. Make sure the [Workday ISU account used for the integration](../saas-apps/workday-inbound-tutorial.md#configuring-domain-security-policy-permissions) has domain security permission for `Person Data: ID Information`. For example, you can define "External_Payroll_ID" as a custom ID in Workday and retrieved it using the XPATH: `wd:Worker/wd:Worker_Data/wd:Personal_Data/wd:Identification_Data/wd:Custom_ID/wd:Custom_ID_Data[wd:ID_Type_Reference/wd:ID[@wd:type=\"Custom_ID_Type_ID\"]=\"External_Payroll_ID\"]/wd:ID/text()`
+* **Option 2: Using Workday Custom IDs**: Check if the calculated field value can be represented as a Custom ID on the Worker Profile. Use `Maintain Custom ID Type` task in Workday to define a new type and populate values in this custom ID. Make sure the [Workday ISU account used for the integration](../saas-apps/workday-inbound-tutorial.md#configuring-domain-security-policy-permissions) has domain security permission for `Person Data: ID Information`.
+ * Example 1: Let's say you have a calculated field called Payroll ID. You can define "External_Payroll_ID" as a custom ID in Workday and retrieve it using an XPATH that uses "Custom_ID_Type_ID" as the selecting mechanism: `wd:Worker/wd:Worker_Data/wd:Personal_Data/wd:Identification_Data/wd:Custom_ID/wd:Custom_ID_Data[string(wd:ID_Type_Reference/wd:ID[@wd:type='Custom_ID_Type_ID'])='External_Payroll_ID']/wd:ID/text()`
+ * Example 2: Let's say you have a calculated field called Badge ID. You can define "Badge ID" as a custom ID in Workday and retrieve the "Descriptor" attribute corresponding to it with an XPATH that uses "wd:ID_Type_Reference/@wd:Descriptor" as the selecting mechanism: `wd:Worker/wd:Worker_Data/wd:Personal_Data/wd:Identification_Data/wd:Custom_ID[string(wd:Custom_ID_Data/wd:ID_Type_Reference/@wd:Descriptor)='BADGE ID']/wd:Custom_ID_Reference/@wd:Descriptor`
## Next steps
active-directory Concept Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods-manage.md
+
+ Title: Manage authentication methods - Azure Active Directory
+description: Learn about the authentication methods policy and different ways to manage authentication methods.
+++++ Last updated : 11/17/2022++++++++
+# Customer intent: As an identity administrator, I want to understand what authentication options are available in Azure AD and how I can manage them.
+
+# Manage authentication methods for Azure AD
+
+Azure Active Directory (Azure AD) allows the use of a range of authentication methods to support a wide variety of sign-in scenarios. Administrators can specifically configure each method to meet their goals for user experience and security. This topic explains how to manage authentication methods for Azure AD, and how configuration options affect user sign-in and password reset scenarios.
+
+## Authentication methods policy
+
+The Authentication methods policy is the recommended way to manage authentication methods, including modern methods like passwordless authentication. [Authentication Policy Administrators](../roles/permissions-reference.md#authentication-policy-administrator) can edit this policy to enable authentication methods for specific users and groups.
+
+Methods enabled in the Authentication methods policy can typically be used anywhere in Azure AD - for both authentication and password reset scenarios. The exception is that some methods are inherently limited to use in authentication, such as FIDO2 and Windows Hello for Business, and others are limited to use in password reset, such as security questions. For more control over which methods are usable in a given authentication scenario, consider using the **Authentication Strengths** feature.
+
+Most methods also have configuration parameters to more precisely control how that method can be used. For example, if you enable **Phone call**, you can also specify whether an office phone can be used in addition to a mobile phone.
+
+Or let's say you want to enable passwordless authentication with Microsoft Authenticator. You can set extra parameters like showing the user sign-in location or the name of the app being signed into. These options provide more context for users when they sign in and help prevent accidental MFA approvals.
+
+To manage the Authentication methods policy, click **Security** > **Authentication methods** > **Policies**.
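If you prefer to audit or script the same settings, the Authentication methods policy is also exposed through Microsoft Graph. The following is a minimal sketch using the Microsoft Graph PowerShell SDK; it assumes the `Policy.Read.All` permission and uses the beta endpoint, so verify the URI and property names against the Graph reference linked in the next steps.

```powershell
# Sketch: read the Authentication methods policy and list each method's state.
# Assumes the Microsoft Graph PowerShell SDK is installed and Policy.Read.All is consented (beta endpoint).
Connect-MgGraph -Scopes "Policy.Read.All"

$policy = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy"

# Each configuration is one method (MicrosoftAuthenticator, Fido2, Sms, and so on) and whether it's enabled.
foreach ($method in $policy.authenticationMethodConfigurations) {
    "{0}: {1}" -f $method.id, $method.state
}
```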
++
+Only the [converged registration experience](concept-registration-mfa-sspr-combined.md) is aware of the Authentication methods policy. Users in scope of the Authentication methods policy but not the converged registration experience won't see the correct methods to register.
+
+>[!NOTE]
+>Some pieces of the Authentication methods policy experience are in preview. This includes management of Email OTP, third party software OATH tokens, SMS, and voice call as noted in the portal. Also, use of the Authentication methods policy alone with the legacy MFA and SSPR policies disabled is a preview experience.
+
+## Legacy MFA and SSPR policies
+
+Two other policies, located in **Multifactor authentication** settings and **Password reset** settings, provide a legacy way to manage some authentication methods for all users in the tenant. You can't control who uses an enabled authentication method, or how the method can be used. A [Global Administrator](../roles/permissions-reference.md#global-administrator) is needed to manage these policies.
+
+>[!NOTE]
+>Hardware OATH tokens and security questions can only be enabled today by using these legacy policies. In the future, these methods will be available in the Authentication methods policy.
+
+To manage the legacy MFA policy, click **Security** > **Multifactor Authentication** > **Additional cloud-based multifactor authentication settings**.
++
+To manage authentication methods for self-service password reset (SSPR), click **Password reset** > **Authentication methods**. The **Mobile phone** option in this policy allows either voice call or SMS to be sent to a mobile phone. The **Office phone** option allows only voice call.
++
+## How policies work together
+
+Settings aren't synchronized between the policies, which allows administrators to manage each policy independently. Azure AD respects the settings in all of the policies so a user who is enabled for an authentication method in _any_ policy can register and use that method. To prevent users from using a method, it must be disabled in all policies.
+
+Let's walk through an example where a user who belongs to the Accounting group wants to register Microsoft Authenticator. The registration process first checks the Authentication methods policy. If the Accounting group is enabled for Microsoft Authenticator, the user can register it.
+
+If not, the registration process checks the legacy MFA policy. In that policy, any user can register Microsoft Authenticator if one of these settings is enabled for MFA:
+
+- **Notification through mobile app**
+- **Verification code from mobile app or hardware token**
+
+If the user can't register Microsoft Authenticator based on either of those policies, the registration process checks the legacy SSPR policy. In that policy too, a user can register Microsoft Authenticator if the user is enabled for SSPR and any of these settings are enabled:
+
+- **Mobile app notification**
+- **Mobile app code**
+
+For users who are enabled for **Mobile phone** for SSPR, the independent control between policies can impact sign-in behavior. Where the other policies have separate options for SMS and voice call, the **Mobile phone** for SSPR enables both options. As a result, anyone who uses **Mobile phone** for SSPR can also use voice call for password reset, even if the other policies don't allow phone calls.
+
+Similarly, let's suppose you enable **Phone call** for a group. After you enable it, you find that even users who aren't group members can sign in with a voice call. In this case, it's likely those users are enabled for **Mobile phone** in the legacy SSPR policy or **Call to phone** in the legacy MFA policy.
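As a rough mental model of this behavior, a method is effectively available to a user if it's enabled for them in *any* of the three policies. The following is a hypothetical sketch of that decision logic only (not a real cmdlet); the three boolean parameters are values you'd derive from your own audit of each policy.

```powershell
# Hypothetical helper that models the "enabled in any policy" rule described above.
function Test-MethodAvailable {
    param(
        [bool]$EnabledInAuthMethodsPolicy,
        [bool]$EnabledInLegacyMfaPolicy,
        [bool]$EnabledInLegacySsprPolicy
    )
    # A method is usable if any policy enables it for the user;
    # to block it, the method must be disabled in all three policies.
    return $EnabledInAuthMethodsPolicy -or $EnabledInLegacyMfaPolicy -or $EnabledInLegacySsprPolicy
}

# Example: a method enabled only in the legacy SSPR policy is still available to the user.
Test-MethodAvailable -EnabledInAuthMethodsPolicy $false -EnabledInLegacyMfaPolicy $false -EnabledInLegacySsprPolicy $true
```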
+
+## Migration between policies
+
+The Authentication methods policy provides a migration path toward unified administration of all authentication methods. All desired methods can be enabled in the Authentication methods policy. Methods in the legacy MFA and SSPR policies can be disabled. Migration has three settings to let you move at your own pace, and avoid problems with sign-in or SSPR during the transition. After migration is complete, you'll centralize control over authentication methods for both sign-in and SSPR in a single place, and the legacy MFA and SSPR policies will be disabled.
+
+>[!Note]
+>Controls in the Authentication methods policy for Hardware OATH tokens and security questions are coming soon, but not yet available. If you're using hardware OATH tokens, which are currently in public preview, hold off on migrating OATH tokens and don't complete the migration process. If you're using security questions, and don't want to disable them, make sure to keep them enabled in the legacy SSPR policy until the new control is available.
+
+To view the migration options, open the Authentication methods policy and click **Manage migration**.
++
+The following table describes each option.
+
+| Option | Description |
+|:---|:---|
+| Pre-migration | The Authentication methods policy is used only for authentication.<br>Legacy policy settings are respected. |
+| Migration in Progress | The Authentication methods policy is used for authentication and SSPR.<br>Legacy policy settings are respected. |
+| Migration Complete | Only the Authentication methods policy is used for authentication and SSPR.<br>Legacy policy settings are ignored. |
+
+Tenants are set to either Pre-migration or Migration in Progress by default, depending on their current state. At any time, you can change to another option. If you move to Migration Complete, and then choose to roll back to an earlier state, we'll ask why so we can evaluate performance of the product.
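If you want to check the current state from a script instead of the portal, the migration setting is surfaced on the same Graph resource. A minimal sketch follows; the `policyMigrationState` property name and the beta endpoint are assumptions to confirm against the Microsoft Graph reference.

```powershell
# Sketch: read the tenant's migration state (preMigration, migrationInProgress, or migrationComplete).
# Assumes Policy.Read.All and that the beta endpoint exposes policyMigrationState.
Connect-MgGraph -Scopes "Policy.Read.All"

$policy = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy"

$policy.policyMigrationState
```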
++
+## Known issues
+
+* Currently, all users must be enabled for at least one MFA method that isn't passwordless and that can be registered in interrupt mode. Possible methods include Microsoft Authenticator, SMS, voice call, and software OATH/mobile app code. The method(s) can be enabled in any policy. If a user isn't eligible for at least one of those methods, the user sees an error during registration and when visiting My Security Info. We're working to improve this experience to enable fully passwordless configurations.
+
+## Next steps
+
+- [How to migrate MFA and SSPR policy settings to the Authentication methods policy](how-to-authentication-methods-manage.md)
+- [What authentication and verification methods are available in Azure Active Directory?](concept-authentication-methods.md)
+- [How Azure AD Multi-Factor Authentication works](concept-mfa-howitworks.md)
+- [Microsoft Graph REST API](/graph/api/resources/authenticationmethods-overview)
active-directory How To Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-methods-manage.md
+
+ Title: How to migrate to the Authentication methods policy - Azure Active Directory
+description: Learn about how to centrally manage multifactor authentication (MFA) and self-service password reset (SSPR) settings in the Authentication methods policy.
+++++ Last updated : 11/17/2022++++++++
+# Customer intent: As an identity administrator, I want to understand what authentication options are available in Azure AD and how I can manage them.
+
+# How to migrate MFA and SSPR policy settings to the Authentication methods policy for Azure AD
+
+You can migrate Azure Active Directory (Azure AD) policy settings that separately control multifactor authentication (MFA) and self-service password reset (SSPR) to unified management with the Authentication methods policy. You can migrate policy settings on your own schedule, and the process is fully reversible. You can continue to use tenant-wide MFA and SSPR policies while you configure authentication methods more precisely for users and groups in the Authentication methods policy. You can complete the migration whenever you're ready to manage all authentication methods together in the Authentication methods policy.
+
+For more information about how these policies work together during migration, see [Manage authentication methods for Azure AD](concept-authentication-methods-manage.md).
+
+## Before you begin
+
+Begin by doing an audit of your existing policy settings for each authentication method that's available for users. If you roll back during migration, you'll want a record of the authentication method settings from each of these policies:
+
+- MFA policy
+- SSPR policy (if used)
+- Authentication methods policy (if used)
+
+If you aren't using SSPR and aren't yet using the Authentication methods policy, you only need to get settings from the MFA policy.
+
+### MFA policy
+
+Start by documenting which methods are available in the legacy MFA policy. Sign in as a [Global Administrator](../roles/permissions-reference.md#global-administrator), and click **Security** > **Multifactor Authentication** > **Additional cloud-based multifactor authentication settings** to view the settings. These settings are tenant-wide, so there's no need for user or group information.
+
+For each method, note whether or not it's enabled for the tenant. The following table lists methods available in the legacy MFA policy and corresponding methods in the Authentication method policy.
+
+| Multifactor authentication policy | Authentication method policy |
+|---|---|
+| Call to phone | Phone calls |
+| Text message to phone | SMS<br>Microsoft Authenticator |
+| Notification through mobile app | Microsoft Authenticator |
+| Verification code from mobile app or hardware token | Third party software OATH tokens<br>Hardware OATH tokens (not yet available)<br>Microsoft Authenticator |
+
+### SSPR policy
+
+To get the authentication methods available in the legacy SSPR policy, click **Password reset** > **Authentication methods**. The following table lists the available methods in the legacy SSPR policy and corresponding methods in the Authentication method policy. Record which users are in scope for SSPR (either all users, one specific group, or no users) and the authentication methods they can use. Security questions aren't yet available to manage in the Authentication methods policy, so make sure you record them now for use later when they become available.
+
+| SSPR authentication methods | Authentication method policy |
+|---|---|
+| Mobile app notification | Microsoft Authenticator |
+| Mobile app code | Microsoft Authenticator<br>Software OATH tokens |
+| Email | Email OTP |
+| Mobile phone | Phone calls<br>SMS |
+| Office phone | Phone calls |
+| Security questions | Not yet available; copy questions for later use |
+
+### Authentication methods policy
+
+To check settings in the Authentication methods policy, sign in as an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) and click **Security** > **Authentication methods** > **Policies**. A new tenant has all methods **Off** by default, which makes migration easier because legacy policy settings don't need to be merged with existing settings.
+
+The Authentication methods policy has other methods that aren't available in the legacy policies, such as FIDO2 security key, Temporary Access Pass, and Azure AD certificate-based authentication. These methods aren't in scope for migration and you won't need to make any changes to them if you have them configured already.
+
+If you've enabled other methods in the Authentication methods policy, write down users and groups who can or can't use those methods, and any configuration parameters that govern how the method can be used. For example, you can configure Microsoft Authenticator to provide location in push notifications. Make a record of which users and groups are enabled for similar configuration parameters associated with each method.
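To make that record easier to keep (and to restore if you roll back), you can snapshot the current method configurations to a file. Here's a minimal sketch using the Microsoft Graph PowerShell SDK, assuming the `Policy.Read.All` permission and the beta endpoint:

```powershell
# Sketch: export the Authentication methods policy (states, targets, and per-method parameters) for your audit.
Connect-MgGraph -Scopes "Policy.Read.All"

$policy = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy"

# Save the full method configurations as JSON alongside your notes on the legacy policies.
$policy.authenticationMethodConfigurations |
    ConvertTo-Json -Depth 10 |
    Out-File -FilePath .\auth-methods-policy-audit.json
```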
+
+## Start the migration
+
+After you capture available authentication methods from the policies you're currently using, you can start the migration. Open the Authentication methods policy, click **Manage migration**, and click **Migration in progress**. You'll want to set this option before you make any changes as it will apply your new policy to both sign-in and password reset scenarios.
++
+The next step is to update the Authentication methods policy to match your audit. You'll want to review each method one-by-one. If your tenant is only using the legacy MFA policy, and isn't using SSPR, the update is straightforward - you can enable each method for all users and precisely match your existing policy.
+
+If your tenant is using both MFA and SSPR, you'll need to consider each method:
+
+- If the method is enabled in both legacy policies, enable it for all users in the Authentication methods policy.
+- If the method is off in both legacy policies, leave it off for all users in the Authentication methods policy.
+- If the method is enabled only in one policy, you'll need to decide whether or not it should be available in all situations.
+
+Where the policies match, you can easily match your current state. Where there's a mismatch, you will need to decide whether to enable or disable the method altogether. For example, suppose **Notification through mobile app** is enabled to allow push notifications for MFA. In the legacy SSPR policy, the **Mobile app notification** method isn't enabled. In that case, the legacy policies allow push notifications for MFA but not SSPR.
+
+In the Authentication methods policy, you'll then need to choose whether to enable **Microsoft Authenticator** for both SSPR and MFA or disable it (we recommend enabling Microsoft Authenticator).
+
+As you update each method in the Authentication methods policy, some methods have configurable parameters that allow you to control how that method can be used. For example, if you enable **Phone calls** as authentication method, you can choose to allow both office phone and mobile phones, or mobile only. Step through the process to configure each authentication method from your audit.
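As an illustration of one such parameter, the following sketch enables **Phone calls** and allows office phones through Microsoft Graph. The configuration ID (`Voice`) and the `isOfficePhoneAllowed` property are assumptions based on the beta schema, so verify them against the Graph reference before relying on this.

```powershell
# Sketch: enable Phone calls for the tenant and allow office phones in addition to mobile phones.
# Assumes Policy.ReadWrite.AuthenticationMethod and the beta schema; property names may differ.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

$body = @{
    "@odata.type"        = "#microsoft.graph.voiceAuthenticationMethodConfiguration"
    state                = "enabled"
    isOfficePhoneAllowed = $true
}

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/Voice" `
    -Body ($body | ConvertTo-Json)
```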
+
+Note that you aren't required to match your existing policy! This is a great opportunity to review your enabled methods and choose a new policy that maximizes security and usability for your tenant. Just note that disabling methods for users who are already using them may require those users to register new authentication methods and prevent them from using previously registered methods.
+
+The next sections cover specific migration guidance for each method.
+
+### Email one-time passcode
+
+There are two controls for **Email one-time passcode**:
+
+The include and exclude targeting in the configuration's **Enable and target** section enables email OTP for members of the tenant for use in **Password reset**.
+
+There's a separate **Allow external users to use email OTP** control in the **Configure** section that controls use of email OTP for sign-in by B2B users. The authentication method can't be disabled if this control is enabled.
+
+### Microsoft Authenticator
+
+If **Notification through mobile app** is enabled in the legacy MFA policy, enable **Microsoft Authenticator** for **All users** in the Authentication methods policy. Set the authentication mode to **Any** to allow either push notifications or passwordless authentication.
+
+If **Verification code from mobile app or hardware token** is enabled in the legacy MFA policy, set **Allow use of Microsoft Authenticator OTP** to **Yes**.
++
+### SMS and phone calls
+
+The legacy MFA policy has separate controls for **SMS** and **Phone calls**. The legacy SSPR policy has a **Mobile phone** control that enables mobile phones for both SMS and voice calls, and an **Office phone** control that enables an office phone only for voice calls.
+
+The Authentication methods policy has controls for **SMS** and **Phone calls**, matching the legacy MFA policy. If your tenant is using SSPR and **Mobile phone** is enabled, you'll want to enable both **SMS** and **Phone calls** in the Authentication methods policy. If your tenant is using SSPR and **Office phone** is enabled, you'll want to enable **Phone calls** in the Authentication methods policy, and ensure that the **Office phone** option is enabled.
+
+### OATH tokens
+
+The OATH token controls in the legacy MFA and SSPR policies were single controls that enabled the use of three different types of OATH tokens: the Microsoft Authenticator app, third-party software OATH TOTP code generator apps, and hardware OATH tokens.
+
+The Authentication methods policy has granular control with separate controls for each type of OATH token. Use of OTP from Microsoft Authenticator is controlled by the **Allow use of Microsoft Authenticator OTP** control in the **Microsoft Authenticator** section of the policy. Third-party apps are controlled by the **Third party software OATH tokens** section of the policy.
+
+Another control for **Hardware OATH tokens** is coming soon. If you're using hardware OATH tokens, now in public preview, you should hold off on migrating OATH tokens and don't complete the migration process.
+
+### Security questions
+
+A control for **Security questions** is coming soon. If you're using security questions, and don't want to disable them, make sure to keep them enabled in the legacy SSPR policy until the new control is available. You _can_ finish migration as described in the next section with security questions enabled.
+
+## Finish the migration
+
+After you update the Authentication methods policy, go through the legacy MFA and SSPR policies and remove each authentication method one-by-one. Test and validate the changes for each method.
+
+When you determine that MFA and SSPR work as expected and you no longer need the legacy MFA and SSPR policies, you can change the migration process to **Migration Complete**. In this mode, Azure AD only follows the Authentication methods policy. No changes can be made to the legacy policies if **Migration Complete** is set, except for security questions in the SSPR policy. If you need to go back to the legacy policies for some reason, you can move the migration state back to **Migration in Progress** at any time.
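If you script this step instead of using the portal, the same switch can be made through Microsoft Graph. This is a hedged sketch, assuming the migration state is exposed as `policyMigrationState` on the beta endpoint; to roll back, patch `migrationInProgress` instead.

```powershell
# Sketch: mark migration as complete once MFA and SSPR are confirmed to work as expected.
# Assumes Policy.ReadWrite.AuthenticationMethod and that policyMigrationState is available on the beta endpoint.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy" `
    -Body (@{ policyMigrationState = "migrationComplete" } | ConvertTo-Json)
```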
++
+## Next steps
+
+- [Manage authentication methods for Azure AD](concept-authentication-methods-manage.md)
+- [What authentication and verification methods are available in Azure Active Directory?](concept-authentication-methods.md)
+- [How Azure AD Multi-Factor Authentication works](concept-mfa-howitworks.md)
+- [Microsoft Graph REST API](/graph/api/resources/authenticationmethods-overview)
++
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
You can enable passwordless phone sign-in for multiple accounts in Microsoft Aut
Previously, admins might not require passwordless sign-in for users with multiple accounts because it requires them to carry more devices for sign-in. By removing the limitation of one user sign-in from a device, admins can more confidently encourage users to register passwordless phone sign-in and use it as their default sign-in method.
-The Azure AD accounts can be in the same tenant or different tenants. Guest accounts aren't supported for multiple account sign-in from one device.
+The Azure AD accounts can be in the same tenant or different tenants. Guest accounts aren't supported for multiple account sign-ins from one device.
>[!NOTE] >Multiple accounts on iOS is currently in public preview. Some features might not be supported or have limited capabilities. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
The Azure AD accounts can be in the same tenant or different tenants. Guest acco
To use passwordless phone sign-in with Microsoft Authenticator, the following prerequisites must be met: -- Recommended: Azure AD Multi-Factor Authentication, with push notifications allowed as a verification method. Push notifications to your smartphone or tablet help the Authenticator app to prevent unauthorized access to accounts and stop fraudulent transactions. The Authenticator app automatically generates codes when set up to do push notifications so a user has a backup sign-in method even if their device doesn't have connectivity.
+- Recommended: Azure AD Multi-Factor Authentication, with push notifications allowed as a verification method. Push notifications to your smartphone or tablet help the Authenticator app to prevent unauthorized access to accounts and stop fraudulent transactions. The Authenticator app automatically generates codes when set up to do push notifications. A user has a backup sign-in method even if their device doesn't have connectivity.
- Latest version of Microsoft Authenticator installed on devices running iOS or Android. - For Android, the device that runs Microsoft Authenticator must be registered to an individual user. We're actively working to enable multiple accounts on Android. - For iOS, the device must be registered with each tenant where it's used to sign in. For example, the following device must be registered with Contoso and Wingtiptoys to allow all accounts to sign in:
To use passwordless authentication in Azure AD, first enable the combined regist
## Enable passwordless phone sign-in authentication methods
-Azure AD lets you choose which authentication methods can be used during the sign-in process. Users then register for the methods they'd like to use. The **Microsoft Authenticator** authentication method policy manages both the traditional push MFA method, as well as the passwordless authentication method.
+Azure AD lets you choose which authentication methods can be used during the sign-in process. Users then register for the methods they'd like to use. The **Microsoft Authenticator** authentication method policy manages both the traditional push MFA method and the passwordless authentication method.
> [!NOTE] > If you enabled Microsoft Authenticator passwordless sign-in using Azure AD PowerShell, it was enabled for your entire directory. If you enable using this new method, it supersedes the PowerShell policy. We recommend you enable for all users in your tenant via the new **Authentication Methods** menu, otherwise users who aren't in the new policy can't sign in without a password.
To enable the authentication method for passwordless phone sign-in, complete the
1. Under **Microsoft Authenticator**, choose the following options: 1. **Enable** - Yes or No 1. **Target** - All users or Select users
-1. Each added group or user is enabled by default to use Microsoft Authenticator in both passwordless and push notification modes ("Any" mode). To change this, for each row:
+1. Each added group or user is enabled by default to use Microsoft Authenticator in both passwordless and push notification modes ("Any" mode). To change the mode, for each row:
1. Browse to **...** > **Configure**. 1. For **Authentication mode** - choose **Any**, or **Passwordless**. Choosing **Push** prevents the use of the passwordless phone sign-in credential. 1. To apply the new policy, click **Save**.
To enable the authentication method for passwordless phone sign-in, complete the
>[!NOTE] >If you see an error when you try to save, the cause might be the number of users or groups being added. As a workaround, replace the users and groups you are trying to add with a single group, in the same operation, and then click **Save** again.
-## User registration and management of Microsoft Authenticator
+## User registration
Users register themselves for the passwordless authentication method of Azure AD by using the following steps: 1. Browse to [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). 1. Sign in, then click **Add method** > **Authenticator app** > **Add** to add Microsoft Authenticator. 1. Follow the instructions to install and configure the Microsoft Authenticator app on your device.
-1. Select **Done** to complete Authenticator configuration.
+1. Select **Done** to complete Microsoft Authenticator configuration.
1. In **Microsoft Authenticator**, choose **Enable phone sign-in** from the drop-down menu for the account registered. 1. Follow the instructions in the app to finish registering the account for passwordless phone sign-in.
An organization can direct its users to sign in with their phones, without using
## Sign in with passwordless credential
-A user can start to utilize passwordless sign-in after all the following actions are completed:
+A user can start using passwordless sign-in after all the following actions are completed:
- An admin has enabled the user's tenant. - The user has added Microsoft Authenticator as a sign-in method. - The first time a user starts the phone sign-in process, the user performs the following steps: 1. Enters their name at the sign-in page.
After the user has utilized passwordless phone sign-in, the app continues to gui
:::image type="content" border="true" source="./media/howto-authentication-passwordless-phone/number.png" alt-text="Screenshot that shows an example of a browser sign-in using the Microsoft Authenticator app."::: +
+## Management
+
+The Authentication methods policy is the recommended way to manage Microsoft Authenticator. [Authentication Policy Administrators](../roles/permissions-reference.md#authentication-policy-administrator) can edit this policy to enable or disable Microsoft Authenticator. Admins can include or exclude specific users and groups from using it.
+
+Admins can also configure parameters to better control how Microsoft Authenticator can be used. For example, they can add location or app name to the sign-in request so users have greater context before they approve.
+
+Global Administrators can also manage Microsoft Authenticator on a tenant-wide basis by using legacy MFA and SSPR policies. These policies allow Microsoft Authenticator to be enabled or disabled for all users in the tenant. There are no options to include or exclude anyone, or control how Microsoft Authenticator can be used for sign-in.
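For admins who automate this, the same Microsoft Authenticator settings can be managed through Microsoft Graph. The following sketch targets a single group for passwordless (device-based push) sign-in; the include-target shape and the `authenticationMode` values are assumptions based on the beta schema, so confirm them in the Graph reference and replace the placeholder group ID with your own.

```powershell
# Sketch: enable Microsoft Authenticator for one group in passwordless-only (deviceBasedPush) mode.
# Assumes Policy.ReadWrite.AuthenticationMethod and the beta schema; verify property names before use.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

$body = @{
    "@odata.type"  = "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration"
    state          = "enabled"
    includeTargets = @(
        @{
            targetType         = "group"
            id                 = "<group-object-id>"   # placeholder for your group's object ID
            authenticationMode = "deviceBasedPush"     # passwordless only; use "any" to allow push MFA too
        }
    )
}

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator" `
    -Body ($body | ConvertTo-Json -Depth 5)
```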
+ ## Known Issues The following known issues exist. ### Not seeing option for passwordless phone sign-in
-In one scenario, a user can have an unanswered passwordless phone sign-in verification that is pending. Yet the user might attempt to sign in again. When this happens, the user might see only the option to enter a password.
+In one scenario, a user can have an unanswered passwordless phone sign-in verification that is pending. If the user attempts to sign in again, they might only see the option to enter a password.
-To resolve this scenario, the following steps can be used:
+To resolve this scenario, follow these steps:
1. Open Microsoft Authenticator. 2. Respond to any notification prompts.
-Then the user can continue to utilize passwordless phone sign-in.
+Then the user can continue to use passwordless phone sign-in.
### Federated Accounts
active-directory Scenario Spa Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-overview.md
Previously updated : 10/12/2021 Last updated : 11/4/2022 -
-#Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
+
+# Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
# Scenario: Single-page application
Learn all you need to build a single-page application (SPA). For instructions re
## Getting started
-If you haven't already, create your first app by completing the JavaScript SPA quickstart:
+If you haven't already, create your first app by completing the JavaScript SPA quickstart:
-[Quickstart: Single-page application](./quickstart-v2-javascript-auth-code.md)
+[Quickstart: Single-page application](./single-page-app-quickstart.md?pivots=devlang-javascript)
## Overview
The Microsoft identity platform provides **two** options to enable single-page a
- [OAuth 2.0 Authorization code flow (with PKCE)](./v2-oauth2-auth-code-flow.md). The authorization code flow allows the application to exchange an authorization code for **ID** tokens to represent the authenticated user and **Access** tokens needed to call protected APIs.
- Proof Key for Code Exchange, or _PKCE_, is an extension to the authorization code flow to prevent authorization code injection attacks. This IETF standard mitigates the threat of having an authorization code intercepted and enables secure OAuth exchange from public clients as documented in [RFC 7636](https://datatracker.ietf.org/doc/html/rfc7636). In addition, it returns **refresh** tokens that provide long-term access to resources on behalf of users without requiring interaction from those users.
+ Proof Key for Code Exchange (PKCE) is an extension to the authorization code flow to prevent authorization code injection attacks. In addition, it returns **refresh** tokens that provide long-term access to resources on behalf of users without requiring additional interaction from them.
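To make the PKCE step concrete, here's a small illustrative sketch of how a client derives the `code_verifier` and S256 `code_challenge` pair defined in RFC 7636. In a real SPA, MSAL.js generates these values for you; this PowerShell version only demonstrates the math.

```powershell
# Sketch of the RFC 7636 PKCE values; MSAL.js handles this automatically in a browser SPA.
# 1. code_verifier: a high-entropy random string (43-128 characters), base64url-encoded.
$bytes = New-Object byte[] 32
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($bytes)
$codeVerifier = [Convert]::ToBase64String($bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_')

# 2. code_challenge: BASE64URL( SHA256( code_verifier ) ), sent on the /authorize request.
$sha256 = [System.Security.Cryptography.SHA256]::Create()
$hash = $sha256.ComputeHash([System.Text.Encoding]::ASCII.GetBytes($codeVerifier))
$codeChallenge = [Convert]::ToBase64String($hash).TrimEnd('=').Replace('+', '-').Replace('/', '_')

"code_verifier:  $codeVerifier"
"code_challenge: $codeChallenge (sent with code_challenge_method=S256)"
```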
Using the authorization code flow with PKCE is the more secure and **recommended** authorization approach, not only in native and browser-based JavaScript apps, but for every other type of OAuth client. ![Single-page applications-auth](./media/scenarios/spa-app-auth.svg) -- [OAuth 2.0 implicit flow](./v2-oauth2-implicit-grant-flow.md). The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, implicit grant flow does not return a **Refresh token**.
+- [OAuth 2.0 implicit flow](./v2-oauth2-implicit-grant-flow.md). The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, implicit grant flow does not return a **Refresh token**. It is also less secure, so it's recommended to use the authorization code flow for new applications. This authentication flow does not include application scenarios that use cross-platform JavaScript frameworks such as Electron and React-Native. They require further capabilities for interaction with the native platforms.
![Single-page applications-implicit](./media/scenarios/spa-app.svg)
-This authentication flow does not include application scenarios that use cross-platform JavaScript frameworks such as Electron and React-Native. They require further capabilities for interaction with the native platforms.
- ## Specifics To enable this scenario for your application, you need:
active-directory Azure Ad Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/azure-ad-account.md
Previously updated : 10/06/2022 Last updated : 11/11/2022
+# Customer intent: As a tenant administrator, I want to add Azure AD as an identity provider for external guest users.
# Add Azure Active Directory (Azure AD) as an identity provider for External Identities
-Azure Active Directory is available as an identity provider option for B2B collaboration by default. If an external guest user has an Azure AD account through work or school, they can redeem your B2B collaboration invitations or complete your sign-up user flows using their Azure AD account.
+Azure Active Directory is available as an identity provider option for [B2B collaboration](what-is-b2b.md) by default. If an external guest user has an Azure AD account through work or school, they can redeem your B2B collaboration invitations or complete your sign-up user flows using their Azure AD account.
## Guest sign-in using Azure Active Directory accounts
-Azure Active Directory is available in the list of External Identities identity providers by default. No further configuration is needed to allow guest users to sign in with their Azure AD account using either the invitation flow or a [self-service sign-up user flow](self-service-sign-up-overview.md).
+Azure Active Directory is available in the list of External Identities identity providers by default. No further configuration is needed to allow guest users to sign in with their Azure AD account using either the [invitation flow](redemption-experience.md#invitation-redemption-flow) or a [self-service sign-up user flow](self-service-sign-up-overview.md).
-![Azure AD account in the identity providers list](media/azure-ad-account/azure-ad-account-identity-provider.png)
### Azure AD account in the invitation flow When you [invite a guest user](add-users-administrator.md) to B2B collaboration, you can specify their Azure AD account as the email address they'll use to sign in.
-![Invite using a Azure AD account](media/azure-ad-account/azure-ad-account-invite.png)
### Azure AD account in self-service sign-up user flows Azure AD account is an identity provider option for your self-service sign-up user flows. Users can sign up for your applications using their own Azure AD accounts. First, you'll need to [enable self-service sign-up](self-service-sign-up-user-flow.md) for your tenant. Then you can set up a user flow for the application and select Azure Active Directory as one of the sign-in options.
-![Azure AD account in a self-service sign-up user flow](media/azure-ad-account/azure-ad-account-user-flow.png)
## Verifying the application's publisher domain As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../develop/howto-configure-publisher-domain.md), ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../develop/publisher-verification-overview.md) about this change.) For Azure AD user flows, the publisher's domain appears only when using a [Microsoft account](microsoft-account.md) or other Azure AD tenant as the identity provider. To meet these new requirements, follow these steps:
As of November 2020, new application registrations show up as unverified in the
## Next steps
+- [Microsoft account](microsoft-account.md)
- [Add Azure Active Directory B2B collaboration users](add-users-administrator.md) - [Add self-service sign-up to an app](self-service-sign-up-user-flow.md)
active-directory B2b Direct Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-direct-connect-overview.md
Then Contoso adds the Fabrikam organization and configures the following **Organ
- Allow inbound access to B2B direct connect for all Fabrikam users and groups. - Allow inbound access to all internal Contoso applications by Fabrikam B2B direct connect users.-- Allow all Contoso users and groups to have outbound access to Fabrikam using B2B direct connect.
+- Allow all Contoso users, or select users and groups, to have outbound access to Fabrikam using B2B direct connect.
- Allow Contoso B2B direct connect users to have outbound access to all Fabrikam applications. For this scenario to work, Fabrikam also needs to allow B2B direct connect with Contoso by configuring these same cross-tenant access settings for Contoso and for their own users and applications. When configuration is complete, Contoso users who manage Teams shared channels will be able to add Fabrikam users by searching for their full Fabrikam email addresses.
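Organizations that prefer scripting can express the same organizational settings through the cross-tenant access policy in Microsoft Graph. The following is a hedged sketch only; the partner payload shape, sentinel values, and tenant ID are illustrative assumptions to verify against the crossTenantAccessPolicy documentation before use.

```powershell
# Sketch: add a partner organization and allow inbound B2B direct connect for all of its users and all applications.
# Assumes Policy.ReadWrite.CrossTenantAccess; the tenant ID and payload shape are placeholders.
Connect-MgGraph -Scopes "Policy.ReadWrite.CrossTenantAccess"

$partner = @{
    tenantId = "<partner-tenant-id>"   # placeholder for the partner's tenant ID
    b2bDirectConnectInbound = @{
        usersAndGroups = @{ accessType = "allowed"; targets = @(@{ target = "AllUsers"; targetType = "user" }) }
        applications   = @{ accessType = "allowed"; targets = @(@{ target = "AllApplications"; targetType = "application" }) }
    }
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners" `
    -Body ($partner | ConvertTo-Json -Depth 10)
```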
The Microsoft Teams admin center displays reporting for shared channels, includi
- **Teams access reviews**: Access reviews of Groups that are Teams can now detect B2B direct connect users who are using Teams shared channels. When creating an access review, you can scope the review to all internal users, guest users, and external B2B direct connect users who have been added directly to a shared channel. The reviewer is then presented with users who have direct access to the shared channel. -- **Current limitations**: An access review can detect internal users and external B2B direct connect users, but not other teams, that have been added to a shared channel. To view and remove teams that have been added to a shared channel, the shared channel owner can manage membership from within Teams.
+- **Current limitations**: An access review can detect internal users and external B2B direct connect users, but not other teams that have been added to a shared channel. To view and remove teams that have been added to a shared channel, the shared channel owner can manage membership from within Teams.
For more information about Microsoft Teams audit logs, see the [Microsoft Teams auditing documentation](/microsoftteams/audit-log-events).
active-directory Bulk Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/bulk-invite-powershell.md
Previously updated : 02/11/2020 Last updated : 11/18/2022 -+ # Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
# Tutorial: Use PowerShell to bulk invite Azure AD B2B collaboration users
-If you use Azure Active Directory (Azure AD) B2B collaboration to work with external partners, you can invite multiple guest users to your organization at the same time. In this tutorial, you learn how to use PowerShell to send bulk invitations to external users. Specifically, you do the following:
+If you use [Azure Active Directory (Azure AD) B2B collaboration](what-is-b2b.md) to work with external partners, you can invite multiple guest users to your organization at the same time [via the portal](tutorial-bulk-invite.md) or via PowerShell. In this tutorial, you learn how to use PowerShell to send bulk invitations to external users. Specifically, you do the following:
> [!div class="checklist"] > * Prepare a comma-separated value (.csv) file with the user information
If you don't have an Azure subscription, create a [free account](https://azure
### Install the latest AzureADPreview module
-Make sure that you install the latest version of the Azure AD PowerShell for Graph module (AzureADPreview).
+Make sure that you install the latest version of the Azure AD PowerShell for Graph module (AzureADPreview).
-First, check which modules you have installed. Open Windows PowerShell as an elevated user (Run as administrator), and run the following command:
+First, check which modules you've installed. Open Windows PowerShell as an elevated user (Run as administrator), and run the following command:
```powershell Get-Module -ListAvailable AzureAD*
In Microsoft Excel, create a CSV file with the list of invitee user names and em
For example, create a worksheet in the following format:
-![PowerShell output showing pending user acceptance](media/tutorial-bulk-invite/AddUsersExcel.png)
Save the file as **C:\BulkInvite\Invitations.csv**.
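The invitation step itself (the full script isn't shown in this excerpt) is a loop over that CSV. The following is a minimal sketch using the AzureADPreview cmdlets described above; the `Name` and `InvitedUserEmailAddress` column names are assumptions, so match them to the headers in your own worksheet.

```powershell
# Sketch: read the CSV and send one B2B invitation per row.
# Assumes the AzureADPreview module is installed and columns named Name and InvitedUserEmailAddress.
Connect-AzureAD

$invitations = Import-Csv -Path "C:\BulkInvite\Invitations.csv"

foreach ($invite in $invitations) {
    New-AzureADMSInvitation `
        -InvitedUserDisplayName $invite.Name `
        -InvitedUserEmailAddress $invite.InvitedUserEmailAddress `
        -InviteRedirectUrl "https://myapps.microsoft.com" `
        -SendInvitationMessage $true
}
```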
To verify that the invited users were added to Azure AD, run the following comma
Get-AzureADUser -Filter "UserType eq 'Guest'" ```
-You should see the users that you invited listed, with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*. For example, *lstokes_fabrikam.com#EXT#\@contoso.onmicrosoft.com*, where contoso.onmicrosoft.com is the organization from which you sent the invitations.
+You should see the users that you invited listed, with a [user principal name (UPN)](../hybrid/plan-connect-userprincipalname.md#what-is-userprincipalname) in the format *emailaddress*#EXT#\@*domain*. For example, *lstokes_fabrikam.com#EXT#\@contoso.onmicrosoft.com*, where contoso.onmicrosoft.com is the organization from which you sent the invitations.
## Clean up resources
For example: `Remove-AzureADUser -ObjectId "lstokes_fabrikam.com#EXT#@contoso.on
## Next steps
-In this tutorial, you sent bulk invitations to guest users outside of your organization. Next, learn how the invitation redemption process works.
+In this tutorial, you sent bulk invitations to guest users outside of your organization. Next, learn how the invitation redemption process works and how to enforce MFA for guest users.
-> [!div class="nextstepaction"]
-> [Learn about the Azure AD B2B collaboration invitation redemption process](redemption-experience.md)
+- [Learn about the Azure AD B2B collaboration invitation redemption process](redemption-experience.md)
+- [Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md)
active-directory Reset Redemption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/reset-redemption-status.md
If a user wants to sign in using a different email:
```powershell Install-Module Microsoft.Graph Select-MgProfile -Name beta
-Connect-MgGraph
+Connect-MgGraph -Scopes "User.ReadWrite.All"
$user = Get-MgUser -Filter "startsWith(mail, 'john.doe@fabrikam.net')" New-MgInvitation `
active-directory Reference Audit Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-audit-activities.md
This article lists the audit activities that can be logged in your audit logs.
|Application Management|AdminUserJourneys-GetResources| |Directory Management|AdminUserJourneys-RemoveResources| |Directory Management|AdminUserJourneys-SetResources|
+|Directory Management|Create company|
|Directory Management|Create IdentityProvider| |Directory Management|Create a new AdminUserJourney| |Directory Management|Create localized resource json|
active-directory Adstream Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adstream-tutorial.md
# Azure Active Directory SSO integration with Adstream
-In this article, you'll learn how to integrate Adstream with Azure Active Directory (Azure AD). Adstream provides the safest and easiest to use business solution for sending and receiving files. When you integrate Adstream with Azure AD, you can:
+In this article, you'll learn how to integrate Adstream with Azure Active Directory (Azure AD). Adstream is a content management system that provides the ability for multiple teams to collaborate on assets and distribute content. When you integrate Adstream with Azure AD, you can:
* Control in Azure AD who has access to Adstream. * Enable your users to be automatically signed-in to Adstream with their Azure AD accounts.
active-directory Arcgis Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/arcgis-tutorial.md
Previously updated : 06/18/2021 Last updated : 11/18/2022 # Tutorial: Azure Active Directory integration with ArcGIS Online
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure ArcGIS Online SSO
-1. If you want to setup ArcGIS Online manually, open a new web browser window and log into your ArcGIS company site as an administrator and perform the following steps:
+1. To automate the configuration within ArcGIS Online, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+1. After adding the extension to the browser, clicking **Set up ArcGIS Online** will direct you to the ArcGIS Online application. From there, provide the admin credentials to sign in to ArcGIS Online. The browser extension will automatically configure the application for you and automate steps 3-7.
+
+ ![Setup configuration](common/setup-sso.png)
+
+1. If you want to setup ArcGIS Online manually, open a new web browser window and log into your ArcGIS Online company site as an administrator and perform the following steps:
2. Go to the **Organization** -> **Settings**.
active-directory Citrix Cloud Saml Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/citrix-cloud-saml-sso-tutorial.md
Previously updated : 07/22/2021 Last updated : 11/18/2022
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Citrix Cloud SAML SSO
-1. Log in to your Citrix Cloud SAML SSO company site as an administrator.
+1. To automate the configuration within Citrix Cloud SAML SSO, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+1. After adding the extension to the browser, clicking **Set up Citrix Cloud SAML SSO** will direct you to the Citrix Cloud SAML SSO application. From there, provide the admin credentials to sign in to Citrix Cloud SAML SSO. The browser extension will automatically configure the application for you and automate steps 3-6.
+
+ ![Setup configuration](common/setup-sso.png)
+
+1. If you want to set up Citrix Cloud SAML SSO manually, log in to your Citrix Cloud SAML SSO company site as an administrator.
1. Navigate to the Citrix Cloud menu and select **Identity and Access Management**.
active-directory Databook Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/databook-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Databook
+description: Learn how to configure single sign-on between Azure Active Directory and Databook.
++++++++ Last updated : 11/16/2022++++
+# Azure Active Directory SSO integration with Databook
+
+In this article, you'll learn how to integrate Databook with Azure Active Directory (Azure AD). Databook is a customer intelligence platform that provides insights into a company's financial & strategic priorities and maps best-fit Microsoft solutions to deliver high impact recommendations. When you integrate Databook with Azure AD, you can:
+
+* Control in Azure AD who has access to Databook.
+* Enable your users to be automatically signed-in to Databook with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Databook in a test environment. Databook supports **SP** and **IDP** initiated single sign-on and also supports **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Databook, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Databook single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Databook application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Databook from the Azure AD gallery
+
+Add Databook from the Azure AD application gallery to configure single sign-on with Databook. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Databook** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:databook:<CustomerID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://databook.auth0.com/login/callback?connection=<CustomerID>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://databook.auth0.com/login?client=<ID>&connection=<CustomerID>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Databook Client support team](mailto:info@trydatabook.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Databook application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Attributes")
+
+1. In addition to the above, the Databook application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+ | grouptag | user.groups |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+## Configure Databook SSO
+
+To configure single sign-on on **Databook** side, you need to send the **App Federation Metadata Url** to [Databook support team](mailto:info@trydatabook.com). The support team will use the copied URLs to configure the single sign-on on the application.
+
+### Create Databook test user
+
+In this section, a user called B.Simon is created in Databook. Databook supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Databook, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Databook Sign-on URL where you can initiate the login flow.
+
+* Go to Databook Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Databook for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the Databook tile in My Apps, if the application is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the Databook instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Databook you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Digital Pigeon Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/digital-pigeon-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Digital Pigeon
+description: Learn how to configure single sign-on between Azure Active Directory and Digital Pigeon.
++++++++ Last updated : 11/16/2022++++
+# Azure Active Directory SSO integration with Digital Pigeon
+
+In this article, you'll learn how to integrate Digital Pigeon with Azure Active Directory (Azure AD). Digital Pigeon helps creative people deliver their work, beautifully and quickly. Whatever your needs, Digital Pigeon makes sending and receiving large files seamless. When you integrate Digital Pigeon with Azure AD, you can:
+
+* Control in Azure AD who has access to Digital Pigeon.
+* Enable your users to be automatically signed-in to Digital Pigeon with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Digital Pigeon in a test environment. Digital Pigeon supports both **SP** and **IDP** initiated single sign-on and also supports **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Digital Pigeon, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Digital Pigeon single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Digital Pigeon application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Digital Pigeon from the Azure AD gallery
+
+Add Digital Pigeon from the Azure AD application gallery to configure single sign-on with Digital Pigeon. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+ > [!NOTE]
 > To learn how to configure roles in Azure AD, see [App roles UI](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui). The role value must be one of 'Digital Pigeon User', 'Digital Pigeon Power User', or 'Digital Pigeon Admin'. If no role claim is supplied, a Digital Pigeon Owner can configure the default role in the Digital Pigeon app.
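If you prefer scripting, the test user can also be created with the Azure CLI instead of the portal. The following is a minimal sketch; the display name, user principal name, and password are placeholder values, and assigning the user to the application still follows the guidance linked above:

```azurecli
# Sketch only: create the B.Simon test user in Azure AD (values are placeholders)
az ad user create \
    --display-name "B.Simon" \
    --user-principal-name "B.Simon@contoso.onmicrosoft.com" \
    --password "<choose-a-strong-password>"
```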
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Digital Pigeon** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://digitalpigeon.com/saml2/service-provider-metadata/<CustomerID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://digitalpigeon.com/login/saml2/sso/<CustomerID>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://digitalpigeon.com/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Digital Pigeon Client support team](mailto:help@digitalpigeon.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Digital Pigeon application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Attributes")
+
+1. In addition to the above, the Digital Pigeon application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also prepopulated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | user.firstName | user.givenname |
+ | user.lastName | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Digital Pigeon** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Digital Pigeon SSO
+
+To configure single sign-on on the **Digital Pigeon** side, send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Digital Pigeon support team](mailto:help@digitalpigeon.com). The support team uses them to configure the SAML SSO connection properly on both sides.
+
+### Create Digital Pigeon test user
+
+In this section, a user called B.Simon is created in Digital Pigeon. Digital Pigeon supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Digital Pigeon, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Digital Pigeon Sign on URL where you can initiate the login flow.
+
+* Go to Digital Pigeon Sign on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Digital Pigeon for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the Digital Pigeon tile in My Apps, if the application is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the Digital Pigeon instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Digital Pigeon you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Drawboard Projects Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/drawboard-projects-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Drawboard Projects
+description: Learn how to configure single sign-on between Azure Active Directory and Drawboard Projects.
++++++++ Last updated : 11/16/2022++++
+# Azure Active Directory SSO integration with Drawboard Projects
+
+In this article, you'll learn how to integrate Drawboard Projects with Azure Active Directory (Azure AD). Drawboard Projects helps architecture, engineering, and construction teams around the world save valuable project time in the design review lifecycle. When you integrate Drawboard Projects with Azure AD, you can:
+
+* Control in Azure AD who has access to Drawboard Projects.
+* Enable your users to be automatically signed-in to Drawboard Projects with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Drawboard Projects in a test environment. Drawboard Projects supports both **SP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Drawboard Projects, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Drawboard Projects single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Drawboard Projects application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Drawboard Projects from the Azure AD gallery
+
+Add Drawboard Projects from the Azure AD application gallery to configure single sign-on with Drawboard Projects. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Drawboard Projects** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:bullclip:<CUSTOMERCONNECTIONNAME>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://id.drawboard.com/login/callback?connection=<CUSTOMERCONNECTIONNAME>`
+
+ c. In the **Sign on URL** textbox, type the URL:
+ `https://projects.drawboard.com`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Drawboard Projects Client support team](mailto:support@drawboard.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Drawboard Projects** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Drawboard Projects SSO
+
+To configure single sign-on on the **Drawboard Projects** side, send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Drawboard Projects support team](mailto:support@drawboard.com). The support team uses them to configure the SAML SSO connection properly on both sides.
+
+### Create Drawboard Projects test user
+
+In this section, a user called B.Simon is created in Drawboard Projects. Drawboard Projects supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Drawboard Projects, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Drawboard Projects Sign-on URL where you can initiate the login flow.
+
+* Go to Drawboard Projects Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you select the Drawboard Projects tile in the My Apps, this will redirect to Drawboard Projects Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Drawboard Projects you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Embed Signage Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/embed-signage-tutorial.md
Previously updated : 10/01/2021 Last updated : 11/18/2022
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure embed signage SSO
-1. Log in to your embed signage company site as an administrator.
+1. To automate the configuration within Embed Signage, install the **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+1. After adding the extension to the browser, clicking **Set up Embed Signage** directs you to the Embed Signage application. From there, provide the admin credentials to sign in to Embed Signage. The browser extension automatically configures the application for you and automates steps 3-5.
+
+ ![Setup configuration](common/setup-sso.png)
+
+1. If you want to set up Embed Signage manually, log in to your Embed Signage company site as an administrator.
1. Go to **Account settings** and click **Security** > **Single sign on**.
active-directory Sharepoint On Premises Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sharepoint-on-premises-tutorial.md
Previously updated : 03/31/2021 Last updated : 11/14/2022 # Tutorial: Implement federated authentication between Azure Active Directory and SharePoint on-premises
$t.Update()
1. In the section **Reply URL (Assertion Consumer Service URL)**, add the URL (for example, `https://otherwebapp.contoso.local/`) of all additional web applications that need to sign in users with Azure Active Directory and click **Save**.
-![Specify additional web applications](./media/sharepoint-on-premises-tutorial/azure-active-directory-app-reply-urls.png)
+![Specify additional web applications](./media/sharepoint-on-premises-tutorial/azure-active-directory-app-reply-urls.png)
+
+### Configure the lifetime of the security token
+
+By default, Azure AD creates a SAML token that is valid for 1 hour.
+This lifetime cannot be customized in the Azure portal or by using a conditional access policy, but it can be changed by creating a [custom token lifetime policy](../develop/active-directory-configurable-token-lifetimes.md) and applying it to the enterprise application created for SharePoint.
+To do this, complete the steps below using Windows PowerShell (at the time of this writing, AzureADPreview v2.0.2.149 does not work with PowerShell Core):
+
+1. Install the module [AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview/):
+
+ ```powershell
+ Install-Module -Name AzureADPreview -Scope CurrentUser
+ ```
+
+1. Run `Connect-AzureAD` to sign-in as a tenant administrator.
+
+1. Run the sample script below to update the application `SharePoint corporate farm` to issue a SAML token valid for 6h (value `06:00:00` of property `AccessTokenLifetime`):
+
+ ```powershell
+ $appDisplayName = "SharePoint corporate farm"
+
+ $sp = Get-AzureADServicePrincipal -Filter "DisplayName eq '$appDisplayName'"
+ $oldPolicy = Get-AzureADServicePrincipalPolicy -Id $sp.ObjectId | ?{$_.Type -eq "TokenLifetimePolicy"}
+ if ($null -ne $oldPolicy) {
+ # There can be only 1 TokenLifetimePolicy associated to the service principal (or 0, as by default)
+ Remove-AzureADServicePrincipalPolicy -Id $sp.ObjectId -PolicyId $oldPolicy.Id
+ }
+
+ # Create a custom TokenLifetimePolicy in Azure AD and add it to the service principal
+ $policy = New-AzureADPolicy -Definition @('{"TokenLifetimePolicy":{"Version":1,"AccessTokenLifetime":"06:00:00"}}') -DisplayName "Custom token lifetime policy" -IsOrganizationDefault $false -Type "TokenLifetimePolicy"
+ Add-AzureADServicePrincipalPolicy -Id $sp.ObjectId -RefObjectId $policy.Id
+ ```
+
+After the script completes, all users who successfully sign in to the enterprise application get a SAML 1.1 token that is valid for 6 hours in SharePoint.
+To revert the change, simply remove the custom `TokenLifetimePolicy` object from the service principal, as done at the beginning of the script.
active-directory Trend Micro Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trend-micro-tutorial.md
Complete these steps to configure TMWS SSO on the application side.
f. Select **Save**. > [!NOTE]
- > For more information on how to configure TMWS with Azure AD, see [Configuring Azure AD Settings on TMWS](https://docs.trendmicro.com/en-us/enterprise/trend-micro-web-security-online-help/administration_001/directory-services/azure-active-directo/configuring-azure-ad.aspx).
+ > For more information on how to configure TMWS with Azure AD, see [Configuring Azure AD Settings on TMWS](https://docs.trendmicro.com/en-us/enterprise/trend-micro-web-security-online-help/administration/directory-services/azure-active-directo/configuring-azure-ad.aspx).
## Test SSO
aks Aks Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-support-help.md
If you can't find an answer to your problem using search, submit a new question
| [Azure RBAC](../role-based-access-control/overview.md) | [azure-rbac](/answers/topics/azure-rbac.html)| | [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) | [azure-active-directory](/answers/topics/azure-active-directory.html)| | [Azure Policy](../governance/policy/overview.md) | [azure-policy](/answers/topics/azure-policy.html)|
-| [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md) | [virtual-machine-scale-sets](/answers/topics/azure-virtual-machine-scale-sets.html)|
+| [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md) | [virtual-machine-scale-sets](/answers/topics/123/azure-virtual-machines-scale-set.html)|
| [Azure Virtual Network](../virtual-network/network-overview.md) | [azure-virtual-network](/answers/topics/azure-virtual-network.html)| | [Azure Application Gateway](../application-gateway/overview.md) | [azure-application-gateway](/answers/topics/azure-application-gateway.html)| | [Azure Virtual Machines](../virtual-machines/linux/overview.md) | [azure-virtual-machines](/answers/topics/azure-virtual-machines.html) |
News and information about Azure Virtual Machines is shared at the [Azure blog](
## Next steps
-Learn more about [Azure Kubernetes Service](./index.yml)
+Learn more about [Azure Kubernetes Service](./index.yml)
aks Concepts Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md
Title: Concepts - Networking in Azure Kubernetes Services (AKS) description: Learn about networking in Azure Kubernetes Service (AKS), including kubenet and Azure CNI networking, ingress controllers, load balancers, and static IP addresses. Previously updated : 03/11/2021 Last updated : 11/18/2022
In a container-based, microservices approach to application development, applica
* You can connect to and expose applications internally or externally. * You can build highly available applications by load balancing your applications.
-* For your more complex applications, you can configure ingress traffic for SSL/TLS termination or routing of multiple components.
-* For security reasons, you can restrict the flow of network traffic into or between pods and nodes.
+* You can restrict the flow of network traffic into or between pods and nodes to improve security.
+* You can configure Ingress traffic for SSL/TLS termination or routing of multiple components for your more complex applications.
This article introduces the core concepts that provide networking to your applications in AKS: -- [Services](#services)-- [Azure virtual networks](#azure-virtual-networks)-- [Ingress controllers](#ingress-controllers)-- [Network policies](#network-policies)
+* [Services](#services)
+* [Azure virtual networks](#azure-virtual-networks)
+* [Ingress controllers](#ingress-controllers)
+* [Network policies](#network-policies)
## Kubernetes basics
In Kubernetes:
* *Services* logically group pods to allow for direct access on a specific port via an IP address or DNS name. * You can distribute traffic using a *load balancer*.
-* More complex routing of application traffic can also be achieved with *Ingress Controllers*.
+* More complex routing of application traffic can also be achieved with *ingress controllers*.
* You can *control outbound (egress) traffic* for cluster nodes.
-* Security and filtering of the network traffic for pods is possible with Kubernetes *network policies*.
+* Security and filtering of the network traffic for pods is possible with *network policies*.
-The Azure platform also simplifies virtual networking for AKS clusters. When you create a Kubernetes load balancer, you also create and configure the underlying Azure load balancer resource. As you open network ports to pods, the corresponding Azure network security group rules are configured. For HTTP application routing, Azure can also configure *external DNS* as new ingress routes are configured.
+The Azure platform also simplifies virtual networking for AKS clusters. When you create a Kubernetes load balancer, you also create and configure the underlying Azure load balancer resource. As you open network ports to pods, the corresponding Azure network security group rules are configured. For HTTP application routing, Azure can also configure *external DNS* as new Ingress routes are configured.
## Services To simplify the network configuration for application workloads, Kubernetes uses *Services* to logically group a set of pods together and provide network connectivity. The following Service types are available: -- **Cluster IP**
+* **ClusterIP**
- Creates an internal IP address for use within the AKS cluster. Good for internal-only applications that support other workloads within the cluster.
+ ClusterIP creates an internal IP address for use within the AKS cluster. This Service is good for *internal-only applications* that support other workloads within the cluster.
- ![Diagram showing Cluster IP traffic flow in an AKS cluster][aks-clusterip]
+ ![Diagram showing ClusterIP traffic flow in an AKS cluster][aks-clusterip]
-- **NodePort**
+* **NodePort**
- Creates a port mapping on the underlying node that allows the application to be accessed directly with the node IP address and port.
+ NodePort creates a port mapping on the underlying node that allows the application to be accessed directly with the node IP address and port.
![Diagram showing NodePort traffic flow in an AKS cluster][aks-nodeport] -- **LoadBalancer**
+* **LoadBalancer**
- Creates an Azure load balancer resource, configures an external IP address, and connects the requested pods to the load balancer backend pool. To allow customers' traffic to reach the application, load balancing rules are created on the desired ports.
+ Creates an Azure load balancer resource, configures an external IP address, and connects the requested pods to the load balancer backend pool. To allow customers' traffic to reach the application, load balancing rules are created on the desired ports.
![Diagram showing Load Balancer traffic flow in an AKS cluster][aks-loadbalancer] For extra control and routing of the inbound traffic, you may instead use an [Ingress controller](#ingress-controllers). -- **ExternalName**
+* **ExternalName**
Creates a specific DNS entry for easier application access.
Learn more about Services in the [Kubernetes docs][k8s-service].
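As a quick illustration of the Service types, the sketch below uses `kubectl` to expose an assumed existing deployment named `my-app` through an Azure load balancer; the deployment name and ports are placeholders:

```bash
# Create a LoadBalancer Service for the (assumed) deployment "my-app"
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080

# Watch for the external IP that the underlying Azure load balancer assigns
kubectl get service my-app --watch
```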
## Azure virtual networks
-In AKS, you can deploy a cluster that uses one of the following two network models:
+In AKS, you can deploy a cluster that uses one of the following network models:
-- *Kubenet* networking
+* ***Kubenet* networking**
The network resources are typically created and configured as the AKS cluster is deployed. -- *Azure Container Networking Interface (CNI)* networking
-
+* ***Azure Container Networking Interface (CNI)* networking**
+ The AKS cluster is connected to existing virtual network resources and configurations. ### Kubenet (basic) networking The *kubenet* networking option is the default configuration for AKS cluster creation. With *kubenet*:
-1. Nodes receive an IP address from the Azure virtual network subnet.
-1. Pods receive an IP address from a logically different address space than the nodes' Azure virtual network subnet.
-1. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network.
+
+1. Nodes receive an IP address from the Azure virtual network subnet.
+1. Pods receive an IP address from a logically different address space than the nodes' Azure virtual network subnet.
+1. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network.
1. The source IP address of the traffic is translated to the node's primary IP address.
-Nodes use the [kubenet][kubenet] Kubernetes plugin. You can:
-* Let the Azure platform create and configure the virtual networks for you, or
-* Choose to deploy your AKS cluster into an existing virtual network subnet.
+Nodes use the [kubenet][kubenet] Kubernetes plugin. You can let the Azure platform create and configure the virtual networks for you, or choose to deploy your AKS cluster into an existing virtual network subnet.
-Remember, only the nodes receive a routable IP address. The pods use NAT to communicate with other resources outside the AKS cluster. This approach reduces the number of IP addresses you need to reserve in your network space for pods to use.
+Only the nodes receive a routable IP address. The pods use NAT to communicate with other resources outside the AKS cluster. This approach reduces the number of IP addresses you need to reserve in your network space for pods to use.
For more information, see [Configure kubenet networking for an AKS cluster][aks-configure-kubenet-networking]. ### Azure CNI (advanced) networking
-With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. Without planning, this approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
+With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly.
Unlike kubenet, traffic to endpoints in the same virtual network isn't NAT'd to the node's primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that's external to the virtual network still NATs to the node's primary IP.
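For illustration only, the following sketch creates a cluster that uses Azure CNI with the Azure CLI; the resource names, subnet ID, and address ranges are placeholders that must be planned for your own network space:

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id <subnet-resource-id> \
    --service-cidr 10.2.0.0/24 \
    --dns-service-ip 10.2.0.10
```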
For more information, see [Configure Azure CNI for an AKS cluster][aks-configure
Both kubenet and Azure CNI provide network connectivity for your AKS clusters. However, there are advantages and disadvantages to each. At a high level, the following considerations apply: * **kubenet**
- * Conserves IP address space.
- * Uses Kubernetes internal or external load balancer to reach pods from outside of the cluster.
- * You manually manage and maintain user-defined routes (UDRs).
- * Maximum of 400 nodes per cluster.
+
+ * Conserves IP address space.
+ * Uses Kubernetes internal or external load balancers to reach pods from outside of the cluster.
+ * You manually manage and maintain user-defined routes (UDRs).
+ * Maximum of 400 nodes per cluster.
+
* **Azure CNI**
- * Pods get full virtual network connectivity and can be directly reached via their private IP address from connected networks.
- * Requires more IP address space.
+
+ * Pods get full virtual network connectivity and can be directly reached via their private IP address from connected networks.
+ * Requires more IP address space.
The following behavior differences exist between kubenet and Azure CNI:
Although capabilities like service endpoints or UDRs are supported with both kub
## Ingress controllers
-When you create a LoadBalancer-type Service, you also create an underlying Azure load balancer resource. The load balancer is configured to distribute traffic to the pods in your Service on a given port.
+When you create a LoadBalancer-type Service, you also create an underlying Azure load balancer resource. The load balancer is configured to distribute traffic to the pods in your Service on a given port.
-The LoadBalancer only works at layer 4. At layer 4, the Service is unaware of the actual applications, and can't make any more routing considerations.
+The *LoadBalancer* only works at layer 4. At layer 4, the Service is unaware of the actual applications, and can't make any more routing considerations.
-*Ingress controllers* work at layer 7, and can use more intelligent rules to distribute application traffic. Ingress controllers typically route HTTP traffic to different applications based on the inbound URL.
+*Ingress controllers* work at layer 7 and can use more intelligent rules to distribute application traffic. Ingress controllers typically route HTTP traffic to different applications based on the inbound URL.
![Diagram showing Ingress traffic flow in an AKS cluster][aks-ingress]
-### Create an ingress resource
+### Create an Ingress resource
-In AKS, you can create an Ingress resource using NGINX, a similar tool, or the AKS HTTP application routing feature. When you enable HTTP application routing for an AKS cluster, the Azure platform creates the Ingress controller and an *External-DNS* controller. As new Ingress resources are created in Kubernetes, the required DNS A records are created in a cluster-specific DNS zone.
+In AKS, you can create an [Ingress resource using NGINX][nginx-ingress], a similar tool, or the AKS HTTP application routing feature. When you enable HTTP application routing for an AKS cluster, the Azure platform creates the ingress controller and an *External-DNS* controller. As new Ingress resources are created in Kubernetes, the required DNS `A` records are created in a cluster-specific DNS zone.
For more information, see [Deploy HTTP application routing][aks-http-routing]. ### Application Gateway Ingress Controller (AGIC)
-With the Application Gateway Ingress Controller (AGIC) add-on, AKS customers leverage Azure's native Application Gateway level 7 load-balancer to expose cloud software to the Internet. AGIC monitors the host Kubernetes cluster and continuously updates an Application Gateway, exposing selected services to the Internet.
+With the Application Gateway Ingress Controller (AGIC) add-on, you can use Azure's native Application Gateway level 7 load-balancer to expose cloud software to the Internet. AGIC runs as a pod within the AKS cluster. It consumes [Kubernetes Ingress Resources][k8s-ingress] and converts them to an Application Gateway configuration, which allows the gateway to load-balance traffic to the Kubernetes pods.
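As a hedged example, the AGIC add-on can be enabled on an existing cluster with the Azure CLI; the cluster, resource group, gateway name, and subnet range below are placeholders:

```azurecli
az aks enable-addons \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --addons ingress-appgw \
    --appgw-name myApplicationGateway \
    --appgw-subnet-cidr "10.225.0.0/16"
```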
To learn more about the AGIC add-on for AKS, see [What is Application Gateway Ingress Controller?][agic-overview]. ### SSL/TLS termination
-SSL/TLS termination is another common feature of Ingress. On large web applications accessed via HTTPS, the Ingress resource handles the TLS termination rather than within the application itself. To provide automatic TLS certification generation and configuration, you can configure the Ingress resource to use providers such as "Let's Encrypt".
+SSL/TLS termination is another common feature of Ingress. On large web applications accessed via HTTPS, the Ingress resource handles the TLS termination rather than within the application itself. To provide automatic TLS certification generation and configuration, you can configure the Ingress resource to use providers such as "Let's Encrypt".
-For more information on configuring an NGINX Ingress controller with Let's Encrypt, see [Ingress and TLS][aks-ingress-tls].
+For more information on configuring an NGINX ingress controller with Let's Encrypt, see [Ingress and TLS][aks-ingress-tls].
### Client source IP preservation
-Configure your ingress controller to preserve the client source IP on requests to containers in your AKS cluster. When your ingress controller routes a client's request to a container in your AKS cluster, the original source IP of that request is unavailable to the target container. When you enable *client source IP preservation*, the source IP for the client is available in the request header under *X-Forwarded-For*.
+Configure your ingress controller to preserve the client source IP on requests to containers in your AKS cluster. When your ingress controller routes a client's request to a container in your AKS cluster, the original source IP of that request is unavailable to the target container. When you enable *client source IP preservation*, the source IP for the client is available in the request header under *X-Forwarded-For*.
If you're using client source IP preservation on your ingress controller, you can't use TLS pass-through. Client source IP preservation and TLS pass-through can be used with other services, such as the *LoadBalancer* type.
+To learn more about client source IP preservation, see [How client source IP preservation works for LoadBalancer Services in AKS][ip-preservation].
+ ## Control outbound (egress) traffic
-AKS clusters are deployed on a virtual network and have outbound dependencies on services outside of that virtual network. These outbound dependencies are almost entirely defined with fully qualified domain names (FQDNs). By default, AKS clusters have unrestricted outbound (egress) internet access. This allows the nodes and services you run to access external resources as needed. If desired, you can restrict outbound traffic.
+AKS clusters are deployed on a virtual network and have outbound dependencies on services outside of that virtual network. These outbound dependencies are almost entirely defined with fully qualified domain names (FQDNs). By default, AKS clusters have unrestricted outbound (egress) Internet access, which allows the nodes and services you run to access external resources as needed. If desired, you can restrict outbound traffic.
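One common pattern is to route all outbound traffic through your own firewall by creating the cluster with a user-defined routing outbound type. The following is a minimal sketch; it assumes the subnet already has a route table that points at your firewall, and the subnet ID is a placeholder:

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --vnet-subnet-id <subnet-resource-id> \
    --outbound-type userDefinedRouting
```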
For more information, see [Control egress traffic for cluster nodes in AKS][limit-egress]. ## Network security groups
-A network security group filters traffic for VMs like the AKS nodes. As you create Services, such as a LoadBalancer, the Azure platform automatically configures any necessary network security group rules.
+A network security group filters traffic for VMs like the AKS nodes. As you create Services, such as a *LoadBalancer*, the Azure platform automatically configures any necessary network security group rules.
-You don't need to manually configure network security group rules to filter traffic for pods in an AKS cluster. Simply define any required ports and forwarding as part of your Kubernetes Service manifests. Let the Azure platform create or update the appropriate rules.
+You don't need to manually configure network security group rules to filter traffic for pods in an AKS cluster. You can define any required ports and forwarding as part of your Kubernetes Service manifests and let the Azure platform create or update the appropriate rules.
You can also use network policies to automatically apply traffic filter rules to pods.
+For more information, see [How network security groups filter network traffic][nsg-traffic].
+ ## Network policies By default, all pods in an AKS cluster can send and receive traffic without limitations. For improved security, define rules that control the flow of traffic, like:
-* Backend applications are only exposed to required frontend services.
+
+* Back-end applications are only exposed to required frontend services.
* Database components are only accessible to the application tiers that connect to them.
-Network policy is a Kubernetes feature available in AKS that lets you control the traffic flow between pods. You allow or deny traffic to the pod based on settings such as assigned labels, namespace, or traffic port. While network security groups are better for AKS nodes, network policies are a more suited, cloud-native way to control the flow of traffic for pods. As pods are dynamically created in an AKS cluster, required network policies can be automatically applied.
+Network policy is a Kubernetes feature available in AKS that lets you control the traffic flow between pods. You can allow or deny traffic to the pod based on settings such as assigned labels, namespace, or traffic port. While network security groups are better for AKS nodes, network policies are a more suited, cloud-native way to control the flow of traffic for pods. As pods are dynamically created in an AKS cluster, required network policies can be automatically applied.
For more information, see [Secure traffic between pods using network policies in Azure Kubernetes Service (AKS)][use-network-policies].
For associated best practices, see [Best practices for network connectivity and
For more information on core Kubernetes and AKS concepts, see the following articles: -- [Kubernetes / AKS clusters and workloads][aks-concepts-clusters-workloads]-- [Kubernetes / AKS access and identity][aks-concepts-identity]-- [Kubernetes / AKS security][aks-concepts-security]-- [Kubernetes / AKS storage][aks-concepts-storage]-- [Kubernetes / AKS scale][aks-concepts-scale]
+* [Kubernetes / AKS clusters and workloads][aks-concepts-clusters-workloads]
+* [Kubernetes / AKS access and identity][aks-concepts-identity]
+* [Kubernetes / AKS security][aks-concepts-security]
+* [Kubernetes / AKS storage][aks-concepts-storage]
+* [Kubernetes / AKS scale][aks-concepts-scale]
<!-- IMAGES --> [aks-clusterip]: ./media/concepts-network/aks-clusterip.png
For more information on core Kubernetes and AKS concepts, see the following arti
[operator-best-practices-network]: operator-best-practices-network.md [support-policies]: support-policies.md [limit-egress]: limit-egress-traffic.md
+[k8s-ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
+[nginx-ingress]: /ingress-basic.md
+[ip-preservation]: https://techcommunity.microsoft.com/t5/fasttrack-for-azure/how-client-source-ip-preservation-works-for-loadbalancer/ba-p/3033722#:~:text=Enable%20Client%20source%20IP%20preservation%201%20Edit%20loadbalancer,is%20the%20same%20as%20the%20source%20IP%20%28srjumpbox%29.
+[nsg-traffic]: ../virtual-network/network-security-group-how-it-works.md
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
For more information on core Kubernetes and AKS concepts, see the following arti
[azure-disk-csi]: azure-disk-csi.md [azure-netapp-files]: azure-netapp-files.md [azure-files-csi]: azure-files-csi.md
+[azure-files-volume]: azure-files-volume.md
[aks-concepts-clusters-workloads]: concepts-clusters-workloads.md [aks-concepts-identity]: concepts-identity.md [aks-concepts-scale]: concepts-scale.md
aks Dapr Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-migration.md
You've installed and configured Dapr OSS on your Kubernetes cluster and want to
dapr uninstall -k --all ```
-1. Uninstall the Dapr namespace:
+2. Uninstall the Dapr namespace:
```bash kubectl delete namespace dapr-system
kubectl delete namespace dapr-system
1. Run the following command to uninstall Dapr: ```bash
-dapr uninstall -k --all
+helm uninstall dapr -n dapr-system
```
-1. Uninstall CRDs:
+2. Uninstall CRDs:
```bash kubectl delete crd components.dapr.io
kubectl delete crd subscriptions.dapr.io
kubectl delete crd resiliencies.dapr.io ```
-1. Uninstall the Dapr namespace:
+3. Uninstall the Dapr namespace:
```bash kubectl delete namespace dapr-system
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
Previously updated : 09/08/2022 Last updated : 11/07/2022
When installing the Dapr extension, use the flag value that corresponds to your
Create the Dapr extension, which installs Dapr on your AKS or Arc-enabled Kubernetes cluster. For example, for an AKS cluster:
-```azure-cli-interactive
+```azurecli
az k8s-extension create --cluster-type managedClusters \ --cluster-name myAKSCluster \ --resource-group myResourceGroup \name myDaprExtension \
+--name dapr \
--extension-type Microsoft.Dapr ``` You have the option of allowing Dapr to auto-update its minor version by specifying the `--auto-upgrade-minor-version` parameter and setting the value to `true`:
-```azure-cli-interactive
+```azurecli
--auto-upgrade-minor-version true ```
+When configuring the extension, you can choose to install Dapr from a particular `--release-train`. Specify one of the two release train values:
+
+| Value | Description |
+| -- | -- |
+| `stable` | Default. |
+| `dev` | Early releases, can contain experimental features. Not suitable for production. |
+
+For example:
+
+```azurecli
+--release-train stable
+```
+ ## Configuration settings The extension enables you to set Dapr configuration options by using the `--configuration-settings` parameter. For example, to provision Dapr with high availability (HA) enabled, set the `global.ha.enabled` parameter to `true`:
-```azure-cli-interactive
+```azurecli
az k8s-extension create --cluster-type managedClusters \ --cluster-name myAKSCluster \ --resource-group myResourceGroup \name myDaprExtension \
+--name dapr \
--extension-type Microsoft.Dapr \ --auto-upgrade-minor-version true \ --configuration-settings "global.ha.enabled=true" \
For a list of available options, see [Dapr configuration][dapr-configuration-opt
The same command-line argument is used for installing a specific version of Dapr or rolling back to a previous version. Set `--auto-upgrade-minor-version` to `false` and `--version` to the version of Dapr you wish to install. If the `version` parameter is omitted, the extension will install the latest version of Dapr. For example, to use Dapr X.X.X:
-```azure-cli-interactive
+```azurecli
az k8s-extension create --cluster-type managedClusters \ --cluster-name myAKSCluster \ --resource-group myResourceGroup \name myDaprExtension \
+--name dapr \
--extension-type Microsoft.Dapr \ --auto-upgrade-minor-version false \ --version X.X.X
az k8s-extension create --cluster-type managedClusters \
In some configurations, you may only want to run Dapr on certain nodes. You can limit the extension by passing a `nodeSelector` in the extension configuration. If the desired `nodeSelector` contains `.`, you must escape them from the shell and the extension. For example, the following configuration will install Dapr to only nodes with `topology.kubernetes.io/zone: "us-east-1c"`:
-```azure-cli-interactive
+```azurecli
az k8s-extension create --cluster-type managedClusters \ --cluster-name myAKSCluster \ --resource-group myResourceGroup \name myDaprExtension \
+--name dapr \
--extension-type Microsoft.Dapr \ --auto-upgrade-minor-version true \ --configuration-settings "global.ha.enabled=true" \
az k8s-extension create --cluster-type managedClusters \
For managing OS and architecture, use the [supported versions](https://github.com/dapr/dapr/blob/b8ae13bf3f0a84c25051fcdacbfd8ac8e32695df/docker/docker.mk#L50) of the `global.daprControlPlaneOs` and `global.daprControlPlaneArch` configuration:
-```azure-cli-interactive
+```azurecli
az k8s-extension create --cluster-type managedClusters \ --cluster-name myAKSCluster \ --resource-group myResourceGroup \name myDaprExtension \
+--name dapr \
--extension-type Microsoft.Dapr \ --auto-upgrade-minor-version true \ --configuration-settings "global.ha.enabled=true" \
az k8s-extension create --cluster-type managedClusters \
--configuration-settings "global.daprControlPlaneArch=amd64ΓÇ¥ ```
+## Set automatic CRD updates
+
+Starting with Dapr version 1.9.2, CRDs are automatically upgraded when the extension upgrades. To disable this setting, you can set `hooks.applyCrds` to `false`.
+
+```azurecli
+az k8s-extension upgrade --cluster-type managedClusters \
+--cluster-name myAKSCluster \
+--resource-group myResourceGroup \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--auto-upgrade-minor-version true \
+--configuration-settings "global.ha.enabled=true" \
+--configuration-settings "dapr_operator.replicaCount=2" \
+--configuration-settings "global.daprControlPlaneOs=linuxΓÇ¥ \
+--configuration-settings "global.daprControlPlaneArch=amd64ΓÇ¥ \
+--configuration-settings "hooks.applyCrds=false"
+```
+
+## Configure the Dapr release namespace
+
+You can configure the release namespace. The Dapr extension gets installed in the `dapr-system` namespace by default. To override it, use `--release-namespace`. Include the cluster `--scope` to redefine the namespace.
+
+```azurecli
+az k8s-extension create \
+--cluster-type managedClusters \
+--cluster-name dapr-aks \
+--resource-group dapr-rg \
+--name my-dapr-ext \
+--extension-type microsoft.dapr \
+--release-train stable \
+--auto-upgrade false \
+--version 1.9.2 \
+--scope cluster \
+--release-namespace dapr-custom
+```
+ ## Show current configuration settings Use the `az k8s-extension show` command to show the current Dapr configuration settings:
-```azure-cli-interactive
+```azurecli
az k8s-extension show --cluster-type managedClusters \ --cluster-name myAKSCluster \ --resource-group myResourceGroup \name myDaprExtension
+--name dapr
``` ## Update configuration settings
az k8s-extension show --cluster-type managedClusters \
> Some configuration options cannot be modified post-creation. Adjustments to these options require deletion and recreation of the extension, applicable to the following settings: > * `global.ha.*` > * `dapr_placement.*`-
-> [!NOTE]
-> High availability (HA) can be enabled at any time. However, once enabled, disabling it requires deletion and recreation of the extension. If you aren't sure if high availability is necessary for your use case, we recommend starting with it disabled to minimize disruption.
+>
+> HA is enabled by default. Disabling it requires deletion and recreation of the extension.
To update your Dapr configuration settings, recreate the extension with the desired state. For example, assume we've previously created and installed the extension using the following configuration:
To update your Dapr configuration settings, recreate the extension with the desi
az k8s-extension create --cluster-type managedClusters \ --cluster-name myAKSCluster \ --resource-group myResourceGroup \name myDaprExtension \
+--name dapr \
--extension-type Microsoft.Dapr \ --auto-upgrade-minor-version true \ --configuration-settings "global.ha.enabled=true" \
To update the `dapr_operator.replicaCount` from two to three, use the following
az k8s-extension create --cluster-type managedClusters \ --cluster-name myAKSCluster \ --resource-group myResourceGroup \name myDaprExtension \
+--name dapr \
--extension-type Microsoft.Dapr \ --auto-upgrade-minor-version true \ --configuration-settings "global.ha.enabled=true" \ --configuration-settings "dapr_operator.replicaCount=3" ```
-## Set the outbound proxy for Dapr extension for Azure Arc on-prem
+## Set the outbound proxy for Dapr extension for Azure Arc on-premises
If you want to use an outbound proxy with the Dapr extension for AKS, you can do so by:
Troubleshoot Dapr errors via the [common Dapr issues and solutions guide][dapr-t
If you need to delete the extension and remove Dapr from your AKS cluster, you can use the following command:
-```azure-cli-interactive
-az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSCluster --cluster-type managedClusters --name myDaprExtension
+```azurecli
+az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSCluster --cluster-type managedClusters --name dapr
``` ## Next Steps
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[dapr-oss-support]: https://docs.dapr.io/operations/support/support-release-policy/ [dapr-supported-version]: https://docs.dapr.io/operations/support/support-release-policy/#supported-versions [dapr-troubleshooting]: https://docs.dapr.io/operations/troubleshooting/common_issues/
-[supported-cloud-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc
+[supported-cloud-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
The Kubernetes community releases minor versions roughly every three months. Rec
Minor version releases include new features and improvements. Patch releases are more frequent (sometimes weekly) and are intended for critical bug fixes within a minor version. Patch releases include fixes for security vulnerabilities or major bugs.
+>[!WARNING]
+> AKS clusters with Calico enabled should not upgrade to Kubernetes v1.25 preview.
+ ## Kubernetes versions Kubernetes uses the standard [Semantic Versioning](https://semver.org/) versioning scheme for each version:
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
Part of the AKS cluster lifecycle involves performing periodic upgrades to the l
For AKS clusters that use multiple node pools or Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade]. To upgrade a specific node pool without doing a Kubernetes cluster upgrade, see [Upgrade a specific node pool][specific-nodepool].
+>[!WARNING]
+> AKS clusters with Calico enabled should not upgrade to Kubernetes v1.25 preview.
+ > [!NOTE] > Any upgrade operation, whether performed manually or automatically, will upgrade the node image version if not already on the latest. The latest version is contingent on a full AKS release, and can be determined by visiting the [AKS release tracker][release-tracker].
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
The following table compares features available in the managed gateway versus th
| [Built-in cache](api-management-howto-cache.md) | ✔️ | ❌ | ❌ | | [External Redis-compatible cache](api-management-howto-cache-external.md) | ✔️ | ✔️ | ✔️ | | [Virtual network injection](virtual-network-concepts.md) | Developer, Premium | ❌ | ✔️<sup>1</sup> |
-| [Private endpoints](private-endpoint.md) | ✔️ | ✔️ | ❌ |
+| [Private endpoints](private-endpoint.md) | ✔️ | ❌ | ❌ |
| [Availability zones](zone-redundancy.md) | Premium | ❌ | ✔️<sup>1</sup> | | [Multi-region deployment](api-management-howto-deploy-multi-region.md) | Premium | ❌ | ✔️<sup>1</sup> | | [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ❌ | ✔️<sup>2</sup> |
api-management How To Deploy Self Hosted Gateway Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-arc.md
Last updated 05/25/2021
With the integration between Azure API Management and [Azure Arc on Kubernetes](../azure-arc/kubernetes/overview.md), you can deploy the API Management gateway component as an [extension in an Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/extensions.md).
-Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster expands API Management support for hybrid and multi-cloud environments. Enable the deployment using a cluster extension to make managing and applying policies to your Azure Arc-enabled cluster a consistent experience.
+Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster expands API Management support for hybrid and multicloud environments. Enable the deployment using a cluster extension to make managing and applying policies to your Azure Arc-enabled cluster a consistent experience.
[!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-azure-arc.md)]
Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster
## Prerequisites
-* [Connect your Kubernetes cluster](../azure-arc/kubernetes/quickstart-connect-cluster.md) within [a supported Azure Arc region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc).
+* [Connect your Kubernetes cluster](../azure-arc/kubernetes/quickstart-connect-cluster.md) within a supported Azure Arc region.
* Install the `k8s-extension` Azure CLI extension: ```azurecli
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
Title: Configure Linux Python apps description: Learn how to configure the Python container in which web apps are run, using both the Azure portal and the Azure CLI. Previously updated : 06/11/2021 Last updated : 11/16/2022 ms.devlang: python
You can use either the [Azure portal](https://portal.azure.com) or the Azure CLI
- Run commands locally by installing the latest version of the [Azure CLI](/cli/azure/install-azure-cli), then sign in to Azure using [az login](/cli/azure/reference-index#az-login). > [!NOTE]
-> Linux is currently the recommended option for running Python apps in App Service. For information on the Windows option, see [Python on the Windows flavor of App Service](/visualstudio/python/managing-python-on-azure-app-service).
+> Linux is the only operating system option for running Python apps in App Service. Python on Windows is no longer supported. You can, however, build your own custom Windows container image and run it in App Service. For more information, see [use a custom Docker image](tutorial-custom-container.md?pivots=container-windows).
## Configure Python version
App Service's build system, called Oryx, performs the following steps when you d
By default, the `PRE_BUILD_COMMAND`, `POST_BUILD_COMMAND`, and `DISABLE_COLLECTSTATIC` settings are empty. -- To disable running collectstatic when building Django apps, set the `DISABLE_COLLECTSTATIC` setting to true.
+- To disable running collectstatic when building Django apps, set the `DISABLE_COLLECTSTATIC` setting to `true`.
- To run pre-build commands, set the `PRE_BUILD_COMMAND` setting to contain either a command, such as `echo Pre-build command`, or a path to a script file relative to your project root, such as `scripts/prebuild.sh`. All commands must use relative paths to the project root folder.
For more information on how App Service runs and builds Python apps in Linux, se
Existing web applications can be redeployed to Azure as follows: 1. **Source repository**: Maintain your source code in a suitable repository like GitHub, which enables you to set up continuous deployment later in this process.
- 1. Your *requirements.txt* file must be at the root of your repository for App Service to automatically install the necessary packages.
+ - Your *requirements.txt* file must be at the root of your repository for App Service to automatically install the necessary packages.
-1. **Database**: If your app depends on a database, create the necessary resources on Azure as well.
+1. **Database**: If your app depends on a database, create the necessary resources on Azure as well.
-1. **App service resources**: Create a resource group, App Service Plan, and App Service web app to host your application. You can do it easily by running the Azure CLI command [`az webapp up`](/cli/azure/webapp?az-webapp-up). Or, you can create and deploy resources as shown in [Tutorial: Deploy a Django web app with PostgreSQL](tutorial-python-postgresql-app.md). Replace the names of the resource group, App Service Plan, and the web app to be more suitable for your application.
+1. **App service resources**: Create a resource group, App Service Plan, and App Service web app to host your application. You can do it easily by running the Azure CLI command [`az webapp up`](/cli/azure/webapp?az-webapp-up). Or, you can create and deploy resources as shown in [Tutorial: Deploy a Python (Django or Flask) web app with PostgreSQL](tutorial-python-postgresql-app.md). Replace the names of the resource group, App Service Plan, and the web app to be more suitable for your application.
1. **Environment variables**: If your application requires any environment variables, create equivalent [App Service application settings](configure-common.md#configure-app-settings). These App Service settings appear to your code as environment variables, as described on [Access environment variables](#access-app-settings-as-environment-variables). - Database connections, for example, are often managed through such settings, as shown in [Tutorial: Deploy a Django web app with PostgreSQL - verify connection settings](tutorial-python-postgresql-app.md#2-verify-connection-settings).
Existing web applications can be redeployed to Azure as follows:
1. **App startup**: Review the section, [Container startup process](#container-startup-process) later in this article to understand how App Service attempts to run your app. App Service uses the Gunicorn web server by default, which must be able to find your app object or *wsgi.py* folder. If needed, you can [Customize the startup command](#customize-startup-command).
-1. **Continuous deployment**: Set up continuous deployment, as described on [Continuous deployment to Azure App Service](deploy-continuous-deployment.md) if using Azure Pipelines or Kudu deployment, or [Deploy to App Service using GitHub Actions](./deploy-continuous-deployment.md) if using GitHub actions.
+1. **Continuous deployment**: Set up continuous deployment from GitHub Actions, Bitbucket, or Azure Repos as described in the article [Continuous deployment to Azure App Service](deploy-continuous-deployment.md). Or, set up continuous deployment from Local Git as described in the article [Local Git deployment to Azure App Service](deploy-local-git.md).
1. **Custom actions**: To perform actions within the App Service container that hosts your app, such as Django database migrations, you can [connect to the container through SSH](configure-linux-open-ssh-session.md). For an example of running Django database migrations, see [Tutorial: Deploy a Django web app with PostgreSQL - generate database schema](tutorial-python-postgresql-app.md#4-generate-database-schema). - When using continuous deployment, you can perform those actions using post-build commands as described earlier under [Customize build automation](#customize-build-automation).
With these steps completed, you should be able to commit changes to your source
### Production settings for Django apps
-For a production environment like Azure App Service, Django apps should follow Django's [Deployment checklist](https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/) (djangoproject.com).
+For a production environment like Azure App Service, Django apps should follow Django's [Deployment checklist](https://docs.djangoproject.com/en/4.1/howto/deployment/checklist/) (djangoproject.com).
The following table describes the production settings that are relevant to Azure. These settings are defined in the app's *setting.py* file.
The following table describes the production settings that are relevant to Azure
| `SECRET_KEY` | Store the value in an App Service setting as described on [Access app settings as environment variables](#access-app-settings-as-environment-variables). You can alternately [store the value as a "secret" in Azure Key Vault](../key-vault/secrets/quick-create-python.md). | | `DEBUG` | Create a `DEBUG` setting on App Service with the value 0 (false), then load the value as an environment variable. In your development environment, create a `DEBUG` environment variable with the value 1 (true). | | `ALLOWED_HOSTS` | In production, Django requires that you include app's URL in the `ALLOWED_HOSTS` array of *settings.py*. You can retrieve this URL at runtime with the code, `os.environ['WEBSITE_HOSTNAME']`. App Service automatically sets the `WEBSITE_HOSTNAME` environment variable to the app's URL. |
-| `DATABASES` | Define settings in App Service for the database connection and load them as environment variables to populate the [`DATABASES`](https://docs.djangoproject.com/en/3.1/ref/settings/#std:setting-DATABASES) dictionary. You can alternately store the values (especially the username and password) as [Azure Key Vault secrets](../key-vault/secrets/quick-create-python.md). |
+| `DATABASES` | Define settings in App Service for the database connection and load them as environment variables to populate the [`DATABASES`](https://docs.djangoproject.com/en/4.1/ref/settings/#std:setting-DATABASES) dictionary. You can alternately store the values (especially the username and password) as [Azure Key Vault secrets](../key-vault/secrets/quick-create-python.md). |
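As a minimal sketch (not part of the original article), these values can be read in *settings.py* roughly as follows; the `SECRET_KEY` and `DEBUG` names refer to the App Service app settings described above, and `WEBSITE_HOSTNAME` is set by the platform:

```python
# settings.py sketch: pull production values from App Service app settings,
# which are exposed to the app as environment variables at runtime.
import os

SECRET_KEY = os.environ['SECRET_KEY']            # App Service setting or Key Vault secret
DEBUG = os.environ.get('DEBUG', '0') == '1'      # DEBUG=0 in App Service, DEBUG=1 locally

# WEBSITE_HOSTNAME is only defined when running on App Service.
if 'WEBSITE_HOSTNAME' in os.environ:
    ALLOWED_HOSTS = [os.environ['WEBSITE_HOSTNAME']]
else:
    ALLOWED_HOSTS = ['localhost', '127.0.0.1']
```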
## Serve static files for Django apps
-If your Django web app includes static front-end files, first follow the instructions on [Managing static files](https://docs.djangoproject.com/en/3.1/howto/static-files/) in the Django documentation.
+If your Django web app includes static front-end files, first follow the instructions on [Managing static files](https://docs.djangoproject.com/en/4.1/howto/static-files/) in the Django documentation.
For App Service, you then make the following modifications:
For App Service, you then make the following modifications:
## Serve static files for Flask apps
-If your Flask web app includes static front-end files, first follow the instructions on [managing static files](https://flask.palletsprojects.com/en/2.1.x/tutorial/static/) in the Flask documentation. For an example of serving static files in a Flask application, see the [quickstart sample Flask application](https://github.com/Azure-Samples/msdocs-python-flask-webapp-quickstart) on GitHub.
+If your Flask web app includes static front-end files, first follow the instructions on [managing static files](https://flask.palletsprojects.com/en/2.2.x/tutorial/static/) in the Flask documentation. For an example of serving static files in a Flask application, see the [quickstart sample Flask application](https://github.com/Azure-Samples/msdocs-python-flask-webapp-quickstart) on GitHub.
To serve static files directly from a route on your application, you can use the [`send_from_directory`](https://flask.palletsprojects.com/en/2.2.x/api/#flask.send_from_directory) method:
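A minimal sketch of that approach follows; the route path and folder name here are illustrative assumptions, not taken from the sample app:

```python
from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route('/reports/<path:filename>')
def serve_report(filename):
    # Serve files from the app's "reports" folder; send_from_directory refuses
    # paths that try to escape the given directory.
    return send_from_directory('reports', filename)
```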
If the App Service doesn't find a custom command, a Django app, or a Flask app,
If you deployed code and still see the default app, see [Troubleshooting - App doesn't appear](#app-doesnt-appear).
-[![Default App Service on Linux web page](media/configure-language-python/default-python-app.png)](#app-doesnt-appear)
Again, if you expect to see a deployed app instead of the default app, see [Troubleshooting - App doesn't appear](#app-doesnt-appear).
To specify a startup command or command file:
Replace `<custom-command>` with either the full text of your startup command or the name of your startup command file.
-App Service ignores any errors that occur when processing a custom startup command or file, then continues its startup process by looking for Django and Flask apps. If you don't see the behavior you expect, check that your startup command or file is error-free and that a startup command file is deployed to App Service along with your app code. You can also check the [Diagnostic logs](#access-diagnostic-logs) for more information. Also check the app's **Diagnose and solve problems** page on the [Azure portal](https://portal.azure.com).
+App Service ignores any errors that occur when processing a custom startup command or file, then continues its startup process by looking for Django and Flask apps. If you don't see the behavior you expect, check that your startup command or file is error-free, and that a startup command file is deployed to App Service along with your app code. You can also check the [Diagnostic logs](#access-diagnostic-logs) for more information. Also check the app's **Diagnose and solve problems** page on the [Azure portal](https://portal.azure.com).
### Example startup commands
App Service ignores any errors that occur when processing a custom startup comma
For more information, see [Gunicorn logging](https://docs.gunicorn.org/en/stable/settings.html#logging) (docs.gunicorn.org). -- **Custom Flask main module**: by default, App Service assumes that a Flask app's main module is *application.py* or *app.py*. If your main module uses a different name, then you must customize the startup command. For example, if you have a Flask app whose main module is *hello.py* and the Flask app object in that file is named `myapp`, then the command is as follows:
+- **Custom Flask main module**: By default, App Service assumes that a Flask app's main module is *application.py* or *app.py*. If your main module uses a different name, then you must customize the startup command. For example, if you have a Flask app whose main module is *hello.py* and the Flask app object in that file is named `myapp`, then the command is as follows:
```bash gunicorn --bind=0.0.0.0 --timeout 600 hello:myapp
if 'X-Forwarded-Proto' in request.headers and request.headers['X-Forwarded-Proto
# Do something when HTTPS is used ```
-Popular web frameworks let you access the `X-Forwarded-*` information in your standard app pattern. In [CodeIgniter](https://codeigniter.com/), the [is_https()](https://github.com/bcit-ci/CodeIgniter/blob/master/system/core/Common.php#L338-L365) checks the value of `X_FORWARDED_PROTO` by default.
+Popular web frameworks let you access the `X-Forwarded-*` information in your standard app pattern. For example, in Django you can use the [SECURE_PROXY_SSL_HEADER](https://docs.djangoproject.com/en/4.1/ref/settings/#secure-proxy-ssl-header) setting to tell Django to use the `X-Forwarded-Proto` header.
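In *settings.py*, that setting looks like the following (a minimal sketch; only add it when the app always runs behind a front end that sets the header, as App Service does):

```python
# settings.py: trust the X-Forwarded-Proto header added by the App Service
# front end when deciding whether the original request used HTTPS.
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
```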
## Access diagnostic logs
When you deploy your code, App Service performs the build process described earl
Use the following steps to access the deployment logs:
-1. On the Azure portal for your web app, select **Deployment** > **Deployment Center (Preview)** on the left menu.
+1. On the Azure portal for your web app, select **Deployment** > **Deployment Center** on the left menu.
1. On the **Logs** tab, select the **Commit ID** for the most recent commit. 1. On the **Log details** page that appears, select the **Show Logs...** link that appears next to "Running oryx build...".
When you're successfully connected to the SSH session, you should see the messag
In general, the first step in troubleshooting is to use App Service Diagnostics:
-1. On the Azure portal for your web app, select **Diagnose and solve problems** from the left menu.
-1. Select **Availability and performance**.
-1. Examine the information in the **Application Logs**, **Container crash**, and **Container Issues** options, where the most common issues will appear.
+1. In the Azure portal for your web app, select **Diagnose and solve problems** from the left menu.
+1. Select **Availability and Performance**.
+1. Examine the information in the **Application Logs**, **Container Crash**, and **Container Issues** options, where the most common issues will appear.
Next, examine both the [deployment logs](#access-deployment-logs) and the [app logs](#access-diagnostic-logs) for any error messages. These logs often identify specific issues that can prevent app deployment or app startup. For example, the build can fail if your *requirements.txt* file has the wrong filename or isn't present in your project root folder.
The following sections provide guidance for specific issues.
- Restart the App Service, wait 15-20 seconds, and check the app again.
- - Be sure you're using App Service for Linux rather than a Windows-based instance. From the Azure CLI, run the command `az webapp show --resource-group <resource-group-name> --name <app-name> --query kind`, replacing `<resource-group-name>` and `<app-name>` accordingly. You should see `app,linux` as output; otherwise, recreate the App Service and choose Linux.
- - Use [SSH](#open-ssh-session-in-browser) to connect directly to the App Service container and verify that your files exist under *site/wwwroot*. If your files don't exist, use the following steps: 1. Create an app setting named `SCM_DO_BUILD_DURING_DEPLOYMENT` with the value of 1, redeploy your code, wait a few minutes, then try to access the app again. For more information on creating app settings, see [Configure an App Service app in the Azure portal](configure-common.md). 1. Review your deployment process, [check the deployment logs](#access-deployment-logs), correct any errors, and redeploy the app.
The following sections provide guidance for specific issues.
#### ModuleNotFoundError when app starts
-If you see an error like `ModuleNotFoundError: No module named 'example'`, this means that Python couldn't find one or more of your modules when the application started. This most often occurs if you deploy your virtual environment with your code. Virtual environments aren't portable, so a virtual environment shouldn't be deployed with your application code. Instead, let Oryx create a virtual environment and install your packages on the web app by creating an app setting, `SCM_DO_BUILD_DURING_DEPLOYMENT`, and setting it to `1`. This will force Oryx to install your packages whenever you deploy to App Service. For more information, please see [this article on virtual environment portability](https://azure.github.io/AppService/2020/12/11/cicd-for-python-apps.html).
+If you see an error like `ModuleNotFoundError: No module named 'example'`, then Python couldn't find one or more of your modules when the application started. This error most often occurs if you deploy your virtual environment with your code. Virtual environments aren't portable, so a virtual environment shouldn't be deployed with your application code. Instead, let Oryx create a virtual environment and install your packages on the web app by creating an app setting, `SCM_DO_BUILD_DURING_DEPLOYMENT`, and setting it to `1`. This setting will force Oryx to install your packages whenever you deploy to App Service. For more information, please see [this article on virtual environment portability](https://azure.github.io/AppService/2020/12/11/cicd-for-python-apps.html).
### Database is locked
-When attempting to run database migrations with a Django app, you may see "sqlite3. OperationalError: database is locked." The error indicates that your application is using a SQLite database for which Django is configured by default, rather than using a cloud database such as PostgreSQL for Azure.
+When attempting to run database migrations with a Django app, you may see "sqlite3.OperationalError: database is locked." The error indicates that your application is using the SQLite database that Django is configured to use by default, rather than a cloud database such as Azure Database for PostgreSQL.
Check the `DATABASES` variable in the app's *settings.py* file to ensure that your app is using a cloud database instead of SQLite.
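For comparison, a cloud PostgreSQL configuration in *settings.py* might look roughly like this sketch; the `DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS` environment variable names are illustrative and would be defined as App Service app settings:

```python
# settings.py sketch: use a cloud PostgreSQL database instead of the default
# SQLite file. Requires a PostgreSQL driver such as psycopg2 in requirements.txt.
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': os.environ['DBHOST'],
        'NAME': os.environ['DBNAME'],
        'USER': os.environ['DBUSER'],
        'PASSWORD': os.environ['DBPASS'],
    }
}
```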
If you're encountering this error with the sample in [Tutorial: Deploy a Django
- **Commands in the SSH session appear to be cut off**: The editor may not be word-wrapping commands, but they should still run correctly. -- **Static assets don't appear in a Django app**: Ensure that you have enabled the [whitenoise module](http://whitenoise.evans.io/en/stable/django.html)
+- **Static assets don't appear in a Django app**: Ensure that you've enabled the [whitenoise module](http://whitenoise.evans.io/en/stable/django.html)
- **You see the message, "Fatal SSL Connection is Required"**: Check any usernames and passwords used to access resources (such as databases) from within the app.
app-service Configure Linux Open Ssh Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-linux-open-ssh-session.md
ms.assetid: 66f9988f-8ffa-414a-9137-3a9b15a5573c Previously updated : 09/10/2021 Last updated : 11/18/2022
app-service Deploy Ci Cd Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-ci-cd-custom-container.md
ms.assetid: a47fb43a-bbbd-4751-bdc1-cd382eae49f8 Previously updated : 03/12/2021 Last updated : 11/18/2022 zone_pivot_groups: app-service-containers-windows-linux
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 11/11/2022 Last updated : 11/18/2022
At this time, App Service Environment migrations to v3 using the migration featu
- Norway East - Norway West - South Central US
+- South India
- Switzerland North - Switzerland West - UAE North
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
Title: App Service Environment overview
description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 11/14/2022 Last updated : 11/18/2022
App Service Environment v3 is available in the following regions:
| Region | Single zone support | Availability zone support | Single zone support | | -- | :--: | :-: | :-: | | | App Service Environment v3 | App Service Environment v3 | App Service Environment v1/v2 |
-| Australia Central | | | ✅ |
-| Australia Central 2 | | | ✅ |
+| Australia Central | ✅ | | ✅ |
+| Australia Central 2 | ✅* | | ✅ |
| Australia East | ✅ | ✅ | ✅ | | Australia Southeast | ✅ | | ✅ | | Brazil South | ✅ | ✅ | ✅ |
App Service Environment v3 is available in the following regions:
| East US 2 | ✅ | ✅ | ✅ | | France Central | ✅ | ✅ | ✅ | | France South | | | ✅ |
-| Germany North | | | ✅ |
+| Germany North | ✅ | | ✅ |
| Germany West Central | ✅ | ✅ | ✅ | | Japan East | ✅ | ✅ | ✅ | | Japan West | | | ✅ | | Jio India West | | | ✅ | | Korea Central | ✅ | ✅ | ✅ |
-| Korea South | | | ✅ |
+| Korea South | ✅ | | ✅ |
| North Central US | ✅ | | ✅ | | North Europe | ✅ | ✅ | ✅ | | Norway East | ✅ | ✅ | ✅ |
App Service Environment v3 is available in the following regions:
| South Africa North | ✅ | ✅ | ✅ | | South Africa West | | | ✅ | | South Central US | ✅ | ✅ | ✅ |
-| South India | | | ✅ |
+| South India | ✅ | | ✅ |
| Southeast Asia | ✅ | ✅ | ✅ | | Sweden Central | ✅ | ✅ | | | Switzerland North | ✅ | ✅ | ✅ |
App Service Environment v3 is available in the following regions:
| UK West | ✅ | | ✅ | | West Central US | ✅ | | ✅ | | West Europe | ✅ | ✅ | ✅ |
-| West India | | | ✅ |
+| West India | ✅* | | ✅ |
| West US | ✅ | | ✅ | | West US 2 | ✅ | ✅ | ✅ | | West US 3 | ✅ | ✅ | ✅ |
app-service Monitor App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-app-service.md
Previously updated : 04/16/2021 Last updated : 11/18/2022 # Monitoring App Service
app-service Provision Resource Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/provision-resource-bicep.md
description: Create your first app to Azure App Service in seconds using Azure B
Previously updated : 8/26/2021 Last updated : 11/18/2022 # Create App Service app using Bicep
app-service Quickstart Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-html.md
description: Deploy your first HTML Hello World to Azure App Service in minutes.
ms.assetid: 60495cc5-6963-4bf0-8174-52786d226c26 Previously updated : 08/23/2019 Last updated : 11/18/2022
-adobe-target: true
-adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021
-adobe-target-experience: Experience B
-adobe-target-content: ./quickstart-html-uiex
# Create a static HTML web app in Azure
app-service Quickstart Multi Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-multi-container.md
description: Get started with multi-container apps on Azure App Service by deplo
keywords: azure app service, web app, linux, docker, compose, multicontainer, multi-container, web app for containers, multiple containers, container, wordpress, azure db for mysql, production database with containers Previously updated : 08/23/2019 Last updated : 11/18/2022
app-service Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-nodejs.md
Title: 'Quickstart: Create a Node.js web app' description: Deploy your first Node.js Hello World to Azure App Service in minutes. ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a++ Last updated 03/22/2022
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md
Title: 'Quickstart: Create a PHP web app'
description: Deploy your first PHP Hello World to Azure App Service in minutes. You deploy using Git, which is one of many ways to deploy to App Service. ms.assetid: 6feac128-c728-4491-8b79-962da9a40788 ++ Last updated 03/10/2022 ms.devlang: php
app-service Samples Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-bicep.md
tags: azure-service-management Previously updated : 8/26/2021 Last updated : 11/18/2022
app-service Samples Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-terraform.md
tags: azure-service-management
ms.assetid: 1e5ecfa8-4ab1-47d3-ab23-97abf723516d Previously updated : 08/10/2020 Last updated : 11/18/2022
app-service Troubleshoot Intermittent Outbound Connection Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-intermittent-outbound-connection-errors.md
TCP connections and SNAT ports are not directly related. A TCP connections usage
* A SNAT port can be shared by different flows, if the flows are different in either protocol, IP address or port. The TCP Connections metric counts every TCP connection. * The TCP connections limit happens at the worker instance level. The Azure Network outbound load balancing doesn't use the TCP Connections metric for SNAT port limiting. * The TCP connections limits are described in [Sandbox Cross VM Numerical Limits - TCP Connections](https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#cross-vm-numerical-limits)
-* Existing TCP sessions will fail when new outbound TCP sessions from Azure App Service source port. You can either use a single IP or reconfigure backend pool members to avoid conflicts
+* Existing TCP sessions will fail when new outbound TCP sessions are added from the same Azure App Service source port. You can either use a single IP or reconfigure backend pool members to avoid conflicts.
|Limit name|Description|Small (A1)|Medium (A2)|Large (A3)|Isolated tier (ASE)| |||||||
app-service Tutorial Multi Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-container-app.md
keywords: azure app service, web app, linux, docker, compose, multicontainer, mu
Previously updated : 10/31/2020 Last updated : 11/18/2022
-#Customer intent: As an Azure customer, I want to learn how to deploy multiple containers using WordPress into Web App for Containers.
# Tutorial: Create a multi-container (preview) app in Web App for Containers
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
recommendations: false
[:::image type="icon" source="media/studio/read-card.png" :::](https://formrecognizer.appliedai.azure.com/studio/read)
-The Read API analyzes and extracts ext lines, words, their locations, detected languages, and handwritten style if detected.
+The Read API analyzes and extracts lines, words, their locations, detected languages, and handwritten style if detected.
***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/read)***:
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Previously updated : 11/16/2022 Last updated : 11/17/2022 monikerRange: '>=form-recog-2.1.0' recommendations: false
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
## October 2022
-### Form Recognizer versioned content
-
-Form Recognizer documentation has been updated to present a versioned experience. Now, you can choose to view content targeting the v3.0 GA experience or the v2.1 GA experience. The v3.0 experience is the default.
--
-### Form Recognizer Studio Sample Code
-
-Sample code for the [Form Recognizer Studio labeling experience](https://github.com/microsoft/Form-Recognizer-Toolkit/tree/main/SampleCode/LabelingUX) is now available on GitHub. Customers can develop and integrate Form Recognizer into their own UX or build their own new UX using the Form Recognizer Studio sample code.
+
-### Language expansion
+* **Form Recognizer versioned content**
+ * Form Recognizer documentation has been updated to present a versioned experience. Now, you can choose to view content targeting the v3.0 GA experience or the v2.1 GA experience. The v3.0 experience is the default.
-With the latest preview release, Form Recognizer's Read (OCR), Layout, and Custom template models support 134 new languages. These language additions include Greek, Latvian, Serbian, Thai, Ukrainian, and Vietnamese, along with several Latin and Cyrillic languages. Form Recognizer now has a total of 299 supported languages across the most recent GA and new preview versions. Refer to the [supported languages](language-support.md) page to see all supported languages.
+ :::image type="content" source="media/versioning-and-monikers.png" alt-text="Screenshot of the Form Recognizer landing page denoting the version dropdown menu.":::
-Use the REST API parameter `api-version=2022-06-30-preview` when using the API or the corresponding SDK to support the new languages in your applications.
+* **Form Recognizer Studio Sample Code**
+ * Sample code for the [Form Recognizer Studio labeling experience](https://github.com/microsoft/Form-Recognizer-Toolkit/tree/main/SampleCode/LabelingUX) is now available on GitHub. Customers can develop and integrate Form Recognizer into their own UX or build their own new UX using the Form Recognizer Studio sample code.
-### New Prebuilt Contract model
+* **Language expansion**
+ * With the latest preview release, Form Recognizer's Read (OCR), Layout, and Custom template models support 134 new languages. These language additions include Greek, Latvian, Serbian, Thai, Ukrainian, and Vietnamese, along with several Latin and Cyrillic languages. Form Recognizer now has a total of 299 supported languages across the most recent GA and new preview versions. Refer to the [supported languages](language-support.md) page to see all supported languages.
+ * Use the REST API parameter `api-version=2022-06-30-preview` when using the API or the corresponding SDK to support the new languages in your applications.
-A new prebuilt that extracts information from contracts such as parties, title, contract ID, execution date and more. the contracts model is currently in preview, request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
+* **New Prebuilt Contract model**
+ * A new prebuilt model that extracts information from contracts, such as parties, title, contract ID, execution date, and more. The contracts model is currently in preview; request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
-### Region expansion for training custom neural models
+* **Region expansion for training custom neural models**
+ * Training custom neural models is now supported in additional regions.
+ > [!div class="checklist"]
+ >
+ > * East US
+ > * East US2
+ > * US Gov Arizona
-Training custom neural models now supported in added regions.
-* East US
-* East US2
-* US Gov Arizona
+ ## September 2022
-### Region expansion for training custom neural models
-
-Training custom neural models is now supported in six new regions.
-
-* Australia East
-* Central US
-* East Asia
-* France Central
-* UK South
-* West US2
-
-For a complete list of regions where training is supported see [custom neural models](concept-custom-neural.md).
-
-#### Form Recognizer SDK version 4.0.0 GA release
-
-* **Form Recognizer SDKs version 4.0.0 (.NET/C#, Java, JavaScript) and version 3.2.0 (Python) are generally available and ready for use in production applications!**
-
-* For more information on Form Recognizer SDKs, see the [**SDK overview**](sdk-overview.md).
-
-* Update your applications using your programming language's **migration guide** (see below).
- >[!NOTE] > Starting with version 4.0.0, a new set of clients has been introduced to leverage the newest features of the Form Recognizer service.
-This release includes the following updates:
+**SDK version 4.0.0 GA release includes the following updates:**
### [**C#**](#tab/csharp)
This release includes the following updates:
-## August 2022
+* **Region expansion: training custom neural models is now supported in six new regions**
+ > [!div class="checklist"]
+ >
+ > * Australia East
+ > * Central US
+ > * East Asia
+ > * France Central
+ > * UK South
+ > * West US2
+
+ * For a complete list of regions where training is supported, see [custom neural models](concept-custom-neural.md).
+
+ * Form Recognizer SDK version 4.0.0 GA release
+ * **Form Recognizer SDKs version 4.0.0 (.NET/C#, Java, JavaScript) and version 3.2.0 (Python) are generally available and ready for use in production applications!**
+ * For more information on Form Recognizer SDKs, see the [**SDK overview**](sdk-overview.md).
+ * Update your applications using your programming language's **migration guide** (see above).
++
-#### Form Recognizer SDK beta August 2022 preview release
+## August 2022
-This release includes the following updates:
+**Form Recognizer SDK beta August 2022 preview release includes the following updates:**
### [**C#**](#tab/csharp)
This release includes the following updates:
-### Form Recognizer v3.0 generally available
-
-**Form Recognizer REST API v3.0 is now generally available and ready for use in production applications!** Update your applications with [**REST API version 2022-08-31**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument).
-
-#### The August release introduces the following new capabilities and updates:
-
-##### Form Recognizer Studio updates
-
-* **Next steps**. Under each model page, the Studio now has a next steps section. Users can quickly reference sample code, troubleshooting guidelines, and pricing information.
-
-* **Custom models**. The Studio now includes the ability to reorder labels in custom model projects to improve labeling efficiency.
+* Form Recognizer v3.0 generally available
-* **Copy Models** Custom models can be copied across Form Recognizer services from within the Studio. The operation enables the promotion of a trained model to other environments and regions.
+ * **Form Recognizer REST API v3.0 is now generally available and ready for use in production applications!** Update your applications with [**REST API version 2022-08-31**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument).
-* **Delete documents**. The Studio now supports deleting documents from labeled dataset within custom projects.
+* Form Recognizer Studio updates
+ > [!div class="checklist"]
+ >
+ > * **Next steps**. Under each model page, the Studio now has a next steps section. Users can quickly reference sample code, troubleshooting guidelines, and pricing information.
+ > * **Custom models**. The Studio now includes the ability to reorder labels in custom model projects to improve labeling efficiency.
+ > * **Copy Models**. Custom models can be copied across Form Recognizer services from within the Studio. The operation enables the promotion of a trained model to other environments and regions.
+ > * **Delete documents**. The Studio now supports deleting documents from labeled dataset within custom projects.
-##### Form Recognizer service updates
+* Form Recognizer service updates
-* [**prebuilt-read**](concept-read.md). Read OCR model is now also available in Form Recognizer with paragraphs and language detection as the two new features. Form Recognizer Read targets advanced document scenarios aligned with the broader document intelligence capabilities in Form Recognizer.
-
-* [**prebuilt-layout**](concept-layout.md). The Layout model extracts paragraphs and whether the extracted text is a paragraph, title, section heading, footnote, page header, page footer, or page number.
-
-* [**prebuilt-invoice**](concept-invoice.md). The TotalVAT and Line/VAT fields will now resolve to the existing fields TotalTax and Line/Tax respectively.
-
-* [**prebuilt-idDocument**](concept-id-document.md). Data extraction support for US state ID, social security, and green cards. Support for passport visa information.
-
-* [**prebuilt-receipt**](concept-receipt.md). Expanded locale support for French (fr-FR), Spanish (es-ES), Portuguese (pt-PT), Italian (it-IT) and German (de-DE).
-
-* [**prebuilt-businessCard**](concept-business-card.md). Address parsing support to extract subfields for address components like address, city, state, country, and zip code.
+ * [**prebuilt-read**](concept-read.md). Read OCR model is now also available in Form Recognizer with paragraphs and language detection as the two new features. Form Recognizer Read targets advanced document scenarios aligned with the broader document intelligence capabilities in Form Recognizer.
+ * [**prebuilt-layout**](concept-layout.md). The Layout model extracts paragraphs and whether the extracted text is a paragraph, title, section heading, footnote, page header, page footer, or page number.
+ * [**prebuilt-invoice**](concept-invoice.md). The TotalVAT and Line/VAT fields will now resolve to the existing fields TotalTax and Line/Tax respectively.
+ * [**prebuilt-idDocument**](concept-id-document.md). Data extraction support for US state ID, social security, and green cards. Support for passport visa information.
+ * [**prebuilt-receipt**](concept-receipt.md). Expanded locale support for French (fr-FR), Spanish (es-ES), Portuguese (pt-PT), Italian (it-IT) and German (de-DE).
+ * [**prebuilt-businessCard**](concept-business-card.md). Address parsing support to extract subfields for address components like address, city, state, country, and zip code.
* **AI quality improvements**
  * [**prebuilt-read**](concept-read.md). Enhanced support for single characters, handwritten dates, amounts, names, and other entities commonly found in receipts and invoices, and improved processing of digital PDF documents.
  * [**prebuilt-layout**](concept-layout.md). Support for better detection of cropped tables, borderless tables, and improved recognition of long spanning cells.
  * [**prebuilt-document**](concept-general-document.md). Improved value and check box detection.
  * [**custom-neural**](concept-custom-neural.md). Improved accuracy for table detection and extraction.
-## June 2022
-
-### [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) June Update
-
-The June release is the latest update to the Form Recognizer Studio. There are considerable user experience and accessibility improvements addressed in this update:
-
-* **Code sample for Javascript and C#**. The Studio code tab now adds JavaScript and C# code samples in addition to the existing Python one.
-* **New document upload UI**. Studio now supports uploading a document with drag & drop into the new upload user interface.
-* **New feature for custom projects**. Custom projects now support creating storage account and blobs when configuring the project. In addition, custom project now supports uploading training files directly within the Studio and copying the existing custom model.
-
-### Form Recognizer v3.0 preview release
-
-The **2022-06-30-preview** release presents extensive updates across the feature APIs:
-
-* [**Layout extends structure extraction**](concept-layout.md). Layout now includes added structure elements including sections, section headers, and paragraphs. This update enables finer grain document segmentation scenarios. For a complete list of structure elements identified, _see_ [enhanced structure](concept-layout.md#data-extraction).
-* [**Custom neural model tabular fields support**](concept-custom-neural.md). Custom document models now support tabular fields. Tabular fields by default are also multi page. To learn more about tabular fields in custom neural models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
-* [**Custom template model tabular fields support for cross page tables**](concept-custom-template.md). Custom form models now support tabular fields across pages. To learn more about tabular fields in custom template models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
-* [**Invoice model output now includes general document key-value pairs**](concept-invoice.md). Where invoices contain required fields beyond the fields included in the prebuilt model, the general document model supplements the output with key-value pairs. _See_ [key value pairs](concept-invoice.md#key-value-pairs).
-* [**Invoice language expansion**](concept-invoice.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales).
-* [**Prebuilt business card**](concept-business-card.md) now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales).
-* [**Prebuilt ID document model**](concept-id-document.md). The ID document model now extracts DateOfIssue, Height, Weight, EyeColor, HairColor, and DocumentDiscriminator from US driver's licenses. _See_ [field extraction](concept-id-document.md).
-* [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [Microsoft Office and HTML text extraction ](concept-read.md#microsoft-office-and-html-text-extraction).
+
-#### Form Recognizer SDK beta June 2022 preview release
+## June 2022
-This new release includes the following updates:
+* Form Recognizer SDK beta June 2022 preview release includes the following updates:
### [**C#**](#tab/csharp)
This new release includes the following updates:
[**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
+* The [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) June release is the latest Studio update and addresses considerable user experience and accessibility improvements:
-## February 2022
-
-### Form Recognizer v3.0 preview release
-
- Form Recognizer v3.0 preview release introduces several new features and capabilities and enhances existing one:
-
-* [**Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-strutured and **unstructured documents**.
-* [**W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 forms for tax reporting and income verification scenarios.
-* [**Read**](concept-read.md) API extracts printed text lines, words, text locations, detected languages, and handwritten text, if detected.
-* [**General document**](concept-general-document.md) pre-trained model is now updated to support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents.
-* [**Invoice API**](language-support.md#invoice-model) Invoice prebuilt model expands support to Spanish invoices.
-* [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models.
-* [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten language support expands to Japanese and Korean.
+ * **Code sample for JavaScript and C#**. The Studio code tab now adds JavaScript and C# code samples in addition to the existing Python one.
+ * **New document upload UI**. Studio now supports uploading a document with drag & drop into the new upload user interface.
+ * **New feature for custom projects**. Custom projects now support creating storage accounts and blobs when configuring the project. In addition, custom projects now support uploading training files directly within the Studio and copying an existing custom model.
-Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [.NET](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK for the v3.0 preview API.
+* Form Recognizer v3.0 **2022-06-30-preview** release presents extensive updates across the feature APIs:
-#### Form Recognizer model data extraction
+ * [**Layout extends structure extraction**](concept-layout.md). Layout now includes added structure elements including sections, section headers, and paragraphs. This update enables finer grain document segmentation scenarios. For a complete list of structure elements identified, _see_ [enhanced structure](concept-layout.md#data-extraction).
+ * [**Custom neural model tabular fields support**](concept-custom-neural.md). Custom document models now support tabular fields. Tabular fields by default are also multi page. To learn more about tabular fields in custom neural models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
+ * [**Custom template model tabular fields support for cross page tables**](concept-custom-template.md). Custom form models now support tabular fields across pages. To learn more about tabular fields in custom template models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
+ * [**Invoice model output now includes general document key-value pairs**](concept-invoice.md). Where invoices contain required fields beyond the fields included in the prebuilt model, the general document model supplements the output with key-value pairs. _See_ [key value pairs](concept-invoice.md#key-value-pairs).
+ * [**Invoice language expansion**](concept-invoice.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales).
+ * [**Prebuilt business card**](concept-business-card.md) now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales).
+ * [**Prebuilt ID document model**](concept-id-document.md). The ID document model now extracts DateOfIssue, Height, Weight, EyeColor, HairColor, and DocumentDiscriminator from US driver's licenses. _See_ [field extraction](concept-id-document.md).
+ * [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [Microsoft Office and HTML text extraction](concept-read.md#microsoft-office-and-html-text-extraction).
- | **Model** | **Text extraction** | **Key-Value pairs** | **Selection Marks** | **Tables** | **Signatures** |
- | --- | :-: | :-: | :-: | :-: | :-: |
- | Read | ✓ | | | | |
- | General document | ✓ | ✓ | ✓ | ✓ | |
- | Layout | ✓ | | ✓ | ✓ | |
- | Invoice | ✓ | ✓ | ✓ | ✓ | |
- | Receipt | ✓ | ✓ | | | ✓ |
- | ID document | ✓ | ✓ | | | |
- | Business card | ✓ | ✓ | | | |
- | Custom template | ✓ | ✓ | ✓ | ✓ | ✓ |
- | Custom neural | ✓ | ✓ | ✓ | ✓ | |
-
-#### Form Recognizer SDK beta preview release
-
-This new release includes the following updates:
-
-* [Custom Document models and modes](concept-custom.md):
- * [Custom template](concept-custom-template.md) (formerly custom form)
- * [Custom neural](concept-custom-neural.md).
- * [Custom model build mode](concept-custom.md#build-mode).
-
-* [W-2 prebuilt model](concept-w2.md) (prebuilt-tax.us.w2).
-
-* [Read prebuilt model](concept-read.md) (prebuilt-read).
+
-* [Invoice prebuilt model (Spanish)](concept-invoice.md#supported-languages-and-locales) (prebuilt-invoice).
+## February 2022
### [**C#**](#tab/csharp)
This new release includes the following updates:
-## November 2021
-
-### Form Recognizer v3.0 preview SDK release update (beta.2)
-
- The beta.2 version of the Azure Form Recognizer SDKs has been released. This new beta release incorporates bug fixes and minor feature updates.
+* Form Recognizer v3.0 preview release introduces several new features, capabilities and enhancements:
-### [**C#**](#tab/csharp)
+ * [**Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-structured, and **unstructured documents**.
+ * [**W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 forms for tax reporting and income verification scenarios.
+ * [**Read**](concept-read.md) API extracts printed text lines, words, text locations, detected languages, and handwritten text, if detected.
+ * [**General document**](concept-general-document.md) pre-trained model is now updated to support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents.
+ * [**Invoice API**](language-support.md#invoice-model) Invoice prebuilt model expands support to Spanish invoices.
+ * [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models.
+ * [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten language support expands to Japanese and Korean.
-**Version 4.0.0-beta.2 (2021-11-09)**
+* Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [.NET](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK for the v3.0 preview API.
-| [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.2) | [**Changelog**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md) | [**API reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
+* Form Recognizer model data extraction
-#### Bugs Fixed
+ | **Model** | **Text extraction** | **Key-Value pairs** | **Selection Marks** | **Tables** | **Signatures** |
+ | --- | :-: | :-: | :-: | :-: | :-: |
+ | Read | ✓ | | | | |
+ | General document | ✓ | ✓ | ✓ | ✓ | |
+ | Layout | ✓ | | ✓ | ✓ | |
+ | Invoice | ✓ | ✓ | ✓ | ✓ | |
+ | Receipt | ✓ | ✓ | | | ✓ |
+ | ID document | ✓ | ✓ | | | |
+ | Business card | ✓ | ✓ | | | |
+ | Custom template | ✓ | ✓ | ✓ | ✓ | ✓ |
+ | Custom neural | ✓ | ✓ | ✓ | ✓ | |
-The `BuildModelOperation` and `CopyModelOperation` now correctly populate the `PercentCompleted` property, and no longer return a constant value of 0.
+* Form Recognizer SDK beta preview release includes the following updates:
-### [**Java**](#tab/java)
+ * [Custom Document models and modes](concept-custom.md):
+ * [Custom template](concept-custom-template.md) (formerly custom form)
+ * [Custom neural](concept-custom-neural.md).
+ * [Custom model build mode](concept-custom.md#build-mode).
- | [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.2) | [**Changelog**](https://oss.sonatype.org/service/local/repositories/releases/content/com/azure/azure-ai-formrecognizer/4.0.0-beta.2/azure-ai-formrecognizer-4.0.0-beta.2-changelog.md) | [**API reference documentation**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0-beta.2/https://docsupdatetracker.net/index.html)
+ * [W-2 prebuilt model](concept-w2.md) (prebuilt-tax.us.w2).
+ * [Read prebuilt model](concept-read.md) (prebuilt-read).
+ * [Invoice prebuilt model (Spanish)](concept-invoice.md#supported-languages-and-locales) (prebuilt-invoice).
-#### Feature updates
+
-* The `HttpResponseException` has been updated to use azure-core `ResponseError`.
+## November 2021
-* Client validation has been added to check for empty `modelId` passed by the user for `beginAnalyzeDocument` methods.
+### [**C#**](#tab/csharp)
-#### Breaking changes
+**Version 4.0.0-beta.2 (2021-11-09)**
-* `DocumentAnalysisException` has been renamed to `DocumentModelOperationException`.
+| [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.2) | [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md) | [**API reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
-* `FormRecognizerError` has been renamed to `DocumentModelOperationError`.
+### [**Java**](#tab/java)
-* `InnerError` has been renamed to `DocumentModelOperationInnerError`.
+ | [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.2) | [**Changelog/Release History**](https://oss.sonatype.org/service/local/repositories/releases/content/com/azure/azure-ai-formrecognizer/4.0.0-beta.2/azure-ai-formrecognizer-4.0.0-beta.2-changelog.md) | [**API reference documentation**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0-beta.2/https://docsupdatetracker.net/index.html)
### [**JavaScript**](#tab/javascript)
-| [**Package (NPM)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.2) | [**Changelog**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md) | [**API reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true) |
-
-#### Feature updates
-
-* The `words` property has been added to the `DocumentLine` interface.
-
-* The `createdOn` (date created) and `lastUpdatedOn` (time last modified) properties have been added to the `DocumentAnalysisPollOperationState` and `TrainingPollOperationState` interfaces.
-
-#### Bugs fixed
-
-* The handling of long-running operations (analysis and model creation operations) has been improved. Clients will no longer attempt to parse model IDs and will now accept operation-location fields verbatim. Thus, the *unable to parse operationLocation* error is no longer possible.
-
-#### Breaking changes
-
-* The `operationId` field for `DocumentAnalysisPollOperationState` has been replaced with the `operationLocation` field that contains the full operation URL not the operation GUID.
+| [**Package (NPM)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.2) | [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md) | [**API reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true) |
### [**Python**](#tab/python)
-| [**Package (PyPI)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b2/) | [**Changelog**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b2/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md) | [**API reference documentation**](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/latest/azure.ai.formrecognizer.html)
-
-#### Feature updates
+| [**Package (PyPI)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b2/) | [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b2/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md) | [**API reference documentation**](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/latest/azure.ai.formrecognizer.html)
-* The `get_words()` method has been added to the `DocumentLine` model. *See* our [How to get words contained in a Document line](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_get_words_on_document_line.py) sample on GitHub.
-
-#### Breaking changes
+
-* The `DocumentElement` class has been renamed to `DocumentContentElement`.
+* **Form Recognizer v3.0 preview SDK release update (beta.2) incorporates bug fixes and minor feature updates.**
## October 2021
-### Form Recognizer v3.0 preview release (beta.1)
-
-**Version 4.0.0-beta.1 (2021-10-07)**
+* **Form Recognizer v3.0 preview release version 4.0.0-beta.1 (2021-10-07) introduces several new features and capabilities:**
- Form Recognizer v3.0 preview release introduces several new features and capabilities:
+ * [**General document**](concept-general-document.md) model is a new API that uses a pre-trained model to extract text, tables, structure, key-value pairs, and named entities from forms and documents.
+ * [**Hotel receipt**](concept-receipt.md) model added to prebuilt receipt processing.
+ * [**Expanded fields for ID document**](concept-id-document.md) the ID model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
+ * [**Signature field**](concept-custom.md) is a new field type in custom forms to detect the presence of a signature in a form field.
+ * [**Language Expansion**](language-support.md) Support for 122 languages (print) and 7 languages (handwritten). Form Recognizer Layout and Custom Form expand [supported languages](language-support.md) to 122 with its latest preview. The preview includes text extraction for print text in 49 new languages including Russian, Bulgarian, and other Cyrillic and more Latin languages. In addition extraction of handwritten text now supports seven languages that include English, and new previews of Chinese Simplified, French, German, Italian, Portuguese, and Spanish.
+ * **Tables and text extraction enhancements** Layout now supports extracting single row tables also called key-value tables. Text extraction enhancements include better processing of digital PDFs and Machine Readable Zone (MRZ) text in identity documents, along with general performance.
+ * [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) To simplify use of the service, you can now access the Form Recognizer Studio to test the different prebuilt models or label and train a custom model
-* [**General document**](concept-general-document.md) model is a new API that uses a pre-trained model to extract text, tables, structure, key-value pairs, and named entities from forms and documents.
-* [**Hotel receipt**](concept-receipt.md) model added to prebuilt receipt processing.
-* [**Expanded fields for ID document**](concept-id-document.md) the ID model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
-* [**Signature field**](concept-custom.md) is a new field type in custom forms to detect the presence of a signature in a form field.
+ * Get started with the new [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm), [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [.NET](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK for the v3.0 preview API.
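  The following minimal sketch (not part of the original announcement) shows one way to call the general document model from the 3.2.0 beta Python package; the endpoint, key, and file name are placeholders, and the client surface is assumed from the beta SDK notes earlier in this article.

  ```python
  # A minimal sketch, assuming the 3.2.0 beta azure-ai-formrecognizer package and the
  # "prebuilt-document" (general document) model; endpoint, key, and file are placeholders.
  from azure.ai.formrecognizer import DocumentAnalysisClient
  from azure.core.credentials import AzureKeyCredential

  client = DocumentAnalysisClient("https://<resource-name>.cognitiveservices.azure.com/",
                                  AzureKeyCredential("<api-key>"))

  with open("sample-document.pdf", "rb") as f:
      poller = client.begin_analyze_document("prebuilt-document", f)
  result = poller.result()

  # Print the key-value pairs extracted by the pre-trained general document model.
  for pair in result.key_value_pairs:
      key = pair.key.content if pair.key else ""
      value = pair.value.content if pair.value else ""
      print(f"{key}: {value}")
  ```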
-* [**Language Expansion**](language-support.md) Support for 122 languages (print) and 7 languages (handwritten). Form Recognizer Layout and Custom Form expand [supported languages](language-support.md) to 122 with its latest preview. The preview includes text extraction for print text in 49 new languages including Russian, Bulgarian, and other Cyrillic and more Latin languages. In addition extraction of handwritten text now supports seven languages that include English, and new previews of Chinese Simplified, French, German, Italian, Portuguese, and Spanish.
-
-* **Tables and text extraction enhancements** Layout now supports extracting single row tables also called key-value tables. Text extraction enhancements include better processing of digital PDFs and Machine Readable Zone (MRZ) text in identity documents, along with general performance.
-
-* [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) To simplify use of the service, you can now access the Form Recognizer Studio to test the different prebuilt models or label and train a custom model
-
-Get started with the new [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm), [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [.NET](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK for the v3.0 preview API.
-
-#### Form Recognizer model data extraction
+* Form Recognizer model data extraction
| **Model** | **Text extraction** |**Key-Value pairs** |**Selection Marks** | **Tables** |**Entities** |
| --- | :-: | :-: | :-: | :-: | :-: |
Get started with the new [REST API](https://westus2.dev.cognitive.microsoft.com/
| Business card | ✓ | ✓ | | | |
| Custom |✓ | ✓ | ✓ | ✓ | ✓ |

## September 2021

* [Azure metrics explorer advanced features](../../azure-monitor/essentials/metrics-charts.md) are available on your Form Recognizer resource overview page in the Azure portal.
-### Monitoring menu
+* Monitoring menu
+ :::image type="content" source="media/portal-metrics.png" alt-text="Screenshot showing the monitoring menu in the Azure portal":::
-### Charts
+* Charts
+ :::image type="content" source="media/portal-metrics-charts.png" alt-text="Screenshot showing an example metric chart in the Azure portal.":::
* **ID document** model update: given names including a suffix, with or without a period (full stop), process successfully:
Get started with the new [REST API](https://westus2.dev.cognitive.microsoft.com/
| William Isaac Kirby Jr. |**FirstName**: William Isaac</br></br>**LastName**: Kirby Jr. |
| Henry Caleb Ross Sr | **FirstName**: Henry Caleb </br></br> **LastName**: Ross Sr |

## July 2021
-### System-assigned managed identity support
+* System-assigned managed identity support: You can now enable a system-assigned managed identity to grant Form Recognizer limited access to private storage accounts, including accounts protected by a Virtual Network (VNet) or firewall, or accounts with bring-your-own-storage (BYOS) enabled. *See* [Create and use managed identity for your Form Recognizer resource](managed-identity-byos.md) to learn more.
- You can now enable a system-assigned managed identity to grant Form Recognizer limited access to private storage accounts including accounts protected by a Virtual Network (VNet) or firewall or have enabled bring-your-own-storage (BYOS). *See* [Create and use managed identity for your Form Recognizer resource](managed-identity-byos.md) to learn more.
+ ## June 2021
-### Form Recognizer containers v2.1 released in gated preview
-
-Form Recognizer features are now supported by six feature containersΓÇö**Layout**, **Business Card**,**ID Document**, **Receipt**, **Invoice**, and **Custom**. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and receive approval.
-
-*See* [**Install and run Docker containers for Form Recognizer**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout) and [**Configure Form Recognizer containers**](containers/form-recognizer-container-configuration.md?branch=main)
-
-### Form Recognizer connector released in preview
-
- The [**Form Recognizer connector**](/connectors/formrecognizer) integrates with [Azure Logic Apps](../../logic-apps/logic-apps-overview.md), [Microsoft Power Automate](/power-automate/getting-started), and [Microsoft Power Apps](/powerapps/powerapps-overview). The connector supports workflow actions and triggers to extract and analyze document data and structure from custom and prebuilt forms, invoices, receipts, business cards and ID documents.
-
-### Form Recognizer SDK v3.1.0 patched to v3.1.1 for C#, Java, and Python
-
-The patch addresses invoices that don't have subline item fields detected such as a `FormField` with `Text` but no `BoundingBox` or `Page` information.
### [**C#**](#tab/csharp)

| [Reference documentation](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet&preserve-view=true) | [NuGet package version 3.1.1](https://www.nuget.org/packages/Azure.AI.FormRecognizer) |
The patch addresses invoices that don't have subline item fields detected such a
-## May 2021
-
-### Form Recognizer 2.1 API Generally Available release
-
-* Form Recognizer 2.1 is generally available. The General Availability release marks the stability of the changes introduced in prior 2.1 preview package versions. This release enables you to detect and extract information and data from the following document types:
+* Form Recognizer containers v2.1 released in gated preview with six feature containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom**. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and receive approval.
-* [Documents](concept-layout.md)
-* [Receipts](./concept-receipt.md)
-* [Business cards](./concept-business-card.md)
-* [Invoices](./concept-invoice.md)
-* [Identity documents](./concept-id-document.md)
-* [Custom forms](concept-custom.md)
+ * *See* [**Install and run Docker containers for Form Recognizer**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout) and [**Configure Form Recognizer containers**](containers/form-recognizer-container-configuration.md?branch=main)
-#### Get started
+* Form Recognizer connector released in preview: The [**Form Recognizer connector**](/connectors/formrecognizer) integrates with [Azure Logic Apps](../../logic-apps/logic-apps-overview.md), [Microsoft Power Automate](/power-automate/getting-started), and [Microsoft Power Apps](/powerapps/powerapps-overview). The connector supports workflow actions and triggers to extract and analyze document data and structure from custom and prebuilt forms, invoices, receipts, business cards and ID documents.
-Go to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/) and follow the [quickstart](./quickstarts/try-sample-label-tool.md)
+* Form Recognizer SDK v3.1.0 patched to v3.1.1 for C#, Java, and Python. The patch addresses invoices that don't have subline item fields detected such as a `FormField` with `Text` but no `BoundingBox` or `Page` information.
-### Layout adds table headers
-
-The updated Layout API table feature adds header recognition with column headers that can span multiple rows. Each table cell has an attribute that indicates whether it's part of a header or not. This update can be used to identify which rows make up the table header.
+
-#### SDK updates
+## May 2021
### [**C#**](#tab/csharp)
-| [Reference documentation](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet&preserve-view=true) | [NuGet package version 3.0.1](https://www.nuget.org/packages/Azure.AI.FormRecognizer) |
-
-#### **Non-breaking changes**
-
-* **FormRecognizerModelFactory** class now supports updates to **TextAppearance** and **ReadingOrder** and removal of **TextStyle** models. See [Breaking changes](#breaking-changes-may)
-
-#### **Breaking changes (May)**
+* **Version 3.1.0 (2021-05-26)**
-* Client defaults to the latest supported service version, currently v2.1. You can specify version 2.0 in the **FormRecognizerClientOptions** object's **Version** property.
-
-* **StartRecognizeIdentityDocuments**. Renamed methods and method parameters using **Identity** to replace _ID_ keyword for all related identity documents recognition API functionalities.
-
-* **FormReadingOrder**. *ReadingOrder* renamed to **FormReadingOrder**.
-
-* **AsCountryRegion**. *AsCountryCode* renamed to **AsCountryRegion**.
-
-* **TextAppearance** now includes **StyleName** and **StyleConfidence** properties (formerly part of the **TextStyle** object).
-
-* **FieldValueType**. Value **Gender** removed from the model.
-
-* **TextStyle** model removed.
-
-* **FieldValueGender** type removed.
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#310-2021-05-26)| [Reference documentation](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet&preserve-view=true) | [NuGet package version 3.1.0](https://www.nuget.org/packages/Azure.AI.FormRecognizer) |
### [**Java**](#tab/java)
- | [Reference documentation](/java/api/com.azure.ai.formrecognizer.models?view=azure-java-stable&preserve-view=true)| [Maven artifact package dependency version 3.1.0](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer) |
-
-#### **Non-breaking changes**
-
-* **FormRecognizerClientBuilder** and **FormTrainingClientBuilder** . Added **clientOptions** and **getDefaultLogOptions** methods.
-
-* **FormRecognizerLanguage**. Added more language fields.
-
-#### **Breaking changes (May)**
-
-* Client defaults to the latest supported service version, currently v2.1. You can specify version 2.0 in the **FormRecognizerClientBuilder** object's **serviceVersion** method.
+* **Version 3.1.0 (2021-05-26)**
-* Removed v2.1-preview.1 and v2.1-preview.2 support.
-
-* **beginRecognizeIdentityDocuments**. Renamed methods and method parameters using **Identity** to replace `Id` keyword for all related identity documents recognition API functionalities.
-
-* **FormReadingOrder**. *ReadingOrder* renamed to **FormReadingOrder**, and refactor the class to be expandable string class.
-
-* **asCountryRegion**. *asCountry* renamed to **asCountryRegion** method.
-
-* **FieldValueType**. Field value *COUNTRY* renamed to **COUNTRY_REGION**.
-
-* **TextAppearance** class now includes **styleName** and **styleConfidence** properties (formerly part of the **TextStyle** object).
-
-* **FieldValueType**. Value *Gender* removed from the model.
-
-* **TextStyle** model removed.
-
-* **FieldValueGender** class type removed.
-
-* **pollInterval**. Removed the pollInterval methods from **RecognizeBusinessCardsOptions**, **RecognizeContentOptions**, **RecognizeCustomFormsOptions**, **RecognizeIdentityDocumentOptions**, **RecognizeInvoicesOptions**, and **RecognizeReceiptsOptions** classes. Polling interval can be updated using the Azure Core [**SyncPoller setPollInterval**](/java/api/com.azure.core.util.polling.syncpoller.setpollinterval?view=azure-java-stable&preserve-view=true) or [**PollerFlux setPollInterval**](/java/api/com.azure.core.util.polling.pollerflux.setpollinterval?view=azure-java-stable&preserve-view=true) methods synchronously or asynchronously, respectively.
-
-* **FormLine**, **FormPage**, **FormTable**, **FormSelectionMark**, **TextAppearance**, **CustomFormModel**, **CustomFormModelInfo**, **CustomFormModelProperties**, **CustomFormSubmodel**, and **TrainingDocumentInfo** are now immutable model classes.
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#310-2021-05-26) | [Reference documentation](/java/api/com.azure.ai.formrecognizer.models?view=azure-java-stable&preserve-view=true)| [Maven artifact package dependency version 3.1.0](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer) |
### [**JavaScript**](#tab/javascript)
-| [Reference documentation](/javascript/api/@azure/cognitiveservices-formrecognizer/formrecognizerclient?view=azure-node-latest&preserve-view=true)| [npm package dependency form-recognizer 3.1.0](https://www.npmjs.com/package/@azure/ai-form-recognizer) |
-
-#### **Non-breaking changes**
-
-* All REST API calls are migrated to the v2.1 endpoint.
+* **Version 3.1.0 (2021-05-26)**
-* **KnownFormLocale** enum added to access possible values of form locales.
-
-* **beginRecognizeIdDocuments...**. Renamed methods and method parameters using **Identity** to replace `Id` keyword for all related identity documents recognition API functionalities.
-
-* **FormReadingOrder** and **FormLanguage**. *ReadingOrder* renamed to *FormReadingOrder*. *Language* renamed to **FormLanguage**.
-
-* **FormCountryRegionField** and **countryRegion**. *FormCountryField* type renamed to **FormCountryRegionField**, and renamed the valueType *country* to **countryRegion**.
-
-* **TextAppearance** interface now includes **styleName** and **styleConfidence** properties (formerly name and confidence properties in the **TextStyle** interface).
-
-* **KnownStyleName**, **KnownSelectionMarkState**, and **KnownKeyValueType** enums removed.
-
-* **FormGenderField** type removed. Any recognized value that was previously produced as a _FormGenderField_ will now be returned as a FormStringField type and the value will remain the same.
-
-* **TextStyle** type removed.
-
-#### **Breaking changes (May)**
-
-**No breaking changes**
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/@azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#310-2021-05-26)| [Reference documentation](/javascript/api/@azure/cognitiveservices-formrecognizer/formrecognizerclient?view=azure-node-latest&preserve-view=true)| [npm package dependency form-recognizer 3.1.0](https://www.npmjs.com/package/@azure/ai-form-recognizer) |
### [**Python**](#tab/python)
-| [Reference documentation](/java/api/com.azure.ai.formrecognizer.models?view=azure-java-stable&preserve-view=true)| [PyPi azure-ai-formrecognizer 3.1.0](https://pypi.org/project/azure-ai-formrecognizer/) |
+* **Version 3.1.0 (2021-05-26)**
-#### **Non-breaking changes**
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#310-2021-05-26)| [Reference documentation](/java/api/com.azure.ai.formrecognizer.models?view=azure-java-stable&preserve-view=true)| [PyPi azure-ai-formrecognizer 3.1.0](https://pypi.org/project/azure-ai-formrecognizer/) |
-* **to_dict** and **from_dict** methods added to all of the models.
-
-#### **Breaking changes (May)**
-
-* **begin_recognize_identity_documents** and **begin_recognize_identity_documents_from_url**. Renamed methods and method parameters using **Identity** to replace _ID_ keyword.
+
-* **FieldValueType**. Renamed value type *country* to **countryRegion**. Removed value type *gender*.
+* Form Recognizer 2.1 is generally available. The GA release marks the stability of the changes introduced in prior 2.1 preview package versions. This release enables you to detect and extract information and data from the following document types:
+ > [!div class="checklist"]
+ >
+ > * [Documents](concept-layout.md)
+ > * [Receipts](./concept-receipt.md)
+ > * [Business cards](./concept-business-card.md)
+ > * [Invoices](./concept-invoice.md)
+ > * [Identity documents](./concept-id-document.md)
+ > * [Custom forms](concept-custom.md)
-* **TextAppearance**model now includes **style_name** and **style_confidence** properties (formerly name and confidence properties in the **TextStyle** object).
+* To get started, try the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/) and follow the [quickstart](./quickstarts/try-sample-label-tool.md).
-* **TextStyle** model removed.
+* The updated Layout API table feature adds header recognition with column headers that can span multiple rows. Each table cell has an attribute that indicates whether it's part of a header or not. This update can be used to identify which rows make up the table header.
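As an illustration of the header attribute described above, here's a brief sketch (not from the article) using the 3.1.x `azure-ai-formrecognizer` Python package, where each `FormTableCell` exposes an `is_header` flag; the endpoint, key, and file name are placeholders.

```python
# A minimal sketch, assuming azure-ai-formrecognizer 3.1.x; collects the rows that the
# Layout API marked as table headers via the FormTableCell.is_header attribute.
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient("https://<resource-name>.cognitiveservices.azure.com/",
                              AzureKeyCredential("<api-key>"))

with open("sample-form.pdf", "rb") as f:
    pages = client.begin_recognize_content(f).result()

for page in pages:
    for table in page.tables:
        header_rows = {cell.row_index for cell in table.cells if cell.is_header}
        print(f"Page {page.page_number}: header rows {sorted(header_rows)}")
```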
## April 2021 <!-- markdownlint-disable MD029 -->
-### SDK preview updates for API version 2.1-preview.3
- ### [**C#**](#tab/csharp)
-NuGet package version 3.1.0-beta.4
+* **NuGet package version 3.1.0-beta.4**
+
+* [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#310-beta4-2021-04-06)
* **New methods to analyze data from identity documents**:
NuGet package version 3.1.0-beta.4
 The `ReadingOrder` property is an optional parameter that allows you to specify which reading order algorithm, `basic` or `natural`, should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
-#### Breaking changes (April)
-
-* The client defaults to the latest supported service version, which is currently **2.1-preview.3**.
-
-* **[StartRecognizeCustomForms](/dotnet/api/azure.ai.formrecognizer.formrecognizerclient.startrecognizecustomforms?view=azure-dotnet-preview&preserve-view=true)** method now throws a `RequestFailedException()` when an invalid file is passed.
- ### [**Java**](#tab/java)
-Maven artifact package dependency version 3.1.0-beta.3
+**Maven artifact package dependency version 3.1.0-beta.3**
* **New methods to analyze data from identity documents**:
Maven artifact package dependency version 3.1.0-beta.3
For a list of field values, _see_ [Fields extracted](./concept-id-document.md) in our Form Recognizer documentation.
-* **Bitmap Image file (.bmp) support for custom forms and training methods in the `FormContentType` enum**:
-
-* `image/bmp`
+* **Bitmap Image file (.bmp) support for custom forms and training methods in the [FormContentType](/java/api/com.azure.ai.formrecognizer.models.formcontenttype?view=azure-java-preview&preserve-view=true#fields) fields**:
-* **New property `Pages` supported by the following classes**:
+ * `image/bmp`
+
+ * **New property `Pages` supported by the following classes**:
**[RecognizeBusinessCardsOptions](/java/api/com.azure.ai.formrecognizer.models.recognizebusinesscardsoptions?view=azure-java-preview&preserve-view=true)**</br> **[RecognizeCustomFormOptions](/java/api/com.azure.ai.formrecognizer.models.recognizecustomformsoptions?view=azure-java-preview&preserve-view=true)**</br> **[RecognizeInvoicesOptions](/java/api/com.azure.ai.formrecognizer.models.recognizeinvoicesoptions?view=azure-java-preview&preserve-view=true)**</br> **[RecognizeReceiptsOptions](/java/api/com.azure.ai.formrecognizer.models.recognizereceiptsoptions?view=azure-java-preview&preserve-view=true)**</br>
- The `Pages` property allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`.
-
-* **Bitmap Image file (.bmp) support for custom forms and training methods in the [FormContentType](/java/api/com.azure.ai.formrecognizer.models.formcontenttype?view=azure-java-preview&preserve-view=true#fields) fields**:
-
- `image/bmp`
+ * The `Pages` property allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`.
* **New keyword argument `ReadingOrder` supported for the following methods**:
-* **[beginRecognizeContent](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizecontent?preserve-view=true&view=azure-java-preview)**</br>
-**[beginRecognizeContentFromUrl](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizecontentfromurl?view=azure-java-preview&preserve-view=true)**</br>
-
- The `ReadingOrder` keyword argument is an optional parameter that allows you to specify which reading order algorithmΓÇö`basic` or `natural`ΓÇöshould be applied to order the extraction of text elements. If not specified, the default value is `basic`.
+ * **[beginRecognizeContent](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizecontent?preserve-view=true&view=azure-java-preview)**</br>
+ * **[beginRecognizeContentFromUrl](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizecontentfromurl?view=azure-java-preview&preserve-view=true)**</br>
+    * The `ReadingOrder` keyword argument is an optional parameter that allows you to specify which reading order algorithm, `basic` or `natural`, should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
* The client defaults to the latest supported service version, which currently is **2.1-preview.3**.

### [**JavaScript**](#tab/javascript)
-npm package version 3.1.0-beta.3
+**npm package version 3.1.0-beta.3**
* **New methods to analyze data from identity documents**:
npm package version 3.1.0-beta.3
### [**Python**](#tab/python)
-pip package version 3.1.0b4
+**pip package version 3.1.0b4**
* **New methods to analyze data from identity documents**:
pip package version 3.1.0b4
+* **SDK preview updates for API version 2.1-preview.3 introduce feature updates and enhancements.**
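To illustrate the identity document methods called out above, here's a short sketch (not from the article) against the beta Python package; the endpoint, key, and file name are placeholders.

```python
# A minimal sketch, assuming azure-ai-formrecognizer 3.1.0b4+ and its
# begin_recognize_identity_documents method; placeholders for endpoint, key, and file.
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient("https://<resource-name>.cognitiveservices.azure.com/",
                              AzureKeyCredential("<api-key>"))

with open("drivers-license.jpg", "rb") as f:
    poller = client.begin_recognize_identity_documents(f)

for id_document in poller.result():
    # Each recognized field carries a value and a confidence score.
    for name, field in id_document.fields.items():
        print(f"{name}: {field.value} (confidence {field.confidence})")
```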
## March 2021
-**Form Recognizer v2.1 public preview 3 is now available.** v2.1-preview.3 has been released, including the following features:
+ **Form Recognizer v2.1 public preview v2.1-preview.3 has been released and includes the following features:**
* **New prebuilt ID model** The new prebuilt ID model enables customers to take IDs and return structured data to automate processing. It combines our powerful Optical Character Recognition (OCR) capabilities with ID understanding models to extract key information from passports and U.S. driver licenses. [Learn more about the prebuilt ID model](./concept-id-document.md)
- :::image type="content" source="./media/id-canada-passport-example.png" alt-text="passport example" lightbox="./media/id-canada-passport-example.png":::
+ :::image type="content" source="./media/id-canada-passport-example.png" alt-text="Screenshot of a sample passport." lightbox="./media/id-canada-passport-example.png":::
* **Line-item extraction for invoice model** - Prebuilt Invoice model now supports line item extraction; it now extracts full items and their parts - description, amount, quantity, product ID, date and more. With a simple API/SDK call, you can extract useful data from your invoices - text, table, key-value pairs, and line items.
pip package version 3.1.0b4
* **Supervised table labeling and training, empty-value labeling** - In addition to Form Recognizer's [state-of-the-art deep learning automatic table extraction capabilities](https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011), it now enables customers to label and train on tables. This new release includes the ability to label and train on line items/tables (dynamic and fixed) and train a custom model to extract key-value pairs and line items. Once a model is trained, the model will extract line items as part of the JSON output in the documentResults section.
- :::image type="content" source="./media/table-labeling.png" alt-text="Table labeling" lightbox="./media/table-labeling.png":::
+ :::image type="content" source="./media/table-labeling.png" alt-text="Screenshot of the table labeling feature." lightbox="./media/table-labeling.png":::
In addition to labeling tables, you can now label empty values and regions. If some documents in your training set don't have values for certain fields, you can label them so that your model will know to extract values properly from analyzed documents.
pip package version 3.1.0b4
* **Prebuilt receipt model quality improvements** This update includes many quality improvements for the prebuilt Receipt model, especially around line item extraction.

## November 2020
-### New features
+* **Form Recognizer v2.1-preview.2 has been released and includes the following features:**
-**Form Recognizer v2.1 public preview 2 is now available.** v2.1-preview.2 has been released, including the following features:
+ * **New prebuilt invoice model** - The new prebuilt Invoice model enables customers to take invoices in various formats and return structured data to automate the invoice processing. It combines our powerful Optical Character Recognition (OCR) capabilities with invoice understanding deep learning models to extract key information from invoices in English. It extracts key text, tables, and information such as customer, vendor, invoice ID, invoice due date, total, amount due, tax amount, ship to, and bill to.
-* **New prebuilt invoice model** - The new prebuilt Invoice model enables customers to take invoices in various formats and return structured data to automate the invoice processing. It combines our powerful Optical Character Recognition (OCR) capabilities with invoice understanding deep learning models to extract key information from invoices in English. It extracts key text, tables, and information such as customer, vendor, invoice ID, invoice due date, total, amount due, tax amount, ship to, and bill to.
+ > [Learn more about the prebuilt invoice model](./concept-invoice.md)
- > [Learn more about the prebuilt invoice model](./concept-invoice.md)
+ :::image type="content" source="./media/invoice-example.jpg" alt-text="Screenshot of a sample invoice." lightbox="./media/invoice-example.jpg":::
- :::image type="content" source="./media/invoice-example.jpg" alt-text="invoice example" lightbox="./media/invoice-example.jpg":::
+ * **Enhanced table extraction** - Form Recognizer now provides enhanced table extraction, which combines our powerful Optical Character Recognition (OCR) capabilities with a deep learning table extraction model. Form Recognizer can extract data from tables, including complex tables with merged columns, rows, no borders and more.
-* **Enhanced table extraction** - Form Recognizer now provides enhanced table extraction, which combines our powerful Optical Character Recognition (OCR) capabilities with a deep learning table extraction model. Form Recognizer can extract data from tables, including complex tables with merged columns, rows, no borders and more.
+ :::image type="content" source="./media/tables-example.jpg" alt-text="Screenshot of tables analysis." lightbox="./media/tables-example.jpg":::
- :::image type="content" source="./media/tables-example.jpg" alt-text="tables example" lightbox="./media/tables-example.jpg":::
+ > [Learn more about Layout extraction](concept-layout.md)
- > [Learn more about Layout extraction](concept-layout.md)
+ * **Client library update** - The latest versions of the [client libraries](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) for .NET, Python, Java, and JavaScript support the Form Recognizer 2.1 API.
+   * **New language supported: Japanese** - Japanese (`ja`) is now supported for `AnalyzeLayout` and `AnalyzeCustomForm`. [Language support](language-support.md)
+ * **Text line style indication (handwritten/other) (Latin languages only)** - Form Recognizer now outputs an `appearance` object classifying whether each text line is handwritten style or not, along with a confidence score. This feature is supported only for Latin languages.
+ * **Quality improvements** - Extraction improvements including single digit extraction improvements.
+ * **New try-it-out feature in the Form Recognizer Sample and Labeling Tool** - Ability to try out prebuilt Invoice, Receipt, and Business Card models and the Layout API using the Form Recognizer Sample Labeling tool. See how your data will be extracted without writing any code.
-* **Client library update** - The latest versions of the [client libraries](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) for .NET, Python, Java, and JavaScript support the Form Recognizer 2.1 API.
-* **New language supported: Japanese** - The following new languages are now supported: for `AnalyzeLayout` and `AnalyzeCustomForm`: Japanese (`ja`). [Language support](language-support.md)
-* **Text line style indication (handwritten/other) (Latin languages only)** - Form Recognizer now outputs an `appearance` object classifying whether each text line is handwritten style or not, along with a confidence score. This feature is supported only for Latin languages.
-* **Quality improvements** - Extraction improvements including single digit extraction improvements.
-* **New try-it-out feature in the Form Recognizer Sample and Labeling Tool** - Ability to try out prebuilt Invoice, Receipt, and Business Card models and the Layout API using the Form Recognizer Sample Labeling tool. See how your data will be extracted without writing any code.
+ * [**Try the Form Recognizer Sample Labeling tool**](https://fott-2-1.azurewebsites.net)
- [**Try the Form Recognizer Sample Labeling tool**](https://fott-2-1.azurewebsites.net)
+ :::image type="content" source="media/ui-preview.jpg" alt-text="Screenshot of the Sample Labeling tool homepage.":::
- ![Screenshot: Sample Labeling tool.](./media/ui-preview.jpg)
+ * **Feedback Loop** - When Analyzing files via the Sample Labeling tool you can now also add it to the training set and adjust the labels if necessary and train to improve the model.
+ * **Auto Label Documents** - Automatically labels added documents based on previous labeled documents in the project.
-* **Feedback Loop** - When Analyzing files via the Sample Labeling tool you can now also add it to the training set and adjust the labels if necessary and train to improve the model.
-* **Auto Label Documents** - Automatically labels added documents based on previous labeled documents in the project.
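For context on the `appearance` output mentioned in the list above, the following sketch (not from the article) filters handwritten lines with the GA 3.1.x Python package, where each `FormLine` exposes `appearance.style_name` and `appearance.style_confidence`; the endpoint, key, and file name are placeholders.

```python
# A minimal sketch, assuming azure-ai-formrecognizer 3.1.x and Latin-language input;
# prints the lines that the service classified as handwriting.
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient("https://<resource-name>.cognitiveservices.azure.com/",
                              AzureKeyCredential("<api-key>"))

with open("sample-form.pdf", "rb") as f:
    pages = client.begin_recognize_content(f).result()

for page in pages:
    for line in page.lines:
        if line.appearance and line.appearance.style_name == "handwriting":
            print(f"Handwritten (confidence {line.appearance.style_confidence}): {line.text}")
```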
+ ## August 2020
-### New features
+* **Form Recognizer v2.1-preview.1 has been released and includes the following features:**
-**Form Recognizer v2.1 public preview is now available.** V2.1-preview.1 has been released, including the following features:
+ * **REST API reference is available** - View the [`v2.1-preview.1 reference`](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-1/operations/AnalyzeBusinessCardAsync)
+   * **New languages supported** - In addition to English, the following [languages](language-support.md) are now supported for `Layout` and `Train Custom Model`: English (`en`), Chinese (Simplified) (`zh-Hans`), Dutch (`nl`), French (`fr`), German (`de`), Italian (`it`), Portuguese (`pt`) and Spanish (`es`).
+   * **Checkbox / Selection Mark detection** - Form Recognizer supports detection and extraction of selection marks such as check boxes and radio buttons. Selection Marks are extracted in `Layout` and you can now also label and train in `Train Custom Model` - _Train with Labels_ to extract key-value pairs for selection marks.
+ * **Model Compose** - allows multiple models to be composed and called with a single model ID. When you submit a document to be analyzed with a composed model ID, a classification step is first performed to route it to the correct custom model. Model Compose is available for `Train Custom Model` - _Train with labels_.
+ * **Model name** - add a friendly name to your custom models for easier management and tracking.
+   * **[New prebuilt model for Business Cards](./concept-business-card.md)** for extracting common fields in English-language business cards.
+   * **[New locales for prebuilt Receipts](./concept-receipt.md)** in addition to EN-US, support is now available for EN-AU, EN-CA, EN-GB, and EN-IN.
+ * **Quality improvements** for `Layout`, `Train Custom Model` - _Train without Labels_ and _Train with Labels_.
-* **REST API reference is available** - View the [v2.1-preview.1 reference](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-1/operations/AnalyzeBusinessCardAsync)
-* **New languages supported In addition to English**, the following [languages](language-support.md) are now supported: for `Layout` and `Train Custom Model`: English (`en`), Chinese (Simplified) (`zh-Hans`), Dutch (`nl`), French (`fr`), German (`de`), Italian (`it`), Portuguese (`pt`) and Spanish (`es`).
-* **Checkbox / Selection Mark detection** ΓÇô Form Recognizer supports detection and extraction of selection marks such as check boxes and radio buttons. Selection Marks are extracted in `Layout` and you can now also label and train in `Train Custom Model` - _Train with Labels_ to extract key-value pairs for selection marks.
-* **Model Compose** - allows multiple models to be composed and called with a single model ID. When you submit a document to be analyzed with a composed model ID, a classification step is first performed to route it to the correct custom model. Model Compose is available for `Train Custom Model` - _Train with labels_.
-* **Model name** - add a friendly name to your custom models for easier management and tracking.
-* **[New prebuilt model for Business Cards](./concept-business-card.md)** for extracting common fields in English, language business cards.
-* **[New locales for prebuilt Receipts](./concept-receipt.md)** in addition to EN-US, support is now available for EN-AU, EN-CA, EN-GB, EN-IN
-* **Quality improvements** for `Layout`, `Train Custom Model` - _Train without Labels_ and _Train with Labels_.
+* **v2.0** includes the following update:
-**v2.0** includes the following update:
+   * The [client libraries](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) for .NET, Python, Java, and JavaScript have entered General Availability.
-* The [client libraries](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) for NET, Python, Java, and JavaScript have entered General Availability.
+ **New samples** are available on GitHub.
-**New samples** are available on GitHub.
+ * The [Knowledge Extraction Recipes - Forms Playbook](https://github.com/microsoft/knowledge-extraction-recipes-forms) collects best practices from real Form Recognizer customer engagements and provides usable code samples, checklists, and sample pipelines used in developing these projects.
+ * The [Sample Labeling tool](https://github.com/microsoft/OCR-Form-Tools) has been updated to support the new v2.1 functionality. See this [quickstart](label-tool.md) for getting started with the tool.
+ * The [Intelligent Kiosk](https://github.com/microsoft/Cognitive-Samples-IntelligentKiosk/blob/master/Documentation/FormRecognizer.md) Form Recognizer sample shows how to integrate `Analyze Receipt` and `Train Custom Model` - _Train without Labels_.
-* The [Knowledge Extraction Recipes - Forms Playbook](https://github.com/microsoft/knowledge-extraction-recipes-forms) collects best practices from real Form Recognizer customer engagements and provides usable code samples, checklists, and sample pipelines used in developing these projects.
-* The [Sample Labeling tool](https://github.com/microsoft/OCR-Form-Tools) has been updated to support the new v2.1 functionality. See this [quickstart](label-tool.md) for getting started with the tool.
-* The [Intelligent Kiosk](https://github.com/microsoft/Cognitive-Samples-IntelligentKiosk/blob/master/Documentation/FormRecognizer.md) Form Recognizer sample shows how to integrate `Analyze Receipt` and `Train Custom Model` - _Train without Labels_.
+ ## July 2020
-### New features
<!-- markdownlint-disable MD004 -->
-* **v2.0 reference available** - View the [v2.0 API Reference](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm) and the updated SDKs for [.NET](/dotnet/api/overview/azure/ai.formrecognizer-readme), [Python](/python/api/overview/azure/), [Java](/java/api/overview/azure/ai-formrecognizer-readme), and [JavaScript](/javascript/api/overview/azure/).
-* **Table enhancements and Extraction enhancements** - includes accuracy improvements and table extractions enhancements, specifically, the capability to learn tables headers and structures in _custom train without labels_.
+* **Form Recognizer v2.0 reference available** - View the [v2.0 API Reference](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm) and the updated SDKs for [.NET](/dotnet/api/overview/azure/ai.formrecognizer-readme), [Python](/python/api/overview/azure/), [Java](/java/api/overview/azure/ai-formrecognizer-readme), and [JavaScript](/javascript/api/overview/azure/).
+ * **Table enhancements and Extraction enhancements** - includes accuracy improvements and table extraction enhancements, specifically, the capability to learn table headers and structures in _custom train without labels_.
-* **Currency support** - Detection and extraction of global currency symbols.
-* **Azure Gov** - Form Recognizer is now also available in Azure Gov.
-* **Enhanced security features**:
- * **Bring your own key** - Form Recognizer automatically encrypts your data when persisted to the cloud to protect it and to help you to meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. You can now also manage your subscription with your own encryption keys. [Customer-managed keys, also known as bring your own key (BYOK)](./encrypt-data-at-rest.md), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
- * **Private endpoints** ΓÇô Enables you on a virtual network to [securely access data over a Private Link.](../../private-link/private-link-overview.md)
+ * **Currency support** - Detection and extraction of global currency symbols.
+ * **Azure Gov** - Form Recognizer is now also available in Azure Gov.
+ * **Enhanced security features**:
+ * **Bring your own key** - Form Recognizer automatically encrypts your data when persisted to the cloud to protect it and to help you to meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. You can now also manage your subscription with your own encryption keys. [Customer-managed keys, also known as bring your own key (BYOK)](./encrypt-data-at-rest.md), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+    * **Private endpoints** - Enables you on a virtual network to [securely access data over a Private Link.](../../private-link/private-link-overview.md)
-## June 2020
+
-### New features
+## June 2020
* **CopyModel API added to client SDKs** - You can now use the client SDKs to copy models from one subscription to another. See [Back up and recover models](./disaster-recovery.md) for general information on this feature.
* **Azure Active Directory integration** - You can now use your Azure AD credentials to authenticate your Form Recognizer client objects in the SDKs.
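As a quick illustration of the Azure AD integration mentioned above, here's a minimal sketch (not from the article) using `DefaultAzureCredential` from the `azure-identity` package; the endpoint is a placeholder.

```python
# A minimal sketch, assuming the azure-ai-formrecognizer and azure-identity packages;
# DefaultAzureCredential resolves environment, managed identity, or developer credentials.
from azure.ai.formrecognizer import FormRecognizerClient
from azure.identity import DefaultAzureCredential

client = FormRecognizerClient("https://<resource-name>.cognitiveservices.azure.com/",
                              DefaultAzureCredential())
```

Key-based authentication with `AzureKeyCredential` continues to work as before.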
pip package version 3.1.0b4
* [Java SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-jav)
* [JavaScript SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_1.0.0-preview.3/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
-## April 2020
+
-### New features
+## April 2020
* **SDK support for Form Recognizer API v2.0 Public Preview** - This month we expanded our service support to include a preview SDK for Form Recognizer v2.0 release. Use the links below to get started with your language of choice:
- * [.NET SDK](/dotnet/api/overview/azure/ai.formrecognizer-readme)
- * [Java SDK](/java/api/overview/azure/ai-formrecognizer-readme)
- * [Python SDK](/python/api/overview/azure/ai-formrecognizer-readme)
- * [JavaScript SDK](/javascript/api/overview/azure/ai-form-recognizer-readme)
+* [.NET SDK](/dotnet/api/overview/azure/ai.formrecognizer-readme)
+* [Java SDK](/java/api/overview/azure/ai-formrecognizer-readme)
+* [Python SDK](/python/api/overview/azure/ai-formrecognizer-readme)
+* [JavaScript SDK](/javascript/api/overview/azure/ai-form-recognizer-readme)
- The new SDK supports all the features of the v2.0 REST API for Form Recognizer. You can share your feedback on the SDKs through the [SDK Feedback form](https://aka.ms/FR_SDK_v1_feedback).
+The new SDK supports all the features of the v2.0 REST API for Form Recognizer. You can share your feedback on the SDKs through the [SDK Feedback form](https://aka.ms/FR_SDK_v1_feedback).
* **Copy Custom Model** You can now copy models between regions and subscriptions using the new Copy Custom Model feature. Before invoking the Copy Custom Model API, you must first obtain authorization to copy into the target resource. This authorization is secured by calling the Copy Authorization operation against the target resource endpoint.
- * [Generate a copy authorization](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModelAuthorization) REST API
- * [Copy a custom model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModel) REST API
+* [Generate a copy authorization](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModelAuthorization) REST API
+* [Copy a custom model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModel) REST API
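The two REST calls above chain together roughly as follows; this sketch (not from the article) uses the `requests` library, and the endpoints, keys, model ID, and request/response field names are assumptions based on the linked v2.0 reference.

```python
# A rough sketch of the copy flow, assuming the v2.0 REST shapes linked above;
# all endpoints, keys, and IDs are placeholders.
import requests

source_endpoint = "https://<source-resource>.cognitiveservices.azure.com"
target_endpoint = "https://<target-resource>.cognitiveservices.azure.com"
source_key, target_key = "<source-key>", "<target-key>"
model_id = "<model-id>"

# Step 1: ask the target resource for a copy authorization.
auth = requests.post(
    f"{target_endpoint}/formrecognizer/v2.0/custom/models/copyAuthorization",
    headers={"Ocp-Apim-Subscription-Key": target_key},
).json()

# Step 2: start the copy on the source resource, passing the authorization along.
copy = requests.post(
    f"{source_endpoint}/formrecognizer/v2.0/custom/models/{model_id}/copy",
    headers={"Ocp-Apim-Subscription-Key": source_key},
    json={
        "targetResourceId": "<target-resource-arm-id>",
        "targetResourceRegion": "<target-region>",
        "copyAuthorization": auth,
    },
)

# The copy runs asynchronously; poll the Operation-Location header for status.
print(copy.status_code, copy.headers.get("Operation-Location"))
```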
-### Security improvements
+* Security improvements
* Customer-Managed Keys are now available for Form Recognizer. For more information, see [Data encryption at rest for Form Recognizer](./encrypt-data-at-rest.md).
* Use Managed Identities for access to Azure resources with Azure Active Directory. For more information, see [Authorize access to managed identities](../../cognitive-services/authentication.md#authorize-access-to-managed-identities).
-## March 2020
+
-### New features
+## March 2020
* **Value types for labeling** You can now specify the types of values you're labeling with the Form Recognizer Sample Labeling tool. The following value types and variations are currently supported:
- * `string`
- * default, `no-whitespaces`, `alphanumeric`
- * `number`
- * default, `currency`
- * `date`
- * default, `dmy`, `mdy`, `ymd`
- * `time`
- * `integer`
+* `string`
+ * default, `no-whitespaces`, `alphanumeric`
+* `number`
+ * default, `currency`
+* `date`
+ * default, `dmy`, `mdy`, `ymd`
+* `time`
+* `integer`
- See the [Sample Labeling tool](label-tool.md#specify-tag-value-types) guide to learn how to use this feature.
+See the [Sample Labeling tool](label-tool.md#specify-tag-value-types) guide to learn how to use this feature.
* **Table visualization** The Sample Labeling tool now displays tables that were recognized in the document. This feature lets you view recognized and extracted tables from the document prior to labeling and analyzing. This feature can be toggled on/off using the layers option.
- The following image is an example of how tables are recognized and extracted:
+* The following image is an example of how tables are recognized and extracted:
- > [!div class="mx-imgBorder"]
- > ![Table visualization using the Sample Labeling tool](./media/whats-new/table-viz.png)
+ :::image type="content" source="media/whats-new/table-viz.png" alt-text="Screenshot of table visualization using the Sample Labeling tool.":::
- The extracted tables are available in the JSON output under `"pageResults"`.
+* The extracted tables are available in the JSON output under `"pageResults"`.
> [!IMPORTANT]
- > Labeling tables isn't supported. If tables are not recognized and extrated automatically, you can only label them as key/value pairs. When labeling tables as key/value pairs, label each cell as a unique value.
+ > Labeling tables isn't supported. If tables are not recognized and extracted automatically, you can only label them as key/value pairs. When labeling tables as key/value pairs, label each cell as a unique value.
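For orientation, here's a small sketch (not from the article) of reading those tables out of a saved Analyze Layout JSON response; the file name is a placeholder and the cell field names (`rowIndex`, `columnIndex`, `text`) are assumptions based on the v2.0 response shape.

```python
# A minimal sketch: walk the "pageResults" section of a saved v2.0 analyze response
# and print each extracted table cell.
import json

with open("analyze-result.json") as f:
    result = json.load(f)

for page in result["analyzeResult"]["pageResults"]:
    for table in page.get("tables", []):
        for cell in table["cells"]:
            print(f"page {page['page']} row {cell['rowIndex']} col {cell['columnIndex']}: {cell['text']}")
```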
+
+* Extraction enhancements
-### Extraction enhancements
+* This release includes extraction enhancements and accuracy improvements, specifically, the capability to label and extract multiple key/value pairs in the same line of text.
-This release includes extraction enhancements and accuracy improvements, specifically, the capability to label and extract multiple key/value pairs in the same line of text.
+* Sample Labeling tool is now open-source
-### Sample Labeling tool is now open-source
+* The Form Recognizer Sample Labeling tool is now available as an open-source project. You can integrate it within your solutions and make customer-specific changes to meet your needs.
-The Form Recognizer Sample Labeling tool is now available as an open-source project. You can integrate it within your solutions and make customer-specific changes to meet your needs.
+* For more information about the Form Recognizer Sample Labeling tool, review the documentation available on [GitHub](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md).
-For more information about the Form Recognizer Sample Labeling tool, review the documentation available on [GitHub](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md).
+* TLS 1.2 enforcement
-### TLS 1.2 enforcement
+* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure Cognitive Services security](../../cognitive-services/security-features.md).
-TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure Cognitive Services security](../../cognitive-services/security-features.md).
## January 2020

This release introduces Form Recognizer 2.0. In the sections below, you'll find more information about new features, enhancements, and changes.
-### New features
+* New features
+
+ * **Custom model**
+ * **Train with labels** You can now train a custom model with manually labeled data. This method results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
+ * **Asynchronous API** You can use async API calls to train with and analyze large data sets and files.
+ * **TIFF file support** You can now train with and extract data from TIFF documents.
+ * **Extraction accuracy improvements**
-* **Custom model**
- * **Train with labels** You can now train a custom model with manually labeled data. This method results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
- * **Asynchronous API** You can use async API calls to train with and analyze large data sets and files.
- * **TIFF file support** You can now train with and extract data from TIFF documents.
- * **Extraction accuracy improvements**
+ * **Prebuilt receipt model**
+ * **Tip amounts** You can now extract tip amounts and other handwritten values.
+ * **Line item extraction** You can extract line item values from receipts.
+ * **Confidence values** You can view the model's confidence for each extracted value.
+ * **Extraction accuracy improvements**
-* **Prebuilt receipt model**
- * **Tip amounts** You can now extract tip amounts and other handwritten values.
- * **Line item extraction** You can extract line item values from receipts.
- * **Confidence values** You can view the model's confidence for each extracted value.
- * **Extraction accuracy improvements**
+ * **Layout extraction** You can now use the Layout API to extract text data and table data from your forms.
-* **Layout extraction** You can now use the Layout API to extract text data and table data from your forms.
+* Custom model API changes
-### Custom model API changes
+ All of the APIs for training and using custom models have been renamed, and some synchronous methods are now asynchronous. The following are major changes:
-All of the APIs for training and using custom models have been renamed, and some synchronous methods are now asynchronous. The following are major changes:
+ * The process of training a model is now asynchronous. You initiate training through the **/custom/models** API call. This call returns an operation ID, which you can pass into **custom/models/{modelID}** to return the training results.
+ * Key/value extraction is now initiated by the **/custom/models/{modelID}/analyze** API call. This call returns an operation ID, which you can pass into **custom/models/{modelID}/analyzeResults/{resultID}** to return the extraction results.
+ * Operation IDs for the Train operation are now found in the **Location** header of HTTP responses, not the **Operation-Location** header.
-* The process of training a model is now asynchronous. You initiate training through the **/custom/models** API call. This call returns an operation ID, which you can pass into **custom/models/{modelID}** to return the training results.
-* Key/value extraction is now initiated by the **/custom/models/{modelID}/analyze** API call. This call returns an operation ID, which you can pass into **custom/models/{modelID}/analyzeResults/{resultID}** to return the extraction results.
-* Operation IDs for the Train operation are now found in the **Location** header of HTTP responses, not the **Operation-Location** header.
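To make the new flow concrete, here's a rough sketch (not from the article) of the asynchronous train-and-analyze sequence using the `requests` library; the endpoint, key, SAS URL, file name, and the `modelInfo.status` polling field are assumptions based on the v2.0 reference.

```python
# A rough sketch of the asynchronous custom model flow described above; all values are placeholders.
import time
import requests

endpoint = "https://<resource-name>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<api-key>"}

# Start training; per the notes above, the model location is returned in the Location header.
train = requests.post(f"{endpoint}/formrecognizer/v2.0/custom/models",
                      headers=headers, json={"source": "<training-data-sas-url>"})
model_url = train.headers["Location"]

# Poll the model until training completes.
while requests.get(model_url, headers=headers).json()["modelInfo"]["status"] == "creating":
    time.sleep(5)

# Analyze a document with the trained model, then poll Operation-Location for the result.
with open("sample-form.pdf", "rb") as f:
    analyze = requests.post(f"{model_url}/analyze",
                            headers={**headers, "Content-Type": "application/pdf"}, data=f)
print(requests.get(analyze.headers["Operation-Location"], headers=headers).json())
```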
+* Receipt API changes
-### Receipt API changes
+ * The APIs for reading sales receipts have been renamed.
-The APIs for reading sales receipts have been renamed.
+ * Receipt data extraction is now initiated by the **/prebuilt/receipt/analyze** API call. This call returns an operation ID, which you can pass into **/prebuilt/receipt/analyzeResults/{resultID}** to return the extraction results.
-* Receipt data extraction is now initiated by the **/prebuilt/receipt/analyze** API call. This call returns an operation ID, which you can pass into **/prebuilt/receipt/analyzeResults/{resultID}** to return the extraction results.
+* Output format changes
-### Output format changes
+ * The JSON responses for all API calls have new formats. Some keys and values have been added, removed, or renamed. See the quickstarts for examples of the current JSON formats.
-The JSON responses for all API calls have new formats. Some keys and values have been added, removed, or renamed. See the quickstarts for examples of the current JSON formats.
+ ## Next steps
-Complete a [quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) to get started writing a forms processing app with Form Recognizer in the development language of your choice.
+
+* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+++
+* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
-## See also
-* [What is Form Recognizer?](./overview.md)
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
Title: Run Azure Automation runbooks on a Hybrid Runbook Worker
description: This article describes how to run runbooks on machines in your local datacenter or other cloud provider with the Hybrid Runbook Worker. Previously updated : 11/17/2021 Last updated : 11/18/2022
By default, the Hybrid jobs run under the context of System account. However, to
1. Select **Settings**. 1. Change the value of **Hybrid Worker credentials** from **Default** to **Custom**. 1. Select the credential and click **Save**.
-1. If the following permissions are not assigned for Custom users, jobs might get suspended. Add these permission to the Hybrid Runbook Worker account on the runbook worker machine, instead of adding the account to **Administrators** group because the `Filtered Token` feature of UAC would grant standard user rights to this account when logging-in. For more details, refer to - [Information about UAC on Windows Server](/troubleshoot/windows-server/windows-security/disable-user-account-control#more-information).
-Use your discretion in assigning the elevated permissions corresponding to the following registry keys/folders:
+1. If the following permissions are not assigned for Custom users, jobs might get suspended.
-**Registry path**
--- HKLM\SYSTEM\CurrentControlSet\Services\EventLog (read) </br>-- HKLM\SYSTEM\CurrentControlSet\Services\WinSock2\Parameters (full access) </br>-- HKLM\SOFTWARE\Microsoft\Wbem\CIMOM (full access) </br>-- HKLM\Software\Policies\Microsoft\SystemCertificates\Root (full access) </br>-- HKLM\Software\Microsoft\SystemCertificates (full access) </br>-- HKLM\Software\Microsoft\EnterpriseCertificates (full access) </br>-- HKLM\software\Microsoft\HybridRunbookWorker (full access) </br>-- HKLM\software\Microsoft\HybridRunbookWorkerV2 (full access) </br>-- HKEY_CURRENT_USER\SOFTWARE\Policies\Microsoft\SystemCertificates\Disallowed (full access) </br>-- HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\PnpLockdownFiles (full access) </br>- **Folders** - C:\ProgramData\AzureConnectedMachineAgent\Tokens (read) </br>-- C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\0.1.0.18\HybridWorkerPackage\HybridWorkerAgent (full access)
+- C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\0.1.0.18\HybridWorkerPackage\HybridWorkerAgent (read and execute)
## <a name="runas-script"></a>Install Run As account certificate
azure-app-configuration Quickstart Python Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python-provider.md
Title: Quickstart for using Azure App Configuration with Python apps using the Python provider | Microsoft Docs
-description: In this quickstart, create a Python app with the Azure App Configuration Python provider to centralize storage and management of application settings separate from your code.
+ Title: Quickstart for using Azure App Configuration with Python apps | Microsoft Learn
+description: In this quickstart, create a Python app with the Azure App Configuration to centralize storage and management of application settings separate from your code.
ms.devlang: python - Previously updated : 10/31/2022+ Last updated : 11/17/2022 #Customer intent: As a Python developer, I want to manage all my app settings in one place.
-# Quickstart: Create a Python app with the Azure App Configuration Python provider
+# Quickstart: Create a Python app with Azure App Configuration
In this quickstart, you will use the Python provider for Azure App Configuration to centralize storage and management of application settings using the [Azure App Configuration Python provider client library](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/appconfiguration/azure-appconfiguration-provider).
azure-app-configuration Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python.md
Title: Quickstart for using Azure App Configuration with Python apps using the Azure SDK for Python | Microsoft Docs
-description: In this quickstart, create a Python app with the Azure SDK for Python to centralize storage and management of application settings separate from your code.
+ Title: Using Azure App Configuration in Python apps with the Azure SDK for Python | Microsoft Learn
+description: This document shows examples of how to use the Azure SDK for Python to access your data in Azure App Configuration.
ms.devlang: python-+ Previously updated : 10/21/2022 Last updated : 11/17/2022
-#Customer intent: As a Python developer, I want to manage all my app settings in one place.
+#Customer intent: As a Python developer, I want to use the Azure SDK for Python to access my data in Azure App Configuration.
-# Quickstart: Create a Python app with the Azure SDK for Python
+# Create a Python app with the Azure SDK for Python
-In this quickstart, you will use the Azure SDK for Python to centralize storage and management of application settings using the [Azure App Configuration client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/appconfiguration/azure-appconfiguration).
+This document shows examples of how to use the [Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/appconfiguration/azure-appconfiguration) to access your data in Azure App Configuration.
-To use Azure App Configuration with the Python provider instead of the SDK, go to [Python provider](./quickstart-python-provider.md). The Python provider enables loading configuration settings from an Azure App Configuration store in a managed way.
+>[!TIP]
+> App Configuration offers a Python provider library that is built on top of the Python SDK and is designed to be easier to use with richer features. It enables configuration settings to be used like a dictionary, and offers other features like configuration composition from multiple labels, key name trimming, and automatic resolution of Key Vault references. Go to the [Python quickstart](./quickstart-python-provider.md) to learn more.
## Prerequisites - Azure subscription - [create one for free](https://azure.microsoft.com/free/) - Python 3.6 or later - for information on setting up Python on Windows, see the [Python on Windows documentation](/windows/python/)
+- An Azure App Configuration store
-## Create an App Configuration store
+## Create a key-value
-
-9. Select **Configuration Explorer** > **Create** > **Key-value** to add the following key-value pairs:
+1. In the Azure portal, open your App Configuration store and select **Configuration Explorer** > **Create** > **Key-value** to add the following key-value:
| Key | Value | |-|-|
To use Azure App Configuration with the Python provider instead of the SDK, go t
Leave **Label** and **Content Type** empty for now.
-10. Select **Apply**.
+1. Select **Apply**.
-## Setting up the Python app
+## Set up the Python app
-1. In this tutorial, you'll create a new directory for the project named *app-configuration-quickstart*.
+1. Create a new directory for the project named *app-configuration-example*.
```console
- mkdir app-configuration-quickstart
+ mkdir app-configuration-example
```
-1. Switch to the newly created *app-configuration-quickstart* directory.
+1. Switch to the newly created *app-configuration-example* directory.
```console
- cd app-configuration-quickstart
+ cd app-configuration-example
``` 1. Install the Azure App Configuration client library by using the `pip install` command.
To use Azure App Configuration with the Python provider instead of the SDK, go t
pip install azure-appconfiguration ```
-1. Create a new file called *app-configuration-quickstart.py* in the *app-configuration-quickstart* directory and add the following code:
+1. Create a new file called *app-configuration-example.py* in the *app-configuration-example* directory and add the following code:
```python import os from azure.appconfiguration import AzureAppConfigurationClient, ConfigurationSetting try:
- print("Azure App Configuration - Python Quickstart")
- # Quickstart code goes here
+ print("Azure App Configuration - Python example")
+ # Example code goes here
except Exception as ex: print('Exception:') print(ex) ``` > [!NOTE]
-> The code snippets in this quickstart will help you get started with the App Configuration client library for Python. For your application, you should also consider handling exceptions according to your needs. To learn more about exception handling, please refer to our [Python SDK documentation](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/appconfiguration/azure-appconfiguration).
+> The code snippets in this example will help you get started with the App Configuration client library for Python. For your application, you should also consider handling exceptions according to your needs. To learn more about exception handling, please refer to our [Python SDK documentation](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/appconfiguration/azure-appconfiguration).
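As one hedged example of what that might look like, the sketch below catches the SDK's `ResourceNotFoundError` around a read. It assumes an `app_config_client` created as shown later in this article; adjust the handling to your own needs.

```python
from azure.core.exceptions import ResourceNotFoundError

try:
    retrieved = app_config_client.get_configuration_setting(key="TestApp:Settings:Message")
    print(f"Retrieved configuration setting: {retrieved.key} = {retrieved.value}")
except ResourceNotFoundError:
    # The key doesn't exist yet; fall back to a default, create it, or re-raise as appropriate.
    print("Setting not found; using a default value instead.")
```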
## Configure your App Configuration connection string
To use Azure App Configuration with the Python provider instead of the SDK, go t
## Code samples
-The sample code snippets in this section show you how to perform common operations with the App Configuration client library for Python. Add these code snippets to the `try` block in *app-configuration-quickstart.py* file you created earlier.
+The sample code snippets in this section show you how to perform common operations with the App Configuration client library for Python. Add these code snippets to the `try` block in the *app-configuration-example.py* file you created earlier.
> [!NOTE] > The App Configuration client library refers to a key-value object as `ConfigurationSetting`. Therefore, in this article, the **key-values** in App Configuration store will be referred to as **configuration settings**.
The following code snippet deletes a configuration setting by `key` name.
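The snippet itself lives in the underlying article; a minimal sketch of such a call with the `azure-appconfiguration` client, using the example key added earlier in this walkthrough, might look like this:

```python
# Illustrative sketch; the key name matches the setting added earlier in this walkthrough.
app_config_client.delete_configuration_setting(key="TestApp:Settings:NewSetting")
print("Deleted configuration setting: TestApp:Settings:NewSetting")
```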
## Run the app
-In this quickstart, you created a Python app that uses the Azure App Configuration client library to retrieve a configuration setting created through the Azure portal, add a new setting, retrieve a list of existing settings, lock and unlock a setting, update a setting, and finally delete a setting.
+In this example, you created a Python app that uses the Azure App Configuration client library to retrieve a configuration setting created through the Azure portal, add a new setting, retrieve a list of existing settings, lock and unlock a setting, update a setting, and finally delete a setting.
-At this point, your *app-configuration-quickstart.py* file should have the following code:
+At this point, your *app-configuration-example.py* file should have the following code:
```python import os from azure.appconfiguration import AzureAppConfigurationClient, ConfigurationSetting try:
- print("Azure App Configuration - Python Quickstart")
- # Quickstart code goes here
+ print("Azure App Configuration - Python example")
+ # Example code goes here
connection_string = os.getenv('AZURE_APPCONFIG_CONNECTION_STRING') app_config_client = AzureAppConfigurationClient.from_connection_string(connection_string)
except Exception as ex:
print(ex) ```
-In your console window, navigate to the directory containing the *app-configuration-quickstart.py* file and execute the following Python command to run the app:
+In your console window, navigate to the directory containing the *app-configuration-example.py* file and execute the following Python command to run the app:
```console
-python app-configuration-quickstart.py
+python app-configuration-example.py
``` You should see the following output: ```output
-Azure App Configuration - Python Quickstart
+Azure App Configuration - Python example
Retrieved configuration setting: Key: TestApp:Settings:Message, Value: Data from Azure App Configuration
Key: TestApp:Settings:NewSetting, Value: Value has been updated!
## Next steps
-In this quickstart, you created a new App Configuration store and learned how to access key-values from a Python app.
+This guide showed you how to use the Azure SDK for Python to access your data in Azure App Configuration.
For additional code samples, visit: > [!div class="nextstepaction"] > [Azure App Configuration client library samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/appconfiguration/azure-appconfiguration/samples)+
+To learn how to use Azure App Configuration with Python apps, go to:
+
+> [!div class="nextstepaction"]
+> [Create a Python app with Azure App Configuration](./quickstart-python-provider.md)
azure-functions Azure Functions Az Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/azure-functions-az-redundancy.md
- Title: Azure Functions availability zone support on Elastic Premium plans
-description: Learn how to use availability zone redundancy with Azure Functions for high-availability function applications on Elastic Premium plans.
-- Previously updated : 08/29/2022-
-# Goal: Introduce availability zone redundancy in Azure Functions Elastic Premium plans to customers and a tutorial on how to get started with Portal and ARM templates
--
-# Azure Functions support for availability zone redundancy
-
-Azure function apps in the Premium plan can be deployed into availability zones to help you achieve resiliency and reliability for your business-critical workloads. This architecture is also known as zone redundancy.
-
-Availability zones support for Azure Functions is available on Premium (Elastic Premium) and Dedicated (App Service) plans. A zone-redundant function app plan automatically balances its instances between availability zones for higher availability. This article focuses on zone redundancy support for Premium plans. For zone redundancy on Dedicated plans, refer [here](../reliability/migrate-app-service.md).
--
-## Overview
-
-An [availability zone](../reliability//availability-zones-overview.md) is a high-availability offering that protects your applications and data from datacenter failures. Availability zones are unique physical locations within an Azure region. Each zone comprises one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. You can build high-availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating into other zones.
-
-A zone redundant function app automatically distributes the instances your app runs on between the availability zones in the region. For apps running in a zone-redundant Premium plan, even as the app scales in and out, the instances the app is running on are still evenly distributed between availability zones.
-
-Function apps are zonal services, which means that function apps can be deployed using one of the following methods:
--- For function apps that aren't configured to be zone redundant, the instances are placed in a single zone that is selected by the platform in the selected region.-- For function apps that are configured as zone redundant, the platform automatically spreads the instances in the plan across all the zones in the selected region. For example, in a region with three zones, if an instance count is larger than three and the number of instances is divisible by three, the instances is distributed evenly. Otherwise, instance counts beyond `3 * N` are distributed across the remaining one or two zones.-
-## Availability zone considerations
-
-All of the available function app instances of function apps configured as zone redundant are enabled and processing events. When a zone goes down, Functions detect lost instances and automatically attempts to find new replacement instances, when needed. [Elastic scale behavior](functions-premium-plan.md#rapid-elastic-scale) still applies. However, in a zone-down scenario there's no guarantee that requests for additional instances can succeed, since back-filling lost instances occurs on a best-effort basis.
-
-Applications that are deployed in a Premium plan that has availability zones enabled continue to run even when other zones in the same region suffer an outage. However, it's possible that non-runtime behaviors could still be impacted from an outage in other availability zones. These impacted behaviors can include Premium plan scaling, application creation, application configuration, and application publishing. Zone redundancy for Premium plans only guarantees continued uptime for deployed applications.
-
-When Functions allocates instances to a zone redundant Premium plan, it uses [best effort zone balancing offered by the underlying Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). A Premium plan is considered _balanced_ when each zone has either the same number of VMs (± 1 VM) in all of the other zones used by the Premium plan.
-
-## Requirements
-
-Availability zone support is a property of the Premium plan. The following are the current requirements/limitations for enabling availability zones:
--- You can only enable availability zones when creating a Premium plan for your function app. You can't convert an existing Premium plan to use availability zones.-- You must use a [zone redundant storage account (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage) for your function app's [storage account](storage-considerations.md#storage-account-requirements). If you use a different type of storage account, Functions may show unexpected behavior during a zonal outage.-- Both Windows and Linux are supported.-- Must be hosted on an [Elastic Premium](functions-premium-plan.md) or Dedicated hosting plan. To learn how to use zone redundancy with a Dedicated plan, see [Migrate App Service to availability zone support](../reliability/migrate-app-service.md).
- - Availability zone support isn't currently available for function apps on [Consumption](consumption-plan.md) plans.
-- Function apps hosted on a Premium plan must have a minimum [always ready instances](functions-premium-plan.md#always-ready-instances) count of three.
- - The platform will enforce this minimum count behind the scenes if you specify an instance count fewer than three.
-- If you aren't using Premium plan or a scale unit that supports availability zones, are in an unsupported region, or are unsure, see the [migration guidance](../reliability/migrate-functions.md).-
-## Regional availability
-
-Zone-redundant Premium plans can currently be enabled in any of the following regions:
-
-| Americas | Europe | Middle East | Africa | Asia Pacific |
-||-||--|-|
-| Brazil South | France Central | Qatar Central | | Australia East |
-| Canada Central | Germany West Central | | | Central India |
-| Central US | North Europe | | | China North 3 |
-| East US | Sweden Central | | | East Asia |
-| East US 2 | UK South | | | Japan East |
-| South Central US | West Europe | | | Southeast Asia |
-| West US 2 | | | | |
-| West US 3 | | | | |
-
-## How to deploy a function app on a zone redundant Premium plan
-
-There are currently two ways to deploy a zone-redundant Premium plan and function app. You can use either the [Azure portal](https://portal.azure.com) or an ARM template.
-
-# [Azure portal](#tab/azure-portal)
-
-1. Open the Azure portal and navigate to the **Create Function App** page. Information on creating a function app in the portal can be found [here](functions-create-function-app-portal.md#create-a-function-app).
-
-1. In the **Basics** page, fill out the fields for your function app. Pay special attention to the fields in the table below (also highlighted in the screenshot below), which have specific requirements for zone redundancy.
-
- | Setting | Suggested value | Notes for Zone Redundancy |
- | | - | -- |
- | **Region** | Preferred region | The subscription under which this new function app is created. You must pick a region that is availability zone enabled from the [list above](#requirements). |
-
- ![Screenshot of Basics tab of function app create page.](./media/functions-az-redundancy\azure-functions-basics-az.png)
-
-1. In the **Hosting** page, fill out the fields for your function app hosting plan. Pay special attention to the fields in the table below (also highlighted in the screenshot below), which have specific requirements for zone redundancy.
-
- | Setting | Suggested value | Notes for Zone Redundancy |
- | | - | -- |
- | **Storage Account** | A [zone-redundant storage account](storage-considerations.md#storage-account-requirements) | As mentioned above in the [requirements](#requirements) section, we strongly recommend using a zone-redundant storage account for your zone redundant function app. |
- | **Plan Type** | Functions Premium | This article details how to create a zone redundant app in a Premium plan. Zone redundancy isn't currently available in Consumption plans. Information on zone redundancy on app service plans can be found [in this article](../reliability/migrate-app-service.md). |
- | **Zone Redundancy** | Enabled | This field populates the flag that determines if your app is zone redundant or not. You won't be able to select `Enabled` unless you have chosen a region supporting zone redundancy, as mentioned in step 2. |
-
- ![Screenshot of Hosting tab of function app create page.](./media/functions-az-redundancy\azure-functions-hosting-az.png)
-
-1. For the rest of the function app creation process, create your function app as normal. There are no fields in the rest of the creation process that affect zone redundancy.
-
-# [ARM template](#tab/arm-template)
-
-You can use an [ARM template](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md) to deploy to a zone-redundant Premium plan. A guide to hosting Functions on Premium plans can be found [here](functions-infrastructure-as-code.md#deploy-on-premium-plan).
-
-The only properties to be aware of while creating a zone-redundant hosting plan are the new `zoneRedundant` property and the plan's instance count (`capacity`) fields. The `zoneRedundant` property must be set to `true` and the `capacity` property should be set based on the workload requirement, but not less than `3`. Choosing the right capacity varies based on several factors and high availability/fault tolerance strategies. A good rule of thumb is to ensure sufficient instances for the application such that losing one zone of instances leaves sufficient capacity to handle expected load.
-
-> [!IMPORTANT]
-> Azure Functions apps hosted on an elastic premium, zone-redundant plan must have a minimum [always ready instance](functions-premium-plan.md#always-ready-instances) count of 3. This make sure that a zone-redundant function app always has enough instances to satisfy at least one worker per zone.
-
-Below is an ARM template snippet for a zone-redundant, Premium plan showing the `zoneRedundant` field and the `capacity` specification.
-
-```json
-"resources": [
- {
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2021-01-15",
- "name": "<YOUR_PLAN_NAME>",
- "location": "<YOUR_REGION_NAME>",
- "sku": {
- "name": "EP1",
- "tier": "ElasticPremium",
- "size": "EP1",
- "family": "EP",
- "capacity": 3
- },
- "kind": "elastic",
- "properties": {
- "perSiteScaling": false,
- "elasticScaleEnabled": true,
- "maximumElasticWorkerCount": 20,
- "isSpot": false,
- "reserved": false,
- "isXenon": false,
- "hyperV": false,
- "targetWorkerCount": 0,
- "targetWorkerSizeId": 0,
- "zoneRedundant": true
- }
- }
-]
-```
-
-To learn more about these templates, see [Automate resource deployment in Azure Functions](functions-infrastructure-as-code.md).
---
-After the zone-redundant plan is created and deployed, any function app hosted on your new plan is considered zone-redundant.
-
-## Migrate your function app to a zone-redundant plan
-
-For information on how to migrate the public multi-tenant Premium plan from non-availability zone to availability zone support, see [Migrate App Service to availability zone support](../reliability/migrate-functions.md).
-
-## Pricing
-
-There's no additional cost associated with enabling availability zones. Pricing for a zone redundant Premium plan is the same as a single zone Premium plan. You'll be charged based on your Premium plan SKU, the capacity you specify, and any instances you scale to based on your autoscale criteria. If you enable availability zones but specify a capacity less than three, the platform will enforce a minimum instance count of three and charge you for those three instances.
-
-## Next steps
--- [Learn about the Azure Functions Premium plan](functions-premium-plan.md)-- [Improve the performance and reliability of Azure Functions](performance-reliability.md)-- [Learn how to deploy Azure Functions](functions-deployment-technologies.md)-- [ARM Quickstart Templates](https://azure.microsoft.com/resources/templates/)-- [Azure Functions geo-disaster recovery](functions-geo-disaster-recovery.md)
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
Your code can also read the function app settings values as environment variable
## Configure the project for local development
-The Functions runtime uses an Azure Storage account internally. For all trigger types other than HTTP and webhooks, set the `Values.AzureWebJobsStorage` key to a valid Azure Storage account connection string. Your function app can also use the [Azurite emulator](/storage/common/storage-use-azurite.md) for the `AzureWebJobsStorage` connection setting that's required by the project. To use the emulator, set the value of `AzureWebJobsStorage` to `UseDevelopmentStorage=true`. Change this setting to an actual storage account connection string before deployment. For more information, see [Local storage emulator](functions-develop-local.md#local-storage-emulator).
+The Functions runtime uses an Azure Storage account internally. For all trigger types other than HTTP and webhooks, set the `Values.AzureWebJobsStorage` key to a valid Azure Storage account connection string. Your function app can also use the [Azurite emulator](../storage/common/storage-use-azurite.md) for the `AzureWebJobsStorage` connection setting that's required by the project. To use the emulator, set the value of `AzureWebJobsStorage` to `UseDevelopmentStorage=true`. Change this setting to an actual storage account connection string before deployment. For more information, see [Local storage emulator](functions-develop-local.md#local-storage-emulator).
To set the storage account connection string:
azure-functions Functions Geo Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-geo-disaster-recovery.md
Title: Azure Functions geo-disaster recovery and high availability
+ Title: Azure Functions geo-disaster recovery and reliability
description: How to use geographical regions for redundancy and to fail over in Azure Functions. ms.assetid: 9058fb2f-8a93-4036-a921-97a0772f503c
When entire Azure regions or datacenters experience downtime, your mission-criti
## Basic concepts
-Azure Functions run in a function app in a specific region. There's no built-in redundancy available. To avoid loss of execution during outages, you can redundantly deploy the same functions to function apps in multiple regions.
+Functions run in a function app in a specific Azure region. There's no built-in redundancy available. To avoid loss of execution during outages, you can redundantly deploy the same functions to function apps in multiple regions.
When you run the same function code in multiple regions, there are two patterns to consider: | Pattern | Description | | | | |**Active/active** | Functions in both regions are actively running and processing events, either in a duplicate manner or in rotation. We recommend using an active/active pattern in combination with [Azure Front Door](../frontdoor/front-door-overview.md) for your critical HTTP triggered functions. |
-|**Active/passive** | Functions run actively in region receiving events, while the same functions in a second region remains idle. When failover is required, the second region is activated and takes over processing. We recommend this pattern for your event-driven, non-HTTP triggered functions, such as Service Bus and Event Hub triggered functions.
+|**Active/passive** | Functions run actively in the region receiving events, while the same functions in a second region remain idle. When failover is required, the second region is activated and takes over processing. We recommend this pattern for your event-driven, non-HTTP triggered functions, such as Service Bus and Event Hubs triggered functions.
To learn more about multi-region deployments, see the guidance in [Highly available multi-region web application](/azure/architecture/reference-architectures/app-service-web-app/multi-region). ## Redundancy for HTTP trigger functions
-The active/active pattern is the best deployment model for HTTP trigger functions. In this case, you need to use [Azure Front Door](../frontdoor/front-door-overview.md) to coordinate requests between both regions. Azure Front Door can route and round-robin HTTP requests between functions running in multiple regions.It also periodically checks the health of each endpoint. When a function in one region stops responding to health checks, Azure Front Door takes it out of rotation and only forwards traffic to the remaining healthy functions.
+The active/active pattern is the best deployment model for HTTP trigger functions. In this case, you need to use [Azure Front Door](../frontdoor/front-door-overview.md) to coordinate requests between both regions. Azure Front Door can route and round-robin HTTP requests between functions running in multiple regions. It also periodically checks the health of each endpoint. When a function in one region stops responding to health checks, Azure Front Door takes it out of rotation, and only forwards traffic to the remaining healthy functions.
![Architecture for Azure Front Door and Function](media/functions-geo-dr/front-door.png) ## Redundancy for non-HTTP trigger functions
-Redundancy for functions that consume events from other services requires a different pattern, which work with the failover pattern of the related services.
+Redundancy for functions that consume events from other services requires a different pattern, which works with the failover pattern of the related services.
### Active/passive redundancy for non-HTTP trigger functions
-Active/passive provides a way for only a single function to process each message, but provides a mechanism to fail over to a secondary region in case of a disaster. Function apps work with the failover behaviors of the partner services, such as [Azure Service Bus geo-recovery](../service-bus-messaging/service-bus-geo-dr.md) and [Azure Event Hubs geo-recovery](../event-hubs/event-hubs-geo-dr.md). The secondary function app is considered _passive_ because the failover service to which it's connected isn't currently active, so the function app is essentially _idle_.
+Active/passive provides a way for only a single function to process each message while providing a mechanism to fail over to a secondary region in a disaster. Function apps work with the failover behaviors of the partner services, such as [Azure Service Bus geo-recovery](../service-bus-messaging/service-bus-geo-dr.md) and [Azure Event Hubs geo-recovery](../event-hubs/event-hubs-geo-dr.md). The secondary function app is considered _passive_ because the failover service to which it's connected isn't currently active, so the function app remains _idle_.
Consider an example topology using an Azure Event Hubs trigger. In this case, the active/passive pattern involves the following components:
-* Azure Event Hub deployed to both a primary and secondary region.
-* [Geo-disaster enabled](../service-bus-messaging/service-bus-geo-dr.md) to pair the primary and secondary Event Hub. This also creates an _alias_ you can use to connect to event hubs and switch from primary to secondary without changing the connection info.
+* Azure Event Hubs deployed to both a primary and secondary region.
+* [Geo-disaster enabled](../service-bus-messaging/service-bus-geo-dr.md) to pair the primary and secondary event hubs. This also creates an _alias_ you can use to connect to event hubs and switch from primary to secondary without changing the connection info.
* Function apps are deployed to both the primary and secondary (failover) region, with the app in the secondary region essentially being idle because messages aren't being sent there. * Function app triggers on the *direct* (non-alias) connection string for its respective event hub. * Publishers to the event hub should publish to the alias connection string. ![Active-passive example architecture](media/functions-geo-dr/active-passive.png)
-Before failover, publishers sending to the shared alias route to the primary event hub. The primary function app is listening exclusively to the primary event hub. The secondary function app is passive and idle. As soon as failover is initiated, publishers sending to the shared alias are routed to the secondary event hub. The secondary function app now become active and start triggering automatically. Effective failover to a secondary region can be driven entirely from the event hub, with the functions becoming active only when the respective event hub is active.
+Before failover, publishers sending to the shared alias route to the primary event hub. The primary function app is listening exclusively to the primary event hub. The secondary function app is passive and idle. As soon as failover is initiated, publishers sending to the shared alias are routed to the secondary event hub. The secondary function app now becomes active and starts triggering automatically. Effective failover to a secondary region can be driven entirely from the event hub, with the functions becoming active only when the respective event hub is active.
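To make the publisher side of this pattern concrete, here's a minimal Python sketch that sends events through the Geo-DR alias with the `azure-eventhub` package, so failover redirects traffic without a code change. The connection string and event hub name are placeholders, and the sketch illustrates the alias pattern rather than reproducing content from the linked articles.

```python
from azure.eventhub import EventHubProducerClient, EventData

# Assumed placeholder values - use the Geo-DR alias connection string, not the primary namespace.
ALIAS_CONNECTION_STRING = "Endpoint=sb://<your-geo-dr-alias>.servicebus.windows.net/;<rest-of-connection-string>"
EVENT_HUB_NAME = "telemetry"

producer = EventHubProducerClient.from_connection_string(
    conn_str=ALIAS_CONNECTION_STRING,
    eventhub_name=EVENT_HUB_NAME,
)

# Publishing to the alias means the currently active event hub (primary, or secondary after
# failover) receives the event, while each function app still listens on its direct connection.
with producer:
    batch = producer.create_batch()
    batch.add(EventData("sample event routed through the Geo-DR alias"))
    producer.send_batch(batch)
```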
Read more on information and considerations for failover with [Service Bus](../service-bus-messaging/service-bus-geo-dr.md) and [Event Hubs](../event-hubs/event-hubs-geo-dr.md). ### Active/active redundancy for non-HTTP trigger functions
-You can still achieve active/active deployments for non-HTTP triggered functions. However, you need to consider how the two active regions interact or coordinate with one another. When you deploy the same function app to two regions with each triggering on the same Service Bus queue, they would act as competing consumers on de-queueing that queue. While this means each message is only being processed by either one of the instances, it also means there is still a single point of failure on the single Service Bus instance.
+You can still achieve active/active deployments for non-HTTP triggered functions. However, you need to consider how the two active regions interact or coordinate with one another. When you deploy the same function app to two regions with each triggering on the same Service Bus queue, they would act as competing consumers on de-queueing that queue. While this means each message is only being processed by either one of the instances, it also means there's still a single point of failure on the single Service Bus instance.
-You could instead deploy two Service Bus queues, with one in a primary region, one in a secondary region. In this case, you could have two function apps, with each pointed to the Service Bus queue active in their region. The challenge with this topology is how the queue messages are distributed between the two regions. Often, this means that each publisher attempts to publish a message to *both* regions, and each message is processed by both active function apps. While this creates the desired active/active pattern, it also creates other challenges around duplication of compute and when or how data is consolidated. Because of these challenges, we recommend use the active/passive pattern for non-HTTPS trigger functions.
+You could instead deploy two Service Bus queues, with one in a primary region, one in a secondary region. In this case, you could have two function apps, with each pointed to the Service Bus queue active in their region. The challenge with this topology is how the queue messages are distributed between the two regions. Often, this means that each publisher attempts to publish a message to *both* regions, and each message is processed by both active function apps. While this creates the desired active/active pattern, it also creates other challenges around duplication of compute and when or how data is consolidated. Because of these challenges, we recommend using the active/passive pattern for non-HTTPS trigger functions.
## Next steps
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
To update your project to Azure Functions 4.x:
[!INCLUDE [functions-extension-bundles-json-v3](../../includes/functions-extension-bundles-json-v3.md)]
- The `extensionBundle` element is required because after version 1.x, bindings are maintained as external packages. For more information, see [Extension bundles](/functions-bindings-register.md#extension-bundles).
+ The `extensionBundle` element is required because after version 1.x, bindings are maintained as external packages. For more information, see [Extension bundles](functions-bindings-register.md#extension-bundles).
1. Update your local.settings.json file so that it has at least the following elements:
To update your project to Azure Functions 4.x:
} ```
- The `AzureWebJobsStorage` setting can be either the Azurite storage emulator or an actual Azure storage account. For more information, see [Local storage emulator](/functions-develop-local.md#local-storage-emulator).
+ The `AzureWebJobsStorage` setting can be either the Azurite storage emulator or an actual Azure storage account. For more information, see [Local storage emulator](functions-develop-local.md#local-storage-emulator).
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-csharp" [!INCLUDE [functions-migrate-v4](../../includes/functions-migrate-v4.md)]
This section details changes made after version 1.x in both trigger and binding
### Changes in triggers and bindings
-Starting with version 2.x, you must install the extensions for specific triggers and bindings used by the functions in your app. The only exception for this HTTP and timer triggers, which don't require an extension. For more information, see [Register and install binding extensions](./functions-bindings-register.md).
+Starting with version 2.x, you must install the extensions for specific triggers and bindings used by the functions in your app. The only exceptions are HTTP and timer triggers, which don't require an extension. For more information, see [Register and install binding extensions](functions-bindings-register.md).
There are also a few changes in the *function.json* or attributes of the function between versions. For example, the Event Hubs `path` property is now `eventHubName`. See the [existing binding table](functions-versions.md#bindings) for links to documentation for each binding.
azure-functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/overview.md
This new version of Start/Stop VMs v2 provides a decentralized low-cost automati
> + We've updated our Start/Stop VMs v2 function app resource to use [Azure Functions version 4.x](../functions-versions.md), and you'll get this version by default when you install Start/Stop VMs v2 from the marketplace. Existing customers should migrate from Functions version 3.x to version 4.x using our auto-update functionality. This functionality gets the latest version either by running the TriggerAutoUpdate timer function once manually or waiting for the schedule to run, if you've enabled it. >
-> + We've added a plan (**AZ - Availability Zone**) to our Start/Stop VMs v2 solution to enable a high-availability offering. You can now choose between Consumption and Availability Zone plans before you start your deployment. In most cases, the monthly cost of the Availability Zone plan is higher when compared to the Consumption plan.
+> + We've added a plan (**AZ - Availability Zone**) to our Start/Stop VMs v2 solution to enable a more reliable offering. You can now choose between Consumption and Availability Zone plans before you start your deployment. In most cases, the monthly cost of the Availability Zone plan is higher when compared to the Consumption plan.
> > + Automatic updating functionality was introduced on April 28th, 2022. This new auto update feature helps you stay on the latest version of the solution. This feature is enabled by default when you perform a new installation. > If you deployed your solution before this date, you can reinstall to the latest version from our [GitHub repository](https://github.com/microsoft/startstopv2-deployments)
azure-maps How To Dev Guide Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md
The [Azure.Maps Namespace][Azure.Maps Namespace] in the .NET documentation.
[rendering readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Rendering/README.md [rendering sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Rendering/samples [geolocation package]: https://www.nuget.org/packages/Azure.Maps.geolocation
-[geolocation readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.geolocation/README.md
+[geolocation readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Geolocation/README.md
[geolocation sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Geolocation/samples [FuzzySearch]: /dotnet/api/azure.maps.search.mapssearchclient.fuzzysearch [Azure.Maps Namespace]: /dotnet/api/azure.maps [search-api]: /dotnet/api/azure.maps.search [Identity library .NET]: /dotnet/api/overview/azure/identity-readme?view=azure-dotnet [defaultazurecredential.NET]: /dotnet/api/overview/azure/identity-readme?view=azure-dotnet#defaultazurecredential
-[NuGet]: https://www.nuget.org/
+[NuGet]: https://www.nuget.org/
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
Azure Maps Java SDK supports [Java 8][Java 8] or above.
[C# rendering readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Rendering/README.md [C# rendering sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Rendering/samples [C# geolocation package]: https://www.nuget.org/packages/Azure.Maps.geolocation
-[C# geolocation readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.geolocation/README.md
+[C# geolocation readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Geolocation/README.md
[C# geolocation sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Geolocation/samples <!-- Python SDK Developers Guide >
Azure Maps Java SDK supports [Java 8][Java 8] or above.
[py search readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-search/README.md [py search sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-search/samples [py route package]: https://pypi.org/project/azure-maps-route
-[py route readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-routing/README.md
-[py route sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-routing/samples
+[py route readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-route/README.md
+[py route sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-route/samples
[py render package]: https://pypi.org/project/azure-maps-render [py render readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-render/README.md [py render sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-render/samples
azure-monitor Data Collection Snmp Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-snmp-data.md
+
+ Title: Collect SNMP trap data with Azure Monitor Agent
+description: Learn how to collect SNMP trap data and send the data to Azure Monitor Logs using Azure Monitor Agent.
+ Last updated : 06/22/2022++++
+# Collect SNMP trap data with Azure Monitor Agent
+
+Simple Network Management Protocol (SNMP) is a widely deployed management protocol for monitoring and configuring Linux devices and appliances.
+
+You can collect SNMP data in two ways:
+
+- **Polls** - The managing system polls an SNMP agent to gather values for specific properties.
+- **Traps** - An SNMP agent forwards events or notifications to a managing system.
+
+Traps are most often used as event notifications, while polls are more appropriate for stateful health detection and collecting performance metrics.
+
+You can use Azure Monitor Agent to collect SNMP traps as syslog events or as events logged in a text file.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Set up the trap receiver log options and format
+> * Configure the trap receiver to send traps to syslog or text file
+> * Collect SNMP traps using Azure Monitor Agent
+
+## Prerequisites
+
+To complete this tutorial, you need:
+
+- A Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).
+
+- Management Information Base (MIB) files for the devices you are monitoring.
+
+ SNMP identifies monitored properties using Object Identifier (OID) values, which are defined and described in vendor-provided MIB files.
+
+ The device vendor typically provides MIB files. If you don't have the MIB files, you can find the files for many vendors on third-party websites.
+
+ Place all MIB files for each device that sends SNMP traps in `/usr/share/snmp/mibs`, the default directory for MIB files. This enables logging SNMP trap fields with meaningful names instead of OIDs.
+
+ Some vendors maintain a single MIB for all devices, while others have hundreds of MIB files. To load an MIB file correctly, snmptrapd must load all dependent MIBs. Be sure to check the snmptrapd log file after loading MIBs to ensure that there are no missing dependencies in parsing your MIB files.
+
+- A Linux server with an SNMP trap receiver.
+
+ In this article, we use **snmptrapd**, an SNMP trap receiver from the [Net-SNMP](https://www.net-snmp.org/) agent, which most Linux distributions provide. However, there are many other SNMP trap receiver services you can use.
+
+ The snmptrapd configuration procedure may vary between Linux distributions. For more information on snmptrapd configuration, including guidance on configuring for SNMP v3 authentication, see the [Net-SNMP documentation](https://www.net-snmp.org/docs/man/snmptrapd.conf.html).
+
+ It's important that the SNMP trap receiver you use can load MIB files for your environment, so that the properties in the SNMP trap message have meaningful names instead of OIDs.
+
+## Set up the trap receiver log options and format
+
+To set up the snmptrapd trap receiver on a CentOS 7, Red Hat Enterprise Linux 7, or Oracle Linux 7 server:
+
+1. Install and enable snmptrapd:
+
+ ```bash
+ #Install the SNMP agent
+ sudo yum install net-snmp
+ #Enable the service
+ sudo systemctl enable snmptrapd
+ #Allow UDP 162 through the firewall
+ sudo firewall-cmd --zone=public --add-port=162/udp --permanent
+ ```
+
+1. Authorize community strings (SNMP v1 and v2 authentication strings) and define the format for the traps written to the log file:
+
+ 1. Open `snmptrapd.conf`:
+
+ ```bash
+ sudo vi /etc/snmp/snmptrapd.conf
+ ```
+
+ 1. Add these lines to your `snmptrapd.conf` file:
+
+ ```bash
+ # Allow all traps for all OIDs, from all sources, with a community string of public
+ authCommunity log,execute,net public
+ # Format logs for collection by Azure Monitor Agent
+ format2 snmptrap %a %B %y/%m/%l %h:%j:%k %N %W %q %T %W %v \n
+ ```
+
+ > [!NOTE]
+   > snmptrapd logs both traps and daemon messages - for example, service stop and start - to the same log file. In the example above, we've defined the log format to start with the word "snmptrap" to make it easy to filter SNMP traps from the log later on.
+## Configure the trap receiver to send trap data to syslog or text file
+
+There are two ways snmptrapd can send SNMP traps to Azure Monitor Agent:
+
+- Forward incoming traps to syslog, which you can set as the data source for Azure Monitor Agent.
+
+- Write the syslog messages to a file, which Azure Monitor Agent can *tail* and parse. This option allows you to send the SNMP traps as a new data type rather than as syslog events.
+
+To edit the output behavior configuration of snmptrapd:
+
+1. Open the `/etc/sysconfig/snmptrapd` file:
+
+ ```bash
+ sudo vi /etc/sysconfig/snmptrapd
+ ```
+
+1. Configure the output destination.
+
+ Here's an example configuration:
+
+ ```bash
+ # snmptrapd command line options
+ # '-f' is implicitly added by snmptrapd systemd unit file
+ # OPTIONS="-Lsd"
+ OPTIONS="-m ALL -Ls2 -Lf /var/log/snmptrapd"
+ ```
+
+ The options in this example configuration are:
+
+ - `-m ALL` - Load all MIB files in the default directory.
+ - `-Ls2` - Output traps to syslog, to the Local2 facility.
+ - `-Lf /var/log/snmptrapd` - Log traps to the `/var/log/snmptrapd` file.
+
+> [!NOTE]
+> See Net-SNMP documentation for more information about [how to set output options](https://www.net-snmp.org/docs/man/snmpcmd.html) and [how to set formatting options](https://www.net-snmp.org/docs/man/snmptrapd.html).
+
+## Collect SNMP traps using Azure Monitor Agent
+
+If you configured snmptrapd to send events to syslog, follow the steps described in [Collect events and performance counters with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md). Make sure to select **Linux syslog** as the data source when you define the data collection rule for Azure Monitor Agent.
+
+If you configured snmptrapd to write events to a file, follow the steps described in [Collect text and IIS logs with Azure Monitor agent](../agents/data-collection-text-log.md).
+
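If you want to sanity-check the trap log before configuring the agent, a small Python sketch like the one below filters the trap entries. It assumes only that log lines begin with the literal word `snmptrap`, as set by the `format2` directive earlier, and that the file path matches the `-Lf` option shown above.

```python
LOG_PATH = "/var/log/snmptrapd"  # matches the -Lf option shown earlier

with open(LOG_PATH, encoding="utf-8", errors="replace") as log_file:
    for line in log_file:
        # Keep only trap entries; snmptrapd writes its own daemon messages to the same file.
        if line.startswith("snmptrap"):
            print(line.rstrip())
```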
+## Next steps
+
+Learn more about:
+
+- [Azure Monitor Agent](azure-monitor-agent-overview.md).
+- [Data collection rules](../essentials/data-collection-rule-overview.md).
+- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).
azure-monitor Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/gateway.md
To get the Log Analytics gateway from the Azure portal, follow these steps:
1. Browse the list of services, and then select **Log Analytics**. 1. Select a workspace.
-1. In your workspace blade, under **General**, select **Quick Start**.
+1. In your workspace pane, from the pane on the left, under **General**, select **Quick Start**.
1. Under **Choose a data source to connect to the workspace**, select **Computers**.
-1. In the **Direct Agent** blade, select **Download Log Analytics gateway**.
+1. In the **Direct Agent** pane, select **Download Log Analytics gateway**.
![Screenshot of the steps to download the Log Analytics gateway](./media/gateway/download-gateway.png) or
-1. In your workspace blade, under **Settings**, select **Advanced settings**.
+1. In your workspace pane, from the pane on the left, under **Settings**, select **Advanced settings**.
1. Go to **Connected Sources** > **Windows Servers** and select **Download Log Analytics gateway**. ## Install Log Analytics gateway using setup wizard
azure-monitor Om Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/om-agents.md
Title: Connect Operations Manager to Azure Monitor | Microsoft Docs description: To maintain your existing investment in System Center Operations Manager and use extended capabilities with Log Analytics, you can integrate Operations Manager with your workspace. Previously updated : 03/31/2022 Last updated : 11/18/2022
If your IT security policies do not allow computers on your network to connect t
Before starting, review the following requirements.
+>[!Note]
+>From 1 February 2023, System Center Operations Manager versions earlier than [2019 UR3](/system-center/scom/release-build-versions?view=sc-om-2019#agents) will stop sending data to the Log Analytics workspace. Ensure that your agents are on SCOM agent version 10.19.10177.0 ([2019 UR3](/system-center/scom/release-build-versions?view=sc-om-2019#agents) or later) or 10.22.10056.0 ([2022 RTM](/system-center/scom/release-build-versions?view=sc-om-2022#agents)), and that your SCOM management group is on version 2019 UR3 or later, or 2022.
+ * Azure Monitor supports the following: * System Center Operations Manager 2022 * System Center Operations Manager 2019
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-performance-diagnostics.md
Emails about smart detection performance anomalies are limited to one email per
* *If I don't do anything in response to a notification, will I get a reminder?*
- * No, you get a message about each issue only once. If the issue persists, it will be updated in the smart detection feed blade.
+ * No, you get a message about each issue only once. If the issue persists, it will be updated in the smart detection feed pane.
* *I lost the email. Where can I find the notifications in the portal?* * In the Application Insights overview of your app, click the **Smart detection** tile. There you'll find all notifications up to 90 days back.
Consider the parameters of the issue. If it's geography-dependent, set up [avail
### Diagnose slow page loads Where is the problem? Is the server slow to respond, is the page too long, or does the browser need too much work to display it?
-Open the Browsers metric blade. The segmented display of browser page load time shows where the time is going.
+Open the Browsers metric pane. The segmented display of browser page load time shows where the time is going.
* If **Send Request Time** is high, either the server is responding slowly, or the request is a post with large amount of data. Look at the [performance metrics](../app/performance-counters.md) to investigate response times. * Set up [dependency tracking](../app/asp-net-dependencies.md) to see whether the slowness is because of external services or your database.
azure-monitor Source Map Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/source-map-support.md
If you would like to configure or change the storage account or Blob container t
4. Select `Apply`. > [!div class="mx-imgBorder"]
-> ![Reconfigure your selected Azure Blob Container by navigating to the Properties Blade](./media/source-map-support/reconfigure.png)
+> ![Reconfigure your selected Azure Blob Container by navigating to the Properties pane](./media/source-map-support/reconfigure.png)
## Troubleshooting
azure-monitor Status Monitor V2 Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-get-started.md
If you don't have an Azure subscription, create a [free account](https://azure.m
### Install prerequisites -- To enable monitoring you will require a connection string. A connection string is displayed on the Overview blade of your Application Insights resource. For more information, see page [Connection Strings](./sdk-connection-string.md?tabs=net#find-your-connection-string).
+- To enable monitoring you will require a connection string. A connection string is displayed on the Overview pane of your Application Insights resource. For more information, see page [Connection Strings](./sdk-connection-string.md?tabs=net#find-your-connection-string).
> [!NOTE] > As of April 2020, PowerShell Gallery has deprecated TLS 1.1 and 1.0.
azure-monitor Tutorial Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md
In order to provide globally unique names to resources, a six-character suffix i
<!-- The long description for app-service-app-setting-button.png: Screenshot of the App Service resource screen in the Azure portal. The screenshot shows Configuration in the left menu under the Settings section selected and highlighted, the Application settings tab selected and highlighted, and the + New application setting toolbar button highlighted. --> :::image type="content" source="media/tutorial-asp-net-core/app-service-app-setting-button.png" alt-text="Screenshot of the App Service resource screen in the Azure portal." lightbox="media/tutorial-asp-net-core/app-service-app-setting-button.png":::
-3. In the Add/Edit application setting blade, complete the form as follows and select **OK**.
+3. In the Add/Edit application setting pane, complete the form as follows and select **OK**.
| Field | Value | |-|-| | Name | APPLICATIONINSIGHTS_CONNECTION_STRING | | Value | Paste the Application Insights connection string value you copied in the preceding section. |
- :::image type="content" source="media/tutorial-asp-net-core/add-edit-app-setting.png" alt-text="Screenshot of the Add/Edit application setting blade in the Azure portal with the preceding values populated in the Name and Value fields." lightbox="media/tutorial-asp-net-core/add-edit-app-setting.png":::
+ :::image type="content" source="media/tutorial-asp-net-core/add-edit-app-setting.png" alt-text="Screenshot of the Add/Edit application setting pane in the Azure portal with the preceding values populated in the Name and Value fields." lightbox="media/tutorial-asp-net-core/add-edit-app-setting.png":::
4. On the App Service Configuration screen, select the **Save** button from the toolbar menu. When prompted to save the changes, select **Continue**.
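If you prefer to script this configuration step instead of using the portal, here is a hedged Azure PowerShell sketch. It assumes the `Az.Websites` module; the resource group, app name, and connection string values are placeholders. Because `Set-AzWebApp -AppSettings` replaces the whole app settings collection, the existing settings are read and merged first.

```powershell
# Sketch: add APPLICATIONINSIGHTS_CONNECTION_STRING as an App Service app setting.
$app = Get-AzWebApp -ResourceGroupName "<resource-group>" -Name "<app-name>"

# Set-AzWebApp -AppSettings overwrites all settings, so keep the existing ones.
$settings = @{}
foreach ($setting in $app.SiteConfig.AppSettings) { $settings[$setting.Name] = $setting.Value }
$settings["APPLICATIONINSIGHTS_CONNECTION_STRING"] = "<your-connection-string>"

Set-AzWebApp -ResourceGroupName "<resource-group>" -Name "<app-name>" -AppSettings $settings
```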
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md
Select one or more subscriptions to view:
- All of its resources' changes from the past 24 hours. - Old and new values to provide insights at a glance. Click into a change to view the full Resource Manager snippet and other properties. :::image type="content" source="./media/change-analysis/change-details.png" alt-text="Screenshot of change details":::
-Send feedback from the Change Analysis blade:
+Send feedback from the Change Analysis pane:
:::image type="content" source="./media/change-analysis/change-analysis-feedback.png" alt-text="Screenshot of feedback button in Change Analysis tab":::
azure-monitor Tutorial Outages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/tutorial-outages.md
Since the connection string is a secret value, we hide this on the overview page
:::image type="content" source="./media/change-analysis/view-change-details.png" alt-text="Screenshot of viewing change details for troubleshooting.":::
-The change details blade also shows important information, including who made the change.
+The change details pane also shows important information, including who made the change.
Now that you've discovered the web app in-guest change and understand next steps, you can proceed with troubleshooting the issue.
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
Run the following commands to locate the full Azure Resource Manager identifier
In the output, find the workspace name of interest. The `id` field of that represents the Azure Resource Manager identifier of that Log Analytics workspace. >[!TIP]
- > This `id` can also be found in the *Overview* blade of the Log Analytics workspace through the Azure portal.
+ > This `id` can also be found in the *Overview* pane of the Log Analytics workspace through the Azure portal.
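As a hedged alternative to the CLI commands above, the workspace's Azure Resource Manager identifier can also be read with Azure PowerShell. The sketch below assumes the `Az.OperationalInsights` module; the names are placeholders.

```powershell
# Sketch: return the Azure Resource Manager ID of a Log Analytics workspace.
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName "<resource-group>" -Name "<workspace-name>"
$workspace.ResourceId
```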
## Create extension instance
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
6. Select the 'Configure' button to deploy the Azure Monitor Container Insights cluster extension.
-### Onboarding from Azure Monitor blade
+### Onboarding from Azure Monitor pane
-1. In the Azure portal, navigate to the 'Monitor' blade, and select the 'Containers' option under the 'Insights' menu.
+1. In the Azure portal, navigate to the 'Monitor' pane, and select the 'Containers' option under the 'Insights' menu.
2. Select the 'Unmonitored clusters' tab to view the Azure Arc-enabled Kubernetes clusters that you can enable monitoring for.
az k8s-extension delete --name azuremonitor-containers --cluster-type connectedC
``` ## Disconnected cluster
-If your cluster is disconnected from Azure for > 48 hours, then Azure Resource Graph won't have information about your cluster. As a result the Insights blade may display incorrect information about your cluster state.
+If your cluster is disconnected from Azure for > 48 hours, then Azure Resource Graph won't have information about your cluster. As a result the Insights pane may display incorrect information about your cluster state.
## Troubleshooting For issues with enabling monitoring, we have provided a [troubleshooting script](https://aka.ms/azmon-ci-troubleshooting) to help diagnose any problems.
azure-monitor Container Insights Transition Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-transition-solution.md
The following table highlights the key differences between monitoring using the
| Agent | Log Analytics Agent (deprecated in 2024) | [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) |
| Alerting | Log based alerts tied to Log Analytics Workspace | Log based alerting and [recommended metric-based](./container-insights-metric-alerts.md) alerts |
| Metrics | Does not support Azure Monitor metrics | Supports Azure Monitor metrics |
-| Consumption | Viewable only from Log Analytics Workspace | Accessible from both Azure Monitor and AKS/Arc resource blade |
+| Consumption | Viewable only from Log Analytics Workspace | Accessible from both Azure Monitor and AKS/Arc resource pane |
| Agent | Manual agent upgrades | Automatic updates for monitoring agent with version control through Azure Arc cluster extensions | ## Next steps
azure-monitor Collect Custom Metrics Guestos Vm Cloud Service Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-cloud-service-classic.md
The process that's outlined in this article works only for performance counters
1. Create and deploy a classic cloud service. A sample classic Cloud Services application and deployment can be found at [Get started with Azure Cloud Services and ASP.NET](../../cloud-services/cloud-services-dotnet-get-started.md).
-2. You can use an existing storage account or deploy a new storage account. It's best if the storage account is in the same region as the classic cloud service that you created. In the Azure portal, go to the **Storage accounts** resource blade, and then select **Keys**. Take note of the storage account name and the storage account key. You'll need this information in later steps.
+2. You can use an existing storage account or deploy a new storage account. It's best if the storage account is in the same region as the classic cloud service that you created. In the Azure portal, go to the **Storage accounts** resource pane, and then select **Keys**. Take note of the storage account name and the storage account key. You'll need this information in later steps.
![Storage account keys](./media/collect-custom-metrics-guestos-vm-cloud-service-classic/storage-keys.png)
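If you'd rather script this lookup, the account keys can also be listed with Azure PowerShell. The sketch below uses the Az module cmdlet `Get-AzStorageAccountKey`; the classic (ASM) cmdlets used later in this article have their own equivalent, so treat this only as an illustration. Names are placeholders.

```powershell
# Sketch: list the access keys of a storage account; note the account name and the key1 value.
Get-AzStorageAccountKey -ResourceGroupName "<resource-group>" -Name "<storage-account-name>"
```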
Set-AzureServiceDiagnosticsExtension -ServiceName <classicCloudServiceName> -Sto
2. On the left menu, select **Monitor.**
-3. On the **Monitor** blade, select the **Metrics Preview** tab.
+3. On the **Monitor** pane, select the **Metrics Preview** tab.
4. In the resources drop-down menu, select your classic cloud service.
azure-monitor Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-grafana.md
Azure Managed Grafana supports Azure authentication by default.
## Self-managed Grafana The following sections describe how to configure Azure Monitor managed service for Prometheus (preview) as a data source for self-managed Grafana on an Azure virtual machine.
-### Configure system identify
+### Configure system identity
Azure virtual machines support both system assigned and user assigned identity. The following steps configure system assigned identity. **Configure from Azure virtual machine**<br>
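As a hedged alternative to the portal steps, the system-assigned identity can be enabled with Azure PowerShell. The sketch assumes the `Az.Compute` module; the resource group and VM names are placeholders.

```powershell
# Sketch: enable a system-assigned managed identity on an existing virtual machine.
$vm = Get-AzVM -ResourceGroupName "<resource-group>" -Name "<vm-name>"
Update-AzVM -ResourceGroupName "<resource-group>" -VM $vm -IdentityType SystemAssigned
```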
azure-monitor Resource Logs Blob Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-blob-format.md
You are only impacted by this change if you:
To identify if you have diagnostic settings that are sending data to an Azure storage account, you can navigate to the **Monitor** section of the portal, click on **Diagnostic Settings**, and identify any resources that have **Diagnostic Status** set to **Enabled**:
-![Azure Monitor Diagnostic Settings blade](media/resource-logs-blob-format/portal-diag-settings.png)
+![Azure Monitor Diagnostic Settings pane](media/resource-logs-blob-format/portal-diag-settings.png)
If Diagnostic Status is set to enabled, you have an active diagnostic setting on that resource. Click on the resource to see if any diagnostic settings are sending data to a storage account:
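If you'd rather check this from a script than click through each resource, the following hedged Azure PowerShell sketch (assuming the `Az.Monitor` module) lists the diagnostic settings on a single resource and shows which of them target a storage account. The resource ID is a placeholder.

```powershell
# Sketch: find diagnostic settings on a resource that send data to a storage account.
$settings = Get-AzDiagnosticSetting -ResourceId "<resource-id>"
$settings | Where-Object { $_.StorageAccountId } | Select-Object Name, StorageAccountId
```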
azure-monitor Ad Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/ad-assessment.md
View the summarized compliance assessments for your infrastructure and then dril
1. On the **Overview** page, click the **Active Directory Health Check** tile.
-2. On the **Health Check** page, review the summary information in one of the focus area blades and then click one to view recommendations for that focus area.
+2. On the **Health Check** page, review the summary information in one of the focus area panes and then click one to view recommendations for that focus area.
3. On any of the focus area pages, you can view the prioritized recommendations made for your environment. Click a recommendation under **Affected Objects** to view details about why the recommendation is made.
azure-monitor Azure Key Vault Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-key-vault-deprecated.md
After you click the **Key Vault Analytics** tile, you can view summaries of your
### To view details for any operation 1. On the **Overview** page, click the **Key Vault Analytics** tile.
-2. On the **Azure Key Vault** dashboard, review the summary information in one of the blades, and then click one to view detailed information about it in the log search page.
+2. On the **Azure Key Vault** dashboard, review the summary information in one of the panes, and then click one to view detailed information about it in the log search page.
On any of the log search pages, you can view results by time, detailed results, and your log search history. You can also filter by facets to narrow the results.
azure-monitor App Insights Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/app-insights-connector.md
This solution does not install any management packs in connected management grou
## Use the solution
-The following sections describe how you can use the blades shown in the Application Insights dashboard to view and interact with data from your apps.
+The following sections describe how you can use the panes shown in the Application Insights dashboard to view and interact with data from your apps.
### View Application Insights Connector information
-Click the **Application Insights** tile to open the **Application Insights** dashboard to see the following blades.
+Click the **Application Insights** tile to open the **Application Insights** dashboard to see the following panes.
-![Screenshot of the Application Insights dashboard showing the blades for Applications, Data Volume, and Availability.](./media/app-insights-connector/app-insights-dash01.png)
+![Screenshot of the Application Insights dashboard showing the panes for Applications, Data Volume, and Availability.](./media/app-insights-connector/app-insights-dash01.png)
-![Screenshot of the Application Insights dashboard showing the blades for Server Requests, Failures, and Exceptions.](./media/app-insights-connector/app-insights-dash02.png)
+![Screenshot of the Application Insights dashboard showing the panes for Server Requests, Failures, and Exceptions.](./media/app-insights-connector/app-insights-dash02.png)
-The dashboard includes the blades shown in the table. Each blade lists up to 10 items matching that blade's criteria for the specified scope and time range. You can run a log search that returns all records when you click **See all** at the bottom of the blade or when you click the blade header.
+The dashboard includes the panes shown in the table. Each pane lists up to 10 items matching that pane's criteria for the specified scope and time range. You can run a log search that returns all records when you click **See all** at the bottom of the pane or when you click the pane header.
| **Column** | **Description** |
The dashboard includes the blades shown in the table. Each blade lists up to 10
When you click any item in the dashboard, you see an Application Insights perspective shown in search. The perspective provides an extended visualization, based on the telemetry type that you selected. So, the visualization content changes for different telemetry types.
-When you click anywhere in the Applications blade, you see the default **Applications** perspective.
+When you click anywhere in the Applications pane, you see the default **Applications** perspective.
![Application Insights Applications perspective](./media/app-insights-connector/applications-blade-drill-search.png) The perspective shows an overview of the application that you selected.
-The **Availability** blade shows a different perspective view where you can see web test results and related failed requests.
+The **Availability** pane shows a different perspective view where you can see web test results and related failed requests.
![Application Insights Availability perspective](./media/app-insights-connector/availability-blade-drill-search.png)
-When you click anywhere in the **Server Requests** or **Failures** blades, the perspective components change to give you a visualization that related to requests.
+When you click anywhere in the **Server Requests** or **Failures** panes, the perspective components change to give you a visualization that's related to requests.
-![Application Insights Failures blade](./media/app-insights-connector/server-requests-failures-drill-search.png)
+![Application Insights Failures pane](./media/app-insights-connector/server-requests-failures-drill-search.png)
-When you click anywhere in the **Exceptions** blade, you see a visualization that's tailored to exceptions.
+When you click anywhere in the **Exceptions** pane, you see a visualization that's tailored to exceptions.
-![Application Insights Exceptions blade](./media/app-insights-connector/exceptions-blade-drill-search.png)
+![Application Insights Exceptions pane](./media/app-insights-connector/exceptions-blade-drill-search.png)
Regardless of whether you click something on the **Application Insights Connector** dashboard, within the **Search** page itself, any query returning Application Insights data shows the Application Insights perspective. For example, if you are viewing Application Insights data, a **&#42;** query also shows the perspective tab like the following image:
Perspective components are updated depending on the search query. This means tha
### Pivot to an app in the Azure portal
-Application Insights Connector blades are designed to enable you to pivot to the selected Application Insights app *when you use the Azure portal*. You can use the solution as a high-level monitoring platform that helps you troubleshoot an app. When you see a potential problem in any of your connected applications, you can either drill into it in Log Analytics search or you can pivot directly to the Application Insights app.
+Application Insights Connector panes are designed to enable you to pivot to the selected Application Insights app *when you use the Azure portal*. You can use the solution as a high-level monitoring platform that helps you troubleshoot an app. When you see a potential problem in any of your connected applications, you can either drill into it in Log Analytics search or you can pivot directly to the Application Insights app.
To pivot, click the ellipses (**…**) that appears at the end of each line, and select **Open in Application Insights**.
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Remove-AzOperationalInsightsLinkedService -ResourceGroupName "resource-group-nam
You need to have *write* permissions on the cluster resource.
-When deleting a cluster, you're losing access to all data in cluster, ingested from workspaces that are linked to it, or were linked previously. This operation isn't reversible. If you delete your cluster while workspaces are linked, the workspaces get automatically unlinked from the cluster before the delete, and new data to workspaces gets ingested to Log Analytics. If workspace data retention is longer than the period it was linked to the cluster, you can query workspace for the time range before the link to cluster and after the unlink, and the service performs cross-cluster queries seamlessly.
+When deleting a cluster, you lose access to all data in the cluster that was ingested from workspaces linked to it. This operation isn't reversible. If you delete your cluster while workspaces are linked, the workspaces are automatically unlinked from the cluster before the cluster is deleted, and new data sent to the workspaces is ingested to the Log Analytics store instead. If the data retention in a workspace is longer than the period it was linked to the cluster, you can query the workspace for the time range before the link to the cluster and after the unlink, and the service performs cross-cluster queries seamlessly.
> [!NOTE]
-> - There is a limit of seven clusters per subscription, five active plus two if were deleted in past 14 days.
-> - Cluster's name remain reserved for 14 days after deletion, and can't be used for creating a new cluster. Deleted cluster's name is released and can be reused after 14 days.
+> - There is a limit of seven clusters per subscription: five active, plus two deleted in the past 14 days.
+> - A cluster's name remains reserved for 14 days after deletion and can't be used for creating a new cluster.
Use the following commands to delete a cluster:
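As an illustration only, deleting a cluster with the `Az.OperationalInsights` PowerShell module might look like the following sketch; the resource group and cluster names are placeholders.

```powershell
# Sketch: delete a Log Analytics dedicated cluster (irreversible, see the warning above).
Remove-AzOperationalInsightsCluster -ResourceGroupName "<resource-group>" -ClusterName "<cluster-name>"
```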
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-design.md
Logs and metrics uploaded to a workspace via [Diagnostic Settings](../essentials
Restricting access as explained above applies to data in the resource. However, configuration changes, including turning these access settings on or off, are managed by Azure Resource Manager. To control these settings, you should restrict access to resources using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](../roles-permissions-security.md) > [!NOTE]
-> Queries sent through the Azure Resource Management (ARM) API can't use Azure Monitor Private Links. These queries can only go through if the target resource allows queries from public networks (set through the Network Isolation blade, or [using the CLI](./private-link-configure.md#set-resource-access-flags)).
+> Queries sent through the Azure Resource Management (ARM) API can't use Azure Monitor Private Links. These queries can only go through if the target resource allows queries from public networks (set through the Network Isolation pane, or [using the CLI](./private-link-configure.md#set-resource-access-flags)).
> > The following experiences are known to run queries through the ARM API: > * LogicApp connector
Restricting access as explained above applies to data in the resource. However,
> * Change Tracking solution > * VM Insights > * Container Insights
-> * Log Analytics' Workspace Summary blade (showing the solutions dashboard)
+> * Log Analytics' Workspace Summary pane (showing the solutions dashboard)
## Application Insights considerations * You'll need to add resources hosting the monitored workloads to a private link. For example, see [Using Private Endpoints for Azure Web App](../../app-service/networking/private-endpoint.md).
We've identified the following products and experiences query workspaces through
> * LogicApp connector > * Update Management solution > * Change Tracking solution
-> * The Workspace Summary blade in the portal (showing the solutions dashboard)
+> * The Workspace Summary pane in the portal (showing the solutions dashboard)
> * VM Insights > * Container Insights
azure-monitor Query Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-packs.md
Each query in the query pack has the following properties.
| tags | Additional tags used by the user for sorting and filtering in Log Analytics. Each tag will be added to Category, Resource Type, and Solution when [grouping and filtering queries](queries.md#finding-and-filtering-queries). | ## Create a query pack
-You can create a query pack using the REST API or from the **Log Analytics query packs** blade in the Azure portal. Currently the **Log Analytics query packs** blade shows up under **Other** category of **All services** page in the Azure portal.
+You can create a query pack using the REST API or from the **Log Analytics query packs** pane in the Azure portal. Currently, the **Log Analytics query packs** pane shows up under the **Other** category on the **All services** page in the Azure portal.
### Create token You require a token for authentication of the API request. There are multiple methods to get a token including using **armclient**.
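If you're already signed in with Azure PowerShell, a token for the Azure Resource Manager endpoint can also be acquired without **armclient**. This is a hedged sketch using `Get-AzAccessToken` from the `Az.Accounts` module.

```powershell
# Sketch: get a bearer token for Azure Resource Manager to authenticate the REST request.
Connect-AzAccount
$token = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token
```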
azure-monitor Profiler Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-data.md
If your Azure service already has incoming traffic or if you just want to manual
1. From the Application Insights overview page for your Azure service, select **Performance** from the left menu. 1. On the **Performance** pane, select **Profiler** from the top menu for Profiler settings.
- :::image type="content" source="./media/profiler-overview/profiler-button-inline.png" alt-text="Screenshot of the Profiler button from the Performance blade." lightbox="media/profiler-settings/profiler-button.png":::
+ :::image type="content" source="./media/profiler-overview/profiler-button-inline.png" alt-text="Screenshot of the Profiler button from the Performance pane." lightbox="media/profiler-settings/profiler-button.png":::
1. Once the Profiler settings page loads, select **Profile Now**.
azure-monitor Profiler Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-settings.md
Once you've enabled the Application Insights Profiler, you can:
- Configure Profiler triggers - View recent profiling sessions
-To open the Azure Application Insights Profiler settings pane, select **Performance** from the left menu within your Application Insights page.
+To open the Azure Application Insights Profiler settings pane, select **Performance** from the pane on the left within your Application Insights page.
View profiler traces across your Azure resources via two methods:
View profiler traces across your Azure resources via two methods:
Select the **Profiler** button from the top menu. **By operation**
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na Previously updated : 10/25/2022 Last updated : 11/18/2022 # Create an SMB volume for Azure NetApp Files
Before creating an SMB volume, you need to create an Active Directory connection
> You should enable Continuous Availability only for Citrix App Layering, SQL Server, and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for workloads other than Citrix App Layering, SQL Server, and FSLogix user profile containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection).
+ **Custom applications are not supported with SMB Continuous Availability.**
+ <!-- [1/13/21] Commenting out command-based steps below, because the plan is to use form-based (URL) registration, similar to CRR feature registration --> <!-- ```azurepowershell-interactive
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory
This feature is used for installing SQL Server in certain scenarios where a non-administrator AD DS domain account must temporarily be granted elevated security privilege. >[!NOTE]
- > Using the Security privilege users feature requires that you submit a waitlist request through the Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page. Wait for an official confirmation email from the Azure NetApp Files team before using this feature.
    + > Using the Security privilege users feature requires that you submit a waitlist request through the Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page. Wait for an official confirmation email from the Azure NetApp Files team before using this feature. SMB Continuous Availability is **not** supported on custom applications. It is only supported for workloads using Citrix App Layering, [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md), and Microsoft SQL Server (not Linux SQL Server).
> [!IMPORTANT] > Using the **Security privilege users** feature requires that you submit a waitlist request through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using this feature.
azure-netapp-files Enable Continuous Availability Existing SMB https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md
na Previously updated : 05/18/2022 Last updated : 11/17/2022 # Enable Continuous Availability on existing SMB volumes
You can enable the SMB Continuous Availability (CA) feature when you [create a n
> > See the [**Enable Continuous Availability**](azure-netapp-files-create-volumes-smb.md#continuous-availability) option for additional details and considerations.
+>[!IMPORTANT]
+> Custom applications are not supported with SMB Continuous Availability.
+ ## Steps 1. Make sure that you have [registered the SMB Continuous Availability Shares](https://aka.ms/anfsmbcasharespreviewsignup) feature.
azure-netapp-files Faq Application Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-application-resilience.md
Previously updated : 10/27/2022 Last updated : 11/17/2022 # Application resilience FAQs for Azure NetApp Files
Yes, certain SMB-based applications require SMB Transparent Failover. SMB Transp
* Citrix App Layering * [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md) * Microsoft SQL Server (not Linux SQL Server)
+**Custom applications are not supported with SMB Continuous Availability.**
## I'm running IBM MQ on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despite using the NFS protocol?
azure-netapp-files Snapshots Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-introduction.md
na Previously updated : 01/31/2022 Last updated : 11/18/2022 # How Azure NetApp Files snapshots work
This section explains how online snapshots and vaulted snapshots are deleted.
### Deleting online snapshots
-Snapshots consume storage capacity. As such, they are not typically kept indefinitely. For data protection, retention, and recoverability, a number of snapshots (created at various points in time) are usually kept online for a certain duration depending on RPO, RTO, and retention SLA requirements. However, older snapshots often do not have to be kept on the storage service and might need to be deleted to free up space. Any snapshot can be deleted (not necessarily in order of creation) by an administrator at any time.
+Snapshots consume storage capacity. As such, they are not typically kept indefinitely. For data protection, retention, and recoverability, a number of snapshots (created at various points in time) are usually kept online for a certain duration depending on RPO, RTO, and retention SLA requirements. Snapshots can be deleted from the storage service by an administrator at any time. Any snapshot can be deleted regardless of the order in which it was created. Deleting older snapshots will free up space.
> [!IMPORTANT] > The snapshot deletion operation cannot be undone. You should retain offline copies (vaulted snapshots) of the volume for data protection and retention purposes.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
* [SMB Continuous Availability (CA) shares support for Citrix App Layering](enable-continuous-availability-existing-smb.md) (Preview)
- [Citrix App Layering](https://docs.citrix.com/en-us/citrix-app-layering/4.html) radically reduces the time it takes to manage Windows applications and images. App Layering separates the management of your OS and apps from your infrastructure. You can install each app and OS patch once, update the associated templates, and redeploy your images. You can publish layered images as open standard virtual disks, usable in any environment. App Layering can be used to provide dynamic access application layer virtual disks stored on SMB shared networked storage, including Azure NetApp Files. To enhance App Layering resiliency to events of storage service maintenance, Azure NetApp Files has extended support for [SMB Transparent Failover via SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for App Layering virtual disks. For more information, see [Azure NetApp Files Azure Virtual Desktop Infrastructure solutions | Citrix](azure-netapp-files-solution-architectures.md#citrix).
-
+ [Citrix App Layering](https://docs.citrix.com/en-us/citrix-app-layering/4.html) radically reduces the time it takes to manage Windows applications and images. App Layering separates the management of your OS and apps from your infrastructure. You can install each app and OS patch once, update the associated templates, and redeploy your images. You can publish layered images as open standard virtual disks, usable in any environment. App Layering can be used to provide dynamic access to application layer virtual disks stored on SMB shared networked storage, including Azure NetApp Files. To enhance App Layering resiliency to storage service maintenance events, Azure NetApp Files has extended support for [SMB Transparent Failover via SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for App Layering virtual disks. For more information, see [Azure NetApp Files Azure Virtual Desktop Infrastructure solutions | Citrix](azure-netapp-files-solution-architectures.md#citrix). Custom applications are not supported with SMB Continuous Availability.
## April 2022
azure-resource-manager Test Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/test-toolkit.md
Title: ARM template test toolkit description: Describes how to run the Azure Resource Manager template (ARM template) test toolkit on your template. The toolkit lets you see if you have implemented recommended practices. Previously updated : 07/25/2022 Last updated : 11/16/2022
To publish an offering to Azure Marketplace, use the test toolkit to validate th
After installing the toolkit and importing the module, run the following cmdlet to test your package: ```powershell
-Test-AzMarketplaceTemplate "Path to the unzipped package folder"
+Test-AzMarketplacePackage -TemplatePath "Path to the unzipped package folder"
``` ### Interpret the results
azure-video-analyzer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/troubleshoot.md
If you've run all the preceding checks and are still encountering issues, gather
Video Analyzer edge module works collaboratively with the IoT Edge agent and hub modules. Some of the common errors that you'll encounter with its deployment are caused by issues with the underlying IoT infrastructure. The errors include: -- [The IoT Edge agent stops after about a minute](../../../iot-edge/troubleshoot-common-errors.md#iot-edge-agent-stops-after-about-a-minute).
+- [IoT Edge agent stops after a minute](../../../iot-edge/troubleshoot-common-errors.md#iot-edge-agent-stops-after-a-minute).
- [The IoT Edge agent can't access a module's image (403)](../../../iot-edge/troubleshoot-common-errors.md#iot-edge-agent-cant-access-a-modules-image-403).-- [The IoT Edge agent module reports "empty config file" and no modules start on the device](../../../iot-edge/troubleshoot-common-errors.md#edge-agent-module-reports-empty-config-file-and-no-modules-start-on-the-device).
+- [Edge Agent module reports 'empty config file' and no modules start on the device](../../../iot-edge/troubleshoot-common-errors.md#edge-agent-module-reports-empty-config-file-and-no-modules-start-on-the-device).
- [The IoT Edge hub fails to start](../../../iot-edge/troubleshoot-common-errors.md#iot-edge-hub-fails-to-start). - [The IoT Edge security daemon fails with an invalid hostname](../../../iot-edge/troubleshoot-common-errors.md#iot-edge-security-daemon-fails-with-an-invalid-hostname). - [The Video Analyzer or any other custom IoT Edge module fails to send a message to the edge hub with 404 error](../../../iot-edge/troubleshoot-common-errors.md#iot-edge-module-fails-to-send-a-message-to-edgehub-with-404-error).
Using pipeline extension processors you can extend the pipeline to send and rece
> [!TIP] > Use **[Docker inspect command](https://docs.docker.com/engine/reference/commandline/inspect/)** to find the IP address of the machine. -- If you're running one or more live pipelines that uses the pipeline extension processor, you should use the `samplingOptions` field to manage the frames per second (fps) rate of the video feed.
+- If you're running one or more live pipelines that use the pipeline extension processor, you should use the `samplingOptions` field to manage the frames per second (fps) rate of the video feed.
- In certain situations, where the CPU or memory of the edge machine is highly utilized, you can lose certain inference events. To address this issue, set a low value for the `maximumSamplesPerSecond` property on the `samplingOptions` field. You can set it to 0.5 ("maximumSamplesPerSecond": "0.5") on each instance of the pipeline and then re-run the instance to check for inference events on the hub.
To gather the relevant logs that should be added to the ticket, follow the instr
1. [Turn on Debug Logs](#video-analyzer-edge-module-debug-logs) 1. Reproduce the issue 1. Restart the Video Analyzer edge module.
- > [!NOTE]
- > This step is required to gracefully terminate the edge module and get all log files in a usable format without dropping any events.
-
- On the IoT Edge device, use the following command after replacing `<avaedge>` with the name of your Video Analyzer edge module:
-
- ```cmd
- sudo iotedge restart <avaedge>
- ```
+ > [!NOTE]
+ > This step is required to gracefully terminate the edge module and get all log files in a usable format without dropping any events.
+
+ On the IoT Edge device, use the following command after replacing `<avaedge>` with the name of your Video Analyzer edge module:
+
+ ```cmd
+ sudo iotedge restart <avaedge>
+ ```
You can also restart modules remotely from the Azure portal. For more information, see [Monitor and troubleshoot IoT Edge devices from the Azure portal](../../../iot-edge/troubleshoot-in-portal.md). 1. Connect to the virtual machine from the **IoT Hub** page in the portal
To configure the Video Analyzer edge module to generate debug logs, do the follo
[Monitoring and logging](monitor-log-edge.md) should help in understanding the taxonomy and how to generate logs that will help in debugging issues with Video Analyzer.
-As gRPC server implementation differ across languages, there is no standard way of adding logging inside in the server.
+As gRPC server implementations differ across languages, there is no standard way of adding logging inside the server.
As an example, if you build a gRPC server using .NET core, gRPC service adds logs under the **Grpc** category. To enable detailed logs from gRPC, configure the Grpc prefixes to the Debug level in your appsettings.json file by adding the following items to the LogLevel sub-section in Logging:
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
Last updated 11/16/2022
[!INCLUDE [accounts](./includes/arm-accounts.md)]
-Azure Video Indexer (AVI) [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) supports both server-to-server and client-to-server communication. The API enables you to integrate video and audio insights into your application logic. To make the integration easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://preview.flow.microsoft.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with the Azure Video Indexer API.
+Azure Video Indexer (AVI) [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) supports both server-to-server and client-to-server communication. The API enables you to integrate video and audio insights into your application logic.
+
+> [!TIP]
+> For the latest `api-version`, choose the latest stable version in [our REST documentation](/rest/api/videoindexer/stable/generate).
+
+To make the integration easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://preview.flow.microsoft.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with the Azure Video Indexer API.
You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for the integration gives you better visibility on the health of your workflow and an easy way to debug it.
The logic apps that you create in this article, contain one flow per app. The se
Create two containers: one to store the media files, second to store the insights generated by Azure Video Indexer. In this article, the containers are `videos` and `insights`.
-## Set up the first flow - file upload
+## Set up the file upload flow (the first flow)
This section describes how to set up the first ("file upload") flow. The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure Video Indexer with a callback URL to send a notification once the indexing operation completes.
The following image shows the first flow:
1. Select **Consumption** for **Plan type**. 1. Press **Review + Create** -> **Create**.
- 1. Once the Logic App deployment is complete, in the Azure portal, go to the newly created Logic App.
+ 1. Once the Logic App deployment is complete, in the Azure portal, search and navigate to the newly created Logic App.
   1. Under the **Settings** section, in the panel on the left side, select the **Identity** tab.
- 1. Under **System assigned**, change the **Status** from **Off** to **On** (the step is important for later on in this tutorial).
- 1. Press **Save** (on the top of the page).
- 1. Select the **Logic app designer** tab, in the pane on the left.
- 1. Pick a **Blank Logic App** flow.
- 1. Search for "blob".
- 1. In the **All** tab, choose the **Azure Blob Storage** component.
- 1. Under **Triggers**, select the **When a blob is added or modified (properties only) (V2)** trigger.
+ 3. Under **System assigned**, change the **Status** from **Off** to **On** (the step is important for later on in this tutorial).
+ 4. Press **Save** (on the top of the page).
+ 5. Select the **Logic app designer** tab, in the pane on the left.
+ 6. Pick a **Blank Logic App** flow.
+ 7. Search for "blob" in the **Choose an Operation** blade.
+ 8. In the **All** tab, choose the **Azure Blob Storage** component.
+ 9. Under **Triggers**, select the **When a blob is added or modified (properties only) (V2)** trigger.
1. Set the storage connection. After creating a **When a blob is added or modified (properties only) (V2)** trigger, the connection needs to be set to the following values:
The following image shows the first flow:
> :::image type="content" source="./media/logic-apps-connector-arm-accounts/create-sas.png" alt-text="Screenshot of the create SAS URI by path logic." lightbox="./media/logic-apps-connector-arm-accounts/create-sas.png"::: Select **+New Step**.
-1. Generate an access token.
+1. <a name="access_token"></a>Generate an access token.
> [!NOTE] > For details about the ARM API and the request/response examples, see [Generate an Azure Video Indexer access token](/rest/api/videoindexer/generate/access-token?tabs=HTTP).
The following image shows the first flow:
Search and create an **HTTP** action.
- |Key| Value|
- |-|-|
- |Method | **POST**|
- | URI| `https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.VideoIndexer/accounts/{accountName}/generateAccessToken?api-version={API-version}`. |
- | Body|`{ "permissionType": "Contributor", "scope": "Account" }` |
- | Add new parameter | **Authentication** |
+ |Key| Value|Notes|
+ |-|-||
+ |Method | **POST**||
+ | URI| [generateAccessToken](/rest/api/videoindexer/stable/generate/access-token?tabs=HTTP#generate-accesstoken-for-account-contributor). ||
+ | Body|`{ "permissionType": "Contributor", "scope": "Account" }` |See the [REST doc example](/rest/api/videoindexer/preview/generate/access-token?tabs=HTTP#generate-accesstoken-for-account-contributor), make sure to delete the **POST** line.|
+ | Add new parameter | **Authentication** ||
![Screenshot of the HTTP access token.](./media/logic-apps-connector-arm-accounts/http-with-param.png)
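To sanity-check these values outside of Logic Apps, you can call the same ARM endpoint directly. The following PowerShell sketch is illustrative only: the subscription, resource group, account name, and `api-version` values are placeholders (check the REST documentation for the current stable `api-version`), and it authenticates with your own signed-in identity rather than the Logic App's managed identity.

```powershell
# Sketch: call the generateAccessToken ARM endpoint for an Azure Video Indexer account.
$armToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token
$uri = "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
       "/providers/Microsoft.VideoIndexer/accounts/<account-name>/generateAccessToken?api-version=<api-version>"
$body = '{ "permissionType": "Contributor", "scope": "Account" }'
Invoke-RestMethod -Method Post -Uri $uri -Headers @{ Authorization = "Bearer $armToken" } -Body $body -ContentType "application/json"
```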
The following image shows the first flow:
> Before moving to the next step, set up the right permission between the Logic app and the Azure Video Indexer account. > > Make sure you have followed the steps to enable the system-assigned managed identity of your Logic Apps.-
- ![Screenshot of the how to enable the system assigned managed identity.](./media/logic-apps-connector-arm-accounts/enable-system.png)
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/logic-apps-connector-arm-accounts/enable-system.png" alt-text="Screenshot that shows how to enable the system-assigned managed identity." lightbox="./media/logic-apps-connector-arm-accounts/enable-system.png":::
1. Set up system assigned managed identity for permission on Azure Video Indexer resource. In the Azure portal, go to your Azure Video Indexer resource/account.
The following image shows the first flow:
The completion of the uploading and indexing from the first flow will send an HTTP request with the correct callback URL to trigger the second flow. Then, it will retrieve the insights generated by Azure Video Indexer. In this example, it will store the output of your indexing job in your Azure Storage. However, it's up to you what you do with the output.
-## Create a new logic app of type consumption
+## Create a new logic app of type consumption (the second flow)
Create the second flow, Logic Apps of type consumption. The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage.
Create the second flow, Logic Apps of type consumption. The second flow is t
Follow all the steps from:
- 1. **Generate an access token** we did for the first flow.
+ 1. **Generate an access token** we did for the first flow ([as shown here](#access_token)).
1. Select **Save** -> **+ New step**. 1. Get Video Indexer insights.
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-identity.md
Title: Concepts - Identity and access
description: Learn about the identity and access concepts of Azure VMware Solution Previously updated : 07/21/2022- Last updated : 11/18/2022+ # Azure VMware Solution identity concepts
The CloudAdmin role in Azure VMware Solution has the following privileges on vCe
### Create custom roles on vCenter Server
-Azure VMware Solution supports the use of custom roles with equal or lesser privileges than the CloudAdmin role. You'll use the CloudAdmin role to create, modify, or delete custom roles with privileges lesser than or equal to their current role.
+Azure VMware Solution supports the use of custom roles with equal or lesser privileges than the CloudAdmin role. You'll use the CloudAdmin role to create, modify, or delete custom roles with privileges less than or equal to their current role.
- >[!NOTE]
+ >[!NOTE]
>You can create roles with privileges greater than CloudAdmin. However, you can't assign the role to any users or groups or delete the role. Roles that have privileges greater than that of CloudAdmin are unsupported. To prevent creating roles that can't be assigned or deleted, clone the CloudAdmin role as the basis for creating new custom roles.
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi
1. Check the **Propagate to children** if needed, and select **OK**. The added permission displays in the **Permissions** section.
+## VMware NSX-T Data Center NSX-T Manager access and identity
-## NSX-T Manager access and identity
-
-When a private cloud is provisioned using Azure portal, software-defined data center (SDDC) management components like vCenter Server and NSX-T Manager are provisioned for customers.
+When a private cloud is provisioned using Azure portal, software-defined data center (SDDC) management components like vCenter Server and VMware NSX-T Data Center NSX-T Manager are provisioned for customers.
-Microsoft is responsible for the lifecycle management of NSX-T appliances like NSX-T Managers and NSX-T Data Center Edges. They're responsible for bootstrapping network configuration, like creating the Tier-0 gateway.
+Microsoft is responsible for the lifecycle management of NSX-T appliances like the VMware NSX-T Data Center NSX-T Manager and VMware NSX-T Data Center Edge appliances. Microsoft is also responsible for bootstrapping network configuration, like creating the Tier-0 gateway.
-You're responsible for NSX-T Data Center software-defined networking (SDN) configuration, for example:
+You're responsible for VMware NSX-T Data Center software-defined networking (SDN) configuration, for example:
- Network segments - Other Tier-1 gateways
You're responsible for NSX-T Data Center software-defined networking (SDN) confi
- Stateful services like gateway firewall - Load balancer on Tier-1 gateways
-You can access NSX-T Manager using the built-in local user "cloudadmin" assigned to a custom role that gives limited privileges to a user to manage NSX-T Data Center. While Microsoft manages the lifecycle of NSX-T Data Center, certain operations aren't allowed by a user. Operations not allowed include editing the configuration of host and edge transport nodes or starting an upgrade. For new users, Azure VMware Solution deploys them with a specific set of permissions needed by that user. The purpose is to provide a clear separation of control between the Azure VMware Solution control plane configuration and Azure VMware Solution private cloud user.
+You can access VMware NSX-T Data Center NSX-T Manager using the built-in local user "cloudadmin" assigned to a custom role that gives limited privileges to a user to manage VMware NSX-T Data Center. While Microsoft manages the lifecycle of VMware NSX-T Data Center, certain operations aren't allowed by a user. Operations not allowed include editing the configuration of host and edge transport nodes or starting an upgrade. For new users, Azure VMware Solution deploys them with a specific set of permissions needed by that user. The purpose is to provide a clear separation of control between the Azure VMware Solution control plane configuration and Azure VMware Solution private cloud user.
-For new private cloud deployments, NSX-T Data Center access will be provided with a built-in local user cloudadmin assigned to the **cloudadmin** role with a specific set of permissions to use NSX-T Data Center functionality for workloads.
+For new private cloud deployments, VMware NSX-T Data Center access will be provided with a built-in local user cloudadmin assigned to the **cloudadmin** role with a specific set of permissions to use VMware NSX-T Data Center functionality for workloads.
-### NSX-T Data Center cloudadmin user permissions
+### VMware NSX-T Data Center cloudadmin user permissions
The following permissions are assigned to the **cloudadmin** user in Azure VMware Solution NSX-T Data Center. > [!NOTE]
-> **NSX-T Data Center cloudadmin user** on Azure VMware Solution is not the same as the **cloudadmin user** mentioned in the VMware product documentation.
+> **VMware NSX-T Data Center cloudadmin user** on Azure VMware Solution is not the same as the **cloudadmin user** mentioned in the VMware product documentation.
| Category | Type | Operation | Permission | |--|--|-||
The following permissions are assigned to the **cloudadmin** user in Azure VMwar
| System | Configuration<br>Settings<br>Settings<br>Settings | Identity firewall<br>Users and Roles<br>Certificate Management (Service Certificate only)<br>User Interface Settings | Full Access<br>Full Access<br>Full Access<br>Full Access | | System | All other | | Read-only |
-You can view the permissions granted to the Azure VMware Solution cloudadmin role on your Azure VMware Solution private cloud NSX-T Data Center.
+You can view the permissions granted to the Azure VMware Solution cloudadmin role on your Azure VMware Solution private cloud VMware NSX-T Data Center.
1. Log in to the NSX-T Manager. 1. Navigate to **Systems** and locate **Users and Roles**.
You can view the permissions granted to the Azure VMware Solution cloudadmin rol
> [!NOTE] > **Private clouds created before June 2022** will switch from **admin** role to **cloudadmin** role. You'll receive a notification through Azure Service Health that includes the timeline of this change so you can change the NSX-T Data Center credentials you've used for other integration.
-## NSX-T Data Center LDAP integration for role based access control (RBAC)
+## NSX-T Data Center LDAP integration for role-based access control (RBAC)
-In an Azure VMware Solution deployment, the NSX-T Data Center can be integrated with external LDAP directory service to add remote directory users or group, and assign them an NSX-T Data Center RBAC role, like on-premises deployment. For more information on how to enable NSX-T Data Center LDAP integration, see the [VMware product documentation](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-DB5A44F1-6E1D-4E5C-8B50-D6161FFA5BD2.html).
+In an Azure VMware Solution deployment, VMware NSX-T Data Center can be integrated with an external LDAP directory service to add remote directory users or groups and assign them a VMware NSX-T Data Center RBAC role, as in an on-premises deployment. For more information on how to enable VMware NSX-T Data Center LDAP integration, see the [VMware product documentation](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-DB5A44F1-6E1D-4E5C-8B50-D6161FFA5BD2.html).
-Unlike on-premises deployment, not all pre-defined NSX-T Data Center RBAC roles are supported with Azure VMware solution to keep Azure VMware Solution IaaS control plane config management separate from tenant network and security configuration. Please see the next section, Supported NSX-T Data Center RBAC roles, for more details.
+Unlike on-premises deployments, not all pre-defined NSX-T Data Center RBAC roles are supported with Azure VMware Solution to keep Azure VMware Solution IaaS control plane config management separate from tenant network and security configuration. See the next section, Supported NSX-T Data Center RBAC roles, for more details.
> [!NOTE]
-> NSX-T LDAP Integration supported only with SDDC's with NSX-T Data Center "cloudadmin" user.
+> VMware NSX-T Data Center LDAP Integration is supported only with SDDCs with the VMware NSX-T Data Center "cloudadmin" user.
### Supported and unsupported NSX-T Data Center RBAC roles
- In an Azure VMware Solution deployment, the following NSX-T Data Center predefined RBAC roles are supported with LDAP integration:
+ In an Azure VMware Solution deployment, the following VMware NSX-T Data Center predefined RBAC roles are supported with LDAP integration:
- Auditor - Cloudadmin
Unlike on-premises deployment, not all pre-defined NSX-T Data Center RBAC roles
- VPN Admin - Network Operator
- In an Azure VMware Solution deployment, the following NSX-T Data Center predefined RBAC roles are not supported with LDAP integration:
+ In an Azure VMware Solution deployment, the following VMware NSX-T Data Center predefined RBAC roles aren't supported with LDAP integration:
- Enterprise Admin - Network Admin
You can create custom roles in NSX-T Data Center with permissions lesser than or
4. **Apply** the changes and **Save** the Role. > [!NOTE]
-> The NSX-T Data Center **System** > **Identity Firewall AD** configuration option isn't supported by the NSX custom role. The recommendation is to assign the **Security Operator** role to the user with the custom role to allow managing the Identity Firewall (IDFW) feature for that user.
+> The VMware NSX-T Data Center **System** > **Identity Firewall AD** configuration option isn't supported by the NSX custom role. The recommendation is to assign the **Security Operator** role to the user with the custom role to allow managing the Identity Firewall (IDFW) feature for that user.
+
+> [!NOTE]
+> The VMware NSX-T Data Center Traceflow feature isn't supported by the VMware NSX-T Data Center custom role. The recommendation is to assign the **Auditor** role to the user along with the above custom role to enable the Traceflow feature for that user.
> [!NOTE]
-> The NSX-T Data Center Traceflow feature isn't supported by NSX-T Data Center custom role. The recommendation is to assign the **Auditor** role to the user along with above custom role to enable Traceflow feature for that user.
+> VMware vRealize Automation (vRA) integration with the NSX-T Data Center component of the Azure VMware Solution requires the "auditor" role to be added to the user with the NSX-T Manager cloudadmin role.
## Next steps
Now that you've covered Azure VMware Solution access and identity concepts, you
- [How Azure VMware Solution monitors and repairs private clouds](./concepts-private-clouds-clusters.md#host-monitoring-and-remediation) -- <!-- LINKS - external--> [VMware product documentation]: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html
azure-vmware Vrealize Operations For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
Title: Configure vRealize Operations for Azure VMware Solution
description: Learn how to set up vRealize Operations for your Azure VMware Solution private cloud. Previously updated : 10/18/2022 Last updated : 11/18/2022 # Configure vRealize Operations for Azure VMware Solution
The warning occurs because the **cloudadmin@vsphere.local** user in Azure VMware
For more information, see [Privileges Required for Configuring a vCenter Server Adapter Instance](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.core.doc/GUID-3BFFC92A-9902-4CF2-945E-EA453733B426.html).
+> [!NOTE]
+> VMware vRealize Automation (vRA) integration with the NSX-T Data Center component of the Azure VMware Solution requires the "auditor" role to be added to the user with the NSX-T Manager cloudadmin role.
+ <!-- LINKS - external --> <!-- LINKS - internal -->
batch Batch Pool No Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-no-public-ip-address.md
Title: Create an Azure Batch pool without public IP addresses (preview) description: Learn how to create an Azure Batch pool without public IP addresses. Previously updated : 01/11/2022 Last updated : 11/18/2022 # Create a Batch pool without public IP addresses (preview)
+> [!WARNING]
+> This preview version will be retired on **31 March 2023**, and will be replaced by
+> [Simplified node communication pool without public IP addresses](simplified-node-communication-pool-no-public-ip.md).
+> For more information, see the [Retirement Migration Guide](batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md).
+ > [!IMPORTANT] > - Support for pools without public IP addresses in Azure Batch is currently in public preview for the following regions: France Central, East Asia, West Central US, South Central US, West US 2, East US, North Europe, East US 2, Central US, West Europe, North Central US, West US, Australia East, Japan East, Japan West.
-> - This preview version will be retired on **31 March 2023**, and will be replaced by [Simplified node communication pool without public IP addresses](simplified-node-communication-pool-no-public-ip.md). For more details, please refer to [Retirement Migration Guide](batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md).
> - This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > - For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
To restrict access to these nodes and reduce the discoverability of these nodes
- **Authentication**. To use a pool without public IP addresses inside a [virtual network](./batch-virtual-network.md), the Batch client API must use Azure Active Directory (AD) authentication. Azure Batch support for Azure AD is documented in [Authenticate Batch service solutions with Active Directory](batch-aad-auth.md). If you aren't creating your pool within a virtual network, either Azure AD authentication or key-based authentication can be used. -- **An Azure VNet**. If you are creating your pool in a [virtual network](batch-virtual-network.md), follow these requirements and configurations. To prepare a VNet with one or more subnets in advance, you can use the Azure portal, Azure PowerShell, the Azure CLI, or other methods.
+- **An Azure VNet**. If you're creating your pool in a [virtual network](batch-virtual-network.md), follow these requirements and configurations. To prepare a VNet with one or more subnets in advance, you can use the Azure portal, Azure PowerShell, the Azure CLI, or other methods.
- The VNet must be in the same subscription and region as the Batch account you use to create your pool. - The subnet specified for the pool must have enough unassigned IP addresses to accommodate the number of VMs targeted for the pool; that is, the sum of the `targetDedicatedNodes` and `targetLowPriorityNodes` properties of the pool. If the subnet doesn't have enough unassigned IP addresses, the pool partially allocates the compute nodes, and a resize error occurs.
- - You must disable private link service and endpoint network policies. This can be done by using Azure CLI:
+ - You must disable private link service and endpoint network policies. This action can be done by using Azure CLI:
`az network vnet subnet update --vnet-name <vnetname> -n <subnetname> --resource-group <resourcegroup> --disable-private-endpoint-network-policies --disable-private-link-service-network-policies`
To restrict access to these nodes and reduce the discoverability of these nodes
1. In the **Pools** window, select **Add**. 1. On the **Add Pool** window, select the option you intend to use from the **Image Type** dropdown. 1. Select the correct **Publisher/Offer/Sku** of your image.
-1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**, as well as any desired optional settings.
-1. Optionally select a virtual network and subnet you wish to use. This virtual network must be in the same resource group as the pool you are creating.
+1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**, and any desired optional settings.
+1. Optionally select a virtual network and subnet you wish to use. This virtual network must be in the same resource group as the pool you're creating.
1. In **IP address provisioning type**, select **NoPublicIPAddresses**. ![Screenshot of the Add pool screen with NoPublicIPAddresses selected.](./media/batch-pool-no-public-ip-address/create-pool-without-public-ip-address.png)
client-request-id: 00000000-0000-0000-0000-000000000000
## Outbound access to the internet
-In a pool without public IP addresses, your virtual machines won't be able to access the public internet unless you configure your network setup appropriately, such as by using [virtual network NAT](../virtual-network/nat-gateway/nat-overview.md). Note that NAT only allows outbound access to the internet from the virtual machines in the virtual network. Batch-created compute nodes won't be publicly accessible, since they don't have public IP addresses associated.
+In a pool without public IP addresses, your virtual machines won't be able to access the public internet unless you configure your network setup appropriately, such as by using [virtual network NAT](../virtual-network/nat-gateway/nat-overview.md). NAT only allows outbound access to the internet from the virtual machines in the virtual network. Batch-created compute nodes won't be publicly accessible, since they don't have public IP addresses associated.
-Another way to provide outbound connectivity is to use a user-defined route (UDR). This lets you route traffic to a proxy machine that has public internet access.
+Another way to provide outbound connectivity is to use a user-defined route (UDR). This method lets you route traffic to a proxy machine that has public internet access.
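The following is a minimal Azure CLI sketch of how such a UDR might be set up; the route table name, proxy IP address, and other names in angle brackets are illustrative placeholders, and the proxy appliance itself must already exist and be configured to forward traffic.

```azurecli
# Create a route table and a default route that sends internet-bound traffic
# to a proxy appliance (placeholder private IP).
az network route-table create --resource-group <resourcegroup> --name <routetablename>

az network route-table route create --resource-group <resourcegroup> \
    --route-table-name <routetablename> --name default-to-proxy \
    --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance \
    --next-hop-ip-address <proxy-private-ip>

# Associate the route table with the subnet used by the Batch pool.
az network vnet subnet update --resource-group <resourcegroup> \
    --vnet-name <vnetname> --name <subnetname> --route-table <routetablename>
```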
## Next steps
batch Simplified Compute Node Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-compute-node-communication.md
Title: Use simplified compute node communication description: Learn about the simplified compute node communication mode in the Azure Batch service and how to enable it. Previously updated : 11/02/2022 Last updated : 11/17/2022
Simplified compute node communication in Azure Batch is currently available for
- Government: USGov Arizona, USGov Virginia, USGov Texas. -- China: China North 3.
+- China: all China regions where Batch is present except for China North 1 and China East 1.
## Compute node communication differences between Classic and Simplified
batch Simplified Node Communication Pool No Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-node-communication-pool-no-public-ip.md
Title: Create a simplified node communication pool without public IP addresses (preview) description: Learn how to create an Azure Batch simplified node communication pool without public IP addresses. Previously updated : 11/14/2022 Last updated : 11/18/2022
az network vnet subnet update \
--disable-private-endpoint-network-policies ``` -- Enable outbound access for Batch node management. A pool with no public IP addresses doesn't have internet outbound access enabled by default. To allow compute nodes to access the Batch node management service (see [Use simplified compute node communication](simplified-compute-node-communication.md)) either:
+- Enable outbound access for Batch node management. A pool with no public IP addresses doesn't have internet outbound access enabled by default. Choose one of the following options to allow compute nodes to access the Batch node management service (see [Use simplified compute node communication](simplified-compute-node-communication.md)):
- Use [**nodeManagement**](private-connectivity.md) private endpoint with Batch accounts, which provides private access to Batch node management service from the virtual network. This solution is the preferred method.
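As a rough illustration of the preferred option above, a nodeManagement private endpoint might be created with the Azure CLI along the following lines. This is a sketch only: the resource names in angle brackets are placeholders, and the authoritative steps are in the linked private connectivity article.

```azurecli
# Create a private endpoint for the Batch account's node management sub-resource
# in the subnet used by the pool (placeholder names throughout).
az network private-endpoint create --resource-group <resourcegroup> \
    --name <private-endpoint-name> --vnet-name <vnetname> --subnet <subnetname> \
    --private-connection-resource-id <batch-account-resource-id> \
    --group-id nodeManagement --connection-name <connection-name>
```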
az network vnet subnet update \
1. In the **Pools** window, select **Add**. 1. On the **Add Pool** window, select the option you intend to use from the **Image Type** dropdown. 1. Select the correct **Publisher/Offer/Sku** of your image.
-1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**, and any desired optional settings.
+1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**.
+1. For **Node communication mode**, select **simplified** under Optional Settings.
1. Select a virtual network and subnet you wish to use. This virtual network must be in the same location as the pool you're creating. 1. In **IP address provisioning type**, select **NoPublicIPAddresses**.
-![Screenshot of the Add pool screen with NoPublicIPAddresses selected.](./media/batch-pool-no-public-ip-address/create-pool-without-public-ip-address.png)
+The following screenshot shows the elements that must be modified to enable a pool without public
+IP addresses, as described above.
+
+![Screenshot of the Add pool screen with NoPublicIPAddresses selected.](./media/simplified-compute-node-communication/add-pool-simplified-mode-no-public-ip.png)
## Use the Batch REST API to create a pool without public IP addresses
center-sap-solutions Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md
You can use the following method to download and upload the SAP components to yo
You also can [run scripts to automate this process](#option-1-upload-software-components-with-script) instead. 1. Create a new Azure storage account for storing the software components.
-1. Grant the Azure Center for SAP solutions application *Azure SAP Workloads Management* **Storage Blob Data Reader** and **Reader and Data Access** role access to this storage account.
+1. Grant the **Storage Blob Data Reader** and **Reader and Data Access** roles to the user-assigned managed identity that you used during infrastructure deployment.
1. Create a container within the storage account. You can choose any container name; for example, **sapbits**. 1. Create two folders within the container, named **deployervmpackages** and **sapfiles**. > [!WARNING]
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
VNet injection allows Chaos resource provider to inject containerized workloads
5. Start the experiment. ## Limitations
-* At present the VNet injection will only be possible in subscriptions/regions where Azure Container Instances and Azure Relay are available.
+* At present, VNet injection is only possible in subscriptions/regions where Azure Container Instances and Azure Relay are available. These resources are deployed to the target regions.
* When you create a Target resource that you would like to enable with VNet injection, you will need Microsoft.Network/virtualNetworks/subnets/write access to the virtual network. For example, if the AKS cluster is deployed to VNet_A, then you must have permissions to create subnets in VNet_A in order to enable VNet injection for the AKS cluster. You will have to specify a subnet (in VNet_A) that the container will be deployed to. Request body when creating a Target resource with VNet injection enabled:
cloud-services Diagnostics Extension To Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/diagnostics-extension-to-storage.md
Title: Store and View Diagnostic Data in Azure Storage description: Learn how to collect Azure diagnostics data in an Azure Storage account so you can view it with one of several available tools.- -++ Last updated 08/01/2016-- # Store and view diagnostic data in Azure Storage
cognitive-services Developer Reference Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/developer-reference-resource.md
The [app schema](app-schema-definition.md) is imported and exported in a `.json`
|Language |Reference documentation|Package|Quickstarts| |--|--|--|--| |C#|[Authoring](/dotnet/api/microsoft.azure.cognitiveservices.language.luis.authoring)</br>[Prediction](/dotnet/api/microsoft.azure.cognitiveservices.language.luis.runtime)|[NuGet authoring](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring/)<br>[NuGet prediction](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Language.LUIS.Runtime/)|[Authoring](./client-libraries-rest-api.md?pivots=rest-api)<br>[Query prediction](./client-libraries-rest-api.md?pivots=rest-api)|
-|Go|[Authoring and prediction](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v2.0/luis)|[SDK](https://github.com/Azure/azure-sdk-for-go/tree/master/services/cognitiveservices/v2.0/luis)||
+|Go|[Authoring and prediction](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v2.0/luis)|[SDK](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/LUIS)||
|Java|[Authoring and prediction](/java/api/overview/azure/cognitiveservices/client/languageunderstanding)|[Maven authoring](https://search.maven.org/artifact/com.microsoft.azure.cognitiveservices/azure-cognitiveservices-luis-authoring)<br>[Maven prediction](https://search.maven.org/artifact/com.microsoft.azure.cognitiveservices/azure-cognitiveservices-luis-runtime)| |JavaScript|[Authoring](/javascript/api/@azure/cognitiveservices-luis-authoring/)<br>[Prediction](/javascript/api/@azure/cognitiveservices-luis-runtime/)|[NPM authoring](https://www.npmjs.com/package/@azure/cognitiveservices-luis-authoring)<br>[NPM prediction](https://www.npmjs.com/package/@azure/cognitiveservices-luis-runtime)|[Authoring](./client-libraries-rest-api.md?pivots=rest-api)<br>[Prediction](./client-libraries-rest-api.md?pivots=rest-api)| |Python|[Authoring and prediction](./client-libraries-rest-api.md?pivots=rest-api)|[Pip](https://pypi.org/project/azure-cognitiveservices-language-luis/)|[Authoring](./client-libraries-rest-api.md?pivots=rest-api)<br>[Prediction](./client-libraries-rest-api.md?pivots=rest-api)|
cognitive-services Keyword Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/keyword-recognition-overview.md
With the [Custom Keyword portal on Speech Studio](https://speech.microsoft.com/c
### Pricing
-There's no cost to use custom keyword to generate models, including both Basic and Advanced models. There's also no cost to run models on-device with the Speech SDK.
+There's no cost to use custom keyword to generate models, including both Basic and Advanced models. There's also no cost to run models on-device with the Speech SDK when used in conjunction with other Speech service features such as speech-to-text.
### Types of models
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
These actions are performed before the destination endpoint listed in the Incomi
**Redirect** – Using the IncomingCall event from Event Grid, a call can be redirected to one or more endpoints creating a single or simultaneous ringing (sim-ring) scenario. This means the call isn't answered by your application; it's simply 'redirected' to another destination endpoint to be answered.
-**Make Call** - Make Call action can be used to place outbound calls to phone numbers and to other communication users. Use cases include your application placing outbound calls to proactively inform users about an outage or notify about an order update.
+**Create Call** - Create Call action can be used to place outbound calls to phone numbers and to other communication users. Use cases include your application placing outbound calls to proactively inform users about an outage or notify about an order update.
### Mid-call actions
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
To help you troubleshoot certain types of issues, you may be asked for any of th
* **Call ID**: This ID is used to identify Communication Services calls. * **SMS message ID**: This ID is used to identify SMS messages. * **Short Code Program Brief ID**: This ID is used to identify a short code program brief application.
+* **Email message ID**: This ID is used to identify Send Email requests.
* **Correlation ID**: This ID is used to identify requests made using Call Automation. * **Call logs**: These logs contain detailed information that can be used to troubleshoot calling and network issues.
console.log(result); // your message ID will be in the result
The program brief ID can be found on the [Azure portal](https://portal.azure.com) in the Short Codes blade. :::image type="content" source="./media/short-code-trouble-shooting.png" alt-text="Screenshot showing a short code program brief ID.":::+++
+## Access your email message ID
+When troubleshooting send email or email message status requests, you may be asked to provide a `message ID`. This ID can be accessed in the response:
+
+# [.NET](#tab/dotnet)
+```csharp
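+// Log the message ID returned in the response; you may be asked to provide it when troubleshooting.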
+Console.WriteLine($"MessageId = {emailResult.MessageId}");
+```
## Enable and access call logs
communication-services Pre Call Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/pre-call-diagnostics.md
When the Pre-Call diagnostic test runs, behind the scenes it uses calling minute
## Next steps
-This feature is currently in private preview. Please provide feedback on the API design, capabilities and pricing. Feedback is key for the team to move forward and push the feature into public preview and general availability.
+- [Check your network condition with the diagnostics tool](../developer-tools/network-diagnostic.md)
+- [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md)
+- [Enable Media Quality Statistics in your application](../voice-video-calling/media-quality-sdk.md)
+- [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
communication-services Actions For Call Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md
-zone_pivot_groups: acs-csharp-java
# How to control and steer calls with Call Automation
Response<TransferCallResult> transferResponse = callConnectionAsync.transferToPa
``` -- The below sequence diagram shows the expected flow when your application places an outbound 1:1 call and then transfers it to another endpoint. + ![Sequence diagram for placing a 1:1 call and then transferring it.](media/transfer-flow.png) ## Add a participant to a call
AddParticipantsOptions addParticipantsOptions = new AddParticipantsOptions(targe
Response<AddParticipantsResult> addParticipantsResultResponse = callConnectionAsync.addParticipantsWithResponse(addParticipantsOptions).block(); ``` --
-To add a Communication Services user, provide a CommunicationUserIdentifier instead of PhoneNumberIdentifier. Source caller ID isn't mandatory in this case.
+To add a Communication Services user, provide a CommunicationUserIdentifier instead of PhoneNumberIdentifier. Source caller ID isn't mandatory in this case.
AddParticipant will publish a `AddParticipantSucceeded` or `AddParticipantFailed` event, along with a `ParticipantUpdated` providing the latest list of participants in the call.
-
+ ![Sequence diagram for adding a participant to the call.](media/add-participant-flow.png) ## Remove a participant from a call
communication-services Quickstart Botframework Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md
# Add a bot to your chat app > [!IMPORTANT]
-> This functionality is in public preview.
+> This functionality is in private preview and restricted to a limited number of Azure Communication Services early adopters. You can [submit this form to request participation in the preview](https://forms.office.com/r/HBm8jRuuGZ) and we will review your scenario(s) and evaluate your request.
>
+> Private Preview APIs and SDKs are provided without a service-level agreement, and are not appropriate for production workloads and should only be used with test users and test data. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ In this quickstart, you will learn how to build conversational AI experiences in a chat application using Azure Communication Services Chat messaging channel that is available under Azure Bot Services. This article will describe how to create a bot using BotFramework SDK and how to integrate this bot into any chat application that is built using Communication Services Chat SDK.
cosmos-db Create Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/create-alerts.md
You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service
## Create an alert rule
-This section shows how to create an alert when you receive an HTTP status code 429, which is received when the requests are rate limited. For examples, you may want to receive an alert when there are 100 or more rate limited requests. This article shows you how to configure an alert for such scenario by using the HTTP status code. You can use the similar steps to configure other types of alerts as well, you just need to choose a different condition based on your requirement.
+This section shows how to create an alert when you receive an HTTP status code 429, which is returned when requests are rate limited. For example, you may want to receive an alert when there are 100 or more rate-limited requests. This article shows you how to configure an alert for such a scenario by using the HTTP status code. You can use similar steps to configure other types of alerts as well; you just need to choose a different condition based on your requirements.
> [!TIP] > The scenario of alerting based on number of 429s exceeding a threshold is used here for illustration purposes. It does not mean that there is anything inherently wrong with seeing 429s on your database or container. In general, if you see 1-5% of requests with 429s in a production workload and your overall application latency is within your requirements, this is a normal and healthy sign that you are fully using the throughput (RU/s) you've provisioned. [Learn more about how to interpret and debug 429 exceptions](sql/troubleshoot-request-rate-too-large.md).
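If you prefer to script this kind of alert instead of using the portal, a minimal Azure CLI sketch is shown below. It assumes the account-level `TotalRequests` metric with a `StatusCode` dimension and the Count aggregation; the resource IDs, names, and threshold are placeholders you would adjust for your scenario.

```azurecli
# Alert when more than 100 rate-limited (429) requests occur in a 5-minute window.
az monitor metrics alert create --resource-group <resource-group> --name cosmos-429-alert \
    --scopes <cosmos-db-account-resource-id> \
    --condition "count TotalRequests > 100 where StatusCode includes 429" \
    --window-size 5m --evaluation-frequency 5m \
    --action <action-group-name-or-id> \
    --description "More than 100 rate limited requests in the last 5 minutes"
```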
cosmos-db How To Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-python-get-started.md
ms.devlang: python Previously updated : 11/16/2022 Last updated : 11/18/2022
client.close()
## Use MongoDB client classes with Azure Cosmos DB for API for MongoDB
-Let's look at the hierarchy of resources in the API for MongoDB and the object model that's used to create and access these resources. The API for MongoDB creates resources in the following order:
+
+Each type of resource is represented by one or more associated Python classes. Here's a list of the most common classes:
* [MongoClient](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html) - The first step when working with PyMongo is to create a MongoClient to connect to Azure Cosmos DB's API for MongoDB. The client object is used to configure and execute requests against the service.
To learn more about the hierarchy of entities, see the [Azure Cosmos DB resource
Now that you've connected to an API for MongoDB account, use the next guide to create and manage databases. > [!div class="nextstepaction"]
-> [Create a database in Azure Cosmos DB for MongoDB using Python](how-to-python-manage-databases.md)
+> [Create a database in Azure Cosmos DB for MongoDB using Python](how-to-python-manage-databases.md)
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-python.md
This quickstart will create a single Azure Cosmos DB account using the API for M
## Object model
-Let's look at the hierarchy of resources in the API for MongoDB and the object model that's used to create and access these resources. The API for MongoDB creates resources in the following order:
+Let's look at the hierarchy of resources in the API for MongoDB and the object model that's used to create and access these resources. Azure Cosmos DB creates resources in a hierarchy that consists of accounts, databases, collections, and documents.
+
+ Hierarchical diagram showing an Azure Cosmos DB account at the top. The account has two child database nodes. One of the database nodes includes two child collection nodes. The other database node includes a single child collection node. That single collection node has three child doc nodes.
+
+Each type of resource is represented by a Python class. Here are the most common classes:
* [MongoClient](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html) - The first step when working with PyMongo is to create a MongoClient to connect to Azure Cosmos DB's API for MongoDB. The client object is used to configure and execute requests against the service.
cost-management-billing Ea Portal Agreements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-agreements.md
Title: Azure EA agreements and amendments
description: This article explains how Azure EA agreements and amendments affect your Azure EA portal use. Previously updated : 06/27/2022 Last updated : 11/18/2022
This feature is meant to provide an estimation of the Azure cost to the end cust
Make sure to review the commercial information - monetary balance information, price list, etc. before publishing the marked-up prices to end customer.
+#### Azure savings plan purchases
+
+For [Azure Savings plan](../savings-plan/savings-plan-compute-overview.md) purchases, in some situations, indirect EA end customers could see minor variances in their utilization percentage when they view their [cost reports](../savings-plan/utilization-cost-reports.md) in Cost Management. Actual purchase and usage charges are always computed in partner prices and not in customer prices (for example, with markup). Subsequent markdown and uplift could result in floating point numbers exceeding eight decimal point precision. Azure rounds calculations to eight decimal precision, which can cause minor variances in the utilization numbers for end customers.
+
+Let's look at an example. Suppose the customer enters an Azure savings plan commitment amount of 3.33/hour and the markup is 13%. After the markdown to arrive at the partner price (the customer-entered value divided by 1.13) and the subsequent markup in the cost and usage reports, there's a minor variance in the numbers:
+
+- Customer entered value: 3.33/hour
+- Mark up: 13%
+- Partner commitment calculated from customer value and rounded to eight decimal point precision: 2.94690265
+- Final customer viewed commit (uplifting partner price): 3.32999999
+ ### How to add a price markup **Step One: Add price markup**
cost-management-billing View Payment History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/view-payment-history.md
+
+ Title: View payment history
+description: This article describes how to view your payment history for a Microsoft Customer Agreement.
++
+tags: billing
+++ Last updated : 11/15/2022+++
+# View payment history
+
+This article explains how you can view your payment history in the Azure portal. It applies to customers with the following Azure account types:
+
+- A Microsoft Customer Agreement purchased directly through Azure.com
+- A Microsoft Customer Agreement purchased through a Microsoft representative
+- A Microsoft Customer Agreement purchased through a Microsoft partner
+
+## Required permissions
+
+To view the payment history for your billing account, you must have at least the Invoice section reader role. For more information about administrative roles for a Microsoft Customer Agreement, see [Understand Microsoft Customer Agreement administrative roles in Azure](understand-mca-roles.md).
+
+## View your payment history
+
+To view your payment history, you can navigate to the Payment history page under a specific billing profile.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Search for **Cost Management + Billing** and select it.
+3. Select a Billing scope, if necessary.
+4. In the left menu under **Billing**, select **Billing profiles**.
+5. Select a billing profile.
+6. In the left menu under **Billing**, select **Payment history**. Your payment history associated with the billing profile is shown. Here's an example.
++
+To download an invoice, select the Invoice ID that you want to download.
+
+## Next steps
+
+- If you need to change your payment method, see [Add, update, or delete a payment method](change-credit-card.md).
cost-management-billing Buy Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/buy-savings-plan.md
description: This article helps you buy an Azure savings plan.
-+ Last updated 11/16/2022
cost-management-billing Cancel Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/cancel-savings-plan.md
-+ Last updated 10/12/2022
cost-management-billing Charge Back Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/charge-back-costs.md
description: Learn how to view Azure saving plan costs for chargeback.
-+ Last updated 10/12/2022
cost-management-billing Choose Commitment Amount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/choose-commitment-amount.md
description: This article helps you determine how to choose an Azure saving plan
-+ Last updated 10/12/2022
cost-management-billing Discount Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/discount-application.md
-+ Last updated 10/20/2022
cost-management-billing Manage Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/manage-savings-plan.md
description: Learn how to manage savings plans. See steps to change the plan's s
-+ Last updated 10/12/2022
cost-management-billing Permission View Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/permission-view-manage.md
description: Learn how to view and manage your savings plan in the Azure portal.
-+ Last updated 10/12/2022
cost-management-billing Purchase Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/purchase-recommendations.md
-+ Last updated 11/16/2022
cost-management-billing Renew Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/renew-savings-plan.md
description: Learn how you can automatically renew an Azure saving plan to conti
-+ Last updated 10/12/2022
cost-management-billing Reservation Trade In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/reservation-trade-in.md
description: Learn how you can trade in your reservations for an Azure saving pl
-+ Last updated 10/24/2022
cost-management-billing Savings Plan Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/savings-plan-compute-overview.md
description: Learn how Azure savings plans help you save money by committing an
-+ Last updated 11/04/2022
cost-management-billing Software Costs Not Included https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/software-costs-not-included.md
-+ Last updated 10/12/2022
cost-management-billing Troubleshoot Savings Plan Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/troubleshoot-savings-plan-utilization.md
description: This article helps you understand why Azure savings plans can tempo
-+ Last updated 10/14/2022
cost-management-billing Utilization Cost Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/utilization-cost-reports.md
description: Learn how to view saving plan cost and usage details.
-+ Last updated 10/14/2022
cost-management-billing View Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/view-transactions.md
description: Learn how to view saving plan purchase transactions and details.
-+ Last updated 10/12/2022
cost-management-billing View Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/view-utilization.md
description: Learn how to view saving plan utilization in the Azure portal.
-+ Last updated 11/08/2022
data-factory Data Flow Expressions Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expressions-usage.md
___
### <code>regexExtract</code> <code><b>regexExtract(<i>&lt;string&gt;</i> : string, <i>&lt;regex to find&gt;</i> : string, [<i>&lt;match group 1-based index&gt;</i> : integral]) => string</b></code><br/><br/>
-Extract a matching substring for a given regex pattern. The last parameter identifies the match group and is defaulted to 1 if omitted. Use `<regex>`(back quote) to match a string without escaping.
+Extract a matching substring for a given regex pattern. The last parameter identifies the match group and is defaulted to 1 if omitted. Use `<regex>` (back quote) to match a string without escaping. Index 0 returns all matches. Without match groups, index 1 and above won't return any result.
* ``regexExtract('Cost is between 600 and 800 dollars', '(\\d+) and (\\d+)', 2) -> '800'`` * ``regexExtract('Cost is between 600 and 800 dollars', `(\d+) and (\d+)`, 2) -> '800'`` ___
databox-online Azure Stack Edge Gpu 2210 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2210-release-notes.md
+
+ Title: Azure Stack Edge 2210 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge running 2210 release.
++
+
+++ Last updated : 11/18/2022+++
+# Azure Stack Edge 2210 release notes
++
+The following release notes identify the critical open issues and the resolved issues for the 2210 release for your Azure Stack Edge devices. Features and issues that correspond to a specific model of Azure Stack Edge are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2210** release, which maps to software version **2.2.2111.98**. This software can be applied to your device if you're running **Azure Stack Edge 2207 or later** (2.2.2038.5916).
+
+## What's new
+
+The 2210 release has the following features and enhancements:
+
+- **High performance network VMs** - In this release, when you deploy high performance network (HPN) VMs, vCPUs are automatically reserved using a default SkuPolicy. If a vCPU reservation was defined in an earlier version, and if you update the device to 2210, then that existing reservation is carried forward to 2210. For more information, see how to [Deploy HPN VMs on your Azure Stack Edge](azure-stack-edge-gpu-deploy-virtual-machine-high-performance-network.md).
+- **Kubernetes security updates** - This release includes security updates and security hardening improvements for Kubernetes VMs.
+
+If you have questions or concerns, [open a support case through the Azure portal](azure-stack-edge-contact-microsoft-support.md).
+
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> 1. In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**<br> 2. Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> 3. Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.<br> 4. Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd". After this, steps 3-4 from the current documentation should be identical. |
+| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh, may result in the updates not getting uploaded to the cloud. For example, sequence of actions such as:<br> 1. Create blob in cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs. These actions can result in the updated sections of the blob to not get updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error will show as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: can't create directory 'test': Permission denied|
+|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs aren't supported.<br> - There's no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` isn't supported as it uses the copy operation heavily.||
|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
+|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You'll need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.|
|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).| |
+|**9.**|Kubernetes |Port 31000 is reserved for Kubernetes Dashboard. Port 31001 is reserved for Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10, are reserved for Kubernetes service and Core DNS service respectively.|Don't use reserved IPs.|
+|**10.**|Kubernetes |Kubernetes doesn't currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**11.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see [Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).|
+|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts can't be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates aren't picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**14.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected. <br> - **Status** column in **Certificates** page. <br> - **Security** tile in **Get started** page. <br> - **Configuration** tile in **Overview** page.<br> |
+|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| |
+|**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. ||
+|**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
|**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for the Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|
+|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| |
|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the one remaining available GPU. |
+|**23.**|Custom script VM extension |There's a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <br> 1. Connect to the Windows VM using remote desktop protocol (RDP). <br> 2. Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> 3. If the `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> 4. While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> 5. After you kill the process, the process starts running again with the newer version. <br> 6. Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> 7. [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). |
+|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
+|**25.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | |
+
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md)
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 09/20/2022 Last updated : 11/18/2022 # Update your Azure Stack Edge Pro GPU
The procedure described in this article was performed using a different version
## About latest update
-The current update is Update 2209. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are:
+The current update is Update 2210. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are:
-- Device software version: Azure Stack Edge 2209 (2.2.2088.5593)-- Device Kubernetes version: Azure Stack Kubernetes Edge 2209 (2.2.2088.5593)-- Kubernetes server version: v1.22.6
+- Device software version: Azure Stack Edge 2210 (2.2.2111.98)
+- Device Kubernetes version: Azure Stack Kubernetes Edge 2210 (2.2.2111.98)
+- Kubernetes server version: v1.23.8
- IoT Edge version: 0.1.0-beta15 - Azure Arc version: 1.7.18-- GPU driver version: 515.48.07
+- GPU driver version: 515.65.01
- CUDA version: 11.7 For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2209-release-notes.md).
-**To apply 2209 update, your device must be running version 2207.**
+**To apply the 2210 update, your device must be running version 2207 or later.**
- If you are not running the minimum required version, you'll see this error: *Update package cannot be installed as its dependencies are not met.* -- You can update to 2207 from 2106, and then install 2209.
+- You can update to 2207 from 2106 or later, and then install 2210.
### Updates for a single-node vs two-node
Do the following steps to download the update from the Microsoft Update Catalog.
2. In the search box of the Microsoft Update Catalog, enter the Knowledge Base (KB) number of the hotfix or terms for the update you want to download. For example, enter **Azure Stack Edge**, and then click **Search**.
- The update listing appears as **Azure Stack Edge Update 2209**.
+ The update listing appears as **Azure Stack Edge Update 2210**.
<!--![Search catalog 2](./media/azure-stack-edge-gpu-install-update/download-update-2-b.png)-->
This procedure takes around 20 minutes to complete. Perform the following steps
5. The update starts. After the device is successfully updated, it restarts. The local UI is not accessible in this duration.
-6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2205**.
+6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2210**.
7. You will now update the Kubernetes software version. Select the remaining three Kubernetes files together (file with the *Kubernetes_Package.0.exe*, *Kubernetes_Package.1.exe*, and *Kubernetes_Package.2.exe* suffix) and repeat the above steps to apply update.
databox-online Azure Stack Edge Pro 2 System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-system-requirements.md
Previously updated : 02/09/2022 Last updated : 11/15/2022
For complete information, go to [Firewall and port configuration rules for IoT E
|-|--||-|-| | TCP 31000 (HTTPS)| In | LAN | In some cases. <br> See notes. |This port is required only if you are connecting to the Kubernetes dashboard to monitor your device. | | TCP 6443 (HTTPS)| In | LAN | In some cases. <br> See notes. |This port is required by Kubernetes API server only if you are using `kubectl` to access your device. |-
+| TCP 2379 (HTTPS)| In | LAN | In some cases. <br> See notes. |This port is required by Kubernetes `etcd` on your device. |
> [!IMPORTANT] > If your datacenter firewall is restricting or filtering traffic based on source IPs or MAC addresses, make sure that the compute IPs (Kubernetes node IPs) and MAC addresses are in the allowed list. The MAC addresses can be specified by running the `Set-HcsMacAddressPool` cmdlet on the PowerShell interface of the device.
deployment-environments How To Configure Use Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-use-cli.md
az devcenter dev environment list --dev-center <devcenter-name> --project-name <
**Delete an environment** ```azurecli
-az devcenter environment delete --dev-center <devcenter-name> --project-name <project-name> -n <name> --user-id "me"
+az devcenter dev environment delete --dev-center <devcenter-name> --project-name <project-name> -n <name> --user-id "me"
```
digital-twins Concepts Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-history.md
In order to set up a data history connection, your Azure Digital Twins instance
Later, your Azure Digital Twins instance must have the following permission on the Event Hubs resource while data history is being used: **Azure Event Hubs Data Sender** (you can also opt instead to keep **Azure Event Hubs Data Owner** from data history setup).
+These permissions can be assigned using the Azure CLI or Azure portal.
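For example, with the Azure CLI the role assignment might look like the following sketch. It assumes your Azure Digital Twins instance has a system-assigned managed identity and uses placeholder names; `az dt show` is used here only to look up that identity's principal ID.

```azurecli
# Look up the principal ID of the Azure Digital Twins instance's system-assigned identity.
principalId=$(az dt show --dt-name <instance-name> --query "identity.principalId" -o tsv)

# Grant the instance permission to send events to the event hub used for data history.
az role assignment create --assignee $principalId \
    --role "Azure Event Hubs Data Sender" \
    --scope <event-hub-resource-id>
```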
+ ## Creating a data history connection Once all the [resources](#resources-and-data-flow) and [permissions](#required-permissions) are set up, you can use the [Azure CLI](/cli/azure/what-is-azure-cli), [Azure portal](https://portal.azure.com), or the [Azure Digital Twins SDK](concepts-apis-sdks.md) to create the data history connection between them. The CLI command set is [az dt data-history](/cli/azure/dt/data-history).
digital-twins Concepts Ontologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies.md
This article describes the concept of industry ontologies and how they can be used within the context of Azure Digital Twins.
-The vocabulary of an Azure Digital Twins solution is defined using [models](concepts-models.md), which describe the types of entities that exist in your environment. An *ontology* is a set of models for a given domain, like building structures, IoT systems, smart cities, energy grids, web content, and more.
+The vocabulary of an Azure Digital Twins solution is defined using [models](concepts-models.md), which describe the types of entities that exist in your environment. An *ontology* is a set of models for a given domain, like manufacturing, building structures, IoT systems, smart cities, energy grids, web content, and more.
Sometimes, when your solution is tied to a particular industry, it can be easier and more effective to start with a set of models for that industry that already exist, instead of authoring your own model set from scratch. This article explains more about using pre-existing industry ontologies for your Azure Digital Twins scenarios, including strategies for using the ontologies that are available today.
digital-twins How To Create App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-app-registration.md
Start on your app registration page in the Azure portal.
1. Select **Certificates & secrets** from the registration's menu, and then select **+ New client secret**.
- :::image type="content" source="media/how-to-create-app-registration/client-secret.png" alt-text="Screenshot of the Azure portal showing an Azure AD app registration and a highlight around 'New client secret'." lightbox="media/how-to-create-app-registration/client-secret.png":::
+ :::image type="content" source="media/how-to-create-app-registration/client-secret.png" alt-text="Screenshot of the Azure portal showing an Azure AD app registration and a highlight around 'New client secret'.":::
1. Enter whatever values you want for Description and Expires, and select **Add**.
- :::image type="content" source="media/how-to-create-app-registration/add-client-secret.png" alt-text="Screenshot of the Azure portal while adding a client secret.":::
+ :::image type="content" source="media/how-to-create-app-registration/add-client-secret.png" alt-text="Screenshot of the Azure portal while adding a client secret." lightbox="media/how-to-create-app-registration/add-client-secret-large.png":::
1. Verify that the client secret is visible on the **Certificates & secrets** page with Expires and Value fields.
digital-twins How To Ingest Iot Hub Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-ingest-iot-hub-data.md
az functionapp function show --resource-group <your-resource-group> --name <your
### Configure the function app
-To access Azure Digital Twins, your function app needs a system-managed identity with permissions to access your Azure Digital Twins instance. You'll set that up in this section, by assigning an access role for the function and configuring the application settings so that it can access your Azure Digital Twins instance.
+To access Azure Digital Twins, your function app needs a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) with permissions to access your Azure Digital Twins instance. You'll set that up in this section, by assigning an access role for the function and configuring the application settings so that it can access your Azure Digital Twins instance.
[!INCLUDE [digital-twins-configure-function-app-cli.md](../../includes/digital-twins-configure-function-app-cli.md)]
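The include linked above walks through the exact commands. As a rough orientation only, the flow it covers generally looks like the sketch below; the `ADT_SERVICE_URL` setting name and the placeholder resource names are assumptions for illustration rather than values defined in this article:

```azurecli
# Sketch (placeholder names): enable a system-assigned identity on the function app,
# grant that identity data access to the Azure Digital Twins instance,
# and point the function code at the instance through an app setting.
az functionapp identity assign --resource-group <your-resource-group> --name <your-function-app-name>
az dt role-assignment create --dt-name <your-instance-name> --assignee "<principal-id-from-previous-output>" --role "Azure Digital Twins Data Owner"
az functionapp config appsettings set --resource-group <your-resource-group> --name <your-function-app-name> --settings "ADT_SERVICE_URL=https://<your-instance-host-name>"
```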
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-routes.md
In Azure Digital Twins, you can route [event notifications](concepts-event-notif
- Instance name
- Resource group
-You can find these details in the [Azure portal](https://portal.azure.com) after setting up your instance. Log into the portal and search for the name of your instance in the portal search bar.
+You can find these details in the [Azure portal](https://portal.azure.com) after setting up your instance. Log in to the portal and search for the name of your instance in the portal search bar.
:::image type="content" source="media/how-to-manage-routes/search-field-portal.png" alt-text="Screenshot of Azure portal search bar." lightbox="media/how-to-manage-routes/search-field-portal.png":::
To create a new endpoint, go to your instance's page in the [Azure portal](https
1. Complete the other details that are required for your endpoint type, including your subscription and the endpoint resources described [above](#prerequisite-create-endpoint-resources).
1. For Event Hubs and Service Bus endpoints only, you must select an **Authentication type**. You can use key-based authentication with a pre-created authorization rule, or identity-based authentication if you'll be using the endpoint with a [managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) for your Azure Digital Twins instance.
+ :::image type="content" source="media/how-to-manage-routes/create-endpoint-event-hub-authentication.png" alt-text="Screenshot of creating an endpoint of type Event Hubs in the Azure portal.":::
1. Finish creating your endpoint by selecting **Save**.
If the endpoint creation fails, observe the error message and retry after a few
You can also view the endpoint that was created back on the **Endpoints** page for your Azure Digital Twins instance.
-Now the event grid, event hub, or Service Bus topic is available as an endpoint in Azure Digital Twins, under the name you chose for the endpoint. You'll typically use that name as the target of an event route, which you'll create [later in this article](#create-an-event-route).
+Now the Event Grid topic, event hub, or Service Bus topic is available as an endpoint in Azure Digital Twins, under the name you chose for the endpoint. You'll typically use that name as the target of an event route, which you'll create [later in this article](#create-an-event-route).
# [CLI](#tab/cli)
To create a Service Bus topic endpoint (key-based authentication):
az dt endpoint create servicebus --endpoint-name <Service-Bus-endpoint-name> --servicebus-resource-group <Service-Bus-resource-group-name> --servicebus-namespace <Service-Bus-namespace> --servicebus-topic <Service-Bus-topic-name> --servicebus-policy <Service-Bus-topic-policy> --dt-name <your-Azure-Digital-Twins-instance-name> ```
-After successfully running these commands, the event grid, event hub, or Service Bus topic will be available as an endpoint in Azure Digital Twins, under the name you supplied with the `--endpoint-name` argument. You'll typically use that name as the target of an event route, which you'll create [later in this article](#create-an-event-route).
+After successfully running these commands, the Event Grid topic, event hub, or Service Bus topic will be available as an endpoint in Azure Digital Twins, under the name you supplied with the `--endpoint-name` argument. You'll typically use that name as the target of an event route, which you'll create [later in this article](#create-an-event-route).
#### Create an endpoint with identity-based authentication
For instructions on how to create this type of endpoint with the Azure CLI, swit
To create an endpoint that has dead-lettering enabled, add the `--deadletter-sas-uri` parameter to the [az dt endpoint create](/cli/azure/dt/endpoint/create) command that [creates an endpoint](#create-the-endpoint).
-The value for the parameter is the dead letter SAS URI made up of the storage account name, container name, and SAS token that you gathered in the [previous section](#set-up-storage-resources). This parameter creates the endpoint with key-based authentication. Here is what the parameter looks like:
+The value for the parameter is the dead letter SAS URI made up of the storage account name, container name, and SAS token that you gathered in the [previous section](#set-up-storage-resources). This parameter creates the endpoint with key-based authentication. Here's what the parameter looks like:
```azurecli --deadletter-sas-uri https://<storage-account-name>.blob.core.windows.net/<container-name>?<SAS-token>
The following sample method shows how to create, list, and delete an event route
As described above, routes have a filter field. If the filter value on your route is `false`, no events will be sent to your endpoint.
-After enabling the minimal filter of `true`, endpoints will receive different kinds of events from Azure Digital Twins:
+After you've enabled a minimal filter of `true`, endpoints will receive different kinds of events from Azure Digital Twins:
* Telemetry fired by [digital twins](concepts-twins-graph.md) using the Azure Digital Twins service API
* Twin property change notifications, fired on property changes for any twin in the Azure Digital Twins instance
* Life-cycle events, fired when twins or relationships are created or deleted
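For reference, a route with this minimal filter can also be created from the Azure CLI. The following is a sketch with placeholder names, and it assumes the endpoint has already been created:

```azurecli
# Sketch: create an event route that forwards all events (filter "true") to an existing endpoint.
az dt route create --dt-name <your-instance-name> --endpoint-name <your-endpoint-name> --route-name <your-route-name> --filter "true"
```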
You can either select from some basic common filter options, or use the advanced
To use the basic filters, expand the **Event types** option and select the checkboxes corresponding to the events you want to send to your endpoint. Doing so will autopopulate the filter text box with the text of the filter you've selected:

### Use the advanced filters
You can also use the advanced filter option to write your own custom filters.
To create an event route with advanced filter options, toggle the switch for the **Advanced editor** to enable it. You can then write your own event filters in the **Filter** box:

# [API](#tab/api)
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
There are two possible error scenarios that each give their own error message:
Both of these error messages are shown in the screenshot below:

#### View a twin's relationships
You can create a new digital twin from its model definition in the **Models** pa
To create a twin from a model, find that model in the list and choose the menu dots next to the model name. Then, select **Create a Twin**. You'll be asked to enter a **name** for the new twin, which must be unique. Then save the twin, which will add it to your graph. To add property values to your twin, see [Edit twin and relationship properties](#edit-twin-and-relationship-properties).
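If you prefer scripting over the Explorer UI, the Azure CLI can create a twin from an existing model as well. This is a minimal sketch; the model ID and twin name below are hypothetical examples, not values from this article:

```azurecli
# Sketch: create a twin named "Floor1" from a hypothetical model ID (property values can be added afterward).
az dt twin create --dt-name <your-instance-name> --dtmi "dtmi:example:Floor;1" --twin-id Floor1
```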
You can use the **Model Graph** panel to view a graphical representation of the
To see the full definition of a model, find that model in the **Models** pane and select the menu dots next to the model name. Then, select **View Model**. Doing so will display a **Model Information** modal showing the raw DTDL definition of the model. You can also view a model's full definition by selecting it in the **Model Graph**, and using the **Toggle model details** button to expand the **Model Detail** panel. This panel will also display the full DTDL code for the model.
You can upload custom images to represent different models in the Model Graph an
To upload an image for a single model, find that model in the **Models** panel and select the menu dots next to the model name. Then, select **Upload Model Image**. In the file selector box that appears, navigate on your machine to the image file you want to upload for that model. Choose **Open** to upload it. You can also upload model images in bulk.
You can use the Models panel to delete individual models, or all of the models i
To delete a single model, find that model in the list and select the menu dots next to the model name. Then, select **Delete Model**. To delete all of the models in your instance at once, choose the **Delete All Models** icon at the top of the Models panel.
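Deleting a model can also be scripted with the Azure CLI if you'd rather not use the Models panel. A minimal sketch, assuming a hypothetical model ID:

```azurecli
# Sketch: delete a single model by its (hypothetical) model ID.
az dt model delete --dt-name <your-instance-name> --dtmi "dtmi:example:Floor;1"
```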
digital-twins How To Use Postman With Digital Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-postman-with-digital-twins.md
This article contains information about the following steps:
-1. Use the Azure CLI to [get a bearer token](#get-bearer-token) that you will use to make API requests in Postman.
+1. Use the Azure CLI to [get a bearer token](#get-bearer-token) that you'll use to make API requests in Postman.
1. Set up a [Postman collection](#about-postman-collections) and configure the Postman REST client to use your bearer token to authenticate. When setting up the collection, you can choose either of these options:
    1. [Import a pre-built collection of Azure Digital Twins API requests](#import-collection-of-azure-digital-twins-apis).
    1. [Create your own collection from scratch](#create-your-own-collection).
Otherwise, you can open an [Azure Cloud Shell](https://shell.azure.com) window i
# [Data plane](#tab/data-plane)
- To get a token to use with the data plane APIs, use the following static value for the token context: `0b07f429-9f4b-4714-9392-cc5e8e80c8b0`. This is the resource ID for the Azure Digital Twins service endpoint.
+ To get a token to use with the data plane APIs, use the following static value for the token context: `0b07f429-9f4b-4714-9392-cc5e8e80c8b0`. This value is the resource ID for the Azure Digital Twins service endpoint.
```azurecli-interactive az account get-access-token --resource 0b07f429-9f4b-4714-9392-cc5e8e80c8b0
Otherwise, you can open an [Azure Cloud Shell](https://shell.azure.com) window i
>[!NOTE] > If you need to access your Azure Digital Twins instance using a service principal or user account that belongs to a different Azure Active Directory tenant from the instance, you'll need to request a token from the Azure Digital Twins instance's "home" tenant. For more information on this process, see [Write app authentication code](how-to-authenticate-client.md#authenticate-across-tenants).
-3. Copy the value of `accessToken` in the result, and save it to use in the next section. This is your **token value** that you will provide to Postman to authorize your requests.
+3. Copy the value of `accessToken` in the result, and save it to use in the next section. This value is your **token value** that you'll provide to Postman to authorize your requests.
:::image type="content" source="media/how-to-use-postman-with-digital-twins/console-access-token.png" alt-text="Screenshot of the console showing the result of the az account get-access-token command. The accessToken field and its sample value is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/console-access-token.png":::
The first step in importing the API set is to download a collection. Choose the
There are currently two Azure Digital Twins data plane collections available for you to choose from:
* [Azure Digital Twins Postman Collection](https://github.com/microsoft/azure-digital-twins-postman-samples): This collection provides a simple getting started experience for Azure Digital Twins in Postman. The requests include sample data, so you can run them with minimal edits required. Choose this collection if you want a digestible set of key API requests containing sample information.
    - To find the collection, navigate to the repo link and open the file named *postman_collection.json*.
-* [Azure Digital Twins data plane Swagger](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins): This repo contains complete Swagger files for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. This will provide a comprehensive set of every API request, but with empty data bodies rather than sample data. Choose this collection if you want to have access to every API call and fill in all the data yourself. You should also use this collection if you need a specific version of the APIs (like one that supports a preview feature).
+* [Azure Digital Twins data plane Swagger](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins): This repo contains complete Swagger files for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. These files provide a comprehensive set of every API request, but with empty data bodies rather than sample data. Choose this collection if you want to have access to every API call and fill in all the data yourself. You should also use this collection if you need a specific version of the APIs (like one that supports a preview feature).
- To find the collection, navigate to the repo link and choose the folder for your preferred spec version. From here, open the file called *digitaltwins.json*.

# [Control plane](#tab/control-plane)
-The collection currently available for control plane is the [Azure Digital Twins control plane Swagger](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins). This repo contains the complete Swagger file for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. This will provide a comprehensive set of every API request.
+The collection currently available for control plane is the [Azure Digital Twins control plane Swagger](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins). This repo contains the complete Swagger file for the Azure Digital Twins API set, which can be downloaded and imported to Postman as a collection. This file provides a comprehensive set of every API request.
To find the collection, navigate to the repo link and choose the folder for your preferred spec version. From here, open the file called *digitaltwins.json*.
To find the collection, navigate to the repo link and choose the folder for your
Here's how to download your chosen collection to your machine so that you can import it into Postman.

1. Use the links above to open the collection file in GitHub in your browser.
1. Select the **Raw** button to open the raw text of the file.
- :::image type="content" source="media/how-to-use-postman-with-digital-twins/swagger-raw.png" alt-text="Screenshot of the data plane digitaltwins.json file in GitHub. There is a highlight around the Raw button." lightbox="media/how-to-use-postman-with-digital-twins/swagger-raw.png":::
+ :::image type="content" source="media/how-to-use-postman-with-digital-twins/swagger-raw.png" alt-text="Screenshot of the data plane digitaltwins.json file in GitHub. There's a highlight around the Raw button." lightbox="media/how-to-use-postman-with-digital-twins/swagger-raw.png":::
1. Copy the text from the window, and paste it into a new file on your machine.
1. Save the file with a .json extension (the file name can be whatever you want, as long as you can remember it to find the file later).
Next, edit the collection you've created to configure some access details. Highl
:::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-edit-collection.png" alt-text="Screenshot of Postman. The 'View more actions' icon for the imported collection is highlighted, and 'Edit' is highlighted in the options." lightbox="media/how-to-use-postman-with-digital-twins/postman-edit-collection.png":::
-Follow these steps to add a bearer token to the collection for authorization. This is where you'll use the token value you gathered in the [Get bearer token](#get-bearer-token) section in order to use it for all API requests in your collection.
+Follow these steps to add a bearer token to the collection for authorization. Use the token value you gathered in the [Get bearer token](#get-bearer-token) section in order to use it for all API requests in your collection.
1. In the edit dialog for your collection, make sure you're on the **Authorization** tab.
If you're making a [data plane](concepts-apis-sdks.md#overview-data-plane-apis)
:::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-variables-imported.png" alt-text="Screenshot of the imported collection's edit dialog in Postman, showing the 'Variables' tab. The 'CURRENT VALUE' field is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/postman-variables-imported.png":::
-1. If your collection has additional variables, fill and save those values as well.
+1. If your collection has extra variables, fill and save those values as well.
When you're finished with the above steps, you're done configuring the collection. You can close the editing tab for the collection if you want.
Instead of importing the existing collection of all Azure Digital Twins APIs, yo
### Create a Postman collection
-1. To create a collection, select the **New** button in the main postman window.
+1. To create a collection, select the **New** button in the main Postman window.
:::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-new.png" alt-text="Screenshot of the main Postman window. The 'New' button is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/postman-new.png":::
Instead of importing the existing collection of all Azure Digital Twins APIs, yo
:::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-new-collection-2.png" alt-text="Screenshot of the 'Create New' dialog in Postman. The 'Collection' option is highlighted.":::
-1. This will open a tab for filling the details of the new collection. Select the Edit icon next to the collection's default name (**New Collection**) to replace it with your own choice of name.
+1. A tab opens. Fill in the details of the new collection. Select the Edit icon next to the collection's default name (**New Collection**) to replace it with your own choice of name.
:::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-new-collection-3.png" alt-text="Screenshot of the new collection's edit dialog in Postman. The Edit icon next to the name 'New Collection' is highlighted." lightbox="media/how-to-use-postman-with-digital-twins/postman-new-collection-3.png":::
Next, continue on to the next section to add a bearer token to the collection fo
### Configure authorization
-Follow these steps to add a bearer token to the collection for authorization. This is where you'll use the token value you gathered in the [Get bearer token](#get-bearer-token) section in order to use it for all API requests in your collection.
+Follow these steps to add a bearer token to the collection for authorization. Use the token value you gathered in the [Get bearer token](#get-bearer-token) section in order to use it for all API requests in your collection.
1. Still in the edit dialog for your new collection, move to the **Authorization** tab.
To make a Postman request to one of the Azure Digital Twins APIs, you'll need th
To proceed with an example query, this article will use the Query API (and its [reference documentation](/rest/api/digital-twins/dataplane/query/querytwins)) to query for all the digital twins in an instance.

1. Get the request URL and type from the reference documentation. For the Query API, this is currently *POST* `https://digitaltwins-host-name/query?api-version=2020-10-31`.
-1. In Postman, set the type for the request and enter the request URL, filling in placeholders in the URL as required. This is where you will use your instance's host name from the [Prerequisites section](#prerequisites).
+1. In Postman, set the type for the request and enter the request URL, filling in placeholders in the URL as required. Use your instance's host name from the [Prerequisites section](#prerequisites).
:::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-request-url.png" alt-text="Screenshot of the new request's details in Postman. The query URL from the reference documentation has been filled into the request URL box." lightbox="media/how-to-use-postman-with-digital-twins/postman-request-url.png"::: 1. Check that the parameters shown for the request in the **Params** tab match those described in the reference documentation. For this request in Postman, the `api-version` parameter was automatically filled when the request URL was entered in the previous step. For the Query API, this is the only required parameter, so this step is done. 1. In the **Authorization** tab, set the Type to **Inherit auth from parent**. This indicates that this request will use the authorization you set up earlier for the entire collection. 1. Check that the headers shown for the request in the **Headers** tab match those described in the reference documentation. For this request, several headers have been automatically filled. For the Query API, none of the header options are required, so this step is done.
-1. Check that the body shown for the request in the **Body** tab matches the needs described in the reference documentation. For the Query API, a JSON body is required to provide the query text. Here is an example body for this request that queries for all the digital twins in the instance:
+1. Check that the body shown for the request in the **Body** tab matches the needs described in the reference documentation. For the Query API, a JSON body is required to provide the query text. Here's an example body for this request that queries for all the digital twins in the instance:
:::image type="content" source="media/how-to-use-postman-with-digital-twins/postman-request-body.png" alt-text="Screenshot of the new request's details in Postman, on the Body tab. It contains a raw JSON body with a query of 'SELECT * FROM DIGITALTWINS'." lightbox="media/how-to-use-postman-with-digital-twins/postman-request-body.png":::
digital-twins Tutorial Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-code.md
There's also a section showing the complete code at the end of the tutorial. You
To begin, open the file *Program.cs* in any code editor. You'll see a minimal code template that looks something like this: First, add some `using` lines at the top of the code to pull in necessary dependencies.
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
There are two settings that need to be set for the function app to access your A
The first setting gives the function app the **Azure Digital Twins Data Owner** role in the Azure Digital Twins instance. This role is required for any user or function that wants to perform many data plane activities on the instance. You can read more about security and role assignments in [Security for Azure Digital Twins solutions](concepts-security.md).
-1. Use the following command to create a system-managed identity for the function. The output will display details of the identity that's been created. Take note of the **principalId** field in the output to use in the next step.
+1. Use the following command to create a [system-assigned identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) for the function. The output will display details of the identity that's been created. Take note of the **principalId** field in the output to use in the next step.
```azurecli-interactive az functionapp identity assign --resource-group <your-resource-group> --name <your-function-app-name>
dms Tutorial Sql Server Azure Sql Database Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline-ads.md
Before you begin the tutorial:
> [!IMPORTANT] > An Azure account is required only when you configure the migration steps. An Azure account isn't required for the assessment or to view Azure recommendations in the migration wizard in Azure Data Studio. -- Create a target instance of [Azure SQL Database](/azure/azure/azure-sql/database/single-database-create-quickstart).
+- Create a target instance of [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart).
- Make sure that the SQL Server login that connects to the source SQL Server instance is a member of the db_datareader role and that the login for the target SQL Server instance is a member of the db_owner role.
event-grid Custom Event To Eventhub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-to-eventhub.md
Title: 'Quickstart: Send custom events to Event Hubs - Event Grid, Azure CLI' description: 'Quickstart: Use Azure Event Grid and Azure CLI to publish a topic, and subscribe to that event. An event hub is used for the endpoint.' Previously updated : 09/28/2021 Last updated : 11/18/2022 # Quickstart: Route custom events to Azure Event Hubs with Azure CLI and Event Grid
-Azure Event Grid is an eventing service for the cloud. Azure Event Hubs is one of the supported event handlers. In this article, you use the Azure CLI to create a custom topic, subscribe to the custom topic, and trigger the event to view the result. You send the events to an event hub.
+[Azure Event Grid](overview.md) is a highly scalable and serverless event broker that you can use to integrate applications using events. Events are delivered by Event Grid to [supported event handlers](event-handlers.md) and Azure Event Hubs is one of them. In this article, you use Azure CLI for the following steps:
+
+1. Create an Event Grid custom topic.
+1. Create an Azure Event Hubs subscription for the custom topic.
+1. Send sample events to the custom topic.
+1. Verify that those events are delivered to the event hub.
[!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)]
Azure Event Grid is an eventing service for the cloud. Azure Event Hubs is one o
Event Grid topics are Azure resources, and must be placed in an Azure resource group. The resource group is a logical collection into which Azure resources are deployed and managed.
-Create a resource group with the [az group create](/cli/azure/group#az-group-create) command.
+Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named **gridResourceGroup** in the **westus2** location.
-The following example creates a resource group named *gridResourceGroup* in the *westus2* location.
+> [!NOTE]
+> Select **Try it** next to the CLI example to launch Cloud Shell in the right pane. Select **Copy** button to copy the command, paste it in the Cloud Shell window, and then press ENTER to run the command.
```azurecli-interactive az group create --name gridResourceGroup --location westus2
az group create --name gridResourceGroup --location westus2
[!INCLUDE [event-grid-register-provider-cli.md](../../includes/event-grid-register-provider-cli.md)]
-## Create a Custom Topic
+## Create a custom topic
-An event grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<your-topic-name>` with a unique name for your custom topic. The custom topic name must be unique because it's represented by a DNS entry.
+An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<TOPIC NAME>` with a unique name for your custom topic. The Event Grid topic name must be unique because it's represented by a DNS entry.
-```azurecli-interactive
-topicname=<your-topic-name>
-```
+1. Specify a name for the topic.
-```azurecli-interactive
-az eventgrid topic create --name $topicname -l westus2 -g gridResourceGroup
-```
+ ```azurecli-interactive
+ topicname="<TOPIC NAME>"
+ ```
+1. Run the following command to create the topic.
+
+ ```azurecli-interactive
+ az eventgrid topic create --name $topicname -l westus2 -g gridResourceGroup
+ ```
-## Create event hub
+## Create an event hub
Before subscribing to the custom topic, let's create the endpoint for the event message. You create an event hub for collecting the events.
-```azurecli-interactive
-namespace=<unique-namespace-name>
-```
+1. Specify a unique name for the Event Hubs namespace.
-```azurecli-interactive
-hubname=demohub
+ ```azurecli-interactive
+ namespace="<EVENT HUBS NAMESPACE NAME>"
+ ```
+1. Run the following commands to create an Event Hubs namespace and an event hub named `demohub` in that namespace.
-az eventhubs namespace create --name $namespace --resource-group gridResourceGroup
-az eventhubs eventhub create --name $hubname --namespace-name $namespace --resource-group gridResourceGroup
-```
+
+ ```azurecli-interactive
+ hubname=demohub
+
+ az eventhubs namespace create --name $namespace --resource-group gridResourceGroup
+ az eventhubs eventhub create --name $hubname --namespace-name $namespace --resource-group gridResourceGroup
+ ```
## Subscribe to a custom topic
-You subscribe to an event grid topic to tell Event Grid which events you want to track. The following example subscribes to the custom topic you created, and passes the resource ID of the event hub for the endpoint. The endpoint is in the format:
+You subscribe to an Event Grid topic to tell Event Grid which events you want to track. The following example subscribes to the custom topic you created, and passes the resource ID of the event hub for the endpoint. The endpoint is in the format:
-`/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.EventHub/namespaces/<namespace-name>/eventhubs/<hub-name>`
+`/subscriptions/<AZURE SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.EventHub/namespaces/<NAMESPACE NAME>/eventhubs/<EVENT HUB NAME>`
-The following script gets the resource ID for the event hub, and subscribes to an event grid topic. It sets the endpoint type to `eventhub` and uses the event hub ID for the endpoint.
+The following script gets the resource ID for the event hub, and subscribes to an Event Grid topic. It sets the endpoint type to `eventhub` and uses the event hub ID for the endpoint.
```azurecli-interactive hubid=$(az eventhubs eventhub show --name $hubname --namespace-name $namespace --resource-group gridResourceGroup --query id --output tsv)
endpoint=$(az eventgrid topic show --name $topicname -g gridResourceGroup --quer
key=$(az eventgrid topic key list --name $topicname -g gridResourceGroup --query "key1" --output tsv) ```
-To simplify this article, you use sample event data to send to the custom topic. Typically, an application or Azure service would send the event data. CURL is a utility that sends HTTP requests. In this article, use CURL to send the event to the custom topic. The following example sends three events to the event grid topic:
+To simplify this article, you use sample event data to send to the custom topic. Typically, an application or Azure service would send the event data. CURL is a utility that sends HTTP requests. In this article, use CURL to send the event to the custom topic. The following example sends three events to the Event Grid topic:
```azurecli-interactive for i in 1 2 3
do
done ```
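The loop body is elided above. To give a sense of what one of those posts looks like, here's a hedged sketch of a single event send using the Event Grid event schema; the `subject` and `data` values are made-up sample data, not values defined by this article:

```azurecli
# Sketch: post one sample event to the custom topic, authenticating with the topic key.
event='[{ "id": "'"$RANDOM"'", "eventType": "recordInserted", "subject": "myapp/vehicles/motorcycles", "eventTime": "'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'", "data": { "make": "Ducati", "model": "Monster" }, "dataVersion": "1.0" }]'
curl -X POST -H "aeg-sas-key: $key" -d "$event" $endpoint
```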
-Navigate to the event hub in the portal, and notice that Event Grid sent those three events to the event hub.
+On the **Overview** page for your Event Hubs namespace in the Azure portal, notice that Event Grid sent those three events to the event hub. You'll see the same chart on the **Overview** page for the `demohub` Event Hubs instance.
:::image type="content" source="./media/custom-event-to-eventhub/show-result.png" lightbox="./media/custom-event-to-eventhub/show-result.png" alt-text="Image showing the portal page with incoming message count as 3.":::
event-grid Custom Event To Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-to-queue-storage.md
Title: 'Quickstart: Send custom events to storage queue - Event Grid, Azure CLI' description: 'Quickstart: Use Azure Event Grid and Azure CLI to publish a topic, and subscribe to that event. A storage queue is used for the endpoint.' Previously updated : 02/02/2021 Last updated : 11/17/2022
-# Quickstart: Route custom events to Azure Queue storage with Azure CLI and Event Grid
+# Quickstart: Route custom events to Azure Queue storage via Event Grid using Azure CLI
-Azure Event Grid is an eventing service for the cloud. Azure Queue storage is one of the supported event handlers. In this article, you use the Azure CLI to create a custom topic, subscribe to the custom topic, and trigger the event to view the result. You send the events to the Queue storage.
+[Azure Event Grid](overview.md) is a highly scalable and serverless event broker that you can use to integrate applications using events. Events are delivered by Event Grid to [supported event handlers](event-handlers.md) and Azure Queue storage is one of them. In this article, you use Azure CLI for the following steps:
---- This article requires version 2.0.56 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--- If you are using Azure PowerShell on your local machine instead of using Cloud Shell in the Azure portal, ensure that you have Azure PowerShell version 1.1.0 or greater. Download the latest version of Azure PowerShell on your Windows machine from [Azure downloads - Command-line tools](https://azure.microsoft.com/downloads/).
+1. Create an Event Grid custom topic.
+1. Create an Azure Queue subscription for the custom topic.
+1. Send sample events to the custom topic.
+1. Verify that those events are delivered to Azure Queue storage.
-This article gives you commands for using Azure CLI.
## Create a resource group

Event Grid topics are Azure resources, and must be placed in an Azure resource group. The resource group is a logical collection into which Azure resources are deployed and managed.
-Create a resource group with the [az group create](/cli/azure/group#az-group-create) command.
+Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named **gridResourceGroup** in the **westus2** location.
-The following example creates a resource group named *gridResourceGroup* in the *westus2* location.
+> [!NOTE]
+> Select **Try it** next to the CLI example to launch Cloud Shell in the right pane. Select **Copy** button to copy the command, paste it in the Cloud Shell window, and then press ENTER to run the command.
```azurecli-interactive az group create --name gridResourceGroup --location westus2
az group create --name gridResourceGroup --location westus2
## Create a custom topic
-An event grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<topic_name>` with a unique name for your custom topic. The event grid topic name must be unique because it's represented by a DNS entry.
+An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<TOPIC NAME>` with a unique name for your custom topic. The Event Grid topic name must be unique because it's represented by a DNS entry.
-```azurecli-interactive
-az eventgrid topic create --name <topic_name> -l westus2 -g gridResourceGroup
-```
+1. Specify a name for the topic.
+
+ ```azurecli-interactive
+ topicname="<TOPIC NAME>"
+ ```
+1. Run the following command to create the topic.
+
+ ```azurecli-interactive
+ az eventgrid topic create --name $topicname -l westus2 -g gridResourceGroup
+ ```
## Create Queue storage

Before subscribing to the custom topic, let's create the endpoint for the event message. You create a queue in Azure Queue storage for collecting the events.
-```azurecli-interactive
-storagename="<unique-storage-name>"
-queuename="eventqueue"
+1. Specify a unique name for the Azure Storage account.
-az storage account create -n $storagename -g gridResourceGroup -l westus2 --sku Standard_LRS
-az storage queue create --name $queuename --account-name $storagename
-```
+ ```azurecli-interactive
+ storagename="<STORAGE ACCOUNT NAME>"
+ ```
+1. Run the following commands to create an Azure Storage account and a queue (named `eventqueue`) in the storage.
+
+ ```azurecli-interactive
+ queuename="eventqueue"
+
+ az storage account create -n $storagename -g gridResourceGroup -l westus2 --sku Standard_LRS
+ key="$(az storage account keys list -n $storagename --query "[0].{value:value}" --output tsv)"
+ az storage queue create --name $queuename --account-name $storagename --account-key $key
+ ```
## Subscribe to a custom topic
-You subscribe to a custom topic to tell Event Grid which events you want to track. The following example subscribes to the custom topic you created, and passes the resource ID of the Queue storage for the endpoint. With Azure CLI, you pass the Queue storage ID as the endpoint. The endpoint is in the format:
+The following example subscribes to the custom topic you created, and passes the resource ID of the Queue storage for the endpoint. With Azure CLI, you pass the Queue storage ID as the endpoint. The endpoint is in the format:
+
+`/subscriptions/<AZURE SUBSCRIPTION ID>/resourcegroups/<RESOURCE GROUP NAME>/providers/Microsoft.Storage/storageAccounts/<STORAGE ACCOUNT NAME>/queueservices/default/queues/<QUEUE NAME>`
+
+The following script gets the resource ID of the storage account for the queue. It constructs the ID for the queue storage, and subscribes to an Event Grid topic. It sets the endpoint type to `storagequeue` and uses the queue ID for the endpoint.
-`/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-name>/queueservices/default/queues/<queue-name>`
-The following script gets the resource ID of the storage account for the queue. It constructs the ID for the queue storage, and subscribes to an event grid topic. It sets the endpoint type to `storagequeue` and uses the queue ID for the endpoint.
+> [!IMPORTANT]
+> Replace expiration date placeholder (`<yyyy-mm-dd>`) with an actual value. For example: `2022-11-17` before running the command.
```azurecli-interactive storageid=$(az storage account show --name $storagename --resource-group gridResourceGroup --query id --output tsv) queueid="$storageid/queueservices/default/queues/$queuename"
-topicid=$(az eventgrid topic show --name <topic_name> -g gridResourceGroup --query id --output tsv)
+topicid=$(az eventgrid topic show --name $topicname -g gridResourceGroup --query id --output tsv)
az eventgrid event-subscription create \ --source-resource-id $topicid \
- --name <event_subscription_name> \
+ --name mystoragequeuesubscription \
--endpoint-type storagequeue \ --endpoint $queueid \ --expiration-date "<yyyy-mm-dd>"
If you use the REST API to create the subscription, you pass the ID of the stora
## Send an event to your custom topic
-Let's trigger an event to see how Event Grid distributes the message to your endpoint. First, let's get the URL and key for the custom topic. Again, use your custom topic name for `<topic_name>`.
+Let's trigger an event to see how Event Grid distributes the message to your endpoint. First, let's get the URL and key for the custom topic.
```azurecli-interactive
-endpoint=$(az eventgrid topic show --name <topic_name> -g gridResourceGroup --query "endpoint" --output tsv)
-key=$(az eventgrid topic key list --name <topic_name> -g gridResourceGroup --query "key1" --output tsv)
+endpoint=$(az eventgrid topic show --name $topicname -g gridResourceGroup --query "endpoint" --output tsv)
+key=$(az eventgrid topic key list --name $topicname -g gridResourceGroup --query "key1" --output tsv)
```
-To simplify this article, you use sample event data to send to the custom topic. Typically, an application or Azure service would send the event data. CURL is a utility that sends HTTP requests. In this article, use CURL to send the event to the custom topic. The following example sends three events to the event grid topic:
+To simplify this article, you use sample event data to send to the custom topic. Typically, an application or Azure service would send the event data. CURL is a utility that sends HTTP requests. In this article, you use CURL to send the event to the custom topic. The following example sends three events to the Event Grid topic:
```azurecli-interactive for i in 1 2 3
done
Navigate to the Queue storage in the portal, and notice that Event Grid sent those three events to the queue.
-![Show messages](./media/custom-event-to-queue-storage/messages.png)
> [!NOTE] > If you use an [Azure Queue storage trigger for Azure Functions](../azure-functions/functions-bindings-storage-queue-trigger.md) for a queue that receives messages from Event Grid, you may see the following error message on the function execution: `The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters.`
hdinsight Hdinsight 50 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-50-component-versioning.md
The Open-source component versions associated with HDInsight 5.0 are listed in t
||| --|
|Apache Spark | 3.1.2 | 2.4.4|
|Apache Hive | 3.1.2 | 3.1.2 |
-|Apache Kafka | 2.4.1 | 2.1.1|
+|Apache Kafka | - |2.1.1 and 2.4.1|
|Apache Hadoop |3.1.1 | 3.1.1 |
|Apache Tez |0.9.1 | 0.9.1 |
|Apache Pig | 0.16.1 | 0.16.1 |
This table lists certain HDInsight 4.0 cluster types that have retired or will b
:::image type="content" source="./media/hdinsight-release-notes/interactive-query-3-1-for-hdi-5-0.png" alt-text="Screenshot of interactive query 3.1 for HDI 5.0"::: > [!NOTE]
-> * If you are creating an Interactive Query Cluster, you will see from the dropdown list an other version as Interactive Query 3.1 (HDI 5.0).
+> If you're creating an Interactive Query cluster, you'll see another version, Interactive Query 3.1 (HDI 5.0), in the dropdown list.
> * If you're going to use Spark 3.1 along with Hive, which requires ACID support via Hive Warehouse Connector (HWC), you need to select the Interactive Query 3.1 (HDI 5.0) version.

## Kafka
HDInsight team is working on upgrading other open-source components.
- [Enterprise Security Package](./enterprise-security-package.md)
- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
hdinsight Hdinsight Administer Use Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-administer-use-dotnet-sdk.md
description: Learn how to perform administrative tasks for the Apache Hadoop clu
Previously updated : 06/30/2022 Last updated : 11/17/2022
The impact of changing the number of data nodes for each type of cluster support
>balancer ```
-* Apache Storm
-
- You can seamlessly add or remove data nodes to your Storm cluster while it is running. But after a successful completion of the scaling operation, you will need to rebalance the topology.
-
- Rebalancing can be accomplished in two ways:
-
- * Storm web UI
- * Command-line interface (CLI) tool
-
- Please refer to the [Apache Storm documentation](https://storm.apache.org/documentation/Understanding-the-parallelism-of-a-Storm-topology.html) for more details.
-
- The Storm web UI is available on the HDInsight cluster:
-
- :::image type="content" source="./media/hdinsight-administer-use-powershell/hdinsight-portal-scale-cluster-storm-rebalance.png" alt-text="HDInsight Storm scale rebalance":::
-
- Here is an example how to use the CLI command to rebalance the Storm topology:
-
-
- ```console
- ## Reconfigure the topology "mytopology" to use 5 worker processes,
- ## the spout "blue-spout" to use 3 executors, and
- ## the bolt "yellow-bolt" to use 10 executors
- $ storm rebalance mytopology -n 5 -e blue-spout=3 -e yellow-bolt=10
- ```
-
-The following code snippet shows how to resize a cluster synchronously or asynchronously:
-
-```csharp
-_hdiManagementClient.Clusters.Resize("<Resource Group Name>", "<Cluster Name>", <New Size>);
-_hdiManagementClient.Clusters.ResizeAsync("<Resource Group Name>", "<Cluster Name>", <New Size>);
-```
-
-## Grant/revoke access
-
-HDInsight clusters have the following HTTP web services (all of these services have RESTful endpoints):
-
-* ODBC
-* JDBC
-* Apache Ambari
-* Apache Oozie
-* Apache Templeton
-
-By default, these services are granted for access. You can revoke/grant the access. To revoke:
-
-```csharp
-var httpParams = new HttpSettingsParameters
-{
- HttpUserEnabled = false,
- HttpUsername = "admin",
- HttpPassword = "*******",
-};
-_hdiManagementClient.Clusters.ConfigureHttpSettings("<Resource Group Name>, <Cluster Name>, httpParams);
-```
-
-To grant:
-
-```csharp
-var httpParams = new HttpSettingsParameters
-{
- HttpUserEnabled = enable,
- HttpUsername = "admin",
- HttpPassword = "*******",
-};
-_hdiManagementClient.Clusters.ConfigureHttpSettings("<Resource Group Name>, <Cluster Name>, httpParams);
-```
-
-> [!NOTE]
-> By granting/revoking the access, you will reset the cluster user name and password.
-
-This can also be done via the Portal. See [Manage Apache Hadoop clusters in HDInsight by using the Azure portal](hdinsight-administer-use-portal-linux.md).
## Update HTTP user credentials

It's the same procedure as granting or revoking HTTP access. If the cluster has been granted HTTP access, you must first revoke it, and then grant the access with new HTTP user credentials.
hdinsight Hdinsight Administer Use Portal Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-administer-use-portal-linux.md
description: Learn how to create and manage Azure HDInsight clusters using the A
Previously updated : 04/01/2022 Last updated : 11/11/2022 # Manage Apache Hadoop clusters in HDInsight by using the Azure portal
From the [cluster home page](#homePage), under **Settings** select **Properties
|REGION|Azure location. For a list of supported Azure locations, see the **Region** drop-down list box on [HDInsight pricing](https://azure.microsoft.com/pricing/details/hdinsight/).| |DATE CREATED|The date the cluster was deployed.| |OPERATING SYSTEM|Either **Windows** or **Linux**.|
-|TYPE|Hadoop, HBase, Storm, Spark.|
+|TYPE|Hadoop, HBase, Spark.|
|Version|See [HDInsight versions](hdinsight-component-versioning.md).| |Minimum TLS version|The TLS version.| |SUBSCRIPTION|Subscription name.|
hdinsight Hdinsight Apache Kafka Spark Structured Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apache-kafka-spark-structured-streaming.md
description: Learn how to use Apache Spark streaming to get data into or out of
Previously updated : 06/10/2022 Last updated : 11/17/2022 #Customer intent: As a developer, I want to learn how to use Spark Structured Streaming with Kafka on HDInsight.
To remove the resource group using the Azure portal:
> [!WARNING] > HDInsight cluster billing starts once a cluster is created and stops when the cluster is deleted. Billing is pro-rated per minute, so you should always delete your cluster when it is no longer in use. >
-> Deleting a Kafka on HDInsight cluster deletes any data stored in Kafka.
-
-## Next steps
-
-In this tutorial, you learned how to use Apache Spark Structured Streaming. To write and read data from Apache Kafka on HDInsight. Use the following link to learn how to use Apache Storm with Kafka.
-
-> [!div class="nextstepaction"]
-> [Use Apache Storm with Apache Kafka](hdinsight-apache-storm-with-kafka.md)
+> Deleting a Kafka on HDInsight cluster deletes any data stored in Kafka.
hdinsight Hdinsight Apache Spark With Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apache-spark-with-kafka.md
description: Learn how to use Apache Spark to stream data into or out of Apache
Previously updated : 07/18/2022 Last updated : 11/17/2022 # Apache Spark streaming (DStream) example with Apache Kafka on HDInsight
In this example, you learned how to use Spark to read and write to Kafka. Use th
* [Get started with Apache Kafka on HDInsight](kafk) * [Use MirrorMaker to create a replica of Apache Kafka on HDInsight](kafk)
-* [Use Apache Storm with Apache Kafka on HDInsight](hdinsight-apache-storm-with-kafka.md)
hdinsight Hdinsight Apache Storm With Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apache-storm-with-kafka.md
- Title: 'Tutorial: Apache Storm with Apache Kafka - Azure HDInsight'
-description: Learn how to create a streaming pipeline using Apache Storm and Apache Kafka on HDInsight. In this tutorial, you use the KafkaBolt and KafkaSpout components to stream data from Kafka.
--- Previously updated : 08/05/2022-
-#Customer intent: As a developer, I want to learn how to build a streaming pipeline that uses Storm and Kafka to process streaming data.
--
-# Tutorial: Use Apache Storm with Apache Kafka on HDInsight
-
-This tutorial demonstrates how to use an [Apache Storm](https://storm.apache.org/) topology to read and write data with [Apache Kafka](https://kafka.apache.org/) on HDInsight. This tutorial also demonstrates how to persist data to the [Apache Hadoop HDFS](https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html) compatible storage on the Storm cluster.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Storm and Kafka
-> * Understanding the code
-> * Create Kafka and Storm clusters
-> * Build the topology
-> * Configure the topology
-> * Create the Kafka topic
-> * Start the topologies
-> * Stop the topologies
-> * Clean up resources
-
-## Prerequisites
-
-* Familiarity with creating Kafka topics. For more information, see the [Kafka on HDInsight quickstart](./kafk) document.
-
-* [Java JDK 1.8](https://www.oracle.com/technetwork/pt/java/javase/downloads/jdk8-downloads-2133151.html) or higher. HDInsight 3.5 or higher require Java 8.
-
-* [Maven 3.x](https://maven.apache.org/download.cgi)
-
-* An SSH client (you need the `ssh` and `scp` commands) - For information, see [Use SSH with HDInsight](hdinsight-hadoop-linux-use-ssh-unix.md).
-
-The following environment variables may be set when you install Java and the JDK on your development workstation. However, you should check that they exist and that they contain the correct values for your system.
-
-* `JAVA_HOME` - should point to the directory where the JDK is installed.
-* `PATH` - should contain the following paths:
-
- * `JAVA_HOME` (or the equivalent path).
- * `JAVA_HOME\bin` (or the equivalent path).
- * The directory where Maven is installed.
-
-> [!IMPORTANT]
-> The steps in this document require an Azure resource group that contains both a Storm on HDInsight and a Kafka on HDInsight cluster. These clusters are both located within an Azure Virtual Network, which allows the Storm cluster to directly communicate with the Kafka cluster.
->
-> For your convenience, this document links to a template that can create all the required Azure resources.
->
-> For more information on using HDInsight in a virtual network, see the [Plan a virtual network for HDInsight](hdinsight-plan-virtual-network-deployment.md) document.
-
-## Storm and Kafka
-
-Apache Storm provides the several components for working with Apache Kafka. The following components are used in this tutorial:
-
-* `org.apache.storm.kafka.KafkaSpout`: This component reads data from Kafka. This component relies on the following components:
-
- * `org.apache.storm.kafka.SpoutConfig`: Provides configuration for the spout component.
-
- * `org.apache.storm.spout.SchemeAsMultiScheme` and `org.apache.storm.kafka.StringScheme`: How the data from Kafka is transformed into a Storm tuple.
-
-* `org.apache.storm.kafka.bolt.KafkaBolt`: This component writes data to Kafka. This component relies on the following components:
-
- * `org.apache.storm.kafka.bolt.selector.DefaultTopicSelector`: Describes the topic that is written to.
-
- * `org.apache.kafka.common.serialization.StringSerializer`: Configures the bolt to serialize data as a string value.
-
- * `org.apache.storm.kafka.bolt.mapper.FieldNameBasedTupleToKafkaMapper`: Maps from the tuple data structure used inside the Storm topology to fields stored in Kafka.
-
-These components are available in the `org.apache.storm : storm-kafka` package. Use the package version that matches the Storm version. For HDInsight 3.6, the Storm version is 1.1.0.
-You also need the `org.apache.kafka : kafka_2.10` package, which contains additional Kafka components. Use the package version that matches the Kafka version. For HDInsight 3.6, the Kafka version is 1.1.1.
-
-The following XML is the dependency declaration in the `pom.xml` for an [Apache Maven](https://maven.apache.org/) project:
-
-```xml
-<!-- Storm components for talking to Kafka -->
-<dependency>
- <groupId>org.apache.storm</groupId>
- <artifactId>storm-kafka</artifactId>
- <version>1.1.0</version>
-</dependency>
-<!-- needs to be the same Kafka version as used on your cluster -->
-<dependency>
- <groupId>org.apache.kafka</groupId>
- <artifactId>kafka_2.10</artifactId>
- <version>1.1.1</version>
- <!-- Exclude components that are loaded from the Storm cluster at runtime -->
- <exclusions>
- <exclusion>
- <groupId>org.apache.zookeeper</groupId>
- <artifactId>zookeeper</artifactId>
- </exclusion>
- <exclusion>
- <groupId>log4j</groupId>
- <artifactId>log4j</artifactId>
- </exclusion>
- <exclusion>
- <groupId>org.slf4j</groupId>
- <artifactId>slf4j-log4j12</artifactId>
- </exclusion>
- </exclusions>
-</dependency>
-```
-
-## Understanding the code
-
-The code used in this document is available at [https://github.com/Azure-Samples/hdinsight-storm-java-kafka](https://github.com/Azure-Samples/hdinsight-storm-java-kafka).
-
-There are two topologies provided with this tutorial:
-
-* Kafka-writer: Generates random sentences and stores them to Kafka.
-
-* Kafka-reader: Reads data from Kafka and then stores it to the HDFS compatible file store for the Storm cluster.
-
- > [!WARNING]
- > To enable the Storm to work with the HDFS compatible storage used by HDInsight, a script action is required. The script installs several jar files to the `extlib` path for Storm. The template in this tutorial automatically uses the script during cluster creation.
- >
- > If you do not use the template in this document to create the Storm cluster, then you must manually apply the script action to your cluster.
- >
- > The script action is located at [https://hdiconfigactions.blob.core.windows.net/linuxstormextlibv01/stormextlib.sh](https://hdiconfigactions.blob.core.windows.net/linuxstormextlibv01/stormextlib.sh) and is applied to the supervisor and nimbus nodes of the Storm cluster. For more information on using script actions, see the [Customize HDInsight using script actions](hdinsight-hadoop-customize-cluster-linux.md) document.
-
-The topologies are defined using [Flux](https://storm.apache.org/releases/current/flux.html). Flux was introduced in Storm 0.10.x and allows you to separate the topology configuration from the code. For Topologies that use the Flux framework, the topology is defined in a YAML file. The YAML file can be included as part of the topology. It can also be a standalone file used when you submit the topology. Flux also supports variable substitution at run-time, which is used in this example.
-
-The following parameters are set at run time for these topologies:
-
-* `${kafka.topic}`: The name of the Kafka topic that the topologies read/write to.
-
-* `${kafka.broker.hosts}`: The hosts that the Kafka brokers run on. The broker information is used by the KafkaBolt when writing to Kafka.
-
-* `${kafka.zookeeper.hosts}`: The hosts that Zookeeper runs on in the Kafka cluster.
-
-* `${hdfs.url}`: The file system URL for the HDFSBolt component. Indicates whether the data is written to an Azure Storage account or Azure Data Lake Storage.
-
-* `${hdfs.write.dir}`: The directory that data is written to.
-
-For more information on Flux topologies, see [https://storm.apache.org/releases/current/flux.html](https://storm.apache.org/releases/current/flux.html).
-
-### Kafka-writer
-
-In the Kafka-writer topology, the Kafka bolt component takes two string values as parameters. These parameters indicate which tuple fields the bolt sends to Kafka as __key__ and __message__ values. The key is used to partition data in Kafka. The message is the data being stored.
-
-In this example, the `com.microsoft.example.SentenceSpout` component emits a tuple that contains two fields, `key` and `message`. The Kafka bolt extracts these fields and sends the data in them to Kafka.
-
-The fields don't have to use the names `key` and `message`. These names are used in this project to make the mapping easier to understand.
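-
-The following minimal Java sketch is illustrative only (it isn't the actual `SentenceSpout` from the sample; the class name and sentence text are placeholders). It shows how a spout can declare and emit the `key` and `message` fields so that the `FieldNameBasedTupleToKafkaMapper` can look them up by name:
-
-```java
-package com.example;
-
-import java.util.Map;
-
-import org.apache.storm.spout.SpoutOutputCollector;
-import org.apache.storm.task.TopologyContext;
-import org.apache.storm.topology.OutputFieldsDeclarer;
-import org.apache.storm.topology.base.BaseRichSpout;
-import org.apache.storm.tuple.Fields;
-import org.apache.storm.tuple.Values;
-
-// Illustrative spout: emits tuples with the fields "key" and "message",
-// which the KafkaBolt mapper reads by field name.
-public class ExampleSentenceSpout extends BaseRichSpout {
-    private SpoutOutputCollector collector;
-
-    @Override
-    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
-        this.collector = collector;
-    }
-
-    @Override
-    public void nextTuple() {
-        String sentence = "an apple a day keeps the doctor away";
-        // The first value maps to "key", the second to "message".
-        collector.emit(new Values(sentence, sentence));
-    }
-
-    @Override
-    public void declareOutputFields(OutputFieldsDeclarer declarer) {
-        declarer.declare(new Fields("key", "message"));
-    }
-}
-```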
-
-The following YAML is the definition for the Kafka-writer component:
-
-```yaml
-# kafka-writer
--
-# topology definition
-# name to be used when submitting
-name: "kafka-writer"
-
-# Components - constructors, property setters, and builder arguments.
-# Currently, components must be declared in the order they are referenced
-components:
- # Topic selector for KafkaBolt
- - id: "topicSelector"
- className: "org.apache.storm.kafka.bolt.selector.DefaultTopicSelector"
- constructorArgs:
- - "${kafka.topic}"
-
- # Mapper for KafkaBolt
- - id: "kafkaMapper"
- className: "org.apache.storm.kafka.bolt.mapper.FieldNameBasedTupleToKafkaMapper"
- constructorArgs:
- - "key"
- - "message"
-
- # Producer properties for KafkaBolt
- - id: "producerProperties"
- className: "java.util.Properties"
- configMethods:
- - name: "put"
- args:
- - "bootstrap.servers"
- - "${kafka.broker.hosts}"
- - name: "put"
- args:
- - "acks"
- - "1"
- - name: "put"
- args:
- - "key.serializer"
- - "org.apache.kafka.common.serialization.StringSerializer"
- - name: "put"
- args:
- - "value.serializer"
- - "org.apache.kafka.common.serialization.StringSerializer"
-
-
-# Topology configuration
-config:
- topology.workers: 2
-
-# Spout definitions
-spouts:
- - id: "sentence-spout"
- className: "com.microsoft.example.SentenceSpout"
- parallelism: 8
-
-# Bolt definitions
-bolts:
- - id: "kafka-bolt"
- className: "org.apache.storm.kafka.bolt.KafkaBolt"
- parallelism: 8
- configMethods:
- - name: "withProducerProperties"
- args: [ref: "producerProperties"]
- - name: "withTopicSelector"
- args: [ref: "topicSelector"]
- - name: "withTupleToKafkaMapper"
- args: [ref: "kafkaMapper"]
-
-# Stream definitions
-
-streams:
- - name: "spout --> kafka" # Streams data from the sentence spout to the Kafka bolt
- from: "sentence-spout"
- to: "kafka-bolt"
- grouping:
- type: SHUFFLE
-```
-
-### Kafka-reader
-
-In the Kafka-reader topology, the spout component reads data from Kafka as string values. The data is then written to the Storm log by the logging component and to the HDFS compatible file system for the Storm cluster by the HDFS bolt component.
-
-```yaml
-# kafka-reader
--
-# topology definition
-# name to be used when submitting
-name: "kafka-reader"
-
-# Components - constructors, property setters, and builder arguments.
-# Currently, components must be declared in the order they are referenced
-components:
- # Convert data from Kafka into string tuples in storm
- - id: "stringScheme"
- className: "org.apache.storm.kafka.StringScheme"
- - id: "stringMultiScheme"
- className: "org.apache.storm.spout.SchemeAsMultiScheme"
- constructorArgs:
- - ref: "stringScheme"
-
- - id: "zkHosts"
- className: "org.apache.storm.kafka.ZkHosts"
- constructorArgs:
- - "${kafka.zookeeper.hosts}"
-
- # Spout configuration
- - id: "spoutConfig"
- className: "org.apache.storm.kafka.SpoutConfig"
- constructorArgs:
- # brokerHosts
- - ref: "zkHosts"
- # topic
- - "${kafka.topic}"
- # zkRoot
- - ""
- # id
- - "readerid"
- properties:
- - name: "scheme"
- ref: "stringMultiScheme"
-
- # How often to sync files to HDFS; after every tuple.
- - id: "syncPolicy"
- className: "org.apache.storm.hdfs.bolt.sync.CountSyncPolicy"
- constructorArgs:
- - 1
-
- # Rotate files when they hit 5 KB
- - id: "rotationPolicy"
- className: "org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy"
- constructorArgs:
- - 5
- - "KB"
-
- # File format; read the directory from filters at run time, and use a .txt extension when writing.
- - id: "fileNameFormat"
- className: "org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat"
- configMethods:
- - name: "withPath"
- args: ["${hdfs.write.dir}"]
- - name: "withExtension"
- args: [".txt"]
-
- # Internal file format; fields delimited by `|`.
- - id: "recordFormat"
- className: "org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat"
- configMethods:
- - name: "withFieldDelimiter"
- args: ["|"]
-
-# Topology configuration
-config:
- topology.workers: 2
-
-# Spout definitions
-spouts:
- - id: "kafka-spout"
- className: "org.apache.storm.kafka.KafkaSpout"
- constructorArgs:
- - ref: "spoutConfig"
- # Set to the number of partitions for the topic
- parallelism: 8
-
-# Bolt definitions
-bolts:
- - id: "logger-bolt"
- className: "com.microsoft.example.LoggerBolt"
- parallelism: 1
-
- - id: "hdfs-bolt"
- className: "org.apache.storm.hdfs.bolt.HdfsBolt"
- configMethods:
- - name: "withConfigKey"
- args: ["hdfs.config"]
- - name: "withFsUrl"
- args: ["${hdfs.url}"]
- - name: "withFileNameFormat"
- args: [ref: "fileNameFormat"]
- - name: "withRecordFormat"
- args: [ref: "recordFormat"]
- - name: "withRotationPolicy"
- args: [ref: "rotationPolicy"]
- - name: "withSyncPolicy"
- args: [ref: "syncPolicy"]
- parallelism: 1
-
-# Stream definitions
-
-streams:
- # Stream data to log
- - name: "kafka --> log" # name isn't used (placeholder for logging, UI, etc.)
- from: "kafka-spout"
- to: "logger-bolt"
- grouping:
- type: SHUFFLE
-
- # stream data to file
- - name: "kafka --> hdfs"
- from: "kafka-spout"
- to: "hdfs-bolt"
- grouping:
- type: SHUFFLE
-```
-
-### Property substitutions
-
-The project contains a file named `dev.properties` that is used to pass parameters used by the topologies. It defines the following properties:
-
-| dev.properties file | Description |
-| | |
-| `kafka.zookeeper.hosts` | The [Apache ZooKeeper](https://zookeeper.apache.org/) hosts for the Kafka cluster. |
-| `kafka.broker.hosts` | The Kafka broker hosts (worker nodes). |
-| `kafka.topic` | The Kafka topic that the topologies use. |
-| `hdfs.write.dir` | The directory that the Kafka-reader topology writes to. |
-| `hdfs.url` | The file system used by the Storm cluster. For Azure Storage accounts, use a value of `wasb://`. For Azure Data Lake Storage Gen2, use a value of `abfs://`. For Azure Data Lake Storage Gen1, use a value of `adl://`. |
-
-## Create the clusters
-
-Apache Kafka on HDInsight does not provide access to the Kafka brokers over the public internet. Anything that uses Kafka must be in the same Azure virtual network. In this tutorial, both the Kafka and Storm clusters are located in the same Azure virtual network.
-
-The following diagram shows how communication flows between Storm and Kafka:
--
-> [!NOTE]
-> Other services on the cluster such as SSH and [Apache Ambari](https://ambari.apache.org/) can be accessed over the internet. For more information on the public ports available with HDInsight, see [Ports and URIs used by HDInsight](hdinsight-hadoop-port-settings-for-services.md).
-
-To create an Azure Virtual Network, and then create the Kafka and Storm clusters within it, use the following steps:
-
-1. Use the following button to sign in to Azure and open the template in the Azure portal.
-
- <a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fhdinsight-storm-java-kafka%2Fmaster%2Fcreate-kafka-storm-clusters-in-vnet.json" target="_blank"><img src="./media/hdinsight-apache-storm-with-kafka/hdi-deploy-to-azure1.png" alt="Deploy to Azure button for new cluster"></a>
-
- The Azure Resource Manager template is located at **https://github.com/Azure-Samples/hdinsight-storm-java-kafka/blob/master/create-kafka-storm-clusters-in-vnet.json**. It creates the following resources:
-
- * Azure resource group
- * Azure Virtual Network
- * Azure Storage account
- * Kafka on HDInsight version 3.6 (three worker nodes)
- * Storm on HDInsight version 3.6 (three worker nodes)
-
- > [!WARNING]
- > To guarantee availability of Kafka on HDInsight, your cluster must contain at least three worker nodes. This template creates a Kafka cluster that contains three worker nodes.
-
-2. Use the following guidance to populate the entries on the **Custom deployment** section:
-
- 1. Use the following information to populate the entries on the **Customized template** section:
-
- | Setting | Value |
- | | |
- | Subscription | Your Azure subscription |
- | Resource group | The resource group that contains the resources. |
- | Location | The Azure region that the resources are created in. |
- | Kafka Cluster Name | The name of the Kafka cluster. |
- | Storm Cluster Name | The name of the Storm cluster. |
- | Cluster Login User Name | The admin user name for the clusters. |
- | Cluster Login Password | The admin user password for the clusters. |
- | SSH User Name | The SSH user to create for the clusters. |
- | SSH Password | The password for the SSH user. |
-
- :::image type="content" source="./media/hdinsight-apache-storm-with-kafka/storm-kafka-template.png" alt-text="Picture of the template parameters":::
-
-3. Read the **Terms and Conditions**, and then select **I agree to the terms and conditions stated above**.
-
-4. Finally, check **Pin to dashboard** and then select **Purchase**.
-
-> [!NOTE]
-> It can take up to 20 minutes to create the clusters.
-
-## Build the topology
-
-1. On your development environment, download the project from [https://github.com/Azure-Samples/hdinsight-storm-java-kafka](https://github.com/Azure-Samples/hdinsight-storm-java-kafka), open a command-line, and change directories to the location that you downloaded the project.
-
-2. From the **hdinsight-storm-java-kafka** directory, use the following command to compile the project and create a package for deployment:
-
- ```bash
- mvn clean package
- ```
-
- The package process creates a file named `KafkaTopology-1.0-SNAPSHOT.jar` in the `target` directory.
-
-3. Use the following commands to copy the package to your Storm on HDInsight cluster. Replace `sshuser` with the SSH user name for the cluster. Replace `stormclustername` with the name of the __Storm__ cluster.
-
- ```bash
- scp ./target/KafkaTopology-1.0-SNAPSHOT.jar sshuser@stormclustername-ssh.azurehdinsight.net:KafkaTopology-1.0-SNAPSHOT.jar
- ```
-
- When prompted, enter the password you used when creating the clusters.
-
-## Configure the topology
-
-1. Use one of the following methods to discover the Kafka broker hosts for the **Kafka** on HDInsight cluster:
-
- ```powershell
- $creds = Get-Credential -UserName "admin" -Message "Enter the HDInsight login"
- $clusterName = Read-Host -Prompt "Enter the Kafka cluster name"
- $resp = Invoke-WebRequest -Uri "https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/KAFKA/components/KAFKA_BROKER" `
- -Credential $creds `
- -UseBasicParsing
- $respObj = ConvertFrom-Json $resp.Content
- $brokerHosts = $respObj.host_components.HostRoles.host_name[0..1]
- ($brokerHosts -join ":9092,") + ":9092"
- ```
-
- > [!IMPORTANT]
- > The following Bash example assumes that `$CLUSTERNAME` contains the name of the __Kafka__ cluster name. It also assumes that [jq](https://stedolan.github.io/jq/) version 1.5 or greater is installed. When prompted, enter the password for the cluster login account.
-
- ```bash
- curl -su admin -G "https://$CLUSTERNAME.azurehdinsight.net/api/v1/clusters/$CLUSTERNAME/services/KAFKA/components/KAFKA_BROKER" | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2
- ```
-
- The value returned is similar to the following text:
-
- ```output
- <brokername1>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:9092,<brokername2>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:9092
- ```
-
- > [!IMPORTANT]
- > While there may be more than two broker hosts for your cluster, you do not need to provide a full list of all hosts to clients. One or two is enough.
-
-2. Use one of the following methods to discover the Zookeeper hosts for the __Kafka__ on HDInsight cluster:
-
- ```powershell
- $creds = Get-Credential -UserName "admin" -Message "Enter the HDInsight login"
- $clusterName = Read-Host -Prompt "Enter the Kafka cluster name"
- $resp = Invoke-WebRequest -Uri "https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/ZOOKEEPER/components/ZOOKEEPER_SERVER" `
- -Credential $creds `
- -UseBasicParsing
- $respObj = ConvertFrom-Json $resp.Content
- $zookeeperHosts = $respObj.host_components.HostRoles.host_name[0..1]
- ($zookeeperHosts -join ":2181,") + ":2181"
- ```
-
- > [!IMPORTANT]
- > The following Bash example assumes that `$CLUSTERNAME` contains the name of the __Kafka__ cluster. It also assumes that [jq](https://stedolan.github.io/jq/) is installed. When prompted, enter the password for the cluster login account.
-
- ```bash
- curl -su admin -G "https://$CLUSTERNAME.azurehdinsight.net/api/v1/clusters/$CLUSTERNAME/services/ZOOKEEPER/components/ZOOKEEPER_SERVER" | jq -r '["\(.host_components[].HostRoles.host_name):2181"] | join(",")' | cut -d',' -f1,2
- ```
-
- The value returned is similar to the following text:
-
- ```output
- <zookeepername1>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:2181,<zookeepername2>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:2181
- ```
-
- > [!IMPORTANT]
- > While there may be more than two Zookeeper nodes in your cluster, you do not need to provide a full list of all hosts to clients. One or two is enough.
-
- Save this value, as it is used later.
-
-3. Edit the `dev.properties` file in the root of the project. Add the Broker and Zookeeper hosts information for the __Kafka__ cluster to the matching lines in this file. The following example is configured using the sample values from the previous steps:
-
- ```bash
- kafka.zookeeper.hosts: <zookeepername1>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:2181,<zookeepername2>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:2181
- kafka.broker.hosts: <brokername1>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:9092,<brokername2>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:9092
- kafka.topic: stormtopic
- ```
-
- > [!IMPORTANT]
- > The `hdfs.url` entry is configured for a cluster that uses an Azure Storage account. To use this topology with a Storm cluster that uses Data Lake Storage, change this value from `wasb` to `adl`.
-
-4. Save the `dev.properties` file and then use the following command to upload it to the **Storm** cluster:
-
- ```bash
- scp dev.properties USERNAME@BASENAME-ssh.azurehdinsight.net:dev.properties
- ```
-
- Replace **USERNAME** with the SSH user name for the cluster. Replace **BASENAME** with the base name you used when creating the cluster.
-
-## Create the Kafka topic
-
-Kafka stores data into a _topic_. You must create the topic before starting the Storm topologies. To create the topic, use the following steps:
-
-1. Connect to the __Kafka__ cluster through SSH by using the following command. Replace `sshuser` with the SSH user name used when creating the cluster. Replace `kafkaclustername` with the name of the Kafka cluster:
-
- ```bash
- ssh sshuser@kafkaclustername-ssh.azurehdinsight.net
- ```
-
- When prompted, enter the password you used when creating the clusters.
-
- For information, see [Use SSH with HDInsight](hdinsight-hadoop-linux-use-ssh-unix.md).
-
-2. To create the Kafka topic, use the following command. Replace `$KAFKAZKHOSTS` with the Zookeeper host information you used when configuring the topology:
-
- ```bash
- /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 3 --partitions 8 --topic stormtopic --zookeeper $KAFKAZKHOSTS
- ```
-
- This command connects to Zookeeper for the Kafka cluster and creates a new topic named `stormtopic`. This topic is used by the Storm topologies.
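-
- Optionally, you can verify the topic configuration with the same Kafka CLI. This check is an illustrative addition and not part of the original steps:
-
- ```bash
- /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --describe --topic stormtopic --zookeeper $KAFKAZKHOSTS
- ```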
-
-## Start the writer
-
-1. Use the following command to connect to the **Storm** cluster using SSH. Replace `sshuser` with the SSH user name used when creating the cluster. Replace `stormclustername` with the name of the Storm cluster:
-
- ```bash
- ssh sshuser@stormclustername-ssh.azurehdinsight.net
- ```
-
- When prompted, enter the password you used when creating the clusters.
-
- For information, see [Use SSH with HDInsight](hdinsight-hadoop-linux-use-ssh-unix.md).
-
-2. From the SSH connection to the Storm cluster, use the following command to start the writer topology:
-
- ```bash
- storm jar KafkaTopology-1.0-SNAPSHOT.jar org.apache.storm.flux.Flux --remote -R /writer.yaml --filter dev.properties
- ```
-
- The parameters used with this command are:
-
- * `org.apache.storm.flux.Flux`: Use Flux to configure and run this topology.
-
- * `--remote`: Submit the topology to Nimbus. The topology is distributed across the worker nodes in the cluster.
-
- * `-R /writer.yaml`: Use the `writer.yaml` file to configure the topology. `-R` indicates that this resource is included in the jar file. It's in the root of the jar, so `/writer.yaml` is the path to it.
-
- * `--filter`: Populate entries in the `writer.yaml` topology using values in the `dev.properties` file. For example, the value of the `kafka.topic` entry in the file is used to replace the `${kafka.topic}` entry in the topology definition.
-
-## Start the reader
-
-1. From the SSH session to the Storm cluster, use the following command to start the reader topology:
-
- ```bash
- storm jar KafkaTopology-1.0-SNAPSHOT.jar org.apache.storm.flux.Flux --remote -R /reader.yaml --filter dev.properties
- ```
-
-2. Wait a minute and then use the following command to view the files created by the reader topology:
-
- ```bash
- hdfs dfs -ls /stormdata
- ```
-
- The output is similar to the following text:
-
- ```output
- Found 173 items
- -rw-r--r-- 1 storm supergroup 5137 2018-04-09 19:00 /stormdata/hdfs-bolt-4-0-1523300453088.txt
- -rw-r--r-- 1 storm supergroup 5128 2018-04-09 19:00 /stormdata/hdfs-bolt-4-1-1523300453624.txt
- -rw-r--r-- 1 storm supergroup 5131 2018-04-09 19:00 /stormdata/hdfs-bolt-4-10-1523300455170.txt
- ...
- ```
-
-3. To view the contents of the file, use the following command. Replace `filename.txt` with the name of a file:
-
- ```bash
- hdfs dfs -cat /stormdata/filename.txt
- ```
-
- The following text is an example of the file contents:
-
- > four score and seven years ago
- >
- > snow white and the seven dwarfs
- >
- > i am at two with nature
- >
- > snow white and the seven dwarfs
- >
- > i am at two with nature
- >
- > four score and seven years ago
- >
- > an apple a day keeps the doctor away
-
-## Stop the topologies
-
-From an SSH session to the Storm cluster, use the following commands to stop the Storm topologies:
-
- ```bash
- storm kill kafka-writer
- storm kill kafka-reader
- ```
-
-## Clean up resources
-
-To clean up the resources created by this tutorial, you can delete the resource group. Deleting the resource group also deletes the associated HDInsight cluster, and any other resources associated with the resource group.
-
-To remove the resource group using the Azure portal:
-
-1. In the Azure portal, expand the menu on the left side to open the menu of services, and then choose __Resource Groups__ to display the list of your resource groups.
-2. Locate the resource group to delete, and then right-click the __More__ button (...) on the right side of the listing.
-3. Select __Delete resource group__, and then confirm.
-
-## Next steps
-
-In this tutorial, you learned how to use an [Apache Storm](https://storm.apache.org/) topology to write to and read from [Apache Kafka](https://kafka.apache.org/) on HDInsight. You also learned how to store data to the [Apache Hadoop HDFS](https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html) compatible storage used by HDInsight.
-
-> [!div class="nextstepaction"]
-> [Use Apache Kafka Producer and Consumer API](kafk)
hdinsight Hdinsight Apps Install Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apps-install-applications.md
description: Learn how to install third-party Apache Hadoop applications on Azure HDInsight.
Previously updated : 08/26/2022 Last updated : 11/17/2022 # Install third-party Apache Hadoop applications on Azure HDInsight
The following list shows the published applications:
|[AtScale Intelligence Platform](https://aws.amazon.com/marketplace/pp/AtScale-AtScale-Intelligence-Platform/B07BWWHH18) |Hadoop |AtScale turns your HDInsight cluster into a scale-out OLAP server, allowing you to query billions of rows of data interactively using the BI tools you already know, own, and love – from Microsoft Excel, Power BI, Tableau Software to QlikView. | |[Datameer](https://azuremarketplace.microsoft.com/marketplace/apps/datameer.datameer) |Hadoop |Datameer's self-service scalable platform for preparing, exploring, and governing your data for analytics accelerates turning complex multisource data into valuable business-ready information, delivering faster, smarter insights at an enterprise-scale. | |[Dataiku DSS on HDInsight](https://azuremarketplace.microsoft.com/marketplace/apps/dataiku.dataiku-data-science-studio) |Hadoop, Spark |Dataiku DSS is an enterprise data science platform that lets data scientists and data analysts collaborate to design and run new data products and services more efficiently, turning raw data into impactful predictions. |
-|[WANdisco Fusion HDI App](https://community.wandisco.com/s/article/Use-WANdisco-Fusion-for-parallel-operation-of-ADLS-Gen1-and-Gen2) |Hadoop, Spark,HBase,Storm,Kafka |Keeping data consistent in a distributed environment is a massive data operations challenge. WANdisco Fusion, an enterprise-class software platform, solves this problem by enabling unstructured data consistency across any environment. |
+|[WANdisco Fusion HDI App](https://community.wandisco.com/s/article/Use-WANdisco-Fusion-for-parallel-operation-of-ADLS-Gen1-and-Gen2) |Hadoop, Spark,HBase,Kafka |Keeping data consistent in a distributed environment is a massive data operations challenge. WANdisco Fusion, an enterprise-class software platform, solves this problem by enabling unstructured data consistency across any environment. |
|H2O SparklingWater for HDInsight |Spark |H2O Sparkling Water supports the following distributed algorithms: GLM, Naïve Bayes, Distributed Random Forest, Gradient Boosting Machine, Deep Neural Networks, Deep learning, K-means, PCA, Generalized Low Rank Models, Anomaly Detection, Autoencoders. |
-|[Striim for Real-Time Data Integration to HDInsight](https://azuremarketplace.microsoft.com/marketplace/apps/striim.striimbyol) |Hadoop,HBase,Storm,Spark,Kafka |Striim (pronounced "stream") is an end-to-end streaming data integration + intelligence platform, enabling continuous ingestion, processing, and analytics of disparate data streams. |
+|[Striim for Real-Time Data Integration to HDInsight](https://azuremarketplace.microsoft.com/marketplace/apps/striim.striimbyol) |Hadoop,HBase,Spark,Kafka |Striim (pronounced "stream") is an end-to-end streaming data integration + intelligence platform, enabling continuous ingestion, processing, and analytics of disparate data streams. |
|[Jumbune Enterprise-Accelerating BigData Analytics](https://azuremarketplace.microsoft.com/marketplace/apps/impetus-infotech-india-pvt-ltd.impetus_jumbune) |Hadoop, Spark |At a high level, Jumbune assists enterprises by, 1. Accelerating Tez, MapReduce & Spark engine based Hive, Java, Scala workload performance. 2. Proactive Hadoop Cluster Monitoring, 3. Establishing Data Quality management on distributed file system. | |[Kyligence Enterprise](https://azuremarketplace.microsoft.com/marketplace/apps/kyligence.kyligence-cloud-saas) |Hadoop,HBase,Spark |Powered by Apache Kylin, Kyligence Enterprise Enables BI on Big Data. As an enterprise OLAP engine on Hadoop, Kyligence Enterprise empowers business analyst to architect BI on Hadoop with industry-standard data warehouse and BI methodology. | |[StreamSets Data Collector for HDInsight Cloud](https://azuremarketplace.microsoft.com/marketplace/apps/streamsets.streamsets-data-collector-hdinsight) |Hadoop,HBase,Spark,Kafka |StreamSets Data Collector is a lightweight, powerful engine that streams data in real time. Use Data Collector to route and process data in your data streams. It comes with a 30 day trial license. | |[Trifacta Wrangler Enterprise](https://azuremarketplace.microsoft.com/marketplace/apps/trifactainc1587522950142.trifactaazure) |Hadoop, Spark,HBase |Trifacta Wrangler Enterprise for HDInsight supports enterprise-wide data wrangling for any scale of data. The cost of running Trifacta on Azure is a combination of Trifacta subscription costs plus the Azure infrastructure costs for the virtual machines. |
-|[Unifi Data Platform](https://www.crunchbase.com/organization/unifi-software) |Hadoop,HBase,Storm,Spark |The Unifi Data Platform is a seamlessly integrated suite of self-service data tools designed to empower the business user to tackle data challenges that drive incremental revenue, reduce costs or operational complexity. |
+|[Unifi Data Platform](https://www.crunchbase.com/organization/unifi-software) |Hadoop,HBase,Spark |The Unifi Data Platform is a seamlessly integrated suite of self-service data tools designed to empower the business user to tackle data challenges that drive incremental revenue, reduce costs or operational complexity. |
The instructions provided in this article use Azure portal. You can also export the Azure Resource Manager template from the portal or obtain a copy of the Resource Manager template from vendors, and use Azure PowerShell and Azure Classic CLI to deploy the template. See [Create Apache Hadoop clusters on HDInsight using Resource Manager templates](hdinsight-hadoop-create-linux-clusters-arm-templates.md).
hdinsight Hdinsight Apps Publish Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apps-publish-applications.md
Title: Publish Azure HDInsight applications description: Learn how to create an HDInsight application, and then publish it in the Azure Marketplace.- Previously updated : 05/14/2018 Last updated : 11/17/2022 # Publish an HDInsight application in the Azure Marketplace
Two steps are involved in publishing applications in the Marketplace. First, def
"handler": "Microsoft.HDInsight", "version": "0.0.1-preview", "clusterFilters": {
- "types": ["Hadoop", "HBase", "Storm", "Spark"],
+ "types": ["Hadoop", "HBase", "Spark"],
"versions": ["3.6"] } }
Two steps are involved in publishing applications in the Marketplace. First, def
| Field | Description | Possible values | | | | |
-| types |The cluster types that the application is compatible with. |Hadoop, HBase, Storm, Spark (or any combination of these) |
+| types |The cluster types that the application is compatible with. |Hadoop, HBase, Spark (or any combination of these) |
| versions |The HDInsight cluster versions that the application is compatible with. |3.4 | ## Application installation script
hdinsight Hdinsight Autoscale Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-autoscale-clusters.md
description: Use the Autoscale feature to automatically scale Azure HDInsight clusters.
Previously updated : 02/11/2022 Last updated : 11/17/2022 # Automatically scale Azure HDInsight clusters
For scale-down, Autoscale issues a request to remove a certain number of nodes.
The following table describes the cluster types and versions that are compatible with the Autoscale feature.
-| Version | Spark | Hive | Interactive Query | HBase | Kafka | Storm | ML |
+| Version | Spark | Hive | Interactive Query | HBase | Kafka | ML |
||||||||
-| HDInsight 3.6 without ESP | Yes | Yes | Yes* | No | No | No | No |
-| HDInsight 4.0 without ESP | Yes | Yes | Yes* | No | No | No | No |
-| HDInsight 3.6 with ESP | Yes | Yes | Yes* | No | No | No | No |
-| HDInsight 4.0 with ESP | Yes | Yes | Yes* | No | No | No | No |
+| HDInsight 3.6 without ESP | Yes | Yes | Yes* | No | No | No |
+| HDInsight 4.0 without ESP | Yes | Yes | Yes* | No | No | No |
+| HDInsight 3.6 with ESP | Yes | Yes | Yes* | No | No | No |
+| HDInsight 4.0 with ESP | Yes | Yes | Yes* | No | No | No |
\* Interactive Query clusters can only be configured for schedule-based scaling, not load-based.
hdinsight Hdinsight For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-for-vscode.md
With Spark & Hive Tools for Visual Studio Code, you can submit interactive Hive
### Prerequisite for Pyspark interactive
-Note here that Jupyter Extension version (ms-jupyter): v2022.1.1001614873 and Python Extension version (ms-python): v2021.12.1559732655, python 3.6.x and 3.7.x are required for HDInsight interactive PySpark queries.
+Note here that Jupyter Extension version (ms-jupyter): v2022.1.1001614873 and Python Extension version (ms-python): v2021.12.1559732655, Python 3.6.x and 3.7.x are required for HDInsight interactive PySpark queries.
Users can perform PySpark interactive in the following ways.
hdinsight Hdinsight Hadoop Development Using Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-development-using-azure-resource-manager.md
This section provides pointers to more information on how to perform certain tas
| Submit an Apache Sqoop job using .NET SDK |See [Submit Apache Sqoop jobs](hadoop/apache-hadoop-use-sqoop-dotnet-sdk.md) | | List HDInsight clusters using .NET SDK |See [List HDInsight clusters](hdinsight-administer-use-dotnet-sdk.md#list-clusters) | | Scale HDInsight clusters using .NET SDK |See [Scale HDInsight clusters](hdinsight-administer-use-dotnet-sdk.md#scale-clusters) |
-| Grant/revoke access to HDInsight clusters using .NET SDK |See [Grant/revoke access to HDInsight clusters](hdinsight-administer-use-dotnet-sdk.md#grantrevoke-access) |
+| Grant/revoke access to HDInsight clusters using .NET SDK |See [Grant/revoke access to HDInsight clusters](hdinsight-administer-use-dotnet-sdk.md) |
| Update HTTP user credentials for HDInsight clusters using .NET SDK |See [Update HTTP user credentials for HDInsight clusters](hdinsight-administer-use-dotnet-sdk.md#update-http-user-credentials) | | Find the default storage account for HDInsight clusters using .NET SDK |See [Find the default storage account for HDInsight clusters](hdinsight-administer-use-dotnet-sdk.md#find-the-default-storage-account) | | Delete HDInsight clusters using .NET SDK |See [Delete HDInsight clusters](hdinsight-administer-use-dotnet-sdk.md#delete-clusters) |
industrial-iot Tutorial Publisher Performance Memory Tuning Opc Publisher https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-publisher-performance-memory-tuning-opc-publisher.md
Title: Microsoft OPC Publisher Performance and Memory Tuning description: In this tutorial, you learn how to tune the performance and memory of the OPC Publisher.--++ Last updated 3/22/2021
In this tutorial, you learn how to:
> * Adjust the performance > * Adjust the message flow to the memory resources
-When running OPC Publisher in production setups, network performance requirements (throughput and latency) and memory resources must be considered. OPC Publisher exposes the following command-line parameters to help meet these requirements:
+## Definitions
-* Message queue capacity (`mq` for version 2.5 and below, not available in version 2.6, `om` for version 2.7)
-* IoT Hub send interval (`si`)
-* IoT Hub message size (`ms`)
+### Data value changes
+An OPC UA node exposes a value that reflects a sensor measurement. If the sensor value changes, the OPC UA node value changes. This tutorial refers to this as a data value change. The OPC UA server tracks the time when the data change happened and reports it upstream as the `SourceTimestamp` together with the new value. The time base of this timestamp comes either from the OPC UA server itself or from a downstream system like a PLC or sensor. The OPC UA specification says: "The sourceTimestamp should be generated as close as possible to the source of the value but the timestamp needs to be set always by the same physical clock."
-## Adjusting IoT Hub send interval and IoT Hub message size
+### Data change notifications
+OPC Publisher establishes a session to an OPC UA server and creates subscriptions to monitor data value changes of OPC UA nodes. Depending on the configuration settings, the OPC UA server notifies OPC Publisher and reports data value changes. A single data change notification might contain more than one data value change.
-The `mq/om` parameter controls the upper limit of the capacity of the internal message queue. This queue buffers all messages before they are sent to IoT Hub. The default size of the queue is up to 2 MB for OPC Publisher version 2.5 and below and 4000 IoT Hub messages for version 2.7 (that is, if the setting for the IoT Hub message size is 256 KB, the size of the queue will be up to 1 GB). If OPC Publisher is not able to send messages to IoT Hub fast enough, the number of items in this queue increases. If this happens during test runs, one or both of the following can be done to mitigate:
+### Telemetry event
+A telemetry event is an event that is sent to the cloud. Depending on the messaging mode configured in OPC Publisher (`--mm`), this event contains:
+- for Samples mode (`--mm=Samples`): one data value change
+- for PubSub mode (`--mm=PubSub`): all data value changes in a data change notification
-* decrease the IoT Hub send interval (`si`)
+### Latency
+Latency in the context of this tutorial is the time difference between the `SourceTimestamp` of a data value change and when the corresponding telemetry event is queued in IoT Hub.
-* increase the IoT Hub message size (`ms`, the maximum this can be set to is 256 KB). In version 2.7 or later the default value is already set to 256 KB.
+## Telemetry event creation
+A telemetry event emitted by OPC Publisher is triggered by a data value change of a node value in an OPC UA server. OPC Publisher uses OPC UA subscriptions to get notified of those changes. The OPC UA subscription mechanism can be configured via a few parameters that control the timing and content of those notifications. These settings can be configured in OPC Publisher via a JSON configuration file (a sketch follows the list below) as well as via a Direct Method API (only version 2.8.2 and above). The settings supported by OPC Publisher per OPC UA node are:
+* Sampling interval
+* Publishing interval
+* Queue size
+* Heartbeat interval
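+
+A minimal sketch of such a JSON configuration (illustrative only: the endpoint URL and node identifier are placeholders, and the property names assume the standalone published-nodes schema of OPC Publisher 2.8 and later):
+
+```json
+[
+  {
+    "EndpointUrl": "opc.tcp://opcua-server:50000",
+    "UseSecurity": false,
+    "OpcNodes": [
+      {
+        "Id": "ns=2;s=Machine1.Temperature",
+        "OpcSamplingInterval": 1000,
+        "OpcPublishingInterval": 5000,
+        "QueueSize": 10,
+        "HeartbeatInterval": 60
+      }
+    ]
+  }
+]
+```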
-If the queue keeps growing even though the `si` and `ms` parameters have been adjusted, eventually the maximum queue capacity will be reached and messages will be lost. This is due to the fact that both the `si` and `ms` parameter have physical limits and the Internet connection between OPC Publisher and IoT Hub is not fast enough for the number of messages that must be sent in a given scenario. In that case, only setting up several, parallel OPC Publishers will help. The `mq/om` parameter also has the biggest impact on the memory consumption by OPC Publisher.
+This [section](https://reference.opcfoundation.org/Core/Part4/v104/5.12.1/) of the OPC UA Specification describes what effect the sampling interval and queue size have on the notifications. The timing is controlled by the publishing interval: it specifies the interval in which notifications are reported by the OPC UA server to OPC Publisher. The publishing interval is a parameter set during the [subscription creation process](https://reference.opcfoundation.org/Core/Part4/v104/5.13.2/).
-The `si` parameter forces OPC Publisher to send messages to IoT Hub at the specified interval. A message is sent either when the maximum IoT Hub message size of 256 KB of data is available (triggering the send interval to reset) or when the specified interval time has passed.
+OPC UA servers often handle higher priority tasks like controlling machinery. For this reason, the settings above are requests to the OPC UA server, which may return revised values if it can't or doesn't want to support the requested value.
-The `ms` parameter enables batching of messages sent to IoT Hub. In most network setups, the latency of sending a single message to IoT Hub is high, compared to the time it takes to transmit the payload. This is mainly due to Quality of Service (QoS) requirements, since messages are acknowledged only once they have been processed by IoT Hub). Therefore, if a delay for the data to arrive at IoT Hub is acceptable, OPC Publisher should be configured to use the maximal message size of 256 KB by setting the `ms` parameter to 0. It is also the most cost-effective way to use OPC Publisher.
+For OPC UA nodes whose values don't change at all, OPC Publisher supports configuration of a heartbeat interval. The heartbeat interval is configured similarly to the other settings for a node, but isn't part of the OPC UA specification. Configuring a heartbeat interval can be useful for scenarios that involve time series databases and need regular telemetry events to populate the time series. Starting with OPC Publisher v2.8.2, the heartbeat interval must be a multiple of the publishing interval of the OPC UA node, due to the internal implementation. The `SourceTimestamp` of a telemetry event generated by the heartbeat implementation is updated with the OPC UA server time when the heartbeat triggers.
-In version 2.5 the default configuration sends data to IoT Hub every 10 seconds (`si=10`) or when 256 KB of IoT Hub message data is available (`ms=0`). This adds a maximum delay of 10 seconds, but has low probability of losing data because of the large message size. In version 2.7 or later the default configuration is 500 ms for orchestrated mode and 0 for standalone mode (no send interval). The metric `monitored item notifications enqueue failure` in OPC Publisher version 2.5 and below and `messages lost` in OPC Publisher version 2.7 shows how many messages were lost.
-When both `si` and `ms` parameters are set to 0, OPC Publisher sends a message to IoT Hub as soon as data is available. This results in an average IoT Hub message size of just over 200 bytes. However, the advantage of this configuration is that OPC Publisher sends the data from the connected asset without delay. The number of lost messages will be high for use cases where a large amount of data must be published and hence this is not recommended for these scenarios.
+## OPC Publisher command line options impacting latency, performance and cost
-To measure the performance of OPC Publisher, the `di` parameter can be used to print performance metrics to the trace log in the interval specified (in seconds).
+To run OPC Publisher in production, network performance requirements (throughput and latency) and memory resources must be considered. OPC Publisher exposes the following command-line options to help meet these requirements:
+
+* Message queue capacity `--mq` (or `--monitoreditemqueuecapacity`) for version 2.5 and below (default: 8192), not available in version 2.6, `--om` (or `--maxoutgressmessages`) for version 2.7 and higher (default: 4096): Configures the internal buffer used to buffer telemetry events. If OPC Publisher can't send telemetry events fast enough, it buffers them. This option configures how large this buffer is. In version 2.5 and below, this option specifies how many telemetry events can be buffered, whereas in version 2.7 and higher it specifies the number of IoT Hub messages.
+
+* Send interval `--si` or `--iothubsendinterval` (in seconds, default: 10): Configures the interval after which all available telemetry events are sent. Whenever a message is sent, the send interval timer is reset. If the send interval is set to 0, this send trigger mechanism is disabled. In version 2.7 or higher, `--BatchTriggerInterval` (in .NET TimeSpan format, default: 00:00:10) has a similar effect.
+
+* Message size `--ms` or `--iothubmessagesize` (in bytes, default: 256*1024) for version 2.5 and below; in version 2.7 and higher, `--IoTHubMaxMessageSize` is additionally available.
+
+* Batch size for version 2.8.2 and higher `--bs`, `--batchsize`, or `--BatchSize` (default: 50): Configures a send trigger by specifying how many notifications OPC Publisher must receive from the configured OPC UA subscriptions before sending (one notification may contain multiple data change events). Even with the batch size set to 1, an IoT Hub message can contain multiple data value changes.
+
+## Latency considerations
+
+What is typically seen as latency is the time difference between the `iothub-enqueuedtime` of the [device to cloud message](https://learn.microsoft.com/azure/iot-hub/iot-hub-devguide-messages-construct) and the `SourceTimestamp` field of an OPC UA telemetry event. There are multiple factors that contribute to the latency:
+* The `SourceTimestamp` of the OPC UA telemetry event is [defined by the OPC UA Specification](https://reference.opcfoundation.org/Core/Part4/v104/7.7.3/) to be as close as possible to the source of the value. The origin of `SourceTimestamp` is highly dependent on the setup between the sensor and the OPC UA server. Independent of the setup, it's important to ensure that the time source is synchronized precisely, otherwise the latency calculation won't be correct.
+* It's important that the systems and the interconnection between the sensor and the IoT Edge host system where OPC Publisher runs are stable and don't introduce latency.
+* The configuration of the OPC UA nodes to publish and the effect of OPC Publisher command line options on latency are discussed below.
+* OPC Publisher sends messages via the IoT Edge edgeHub to IoT Hub. The latency added by internal communication is typically low.
+* Finally, the network connectivity to the IoT Hub cloud service adds latency. Ensure that the network connection is stable, otherwise it leads to outliers in the telemetry event latency.
+
+### Effect of the node publishing interval
+
+Data value changes of an OPC UA node are reported to OPC Publisher at an interval that can be configured at the node level. For a data value of a node, this means the maximum latency introduced is lower than the configured publishing interval. Since the actual latency of a node depends on the timing of the actual value change, the introduced latency isn't constant, but has an upper limit of the node's configured publishing interval.
+
+### Effect of the node heartbeat interval
+
+Creation of telemetry events triggered by the heartbeat interval setting is done in OPC Publisher. It still uses the `SourceTimestamp` of the OPC UA server in the telemetry event. The heartbeat interval can introduce latency (similar to the publishing interval), which has an upper limit of the heartbeat interval. If the OPC UA node value never changes, the introduced latency is constant and equal to the configured heartbeat interval.
+
+### Effect of the Send interval command line option
+
+OPC Publisher's send interval configuration will trigger sending all queued telemetry events to IoT Hub. Depending on when OPC Publisher receives data change notifications, a maximum latency of the configured send interval could be introduced. It means that all data change notifications received in this period of time are batched. The configuration of the message size and batch size will still be active and can trigger sending all queued telemetry events to IoT Hub before the send interval has passed. In this case, the send interval timer will be reset.
+
+### Effect of the Batch size command line option
+
+This command line option triggers after OPC Publisher has received the configured number of data change notifications (either from the OPC UA server because the publishing interval has passed, or internally created due to the heartbeat configuration) and then sends all queued telemetry events to IoT Hub. As pointed out earlier, notifications from the OPC UA server can contain multiple data change events, whereas heartbeat-created data change events count as one data change notification per node. The configuration of the message size and send interval is still taken into account and can trigger sending all queued telemetry events to IoT Hub. In this case, the batch size counter is reset. The batch size adds a nondeterministic latency depending on the node configuration.
+
+### Effect of the Message size command line option
+
+The message size option sets the maximum size of the message that is sent to IoT Hub. It doesn't add any latency, but controls how many messages are created from the queued telemetry events when sending is triggered. If no other option is set, this setting can also trigger sending as soon as the size of all queued telemetry events hits the message size.
++
+## Optimizations
+
+The system behaves differently depending on the requirements of the use case, the volatility of the node values, the OPC UA node configuration in OPC Publisher, and the OPC Publisher command line options. Use the OPC UA node configuration and the command line options to ensure the requirements of the use case are met.
+
+### Optimizing for low latency
+The goal here is to minimize the difference between the enqueued time of a telemetry event and the corresponding `SourceTimestamp` of the data value change. To achieve this goal, the publishing interval or heartbeat interval of the node should be minimized, taking the maximal accepted latency into account. Additionally, the send interval (`--si`) should be set to 0 and the batch size (`--bs`) should be set to 1.
+With those settings, a data value change is sent as a telemetry event to the cloud as soon as OPC Publisher gets notified about it.
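+
+As an illustrative sketch (assuming OPC Publisher 2.8+ deployed as an IoT Edge module in standalone mode; the published-nodes file path and the `--pf` option are assumptions for this example), the module's container create options could pass:
+
+```json
+{
+  "Cmd": [
+    "--pf=/appdata/pn.json",
+    "--si=0",
+    "--bs=1"
+  ]
+}
+```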
+
+### Optimizing to minimize number of cloud messages and maximize throughput
+To minimize the number of messages being sent to the cloud, the payload of each message sent must be maximized. The `--ms` command line option defaults to the maximal value (256*1024). Removing other send triggers ensures that a message is only sent when the message size is hit. Setting the send interval `--si` to 0 and the batch size `--bs` option to 262144 enables using the message size to trigger sending a message to the cloud.
+
+Another side effect of this is that throughput is maximized, because of how the time to send a message to the cloud is made up. It consists of several parts:
+- Establish the connection if not already established
+- Send the payload
+- Wait for acknowledgment after IoT Hub has processed and stored the message
+Sending the payload is small compared to the other parts (especially waiting for the acknowledgment). Given this fact, it's faster to send one message with a large payload than multiple messages with smaller payloads (where the sum of the smaller payloads equals the large payload size). That means maximizing the payload size also maximizes the overall throughput.
+
+### Optimizing for low cost
+Minimizing the cost of the cloud ingest requires understanding how [IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/) works. The ingested data is one part of the price and is counted in 4-KB blocks. Depending on the node configuration and data value change frequency, the `--ms` parameter should be set to a multiple of 4096.
+To also tune the solution for the required latency, the send interval `--si` can be adjusted. The batch size `--bs` should be set to 262144 so that it doesn't trigger sending a message to IoT Hub.
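+
+As an illustrative sketch (same assumptions as the earlier create options example; the 30-second send interval is an arbitrary value representing the latency acceptable for the use case, and 262144 = 64 * 4096, so it's a multiple of the 4-KB billing block):
+
+```json
+{
+  "Cmd": [
+    "--pf=/appdata/pn.json",
+    "--ms=262144",
+    "--si=30",
+    "--bs=262144"
+  ]
+}
+```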
+
+## High load conditions
+OPC Publisher maintains an internal queue for all data change notifications. The `--mq` (version 2.5 and below) and `--om` (version 2.7 and higher) parameters control the upper limit of the capacity of this internal message queue. The queue buffers all messages before they're sent to IoT Hub. The default size of the queue is up to 2 MB for OPC Publisher version 2.5 and below and 4000 IoT Hub messages for version 2.7 (this results in a queue size of up to 1 GB if the IoT Hub message size is set to 256 KB). These parameters also have the biggest effect on the memory consumption of OPC Publisher. If OPC Publisher isn't able to send messages to IoT Hub fast enough, the number of items in this queue increases until it hits its size limit. If this happens, one or both of the following can be done to mitigate:
+* decrease the IoT Hub send interval `--si`
+* increase the IoT Hub message size `--ms`
+
+If the queue keeps growing even though `--si` and `--ms` have been adjusted, the queue capacity will eventually be reached and messages will be discarded. The reason can be that the time it takes to send a message to IoT Hub doesn't provide the required throughput. Since this time is made up of multiple parts, validate the following areas to understand whether there's a bottleneck:
+- Validation that the IoT Edge host network connection to the IoT Hub is stable and has low latency.
+- Validation that the modules running in IoT Edge (OPC Publisher, edgeHub, and others) don't hit any limits for CPU and memory consumption. Additionally use of the [IoT Edge metrics collector](https://learn.microsoft.com/azure/iot-edge/how-to-collect-and-transport-metrics?view=iotedge-1.4&tabs=iothub) can give insights on resource usage of the system.
+- Validation that the time to ingest a message from an IoT Edge module not using any OPC UA data meets expectations, even with an active workload.
+
+If the capacity of the internal message queue is exhausted and there are still incoming notifications from the OPC UA server, data change notifications are discarded. The diagnostics output shows the number of discarded messages.
## Next steps
-Now that you have learned how to tune the performance and memory of the OPC Publisher, you can check out the OPC Publisher GitHub repository for further resources:
+Now that you've learned how to tune the performance and memory of the OPC Publisher, you can check out the OPC Publisher GitHub repository for further resources:
> [!div class="nextstepaction"] > [OPC Publisher GitHub repository](https://github.com/Azure/Industrial-IoT)
iot-central Overview Iot Central Api Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-api-tour.md
This article introduces you to Azure IoT Central REST API. Use the API to create
The REST API operations are grouped into the: -- *Data plane* operations that let you work with resources inside IoT Central applications. Data plane operations let you automate tasks that can also be completed using the IoT Central UI. Currently, there are [generally available](/rest/api/iotcentral/2022-07-31dataplane/api-tokens) and [preview](/rest/api/iotcentral/2022-06-30-previewdataplane/api-tokens) versions of the data plane API.
+- *Data plane* operations that let you work with resources inside IoT Central applications. Data plane operations let you automate tasks that can also be completed using the IoT Central UI. Currently, there are [generally available](/rest/api/iotcentral/2022-07-31dataplane/api-tokens) and [preview](/rest/api/iotcentral/2022-10-31-previewdataplane/api-tokens) versions of the data plane API.
- *Control plane* operations that let you work with the Azure resources associated with IoT Central applications. Control plane operations let you automate tasks that can also be completed in the Azure portal. ## Data plane operations
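
As an illustrative sketch of a data plane call (the application subdomain and the API token value are placeholders), listing the devices in an application could look like this:

```bash
# List devices using the generally available data plane API version.
curl -H "Authorization: SharedAccessSignature sr=..." \
  "https://myapp.azureiotcentral.com/api/devices?api-version=2022-07-31"
```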
iot-develop Quickstart Devkit Stm B L475e Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e-iot-hub.md
+
+ Title: Connect an STMicroelectronics B-L475E-IOT01A to Azure IoT Hub quickstart
+description: Use Azure RTOS embedded software to connect an STMicroelectronics B-L475E-IOT01A device to Azure IoT Hub and send telemetry.
+++
+ms.devlang: c
+ Last updated : 11/18/2022++
+# Quickstart: Connect an STMicroelectronics B-L475E-IOT01A Discovery kit to IoT Hub
+
+**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br>
+**Total completion time**: 30 minutes
+
+[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/STMicroelectronics/B-L475E-IOT01A)
+
+In this quickstart, you use Azure RTOS to connect the STMicroelectronics [B-L475E-IOT01A](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html) Discovery kit (from now on, the STM DevKit) to Azure IoT.
+
+You'll complete the following tasks:
+
+* Install a set of embedded development tools for programming the STM DevKit in C
+* Build an image and flash it onto the STM DevKit
+* Use Azure CLI to create and manage an Azure IoT hub that the STM DevKit will securely connect to
+* Use Azure IoT Explorer to register a device with your IoT hub, view device properties, view device telemetry, and call direct commands on the device
+
+## Prerequisites
+
+* A PC running Windows 10 or Windows 11
+* [Git](https://git-scm.com/downloads) for cloning the repository
+* Azure CLI. You have two options for running Azure CLI commands in this quickstart:
+ * Use the Azure Cloud Shell, an interactive shell that runs CLI commands in your browser. This option is recommended because you don't need to install anything. If you're using Cloud Shell for the first time, sign in to the [Azure portal](https://portal.azure.com). Follow the steps in [Cloud Shell quickstart](../cloud-shell/quickstart.md) to **Start Cloud Shell** and **Select the Bash environment**.
+ * Optionally, run Azure CLI on your local machine. If Azure CLI is already installed, run `az upgrade` to upgrade the CLI and extensions to the current version. To install Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+* [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer/releases): Cross-platform utility to monitor and manage Azure IoT
+* Hardware
+
+ * The [B-L475E-IOT01A](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html) (STM DevKit)
+ * Wi-Fi 2.4 GHz
+ * USB 2.0 A male to Micro USB male cable
+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prepare the development environment
+
+To set up your development environment, first you clone a GitHub repo that contains all the assets you need for the quickstart. Then you install a set of programming tools.
+
+### Clone the repo for the quickstart
+
+Clone the following repo to download all sample device code, setup scripts, and offline versions of the documentation. If you previously cloned this repo in another quickstart, you don't need to do it again.
+
+To clone the repo, run the following command:
+
+```shell
+git clone --recursive https://github.com/azure-rtos/getting-started.git
+```
+
+### Install the tools
+
+The cloned repo contains a setup script that installs and configures the required tools. If you installed these tools in another embedded device quickstart, you don't need to do it again.
+
+> [!NOTE]
+> The setup script installs the following tools:
+> * [CMake](https://cmake.org): Build
+> * [ARM GCC](https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm): Compile
+> * [Termite](https://www.compuphase.com/software_termite.htm): Monitor serial port output for connected devices
+
+To install the tools:
+
+1. From File Explorer, navigate to the following path in the repo and run the setup script named *get-toolchain.bat*:
+
+ *getting-started\tools\get-toolchain.bat*
+
+1. After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows.
+1. Run the following code to confirm that CMake version 3.14 or later is installed.
+
+ ```shell
+ cmake --version
+ ```
+
+## Create the cloud components
+
+### Create an IoT hub
+
+You can use Azure CLI to create an IoT hub that handles events and messaging for your device.
+
+To create an IoT hub:
+
+1. Launch your CLI app. To run the CLI commands in the rest of this quickstart, copy the command syntax, paste it into your CLI app, edit variable values, and press Enter.
+ - If you're using Cloud Shell, right-click the link for [Cloud Shell](https://shell.azure.com/bash), and select the option to open in a new tab.
+ - If you're using Azure CLI locally, start your CLI console app and sign in to Azure CLI.
+
+1. Run [az extension add](/cli/azure/extension#az-extension-add) to install or upgrade the *azure-iot* extension to the current version.
+
+ ```azurecli-interactive
+ az extension add --upgrade --name azure-iot
+ ```
+
+1. Run the [az group create](/cli/azure/group#az-group-create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *centralus* region.
+
+ > [!NOTE]
+ > You can optionally set an alternate `location`. To see available locations, run [az account list-locations](/cli/azure/account#az-account-list-locations).
+
+ ```azurecli
+ az group create --name MyResourceGroup --location centralus
+ ```
+
+1. Run the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command to create an IoT hub. It might take a few minutes to create an IoT hub.
+
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your unique IoT hub name.
+
+ The `--sku F1` parameter creates the IoT hub in the Free tier. Free tier hubs have a limited feature set and are used for proof of concept applications. For more information on IoT Hub tiers, features, and pricing, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub).
+
+ ```azurecli
+ az iot hub create --resource-group MyResourceGroup --name {YourIoTHubName} --sku F1 --partition-count 2
+ ```
+
+1. After the IoT hub is created, view the JSON output in the console, and copy the `hostName` value to use in a later step. The `hostName` value looks like the following example:
+
+ `{Your IoT hub name}.azure-devices.net`
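   If you close the console and need the `hostName` value again later, you can query it at any time. This is an optional sketch, assuming the same `{YourIoTHubName}` placeholder:

   ```azurecli
   az iot hub show --name {YourIoTHubName} --query properties.hostName --output tsv
   ```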
+
+### Configure IoT Explorer
+
+In the rest of this quickstart, you'll use IoT Explorer to register a device to your IoT hub, to view the device properties and telemetry, and to send commands to your device. In this section, you configure IoT Explorer to connect to the IoT hub you created, and to read plug and play models from the public model repository.
+
+To add a connection to your IoT hub:
+
+1. In your CLI app, run the [az iot hub connection-string show](/cli/azure/iot/hub/connection-string#az-iot-hub-connection-string-show) command to get the connection string for your IoT hub.
+
+ ```azurecli
+ az iot hub connection-string show --hub-name {YourIoTHubName}
+ ```
+
+1. Copy the connection string without the surrounding quotation characters.
+1. In Azure IoT Explorer, select **IoT hubs** on the left menu.
+1. Select **+ Add connection**.
+1. Paste the connection string into the **Connection string** box.
+1. Select **Save**.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e-iot-hub/iot-explorer-add-connection.png" alt-text="Screenshot of adding a connection in IoT Explorer.":::
+
+If the connection succeeds, IoT Explorer switches to the **Devices** view.
+
+To add the public model repository:
+
+1. In IoT Explorer, select **Home** to return to the home view.
+1. On the left menu, select **IoT Plug and Play Settings**, then select **+Add** and select **Public repository** from the drop-down menu.
+1. An entry appears for the public model repository at `https://devicemodels.azure.com`.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e-iot-hub/iot-explorer-add-public-repository.png" alt-text="Screenshot of adding the public model repository in IoT Explorer.":::
+
+1. Select **Save**.
+
+### Register a device
+
+In this section, you create a new device instance and register it with the IoT hub you created. You'll use the connection information for the newly registered device to securely connect your physical device in a later section.
+
+To register a device:
+
+1. From the home view in IoT Explorer, select **IoT hubs**.
+1. The connection you previously added should appear. Select **View devices in this hub** below the connection properties.
+1. Select **+ New** and enter a device ID for your device; for example, `mydevice`. Leave all other properties the same.
+1. Select **Create**.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e-iot-hub/iot-explorer-device-created.png" alt-text="Screenshot of Azure IoT Explorer device identity.":::
+
+1. Use the copy buttons to copy the **Device ID** and **Primary key** fields.
+
+Before continuing to the next section, save each of the following values retrieved from earlier steps to a safe location. You use these values in the next section to configure your device.
+
+* `hostName`
+* `deviceId`
+* `primaryKey`
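If you prefer to register the device from the CLI instead of IoT Explorer, the following sketch is an equivalent flow. It assumes the same `mydevice` ID and the `{YourIoTHubName}` placeholder; the primary key appears in the `authentication.symmetricKey.primaryKey` field of the device identity:

```azurecli
az iot hub device-identity create --device-id mydevice --hub-name {YourIoTHubName}
az iot hub device-identity show --device-id mydevice --hub-name {YourIoTHubName} --query authentication.symmetricKey.primaryKey --output tsv
```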
++
+## Prepare the device
+
+To connect the STM DevKit to Azure, you'll modify a configuration file for Wi-Fi and Azure IoT settings, rebuild the image, and flash the image to the device.
+
+### Add configuration
+
+1. Open the following file in a text editor:
+
+ *getting-started\STMicroelectronics\B-L475E-IOT01A\app\azure_config.h*
+
+1. Comment out the following line near the top of the file as shown:
+
+ ```c
+ // #define ENABLE_DPS
+ ```
+
+1. Set the Wi-Fi constants to the following values from your local environment.
+
+ |Constant name|Value|
+ |-|--|
+ |`WIFI_SSID` |{*Your Wi-Fi SSID*}|
+ |`WIFI_PASSWORD` |{*Your Wi-Fi password*}|
+ |`WIFI_MODE` |{*One of the enumerated Wi-Fi mode values in the file*}|
+
+1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources.
+
+ |Constant name|Value|
+ |-|--|
+ |`IOT_HUB_HOSTNAME` |{*Your Iot hub hostName value*}|
+ |`IOT_HUB_DEVICE_ID` |{*Your Device ID value*}|
+ |`IOT_DEVICE_SAS_KEY` |{*Your Primary key value*}|
+
+1. Save and close the file.
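As a reference, the edited section of *azure_config.h* might look similar to the following sketch. The exact macro layout can differ between versions of the repo, and the SSID, password, Wi-Fi mode, and key values shown here are placeholders:

```c
// Wi-Fi settings (values from your local environment)
#define WIFI_SSID     "MyHomeNetwork"
#define WIFI_PASSWORD "MyWiFiPassword"
#define WIFI_MODE     WPA2_PSK_AES   // one of the Wi-Fi mode values enumerated in the file

// Azure IoT Hub settings (values saved when you created the Azure resources)
#define IOT_HUB_HOSTNAME    "MyIotHub.azure-devices.net"
#define IOT_HUB_DEVICE_ID   "mydevice"
#define IOT_DEVICE_SAS_KEY  "dGhpcyBpcyBhIHBsYWNlaG9sZGVyIGtleQ=="
```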
+
+### Build the image
+
+1. In your console or in File Explorer, run the batch file *rebuild.bat* at the following path to build the image:
+
+ *getting-started\STMicroelectronics\B-L475E-IOT01A\tools\rebuild.bat*
+
+2. After the build completes, confirm that the binary file was created in the following path:
+
+ *getting-started\STMicroelectronics\B-L475E-IOT01A\build\app\stm32l475_azure_iot.bin*
+
+### Flash the image
+
+1. On the STM DevKit MCU, locate the **Reset** button (1), the Micro USB port (2), which is labeled **USB STLink**, and the board part number (3). You'll refer to these items in the next steps. All of them are highlighted in the following picture:
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e-iot-hub/stm-devkit-board-475.png" alt-text="Photo that shows key components on the STM DevKit board.":::
+
+1. Connect the Micro USB cable to the **USB STLINK** port on the STM DevKit, and then connect it to your computer.
+
+ > [!NOTE]
+ > For detailed setup information about the STM DevKit, see the instructions on the packaging, or see [B-L475E-IOT01A Resources](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html#resource)
+
+1. In File Explorer, find the binary files that you created in the previous section.
+
+1. Copy the binary file named *stm32l475_azure_iot.bin*.
+
+1. In File Explorer, find the STM DevKit that's connected to your computer. The device appears as a drive on your system with the drive label **DIS_L4IOT**.
+
+1. Paste the binary file into the root folder of the STM DevKit. Flashing starts automatically and completes in a few seconds.
+
+ > [!NOTE]
+ > During the flashing process, an LED toggles between red and green on the STM DevKit.
+
+### Confirm device connection details
+
+You can use the **Termite** app to monitor communication and confirm that your device is set up correctly.
+
+1. Start **Termite**.
+ > [!TIP]
+ > If you are unable to connect Termite to your devkit, install the [ST-LINK driver](https://www.st.com/en/development-tools/stsw-link009.html) and try again. See [Troubleshooting](troubleshoot-embedded-device-quickstarts.md) for additional steps.
+1. Select **Settings**.
+1. In the **Serial port settings** dialog, check the following settings and update if needed:
+ * **Baud rate**: 115,200
+ * **Port**: The port that your STM DevKit is connected to. If the dropdown lists multiple ports, open Windows **Device Manager** and view **Ports** to identify the correct port to use.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e-iot-hub/termite-settings.png" alt-text="Screenshot of serial port settings in the Termite app.":::
+
+1. Select OK.
+1. Press the **Reset** button on the device. The button is black and is labeled on the device.
+1. In the **Termite** app, check the following checkpoint values to confirm that the device is initialized and connected to Azure IoT.
+
+ ```output
+ Starting Azure thread
+
+
+ Initializing WiFi
+ Module: ISM43362-M3G-L44-SPI
+ MAC address: ****************
+ Firmware revision: C3.5.2.5.STM
+ SUCCESS: WiFi initialized
+
+ Connecting WiFi
+ Connecting to SSID 'iot'
+ Attempt 1...
+ SUCCESS: WiFi connected
+
+ Initializing DHCP
+ IP address: 192.168.0.35
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
+ SUCCESS: DHCP initialized
+
+ Initializing DNS client
+ DNS address 1: ************
+ DNS address 2: ************
+ SUCCESS: DNS client initialized
+
+ Initializing SNTP time sync
+ SNTP server 0.pool.ntp.org
+ SNTP time update: Nov 18, 2022 0:56:56.127 UTC
+ SUCCESS: SNTP initialized
+
+ Initializing Azure IoT Hub client
+ Hub hostname: *******.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsgstml4s5;2
+ SUCCESS: Connected to IoT Hub
+ ```
+ > [!IMPORTANT]
+ > If the DNS client initialization fails and notifies you that the Wi-Fi firmware is out of date, you'll need to update the Wi-Fi module firmware. Download and install the [Inventek ISM 43362 Wi-Fi module firmware update](https://www.st.com/resource/en/utilities/inventek_fw_updater.zip). Then press the **Reset** button on the device to recheck your connection, and continue with this quickstart.
++
+Keep Termite open to monitor device output in the following steps.
+
+## View device properties
+
+You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you'll use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the STM DevKit. These capabilities rely on the device model published for the STM DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+
+To access IoT Plug and Play components for the device in IoT Explorer:
+
+1. From the home view in IoT Explorer, select **IoT hubs**, then select **View devices in this hub**.
+1. Select your device.
+1. Select **IoT Plug and Play components**.
+1. Select **Default component**. IoT Explorer displays the IoT Plug and Play components that are implemented on your device.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e-iot-hub/iot-explorer-default-component-view.png" alt-text="Screenshot of STM DevKit default component in IoT Explorer.":::
+
+1. On the **Interface** tab, view the JSON content in the device model **Description**. The JSON contains configuration details for each of the IoT Plug and Play components in the device model.
+
+ > [!NOTE]
+ > The name and description for the default component refer to the STM L4S5 board. The STM L4S5 plug and play device model is also used for the STM L475E board in this quickstart.
+
+ Each tab in IoT Explorer corresponds to one of the IoT Plug and Play components in the device model.
+
+ | Tab | Type | Name | Description |
+ |||||
+ | **Interface** | Interface | `STM Getting Started Guide` | Example model for the STM DevKit |
+ | **Properties (read-only)** | Property | `ledState` | Whether the LED is on or off |
+ | **Properties (writable)** | Property | `telemetryInterval` | The interval at which the device sends telemetry |
+ | **Commands** | Command | `setLedState` | Turn the LED on or off |
+
+To view device properties using Azure IoT Explorer:
+
+1. Select the **Properties (read-only)** tab. There's a single read-only property that indicates whether the LED is on or off.
+1. Select the **Properties (writable)** tab. It displays the interval at which telemetry is sent.
+1. Change the `telemetryInterval` to *5*, and then select **Update desired value**. Your device now uses this interval to send telemetry.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e-iot-hub/iot-explorer-set-telemetry-interval.png" alt-text="Screenshot of setting telemetry interval on STM DevKit in IoT Explorer.":::
+
+1. IoT Explorer responds with a notification. You can also observe the update in Termite.
+1. Set the telemetry interval back to 10.
+
+To use Azure CLI to view device properties:
+
+1. Run the [az iot hub device-identity show](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-show) command.
+
+ ```azurecli
+ az iot hub device-identity show --device-id mydevice --hub-name {YourIoTHubName}
+ ```
+
+1. Inspect the properties for your device in the console output.
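The reported properties such as `ledState` and the desired `telemetryInterval` live on the device twin rather than on the device identity. The following optional sketch, assuming the same device and hub names and a recent *azure-iot* extension, reads the twin and sets the desired telemetry interval from the CLI:

```azurecli
az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName}
az iot hub device-twin update --device-id mydevice --hub-name {YourIoTHubName} --desired '{"telemetryInterval": 10}'
```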
+
+## View telemetry
+
+With Azure IoT Explorer, you can view the flow of telemetry from your device to the cloud. Optionally, you can do the same task using Azure CLI.
+
+To view telemetry in Azure IoT Explorer:
+
+1. From the **IoT Plug and Play components** (Default Component) pane for your device in IoT Explorer, select the **Telemetry** tab. Confirm that **Use built-in event hub** is set to *Yes*.
+1. Select **Start**.
+1. View the telemetry as the device sends messages to the cloud.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e-iot-hub/iot-explorer-device-telemetry.png" alt-text="Screenshot of device telemetry in IoT Explorer.":::
+
+ > [!NOTE]
+ > You can also monitor telemetry from the device by using the Termite app.
+
+1. Select the **Show modeled events** checkbox to view the events in the data format specified by the device model.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e-iot-hub/iot-explorer-show-modeled-events.png" alt-text="Screenshot of modeled telemetry events in IoT Explorer.":::
+
+1. Select **Stop** to end receiving events.
+
+To use Azure CLI to view device telemetry:
+
+1. Run the [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events) command. Use the names that you created previously in Azure IoT for your device and IoT hub.
+
+ ```azurecli
+ az iot hub monitor-events --device-id mydevice --hub-name {YourIoTHubName}
+ ```
+
+1. View the JSON output in the console.
+
+ ```json
+ {
+ "event": {
+ "origin": "mydevice",
+ "module": "",
+ "interface": "dtmi:azurertos:devkit:gsgmxchip;1",
+ "component": "",
+ "payload": "{\"humidity\":41.21,\"temperature\":31.37,\"pressure\":1005.18}"
+ }
+ }
+ ```
+
+1. Select CTRL+C to end monitoring.
++
+## Call a direct method on the device
+
+You can also use Azure IoT Explorer to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that turns an LED on or off. Optionally, you can do the same task using Azure CLI.
+
+To call a method in Azure IoT Explorer:
+
+1. From the **IoT Plug and Play components** (Default Component) pane for your device in IoT Explorer, select the **Commands** tab.
+1. For the **setLedState** command, set the **state** to **true**.
+1. Select **Send command**. You should see a notification in IoT Explorer, and the green LED light on the device should turn on.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l475e-iot-hub/iot-explorer-invoke-method.png" alt-text="Screenshot of calling the setLedState method in IoT Explorer.":::
+
+1. Set the **state** to **false**, and then select **Send command**. The LED should turn off.
+1. Optionally, you can view the output in Termite to monitor the status of the methods.
+
+To use Azure CLI to call a method:
+
+1. Run the [az iot hub invoke-device-method](/cli/azure/iot/hub#az-iot-hub-invoke-device-method) command, and specify the method name and payload. For this method, setting `method-payload` to `true` turns on the LED, and setting it to `false` turns it off.
+
+ ```azurecli
+ az iot hub invoke-device-method --device-id mydevice --method-name setLedState --method-payload true --hub-name {YourIoTHubName}
+ ```
+
+ The CLI console shows the status of your method call on the device, where `200` indicates success.
+
+ ```json
+ {
+ "payload": {},
+ "status": 200
+ }
+ ```
+
+1. Check your device to confirm the LED state.
+
+1. View the Termite terminal to confirm the output messages:
+
+ ```output
+ Received command: setLedState
+ Payload: true
+ LED is turned ON
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=15{"ledState":true}
+ ```
+
+## Troubleshoot and debug
+
+If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
+
+For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md).
+
+## Clean up resources
+
+If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete the resource group and all of its resources.
+
+> [!IMPORTANT]
+> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
+
+To delete a resource group by name:
+
+1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created.
+
+ ```azurecli-interactive
+ az group delete --name MyResourceGroup
+ ```
+
+1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted.
+
+ ```azurecli-interactive
+ az group list
+ ```
++
+## Next steps
+
+In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You connected the STM DevKit to Azure, and carried out tasks such as viewing telemetry and calling a method on the device.
+
+As a next step, explore the following articles to learn more about using the IoT device SDKs, or Azure RTOS to connect devices to Azure IoT.
+
+> [!div class="nextstepaction"]
+> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [!div class="nextstepaction"]
+> [Connect an STMicroelectronics B-L475E-IOT01A to IoT Central](quickstart-devkit-stm-b-l475e.md)
+
+> [!IMPORTANT]
+> Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-edge Iot Edge Limits And Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-limits-and-restrictions.md
Changes made in `config.toml` to `edgeAgent` environment variables like the `hos
When using Node.js to send device-to-cloud messages over the AMQP protocol to an IoT Edge runtime, messages stop sending after 2047 messages. No error is thrown, the messages eventually start sending again, and then the cycle repeats. If the client connects directly to Azure IoT Hub, there's no issue with sending messages. This issue has been fixed in IoT Edge 1.2 and later.
+<!-- end 1.1 -->
+ ### NTLM Authentication IoT Edge does not currently support network proxies that use NTLM authentication. Users may consider bypassing the proxy by adding the required endpoints to the firewall allow-list.
-<!-- end 1.1 -->
- ## Next steps For more information, see [IoT Hub other limits](../iot-hub/iot-hub-devguide-quotas-throttling.md#other-limits).
iot-edge Module Deployment Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-deployment-monitoring.md
Deployments can be rolled back if you receive errors or misconfigurations. Beca
Deleting a deployment doesn't remove the modules from targeted devices. There must be another deployment that defines a new configuration for the devices, even if it's an empty deployment.
+However, deleting a deployment may remove modules from the targeted device if the deleted deployment was a layered deployment. A layered deployment updates the underlying deployment, potentially adding modules. Removing a layered deployment removes its update to the underlying deployment, potentially removing modules.
+
+For example, a device has base deployment A and layered deployments O and M applied onto it (so that the A, O, and M deployments are deployed onto the device). If layered deployment M is then deleted, A and O are applied onto the device, and the modules unique to deployment M are removed.
+ Perform rollbacks in the following sequence: 1. Confirm that a second deployment is also targeted at the same device set. If the goal of the rollback is to remove all modules, the second deployment should not include any modules.
iot-edge Offline Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/offline-capabilities.md
One way to create this trust relationship is described in detail in the followin
## Specify DNS servers
-To improve robustness, it is highly recommended you specify the DNS server addresses used in your environment. To set your DNS server for IoT Edge, see the resolution for [Edge Agent module continually reports 'empty config file' and no modules start on device](troubleshoot-common-errors.md#edge-agent-module-reports-empty-config-file-and-no-modules-start-on-the-device) in the troubleshooting article.
+To improve robustness, it is highly recommended you specify the DNS server addresses used in your environment. To set your DNS server for IoT Edge, see the resolution for [Edge Agent module reports 'empty config file' and no modules start on the device](troubleshoot-common-errors.md#edge-agent-module-reports-empty-config-file-and-no-modules-start-on-the-device) in the troubleshooting article.
## Optional offline settings
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-common-errors.md
Title: Common errors - Azure IoT Edge | Microsoft Docs
-description: Use this article to resolve common issues encountered when deploying an IoT Edge solution
+ Title: Troubleshoot Azure IoT Edge common errors
+description: Resolve common issues encountered when using an IoT Edge solution
Previously updated : 02/28/2022 Last updated : 11/17/2022
-# Common issues and resolutions for Azure IoT Edge
+# Solutions to common issues for Azure IoT Edge
[!INCLUDE [iot-edge-version-1.1-or-1.4](./includes/iot-edge-version-1.1-or-1.4.md)]
-Use this article to find steps to resolve common issues that you may experience when deploying IoT Edge solutions. If you need to learn how to find logs and errors from your IoT Edge device, see [Troubleshoot your IoT Edge device](troubleshoot.md).
+Use this article to identify and resolve common issues when using IoT Edge solutions. If you need information on how to find logs and errors from your IoT Edge device, see [Troubleshoot your IoT Edge device](troubleshoot.md).
-## IoT Edge agent stops after about a minute
+## Provisioning and Deployment
-**Observed behavior:**
+### IoT Edge module deploys successfully then disappears from device
-The edgeAgent module starts and runs successfully for about a minute, then stops. The logs indicate that the IoT Edge agent attempts to connect to IoT Hub over AMQP, and then attempts to connect using AMQP over WebSocket. When that fails, the IoT Edge agent exits.
+#### Symptoms
+
+After setting modules for an IoT Edge device, the modules are deployed successfully but after a few minutes they disappear from the device and from the device details in the Azure portal. Modules other than the ones you defined might also appear on the device.
+
+#### Cause
+
+If an automatic deployment targets a device, it takes priority over manually setting the modules for a single device. The **Set modules** functionality in the Azure portal or the **Create deployment for single device** functionality in Visual Studio Code takes effect only for a moment. You see the modules that you defined start on the device. Then the higher-priority automatic deployment takes over and overwrites the device's desired properties.
+
+#### Solution
+
+Only use one type of deployment mechanism per device, either an automatic deployment or individual device deployments. If you have multiple automatic deployments targeting a device, you can change priority or target descriptions to make sure the correct one applies to a given device. You can also update the device twin to no longer match the target description of the automatic deployment.
+
+For more information, see [Understand IoT Edge automatic deployments for single devices or at scale](module-deployment-monitoring.md).
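To check whether an automatic deployment targets the device, you can list the deployments in your hub and compare their target conditions and priorities. A hedged sketch using the *azure-iot* CLI extension, with `{YourIoTHubName}` as a placeholder:

```azurecli
az iot edge deployment list --hub-name {YourIoTHubName} --query "[].{id:id, priority:priority, target:targetCondition}" --output table
```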
++
+<!-- 1.1 -->
+
+### Can't get the IoT Edge runtime logs on Windows
+
+#### Symptoms
+
+You get an EventLogException when using `Get-WinEvent` on Windows.
+
+#### Cause
+
+The `Get-WinEvent` PowerShell command relies on a registry entry to be present to find logs by a specific `ProviderName`.
+
+#### Solution
+
+Set a registry entry for the IoT Edge daemon. Create an **iotedge.reg** file with the following content, and import it into the Windows Registry by double-clicking it or using the `reg import iotedge.reg` command:
+
+```reg
+Windows Registry Editor Version 5.00
+
+[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\iotedged]
+"CustomSource"=dword:00000001
+"EventMessageFile"="C:\\ProgramData\\iotedge\\iotedged.exe"
+"TypesSupported"=dword:00000007
+```
+<!-- end 1.1 -->
+
+<!-- 1.1 -->
+### DPS client error
+
+#### Symptoms
+
+IoT Edge fails to start with error message `failed to provision with IoT Hub, and no valid device backup was found dps client error.`
+
+#### Cause
+
+A group enrollment was used to provision an IoT Edge device to an IoT hub, and the device was then moved to a different hub. The old registration was deleted in DPS and a new registration was created for the new hub, but the device wasn't reprovisioned.
+
+#### Solution
+
+1. Verify your DPS credentials are correct.
+1. Apply your configuration using `sudo iotedge config apply`.
+1. If the device isn't reprovisioned, restart the device using `sudo iotedge system restart`.
+1. If the device isn't reprovisioned, force reprovisioning using `sudo iotedge system reprovision`.
+
+To automatically reprovision, set `dynamic_reprovisioning: true` in the device configuration file. Setting this flag to true opts in to the dynamic reprovisioning feature. IoT Edge detects situations where the device appears to have been reprovisioned in the cloud by monitoring its own IoT Hub connection for certain errors. IoT Edge responds by shutting down all Edge modules and itself. The next time the daemon starts up, it will attempt to reprovision this device with Azure to receive the new IoT Hub provisioning information.
+
+When using external provisioning, the daemon will also notify the external provisioning endpoint about the reprovisioning event before shutting down. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
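For reference, a sketch of where `dynamic_reprovisioning` sits in the IoT Edge 1.1 *config.yaml*, assuming DPS symmetric key provisioning; the scope ID, registration ID, and key are placeholders:

```yaml
provisioning:
  source: "dps"
  global_endpoint: "https://global.azure-devices-provisioning.net"
  scope_id: "{scope_id}"
  attestation:
    method: "symmetric_key"
    registration_id: "{registration_id}"
    symmetric_key: "{symmetric_key}"
  dynamic_reprovisioning: true
```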
+
+<!-- end 1.1 -->
+
+## IoT Edge runtime
+
+### IoT Edge agent stops after a minute
+
+#### Symptoms
+
+The *edgeAgent* module starts and runs successfully for about a minute, then stops. The logs indicate that the IoT Edge agent attempts to connect to IoT Hub over AMQP, and then attempts to connect using AMQP over WebSocket. When that fails, the IoT Edge agent exits.
Example edgeAgent logs:
2017-11-28 18:46:49 [INF] - Edge agent attempting to connect to IoT Hub via AMQP over WebSocket... ```
-**Root cause:**
+#### Cause
A networking configuration on the host network is preventing the IoT Edge agent from reaching the network. The agent attempts to connect over AMQP (port 5671) first. If the connection fails, it tries WebSockets (port 443). The IoT Edge runtime sets up a network for each of the modules to communicate on. On Linux, this network is a bridge network. On Windows, it uses NAT. This issue is more common on Windows devices using Windows containers that use the NAT network.
-**Resolution:**
-
-Ensure that there is a route to the internet for the IP addresses assigned to this bridge/NAT network. Sometimes a VPN configuration on the host overrides the IoT Edge network.
-
-## IoT Edge agent can't access a module's image (403)
-
-**Observed behavior:**
-
-A container fails to run, and the edgeAgent logs show a 403 error.
-
-**Root cause:**
-
-The IoT Edge agent doesn't have permissions to access a module's image.
+#### Solution
-**Resolution:**
+Ensure that there's a route to the internet for the IP addresses assigned to this bridge/NAT network. Sometimes a VPN configuration on the host overrides the IoT Edge network.
-Make sure that your registry credentials are correctly specified in your deployment manifest.
+### Edge Agent module reports 'empty config file' and no modules start on the device
-## Edge Agent module reports 'empty config file' and no modules start on the device
+#### Symptoms
-**Observed behavior:**
+The device has trouble starting modules defined in the deployment. Only the *edgeAgent* is running but continually reporting 'empty config file...'.
-The device has trouble starting modules defined in the deployment. Only the edgeAgent is running but continually reporting 'empty config file...'.
-
-**Root cause:**
+#### Cause
By default, IoT Edge starts modules in their own isolated container network. The device may be having trouble with DNS name resolution within this private network.
-**Resolution:**
+#### Solution
**Option 1: Set DNS server in container engine settings**
You can set DNS server for each module's *createOptions* in the IoT Edge deploym
Be sure to set this configuration for the *edgeAgent* and *edgeHub* modules as well.
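Two hedged sketches of these settings, assuming a public resolver such as `1.1.1.1`; substitute the DNS server addresses appropriate for your environment. For the container engine, the `dns` key goes in the Docker *daemon.json* file:

```json
{
  "dns": ["1.1.1.1"]
}
```

For an individual module, the `Dns` entry goes under `HostConfig` in the module's *createOptions*:

```json
{
  "HostConfig": {
    "Dns": ["1.1.1.1"]
  }
}
```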
-<!-- 1.1 -->
-## Could not start module due to OS mismatch
+### IoT Edge agent can't access a module's image (403)
- **Observed behavior:**
+#### Symptoms
-The edgeHub module fails to start in IoT Edge version 1.1.
+A container fails to run, and the *edgeAgent* logs report a 403 error.
-**Root cause:**
+#### Cause
-Windows module uses a version of Windows that is incompatible with the version of Windows on the host. IoT Edge Windows version 1809 build 17763 is needed as the base layer for the module image, but a different version is in use.
+The IoT Edge agent module doesn't have permissions to access a module's image.
-**Resolution:**
+#### Solution
-Check the version of your various Windows operating systems in [Troubleshoot host and container image mismatches](/virtualization/windowscontainers/deploy-containers/update-containers#troubleshoot-host-and-container-image-mismatches). If the operating systems are different, update them to IoT Edge Windows version 1809 build 17763 and rebuild the Docker image used for that module.
+Make sure that your container registry credentials are correct in your device deployment manifest.
-<!-- end 1.1 -->
+### IoT Edge hub fails to start
-## IoT Edge hub fails to start
-
-**Observed behavior:**
+#### Symptoms
The edgeHub module fails to start. You may see a message like one of the following errors in the logs:
warn: edgelet_utils::logging -- caused by: failed to create endpoint edgeHub
The process cannot access the file because it is being used by another process. (0x20) ```
-**Root cause:**
+#### Cause
Some other process on the host machine has bound a port that the edgeHub module is trying to bind. The IoT Edge hub maps ports 443, 5671, and 8883 for use in gateway scenarios. The module fails to start if another process has already bound one of those ports.
-**Resolution:**
+#### Solution
You can resolve this issue two ways:
In the deployment.json file:
4. Save the file and apply it to your IoT Edge device again.
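To confirm which host process is holding one of these ports (or to verify that it has been released), a hedged check on Linux is to list the listening sockets and their owning processes:

```bash
# List listening TCP sockets and the owning process for the ports IoT Edge hub needs
sudo ss -tlnp | grep -E ':443 |:5671 |:8883 '
```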
-## IoT Edge security daemon fails with an invalid hostname
+### IoT Edge module fails to send a message to edgeHub with 404 error
-**Observed behavior:**
+#### Symptoms
-Attempting to [check the IoT Edge security manager logs](troubleshoot.md#check-the-status-of-the-iot-edge-security-manager-and-its-logs) fails and prints the following message:
+A custom IoT Edge module fails to send a message to the IoT Edge hub with a 404 `Module not found` error. The IoT Edge runtime prints the following message to the logs:
```output
-Error parsing user input data: invalid hostname. Hostname cannot be empty or greater than 64 characters
+Error: Time:Thu Jun 4 19:44:58 2018 File:/usr/sdk/src/c/provisioning_client/adapters/hsm_client_http_edge.c Func:on_edge_hsm_http_recv Line:364 executing HTTP request fails, status=404, response_buffer={"message":"Module not found"}u, 04 )
```
-**Root cause:**
-
-The IoT Edge runtime can only support hostnames that are shorter than 64 characters. Physical machines usually don't have long hostnames, but the issue is more common on a virtual machine. The automatically generated hostnames for Windows virtual machines hosted in Azure, in particular, tend to be long.
-
-**Resolution:**
-
-When you see this error, you can resolve it by configuring the DNS name of your virtual machine, and then setting the DNS name as the hostname in the setup command.
-
-<!-- 1.1 -->
+#### Cause
-1. In the Azure portal, navigate to the overview page of your virtual machine.
-2. Select **configure** under DNS name. If your virtual machine already has a DNS name configured, you don't need to configure a new one.
+The IoT Edge runtime enforces process identification for all modules connecting to the edgeHub for security reasons. It verifies that all messages being sent by a module come from the main process ID of the module. If a message is sent by a module from a different process ID than the one initially established, the message is rejected with a 404 error.
- ![Configure DNS name of virtual machine](./media/troubleshoot/configure-dns.png)
+#### Solution
-3. Provide a value for **DNS name label** and select **Save**.
-4. Copy the new DNS name, which should be in the format **\<DNSnamelabel\>.\<vmlocation\>.cloudapp.azure.com**.
-5. Inside the virtual machine, use the following command to set up the IoT Edge runtime with your DNS name:
-
- * On Linux:
-
- ```bash
- sudo nano /etc/iotedge/config.yaml
- ```
-
- * On Windows:
-
- ```cmd
- notepad C:\ProgramData\iotedge\config.yaml
- ```
-
-<!-- end 1.1 -->
+As of version 1.0.7, all module processes are authorized to connect. For more information, see the [1.0.7 release changelog](https://github.com/Azure/iotedge/blob/master/CHANGELOG.md#iotedged-1).
-<!-- iotedge-2020-11 -->
+If upgrading to 1.0.7 isn't possible, complete the following steps. Make sure that the same process ID is always used by the custom IoT Edge module to send messages to the edgeHub. For instance, use `ENTRYPOINT` instead of the `CMD` command in your Dockerfile. The `CMD` command leads to one process ID for the module and another process ID for the bash command running the main program, but `ENTRYPOINT` leads to a single process ID.
-1. In the Azure portal, navigate to the overview page of your virtual machine.
-2. Select **configure** under DNS name. If your virtual machine already has a DNS name configured, you don't need to configure a new one.
- ![Configure DNS name of virtual machine](./media/troubleshoot/configure-dns.png)
+### Stability issues on smaller devices
-3. Provide a value for **DNS name label** and select **Save**.
+#### Symptoms
-4. Copy the new DNS name, which should be in the format **\<DNSnamelabel\>.\<vmlocation\>.cloudapp.azure.com**.
+You may experience stability problems on resource constrained devices like the Raspberry Pi, especially when used as a gateway. Symptoms include out of memory exceptions in the IoT Edge hub module, downstream devices failing to connect, or the device failing to send telemetry messages after a few hours.
-5. On the IoT Edge device, open the config file.
+#### Cause
- ```bash
- sudo nano /etc/aziot/config.toml
- ```
+The IoT Edge hub, which is part of the IoT Edge runtime, is optimized for performance by default and attempts to allocate large chunks of memory. This optimization isn't ideal for constrained edge devices and can cause stability problems.
-6. Replace the value of `hostname` with your DNS name.
+#### Solution
-7. Save and close the file, then apply the changes to IoT Edge.
+For the IoT Edge hub, set an environment variable **OptimizeForPerformance** to **false**. There are two ways to set environment variables:
- ```bash
- sudo iotedge config apply
- ```
+In the Azure portal:
-<!-- end iotedge-2020-11 -->
+In your IoT hub, select your IoT Edge device and, from the device details page, select **Set Modules** > **Runtime Settings**. Create an environment variable for the IoT Edge hub module called *OptimizeForPerformance* that is set to *false*.
-<!-- 1.1 -->
+![OptimizeForPerformance set to false](./media/troubleshoot/optimizeforperformance-false.png)
-## Can't get the IoT Edge daemon logs on Windows
+In the deployment manifest:
-**Observed behavior:**
+```json
+"edgeHub": {
+ "type": "docker",
+ "settings": {
+ "image": "mcr.microsoft.com/azureiotedge-hub:1.1",
+ "createOptions": <snipped>
+ },
+ "env": {
+ "OptimizeForPerformance": {
+ "value": "false"
+ }
+ },
+```
-You get an EventLogException when using `Get-WinEvent` on Windows.
+### Security daemon couldn't start successfully
-**Root cause:**
+#### Symptoms
-The `Get-WinEvent` PowerShell command relies on a registry entry to be present to find logs by a specific `ProviderName`.
+The security daemon fails to start and module containers aren't created. The `edgeAgent`, `edgeHub`, and other custom modules aren't started by the IoT Edge service. In the `aziot-edged` logs, you see this error:
-**Resolution:**
+> - The daemon could not start up successfully: Could not start management service
+> - caused by: An error occurred for path /var/run/iotedge/mgmt.sock
+> - caused by: Permission denied (os error 13)
-Set a registry entry for the IoT Edge daemon. Create a **iotedge.reg** file with the following content, and import in to the Windows Registry by double-clicking it or using the `reg import iotedge.reg` command:
-```reg
-Windows Registry Editor Version 5.00
+#### Cause
-[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\iotedged]
-"CustomSource"=dword:00000001
-"EventMessageFile"="C:\\ProgramData\\iotedge\\iotedged.exe"
-"TypesSupported"=dword:00000007
-```
+For all Linux distros except CentOS 7, IoT Edge's default configuration is to use `systemd` socket activation. A permission error happens if you change the configuration file to not use socket activation but leave the URLs as `/var/run/iotedge/*.sock`, since the `iotedge` user can't write to `/var/run/iotedge` meaning it can't unlock and mount the sockets itself.
-## DPS client error
+#### Solution
-**Observed behavior:**
+You don't need to disable socket activation on a distribution where socket activation is supported. However, if you prefer not to use socket activation at all, put the sockets in `/var/lib/iotedge/` as follows:
+1. Run `systemctl disable iotedge.socket iotedge.mgmt.socket` to disable the socket units so that systemd doesn't start them unnecessarily
+1. Change the iotedge config to use `/var/lib/iotedge/*.sock` in both `connect` and `listen` sections
+1. If you already have modules, they have the old `/var/run/iotedge/*.sock` mounts, so `docker rm -f` them.
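A sketch of the resulting `connect` and `listen` sections in the 1.1 *config.yaml*, assuming the default socket file names:

```yaml
connect:
  management_uri: "unix:///var/lib/iotedge/mgmt.sock"
  workload_uri: "unix:///var/lib/iotedge/workload.sock"
listen:
  management_uri: "unix:///var/lib/iotedge/mgmt.sock"
  workload_uri: "unix:///var/lib/iotedge/workload.sock"
```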
-IoT Edge fails to start with error message `failed to provision with IoT Hub, and no valid device backup was found dps client error.`
+<!-- 1.1 -->
+### Could not start module due to OS mismatch
-**Root cause:**
+#### Symptom
-A group enrollment is used to provision an IoT Edge device to an IoT Hub. The IoT Edge device is moved to a different hub. The registration is deleted in DPS. A new registration is created in DPS for the new hub. The device is not reprovisioned.
+The edgeHub module fails to start in IoT Edge version 1.1.
-**Resolution:**
+#### Cause
-1. Verify your DPS credentials are correct.
-1. Apply your configuration using `sudo iotedge apply config`.
-1. If the device isn't reprovisioned, restart the device using `sudo iotedge system restart`.
-1. If the device isn't reprovisioned, force reprovisioning using `sudo iotedge system reprovision`.
+Windows module uses a version of Windows that is incompatible with the version of Windows on the host. IoT Edge Windows version 1809 build 17763 is needed as the base layer for the module image, but a different version is in use.
-To automatically reprovision, set `dynamic_reprovisioning: true` in the device configuration file. Setting this flag to true opts in to the dynamic re-provisioning feature. IoT Edge detects situations where the device appears to have been reprovisioned in the cloud by monitoring its own IoT Hub connection for certain errors. IoT Edge responds by shutting itself and all Edge modules down. The next time the daemon starts up, it will attempt to reprovision this device with Azure to receive the new IoT Hub provisioning information.
+#### Solution
-When using external provisioning, the daemon will also notify the external provisioning endpoint about the re-provisioning event before shutting down. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
+Check the version of your various Windows operating systems in [Troubleshoot host and container image mismatches](/virtualization/windowscontainers/deploy-containers/update-containers#troubleshoot-host-and-container-image-mismatches). If the operating systems are different, update them to IoT Edge Windows version 1809 build 17763 and rebuild the Docker image used for that module.
:::moniker-end <!-- end 1.1 -->
-## Stability issues on smaller devices
-**Observed behavior:**
+## Networking
-You may experience stability problems on resource constrained devices like the Raspberry Pi, especially when used as a gateway. Symptoms include out of memory exceptions in the IoT Edge hub module, downstream devices failing to connect, or the device failing to send telemetry messages after a few hours.
+### IoT Edge security daemon fails with an invalid hostname
-**Root cause:**
+#### Symptoms
-The IoT Edge hub, which is part of the IoT Edge runtime, is optimized for performance by default and attempts to allocate large chunks of memory. This optimization is not ideal for constrained edge devices and can cause stability problems.
+Attempting to [check the IoT Edge security manager logs](troubleshoot.md#check-the-status-of-the-iot-edge-security-manager-and-its-logs) fails and prints the following message:
-**Resolution:**
+```output
+Error parsing user input data: invalid hostname. Hostname cannot be empty or greater than 64 characters
+```
-For the IoT Edge hub, set an environment variable **OptimizeForPerformance** to **false**. There are two ways to set environment variables:
+#### Cause
-In the Azure portal:
+The IoT Edge runtime can only support hostnames that are shorter than 64 characters. Physical machines usually don't have long hostnames, but the issue is more common on a virtual machine. The automatically generated hostnames for Windows virtual machines hosted in Azure, in particular, tend to be long.
-In your IoT Hub, select your IoT Edge device and from the device details page and select **Set Modules** > **Runtime Settings**. Create an environment variable for the IoT Edge hub module called *OptimizeForPerformance* that is set to *false*.
+#### Solution
-![OptimizeForPerformance set to false](./media/troubleshoot/optimizeforperformance-false.png)
+When you see this error, you can resolve it by configuring the DNS name of your virtual machine, and then setting the DNS name as the hostname in the setup command.
-In the deployment manifest:
+<!-- 1.1 -->
-```json
-"edgeHub": {
- "type": "docker",
- "settings": {
- "image": "mcr.microsoft.com/azureiotedge-hub:1.1",
- "createOptions": <snipped>
- },
- "env": {
- "OptimizeForPerformance": {
- "value": "false"
- }
- },
-```
+1. In the Azure portal, navigate to the overview page of your virtual machine.
+2. Select **configure** under DNS name. If your virtual machine already has a DNS name configured, you don't need to configure a new one.
-## IoT Edge module fails to send a message to edgeHub with 404 error
+ ![Configure DNS name of virtual machine](./media/troubleshoot/configure-dns.png)
-**Observed behavior:**
+3. Provide a value for **DNS name label** and select **Save**.
+4. Copy the new DNS name, which should be in the format **\<DNSnamelabel\>.\<vmlocation\>.cloudapp.azure.com**.
+5. Inside the virtual machine, use the following command to set up the IoT Edge runtime with your DNS name:
-A custom IoT Edge module fails to send a message to the IoT Edge hub with a 404 `Module not found` error. The IoT Edge daemon prints the following message to the logs:
+ * On Linux:
-```output
-Error: Time:Thu Jun 4 19:44:58 2018 File:/usr/sdk/src/c/provisioning_client/adapters/hsm_client_http_edge.c Func:on_edge_hsm_http_recv Line:364 executing HTTP request fails, status=404, response_buffer={"message":"Module not found"}u, 04 )
-```
+ ```bash
+ sudo nano /etc/iotedge/config.yaml
+ ```
-**Root cause:**
+ * On Windows:
-The IoT Edge daemon enforces process identification for all modules connecting to the edgeHub for security reasons. It verifies that all messages being sent by a module come from the main process ID of the module. If a message is being sent by a module from a different process ID than initially established, it will reject the message with a 404 error message.
+ ```cmd
+ notepad C:\ProgramData\iotedge\config.yaml
+ ```
-**Resolution:**
+<!-- end 1.1 -->
-As of version 1.0.7, all module processes are authorized to connect. For more information, see the [1.0.7 release changelog](https://github.com/Azure/iotedge/blob/master/CHANGELOG.md#iotedged-1).
+<!-- iotedge-2020-11 -->
-If upgrading to 1.0.7 isn't possible, complete the following steps. Make sure that the same process ID is always used by the custom IoT Edge module to send messages to the edgeHub. For instance, make sure to `ENTRYPOINT` instead of `CMD` command in your Docker file. The `CMD` command leads to one process ID for the module and another process ID for the bash command running the main program, but `ENTRYPOINT` leads to a single process ID.
+1. In the Azure portal, navigate to the overview page of your virtual machine.
-## IoT Edge module deploys successfully then disappears from device
+2. Select **configure** under DNS name. If your virtual machine already has a DNS name configured, you don't need to configure a new one.
-**Observed behavior:**
+ ![Configure DNS name of virtual machine](./media/troubleshoot/configure-dns.png)
-After setting modules for an IoT Edge device, the modules are deployed successfully but after a few minutes they disappear from the device and from the device details in the Azure portal. Other modules than the ones defined might also appear on the device.
+3. Provide a value for **DNS name label** and select **Save**.
-**Root cause:**
+4. Copy the new DNS name, which should be in the format **\<DNSnamelabel\>.\<vmlocation\>.cloudapp.azure.com**.
-If an automatic deployment targets a device, it takes priority over manually setting the modules for a single device. The **Set modules** functionality in Azure portal or **Create deployment for single device** functionality in Visual Studio Code will take effect for a moment. You see the modules that you defined start on the device. Then the automatic deployment's priority kicks in and overwrites the device's desired properties.
+5. On the IoT Edge device, open the config file.
-**Resolution:**
+ ```bash
+ sudo nano /etc/aziot/config.toml
+ ```
-Only use one type of deployment mechanism per device, either an automatic deployment or individual device deployments. If you have multiple automatic deployments targeting a device, you can change priority or target descriptions to make sure the correct one applies to a given device. You can also update the device twin to no longer match the target description of the automatic deployment.
+6. Replace the value of `hostname` with your DNS name.
-For more information, see [Understand IoT Edge automatic deployments for single devices or at scale](module-deployment-monitoring.md).
+7. Save and close the file, then apply the changes to IoT Edge.
-## IoT Edge module reports connectivity errors
+ ```bash
+ sudo iotedge config apply
+ ```
+
+<!-- end iotedge-2020-11 -->
+
+### IoT Edge module reports connectivity errors
-**Observed behavior:**
+#### Symptoms
IoT Edge modules that connect directly to cloud services, including the runtime modules, stop working as expected and return errors around connection or networking failures.
-**Root cause:**
+#### Cause
-Containers rely on IP packet forwarding in order to connect to the internet so that they can communicate with cloud services. IP packet forwarding is enabled by default in Docker, but if it gets disabled then any modules that connect to cloud services will not work as expected. For more information, see [Understand container communication](https://docs.docker.com/config/containers/container-networking/) in the Docker documentation.
+Containers rely on IP packet forwarding in order to connect to the internet so that they can communicate with cloud services. IP packet forwarding is enabled by default in Docker, but if it gets disabled then any modules that connect to cloud services won't work as expected. For more information, see [Understand container communication](https://docs.docker.com/config/containers/container-networking/) in the Docker documentation.
-**Resolution:**
+#### Solution
Use the following steps to enable IP packet forwarding.
On Linux:
1. Restart the network service and docker service to apply the changes. + <!-- iotedge-2020-11 --> ::: moniker range=">=iotedge-2020-11"
-## IoT Edge behind a gateway cannot perform HTTP requests and start edgeAgent module
+### IoT Edge behind a gateway can't perform HTTP requests and start edgeAgent module
-**Observed behavior:**
+#### Symptoms
-The IoT Edge daemon is active with a valid configuration file, but it cannot start the edgeAgent module. The command `iotedge list` returns an empty list. The IoT Edge daemon logs report `Could not perform HTTP request`.
+The IoT Edge runtime is active with a valid configuration file, but it can't start the *edgeAgent* module. The command `iotedge list` returns an empty list. The IoT Edge runtime reports `Could not perform HTTP request` in the logs.
-**Root cause:**
+#### Cause
IoT Edge devices behind a gateway get their module images from the parent IoT Edge device specified in the `parent_hostname` field of the config file. The `Could not perform HTTP request` error means that the child device isn't able to reach its parent device via HTTP.
-**Resolution:**
+#### Solution
Make sure the parent IoT Edge device can receive incoming requests from the child IoT Edge device. Open network traffic on ports 443 and 6617 for requests coming from the child device.
-## IoT Edge behind a gateway cannot connect when migrating from one IoT hub to another
+<!-- end iotedge-2020-11 -->
+
+<!-- iotedge-2020-11 -->
-**Observed behavior:**
+### IoT Edge behind a gateway can't perform HTTP requests and start edgeAgent module
-When attempting to migrate a hierarchy of IoT Edge devices from one IoT hub to another, the top level parent IoT Edge device can connect to IoT Hub, but downstream IoT Edge devices cannot. The logs report `Unable to authenticate client downstream-device/$edgeAgent with module credentials`.
+#### Symptoms
-**Root cause:**
+The IoT Edge daemon is active with a valid configuration file, but it can't start the edgeAgent module. The command `iotedge list` returns an empty list. The IoT Edge daemon logs report `Could not perform HTTP request`.
-The credentials for the downstream devices were not updated properly when the migration to the new IoT hub happened. Because of this, `edgeAgent` and `edgeHub` modules were set to have authentication type of `none` (default if not set explicitly). During connection, the modules on the downstream devices use old credentials, causing the authentication to fail.
+#### Cause
-**Resolution:**
+IoT Edge devices behind a gateway get their module images from the parent IoT Edge device specified in the `parent_hostname` field of the config file. The `Could not perform HTTP request` error means that the child device isn't able to reach its parent device via HTTP.
-When migrating to the new IoT hub (assuming not using DPS), follow these steps in order:
-1. Follow [this guide to export and then import device identities](../iot-hub/iot-hub-bulk-identity-mgmt.md) from the old IoT hub to the new one
-1. Reconfigure all IoT Edge deployments and configurations in the new IoT hub
-1. Reconfigure all parent-child device relationships in the new IoT hub
-1. Update each device to point to the new IoT hub hostname (`iothub_hostname` under `[provisioning]` in `config.toml`)
-1. If you chose to exclude authentication keys during the device export, reconfigure each device with the new keys given by the new IoT hub (`device_id_pk` under `[provisioning.authentication]` in `config.toml`)
-1. Restart the top-level parent Edge device first, make sure it's up and running
-1. Restart each device in hierarchy level by level from top to the bottom
+#### Solution
+
+Make sure the parent IoT Edge device can receive incoming requests from the child IoT Edge device. Open network traffic on ports 443 and 6617 for requests coming from the child device.
:::moniker-end <!-- end iotedge-2020-11 -->
-## Security daemon couldn't start successfully
+<!-- iotedge-2020-11 -->
-**Observed behavior:**
+### IoT Edge behind a gateway can't connect when migrating from one IoT hub to another
-The security daemon fails to start and module containers aren't created. The `edgeAgent`, `edgeHub` and other custom modules aren't started by IoT Edge service. In `aziot-edged` logs, you see this error:
+#### Symptoms
-> - The daemon could not start up successfully: Could not start management service
-> - caused by: An error occurred for path /var/run/iotedge/mgmt.sock
-> - caused by: Permission denied (os error 13)
+When attempting to migrate a hierarchy of IoT Edge devices from one IoT hub to another, the top level parent IoT Edge device can connect to IoT Hub, but downstream IoT Edge devices can't. The logs report `Unable to authenticate client downstream-device/$edgeAgent with module credentials`.
+#### Cause
-**Root cause:**
+The credentials for the downstream devices weren't updated properly when the migration to the new IoT hub happened. Because of this, `edgeAgent` and `edgeHub` modules were set to have authentication type of `none` (default if not set explicitly). During connection, the modules on the downstream devices use old credentials, causing the authentication to fail.
-For all Linux distros except CentOS 7, IoT Edge's default configuration is to use `systemd` socket activation. A permission error happens if you change the configuration file to not use socket activation but leave the URLs as `/var/run/iotedge/*.sock`, since the `iotedge` user can't write to `/var/run/iotedge` meaning it can't unlock and mount the sockets itself.
+#### Solution
-**Resolution:**
+When migrating to the new IoT hub (assuming not using DPS), follow these steps in order:
+1. Follow [this guide to export and then import device identities](../iot-hub/iot-hub-bulk-identity-mgmt.md) from the old IoT hub to the new one
+1. Reconfigure all IoT Edge deployments and configurations in the new IoT hub
+1. Reconfigure all parent-child device relationships in the new IoT hub
+1. Update each device to point to the new IoT hub hostname (`iothub_hostname` under `[provisioning]` in `config.toml`)
+1. If you chose to exclude authentication keys during the device export, reconfigure each device with the new keys given by the new IoT hub (`device_id_pk` under `[provisioning.authentication]` in `config.toml`)
+1. Restart the top-level parent Edge device first, make sure it's up and running
+1. Restart each device in hierarchy level by level from top to the bottom
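For steps 4 and 5, a hedged sketch of the relevant `config.toml` fields, assuming manual provisioning with a symmetric key; the hostname, device ID, and key values are placeholders:

```toml
[provisioning]
source = "manual"
iothub_hostname = "my-new-hub.azure-devices.net"
device_id = "downstream-edge-device"

[provisioning.authentication]
method = "sas"
device_id_pk = { value = "dGhpcyBpcyBhIHBsYWNlaG9sZGVyIGtleQ==" }
```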
-You do not need to disable socket activation on a distro where socket activation is supported. However, if you prefer to not use socket activation at all, put the sockets in `/var/lib/iotedge/`. To do this
-1. Run `systemctl disable iotedge.socket iotedge.mgmt.socket` to disable the socket units so that systemd doesn't start them unnecessarily
-1. Change the iotedge config to use `/var/lib/iotedge/*.sock` in both `connect` and `listen` sections
-1. If you already have modules, they have the old `/var/run/iotedge/*.sock` mounts, so `docker rm -f` them.
+<!-- end iotedge-2020-11 -->
## Next steps
iot-edge Tutorial Deploy Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-stream-analytics.md
For this tutorial, you deploy two modules. The first is **SimulatedTemperatureSe
By default, the Stream Analytics module takes the same name as the job it's based on. You can change the module name on this page if you like, but it's not necessary.
-1. Select **Update** or **Cancel**.
+1. Select **Apply** or **Cancel**.
-1. Make a note of the name of your Stream Analytics module because you'll need it in the next step. Then, select **Next: Routes** to continue.
+1. Make a note of the name of your Stream Analytics module because you'll need it in the next step.
-1. On the **Routes** tab, you define how messages are passed between modules and the IoT Hub. Messages are constructed using name/value pairs. Replace the default `route` and `upstream` name and values with the pairs shown in following table, the following name/value pairs, replacing instances of _{moduleName}_ with the name of your Azure Stream Analytics module.
+1. Select **Next: Routes**.
+
+1. On the **Routes** tab, you define how messages are passed between modules and the IoT Hub. Messages are constructed using name and value pairs. Replace the default route name and values with the pairs shown in the following table. Replace instances of *{moduleName}* with the name of your Azure Stream Analytics module.
   | Name | Value |
   | ---- | ----- |
For this tutorial, you deploy two modules. The first is **SimulatedTemperatureSe
1. In the **Review + Create** tab, you can see how the information you provided in the wizard is converted into a JSON deployment manifest. When you're done reviewing the manifest, select **Create**.
-1. You're returned to the device details page. Select **Refresh**.
+1. Return to your device details page. Select **Refresh**.
You should see the new Stream Analytics module running, along with the IoT Edge agent and IoT Edge hub modules. It may take a few minutes for the information to reach your IoT Edge device, and then for the new modules to start. If you don't see the modules running right away, continue refreshing the page.
Now you can go to your IoT Edge device to check out the interaction between the
iotedge list ```
-1. View all system logs and metrics data. Use the Stream Analytics module name:
+1. View all system logs and metrics data. Replace *{moduleName}* with the name of your Azure Stream Analytics module:
   ```cmd/sh
   iotedge logs -f {moduleName}
   ```
-1. View the reset command affect the SimulatedTemperatureSensor by viewing the sensor logs:
+1. See how the reset command affects the SimulatedTemperatureSensor by viewing the sensor logs:
   ```cmd/sh
   iotedge logs SimulatedTemperatureSensor
iot-hub How To Routing Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-arm.md
+
+ Title: Message routing with IoT Hub - Azure Resource Manager | Microsoft Docs
+description: A how-to article that creates and deletes routes and endpoints in IoT Hub, using Azure Resource Manager.
++++ Last updated : 11/11/2022+++
+# Message routing with IoT Hub - Azure Resource Manager
+
+This article shows you how to export your IoT hub template, add a route to it, then deploy the template back to your IoT hub using Azure CLI or PowerShell. We use a Resource Manager template to create routes and endpoints to Event Hubs, Service Bus queue, Service Bus topic, and Azure Storage.
+
+[Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) are useful when you need to define resources in a JSON file. Each resource in Azure has a template to export that defines the components used in that resource.
+
+> [!IMPORTANT]
+> The Resource Manager template will replace the existing resource, if there is one, when deployed. If you're creating a new IoT hub, this is not a concern and you can use a [basic template](/azure/azure-resource-manager/templates/syntax#template-format) with the required properties instead of exporting an existing template from your IoT hub.
+>
+> However, adding a route to an existing IoT hub Resource Manager template, exported from your IoT hub, ensures all other resources and properties connected will remain after deployment (they won't be replaced). For example, an exported Resource Manager template might contain storage information for your IoT hub, if you've connected it to storage.
+
+To learn more about how routing works in IoT Hub, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](iot-hub-devguide-messages-d2c.md). To walk through setting up a route that sends messages to storage and testing it on a simulated device, see [Tutorial: Send device data to Azure Storage using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=portal).
+
+## Prerequisites
+
+**Azure Resource Manager**
+
+This article uses a template from Resource Manager. To understand more about Resource Manager, see [What are ARM templates?](../azure-resource-manager/templates/overview.md)
+
+**IoT Hub and an endpoint service**
+
+You need an IoT hub and at least one other service to serve as an endpoint to an IoT hub route. You can choose which Azure service (Event Hubs, Service Bus queue or topic, or Azure Storage) you'd like to use as the endpoint for your IoT hub route.
+
+* An IoT hub in your [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). If you don't have a hub yet, you can follow the steps in [Create an IoT hub using Azure Resource Manager template (PowerShell)](iot-hub-rm-template-powershell.md).
+
+* (Optional) An Event Hubs resource (namespace and entity). If you need to create a new Event Hubs resource, see [Quickstart: Create an event hub by using an ARM template](../event-hubs/event-hubs-resource-manager-namespace-event-hub.md).
+
+* (Optional) A Service Bus queue resource. If you need to create a new Service Bus queue, see [Quickstart: Create a Service Bus namespace and a queue using an ARM template](../service-bus-messaging/service-bus-resource-manager-namespace-queue.md).
+
+* (Optional) A Service Bus topic resource. If you need to create a new Service Bus topic, see [Quickstart: Create a Service Bus namespace with topic and subscription using an Azure Resource Manager template](../service-bus-messaging/service-bus-resource-manager-namespace-topic.md).
+
+* (Optional) An Azure Storage resource. If you need to create a new Azure Storage, see [Create a storage account](/azure/storage/common/storage-account-create?tabs=template).
+
+## Create a route
+
+In IoT Hub, you can create a route to send messages or capture events. Each route has an endpoint, where the messages or events end up, and a data source, where the messages or events originate. You choose these locations when creating a new route in the IoT Hub. Routing queries are then used to filter messages or events before they go to the endpoint.
+
+You can use Event Hubs, a Service Bus queue or topic, or Azure Storage as an endpoint in your IoT hub route. A resource for the service must first be created in your Azure account.
+
+### Export the Resource Manager template from your IoT hub
+
+Let's export a Resource Manager template from your IoT hub, then we'll add a route to it.
+
+1. Go to your IoT hub in the Azure portal and select **Export template** at the bottom of the menu under **Automation**.
+
+ :::image type="content" source="media/how-to-routing-arm/export-menu-option.jpg" alt-text="Screenshot that shows location of the export template option in the menu of the IoT Hub.":::
+
+1. You see a JSON file generated for your IoT hub. Uncheck the **Include parameters** box.
+
+ Select **Download** to download a local copy of this file.
+
+ :::image type="content" source="media/how-to-routing-arm/download-template.jpg" alt-text="Screenshot that shows location of the download button on the Export template page.":::
+
+    There are several placeholders for information in this template in case you want to add features or services to your IoT hub in the future. But for this article, we only need to add entries to the `routing` property: an endpoint and a route.
+
+### Add a new endpoint to your Resource Manager template
+
+In the JSON file, find the `"endpoints": []` property, nested under `"routing"`, and add the following new endpoint, according to the endpoint service (Event Hubs, Service Bus queue or topic, or Azure Storage) you choose.
+
+* The **id** will be added for you, so leave it as a blank string for now.
+
+# [Event Hubs](#tab/eventhubs)
+
+If you need to create an Event Hubs resource (with container), see [Quickstart: Create an event hub by using an ARM template](../event-hubs/event-hubs-resource-manager-namespace-event-hub.md).
+
+Grab your primary connection string from your Event Hubs resource in the [Azure portal](https://portal.azure.com/#home) from its **Shared access policies** page. Select one of your policies to see the key and connection string information. Add your Event Hubs name to the entity path at the end of the connection string, for example `;EntityPath=my-event-hubs`. This name is your event hub name, not your namespace name.
+
+Use a unique name for your endpoint `name`. Leave the `id` parameter as an empty string. Azure will provide an `id` when you deploy this endpoint.
+
+```json
+"routing": {
+ "endpoints": {
+ "serviceBusQueues": [],
+ "serviceBusTopics": [],
+ "eventHubs": [
+ {
+ "connectionString": "my Event Hubs connection string + entity path",
+ "authenticationType": "keyBased",
+ "name": "my-event-hubs-endpoint",
+ "id": "",
+ "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "resourceGroup": "my-resource-group"
+ }
+ ],
+ "storageContainers": [],
+ "cosmosDBSqlCollections": []
+ },
+```
+
+# [Service Bus queue](#tab/servicebusqueue)
+
+If you need to create a Service Bus queue resource (a namespace and queue), see [Quickstart: Create a Service Bus namespace and a queue using an ARM template](../service-bus-messaging/service-bus-resource-manager-namespace-queue.md).
+
+Grab your primary connection string from your Service Bus resource in the [Azure portal](https://portal.azure.com/#home) from its **Shared access policies** page. Select one of your policies to see the key and connection string information. Add your Service Bus queue name to the entity path at the end of the connection string, for example `;EntityPath=my-service-bus-queue`. This name is your queue name, not your namespace name.
+
+Use a unique name for your endpoint `name`. Leave the `id` parameter as an empty string. Azure will provide an `id` when you deploy this endpoint.
+
+```json
+"routing": {
+ "endpoints": {
+ "serviceBusQueues": [
+ {
+ "connectionString": "my Service Bus connection string + entity path",
+ "authenticationType": "keyBased",
+ "name": "my-service-bus-queue-endpoint",
+ "id": "",
+ "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "resourceGroup": "my-resource-group"
+ }
+ ],
+ "serviceBusTopics": [],
+ "eventHubs": [],
+ "storageContainers": [],
+ "cosmosDBSqlCollections": []
+ },
+```
+
+# [Service Bus topic](#tab/servicebustopic)
+
+If you need to create a Service Bus topic resource (a namespace, topic, and subscription), see [Quickstart: Create a Service Bus namespace with topic and subscription using an Azure Resource Manager template](../service-bus-messaging/service-bus-resource-manager-namespace-topic.md).
+
+Grab your primary connection string from your Service Bus resource in the [Azure portal](https://portal.azure.com/#home) from its **Shared access policies** page. Select one of your policies to see the key and connection string information. Add your Service Bus topic name to the entity path at the end of the connection string, for example `;EntityPath=my-service-bus-topic`. This name is your topic name, not your namespace name.
+
+Use a unique name for your endpoint `name`. Leave the `id` parameter as an empty string. Azure will provide an `id` when you deploy this endpoint.
+
+```json
+"routing": {
+ "endpoints": {
+ "serviceBusQueues": [],
+ "serviceBusTopics": [
+ {
+ "connectionString": "my Service Bus connection string + entity path",
+ "authenticationType": "keyBased",
+ "name": "my-service-bus-topic-endpoint",
+ "id": "",
+ "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "resourceGroup": "my-resource-group"
+ }
+ ],
+ "eventHubs": [],
+ "storageContainers": [],
+ "cosmosDBSqlCollections": []
+ },
+```
+
+# [Azure Storage](#tab/azurestorage)
+
+If you need to create an Azure Storage resource (account and container), see [Create a storage account](/azure/storage/common/storage-account-create?tabs=template).
+
+Grab your primary connection string from your Azure Storage resource in the [Azure portal](https://portal.azure.com/#home) from its **Access keys** page.
+
+Use a unique name for your endpoint `name`. Leave the `id` parameter as an empty string. Azure will provide an `id` when you deploy this endpoint.
+
+```json
+"routing": {
+ "endpoints": {
+ "serviceBusQueues": [],
+ "serviceBusTopics": [],
+ "eventHubs": [],
+ "storageContainers": [
+ {
+ "connectionString": "my Azure storage connection string",
+ "containerName": "my-container",
+ "fileNameFormat": "{iothub}/{partition}/{YYYY}/{MM}/{DD}/{HH}/{mm}.avro",
+ "batchFrequencyInSeconds": 100,
+ "maxChunkSizeInBytes": 104857600,
+ "encoding": "avro",
+ "authenticationType": "keyBased",
+ "name": "my-storage-endpoint",
+ "id": "",
+ "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "resourceGroup": "my-resource-group"
+ }
+ ],
+ "cosmosDBSqlCollections": []
+ },
+```
++
+### Add a new route to your Resource Manager template
+
+In the JSON file, find the `"routes": []` property, nested under `"routing"`, and add the following new route, according to the endpoint service (Event Hubs, Service Bus queue or topic, or Azure Storage) you choose.
+
+The default fallback route collects messages from `DeviceMessages`, so let's choose another option, such as `DeviceConnectionStateEvents`. For more information on source options, see [az iot hub route](/cli/azure/iot/hub/route#az-iot-hub-route-create-required-parameters).
+
+> [!CAUTION]
+> If you replace your existing `"routes"` with the following route, the old ones will be removed upon deployment. To preserve existing routes, *add* the route object to the `"routes"` list.
+
+For more information about the template, see [ARM template resource definition](/azure/templates/microsoft.devices/iothubs?pivots=deployment-language-arm-template#routeproperties-1).
+
+# [Event Hubs](#tab/eventhubs)
+
+```json
+"routes": [
+ {
+ "name": "MyIotHubRoute",
+ "source": "DeviceConnectionStateEvents",
+ "condition": "true",
+ "endpointNames": [
+ "my-event-hubs-endpoint"
+ ],
+ "isEnabled": true
+ }
+],
+```
+
+Save your JSON file.
+
+# [Service Bus queue](#tab/servicebusqueue)
+
+```json
+"routes": [
+ {
+ "name": "MyIotHubRoute",
+ "source": "DeviceConnectionStateEvents",
+ "condition": "true",
+ "endpointNames": [
+ "my-service-bus-queue-endpoint"
+ ],
+ "isEnabled": true
+ }
+],
+```
+
+Save your JSON file.
+
+# [Service Bus topic](#tab/servicebustopic)
+
+```json
+"routes": [
+ {
+ "name": "MyIotHubRoute",
+ "source": "DeviceConnectionStateEvents",
+ "condition": "true",
+ "endpointNames": [
+ "my-service-bus-topic-endpoint"
+ ],
+ "isEnabled": true
+ }
+],
+```
+
+Save your JSON file.
+
+# [Azure Storage](#tab/azurestorage)
+
+```json
+"routes": [
+ {
+ "name": "MyIotHubRoute",
+ "source": "DeviceConnectionStateEvents",
+ "condition": "true",
+ "endpointNames": [
+ "my-storage-endpoint"
+ ],
+ "isEnabled": true
+ }
+],
+```
+
+Save your JSON file.
+++
+## Deploy the Resource Manager template
+
+With your new endpoint and route added to the Resource Manager template, you can now deploy the JSON file back to your IoT hub.
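+
+Optionally, you can first validate the edited template without deploying it. The following is a minimal sketch, assuming the same resource group and template file path used in the deployment commands later in this section:
+
+```azurecli
+# Validate the edited template before deploying it (resource group and file path are placeholders)
+az deployment group validate \
+    --resource-group my-resource-group \
+    --template-file "my\path\to\template.json"
+```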
+
+### Local deployment
+
+# [Azure CLI](#tab/cli)
+
+```azurecli
+az deployment group create \
+ --name my-iot-hub-template \
+ --resource-group my-resource-group \
+ --template-file "my\path\to\template.json"
+
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+New-AzResourceGroupDeployment `
+ -Name MyTemplate `
+ -ResourceGroupName MyResourceGroup `
+ -TemplateFile "my\path\to\template.json"
+```
++
+### Azure Cloud Shell deployment
+
+Since [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) runs in a browser, you can [upload](/azure/cloud-shell/using-the-shell-window#upload-and-download-files) the template file before running the deployment command. With the file uploaded, you only need the template file name (instead of the entire file path) for the `template-file` parameter.
++
+# [Azure CLI](#tab/cli)
+
+```azurecli
+az deployment group create \
+ --name my-iot-hub-template \
+ --resource-group my-resource-group \
+ --template-file "template.json"
+
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+New-AzResourceGroupDeployment `
+ -Name MyIoTHubTemplate `
+ -ResourceGroupName MyResourceGroup `
+ -TemplateFile "template.json"
+```
++
+To view your new route in the [Azure portal](https://portal.azure.com/), go to your IoT Hub resource and look on the **Message routing** page to see your route listed under the **Routes** tab.
+
+> [!NOTE]
+> If the deployment fails, use the verbose switch to get information about the resources you're creating. Use the debug switch to get more information for debugging.
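+
+For example, here's a minimal sketch of rerunning the deployment with more output; the deployment and file names are the placeholders used earlier in this article:
+
+```azurecli
+# Rerun the deployment with verbose output; use --debug instead for full request and response details
+az deployment group create \
+    --name my-iot-hub-template \
+    --resource-group my-resource-group \
+    --template-file "template.json" \
+    --verbose
+```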
+
+## Confirm deployment
+
+To confirm that your template deployed successfully to Azure, go to your resource group in the Azure portal and check the **Deployments** page under the **Settings** menu.
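+
+You can also confirm the deployment state from the command line. This sketch assumes the deployment name used earlier in this article:
+
+```azurecli
+# Shows "Succeeded" when the deployment completed without errors
+az deployment group show \
+    --name my-iot-hub-template \
+    --resource-group my-resource-group \
+    --query properties.provisioningState
+```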
++
+## Next steps
+
+In this how-to article, you learned how to create a route and endpoint for your Event Hubs, Service Bus queue or topic, and Azure Storage resources.
+
+To further your exploration into message routing, see [Tutorial: Send device data to Azure Storage using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=portal). In this tutorial, you'll create a storage route and test it with a device in your IoT hub.
iot-hub How To Routing Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-azure-cli.md
+
+ Title: Message routing with IoT Hub - Azure CLI | Microsoft Docs
+description: A how-to article that creates and deletes routes and endpoints in IoT Hub, using Azure CLI.
++++ Last updated : 11/11/2022+++
+# Message routing with IoT Hub - Azure CLI
+
+This article shows you how to create an endpoint and route in your IoT hub, then delete your endpoint and route. You can also update a route. We use the Azure CLI to create endpoints and routes to Event Hubs, Service Bus queue, Service Bus topic, or Azure Storage.
+
+To learn more about how routing works in IoT Hub, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](iot-hub-devguide-messages-d2c.md). To walk through setting up a route that sends messages to storage and testing on a simulated device, see [Tutorial: Send device data to Azure Storage using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=portal).
+
+## Prerequisites
+
+**Azure CLI**
++
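+
+If you run the Azure CLI locally, sign in and select the subscription that contains your IoT hub before running the commands in this article. This is a minimal sketch; the subscription name is a placeholder:
+
+```azurecli
+# Sign in and choose the subscription that contains your IoT hub
+az login
+az account set --subscription "my-subscription"
+```
+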
+**IoT Hub and an endpoint service**
+
+You need an IoT hub and at least one other service to serve as an endpoint to an IoT hub route.
+
+* An IoT hub in your [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). If you don't have a hub yet, you can follow the steps in [Create an IoT hub using the Azure CLI](iot-hub-create-using-cli.md).
+
+You can choose which Azure service (Event Hubs, Service Bus queue or topic, or Azure Storage) you'd like to use as the endpoint for your IoT hub route. You only need one service to assign the endpoint to a route.
+
+# [Event Hubs](#tab/eventhubs)
+
+You can choose an Event Hubs resource (namespace and entity).
+
+### Create an Event Hubs resource with authorization rule
+
+1. Create the Event Hubs namespace. The `name` should be unique. The location, `l`, should be your resource group region.
+
+ ```azurecli
+ az eventhubs namespace create --name my-routing-namespace --resource-group my-resource-group -l westus3
+ ```
+
+1. Create your event hub instance. The `name` should be unique. Use the `namespace-name` you created in the previous command.
+
+ ```azurecli
+ az eventhubs eventhub create --name my-event-hubs --resource-group my-resource-group --namespace-name my-routing-namespace
+ ```
+
+1. Create an authorization rule for your Event hubs resource.
+
+ > [!TIP]
+    > The `name` parameter's value `RootManageSharedAccessKey` is the default name that allows **Manage, Send, Listen** claims (access). If you want to restrict the claims, give the `name` parameter your own unique name and include the `--rights` flag followed by one of the claims. For example, `--name my-name --rights Send`.
+
+ For more information about access, see [Authorize access to Azure Event Hubs](/azure/event-hubs/authorize-access-event-hubs).
+
+ ```azurecli
+ az eventhubs eventhub authorization-rule create --resource-group my-resource-group --namespace-name my-routing-namespace --eventhub-name my-event-hubs --name RootManageSharedAccessKey
+ ```
+
+For more information, see [Quickstart: Create an event hub using Azure CLI](/azure/event-hubs/event-hubs-quickstart-cli).
+
+# [Service Bus queue](#tab/servicebusqueue)
+
+You can choose a Service Bus queue resource (namespace and queue).
+
+### Create a Service Bus queue resource with authorization rule
+
+Create the namespace first, followed by the Service Bus queue entity, then the authorization rule. You need an authorization rule to access the Service Bus queue resource.
+
+1. Create a new Service Bus namespace. Use a unique `name` for your namespace.
+
+ ```azurecli
+ az servicebus namespace create --resource-group my-resource-group --name my-namespace
+ ```
+
+1. Create a new Service Bus queue. Use a unique `name` for your queue.
+
+ ```azurecli
+ az servicebus queue create --resource-group my-resource-group --namespace-name my-namespace --name my-queue
+ ```
+
+1. Create a Service Bus authorization rule. Use a unique `name` for your authorization rule.
+
+ ```azurecli
+ az servicebus queue authorization-rule create --resource-group my-resource-group --namespace-name my-namespace --queue-name my-queue --name my-auth-rule --rights Listen
+ ```
+
+ For more authorization rule options, see [az servicebus queue authorization-rule create](/cli/azure/servicebus/queue/authorization-rule#az-servicebus-queue-authorization-rule-create).
+
+For more information, see [Use the Azure CLI to create a Service Bus namespace and a queue](/azure/service-bus-messaging/service-bus-quickstart-cli).
+
+# [Service Bus topic](#tab/servicebustopic)
+
+You can choose a Service Bus topic resource (namespace, topic, and subscription).
+
+### Create a Service Bus topic resource with subscription
+
+Create the namespace first, followed by the Service Bus topic entity, then the authorization rule. You need an authorization rule to access the Service Bus topic resource.
+
+1. Create a new Service Bus namespace. Use a unique `name` for your namespace.
+
+ ```azurecli
+ az servicebus namespace create --resource-group my-resource-group --name my-namespace
+ ```
+
+1. Create a new Service Bus topic. Use a unique `name` for your topic.
+
+ ```azurecli
+ az servicebus topic create --resource-group my-resource-group --namespace-name my-namespace --name my-topic
+ ```
+
+1. Create a Service Bus topic subscription.
+
+ ```azurecli
+ az servicebus topic subscription create --resource-group my-resource-group --namespace-name my-namespace --topic-name my-topic --name my-subscription
+ ```
+
+1. (Optional) If you'd like to filter messages for a subscription, create a Service Bus subscription rule. Use a unique `name` for your filter. A filter can be a SQL expression, such as "StoreId IN ('Store1','Store2','Store3')".
+
+ ```azurecli
+ az servicebus topic subscription rule create --resource-group my-resource-group --namespace-name my-namespace --topic-name my-topic --subscription-name my-subscription --name my-filter --filter-sql-expression "my-sql-expression"
+ ```
+
+For more information, see [Use Azure CLI to create a Service Bus topic and subscriptions to the topic](/azure/service-bus-messaging/service-bus-tutorial-topics-subscriptions-cli).
+
+# [Azure Storage](#tab/azurestorage)
+
+You can choose an Azure Storage resource (account and container).
+
+### Create an Azure Storage resource with container
+
+1. Create a new storage account.
+ > [!TIP]
+ > Your storage name must be between 3 and 24 characters in length and use numbers and lower-case letters only. No dashes are allowed.
+
+ ```azurecli
+ az storage account create --name mystorageaccount --resource-group myresourcegroup
+ ```
+
+1. Create a new container in your storage account.
+
+ ```azurecli
+ az storage container create --name my-storage-container --account-name mystorageaccount
+ ```
+
+ You see a confirmation that your container was created in your console.
+ ```azurecli
+ {
+ "created": true
+ }
+ ```
+
+For more information, see [Create a storage account](/azure/storage/common/storage-account-create?tabs=azure-cli).
+++
+## Create an endpoint
+
+Endpoints can be used in an IoT Hub route. An endpoint can be standalone; for example, you can create one now for later use. However, a route needs an endpoint, so in this article we create the endpoint first and then the route.
+
+You can use Event Hubs, a Service Bus queue or topic, or Azure Storage as the endpoint for your IoT hub route. The Azure resource must first exist in your Azure account.
+
+# [Event Hubs](#tab/eventhubs)
+
+References used in the following commands:
+* [az eventhubs](/cli/azure/eventhubs)
+* [az iot hub](/cli/azure/iot/hub)
+
+### Create an Event Hubs endpoint
+
+1. List your authorization rule to get your Event Hubs connection string. Copy your connection string for later use.
+
+ ```azurecli
+ az eventhubs eventhub authorization-rule keys list --resource-group my-resource-group --namespace-name my-routing-namespace --eventhub-name my-event-hubs --name RootManageSharedAccessKey
+ ```
+
+1. Create your custom endpoint. In this command, use the connection string that you copied in the last step. The `endpoint-type` must be `eventhub`; all other values should be your own.
+
+ ```azurecli
+    az iot hub routing-endpoint create --endpoint-name my-event-hub-endpoint --endpoint-type eventhub --endpoint-resource-group my-resource-group --endpoint-subscription-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --hub-name my-iot-hub --connection-string "my connection string"
+ ```
+
+ To see all routing endpoint options, see [az iot hub routing-endpoint](/cli/azure/iot/hub/routing-endpoint).
+
+# [Service Bus queue](#tab/servicebusqueue)
+
+References used in the following commands:
+* [az servicebus](/cli/azure/servicebus)
+* [az iot hub](/cli/azure/iot/hub)
+
+### Create a Service Bus queue endpoint
+
+1. List your authorization rule keys to get your Service Bus queue connection string. Copy your connection string for later use.
+
+ ```azurecli
+ az servicebus queue authorization-rule keys list --resource-group my-resource-group --namespace-name my-namespace --queue-name my-queue --name my-auth-rule
+ ```
+
+1. Create a new Service Bus queue endpoint. The `endpoint-type` must be `servicebusqueue`; all other parameters should have your own values.
+
+ ```azurecli
+    az iot hub routing-endpoint create --endpoint-name my-service-bus-queue-endpoint --endpoint-type servicebusqueue --endpoint-resource-group my-resource-group --endpoint-subscription-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --hub-name my-iot-hub --connection-string "Endpoint=<my connection string>"
+ ```
+
+# [Service Bus topic](#tab/servicebustopic)
+
+References used in the following commands:
+* [az servicebus](/cli/azure/servicebus)
+* [az iot hub](/cli/azure/iot/hub)
+
+### Create a Service Bus topic endpoint
+
+1. List your authorization rule keys to get your Service Bus topic connection string. Copy your connection string for later use. The default name of your authorization rule that comes with a new namespace is **RootManageSharedAccessKey**.
+
+ ```azurecli
+ az servicebus topic authorization-rule keys list --resource-group my-resource-group --namespace-name my-namespace --topic-name my-topic --name RootManageSharedAccessKey
+ ```
+
+1. Create a new Service Bus topic endpoint. The `endpoint-type` must be `servicebustopic`; all other parameters should have your own values. Replace `Endpoint=<my connection string>` with the connection string you copied from the previous step. Add `;EntityPath=my-service-bus-topic` to the end of your connection string. Since we didn't create a custom authorization rule, the namespace connection string doesn't include the entity path information, but the entity path is required to make a Service Bus topic endpoint. Replace the `my-service-bus-topic` part of the entity path string with the name of your Service Bus topic.
+
+ ```azurecli
+    az iot hub routing-endpoint create --endpoint-name my-service-bus-topic-endpoint --endpoint-type servicebustopic --endpoint-resource-group my-resource-group --endpoint-subscription-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --hub-name my-iot-hub --connection-string "Endpoint=<my connection string>;EntityPath=my-service-bus-topic"
+ ```
+
+# [Azure Storage](#tab/azurestorage)
+
+References used in the following commands:
+* [az storage](/cli/azure/storage)
+* [az iot hub](/cli/azure/iot/hub)
+
+### Create an Azure Storage endpoint
+
+1. You need the connection string from your Azure Storage resource to create an endpoint. Get the string using the `show-connection-string` command and copy it for the next step.
+
+ ```azurecli
+    az storage account show-connection-string --resource-group my-resource-group --name mystorageaccount
+ ```
+
+1. Create a new Azure Storage endpoint. The `endpoint-type` must be `azurestoragecontainer`; all other parameters should have your own values. Use the connection string you copied from the previous step.
+
+ ```azurecli
+    az iot hub routing-endpoint create --resource-group my-resource-group --hub-name my-iot-hub --endpoint-name my-storage-endpoint --endpoint-type azurestoragecontainer --container my-storage-container --endpoint-resource-group my-resource-group --endpoint-subscription-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --connection-string "DefaultEndpointsProtocol=<my connection string>"
+ ```
+
+ For more parameter options, see [az iot hub routing-endpoint](/cli/azure/iot/hub/routing-endpoint).
+++
+## Create an IoT Hub route
+
+In IoT Hub, you can create a route to send messages or capture events. Each route has an endpoint, where the messages or events end up, and a data source, where the messages or events originate. You choose these locations when creating a new route in the IoT Hub. Optionally, you can [Add queries to message routes](iot-hub-devguide-routing-query-syntax.md) to filter messages or events before they go to the endpoint.
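+
+As a sketch of what a filtered route looks like (the hub, endpoint, and route names are placeholders, and `level` is an example message application property), you can pass a routing query with the `--condition` parameter when creating a route; only messages that match the condition are routed to the endpoint:
+
+```azurecli
+# Route only device messages whose application property "level" equals "storage"
+az iot hub route create \
+    --resource-group my-resource-group \
+    --hub-name my-iot-hub \
+    --route-name my-filtered-route \
+    --endpoint-name my-endpoint \
+    --source devicemessages \
+    --condition 'level="storage"'
+```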
+
+# [Event Hubs](#tab/eventhubs)
+
+1. With your existing Event Hubs endpoint, create a new IoT Hub route, using that endpoint. Use the endpoint name for `endpoint`. Use a unique name for `route-name`.
+
+ The default fallback route in IoT Hub collects messages from `DeviceMessages`, so let's choose another option for our custom route, such as `DeviceConnectionStateEvents`. For more source options, see [az iot hub route](/cli/azure/iot/hub/route#az-iot-hub-route-create-required-parameters).
+
+ ```azurecli
+ az iot hub route create --endpoint my-event-hub-endpoint --hub-name my-iot-hub --route-name my-event-hub-route --source deviceconnectionstateevents
+ ```
+
+1. A new route should show in your IoT hub. Run this command to confirm the route is there.
+
+ ```azurecli
+ az iot hub route list -g my-resource-group --hub-name my-iot-hub
+ ```
+
+ You should see a similar response in your console.
+
+ ```json
+ [
+ {
+ "condition": "true",
+ "endpointNames": [
+ "my-event-hub-endpoint"
+ ],
+ "isEnabled": true,
+ "name": "my-event-hub-route",
+ "source": "DeviceConnectionStateEvents"
+ }
+ ]
+ ```
+
+# [Service Bus queue](#tab/servicebusqueue)
+
+1. With your existing Service Bus queue endpoint, create a new IoT Hub route, using that endpoint. Use the endpoint name for `endpoint`. Use a unique name for `route-name`.
+
+ The default fallback route in IoT Hub collects messages from `DeviceMessages`, so let's choose another option for our custom route, such as `DeviceConnectionStateEvents`. For more source options, see [az iot hub route](/cli/azure/iot/hub/route#az-iot-hub-route-create-required-parameters).
+
+ ```azurecli
+ az iot hub route create --endpoint my-service-bus-queue-endpoint --hub-name my-iot-hub --route-name my-route --source deviceconnectionstateevents
+ ```
+
+1. List your IoT hub routes to confirm your new Service Bus queue route.
+
+ ```azurecli
+ az iot hub route list --resource-group my-resource-group --hub-name my-iot-hub
+ ```
+
+ You should see something similar in your Azure CLI.
+
+ ```json
+ {
+ "condition": "true",
+ "endpointNames": [
+ "my-service-bus-queue-endpoint"
+ ],
+ "isEnabled": true,
+        "name": "my-route",
+ "source": "DeviceConnectionStateEvents"
+ }
+ ```
+
+# [Service Bus topic](#tab/servicebustopic)
+
+1. With your existing Service Bus topic endpoint, create a new IoT Hub route, using that endpoint. Use the endpoint name for `endpoint`. Use a unique name for `route-name`.
+
+ The default fallback route in IoT Hub collects messages from `DeviceMessages`, so let's choose another option for our custom route, such as `DeviceConnectionStateEvents`. For more source options, see [az iot hub route](/cli/azure/iot/hub/route#az-iot-hub-route-create-required-parameters).
+
+ ```azurecli
+ az iot hub route create --endpoint my-service-bus-topic-endpoint --hub-name my-iot-hub --route-name my-route --source deviceconnectionstateevents
+ ```
+
+1. List your IoT hub routes to confirm your new Service Bus topic route.
+
+ ```azurecli
+ az iot hub route list --resource-group my-resource-group --hub-name my-iot-hub
+ ```
+
+ You should see something similar in your Azure CLI.
+
+ ```json
+ {
+ "condition": "true",
+ "endpointNames": [
+ "my-service-bus-topic-endpoint"
+ ],
+ "isEnabled": true,
+        "name": "my-route",
+ "source": "DeviceConnectionStateEvents"
+ }
+ ```
+
+# [Azure Storage](#tab/azurestorage)
+
+1. With your existing Azure storage endpoint, create a new IoT Hub route, using that endpoint. Use the endpoint name for `endpoint`. Use a unique name for `route-name`.
+
+ The default fallback route in IoT Hub collects messages from `DeviceMessages`, so let's choose another option for our custom route, such as `DeviceConnectionStateEvents`. For more source options, see [az iot hub route](/cli/azure/iot/hub/route#az-iot-hub-route-create-required-parameters).
+
+ ```azurecli
+ az iot hub route create --resource-group my-resource-group --hub-name my-iot-hub --endpoint-name my-storage-endpoint --source deviceconnectionstateevents --route-name my-route
+ ```
+
+1. Confirm that your new route is in your IoT hub.
+
+ ```azurecli
+ az iot hub route list --resource-group my-resource-group --hub-name my-iot-hub
+ ```
+
+ You should see something similar in your Azure CLI.
+
+ ```json
+ {
+ "condition": "true",
+ "endpointNames": [
+ "my-storage-endpoint"
+ ],
+ "isEnabled": true,
+        "name": "my-route",
+ "source": "DeviceConnectionStateEvents"
+ }
+ ```
+++
+### Update an IoT Hub route
+
+With an IoT Hub route, no matter the type of endpoint, you can update some properties of the route.
+
+1. To change a parameter, use the [az iot hub route update](/cli/azure/iot/hub/route#az-iot-hub-route-update) command. For example, you can change `source` from `deviceconnectionstateevents` to `devicejoblifecycleevents`.
+
+ ```azurecli
+ az iot hub route update --resource-group my-resource-group --hub-name my-iot-hub --source devicejoblifecycleevents --route-name my-route
+ ```
+
+1. Use the `az iot hub route show` command to confirm the change in your route.
+
+ ```azurecli
+ az iot hub route show --resource-group my-resource-group --hub-name my-iot-hub --route-name my-route
+ ```
+
+### Delete an endpoint
+
+```azurecli
+az iot hub routing-endpoint delete --resource-group my-resource-group --hub-name my-iot-hub --endpoint-name my-endpoint
+```
+
+### Delete an IoT Hub route
+
+```azurecli
+az iot hub route delete --resource-group my-resource-group --hub-name my-iot-hub --route-name my-route
+```
+
+> [!TIP]
+> Deleting a route won't delete endpoints in your Azure account. The endpoints must be deleted separately.
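+
+For example, to check which custom endpoints remain on the hub after you delete a route (the names here are the placeholders used earlier in this article):
+
+```azurecli
+# List the custom endpoints still configured on the IoT hub
+az iot hub routing-endpoint list --resource-group my-resource-group --hub-name my-iot-hub
+```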
++
+## Next steps
+
+In this how-to article, you learned how to create a route and endpoint for your Event Hubs, Service Bus queue or topic, and Azure Storage resources.
+
+To further your exploration into message routing, see [Tutorial: Send device data to Azure Storage using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=cli). In this tutorial, you'll create a storage route and test it with a device in your IoT hub.
iot-hub How To Routing Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-portal.md
+
+ Title: Message routing with IoT Hub - Azure portal | Microsoft Docs
+description: A how-to article that creates and deletes routes and endpoints in IoT Hub, using the Azure portal.
++++ Last updated : 11/11/2022+++
+# Message routing with IoT Hub - Azure portal
+
+This article shows you how to create a route and endpoint in your IoT hub, then delete your route and endpoint. We use the Azure portal to create routes and endpoints to Event Hubs, Service Bus queue, Service Bus topic, and Azure Storage.
+
+To learn more about how routing works in IoT Hub, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](/azure/iot-hub/iot-hub-devguide-messages-d2c). To walk through setting up a route that sends messages to storage and testing on a simulated device, see [Tutorial: Send device data to Azure Storage using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=portal).
+
+## Prerequisites
+
+**Azure portal**
+
+This article uses the Azure portal interface to work with IoT Hub and other Azure services. To understand the portal better, see [What is the Azure portal?](/azure/azure-portal/azure-portal-overview)
+
+**IoT Hub and an endpoint service**
+
+You need an IoT hub and at least one other service to serve as an endpoint to an IoT hub route. You can choose which Azure service (Event Hubs, Service Bus queue or topic, or Azure Storage) you'd like to use as the endpoint for your IoT hub route.
+
+* An IoT hub in your [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). If you don't have a hub yet, you can follow the steps in [Create an IoT hub using the Azure portal](/azure/iot-hub/iot-hub-create-through-portal).
+
+* (Optional) An Event Hubs resource (namespace and entity). If you need to create a new Event Hubs resource, see [Quickstart: Create an event hub using Azure portal](/azure/event-hubs/event-hubs-create).
+
+* (Optional) A Service Bus queue resource (namespace and queue). If you need to create a new Service Bus queue, see [Use Azure portal to create a Service Bus namespace and a queue](/azure/service-bus-messaging/service-bus-quickstart-portal).
+
+* (Optional) A Service Bus topic resource (namespace and topic). If you need to create a new Service Bus topic, see [Use the Azure portal to create a Service Bus topic and subscriptions to the topic](/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal).
+
+* (Optional) An Azure Storage resource (account and container). If you need to create a new Azure Storage, see [Create a storage account](/azure/storage/common/storage-account-create?tabs=azure-portal). There are lots of options (tabs) with a storage account, but you only need a new container in your account for this article.
+
+* (Optional) A Cosmos DB resource (account, database, and container). If you need to create a new Cosmos DB, see [Create an Azure Cosmos DB account](/azure/cosmos-db/nosql/quickstart-portal#create-account). For the API option, choose **Azure Cosmos DB for NoSQL**.
+
+## Create a route
+
+In IoT Hub, you can create a route to send messages or capture events. Each route has an endpoint, where the messages or events end up, and a data source, where the messages or events originate. You choose these locations when creating a new route in the IoT Hub. Routing queries are then used to filter messages or events before they go to the endpoint.
+
+You can use Event Hubs, a Service Bus queue or topic, or Azure Storage as the endpoint for your IoT hub route. The service must first exist in your Azure account.
+
+Go to your IoT hub in the Azure portal and select **Message routing** from the **Hub settings** menu. Select **+ Add** to add a new route.
++
+Next, decide which endpoint type (Event Hubs, Service Bus queue or topic, Azure Storage, or Cosmos DB) you want to create a route for, then follow the steps in the corresponding tab.
+
+# [Event Hubs](#tab/eventhubs)
+
+If you need to create an Event Hubs resource, see [Quickstart: Create an event hub using Azure portal](/azure/event-hubs/event-hubs-create).
+
+In the Azure portal, you can create a route and endpoint at the same time. Using Azure CLI or PowerShell, you must create an endpoint first and then create a route.
+
+1. In the **Add a route** blade that appears, create a unique **Name**. Optionally, you might want to include the endpoint type in the name, such as **my-event-hubs-route**.
+
+1. For **Endpoint**, select the **+ Add endpoint** dropdown list and choose **Event hubs**.
+
+ :::image type="content" source="media/how-to-routing-portal/add-endpoint-event-hubs.jpg" alt-text="Screenshot that shows location of the 'Add endpoint' dropdown list.":::
+
+1. A new page, **Add an event hub endpoint**, opens. Enter an **Endpoint name** for your IoT hub. This name displays in your IoT hub.
+
+ For **Event hub namespace**, choose the namespace you previously created in your Event Hubs resource from the dropdown list.
+
+ For **Event hub instance**, choose the event hub you created in your Event Hubs resource from the dropdown list.
+
+ Select **Create** at the bottom and you'll go back to the **Add a route** page.
+
+ :::image type="content" source="media/how-to-routing-portal/add-event-hub.jpg" alt-text="Screenshot that shows all options to choose on the 'Add an event hub endpoint' page.":::
+
+1. Leave all the other values as their defaults on the **Add a route** page.
+
+1. Select **Save** at the bottom to create your new route. You should now see the route on your **Message routing** page.
+
+ :::image type="content" source="media/how-to-routing-portal/see-new-route.jpg" alt-text="Screenshot that shows the new route you created on the 'Message routing' page." lightbox="media/how-to-routing-portal/see-new-route.jpg":::
+
+# [Service Bus queue](#tab/servicebusqueue)
+
+If you need to create a Service Bus queue, see [Use Azure portal to create a Service Bus namespace and a queue](/azure/service-bus-messaging/service-bus-quickstart-portal).
+
+1. In the **Add a route** blade that appears, create a unique **Name**. Optionally, you might want to include the endpoint type in the name, such as **my-service-bus-route**.
+
+1. For **Endpoint**, select the **+ Add endpoint** dropdown list and choose **Service bus queue**.
+
+1. A new page called **Add a service bus endpoint** appears. Enter a unique name in the **Endpoint name** field.
+
+1. Select the dropdown list for **Service bus namespace** and select your service bus.
+
+1. Select the dropdown list for **Service bus queue** and select your service bus queue.
+
+1. Leave all other values in their default states and choose **Create** at the bottom.
+
+ :::image type="content" source="media/how-to-routing-portal/add-service-bus-endpoint.jpg" alt-text="Screenshot that shows the 'Add a service bus endpoint' page with correct options selected.":::
+
+1. Leave all the other values as their defaults on the **Add a route** page.
+
+1. Select **Save** at the bottom to create your new route. You should now see the route on your **Message routing** page.
+
+ :::image type="content" source="media/how-to-routing-portal/see-new-service-bus-route.jpg" alt-text="Screenshot that shows the new service bus queue route you created on the 'Message routing' page." lightbox="media/how-to-routing-portal/see-new-service-bus-route.jpg":::
+
+# [Service Bus topic](#tab/servicebustopic)
+
+If you need to create a Service Bus topic, see [Use the Azure portal to create a Service Bus topic and subscriptions to the topic](/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal).
+
+1. In the **Add a route** blade that appears, create a unique **Name**. Optionally, you might want to include the endpoint type in the name, such as **my-service-bus-topic-route**.
+
+1. For **Endpoint**, select the **+ Add endpoint** dropdown list and choose **Service bus topic**.
+
+1. A new page called **Add a service bus endpoint** appears. Enter a unique name in the **Endpoint name** field.
+
+1. Select the dropdown list for **Service bus namespace** and select your service bus.
+
+1. Select the dropdown list for **Service Bus Topic** and select your Service Bus topic.
+
+1. Leave all other values in their default states and choose **Create** at the bottom.
+
+ :::image type="content" source="media/how-to-routing-portal/add-service-bus-topic-endpoint.jpg" alt-text="Screenshot that shows the 'Add a service bus endpoint' page with correct options selected.":::
+
+1. Leave all the other values as their defaults on the **Add a route** page.
+
+1. Select **Save** at the bottom to create your new route. You should now see the route on your **Message routing** page.
+
+ :::image type="content" source="media/how-to-routing-portal/see-new-service-bus-topic-route.jpg" alt-text="Screenshot that shows your new Service Bus topic route on the 'Message routing' page." lightbox="media/how-to-routing-portal/see-new-service-bus-topic-route.jpg":::
+
+# [Azure Storage](#tab/azurestorage)
+
+If you need to create an Azure storage resource (with container), see [Create a storage account](/azure/iot-hub/tutorial-routing?tabs=portal#create-a-storage-account).
+
+1. In the **Add a route** blade that appears, create a unique **Name**. Optionally, you might want to include the endpoint type in the name, such as **my-storage-route**.
+
+1. For **Endpoint**, select the **+ Add endpoint** dropdown list and choose **Storage**.
+
+1. A new page called **Add a storage endpoint** appears. Enter a unique name in the **Endpoint name** field.
+
+1. Select the button for **Pick a container**. Select your storage account and then its container. You return to the **Add a storage endpoint** page.
+
+1. Leave all other values in their default states and choose **Create** at the bottom.
+
+ :::image type="content" source="media/how-to-routing-portal/add-storage-endpoint.jpg" alt-text="Screenshot that shows the 'Add a storage endpoint' page with correct options selected.":::
+
+1. Leave all the other values as their defaults on the **Add a route** page.
+
+1. Select **Save** at the bottom to create your new route. You should now see the route on your **Message routing** page.
+
+ :::image type="content" source="media/how-to-routing-portal/see-new-storage-route.jpg" alt-text="Screenshot that shows your new storage route on the 'Message routing' page." lightbox="media/how-to-routing-portal/see-new-storage-route.jpg":::
+
+# [Cosmos DB](#tab/cosmosdb)
+
+If you need to create a Cosmos DB resource, see [Create an Azure Cosmos DB account](/azure/cosmos-db/nosql/quickstart-portal#create-account).
+
+1. Go to your IoT hub resource on the **Message routing** page and select the **Custom endpoints** tab.
+
+1. Select **+ Add**, then choose **CosmosDB** in the dropdown menu.
+
+ :::image type="content" source="media/how-to-routing-portal/add-cosmosdb-endpoint.png" alt-text="Screenshot that shows location of the 'Add' button on the 'Message routing' page in the 'Custom endpoint' tab of the IoT Hub resource.":::
+
+1. Complete the fields in the form **Add a Cosmos DB endpoint**.
+
+    * **Endpoint name** - create a unique name.
+
+    * **Cosmos DB account**, **Database**, and **Collection** - choose your Cosmos DB account, database, and collection from the dropdown lists.
+
+    * **Partition key name** and **Partition key template** - these will fill in automatically, based on the previous selections, or you can change the partition template based on your business logic. For more information on partitioning, see [Partitioning and horizontal scaling in Azure Cosmos DB](/azure/cosmos-db/partitioning-overview).
+
+ :::image type="content" source="media/how-to-routing-portal/add-cosmosdb-endpoint-form.jpg" alt-text="Screenshot that shows details of the 'Add a Cosmos DB endpoint' form." lightbox="media/how-to-routing-portal/add-cosmosdb-endpoint-form.jpg":::
+
+ * Select **Save** at the bottom of the form. You should now see the route on your **Message routing** page.
+
+ :::image type="content" source="media/how-to-routing-portal/cosmosdb-confirm.jpg" alt-text="Screenshot that shows a new Cosmos DB route in the IoT Hub 'Message routing' page." lightbox="media/how-to-routing-portal/cosmosdb-confirm.jpg":::
+++
+## Update a route
+
+Updating a route in the Azure portal is as easy as selecting your route from the **Message routing** menu in your IoT hub and changing the properties.
+
+You can make changes to an existing route:
+
+* Select a different endpoint from the **Endpoint** dropdown list or create a new endpoint
+* Select a new source from the **Data source** dropdown list
+* Enable or disable your route in the **Enable route** section
+* Create or change queries in the **Routing query** section
++
+Select **Save** at the bottom after making changes.
+
+> [!NOTE]
+> While you can't modify an existing endpoint, you can create new ones for your IoT hub route and change the endpoint your route uses with the **Endpoint** dropdown list.
+
+## Delete a route
+
+To delete a route in the Azure portal:
+
+1. Check the box next to your route located in the **Message routing** menu.
+
+1. Select the delete button.
++
+## Delete a custom endpoint
+
+To delete a custom endpoint in the Azure portal:
+
+1. From the **Message routing** menu, select the **Custom endpoints** tab.
+
+1. Check the box next to the endpoint you want to delete.
+
+1. Select the **Delete** button.
++
+## Next steps
+
+In this how-to article, you learned how to create a route and endpoint for your Event Hubs, Service Bus queue or topic, and Azure Storage resources.
+
+To further your exploration into message routing, see [Tutorial: Send device data to Azure Storage using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=portal). In this tutorial, you'll create a storage route and test it with a device in your IoT hub.
iot-hub How To Routing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-powershell.md
+
+ Title: Message routing with IoT Hub - PowerShell | Microsoft Docs
+description: A how-to article that creates and deletes routes and endpoints in IoT Hub, using PowerShell.
++++ Last updated : 11/11/2022+++
+# Message routing with IoT Hub - PowerShell
+
+This article shows you how to create a route and endpoint in your IoT hub, then delete your route and endpoint. We use Azure PowerShell to create routes and endpoints to Event Hubs, Service Bus queue, Service Bus topic, and Azure Storage.
+
+To learn more about how routing works in IoT Hub, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](/azure/iot-hub/iot-hub-devguide-messages-d2c). To walk through setting up a route that sends messages to storage and testing on a simulated device, see [Tutorial: Send device data to Azure Storage using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=portal).
+
+## Prerequisites
+
+**Azure PowerShell**
+
+To use PowerShell locally, install the [Azure PowerShell module](/powershell/azure/install-az-ps). Alternatively, to use Azure PowerShell in a browser, enable [Azure Cloud Shell](/azure/cloud-shell/overview).
+
+**IoT Hub and an endpoint service**
+
+You need an IoT hub and at least one other service to serve as an endpoint to an IoT hub route. You can choose which Azure service (Event Hubs, Service Bus queue or topic, or Azure Storage) you'd like to use as the endpoint for your IoT hub route.
+
+* An IoT hub in your [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). If you don't have a hub yet, you can follow the steps in [Create an IoT hub using the New-AzIotHub cmdlet](/azure/iot-hub/iot-hub-create-using-powershell).
+
+* (Optional) An Event Hubs resource (namespace and entity). If you need to create a new Event Hubs resource, see [Quickstart: Create an event hub using Azure PowerShell](/azure/event-hubs/event-hubs-quickstart-powershell).
+
+* (Optional) A Service Bus queue resource. If you need to create a new Service Bus queue, see [Use Azure PowerShell to create a Service Bus namespace and a queue](/azure/service-bus-messaging/service-bus-quickstart-powershell).
+
+* (Optional) A Service Bus topic resource. If you need to create a new Service Bus topic, see the [New-AzServiceBusTopic](/powershell/module/az.servicebus/new-azservicebustopic) reference and the [Azure Service Bus Messaging](/azure/service-bus-messaging/) documentation.
+
+* (Optional) An Azure Storage resource. If you need to create a new Azure Storage, see [Create a storage account](/azure/storage/common/storage-account-create?tabs=azure-powershell).
+
+## Create, update, or remove endpoints and routes
+
+In IoT Hub, you can create a route to send messages or capture events. Each route has an endpoint, where the messages or events end up, and a data source, where the messages or events originate. You choose these locations when creating a new route in the IoT Hub. Routing queries are then used to filter messages or events before they go to the endpoint.
+
+You can use Event Hubs, a Service Bus queue or topic, or Azure Storage as the endpoint for your IoT hub route. The service must first exist in your Azure account.
+
+> [!NOTE]
+> If you're using a local version of PowerShell, [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps) before proceeding.
+>
+
+# [Event Hubs](#tab/eventhubs)
+
+References used in the following commands:
+* [Az.IotHub](/powershell/module/az.iothub/)
+* [Az.EventHub](/powershell/module/az.eventhub/)
+
+### Create an Event hub
+
+Let's create a new Event Hubs resource with an authorization rule.
+
+1. Create a new Event Hubs namespace. Use a unique **NamespaceName**.
+
+ ```powershell
+ New-AzEventHubNamespace -ResourceGroupName MyResourceGroup -NamespaceName MyNamespace -Location MyLocation
+ ```
+
+1. Create your new event hub entity. Use a unique **Name**. Use the same **NamespaceName** you created in the previous step.
+
+ ```powershell
+ New-AzEventHub -Name MyEventHub -NamespaceName MyNamespace -ResourceGroupName MyResourceGroup
+ ```
+
+1. Create a new authorization rule. Use the **Name** of your entity for **EventHubName**. Use a unique **Name** for your authorization rule.
+
+ ```powershell
+ New-AzEventHubAuthorizationRule -ResourceGroupName MyResourceGroup -NamespaceName MyNamespace -EventHubName MyEventHub -Name MyAuthRule -Rights @('Manage', 'Send', 'Listen')
+ ```
+
+ For more information about access, see [Authorize access to Azure Event Hubs](/azure/event-hubs/authorize-access-event-hubs).
+
+### Create an Event Hubs endpoint
+
+1. Retrieve the primary connection string from your Event hub. Copy the connection string for later use.
+
+ ```powershell
+ Get-AzEventHubKey -ResourceGroupName MyResourceGroup -NamespaceName MyNamespace -EventHubName MyEventHub -Name MyAuthRule
+ ```
+
+1. Create a new IoT hub endpoint to Event Hubs. Use your primary connection string from the previous step. The `EndpointType` must be `EventHub`; all other values should be your own.
+
+ ```powershell
+ Add-AzIotHubRoutingEndpoint -ResourceGroupName MyResourceGroup -Name MyIotHub -EndpointName MyEndpoint -EndpointType EventHub -EndpointResourceGroup MyResourceGroup -EndpointSubscriptionId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -ConnectionString "Endpoint=<my connection string>"
+ ```
+
+ To see all routing endpoint options, see [Add-AzIotHubRoutingEndpoint](/powershell/module/az.iothub/add-aziothubroutingendpoint).
+
+# [Service Bus queue](#tab/servicebusqueue)
+
+References used in the following commands:
+* [Az.IotHub](/powershell/module/az.iothub/)
+* [Az.ServiceBus](/powershell/module/az.servicebus/)
+
+### Create a Service Bus namespace and queue
+
+Let's create a new [Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) queue resource.
+
+1. Create a new Service Bus namespace. Use a unique `Name`.
+
+ ```powershell
+ New-AzServiceBusNamespace -ResourceGroupName MyResourceGroup -Name MyNamespace -Location MyRegion
+ ```
+
+1. Create a new Service Bus queue. Use the same `NamespaceName` you created in the previous step. Use a unique `Name` for your queue.
+
+ ```powershell
+ New-AzServiceBusQueue -ResourceGroupName MyResourceGroup -NamespaceName MyNamespace -Name MyQueue
+ ```
+
+### Create a Service Bus queue endpoint
+
+1. Retrieve the primary connection string from your Service Bus namespace. Copy the connection string for later use.
+
+ ```powershell
+ Get-AzServiceBusKey -ResourceGroupName MyResourceGroup -Namespace MyNamespace -Name RootManageSharedAccessKey
+ ```
+
+1. Create a new IoT hub endpoint to your Service Bus queue. Use your primary connection string from the previous step with `;EntityPath=MyQueue` added to the end (use the name of the queue you created). The `EndpointType` must be `ServiceBusQueue`; all other values should be your own. Use a unique name for your `EndpointName`. (A variant that builds the connection string in a variable is shown after these steps.)
+
+    ```powershell
+    Add-AzIotHubRoutingEndpoint -ResourceGroupName MyResourceGroup -Name MyIotHub -EndpointName MyEndpoint -EndpointType ServiceBusQueue -EndpointResourceGroup MyResourceGroup -EndpointSubscriptionId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -ConnectionString "Endpoint=<my connection string>;EntityPath=MyQueue"
+    ```
+
+ To see all routing endpoint options, see [Add-AzIotHubRoutingEndpoint](/powershell/module/az.iothub/add-aziothubroutingendpoint).
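+
+As referenced above, you can also build the queue connection string in a variable rather than pasting it. This is a sketch only (variable names are illustrative, and it assumes the `PrimaryConnectionString` property on the `Get-AzServiceBusKey` output):
+
+```powershell
+# Append the queue name as the entity path, then create the endpoint
+$sbKey = Get-AzServiceBusKey -ResourceGroupName MyResourceGroup -Namespace MyNamespace -Name RootManageSharedAccessKey
+$queueConnectionString = "$($sbKey.PrimaryConnectionString);EntityPath=MyQueue"
+Add-AzIotHubRoutingEndpoint -ResourceGroupName MyResourceGroup -Name MyIotHub -EndpointName MyEndpoint -EndpointType ServiceBusQueue -EndpointResourceGroup MyResourceGroup -EndpointSubscriptionId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -ConnectionString $queueConnectionString
+```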
+
+# [Service Bus topic](#tab/servicebustopic)
+
+References used in the following commands:
+* [Az.IotHub](/powershell/module/az.iothub/)
+* [Az.ServiceBus](/powershell/module/az.servicebus/)
+
+With [Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) topics, users can subscribe to one or more topics. Let's create a Service Bus namespace, topic, and then subscribe to the topic.
+
+### Create a Service Bus namespace, topic, and subscription
+
+1. Create a new Service Bus namespace. Use a unique `Name` for your namespace.
+
+ ```powershell
+ New-AzServiceBusNamespace -ResourceGroupName MyResourceGroup -Name MyNamespace -Location MyRegion
+ ```
+
+1. Create a new Service Bus topic. Use a unique `Name` for your topic.
+
+ ```powershell
+ New-AzServiceBusTopic -ResourceGroupName MyResourceGroup -NamespaceName MyNamespace -Name MyTopic
+ ```
+
+1. Create a new subscription to your topic. Use your topic from the previous step for `TopicName`. Use a unique `Name` for your subscription.
+
+ ```powershell
+ New-AzServiceBusSubscription -ResourceGroupName MyResourceGroup -NamespaceName MyNamespace -TopicName MyTopic -Name MySubscription
+ ```
+
+### Create a Service Bus topic endpoint
+
+1. Retrieve the primary connection string from your Service Bus namespace. Copy the connection string for later use.
+
+ ```powershell
+ Get-AzServiceBusKey -ResourceGroupName MyResourceGroup -Namespace MyNamespace -Name RootManageSharedAccessKey
+ ```
+
+1. Create a new IoT hub endpoint to your Service Bus topic. Use your primary connection string from the previous step with `;EntityPath=MyTopic` added to the end (use the name of the topic you created). The `EndpointType` must be `ServiceBusTopic`; all other values should be your own. Use a unique name for your `EndpointName`.
+
+    ```powershell
+    Add-AzIotHubRoutingEndpoint -ResourceGroupName MyResourceGroup -Name MyIotHub -EndpointName MyEndpoint -EndpointType ServiceBusTopic -EndpointResourceGroup MyResourceGroup -EndpointSubscriptionId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -ConnectionString "Endpoint=<my connection string>;EntityPath=MyTopic"
+    ```
+
+ To see all routing endpoint options, see [Add-AzIotHubRoutingEndpoint](/powershell/module/az.iothub/add-aziothubroutingendpoint).
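+
+After the endpoint is created, you can optionally list the routing endpoints on your IoT hub to confirm that it was added. For example:
+
+```powershell
+# List all routing endpoints configured on the IoT hub
+Get-AzIotHubRoutingEndpoint -ResourceGroupName MyResourceGroup -Name MyIotHub
+```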
+
+# [Azure Storage](#tab/azurestorage)
+
+References used in the following commands:
+* [Az.IotHub](/powershell/module/az.iothub/)
+* [Az.Storage](/powershell/module/az.storage/)
+
+To create an Azure Storage endpoint and route, you need an Azure Storage account and container.
+
+### Create an Azure Storage account and container
+
+1. Create a new Azure Storage account. Your storage account `Name` must contain only lowercase letters and numbers. For `SkuName` options, see [SkuName](/powershell/module/az.storage/new-azstorageaccount#-skuname).
+
+ ```powershell
+ New-AzStorageAccount -ResourceGroupName MyResourceGroup -Name mystorageaccount -Location westus -SkuName Standard_GRS
+ ```
+
+1. Create a new container in your storage account. You need to store a context for your storage account in a variable, then pass it to the `Context` parameter. For more options when creating a container, see [Manage blob containers using PowerShell](/azure/storage/blobs/blob-containers-powershell). Use a unique `Name` for the name of your container.
+
+    ```powershell
+    $ctx = New-AzStorageContext -StorageAccountName mystorageaccount -UseConnectedAccount
+    New-AzStorageContainer -Name ContainerName -Context $ctx
+    ```
+
+### Create an Azure Storage endpoint
+
+To create an endpoint to Azure Storage, you need your access key to construct a connection string. The connection string is then a part of the IoT Hub command to create an endpoint.
+
+1. Retrieve your access key from your storage account and copy it.
+
+ ```powershell
+ Get-AzStorageAccountKey -ResourceGroupName MyResourceGroup -Name mystorageaccount
+ ```
+
+1. Construct a primary connection string for your storage based on this template. Replace `mystorageaccount` (in two places) with the name of your storage account. Replace `mykey` with your key from the previous step.
+
+ ```powershell
+ "DefaultEndpointsProtocol=https;BlobEndpoint=https://mystorageaccount.blob.core.windows.net;AccountName=mystorageaccount;AccountKey=mykey"
+ ```
+
+ This connection string is needed for the next step, creating the endpoint.
+
+1. Create your Azure Storage endpoint with the connection string you constructed. The `EndpointType` must be `AzureStorageContainer`; all other values should be your own. Use a unique name for `EndpointName`. You'll be prompted for your container name after running this command. Type your container name as input and press **Enter** to complete the endpoint creation.
+
+    ```powershell
+    Add-AzIotHubRoutingEndpoint -ResourceGroupName MyResourceGroup -Name MyIotHub -EndpointName MyEndpoint -EndpointType AzureStorageContainer -EndpointResourceGroup MyResourceGroup -EndpointSubscriptionId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -ConnectionString "<my connection string>"
+    ```
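+
+Rather than pasting the key by hand in the previous steps, you can also build the connection string in a variable and pass it to `-ConnectionString`. This is a sketch with illustrative variable names:
+
+```powershell
+# Take the first key returned for the storage account and splice it into the connection string template
+$storageKey = (Get-AzStorageAccountKey -ResourceGroupName MyResourceGroup -Name mystorageaccount)[0].Value
+$storageConnectionString = "DefaultEndpointsProtocol=https;BlobEndpoint=https://mystorageaccount.blob.core.windows.net;AccountName=mystorageaccount;AccountKey=$storageKey"
+```
+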
+++
+## Create an IoT Hub route
+
+Finally, with your new endpoint in your IoT hub, you can create a new route.
+
+The default fallback route in IoT Hub collects messages from `DeviceMessages`, so let's choose another option for our custom route, such as `DeviceLifecycleEvents`. For more information on source options, see [Add-AzIotHubRoute](/powershell/module/az.iothub/add-aziothubroute#parameters). The `Enabled` parameter is a switch, so no value needs to follow it.
+
+```powershell
+Add-AzIotHubRoute -ResourceGroupName MyResourceGroup -Name MyIotHub -RouteName MyRoute -Source DeviceLifecycleEvents -EndpointName MyEndpoint -Enabled
+```
+
+You see a confirmation in your console:
+
+```powershell
+RouteName : MyRoute
+DataSource : DeviceLifecycleEvents
+EndpointNames : MyEndpoint
+Condition : true
+IsEnabled : True
+```
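+
+The `Condition` value of `true` means the route matches everything from its data source. If you want a route to filter messages, you can supply an IoT Hub routing query as the condition. The following is a sketch only (it assumes the `-Condition` parameter of `Add-AzIotHubRoute`, a hypothetical route named `MyFilteredRoute`, and devices that send UTF-8 encoded JSON telemetry containing a numeric `temperature` value):
+
+```powershell
+# Route only device-to-cloud messages whose body reports a temperature above 50
+Add-AzIotHubRoute -ResourceGroupName MyResourceGroup -Name MyIotHub -RouteName MyFilteredRoute -Source DeviceMessages -EndpointName MyEndpoint -Condition '$body.temperature > 50' -Enabled
+```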
+
+## Update your IoT hub route
+
+To make changes to an existing route, use the `Set-AzIotHubRoute` command and add the parameters that you want to change, such as the data source or endpoint of your route.
+
+```powershell
+Set-AzIotHubRoute -ResourceGroupName MyResourceGroup -Name MyIotHub -RouteName MyRoute
+```
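+
+For example, a sketch that points the existing route at a different data source (assuming the `-Source` parameter of `Set-AzIotHubRoute`):
+
+```powershell
+# Switch the route to collect device connection state events instead
+Set-AzIotHubRoute -ResourceGroupName MyResourceGroup -Name MyIotHub -RouteName MyRoute -Source DeviceConnectionStateEvents
+```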
+
+Use the `Get-AzIotHubRoute` command to confirm the change in your route.
+
+```powershell
+Get-AzIotHubRoute -ResourceGroupName MyResourceGroup -Name MyIotHub
+```
+
+## Delete your endpoint
+
+```powershell
+Remove-AzIotHubRoutingEndpoint -ResourceGroupName MyResourceGroup -Name MyIotHub -EndpointName MyEndpoint -PassThru
+```
+
+## Delete your IoT hub route
+
+```powershell
+Remove-AzIotHubRoute -ResourceGroupName MyResourceGroup -Name MyIotHub -RouteName MyRoute -PassThru
+```
+
+> [!TIP]
+> Deleting a route won't delete endpoints in your Azure account. The endpoints must be deleted separately.
+
+## Next steps
+
+In this how-to article, you learned how to create a route and endpoint for your Event Hubs, Service Bus queue or topic, and Azure Storage resources.
+
+To further your exploration into message routing, see [Tutorial: Send device data to Azure Storage using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=portal). In this tutorial, you'll create a storage route and test it with a device in your IoT hub.
iot-hub Iot Concepts And Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-concepts-and-iot-hub.md
You can integrate IoT Hub with other Azure services to build complete, end-to-en
- [Azure Logic Apps](../logic-apps/index.yml) to automate business processes. -- [Azure Machine Learning](iot-hub-weather-forecast-machine-learning.md) to add machine learning and AI models to your solution.
+- [Azure Machine Learning](../machine-learning/index.yml) to add machine learning and AI models to your solution.
- [Azure Stream Analytics](../stream-analytics/index.yml) to run real-time analytic computations on the data streaming from your devices.
iot-hub Iot Hub Weather Forecast Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-weather-forecast-machine-learning.md
- Title: Weather forecast using Machine Learning Studio (classic) with IoT Hub data
-description: Use ML Studio (classic) to predict the chance of rain based on the temperature and humidity data your IoT hub collects from a sensor.
--
-keywords: weather forecast machine learning
--- Previously updated : 10/26/2021----
-# Weather forecast using the sensor data from your IoT hub in Machine Learning Studio (classic)
-
-![End-to-end diagram](media/iot-hub-get-started-e2e-diagram/6.png)
--
-Machine learning is a technique of data science that helps computers learn from existing data to forecast future behaviors, outcomes, and trends. ML Studio (classic) is a cloud predictive analytics service that makes it possible to quickly create and deploy predictive models as analytics solutions. In this article, you learn how to use ML Studio (classic) to do weather forecasting (chance of rain) using the temperature and humidity data from your Azure IoT hub. The chance of rain is the output of a prepared weather prediction model. The model is built upon historic data to forecast chance of rain based on temperature and humidity.
--
-## Prerequisites
--- Complete the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) tutorial or one of the device tutorials. For example, you can go to [Raspberry Pi with Node.js](iot-hub-raspberry-pi-kit-node-get-started.md) or to one of the [Send telemetry](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) quickstarts. These articles cover the following requirements:
- - An active Azure subscription.
- - An Azure IoT hub under your subscription.
- - A client application that sends messages to your Azure IoT hub.
-- An [ML Studio (classic)](https://studio.azureml.net/) account.-- An [Azure Storage account](../storage/common/storage-account-overview.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#types-of-storage-accounts), A **General-purpose v2** account is preferred, but any Azure Storage account that supports Azure Blob storage will also work.-
-> [!Note]
-> This article uses Azure Stream Analytics and several other paid services. Extra charges are incurred in Azure Stream Analytics when data must be transferred across Azure regions. For this reason, it would be good to ensure that your Resource Group, IoT Hub, and Azure Storage account -- as well as the Machine Learning Studio (classic) workspace and Azure Stream Analytics Job added later in this tutorial -- are all located in the same Azure region. You can check regional support for ML Studio (classic) and other Azure services on the [Azure product availability by region page](https://azure.microsoft.com/global-infrastructure/services/?products=machine-learning-studio&regions=all).
-
-## Deploy the weather prediction model as a web service
-
-In this section you get the weather prediction model from the Azure AI Library. Then you add an R-script module to the model to clean the temperature and humidity data. Lastly, you deploy the model as a predictive web service.
-
-### Get the weather prediction model
-
-In this section you get the weather prediction model from the Azure AI Gallery and open it in ML Studio (classic).
-
-1. Go to the [weather prediction model page](https://gallery.cortanaintelligence.com/Experiment/Weather-prediction-model-1).
-
- ![Open the weather prediction model page in Azure AI Gallery](media/iot-hub-weather-forecast-machine-learning/weather-prediction-model-in-azure-ai-gallery.png)
-
-1. Select **Open in Studio (classic)** to open the model in Microsoft ML Studio (classic). Select a region near your IoT hub and the correct workspace in the **Copy experiment from Gallery** pop-up.
-
- ![Open the weather prediction model in ML Studio (classic)](media/iot-hub-weather-forecast-machine-learning/open-ml-studio.png)
-
-### Add an R-script module to clean temperature and humidity data
-
-For the model to behave correctly, the temperature and humidity data must be convertible to numeric data. In this section, you add an R-script module to the weather prediction model that removes any rows that have data values for temperature or humidity that cannot be converted to numeric values.
-
-1. On the left-side of the ML Studio (classic) window, select the arrow to expand the tools panel. Enter "Execute" into the search box. Select the **Execute R Script** module.
-
- ![Select Execute R Script module](media/iot-hub-weather-forecast-machine-learning/select-r-script-module.png)
-
-1. Drag the **Execute R Script** module near the **Clean Missing Data** module and the existing **Execute R Script** module on the diagram. Delete the connection between the **Clean Missing Data** and the **Execute R Script** modules and then connect the inputs and outputs of the new module as shown.
-
- ![Add Execute R Script module](media/iot-hub-weather-forecast-machine-learning/add-r-script-module.png)
-
-1. Select the new **Execute R Script** module to open its properties window. Copy and paste the following code into the **R Script** box.
-
- ```r
- # Map 1-based optional input ports to variables
- data <- maml.mapInputPort(1) # class: data.frame
-
- data$temperature <- as.numeric(as.character(data$temperature))
- data$humidity <- as.numeric(as.character(data$humidity))
-
- completedata <- data[complete.cases(data), ]
-
- maml.mapOutputPort('completedata')
-
- ```
-
- When you're finished, the properties window should look similar to the following:
-
- ![Add code to Execute R Script module](media/iot-hub-weather-forecast-machine-learning/add-code-to-module.png)
-
-### Deploy predictive web service
-
-In this section, you validate the model, set up a predictive web service based on the model, and then deploy the web service.
-
-1. Select **Run** to validate the steps in the model. This step might take a few minutes to complete.
-
- ![Run the experiment to validate the steps](media/iot-hub-weather-forecast-machine-learning/run-experiment.png)
-
-1. Select **SET UP WEB SERVICE** > **Predictive Web Service**. The predictive experiment diagram opens.
-
- ![Deploy the weather prediction model in ML Studio (classic)](media/iot-hub-weather-forecast-machine-learning/predictive-experiment.png)
-
-1. In the predictive experiment diagram, delete the connection between the **Web service input** module and the **Select Columns in Dataset** at the top. Then drag the **Web service input** module somewhere near the **Score Model** module and connect it as shown:
-
- ![Connect two modules in ML Studio (classic)](media/iot-hub-weather-forecast-machine-learning/connect-modules-azure-machine-learning-studio.png)
-
-1. Select **RUN** to validate the steps in the model.
-
-1. Select **DEPLOY WEB SERVICE** to deploy the model as a web service.
-
-1. On the dashboard of the model, download the **Excel 2010 or earlier workbook** for **REQUEST/RESPONSE**.
-
- > [!Note]
- > Make sure that you download the **Excel 2010 or earlier workbook** even if you are running a later version of Excel on your computer.
-
- ![Download the Excel for the REQUEST RESPONSE endpoint](media/iot-hub-weather-forecast-machine-learning/download-workbook.png)
-
-1. Open the Excel workbook, make a note of the **WEB SERVICE URL** and **ACCESS KEY**.
--
-## Create, configure, and run a Stream Analytics job
-
-### Create a Stream Analytics job
-
-1. In the [Azure portal](https://portal.azure.com/), select **Create a resource**. Type "stream analytics job" in the Search box, and select **Stream Analytics job** from the results dropdown. When the **Stream Analytics job** pane opens, select **Create**.
-1. Enter the following information for the job.
-
- **Job name**: The name of the job. The name must be globally unique.
-
- **Subscription**: Select your subscription if it is different than the default.
-
- **Resource group**: Use the same resource group that your IoT hub uses.
-
- **Location**: Use the same location as your resource group.
-
- Leave all other fields at their default.
-
- ![Create a Stream Analytics job in Azure](media/iot-hub-weather-forecast-machine-learning/create-stream-analytics-job.png)
-
-1. Select **Create**.
-
-### Add an input to the Stream Analytics job
-
-1. Open the Stream Analytics job.
-1. Under **Job topology**, select **Inputs**.
-1. In the **Inputs** pane, select **Add stream input**, and then select **IoT Hub** from the dropdown. On the **New input** pane, choose the **Select IoT Hub from your subscriptions** and enter the following information:
-
- **Input alias**: The unique alias for the input.
-
- **Subscription**: Select your subscription if it is different than the default.
-
- **IoT Hub**: Select the IoT hub from your subscription.
-
- **Shared access policy name**: Select **service**. (You can also use **iothubowner**.)
-
- **Consumer group**: Select the consumer group you created.
-
- Leave all other fields at their default.
-
- ![Add an input to the Stream Analytics job in Azure](media/iot-hub-weather-forecast-machine-learning/add-input-stream-analytics-job.png)
-
-1. Select **Save**.
-
-### Add an output to the Stream Analytics job
-
-1. Under **Job topology**, select **Outputs**.
-1. In the **Outputs** pane, select **Add**, and then select **Blob storage/Data Lake Storage** from the dropdown. On the **New output** pane, choose the **Select storage from your subscriptions** and enter the following information:
-
- **Output alias**: The unique alias for the output.
-
- **Subscription**: Select your subscription if it is different than the default.
-
- **Storage account**: The storage account for your blob storage. You can create a storage account or use an existing one.
-
- **Container**: The container where the blob is saved. You can create a container or use an existing one.
-
- **Event serialization format**: Select **CSV**.
-
- ![Add an output to the Stream Analytics job in Azure](media/iot-hub-weather-forecast-machine-learning/add-output-stream-analytics-job.png)
-
-1. Select **Save**.
-
-### Add a function to the Stream Analytics job to call the web service you deployed
-
-1. Under **Job Topology**, select **Functions**.
-1. In the **Functions** pane, select **Add**, and then select **Azure ML Studio** from the dropdown. (Make sure you select **Azure ML Studio**, not **Azure ML Service**.) On the **New function** pane, choose the **Provide Azure Machine Learning function settings manually** and enter the following information:
-
- **Function Alias**: Enter `machinelearning`.
-
- **URL**: Enter the WEB SERVICE URL that you noted down from the Excel workbook.
-
- **Key**: Enter the ACCESS KEY that you noted down from the Excel workbook.
-
- ![Add a function to the Stream Analytics job in Azure](media/iot-hub-weather-forecast-machine-learning/add-function-stream-analytics-job.png)
-
-1. Select **Save**.
-
-### Configure the query of the Stream Analytics job
-
-1. Under **Job topology**, select **Query**.
-1. Replace the existing code with the following code:
-
- ```sql
- WITH machinelearning AS (
- SELECT EventEnqueuedUtcTime, temperature, humidity, machinelearning(temperature, humidity) as result from [YourInputAlias]
- )
- Select System.Timestamp time, CAST (result.[temperature] AS FLOAT) AS temperature, CAST (result.[humidity] AS FLOAT) AS humidity, CAST (result.[scored probabilities] AS FLOAT ) AS 'probabalities of rain'
- Into [YourOutputAlias]
- From machinelearning
- ```
-
- Replace `[YourInputAlias]` with the input alias of the job.
-
- Replace `[YourOutputAlias]` with the output alias of the job.
-
-1. Select **Save query**.
-
-> [!Note]
-> If you select **Test query**, you'll be presented with the following message: Query testing with Machine Learning functions is not supported. Please modify the query and try again. You can safely ignore this message and select **OK** to close the message box. Make sure you save the query before proceeding to the next section.
-
-### Run the Stream Analytics job
-
-In the Stream Analytics job, select **Overview** on the left pane. Then select **Start** > **Now** > **Start**. Once the job successfully starts, the job status changes from **Stopped** to **Running**.
-
-![Run the Stream Analytics job](media/iot-hub-weather-forecast-machine-learning/run-stream-analytics-job.png)
-
-## Use Microsoft Azure Storage Explorer to view the weather forecast
-
-Run the client application to start collecting and sending temperature and humidity data to your IoT hub. For each message that your IoT hub receives, the Stream Analytics job calls the weather forecast web service to produce the chance of rain. The result is then saved to your Azure blob storage. Azure Storage Explorer is a tool that you can use to view the result.
-
-1. [Download and install Microsoft Azure Storage Explorer](https://storageexplorer.com/).
-1. Open Azure Storage Explorer.
-1. Sign in to your Azure account.
-1. Select your subscription.
-1. Select your subscription > **Storage Accounts** > your storage account > **Blob Containers** > your container.
-1. Download a .csv file to see the result. The last column records the chance of rain.
-
- ![Get weather forecast result with ML Studio (classic)](media/iot-hub-weather-forecast-machine-learning/weather-forecast-result.png)
-
-## Summary
-
-YouΓÇÖve successfully used ML Studio (classic) to produce the chance of rain based on the temperature and humidity data that your IoT hub receives.
-
iot-hub Tutorial Manual Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-manual-failover.md
Title: Tutorial - Manual failover of an Azure IoT hub | Microsoft Docs
-description: Tutorial - Learn how to perform a manual failover of your IoT hub to a different region and confirm it's working, and then return it to the original region and check it again.
+ Title: Tutorial - Manually failover an Azure IoT hub
+description: Learn how to perform a manual failover of your IoT hub to a different region and then return it to the original region.
-+ Previously updated : 08/10/2021 Last updated : 11/17/2022 #Customer intent: As an IT Pro, I want to be able to perform a manual failover of my IoT hub to a different region, and then return it to the original region.
# Tutorial: Perform manual failover for an IoT hub
-Manual failover is a feature of the IoT Hub service that allows customers to [failover](https://en.wikipedia.org/wiki/Failover) their hub's operations from a primary region to the corresponding Azure geo-paired region. Manual failover can be done in the event of a regional disaster or an extended service outage. You can also perform a planned failover to test your disaster recovery capabilities, although we recommend using a test IoT hub rather than one running in production. The manual failover feature is offered to customers at no additional cost for IoT hubs created after May 18, 2017.
+Manual failover is a feature of the IoT Hub service that allows customers to [failover](https://en.wikipedia.org/wiki/Failover) their hub's operations from a primary region to the corresponding [Azure geo-paired region](../reliability/cross-region-replication-azure.md). Manual failover can be done in the event of a regional disaster or an extended service outage. You can also perform a planned failover to test your disaster recovery capabilities, although we recommend using a test IoT hub rather than one running in production. The manual failover feature is offered to customers at no additional cost for IoT hubs created after May 18, 2017.
In this tutorial, you perform the following tasks: > [!div class="checklist"]
-> * Using the Azure portal, create an IoT hub.
-> * Perform a failover.
+>
+> * Using the Azure portal, create an IoT hub.
+> * Perform a failover.
> * See the hub running in the secondary location. > * Perform a failback to return the IoT hub's operations to the primary location. > * Confirm the hub is running correctly in the right location.
For more information about manual failover and Microsoft-initiated failover with
* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
- ## Create an IoT hub [!INCLUDE [iot-hub-include-create-hub](../../includes/iot-hub-include-create-hub.md)]
For more information about manual failover and Microsoft-initiated failover with
> [!NOTE] > There is a limit of two failovers and two failbacks per day for an IoT hub.
-1. Click **Resource groups** and then select your resource group. Click on your hub in the list of resources.
-
-1. Under **Hub settings** on the IoT Hub pane, click **Failover**.
-
+1. Navigate to your IoT hub in the Azure portal.
-1. On the Manual failover pane, you see the **Current location** and the **Failover location**. The current location always indicates the location in which the hub is currently active. The failover location is the standard [Azure geo-paired region](../availability-zones/cross-region-replication-azure.md) that is paired to the current location. You cannot change the location values. For this tutorial, the current location is `West US 2` and the failover location is `West Central US`.
+1. Under **Hub settings** on the navigation menu, select **Failover**.
- ![Screenshot showing Manual Failover pane](./media/tutorial-manual-failover/trigger-failover-02.png)
+ :::image type="content" source="./media/tutorial-manual-failover/trigger-failover-01.png" alt-text="Screenshot showing IoT Hub properties pane.":::
-1. At the top of the Manual failover pane, click **Start failover**.
-
-1. In the confirmation pane, fill in the name of your IoT hub to confirm it's the one you want to failover. Then, to initiate the failover, click **Failover**.
-
- The amount of time it takes to perform the manual failover is proportional to the number of devices that are registered for your hub. For example, if you have 100,000 devices, it might take 15 minutes, but if you have five million devices, it might take an hour or longer.
+1. On the **Failover** pane, you see the **Current location** and the **Failover location** listed for your IoT hub. The current location always indicates the location in which the hub is currently active. The failover location is the standard [Azure geo-paired region](../availability-zones/cross-region-replication-azure.md) that is paired to the current location. You cannot change the location values.
- ![Screenshot showing Manual Failover confirmation pane](./media/tutorial-manual-failover/trigger-failover-03-confirm.png)
+1. At the top of the **Failover** pane, select **Start failover**.
- While the manual failover process is running, a banner appears to tell you a manual failover is in progress.
+ :::image type="content" source="./media/tutorial-manual-failover/trigger-failover-02.png" alt-text="Screenshot showing Manual Failover pane.":::
- ![Screenshot showing Manual Failover in progress](./media/tutorial-manual-failover/trigger-failover-04-in-progress.png)
+1. In the confirmation pane, fill in the name of your IoT hub to confirm it's the one you want to fail over. Then, to initiate the failover, select **Failover**.
- If you close the IoT Hub pane and open it again by clicking it on the Resource Group pane, you see a banner that tells you the hub is in the middle of a manual failover.
+ :::image type="content" source="./media/tutorial-manual-failover/trigger-failover-03-confirm.png" alt-text="Screenshot showing Manual Failover confirmation pane.":::
- ![Screenshot showing IoT Hub failover in progress](./media/tutorial-manual-failover/trigger-failover-05-hub-inactive.png)
+ The amount of time it takes to perform the manual failover is proportional to the number of devices that are registered for your hub. For example, if you have 100,000 devices, it might take 15 minutes, but if you have five million devices, it might take an hour or longer.
- After it's finished, the current and failover regions on the Manual Failover page are flipped and the hub is active again. In this example, the current location is now `WestCentralUS` and the failover location is now `West US 2`.
+ While the manual failover process is running, a banner appears to tell you a manual failover is in progress.
- ![Screenshot showing failover is complete](./media/tutorial-manual-failover/trigger-failover-06-finished.png)
+ If you select **Overview** to view the IoT hub details, you see a banner telling you that the hub is in the middle of a manual failover.
- The overview page also shows a banner indicating that the failover is complete and the IoT Hub is running in `West Central US`.
+ After it's finished, the current and failover regions on the Manual Failover page are flipped and the hub is active again. In this example, the current location is now `WestCentralUS` and the failover location is now `West US 2`.
- ![Screenshot showing failover is complete in overview page](./media/tutorial-manual-failover/trigger-failover-06-finished-overview.png)
+ :::image type="content" source="./media/tutorial-manual-failover/trigger-failover-06-finished.png" alt-text="Screenshot showing failover is complete.":::
+ The overview page also shows a banner indicating that the failover is complete and the IoT Hub is running in the paired region.
-## Perform a failback
+## Perform a failback
-After you have performed a manual failover, you can switch the hub's operations back to the original primary region -- this is called a failback. If you have just performed a failover, you have to wait about an hour before you can request a failback. If you try to perform the failback in a shorter amount of time, an error message is displayed.
+After you have performed a manual failover, you can switch the hub's operations back to the original primary region. This action is called a *failback*. If you have just performed a failover, you have to wait about an hour before you can request a failback. If you try to perform the failback in a shorter amount of time, an error message is displayed.
-A failback is performed just like a manual failover. These are the steps:
+A failback is performed just like a manual failover. These are the steps:
-1. To perform a failback, return to the Iot Hub pane for your Iot hub.
+1. To perform a failback, return to the **Failover** pane for your IoT hub.
-2. Under **Settings** on the IoT Hub pane, click **Failover**.
+2. Select **Start failover** at the top of the **Failover** pane.
-3. At the top of the Manual failover pane, click **Start failover**.
+3. In the confirmation pane, fill in the name of your IoT hub to confirm it's the one you want to fail back. Then, to initiate the failback, select **Failover**.
-4. In the confirmation pane, fill in the name of your IoT hub to confirm it's the one you want to failback. To then initiate the failback, click **OK**.
+ :::image type="content" source="./media/tutorial-manual-failover/trigger-failover-03-confirm.png" alt-text="Screenshot showing Manual Failover confirmation pane.":::
- ![Screenshot of manual failback request](./media/tutorial-manual-failover/trigger-failover-03-confirm.png)
+ After the failback is complete, your IoT hub again shows the original region as the current location and the paired region as the failover location, as you saw originally.
- The banners are displayed as explained in the perform a failover section. After the failback is complete, it again shows `West US 2` as the current location and `West Central US` as the failover location, as set originally.
+## Clean up resources
-## Clean up resources
+To remove the resources you've created for this tutorial, delete the resource group. This action deletes all resources contained within the group. In this case, it removes the IoT hub and the resource group itself.
-To remove the resources you've created for this tutorial, delete the resource group. This action deletes all resources contained within the group. In this case, it removes the IoT hub and the resource group itself.
+1. Click **Resource Groups**.
-1. Click **Resource Groups**.
+2. Locate and select the resource group that contains your IoT hub.
-2. Locate and select the resource group **ManlFailRG**. Click on it to open it.
+3. If you want to delete the entire group and all the resources in it, select **Delete resource group**. When prompted, enter the name of the resource group and select **Delete** to confirm the action.
-3. Click **Delete resource group**. When prompted, enter the name of the resource group and click **Delete** to confirm.
+ If you only want to delete specific resources from the group, check the boxes next to each resource you want to delete then select **Delete**. When prompted, type **yes** and select **Delete** to confirm the action.
## Next steps
-In this tutorial, you learned how to configure and perform a manual failover, and how to request a failback by performing the following tasks:
-
-> [!div class="checklist"]
-> * Using the Azure portal, create an IoT hub.
-> * Perform a failover.
-> * See the hub running in the secondary location.
-> * Perform a failback to return the IoT hub's operations to the primary location.
-> * Confirm the hub is running correctly in the right location.
+In this tutorial, you learned how to configure and perform a manual failover, and how to initiate a failback.
-Advance to the next tutorial to learn how to configure your device from a back-end service.
+Advance to the next tutorial to learn how to configure your device from a back-end service.
> [!div class="nextstepaction"] > [Configure your devices](tutorial-device-twins.md)
key-vault Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/azure-policy.md
If the compliance results show up as "Not Started" it may be due to the followin
> [!NOTE] > Azure Policy
-> [Resouce Provider modes](../../governance/policy/concepts/definition-structure.md#resource-provider-modes),
+> [Resource Provider modes](../../governance/policy/concepts/definition-structure.md#resource-provider-modes),
> such as those for Azure Key Vault, provide information about compliance on the > [Component Compliance](../../governance/policy/how-to/get-compliance-data.md#component-compliance) > page.
load-testing How To Appservice Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-appservice-insights.md
Azure Load Testing Preview collects detailed resource metrics across your Azure app components to help identify performance bottlenecks. In this article, you learn how to use App Service Diagnostics to get additional insights when load testing Azure App Service workloads.
-[App Service diagnostics](/azure/app-service/overview-diagnostics.md) is an intelligent and interactive way to help troubleshoot your app, with no configuration required. When you run into issues with your app, App Service diagnostics can help you resolve the issue easily and quickly.
+[App Service diagnostics](/azure/app-service/overview-diagnostics) is an intelligent and interactive way to help troubleshoot your app, with no configuration required. When you run into issues with your app, App Service diagnostics can help you resolve the issue easily and quickly.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
logic-apps Set Up Zone Redundancy Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-zone-redundancy-availability-zones.md
This article provides a brief overview, considerations, and information about ho
### [Standard](#tab/standard)
-Availability zone redundancy is available for Standard logic apps, which are powered by Azure Functions extensibility. For more information, review [Azure Functions support for availability zone redundancy](../azure-functions/azure-functions-az-redundancy.md#overview).
+Availability zone support is available for Standard logic apps, which are powered by Azure Functions extensibility. For more information, see [What is reliability in Azure Functions?](../reliability/reliability-functions.md#availability-zone-support).
* You can enable availability zone redundancy *only when you create* Standard logic apps, either in a [supported Azure region](../azure-functions/azure-functions-az-redundancy.md#requirements) or in an [App Service Environment v3 (ASE v3) - Windows plans only](../app-service/environment/overview-zone-redundancy.md). Currently, this capability supports only built-in connector operations, not Azure (managed) connector operations.
machine-learning Concept Train Model Git Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-model-git-integration.md
Last updated 04/05/2022-+ # Git integration for Azure Machine Learning
Azure Machine Learning provides a shared file system for all users in the worksp
To clone a Git repository into this file share, we recommend that you create a compute instance & [open a terminal](how-to-access-terminal.md). Once the terminal is opened, you have access to a full Git client and can clone and work with Git via the Git CLI experience.
-We recommend that you clone the repository into your users directory so that others will not make collisions directly on your working branch.
+We recommend that you clone the repository into your user directory so that others will not make collisions directly on your working branch.
> [!TIP] > There is a performance difference between cloning to the local file system of the compute instance and cloning to the mounted filesystem (mounted as the `~/cloudfiles/code` directory). In general, cloning to the local filesystem will have better performance than cloning to the mounted filesystem. However, the local filesystem is lost if you delete and recreate the compute instance. The mounted filesystem is kept if you delete and recreate the compute instance.
The logged information contains text similar to the following JSON:
### Python SDK
-After submitting a training run, a [Run](/python/api/azureml-core/azureml.core.run%28class%29) object is returned. The `properties` attribute of this object contains the logged git information. For example, the following code retrieves the commit hash:
+After submitting a training run, a [Job](/python/api/azure-ai-ml/azure.ai.ml.entities.job) object is returned. The `properties` attribute of this object contains the logged git information. For example, the following code retrieves the commit hash:
```python
-run.properties['azureml.git.commit']
+job.properties["azureml.git.commit"]
``` ## Next steps
-* [Use compute targets for model training](v1/how-to-set-up-training-targets.md)
+* [Access a compute instance terminal in your workspace](how-to-access-terminal.md)
machine-learning Linux Dsvm Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/linux-dsvm-walkthrough.md
Before you can use a Linux DSVM, you must have the following prerequisites:
* **Azure subscription**. To get an Azure subscription, see [Create your free Azure account today](https://azure.microsoft.com/free/).
-* [**Ubuntu Data Science Virtual Machine**](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804). For information about provisioning the virtual machine, see [Provision the Ubuntu Data Science Virtual Machine](./release-notes.md).
+* [**Ubuntu Data Science Virtual Machine**](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004). For information about provisioning the virtual machine, see [Provision the Ubuntu Data Science Virtual Machine](./release-notes.md).
* [**X2Go**](https://wiki.x2go.org/doku.php) installed on your computer with an open XFCE session. For more information, see [Install and configure the X2Go client](dsvm-ubuntu-intro.md#x2go). ## Download the spambase dataset
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Main Changes:
- General OS level updates. ## July 11, 2022
-[Data Science VM ΓÇô Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=overview) and [Data Science VM ΓÇô Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
+[Data Science VM ΓÇô Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/tidalmediainc.ubuntu-18-04?tab=Overview) and [Data Science VM ΓÇô Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
Version `22.07.08`
Main changes:
Version `22.06.10`
-[Data Science VM ΓÇô Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=overview)
+[Data Science VM ΓÇô Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/tidalmediainc.ubuntu-18-04?tab=Overview)
Version `22.06.13`
Main changes:
- Upgraded `log4j(v2)` to version `2.17.2` ## April 29, 2022
-[Data Science VM ΓÇô Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=overview) and [Data Science VM ΓÇô Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
+[Data Science VM ΓÇô Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/tidalmediainc.ubuntu-18-04?tab=Overview) and [Data Science VM ΓÇô Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
Version `22.04.27`
New DSVM offering for [Data Science VM ΓÇô Ubuntu 20.04](https://azuremarketplac
Version: `22.04.05` ## April 04, 2022
-New Image for [Data Science VM ΓÇô Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=overview)
+New Image for [Data Science VM ΓÇô Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/tidalmediainc.ubuntu-18-04?tab=Overview)
Version: `22.04.01`
Main changes:
## November 4, 2021
-New image for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=Overview).
+New image for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/tidalmediainc.ubuntu-18-04?tab=Overview).
Version: `21.11.04`
Main changes:
## October 7, 2021
-New image for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=Overview).
+New image for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/tidalmediainc.ubuntu-18-04?tab=Overview).
Version: `21.10.07`
Main changes:
## July 12, 2021
-New image for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=Overview).
+New image for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/tidalmediainc.ubuntu-18-04?tab=Overview).
Main changes:
Main changes:
## June 1, 2021
-New image for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=Overview).
+New image for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/tidalmediainc.ubuntu-18-04?tab=Overview).
Version: `21.06.01`
Dark mode, changed icons on desktop, wallpaper background change.
## May 12, 2021
-New image for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=Overview).
+New image for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/tidalmediainc.ubuntu-18-04?tab=Overview).
Selected version updates are: - CUDA 11.3, cuDNN 8, NCCL2
sudo systemctl start docker
## February 24, 2020
-Data Science Virtual Machine images for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=Overview) and [Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview) images are now available.
+Data Science Virtual Machine images for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/tidalmediainc.ubuntu-18-04?tab=Overview) and [Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview) images are now available.
## October 8, 2019
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Access the custom applications that you set up in studio:
> [!NOTE] > It might take a few minutes after setting up a custom application until you can access it via the links above. The amount of time taken will depend on the size of the image used for your custom application. If you see a 502 error message when trying to access the application, wait for some time for the application to be set up and try again.
+Once you launch **RStudio**, you may not see any of your files, even after specifying the correct **Bind mounts** above. If this happens:
+
+1. Select the **...** at the far right of the Files pane
+1. For the **Path to folder**, type `/home/azureuser/cloudfiles/code`
++ ## Manage Start, stop, restart, and delete a compute instance. A compute instance doesn't automatically scale down, so make sure to stop the resource to prevent ongoing charges. Stopping a compute instance deallocates it. Then start it again when you need it. While stopping the compute instance stops the billing for compute hours, you'll still be billed for disk, public IP, and standard load balancer.
To create a compute instance, you'll need permissions for the following actions:
Once a compute instance is deployed, it does not get automatically updated. Microsoft [releases](azure-machine-learning-ci-image-release-notes.md) new VM images on a monthly basis. To understand options for keeping recent with the latest version, see [vulnerability management](concept-vulnerability-management.md#compute-instance).
-To keep track of whether a compute instance's operating system version is current, you could query an instance's version using the Studio UI, CLI and SDK.
-
-# [Python SDK](#tab/python)
--
-```python
-from azure.ai.ml.entities import ComputeInstance, AmlCompute
-
-# Display operating system version
-instance = ml_client.compute.get("myci")
-print instance.os_image_metadata
-```
-
-For more information on the classes, methods, and parameters used in this example, see the following reference documents:
-
-* [`AmlCompute` class](/python/api/azure-ai-ml/azure.ai.ml.entities.amlcompute)
-* [`ComputeInstance` class](/python/api/azure-ai-ml/azure.ai.ml.entities.computeinstance)
-
-# [Azure CLI](#tab/azure-cli)
--
-```azurecli
-az ml compute show --name "myci"
-```
-
-# [Studio](#tab/azure-studio)
-
-In your workspace in Azure Machine Learning studio, select Compute, then select compute instance on the top. Select a compute instance's compute name to see its properties including the current operating system. When a more recent instance OS version is, use the creation wizard to create a new instance. Enable 'audit and observe compute instance os version' under the previews management panel to see these preview properties.
--
+To keep track of whether an instance's operating system version is current, you could query its version using the Studio UI. In your workspace in Azure Machine Learning studio, select **Compute**, then select **Compute instances** at the top. Select a compute instance's compute name to see its properties, including the current operating system. Enable 'audit and observe compute instance os version' under the previews management panel to see these preview properties.
Administrators can use [Azure Policy](./../governance/policy/overview.md) definitions to audit instances that are running on outdated operating system versions across workspaces and subscriptions. The following is a sample policy:
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-custom-image.md
fastai_env.docker.base_dockerfile = "./Dockerfile"
> Azure Machine Learning only supports Docker images that provide the following software: > * Ubuntu 18.04 or greater. > * Conda 4.7.# or greater.
-> * Python 3.6+.
+> * Python 3.7+.
> * A POSIX compliant shell available at /bin/sh is required in any container image used for training. For more information about creating and managing Azure Machine Learning environments, see [Create and use software environments](how-to-use-environments.md).
machine-learning How To Auto Train Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models-v1.md
Automated ML supports model training for computer vision tasks like image classi
* [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK. > [!NOTE]
- > Only Python 3.6 and 3.7 are compatible with automated ML support for computer vision tasks.
+ > Only Python 3.7 and 3.8 are compatible with automated ML support for computer vision tasks.
## Select your task type Automated ML for images supports the following task types:
machine-learning How To Deploy Inferencing Gpus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-inferencing-gpus.md
name: project_environment
dependencies: # The Python interpreter version. # Currently Azure ML only supports 3.5.2 and later.-- python=3.6.2
+- python=3.7
- pip: # You must list azureml-defaults as a pip dependency
aks_target.delete()
* [Deploy model on FPGA](how-to-deploy-fpga-web-service.md) * [Deploy model with ONNX](../concept-onnx.md#deploy-onnx-models-in-azure)
-* [Train TensorFlow DNN Models](../how-to-train-tensorflow.md)
+* [Train TensorFlow DNN Models](../how-to-train-tensorflow.md)
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-keras.md
First, define your conda dependencies in a YAML file; in this example the file i
channels: - conda-forge dependencies:-- python=3.6.2
+- python=3.7
- pip: - azureml-defaults - tensorflow-gpu==2.0.0
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-pytorch.md
First, define your conda dependencies in a YAML file; in this example the file i
channels: - conda-forge dependencies:-- python=3.6.2
+- python=3.7
- pip=21.3.1 - pip: - azureml-defaults
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-scikit-learn.md
You can also create your own custom environment. Define your conda dependencies
```yaml dependencies:
- - python=3.6.2
+ - python=3.7
- scikit-learn - numpy - pip:
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-tensorflow.md
First, define your conda dependencies in a YAML file; in this example the file i
channels: - conda-forge dependencies:-- python=3.6.2
+- python=3.7
- pip: - azureml-defaults - tensorflow-gpu==2.2.0
machine-learning How To Use Labeled Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-labeled-dataset.md
pip install azureml-dataprep
``` In the following code, the `animal_labels` dataset is the output from a labeling project previously saved to the workspace.
-The exported dataset is a [TabularDataset](/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset). If you plan to use [download()](/python/api/azureml-core/azureml.data.tabulardataset#azureml-data-tabulardataset-download) or [mount()](/python/api/azureml-core/azureml.data.tabulardataset#azureml-data-tabulardataset-mount) methods, be sure to set the parameter `stream column ='image_url'`.
-
-> [!NOTE]
-> The public preview methods download() and mount() are [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features, and may change at any time.
+The exported dataset is a [TabularDataset](/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset).
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
from azureml.core import Dataset, Workspace
animal_labels = Dataset.get_by_name(workspace, 'animal_labels') animal_pd = animal_labels.to_pandas_dataframe()
-# download the images to local
-download_path = animal_labels.download(stream_column='image_url')
- import matplotlib.pyplot as plt import matplotlib.image as mpimg
-#read images from downloaded path
-img = mpimg.imread(download_path[0])
+#read images from dataset
+img = mpimg.imread(animal_pd['image_url'].iloc[0].open())
imgplot = plt.imshow(img) ```
machine-learning Tutorial 1St Experiment Sdk Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-sdk-train.md
First you'll create a file with the package dependencies.
- defaults - pytorch dependencies:
- - python=3.6.2
+ - python=3.7
- pytorch - torchvision ```
channels:
- defaults - pytorch dependencies:
- - python=3.6.2
+ - python=3.7
- pytorch - torchvision - pip
machine-learning Tutorial Auto Train Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-auto-train-image-models-v1.md
You'll write code using the Python SDK in this tutorial and learn the following
* If you donΓÇÖt have an Azure subscription, create a free account before you begin. Try the [free or paid version](https://azure.microsoft.com/free/) of Azure Machine Learning today.
-* Python 3.6 or 3.7 are supported for this feature
+* Python 3.7 or 3.8 are supported for this feature
* Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md#create-the-workspace) if you don't already have an Azure Machine Learning workspace.
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-pipeline-python-sdk.md
Create a file `conda_dependencies.yml` with the following contents:
```yml dependencies:-- python=3.6.2
+- python=3.7
- pip: - azureml-core - azureml-dataset-runtime
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-version-policy.md
Azure Database for MySQL currently supports the following major and minor versio
| Version | [Single Server](single-server/overview.md) <br/> Current minor version |[Flexible Server](flexible-server/overview.md) <br/> Current minor version | |:-|:-|:| |MySQL Version 5.6 | [5.6.47](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-47.html)(Retired) | Not supported|
-|MySQL Version 5.7 | [5.7.29](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html) | [5.7.37](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html)|
+|MySQL Version 5.7 | [5.7.32](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-32.html) | [5.7.37](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html)|
|MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.28](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html)| > [!NOTE]
network-watcher Diagnose Vm Network Traffic Filtering Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem.md
description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using the IP flow verify capability of Azure Network Watcher. documentationcenter: network-watcher-- Previously updated : 10/12/2022++ Last updated : 11/18/2022
To determine why the rules in steps 3-5 of **Use IP flow verify** allow or deny
:::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/effective-security-rules.png" alt-text="Screenshot of Effective security rules." lightbox="./media/diagnose-vm-network-traffic-filtering-problem/effective-security-rules.png" ::: In step 3 of **Use IP flow verify**, you learned that the reason the communication was allowed is because of the **AllowInternetOutbound** rule. You can see in the previous picture that the **Destination** for the rule is **Internet**. It's not clear how 13.107.21.200, the address you tested in step 3 of **Use IP flow verify**, relates to **Internet** though.
-1. Select the **AllowInternetOutBound** rule, and then scroll down to **Destination**, as shown in the following picture:
-
- :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/security-rule-prefixes.png" alt-text="Screenshot of Security rule prefixes.":::
+1. Select the **AllowInternetOutBound** rule, and then scroll down to **Destination**.
- One of the prefixes in the list is **12.0.0.0/8**, which encompasses the 12.0.0.1-15.255.255.254 range of IP addresses. Since 13.107.21.200 is within that address range, the **AllowInternetOutBound** rule allows the outbound traffic. Additionally, there are no higher priority (lower number) rules shown in the picture in step 2 that override this rule. Close the **Address prefixes** box. To deny outbound communication to 13.107.21.200, you could add a security rule with a higher priority, that denies port 80 outbound to the IP address.
+ One of the prefixes in the list is **13.0.0.0/8**, which encompasses the 13.0.0.1-13.255.255.254 range of IP addresses. Since 13.107.21.200 is within that address range, the **AllowInternetOutBound** rule allows the outbound traffic. Additionally, there are no higher priority (lower number) rules shown in the picture in step 2 that override this rule. Close the **Address prefixes** box. To deny outbound communication to 13.107.21.200, you could add a security rule with a higher priority that denies port 80 outbound to the IP address.
1. When you ran the outbound check to 172.131.0.100 in step 4 of **Use IP flow verify**, you learned that the **DenyAllOutBound** rule denied communication. That rule equates to the **DenyAllOutBound** rule shown in the picture in step 2 that specifies **0.0.0.0/0** as the **Destination**. This rule denies the outbound communication to 172.131.0.100 because the address is not within the **Destination** of any of the other **Outbound rules** shown in the picture. To allow the outbound communication, you can add a security rule with a higher priority, that allows outbound traffic to port 80 for the 172.131.0.100 address. 1. When you ran the inbound check from 172.131.0.100 in step 5 of **Use IP flow verify**, you learned that the **DenyAllInBound** rule denied communication. That rule equates to the **DenyAllInBound** rule shown in the picture in step 2. The **DenyAllInBound** rule is enforced because no other higher priority rule exists that allows port 80 inbound to the VM from 172.31.0.100. To allow the inbound communication, you could add a security rule with a higher priority, that allows port 80 inbound from 172.31.0.100.
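The prefix reasoning above can be double-checked with a small, illustrative Python sketch using the standard `ipaddress` module (the addresses are the ones from this quickstart):

```python
# Quick sanity check: does 13.107.21.200 fall inside the 13.0.0.0/8 prefix?
import ipaddress

address = ipaddress.ip_address("13.107.21.200")
prefix = ipaddress.ip_network("13.0.0.0/8")

print(address in prefix)  # True, so the AllowInternetOutBound rule matches this destination
```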
network-watcher Network Watcher Connectivity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-overview.md
Title: Introduction to Azure Network Watcher Connection Troubleshoot | Microsoft
description: This page provides an overview of the Network Watcher connection troubleshooting capability documentationcenter: na-+ na Previously updated : 07/11/2017- Last updated : 11/02/2022+ # Introduction to connection troubleshoot in Azure Network Watcher
-The connection troubleshoot feature of Network Watcher provides the capability to check a direct TCP connection from a virtual machine to a virtual machine (VM), fully qualified domain name (FQDN), URI, or IPv4 address. Network scenarios are complex, they are implemented using network security groups, firewalls, user-defined routes, and resources provided by Azure. Complex configurations make troubleshooting connectivity issues challenging. Network Watcher helps reduce the amount of time to find and detect connectivity issues. The results returned can provide insights into whether a connectivity issue is due to a platform or a user configuration issue. Connectivity can be checked with [PowerShell](network-watcher-connectivity-powershell.md), [Azure CLI](network-watcher-connectivity-cli.md), and [REST API](network-watcher-connectivity-rest.md).
+The connection troubleshoot feature of Network Watcher provides the capability to check a direct TCP connection from a virtual machine to a virtual machine (VM), fully qualified domain name (FQDN), URI, or IPv4 address. Network scenarios are complex; they're implemented using network security groups, firewalls, user-defined routes, and resources provided by Azure. Complex configurations make troubleshooting connectivity issues challenging. Network Watcher helps reduce the amount of time to find and detect connectivity issues. The results returned can provide insights into whether a connectivity issue is due to a platform or a user configuration issue. Connectivity can be checked with [PowerShell](network-watcher-connectivity-powershell.md), [Azure CLI](network-watcher-connectivity-cli.md), and [REST API](network-watcher-connectivity-rest.md).
> [!IMPORTANT] > Connection troubleshoot requires that the VM you troubleshoot from has the `AzureNetworkWatcherExtension` VM extension installed. For installing the extension on a Windows VM visit [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) and for Linux VM visit [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json). The extension is not required on the destination endpoint.
The connection troubleshoot feature of Network Watcher provides the capability t
The following table shows the properties returned when connection troubleshoot has finished running.
-|Property |Description |
+|**Property** |**Description** |
||| |ConnectionStatus | The status of the connectivity check. Possible results are **Reachable** and **Unreachable**. |
-|AvgLatencyInMs | Average latency during the connectivity check in milliseconds. (Only shown if check status is reachable) |
-|MinLatencyInMs | Minimum latency during the connectivity check in milliseconds. (Only shown if check status is reachable) |
-|MaxLatencyInMs | Maximum latency during the connectivity check in milliseconds. (Only shown if check status is reachable) |
+|AvgLatencyInMs | Average latency during the connectivity check, in milliseconds. (Only shown if check status is reachable) |
+|MinLatencyInMs | Minimum latency during the connectivity check, in milliseconds. (Only shown if check status is reachable) |
+|MaxLatencyInMs | Maximum latency during the connectivity check, in milliseconds. (Only shown if check status is reachable) |
|ProbesSent | Number of probes sent during the check. Max value is 100. | |ProbesFailed | Number of probes that failed during the check. Max value is 100. | |Hops | Hop by hop path from source to destination. | |Hops[].Type | Type of resource. Possible values are **Source**, **VirtualAppliance**, **VnetLocal**, and **Internet**. | |Hops[].Id | Unique identifier of the hop.| |Hops[].Address | IP address of the hop.|
-|Hops[].ResourceId | ResourceID of the hop if the hop is an Azure resource. If it is an internet resource, ResourceID is **Internet**. |
+|Hops[].ResourceId | ResourceID of the hop if the hop is an Azure resource. If it's an internet resource, ResourceID is **Internet**. |
|Hops[].NextHopIds | The unique identifier of the next hop taken.| |Hops[].Issues | A collection of issues that were encountered during the check at that hop. If there were no issues, the value is blank.| |Hops[].Issues[].Origin | At the current hop, where issue occurred. Possible values are:<br/> **Inbound** - Issue is on the link from the previous hop to the current hop<br/>**Outbound** - Issue is on the link from the current hop to the next hop<br/>**Local** - Issue is on the current hop.|
The following is an example of an issue found on a hop.
Connection troubleshoot returns fault types about the connection. The following table provides a list of the current fault types returned.
-|Type |Description |
+|**Type** |**Description** |
||| |CPU | High CPU utilization. | |Memory | High Memory utilization. |
-|GuestFirewall | Traffic is blocked due to a virtual machine firewall configuration. |
+|GuestFirewall | Traffic is blocked due to a virtual machine firewall configuration. <br><br> Note that a TCP ping is a unique use case in which, if there's no allowed rule, the firewall itself responds to the client's TCP ping request even though the TCP ping doesn't reach the target IP address/FQDN. This event isn't logged. If there's a network rule that allows access to the target IP address/FQDN, the ping request reaches the target server and its response is relayed back to the client. This event is logged in the Network rules log. |
|DNSResolution | DNS resolution failed for the destination address. | |NetworkSecurityRule | Traffic is blocked by an NSG Rule (Rule is returned) | |UserDefinedRoute|Traffic is dropped due to a user defined or system route. |
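As a hedged sketch of how these properties surface outside the portal, the following Python example runs connection troubleshoot through the `azure-mgmt-network` SDK; the Network Watcher name, resource IDs, and exact model/import paths are assumptions and may vary by SDK version:

```python
# Hedged sketch: run connection troubleshoot programmatically and read the results.
# Subscription ID, Network Watcher name, and the source VM resource ID are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    ConnectivityParameters,
    ConnectivitySource,
    ConnectivityDestination,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

params = ConnectivityParameters(
    source=ConnectivitySource(resource_id="<source-vm-resource-id>"),
    destination=ConnectivityDestination(address="www.bing.com", port=443),
)

poller = client.network_watchers.begin_check_connectivity(
    "NetworkWatcherRG", "NetworkWatcher_eastus", params
)
result = poller.result()

print(result.connection_status)   # Reachable or Unreachable
print(result.avg_latency_in_ms)   # populated only when the check is reachable
for hop in result.hops:
    print(hop.type, hop.address, [issue.type for issue in hop.issues])
```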
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-certificate-rotation.md
To verify if you are using SSL connection to connect to the server refer [SSL ve
No. There's no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
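For illustration only (not part of the FAQ), a client that pins the root CA typically passes a certificate bundle containing **DigiCertGlobalRootG2** when connecting; this minimal Python sketch assumes `psycopg2` and placeholder connection details:

```python
# Hedged sketch: verify the TLS connection against a root CA bundle that
# includes DigiCertGlobalRootG2. Host, user, password, and file path are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="mydemoserver.postgres.database.azure.com",
    user="myadmin@mydemoserver",
    password="<password>",
    dbname="postgres",
    sslmode="verify-full",                   # enforce certificate and hostname verification
    sslrootcert="combined-root-ca.crt.pem",  # bundle containing DigiCertGlobalRootG2
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```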
-### 13. What if you are using docker image of PgBouncer sidecar provided by Microsoft?
-A new docker image which supports both [**Baltimore**](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and [**DigiCert**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) is published to below [here](https://hub.docker.com/_/microsoft-azure-oss-db-tools-pgbouncer-sidecar) (Latest tag). You can pull this new image to avoid any interruption in connectivity starting December, 2022.
-### 14. What if I have further questions?
+### 13. What if I have further questions?
If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help, please create a [support request](https://learn.microsoft.com/azure/azure-portal/supportability/how-to-create-azure-support-request): * For *Issue type*, select *Technical*. * For *Subscription*, select your *subscription*.
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-monitoring.md
Azure Database for PostgreSQL provides various metrics that give insight into th
These metrics are available for Azure Database for PostgreSQL:
-|Metric|Metric Display Name|Unit|Description|
-|||||
-|cpu_percent|CPU percent|Percent|The percentage of CPU in use.|
-|memory_percent|Memory percent|Percent|The percentage of memory in use.|
-|io_consumption_percent|IO percent|Percent|The percentage of IO in use. (Not applicable for Basic tier servers.)|
-|storage_percent|Storage percentage|Percent|The percentage of storage used out of the server's maximum.|
-|storage_used|Storage used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
-|storage_limit|Storage limit|Bytes|The maximum storage for this server.|
-|serverlog_storage_percent|Server Log storage percent|Percent|The percentage of server log storage used out of the server's maximum server log storage.|
-|serverlog_storage_usage|Server Log storage used|Bytes|The amount of server log storage in use.|
-|serverlog_storage_limit|Server Log storage limit|Bytes|The maximum server log storage for this server.|
-|active_connections|Active Connections|Count|The number of active connections to the server.|
-|connections_failed|Failed Connections|Count|The number of established connections that failed.|
-|network_bytes_egress|Network Out|Bytes|Network Out across active connections.|
-|network_bytes_ingress|Network In|Bytes|Network In across active connections.|
-|backup_storage_used|Backup Storage Used|Bytes|The amount of backup storage used. This metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained in the [concepts article](concepts-backup.md). For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.|
-|pg_replica_log_delay_in_bytes|Max Lag Across Replicas|Bytes|The lag in bytes between the primary and the most-lagging replica. This metric is available on the primary server only.|
-|pg_replica_log_delay_in_seconds|Replica Lag|Seconds|The time since the last replayed transaction. This metric is available for replica servers only.|
+##### `Error`
+
+|Display Name|Metric ID |Unit |Description|
+|---|---|---|---|
+|**Failed Connections**|connections_failed |Count |The number of established connections that failed.|
+
+##### `Latency`
+
+|Display Name|Metric ID |Unit |Description|
+|---|---|---|---|
+|**Max Lag Across Replicas**|pg_replica_log_delay_in_bytes|Bytes |The lag in bytes between the primary and the most-lagging replica. This metric is available on the primary server only.|
+|**Replica Lag** |pg_replica_log_delay_in_seconds|Seconds |The time since the last replayed transaction. This metric is available for replica servers only.|
+
+##### `Saturation`
+
+|Display Name|Metric ID |Unit |Description|
+|---|---|---|---|
+|**Backup Storage Used**|backup_storage_used |Bytes |The amount of backup storage used. This metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained in the [concepts article](concepts-backup.md). For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.|
+|**CPU percent** |cpu_percent |Percent |The percentage of CPU in use.|
+|**IO percent** |io_consumption_percent |Percent |The percentage of IO in use. (Not applicable for Basic tier servers.)|
+|**Memory percent** |memory_percent |Percent |The percentage of memory in use.|
+|**Server Log storage limit** |serverlog_storage_limit |Bytes |The maximum server log storage for this server.|
+|**Server Log storage percent** |serverlog_storage_percent |Percent |The percentage of server log storage used out of the server's maximum server log storage.|
+|**Server Log storage used** |serverlog_storage_usage |Bytes |The amount of server log storage in use.|
+|**Storage limit** |storage_limit |Bytes |The maximum storage for this server.|
+|**Storage percentage** |storage_percent |Percent |The percentage of storage used out of the server's maximum.|
+|**Storage used** |storage_used |Bytes |The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
+
+##### `Traffic`
+
+|Display Name|Metric ID |Unit |Description|
+|---|---|---|---|
+|**Active Connections**|active_connections |Count |The number of active connections to the server.|
+|**Network Out** |network_bytes_egress |Bytes |Network Out across active connections.|
+|**Network In** |network_bytes_ingress |Bytes |Network In across active connections.|
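As a hedged example of how these metric IDs are consumed programmatically, the following Python sketch queries `cpu_percent` with the `azure-monitor-query` library; the resource ID is a placeholder and parameter names may vary slightly by library version:

```python
# Hedged sketch: query the cpu_percent metric for a single server over the last hour.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

client = MetricsQueryClient(DefaultAzureCredential())

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DBforPostgreSQL/servers/<server-name>"
)

response = client.query_resource(
    resource_id,
    metric_names=["cpu_percent"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.AVERAGE],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)
```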
## Server logs
private-link Create Private Link Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-portal.md
Previously updated : 07/11/2022 Last updated : 11/17/2022 #Customer intent: As someone with a basic network background who's new to Azure, I want to create an Azure Private Link service by using the Azure portal # Quickstart: Create a Private Link service by using the Azure portal
-Get started creating a Private Link service that refers to your service. Give Private Link access to your service or resource deployed behind an Azure Standard Load Balancer. Users of your service have private access from their virtual network.
+Get started creating a Private Link service that refers to your service. Give Private Link access to your service or resource deployed behind an Azure Standard Load Balancer. Users of your service have private access from their virtual network.
## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Sign in to the Azure portal
-
-Sign in to the Azure portal at https://portal.azure.com.
- ## Create an internal load balancer In this section, you'll create a virtual network and an internal Azure Load Balancer.
-### Virtual network
+### Load balancer virtual network
-In this section, you create a virtual network and subnet to host the load balancer that accesses your Private Link service.
+Create a virtual network and subnet to host the load balancer that accesses your Private Link service.
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
+2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-3. Select **Create**.
+3. Select **+ Create**.
4. In **Create virtual network**, enter or select this information in the **Basics** tab:
In this section, you create a virtual network and subnet to host the load balanc
| Resource Group | Select **Create new**. Enter **CreatePrivLinkService-rg**. </br> Select **OK**. | | **Instance details** | | | Name | Enter **myVNet** |
- | Region | Select **(US) East US 2** |
+ | Region | Select **East US 2** |
5. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
In this section, you create a virtual network and subnet to host the load balanc
### Create load balancer
-In this section, you create a load balancer that load balances virtual machines.
+Create an internal load balancer that load balances virtual machines.
During the creation of the load balancer, you'll configure:
During the creation of the load balancer, you'll configure:
1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-1. In the **Load balancer** page, select **Create**.
+1. In the **Load balancer** page, select **+ Create**.
-1. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
+2. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
| Setting | Value | |-|--|
During the creation of the load balancer, you'll configure:
| Resource group | Select **CreatePrivLinkService-rg**. | | **Instance details** | | | Name | Enter **myLoadBalancer** |
- | Region | Select **(US) East US 2**. |
+ | Region | Select **East US 2**. |
| SKU | Leave the default **Standard**. | | Type | Select **Internal**. | | Tier | Select **Regional**. |
-1. Select **Next: Frontend IP configuration** at the bottom of the page.
-
-1. In **Frontend IP configuration**, select **+ Add a frontend IP**.
-
-1. Enter **LoadBalancerFrontend** in **Name**.
+3. Select **Next: Frontend IP configuration**.
-1. Select **myBackendSubnet** in **Subnet**.
+4. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
-1. Select **Dynamic** for **Assignment**.
+5. Enter or select the following information in **Add frontend IP configuration**.
-1. Select **Zone-redundant** in **Availability zone**.
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **LoadBalancerFrontend**.|
+ | Virtual network | Select **myVNet (CreatePrivLinkService-rg)**. |
+ | Subnet | Select **myBackendSubnet (10.1.0.0/24)**. |
+ | Assignment | Leave the default of **Dynamic**. |
+ | Availability zone | Leave the default of **Zone-redundant**. |
> [!NOTE] > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
-1. Select **Add**.
+6. Select **Add**.
-1. Select **Next: Backend pools** at the bottom of the page.
+7. Select **Next: Backend pools**.
-1. In the **Backend pools** tab, select **+ Add a backend pool**.
+8. In **Backend pools**, select **+ Add a backend pool**.
-1. Enter **myBackendPool** for **Name** in **Add backend pool**.
+9. Enter **myBackendPool** for **Name**.
-1. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
+10. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
-1. Select **Save**.
+11. Select **Save**.
-1. Select the **Next: Inbound rules** button at the bottom of the page.
+12. Select **Next: Inbound rules**.
-1. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+13. In **Load balancing rule**, select **+ Add a load balancing rule**.
-1. In **Add load balancing rule**, enter or select the following information:
+14. In **Add load balancing rule**, enter or select the following information:
| Setting | Value | | - | -- | | Name | Enter **myHTTPRule** | | IP Version | Select **IPv4** or **IPv6** depending on your requirements. | | Frontend IP address | Select **LoadBalancerFrontend**. |
+ | Backend pool | Select **myBackendPool**. |
| Protocol | Select **TCP**. | | Port | Enter **80**. | | Backend port | Enter **80**. |
- | Backend pool | Select **myBackendPool**. |
- | HA Ports | Check box. |
| Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. | | Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. | | TCP reset | Select **Enabled**. | | Floating IP | Select **Disabled**. |
-1. Select **Add**.
+15. Select **Add**.
-1. Select the blue **Review + create** button at the bottom of the page.
+16. Select the blue **Review + create** button.
-1. Select **Create**.
+17. Select **Create**.
## Create a private link service
-In this section, you'll create a Private Link service behind a standard load balancer.
-
-1. On the upper-left part of the page in the Azure portal, select **Create a resource**.
+Create a Private Link service behind the load balancer you created in the previous section.
-1. Search for **Private Link** in the **Search the Marketplace** box.
+1. In the search box at the top of the portal, enter **Private link**. Select **Private link services** in the search results.
-1. Select **Create**.
+2. Select **+ Create**.
-1. In **Overview** under **Private Link Center**, select the blue **Create private link service** button.
-
-1. In the **Basics** tab under **Create private link service**, enter, or select the following information:
+3. In the **Basics** tab, enter or select the following information:
| Setting | Value | | - | -- |
In this section, you'll create a Private Link service behind a standard load bal
| Resource Group | Select **CreatePrivLinkService-rg**. | | **Instance details** | | | Name | Enter **myPrivateLinkService**. |
- | Region | Select **(US) East US 2**. |
+ | Region | Select **East US 2**. |
-1. Select the **Outbound settings** tab or select **Next: Outbound settings** at the bottom of the page.
+4. Select **Next: Outbound settings**.
-1. In the **Outbound settings** tab, enter or select the following information:
+5. In the **Outbound settings** tab, enter or select the following information:
| Setting | Value | | - | -- | | Load balancer | Select **myLoadBalancer**. | | Load balancer frontend IP address | Select **LoadBalancerFrontEnd (10.1.0.4)**. |
- | Source NAT subnet | Select **mySubnet (10.1.0.0/24)**. |
+ | Source NAT subnet | Select **myVNet/myBackendSubnet (10.1.0.0/24)**. |
| Enable TCP proxy V2 | Leave the default of **No**. </br> If your application expects a TCP proxy v2 header, select **Yes**. | | **Private IP address settings** | | | Leave the default settings | |
-1. Select the **Access security** tab or select **Next: Access security** at the bottom of the page.
+6. Select **Next: Access security**.
-1. Leave the default of **Role-based access control only** in the **Access security** tab.
+7. Leave the default of **Role-based access control only** in the **Access security** tab.
-1. Select the **Tags** tab or select **Next: Tags** at the bottom of the page.
+8. Select **Next: Tags**.
-1. Select the **Review + create** tab or select **Next: Review + create** at the bottom of the page.
+9. Select **Next: Review + create**.
-1. Select **Create** in the **Review + create** tab.
+10. Select **Create**.
Your private link service is created and can receive traffic. If you want to see traffic flows, configure your application behind your standard load balancer.
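For readers who script their deployments, a rough Python equivalent of this step with the `azure-mgmt-network` SDK might look like the sketch below; all names mirror the quickstart values, the IDs are placeholders, and the exact request shape can vary by SDK version:

```python
# Hedged sketch: create the Private Link service behind the load balancer frontend.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

rg = "CreatePrivLinkService-rg"
frontend_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{rg}/providers/Microsoft.Network"
    "/loadBalancers/myLoadBalancer/frontendIPConfigurations/LoadBalancerFrontend"
)
subnet_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{rg}/providers/Microsoft.Network"
    "/virtualNetworks/myVNet/subnets/myBackendSubnet"
)

poller = client.private_link_services.begin_create_or_update(
    rg,
    "myPrivateLinkService",
    {
        "location": "eastus2",
        "load_balancer_frontend_ip_configurations": [{"id": frontend_id}],
        "ip_configurations": [
            {
                "name": "snat-ipconfig",
                "private_ip_allocation_method": "Dynamic",
                "subnet": {"id": subnet_id},
                "primary": True,
            }
        ],
    },
)
print(poller.result().provisioning_state)
```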
In this section, you'll map the private link service to a private endpoint. A vi
### Create private endpoint virtual network
-1. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+2. Select **+ Create**.
-1. In **Create virtual network**, enter or select this information in the **Basics** tab:
+3. In the **Basics** tab, enter or select the following information:
| **Setting** | **Value** | ||--|
In this section, you'll map the private link service to a private endpoint. A vi
| Resource Group | Select **CreatePrivLinkService-rg** | | **Instance details** | | | Name | Enter **myVNetPE** |
- | Region | Select **(US) East US 2** |
+ | Region | Select **East US 2** |
-1. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+4. Select **Next: IP Addresses** or the **IP Addresses** tab.
-1. In the **IP Addresses** tab, enter this information:
+5. In the **IP Addresses** tab, enter the following information:
| Setting | Value | |--|-|
- | IPv4 address space | Enter **11.1.0.0/16** |
+ | IPv4 address space | Enter **10.1.0.0/16** |
-1. Under **Subnet name**, select the word **default**.
+6. Select **+Add subnet**.
-1. In **Edit subnet**, enter this information:
+7. In **Add subnet**, enter this information:
| Setting | Value | |--|-| | Subnet name | Enter **mySubnetPE** |
- | Subnet address range | Enter **11.1.0.0/24** |
+ | Subnet address range | Enter **10.1.0.0/24** |
-1. Select **Save**.
+8. Select **Add**.
-1. Select the **Review + create** tab or select the **Review + create** button.
+9. Select the **Review + create** tab or select **Review + create**.
-1. Select **Create**.
+10. Select **Create**.
### Create private endpoint
-1. On the upper-left side of the screen in the portal, select **Create a resource** > **Networking** > **Private Link**, or in the search box enter **Private Link**.
-
-1. Select **Create**.
+1. In the search box at the top of the portal, enter **Private endpoint**. Select **Private endpoints** in the search results.
-1. In **Private Link Center**, select **Private endpoints** in the left-hand menu.
+2. Select **+ Create**.
-1. In **Private endpoints**, select **+ Add**.
-
-1. In the **Basics** tab of **Create a private endpoint**, enter, or select this information:
+3. In the **Basics** tab, enter or select the following information:
| Setting | Value | | - | -- |
In this section, you'll map the private link service to a private endpoint. A vi
| Resource group | Select **CreatePrivLinkService-rg**. You created this resource group in the previous section.| | **Instance details** | | | Name | Enter **myPrivateEndpoint**. |
- | Region | Select **(US) East US 2**. |
+ | Network Interface Name | Leave the default of **myPrivateEndpoint-nic**. |
+ | Region | Select **East US 2**. |
-1. Select the **Next: Resource** button at the bottom of the page.
+4. Select **Next: Resource**.
-1. In **Resource**, enter or select this information:
+5. In the **Resource** tab, enter or select the following information:
| Setting | Value | | - | -- |
In this section, you'll map the private link service to a private endpoint. A vi
| Resource type | Select **Microsoft.Network/privateLinkServices**. | | Resource | Select **myPrivateLinkService**. |
-1. Select the **Next: Virtual Network** button at the bottom of the screen.
+6. Select **Next: Virtual Network**.
-1. In **Configuration**, enter or select this information:
+7. In **Virtual Network**, enter or select the following information.
| Setting | Value | | - | -- | | **Networking** | |
- | Virtual Network | Select **myVNetPE**. |
- | Subnet | Select **mySubnetPE**. |
+ | Virtual network | Select **myVNetPE**. |
+ | Subnet | Select **myVNet/mySubnetPE (10.1.0.0/24)**. |
+ | Network policy for private endpoints | Select **edit** to apply Network security groups and/or Route tables to the subnet that contains the private endpoint. </br> In **Edit subnet network policy**, select the checkbox next to **Network security groups** and **Route Tables**. </br> Select **Save**. </br></br>For more information, see [Manage network policies for private endpoints](disable-private-endpoint-network-policy.md) |
+
+# [**Dynamic IP**](#tab/dynamic-ip)
+
+| Setting | Value |
+| - | -- |
+| **Private IP configuration** | Select **Dynamically allocate IP address**. |
++
+# [**Static IP**](#tab/static-ip)
+
+| Setting | Value |
+| - | -- |
+| **Private IP configuration** | Select **Statically allocate IP address**. |
+| Name | Enter **myIPconfig**. |
+| Private IP | Enter **10.1.0.10**. |
+
-1. Select **Next** until the **Review + create** tab then select **Create**.
++
+8. Select **Next: DNS**.
+
+9. Select **Next: Tags**.
+
+10. Select **Next: Review + create**.
+
+11. Select **Create**.
### IP address of private endpoint In this section, you'll find the IP address of the private endpoint that corresponds with the load balancer and private link service.
-1. In the left-hand column of the Azure portal, select **Resource groups**.
-
-2. Select the **CreatePrivLinkService-rg** resource group.
+1. Enter **CreatePrivLinkService-rg** in the search box at the top of the portal. Select **CreatePrivLinkService-rg** in the search results in **Resource Groups**.
-3. In the **CreatePrivLinkService-rg** resource group, select **myPrivateEndpoint**.
+2. In the **CreatePrivLinkService-rg** resource group, select **myPrivateEndpoint**.
-4. In the **Overview** page of **myPrivateEndpoint**, select the name of the network interface associated with the private endpoint. The network interface name begins with **myPrivateEndpoint.nic**.
+4. In the **Overview** page of **myPrivateEndpoint**, select the name of the network interface associated with the private endpoint. The network interface name begins with **myPrivateEndpoint-nic**.
5. In the **Overview** page of the private endpoint nic, the IP address of the endpoint is displayed in **Private IP address**.
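If you prefer to read that address programmatically, a minimal Python sketch with the `azure-mgmt-network` SDK could look like this, assuming the default NIC name chosen earlier:

```python
# Hedged sketch: read the private endpoint's private IP address from its NIC.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

nic = client.network_interfaces.get("CreatePrivLinkService-rg", "myPrivateEndpoint-nic")
for ip_config in nic.ip_configurations:
    print(ip_config.name, ip_config.private_ip_address)
```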
In this section, you'll find the IP address of the private endpoint that corresp
When you're done using the private link service, delete the resource group to clean up the resources used in this quickstart.
-1. Enter **CreatePrivLinkService-rg** in the search box at the top of the portal, and select **CreatePrivLinkService-rg** from the search results.
+1. Enter **CreatePrivLinkService-rg** in the search box at the top of the portal. Select **CreatePrivLinkService-rg** in the search results.
2. Select **Delete resource group**.
purview Concept Metamodel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-metamodel.md
+
+ Title: Microsoft Purview metamodel
+description: The Microsoft Purview metamodel helps you represent a business perspective of your data, how it's grouped into data domains, used in business processes, organized into systems, and more.
+++++ Last updated : 11/10/2022+++
+# Microsoft Purview metamodel
++
+Metamodel is a feature in the Microsoft Purview Data Map that helps add rich business context to your data catalog. It tells a story about how your data is grouped in data domains, how it's used in business processes, what projects are impacted by the data, and ultimately how the data fits in the day to day of your business.
+
+The context metamodel provides is important because business users, the people who are consuming the data, often have non-technical questions about the data. Questions like: What department produces this dataset? Are there any projects that are using this dataset? Where does this report come from?
+
+When you scan data into Microsoft Purview, the technical metadata can tell you what the data looks like, if the data has been classified, or if it has glossary terms assigned, but it can't tell you where and how that data is used. The metamodel gives your users that information. It can tell you what data is mission critical for a product launch or used by a high performing sales team to convert leads to prospects.
+
+So not only does the metamodel help data consumers find the data they're looking for, but it also tells your data managers what data is critical and how healthy that critical data is, so you can better manage and govern it. For example: you may have different privacy obligations if you use personal information for marketing activities vs. analytics that improve a product or service, and metamodel can help you determine where data is being used.
+
+## So what is the metamodel?
+
+What does it look like? The metamodel is built from **assets** and the **relationships** between them.
+
+For example, you might have a sales reporting team (asset) that consumes data (relationship) from some SQL tables (assets).
+
+When you scan data sources into the data map, you already have the technical data assets like SQL tables available for your metamodel. But what about assets like a sales reporting team, or a marketing strategy that represent processes or people instead of a data source? Metamodel provides **asset types** that allow you to describe other important parts of your business.
+
+An **asset type** is a template for important concepts like business processes, departments, lines of business, or even products. They're the building blocks you'll use to describe how data is used in your business. The **asset type** creates a template you can use over and over to describe specific assets. For example, you can define an asset type "department" and then create new department assets for each of your business departments. These new assets are stored in Microsoft Purview like any other data assets that were scanned in the data map, so you can search and browse for them in the data catalog.
+
+Metamodel includes several [predefined asset types](how-to-metamodel.md#predefined-asset-types) to help you get started, but you can also create your own.
+
+Similarly a **relationship definition** is the template for the kinds of interactions between assets that you want to represent. For example, a department *manages* a business process. An organization *has* departments. The business process, organization, and departments are the assets, "manages" and "has" are the relationships between them.
+
+For example, if we want to use Microsoft Purview to show how key data sets are used in our business processes, we can represent that information as a template:
++
+Which we can then use to describe how a specific business process uses a specific data set:
++
+## Metamodel example
+
+For a simple example of a metamodel, let's consider marketing campaign management in a business. This process is an asset that we'll add to our metamodel. It's not a data source we can scan, since it's a set of business processes, but during this process real data will be used and referenced. It's important to show what data is being used and how, so we can properly manage and govern it. We'll create an asset for the marketing campaign management from the asset type "Business Process".
+
+We know there are two tables in SQL that the marketing campaign management team uses. These are assets too, but they were data source assets created when the SQL database was scanned into the Microsoft Purview Data Map.
+
+The relationship between the marketing campaign management asset and the SQL table assets is that campaign management **consumes**, or uses, the SQL tables when developing campaigns. We can record that in our metamodel, so now anyone that looks at those SQL tables can see that they're being used to develop marketing campaigns. We'll also be able to see if there are any other teams that use this data, or maybe which department develops this data as well. So now, with metamodel, we know not only what the data is, but we have a story about how it's being used that helps us understand and manage it.
++
+Now that marketing campaign management is an asset in your data catalog like any other, you can find it in a search. A user could search for that process and quickly know what data it uses and produces, without needing to know anything about the data beforehand.
+
+## Next steps
+
+If you're ready to get started, follow the [how to create a metamodel](how-to-metamodel.md) article.
purview How To Metamodel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-metamodel.md
+
+ Title: Manage assets with metamodel
+description: Manage asset types with Microsoft Purview metamodel
+++++ Last updated : 11/10/2022+++
+# Manage assets with metamodel
++
+Metamodel is a feature in the Microsoft Purview Data Map that gives the technical data in your data map relationships and reference points to make it easier to navigate and understand in the day to day. Like adding streets and cities to a map, the metamodel orients users so they know where they are and can discover the information they need.
+
+This article will get you started in building a metamodel for your Microsoft Purview Data Map.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Create a new, or use an existing Microsoft Purview account. You can [follow our quick-start guide to create one](create-catalog-portal.md).
+- Create a new, or use an existing resource group, and place new data sources under it. [Follow this guide to create a new resource group](../azure-resource-manager/management/manage-resource-groups-portal.md).
+- [Data Curator role](catalog-permissions.md#roles) on the collection where the data asset is housed. See the guide on [managing Microsoft Purview role assignments](catalog-permissions.md#assign-permissions-to-your-users).
+
+## Current limitations
+
+- New relationships will always be association relationships.
+- When a new asset is created, you have to refresh the asset to see its relationships
+- You can't set relationships between two data assets in the Microsoft Purview governance portal
+- The related tab only shows a "business lineage" view for business assets, not data assets
+
+## Create and modify asset types
+
+1. To get started, open the data map and select **Asset types**. You'll see a list of available asset types. [Predefined asset types](#predefined-asset-types) will have unique icons. All custom assets are designated with a puzzle piece icon.
+
+1. To create a new asset type, select **New asset type** and add a name, description, and attributes.
+
+ :::image type="content" source="./media/how-to-metamodel/create-and-modify-metamodel-asset-types-inline.png" alt-text="Screenshot of the asset types page in the Microsoft Purview Data Map, with the buttons in steps 1 through 3 highlighted." lightbox="./media/how-to-metamodel/create-and-modify-metamodel-asset-types.png":::
+
+1. To define a relationship between two asset types, select **New relationship type**.
+
+1. Give the relationship a name and define its reverse direction. Assign it to one or more pairs of assets. Select **Create** to save your new relationship type.
+
+ :::image type="content" source="./media/how-to-metamodel/create-new-relationship-type.png" alt-text="Screenshot of the new relationship type page with a relationship defined and the create button highlighted." border="true":::
+
+1. As you create more asset types, your canvas may get crowded with asset types. To hide an asset from the canvas, select the eye icon on the asset card.
+
+ :::image type="content" source="./media/how-to-metamodel/hide-asset.png" alt-text="Screenshot of an asset card in the asset types canvas, the eye icon in the right corner is highlighted." border="true":::
+
+1. To add an asset type back to the canvas, drag it from the left panel.
+
+ :::image type="content" source="./media/how-to-metamodel/add-asset.png" alt-text="Screenshot of the asset list to the left of the asset canvas with one item highlighted." border="true":::
+
+## Create and modify assets
+
+1. When you're ready to begin working with assets, go to the data catalog and select **Business assets**.
+
+ :::image type="content" source="./media/how-to-metamodel/metamodel-assets-in-catalog.png" alt-text="Screenshot of left menu in the Microsoft Purview governance portal, the data map and business assets buttons highlighted." border="true":::
+
+1. Currently there's no integration with collections, so all assets created via the metamodel canvas will be listed under the data catalog.
+
+ :::image type="content" source="./media/how-to-metamodel/assets-page.png" alt-text="Screenshot of the business assets page." border="true":::
+
+1. To create a new asset, select **New asset**, select the asset type from the drop-down menu, give it a name, description, and complete any required attributes. Select **Create** to save your new asset.
+
+ :::image type="content" source="./media/how-to-metamodel/select-new-asset.png" alt-text="Screenshot of the business assets page with the new asset button highlighted." border="true":::
+
+ :::image type="content" source="./media/how-to-metamodel/create-new-asset.png" alt-text="Screenshot of the new asset page with a name and description added and the create button highlighted." border="true":::
+
+1. To establish a relationship between two assets, go to the asset detail page, select **Edit > Related**, and then select the relationship you'd like to populate.
+
+ :::image type="content" source="./media/how-to-metamodel/select-edit.png" alt-text="Screenshot of an asset page with the edit button highlighted." border="true":::
+
+ :::image type="content" source="./media/how-to-metamodel/establish-relationships.png" alt-text="Screenshot of the edit asset page with the Related tab open and the relationships highlighted." border="true":::
+
+1. Select the asset or assets you'd like to link from the data catalog and select **OK**.
+
+ :::image type="content" source="./media/how-to-metamodel/select-related-assets.png" alt-text="Screenshot of the select assets page with two assets selected and the Ok button highlighted." border="true":::
+
+1. Save your changes. You can see the relationships you established in the asset overview.
+
+1. In the **Related** tab of the asset you can also explore a visual representation of related assets.
+
+ :::image type="content" source="./media/how-to-metamodel/visualize-related-assets.png" alt-text="Screenshot of the related tab of a business asset." border="true":::
+
+ >[!NOTE]
+ >This is the experience provided by default from Atlas.
+
+## Predefined asset types
+
+An asset type is a template for storing a concept that's important to your organization: anything you might want to represent in your data map alongside your physical metadata. You can create your own, but Microsoft Purview also comes with a prepackaged set of business asset types you can modify to meet your needs.
+
+| Asset Type | Description |
+|||
+| Application service| A well-defined software component, especially one that implements a specific business function such as onboarding a new customer, taking an order, or sending an invoice. |
+| Business process | A set of activities that are performed in coordination in an organizational or technical environment that jointly realizes a business goal. |
+| Data Domain | A category of data that is governed or explicitly managed for master data management. |
+| Department | An organizational subunit that only has full recognition within the context of that organization. A department wouldn't be regarded as a legal entity in its own right. |
+| Line of business | An organization subdivision focused on a single product or family of products. |
+| Organization | A collection of people organized together into a community or other social, commercial or political structure. The group has some common purpose or reason for existence that goes beyond the set of people belonging to it and can act as a unit. Organizations are often decomposable into hierarchical structures. |
+| Product | Any offered product or service. |
+| Project | A specific activity used to control the use of resources and associated costs so they're used appropriately in order to successfully achieve the project's goals, such as building a new capability or improving an existing capability. |
+| System | An IT system including hardware and software. |
+
+## Next steps
+
+For more information about the metamodel, see the metamodel [concept page](concept-metamodel.md).
purview How To Use Workflow Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-use-workflow-connectors.md
Currently the following connectors are available for a workflow in Microsoft Pur
|Check data source registration for data use governance |Validate if data source has been registered with Data Use Management enabled. |None | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Multiple per workflow |Data access request | |Condition |Evaluate a value to true or false. Based on the evaluation the workflow will be re-directed to different branches | <br> - Add row <br> - Title <br> - Add group | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Multiple per workflow |All workflows templates | |Create Glossary Term |Create a new glossary term |None | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Multiple per workflow |Create glossary term template |
-|Create task and wait for task completion |Creates, assigns, and tracks a task to a user or Azure Active Directory group as part of a workflow | <br> - Assigned to <br> - Task title <br> - Task body | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Multiple per workflow |All workflows templates |
+|Create task and wait for task completion |Creates a task, assigns it to a user or Azure Active Directory group, and tracks it as part of a workflow. <br> - Reminder settings - You can set reminders to periodically remind the task owner until they complete the task. <br> - Expiry settings - You can set an expiration or deadline for the task activity. You can also set who needs to be notified (user/AAD group) after the expiry. | <br> - Assigned to <br> - Task title <br> - Task body | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Multiple per workflow |All workflows templates |
|Delete glossary term |Delete an existing glossary term |None | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Multiple per workflow |Delete glossary term | |Grant access |Create an access policy to grant access to the requested user. |None | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Multiple per workflow |Data access request | |Http |Integrate with external applications through http or https call. <br> For more information, see [Workflows HTTP connector](how-to-use-workflow-http-connector.md) | <br> - Host <br> - Method <br> - Path <br> - Headers <br> - Queries <br> - Body <br> - Authentication | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Settings: Secured Input and Secure outputs (Enabled by default) <br> - Multiple per workflow |All workflows templates | |Import glossary terms |Import one or more glossary terms |None | <br> - Renamable: Yes <br> - Deletable: No <br> - Multiple per workflow |Import terms | |Send email notification |Send email notification to one or more recipients | <br> - Subject <br> - Message body <br> - Recipient | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Settings: Secured Input and Secure outputs (Enabled by default) <br> - Multiple per workflow |All workflows templates |
-|Start and wait for an approval |Generates approval requests and assign the requests to individual users or Microsoft Azure Active Directory groups. Microsoft Purview workflow approval connector currently supports two types of approval types: <br> - First to Respond - This implies that the first approver's outcome (Approve/Reject) is considered final. <br> - Everyone must approve - This implies everyone identified as an approver must approve the request for the request to be considered approved. If one approver rejects the request, regardless of other approvers, the request is rejected. | <br> - Approval Type <br> - Title <br> - Assigned To | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Multiple per workflow |All workflows templates |
+|Start and wait for an approval |Generates approval requests and assigns the requests to individual users or Microsoft Azure Active Directory groups. The Microsoft Purview workflow approval connector currently supports two approval types: <br> - First to Respond - This implies that the first approver's outcome (Approve/Reject) is considered final. <br> - Everyone must approve - This implies everyone identified as an approver must approve the request for the request to be considered approved. If one approver rejects the request, regardless of other approvers, the request is rejected. <br> - Reminder settings - You can set reminders to periodically remind the approver until they approve or reject. <br> - Expiry settings - You can set an expiration or deadline for the approval activity. You can also set who needs to be notified (user/AAD group) after the expiry. | <br> - Approval Type <br> - Title <br> - Assigned To | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Multiple per workflow |All workflows templates |
|Update glossary term |Update an existing glossary term |None | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Multiple per workflow |Update glossary term | |When term creation request is submitted |Triggers a workflow with all term details when a new term request is submitted |None | <br> - Renamable: Yes <br> - Deletable: No <br> - Only one per workflow |Create glossary term template | |When term deletion request is submitted |Triggers a workflow with all term details when a request to delete an existing term is submitted |None | <br> - Renamable: Yes <br> - Deletable: No <br> - Only one per workflow |Delete glossary term |
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Microsoft Purview's solutions in the governance portal provide a unified data go
>[!TIP] > Looking to govern your data in Microsoft 365 by keeping what you need and deleting what you don't? Use [Microsoft Purview Data Lifecycle Management](/microsoft-365/compliance/data-lifecycle-management).
-The [Data Map](#data-map): Microsoft Purview automates data discovery by providing data scanning and classification as a service for assets across your data estate. Metadata and descriptions of discovered data assets are integrated into a holistic map of your data estate. Atop this map, there are purpose-built apps that create environments for data discovery, access management, and insights about your data landscape.
+## Data Map
+
+Microsoft Purview automates data discovery by providing data scanning and classification for assets across your data estate. Metadata and descriptions of discovered data assets are integrated into a holistic map of your data estate. Atop this map, there are purpose-built apps that create environments for data discovery, access management, and insights about your data landscape.
|App |Description |
The [Data Map](#data-map): Microsoft Purview automates data discovery by providi
|[Data Sharing](#data-sharing) | Allows you to securely share data internally or cross organizations with business partners and customers. | |[Data Policy](#data-policy) | A set of central, cloud-based experiences that help you provision access to data securely and at scale. |
-## Data Map
-Microsoft Purview Data Map provides the foundation for data discovery and effective data governance. Microsoft Purview Data Map is a cloud native PaaS service that captures metadata about enterprise data present in analytics and operation systems on-premises and cloud. Microsoft Purview Data Map is automatically kept up to date with built-in automated scanning and classification system. Business users can configure and use the Microsoft Purview Data Map through an intuitive UI and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.2 APIs.
+Microsoft Purview Data Map provides the foundation for data discovery and data governance. Microsoft Purview Data Map is a cloud-native PaaS service that captures metadata about enterprise data present in analytics and operational systems, both on-premises and in the cloud. Microsoft Purview Data Map is automatically kept up to date with a built-in automated scanning and classification system. Business users can configure and use the data map through an intuitive UI, and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.2 APIs.
Microsoft Purview Data Map powers the Microsoft Purview Data Catalog and Microsoft Purview Data Estate Insights as unified experiences within the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). For more information, see our [introduction to Data Map](concept-elastic-data-map.md).
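Because the Data Map exposes Apache Atlas 2.2-style REST endpoints, a hedged Python sketch of a programmatic call might look like the following; the account name is a placeholder and the endpoint path is an assumption based on the public Atlas API surface:

```python
# Hedged sketch: list type definitions from the Data Map's Atlas-style API.
import requests
from azure.identity import DefaultAzureCredential

account_name = "<purview-account-name>"  # placeholder
endpoint = f"https://{account_name}.purview.azure.com"

token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token
headers = {"Authorization": f"Bearer {token}"}

response = requests.get(f"{endpoint}/catalog/api/atlas/v2/types/typedefs", headers=headers)
response.raise_for_status()

typedefs = response.json()
print(len(typedefs.get("entityDefs", [])), "entity type definitions found")
```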
With the Microsoft Purview Data Catalog, business and technical users can quickl
For more information, see our [introduction to search using Data Catalog](how-to-search-catalog.md). ## Data Estate Insights+ With the Microsoft Purview Data Estate Insights, the chief data officers and other governance stakeholders can get a bird's-eye view of their data estate and can gain actionable insights into the governance gaps that can be resolved from the experience itself. For more information, see our [introduction to Data Estate Insights](concept-insights.md).
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
Azure offerings are grouped into three categories that reflect their _regional_
| [Azure Event Grid](../event-grid/overview.md) | ![An icon that signifies this service is zone-redundant](media/icon-zone-redundant.svg) | | [Azure Firewall](../firewall/deploy-availability-zone-powershell.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Firewall Manager](../firewall-manager/quick-firewall-policy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure Functions](../azure-functions/azure-functions-az-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Functions](./reliability-functions.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure HDInsight](../hdinsight/hdinsight-use-availability-zones.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure IoT Hub](../iot-hub/iot-hub-ha-dr.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Kubernetes Service (AKS)](../aks/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
reliability Migrate Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-functions.md
If you want your function app to use availability zones, redeploy your app into
The following steps describe how to enable availability zones. 1. If you're already using the Premium SKU and are in one of the [supported regions](../azure-functions/azure-functions-az-redundancy.md#regional-availability), you can move on to the next step. Otherwise, you should create a new resource group in one of the supported regions.
-1. Create a Premium plan in one of the supported regions and the resource group. Ensure the [new Premium plan has zone redundancy enabled](../azure-functions/azure-functions-az-redundancy.md#how-to-deploy-a-function-app-on-a-zone-redundant-premium-plan).
+1. Create a Premium plan in one of the supported regions and the resource group. Ensure the [new Premium plan has zone redundancy enabled](./reliability-functions.md#create-a-zone-redundant-premium-plan-and-function-app).
1. Create and deploy your function apps into the new Premium plan using your desired [deployment method](../azure-functions/functions-deployment-technologies.md). 1. After testing and enabling the new function apps, you can optionally disable or delete your previous non-availability zone apps.
The following steps describe how to enable availability zones.
> [Learn about the Azure Functions Premium plan](../azure-functions/functions-premium-plan.md) > [!div class="nextstepaction"]
-> [Learn about Azure Functions support for availability zone redundancy](../azure-functions/azure-functions-az-redundancy.md)
+> [Learn about Azure Functions support for availability zone redundancy](./reliability-functions.md)
> [!div class="nextstepaction"] > [ARM Quickstart Templates](https://azure.microsoft.com/resources/templates/)
resource-mover About Move Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/about-move-process.md
These components are used during region move.
-**Component** | **Details**
- |
-**Resource Mover** | Resource Mover coordinates with [Azure resource providers](../azure-resource-manager/management/resource-providers-and-types.md) to orchestrate the move of resources between regions. Resource Mover analyzes resource dependencies, and maintains and manages the state of resources during the move process.
-**Move collection** | A move collection is an [Azure Resource Manager](../azure-resource-manager/management/overview.md) object.<br/><br/> The move collection is created during the region move process, for each paired combination of source and target regions in a subscription. The collection contains metadata and configuration information about the resources you want to move.<br/><br/>Resources added to a move collection must be in the same subscription, but can be in different resource groups.
-**Move resource** | When you add a resource to a move collection, it's tracked by Resource Mover as a move resource.<br/><br/> Resource Mover maintains information for all of the move resources in the move collection, and maintains a one-to-one relationship between the source and target resource.
-**Dependencies** | Resource Mover validates resources that you add to a collection, and checks whether resources have any dependencies that aren't in the move collection.<br/><br/> After identifying dependencies for a resource, you can either add them dependencies to the move collection and move them too, or you can select alternate existing resources in the target region. All dependencies must be resolved before you start the move.
-
+| **Component** | **Details** |
+| | |
+| **Resource Mover** | Resource Mover coordinates with [Azure resource providers](../azure-resource-manager/management/resource-providers-and-types.md) to orchestrate the move of resources between regions. Resource Mover analyzes resource dependencies, and maintains and manages the state of resources during the move process. |
+| **Move collection** | A move collection is an [Azure Resource Manager](../azure-resource-manager/management/overview.md) object.<br/><br/> The move collection is created during the region move process, for each paired combination of source and target regions in a subscription. The collection contains metadata and configuration information about the resources you want to move.<br/><br/>Resources added to a move collection must be in the same subscription, but can be in different resource groups. |
+| **Move resource** | When you add a resource to a move collection, it's tracked by Resource Mover as a move resource.<br/><br/> Resource Mover maintains information for all of the move resources in the move collection, and maintains a one-to-one relationship between the source and target resource. |
+| **Dependencies** | Resource Mover validates resources that you add to a collection, and checks whether resources have any dependencies that aren't in the move collection.<br/><br/> After identifying dependencies for a resource, you can either add the dependencies to the move collection and move them too, or you can select alternate existing resources in the target region. All dependencies must be resolved before you start the move. |
## Move region process
If you donΓÇÖt want to move a resource, you can remove it from the move collecti
The table summarizes what's impacted when you're moving across regions.
-**Behavior** | **Across regions**
- | |
-**Data** | Resource data and metadata are moved.<br/><br/> Metadata is stored temporarily to track status of resource dependencies and operations.
-**Resource** | The source resource stays intact to ensure that apps continue to work, and can optionally be removed after the move.<br/><br/> A resource is created in the target region.
-**Move process** | Multi-step process requiring manual intervention and monitoring.
-**Testing** | Testing the move is important, since the apps should continue to work as expected in the target region, after the move.
-**Downtime** | No data loss expected, but some downtime to move resources.
--
+| **Behavior** | **Across regions** |
+| | |
+| **Data** | Resource data and metadata are moved.<br/><br/> Metadata is stored temporarily to track status of resource dependencies and operations. |
+| **Resource** | The source resource stays intact to ensure that apps continue to work, and can optionally be removed after the move.<br/><br/> A resource is created in the target region. |
+| **Move process** | Multi-step process requiring manual intervention and monitoring. |
+| **Testing** | Testing the move is important, since the apps should continue to work as expected in the target region, after the move. |
+| **Downtime** | No data loss expected, but some downtime to move resources. |
## Next steps
resource-mover Support Matrix Move Region Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-azure-vm.md
Network security group | Supported | Specify an existing resource in the target
Reserved (static) IP address | Supported | You can't currently configure this. The value defaults to the source value. <br/><br/> If the NIC on the source VM has a static IP address, and the target subnet has the same IP address available, it's assigned to the target VM.<br/><br/> If the target subnet doesn't have the same IP address available, the initiate move for the VM will fail. Dynamic IP address | Supported | You can't currently configure this. The value defaults to the source value.<br/><br/> If the NIC on the source has dynamic IP addressing, the NIC on the target VM is also dynamic by default. IP configurations | Supported | You can't currently configure this. The value defaults to the source value.
-VNET Peering | Not Retained | The VNET which is moved to the target region will not retain itΓÇÖs VNET peering configuration present in the source region. To retain the peering, it needs to do be done again manually in the target region.
+VNET Peering | Not Retained | The VNet that is moved to the target region doesn't retain the VNet peering configuration from the source region. To retain the peering, configure it again manually in the target region.
## Outbound access requirements
If you're using network security group (NSG) rules to control outbound connect
- *GuestAndHybridManagement* - We recommend you test rules in a non-production environment. [Review some examples](../site-recovery/azure-to-azure-about-networking.md#outbound-connectivity-using-service-tags). - ## Next steps Try [moving an Azure VM](tutorial-move-region-virtual-machines.md) to another region with Resource Mover.
resource-mover Support Matrix Move Region Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-sql.md
This article summarizes support and prerequisites for moving Azure SQL resources
Requirements are summarized in the following table.
-**Feature** | **Supported/Not supported** | **Details**
- | |
-**Azure SQL Database Hyperscale** | Not supported | Can't move databases in the Azure SQL Hyperscale service tier with Resource Mover.
-**Zone redundancy** | Supported | Supported move options:<br/><br/> - Between regions that support zone redundancy.<br/><br/> - Between regions that don't support zone redundancy.<br/><br/> - Between a region that supports zone redundancy to a region that doesn't support zone redundancy.<br/><br/> - Between a region that doesn't support zone redundancy to a region that does support zone redundancy.
-**Data sync** | Hub/sync database: Not supported<br/><br/> Sync member: Supported. | If a sync member is moved, you need to set up data sync to the new target database.
-**Existing geo-replication** | Supported | Existing geo replicas are remapped to the new primary in the target region.<br/><br/> Seeding must be initialized after the move. [Learn more](/azure/azure-sql/database/active-geo-replication-configure-portal)
-**Transparent Data Encryption (TDE) with Bring Your Own Key (BYOK)** | Supported | [Learn more](../key-vault/general/move-region.md) about moving key vaults across regions.
-**TDE with service-managed key** | Supported. | [Learn more](../key-vault/general/move-region.md) about moving key vaults across regions.
-**Dynamic data masking rules** | Supported. | Rules are automatically copied over to the target region as part of the move. [Learn more](/azure/azure-sql/database/dynamic-data-masking-configure-portal).
-**Advanced data security** | Not supported. | Workaround: Set up at the SQL Server level in the target region. [Learn more](/azure/azure-sql/database/azure-defender-for-sql).
-**Firewall rules** | Not supported. | Workaround: Set up firewall rules for SQL Server in the target region. Database-level firewall rules are copied from the source server to the target server. [Learn more](/azure/azure-sql/database/firewall-create-server-level-portal-quickstart).
-**Auditing policies** | Not supported. | Policies will reset to default after the move. [Learn](/azure/azure-sql/database/auditing-overview) how to reset.
-**Backup retention** | Supported. | Backup retention policies for the source database are carried over to the target database. [Learn](/azure/azure-sql/database/long-term-backup-retention-configure) how to modify settings after the move.
-**Auto tuning** | Not supported. | Workaround: Set auto tuning settings after the move. [Learn more](/azure/azure-sql/database/automatic-tuning-enable).
-**Database alerts** | Not supported. | Workaround: Set alerts after the move. [Learn more](/azure/azure-sql/database/alerts-insights-configure-portal).
-**Azure SQL Server stretch database** | Not Supported | Can't move SQL server stretch databases with Resource Mover.
+| **Feature** | **Supported/Not supported** | **Details**|
+| | | |
+| **Azure SQL Database Hyperscale** | Not supported | Can't move databases in the Azure SQL Hyperscale service tier with Resource Mover.|
+| **Zone redundancy** | Supported | Supported move options:<br/><br/> - Between regions that support zone redundancy.<br/><br/> - Between regions that don't support zone redundancy.<br/><br/> - From a region that supports zone redundancy to a region that doesn't support zone redundancy.<br/><br/> - From a region that doesn't support zone redundancy to a region that does support zone redundancy. |
+| **Data sync** | Hub/sync database: Not supported<br/><br/> Sync member: Supported. | If a sync member is moved, you need to set up data sync to the new target database.|
+| **Existing geo-replication** | Supported | Existing geo replicas are remapped to the new primary in the target region.<br/><br/> Seeding must be initialized after the move. [Learn more](/azure/azure-sql/database/active-geo-replication-configure-portal). |
+| **Transparent Data Encryption (TDE) with Bring Your Own Key (BYOK)** | Supported | [Learn more](../key-vault/general/move-region.md) about moving key vaults across regions. |
+| **TDE with service-managed key** | Supported. | [Learn more](../key-vault/general/move-region.md) about moving key vaults across regions.|
+| **Dynamic data masking rules** | Supported. | Rules are automatically copied over to the target region as part of the move. [Learn more](/azure/azure-sql/database/dynamic-data-masking-configure-portal). |
+| **Advanced data security** | Not supported. | Workaround: Set up at the SQL Server level in the target region. [Learn more](/azure/azure-sql/database/azure-defender-for-sql). |
+| **Firewall rules** | Not supported. | Workaround: Set up firewall rules for SQL Server in the target region. Database-level firewall rules are copied from the source server to the target server. [Learn more](/azure/azure-sql/database/firewall-create-server-level-portal-quickstart). |
+| **Auditing policies** | Not supported. | Policies will reset to default after the move. [Learn](/azure/azure-sql/database/auditing-overview) how to reset. |
+| **Backup retention** | Supported. | Backup retention policies for the source database are carried over to the target database. [Learn](/azure/azure-sql/database/long-term-backup-retention-configure) how to modify settings after the move. |
+| **Auto tuning** | Not supported. | Workaround: Set auto tuning settings after the move. [Learn more](/azure/azure-sql/database/automatic-tuning-enable). |
+| **Database alerts** | Not supported. | Workaround: Set alerts after the move. [Learn more](/azure/azure-sql/database/alerts-insights-configure-portal). |
+| **Azure SQL Server stretch database** | Not Supported | Can't move SQL Server stretch databases with Resource Mover. |
**Azure Synapse Analytics** | Not Supported | Can't move Azure Synapse Analytics with Resource Mover.+ ## Next steps Try moving [Azure SQL resources](tutorial-move-region-sql.md) to another region with Resource Mover.
search Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-dotnet.md
Code samples from the Cognitive Search team demonstrate features and workflows.
| Samples | Article | ||-| | [quickstart](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/quickstart) | Source code for [Quickstart: Create a search index ](search-get-started-dotnet.md). Covers the basic workflow for creating, loading, and querying a search index using sample data. |
-| [search-website](https://github.com/azure-samples/azure-search-dotnet-samples/tree/master/search-website) | Source code for [Tutorial: Add search to web apps](tutorial-csharp-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.|
+| [search-website](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/search-website-functions-v4) | Source code for [Tutorial: Add search to web apps](tutorial-csharp-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.|
| [DotNetHowTo](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo) | Source code for [How to use the .NET client library](search-howto-dotnet-sdk.md). Steps through the basic workflow, but in more detail and discussion of API usage. | | [DotNetHowToSynonyms](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToSynonyms) | Source code for [Example: Add synonyms in C#](search-synonyms-tutorial-sdk.md). Synonym lists are used for query expansion, providing matchable terms that are external to an index. | | [DotNetToIndexers](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToIndexers) | Source code for [Tutorial: Index Azure SQL data using the .NET SDK](search-indexer-tutorial.md). This article shows how to configure an Azure SQL indexer that has a schedule, field mappings, and parameters. |
search Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-javascript.md
Code samples from the Cognitive Search team demonstrate features and workflows.
| Samples | Article | ||| | [quickstart](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/master/quickstart/v11) | Source code for [Quickstart: Create a search index in JavaScript](search-get-started-javascript.md). Covers the basic workflow for creating, loading, and querying a search index using sample data. |
-| [search-website](https://github.com/azure-samples/azure-search-javascript-samples/tree/master/search-website) | Source code for [Tutorial: Add search to web apps](tutorial-javascript-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.|
+| [search-website](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/search-website-functions-v4) | Source code for [Tutorial: Add search to web apps](tutorial-javascript-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.|
> [!Tip] > Try the [Samples browser](/samples/browse/?languages=javascript&products=azure-cognitive-search) to search for Microsoft code samples in GitHub, filtered by product, service, and language.
search Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-python.md
Code samples from the Cognitive Search team demonstrate features and workflows.
| Samples | Article | ||| | [quickstart](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Quickstart) | Source code for [Quickstart: Create a search index in Python](search-get-started-python.md). This article covers the basic workflow for creating, loading, and querying a search index using sample data. |
-| [search-website](https://github.com/azure-samples/azure-search-python-samples/tree/master/search-website) | Source code for [Tutorial: Add search to web apps](tutorial-python-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.|
+| [search-website](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/search-website-functions-v4) | Source code for [Tutorial: Add search to web apps](tutorial-python-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.|
| [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Tutorial-AI-Enrichment) | Source code for [Tutorial: Use Python and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob-python.md). This article shows how to create a blob indexer with a cognitive skillset, where the skillset creates and transforms raw content to make it searchable or consumable. | | [AzureML-Custom-Skill](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/AzureML-Custom-Skill) | Source code for [Example: Create a custom skill using Python](cognitive-search-custom-skill-python.md). This article demonstrates indexer and skillset integration with deep learning models in Azure Machine Learning. |
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md
You'll need a search client that supports preview APIs on the query request. Her
+ [Search explorer](search-explorer.md) in Azure portal, recommended for initial exploration.
-+ [Postman Desktop App](https://www.postman.com/downloads/) using the [2021-04-30-Preview REST APIs](/rest/api/searchservice/preview-api/). See this [Quickstart](search-get-started-rest.md) for help with setting up your requests.
++ [Postman Desktop App](https://www.postman.com/downloads/) using the [2021-04-30-Preview REST APIs](/rest/api/searchservice/preview-api/search-documents). See this [Quickstart](search-get-started-rest.md) for help with setting up your requests. + [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5) in the Azure SDK for .NET Preview.
security Secure Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-develop.md
Perform DAST, preferably with the assistance of a security professional (a [pene
### Perform fuzz testing
-In [fuzz testing](https://cloudblogs.microsoft.com/microsoftsecure/2007/09/20/fuzz-testing-at-microsoft-and-the-triage-process/), you induce program failure by deliberately introducing malformed or random data to an application. Inducing program failure helps reveal potential security issues before the application is released.
+In [fuzz testing](https://www.microsoft.com/security/blog/2007/09/20/fuzz-testing-at-microsoft-and-the-triage-process/), you induce program failure by deliberately introducing malformed or random data to an application. Inducing program failure helps reveal potential security issues before the application is released.
[Security Risk Detection](https://www.microsoft.com/en-us/security-risk-detection/) is the Microsoft unique fuzz testing service for finding security-critical bugs in software.
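To make the idea concrete, here's a minimal, self-contained fuzzing sketch (an illustration only, not the Microsoft service): it randomly mutates a valid seed input and feeds it to `System.Text.Json`, flagging any failure mode other than the parser's documented `JsonException`.

```csharp
// Minimal illustrative fuzz harness: mutate a valid seed input at random and feed it
// to a parser, treating any exception other than JsonException as a potential bug.
using System;
using System.Text;
using System.Text.Json;

class MiniFuzzer
{
    static void Main()
    {
        var rng = new Random(12345); // fixed seed so runs are reproducible
        byte[] seed = Encoding.UTF8.GetBytes("{\"name\":\"test\",\"count\":42}");

        for (int i = 0; i < 10_000; i++)
        {
            byte[] input = (byte[])seed.Clone();
            int flips = rng.Next(1, 5);
            for (int f = 0; f < flips; f++)          // flip a few random bytes
                input[rng.Next(input.Length)] = (byte)rng.Next(256);

            try
            {
                using var doc = JsonDocument.Parse(input); // target under test
            }
            catch (JsonException)
            {
                // Expected: malformed input rejected gracefully.
            }
            catch (Exception ex)
            {
                // Unexpected failure mode: record the input for triage.
                Console.WriteLine($"Iteration {i}: {ex.GetType().Name} on {Convert.ToBase64String(input)}");
            }
        }
    }
}
```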
security Threat Modeling Tool Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-authentication.md
na
Last updated 02/07/2017 -+ # Security Frame: Authentication | Mitigations
| **Machine Trust Boundary** | <ul><li>[Ensure that deployed application's binaries are digitally signed](#binaries-signed)</li></ul> | | **WCF** | <ul><li>[Enable authentication when connecting to MSMQ queues in WCF](#msmq-queues)</li><li>[WCF-Do not set Message clientCredentialType to none](#message-none)</li><li>[WCF-Do not set Transport clientCredentialType to none](#transport-none)</li></ul> | | **Web API** | <ul><li>[Ensure that standard authentication techniques are used to secure Web APIs](#authn-secure-api)</li></ul> |
-| **Azure AD** | <ul><li>[Use standard authentication scenarios supported by Azure Active Directory](#authn-aad)</li><li>[Override the default ADAL token cache with a scalable alternative](#adal-scalable)</li><li>[Ensure that TokenReplayCache is used to prevent the replay of ADAL authentication tokens](#tokenreplaycache-adal)</li><li>[Use ADAL libraries to manage token requests from OAuth2 clients to AAD (or on-premises AD)](#adal-oauth2)</li></ul> |
+| **Azure AD** | <ul><li>[Use standard authentication scenarios supported by Azure Active Directory](#authn-aad)</li><li>[Override the default MSAL token cache with a distributed cache](#msal-distributed-cache)</li><li>[Ensure that TokenReplayCache is used to prevent the replay of inbound authentication tokens](#tokenreplaycache-msal)</li><li>[Use MSAL libraries to manage token requests from OAuth2 clients to AAD (or on-premises AD)](#msal-oauth2)</li></ul> |
| **IoT Field Gateway** | <ul><li>[Authenticate devices connecting to the Field Gateway](#authn-devices-field)</li></ul> | | **IoT Cloud Gateway** | <ul><li>[Ensure that devices connecting to Cloud gateway are authenticated](#authn-devices-cloud)</li><li>[Use per-device authentication credentials](#authn-cred)</li></ul> | | **Azure Storage** | <ul><li>[Ensure that only the required containers and blobs are given anonymous read access](#req-containers-anon)</li><li>[Grant limited access to objects in Azure storage using SAS or SAP](#limited-access-sas)</li></ul> |
The `<netMsmqBinding/>` element of the WCF configuration file below instructs WC
| **References** | [Authentication Scenarios for Azure AD](../../active-directory/develop/authentication-vs-authorization.md), [Azure Active Directory Code Samples](../../active-directory/azuread-dev/sample-v1-code.md), [Azure Active Directory developer's guide](../../active-directory/develop/index.yml) | | **Steps** | <p>Azure Active Directory (Azure AD) simplifies authentication for developers by providing identity as a service, with support for industry-standard protocols such as OAuth 2.0 and OpenID Connect. Below are the five primary application scenarios supported by Azure AD:</p><ul><li>Web Browser to Web Application: A user needs to sign in to a web application that is secured by Azure AD</li><li>Single Page Application (SPA): A user needs to sign in to a single page application that is secured by Azure AD</li><li>Native Application to Web API: A native application that runs on a phone, tablet, or PC needs to authenticate a user to get resources from a web API that is secured by Azure AD</li><li>Web Application to Web API: A web application needs to get resources from a web API secured by Azure AD</li><li>Daemon or Server Application to Web API: A daemon application or a server application with no web user interface needs to get resources from a web API secured by Azure AD</li></ul><p>Please refer to the links in the references section for low-level implementation details</p>|
-## <a id="adal-scalable"></a>Override the default ADAL token cache with a scalable alternative
+## <a id="msal-distributed-cache"></a>Override the default MSAL token cache with a distributed cache
| Title | Details | | -- | |
The `<netMsmqBinding/>` element of the WCF configuration file below instructs WC
| **SDL Phase** | Build | | **Applicable Technologies** | Generic | | **Attributes** | N/A |
-| **References** | [Modern Authentication with Azure Active Directory for Web Applications](/archive/blogs/microsoft_press/new-book-modern-authentication-with-azure-active-directory-for-web-applications), [Using Redis as ADAL token cache](https://blogs.msdn.microsoft.com/mrochon/2016/09/19/using-redis-as-adal-token-cache/) |
-| **Steps** | <p>The default cache that ADAL (Active Directory Authentication Library) uses is an in-memory cache that relies on a static store, available process-wide. While this works for native applications, it does not scale for mid tier and backend applications for the following reasons:</p><ul><li>These applications are accessed by many users at once. Saving all access tokens in the same store creates isolation issues and presents challenges when operating at scale: many users, each with as many tokens as the resources the app accesses on their behalf, can mean huge numbers and very expensive lookup operations</li><li>These applications are typically deployed on distributed topologies, where multiple nodes must have access to the same cache</li><li>Cached tokens must survive process recycles and deactivations</li></ul><p>For all the above reasons, while implementing web apps, it is recommended to override the default ADAL token cache with a scalable alternative such as Azure Cache for Redis.</p>|
+| **References** | [Token cache serialization in MSAL.NET](../../active-directory/develop/msal-net-token-cache-serialization.md) |
+| **Steps** | <p>The default cache that MSAL (Microsoft Authentication Library) uses is an in-memory cache, and it's scalable. However, there are alternatives available, such as a distributed token cache. Distributed caches use an L1/L2 mechanism, where L1 is in memory and L2 is the distributed cache implementation. They can be configured to limit L1 memory, encrypt the cached tokens, or set eviction policies. Other alternatives include Redis, SQL Server, or Azure Cosmos DB caches. An implementation of a distributed token cache can be found in the following [Tutorial: Get started with ASP.NET Core MVC](https://learn.microsoft.com/aspnet/core/tutorials/first-mvc-app/start-mvc.md).</p>|
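For example, in an ASP.NET Core web app that uses Microsoft.Identity.Web (which builds on MSAL.NET), a distributed token cache backed by Azure Cache for Redis might be wired up roughly as below. This is a sketch: the `Redis` connection-string key, scope, instance name, and L1 size limit are illustrative assumptions rather than required values.

```csharp
// Illustrative ASP.NET Core Program.cs fragment using Microsoft.Identity.Web,
// Microsoft.Identity.Web.TokenCacheProviders.Distributed, and
// Microsoft.Extensions.Caching.StackExchangeRedis. Keys and limits are assumptions.
using Microsoft.Identity.Web;
using Microsoft.Identity.Web.TokenCacheProviders.Distributed;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMicrosoftIdentityWebAppAuthentication(builder.Configuration)
    .EnableTokenAcquisitionToCallDownstreamApi(new[] { "user.read" })
    .AddDistributedTokenCaches();                 // use the distributed (L2) token cache

// Back the L2 cache with Azure Cache for Redis instead of the default in-memory store.
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration.GetConnectionString("Redis"); // assumed key
    options.InstanceName = "MsalTokenCache";
});

// Optionally tune the in-memory L1 layer and encrypt cached tokens at rest.
builder.Services.Configure<MsalDistributedTokenCacheAdapterOptions>(options =>
{
    options.L1CacheOptions.SizeLimit = 10 * 1024 * 1024; // cap L1 at ~10 MB
    options.Encrypt = true;
});

var app = builder.Build();
app.Run();
```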
-## <a id="tokenreplaycache-adal"></a>Ensure that TokenReplayCache is used to prevent the replay of ADAL authentication tokens
+## <a id="tokenreplaycache-msal"></a>Ensure that TokenReplayCache is used to prevent the replay of MSAL authentication tokens
| Title | Details | | -- | |
The `<netMsmqBinding/>` element of the WCF configuration file below instructs WC
### Example ```csharp
-// ITokenReplayCache defined in ADAL
+// ITokenReplayCache defined in Microsoft.IdentityModel.Tokens
public interface ITokenReplayCache { bool TryAdd(string securityToken, DateTime expiresOn);
OpenIdConnectOptions openIdConnectOptions = new OpenIdConnectOptions
Please note that to test the effectiveness of this configuration, login into your local OIDC-protected application and capture the request to `"/signin-oidc"` endpoint in fiddler. When the protection is not in place, replaying this request in fiddler will set a new session cookie. When the request is replayed after the TokenReplayCache protection is added, the application will throw an exception as follows: `SecurityTokenReplayDetectedException: IDX10228: The securityToken has previously been validated, securityToken: 'eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik1uQ19WWmNBVGZNNXBPWWlKSE1iYTlnb0VLWSIsImtpZCI6Ik1uQ1......`
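As a rough sketch of where such a cache plugs into the ASP.NET Core OpenID Connect middleware, the replay cache is assigned on the token validation parameters. `MyTokenReplayCache` below is a hypothetical class that implements the `ITokenReplayCache` interface shown above, and the snippet is assumed to live inside the service-registration code of your app.

```csharp
// Illustrative wiring only. MyTokenReplayCache is a hypothetical implementation of
// ITokenReplayCache (for example, backed by a distributed cache with token expiry).
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Extensions.DependencyInjection;

services.Configure<OpenIdConnectOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    // Reject any inbound token that has already been seen within its validity window.
    options.TokenValidationParameters.ValidateTokenReplay = true;
    options.TokenValidationParameters.TokenReplayCache = new MyTokenReplayCache();
});
```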
-## <a id="adal-oauth2"></a>Use ADAL libraries to manage token requests from OAuth2 clients to AAD (or on-premises AD)
+## <a id="msal-oauth2"></a>Use MSAL libraries to manage token requests from OAuth2 clients to AAD (or on-premises AD)
| Title | Details | | -- | |
Please note that to test the effectiveness of this configuration, login into you
| **SDL Phase** | Build | | **Applicable Technologies** | Generic | | **Attributes** | N/A |
-| **References** | [ADAL](../../active-directory/azuread-dev/active-directory-authentication-libraries.md) |
-| **Steps** | <p>The Azure AD authentication Library (ADAL) enables client application developers to easily authenticate users to cloud or on-premises Active Directory (AD), and then obtain access tokens for securing API calls.</p><p>ADAL has many features that make authentication easier for developers, such as asynchronous support, a configurable token cache that stores access tokens and refresh tokens, automatic token refresh when an access token expires and a refresh token is available, and more.</p><p>By handling most of the complexity, ADAL can help a developer focus on business logic in their application and easily secure resources without being an expert on security. Separate libraries are available for .NET, JavaScript (client and Node.js), Python, iOS, Android and Java.</p>|
+| **References** | [MSAL](../../active-directory/develop/msal-overview.md) |
+| **Steps** | <p>The Microsoft Authentication Library (MSAL) enables developers to acquire security tokens from the Microsoft identity platform to authenticate users and access secured web APIs. It can be used to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. MSAL supports many different application architectures and platforms including .NET, JavaScript, Java, Python, Android, and iOS.
+
+MSAL gives you many ways to get tokens, with a consistent API across many platforms. There's no need to use the OAuth libraries directly or code against the protocol in your application, and you can acquire tokens on behalf of a user or an application (when applicable to the platform).
+
+MSAL also maintains a token cache and refreshes tokens for you when they're close to expiring. MSAL can also help you specify which audience you want your application to sign in to, help you set up your application from configuration files, and help you troubleshoot your app.</p>|
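As a minimal sketch of the pattern, the following MSAL.NET (`Microsoft.Identity.Client`) console snippet shows a confidential client (daemon) acquiring an app-only token; the client ID, tenant ID, secret, and Graph scope are placeholders, not values from this article.

```csharp
// Minimal MSAL.NET sketch: a confidential client (daemon) acquires an app-only token.
// All identifiers below are placeholders.
using System;
using Microsoft.Identity.Client;

var app = ConfidentialClientApplicationBuilder
    .Create("<CLIENT-ID>")
    .WithClientSecret("<CLIENT-SECRET>")
    .WithAuthority(AzureCloudInstance.AzurePublic, "<TENANT-ID>")
    .Build();

// MSAL consults its token cache first and handles refresh near expiry.
AuthenticationResult result = await app
    .AcquireTokenForClient(new[] { "https://graph.microsoft.com/.default" })
    .ExecuteAsync();

Console.WriteLine($"Access token expires on: {result.ExpiresOn}");
```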
## <a id="authn-devices-field"></a>Authenticate devices connecting to the Field Gateway
await deviceClient.SendEventAsync(message);
| **Applicable Technologies** | Generic | | **Attributes** | N/A | | **References** | [Shared Access Signatures, Part 1: Understanding the SAS model](../../storage/common/storage-sas-overview.md), [Shared Access Signatures, Part 2: Create and use a SAS with Blob storage](../../storage/common/storage-sas-overview.md), [How to delegate access to objects in your account using Shared Access Signatures and Stored Access Policies](../../storage/blobs/security-recommendations.md#identity-and-access-management) |
-| **Steps** | <p>Using a shared access signature (SAS) is a powerful way to grant limited access to objects in a storage account to other clients, without having to expose account access key. The SAS is a URI that encompasses in its query parameters all of the information necessary for authenticated access to a storage resource. To access storage resources with the SAS, the client only needs to pass in the SAS to the appropriate constructor or method.</p><p>You can use a SAS when you want to provide access to resources in your storage account to a client that can't be trusted with the account key. Your storage account keys include both a primary and secondary key, both of which grant administrative access to your account and all of the resources in it. Exposing either of your account keys opens your account to the possibility of malicious or negligent use. Shared access signatures provide a safe alternative that allows other clients to read, write, and delete data in your storage account according to the permissions you've granted, and without need for the account key.</p><p>If you have a logical set of parameters that are similar each time, using a Stored Access Policy (SAP) is a better idea. Because using a SAS derived from a Stored Access Policy gives you the ability to revoke that SAS immediately, it is the recommended best practice to always use Stored Access Policies when possible.</p>|
+| **Steps** | <p>Using a shared access signature (SAS) is a powerful way to grant limited access to objects in a storage account to other clients, without having to expose account access key. The SAS is a URI that encompasses in its query parameters all of the information necessary for authenticated access to a storage resource. To access storage resources with the SAS, the client only needs to pass in the SAS to the appropriate constructor or method.</p><p>You can use a SAS when you want to provide access to resources in your storage account to a client that can't be trusted with the account key. Your storage account keys include both a primary and secondary key, both of which grant administrative access to your account and all of the resources in it. Exposing either of your account keys opens your account to the possibility of malicious or negligent use. Shared access signatures provide a safe alternative that allows other clients to read, write, and delete data in your storage account according to the permissions you've granted, and without need for the account key.</p><p>If you have a logical set of parameters that are similar each time, using a Stored Access Policy (SAP) is a better idea. Because using a SAS derived from a Stored Access Policy gives you the ability to revoke that SAS immediately, it is the recommended best practice to always use Stored Access Policies when possible.</p>|
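For illustration, here's a hedged sketch with the Azure.Storage.Blobs v12 SDK that issues a short-lived, read-only SAS for a single blob; the account, container, and blob names are placeholders. To tie the SAS to a stored access policy instead, set the builder's `Identifier` to the policy name.

```csharp
// Hedged sketch: issue a short-lived, read-only SAS for one blob instead of sharing
// the account key. Account, container, and blob names are placeholders.
using System;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

var credential = new StorageSharedKeyCredential("<ACCOUNT-NAME>", "<ACCOUNT-KEY>");
var blobClient = new BlobClient(
    new Uri("https://<ACCOUNT-NAME>.blob.core.windows.net/<CONTAINER>/<BLOB>"), credential);

var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "<CONTAINER>",
    BlobName = "<BLOB>",
    Resource = "b",                                   // "b" = an individual blob
    ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(15)  // keep the lifetime short
    // Identifier = "<STORED-ACCESS-POLICY-NAME>"     // optional: bind to a stored access policy
};
sasBuilder.SetPermissions(BlobSasPermissions.Read);   // grant only what the client needs

Uri sasUri = blobClient.GenerateSasUri(sasBuilder);
Console.WriteLine(sasUri);
```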
security Threat Modeling Tool Session Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-session-management.md
na
Last updated 02/07/2017 -+ # Security Frame: Session Management | Product/Service | Article | | | - |
-| **Azure AD** | <ul><li>[Implement proper logout using ADAL methods when using Azure AD](#logout-adal)</li></ul> |
+| **Azure AD** | <ul><li>[Implement proper sign-out using MSAL methods when using Azure AD](#logout-msal)</li></ul> |
| **IoT Device** | <ul><li>[Use finite lifetimes for generated SaS tokens](#finite-tokens)</li></ul> | | **Azure Document DB** | <ul><li>[Use minimum token lifetimes for generated Resource tokens](#resource-tokens)</li></ul> | | **ADFS** | <ul><li>[Implement proper logout using WsFederation methods when using ADFS](#wsfederation-logout)</li></ul> |
| **Web Application** | <ul><li>[Applications available over HTTPS must use secure cookies](#https-secure-cookies)</li><li>[All http based application should specify http only for cookie definition](#cookie-definition)</li><li>[Mitigate against Cross-Site Request Forgery (CSRF) attacks on ASP.NET web pages](#csrf-asp)</li><li>[Set up session for inactivity lifetime](#inactivity-lifetime)</li><li>[Implement proper logout from the application](#proper-app-logout)</li></ul> | | **Web API** | <ul><li>[Mitigate against Cross-Site Request Forgery (CSRF) attacks on ASP.NET Web APIs](#csrf-api)</li></ul> |
-## <a id="logout-adal"></a>Implement proper logout using ADAL methods when using Azure AD
+## <a id="logout-msal"></a>Implement proper sign-out using MSAL methods when using Azure AD
| Title | Details | | -- | |
| **SDL Phase** | Build | | **Applicable Technologies** | Generic | | **Attributes** | N/A |
-| **References** | N/A |
-| **Steps** | If the application relies on access token issued by Azure AD, the logout event handler should call |
-
-### Example
-```csharp
-HttpContext.GetOwinContext().Authentication.SignOut(OpenIdConnectAuthenticationDefaults.AuthenticationType, CookieAuthenticationDefaults.AuthenticationType)
-```
+| **References** | [Enable your Web app to sign-in users using the Microsoft Identity Platform](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-6-SignOut) |
+| **Steps** | The ASP.NET Core OpenIdConnect middleware enables your app to intercept the call to the Microsoft identity platform logout endpoint by providing an OpenIdConnect event named `OnRedirectToIdentityProviderForSignOut` |
### Example
-It should also destroy user's session by calling Session.Abandon() method. Following method shows secure implementation of user logout:
```csharp
- [HttpPost]
- [ValidateAntiForgeryToken]
- public void LogOff()
- {
- string userObjectID = ClaimsPrincipal.Current.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier").Value;
- AuthenticationContext authContext = new AuthenticationContext(Authority + TenantId, new NaiveSessionCache(userObjectID));
- authContext.TokenCache.Clear();
- Session.Clear();
- Session.Abandon();
- Response.SetCookie(new HttpCookie("ASP.NET_SessionId", string.Empty));
- HttpContext.GetOwinContext().Authentication.SignOut(
- OpenIdConnectAuthenticationDefaults.AuthenticationType,
- CookieAuthenticationDefaults.AuthenticationType);
- }
+services.Configure<OpenIdConnectOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
+{
+ options.Events.OnRedirectToIdentityProviderForSignOut = async context =>
+ {
+ //Your logic here
+ };
+});
``` ## <a id="finite-tokens"></a>Use finite lifetimes for generated SaS tokens
security Data Encryption Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/data-encryption-best-practices.md
Because the vast majority of attacks target the end user, the endpoint becomes o
## Protect data at rest
-[Data encryption at rest](https://cloudblogs.microsoft.com/microsoftsecure/2015/09/10/cloud-security-controls-series-encrypting-data-at-rest/) is a mandatory step toward data privacy, compliance, and data sovereignty.
+[Data encryption at rest](https://www.microsoft.com/security/blog/2015/09/10/cloud-security-controls-series-encrypting-data-at-rest/) is a mandatory step toward data privacy, compliance, and data sovereignty.
**Best practice**: Apply disk encryption to help safeguard your data. **Detail**: Use [Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](../../virtual-machines/linux/disk-encryption-overview.md). Disk Encryption combines the industry-standard Linux dm-crypt or Windows BitLocker feature to provide volume encryption for the OS and the data disks.
See [Azure security best practices and patterns](best-practices-and-patterns.md)
The following resources are available to provide more general information about Azure security and related Microsoft * [Azure Security Team Blog](/archive/blogs/azuresecurity/) - for up to date information on the latest in Azure Security
-* [Microsoft Security Response Center](https://technet.microsoft.com/library/dn440717.aspx) - where Microsoft security vulnerabilities, including issues with Azure, can be reported or via email to secure@microsoft.com
+* [Microsoft Security Response Center](https://technet.microsoft.com/library/dn440717.aspx) - where Microsoft security vulnerabilities, including issues with Azure, can be reported or via email to secure@microsoft.com
security Pen Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/pen-testing.md
As of June 15, 2017, Microsoft no longer requires pre-approval to conduct a pene
Standard tests you can perform include: * Tests on your endpoints to uncover the [Open Web Application Security Project (OWASP) top 10 vulnerabilities](https://owasp.org/www-project-top-ten/)
-* [Fuzz testing](https://cloudblogs.microsoft.com/microsoftsecure/2007/09/20/fuzz-testing-at-microsoft-and-the-triage-process/) of your endpoints
+* [Fuzz testing](https://www.microsoft.com/security/blog/2007/09/20/fuzz-testing-at-microsoft-and-the-triage-process/) of your endpoints
* [Port scanning](https://en.wikipedia.org/wiki/Port_scanner) of your endpoints One type of pen test that you canΓÇÖt perform is any kind of [Denial of Service (DoS)](https://en.wikipedia.org/wiki/Denial-of-service_attack) attack. This test includes initiating a DoS attack itself, or performing related tests that might determine, demonstrate, or simulate any type of DoS attack.
service-bus-messaging Service Bus Nodejs How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-nodejs-how-to-use-queues.md
Title: Get started with Azure Service Bus queues (JavaScript)
description: This tutorial shows you how to send messages to and receive messages from Azure Service Bus queues using the JavaScript programming language. Previously updated : 02/16/2022 Last updated : 11/17/2022 ms.devlang: javascript
> * [JavaScript](service-bus-nodejs-how-to-use-queues.md) > * [Python](service-bus-python-how-to-use-queues.md)
-In this tutorial, you learn how to use the [@azure/service-bus](https://www.npmjs.com/package/@azure/service-bus) package in a JavaScript program to send messages to and receive messages from a Service Bus queue.
+In this tutorial, you complete the following steps:
+
+1. Create a Service Bus namespace, using the Azure portal.
+2. Create a Service Bus queue, using the Azure portal.
+3. Write a JavaScript application to use the [@azure/service-bus](https://www.npmjs.com/package/@azure/service-bus) package to:
+ 1. Send a set of messages to the queue.
+ 1. Receive those messages from the queue.
> [!NOTE] > This quick start provides step-by-step instructions for a simple scenario of sending messages to a Service Bus queue and receiving them. You can find pre-built JavaScript and TypeScript samples for Azure Service Bus in the [Azure SDK for JavaScript repository on GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/servicebus/service-bus/samples/v7). ## Prerequisites+
+If you're new to the service, see [Service Bus overview](service-bus-messaging-overview.md) before you do this quickstart.
+ - An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/?WT.mc_id=A85619ABF) or sign-up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).-- If you don't have a queue to work with, follow steps in the [Use Azure portal to create a Service Bus queue](service-bus-quickstart-portal.md) article to create a queue. Note down the **connection string** for your Service Bus namespace and the name of the **queue** you created.
+- [Node.js LTS](https://nodejs.org/en/download/)
+- If you don't have a queue to work with, follow steps in the [Use Azure portal to create a Service Bus queue](service-bus-quickstart-portal.md) article to create a queue.
+
+### [Passwordless](#tab/passwordless)
+
+To use this quickstart with your own Azure account, you need:
+* Install [Azure CLI](/cli/azure/install-azure-cli), which provides the passwordless authentication to your developer machine.
+* Sign in with your Azure account at the terminal or command prompt with `az login`.
+* Use the same account when you add the appropriate data role to your resource.
+* Run the code in the same terminal or command prompt.
+* Note down your **queue** name for your Service Bus namespace. You'll need that in the code.
+
+### [Connection string](#tab/connection-string)
+
+Note down the following, which you'll use in the code below:
+* Service Bus namespace **connection string**
+* Service Bus namespace **queue** you created
++ > [!NOTE] > - This tutorial works with samples that you can copy and run using [Nodejs](https://nodejs.org/). For instructions on how to create a Node.js application, see [Create and deploy a Node.js application to an Azure Website](../app-service/quickstart-nodejs.md), or [Node.js cloud service using Windows PowerShell](../cloud-services/cloud-services-nodejs-develop-deploy-app.md).
-### Use Node Package Manager (NPM) to install the package
-To install the npm package for Service Bus, open a command prompt that has `npm` in its path, change the directory to the folder where you want to have your samples and then run this command.
-```bash
-npm install @azure/service-bus
-```
+++
+## Use Node Package Manager (NPM) to install the package
+
+### [Passwordless](#tab/passwordless)
+
+1. To install the required npm packages for Service Bus, open a command prompt that has `npm` in its path, and change the directory to the folder where you want to have your samples.
+
+1. Install the following packages:
+
+ ```bash
+ npm install @azure/service-bus @azure/identity
+ ```
+
+### [Connection string](#tab/connection-string)
+
+1. To install the required npm package for Service Bus, open a command prompt that has `npm` in its path, and change the directory to the folder where you want to have your samples.
+
+1. Install the following package:
+
+ ```bash
+ npm install @azure/service-bus
+ ```
++ ## Send messages to a queue
-The following sample code shows you how to send a message to a queue.
+
+The following sample code shows you how to send a message to a queue.
+
+### [Passwordless](#tab/passwordless)
+
+You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
+
+1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/).
+1. Create a file called `send.js` and paste the below code into it. This code sends the names of scientists as messages to your queue.
+
+ The passwordless credential is provided with the [**DefaultAzureCredential**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/identity/identity#defaultazurecredential).
+
+ ```javascript
+ const { ServiceBusClient } = require("@azure/service-bus");
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ // Replace `<SERVICE-BUS-NAMESPACE>` with your namespace
+ const fullyQualifiedNamespace = "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net";
+
+ // Passwordless credential
+ const credential = new DefaultAzureCredential();
+
+ // name of the queue
+ const queueName = "<QUEUE NAME>"
+
+ const messages = [
+ { body: "Albert Einstein" },
+ { body: "Werner Heisenberg" },
+ { body: "Marie Curie" },
+ { body: "Steven Hawking" },
+ { body: "Isaac Newton" },
+ { body: "Niels Bohr" },
+ { body: "Michael Faraday" },
+ { body: "Galileo Galilei" },
+ { body: "Johannes Kepler" },
+ { body: "Nikolaus Kopernikus" }
+ ];
+
+ async function main() {
+ // create a Service Bus client using the passwordless authentication to the Service Bus namespace
+ const sbClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
+
+ // createSender() can also be used to create a sender for a topic.
+ const sender = sbClient.createSender(queueName);
+
+ try {
+ // Tries to send all messages in a single batch.
+ // Will fail if the messages cannot fit in a batch.
+ // await sender.sendMessages(messages);
+
+ // create a batch object
+ let batch = await sender.createMessageBatch();
+ for (let i = 0; i < messages.length; i++) {
+ // for each message in the array
+
+ // try to add the message to the batch
+ if (!batch.tryAddMessage(messages[i])) {
+ // if it fails to add the message to the current batch
+ // send the current batch as it is full
+ await sender.sendMessages(batch);
+
+ // then, create a new batch
+ batch = await sender.createMessageBatch();
+
+ // now, add the message failed to be added to the previous batch to this batch
+ if (!batch.tryAddMessage(messages[i])) {
+ // if it still can't be added to the batch, the message is probably too big to fit in a batch
+ throw new Error("Message too big to fit in a batch");
+ }
+ }
+ }
+
+ // Send the last created batch of messages to the queue
+ await sender.sendMessages(batch);
+
+ console.log(`Sent a batch of messages to the queue: ${queueName}`);
+
+ // Close the sender
+ await sender.close();
+ } finally {
+ await sbClient.close();
+ }
+ }
+
+ // call the main function
+ main().catch((err) => {
+ console.log("Error occurred: ", err);
+ process.exit(1);
+ });
+ ```
+
+3. Replace `<SERVICE-BUS-NAMESPACE>` with your Service Bus namespace.
+4. Replace `<QUEUE NAME>` with the name of the queue.
+5. Then run the command in a command prompt to execute this file.
+
+ ```console
+ node send.js
+ ```
+6. You should see the following output.
+
+ ```console
+ Sent a batch of messages to the queue: myqueue
+ ```
+
+### [Connection string](#tab/connection-string)
1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-2. Create a file called `send.js` and paste the below code into it. This code sends the names of scientists as messages to your queue.
+1. Create a file called `send.js` and paste the below code into it. This code sends the names of scientists as messages to your queue.
```javascript const { ServiceBusClient } = require("@azure/service-bus");
The following sample code shows you how to send a message to a queue.
}); ``` 3. Replace `<CONNECTION STRING TO SERVICE BUS NAMESPACE>` with the connection string to your Service Bus namespace.
-1. Replace `<QUEUE NAME>` with the name of the queue.
-1. Then run the command in a command prompt to execute this file.
+4. Replace `<QUEUE NAME>` with the name of the queue.
+5. Then run the command in a command prompt to execute this file.
```console node send.js ```
-1. You should see the following output.
+6. You should see the following output.
```console Sent a batch of messages to the queue: myqueue ``` ++ ## Receive messages from a queue
+### [Passwordless](#tab/passwordless)
+
+You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
+
+1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+2. Create a file called `receive.js` and paste the following code into it.
+
+ ```javascript
+ const { delay, ServiceBusClient, ServiceBusMessage } = require("@azure/service-bus");
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ // Replace `<SERVICE-BUS-NAMESPACE>` with your namespace
+ const fullyQualifiedNamespace = "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net";
+
+ // Passwordless credential
+ const credential = new DefaultAzureCredential();
+
+ // name of the queue
+ const queueName = "<QUEUE NAME>"
+
+ async function main() {
+ // create a Service Bus client using the passwordless authentication to the Service Bus namespace
+ const sbClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
+
+ // createReceiver() can also be used to create a receiver for a subscription.
+ const receiver = sbClient.createReceiver(queueName);
+
+ // function to handle messages
+ const myMessageHandler = async (messageReceived) => {
+ console.log(`Received message: ${messageReceived.body}`);
+ };
+
+ // function to handle any errors
+ const myErrorHandler = async (error) => {
+ console.log(error);
+ };
+
+ // subscribe and specify the message and error handlers
+ receiver.subscribe({
+ processMessage: myMessageHandler,
+ processError: myErrorHandler
+ });
+
+ // Waiting long enough for the receiver to process messages before closing it
+ await delay(20000);
+
+ await receiver.close();
+ await sbClient.close();
+ }
+ // call the main function
+ main().catch((err) => {
+ console.log("Error occurred: ", err);
+ process.exit(1);
+ });
+ ```
+3. Replace `<SERVICE-BUS-NAMESPACE>` with your Service Bus namespace.
+4. Replace `<QUEUE NAME>` with the name of the queue.
+5. Then run the command in a command prompt to execute this file.
+
+ ```console
+ node receive.js
+ ```
+
+### [Connection string](#tab/connection-string)
+ 1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/) 2. Create a file called `receive.js` and paste the following code into it.
The following sample code shows you how to send a message to a queue.
process.exit(1); }); ```+ 3. Replace `<CONNECTION STRING TO SERVICE BUS NAMESPACE>` with the connection string to your Service Bus namespace.
-1. Replace `<QUEUE NAME>` with the name of the queue.
-1. Then, run the command in a command prompt to execute this file.
- ```console
- node receive.js
- ```
-1. You should see the following output.
+4. Replace `<QUEUE NAME>` with the name of the queue.
+5. Then run the command in a command prompt to execute this file.
```console
- Received message: Albert Einstein
- Received message: Werner Heisenberg
- Received message: Marie Curie
- Received message: Steven Hawking
- Received message: Isaac Newton
- Received message: Niels Bohr
- Received message: Michael Faraday
- Received message: Galileo Galilei
- Received message: Johannes Kepler
- Received message: Nikolaus Kopernikus
+ node receive.js
``` ++
+You should see the following output.
+
+```console
+Received message: Albert Einstein
+Received message: Werner Heisenberg
+Received message: Marie Curie
+Received message: Steven Hawking
+Received message: Isaac Newton
+Received message: Niels Bohr
+Received message: Michael Faraday
+Received message: Galileo Galilei
+Received message: Johannes Kepler
+Received message: Nikolaus Kopernikus
+```
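+
+Whichever tab you followed, the sample above receives messages by streaming them with `subscribe()`. If you'd rather pull a bounded batch and settle each message explicitly, the following is a minimal sketch (not part of the original quickstart) that assumes the same `<SERVICE-BUS-NAMESPACE>` and `<QUEUE NAME>` placeholders and the passwordless sign-in described above; swap in the connection string constructor if you used that option.
+
+```javascript
+// Minimal sketch (assumption): pull up to 10 messages from the queue and complete each one.
+const { ServiceBusClient } = require("@azure/service-bus");
+const { DefaultAzureCredential } = require("@azure/identity");
+
+const fullyQualifiedNamespace = "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net";
+const queueName = "<QUEUE NAME>";
+
+async function main() {
+  const sbClient = new ServiceBusClient(fullyQualifiedNamespace, new DefaultAzureCredential());
+  const receiver = sbClient.createReceiver(queueName);
+
+  try {
+    // Wait up to 5 seconds for as many as 10 messages to arrive.
+    const messages = await receiver.receiveMessages(10, { maxWaitTimeInMs: 5000 });
+    for (const message of messages) {
+      console.log(`Received message: ${message.body}`);
+      // The receiver uses the default peek-lock mode, so complete the message to remove it from the queue.
+      await receiver.completeMessage(message);
+    }
+  } finally {
+    await receiver.close();
+    await sbClient.close();
+  }
+}
+
+main().catch((err) => {
+  console.log("Error occurred: ", err);
+  process.exit(1);
+});
+```
+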
+ On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. You may need to wait for a minute or so and then refresh the page to see the latest values. :::image type="content" source="./media/service-bus-java-how-to-use-queues/overview-incoming-outgoing-messages.png" alt-text="Incoming and outgoing message count":::
On the **Overview** page for the Service Bus namespace in the Azure portal, you
Select the queue on this **Overview** page to navigate to the **Service Bus Queue** page. You see the **incoming** and **outgoing** message count on this page too. You also see other information such as the **current size** of the queue, **maximum size**, **active message count**, and so on. :::image type="content" source="./media/service-bus-java-how-to-use-queues/queue-details.png" alt-text="Queue details":::+
+## Troubleshooting
+
+If you receive one of the following errors when running the **passwordless** version of the JavaScript code, make sure you're signed in via the Azure CLI command `az login`, and that the [appropriate role](#azure-built-in-roles-for-azure-service-bus) is assigned to your Azure user account (a quick access check like the sketch after this list can help confirm the role assignment):
+
+* 'Send' claim(s) are required to perform this operation
+* 'Receive' claim(s) are required to perform this operation
+
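+A quick way to distinguish a missing role from other failures is to attempt a single send and inspect the error. The following is a minimal, hypothetical sketch (not part of the original quickstart) that reuses the `<SERVICE-BUS-NAMESPACE>` and `<QUEUE NAME>` placeholders; the exact error text can vary.
+
+```javascript
+// Hypothetical access check (assumption): try to send one message and report any authorization failure.
+const { ServiceBusClient } = require("@azure/service-bus");
+const { DefaultAzureCredential } = require("@azure/identity");
+
+async function checkSendAccess() {
+  const sbClient = new ServiceBusClient(
+    "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net",
+    new DefaultAzureCredential()
+  );
+  const sender = sbClient.createSender("<QUEUE NAME>");
+  try {
+    await sender.sendMessages({ body: "access check" });
+    console.log("Send succeeded - the role assignment looks correct.");
+  } catch (err) {
+    // A missing 'Send' claim usually means the signed-in account lacks a Service Bus
+    // data-plane role on the namespace, or the role assignment hasn't propagated yet.
+    console.log("Send failed: ", err.message);
+  } finally {
+    await sender.close();
+    await sbClient.close();
+  }
+}
+
+checkSendAccess().catch((err) => console.log(err));
+```
+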
+## Clean up resources
+
+Navigate to your Service Bus namespace in the Azure portal, and select **Delete** to delete the namespace and the queue in it.
+ ## Next steps See the following documentation and samples:
service-bus-messaging Service Bus Nodejs How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-nodejs-how-to-use-topics-subscriptions.md
Title: Get started with Azure Service Bus topics (JavaScript)
description: This tutorial shows you how to send messages to Azure Service Bus topics and receive messages from topics' subscriptions using the JavaScript programming language. Previously updated : 02/16/2022 Last updated : 11/18/2022 ms.devlang: javascript
> * [JavaScript](service-bus-nodejs-how-to-use-topics-subscriptions.md) > * [Python](service-bus-python-how-to-use-topics-subscriptions.md)
+In this tutorial, you complete the following steps:
-In this tutorial, you learn how to use the [@azure/service-bus](https://www.npmjs.com/package/@azure/service-bus) package in a JavaScript program to send messages to a Service Bus topic and receive messages from a Service Bus subscription to that topic.
+1. Create a Service Bus namespace, using the Azure portal.
+2. Create a Service Bus topic, using the Azure portal.
+3. Create a Service Bus subscription to that topic, using the Azure portal.
+4. Write a JavaScript application to use the [@azure/service-bus](https://www.npmjs.com/package/@azure/service-bus) package to:
+ * Send a set of messages to the topic.
+ * Receive those messages from the subscription.
> [!NOTE] > This quick start provides step-by-step instructions for a simple scenario of sending a batch of messages to a Service Bus topic and receiving those messages from a subscription of the topic. You can find pre-built JavaScript and TypeScript samples for Azure Service Bus in the [Azure SDK for JavaScript repository on GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/servicebus/service-bus/samples/v7). ## Prerequisites - An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).-- Follow steps in the [Quickstart: Use the Azure portal to create a Service Bus topic and subscriptions to the topic](service-bus-quickstart-topics-subscriptions-portal.md). Note down the connection string, topic name, and a subscription name. You will use only one subscription for this quickstart.
+- [Node.js LTS](https://nodejs.org/en/download/)
+- Follow steps in the [Quickstart: Use the Azure portal to create a Service Bus topic and subscriptions to the topic](service-bus-quickstart-topics-subscriptions-portal.md). You will use only one subscription for this quickstart.
++
+### [Passwordless](#tab/passwordless)
+
+To use this quickstart with your own Azure account, you need to:
+* Install the [Azure CLI](/cli/azure/install-azure-cli), which provides passwordless authentication from your developer machine.
+* Sign in with your Azure account at the terminal or command prompt with `az login`.
+* Use the same account when you add the appropriate role to your resource.
+* Run the code in the same terminal or command prompt.
+* Note down the **topic** name and **subscription** name for your Service Bus namespace. You'll need them in the code.
+
+### [Connection string](#tab/connection-string)
+
+Note down the following, which you'll use in the code below:
+* Service Bus namespace **connection string**
+* Service Bus namespace **topic** name you created
+* Service Bus namespace **subscription**
++ > [!NOTE] > - This tutorial works with samples that you can copy and run using [Nodejs](https://nodejs.org/). For instructions on how to create a Node.js application, see [Create and deploy a Node.js application to an Azure Website](../app-service/quickstart-nodejs.md), or [Node.js Cloud Service using Windows PowerShell](../cloud-services/cloud-services-nodejs-develop-deploy-app.md).
-### Use Node Package Manager (NPM) to install the package
-To install the npm package for Service Bus, open a command prompt that has `npm` in its path, change the directory to the folder where you want to have your samples and then run this command.
-```bash
-npm install @azure/service-bus
-```
+++
+## Use Node Package Manager (NPM) to install the package
+
+### [Passwordless](#tab/passwordless)
+
+1. To install the required npm packages for Service Bus, open a command prompt that has `npm` in its path, and change the directory to the folder where you want to keep your samples.
+
+1. Install the following packages:
+
+ ```bash
+ npm install @azure/service-bus @azure/identity
+ ```
+
+### [Connection string](#tab/connection-string)
+
+1. To install the required npm package for Service Bus, open a command prompt that has `npm` in its path, and change the directory to the folder where you want to keep your samples.
+
+1. Install the following package:
+
+ ```bash
+ npm install @azure/service-bus
+ ```
++ ## Send messages to a topic The following sample code shows you how to send a batch of messages to a Service Bus topic. See code comments for details.
+### [Passwordless](#tab/passwordless)
+
+You must be signed in with the Azure CLI's `az login` command so that your local machine can provide the passwordless authentication required by this code.
+
+1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+2. Create a file called `sendtotopic.js` and paste the following code into it. This code sends a batch of messages to your topic.
+
+ ```javascript
+ const { ServiceBusClient } = require("@azure/service-bus");
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ // Replace `<SERVICE-BUS-NAMESPACE>` with your namespace
+ const fullyQualifiedNamespace = "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net";
+
+ // Passwordless credential
+ const credential = new DefaultAzureCredential();
+
+ const topicName = "<TOPIC NAME>";
+
+ const messages = [
+ { body: "Albert Einstein" },
+ { body: "Werner Heisenberg" },
+ { body: "Marie Curie" },
+ { body: "Steven Hawking" },
+ { body: "Isaac Newton" },
+ { body: "Niels Bohr" },
+ { body: "Michael Faraday" },
+ { body: "Galileo Galilei" },
+ { body: "Johannes Kepler" },
+ { body: "Nikolaus Kopernikus" }
+ ];
+
+ async function main() {
+ // create a Service Bus client using the passwordless authentication to the Service Bus namespace
+ const sbClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
+
+ // createSender() can also be used to create a sender for a queue.
+ const sender = sbClient.createSender(topicName);
+
+ try {
+ // Tries to send all messages in a single batch.
+ // Will fail if the messages cannot fit in a batch.
+ // await sender.sendMessages(messages);
+
+ // create a batch object
+ let batch = await sender.createMessageBatch();
+ for (let i = 0; i < messages.length; i++) {
+          // for each message in the array
+
+ // try to add the message to the batch
+ if (!batch.tryAddMessage(messages[i])) {
+ // if it fails to add the message to the current batch
+ // send the current batch as it is full
+ await sender.sendMessages(batch);
+
+ // then, create a new batch
+ batch = await sender.createMessageBatch();
+
+            // now, add the message that failed to fit in the previous batch to this batch
+ if (!batch.tryAddMessage(messages[i])) {
+ // if it still can't be added to the batch, the message is probably too big to fit in a batch
+ throw new Error("Message too big to fit in a batch");
+ }
+ }
+ }
+
+ // Send the last created batch of messages to the topic
+ await sender.sendMessages(batch);
+
+ console.log(`Sent a batch of messages to the topic: ${topicName}`);
+
+ // Close the sender
+ await sender.close();
+ } finally {
+ await sbClient.close();
+ }
+ }
+
+ // call the main function
+ main().catch((err) => {
+ console.log("Error occurred: ", err);
+ process.exit(1);
+ });
+ ```
+3. Replace `<SERVICE-BUS-NAMESPACE>` with your Service Bus namespace.
+1. Replace `<TOPIC NAME>` with the name of the topic.
+1. Then run the command in a command prompt to execute this file.
+
+ ```console
+ node sendtotopic.js
+ ```
+1. You should see the following output.
+
+ ```console
+ Sent a batch of messages to the topic: mytopic
+ ```
+
+### [Connection string](#tab/connection-string)
+ 1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/) 2. Create a file called `sendtotopic.js` and paste the below code into it. This code will send a message to your topic.
The following sample code shows you how to send a batch of messages to a Service
Sent a batch of messages to the topic: mytopic ``` ++ ## Receive messages from a subscription+
+### [Passwordless](#tab/passwordless)
+
+You must be signed in with the Azure CLI's `az login` command so that your local machine can provide the passwordless authentication required by this code.
+
+1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+2. Create a file called **receivefromsubscription.js** and paste the following code into it. See code comments for details.
+
+ ```javascript
+ const { delay, ServiceBusClient, ServiceBusMessage } = require("@azure/service-bus");
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ // Replace `<SERVICE-BUS-NAMESPACE>` with your namespace
+ const fullyQualifiedNamespace = "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net";
+
+ // Passwordless credential
+ const credential = new DefaultAzureCredential();
+
+ const topicName = "<TOPIC NAME>";
+ const subscriptionName = "<SUBSCRIPTION NAME>";
+
+ async function main() {
+ // create a Service Bus client using the passwordless authentication to the Service Bus namespace
+ const sbClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
+
+ // createReceiver() can also be used to create a receiver for a queue.
+ const receiver = sbClient.createReceiver(topicName, subscriptionName);
+
+ // function to handle messages
+ const myMessageHandler = async (messageReceived) => {
+ console.log(`Received message: ${messageReceived.body}`);
+ };
+
+ // function to handle any errors
+ const myErrorHandler = async (error) => {
+ console.log(error);
+ };
+
+ // subscribe and specify the message and error handlers
+ receiver.subscribe({
+ processMessage: myMessageHandler,
+ processError: myErrorHandler
+ });
+
+        // Wait long enough for the receiver to process the messages before closing
+ await delay(5000);
+
+ await receiver.close();
+ await sbClient.close();
+ }
+
+ // call the main function
+ main().catch((err) => {
+ console.log("Error occurred: ", err);
+ process.exit(1);
+ });
+ ```
+3. Replace `<SERVICE-BUS-NAMESPACE>` with your Service Bus namespace.
+4. Replace `<TOPIC NAME>` with the name of the topic.
+5. Replace `<SUBSCRIPTION NAME>` with the name of the subscription to the topic.
+6. Then run the command in a command prompt to execute this file.
+
+ ```console
+ node receivefromsubscription.js
+ ```
+
+### [Connection string](#tab/connection-string)
+ 1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/) 2. Create a file called **receivefromsubscription.js** and paste the following code into it. See code comments for details.
The following sample code shows you how to send a batch of messages to a Service
}); ``` 3. Replace `<SERVICE BUS NAMESPACE CONNECTION STRING>` with the connection string to the namespace.
-1. Replace `<TOPIC NAME>` with the name of the topic.
-1. Replace `<SUBSCRIPTION NAME>` with the name of the subscription to the topic.
-1. Then run the command in a command prompt to execute this file.
+4. Replace `<TOPIC NAME>` with the name of the topic.
+5. Replace `<SUBSCRIPTION NAME>` with the name of the subscription to the topic.
+6. Then run the command in a command prompt to execute this file.
```console node receivefromsubscription.js ```
-1. You should see the following output.
- ```console
- Received message: Albert Einstein
- Received message: Werner Heisenberg
- Received message: Marie Curie
- Received message: Steven Hawking
- Received message: Isaac Newton
- Received message: Niels Bohr
- Received message: Michael Faraday
- Received message: Galileo Galilei
- Received message: Johannes Kepler
- Received message: Nikolaus Kopernikus
- ```
++
+You should see the following output.
+
+```console
+Received message: Albert Einstein
+Received message: Werner Heisenberg
+Received message: Marie Curie
+Received message: Steven Hawking
+Received message: Isaac Newton
+Received message: Niels Bohr
+Received message: Michael Faraday
+Received message: Galileo Galilei
+Received message: Johannes Kepler
+Received message: Nikolaus Kopernikus
+```
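+
+The receivers in this quickstart use the default peek-lock mode together with the streaming `subscribe()` model. If you want messages removed from the subscription as soon as they're delivered, the following is a minimal sketch (not part of the original quickstart) that passes the `receiveAndDelete` receive mode and pulls a batch instead; it assumes the same placeholders and passwordless sign-in used above.
+
+```javascript
+// Minimal sketch (assumption): drain available messages from the subscription in receive-and-delete mode.
+const { ServiceBusClient } = require("@azure/service-bus");
+const { DefaultAzureCredential } = require("@azure/identity");
+
+const fullyQualifiedNamespace = "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net";
+const topicName = "<TOPIC NAME>";
+const subscriptionName = "<SUBSCRIPTION NAME>";
+
+async function main() {
+  const sbClient = new ServiceBusClient(fullyQualifiedNamespace, new DefaultAzureCredential());
+
+  // In receive-and-delete mode messages are settled on receipt, so there's no completeMessage() call.
+  const receiver = sbClient.createReceiver(topicName, subscriptionName, {
+    receiveMode: "receiveAndDelete"
+  });
+
+  try {
+    // Wait up to 5 seconds for as many as 10 messages to arrive.
+    const messages = await receiver.receiveMessages(10, { maxWaitTimeInMs: 5000 });
+    for (const message of messages) {
+      console.log(`Received message: ${message.body}`);
+    }
+  } finally {
+    await receiver.close();
+    await sbClient.close();
+  }
+}
+
+main().catch((err) => {
+  console.log("Error occurred: ", err);
+  process.exit(1);
+});
+```
+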
In the Azure portal, navigate to your Service Bus namespace, switch to **Topics** in the bottom pane, and select your topic to see the **Service Bus Topic** page for your topic. On this page, you should see 10 incoming and 10 outgoing messages in the **Messages** chart.
On this page, if you select a subscription in the bottom pane, you get to the **
:::image type="content" source="./media/service-bus-nodejs-how-to-use-topics-subscriptions/active-message-count.png" alt-text="Active message count":::
+## Troubleshooting
+
+If you receive an error about required claims when running the **passwordless** version of the JavaScript code, make sure you're signed in via the Azure CLI command `az login`, and that the [appropriate role](#azure-built-in-roles-for-azure-service-bus) is assigned to your Azure user account. A quick sign-in check like the sketch below can help narrow down the cause.
+
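+To separate a sign-in problem from a role-assignment problem, you can ask the credential for a Service Bus token directly. The following is a minimal, hypothetical sketch (not part of the original quickstart); if the token request fails, the issue is the local `az login` session rather than the role assignment.
+
+```javascript
+// Hypothetical sign-in check (assumption): request a Service Bus token with the same credential the samples use.
+const { DefaultAzureCredential } = require("@azure/identity");
+
+async function checkSignIn() {
+  const credential = new DefaultAzureCredential();
+  try {
+    const token = await credential.getToken("https://servicebus.azure.net/.default");
+    console.log(`Acquired a token that expires at ${new Date(token.expiresOnTimestamp).toISOString()}`);
+  } catch (err) {
+    console.log("Could not acquire a token - check that you're signed in with 'az login': ", err.message);
+  }
+}
+
+checkSignIn();
+```
+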
+## Clean up resources
+
+Navigate to your Service Bus namespace in the Azure portal, and select **Delete** to delete the namespace along with the topic and subscription in it.
+ ## Next steps See the following documentation and samples:
site-recovery Deploy Vmware Azure Replication Appliance Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-modernized.md
You deploy an on-premises replication appliance when you use [Azure Site Recover
| CPU cores | 8 RAM | 32 GB
-Number of disks | 3, including the OS disk - 80 GB, data disk 1 - 620 GB, data disk 2 - 620 GB
+Number of disks | 2, including the OS disk - 80 GB and a data disk - 620 GB
### Software requirements
Ensure the following URLs are allowed and reachable from the Azure Site Recovery
|`*.microsoftonline.com `|Create Azure Active Directory (AD) apps for the appliance to communicate with Azure Site Recovery. | |management.azure.com |Create Azure AD apps for the appliance to communicate with the Azure Site Recovery service. | |`*.services.visualstudio.com `|Upload app logs used for internal monitoring. |
- |`*.vault.azure.net `|Manage secrets in the Azure Key Vault. Note: Ensure machines to replicate have access to this. |
+ |`*.vault.azure.net `|Manage secrets in the Azure Key Vault. Note: Ensure that the machines which need to be replicated have access to this. |
|aka.ms |Allow access to also known as links. Used for Azure Site Recovery appliance updates. | |download.microsoft.com/download |Allow downloads from Microsoft download. | |`*.servicebus.windows.net `|Communication between the appliance and the Azure Site Recovery service. | |`*.discoverysrv.windowsazure.com `<br><br>`*.hypervrecoverymanager.windowsazure.com `<br><br> `*.backup.windowsazure.com ` |Connect to Azure Site Recovery micro-service URLs. |`*.blob.core.windows.net `|Upload data to Azure storage which is used to create target disks. |
+Ensure the following URLs are allowed and reachable from the Azure Site Recovery replication appliance for continuous connectivity, when enabling replication to a government cloud:
+
+ | **URL for Fairfax** | **URL for Mooncake** | **Details** |
+ | - | -| -|
+ | `login.microsoftonline.us/*` <br> `graph.microsoftazure.us` | `login.chinacloudapi.cn/*` <br> `graph.chinacloudapi.cn` | To sign-in to your Azure subscription. |
+ | `portal.azure.us` | `portal.azure.cn` |Navigate to the Azure portal. |
+ | `*.microsoftonline.us/*` <br> `management.usgovcloudapi.net` | `*.microsoftonline.cn/*` <br> `management.chinacloudapi.cn/*` | Create Azure AD apps for the appliance to communicate with the Azure Site Recovery service. |
+ | `*.hypervrecoverymanager.windowsazure.us` <br> `*.migration.windowsazure.us` <br> `*.backup.windowsazure.us` | `*.hypervrecoverymanager.windowsazure.cn` <br> `*.migration.windowsazure.cn` <br> `*.backup.windowsazure.cn` | Connect to Azure Site Recovery micro-service URLs. |
+ |`*.vault.usgovcloudapi.net`| `*.vault.azure.cn` |Manage secrets in the Azure Key Vault. Note: Ensure that the machines which need to be replicated have access to this. |
+ ### Folder exclusions from Antivirus program
C:\Program Files\Microsoft Azure VMware Discovery Service <br>
C:\Program Files\Microsoft On-Premise to Azure Replication agent <br> E:\ <br>
-#### If Antivirus software is active on Source machine
+#### If antivirus software is active on the source machine
If antivirus software is active on the source machine, the installation folder should be excluded. Exclude the folder C:\ProgramData\ASR\agent for smooth replication.
site-recovery Failover Failback Overview Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/failover-failback-overview-modernized.md
To reprotect and fail back VMware machines and physical servers from Azure to on
- You can select any of the Azure Site Recovery replication appliances registered under a vault to re-protect to on-premises. You do not require a separate Process server in Azure for re-protect operation and a scale-out Master Target server for Linux VMs. - Replication appliance doesnΓÇÖt require additional network connection/ports (as compared with forward protection) during failback. Same appliance can be used for forward and backward protections if it is in healthy state. It should not impact the performance of the replications.-- When selecting target datastore, ensure that the ESX Host where the replication appliance is located is able to access it.
+- When selecting the appliance, ensure that the appliance can access the target datastore where the source machine is located. The source machine's datastore must always be accessible by the appliance. Even if the machine and the appliance are on different ESX servers, reprotection will succeed as long as the datastore is shared between them.
> [!NOTE] > Storage vMotion of replication appliance is not supported after re-protect operation.
+ > When selecting the appliance, ensure that the target datastore where the source machine is located is accessible by the appliance.
**Re-protect job**
site-recovery Move From Classic To Modernized Vmware Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-from-classic-to-modernized-vmware-disaster-recovery.md
Ensure the following for a successful movement of replicated item:
Ensure the following before you move from classic architecture to modernized architecture: -- [Create a Recovery Services vault](./azure-to-azure-tutorial-enable-replication.md#create-a-recovery-services-vault) and ensure the experience has [not been switched to classic](./vmware-azure-common-questions.md#how-do-i-use-the-classic-experience-in-the-recovery-services-vault-rather-than-the-modernized-experience).
+- [Create a Recovery Services vault](./azure-to-azure-tutorial-enable-replication.md#create-a-recovery-services-vault) and ensure the experience has [not been switched to classic](./vmware-azure-common-questions.md#how-do-i-use-the-classic-experience-in-the-recovery-services-vault-rather-than-the-modernized-experience)
- [Deploy an Azure Site Recovery replication appliance](./deploy-vmware-azure-replication-appliance-modernized.md). - [Add the on-premises machine’s vCenter Server details](./deploy-vmware-azure-replication-appliance-modernized.md) to the appliance, so that it successfully performs discovery.  
site-recovery Vmware Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-common-questions.md
This article answers common questions that might come up when you deploy disaste
### How do I use the classic experience in the Recovery Services vault rather than the modernized experience? - A new and more reliable way to protect VMware virtual machines using the Azure Site Recovery replication appliance is now generally available. When a new Recovery Services vault is created, by default the modernized experience will be selected.
To change the experience -
1. Open the vault on Azure portal. 2. Click on **Site Recovery** in the **Getting started** section. 3. Click on the banner on top of this page.
-
+ [![Modify VMware stack step 1](./media/vmware-azure-common-questions/change-stack-step-1.png)](./media/vmware-azure-common-questions/change-stack-step-1.png#lightbox) 4. This will open the experience selection blade. Select the classic experience if you want to use configuration server and then click on **OK**. If not, close the pane.
To change the experience -
> [!NOTE] > Note that once the experience type has been switched to classic from modernized, it cannot be switched again in the same Recovery Services vault. Ensure that the desired experience is selected, before saving this change.
+### Can I migrate to the modernized experience?
+
+All VMware VMs or physical servers that are being replicated using the classic experience can be migrated to the modernized experience. See [Move from classic to modernized VMware disaster recovery](move-from-classic-to-modernized-vmware-disaster-recovery.md) for details, and follow the [tutorial](how-to-move-from-classic-to-modernized-vmware-disaster-recovery.md).
### What do I need for VMware VM disaster recovery?
site-recovery Vmware Azure Tutorial Failover Failback Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-tutorial-failover-failback-modernized.md
If issue persists, contact Microsoft support. **Do not** disable replication.
After successful planned failover, the machine is active in your on-premises. To protect your machine in the future, ensure that the machine is replicated to Azure (re-protected).
-To do this, go to the machine > **Re-protect**, select the appliance of your choice, select the replication policy and proceed.
+To do this, go to the machine > **Re-protect**, select the appliance of your choice, select the cache storage account, and proceed. When selecting the appliance, ensure that the appliance can access the target datastore where the source machine is located. The source machine's datastore must always be accessible by the appliance. Even if the machine and the appliance are on different ESX servers, reprotection will succeed as long as the datastore is shared between them.
+
+ > [!NOTE]
+ > When selecting the appliance, ensure that the target datastore where the source machine is located is accessible by the appliance.
After successfully enabling replication and initial replication, recovery points will be generated to offer business continuity from unwanted disruptions.
After successfully enabling replication and initial replication, recovery points
After failover, reprotect the Azure VMs to on-premises. After the VMs are reprotected and replicating to the on-premises site, fail back from Azure when you're ready. > [!div class="nextstepaction"]
-> [Reprotect Azure VMs](vmware-azure-reprotect.md)
-> [Fail back from Azure](vmware-azure-failback.md)
+> [Reprotect Azure VMs](failover-failback-overview-modernized.md)
+> [Fail back from Azure](failover-failback-overview-modernized.md)
spring-apps Expose Apps Gateway End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/expose-apps-gateway-end-to-end-tls.md
This article explains how to expose applications to the internet using Applicati
## Configure Application Gateway for Azure Spring Apps
-We recommend that the domain name, as seen by the browser, is the same as the host name which Application Gateway uses to direct traffic to the Azure Spring Apps back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Apps and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Apps, cookies and generated redirect URLs (for example) can be broken. For more information, see [Host name preservation](/azure/architecture/best-practices/host-name-preservation.md).
+We recommend that the domain name, as seen by the browser, is the same as the host name which Application Gateway uses to direct traffic to the Azure Spring Apps back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Apps and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Apps, cookies and generated redirect URLs (for example) can be broken. For more information, see [Host name preservation](/azure/architecture/best-practices/host-name-preservation).
To configure Application Gateway in front of Azure Spring Apps, use the following steps.
spring-apps Expose Apps Gateway Tls Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/expose-apps-gateway-tls-termination.md
When an Azure Spring Apps service instance is deployed in your virtual network (
## Configure Application Gateway for Azure Spring Apps
-We recommend that the domain name, as seen by the browser, is the same as the host name which Application Gateway uses to direct traffic to the Azure Spring Apps back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Apps and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Apps, cookies and generated redirect URLs (for example) can be broken. For more information, see [Host name preservation](/azure/architecture/best-practices/host-name-preservation.md).
+We recommend that the domain name, as seen by the browser, is the same as the host name which Application Gateway uses to direct traffic to the Azure Spring Apps back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Apps and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Apps, cookies and generated redirect URLs (for example) can be broken. For more information, see [Host name preservation](/azure/architecture/best-practices/host-name-preservation).
To configure Application Gateway in front of Azure Spring Apps in a private VNET, use the following steps.
spring-apps How To Integrate Azure Load Balancers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-integrate-azure-load-balancers.md
Azure already provides [different load-balance solutions](/azure/architecture/gu
In the examples below, we will load balance requests for a custom domain of `www.contoso.com` towards two deployments of Azure Spring Apps in two different regions: `eastus.azuremicroservices.io` and `westus.azuremicroservices.io`.
-We recommend that the domain name, as seen by the browser, is the same as the host name which the load balancer uses to direct traffic to the Azure Spring Apps back end. This recommendation provides the best experience when using a load balancer to expose applications hosted in Azure Spring Apps. If the domain exposed by the load balancer is different from the domain accepted by Azure Spring Apps, cookies and generated redirect URLs (for example) can be broken. For more information, see [Host name preservation](/azure/architecture/best-practices/host-name-preservation.md).
+We recommend that the domain name, as seen by the browser, is the same as the host name which the load balancer uses to direct traffic to the Azure Spring Apps back end. This recommendation provides the best experience when using a load balancer to expose applications hosted in Azure Spring Apps. If the domain exposed by the load balancer is different from the domain accepted by Azure Spring Apps, cookies and generated redirect URLs (for example) can be broken. For more information, see [Host name preservation](/azure/architecture/best-practices/host-name-preservation).
## Prerequisites
To integrate with Azure Spring Apps service, complete the following configuratio
### Add Custom Probe 1. Select **Health Probes** then **Add** to open custom **Probe** dialog.
-1. The key point is to select *No* for **Pick host name from backend HTTP settings** option and explicitly specify the host name. For more information, see [Application Gateway configuration for host name preservation](/azure/architecture/best-practices/host-name-preservation.md#application-gateway).
+1. The key point is to select *No* for **Pick host name from backend HTTP settings** option and explicitly specify the host name. For more information, see [Application Gateway configuration for host name preservation](/azure/architecture/best-practices/host-name-preservation#application-gateway).
![App Gateway 2](media/spring-cloud-load-balancers/app-gateway-2.png)
To integrate with Azure Spring Apps service and configure an origin group, use t
1. Specify **origin type** as *Azure Spring Apps*. 1. Select your Azure Spring Apps instance for the **host name**.
-1. Keep the **origin host header** empty, so that the incoming host header will be used towards the backend. For more information, see [Azure Front Door configuration for host name preservation](/azure/architecture/best-practices/host-name-preservation.md#azure-front-door).
+1. Keep the **origin host header** empty, so that the incoming host header will be used towards the backend. For more information, see [Azure Front Door configuration for host name preservation](/azure/architecture/best-practices/host-name-preservation#azure-front-door).
![Front Door 2](media/spring-cloud-load-balancers/front-door-2.png)
storage Blobfuse2 Commands Completion Bash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-bash.md
Title: How to use the 'blobfuse2 completion bash' command to generate the autocompletion script for BlobFuse2 (preview) | Microsoft Docs-
+ Title: How to use the 'blobfuse2 completion bash' command to generate the autocompletion script for BlobFuse2 (preview)
+ description: Learn how to use the completion bash command to generate the autocompletion script for BlobFuse2 (preview).
storage Blobfuse2 Commands Completion Fish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-fish.md
Title: How to use the 'blobfuse2 completion fish' command to generate the autocompletion script for BlobFuse2 (preview) | Microsoft Docs-
+ Title: How to use the 'blobfuse2 completion fish' command to generate the autocompletion script for BlobFuse2 (preview)
+ description: Learn how to use the 'blobfuse2 completion fish' command to generate the autocompletion script for BlobFuse2 (preview).
storage Blobfuse2 Commands Completion Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-powershell.md
Title: How to use the 'blobfuse2 completion powershell' command to generate the autocompletion script for BlobFuse2 (preview) | Microsoft Docs-
+ Title: How to use the 'blobfuse2 completion powershell' command to generate the autocompletion script for BlobFuse2 (preview)
+ description: Learn how to use the 'blobfuse2 completion powershell' command to generate the autocompletion script for BlobFuse2 (preview).
storage Blobfuse2 Commands Completion Zsh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion-zsh.md
Title: How to use the 'blobfuse2 completion zsh' command to generate the autocompletion script for BlobFuse2 (preview) | Microsoft Docs-
+ Title: How to use the 'blobfuse2 completion zsh' command to generate the autocompletion script for BlobFuse2 (preview)
+ description: Learn how to use the 'blobfuse2 completion zsh' command to generate the autocompletion script for BlobFuse2 (preview).
storage Blobfuse2 Commands Completion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-completion.md
Title: How to use the 'blobfuse2 completion' command to generate the autocompletion script for BlobFuse2 (preview) | Microsoft Docs-
+ Title: How to use the 'blobfuse2 completion' command to generate the autocompletion script for BlobFuse2 (preview)
+ description: Learn how to use the 'blobfuse2 completion' command to generate the autocompletion script for BlobFuse2 (preview).
storage Blobfuse2 Commands Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-help.md
Title: How to use 'blobfuse2 help' to get help info for the BlobFuse2 command and subcommands (preview) | Microsoft Docs-
+ Title: How to use 'blobfuse2 help' to get help info for the BlobFuse2 command and subcommands (preview)
+ description: Learn how to use 'blobfuse2 help' to get help info for the BlobFuse2 command and subcommands (preview).
storage Blobfuse2 Commands Mount All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount-all.md
Title: How to use the 'blobfuse2 mount all' command to mount all blob containers in a storage account as a Linux file system (preview) | Microsoft Docs-
+ Title: How to use the 'blobfuse2 mount all' command to mount all blob containers in a storage account as a Linux file system (preview)
+ description: Learn how to use the 'blobfuse2 mount all' all command to mount all blob containers in a storage account as a Linux file system (preview).
storage Blobfuse2 Commands Mount List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount-list.md
Title: How to use the 'blobfuse2 mount list' command to display all BlobFuse2 mount points (preview) | Microsoft Docs-
+ Title: How to use the 'blobfuse2 mount list' command to display all BlobFuse2 mount points (preview)
+ description: Learn how to use the 'blobfuse2 mount list' command to display all BlobFuse2 mount points. (preview)
storage Blobfuse2 Commands Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount.md
Title: How to use the 'blobfuse2 mount' command to mount a Blob Storage container as a file system in Linux, or to display and manage existing mount points (preview). | Microsoft Docs-
+ Title: How to use the 'blobfuse2 mount' command to mount a Blob Storage container as a file system in Linux, or to display and manage existing mount points (preview).
+ description: Learn how to use the 'blobfuse2 mount' command to mount a Blob Storage container as a file system in Linux, or to display and manage existing mount points (preview).
storage Blobfuse2 Commands Mountv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mountv1.md
Title: How to generate a configuration file for BlobFuse2 from a BlobFuse v1 configuration file (preview) | Microsoft Docs-
+ Title: How to generate a configuration file for BlobFuse2 from a BlobFuse v1 configuration file (preview)
+ description: How to generate a configuration file for BlobFuse2 from a BlobFuse v1 configuration file (preview).
storage Blobfuse2 Commands Secure Decrypt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-decrypt.md
Title: How to use the `blobfuse2 secure decrypt` command to decrypt a BlobFuse2 configuration file (preview) | Microsoft Docs-
+ Title: How to use the `blobfuse2 secure decrypt` command to decrypt a BlobFuse2 configuration file (preview)
+ description: Learn how to use the `blobfuse2 secure decrypt` command to decrypt a BlobFuse2 configuration file (preview).
storage Blobfuse2 Commands Secure Encrypt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-encrypt.md
Title: How to use the `blobfuse2 secure encrypt` command to encrypt a BlobFuse2 configuration file (preview) | Microsoft Docs-
+ Title: How to use the `blobfuse2 secure encrypt` command to encrypt a BlobFuse2 configuration file (preview)
+ description: Learn how to use the `blobfuse2 secure encrypt` command to encrypt a BlobFuse2 configuration file. (preview)
storage Blobfuse2 Commands Secure Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-get.md
Title: How to use the 'blobfuse2 secure get' command to display the value of a parameter from an encrypted BlobFuse2 configuration file (preview) | Microsoft Docs-
+ Title: How to use the 'blobfuse2 secure get' command to display the value of a parameter from an encrypted BlobFuse2 configuration file (preview)
+ description: Learn how to use the 'blobfuse2 secure get' command to display the value of a parameter from an encrypted BlobFuse2 configuration file (preview)
storage Blobfuse2 Commands Secure Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure-set.md
Title: How to use the 'blobfuse2 secure set' command to change the value of a parameter in an encrypted BlobFuse2 configuration file (preview) | Microsoft Docs-
+ Title: How to use the 'blobfuse2 secure set' command to change the value of a parameter in an encrypted BlobFuse2 configuration file (preview)
+ description: Learn how to use the 'blobfuse2 secure set' command to change the value of a parameter in an encrypted BlobFuse2 configuration file (preview)
storage Blobfuse2 Commands Secure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-secure.md
Title: How to use the 'blobfuse2 secure' command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file (preview) | Microsoft Docs-
+ Title: How to use the 'blobfuse2 secure' command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file (preview)
+ description: Learn how to use the 'blobfuse2 secure' command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file (preview).
storage Blobfuse2 Commands Unmount All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount-all.md
Title: How to use the 'blobfuse2 unmount all' command to unmount all blob containers in a storage account as a Linux file system (preview) | Microsoft Docs-
+ Title: How to use the 'blobfuse2 unmount all' command to unmount all blob containers in a storage account as a Linux file system (preview)
+ description: Learn how to use the 'blobfuse2 unmount all' command to unmount all blob containers in a storage account as a Linux file system (preview).
storage Blobfuse2 Commands Unmount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount.md
Title: How to use the 'blobfuse2 unmount' command to unmount an existing mount point (preview)| Microsoft Docs-+ description: How to use the 'blobfuse2 unmount' command to unmount an existing mount point. (preview)
storage Blobfuse2 Commands Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-version.md
Title: How to use the 'blobfuse2 version' command to get the current version and optionally check for a newer one (preview) | Microsoft Docs-
+ Title: How to use the 'blobfuse2 version' command to get the current version and optionally check for a newer one (preview)
+ description: Learn how to use the 'blobfuse2 version' command to get the current version and optionally check for a newer one (preview).
storage Blobfuse2 Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands.md
Title: How to use the BlobFuse2 command set (preview) | Microsoft Docs-
+ Title: How to use the BlobFuse2 command set (preview)
+ description: Learn how to use the BlobFuse2 command set to mount blob storage containers as file systems on Linux, and manage them (preview).
storage Blobfuse2 Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-configuration.md
Title: Configure settings for BlobFuse2 (preview)-+ description: Learn about your options for setting and changing configuration settings for BlobFuse2 (preview).
Previously updated : 10/17/2022 Last updated : 11/17/2022 # Configure settings for BlobFuse2 (preview)
For a list of all BlobFuse2 settings and their descriptions, see the [base confi
To manage configuration settings for BlobFuse2, you have three options (in order of precedence):
-(1) [Configuration file](#configuration-file)
-
-(2) [Environment variables](#environment-variables)
-
-(3) [CLI parameters](#cli-parameters)
+- [Configuration file](#configuration-file)
+- [Environment variables](#environment-variables)
+- [CLI parameters](#cli-parameters)
Using a configuration file is the preferred method, but the other methods might be useful in some circumstances.
storage Blobfuse2 Health Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-health-monitor.md
Title: Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage-+ description: Learn how to Use Health Monitor to gain insights into BlobFuse2 mount activities and resource usage.
storage Blobfuse2 How To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-how-to-deploy.md
Title: Use BlobFuse to mount an Azure Blob Storage container on Linux - BlobFuse2 (preview)-+ description: Learn how to use the latest version of BlobFuse, BlobFuse2, to mount an Azure Blob Storage container on Linux.
storage Blobfuse2 Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-troubleshooting.md
Title: Troubleshoot issues in BlobFuse2 (preview)-+ description: Learn how to troubleshoot issues in BlobFuse2 (preview).
storage Blobfuse2 What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-what-is.md
Title: What is BlobFuse? - BlobFuse2 (preview)-+ description: An overview of how to use BlobFuse to mount an Azure Blob Storage container through the Linux file system.
The open source BlobFuse2 project is on GitHub:
### Licensing
-The BlobFuse2 project is [licensed under MIT](https://github.com/Azure/azure-storage-fuse/blob/main/LICENSE).
+The BlobFuse2 project is [licensed under the MIT license](https://github.com/Azure/azure-storage-fuse/blob/main/LICENSE).
## Features
storage Data Lake Storage Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-best-practices.md
Title: Best practices for using Azure Data Lake Storage Gen2 | Microsoft Docs
+ Title: Best practices for using Azure Data Lake Storage Gen2
description: Learn how to optimize performance, reduce costs, and secure your Data Lake Storage Gen2 enabled Azure Storage account.
storage Data Lake Storage Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-events.md
Title: 'Tutorial: Implement the data lake capture pattern to update a Azure Databricks Delta table | Microsoft Docs'
+ Title: 'Tutorial: Implement the data lake capture pattern to update a Azure Databricks Delta table'
description: This tutorial shows you how to use an Event Grid subscription, an Azure Function, and an Azure Databricks job to insert rows of data into a table that is stored in Azure DataLake Storage Gen2.
storage Data Lake Storage Supported Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-supported-azure-services.md
Title: Azure services that support Azure Data Lake Storage Gen2 | Microsoft Docs
+ Title: Azure services that support Azure Data Lake Storage Gen2
description: Learn about which Azure services integrate with Azure Data Lake Storage Gen2
storage Data Lake Storage Supported Open Source Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-supported-open-source-platforms.md
Title: Open source platforms that support Azure Data Lake Storage Gen2 | Microsoft Docs
+ Title: Open source platforms that support Azure Data Lake Storage Gen2
description: Learn about which open source platforms that support Azure Data Lake Storage Gen2
storage Data Lake Storage Use Databricks Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-use-databricks-spark.md
Title: 'Tutorial: Azure Data Lake Storage Gen2, Azure Databricks & Spark | Microsoft Docs'
+ Title: 'Tutorial: Azure Data Lake Storage Gen2, Azure Databricks & Spark'
description: This tutorial shows how to run Spark queries on an Azure Databricks cluster to access data in an Azure Data Lake Storage Gen2 storage account.
storage Data Lake Storage Use Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-use-sql.md
Title: 'Tutorial: Azure Data Lake Storage Gen2, Azure Synapse | Microsoft Docs'
+ Title: 'Tutorial: Azure Data Lake Storage Gen2, Azure Synapse'
description: This tutorial shows how to run SQL queries on an Azure Synapse serverless SQL endpoint to access data in an Azure Data Lake Storage Gen2 storage account.
storage Monitor Blob Storage Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage-reference.md
Title: Azure Blob Storage monitoring data reference | Microsoft Docs
+ Title: Azure Blob Storage monitoring data reference
description: Log and metrics reference for monitoring data from Azure Blob Storage. recommendations: false
storage Network File System Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md
Title: Mount Azure Blob Storage by using the NFS 3.0 protocol | Microsoft Docs
+ Title: Mount Azure Blob Storage by using the NFS 3.0 protocol
description: Learn how to mount a container in Blob Storage from an Azure virtual machine (VM) or a client that runs on-premises by using the NFS 3.0 protocol.
storage Secure File Transfer Protocol Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-performance.md
Title: SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage | Microsoft Docs
+ Title: SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage
description: Optimize the performance of your SSH File Transfer Protocol (SFTP) requests by using the recommendations in this article.
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
Title: Connect to Azure Blob Storage using SFTP | Microsoft Docs
+ Title: Connect to Azure Blob Storage using SFTP
description: Learn how to enable SFTP support for Azure Blob Storage so that you can directly connect to your Azure Storage account by using an SFTP client.
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Title: SFTP support for Azure Blob Storage | Microsoft Docs
+ Title: SFTP support for Azure Blob Storage
description: Blob storage now supports the SSH File Transfer Protocol (SFTP).
storage Storage Blob Container Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-javascript.md
Title: Create a blob container with JavaScript - Azure Storage description: Learn how to create a blob container in your Azure Storage account using the JavaScript client library. -+ Last updated 03/28/2022-+ ms.devlang: javascript-+ # Create a container in Azure Storage with JavaScript
storage Storage Blob Container Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create.md
Last updated 07/25/2022
ms.devlang: csharp-+ # Create a container in Azure Storage with .NET
storage Storage Blob Container Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-javascript.md
Last updated 03/28/2022 ms.devlang: javascript-+ # Delete and restore a container in Azure Storage with JavaScript
storage Storage Blob Container Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete.md
Last updated 03/28/2022
ms.devlang: csharp-+ # Delete and restore a container in Azure Storage with .NET
storage Storage Blob Container Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease.md
Last updated 03/28/2022 ms.devlang: csharp-+ # Create and manage blob or container leases with .NET
storage Storage Blob Container Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-javascript.md
Last updated 03/28/2022 ms.devlang: javascript-+ # Manage container properties and metadata with JavaScript
storage Storage Blob Container Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata.md
Last updated 03/28/2022 ms.devlang: csharp-+ # Manage container properties and metadata with .NET
storage Storage Blob Containers List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-javascript.md
Last updated 03/28/2022
ms.devlang: javascript-+ # List blob containers with JavaScript
storage Storage Blob Containers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list.md
Last updated 03/28/2022
ms.devlang: csharp-+ # List blob containers with .NET
storage Storage Blob Copy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-javascript.md
ms.devlang: javascript-+ # Copy a blob with Azure Storage using the JavaScript client library
storage Storage Blob Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md
Last updated 03/28/2022
-
+ms.devlang: csharp
+ # Copy a blob with Azure Storage using the .NET client library
storage Storage Blob Create User Delegation Sas Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-create-user-delegation-sas-javascript.md
Last updated 07/15/2022 -+
storage Storage Blob Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-javascript.md
ms.devlang: javascript-+ # Delete and restore a blob in your Azure Storage account using the JavaScript client library
storage Storage Blob Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete.md
Last updated 03/28/2022
-
+ms.devlang: csharp
+ # Delete and restore a blob in your Azure Storage account using the .NET client library
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
Last updated 03/28/2022 -+ # Get started with Azure Blob Storage and .NET
storage Storage Blob Download Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md
ms.devlang: javascript-+ # Download a blob in Azure Storage using the JavaScript client library
storage Storage Blob Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download.md
Last updated 03/28/2022
-
+ms.devlang: csharp
+ # Download a blob in Azure Storage using the .NET client library
You can download a blob by using any of the following methods:
- [DownloadTo](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadto) - [DownloadToAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadtoasync) - [DownloadContent](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadcontent)--[DownloadContentAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadcontentasync)
+- [DownloadContentAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadcontentasync)
You can also open a stream to read from a blob. The stream will only download the blob as the stream is read from. Use either of the following methods:
storage Storage Blob Event Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-event-overview.md
Title: Reacting to Azure Blob storage events | Microsoft Docs
+ Title: Reacting to Azure Blob storage events
description: Use Azure Event Grid to subscribe and react to Blob storage events. Understand the event model, filtering events, and practices for consuming events.
storage Storage Blob Event Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-event-quickstart-powershell.md
Title: Send Azure Blob storage events to web endpoint - PowerShell | Microsoft Docs
+ Title: Send Azure Blob storage events to web endpoint - PowerShell
description: Use Azure Event Grid to subscribe to Blob storage events, trigger an event, and view the result. Use Azure PowerShell to route storage events to a web endpoint.
storage Storage Blob Event Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-event-quickstart.md
Title: Send Azure Blob storage events to web endpoint - Azure CLI | Microsoft Docs
+ Title: Send Azure Blob storage events to web endpoint - Azure CLI
description: Use Azure Event Grid to subscribe to Blob storage events. Send the events to a Webhook. Handle the events in a web application.
storage Storage Blob Get Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-javascript.md
ms.devlang: javascript-+ # Get URL for container or blob in Azure Storage using the JavaScript client library
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
Last updated 09/19/2022 -+
storage Storage Blob Pageblob Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-pageblob-overview.md
Title: Overview of Azure page blobs | Microsoft Docs
+ Title: Overview of Azure page blobs
description: An overview of Azure page blobs and their advantages, including use cases with sample scripts.
storage Storage Blob Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-javascript.md
ms.devlang: csharp-+ # Manage blob properties and metadata with JavaScript
storage Storage Blob Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata.md
ms.devlang: csharp-+ # Manage blob properties and metadata with .NET
storage Storage Blob Scalable App Download Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-download-files.md
Title: Download large amounts of random data from Azure Storage | Microsoft Docs
+ Title: Download large amounts of random data from Azure Storage
description: Learn how to use the Azure SDK to download large amounts of random data from an Azure Storage account
storage Storage Blob Scalable App Verify Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-verify-metrics.md
Title: Verify throughput and latency metrics for a storage account in the Azure portal | Microsoft Docs
+ Title: Verify throughput and latency metrics for a storage account in the Azure portal
description: Learn how to verify throughput and latency metrics for a storage account in the portal.
storage Storage Blob Tags Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-javascript.md
ms.devlang: javascript-+ # Use blob index tags to manage and find data in Azure Blob Storage (JavaScript)
storage Storage Blob Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags.md
Last updated 03/28/2022
-
+ms.devlang: csharp
+ # Use blob index tags to manage and find data in Azure Blob Storage (.NET)
storage Storage Blob Upload Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-javascript.md
ms.devlang: javascript-+ # Upload a blob to Azure Storage by using the JavaScript client library
storage Storage Blob Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload.md
Last updated 03/28/2022
-
+ms.devlang: csharp
+ # Upload a blob to Azure Storage by using the .NET client library
storage Storage Blobs List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-javascript.md
Last updated 03/28/2022
ms.devlang: javascript-+ # List blobs using the Azure Storage client library for JavaScript
storage Storage Blobs List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list.md
Last updated 03/28/2022 -
+ms.devlang: csharp
+ # List blobs using the Azure Storage client library for .NET
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
Title: How to mount Azure Blob Storage as a file system on Linux with BlobFuse v1 | Microsoft Docs-
+ Title: How to mount Azure Blob Storage as a file system on Linux with BlobFuse v1
+ description: Learn how to mount an Azure Blob Storage container with BlobFuse v1, a virtual file system driver on Linux. Previously updated : 11/03/2022 Last updated : 11/17/2022
This guide shows you how to use BlobFuse v1 and mount a Blob Storage container o
BlobFuse binaries are available on [the Microsoft software repositories for Linux](/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software) for Ubuntu, Debian, SUSE, CentOS, Oracle Linux and RHEL distributions. To install BlobFuse on those distributions, configure one of the repositories from the list. You can also build the binaries from source code following the [Azure Storage installation steps](https://github.com/Azure/azure-storage-fuse/wiki/1.-Installation#option-2build-from-source) if there are no binaries available for your distribution.
-BlobFuse is published in the Linux repo for Ubuntu versions: 16.04, 18.04, and 20.04, RHELversions: 7.5, 7.8, 7.9, 8.0, 8.1, 8.2, CentOS versions: 7.0, 8.0, Debian versions: 9.0, 10.0, SUSE version: 15, Oracle Linux 8.1. Run this command to make sure that you have one of those versions deployed:
+BlobFuse is published in the Linux repo for Ubuntu versions: 16.04, 18.04, and 20.04, RHEL versions: 7.5, 7.8, 7.9, 8.0, 8.1, 8.2, CentOS versions: 7.0, 8.0, Debian versions: 9.0, 10.0, SUSE version: 15, Oracle Linux 8.1. Run this command to make sure that you have one of those versions deployed:
```bash lsb_release -a
storage Storage Quickstart Blobs Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-go.md
Title: Azure Quickstart - Create a blob in object storage using Go | Microsoft Docs
+ Title: Azure Quickstart - Create a blob in object storage using Go
description: In this quickstart, you create a storage account and a container in object (Blob) storage. Then you use the storage client library for Go to upload a blob to Azure Storage, download a blob, and list the blobs in a container.
storage Storage Quickstart Blobs Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-php.md
Title: Azure Quickstart - Create a blob in object storage using PHP | Microsoft Docs
+ Title: Azure Quickstart - Create a blob in object storage using PHP
description: Quickly learn to transfer objects to/from Azure Blob storage using PHP. Upload, download, and list block blobs in a container in Azure Blob storage.
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
To see Blob storage sample apps, continue to:
> [!div class="nextstepaction"] > [Azure Blob Storage library for Python samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob/samples) -- To learn more, see the [Azure Storage client libraries for Python](/azure/developer/python/sdk/storage/overview).-- For tutorials, samples, quickstarts, and other documentation, visit [Azure for Python Developers](/azure/python/).
+- To learn more, see the [Azure Storage client libraries for Python](/azure/developer/python/sdk/azure-sdk-overview).
+- For tutorials, samples, quickstarts, and other documentation, visit [Azure for Python Developers](/azure/python/).
storage Storage Samples Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-samples-blobs-cli.md
Title: Azure CLI samples for Blob storage | Microsoft Docs
+ Title: Azure CLI samples for Blob storage
description: See links to Azure CLI samples for working with Azure Blob Storage, such as creating a storage account, deleting containers with a specific prefix, and more.
storage Storage Samples Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-samples-blobs-powershell.md
Title: Azure PowerShell samples for Azure Blob storage | Microsoft Docs
+ Title: Azure PowerShell samples for Azure Blob storage
description: See links to Azure PowerShell script samples for working with Azure Blob storage, such as creating a storage account, migrating blobs across accounts, and more.
storage Upgrade To Data Lake Storage Gen2 How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md
Title: Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities | Microsoft Docs
+ Title: Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities
description: Shows you how to use Resource Manager templates to upgrade from Azure Blob Storage to Data Lake Storage.
storage Upgrade To Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/upgrade-to-data-lake-storage-gen2.md
Title: Upgrading Azure Blob Storage to Azure Data Lake Storage Gen2 | Microsoft Docs
+ Title: Upgrading Azure Blob Storage to Azure Data Lake Storage Gen2
description: Description goes here.
storage Manage Storage Analytics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/manage-storage-analytics-logs.md
Title: Enable and manage Azure Storage Analytics logs (classic) | Microsoft Docs
+ Title: Enable and manage Azure Storage Analytics logs (classic)
description: Learn how to monitor a storage account in Azure by using Azure Storage Analytics.
storage Manage Storage Analytics Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/manage-storage-analytics-metrics.md
Title: Enable and manage Azure Storage Analytics metrics (classic) | Microsoft Docs
+ Title: Enable and manage Azure Storage Analytics metrics (classic)
description: Learn how to enable, edit, and view Azure Storage Analytics metrics.
storage Storage Account Get Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-get-info.md
Previously updated : 05/26/2022 Last updated : 11/17/2022
az storage account show \
You can use a connection string to authorize access to Azure Storage with the account access keys (Shared Key authorization). To learn more about connection strings, see [Configure Azure Storage connection strings](storage-configure-connection-string.md).
+> [!IMPORTANT]
+> Your storage account access keys are similar to a root password for your storage account. Always be careful to protect your access keys. Use Azure Key Vault to manage and rotate your keys securely. Avoid distributing access keys to other users, hard-coding them, or saving them anywhere in plain text that is accessible to others. Rotate your keys if you believe they may have been compromised.
+>
+> Microsoft recommends using Azure Active Directory (Azure AD) to authorize requests against blob and queue data if possible, rather than using the account keys (Shared Key authorization). Authorization with Azure AD provides superior security and ease of use over Shared Key authorization. For more information, see [Authorize access to data in Azure Storage](authorize-data-access.md).
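As a hedged illustration of that recommendation (the account endpoint and container name are placeholders, not values taken from this article), a .NET client can authorize with Azure AD via `DefaultAzureCredential` instead of an account key or connection string:

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

class AzureAdAuthSample
{
    static void Main()
    {
        // Placeholder account endpoint; substitute your own storage account name.
        var serviceClient = new BlobServiceClient(
            new Uri("https://mystorageaccount.blob.core.windows.net"),
            new DefaultAzureCredential());

        // Requests are now authorized with an Azure AD token instead of
        // the account access key or a connection string.
        BlobContainerClient container = serviceClient.GetBlobContainerClient("sample-container");
        Console.WriteLine($"Container exists: {container.Exists().Value}");
    }
}
```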
# [Portal](#tab/portal)
storage Storage Account Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-move.md
Title: Move an Azure Storage account to another region | Microsoft Docs
+ Title: Move an Azure Storage account to another region
description: Shows you how to move an Azure Storage account to another region.
storage Storage Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-analytics.md
Title: Use Azure Storage analytics to collect logs and metrics data | Microsoft Docs
+ Title: Use Azure Storage analytics to collect logs and metrics data
description: Storage Analytics enables you to track metrics data for all storage services, and to collect logs for Blob, Queue, and Table storage.
storage Storage Compliance Offerings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-compliance-offerings.md
Title: Azure Storage compliance offerings | Microsoft Docs
+ Title: Azure Storage compliance offerings
description: Read a summary of compliance offerings on Azure Storage for national, regional, and industry-specific requirements governing the collection and usage of data.
storage Storage Explorer Blob Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-blob-versioning.md
Title: Azure Storage Explorer blob versioning guide | Microsoft Docs
+ Title: Azure Storage Explorer blob versioning guide
description: Blob versioning guidance for Azure Storage Explorer
storage Storage Explorer Command Line Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-command-line-options.md
Title: Azure Storage Explorer command-line options | Microsoft Docs
+ Title: Azure Storage Explorer command-line options
description: Documentation of Azure Storage Explorer start-up command-line options
storage Storage Explorer Direct Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-direct-link.md
Title: Azure Storage Explorer direct link | Microsoft Docs
+ Title: Azure Storage Explorer direct link
description: Documentation of Azure Storage Explorer direct link
storage Storage Explorer Emulators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-emulators.md
Title: Connect an emulator to Azure Storage Explorer | Microsoft Docs
+ Title: Connect an emulator to Azure Storage Explorer
description: Documentation on using an emulator with Azure Storage Explorer
storage Storage Explorer Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-network.md
Title: Network Connections in Azure Storage Explorer | Microsoft Docs
+ Title: Network Connections in Azure Storage Explorer
description: Documentation on connecting to your network in Azure Storage Explorer
storage Storage Explorer Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-security.md
Title: Azure Storage Explorer security guide | Microsoft Docs
+ Title: Azure Storage Explorer security guide
description: Security guidance for Azure Storage Explorer
storage Storage Explorer Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-sign-in.md
Title: Sign in to Azure Storage Explorer | Microsoft Docs
+ Title: Sign in to Azure Storage Explorer
description: Documentation on signing into Azure Storage Explorer
storage Storage Explorer Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-soft-delete.md
Title: Azure Storage Explorer soft delete guide | Microsoft Docs
+ Title: Azure Storage Explorer soft delete guide
description: Soft delete in Azure Storage Explorer
storage Storage Explorer Support Policy Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-support-policy-lifecycle.md
Title: Azure Storage Explorer support lifecycle | Microsoft Docs
+ Title: Azure Storage Explorer support lifecycle
description: Overview of the support policy and lifecycle for Azure Storage Explorer
storage Storage Explorer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-troubleshooting.md
Title: Azure Storage Explorer troubleshooting guide | Microsoft Docs
+ Title: Azure Storage Explorer troubleshooting guide
description: Overview of debugging techniques for Azure Storage Explorer
storage Storage Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-insights-overview.md
Title: Monitor Azure Storage services with Azure Monitor Storage insights | Microsoft Docs
+ Title: Monitor Azure Storage services with Azure Monitor Storage insights
description: This article describes the Storage insights feature of Azure Monitor that provides storage admins with a quick understanding of performance and utilization issues with their Azure Storage accounts. recommendations: false
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
Blob Storage is ideal for:
- Storing data for backup and restore, disaster recovery, and archiving. - Storing data for analysis by an on-premises or Azure-hosted service.
-Objects in Blob Storage can be accessed from anywhere in the world via HTTP or HTTPS. Users or client applications can access blobs via URLs, the [Azure Storage REST API](/rest/api/storageservices/blob-service-rest-api), [Azure PowerShell](/powershell/module/azure.storage), [Azure CLI](/cli/azure/storage), or an Azure Storage client library. The storage client libraries are available for multiple languages, including [.NET](/dotnet/api/overview/azure/storage), [Java](/java/api/overview/azure/storage), [Node.js](https://azure.github.io/azure-storage-node), [Python](https://azure-storage.readthedocs.io/), [PHP](https://azure.github.io/azure-storage-php/), and [Ruby](https://azure.github.io/azure-storage-ruby).
+Objects in Blob Storage can be accessed from anywhere in the world via HTTP or HTTPS. Users or client applications can access blobs via URLs, the [Azure Storage REST API](/rest/api/storageservices/blob-service-rest-api), [Azure PowerShell](/powershell/module/azure.storage), [Azure CLI](/cli/azure/storage), or an Azure Storage client library. The storage client libraries are available for multiple languages, including [.NET](/dotnet/api/overview/azure/storage), [Java](/java/api/overview/azure/storage), [Node.js](https://azure.github.io/azure-storage-node), [Python](/python/api/overview/azure/storage), [PHP](https://azure.github.io/azure-storage-php/), and [Ruby](https://azure.github.io/azure-storage-ruby).
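For example (a minimal sketch with placeholder names, using the .NET client library as one of the access options above), an application could list the blobs in a container like this:

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class ListBlobsSample
{
    static void Main()
    {
        // Placeholder container URI; substitute your own account and container names.
        var containerClient = new BlobContainerClient(
            new Uri("https://mystorageaccount.blob.core.windows.net/sample-container"),
            new DefaultAzureCredential());

        // Enumerate the blobs in the container and print their names.
        foreach (BlobItem blob in containerClient.GetBlobs())
        {
            Console.WriteLine(blob.Name);
        }
    }
}
```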
Clients can also securely connect to Blob Storage by using SSH File Transfer Protocol (SFTP) and mount Blob Storage containers by using the Network File System (NFS) 3.0 protocol.
storage Storage Metrics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-metrics-migration.md
Title: Move from Storage Analytics metrics to Azure Monitor metrics | Microsoft Docs
+ Title: Move from Storage Analytics metrics to Azure Monitor metrics
description: Learn how to transition from Storage Analytics metrics (classic metrics) to metrics in Azure Monitor.
storage Storage Monitoring Diagnosing Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-monitoring-diagnosing-troubleshooting.md
Title: Monitor and troubleshoot Azure Storage (classic logs & metrics) | Microsoft Docs
+ Title: Monitor and troubleshoot Azure Storage (classic logs & metrics)
description: Use features like storage analytics, client-side logging, and other third-party tools to identify, diagnose, and troubleshoot Azure Storage-related issues.
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Title: Configure Azure Storage firewalls and virtual networks | Microsoft Docs
+ Title: Configure Azure Storage firewalls and virtual networks
description: Configure layered network security for your storage account using Azure Storage firewalls and Azure Virtual Network.
storage Storage Ref Azcopy Bench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-bench.md
Title: azcopy bench | Microsoft Docs
+ Title: azcopy bench
description: This article provides reference information for the azcopy bench command.
storage Storage Ref Azcopy Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-configuration-settings.md
Title: AzCopy v10 configuration setting (Azure Storage) | Microsoft Docs
+ Title: AzCopy v10 configuration setting (Azure Storage)
description: This article provides reference information for AzCopy V10 configuration settings.
storage Storage Ref Azcopy Doc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-doc.md
Title: azcopy doc | Microsoft Docs
+ Title: azcopy doc
description: This article provides reference information for the azcopy doc command.
storage Storage Ref Azcopy Env https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-env.md
Title: azcopy env | Microsoft Docs
+ Title: azcopy env
description: This article provides reference information for the azcopy env command.
storage Storage Ref Azcopy Jobs Clean https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-jobs-clean.md
Title: azcopy jobs clean | Microsoft Docs
+ Title: azcopy jobs clean
description: This article provides reference information for the azcopy jobs clean command.
storage Storage Ref Azcopy Jobs List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-jobs-list.md
Title: azcopy jobs list | Microsoft Docs
+ Title: azcopy jobs list
description: This article provides reference information for the azcopy jobs list command.
storage Storage Ref Azcopy Jobs Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-jobs-remove.md
Title: azcopy jobs remove | Microsoft Docs
+ Title: azcopy jobs remove
description: This article provides reference information for the azcopy jobs remove command.
storage Storage Ref Azcopy Jobs Resume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-jobs-resume.md
Title: azcopy jobs resume | Microsoft Docs
+ Title: azcopy jobs resume
description: This article provides reference information for the azcopy jobs resume command.
storage Storage Ref Azcopy Jobs Show https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-jobs-show.md
Title: azcopy jobs show | Microsoft Docs
+ Title: azcopy jobs show
description: This article provides reference information for the azcopy jobs show command.
storage Storage Ref Azcopy Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-jobs.md
Title: azcopy jobs | Microsoft Docs
+ Title: azcopy jobs
description: This article provides reference information for the azcopy jobs command.
storage Storage Ref Azcopy List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-list.md
Title: azcopy list | Microsoft Docs
+ Title: azcopy list
description: This article provides reference information for the azcopy list command.
storage Storage Ref Azcopy Login Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-login-status.md
Title: azcopy login status | Microsoft Docs
+ Title: azcopy login status
description: This article provides reference information for the azcopy login status command.
storage Storage Ref Azcopy Login https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-login.md
Title: azcopy login | Microsoft Docs
+ Title: azcopy login
description: This article provides reference information for the azcopy login command.
storage Storage Ref Azcopy Logout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-logout.md
Title: azcopy logout | Microsoft Docs
+ Title: azcopy logout
description: This article provides reference information for the azcopy logout command.
storage Storage Ref Azcopy Make https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-make.md
Title: azcopy make | Microsoft Docs
+ Title: azcopy make
description: This article provides reference information for the azcopy make command.
storage Storage Ref Azcopy Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-remove.md
Title: azcopy remove | Microsoft Docs
+ Title: azcopy remove
description: This article provides reference information for the azcopy remove command.
storage Storage Ref Azcopy Set Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-set-properties.md
Title: azcopy set-properties | Microsoft Docs
+ Title: azcopy set-properties
description: This article provides reference information for the azcopy set-properties command.
storage Storage Ref Azcopy Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-sync.md
Title: azcopy sync | Microsoft Docs
+ Title: azcopy sync
description: This article provides reference information for the azcopy sync command.
storage Storage Ref Azcopy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy.md
Title: azcopy | Microsoft Docs
+ Title: azcopy
description: This article provides reference information for the azcopy command.
storage Storage Samples C Plus Plus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-c-plus-plus.md
Title: Azure Storage samples using C++ | Microsoft Docs
+ Title: Azure Storage samples using C++
description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the C++ storage client libraries.
storage Storage Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-dotnet.md
Title: Azure Storage samples using .NET | Microsoft Docs
+ Title: Azure Storage samples using .NET
description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the .NET storage client libraries.
storage Storage Samples Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-java.md
Title: Azure Storage samples using Java | Microsoft Docs
+ Title: Azure Storage samples using Java
description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the Java storage client libraries.
storage Storage Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-javascript.md
Title: Azure Storage samples using JavaScript | Microsoft Docs
+ Title: Azure Storage samples using JavaScript
description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the JavaScript/Node.js storage client libraries.
storage Storage Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-python.md
Title: Azure Storage samples using Python | Microsoft Docs
+ Title: Azure Storage samples using Python
description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the Python storage client libraries.
storage Storage Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples.md
Title: Azure Storage code samples | Microsoft Docs
+ Title: Azure Storage code samples
description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the .NET, Java, Python, Node.js, Azure CLI, and C++ storage client libraries.
To explore the Azure CLI samples, first [Install the Azure CLI](/cli/azure/insta
|-||-| | .NET | [.NET Client Library Reference](/dotnet/api/overview/azure/storage) | [Source code for the .NET storage client library](https://github.com/Azure/azure-storage-net) | | Java | [Java Client Library Reference](/java/api/overview/azure/storage) | [Source code for the Java storage client library](https://github.com/azure/azure-storage-java) |
-| Python | [Python Client Library Reference](https://azure-storage.readthedocs.io/) | [Source code for the Python storage client library](https://github.com/Azure/azure-storage-python) |
+| Python | [Python Client Library Reference](/python/api/overview/azure/storage) | [Source code for the Python storage client library](https://github.com/Azure/azure-storage-python) |
| Node.js | [Node.js Client Library Reference](https://azure.github.io/azure-storage-node) | [Source code for the Node.js storage client library](https://github.com/Azure/azure-storage-node) | | C++ | [C++ Client Library Reference](https://azure.github.io/azure-sdk-for-cpp/) | [Source code for the C++ storage client library](https://github.com/Azure/azure-sdk-for-cpp/tree/master/sdk/storage)| | Azure CLI | [Azure CLI Library Reference](/cli/azure/storage) | [Source code for the Azure CLI storage client library](https://github.com/Azure-Samples/azure-cli-samples/tree/master/storage)
storage Storage Use Azcopy Authorize Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-authorize-azure-active-directory.md
Title: Authorize access to blobs with AzCopy & Azure Active Directory | Microsoft Docs
+ Title: Authorize access to blobs with AzCopy & Azure Active Directory
description: You can provide authorization credentials for AzCopy operations by using Azure Active Directory (Azure AD).
storage Storage Use Azcopy Blobs Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-copy.md
Title: Copy blobs between Azure storage accounts with AzCopy v10 | Microsoft Docs
+ Title: Copy blobs between Azure storage accounts with AzCopy v10
description: This article contains a collection of AzCopy example commands that help you copy blobs between storage accounts.
storage Storage Use Azcopy Blobs Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-download.md
Title: Download blobs from Azure Blob Storage by using AzCopy v10 | Microsoft Docs
+ Title: Download blobs from Azure Blob Storage by using AzCopy v10
description: This article contains a collection of AzCopy example commands that help you download blobs from Azure Blob Storage.
storage Storage Use Azcopy Blobs Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-properties-metadata.md
Title: Replace Azure Blob Storage properties & metadata with AzCopy (preview) | Microsoft Docs
+ Title: Replace Azure Blob Storage properties & metadata with AzCopy (preview)
description: This article contains a collection of AzCopy example commands that help you set properties and metadata.
storage Storage Use Azcopy Blobs Synchronize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-synchronize.md
Title: Synchronize with Azure Blob storage by using AzCopy v10 | Microsoft Docs
+ Title: Synchronize with Azure Blob storage by using AzCopy v10
description: This article contains a collection of AzCopy example commands that help you synchronize with Azure Blob storage.
storage Storage Use Azcopy Blobs Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-upload.md
Title: Upload files to Azure Blob storage by using AzCopy v10 | Microsoft Docs
+ Title: Upload files to Azure Blob storage by using AzCopy v10
description: This article contains a collection of AzCopy example commands that help you upload files to Azure Blob storage.
storage Storage Use Azcopy Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-configure.md
Title: Find errors & resume jobs with logs in AzCopy (Azure Storage) | Microsoft Docs
+ Title: Find errors & resume jobs with logs in AzCopy (Azure Storage)
description: Learn how to use logs to diagnose errors, and to resume jobs that are paused by using plan files.
storage Elastic San Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-introduction.md
description: An overview of Azure Elastic SAN (preview), a service that enables
Previously updated : 10/12/2022 Last updated : 11/17/2022
Elastic SAN simplifies deploying and managing storage at scale through grouping
### Performance
-With an Elastic SAN, it's possible to scale your performance up to millions of IOPS, with double-digit GB/s throughput, and have single-digit millisecond latency. The performance of a SAN is shared across all of its volumes, as long as the SAN's caps aren't exceeded and the volumes are large enough, each volume can scale up to 64,000 IOPs. Elastic SAN volumes connect to your clients using the [iSCSI](https://en.wikipedia.org/wiki/ISCSI) protocol, which allows them to bypass the IOPS limit of an Azure VM and offers high throughput limits.
+With an Elastic SAN, it's possible to scale your performance up to millions of IOPS, with double-digit GB/s throughput, and have single-digit millisecond latency. The performance of a SAN is shared across all of its volumes. As long as the SAN's caps aren't exceeded and the volumes are large enough, each volume can scale up to 64,000 IOPs. Elastic SAN volumes connect to your clients using the [iSCSI](https://en.wikipedia.org/wiki/ISCSI) protocol, which allows them to bypass the IOPS limit of an Azure VM and offers high throughput limits.
### Cost optimization and consolidation
storage Elastic San Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md
description: Understand planning for an Azure Elastic SAN deployment. Learn abou
Previously updated : 10/12/2022 Last updated : 11/17/2022
Using the same example of a 100 TiB SAN that has 250,000 IOPS and 4,000 MB/s. Sa
## Networking
-In Preview, Elastic SAN supports public endpoint from selected virtual network, restricting access to specified virtual networks. You configure volume groups to allow network access only from specific vnet subnets. Once a volume group is configured to allow access from a subnet, this configuration is inherited by all volumes belonging to the volume group. You can then mount volumes from any clients in the subnet, with the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol. Note that you need to first enable [service point for Azure Storage] (../../virtual-network/virtual-network-service-endpoints-overview.md) in your virtual network before setting up the network rule on volume group.
+In Preview, Elastic SAN supports a public endpoint with access restricted to selected virtual networks. You configure volume groups to allow network access only from specific virtual network subnets. Once a volume group is configured to allow access from a subnet, this configuration is inherited by all volumes in that volume group. You can then mount volumes from any client in the subnet by using the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol. You must enable the [service endpoint for Azure Storage](../../virtual-network/virtual-network-service-endpoints-overview.md) in your virtual network before setting up the network rule on the volume group.
## Redundancy
storage Storage Files Active Directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-active-directory-overview.md
Title: Overview - Azure Files identity-based authorization
+ Title: Overview - Azure Files identity-based authentication
description: Azure Files supports identity-based authentication over SMB (Server Message Block) with Active Directory Domain Services (AD DS), Azure Active Directory Domain Services (Azure AD DS), and Azure Active Directory (Azure AD) Kerberos for hybrid identities. Previously updated : 11/09/2022 Last updated : 11/18/2022 # Overview of Azure Files identity-based authentication options for SMB access
-This article focuses on how Azure file shares can use domain services, either on-premises or in Azure, to support identity-based access to Azure file shares over SMB. Enabling identity-based access for your Azure file shares allows you to replace existing file servers with Azure file shares without replacing your existing directory service, maintaining seamless user access to shares.
+This article explains how Azure file shares can use domain services, either on-premises or in Azure, to support identity-based access to Azure file shares over SMB. Enabling identity-based access for your Azure file shares allows you to replace existing file servers with Azure file shares without replacing your existing directory service, maintaining seamless user access to shares.
## Applies to | File share type | SMB | NFS |
It's helpful to understand some key terms relating to identity-based authenticat
- **On-premises Active Directory Domain Services (AD DS)**
- On-premises Active Directory Domain Services (AD DS) integration with Azure Files provides the methods for storing directory data while making it available to network users and administrators. Security is integrated with AD DS through logon authentication and access control to objects in the directory. With a single network logon, administrators can manage directory data and organization throughout their network, and authorized network users can access resources anywhere on the network. AD DS is commonly adopted by enterprises in on-premises environments and AD DS credentials are used as the identity for access control. For more information, see [Active Directory Domain Services Overview](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview).
+ On-premises Active Directory Domain Services (AD DS) integration with Azure Files provides the methods for storing directory data while making it available to network users and administrators. Security is integrated with AD DS through logon authentication and access control to objects in the directory. With a single network logon, administrators can manage directory data and organization throughout their network, and authorized network users can access resources anywhere on the network. AD DS is commonly adopted by enterprises in on-premises environments, and AD DS credentials are used for access control. For more information, see [Active Directory Domain Services Overview](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview).
- **Azure role-based access control (Azure RBAC)**
It's helpful to understand some key terms relating to identity-based authenticat
- **Hybrid identities**
- [Hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) are identities in AD DS that are synced to Azure AD using Azure AD Connect.
+ [Hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) are identities in AD DS that are synced to Azure AD using either the on-premises [Azure AD Connect sync](../../active-directory/hybrid/whatis-azure-ad-connect.md) application or [Azure AD Connect cloud sync](../../active-directory/cloud-sync/what-is-cloud-sync.md), a lightweight agent that can be installed from the Azure Active Directory Admin Center.
+
+## Supported authentication scenarios
+
+Azure Files supports identity-based authentication for Windows file shares over SMB through the following three methods. You can only use one method per storage account.
+
+- **On-premises AD DS authentication:** On-premises AD DS-joined or Azure AD DS-joined Windows machines can access Azure file shares over SMB with on-premises Active Directory credentials that are synced to Azure AD. Your client must have line of sight to your AD DS. If you already have AD DS set up on-premises or on a VM in Azure where your devices are domain-joined to your AD, you should use AD DS for Azure file share authentication.
+- **Azure AD DS authentication:** Cloud-based, Azure AD DS-joined Windows VMs can access Azure file shares with Azure AD credentials. In this solution, Azure AD runs a traditional Windows Server AD domain on behalf of the customer, which is a child of the customer's Azure AD tenant.
+- **Azure AD Kerberos for hybrid identities:** Using Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure file shares using Kerberos authentication. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. Cloud-only identities aren't currently supported.
+
+## Restrictions
+
+- None of the authentication methods support assigning share-level permissions to computer accounts (machine accounts) using Azure RBAC, because computer accounts can't be synced to Azure AD. If you want to allow a computer account to access Azure file shares using identity-based authentication, [use a default share-level permission](storage-files-identity-ad-ds-assign-permissions.md#share-level-permissions-for-all-authenticated-identities) or consider using a service logon account instead.
+- Neither on-premises AD DS authentication nor Azure AD DS authentication is supported against Azure AD-joined devices or Azure AD-registered devices.
+- Identity-based authentication isn't supported with Network File System (NFS) shares.
## Common use cases
Identity-based authentication with Azure Files can be useful in a variety of sce
### Replace on-premises file servers
-Deprecating and replacing scattered on-premises file servers is a common problem that every enterprise encounters in their IT modernization journey. Azure file shares with on-premises AD DS authentication is the best fit here, when you can migrate the data to Azure Files. A complete migration will allow you to take advantage of the high availability and scalability benefits while also minimizing the client-side changes. It provides a seamless migration experience to end users, so they can continue to access their data with the same credentials using their existing domain joined machines.
+Deprecating and replacing scattered on-premises file servers is a common problem that every enterprise encounters in their IT modernization journey. Azure file shares with on-premises AD DS authentication is the best fit here, when you can migrate the data to Azure Files. A complete migration will allow you to take advantage of the high availability and scalability benefits while also minimizing the client-side changes. It provides a seamless migration experience to end users, so they can continue to access their data with the same credentials using their existing domain-joined machines.
### Lift and shift applications to Azure
When you lift and shift applications to the cloud, you want to keep the same aut
If you're keeping your primary file storage on-premises, Azure file shares can serve as an ideal storage for backup or DR, to improve business continuity. You can use Azure file shares to back up your data from existing file servers while preserving Windows DACLs. For DR scenarios, you can configure an authentication option to support proper access control enforcement at failover.
-## Supported scenarios
-
-This section summarizes the supported Azure file shares authentication scenarios for Azure AD DS, on-premises AD DS, and Azure AD Kerberos for hybrid identities. We recommend selecting the domain service that you adopted for your client environment for integration with Azure Files. If you have AD DS already set up on-premises or on a VM in Azure where your devices are domain-joined to your AD, you should use AD DS for Azure file shares authentication.
--- **On-premises AD DS authentication:** On-premises AD DS-joined or Azure AD DS-joined Windows machines can access Azure file shares with on-premises Active Directory credentials that are synched to Azure AD over SMB. Your client must have line of sight to your AD DS.-- **Azure AD DS authentication:** Azure AD DS-joined Windows machines can access Azure file shares with Azure AD credentials over SMB. -- **Azure AD Kerberos for hybrid identities:** Using Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure file shares using Kerberos authentication. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. You can also use this feature to store FSLogix profiles on Azure file shares for Azure AD-joined VMs. For more information, see [Create a profile container with Azure Files and Azure Active Directory](../../virtual-desktop/create-profile-container-azure-ad.md).-
-### Restrictions
--- None of the authentication methods support assigning share-level permissions to computer accounts (machine accounts) using Azure RBAC, because computer accounts can't be synced to Azure AD. If you want to allow a computer account to access Azure file shares using identity-based authentication, [use a default share-level permission](storage-files-identity-ad-ds-assign-permissions.md#share-level-permissions-for-all-authenticated-identities) or consider using a service logon account instead.-- Neither on-premises AD DS authentication nor Azure AD DS authentication is supported against Azure AD-joined devices or Azure AD-registered devices.-- Identity-based authentication isn't supported with Network File System (NFS) shares.- ## Advantages of identity-based authentication Identity-based authentication for Azure Files offers several benefits over using Shared Key authentication:
Identity-based authentication for Azure Files offers several benefits over using
## How it works
-Azure file shares use the Kerberos protocol to authenticate with an AD source. When an identity associated with a user or application running on a client attempts to access data in Azure file shares, the request is sent to the AD source to authenticate the identity. If authentication is successful, it returns a Kerberos token. The client sends a request that includes the Kerberos token, and Azure file shares use that token to authorize the request. Azure file shares only receive the Kerberos token, not access credentials.
+Azure file shares use the Kerberos protocol to authenticate with an AD source. When an identity associated with a user or application running on a client attempts to access data in Azure file shares, the request is sent to the AD source to authenticate the identity. If authentication is successful, it returns a Kerberos token. The client sends a request that includes the Kerberos token, and Azure file shares use that token to authorize the request. Azure file shares only receive the Kerberos token, not the user's access credentials.
-Before you can enable identity-based authentication on your storage account, you must first set up your domain environment.
+You can enable identity-based authentication on your new and existing storage accounts using one of three AD sources: AD DS, Azure AD DS, or Azure AD Kerberos for hybrid identities. Only one AD source can be used for file access authentication on the storage account, which applies to all file shares in the account. Before you can enable identity-based authentication on your storage account, you must first set up your domain environment.
### AD DS
-For on-premises AD DS authentication, you must set up your AD domain controllers and domain join your machines or VMs. You can host your domain controllers on Azure VMs or on-premises. Either way, your domain-joined clients must have line of sight to the domain service, so they must be within the corporate network or virtual network (VNET) of your domain service.
+For on-premises AD DS authentication, you must set up your AD domain controllers and domain-join your machines or VMs. You can host your domain controllers on Azure VMs or on-premises. Either way, your domain-joined clients must have line of sight to the domain controller, so they must be within the corporate network or virtual network (VNET) of your domain service.
The following diagram depicts on-premises AD DS authentication to Azure file shares over SMB. The on-premises AD DS must be synced to Azure AD using Azure AD Connect sync or Azure AD Connect cloud sync. Only [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) that exist in both on-premises AD DS and Azure AD can be authenticated and authorized for Azure file share access. This is because the share-level permission is configured against the identity represented in Azure AD, whereas the directory/file-level permission is enforced with that in AD DS. Make sure that you configure the permissions correctly against the same hybrid user. :::image type="content" source="media/storage-files-active-directory-overview/Files-on-premises-AD-DS-Diagram.png" alt-text="Diagram that depicts on-premises AD DS authentication to Azure file shares over SMB.":::
+To learn how to enable AD DS authentication, see [Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md).
+ ### Azure AD DS For Azure AD DS authentication, you should enable Azure AD DS and domain-join the VMs you plan to access file data from. Your domain-joined VM must reside in the same virtual network (VNET) as your Azure AD DS.
-The following diagram represents the workflow for Azure AD DS authentication to Azure file shares over SMB. It follows a similar pattern to on-premises AD DS authentication to Azure file shares. There are two major differences:
+The following diagram represents the workflow for Azure AD DS authentication to Azure file shares over SMB. It follows a similar pattern to on-premises AD DS authentication, but there are two major differences:
1. You don't need to create the identity in Azure AD DS to represent the storage account. This is performed by the enablement process in the background.
-2. All users that exist in Azure AD can be authenticated and authorized. The user can be cloud-only or hybrid. The sync from Azure AD to Azure AD DS is managed by the platform without requiring any user configuration. However, the client must be domain-joined to Azure AD DS. It can't be Azure AD joined or registered.
+2. All users that exist in Azure AD can be authenticated and authorized. The user can be cloud-only or hybrid. The sync from Azure AD to Azure AD DS is managed by the platform without requiring any user configuration. However, the client must be joined to the Azure AD DS hosted domain. It can't be Azure AD joined or registered. Azure AD DS doesn't support joining non-cloud machines (for example, user laptops, workstations, or VMs in other clouds) to the Azure AD DS hosted domain.
:::image type="content" source="media/storage-files-active-directory-overview/Files-Azure-AD-DS-Diagram.png" alt-text="Diagram":::
-### Azure AD Kerberos for hybrid identities
-
-Enabling and configuring Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring Windows ACLs and permissions might require line-of-sight to the domain controller.
+To learn how to enable Azure AD DS authentication, see [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md).
-For more information, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md).
+### Azure AD Kerberos for hybrid identities
-## Access control
-Azure Files enforces authorization on user access to both the share and the directory/file levels. Share-level permission assignment can be performed on Azure AD users or groups managed through Azure RBAC. With Azure RBAC, the credentials you use for file access should be available or synced to Azure AD. You can assign Azure built-in roles like Storage File Data SMB Share Reader to users or groups in Azure AD to grant access to an Azure file share.
+Enabling and configuring Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring directory and file-level permissions for a user or group requires line-of-sight to the on-premises domain controller. You can also use this feature to store FSLogix profiles on Azure file shares for Azure AD-joined VMs. For more information, see [Create a profile container with Azure Files and Azure Active Directory](../../virtual-desktop/create-profile-container-azure-ad.md).
-At the directory/file level, Azure Files supports preserving, inheriting, and enforcing [Windows ACLs](/windows/win32/secauthz/access-control-lists) just like any Windows file servers. You can choose to keep Windows ACLs when copying data over SMB between your existing file share and your Azure file shares. Whether you plan to enforce authorization or not, you can use Azure file shares to back up ACLs along with your data.
+> [!IMPORTANT]
+> Azure AD Kerberos authentication only supports hybrid user identities; it doesn't support cloud-only identities. A traditional AD DS deployment is required, and it must be synced to Azure AD using Azure AD Connect sync or Azure AD Connect cloud sync.
-### Enable identity-based authentication
+To learn how to enable Azure AD Kerberos authentication for hybrid identities, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md).
-You can enable identity-based authentication on your new and existing storage accounts using one of three AD sources: AD DS, Azure AD DS, and Azure AD Kerberos for hybrid identities. Only one AD source can be used for file access authentication on the storage account, which applies to all file shares in the account.
+## Access control
+Azure Files enforces authorization on user access at both the share level and the directory/file level. Share-level permissions can be assigned to Azure AD users or groups managed through Azure RBAC. With Azure RBAC, the credentials you use for file access should be available or synced to Azure AD. You can assign Azure built-in roles like **Storage File Data SMB Share Reader** to users or groups in Azure AD to grant access to an Azure file share.
-To learn how to enable on-premises Active Directory Domain Services authentication for Azure file shares, see [Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md).
+At the directory/file level, Azure Files supports preserving, inheriting, and enforcing [Windows ACLs](/windows/win32/secauthz/access-control-lists) just like any Windows file server. You can choose to keep Windows ACLs when copying data over SMB between your existing file share and your Azure file shares. Whether you plan to enforce authorization or not, you can use Azure file shares to back up ACLs along with your data.
-To learn how to enable Azure AD DS authentication for Azure file shares, see [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md).
+### Configure share-level permissions for Azure Files
-To learn how to enable Azure Active Directory (Azure AD) Kerberos authentication for hybrid identities, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md).
+Once you've enabled an AD source on your storage account, you can do one of the following:
-### Configure share-level permissions for Azure Files
+- Use a default share-level permission
+- Assign built-in Azure RBAC roles
+- Configure custom roles for Azure AD identities and assign access rights to file shares in your storage account.
-Once you've enabled an AD source on your storage account, you can use Azure built-in RBAC roles, or configure custom roles for Azure AD identities and assign access rights to any file shares in your storage accounts. The assigned permission allows the granted identity to get access to the share only, nothing else, not even the root directory. You still need to separately configure directory and file-level permissions for Azure file shares.
+The assigned share-level permission allows the granted identity to get access to the share only, nothing else, not even the root directory. You still need to separately configure directory and file-level permissions.
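As a minimal sketch of a share-level assignment (subscription, resource group, storage account, share, and group object ID are placeholders), a built-in role can be scoped to a single file share:
```powershell
# Grant an Azure AD group read access to one file share by assigning a built-in role at the share scope.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"

New-AzRoleAssignment -ObjectId "<aad-group-object-id>" `
    -RoleDefinitionName "Storage File Data SMB Share Reader" `
    -Scope $scope
```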
### Configure directory or file-level permissions for Azure Files
-Azure file shares enforce standard Windows file permissions at both the directory and file level, including the root directory. Configuration of directory or file-level permissions is supported over both SMB and REST. Mount the target file share from your VM and configure permissions using Windows File Explorer, Windows [icacls](/windows-server/administration/windows-commands/icacls), or the [Set-ACL](/powershell/module/microsoft.powershell.security/get-acl) command.
+Azure file shares enforce standard Windows ACLs at both the directory and file level, including the root directory. Configuration of directory or file-level permissions is supported over both SMB and REST. Mount the target file share from your VM and configure permissions using Windows File Explorer, Windows [icacls](/windows-server/administration/windows-commands/icacls), or the [Set-ACL](/powershell/module/microsoft.powershell.security/get-acl) command.
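For example, assuming the share is already mounted as drive `Z:` and `CONTOSO\user1` is a placeholder identity, directory-level rights could be granted with either tool:
```powershell
# Option 1: icacls - grant Modify rights that inherit to subfolders (CI) and files (OI).
icacls "Z:\shared\finance" /grant "CONTOSO\user1:(OI)(CI)M"

# Option 2: Get-Acl/Set-Acl - build an inheritable access rule and write it back to the directory.
$acl  = Get-Acl -Path "Z:\shared\finance"
$rule = [System.Security.AccessControl.FileSystemAccessRule]::new(
    "CONTOSO\user1", "Modify", "ContainerInherit,ObjectInherit", "None", "Allow")
$acl.AddAccessRule($rule)
Set-Acl -Path "Z:\shared\finance" -AclObject $acl
```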
### Use the storage account key for superuser permissions
A user with the storage account key can access Azure file shares with superuser
### Preserve directory and file ACLs when importing data to Azure file shares
-Azure Files supports preserving directory or file level ACLs when copying data to Azure file shares. You can copy ACLs on a directory or file to Azure file shares using either Azure File Sync or common file movement toolsets. For example, you can use [robocopy](/windows-server/administration/windows-commands/robocopy) with the `/copy:s` flag to copy data as well as ACLs to an Azure file share. ACLs are preserved by default, you are not required to enable identity-based authentication on your storage account to preserve ACLs.
+Azure Files supports preserving directory or file level ACLs when copying data to Azure file shares. You can copy ACLs on a directory or file to Azure file shares using either Azure File Sync or common file movement toolsets. For example, you can use [robocopy](/windows-server/administration/windows-commands/robocopy) with the `/copy:s` flag to copy data as well as ACLs to an Azure file share. ACLs are preserved by default, so you don't need to enable identity-based authentication on your storage account to preserve ACLs.
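A minimal sketch of such a copy (the source path and drive letter are placeholders; the copy flags are spelled out so the security descriptor is included explicitly alongside the data):
```powershell
# Mirror a local folder to a mounted Azure file share.
# /copy:DATS copies Data, Attributes, Timestamps, and the NTFS Security descriptor (ACLs).
robocopy "D:\data" "Z:\data" /mir /copy:DATS
```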
## Pricing
-There is no additional service charge to enable identity-based authentication over SMB on your storage account. For more information on pricing, see [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) and [Azure AD Domain Services pricing](https://azure.microsoft.com/pricing/details/active-directory-ds/).
+There's no additional service charge to enable identity-based authentication over SMB on your storage account. For more information on pricing, see [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) and [Azure AD Domain Services pricing](https://azure.microsoft.com/pricing/details/active-directory-ds/).
## Next steps For more information about Azure Files and identity-based authentication over SMB, see these resources: - [Planning for an Azure Files deployment](storage-files-planning.md)-- [Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md)
+- [Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md)
- [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md) - [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md) - [FAQ](storage-files-faq.md)
synapse-analytics Get Started Analyze Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-data-explorer.md
Title: 'Quickstart: Get started analyzing with Data Explorer pools (Preview)' description: In this quickstart, you'll learn to analyze data with Data Explorer. Previously updated : 09/30/2021 Last updated : 11/18/2022
synapse-analytics Get Started Analyze Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-spark.md
Previously updated : 10/10/2022 Last updated : 11/18/2022 # Analyze with Apache Spark
Data is available via the dataframe named **df**. Load it into a Spark database
```py %%pyspark df = spark.sql("""
- SELECT PassengerCount,
- SUM(TripDistanceMiles) as SumTripDistance,
- AVG(TripDistanceMiles) as AvgTripDistance
+ SELECT passenger_count,
+ SUM(trip_distance) as SumTripDistance,
+ AVG(trip_distance) as AvgTripDistance
FROM nyctaxi.trip
- WHERE TripDistanceMiles > 0 AND PassengerCount > 0
- GROUP BY PassengerCount
- ORDER BY PassengerCount
+ WHERE trip_distance > 0 AND passenger_count > 0
+ GROUP BY passenger_count
+ ORDER BY passenger_count
""") display(df) df.write.saveAsTable("nyctaxi.passengercountstats")
Data is available via the dataframe named **df**. Load it into a Spark database
1. In the cell results, select **Chart** to see the data visualized. - ## Next steps > [!div class="nextstepaction"]
synapse-analytics Get Started Analyze Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-on-demand.md
Previously updated : 02/02/2022 Last updated : 11/18/2022 # Analyze data with a serverless SQL pool
synapse-analytics Get Started Analyze Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-pool.md
Previously updated : 10/07/2022 Last updated : 11/18/2022
synapse-analytics Get Started Analyze Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-storage.md
Previously updated : 12/31/2020 Last updated : 11/18/2022 # Analyze data in a storage account
synapse-analytics Get Started Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-create-workspace.md
Previously updated : 03/17/2021 Last updated : 11/18/2022 # Creating a Synapse workspace
Select **Review + create** > **Create**. Your workspace is ready in a few minute
> [!NOTE] > To enable workspace features from an existing dedicated SQL pool (formerly SQL DW) refer to [How to enable a workspace for your dedicated SQL pool (formerly SQL DW)](./sql-data-warehouse/workspace-connected-create.md). - ## Open Synapse Studio After your Azure Synapse workspace is created, you have two ways to open Synapse Studio:
After your Azure Synapse workspace is created, you have two ways to open Synapse
> [!NOTE] > To sign into your workspace, there are two **Account selection methods**. One is from **Azure subscription**, the other is from **Enter manually**. If you have the Synapse Azure role or higher level Azure roles, you can use both methods to log into the workspace. If you don't have the related Azure roles, and you were granted a Synapse RBAC role, **Enter manually** is the only way to log into the workspace. To learn more about the Synapse RBAC, refer to [What is Synapse role-based access control (RBAC)](./security/synapse-workspace-synapse-rbac.md). - ## Place sample data into the primary storage account We are going to use a small 100K row sample dataset of NYC Taxi Cab data for many examples in this getting started guide. We begin by placing it in the primary storage account you created for the workspace.
-* Download this file to your computer: https://azuresynapsestorage.blob.core.windows.net/sampledata/NYCTaxiSmall/NYCTripSmall.parquet
-* In Synapse Studio, navigate to the Data Hub.
+* Download the [NYC Taxi - green trip dataset](/open-datasets/dataset-taxi-green.md?tabs=azureml-opendatasets#additional-information) to your computer. From that page, go to the [original dataset location](https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page), choose a specific year, and download the Green taxi trip records in Parquet format.
+* Rename the downloaded file to *NYCTripSmall.parquet*.
+* In Synapse Studio, navigate to the **Data** Hub.
* Select **Linked**. * Under the category **Azure Data Lake Storage Gen2** you'll see an item with a name like **myworkspace ( Primary - contosolake )**. * Select the container named **users (Primary)**.
Once the parquet file is uploaded it is available through two equivalent URIs:
In the examples that follow in this tutorial, make sure to replace **contosolake** in the UI with the name of the primary storage account that you selected for your workspace. -- ## Next steps > [!div class="nextstepaction"]
virtual-desktop Create Host Pools Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-azure-marketplace.md
You can create host pools in the following Azure regions:
- West US - West US 2
+>[!IMPORTANT]
+>These are the regions where the host pool _metadata_ is stored. The virtual machines (hosts) in a host pool can be located in any region, or even [on-premises](azure-stack-hci-overview.md).
+ ## Prerequisites Before you can create a host pool, make sure you've completed the prerequisites. For more information, see [Prerequisites for Azure Virtual Desktop](prerequisites.md).
virtual-machine-scale-sets Virtual Machine Scale Sets Instance Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-instance-ids.md
Title: Understand instance IDs for Azure virtual machine scale set VMs
-description: Understand instance IDs for Azure virtual machine scale sets virtual machines and the various ways that they surface.
+ Title: Understand instance IDs for Azure Virtual Machine Scale Set VMs
+description: Understand instance IDs for Azure Virtual Machine Scale Set virtual machines and the various ways that they surface.
-# Understand instance IDs for Azure virtual machine scale set VMs
+# Understand names and instance IDs for Azure Virtual Machine Scale Set VMs
-> [!NOTE]
-> This article focuses on virtual machine scale sets running in Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
-This article describes instance IDs for scale sets and the various ways they surface.
+Each VM in a scale set gets a name and an instance ID that uniquely identify it. These are used in the scale set APIs to perform operations on a specific VM in the scale set. This article describes instance IDs for scale sets and the various ways they surface.
-## Scale set instance IDs
+## Scale set VM names
-Each VM in a scale set gets an instance ID that uniquely identifies it. This instance ID is used in the scale set APIs to do operations on a specific VM in the scale set. For instance, you can specify a specific instance ID to reimage when using the reimage API:
+Virtual Machine Scale Sets will generate a unique name for each VM in the scale set. The naming convention differs by orchestration mode:
-REST API: `POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/reimage?api-version={apiVersion}` (for more information, see the [REST API documentation](/rest/api/compute/virtualmachinescalesetvms/reimage))
+* Flexible orchestration mode: `{scale-set-name}_{8-char-guid}`
+* Uniform orchestration mode: `{scale-set-name}_{instance-id}`
-PowerShell: `Set-AzVmssVM -ResourceGroupName {resourceGroupName} -VMScaleSetName {vmScaleSetName} -InstanceId {instanceId} -Reimage` (for more information, see the [PowerShell documentation](/powershell/module/az.compute/set-azvmssvm))
+## Scale set instance ID for Flexible Orchestration Mode
-CLI: `az vmss reimage -g {resourceGroupName} -n {vmScaleSetName} --instance-id {instanceId}` (for more information, see the [CLI documentation](/cli/azure/vmss)).
+For Virtual Machine Scale Sets in Flexible Orchestration mode, the instance ID is simply the name of the virtual machine.
-You can get the list of instance IDs by listing all instances in a scale set:
+## Scale set instance ID for Uniform Orchestration Mode
-REST API: `GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualMachines?api-version={apiVersion}` (for more information, see the [REST API documentation](/rest/api/compute/virtualmachinescalesetvms/list))
+For scale sets in Uniform orchestration mode, the instance ID is a decimal number. Instance IDs may be reused for new instances once old instances are deleted.
-PowerShell: `Get-AzVmssVM -ResourceGroupName {resourceGroupName} -VMScaleSetName {vmScaleSetName}` (for more information, see the [PowerShell documentation](/powershell/module/az.compute/get-azvmssvm))
+>[!NOTE]
+> There is **no guarantee** on the way instance IDs are assigned to the VMs in the scale set. They might seem sequentially increasing at times, but this is not always the case. Do not take a dependency on the specific way in which instance IDs are assigned to the VMs.
-CLI: `az vmss list-instances -g {resourceGroupName} -n {vmScaleSetName}` (for more information, see the [CLI documentation](/cli/azure/vmss)).
+You can get the list of instance IDs by listing all instances in a scale set.
-You can also use [resources.azure.com](https://resources.azure.com) or the [Azure SDKs](https://azure.microsoft.com/downloads/) to list the VMs in a scale set.
+### REST API
+For more information, see the [REST API documentation](/rest/api/compute/virtualmachinescalesetvms/list).
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualMachines?api-version={apiVersion}
+```
-The exact presentation of the output depends on the options you provide to the command, but here is some sample output from the CLI:
+You can also reimage a specific VM by passing its instance ID to the reimage API. For more information, see the [REST API documentation](/rest/api/compute/virtualmachinescalesetvms/reimage).
-```azurecli
-az vmss show -g {resourceGroupName} -n {vmScaleSetName}
+```http
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/reimage?api-version={apiVersion}
```
-```output
-[
- {
- "instanceId": "85",
- "latestModelApplied": true,
- "location": "westus",
- "name": "nsgvmss_85",
- .
- .
- .
+### PowerShell
+For more information, see the [PowerShell documentation](/powershell/module/az.compute/get-azvmssvm).
+
+```powershell
+Get-AzVmssVM -ResourceGroupName {resourceGroupName} -VMScaleSetName {vmScaleSetName}
```
-As you can see, the "instanceId" property is just a decimal number. The instance IDs may be reused for new instances once old instances are deleted.
+You can also reimage a specific VM by passing its instance ID. For more information, see the [PowerShell documentation](/powershell/module/az.compute/set-azvmssvm).
->[!NOTE]
-> There is **no guarantee** on the way instance IDs are assigned to the VMs in the scale set. They might seem sequentially increasing at times, but this is not always the case. Do not take a dependency on the specific way in which instance IDs are assigned to the VMs.
+```powershell
+Set-AzVmssVM -ResourceGroupName {resourceGroupName} -VMScaleSetName {vmScaleSetName} -InstanceId {instanceId} -Reimage
+```
-## Scale set VM names
-In the sample output above, there is also a "name" for the VM. This name takes the form "{scale-set-name}_{instance-id}". This name is the one that you see in the Azure portal when you list instances in a scale set:
+### CLI
+For more information, see the [CLI documentation](/cli/azure/vmss).
+```azurecli
+az vmss list-instances -g {resourceGroupName} -n {vmScaleSetName}
+```
-![Screenshot showing a list of instances in a virtual machine scale set in the Azure portal.](./media/virtual-machine-scale-sets-instance-ids/vmssInstances.png)
+You can also reimage a specific VM by passing its instance ID. For more information, see the [CLI documentation](/cli/azure/vmss).
+
+```azurecli
+az vmss reimage -g {resourceGroupName} -n {vmScaleSetName} --instance-id {instanceId}
+```
-The {instance-id} part of the name is the same decimal number as the "instanceId" property discussed previously.
## Instance Metadata VM name + If you query the [instance metadata](../virtual-machines/windows/instance-metadata-service.md) from within a scale set VM, you see a "name" in the output: ```output
If you query the [instance metadata](../virtual-machines/windows/instance-metada
"compute": { "location": "westus", "name": "nsgvmss_85",
- .
- .
- .
```
-This name is the same as the name discussed previously.
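A quick way to see that value from inside the VM, as a sketch (the api-version is illustrative; use one supported in your environment):
```powershell
# Query the Instance Metadata Service for the compute section and print the VM name.
$compute = Invoke-RestMethod -Headers @{ Metadata = "true" } `
    -Uri "http://169.254.169.254/metadata/instance/compute?api-version=2021-02-01"
$compute.name
```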
+ ## Scale set VM computer name
-Each VM in a scale set also gets a computer name assigned to it. This computer name is the hostname of the VM in the [Azure-provided DNS name resolution within the virtual network](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md). This computer name is of the form "{computer-name-prefix}{base-36-instance-id}".
+Each VM in a scale set also gets a computer name assigned to it. This computer name is the hostname of the VM in the [Azure-provided DNS name resolution within the virtual network](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md). The computer name naming convention differs by orchestration mode:
-The {base-36-instance-id} is in [base 36](https://en.wikipedia.org/wiki/Base36) and is always six digits in length. If the base 36 representation of the number takes fewer than six digits, the {base-36-instance-id} is padded with zeros to make it six digits in length. For example, an instance with {computer-name-prefix} "nsgvmss" and instance ID 85 will have computer name "nsgvmss00002D".
+* Flexible orchestration mode: `{computer-name-prefix}{6-char-guid}`
+* Uniform orchestration mode: `{computer-name-prefix}{base-36-instance-id}`
->[!NOTE]
-> The computer name prefix is a property of the scale set model that you can set, so it can be different from the scale set name itself.
+The computer name prefix is a property of the scale set model that you can set, so it can be different from the scale set name itself. The scale set VM computer name can also be changed from inside the guest OS once the VM has been created.
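As an illustration of the Uniform-mode convention (the helper function below is hypothetical and simply pads the base-36 value to six characters), instance ID 85 with prefix `nsgvmss` yields `nsgvmss00002D`:
```powershell
# Hypothetical helper: build a Uniform-mode computer name from a prefix and a decimal instance ID.
function Get-UniformComputerName {
    param(
        [string]$ComputerNamePrefix,
        [int]$InstanceId
    )
    $digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    $value  = $InstanceId
    $base36 = ""
    do {
        $base36 = "$($digits[$value % 36])$base36"   # prepend the next base-36 digit
        $value  = [int][math]::Floor($value / 36)
    } while ($value -gt 0)
    "{0}{1}" -f $ComputerNamePrefix, $base36.PadLeft(6, [char]'0')
}

Get-UniformComputerName -ComputerNamePrefix "nsgvmss" -InstanceId 85   # nsgvmss00002D
```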
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
Title: Orchestration modes for virtual machine scale sets in Azure
-description: Learn how to use Flexible and Uniform orchestration modes for virtual machine scale sets in Azure.
+ Title: Orchestration modes for Virtual Machine Scale Sets in Azure
+description: Learn how to use Flexible and Uniform orchestration modes for Virtual Machine Scale Sets in Azure.
-# Orchestration modes for virtual machine scale sets in Azure
+# Orchestration modes for Virtual Machine Scale Sets in Azure
Virtual Machines Scale Sets provide a logical grouping of platform-managed virtual machines. With scale sets, you create a virtual machine configuration model, automatically add or remove additional instances based on CPU or memory load, and automatically upgrade to the latest OS version. Traditionally, scale sets allow you to create virtual machines using a VM configuration model provided at the time of scale set creation, and the scale set can only manage virtual machines that are implicitly created based on the configuration model.
Scale set orchestration modes allow you to have greater control over how virtual
## Scale sets with Uniform orchestration Optimized for large-scale stateless workloads with identical instances.
-Virtual machine scale sets with Uniform orchestration use a virtual machine profile or template to scale up to desired capacity. While there is some ability to manage or customize individual virtual machine instances, Uniform uses identical VM instances. Individual Uniform VM instances are exposed via the virtual machine scale set VM API commands. Individual instances are not compatible with the standard Azure IaaS VM API commands, Azure management features such as Azure Resource Manager resource tagging RBAC permissions, Azure Backup, or Azure Site Recovery. Uniform orchestration provides fault domain high availability guarantees when configured with fewer than 100 instances. Uniform orchestration is generally available and supports a full range of scale set management and orchestration, including metrics-based autoscaling, instance protection, and automatic OS upgrades.
+Virtual Machine Scale Sets with Uniform orchestration use a virtual machine profile or template to scale up to desired capacity. While there is some ability to manage or customize individual virtual machine instances, Uniform uses identical VM instances. Individual Uniform VM instances are exposed via the virtual machine scale set VM API commands. Individual instances aren't compatible with the standard Azure IaaS VM API commands, or with Azure management features such as Azure Resource Manager resource tagging, RBAC permissions, Azure Backup, or Azure Site Recovery. Uniform orchestration provides fault domain high availability guarantees when configured with fewer than 100 instances. Uniform orchestration is generally available and supports a full range of scale set management and orchestration, including metrics-based autoscaling, instance protection, and automatic OS upgrades.
## Scale sets with Flexible orchestration
With Flexible orchestration, Azure provides a unified experience across the Azur
- Open-Source databases - Stateful applications - Services that require High Availability and large scale-- Services that want to mix virtual machine types, or leverage Spot and on-demand VMs together
+- Services that want to mix virtual machine types or Spot and on-demand VMs together
- Existing Availability Set applications ## What has changed with Flexible orchestration mode? One of the main advantages of Flexible orchestration is that it provides orchestration features over standard Azure IaaS VMs, instead of scale set child virtual machines. This means you can use all of the standard VM APIs when managing Flexible orchestration instances, instead of the virtual machine scale set VM APIs you use with Uniform orchestration. There are several differences between managing instances in Flexible orchestration versus Uniform orchestration. In general, we recommend that you use the standard Azure IaaS VM APIs when possible. In this section, we highlight examples of best practices for managing VM instances with Flexible orchestration.
+Flexible orchestration mode can be used with all VM sizes. It provides the highest scale and configurability for VM sizes that support memory-preserving updates or live migration, such as the B, D, E, and F-series, or when the scale set is configured for maximum spreading between instances (`platformFaultDomainCount=1`). Currently, Flexible orchestration mode has additional constraints for VM sizes that don't support memory-preserving updates, including the G, H, L, M, and N-series VMs, when instances are spread across multiple fault domains. You can use the Compute Resource SKUs API to determine whether a specific VM SKU supports memory-preserving updates; a minimal query sketch follows the table below.
+
+
+| Feature | Memory Preserving Updates Supported **or** Scale set with Max Spreading (`platformFaultDomainCount=1`) | Memory Preserving Updates Not Supported **and** Fixed Spreading (`platformFaultDomainCount > 1`) |
+||||
+|Maximum Virtual Machine Scale Sets Instance Count | 1000 | 200 |
+| Mix operating systems | Yes | Yes |
+| Mix Spot and On-demand instances | Yes | No |
+| Mix General Purpose and Specialty SKU Types | Yes (`FDCount = 1`) | No |
+| Maximum Fault Domain Count | Regional – 3 (depending on the regional fault domain max count) <br> Zonal – 1 | Regional – 3 <br> Zonal – 1 |
+| Spread instances across zones | Yes | Yes |
+| Assign VM to a Specific Zone | Yes | Yes |
+| Assign VM to a Specific Fault domain | Yes | No |
+| Update Domains | No | No |
+| Single Placement Group | Optional. This will be set to false based on the first VM deployed | Optional. This will be set to true based on the first VM deployed |
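As the query sketch referenced above (the VM size is a placeholder, and the exact capability name that reflects memory-preserving update support should be confirmed against the Resource SKUs output):
```powershell
# List the capability flags that Azure reports for a given VM size,
# then inspect them for the capability that indicates memory-preserving update support.
$sku = Get-AzComputeResourceSku |
    Where-Object { $_.ResourceType -eq "virtualMachines" -and $_.Name -eq "Standard_D4s_v5" }
$sku.Capabilities | Format-Table Name, Value
```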
+ ### Scale out with standard Azure virtual machines
-Virtual machine scale sets in Flexible Orchestration mode manage standard Azure VMs. You have full control over the virtual machine lifecycle, as well as network interfaces and disks using the standard Azure APIs and commands. Virtual machines created with Uniform orchestration mode are exposed and managed via the virtual machine scale set VM API commands. Individual instances are not compatible with the standard Azure IaaS VM API commands, Azure management features such as Azure Resource Manager resource tagging RBAC permissions, Azure Backup, or Azure Site Recovery.
+Virtual Machine Scale Sets in Flexible Orchestration mode manage standard Azure VMs. You have full control over the virtual machine lifecycle, as well as network interfaces and disks using the standard Azure APIs and commands. Virtual machines created with Uniform orchestration mode are exposed and managed via the virtual machine scale set VM API commands. Individual instances aren't compatible with the standard Azure IaaS VM API commands, or with Azure management features such as Azure Resource Manager resource tagging, RBAC permissions, Azure Backup, or Azure Site Recovery.
### Assign fault domain during VM creation You can choose the number of fault domains for the Flexible orchestration scale set. By default, when you add a VM to a Flexible scale set, Azure evenly spreads instances across fault domains. While it is recommended to let Azure assign the fault domain, for advanced or troubleshooting scenarios you can override this default behavior and specify the fault domain where the instance will land.
Querying resources with [Azure Resource Graph](../governance/resource-graph/over
- Use the Get VM API and commands to get model and instance view for a single instance. ### Scale sets VM batch operations
-Use the standard VM commands to start, stop, restart, delete instances, instead of the Virtual Machine Scale Set VM APIs. The Virtual Machine Scale Set VM Batch operations (start all, stop all, reimage all, etc.) are not used with Flexible orchestration mode.
+Use the standard VM commands to start, stop, restart, delete instances, instead of the Virtual Machine Scale Set VM APIs. The Virtual Machine Scale Set VM Batch operations (start all, stop all, reimage all, etc.) aren't used with Flexible orchestration mode.
### Monitor application health Application health monitoring allows your application to provide Azure with a heartbeat to determine whether your application is healthy or unhealthy. Azure can automatically replace VM instances that are unhealthy. For Flexible scale set instances, you must install and configure the Application Health Extension on the virtual machine. For Uniform scale set instances, you can use either the Application Health Extension, or measure health with an Azure Load Balancer Custom Health Probe.
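A minimal sketch of installing that extension on one Flexible-mode VM (resource names and the health endpoint are placeholders; the publisher/type values assume the Windows variant of the extension):
```powershell
# Install the Application Health extension on a single VM and point it at an HTTP health endpoint.
$settings = @{ protocol = "http"; port = 80; requestPath = "/health" }

Set-AzVMExtension -ResourceGroupName "<resource-group>" -VMName "<vm-name>" `
    -Name "ApplicationHealth" `
    -Publisher "Microsoft.ManagedServices" `
    -ExtensionType "ApplicationHealthWindows" `
    -TypeHandlerVersion "1.0" `
    -Settings $settings
```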
Application health monitoring allows your application to provide Azure with a he
Virtual Machine Scale Sets allows you to list the instances that belong to the scale set. With Flexible orchestration, the list Virtual Machine Scale Sets VM command provides a list of scale sets VM IDs. You can then call the GET Virtual Machine Scale Sets VM commands to get more details on how the scale set is working with the VM instance. To get the full details of the VM, use the standard GET VM commands or [Azure Resource Graph](../governance/resource-graph/overview.md). ### Retrieve boot diagnostics data
-Use the standard VM APIs and commands to retrieve instance Boot Diagnostics data and screenshots. The Virtual Machine Scale Sets VM boot diagnostic APIs and commands are not used with Flexible orchestration mode instances.
+Use the standard VM APIs and commands to retrieve instance Boot Diagnostics data and screenshots. The Virtual Machine Scale Sets VM boot diagnostic APIs and commands aren't used with Flexible orchestration mode instances.
### VM extensions Use extensions targeted for standard virtual machines, instead of extensions targeted for Uniform orchestration mode instances.
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Virtual machine type | Standard Azure IaaS VM (Microsoft.compute/virtualmachines) | Scale Set specific VMs (Microsoft.compute/virtualmachinescalesets/virtualmachines) | Standard Azure IaaS VM (Microsoft.compute/virtualmachines) | | Maximum Instance Count (with FD guarantees) | 1000 | 100 | 200 | | SKUs supported | All SKUs | All SKUs | All SKUs |
-| Full control over VM, NICs, Disks | Yes | Limited control with virtual machine scale sets VM API | Yes |
-| RBAC Permissions Required | Compute VMSS Write, Compute VM Write, Network | Compute VMSS Write | N/A |
+| Full control over VM, NICs, Disks | Yes | Limited control with Virtual Machine Scale Sets VM API | Yes |
+| RBAC Permissions Required | Compute Virtual Machine Scale Sets Write, Compute VM Write, Network | Compute Virtual Machine Scale Sets Write | N/A |
| Cross tenant shared image gallery | No | Yes | Yes | | Accelerated networking | Yes | Yes | Yes | | Spot instances and pricing  | Yes, you can have both Spot and Regular priority instances | Yes, instances must either be all Spot or all Regular | No, Regular priority instances only | | Mix operating systems | Yes, Linux and Windows can reside in the same Flexible scale set | No, instances are the same operating system | Yes, Linux and Windows can reside in the same availability set |
-| Disk Types | Managed disks only, all storage types | Managed and unmanaged disks, all storage types | Managed and unmanaged disks, Ultradisk not supported |
+| Disk Types | Managed disks only, all storage types | Managed and unmanaged disks | Managed and unmanaged disks. Ultradisk not supported |
| Disk Server Side Encryption with Customer Managed Keys | Yes | Yes | Yes | | Write Accelerator  | Yes | Yes | Yes | | Proximity Placement Groups  | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes |
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Auto-Remove NICs and Disks when deleting VM instances | Yes | Yes | No | | Upgrade Policy (virtual machine scale set) | No, upgrade policy must be null or [] during create | Automatic, Rolling, Manual | N/A | | Automatic OS Updates (virtual machine scale set) | No | Yes | N/A |
-| In Guest Security Patching | Yes | No | Yes |
+| In Guest Security Patching | Yes, read [Auto VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) | No | Yes |
| Terminate Notifications (virtual machine scale set) | Yes, read [Terminate Notifications documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md) | Yes, read [Terminate Notifications documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md) | N/A | | Monitor Application Health | Application health extension | Application health extension or Azure load balancer probe | Application health extension | | Instance Repair (virtual machine scale set) | Yes, read [Instance Repair documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md) | Yes, read [Instance Repair documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md) | N/A |
The following table compares the Flexible orchestration mode, Uniform orchestrat
### Unsupported parameters
-The following virtual machine scale set parameters are not currently supported with virtual machine scale sets in Flexible orchestration mode:
+The following virtual machine scale set parameters aren't currently supported with Virtual Machine Scale Sets in Flexible orchestration mode:
- Single placement group - you must choose `singlePlacementGroup=False` - Ultra disk configuration: `diskIOPSReadWrite`, `diskMBpsReadWrite` - Virtual machine scale set Overprovisioning
InvalidParameter. The specified fault domain count 3 must fall in the range 1 to
<!-- error -->
-### OperationNotAllowed. Deletion of Virtual Machine Scale Set is not allowed as it contains one or more VMs. Please delete or detach the VM(s) before deleting the Virtual Machine Scale Set.
+### OperationNotAllowed. Deletion of Virtual Machine Scale Set isn't allowed as it contains one or more VMs. Please delete or detach the VM(s) before deleting the Virtual Machine Scale Set.
```
-OperationNotAllowed. Deletion of Virtual Machine Scale Set is not allowed as it contains one or more VMs. Please delete or detach the VM(s) before deleting the Virtual Machine Scale Set.
+OperationNotAllowed. Deletion of Virtual Machine Scale Set isn't allowed as it contains one or more VMs. Please delete or detach the VM(s) before deleting the Virtual Machine Scale Set.
``` **Cause:** Trying to delete a scale set in Flexible orchestration mode that is associated with one or more virtual machines.
OutboundConnectivityNotEnabledOnVM. No outbound connectivity configured for virt
## Get started with Flexible orchestration mode
-Register and get started with [Flexible orchestration mode](..\virtual-machines\flexible-virtual-machine-scale-sets.md) for your virtual machine scale sets.
+Register and get started with [Flexible orchestration mode](../virtual-machines/flexible-virtual-machine-scale-sets.md) for your Virtual Machine Scale Sets.
## Frequently asked questions
virtual-machines Businessobjects Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/businessobjects-deployment-guide.md
This document provides guidance on planning and implementation consideration for
SAP BusinessObjects BI Platform is a self-contained system that can exist on a single Azure virtual machine or can be scaled into a cluster of many Azure Virtual Machines that run different components. SAP BOBI Platform consists of six conceptual tiers: Client Tier, Web Tier, Management Tier, Storage Tier, Processing Tier, and Data Tier. (For more details on each tier, refer Administrator Guide in [SAP BusinessObjects Business Intelligence Platform](https://help.sap.com/viewer/product/SAP_BUSINESSOBJECTS_BUSINESS_INTELLIGENCE_PLATFORM/4.3/en-US) help portal). Following is the high-level details on each tier: - **Client Tier:** It contains all desktop client applications that interact with the BI platform to provide different kind of reporting, analytic, and administrative capabilities.-- **Web Tier:** It contains web applications deployed to JAVA web application servers. Web applications provide BI Platform functionality to end users through a web browser.
+- **Web Tier:** It contains web applications deployed to Java web application servers. Web applications provide BI Platform functionality to end users through a web browser.
- **Management Tier:** It coordinates and controls all the components that makes the BI Platform. It includes Central Management Server (CMS) and the Event Server and associated services - **Storage Tier:** It is responsible for handling files, such as documents and reports. It also handles report caching to save system resources when user access reports. - **Processing Tier:** It analyzes data, and produces reports and other output types. It's the only tier that accesses the databases that contain report data.
For Database-as-a-Service offering, any newly created database (Azure SQL Databa
- [SAP BusinessObjects BI Platform Deployment on Linux](businessobjects-deployment-guide-linux.md) - [Azure Virtual Machines planning and implementation for SAP](planning-guide.md) - [Azure Virtual Machines deployment for SAP](deployment-guide.md)-- [Azure Virtual Machines DBMS deployment for SAP](./dbms_guide_general.md)
+- [Azure Virtual Machines DBMS deployment for SAP](./dbms_guide_general.md)
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 11/15/2022 Last updated : 11/18/2022
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- November 18, 2022: Add a recommendation to use Pacemaker simple mount configuration for new implementations on SLES 15 in [Azure VMs HA for SAP NW on SLES with simple mount and NFS](high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs HA for SAP NW on SLES with NFS on Azure File](high-availability-guide-suse-nfs-azure-files.md), [Azure VMs HA for SAP NW on SLES with Azure NetApp Files](high-availability-guide-suse-netapp-files.md) and [Azure VMs HA for SAP NW on SLES](high-availability-guide-suse.md)
- November 15, 2022: Change in [HA for SAP HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to add recommendation to use mount option `nconnect` for workloads with higher throughput requirements - November 15, 2022: Add a recommendation for minimum required version of package resource-agents in [High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server](./high-availability-guide-rhel-ibm-db2-luw.md) - November 14, 2022: Provided more details about nconnect mount option in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
virtual-machines High Availability Guide Suse Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files.md
vm-windows Previously updated : 11/03/2022 Last updated : 11/18/2022
[sap-hana-ha]:sap-hana-high-availability.md [nfs-ha]:high-availability-guide-suse-nfs.md
-This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster framework, and install a highly available SAP NetWeaver 7.50 system, using [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md).
-In the example configurations, installation commands etc., the ASCS instance is number 00, the ERS instance number 01, the Primary Application instance (PAS) is 02 and the Application instance (AAS) is 03. SAP System ID QAS is used.
+This article explains how to configure high availability for an SAP NetWeaver application with [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md).
-This article explains how to achieve high availability for SAP NetWeaver application with Azure NetApp Files. The database layer isn't covered in detail in this article.
+For new implementations on SLES for SAP Applications 15, we recommend deploying high availability for SAP ASCS/ERS in a [simple mount configuration](./high-availability-guide-suse-nfs-simple-mount.md). The classic Pacemaker configuration described in this article, which is based on cluster-controlled file systems for the SAP central services directories, is still [supported](https://documentation.suse.com/sbp/all/single-html/SAP-nw740-sle15-setupguide/#id-introduction).
+
+In the example configurations and installation commands, the ASCS instance number is 00, the ERS instance number is 01, the Primary Application Server (PAS) instance is 02, and the Additional Application Server (AAS) instance is 03. SAP System ID QAS is used. The database layer isn't covered in detail in this article.
Read the following SAP Notes and papers first:
virtual-machines High Availability Guide Suse Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs-azure-files.md
vm-windows Previously updated : 11/03/2022 Last updated : 11/18/2022
This article describes how to deploy and configure VMs, install the cluster framework, and install an HA SAP NetWeaver system, using [NFS on Azure Files](../../../storage/files/files-nfs-protocol.md). The example configurations use VMs that run on SUSE Linux Enterprise Server (SLES).
+For new implementations on SLES for SAP Applications 15, we recommend deploying high availability for SAP ASCS/ERS in a [simple mount configuration](./high-availability-guide-suse-nfs-simple-mount.md). The classic Pacemaker configuration described in this article, which is based on cluster-controlled file systems for the SAP central services directories, is still [supported](https://documentation.suse.com/sbp/all/single-html/SAP-nw740-sle15-setupguide/#id-introduction).
+ ## Prerequisites * [Azure Files documentation][afs-azure-doc]
virtual-machines High Availability Guide Suse Nfs Simple Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs-simple-mount.md
vm-windows Previously updated : 11/03/2022 Last updated : 11/18/2022
This article describes how to deploy and configure Azure virtual machines (VMs),
- [NFS on Azure Files](../../../storage/files/files-nfs-protocol.md) - [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md)
+The simple mount configuration is expected to be the [default](https://documentation.suse.com/sbp/sap/single-html/SAP-S4HA10-setupguide-simplemount-sle15/#id-introduction) for new implementations on SLES for SAP Applications 15.
+ ## Prerequisites The following guides contain all the required information to set up a NetWeaver HA system:
virtual-machines High Availability Guide Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md
vm-windows Previously updated : 11/03/2022 Last updated : 11/18/2022
[nfs-ha]:high-availability-guide-suse-nfs.md This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster framework, and install a highly available SAP NetWeaver 7.50 system.
-In the example configurations, installation commands etc. ASCS instance number 00, ERS instance number 02, and SAP System ID NW1 is used. The names of the resources (for example virtual machines, virtual networks) in the example assume that you have used the [converged template][template-converged] with SAP system ID NW1 to create the resources.
+In the example configurations and installation commands, ASCS instance number 00, ERS instance number 02, and SAP System ID NW1 are used.
+
+For new implementations on SLES for SAP Applications 15, we recommend deploying high availability for SAP ASCS/ERS in a [simple mount configuration](./high-availability-guide-suse-nfs-simple-mount.md). The classic Pacemaker configuration described in this article, which is based on cluster-controlled file systems for the SAP central services directories, is still [supported](https://documentation.suse.com/sbp/all/single-html/SAP-nw740-sle15-setupguide/#id-introduction).
Read the following SAP Notes and papers first
virtual-machines Monitor Sap On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/monitor-sap-on-azure.md
Some important points about the architecture include:
The following diagram shows, at a high level, how Azure Monitor for SAP solutions collects data from the SAP HANA database. The architecture is the same if SAP HANA is deployed on Azure VMs or Azure Large Instances. :::image type="complex" source="./media/azure-monitor-sap/azure-monitor-sap-architecture.png" alt-text="Diagram showing the new Azure Monitor for SAP solutions architecture.":::
- Diagram of the new Azure Monitor for SAP solutions architecture. The customer connects to the Azure Monitor for SAP solutions resource through the Azure portal. There's a managed resource group containing Log Analytics, Azure Functions, Key Vault, and Storage queue. The Azure function connects to the providers. Providers include SAP NetWeaver (ABAP and JAVA), SAP HANA, Microsoft SQL Server, IBM Db2, Pacemaker clusters, and Linux OS.
+ Diagram of the new Azure Monitor for SAP solutions architecture. The customer connects to the Azure Monitor for SAP solutions resource through the Azure portal. There's a managed resource group containing Log Analytics, Azure Functions, Key Vault, and Storage queue. The Azure function connects to the providers. Providers include SAP NetWeaver (ABAP and Java), SAP HANA, Microsoft SQL Server, IBM Db2, Pacemaker clusters, and Linux OS.
:::image-end::: The key components of the architecture are:
You can also use Kusto Query Language (KQL) to [run log queries](../../../azure-
The following diagram shows, at a high level, how Azure Monitor for SAP solutions (classic) collects data from the SAP HANA database. The architecture is the same if SAP HANA is deployed on Azure VMs or Azure Large Instances. :::image type="complex" source="./media/azure-monitor-sap/azure-monitor-sap-classic-architecture.png" alt-text="Diagram showing the Azure Monitor for SAP solutions classic architecture.":::
- Diagram of the Azure Monitor for SAP solutions (classic) architecture. The customer connects to the Azure Monitor for SAP solutions resource through the Azure portal. There's a managed resource group containing Log Analytics, Azure Functions, Key Vault, and Storage queue. The Azure function connects to the providers. Providers include SAP NetWeaver (ABAP and JAVA), SAP HANA, Microsoft SQL Server, Pacemaker clusters, and Linux OS.
+ Diagram of the Azure Monitor for SAP solutions (classic) architecture. The customer connects to the Azure Monitor for SAP solutions resource through the Azure portal. There's a managed resource group containing Log Analytics, Azure Functions, Key Vault, and Storage queue. The Azure function connects to the providers. Providers include SAP NetWeaver (ABAP and Java), SAP HANA, Microsoft SQL Server, Pacemaker clusters, and Linux OS.
:::image-end::: The key components of the architecture are:
virtual-machines Sap Planning Supported Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-planning-supported-configurations.md
Designing SAP NetWeaver, Business one, `Hybris` or S/4HANA systems architecture
## General platform restrictions Azure has various platforms besides so called native Azure VMs that are offered as first party service. [HANA Large Instances](./hana-overview-architecture.md), which is in sunset mode is one of those platforms. [Azure VMware Services](https://azure.microsoft.com/products/azure-VMware/) is another of these first party services. At this point in time Azure VMware Services in general isn't supported by SAP for hosting SAP workload. Refer to [SAP support note #2138865 - SAP Applications on VMware Cloud: Supported Products and VM configurations](https://launchpad.support.sap.com/#/notes/2138865) for more details of VMware support on different platforms.
-Besides the on-premises Active Directory, Azure offers a managed Active Directory SaaS service with [Azure Active Directory Domain Services](../../../active-directory-domain-services/overview.md) and [Azure Active Directory](../../../active-directory/fundamentals/active-directory-whatis.md). SAP components hosted on Windows OS that are supposed to use Active directory, are solely relying on the traditional Active Directory as it's hosted on-premises by you, or Azure Active Directory Domain Services. But these SAP components can't function with the native Azure Active Directory. Reason is that there are still larger gaps in functionality between Active Directory in its on-premises form or its SaaS form (Azure Active Directory Domain Services) and the native Azure Active Directory. This is the reason why Azure Active Directory accounts aren't supported for running SAP components, like ABAP stack, JAVA stack on Windows OS. Traditional Active Directory accounts need to be used in such scenarios.
+Besides the on-premises Active Directory, Azure offers a managed Active Directory SaaS service with [Azure Active Directory Domain Services](../../../active-directory-domain-services/overview.md) and [Azure Active Directory](../../../active-directory/fundamentals/active-directory-whatis.md). SAP components hosted on Windows OS that need Active Directory rely solely on traditional Active Directory, either hosted on-premises by you or provided through Azure Active Directory Domain Services. These SAP components can't function with the native Azure Active Directory, because there are still larger gaps in functionality between Active Directory in its on-premises or SaaS form (Azure Active Directory Domain Services) and the native Azure Active Directory. This is why Azure Active Directory accounts aren't supported for running SAP components, like the ABAP stack or Java stack, on Windows OS. Traditional Active Directory accounts need to be used in such scenarios.
## 2-Tier configuration An SAP 2-Tier configuration is considered to be built up out of a combined layer of the SAP DBMS and application layer that run on the same server or VM unit. The second tier is considered to be the user interface layer. In the case of a 2-Tier configuration, the DBMS, and SAP application layer share the resources of the Azure VM. As a result, you need to configure the different components in a way that these components don't compete for resources. You also need to be careful to not oversubscribe the resources of the VM. Such a configuration doesn't provide any high availability, beyond the [Azure Service Level agreements](https://azure.microsoft.com/support/legal/sla/) of the different Azure components involved.
Scenario(s) that we didn't test and therefore have no experience with list like:
## Next Steps
-Read next steps in the [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md)
+Read next steps in the [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md)
virtual-network Manage Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-virtual-network.md
Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) t
Get-AzVirtualNetwork -ResourceGroupName myResourceGroup -Name myVNet ```
-### View virtual networks and settings the Azure CLI
+### View virtual networks and settings using the Azure CLI
Use [az network vnet list](/cli/azure/network/vnet#az-network-vnet-list) to list all virtual networks in a resource group.
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
When you open the zip file, you'll see the **AzureVPN** folder. Locate the **azu
1. In the left pane, locate the **VPN connection**, then click **Connect**.
-Azure VPN client provides high availability by allowing you to add a secondary VPN client profile, providing a more resilient way to access VPN. You can choose to add a secondary client profile using any of the already imported client profiles and that **enables the high availability** option for windows. In case of any **region outage** or failure to connect to the primary VPN client profile, Azure VPN provides the capability to auto-connect to the secondary client profile without causing any disruptions.
+The Azure VPN Client provides high availability by letting you add a secondary VPN client profile, giving you a more resilient way to access your VPN. You can add a secondary client profile from any of the already imported client profiles, which **enables the high availability** option for Windows. In case of a **region outage** or a failure to connect to the primary VPN client profile, the Azure VPN Client automatically connects to the secondary client profile without causing any disruptions. This setting requires Azure VPN Client version 2.2124.51.0, which is currently being rolled out.
## <a name="openvpn"></a>OpenVPN - OpenVPN Client steps
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
DRS 2.1 includes 17 rule groups, as shown in the following table. Each group con
The following rules are disabled by default for DRS 2.1:
-|Rule ID |Rule Group|Description |Why disabled|
+|Rule ID |Rule Group|Description |Details|
||||| |942110 |SQLI|SQL Injection Attack: Common Injection Testing Detected |Replaced by MSTIC rule 99031001 | |942150 |SQLI|SQL Injection Attack|Replaced by MSTIC rule 99031003 | |942260 |SQLI|Detects basic SQL authentication bypass attempts 2/3 |Replaced by MSTIC rule 99031004 | |942430 |SQLI|Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12)|Too many false positives.| |942440 |SQLI|SQL Comment Sequence Detected|Replaced by MSTIC rule 99031002 |
-|99005006|MS-ThreatIntel-WebShells|Spring4Shell Interaction Attempt|Replaced by Microsoft threat intelligence rule.|
-|99001014|MS-ThreatIntel-CVEs|Attempted Spring Cloud routing-expression injection [CVE-2022-22963](https://www.cve.org/CVERecord?id=CVE-2022-22963)|Replaced by Microsoft threat intelligence rule.|
-|99001015|MS-ThreatIntel-WebShells|Attempted Spring Framework unsafe class object exploitation [CVE-2022-22965](https://www.cve.org/CVERecord?id=CVE-2022-22965)|Replaced by Microsoft threat intelligence rule.|
-|99001016|MS-ThreatIntel-WebShells|Attempted Spring Cloud Gateway Actuator injection [CVE-2022-22947](https://www.cve.org/CVERecord?id=CVE-2022-22947)|Replaced by Microsoft threat intelligence rule.|
+|99005006|MS-ThreatIntel-WebShells|Spring4Shell Interaction Attempt|Enable this rule to protect against the SpringShell vulnerability|
+|99001014|MS-ThreatIntel-CVEs|Attempted Spring Cloud routing-expression injection [CVE-2022-22963](https://www.cve.org/CVERecord?id=CVE-2022-22963)|Enable this rule to protect against the SpringShell vulnerability|
+|99001015|MS-ThreatIntel-WebShells|Attempted Spring Framework unsafe class object exploitation [CVE-2022-22965](https://www.cve.org/CVERecord?id=CVE-2022-22965)|Enable this rule to protect against the SpringShell vulnerability|
+|99001016|MS-ThreatIntel-WebShells|Attempted Spring Cloud Gateway Actuator injection [CVE-2022-22947](https://www.cve.org/CVERecord?id=CVE-2022-22947)|Enable this rule to protect against the SpringShell vulnerability|
### DRS 2.0