Updates from: 10/27/2022 01:09:53
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication In Azure Static App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-static-app.md
Once you've added the app ID and secret, use the following steps to add the Azu
const { clientPrincipal } = payload; return clientPrincipal; }
-
+
await getUserInfo(); ``` - > [!TIP] > If you can't run the above JavaScript code in your browser, navigate to the following URL: `https://<app-name>.azurewebsites.net/.auth/me`. Replace `<app-name>` with the name of your Azure Web App.
active-directory-b2c Custom Email Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-sendgrid.md
With a SendGrid account created and SendGrid API key stored in an Azure AD B2C p
1. Select **Blank Template** and then **Code Editor**. 1. In the HTML editor, paste the following HTML template or use your own. The `{{otp}}` and `{{email}}` parameters will be replaced dynamically with the one-time password value and the user email address.
- ```HTML
+ ```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" dir="ltr" lang="en"><head id="Head1">
With a SendGrid account created and SendGrid API key stored in an Azure AD B2C p
<td valign="top" width="50%"></td> </tr> </table>
- </body>
+ </body>
</html> ```
-1. Expand **Settings** on the left, and for **Version Name**, enter a template version.
+1. Expand **Settings** on the left, and for **Version Name**, enter a template version.
1. For **Subject**, enter `{{subject}}`. 1. At the top of the page, select **Save**. 1. Return to the **Transactional Templates** page by selecting the back arrow. 1. Record the **ID** of the template you created for use in a later step. For example, `d-989077fbba9746e89f3f6411f596fb96`. You specify this ID when you [add the claims transformation](#add-the-claims-transformation). - [!INCLUDE [active-directory-b2c-important-for-custom-email-provider](../../includes/active-directory-b2c-important-for-custom-email-provider.md)] ## Add Azure AD B2C claim types
active-directory-b2c Localization String Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization-string-ids.md
Title: Localization string IDs - Azure Active Directory B2C
+ Title: Localization string IDs - Azure Active Directory B2C
description: Specify the IDs for a content definition with an ID of api.signuporsignin in a custom policy in Azure Active Directory B2C.
[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
-The **Localization** element enables you to support multiple locales or languages in the policy for the user journeys. This article provides the list of localization IDs that you can use in your policy. To get familiar with UI localization, see [Localization](localization.md).
+The **Localization** element enables you to support multiple locales or languages in the policy for the user journeys. This article provides the list of localization IDs that you can use in your policy. For more information about UI localization, see [Localization](localization.md).
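For orientation, here's a minimal sketch (not taken from the changed article) of how a `Localization` section overrides one of the string IDs listed in this article. The language codes, the `LocalizedResources` Id suffix, and the overridden value are illustrative assumptions:

```xml
<!-- Minimal sketch: enable localization and override one sign-in page string.
     Language codes, the Id suffix, and the string value are illustrative. -->
<Localization Enabled="true">
  <SupportedLanguages DefaultLanguage="en" MergeBehavior="ReplaceAll">
    <SupportedLanguage>en</SupportedLanguage>
    <SupportedLanguage>es</SupportedLanguage>
  </SupportedLanguages>
  <LocalizedResources Id="api.signuporsignin.en">
    <LocalizedStrings>
      <!-- StringId values come from the tables in this article. -->
      <LocalizedString ElementType="UxElement" StringId="button_signin">Sign in</LocalizedString>
    </LocalizedStrings>
  </LocalizedResources>
</Localization>
```

Each `LocalizedResources` element is then wired to its page through a `LocalizedResourcesReference` on the matching content definition.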
## Sign-up or sign-in page elements
The following IDs are used for a content definition with an ID of `api.signupors
| ID | Default value | Page Layout Version |
| --- | --- | --- |
-| **forgotpassword_link** | Forgot your password? | `All` |
-| **createaccount_intro** | Don't have an account? | `All` |
-| **button_signin** | Sign in | `All` |
-| **social_intro** | Sign in with your social account | `All` |
-| **remember_me** |Keep me signed in. | `All` |
-| **unknown_error** | We are having trouble signing you in. Please try again later. | `All` |
-| **divider_title** | OR | `All` |
-| **local_intro_email** | Sign in with your existing account | `< 2.0.0` |
-| **logonIdentifier_email** | Email Address | `< 2.0.0` |
-| **requiredField_email** | Please enter your email | `< 2.0.0` |
-| **invalid_email** | Please enter a valid email address | `< 2.0.0` |
-| **email_pattern** | ^[a-zA-Z0-9.!#$%&''\*+/=?^\_\`{\|}~-]+@[a-zA-Z0-9-]+(?:\\.[a-zA-Z0-9-]+)\*$ | `< 2.0.0` |
-| **local_intro_username** | Sign in with your user name | `< 2.0.0` |
-| **logonIdentifier_username** | Username | `< 2.0.0` |
-| **requiredField_username** | Please enter your user name | `< 2.0.0` |
-| **password** | Password | `< 2.0.0` |
-| **requiredField_password** | Please enter your password | `< 2.0.0` |
-| **createaccount_link** | Sign up now | `< 2.0.0` |
-| **cancel_message** | The user has forgotten their password | `< 2.0.0` |
-| **invalid_password** | The password you entered is not in the expected format. | `< 2.0.0` |
-| **createaccount_one_link** | Sign up now | `>= 2.0.0` |
-| **createaccount_two_links** | Sign up with {0} or {1} | `>= 2.0.0` |
-| **createaccount_three_links** | Sign up with {0}, {1}, or {2} | `>= 2.0.0` |
-| **local_intro_generic** | Sign in with your {0} | `>= 2.1.0` |
-| **requiredField_generic** | Please enter your {0} | `>= 2.1.0` |
-| **invalid_generic** | Please enter a valid {0} | `>= 2.1.1` |
-| **heading** | Sign in | `>= 2.1.1` |
+| `forgotpassword_link` | Forgot your password? | `All` |
+| `createaccount_intro` | Don't have an account? | `All` |
+| `button_signin` | Sign in | `All` |
+| `social_intro` | Sign in with your social account | `All` |
+| `remember_me` |Keep me signed in. | `All` |
+| `unknown_error` | We are having trouble signing you in. Please try again later. | `All` |
+| `divider_title` | OR | `All` |
+| `local_intro_email` | Sign in with your existing account | `< 2.0.0` |
+| `logonIdentifier_email` | Email Address | `< 2.0.0` |
+| `requiredField_email` | Please enter your email | `< 2.0.0` |
+| `invalid_email` | Please enter a valid email address | `< 2.0.0` |
+| `email_pattern` | ```^[a-zA-Z0-9.!#$%&''\*+/=?^\_\`{\|}~-]+@[a-zA-Z0-9-]+(?:\\.[a-zA-Z0-9-]+)\*$``` | `< 2.0.0` |
+| `local_intro_username` | Sign in with your user name | `< 2.0.0` |
+| `logonIdentifier_username` | Username | `< 2.0.0` |
+| `requiredField_username` | Please enter your user name | `< 2.0.0` |
+| `password` | Password | `< 2.0.0` |
+| `requiredField_password` | Please enter your password | `< 2.0.0` |
+| `createaccount_link` | Sign up now | `< 2.0.0` |
+| `cancel_message` | The user has forgotten their password | `< 2.0.0` |
+| `invalid_password` | The password you entered is not in the expected format. | `< 2.0.0` |
+| `createaccount_one_link` | Sign up now | `>= 2.0.0` |
+| `createaccount_two_links` | Sign up with {0} or {1} | `>= 2.0.0` |
+| `createaccount_three_links` | Sign up with {0}, {1}, or {2} | `>= 2.0.0` |
+| `local_intro_generic` | Sign in with your {0} | `>= 2.1.0` |
+| `requiredField_generic` | Please enter your {0} | `>= 2.1.0` |
+| `invalid_generic` | Please enter a valid {0} | `>= 2.1.1` |
+| `heading` | Sign in | `>= 2.1.1` |
> [!NOTE]
-> * Placeholders like {0} will be filled automatically with the `DisplayName` value of `ClaimType`.
+> * Placeholders like `{0}` are populated automatically with the `DisplayName` value of `ClaimType`.
> * To learn how to localize `ClaimType`, see [Sign-up or sign-in example](#signupsigninexample).
-The following example shows the use of some of the user interface elements in the sign-up or sign-in page:
+The following example shows the use of some user interface elements in the sign-up or sign-in page:
:::image type="content" source="./media/localization-string-ids/localization-susi-2.png" alt-text="Screenshot that shows sign-up or sign-in page U X elements."::: ### Sign-up or sign-in identity providers
-The ID of the identity providers is configured in the user journey **ClaimsExchange** element. To localize the title of the identity provider, the **ElementType** is set to `ClaimsProvider`, while the **StringId** is set to the ID of the `ClaimsExchange`.
+The ID of the identity providers is configured in the user journey **ClaimsExchange** element. To localize the title of the identity provider, the **ElementType** is set to `ClaimsProvider`, while the **StringId** is set to the ID of the `ClaimsExchange`.
```xml <OrchestrationStep Order="2" Type="ClaimsExchange">
The following example localizes the Facebook identity provider to Arabic:
| ID | Default value |
| --- | --- |
-| **UserMessageIfInvalidPassword** | Your password is incorrect. |
-| **UserMessageIfPasswordExpired**| Your password has expired.|
-| **UserMessageIfClaimsPrincipalDoesNotExist** | We can't seem to find your account. |
-| **UserMessageIfOldPasswordUsed** | Looks like you used an old password. |
-| **DefaultMessage** | Invalid username or password. |
-| **UserMessageIfUserAccountDisabled** | Your account has been locked. Contact your support person to unlock it, then try again. |
-| **UserMessageIfUserAccountLocked** | Your account is temporarily locked to prevent unauthorized use. Try again later. |
-| **AADRequestsThrottled** | There are too many requests at this moment. Please wait for some time and try again. |
+| `UserMessageIfInvalidPassword` | Your password is incorrect. |
+| `UserMessageIfPasswordExpired`| Your password has expired.|
+| `UserMessageIfClaimsPrincipalDoesNotExist` | We can't seem to find your account. |
+| `UserMessageIfOldPasswordUsed` | Looks like you used an old password. |
+| `DefaultMessage` | Invalid username or password. |
+| `UserMessageIfUserAccountDisabled` | Your account has been locked. Contact your support person to unlock it, then try again. |
+| `UserMessageIfUserAccountLocked` | Your account is temporarily locked to prevent unauthorized use. Try again later. |
+| `AADRequestsThrottled` | There are too many requests at this moment. Please wait for some time and try again. |
<a name="signupsigninexample"></a>+ ### Sign-up or sign-in example ```xml
The following example localizes the Facebook identity provider to Arabic:
## Sign-up and self-asserted pages user interface elements
-The following are the IDs for a content definition with an ID of `api.localaccountsignup` or any content definition that starts with `api.selfasserted`, such as `api.selfasserted.profileupdate` and `api.localaccountpasswordreset`, and [self-asserted technical profile](self-asserted-technical-profile.md).
+The following IDs are used for a content definition with an ID of `api.localaccountsignup` or any content definition that starts with `api.selfasserted`, such as `api.selfasserted.profileupdate` and `api.localaccountpasswordreset`, and the [self-asserted technical profile](self-asserted-technical-profile.md).
| ID | Default value |
| --- | --- |
-| **ver_sent** | Verification code has been sent to: |
-| **ver_but_default** | Default |
-| **cancel_message** | The user has canceled entering self-asserted information |
-| **preloader_alt** | Please wait |
-| **ver_but_send** | Send verification code |
-| **alert_yes** | Yes |
-| **error_fieldIncorrect** | One or more fields are filled out incorrectly. Please check your entries and try again. |
-| **year** | Year |
-| **verifying_blurb** | Please wait while we process your information. |
-| **button_cancel** | Cancel |
-| **ver_fail_no_retry** | You've made too many incorrect attempts. Please try again later. |
-| **month** | Month |
-| **ver_success_msg** | E-mail address verified. You can now continue. |
-| **months** | January, February, March, April, May, June, July, August, September, October, November, December |
-| **ver_fail_server** | We are having trouble verifying your email address. Please enter a valid email address and try again. |
-| **error_requiredFieldMissing** | A required field is missing. Please fill out all required fields and try again. |
-| **heading** | User Details |
-| **initial_intro** | Please provide the following details. |
-| **ver_but_resend** | Send new code |
-| **button_continue** | Create |
-| **error_passwordEntryMismatch** | The password entry fields do not match. Please enter the same password in both fields and try again. |
-| **ver_incorrect_format** | Incorrect format. |
-| **ver_but_edit** | Change e-mail |
-| **ver_but_verify** | Verify code |
-| **alert_no** | No |
-| **ver_info_msg** | Verification code has been sent to your inbox. Please copy it to the input box below. |
-| **day** | Day |
-| **ver_fail_throttled** | There have been too many requests to verify this email address. Please wait a while, then try again. |
-| **helplink_text** | What is this? |
-| **ver_fail_retry** | That code is incorrect. Please try again. |
-| **alert_title** | Cancel Entering Your Details |
-| **required_field** | This information is required. |
-| **alert_message** | Are you sure that you want to cancel entering your details? |
-| **ver_intro_msg** | Verification is necessary. Please click Send button. |
-| **ver_input** | Verification code |
+| `ver_sent` | Verification code has been sent to: |
+| `ver_but_default` | Default |
+| `cancel_message` | The user has canceled entering self-asserted information |
+| `preloader_alt` | Please wait |
+| `ver_but_send` | Send verification code |
+| `alert_yes` | Yes |
+| `error_fieldIncorrect` | One or more fields are filled out incorrectly. Please check your entries and try again. |
+| `year` | Year |
+| `verifying_blurb` | Please wait while we process your information. |
+| `button_cancel` | Cancel |
+| `ver_fail_no_retry` | You've made too many incorrect attempts. Please try again later. |
+| `month` | Month |
+| `ver_success_msg` | E-mail address verified. You can now continue. |
+| `months` | January, February, March, April, May, June, July, August, September, October, November, December |
+| `ver_fail_server` | We are having trouble verifying your email address. Please enter a valid email address and try again. |
+| `error_requiredFieldMissing` | A required field is missing. Please fill out all required fields and try again. |
+| `heading` | User Details |
+| `initial_intro` | Please provide the following details. |
+| `ver_but_resend` | Send new code |
+| `button_continue` | Create |
+| `error_passwordEntryMismatch` | The password entry fields do not match. Please enter the same password in both fields and try again. |
+| `ver_incorrect_format` | Incorrect format. |
+| `ver_but_edit` | Change e-mail |
+| `ver_but_verify` | Verify code |
+| `alert_no` | No |
+| `ver_info_msg` | Verification code has been sent to your inbox. Please copy it to the input box below. |
+| `day` | Day |
+| `ver_fail_throttled` | There have been too many requests to verify this email address. Please wait a while, then try again. |
+| `helplink_text` | What is this? |
+| `ver_fail_retry` | That code is incorrect. Please try again. |
+| `alert_title` | Cancel Entering Your Details |
+| `required_field` | This information is required. |
+| `alert_message` | Are you sure that you want to cancel entering your details? |
+| `ver_intro_msg` | Verification is necessary. Please click Send button. |
+| `ver_input` | Verification code |
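To illustrate how the string IDs in the preceding table are consumed, here's a hedged, minimal sketch that wires localized strings to the `api.localaccountsignup` content definition. The language code and the `.en` Id suffix are illustrative; the string values are the defaults from the table:

```xml
<!-- Minimal sketch: reference a LocalizedResources set from the self-asserted
     content definition and override two of its strings. The language code and
     the Id suffix (.en) are illustrative. -->
<ContentDefinition Id="api.localaccountsignup">
  <LocalizedResourcesReferences MergeBehavior="Prepend">
    <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.localaccountsignup.en" />
  </LocalizedResourcesReferences>
</ContentDefinition>
<LocalizedResources Id="api.localaccountsignup.en">
  <LocalizedStrings>
    <LocalizedString ElementType="UxElement" StringId="heading">User Details</LocalizedString>
    <LocalizedString ElementType="UxElement" StringId="button_continue">Create</LocalizedString>
  </LocalizedStrings>
</LocalizedResources>
```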
### Sign-up and self-asserted pages disclaimer links
The following `UxElement` string IDs will display disclaimer link(s) at the bott
| ID | Example value |
| --- | --- |
-| **disclaimer_msg_intro** | By providing your phone number, you consent to receiving a one-time passcode sent by text message to help you sign into {insert your application name}. Standard messsage and data rates may apply. |
-| **disclaimer_link_1_text** | Privacy Statement |
-| **disclaimer_link_1_url** | {insert your privacy statement URL} |
-| **disclaimer_link_2_text** | Terms and Conditions |
-| **disclaimer_link_2_url** | {insert your terms and conditions URL} |
+| `disclaimer_msg_intro` | By providing your phone number, you consent to receiving a one-time passcode sent by text message to help you sign into {insert your application name}. Standard message and data rates may apply. |
+| `disclaimer_link_1_text` | Privacy Statement |
+| `disclaimer_link_1_url` | {insert your privacy statement URL} |
+| `disclaimer_link_2_text` | Terms and Conditions |
+| `disclaimer_link_2_url` | {insert your terms and conditions URL} |
### Sign-up and self-asserted pages error messages
| ID | Default value |
| --- | --- |
-| **UserMessageIfClaimsPrincipalAlreadyExists** | A user with the specified ID already exists. Please choose a different one. |
-| **UserMessageIfClaimNotVerified** | Claim not verified: {0} |
-| **UserMessageIfIncorrectPattern** | Incorrect pattern for: {0} |
-| **UserMessageIfMissingRequiredElement** | Missing required element: {0} |
-| **UserMessageIfValidationError** | Error in validation by: {0} |
-| **UserMessageIfInvalidInput** | {0} has invalid input. |
-| **ServiceThrottled** | There are too many requests at this moment. Please wait for some time and try again. |
+| `UserMessageIfClaimsPrincipalAlreadyExists` | A user with the specified ID already exists. Please choose a different one. |
+| `UserMessageIfClaimNotVerified` | Claim not verified: {0} |
+| `UserMessageIfIncorrectPattern` | Incorrect pattern for: {0} |
+| `UserMessageIfMissingRequiredElement` | Missing required element: {0} |
+| `UserMessageIfValidationError` | Error in validation by: {0} |
+| `UserMessageIfInvalidInput` | {0} has invalid input. |
+| `ServiceThrottled` | There are too many requests at this moment. Please wait for some time and try again. |
The following example shows the use of some of the user interface elements in the sign-up page:
The following are the IDs for a content definition with an ID of `api.phonefacto
| ID | Default value | Page Layout Version |
| --- | --- | --- |
-| **button_verify** | Call Me | `All` |
-| **country_code_label** | Country Code | `All` |
-| **cancel_message** | The user has canceled multi-factor authentication | `All` |
-| **text_button_send_second_code** | send a new code | `All` |
-| **code_pattern** | \\d{6} | `All` |
-| **intro_mixed** | We have the following number on record for you. We can send a code via SMS or phone to authenticate you. | `All` |
-| **intro_mixed_p** | We have the following numbers on record for you. Choose a number that we can phone or send a code via SMS to authenticate you. | `All` |
-| **button_verify_code** | Verify Code | `All` |
-| **requiredField_code** | Please enter the verification code you received | `All` |
-| **invalid_code** | Please enter the 6-digit code you received | `All` |
-| **button_cancel** | Cancel | `All` |
-| **local_number_input_placeholder_text** | Phone number | `All` |
-| **button_retry** | Retry | `All` |
-| **alternative_text** | I don't have my phone | `All` |
-| **intro_phone_p** | We have the following numbers on record for you. Choose a number that we can phone to authenticate you. | `All` |
-| **intro_phone** | We have the following number on record for you. We will phone to authenticate you. | `All` |
-| **enter_code_text_intro** | Enter your verification code below, or | `All` |
-| **intro_entry_phone** | Enter a number below that we can phone to authenticate you. | `All` |
-| **intro_entry_sms** | Enter a number below that we can send a code via SMS to authenticate you. | `All` |
-| **button_send_code** | Send Code | `All` |
-| **invalid_number** | Please enter a valid phone number | `All` |
-| **intro_sms** | We have the following number on record for you. We will send a code via SMS to authenticate you. | `All` |
-| **intro_entry_mixed** | Enter a number below that we can send a code via SMS or phone to authenticate you. | `All` |
-| **number_pattern** | `^\\+(?:[0-9][\\x20-]?){6,14}[0-9]$` | `All` |
-| **intro_sms_p** |We have the following numbers on record for you. Choose a number that we can send a code via SMS to authenticate you. | `All` |
-| **requiredField_countryCode** | Please select your country code | `All` |
-| **requiredField_number** | Please enter your phone number | `All` |
-| **country_code_input_placeholder_text** |Country or region | `All` |
-| **number_label** | Phone Number | `All` |
-| **error_tryagain** | The phone number you provided is busy or unavailable. Please check the number and try again. | `All` |
-| **error_sms_throttled** | You hit the limit on the number of text messages. Try again shortly. | `>= 1.2.3` |
-| **error_phone_throttled** | You hit the limit on the number of call attempts. Try again shortly. | `>= 1.2.3` |
-| **error_throttled** | You hit the limit on the number of verification attempts. Try again shortly. | `>= 1.2.3` |
-| **error_incorrect_code** | The verification code you have entered does not match our records. Please try again, or request a new code. | `All` |
-| **countryList** | See [the countries list](#phone-factor-authentication-page-example). | `All` |
-| **error_448** | The phone number you provided is unreachable. | `All` |
-| **error_449** | User has exceeded the number of retry attempts. | `All` |
-| **verification_code_input_placeholder_text** | Verification code | `All` |
+| `button_verify` | Call Me | `All` |
+| `country_code_label` | Country Code | `All` |
+| `cancel_message` | The user has canceled multi-factor authentication | `All` |
+| `text_button_send_second_code` | send a new code | `All` |
+| `code_pattern` | \\d{6} | `All` |
+| `intro_mixed` | We have the following number on record for you. We can send a code via SMS or phone to authenticate you. | `All` |
+| `intro_mixed_p` | We have the following numbers on record for you. Choose a number that we can phone or send a code via SMS to authenticate you. | `All` |
+| `button_verify_code` | Verify Code | `All` |
+| `requiredField_code` | Please enter the verification code you received | `All` |
+| `invalid_code` | Please enter the 6-digit code you received | `All` |
+| `button_cancel` | Cancel | `All` |
+| `local_number_input_placeholder_text` | Phone number | `All` |
+| `button_retry` | Retry | `All` |
+| `alternative_text` | I don't have my phone | `All` |
+| `intro_phone_p` | We have the following numbers on record for you. Choose a number that we can phone to authenticate you. | `All` |
+| `intro_phone` | We have the following number on record for you. We will phone to authenticate you. | `All` |
+| `enter_code_text_intro` | Enter your verification code below, or | `All` |
+| `intro_entry_phone` | Enter a number below that we can phone to authenticate you. | `All` |
+| `intro_entry_sms` | Enter a number below that we can send a code via SMS to authenticate you. | `All` |
+| `button_send_code` | Send Code | `All` |
+| `invalid_number` | Please enter a valid phone number | `All` |
+| `intro_sms` | We have the following number on record for you. We will send a code via SMS to authenticate you. | `All` |
+| `intro_entry_mixed` | Enter a number below that we can send a code via SMS or phone to authenticate you. | `All` |
+| `number_pattern` | `^\\+(?:[0-9][\\x20-]?){6,14}[0-9]$` | `All` |
+| `intro_sms_p` |We have the following numbers on record for you. Choose a number that we can send a code via SMS to authenticate you. | `All` |
+| `requiredField_countryCode` | Please select your country code | `All` |
+| `requiredField_number` | Please enter your phone number | `All` |
+| `country_code_input_placeholder_text` |Country or region | `All` |
+| `number_label` | Phone Number | `All` |
+| `error_tryagain` | The phone number you provided is busy or unavailable. Please check the number and try again. | `All` |
+| `error_sms_throttled` | You hit the limit on the number of text messages. Try again shortly. | `>= 1.2.3` |
+| `error_phone_throttled` | You hit the limit on the number of call attempts. Try again shortly. | `>= 1.2.3` |
+| `error_throttled` | You hit the limit on the number of verification attempts. Try again shortly. | `>= 1.2.3` |
+| `error_incorrect_code` | The verification code you have entered does not match our records. Please try again, or request a new code. | `All` |
+| `countryList` | See [the countries list](#phone-factor-authentication-page-example). | `All` |
+| `error_448` | The phone number you provided is unreachable. | `All` |
+| `error_449` | User has exceeded the number of retry attempts. | `All` |
+| `verification_code_input_placeholder_text` | Verification code | `All` |
The following example shows the use of some of the user interface elements in the MFA enrollment page:
The following example shows the use of some of the user interface elements in th
## Verification display control user interface elements
-The following are the IDs for a [Verification display control](display-control-verification.md) with [page layout version](page-layout.md) 2.1.0 or higher.
+The following IDs are used for a [Verification display control](display-control-verification.md) with [page layout version](page-layout.md) 2.1.0 or higher.
| ID | Default value |
| --- | --- |
-|intro_msg<sup>1</sup>| Verification is necessary. Please click Send button.|
-|success_send_code_msg | Verification code has been sent. Please copy it to the input box below.|
-|failure_send_code_msg | We are having trouble verifying your email address. Please enter a valid email address and try again.|
-|success_verify_code_msg | E-mail address verified. You can now continue.|
-|failure_verify_code_msg | We are having trouble verifying your email address. Please try again.|
-|but_send_code | Send verification code|
-|but_verify_code | Verify code|
-|but_send_new_code | Send new code|
-|but_change_claims | Change e-mail|
-| UserMessageIfVerificationControlClaimsNotVerified<sup>2</sup>| The claims for verification control have not been verified. |
+| `intro_msg` <sup>1</sup>| Verification is necessary. Please click Send button.|
+| `success_send_code_msg` | Verification code has been sent. Please copy it to the input box below.|
+| `failure_send_code_msg` | We are having trouble verifying your email address. Please enter a valid email address and try again.|
+| `success_verify_code_msg` | E-mail address verified. You can now continue.|
+| `failure_verify_code_msg` | We are having trouble verifying your email address. Please try again.|
+| `but_send_code` | Send verification code|
+| `but_verify_code` | Verify code|
+| `but_send_new_code` | Send new code|
+| `but_change_claims` | Change e-mail|
+| `UserMessageIfVerificationControlClaimsNotVerified` <sup>2</sup> | The claims for verification control have not been verified. |
<sup>1</sup> The `intro_msg` element is hidden, and not shown on the self-asserted page. To make it visible, use the [HTML customization](customize-ui-with-html.md) with Cascading Style Sheets. For example:
-```css
-.verificationInfoText div{display: block!important}
-```
+`.verificationInfoText div{display: block!important}`
<sup>2</sup> This error message is displayed to the user if they enter a verification code, but instead of completing the verification by selecting the **Verify** button, they select the **Continue** button.
-
+ ### Verification display control example ```xml
The following are the IDs for a [Verification display control](display-control-v
## Verification display control user interface elements (deprecated)
-The following are the IDs for a [Verification display control](display-control-verification.md) with [page layout version](page-layout.md) 2.0.0.
+The following IDs are used for a [Verification display control](display-control-verification.md) with [page layout version](page-layout.md) 2.0.0.
| ID | Default value |
| --- | --- |
-|verification_control_but_change_claims |Change |
-|verification_control_fail_send_code |Failed to send the code, please try again later. |
-|verification_control_fail_verify_code |Failed to verify the code, please try again later. |
-|verification_control_but_send_code |Send Code |
-|verification_control_but_send_new_code |Send New Code |
-|verification_control_but_verify_code |Verify Code |
-|verification_control_code_sent| Verification code has been sent. Please copy it to the input box below. |
+| `verification_control_but_change_claims` |Change |
+| `verification_control_fail_send_code` |Failed to send the code, please try again later. |
+| `verification_control_fail_verify_code` |Failed to verify the code, please try again later. |
+| `verification_control_but_send_code` |Send Code |
+| `verification_control_but_send_new_code` |Send New Code |
+| `verification_control_but_verify_code` |Verify Code |
+| `verification_control_code_sent`| Verification code has been sent. Please copy it to the input box below. |
### Verification display control example (deprecated)
The following are the IDs for a [Verification display control](display-control-v
## TOTP MFA controls display control user interface elements
-The following are the IDs for a [time-based one-time password (TOTP) display control](display-control-time-based-one-time-password.md) with [page layout version](page-layout.md) 2.1.9 and later.
+The following IDs are used for a [time-based one-time password (TOTP) display control](display-control-time-based-one-time-password.md) with [page layout version](page-layout.md) 2.1.9 and later.
| ID | Default value |
| --- | --- |
-|title_text |Download the Microsoft Authenticator using the download links for iOS and Android or use any other authenticator app of your choice. |
-| DN |Once you've downloaded the Authenticator app, you can use any of the methods below to continue with enrollment. |
-|DisplayName |Once you've downloaded the Authenticator app, you can use any of the methods below to continue with enrollment. |
-|title_text |Scan the QR code |
-|info_msg |You can download the Microsoft Authenticator app or use any other authenticator app of your choice. |
-|link_text |Can't scan? Try this |
-|title_text| Enter the account details manually. |
-|account_name | Account Name: |
-|display_prefix | Secret |
-|collapse_text | Still having trouble? |
-|DisplayName | Enter the verification code from your authenticator app.|
-|DisplayName | Enter your code. |
-| button_continue | Verify |
+| `title_text` |Download the Microsoft Authenticator using the download links for iOS and Android or use any other authenticator app of your choice. |
+| `DN` |Once you've downloaded the Authenticator app, you can use any of the methods below to continue with enrollment. |
+| `DisplayName` |Once you've downloaded the Authenticator app, you can use any of the methods below to continue with enrollment. |
+| `title_text` |Scan the QR code |
+| `info_msg` |You can download the Microsoft Authenticator app or use any other authenticator app of your choice. |
+| `link_text` |Can't scan? Try this |
+| `title_text`| Enter the account details manually. |
+| `account_name` | Account Name: |
+| `display_prefix` | Secret |
+| `collapse_text` | Still having trouble? |
+| `DisplayName` | Enter the verification code from your authenticator app.|
+| `DisplayName` | Enter your code. |
+| `button_continue` | Verify |
### TOTP MFA controls display control example
The following are the IDs for a [time-based one-time password (TOTP) display con
## Restful service error messages
-The following are the IDs for [Restful service technical profile](restful-technical-profile.md) error messages:
+The following IDs are used for [Restful service technical profile](restful-technical-profile.md) error messages:
| ID | Default value |
| --- | --- |
-|DefaultUserMessageIfRequestFailed | Failed to establish connection to restful service end point. Restful service URL: {0} |
-|UserMessageIfCircuitOpen | {0} Restful Service URL: {1} |
-|UserMessageIfDnsResolutionFailed | Failed to resolve the hostname of the restful service endpoint. Restful service URL: {0} |
-|UserMessageIfRequestTimeout | Failed to establish connection to restful service end point within timeout limit {0} seconds. Restful service URL: {1} |
+| `DefaultUserMessageIfRequestFailed` | Failed to establish connection to restful service end point. Restful service URL: {0} |
+| `UserMessageIfCircuitOpen` | {0} Restful Service URL: {1} |
+| `UserMessageIfDnsResolutionFailed` | Failed to resolve the hostname of the restful service endpoint. Restful service URL: {0} |
+| `UserMessageIfRequestTimeout` | Failed to establish connection to restful service end point within timeout limit {0} seconds. Restful service URL: {1} |
### Restful service example
The following are the IDs for [Restful service technical profile](restful-techni
## Azure AD MFA error messages
-The following are the IDs for an [Azure AD MFA technical profile](multi-factor-auth-technical-profile.md) error message:
+The following IDs are used for [Azure AD MFA technical profile](multi-factor-auth-technical-profile.md) error messages:
| ID | Default value |
| --- | --- |
-|UserMessageIfCouldntSendSms | Cannot Send SMS to the phone, please try another phone number. |
-|UserMessageIfInvalidFormat | Your phone number is not in a valid format, please correct it and try again.|
-|UserMessageIfMaxAllowedCodeRetryReached | Wrong code entered too many times, please try again later.|
-|UserMessageIfServerError | Cannot use MFA service, please try again later.|
-|UserMessageIfThrottled | Your request has been throttled, please try again later.|
-|UserMessageIfWrongCodeEntered|Wrong code entered, please try again.|
+| `UserMessageIfCouldntSendSms` | Cannot Send SMS to the phone, please try another phone number. |
+| `UserMessageIfInvalidFormat` | Your phone number is not in a valid format, please correct it and try again.|
+| `UserMessageIfMaxAllowedCodeRetryReached` | Wrong code entered too many times, please try again later.|
+| `UserMessageIfServerError` | Cannot use MFA service, please try again later.|
+| `UserMessageIfThrottled` | Your request has been throttled, please try again later.|
+| `UserMessageIfWrongCodeEntered` |Wrong code entered, please try again.|
### Azure AD MFA example
The following are the IDs for an [Azure AD MFA technical profile](multi-factor-a
## Azure AD SSPR
-The following are the IDs for [Azure AD SSPR technical profile](aad-sspr-technical-profile.md) error messages:
+The following IDs are used for [Azure AD SSPR technical profile](aad-sspr-technical-profile.md) error messages:
| ID | Default value |
| --- | --- |
-|UserMessageIfChallengeExpired | The code has expired.|
-|UserMessageIfInternalError | The email service has encountered an internal error, please try again later.|
-|UserMessageIfThrottled | You have sent too many requests, please try again later.|
-|UserMessageIfVerificationFailedNoRetry | You have exceeded maximum number of verification attempts.|
-|UserMessageIfVerificationFailedRetryAllowed | The verification has failed, please try again.|
+|`UserMessageIfChallengeExpired` | The code has expired.|
+|`UserMessageIfInternalError` | The email service has encountered an internal error, please try again later.|
+|`UserMessageIfThrottled` | You have sent too many requests, please try again later.|
+|`UserMessageIfVerificationFailedNoRetry` | You have exceeded maximum number of verification attempts.|
+|`UserMessageIfVerificationFailedRetryAllowed` | The verification has failed, please try again.|
### Azure AD SSPR example
The following are the IDs for [Azure AD SSPR technical profile](aad-sspr-technic
</LocalizedResources> ```
-## One time password error messages
+## One-time password error messages
-The following are the IDs for a [one-time password technical profile](one-time-password-technical-profile.md) error messages
+The following IDs are used for [one-time password technical profile](one-time-password-technical-profile.md) error messages:
| ID | Default value | Description |
| --- | --- | --- |
-| UserMessageIfSessionDoesNotExist | No | The message to display to the user if the code verification session has expired. It is either the code has expired or the code has never been generated for a given identifier. |
-| UserMessageIfMaxRetryAttempted | No | The message to display to the user if they've exceeded the maximum allowed verification attempts. |
-| UserMessageIfMaxNumberOfCodeGenerated | No | The message to display to the user if the code generation has exceeded the maximum allowed number of attempts. |
-| UserMessageIfInvalidCode | No | The message to display to the user if they've provided an invalid code. |
-| UserMessageIfVerificationFailedRetryAllowed | No | The message to display to the user if they've provided an invalid code, and user is allowed to provide the correct code. |
-|UserMessageIfSessionConflict|No| The message to display to the user if the code cannot be verified.|
+| `UserMessageIfSessionDoesNotExist` | No | The message to display to the user if the code verification session has expired. It is either the code has expired or the code has never been generated for a given identifier. |
+| `UserMessageIfMaxRetryAttempted` | No | The message to display to the user if they've exceeded the maximum allowed verification attempts. |
+| `UserMessageIfMaxNumberOfCodeGenerated` | No | The message to display to the user if the code generation has exceeded the maximum allowed number of attempts. |
+| `UserMessageIfInvalidCode` | No | The message to display to the user if they've provided an invalid code. |
+| `UserMessageIfVerificationFailedRetryAllowed` | No | The message to display to the user if they've provided an invalid code, and user is allowed to provide the correct code. |
+| `UserMessageIfSessionConflict` | No | The message to display to the user if the code cannot be verified.|
### One-time password example
The following are the IDs for a [one-time password technical profile](one-time-p
## Claims transformations error messages
-The following are the IDs for claims transformations error messages:
+The following IDs are used for claims transformations error messages:
| ID | Claims transformation | Default value |
| --- | --- | --- |
-|UserMessageIfClaimsTransformationBooleanValueIsNotEqual |[AssertBooleanClaimIsEqualToValue](boolean-transformations.md#assertbooleanclaimisequaltovalue) | Boolean claim value comparison failed for claim type "inputClaim".|
-|DateTimeGreaterThan |[AssertDateTimeIsGreaterThan](date-transformations.md#assertdatetimeisgreaterthan) | Claim value comparison failed: The provided left operand is greater than the right operand.|
-|UserMessageIfClaimsTransformationStringsAreNotEqual |[AssertStringClaimsAreEqual](string-transformations.md#assertstringclaimsareequal) | Claim value comparison failed using StringComparison "OrdinalIgnoreCase".|
+| `UserMessageIfClaimsTransformationBooleanValueIsNotEqual` |[AssertBooleanClaimIsEqualToValue](boolean-transformations.md#assertbooleanclaimisequaltovalue) | Boolean claim value comparison failed for claim type "inputClaim".|
+| `DateTimeGreaterThan` |[AssertDateTimeIsGreaterThan](date-transformations.md#assertdatetimeisgreaterthan) | Claim value comparison failed: The provided left operand is greater than the right operand.|
+| `UserMessageIfClaimsTransformationStringsAreNotEqual` |[AssertStringClaimsAreEqual](string-transformations.md#assertstringclaimsareequal) | Claim value comparison failed using StringComparison "OrdinalIgnoreCase".|
### Claims transformations example
active-directory-b2c Partner Biocatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-biocatch.md
document.getElementById("clientSessionId").style.display = 'none';
</TechnicalProfile>
- </RelyingParty>
+ </RelyingParty>
``` ## Integrate with Azure AD B2C
active-directory-b2c Partner Datawiza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-datawiza.md
To integrate your legacy on-premises app with Azure AD B2C, contact [Datawiza](h
## Run DAB with a header-based application 1. You can use either Docker or Kubernetes to run DAB. The Docker image is needed for users to create a sample header-based application. See instructions on how to [configure DAB and SSO integration](https://docs.datawiza.com/step-by-step/step3.html) for more details and how to [deploy DAB with Kubernetes](https://docs.datawiza.com/tutorial/web-app-AKS.html) for Kubernetes-specific instructions. A sample `docker-compose.yml` file is provided for you to download and use. Log in to the container registry to download the images of DAB and the header-based application. Follow [these instructions](https://docs.datawiza.com/step-by-step/step3.html#important-step).
-
- ```yaml
- version: '3'
+
+ ```yaml
+ version: '3'
datawiza-access-broker:
To integrate your legacy on-premises app with Azure AD B2C, contact [Datawiza](h
- "3001:3001" ```
- 2. After executing `docker-compose -f docker-compose.yml up`, the header-based application should have SSO enabled with Azure AD B2C. Open a browser and type in `http://localhost:9772/`.
+2. After executing `docker-compose -f docker-compose.yml up`, the header-based application should have SSO enabled with Azure AD B2C. Open a browser and type in `http://localhost:9772/`.
3. An Azure AD B2C login page will show up.
active-directory-b2c Partner Deduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-deduce.md
At this point, the **Deduce RESTful API** has been set up, but it's not yet ava
1. Open the `TrustFrameworkBase.xml` file from the starter pack.
-1. Find and copy the entire contents of the **UserJourneys** element that includes 'Id=SignUpOrSignIn`.
+1. Find and copy the entire contents of the **UserJourneys** element that includes `Id=SignUpOrSignIn`.
1. Open the `TrustFrameworkExtensions.xml` file and find the **UserJourneys** element. If the element doesn't exist, add one, as sketched below.
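As a rough sketch (assuming the `SignUpOrSignIn` journey from the starter pack), the pasted element in `TrustFrameworkExtensions.xml` ends up with this shape; the orchestration steps themselves are the ones copied from `TrustFrameworkBase.xml`:

```xml
<!-- Skeleton only: the orchestration steps are the ones copied from
     TrustFrameworkBase.xml in the previous step. -->
<UserJourneys>
  <UserJourney Id="SignUpOrSignIn">
    <OrchestrationSteps>
      <!-- Paste the copied orchestration steps here, then adjust them for the Deduce integration. -->
    </OrchestrationSteps>
  </UserJourney>
</UserJourneys>
```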
active-directory-b2c Partner Xid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md
Now that you have a user journey, add the new identity provider to the user journ
The following XML demonstrates the orchestration steps of a user journey with xID identity provider:
- ```xml
+ ```xml
<UserJourney Id="CombinedSignInAndSignUp"> <OrchestrationSteps>
active-directory-b2c Publish App To Azure Ad App Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/publish-app-to-azure-ad-app-gallery.md
Previously updated : 03/30/2022 Last updated : 09/30/2022
Here are some benefits of adding your Azure AD B2C app to the app gallery:
## Sign in flow overview
-The sign in flow involves the following steps:
+The sign-in flow involves the following steps:
-1. Users go to the [My Apps portal](https://myapps.microsoft.com/) and select your app. The app opens the app sign in URL.
-1. The app sign in URL starts an authorization request and redirects users to the Azure AD B2C authorization endpoint.
+1. Users go to the [My Apps portal](https://myapps.microsoft.com/) and select your app. The app opens the app sign-in URL.
+1. The app sign-in URL starts an authorization request and redirects users to the Azure AD B2C authorization endpoint.
1. Users choose to sign in with their Azure AD "Corporate" account. Azure AD B2C takes them to the Azure AD authorization endpoint, where they sign in with their work account. 1. If the Azure AD SSO session is active, Azure AD issues an access token without prompting users to sign in again. Otherwise, users are prompted to sign in again.
The sign in flow involves the following steps:
Depending on the users' SSO session and Azure AD identity settings, they might be prompted to: - Provide their email address or phone number.+ - Enter their password or sign in with the [Microsoft authenticator app](https://www.microsoft.com/p/microsoft-authenticator/9nblgggzmcj6).+ - Complete multifactor authentication.+ - Accept the consent page. Your customer's tenant administrator can [grant tenant-wide admin consent to an app](../active-directory/manage-apps/grant-admin-consent.md). When consent is granted, the consent page won't be presented to users.
-Upon successful sign in, Azure AD returns a token to Azure AD B2C. Azure AD B2C validates and reads the token claims, and then returns a token to your application.
+Upon successful sign-in, Azure AD returns a token to Azure AD B2C. Azure AD B2C validates and reads the token claims, and then returns a token to your application.
## Prerequisites
To enable sign-in to your app with Azure AD B2C, register your app in the Azure
If you haven't already done so, [register a web application](tutorial-register-applications.md). Later, you'll register this app with the Azure app gallery.
-## Step 2: Set up sign in for multitenant Azure AD
+## Step 2: Set up sign-in for multitenant Azure AD
To allow employees and consumers from any Azure AD tenant to sign in by using Azure AD B2C, follow the guidance for [setting up sign in for multitenant Azure AD](identity-provider-azure-ad-multi-tenant.md?pivots=b2c-custom-policy). ## Step 3: Prepare your app
-In your app, copy the URL of the sign in endpoint. If you use the [web application sample](configure-authentication-sample-web-app.md), the sign in URL is `https://localhost:5001/MicrosoftIdentity/Account/SignIn?`. This URL is where the Azure AD app gallery takes users to sign in to your app.
+In your app, copy the URL of the sign-in endpoint. If you use the [web application sample](configure-authentication-sample-web-app.md), the sign-in URL is `https://localhost:5001/MicrosoftIdentity/Account/SignIn?`. This URL is where the Azure AD app gallery takes users to sign in to your app.
In production environments, the app registration redirect URI is ordinarily a publicly accessible endpoint where your app is running, such as `https://woodgrovedemo.com/Account/SignIn`. The reply URL must begin with `https`. ## Step 4: Publish your Azure AD B2C app
-Finally, add the multitenant app to the Azure AD app gallery. Follow the instructions in [Publish your app to the Azure AD app gallery](../active-directory/manage-apps/v2-howto-app-gallery-listing.md). To add your app to the app gallery, do the following:
+Finally, add the multitenant app to the Azure AD app gallery. Follow the instructions in [Publish your app to the Azure AD app gallery](../active-directory/manage-apps/v2-howto-app-gallery-listing.md). To add your app to the app gallery, use the following steps:
1. [Create and publish documentation](../active-directory/manage-apps/v2-howto-app-gallery-listing.md#create-and-publish-documentation). 1. [Submit your app](../active-directory/manage-apps/v2-howto-app-gallery-listing.md#submit-your-application) with the following information:
Finally, add the multitenant app to the Azure AD app gallery. Follow the instruc
|What feature would you like to enable when listing your application in the gallery? | Select **Federated SSO (SAML, WS-Fed & OpenID Connect)**. | | Select your application federation protocol| Select **OpenID Connect & OAuth 2.0**. | | Application (Client) ID | Provide the ID of [your Azure AD B2C application](#step-1-register-your-application-in-azure-ad-b2c). |
- | Application sign in URL|Provide the app sign in URL as it's configured in [Step 3. Prepare your app](#step-3-prepare-your-app).|
+ | Application sign in URL|Provide the app sign-in URL as it's configured in [Step 3. Prepare your app](#step-3-prepare-your-app).|
| Multitenant| Select **Yes**. |
- | | |
## Next steps -- Learn how to [Publish your app to the Azure AD app gallery](../active-directory/manage-apps/v2-howto-app-gallery-listing.md).
+- Learn how to [Publish your Azure AD app to the Azure AD app gallery](../active-directory/manage-apps/v2-howto-app-gallery-listing.md).
active-directory-b2c Register Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/register-apps.md
+
+ Title: Register apps in Azure Active Directory B2C
+
+description: Learn how to register different app types such as web apps, web APIs, single-page apps, mobile and desktop apps, daemon apps, Microsoft Graph apps, and SAML apps in Azure Active Directory B2C
+++++++ Last updated : 09/30/2022++++
+# Register apps in Azure Active Directory B2C
+
+Before your [applications](application-types.md) can interact with Azure Active Directory B2C (Azure AD B2C), you must register them in a tenant that you manage.
+
+Azure AD B2C supports authentication for various modern application architectures. The interaction of every application type with Azure AD B2C is different. Hence, you need to specify the type of app that you want to register.
++
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+- If you haven't already created your own [Azure AD B2C Tenant](tutorial-create-tenant.md), create one now. You can use an existing Azure AD B2C tenant.
++
+## Select an app type to register
+
+You can register different app types in your Azure AD B2C Tenant. The how-to guides below show you how to register and configure the various app types:
++
+- [Single-page application (SPA)](tutorial-register-spa.md)
+- [Web application](tutorial-register-applications.md)
+- [Native client (for mobile and desktop)](add-native-application.md)
+- [Web API](add-web-api-application.md)
+- [Daemon apps](client-credentials-grant-flow.md)
+- [Microsoft Graph application](microsoft-graph-get-started.md)
+- [SAML application](saml-service-provider.md?tabs=windows&pivots=b2c-custom-policy)
+- [Publish app in Azure AD app gallery](publish-app-to-azure-ad-app-gallery.md)
+
+
+
+
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
Below are sample requests to help outline what the sync engine currently sends v
"value": "False" } ]
-}
+ }
``` **With feature flag**
Below are sample requests to help outline what the sync engine currently sends v
"value": false } ]
-}
+ }
``` **Requests made to add a single-value string attribute:**
active-directory Application Proxy Configure Complex Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-complex-application.md
If successful, this method returns a `204 No Content` response code and does not
##### Request Here is an example of the request. - ```http PATCH https://graph.microsoft.com/beta/applications/{<object-id-of--the-complex-app-under-APP-Registrations} Content-type: application/json
active-directory Application Proxy Register Connector Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-register-connector-powershell.md
There are two methods you can use to register the connector:
class Program {
- #region constants
- /// <summary>
- /// The AAD authentication endpoint uri
- /// </summary>
- static readonly string AadAuthenticationEndpoint = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize";
-
- /// <summary>
- /// The application ID of the connector in AAD
- /// </summary>
- static readonly string ConnectorAppId = "55747057-9b5d-4bd4-b387-abf52a8bd489";
-
- /// <summary>
- /// The AppIdUri of the registration service in AAD
- /// </summary>
- static readonly string RegistrationServiceAppIdUri = "https://proxy.cloudwebappproxy.net/registerapp/user_impersonation";
-
- #endregion
-
- #region private members
- private string token;
- private string tenantID;
- #endregion
-
- public void GetAuthenticationToken()
- {
-
- IPublicClientApplication clientApp = PublicClientApplicationBuilder
- .Create(ConnectorAppId)
- .WithDefaultRedirectUri() // will automatically use the default Uri for native app
- .WithAuthority(AadAuthenticationEndpoint)
- .Build();
-
- AuthenticationResult authResult = null;
-
- IAccount account = null;
-
- IEnumerable<string> scopes = new string[] { RegistrationServiceAppIdUri };
-
- try
- {
- authResult = await clientApp.AcquireTokenSilent(scopes, account).ExecuteAsync();
- }
- catch (MsalUiRequiredException ex)
- {
- authResult = await clientApp.AcquireTokenInteractive(scopes).ExecuteAsync();
- }
--
- if (authResult == null || string.IsNullOrEmpty(authResult.AccessToken) || string.IsNullOrEmpty(authResult.TenantId))
+ #region constants
+ /// <summary>
+ /// The AAD authentication endpoint uri
+ /// </summary>
+ static readonly string AadAuthenticationEndpoint = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize";
+
+ /// <summary>
+ /// The application ID of the connector in AAD
+ /// </summary>
+ static readonly string ConnectorAppId = "55747057-9b5d-4bd4-b387-abf52a8bd489";
+
+ /// <summary>
+ /// The AppIdUri of the registration service in AAD
+ /// </summary>
+ static readonly string RegistrationServiceAppIdUri = "https://proxy.cloudwebappproxy.net/registerapp/user_impersonation";
+
+ #endregion
+
+ #region private members
+ private string token;
+ private string tenantID;
+ #endregion
+
+ public void GetAuthenticationToken()
{
- Trace.TraceError("Authentication result, token or tenant id returned are null");
- throw new InvalidOperationException("Authentication result, token or tenant id returned are null");
+ IPublicClientApplication clientApp = PublicClientApplicationBuilder
+ .Create(ConnectorAppId)
+ .WithDefaultRedirectUri() // will automatically use the default Uri for native app
+ .WithAuthority(AadAuthenticationEndpoint)
+ .Build();
+
+ AuthenticationResult authResult = null;
+
+ IAccount account = null;
+
+ IEnumerable<string> scopes = new string[] { RegistrationServiceAppIdUri };
+
+ try
+ {
+ authResult = await clientApp.AcquireTokenSilent(scopes, account).ExecuteAsync();
+ }
+ catch (MsalUiRequiredException ex)
+ {
+ authResult = await clientApp.AcquireTokenInteractive(scopes).ExecuteAsync();
+ }
+
+ if (authResult == null || string.IsNullOrEmpty(authResult.AccessToken) || string.IsNullOrEmpty(authResult.TenantId))
+ {
+ Trace.TraceError("Authentication result, token or tenant id returned are null");
+ throw new InvalidOperationException("Authentication result, token or tenant id returned are null");
+ }
+
+ token = authResult.AccessToken;
+ tenantID = authResult.TenantId;
}-
- token = authResult.AccessToken;
- tenantID = authResult.TenantId;
- }
- ```
+ }
+ ```
**Using PowerShell:** ```powershell # Load MSAL (Tested with version 4.7.1)
- Add-Type -Path "..\MSAL\Microsoft.Identity.Client.dll"
-
+ Add-Type -Path "..\MSAL\Microsoft.Identity.Client.dll"
+ # The AAD authentication endpoint uri
-
+ $authority = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize" #The application ID of the connector in AAD
There are two methods you can use to register the connector:
#The AppIdUri of the registration service in AAD $registrationServiceAppIdUri = "https://proxy.cloudwebappproxy.net/registerapp/user_impersonation"
- # Define the resources and scopes you want to call
+ # Define the resources and scopes you want to call
$scopes = New-Object System.Collections.ObjectModel.Collection["string"]
There are two methods you can use to register the connector:
[Microsoft.Identity.Client.IAccount] $account = $null
- # Acquiring the token
+ # Acquiring the token
$authResult = $null
There are two methods you can use to register the connector:
# Check AuthN result If (($authResult) -and ($authResult.AccessToken) -and ($authResult.TenantId)) {
-
- $token = $authResult.AccessToken
- $tenantId = $authResult.TenantId
- Write-Output "Success: Authentication result returned."
-
+ $token = $authResult.AccessToken
+ $tenantId = $authResult.TenantId
+
+ Write-Output "Success: Authentication result returned."
} Else {
-
- Write-Output "Error: Authentication result, token or tenant id returned with null."
-
+
+ Write-Output "Error: Authentication result, token or tenant id returned with null."
+ } ```
There are two methods you can use to register the connector:
## Next steps * [Publish applications using your own domain name](application-proxy-configure-custom-domain.md) * [Enable single-sign on](application-proxy-configure-single-sign-on-with-kcd.md)
-* [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)
+* [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)
active-directory Concept Authentication Default Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md
Previously updated : 10/19/2022 Last updated : 10/26/2022
Azure Active Directory (Azure AD) adds and improves security features to better protect customers against increasing attacks. As new attack vectors become known, Azure AD may respond by enabling protection by default to help customers stay ahead of emerging security threats.
-For example, in response to increasing MFA fatigue attacks, Microsoft recommended ways for customers to [defend users](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/defend-your-users-from-mfa-fatigue-attacks/ba-p/2365677). One recommendation to prevent users from accidental multifactor authentication (MFA) approvals is to enable [number matching](how-to-mfa-number-match.md). As a result, default behavior for number matching will be explicitly **Enabled** for all Microsoft Authenticator users.
+For example, in response to increasing MFA fatigue attacks, Microsoft recommended ways for customers to [defend users](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/defend-your-users-from-mfa-fatigue-attacks/ba-p/2365677). One recommendation to help prevent users from accidentally approving multifactor authentication (MFA) requests is to enable [number matching](how-to-mfa-number-match.md). As a result, the default behavior for number matching will be explicitly **Enabled** for all Microsoft Authenticator users. You can learn more about new security features like number matching in our blog post [Advanced Microsoft Authenticator security features are now generally available!](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/advanced-microsoft-authenticator-security-features-are-now/ba-p/2365673).
There are two ways for protection of a security feature to be enabled by default:
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
To enable CBA and configure username bindings using Graph API, complete the foll
#### Request body: -
- ```http
+ ```http
PATCH https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/x509Certificate Content-Type: application/json
To enable CBA and configure username bindings using Graph API, complete the foll
} ] }
+ ```
1. You'll get a `204 No content` response code. Re-run the GET request to make sure the policies are updated correctly. 1. Test the configuration by signing in with a certificate that satisfies the policy.
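For instance, the verification GET request in the previous step could also be issued from PowerShell with the Microsoft Graph PowerShell SDK rather than a raw HTTP client. The following is a minimal sketch, assuming you connect with a permission that can read authentication method policies (the scope shown is an assumption; use what your tenant requires):

```powershell
# Connect with the Microsoft Graph PowerShell SDK
Connect-MgGraph -Scopes "Policy.Read.All"

# Re-run the GET request and inspect the returned certificate-based authentication configuration
$policy = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/x509Certificate"
$policy | ConvertTo-Json -Depth 10
```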
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
The Office 365 suite makes it possible to target these services all at once. We
Targeting this group of applications helps to avoid issues that may arise because of inconsistent policies and dependencies. For example, the Exchange Online app is tied to traditional Exchange Online data like mail, calendar, and contact information. Related metadata may be exposed through different resources like search. To ensure that all metadata is protected as intended, administrators should assign policies to the Office 365 app.
-Administrators can exclude the entire Office 365 suite or specific Office 365 client apps from the Conditional Access policy.
+Administrators can exclude the entire Office 365 suite or specific Office 365 cloud apps from the Conditional Access policy.
-The following key applications are included in the Office 365 client app:
+The following key applications are affected by the Office 365 cloud app:
- Exchange Online - Microsoft 365 Search Service
active-directory Howto Hybrid Azure Ad Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-hybrid-azure-ad-join.md
Previously updated : 04/06/2022 Last updated : 10/26/2022
Hybrid Azure AD join requires devices to have access to the following Microsoft
- Your organization's Security Token Service (STS) (**For federated domains**) > [!WARNING]
-> If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or Azure AD tenant restrictions, ensure that traffic to these URLs are excluded from TLS break-and-inspect. Failure to exclude these URLs may cause interference with client certificate authentication, cause issues with device registration, and device-based Conditional Access.
+> If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or Azure AD tenant restrictions, ensure that traffic to `https://devices.login.microsoftonline.com` is excluded from TLS break-and-inspect. Failure to exclude this URL may interfere with client certificate authentication and cause issues with device registration and device-based Conditional Access.
If your organization requires access to the internet via an outbound proxy, you can use [Web Proxy Auto-Discovery (WPAD)](/previous-versions/tn-archive/cc995261(v=technet.10)) to enable Windows 10 or newer computers for device registration with Azure AD. To address issues configuring and managing WPAD, see [Troubleshooting Automatic Detection](/previous-versions/tn-archive/cc302643(v=technet.10)).
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
To enable security defaults in your directory:
### Require all users to register for Azure AD Multi-Factor Authentication
-All users in your tenant must register for multifactor authentication (MFA) in the form of the Azure AD Multi-Factor Authentication. Users have 14 days to register for Azure AD Multi-Factor Authentication by using the Microsoft Authenticator app. After the 14 days have passed, the user can't sign in until registration is completed. A user's 14-day period begins after their first successful interactive sign-in after enabling security defaults.
+All users in your tenant must register for multifactor authentication (MFA) in the form of Azure AD Multi-Factor Authentication. Users have 14 days to register for Azure AD Multi-Factor Authentication by using the [Microsoft Authenticator app](../authentication/concept-authentication-authenticator-app.md) or any app supporting [OATH TOTP](../authentication/concept-authentication-oath-tokens.md). After the 14 days have passed, the user can't sign in until registration is completed. A user's 14-day period begins after their first successful interactive sign-in after enabling security defaults.
### Require administrators to do multifactor authentication
This policy applies to all users who are accessing Azure Resource Manager servic
### Authentication methods
-Security defaults users are required to register for and use Azure AD Multi-Factor Authentication **using the Microsoft Authenticator app using notifications**. Users may use verification codes from the Microsoft Authenticator app but can only register using the notification option.
+Security defaults users are required to register for and use Azure AD Multi-Factor Authentication using the [Microsoft Authenticator app using notifications](../authentication/concept-authentication-authenticator-app.md). Users may use verification codes from the Microsoft Authenticator app but can only register using the notification option. Users can also use any third-party application that uses [OATH TOTP](../authentication/concept-authentication-oath-tokens.md) to generate codes.
> [!WARNING] > Do not disable methods for your organization if you are using security defaults. Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the [MFA service settings portal](../authentication/howto-mfa-getstarted.md#choose-authentication-methods-for-mfa).
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
If you are reviewing access to an application, then before creating the review,
1. In the **Enable review decision helpers** section choose whether you want your reviewer to receive recommendations during the review process: 1. If you select **No sign-in within 30 days**, users who have signed in during the previous 30-day period are recommended for approval. Users who haven't signed in during the past 30 days are recommended for denial. This 30-day interval is irrespective of whether the sign-ins were interactive or not. The last sign-in date for the specified user will also display along with the recommendation.
- 1. If you select User-to-Group Affiliation, reviewers will get the recommendation to Approve or Deny access for the users based on the user's average distance in the organization's reporting structure. Users who are very distant from all the other users within the group are considered to have "low affiliation" and will get a deny recommendation in the group access reviews.
+ 1. If you select **(Preview) User-to-Group Affiliation**, reviewers will get the recommendation to Approve or Deny access for the users based on the user's average distance in the organization's reporting structure. Users who are very distant from all the other users within the group are considered to have "low affiliation" and will get a deny recommendation in the group access reviews.
> [!NOTE] > If you create an access review based on applications, your recommendations are based on the 30-day interval period depending on when the user last signed in to the application rather than the tenant.
active-directory Identity Governance Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-automation.md
To generate a self-signed certificate,
```powershell $cert | ft Thumbprint
+ ```
1. After you have exported the files, you can remove the certificate and key pair from your local user certificate store. In subsequent steps you will remove the `.pfx` and `.crt` files as well, once the certificate and private key have been uploaded to the Azure Automation and Azure AD services.
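As a rough sketch of that cleanup, assuming `$cert` still holds the certificate object created earlier and that the exported file names below are placeholders, the certificate can be removed from the current user's store and the exported files deleted like this:

```powershell
# Remove the certificate from the current user's personal store
Get-ChildItem -Path "Cert:\CurrentUser\My" |
    Where-Object { $_.Thumbprint -eq $cert.Thumbprint } |
    Remove-Item

# Delete the exported certificate files once they've been uploaded (file names are placeholders)
Remove-Item -Path ".\AzureAutomation.pfx", ".\AzureAutomation.crt" -ErrorAction SilentlyContinue
```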
Next, you will create an app registration in Azure AD, so that Azure AD will rec
1. Select each of the permissions that your Azure Automation account will require, then select **Add permissions**.
- * If your runbook is only performing queries or updates within a single catalog, then you do not need to assign it tenant-wide application permissions; instead you can assign the service principal to the catalog's **Catalog owner** or **Catalog reader** role.
- * If your runbook is only performing queries for entitlement management, then it can use the **EntitlementManagement.Read.All** permission.
- * If your runbook is making changes to entitlement management, for example to create assignments across multiple catalogs, then use the **EntitlementManagement.ReadWrite.All** permission.
- * For other APIs, ensure that the necessary permission is added. For example, for identity protection, the **IdentityRiskyUser.Read.All** permission should be added.
+ * If your runbook is only performing queries or updates within a single catalog, then you do not need to assign it tenant-wide application permissions; instead you can assign the service principal to the catalog's **Catalog owner** or **Catalog reader** role.
+ * If your runbook is only performing queries for entitlement management, then it can use the **EntitlementManagement.Read.All** permission.
+ * If your runbook is making changes to entitlement management, for example to create assignments across multiple catalogs, then use the **EntitlementManagement.ReadWrite.All** permission.
+ * For other APIs, ensure that the necessary permission is added. For example, for identity protection, the **IdentityRiskyUser.Read.All** permission should be added.
-10. Select **Grant admin permissions** to give your app those permissions.
+1. Select **Grant admin permissions** to give your app those permissions.
## Create Azure Automation variables
Import-Module Microsoft.Graph.Authentication
$ClientId = Get-AutomationVariable -Name 'ClientId' $TenantId = Get-AutomationVariable -Name 'TenantId' $Thumbprint = Get-AutomationVariable -Name 'Thumbprint'
-Connect-MgGraph -clientId $ClientId -tenantid $TenantId -certificatethumbprint $Thumbprint
+Connect-MgGraph -clientId $ClientId -tenantId $TenantId -certificatethumbprint $Thumbprint
``` 5. Select **Test pane**, and select **Start**. Wait a few seconds for the Azure Automation processing of your runbook script to complete.
You can also add input parameters to your runbook, by adding a `Param` section a
```powershell Param (
-  [String]$AccessPackageAssignmentId
+ [String] $AccessPackageAssignmentId
) ```
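If the runbook is started from PowerShell rather than from the portal, the parameter can be supplied as a hashtable. The following is a sketch using the Az.Automation module, where the automation account, resource group, runbook name, and GUID are all placeholders:

```powershell
# Start the runbook and pass the access package assignment ID as an input parameter
Start-AzAutomationRunbook -AutomationAccountName "MyAutomationAccount" `
    -ResourceGroupName "MyResourceGroup" `
    -Name "MyRunbook" `
    -Parameters @{ AccessPackageAssignmentId = "00000000-0000-0000-0000-000000000000" }
```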
There are two places where you can see the expiration date in the Azure portal.
## Next steps -- [Create an Automation account using the Azure portal](../../automation/quickstarts/create-azure-automation-account-portal.md)
+- [Create an Automation account using the Azure portal](../../automation/quickstarts/create-azure-automation-account-portal.md)
active-directory Review Recommendations Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/review-recommendations-access-reviews.md
na Previously updated : 8/5/2022 Last updated : 10/25/2022
For more information, see [License requirements](access-reviews-overview.md#lice
## Inactive user recommendations A user is considered 'inactive' if they haven't signed in to the tenant within the last 30 days. For reviews of application assignments, this behavior is adjusted to check each user's last activity in the app rather than in the entire tenant. When inactive user recommendations are enabled for an access review, the last sign-in date for each user will be evaluated once the review starts, and any user who hasn't signed in within 30 days will be given a recommended action of Deny. Additionally, when these decision helpers are enabled, reviewers will be able to see the last sign-in date for all users being reviewed. This sign-in date (as well as the resulting recommendation) is determined when the review begins and will not be updated while the review is in progress.
+## User-to-Group Affiliation (preview)
+Making the review experience easier and more accurate empowers IT admins and reviewers to make more informed decisions. This machine learning-based recommendation is a step toward automating access reviews, enabling intelligent automation and reducing access rights attestation fatigue.
+
+User-to-Group Affiliation in an organization's chart is defined as two or more users who share similar characteristics in an organization's reporting structure.
+
+This recommendation detects user affiliation with other users within the group, based on the organization's reporting-structure similarity. The recommendation relies on a scoring mechanism that is calculated by computing the user's average distance from the remaining users in the group. Users who are very distant from all the other group members, based on the organization's chart, are considered to have "low affiliation" within the group.
+
+If this decision helper is enabled by the creator of the access review, reviewers can receive User-to-Group Affiliation recommendations for group access reviews.
+
+> [!NOTE]
+> This feature is only available for users in your directory. A user should have a manager attribute and be part of an organizational hierarchy for User-to-Group Affiliation to work.
+
+The following image has an example of an organization's reporting structure in a cosmetics company:
+
+![Screenshot that shows a fictitious hierarchical organization chart for a cosmetics company.](./media/review-recommendations-group-access-reviews/org-chart-example.png)
+
+Based on the reporting structure in the example image, users who are a statistically significant distance away from other users within the group would get a "Deny" recommendation from the system if the User-to-Group Affiliation recommendation was selected for the group access review.
+
+For example, Phil, who works in the Personal care division, is in a group with Debby, Irwin, and Emily, who all work in the Cosmetics division. The group is called *Fresh Skin*. If an access review of the Fresh Skin group is performed, Phil would be considered to have low affiliation, based on the reporting structure and his distance from the other group members. The system will create a **Deny** recommendation in the group access review.
+ ## Next Steps - [Create an access review](create-access-review.md)-- [Review access to groups or applications](perform-access-review.md)-
+- [Review access to groups or applications](perform-access-review.md)
active-directory Workflows Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/workflows-faqs.md
Yes, key user properties like employeeHireDate and employeeType are supported fo
### How do I see more details and parameters of tasks and the attributes that are being updated?
-Some tasks do update existing attributes; however, we don't currently share those specific details. As these tasks are updating attributes related to other Azure AD features, so you can find that info in those docs. For temporary access pass, we're writing to the appropriate attributes listed [here](/graph/api/resources/temporaryaccesspassauthenticationmethod).
+Some tasks do update existing attributes; however, we don't currently share those specific details. Because these tasks update attributes related to other Azure AD features, you can find that information in the documentation for those features. For temporary access pass, we're writing to the appropriate attributes listed [here](/graph/api/resources/temporaryaccesspassauthenticationmethod).
### Is it possible for me to create new tasks and how? For example, triggering other graph APIs/web hooks?
active-directory Reference Connect Adsync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adsync.md
The following documentation provides reference information for the ADSync.psm1 P
This cmdlet resets the password for the service account and updates it both in Azure AD and in the sync engine. ### SYNTAX+ #### byIdentifier
- ```
+ ```powershell
Add-ADSyncADDSConnectorAccount [-Identifier] <Guid> [-EACredential <PSCredential>] [<CommonParameters>]
- ```
+ ```
#### byName
- ```
+ ```powershell
Add-ADSyncADDSConnectorAccount [-Name] <String> [-EACredential <PSCredential>] [<CommonParameters>]
- ```
+ ```
### DESCRIPTION This cmdlet resets the password for the service account and updates it both in Azure AD and in the sync engine.
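For example, based on the `byName` parameter set shown above, a typical invocation might look like the following sketch (the connector name is illustrative):

```powershell
# Reset the AD DS connector account password for the connector named "contoso.com",
# prompting for Enterprise Admin credentials
Add-ADSyncADDSConnectorAccount -Name "contoso.com" -EACredential (Get-Credential)
```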
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
- Disable-ADSyncExportDeletionThreshold [[-AADCredential] <PSCredential>] [-WhatIf] [-Confirm]
+ ```powershell
+ Disable-ADSyncExportDeletionThreshold [[-AADCredential] <PSCredential>] [-WhatIf] [-Confirm]
[<CommonParameters>]
- ```
+ ```
### DESCRIPTION Disables feature for deletion threshold at Export stage.
The following documentation provides reference information for the ADSync.psm1 P
### EXAMPLES #### Example 1
- ```powershell
+ ```powershell
PS C:\> Disable-ADSyncExportDeletionThreshold -AADCredential $aadCreds
- ```
+ ```
Uses the provided AAD Credentials to disable the feature for export deletion threshold.
The following documentation provides reference information for the ADSync.psm1 P
#### -AADCredential The AAD credential.
- ```yaml
+ ```yaml
Type: PSCredential Parameter Sets: (All) Aliases:
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Enable-ADSyncExportDeletionThreshold [-DeletionThreshold] <UInt32> [[-AADCredential] <PSCredential>] [-WhatIf] [-Confirm] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncAutoUpgrade [-Detail] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### SearchByIdentifier
- ```
+ ```powershell
Get-ADSyncCSObject [-Identifier] <Guid> [<CommonParameters>] ``` #### SearchByConnectorIdentifierDistinguishedName
- ```
+ ```powershell
Get-ADSyncCSObject [-ConnectorIdentifier] <Guid> [-DistinguishedName] <String> [-SkipDNValidation] [-Transient] [<CommonParameters>] ``` #### SearchByConnectorIdentifier
- ```
+ ```powershell
Get-ADSyncCSObject [-ConnectorIdentifier] <Guid> [-Transient] [-StartIndex <Int32>] [-MaxResultCount <Int32>] [<CommonParameters>] ``` #### SearchByConnectorNameDistinguishedName
- ```
+ ```powershell
Get-ADSyncCSObject [-ConnectorName] <String> [-DistinguishedName] <String> [-SkipDNValidation] [-Transient] [<CommonParameters>] ``` #### SearchByConnectorName
- ```
+ ```powershell
Get-ADSyncCSObject [-ConnectorName] <String> [-Transient] [-StartIndex <Int32>] [-MaxResultCount <Int32>] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncCSObjectLog [-Identifier] <Guid> [-Count] <UInt32> [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncDatabaseConfiguration [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncExportDeletionThreshold [[-AADCredential] <PSCredential>] [-WhatIf] [-Confirm] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncMVObject -Identifier <Guid> [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncRunProfileResult [-RunHistoryId <Guid>] [-ConnectorId <Guid>] [-RunProfileId <Guid>] [-RunNumber <Int32>] [-NumberRequested <Int32>] [-RunStepDetails] [-StepNumber <Int32>] [-WhatIf] [-Confirm] [<CommonParameters>]
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncRunStepResult [-RunHistoryId <Guid>] [-StepHistoryId <Guid>] [-First] [-StepNumber <Int32>] [-WhatIf] [-Confirm] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncScheduler [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncSchedulerConnectorOverride [-ConnectorIdentifier <Guid>] [-ConnectorName <String>] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### SearchByDistinguishedName
- ```
+ ```powershell
Invoke-ADSyncCSObjectPasswordHashSync [-ConnectorName] <String> [-DistinguishedName] <String> [<CommonParameters>] ``` #### SearchByIdentifier
- ```
+ ```powershell
Invoke-ADSyncCSObjectPasswordHashSync [-Identifier] <Guid> [<CommonParameters>] ``` #### CSObject
- ```
+ ```powershell
Invoke-ADSyncCSObjectPasswordHashSync [-CsObject] <CsObject> [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### ConnectorName
- ```
+ ```powershell
Invoke-ADSyncRunProfile -ConnectorName <String> -RunProfileName <String> [-Resume] [<CommonParameters>] ``` #### ConnectorIdentifier
- ```
+ ```powershell
Invoke-ADSyncRunProfile -ConnectorIdentifier <Guid> -RunProfileName <String> [-Resume] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### ServiceAccount
- ```
+ ```powershell
Remove-ADSyncAADServiceAccount [-AADCredential] <PSCredential> [-Name] <String> [-WhatIf] [-Confirm] [<CommonParameters>] ``` #### ServicePrincipal
- ```
+ ```powershell
Remove-ADSyncAADServiceAccount [-ServicePrincipal] [-WhatIf] [-Confirm] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Set-ADSyncAutoUpgrade [-AutoUpgradeState] <AutoUpgradeConfigurationState> [[-SuspensionReason] <String>] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Set-ADSyncScheduler [[-CustomizedSyncCycleInterval] <TimeSpan>] [[-SyncCycleEnabled] <Boolean>] [[-NextSyncCyclePolicyType] <SynchronizationPolicyType>] [[-PurgeRunHistoryInterval] <TimeSpan>] [[-MaintenanceEnabled] <Boolean>] [[-SchedulerSuspended] <Boolean>] [-Force] [<CommonParameters>]
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### ConnectorIdentifier
- ```
+ ```powershell
Set-ADSyncSchedulerConnectorOverride -ConnectorIdentifier <Guid> [-FullImportRequired <Boolean>] [-FullSyncRequired <Boolean>] [<CommonParameters>] ``` #### ConnectorName
- ```
+ ```powershell
Set-ADSyncSchedulerConnectorOverride -ConnectorName <String> [-FullImportRequired <Boolean>] [-FullSyncRequired <Boolean>] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### online
- ```
+ ```powershell
Start-ADSyncPurgeRunHistory [[-PurgeRunHistoryInterval] <TimeSpan>] [<CommonParameters>] ``` #### offline
- ```
+ ```powershell
Start-ADSyncPurgeRunHistory [-Offline] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Start-ADSyncSyncCycle [[-PolicyType] <SynchronizationPolicyType>] [[-InteractiveMode] <Boolean>] [<CommonParameters>] ```
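For example, a delta synchronization cycle is commonly started like this:

```powershell
# Start a delta (incremental) synchronization cycle
Start-ADSyncSyncCycle -PolicyType Delta
```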
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Stop-ADSyncRunProfile [[-ConnectorName] <String>] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Stop-ADSyncSyncCycle [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### ConnectorName_ObjectDN
- ```
+ ```powershell
Sync-ADSyncCSObject -ConnectorName <String> -DistinguishedName <String> [-Commit] [<CommonParameters>] ``` #### ConnectorIdentifier_ObjectDN
- ```
+ ```powershell
Sync-ADSyncCSObject -ConnectorIdentifier <Guid> -DistinguishedName <String> [-Commit] [<CommonParameters>] ``` #### ObjectIdentifier
- ```
+ ```powershell
Sync-ADSyncCSObject -Identifier <Guid> [-Commit] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### ByEnvironment
- ```
+ ```powershell
Test-AdSyncAzureServiceConnectivity [-AzureEnvironment] <Identifier> [[-Service] <AzureService>] [-CurrentUser] [<CommonParameters>] ``` #### ByTenantName
- ```
+ ```powershell
Test-AdSyncAzureServiceConnectivity [-Domain] <String> [[-Service] <AzureService>] [-CurrentUser] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Test-AdSyncUserHasPermissions [-ForestFqdn] <String> [-AdConnectorId] <Guid> [-AdConnectorCredential] <PSCredential> [-BaseDn] <String> [-PropertyType] <String> [-PropertyValue] <String> [-WhatIf] [-Confirm] [<CommonParameters>]
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
To assign users to an app using PowerShell, you need:
# Assign the user to the app role New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId -PrincipalId $user.ObjectId -ResourceId $sp.ObjectId -Id $appRole.Id
+ ```
To assign a group to an enterprise app, you must replace `Get-AzureADUser` with `Get-AzureADGroup` and replace `New-AzureADUserAppRoleAssignment` with `New-AzureADGroupAppRoleAssignment`.
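As a sketch of that group variant, using the same AzureAD module as the user example (the group name, app name, and role name below are illustrative):

```powershell
# Look up the group, the app's service principal, and the desired app role
$group   = Get-AzureADGroup -SearchString "Sales Team"
$sp      = Get-AzureADServicePrincipal -Filter "displayName eq 'Workplace Analytics'"
$appRole = $sp.AppRoles | Where-Object { $_.DisplayName -eq "Analyst (Limited access)" }

# Assign the group to the app role
New-AzureADGroupAppRoleAssignment -ObjectId $group.ObjectId -PrincipalId $group.ObjectId -ResourceId $sp.ObjectId -Id $appRole.Id
```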
This example assigns the user Britta Simon to the Microsoft Workplace Analytics
# Assign the values to the variables $username = "britta.simon@contoso.com" $app_name = "Workplace Analytics"
+ ```
1. In this example, we don't know the exact name of the application role we want to assign to Britta Simon. Run the following commands to get the user ($user) and the service principal ($sp) by using the user's UPN and the service principal's display name.
This example assigns the user Britta Simon to the Microsoft Workplace Analytics
# Get the user to assign, and the service principal for the app to assign to $user = Get-AzureADUser -ObjectId "$username" $sp = Get-AzureADServicePrincipal -Filter "displayName eq '$app_name'"
+ ```
1. Run the command `$sp.AppRoles` to display the roles available for the Workplace Analytics application. In this example, we want to assign Britta Simon the Analyst (Limited access) Role. ![Shows the roles available to a user using Workplace Analytics Role](./media/assign-user-or-group-access-portal/workplace-analytics-role.png)
This example assigns the user Britta Simon to the Microsoft Workplace Analytics
# Assign the values to the variables $app_role_name = "Analyst (Limited access)" $appRole = $sp.AppRoles | Where-Object { $_.DisplayName -eq $app_role_name }
+ ```
1. Run the following command to assign the user to the app role:
This example assigns the user Britta Simon to the Microsoft Workplace Analytics
## Remove all users who are assigned to the application
- ```powershell
-
- #Retrieve the service principal object ID.
- $app_name = "<Your App's display name>"
- $sp = Get-AzureADServicePrincipal -Filter "displayName eq '$app_name'"
- $sp.ObjectId
+```powershell
+#Retrieve the service principal object ID.
+$app_name = "<Your App's display name>"
+$sp = Get-AzureADServicePrincipal -Filter "displayName eq '$app_name'"
+$sp.ObjectId
# Get Service Principal using objectId $sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>"
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md
To delete an enterprise application, you need:
1. Get the list of enterprise applications in your tenant. ```powershell
- Get-MgServicePrincipal
+ Get-MgServicePrincipal
```+ 1. Record the object ID of the enterprise app you want to delete.+ 1. Delete the enterprise application. ```powershell Remove-MgServicePrincipal -ServicePrincipalId 'd4142c52-179b-4d31-b5b9-08940873507b'
+ ```
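If you'd rather not copy the object ID by hand, the two steps can be combined. This is a sketch in which the display name is a placeholder:

```powershell
# Find the service principal by display name, then delete it
$sp = Get-MgServicePrincipal -Filter "displayName eq 'My Test App'"
Remove-MgServicePrincipal -ServicePrincipalId $sp.Id
```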
:::zone-end - :::zone pivot="ms-graph" Delete an enterprise application using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
active-directory Qs Configure Template Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-template-windows-vmss.md
In this section, you assign a user-assigned managed identity to a virtual machin
} }
- ```
+ ```
**Microsoft.Compute/virtualMachineScaleSets API version 2017-12-01**
In this section, you assign a user-assigned managed identity to a virtual machin
"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',variables('<USERASSIGNEDIDENTITY>'))]" ] }- }
+ ```
3. When you are done, your template should look similar to the following:
- **Microsoft.Compute/virtualMachineScaleSets API version 2018-06-01**
+ **Microsoft.Compute/virtualMachineScaleSets API version 2018-06-01**
```json "resources": [
In this section, you assign a user-assigned managed identity to a virtual machin
} ] ```+ ### Remove user-assigned managed identity from an Azure virtual machine scale set If you have a virtual machine scale set that no longer needs a user-assigned managed identity:
active-directory Tutorial Linux Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-cosmos-db.md
To gain access to the Azure Cosmos DB account access keys from the Resource Mana
```azurecli-interactive az resource show --id /subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.Compute/virtualMachines/<VM NAMe> --api-version 2017-12-01 ```+ The response includes the details of the system-assigned managed identity (note the principalID as it is used in the next section): ```output
To complete these steps, you need an SSH client. If you are using Windows, you c
> In the previous request, the value of the "resource" parameter must be an exact match for what is expected by Azure AD. When using the Azure Resource Manager resource ID, you must include the trailing slash on the URI. > In the following response, the access_token element has been shortened for brevity.
- ```bash
- {"access_token":"eyJ0eXAiOi...",
- "expires_in":"3599",
- "expires_on":"1518503375",
- "not_before":"1518499475",
- "resource":"https://management.azure.com/",
- "token_type":"Bearer",
- "client_id":"1ef89848-e14b-465f-8780-bf541d325cd5"}
- ```
-
+ ```json
+ {
+ "access_token":"eyJ0eXAiOi...",
+ "expires_in":"3599",
+ "expires_on":"1518503375",
+ "not_before":"1518499475",
+ "resource":"https://management.azure.com/",
+ "token_type":"Bearer",
+ "client_id":"1ef89848-e14b-465f-8780-bf541d325cd5"
+ }
+ ```
+ ### Get access keys from Azure Resource Manager to make Azure Cosmos DB calls Now use CURL to call Resource Manager using the access token retrieved in the previous section to retrieve the Azure Cosmos DB account access key. Once we have the access key, we can query Azure Cosmos DB. Be sure to replace the `<SUBSCRIPTION ID>`, `<RESOURCE GROUP>`, and `<COSMOS DB ACCOUNT NAME>` parameter values with your own values. Replace the `<ACCESS TOKEN>` value with the access token you retrieved earlier. If you want to retrieve read/write keys, use key operation type `listKeys`. If you want to retrieve read-only keys, use the key operation type `readonlykeys`:
active-directory Tutorial Linux Vm Access Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-access-key.md
To complete these steps, you will need an SSH client. If you are using Windows,
> In the previous request, the value of the "resource" parameter must be an exact match for what is expected by Azure AD. When using the Azure Resource Manager resource ID, you must include the trailing slash on the URI. > In the following response, the access_token element has been shortened for brevity.
- ```bash
- {"access_token":"eyJ0eXAiOiJ...",
- "refresh_token":"",
- "expires_in":"3599",
- "expires_on":"1504130527",
- "not_before":"1504126627",
- "resource":"https://management.azure.com",
- "token_type":"Bearer"}
- ```
-
+ ```json
+ {
+ "access_token": "eyJ0eXAiOiJ...",
+ "refresh_token": "",
+ "expires_in": "3599",
+ "expires_on": "1504130527",
+ "not_before": "1504126627",
+ "resource": "https://management.azure.com",
+ "token_type": "Bearer"
+ }
+ ```
+ ## Get storage account access keys from Azure Resource Manager to make storage calls Now use CURL to call Resource Manager using the access token we retrieved in the previous section, to retrieve the storage access key. Once we have the storage access key, we can call storage upload/download operations. Be sure to replace the `<SUBSCRIPTION ID>`, `<RESOURCE GROUP>`, and `<STORAGE ACCOUNT NAME>` parameter values with your own values. Replace the `<ACCESS TOKEN>` value with the access token you retrieved earlier:
The CURL response gives you the list of Keys:
```bash {"keys":[{"keyName":"key1","permissions":"Full","value":"iqDPNt..."},{"keyName":"key2","permissions":"Full","value":"U+uI0B..."}]} ```+ Create a sample blob file to upload to your blob storage container. On a Linux VM, you can do this with the following command. ```bash
Response:
In this tutorial, you learned how to use a Linux VM system-assigned managed identity to access Azure Storage using an access key. To learn more about Azure Storage access keys see: > [!div class="nextstepaction"]
->[Manage your storage access keys](../../storage/common/storage-account-create.md)
+>[Manage your storage access keys](../../storage/common/storage-account-create.md)
active-directory Tutorial Linux Vm Access Storage Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-sas.md
Now that you have your SSH client continue to the steps below:
> In the previous request, the value of the "resource" parameter must be an exact match for what is expected by Azure AD. When using the Azure Resource Manager resource ID, you must include the trailing slash on the URI. > In the following response, the access_token element has been shortened for brevity.
- ```bash
- {"access_token":"eyJ0eXAiOiJ...",
- "refresh_token":"",
- "expires_in":"3599",
- "expires_on":"1504130527",
- "not_before":"1504126627",
- "resource":"https://management.azure.com",
- "token_type":"Bearer"}
- ```
+ ```json
+ {
+ "access_token":"eyJ0eXAiOiJ...",
+ "refresh_token":"",
+ "expires_in":"3599",
+ "expires_on":"1504130527",
+ "not_before":"1504126627",
+ "resource":"https://management.azure.com",
+ "token_type":"Bearer"
+ }
+ ```
## Get a SAS credential from Azure Resource Manager to make storage calls
Response:
In this tutorial, you learned how to use a Linux VM system-assigned managed identity to access Azure Storage using a SAS credential. To learn more about Azure Storage SAS, see: > [!div class="nextstepaction"]
->[Using shared access signatures (SAS)](../../storage/common/storage-sas-overview.md)
+>[Using shared access signatures (SAS)](../../storage/common/storage-sas-overview.md)
active-directory Custom User Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-user-permissions.md
+
+ Title: User management permissions for Azure AD custom roles (preview) - Azure Active Directory
+description: User management permissions for Azure AD custom roles in the Azure portal, PowerShell, or Microsoft Graph API.
+++++++ Last updated : 10/26/2022+++++
+# User management permissions for Azure AD custom roles (preview)
+
+> [!IMPORTANT]
+> User management permissions for Azure AD custom roles is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+User management permissions can be used in custom role definitions in Azure Active Directory (Azure AD) to grant fine-grained access such as the following:
+
+- Read or update basic properties of users
+- Read or update identity of users
+- Read or update job information of users
+- Update contact information of users
+- Update parental controls of users
+- Update settings of users
+- Read direct reports of users
+- Update extension properties of users
+- Read device information of users
+- Read or manage licenses of users
+- Update password policies of users
+- Read assignments and memberships of users
+
+This article lists the permissions you can use in your custom roles for different user management scenarios. For information about how to create custom roles, see [Create and assign a custom role](custom-create.md).
+
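As an illustration of how these permissions end up in a custom role, the following sketch creates a role definition with the Microsoft Graph PowerShell SDK; the role name, description, and chosen permissions are examples only.

```powershell
# Connect with a scope that allows role management (assumes sufficient admin rights)
Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"

# Permissions drawn from the lists below
$rolePermissions = @(
    @{
        AllowedResourceActions = @(
            "microsoft.directory/users/standard/read",
            "microsoft.directory/users/basic/update",
            "microsoft.directory/users/manager/update"
        )
    }
)

# Create the custom role definition
New-MgRoleManagementDirectoryRoleDefinition -DisplayName "Tier 1 User Support" `
    -Description "Can read users and update basic properties and managers" `
    -RolePermissions $rolePermissions `
    -IsEnabled:$true
```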
+## License requirements
++
+## Read or update basic properties of users
+
+The following permissions are available to read or update basic properties of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/standard/read | Read basic properties on users. |
+> | microsoft.directory/users/basic/update | Update basic properties on users. |
+
+## Read or update identity of users
+
+The following permissions are available to read or update identity of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/identities/read | Read identities of users. |
+> | microsoft.directory/users/identities/update | Update the identity properties of users, such as name and user principal name. |
+
+## Read or update job information of users
+
+The following permissions are available to read or update job information of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/manager/read | Read manager of users. |
+> | microsoft.directory/users/manager/update | Update manager for users. |
+> | microsoft.directory/users/jobInfo/update | Update the job info properties of users, such as job title, department, and company name. |
+
+## Update contact information of users
+
+The following permissions are available to update contact information of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/contactInfo/update | Update the contact info properties of users, such as address, phone, and email. |
+
+## Update parental controls of users
+
+The following permissions are available to update parental controls of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/parentalControls/update | Update parental controls of users. |
+
+## Update settings of users
+
+The following permissions are available to update settings of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/usageLocation/update | Update usage location of users. |
+
+## Read direct reports of users
+
+The following permissions are available to read direct reports of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/directReports/read | Read the direct reports for users. |
+
+## Update extension properties of users
+
+The following permissions are available to update extension properties of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/extensionProperties/update | Update extension properties of users. |
+
+## Read device information of users
+
+The following permissions are available to read device information of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/ownedDevices/read | Read owned devices of users |
+> | microsoft.directory/users/registeredDevices/read | Read registered devices of users |
+> | microsoft.directory/users/deviceForResourceAccount/read | Read deviceForResourceAccount of users. |
+
+## Read or manage licenses of users
+
+The following permissions are available to read or manage licenses of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/licenseDetails/read | Read license details of users. |
+> | microsoft.directory/users/assignLicense | Manage user licenses. |
+> | microsoft.directory/users/reprocessLicenseAssignment | Reprocess license assignments for users. |
+
+## Update password policies of users
+
+The following permissions are available to update password policies of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/passwordPolicies/update | Update password policies properties of users. |
+
+## Read assignments and memberships of users
+
+The following permissions are available to read assignments and memberships of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/appRoleAssignments/read | Read application role assignments for users. |
+> | microsoft.directory/users/scopedRoleMemberOf/read | Read a user's membership of an Azure AD role that is scoped to an administrative unit. |
+> | microsoft.directory/users/memberOf/read | Read the group memberships of users. |
+
+## Full list of permissions
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/appRoleAssignments/read | Read application role assignments for users. |
+> | microsoft.directory/users/assignLicense | Manage user licenses. |
+> | microsoft.directory/users/basic/update | Update basic properties on users. |
+> | microsoft.directory/users/contactInfo/update | Update the contact info properties of users, such as address, phone, and email. |
+> | microsoft.directory/users/deviceForResourceAccount/read | Read deviceForResourceAccount of users. |
+> | microsoft.directory/users/directReports/read | Read the direct reports for users. |
+> | microsoft.directory/users/extensionProperties/update | Update extension properties of users. |
+> | microsoft.directory/users/identities/read | Read identities of users. |
+> | microsoft.directory/users/identities/update | Update the identity properties of users, such as name and user principal name. |
+> | microsoft.directory/users/jobInfo/update | Update the job info properties of users, such as job title, department, and company name. |
+> | microsoft.directory/users/licenseDetails/read | Read license details of users. |
+> | microsoft.directory/users/manager/read | Read manager of users. |
+> | microsoft.directory/users/manager/update | Update manager for users. |
+> | microsoft.directory/users/memberOf/read | Read the group memberships of users. |
+> | microsoft.directory/users/ownedDevices/read | Read owned devices of users. |
+> | microsoft.directory/users/parentalControls/update | Update parental controls of users. |
+> | microsoft.directory/users/passwordPolicies/update | Update password policies properties of users. |
+> | microsoft.directory/users/registeredDevices/read | Read registered devices of users. |
+> | microsoft.directory/users/reprocessLicenseAssignment | Reprocess license assignments for users. |
+> | microsoft.directory/users/scopedRoleMemberOf/read | Read a user's membership of an Azure AD role that is scoped to an administrative unit. |
+> | microsoft.directory/users/standard/read | Read basic properties on users. |
+> | microsoft.directory/users/usageLocation/update | Update usage location of users. |
+
+## Next steps
+
+- [Create and assign a custom role in Azure Active Directory](custom-create.md)
+- [List Azure AD role assignments](view-assignments.md)
active-directory Ascentis Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ascentis-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
In the **Sign-on URL** text box, type a URL using any one of the following patterns:
- ```https
+ ```https
https://selfservice.ascentis.com/<clientname>/STS/signin.aspx?SAMLResponse=true https://selfservice2.ascentis.com/<clientname>/STS/signin.aspx?SAMLResponse=true ```
When you click the Ascentis tile in the Access Panel, you should be automaticall
- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
active-directory Cernercentral Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cernercentral-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you should decide what
This section guides you through connecting your Azure AD to Cerner CentralΓÇÖs User Roster using Cerner's SCIM user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in Cerner Central based on user and group assignment in Azure AD. > [!TIP]
-> You may also choose to enabled SAML-based Single Sign-On for Cerner Central, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other. For more information, see the [Cerner Central single sign-on tutorial](cernercentral-tutorial.md).
+> You may also choose to enable SAML-based single sign-on for Cerner Central, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other. For more information, see the [Cerner Central single sign-on tutorial](cernercentral-tutorial.md).
### To configure automatic user account provisioning to Cerner Central in Azure AD:
In order to provision user accounts to Cerner Central, you'll need to request
* In the **Secret Token** field, enter the OAuth bearer token you generated in step #3 and click **Test Connection**.
- * You should see a success notification on the upper­right side of your portal.
+ * You should see a success notification on the upper-right side of your portal.
1. Enter the email address of a person or group who should receive provisioning error notifications in the **Notification Email** field, and check the checkbox below.
For more information on how to read the Azure AD provisioning logs, see [Reporti
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md).
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md).
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal:
d. In the **Logout URL** box, enter a URL in the pattern `https://<FortiGate IP or FQDN address>:<Custom SSL VPN port><FQDN>/remote/saml/logout`.
- > [!NOTE]
- > These values are just patterns. You need to use the actual **Sign on URL**, **Identifier**, **Reply URL**, and **Logout URL** that is configured on the FortiGate.
+ > [!NOTE]
+ > These values are just patterns. You need to use the actual **Sign on URL**, **Identifier**, **Reply URL**, and **Logout URL** that is configured on the FortiGate.
1. The FortiGate SSL VPN application expects SAML assertions in a specific format, which requires you to add custom attribute mappings to the configuration. The following screenshot shows the list of default attributes.
- ![Screenshot of showing Attributes and Claims section.](./media/fortigate-ssl-vpn-tutorial/claims.png)
-
+ ![Screenshot of showing Attributes and Claims section.](./media/fortigate-ssl-vpn-tutorial/claims.png)
1. The claims required by FortiGate SSL VPN are shown in the following table. The names of these claims must match the names used in the **Perform FortiGate command-line configuration** section of this tutorial. Names are case-sensitive.
Follow these steps to enable Azure AD SSO in the Azure portal:
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select the **Download** link next to **Certificate (Base64)** to download the certificate and save it on your computer:
- ![Screenshot that shows the certificate download link.](common/certificatebase64.png)
+ ![Screenshot that shows the certificate download link.](common/certificatebase64.png)
1. In the **Set up FortiGate SSL VPN** section, copy the appropriate URL or URLs, based on your requirements:
- ![Screenshot that shows the configuration URLs.](common/copy-configuration-urls.png)
+ ![Screenshot that shows the configuration URLs.](common/copy-configuration-urls.png)
#### Create an Azure AD test user
To complete these steps, you'll need the values you recorded earlier:
| FortiGate SAML CLI setting | Equivalent Azure configuration | | | |
- | SP entity ID (`entity-id`) | Identifier (Entity ID) |
-| SP Single Sign-On URL (`single-sign-on-url`) | Reply URL (Assertion Consumer Service URL) |
+ | SP entity ID (`entity-id`) | Identifier (Entity ID) |
+| SP Single Sign-On URL (`single-sign-on-url`) | Reply URL (Assertion Consumer Service URL) |
| SP Single Logout URL (`single-logout-url`) | Logout URL | | IdP Entity ID (`idp-entity-id`) | Azure AD Identifier | | IdP Single Sign-On URL (`idp-single-sign-on-url`) | Azure Login URL |
To complete these steps, you'll need the values you recorded earlier:
1. Establish an SSH session to your FortiGate appliance, and sign in with a FortiGate Administrator account. 1. Run these commands and substitute the `<values>` with the information that you collected previously:
- ```console
+ ```console
config user saml
- edit azure
- set cert <FortiGate VPN Server Certificate Name>
- set entity-id < Identifier (Entity ID)Entity ID>
- set single-sign-on-url < Reply URL Reply URL>
- set single-logout-url <Logout URL>
- set idp-entity-id <Azure AD Identifier>
- set idp-single-sign-on-url <Azure Login URL>
- set idp-single-logout-url <Azure Logout URL>
- set idp-cert <Base64 SAML Certificate Name>
- set user-name username
- set group-name group
- next
+ edit azure
+ set cert <FortiGate VPN Server Certificate Name>
+ set entity-id <Identifier (Entity ID)>
+ set single-sign-on-url <Reply URL>
+ set single-logout-url <Logout URL>
+ set idp-entity-id <Azure AD Identifier>
+ set idp-single-sign-on-url <Azure Login URL>
+ set idp-single-logout-url <Azure Logout URL>
+ set idp-cert <Base64 SAML Certificate Name>
+ set user-name username
+ set group-name group
+ next
end-
- ```
+ ```
#### Configure FortiGate for group matching
In this section, you'll configure FortiGate to recognize the Object ID of the se
To complete these steps, you'll need the Object ID of the FortiGateAccess security group that you created earlier in this tutorial. 1. Establish an SSH session to your FortiGate appliance, and sign in with a FortiGate Administrator account.+ 1. Run these commands:
- ```console
+ ```console
config user group
- edit FortiGateAccess
- set member azure
- config match
- edit 1
- set server-name azure
- set group-name <Object Id>
- next
- end
- next
+ edit FortiGateAccess
+ set member azure
+ config match
+ edit 1
+ set server-name azure
+ set group-name <Object Id>
+ next
+ end
+ next
end
- ```
-
+ ```
+ #### Create a FortiGate VPN Portals and Firewall Policy In this section, you'll configure a FortiGate VPN Portals and Firewall Policy that grants access to the FortiGateAccess security group you created earlier in this tutorial.
active-directory Linkedinelevate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinelevate-provisioning-tutorial.md
The first step is to retrieve your LinkedIn access token. If you are an Enterpri
4. Click **+ Add new SCIM configuration** and follow the procedure by filling in each field. > [!NOTE]
- > When auto­assign licenses is not enabled, it means that only user data is synced.
+ > When auto-assign licenses is not enabled, it means that only user data is synced.
![Screenshot shows the LinkedIn Account Center Global Settings.](./media/linkedinelevate-provisioning-tutorial/linkedin_elevate1.PNG) > [!NOTE]
- > When auto­license assignment is enabled, you need to note the application instance and license type. Licenses are assigned on a first come, first serve basis until all the licenses are taken.
+ > When auto-license assignment is enabled, you need to note the application instance and license type. Licenses are assigned on a first-come, first-served basis until all the licenses are taken.
![Screenshot shows the S C I M Setup page.](./media/linkedinelevate-provisioning-tutorial/linkedin_elevate2.PNG)
The first step is to retrieve your LinkedIn access token. If you are an Enterpri
* In the **Secret Token** field, enter the access token you generated in step 1 and click **Test Connection** .
- * You should see a success notification on the upper­right side of
+ * You should see a success notification on the upper-right side of
your portal. 12. Enter the email address of a person or group who should receive provisioning error notifications in the **Notification Email** field, and check the checkbox below.
active-directory Uber Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/uber-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Uber for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Uber.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: f16047ee-8ed6-4f8f-86e4-d9bc2cbd9016
+++
+ms.devlang: na
+ Last updated : 10/25/2022+++
+# Tutorial: Configure Uber for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Uber and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [Uber](https://www.uber.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Uber.
+> * Remove users in Uber when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Uber.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* You must be onboarded to an [Uber for Business](https://business.uber.com/) organization and have Admin access to it.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Uber](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Uber to support provisioning with Azure AD
+
+Before you start the setup, make sure you meet the following requirements to enable SCIM provisioning end to end:
+
+* You must be onboarded to an [Uber for Business](https://business.uber.com/) organization and have Admin access to it.
+* You must allow syncing via identity providers. To find this setting, hover over your profile photo in the top-right corner and go to **Settings > Integrations**, then toggle **Allow**.
+* Grab your `organization-id` and substitute it into `https://api.uber.com/v1/scim/organizations/{organization-id}/v2` to create your **Tenant Url**; see the short example after this list. You enter this Tenant Url on the Provisioning tab of your Uber application in the Azure portal.
+
+ ![Screenshot of Grab Organization ID.](media/uber-provisioning-tutorial/organization-id.png)
+
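For illustration only, here's a minimal sketch of building that **Tenant Url**. The organization ID shown is a placeholder, not a real value:

```python
# Illustrative sketch: substitute your own organization ID (placeholder shown).
organization_id = "00000000-0000-0000-0000-000000000000"
tenant_url = f"https://api.uber.com/v1/scim/organizations/{organization_id}/v2"
print(tenant_url)  # paste this value into the Tenant Url field in the Azure portal
```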
+## Step 3. Add Uber from the Azure AD application gallery
+
+Add Uber from the Azure AD application gallery to start managing provisioning to Uber. If you have previously set up Uber for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+
+## Step 5. Configure automatic user provisioning to Uber
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Uber based on user assignments in Azure AD.
+
+### To configure automatic user provisioning for Uber in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Uber**.
+
+ ![Screenshot of the Uber link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, enter the **Tenant Url**, and then select **Authorize**. Make sure that you enter your Uber account's Admin credentials. Click **Test Connection** to ensure Azure AD can connect to Uber. If the connection fails, ensure your Uber account has Admin permissions and try again.
+
+ ![Screenshot of Token.](media/uber-provisioning-tutorial/authorize.png)
+
+1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Uber**.
+
+1. Review the user attributes that are synchronized from Azure AD to Uber in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Uber for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Uber API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Uber|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |externalId|String||&check;
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Uber, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to Uber by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
The Admin API is served over HTTPS. All URLs referenced in the documentation hav
## Authentication
-The API is protected through Azure Active Directory and uses OAuth2 bearer tokens. The app registration needs to have the API Permission for `Verifiable Credentials Service Admin` and then when acquiring the access token the app should use scope `6a8b4b39-c021-437c-b060-5a14a3fd65f3/full_access`.
+The API is protected through Azure Active Directory and uses OAuth2 bearer tokens. The app registration needs the API permission for `Verifiable Credentials Service Admin`, and when acquiring the access token, the app should use the scope `6a8b4b39-c021-437c-b060-5a14a3fd65f3/full_access`. The access token must be for a user with the [global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) or the [authentication policy administrator](../../active-directory/roles/permissions-reference.md#authentication-policy-administrator) role.
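As a minimal, non-authoritative sketch (assuming a public client app registration that has been granted the `Verifiable Credentials Service Admin` permission and using the MSAL for Python library; the client and tenant IDs are placeholders), acquiring a delegated access token might look like this:

```python
# Illustrative sketch only: acquire a delegated token for the Admin API with MSAL for Python.
# The client ID and tenant ID below are placeholders for your own app registration and tenant.
import msal

app = msal.PublicClientApplication(
    client_id="<app-registration-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# Sign in as a user who holds the Global Administrator or
# Authentication Policy Administrator role.
result = app.acquire_token_interactive(
    scopes=["6a8b4b39-c021-437c-b060-5a14a3fd65f3/full_access"]
)

if "access_token" in result:
    token = result["access_token"]  # send as "Authorization: Bearer <token>" to the Admin API
else:
    raise RuntimeError(result.get("error_description"))
```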
## Onboarding
active-directory Issuance Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md
The issuance request payload contains information about your verifiable credenti
"clientName": "Verifiable Credential Expert Sample" }, "type": "VerifiedCredentialExpert",
- "manifest": "https://verifiedid.did.msidentity.com/v1.0/12345678-0000-0000-0000-000000000000/verifiableCredentials/contracts/VerifiedCredentialExpert",
+ "manifest": "https://verifiedid.did.msidentity.com/v1.0/tenants/12345678-0000-0000-0000-000000000000/verifiableCredentials/contracts/MTIzNDU2NzgtMDAwMC0wMDAwLTAwMDAtMDAwMDAwMDAwMDAwdmVyaWZpZWRjcmVkZW50aWFsZXhwZXJ0/manifest",
"claims": { "given_name": "Megan", "family_name": "Bowen"
aks Azure Csi Blob Storage Static https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-static.md
The following example demonstrates how to mount a Blob storage container as a pe
storage: 10Gi volumeName: pv-blob storageClassName: azureblob-nfs-premium
- ```
+ ```
4. Run the following command to create the persistent volume claim using the `kubectl create` command referencing the YAML file created earlier:
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Disks on Azure Kub
description: Learn how to use the Container Storage Interface (CSI) driver for Azure Disks in an Azure Kubernetes Service (AKS) cluster. Previously updated : 07/21/2022-- Last updated : 10/13/2022 # Use the Azure Disks Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
In addition to in-tree driver features, Azure Disks CSI driver supports the foll
- Performance improvements during concurrent disk attach and detach - In-tree drivers attach or detach disks in serial, while CSI drivers attach or detach disks in batch. There's significant improvement when there are multiple disks attaching to one node.
+- Premium SSD v1 and v2 are supported.
- Zone-redundant storage (ZRS) disk support - `Premium_ZRS`, `StandardSSD_ZRS` disk types are supported. ZRS disk could be scheduled on the zone or non-zone node, without the restriction that disk volume should be co-located in the same zone as a given node. For more information, including which regions are supported, see [Zone-redundant storage for managed disks](../virtual-machines/disks-redundancy.md). - [Snapshot](#volume-snapshots)
In addition to in-tree driver features, Azure Disks CSI driver supports the foll
|Name | Meaning | Available Value | Mandatory | Default value | | | | |
-|skuName | Azure Disks storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS` | No | `StandardSSD_LRS`|
+|skuName | Azure Disks storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `PremiumV2_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS` | No | `StandardSSD_LRS`|
|fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows| |cachingMode | [Azure Data Disk Host Cache Setting](../virtual-machines/windows/premium-storage-performance.md#disk-caching) | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`| |location | Specify Azure region where Azure Disks will be created | `eastus`, `westus`, etc. | No | If empty, driver will use the same location name as current AKS cluster|
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
An up-to-date cluster avoids unnecessary performance issues and ensures you bene
Add-ons and extensions covered by the [AKS support policy](/azure/aks/support-policies) provide additional and supported functionality to your cluster while allowing you to benefit from the latest performance improvements and energy optimizations throughout your cluster lifecycle.
-* Ensure you install [Keda](/azure/aks/integrations#available-add-ons) as an add-on and [GitOps & Dapr](/azure/aks/cluster-extensions?tabs=azure-cli#currently-available-extensions) as extensions.
+* Ensure you install [KEDA](/azure/aks/integrations#available-add-ons) as an add-on and [GitOps & Dapr](/azure/aks/cluster-extensions?tabs=azure-cli#currently-available-extensions) as extensions.
### Containerize your workload where applicable
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
Title: Configure kubenet networking in Azure Kubernetes Service (AKS)
description: Learn how to configure kubenet (basic) network in Azure Kubernetes Service (AKS) to deploy an AKS cluster into an existing virtual network and subnet. Previously updated : 06/20/2022 Last updated : 10/26/2022
Limitations:
* For a system-assigned managed identity, providing your own subnet and route table is supported only via the Azure CLI, because the CLI adds the role assignment automatically. If you're using an ARM template or other clients, you must use a [user-assigned managed identity][Create an AKS cluster with user-assigned managed identities], assign permissions before cluster creation, and ensure the user-assigned identity has write permissions to your custom subnet and custom route table. * Using the same route table with multiple AKS clusters isn't supported.
-After you create a custom route table and associate it to your subnet in your virtual network, you can create a new AKS cluster that uses your route table.
+> [!NOTE]
+> To create and use your own VNet and route table with the `kubenet` network plugin, you need to use a [user-assigned control plane identity][bring-your-own-control-plane-managed-identity]. For a system-assigned control plane identity, the identity ID can't be retrieved before the cluster is created, which causes a delay during role assignment.
+> To create and use your own VNet and route table with the `azure` network plugin, both system-assigned and user-assigned managed identities are supported, but a user-assigned managed identity is recommended for BYO scenarios.
+
+After creating a custom route table and associating it with a subnet in your virtual network, you can create a new AKS cluster specifying your route table with a user-assigned managed identity.
You need to use the subnet ID for where you plan to deploy your AKS cluster. This subnet also must be associated with your custom route table. ```azurecli-interactive
az network vnet subnet list --resource-group
```azurecli-interactive # Create a Kubernetes cluster with a custom subnet preconfigured with a route table
-az aks create -g MyResourceGroup -n MyManagedCluster --vnet-subnet-id <MySubnetID-resource-id>
+az aks create -g myResourceGroup -n myManagedCluster --vnet-subnet-id mySubnetIDResourceID --enable-managed-identity --assign-identity controlPlaneIdentityResourceID
``` ## Next steps
With an AKS cluster deployed into your existing virtual network subnet, you can
[network-comparisons]: concepts-network.md#compare-network-models [custom-route-table]: ../virtual-network/manage-route-table.md [Create an AKS cluster with user-assigned managed identities]: configure-kubenet.md#create-an-aks-cluster-with-user-assigned-managed-identities
+[bring-your-own-control-plane-managed-identity]: ../aks/use-managed-identity.md#bring-your-own-control-plane-managed-identity
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
This scenario is intended for customers using Azure Monitor to monitor AKS. It d
## Container insights AKS generates [platform metrics and resource logs](monitor-aks-reference.md), like any other Azure resource, that you can use to monitor its basic health and performance. Enable [Container insights](../azure-monitor/containers/container-insights-overview.md) to expand on this monitoring. Container insights is a feature in Azure Monitor that monitors the health and performance of managed Kubernetes clusters hosted on AKS in addition to other cluster configurations. Container insights provides interactive views and workbooks that analyze collected data for a variety of monitoring scenarios.
-[Prometheus](https://aka.ms/azureprometheus-promio) and [Grafana](https://aka.ms/azureprometheus-promio-grafana) are CNCF backed widely popular open source tools for kubernetes monitoring. AKS exposes many metrics in Prometheus format which makes Prometheus a popular choice for monitoring. [Container insights](../azure-monitor/containers/container-insights-overview.md) has native integration with AKS, collecting critical metrics and logs, alerting on identified issues, and providing visualization with workbooks. It also collects certain Prometheus metrics, and many native Azure Monitor insights are built-up on top of Prometheus metrics. Container insights complements and completes E2E monitoring of AKS including log collection which Prometheus as stand-alone tool doesn't provide. Many customers use Prometheus integration and Azure Monitor together for E2E monitoring.
+[Prometheus](https://aka.ms/azureprometheus-promio) and [Grafana](https://aka.ms/azureprometheus-promio-grafana) are CNCF backed widely popular open source tools for kubernetes monitoring. AKS exposes many metrics in Prometheus format which makes Prometheus a popular choice for monitoring. [Container insights](../azure-monitor/containers/container-insights-overview.md) has native integration with AKS, collecting critical metrics and logs, alerting on identified issues, and providing visualization with workbooks. It also collects certain Prometheus metrics, and many native Azure Monitor Insights are built-up on top of Prometheus metrics. Container insights complements and completes E2E monitoring of AKS including log collection which Prometheus as stand-alone tool doesn't provide. Many customers use Prometheus integration and Azure Monitor together for E2E monitoring.
Learn more about using Container insights at [Container insights overview](../azure-monitor/containers/container-insights-overview.md). [Monitor layers of AKS with Container insights](#monitor-layers-of-aks-with-container-insights) below introduces various features of Container insights and the monitoring scenarios that they support.
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workl
description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity (preview). Previously updated : 10/03/2022 Last updated : 10/24/2022 # Deploy and configure workload identity (preview) on an Azure Kubernetes Service (AKS) cluster
When ready, refresh the registration of the *Microsoft.ContainerService* resourc
az provider register --namespace Microsoft.ContainerService ```
-## Register the 'EnableOIDCIssuerPreview' feature flag
-
-Register the `EnableOIDCIssuerPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnableOIDCIssuerPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableOIDCIssuerPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
- ## Create AKS cluster Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer. The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup*:
In this article, you deployed a Kubernetes cluster and configured it to use a wo
<!-- INTERNAL LINKS --> [kubernetes-concepts]: concepts-clusters-workloads.md [az-feature-register]: /cli/azure/feature#az_feature_register
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-list]: /cli/azure/feature#az-feature-list
[workload-identity-overview]: workload-identity-overview.md [create-key-vault-azure-cli]: ../key-vault/general/quick-create-cli.md [az-keyvault-list]: /cli/azure/keyvault#az-keyvault-list
api-management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/overview.md
The following table lists all the upcoming breaking changes and feature retireme
| [Resource provider source IP address updates][bc1] | March 31, 2023 | | [Resource provider source IP address updates][rp2023] | September 30, 2023 | | [API version retirements][api2023] | September 30, 2023 |
-| [Deprecated (legacy) portal retirement][devportal2023] | October 2023 |
+| [Deprecated (legacy) portal retirement][devportal2023] | October 31, 2023 |
| [Self-hosted gateway v0/v1 retirement][shgwv0v1] | October 1, 2023 | | [stv1 platform retirement][stv12024] | August 31, 2024 | | [ADAL-based Azure AD or Azure AD B2C identity provider retirement][msal2025] | September 30, 2025 |
api-management How To Configure Service Fabric Backend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-service-fabric-backend.md
Add the [`set-backend-service`](api-management-transformation-policies.md#SetBac
1. On the **Design** tab, in the **Inbound processing** section, select the code editor (**</>**) icon. 1. Position the cursor inside the **&lt;inbound&gt;** element 1. Add the `set-service-backend` policy statement.
- * In `backend-id`, substitute the name of your Service Fabric backend.
+ * In `backend-id`, substitute the name of your Service Fabric backend.
- * The `sf-resolve-condition` is a condition for re-resolving a service location and resending a request. The number of retries was set when configuring the backend. For example:
+ * The `sf-resolve-condition` is a condition for re-resolving a service location and resending a request. The number of retries was set when configuring the backend. For example:
```xml <set-backend-service backend-id="mysfbackend" sf-resolve-condition="@(context.LastError?.Reason == "BackendConnectionFailure")"/>
- ```
+ ```
1. Select **Save**. :::image type="content" source="media/backends/set-backend-service.png" alt-text="Configure set-backend-service policy":::
app-service Deploy Container Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-container-github-action.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli). ```azurecli-interactive
- az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName
-/providers/Microsoft.Web/sites/
+ az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/
``` 1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
When you configure the workflow file later, you use the secret for the input `cr
with: creds: ${{ secrets.AZURE_CREDENTIALS }} ```+ # [OpenID Connect](#tab/openid) You need to provide your application's **Client ID**, **Tenant ID** and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
jobs:
publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} images: 'mycontainer.azurecr.io/myapp:${{ github.sha }}' ```+ # [Service principal](#tab/service-principal) ```yaml
jobs:
run: | az logout ```+ ## Next steps
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 9/15/2022 Last updated : 10/26/2022
If your App Service Environment doesn't pass the validation checks or you try to
|Migration to ASEv3 is not allowed for this ASE. |You won't be able to migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). | |Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) has been met. |Remove unneeded environments or contact support to review your options. | |`<ZoneRedundant><DedicatedHosts><ASEv3/ASE>` is not available in this location. |You'll see this error if you're trying to migrate an App Service Environment in a region that doesn't support one of your requested features. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
-|Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](using-an-ase.md#upgrade-preference) from the Azure portal. |Wait until the upgrade finishes and then migrate. |
+|Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](how-to-upgrade-preference.md) from the Azure portal. In some cases, an upgrade will be initiated when visiting the migration page if your App Service Environment isn't on the current build. |Wait until the upgrade finishes and then migrate. |
|App Service Environment management operation in progress. |Your App Service Environment is undergoing a management operation. These operations can include activities such as deployments or upgrades. Migration is blocked until these operations are complete. |You'll be able to migrate once these operations are complete. | ## Overview of the migration process using the migration feature
There's no cost to migrate your App Service Environment. You'll stop being charg
> [Using an App Service Environment v3](using.md) > [!div class="nextstepaction"]
-> [Custom domain suffix](./how-to-custom-domain-suffix.md)
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
When an outdated runtime is hidden from the Portal, any of your existing sites u
If you need to create another web app with an outdated runtime version that is no longer shown on the Portal, see the language configuration guides for instructions on how to get the runtime version of your site. You can use the Azure CLI to create another site with the same runtime. Alternatively, you can use the **Export Template** button on the web app blade in the Portal to export an ARM template of the site. You can reuse this template to deploy a new site with the same runtime and configuration.
-#### Debian 9 End of Life
-
-On June 30th 2022 Debian 9 (also known as "Stretch") will reach End-of-Life (EOL) status, which means security patches and updates will cease. As of June 2022, a platform update is rolling out to provide an upgrade path to Debian 11 (also known as "Bullseye"). The runtimes listed below are currently using Debian 9; if you are using one of the listed runtimes, follow the instructions below to upgrade your site to Bullseye.
--- Python 3.8-- Python 3.7-- .NET 3.1-- PHP 7.4-
-> [!NOTE]
-> To ensure customer applications are running on secure and supported Debian distributions, after February 2023 all Linux web apps still running on Debian 9 (Stretch) will be upgraded to Debian 11 (Bullseye) automatically.
->
-
-##### Verify the platform update
-
-First, validate that the new platform update which contains Debian 11 has reached your site.
-
-1. Navigate to the SCM site (also known as Kudu site) of your webapp. You can browse to this site at `http://<your-site-name>.scm.azurewebsites.net/Env` (replace `\<your-site-name>` with the name of your web app).
-1. Under "Environment Variables", search for `PLATFORM_VERSION`. The value of this environment variable is the current platform version of your web app.
-1. If the value of `PLATFORM_VERSION` starts with "99" or greater, then your site is on the latest platform update and you can continue to the section below. If the value does **not** show "99" or greater, then your site has not yet received the latest platform update--please check again at a later date.
-
-Next, create a deployment slot to test that your application works properly with Debian 11 before applying the change to production.
-
-1. [Create a deployment slot](deploy-staging-slots.md#add-a-slot) if you do not already have one, and clone your settings from the production slot. A deployment slot will allow you to safely test changes to your application (such as upgrading to Debian 11) and swap those changes into production after review.
-1. To upgrade to Debian 11 (Bullseye), create an app setting on your slot named `WEBSITE_LINUX_OS_VERSION` with a value of `DEBIAN|BULLSEYE`.
-
- ```bash
- az webapp config appsettings set -g MyResourceGroup -n MyUniqueApp --settings WEBSITE_LINUX_OS_VERSION="DEBIAN|BULLSEYE"
- ```
-1. Deploy your application to the deployment slot using the tool of your choice (VS Code, Azure CLI, GitHub Actions, etc.)
-1. Confirm your application is functioning as expected in the deployment slot.
-1. [Swap your production and staging slots](deploy-staging-slots.md#swap-two-slots). This will apply the `WEBSITE_LINUX_OS_VERSION=DEBIAN|BULLSEYE` app setting to production.
-1. Delete the deployment slot if you are no longer using it.
-
-##### Resources
--- [Debian Long Term Support schedule](https://wiki.debian.org/LTS)-- [Debian 11 (Bullseye) Release Notes](https://www.debian.org/releases/bullseye/)-- [Debain 9 (Stretch) Release Notes](https://www.debian.org/releases/stretch/)- ### Limitations > [!NOTE]
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
First, create an Azure SQL Server to host the database. A new Azure SQL Server is created by using the [az sql server create ](/cli/azure/sql/server#az-sql-server-create) command.
-Replace the *server-name* placeholder with a unique SQL Database name. The SQL Database name is used as part of the globally unique SQL Database endpoint. Also, replace *db-username* and *db-username* with a username and password of your choice.
+Replace the *server-name* placeholder with a unique SQL server name. The server name is used as part of the globally unique SQL Database endpoint. Also, replace *db-username* and *db-password* with a username and password of your choice.
```azurecli-interactive az sql server create \
application-gateway Understanding Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/understanding-pricing.md
Azure Application Gateway is a layer 7 load-balancing solution, which enables scalable, highly available, and secure web application delivery on Azure. There are no upfront costs or termination costs associated with Application Gateway.
-You will be billed only for the resources pre-provisioned and utilized based on actual hourly consumption. Costs associated with Application Gateway are classified into two components: fixed costs and variable costs. Actual costs within each component will vary according to the SKU being utilized.
+You'll be billed only for the resources pre-provisioned and utilized based on actual hourly consumption. Costs associated with Application Gateway are classified into two components: fixed costs and variable costs. Actual costs within each component will vary according to the SKU being utilized.
-This article describes the costs associated with each SKU and it is recommended that users utilize this document for planning and managing costs associated with the Azure Application Gateway.
+This article describes the costs associated with each SKU and it's recommended that users utilize this document for planning and managing costs associated with the Azure Application Gateway.
## V2 SKUs
Compute Unit is the measure of compute capacity consumed. Factors affecting comp
Compute unit guidance: * Standard_v2 - Each compute unit is capable of approximately 50 connections per second with RSA 2048-bit key TLS certificate.
-* WAF_v2 - Each compute unit can support approximately 10 concurrent requests per second for 70-30% mix of traffic with 70% requests less than 2 KB GET/POST and remaining higher. WAF performance is not affected by response size currently.
+* WAF_v2 - Each compute unit can support approximately 10 concurrent requests per second for 70-30% mix of traffic with 70% requests less than 2 KB GET/POST and remaining higher. WAF performance isn't affected by response size currently.
##### Instance Count Pre-provisioning of resources for Application Gateway V2 SKUs is defined in terms of instance count. Each instance guarantees a minimum of 10 capacity units in terms of processing capability. The same instance could potentially support more than 10 capacity units for different traffic patterns depending upon the Capacity Unit parameters.
V2 SKUs are billed based on the consumption and constitute of two parts:
The fixed cost also includes the cost associated with the public IP attached to the Application Gateway.
- The number of instances running at any point of time is not considered as a factor for fixed costs for V2 SKUs. The fixed costs of running a Standard_V2 (or WAF_V2) would be same per hour regardless of the number of instances running within the same Azure region.
+ The number of instances running at any point of time isn't considered as a factor for fixed costs for V2 SKUs. The fixed costs of running a Standard_V2 (or WAF_V2) would be same per hour regardless of the number of instances running within the same Azure region.
* Capacity Unit Costs
Since 80 (reserved capacity) > 40 (required capacity), no additional CUs are req
Fixed Price = $0.246 * 730 (Hours) = $179.58
-Variable Costs = $0.008 * 8 (Instance Units) * 10(capacity units) * 730 (Hours) = $467.2
+Variable Costs = $0.008 * 8 (Instance Units) * 10 (capacity units) * 730 (Hours) = $467.2
Total Costs = $179.58 + $467.2 = $646.78
If processing capacity equivalent to 10 additional CUs was available for use wit
Fixed Price = $0.246 * 730 (Hours) = $179.58
-Variable Costs = $0.008 * ( 3(Instance Units) * 10(capacity units) + 10 (additional capacity units) ) * 730 (Hours) = $233.6
+Variable Costs = $0.008 * ( 3 (Instance Units) * 10 (capacity units) + 10 (additional capacity units) ) * 730 (Hours) = $233.6
Total Costs = $179.58 + $233.6 = $413.18
In this scenario the Application Gateway resource is under scaled and could pote
Fixed Price = $0.246 * 730 (Hours) = $179.58
-Variable Costs = $0.008 * ( 3(Instance Units) * 10(capacity units) + 7 (additional capacity units) ) * 730 (Hours) = $216.08
+Variable Costs = $0.008 * ( 3(Instance Units) * 10 (capacity units) + 7 (additional capacity units) ) * 730 (Hours) = $216.08
Total Costs = $179.58 + $216.08 = $395.66
Total Costs = $179.58 + $216.08 = $395.66
### Example 2 ΓÇô WAF_V2 instance with Autoscaling
-Let's assume you've provisioned a WAF_V2 with autoscaling enabled and set the minimum instance count to 6 for the entire month. The request load has caused the WAF instance to scale out and utilize 65 Capacity units(scale out of 5 capacity units, while 60 units were reserved) for the entire month.
+Let's assume you've provisioned a WAF_V2 with autoscaling enabled and set the minimum instance count to 6 for the entire month. The request load has caused the WAF instance to scale out and utilize 65 Capacity units (scale out of 5 capacity units, while 60 units were reserved) for the entire month.
Your Application Gateway costs using the pricing mentioned above would be calculated as follows: Monthly price estimates are based on 730 hours of usage per month. Fixed Price = $0.443 * 730 (Hours) = $323.39
-Variable Costs = $0.0144 * 65(capacity units) * 730 (Hours) = $683.28
+Variable Costs = $0.0144 * 65 (capacity units) * 730 (Hours) = $683.28
Total Costs = $323.39 + $683.28 = $1006.67
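To sanity-check the arithmetic in these examples, here's a small illustrative helper (not an official pricing tool; the rates are simply the example figures quoted in this article) that reproduces the Example 2 numbers:

```python
# Illustrative only: reproduces the V2 SKU cost arithmetic used in these examples.
# The rates are the example figures quoted in this article, not a live price sheet.
def v2_monthly_cost(fixed_rate_per_hour, cu_rate_per_hour, capacity_units, hours=730):
    fixed = fixed_rate_per_hour * hours
    variable = cu_rate_per_hour * capacity_units * hours
    return fixed, variable, fixed + variable

# WAF_V2, Example 2: 65 capacity units consumed for the whole month.
fixed, variable, total = v2_monthly_cost(0.443, 0.0144, 65)
print(f"Fixed=${fixed:.2f}, Variable=${variable:.2f}, Total=${total:.2f}")
# Fixed=$323.39, Variable=$683.28, Total=$1006.67
```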
Monthly price estimates are based on 730 hours of usage per month.
Fixed Price = $0.443 * 730 (Hours) = $323.39
-Variable Costs = $0.0144 * 1(capacity units) * 730 (Hours) = $10.512
+Variable Costs = $0.0144 * 1 (capacity units) * 730 (Hours) = $10.512
Total Costs = $323.39 + $10.512 = $333.902 ### Example 3 (b) ΓÇô WAF_V2 instance with Autoscaling with 0 Min instance count
-Let's assume you've provisioned a WAF_V2 with autoscaling enabled and set the minimum instance count to 0 for the entire month. However, there is 0 traffic directed to the WAF instance for the entire month.
+Let's assume you've provisioned a WAF_V2 with autoscaling enabled and set the minimum instance count to 0 for the entire month. However, there's 0 traffic directed to the WAF instance for the entire month.
Your Application Gateway costs using the pricing mentioned above would be calculated as follows: Fixed Price = $0.443 * 730 (Hours) = $323.39
-Variable Costs = $0.0144 * 0(capacity units) * 730 (Hours) = $0
+Variable Costs = $0.0144 * 0 (capacity units) * 730 (Hours) = $0
Total Costs = $323.39 + $0 = $323.39
-### Example 3 (C) ΓÇô WAF_V2 instance with manual scaling set to 1 instance
+### Example 3 (c) ΓÇô WAF_V2 instance with manual scaling set to 1 instance
-LetΓÇÖs assume youΓÇÖve provisioned a WAF_V2 and set it to manual scaling with the minimum acceptable value of 1 instance for the entire month. However, there is 0 traffic directed to the WAF for the entire month.
+LetΓÇÖs assume youΓÇÖve provisioned a WAF_V2 and set it to manual scaling with the minimum acceptable value of 1 instance for the entire month. However, there's 0 traffic directed to the WAF for the entire month.
Your Application Gateway costs using the pricing mentioned above would be calculated as follows: Monthly price estimates are based on 730 hours of usage per month. Fixed Price = $0.443 * 730 (Hours) = $323.39
-Variable Costs = $0.0144 * 1(Instance count) * 10(capacity units) * 730 (Hours) =
+Variable Costs = $0.0144 * 1 (Instance count) * 10 (capacity units) * 730 (Hours) =
$105.12 Total Costs = $323.39 + $105.12 = $428.51
Variable Costs = $0.0144 * 730 (Hours) * {Max (25/50, 8.88/2.22)} = $42.048 (4
Total Costs = $323.39 + $42.048 = $365.438
-### Example 5 (a) – Standard_V2 with Autoscaling, time-based calculations
+### Example 5 – Standard_V2 with Autoscaling, time-based calculations
Let's assume you've provisioned a Standard_V2 with autoscaling enabled and set the minimum instance count to 0, and this application gateway is active for 2 hours. During the first hour, it receives traffic that can be handled by 10 Capacity Units, and during the second hour it receives traffic that requires 20 Capacity Units to handle the load.
Your Application Gateway costs using the pricing mentioned above would be calcul
Fixed Price = $0.246 * 2 (Hours) = $0.492
-Variable Costs = $0.008 * 10(capacity units) * 1 (Hours) + $0.008 * 20(capacity
+Variable Costs = $0.008 * 10 (capacity units) * 1 (Hours) + $0.008 * 20 (capacity
units) * 1 (Hours) = $0.24 Total Costs = $0.492 + $0.24 = $0.732
+### Example 6 – WAF_V2 with DDoS Protection Standard Plan, and with manual scaling set to 2 instances
+
+Let's assume you've provisioned a WAF_V2 and set it to manual scaling with 2 instances for the entire month with 2 CUs. Let's also assume that you've enabled DDoS Protection Standard Plan. In this example, since you're paying the monthly fee for DDoS Protection Standard, there are no additional WAF charges, and you're charged at the lower Standard_V2 rates.
+
+Monthly price estimates are based on 730 hours of usage per month.
+
+Fixed Price = $0.246 * 730 (Hours) = $179.58
+
+Variable Costs = $0.008 * 2 (capacity units) * 730 (Hours) = $11.68
+
+DDoS Protection Standard Cost = $2,944 * 1 (month) = $2,944
+
+Total Costs = $179.58 + $11.68 + $2,944 = $3,135.26
++ ## V1 SKUs Standard Application Gateway and WAF V1 SKUs are billed as a combination of:
Total Costs = $9 + $120 = $129
###### Large instance WAF Application Gateway 24 Hours * 15 Days = 360 Hours
-Fixed Price = $0.448 * 360 (Hours) = $161.28
+Fixed Price = $0.448 * 360 (Hours) = $161.28
-Variable Costs = 60 * 1000 * $0.0035/GB = $210 (Large tier has no costs for the first 40 TB processed per month)
+Variable Costs = 60 * 1000 * $0.0035/GB = $210 (Large tier has no costs for the first 40 TB processed per month)
Total Costs = $161.28 + $210 = $371.28
+### Example 3 – WAF Application Gateway with DDoS Protection Standard Plan
+
+Let's assume you've provisioned a medium WAF Application Gateway and enabled DDoS Protection Standard Plan. This medium WAF application gateway processes 40 TB while it's active. Your Application Gateway costs using the pricing mentioned above would be calculated as follows:
+
+Monthly price estimates are based on 730 hours of usage per month.
+
+Fixed Price = $0.07 * 730 (Hours) = $51.1
+
+Variable Costs = 30 * 1000 * $0.007/GB = $210 (Medium tier has no cost for the first 10 TB processed per month)
+
+DDoS Protection Standard Costs = $2,944 * 1 (month) = $2,944
+
+Total Costs = $51.1 + $210 + $2,944 = $3,205.10
++
+## Azure DDoS Protection Standard Plan
+
+When Azure DDoS Protection Standard Plan is enabled on your application gateway with WAF, you're billed at the lower non-WAF rates. For details, see [Azure DDoS Protection pricing](https://azure.microsoft.com/pricing/details/ddos-protection/).
+ ## Monitoring Billed Usage
More metrics such as throughput, current connections and compute units are also
* Compute Units = 17.38 * Throughput = 1.37M Bytes/sec - 10.96 Mbps * Current Connections = 123.08k
-* Capacity Units calculated = max(17.38, 10.96/2.22, 123.08k/2500) = 49.232
+* Capacity Units calculated = max (17.38, 10.96/2.22, 123.08k/2500) = 49.232
Observed Capacity Units in metrics = 49.23
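As a rough illustration of that calculation (using the metric values above; the 2.22-Mbps and 2,500-connection divisors are the capacity unit parameters described earlier in this article), the billed capacity units can be estimated like this:

```python
# Illustrative estimate of billed capacity units from the three observed metrics:
# compute units, throughput (Mbps), and current connections.
def estimated_capacity_units(compute_units, throughput_mbps, current_connections):
    return max(compute_units, throughput_mbps / 2.22, current_connections / 2500)

print(round(estimated_capacity_units(17.38, 10.96, 123_080), 3))  # 49.232
```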
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
Title: Form Recognizer business card model
+ Title: Business card data extraction - Form Recognizer
-description: Concepts related to data extraction and analysis using the prebuilt business card model.
+description: OCR and machine learning based business card scanning in Form Recognizer extracts key data from business cards.
recommendations: false
<!-- markdownlint-disable MD033 -->
-# Form Recognizer business card model
+# Business card data extraction
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
+## How business card data extraction works
+
+Business cards are a great way of representing a business or a professional. The company logo, fonts, and background images found on business cards support the company's branding and differentiate it from others. Applying OCR and machine-learning-based techniques to automate the scanning of business cards is a common image processing scenario. Enterprise systems used by sales and marketing teams typically have business card data extraction capability integrated into them for the benefit of their users.
+
+## Form Recognizer Business Card model
+ The business card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from business card images. The API analyzes printed business cards; extracts key information such as first name, last name, company name, email address, and phone number; and returns a structured JSON data representation. ***Sample business card processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)***
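As a minimal sketch (assuming the `azure-ai-formrecognizer` 3.2+ Python SDK, a Form Recognizer endpoint and key stored in environment variables, and a placeholder image URL), calling the prebuilt business card model might look like this:

```python
# Illustrative sketch: analyze a business card with the prebuilt-businessCard model.
# The endpoint, key, and document URL below are placeholders.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint=os.environ["FORM_RECOGNIZER_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["FORM_RECOGNIZER_KEY"]),
)

poller = client.begin_analyze_document_from_url(
    "prebuilt-businessCard", "https://example.com/business-card.jpg"
)
result = poller.result()

for card in result.documents:
    for field_name in ("ContactNames", "CompanyNames", "Emails"):
        field = card.fields.get(field_name)
        if field:
            print(f"{field_name}: {field.content}")
```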
The following tools are supported by Form Recognizer v2.1:
|-|-| |**Business card model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-business-cards)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-### Try Form Recognizer
+### Try business card data extraction
See how data, including name, job title, address, email, and company name, is extracted from business cards using the Form Recognizer Studio or our Sample Labeling tool. You'll need the following resources:
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
Title: Form Recognizer composed models
+ Title: Composed custom models - Form Recognizer
-description: Learn about composed custom models
+description: Compose several custom models into a single model for easier data extraction from groups of distinct form types.
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
Title: Form Recognizer custom neural model
+ Title: Custom neural document model - Form Recognizer
-description: Learn about custom neural (neural) model type, its features and how you train a model with high accuracy to extract data from structured and unstructured documents.
+description: Use the custom neural document model to train a model to extract data from structured, semistructured, and unstructured documents.
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# Form Recognizer custom neural model
+# Custom neural document model
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-Custom neural models or neural models are a deep learned model that combines layout and language features to accurately extract labeled fields from documents. The base custom neural model is trained on various document types that makes it suitable to be trained for extracting fields from structured, semi-structured and unstructured documents. The table below lists common document types for each category:
+Custom neural document models, or neural models, are a deep-learning model type that combines layout and language features to accurately extract labeled fields from documents. The base custom neural model is trained on various document types, which makes it suitable for extracting fields from structured, semi-structured, and unstructured documents. The table below lists common document types for each category:
|Documents | Examples | ||--|
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
Title: Form Recognizer custom template model
+ Title: Custom template document model - Form Recognizer
-description: Learn about the custom template model type, its features and how you train a model with high accuracy to extract data from structured or templated forms
+description: Use the custom template document model to train a model to extract data from structured or templated forms.
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# Form Recognizer custom template model
+# Custom template document model
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-Custom template (formerly custom form) is an easy-to-train model that accurately extracts labeled key-value pairs, selection marks, tables, regions, and signatures from documents. Template models use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.
+Custom template (formerly custom form) is an easy-to-train document model that accurately extracts labeled key-value pairs, selection marks, tables, regions, and signatures from documents. Template models use layout cues to extract values from documents and are suitable for extracting fields from highly structured documents with defined visual templates.
Custom template models share the same labeling format and strategy as custom neural models, with support for more field types and languages.
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
Title: Form Recognizer custom and composed models
+ Title: Custom document models - Form Recognizer
-description: Learn to create, use, and manage Form Recognizer custom and composed models.
+description: Label and train customized models for your documents and compose multiple models into a single model identifier.
monikerRange: '>=form-recog-2.1.0' recommendations: false
-# Form Recognizer custom models
+# Custom document models
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
Form Recognizer uses advanced machine learning technology to detect and extract
To create a custom model, you label a dataset of documents with the values you want extracted and train the model on the labeled dataset. You only need five examples of the same form or document type to get started.
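As an illustrative sketch (assuming the `azure-ai-formrecognizer` 3.2+ Python SDK and a SAS URL to an Azure Blob container that holds your labeled training documents; the endpoint, key, and URL shown are placeholders), building a custom model might look like this:

```python
# Illustrative sketch: build a custom template model from labeled training data.
# The endpoint, key, and SAS URL are placeholders; use ModelBuildMode.NEURAL for a neural model.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentModelAdministrationClient, ModelBuildMode

admin_client = DocumentModelAdministrationClient(
    "<your-form-recognizer-endpoint>", AzureKeyCredential("<your-key>")
)

poller = admin_client.begin_build_document_model(
    ModelBuildMode.TEMPLATE,
    blob_container_url="<SAS-URL-to-your-labeled-training-data>",
    description="Sample custom template model",
)
model = poller.result()
print("Model ID:", model.model_id)  # use this ID when analyzing documents or composing models
```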
-## Custom model types
+## Custom document model types
-Custom models can be one of two types, [**custom template**](concept-custom-template.md ) or custom form and [**custom neural**](concept-custom-neural.md) or custom document models. The labeling and training process for both models is identical, but the models differ as follows:
+Custom document models can be one of two types: [**custom template**](concept-custom-template.md) (formerly custom form) models and [**custom neural**](concept-custom-neural.md) (also called custom document) models. The labeling and training process for both models is identical, but the models differ as follows:
### Custom template model (v3.0)
The following tools are supported by Form Recognizer v2.1:
|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
-### Try Form Recognizer
+### Try building a custom model
Try extracting data from your specific or unique documents using custom models. You need the following resources:
Try extracting data from your specific or unique documents using custom models.
> [!div class="nextstepaction"] > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)
-## Model capabilities
+## Custom model extraction summary
This table compares the supported data extraction areas:
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
Title: Form Recognizer ID document model
+ Title: Identity document (ID) processing ΓÇô Form Recognizer
-description: Concepts related to data extraction and analysis using the prebuilt ID document model
+description: Automate identity document (ID) processing of driver licenses, passports, and more with Form Recognizer.
monikerRange: '>=form-recog-2.1.0' recommendations: false
-<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD033 -->
-# Form Recognizer ID document model
+# Identity document (ID) processing
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
-The ID document model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from US Drivers Licenses (all 50 states and District of Columbia), international passport biographical pages, US state IDs, social security cards, and permanent resident (green) cards. The API analyzes identity documents, extracts key information, and returns a structured JSON data representation.
+## What is identity document (ID) processing
+
+Identity document (ID) processing involves extracting data from identity documents, either manually or by using OCR-based techniques. Examples of identity documents include passports, driver licenses, resident cards, and national identity cards like the social security card in the US. ID processing is an important step in any business process that requires proof of identity. Examples include customer verification in banks and other financial institutions, mortgage applications, medical visits, claim processing, the hospitality industry, and more. Individuals provide proof of their identity via driver licenses, passports, and other similar documents so that the business can efficiently verify them before providing services and benefits.
+
+## Form Recognizer Identity document (ID) model
+
+The Form Recognizer Identity document (ID) model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from identity documents: US Drivers Licenses (all 50 states and District of Columbia), international passport biographical pages, US state IDs, social security cards, and permanent resident cards and more. The API analyzes identity documents, extracts key information, and returns a structured JSON data representation.
***Sample U.S. Driver's License processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)***
The following tools are supported by Form Recognizer v3.0:
|-|-|--| |**ID document model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-idDocument**|
+### Try Identity document (ID) extraction
+ The following tools are supported by Form Recognizer v2.1: | Feature | Resources | |-|-| |**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-identity-id-documents)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-### Try Form Recognizer
- Extract data, including name, birth date, machine-readable zone, and expiration date, from ID documents using the Form Recognizer Studio. You'll need the following resources: * An Azure subscription—you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
Extract data, including name, birth date, machine-readable zone, and expiration
| Model | Language—Locale code | Default | |--|:-|:-|
-|ID document| <ul><li>English (United States)—en-US (driver's license)</li><li>Biographical pages from international passports</br> (excluding visa and other travel documents)</li><li>English (United States)—en-US (state ID)</li><li>English (United States)—en-US (social security card)</li><li>English (United States)—en-US (Green card)</li></ul></br>|English (United States)—en-US|
+|ID document| <ul><li>English (United States)—en-US (driver's license)</li><li>Biographical pages from international passports</br> (excluding visa and other travel documents)</li><li>English (United States)—en-US (state ID)</li><li>English (United States)—en-US (social security card)</li><li>English (United States)—en-US (Residence permit card)</li></ul></br>|English (United States)—en-US|
## Field extractions
-|Name| Type | Description | Standardized output|
-|:--|:-|:-|:-|
-| CountryRegion | countryRegion | Country or region code compliant with ISO 3166 standard | |
-| DateOfBirth | Date | DOB | yyyy-mm-dd |
-| DateOfExpiration | Date | Expiration date DOB | yyyy-mm-dd |
-| DocumentNumber | String | Relevant passport number, driver's license number, etc. | |
-| FirstName | String | Extracted given name and middle initial if applicable | |
-| LastName | String | Extracted surname | |
-| Nationality | countryRegion | Country or region code compliant with ISO 3166 standard (Passport only) | |
-| Sex | String | Possible extracted values include "M", "F" and "X" | |
-| MachineReadableZone | Object | Extracted Passport MRZ including two lines of 44 characters each | "P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816" |
-| DocumentType | String | Document type, for example, Passport, Driver's License | "passport" |
-| Address | String | Extracted address (Driver's License only) ||
-| Region | String | Extracted region, state, province, etc. (Driver's License only) | |
-
-## Form Recognizer v3.0
-
- The Form Recognizer v3.0 introduces several new features and capabilities:
-
-* **ID document (v3.0)** prebuilt model supports extraction of endorsement, restriction, and vehicle class codes from US driver's licenses.
-
-* The ID Document **2022-06-30** and later releases support the following data extraction from US driver's licenses:
-
- * Date issued
- * Height
- * Weight
- * Eye color
- * Hair color
- * Document discriminator security code
-
-### ID document field extractions
-
-|Name| Type | Description | Standardized output|
-|:--|:-|:-|:-|
-| DateOfIssue | Date | Issue date | yyyy-mm-dd |
-| Height | String | Height of the holder. | |
-| Weight | String | Weight of the holder. | |
-| EyeColor | String | Eye color of the holder. | |
-| HairColor | String | Hair color of the holder. | |
-| DocumentDiscriminator | String | Document discriminator is a security code that identifies where and when the license was issued. | |
-| Endorsements | String | More driving privileges granted to a driver such as Motorcycle or School bus. | |
-| Restrictions | String | Restricted driving privileges applicable to suspended or revoked licenses.| |
-| VehicleClassification | String | Types of vehicles that can be driven by a driver. ||
-| CountryRegion | countryRegion | Country or region code compliant with ISO 3166 standard | |
-| DateOfBirth | Date | DOB | yyyy-mm-dd |
-| DateOfExpiration | Date | Expiration date DOB | yyyy-mm-dd |
-| DocumentNumber | String | Relevant passport number, driver's license number, etc. | |
-| FirstName | String | Extracted given name and middle initial if applicable | |
-| LastName | String | Extracted surname | |
-| Nationality | countryRegion | Country or region code compliant with ISO 3166 standard (Passport only) | |
-| Sex | String | Possible extracted values include "M", "F" and "X" | |
-| MachineReadableZone | Object | Extracted Passport MRZ including two lines of 44 characters each | "P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816" |
-| DocumentType | String | Document type, for example, Passport, Driver's License, Social security card and more | "passport" |
-| Address | String | Extracted address, address is also parsed to its components - address, city, state, country, zip code ||
-| Region | String | Extracted region, state, province, etc. (Driver's License only) | |
-
-### Migration guide and REST API v3.0
-
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
-
-* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
+The following fields are extracted for each document type. The Azure Form Recognizer ID model `prebuilt-idDocument` returns these fields in `documents.*.fields`. It also extracts all of the text in the document, including words, lines, and styles, which are returned in the following sections of the JSON output (a short analysis sketch follows the list):
+ * `pages.*.words`
+ * `pages.*.lines`
+ * `paragraphs`
+ * `styles`
+ * `documents`
+ * `documents.*.fields`
+
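The JSON output can also be consumed through the SDKs. The following is a minimal sketch, assuming the Python SDK (`azure-ai-formrecognizer` 3.2 or later) and placeholder endpoint, key, and document URL, that runs the `prebuilt-idDocument` model and prints the extracted `documents.*.fields`:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholders -- replace with your own Form Recognizer endpoint, key, and document URL.
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"
document_url = "https://<your-storage>/sample-drivers-license.jpg"

client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))
poller = client.begin_analyze_document_from_url("prebuilt-idDocument", document_url)
result = poller.result()

# Each analyzed identity document reports its detected type and its fields (documents.*.fields).
for doc in result.documents:
    print("Document type:", doc.doc_type)  # for example, idDocument.driverLicense
    for name, field in doc.fields.items():
        print(f"{name}: {field.content} (confidence {field.confidence})")
```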
+#### Document type - `idDocument.driverLicense` fields extracted:
+| Field | Type | Description | Example |
+|:--|:--|:--|:--|
+|`CountryRegion`|`countryRegion`|Country or region code|USA|
+|`Region`|`string`|State or province|Washington|
+|`DocumentNumber`|`string`|Driver license number|WDLABCD456DG|
+|`DocumentDiscriminator`|`string`|Driver license document discriminator|12645646464554646456464544|
+|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
+|`LastName`|`string`|Surname|TALBOT|
+|`Address`|`address`|Address|123 STREET ADDRESS YOUR CITY WA 99999-1234|
+|`DateOfBirth`|`date`|Date of birth|01/06/1958|
+|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
+|`DateOfIssue`|`date`|Date of issue|08/12/2012|
+|`EyeColor`|`string`|Eye color|BLU|
+|`HairColor`|`string`|Hair color|BRO|
+|`Height`|`string`|Height|5'11"|
+|`Weight`|`string`|Weight|185LB|
+|`Sex`|`string`|Sex|M|
+|`Endorsements`|`string`|Endorsements|L|
+|`Restrictions`|`string`|Restrictions|B|
+|`VehicleClassifications`|`string`|Vehicle classification|D|
+
+#### Document type - `idDocument.passport` fields extracted:
+| Field | Type | Description | Example |
+|:--|:--|:--|:--|
+|`DocumentNumber`|`string`|Passport number|340020013|
+|`FirstName`|`string`|Given name and middle initial if applicable|JENNIFER|
+|`MiddleName`|`string`|Name between given name and surname|REYES|
+|`LastName`|`string`|Surname|BROOKS|
+|`Aliases`|`array`|||
+|`Aliases.*`|`string`|Also known as|MAY LIN|
+|`DateOfBirth`|`date`|Date of birth|1980-01-01|
+|`DateOfExpiration`|`date`|Date of expiration|2019-05-05|
+|`DateOfIssue`|`date`|Date of issue|2014-05-06|
+|`Sex`|`string`|Sex|F|
+|`CountryRegion`|`countryRegion`|Issuing country or organization|USA|
+|`DocumentType`|`string`|Document type|P|
+|`Nationality`|`countryRegion`|Nationality|USA|
+|`PlaceOfBirth`|`string`|Place of birth|MASSACHUSETTS, U.S.A.|
+|`PlaceOfIssue`|`string`|Place of issue|LA PAZ|
+|`IssuingAuthority`|`string`|Issuing authority|United States Department of State|
+|`PersonalNumber`|`string`|Personal Id. No.|A234567893|
+|`MachineReadableZone`|`object`|Machine readable zone (MRZ)|P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816|
+|`MachineReadableZone.FirstName`|`string`|Given name and middle initial if applicable|JENNIFER|
+|`MachineReadableZone.LastName`|`string`|Surname|BROOKS|
+|`MachineReadableZone.DocumentNumber`|`string`|Passport number|340020013|
+|`MachineReadableZone.CountryRegion`|`countryRegion`|Issuing country or organization|USA|
+|`MachineReadableZone.Nationality`|`countryRegion`|Nationality|USA|
+|`MachineReadableZone.DateOfBirth`|`date`|Date of birth|1980-01-01|
+|`MachineReadableZone.DateOfExpiration`|`date`|Date of expiration|2019-05-05|
+|`MachineReadableZone.Sex`|`string`|Sex|F|
+
+#### Document type - `idDocument.nationalIdentityCard` fields extracted:
+| Field | Type | Description | Example |
+|:--|:--|:--|:--|
+|`CountryRegion`|`countryRegion`|Country or region code|USA|
+|`Region`|`string`|State or province|Washington|
+|`DocumentNumber`|`string`|National identity card number|WDLABCD456DG|
+|`DocumentDiscriminator`|`string`|National identity card document discriminator|12645646464554646456464544|
+|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
+|`LastName`|`string`|Surname|TALBOT|
+|`Address`|`address`|Address|123 STREET ADDRESS YOUR CITY WA 99999-1234|
+|`DateOfBirth`|`date`|Date of birth|01/06/1958|
+|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
+|`DateOfIssue`|`date`|Date of issue|08/12/2012|
+|`EyeColor`|`string`|Eye color|BLU|
+|`HairColor`|`string`|Hair color|BRO|
+|`Height`|`string`|Height|5'11"|
+|`Weight`|`string`|Weight|185LB|
+|`Sex`|`string`|Sex|M|
+
+#### Document type - `idDocument.residencePermit` fields extracted:
+| Field | Type | Description | Example |
+|:--|:--|:--|:--|
+|`CountryRegion`|`countryRegion`|Country or region code|USA|
+|`DocumentNumber`|`string`|Residence permit number|WDLABCD456DG|
+|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
+|`LastName`|`string`|Surname|TALBOT|
+|`DateOfBirth`|`date`|Date of birth|01/06/1958|
+|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
+|`DateOfIssue`|`date`|Date of issue|08/12/2012|
+|`Sex`|`string`|Sex|M|
+|`PlaceOfBirth`|`string`|Place of birth|Germany|
+|`Category`|`string`|Permit category|DV2|
+
+#### Document type - `idDocument.usSocialSecurityCard` fields extracted:
+| Field | Type | Description | Example |
+|:--|:--|:--|:--|
+|`DocumentNumber`|`string`|Social security card number|WDLABCD456DG|
+|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
+|`LastName`|`string`|Surname|TALBOT|
+|`DateOfIssue`|`date`|Date of issue|08/12/2012|
+ ## Next steps
+* Try the prebuilt ID model in the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument). Use the sample documents or bring your own documents.
+ * Complete a Form Recognizer quickstart: > [!div class="nextstepaction"]
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
Title: Form Recognizer invoice model
+ Title: Invoice data extraction – Form Recognizer
-description: Concepts related to data extraction and analysis using prebuilt invoice model
+description: Automate invoice data extraction with Form Recognizer's invoice model to extract accounts payable data including invoice line items.
recommendations: false
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
- The invoice model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key fields and line items from sales invoices. Invoices can be of various formats and quality including phone-captured images, scanned documents, and digital PDFs. The API analyzes invoice text; extracts key information such as customer name, billing address, due date, and amount due; and returns a structured JSON data representation. The model currently supports both English and Spanish invoices.
+## What is automated invoice processing?
+
+Automated invoice processing is the process of extracting key accounts payable fields, including invoice line items, from invoices and integrating the extracted data with your accounts payable (AP) workflows for reviews and payments. Historically, the accounts payable process has been manual and time consuming. Accurate extraction of key data from invoices is typically the first and one of the most critical steps in the invoice automation process.
+
+## Form Recognizer Invoice model
+
+The machine learning based invoice model combines powerful Optical Character Recognition (OCR) capabilities with invoice understanding models to analyze and extract key fields and line items from sales invoices. Invoices can be of various formats and quality including phone-captured images, scanned documents, and digital PDFs. The API analyzes invoice text; extracts key information such as customer name, billing address, due date, and amount due; and returns a structured JSON data representation. The model currently supports both English and Spanish invoices.
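As a rough illustration of calling the model, the following sketch assumes the Python SDK (`azure-ai-formrecognizer` 3.2 or later) and placeholder endpoint, key, and invoice URL; the field names shown (`VendorName`, `InvoiceTotal`, `Items`, and so on) follow the documented invoice field schema:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholders -- replace with your own resource values and invoice location.
client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
poller = client.begin_analyze_document_from_url("prebuilt-invoice",
                                                "https://<your-storage>/sample-invoice.pdf")
invoice = poller.result().documents[0]

# Header-level fields such as vendor name, customer name, and total.
for name in ("VendorName", "CustomerName", "InvoiceDate", "InvoiceTotal"):
    field = invoice.fields.get(name)
    if field:
        print(f"{name}: {field.content} (confidence {field.confidence})")

# Line items are returned as a list field named "Items"; each item is a dictionary of sub-fields.
items = invoice.fields.get("Items")
if items:
    for line in items.value:
        description = line.value.get("Description")
        amount = line.value.get("Amount")
        print("Line item:", description.content if description else "",
              amount.content if amount else "")
```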
**Sample invoice processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)**:
The following tools are supported by Form Recognizer v2.1:
|**Invoice model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-invoices)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-### Try Form Recognizer
+### Try invoice data extraction
See how data, including customer information, vendor details, and line items, is extracted from invoices using the Form Recognizer Studio. You'll need the following resources:
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Title: Layouts - Form Recognizer
+ Title: Document layout analysis - Form Recognizer
-description: Learn concepts related to the Layout API with Form Recognizer REST API usage and limits.
+description: Extract text, tables, selections, titles, section headings, page headers, page footers, and more with layout analysis model from Form Recognizer.
monikerRange: '>=form-recog-2.1.0'
recommendations: false
-# Form Recognizer layout model
+# Document layout analysis
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
-The Form Recognizer Layout API extracts text, tables, selection marks, and structure information from documents (PDF, TIFF) and images (JPG, PNG, BMP).
+## What is document layout analysis?
+
+Document structure and layout analysis is the process of analyzing a document to extract regions of interest and their inter-relationships. The goal is to extract text and structural elements from the page for building better semantic understanding models. All extracted text plays one of two types of roles in a document layout: text, tables, and selection marks are examples of geometric roles, while titles, headings, and footers are examples of logical roles. For example, a reading system requires differentiating text regions from non-textual ones along with their reading order.
+
+The following illustration shows the typical components in an image of a sample page.
++
+## Form Recognizer Layout model
+
+The Form Recognizer Layout model is an advanced machine-learning based document layout analysis model available in the Form Recognizer cloud API. In version v2.1, the document layout model extracted text lines, words, tables, and selection marks.
+
+**Starting with v3.0 GA**, it extracts paragraphs and additional structure information like titles, section headings, page headers, page footers, page numbers, and footnotes from the document page. These are examples of the logical roles described in the previous section. This capability is supported for PDF documents and images (JPG, PNG, BMP, TIFF).
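The following minimal sketch, assuming the Python SDK (`azure-ai-formrecognizer` 3.2 or later) and placeholder endpoint, key, and document URL, runs the `prebuilt-layout` model and prints the geometric and logical elements described above:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholders -- replace with your own resource values and document location.
client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
poller = client.begin_analyze_document_from_url("prebuilt-layout",
                                                "https://<your-storage>/sample-report.pdf")
result = poller.result()

# Paragraphs carry an optional logical role such as title, sectionHeading, pageHeader, or pageFooter.
for paragraph in result.paragraphs:
    print(paragraph.role or "body", "->", paragraph.content[:80])

# Tables are returned with row and column indexes for every cell.
for table in result.tables:
    print(f"Table with {table.row_count} rows and {table.column_count} columns")
    for cell in table.cells:
        print(f"  [{cell.row_index}, {cell.column_index}] {cell.content}")
```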
***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
The Form Recognizer Layout API extracts text, tables, selection marks, and struc
### Data extraction
-| **Model** | **Text** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** |
+| **Model** | **Text** | **Selection Marks** | **Tables** | **Paragraphs** | **Logical roles** |
| | | | | | | | Layout | ✓ | ✓ | ✓ | ✓ | ✓ |
-**Supported paragraph roles**:
+**Supported logical roles for paragraphs**:
The paragraph roles are best used with unstructured documents. Paragraph roles help analyze the structure of the extracted content for better semantic search and analysis. * title
The following tools are supported by Form Recognizer v2.1:
|-|-| |**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-layout)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-## Try Form Recognizer
+## Try document layout analysis
Try extracting data from forms and documents using the Form Recognizer Studio. You'll need the following resources:
Try extracting data from forms and documents using the Form Recognizer Studio. Y
The layout model extracts text, selection marks, tables, paragraphs, and paragraph types (`roles`) from your documents.
-### Paragraphs <sup>🆕</sup>
+### Paragraph extraction <sup>🆕</sup>
 The Layout model extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as `content` and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top level `content` property that contains the full text from the document.
The Layout model extracts all identified blocks of text in the `paragraphs` coll
### Paragraph roles<sup> 🆕</sup>
-The Layout model may flag certain paragraphs with their specialized type or `role` as predicted by the model. They're best used with unstructured documents to help understand the layout of the extracted content for a richer semantic analysis. The following paragraph roles are supported:
+The new machine-learning based page object detection extracts logical roles like titles, section headings, page headers, page footers, and more. The Form Recognizer Layout model assigns certain text blocks in the `paragraphs` collection with their specialized role or type predicted by the model. They're best used with unstructured documents to help understand the layout of the extracted content for a richer semantic analysis. The following paragraph roles are supported:
| **Predicted role** | **Description** | | | |
The Layout model may flag certain paragraphs with their specialized type or `rol
```
-### Pages
+### Pages extraction
The pages collection is the very first object you see in the service response.
The pages collection is the very first object you see in the service response.
] ```
-### Text lines and words
+### Text lines and words extraction
-Read extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
+The document layout model in Form Recognizer extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
```json "words": [
Read extracts print and handwritten style text as `lines` and `words`. The model
} ] ```
-### Selection marks
+### Selection marks extraction
-Layout API also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected/unselected`). Any associated text if extracted is also included as the starting index (`offset`) and `length` that references the top level `content` property that contains the full text from the document.
+The Layout model also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected/unselected`). Any associated text if extracted is also included as the starting index (`offset`) and `length` that references the top level `content` property that contains the full text from the document.
```json {
Layout API also extracts selection marks from documents. Extracted selection mar
} ```
-### Tables and table headers
+### Extract tables from documents and images
-Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding `polygon` is output along with information whether it's recognized as a `columnHeader` or not. The API also works with rotated tables. Each table cell contains the row and column index and bounding polygon coordinates. For the cell text, the model outputs the `span` information containing the starting index (`offset`). The model also outputs the `length` within the top level `content` that contains the full text from the document.
+Extracting tables is a key requirement for processing documents containing large volumes of data typically formatted as tables. The Layout model extracts tables in the `pageResults` section of the JSON output. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding polygon is output along with information whether it's recognized as a `columnHeader` or not. The model supports extracting tables that are rotated. Each table cell contains the row and column index and bounding polygon coordinates. For the cell text, the model outputs the `span` information containing the starting index (`offset`). The model also outputs the `length` within the top-level content that contains the full text from the document.
```json {
Layout API extracts tables in the `pageResults` section of the JSON output. Docu
] }
+```
+### Handwritten style for text lines (Latin languages only)
+
+The response includes a classification of whether each text line is of handwritten style, along with a confidence score. This feature is supported only for Latin languages. The following JSON snippet shows an example.
+
+```json
+"styles": [
+{
+ "confidence": 0.95,
+ "spans": [
+ {
+ "offset": 509,
+ "length": 24
+ }
+ "isHandwritten": true
+ ]
+}
```
-### Select page numbers or ranges for text extraction
+### Extracts selected pages from documents
For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Title: Form Recognizer models
+ Title: Document processing models - Form Recognizer
-description: Concepts related to data extraction and analysis using prebuilt models.
+description: Document processing models for OCR, document layout, invoices, identity, custom models, and more to extract text, structure, and key-value pairs.
recommendations: false
<!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD033 -->
-# Form Recognizer models
+# Document processing models
::: moniker range="form-recog-3.0.0" [!INCLUDE [applies to v3.0](includes/applies-to-v3-0.md)]
recommendations: false
| **Model** | **Description** | | | |
-|**Document analysis**||
-| [Read](#read) | Extract typeface and handwritten text lines, words, locations, and detected languages.|
-| [General document](#general-document) | Extract text, tables, structure, key-value pairs, and named entities.|
-| [Layout](#layout) | Extract text and layout information from documents.|
-|**Prebuilt**||
-| [W-2](#w-2) | Extract employee, employer, wage information, etc. from US W-2 forms. |
-| [Invoice](#invoice) | Extract key information from English and Spanish invoices. |
-| [Receipt](#receipt) | Extract key information from English receipts. |
-| [ID document](#id-document) | Extract key information from US driver licenses and international passports. |
-| [Business card](#business-card) | Extract key information from English business cards. |
-|**Custom**||
-| [Custom](#custom) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
-| [Composed](#composed-custom-model) | Compose a collection of custom models and assign them to a single model built from your form types.
-
-### Read
+|**Document analysis models**||
+| [Read OCR](#read-ocr) | Extract print and handwritten text including words, locations, and detected languages.|
+| [Layout analysis](#layout-analysis) | Extract text and document layout elements like tables, selection marks, titles, section headings, and more.|
+| [General document](#general-document) | Extract key-value pairs in addition to text and document structure information.|
+|**Prebuilt models**||
+| [W-2](#w-2) | Process W2 forms to extract employee, employer, wage, and other information. |
+| [Invoice](#invoice) | Automate invoice processing for English and Spanish invoices. |
+| [Receipt](#receipt) | Extract receipt data from English receipts.|
+| [Identity document (ID)](#identity-document-id) | Extract identity (ID) fields from US driver licenses and international passports. |
+| [Business card](#business-card) | Scan business cards to extract key fields and data into your applications. |
+|**Custom models**||
+| [Custom models](#custom-models) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
+| [Composed models](#composed-models) | Combine several custom models into a single model to automate processing of diverse document types with a single composed model.
+
+### Read OCR
[:::image type="icon" source="media/studio/read-card.png" :::](https://formrecognizer.appliedai.azure.com/studio/read)
The Read API analyzes and extracts ext lines, words, their locations, detected l
> [!div class="nextstepaction"] > [Learn more: read model](concept-read.md)
-### W-2
+### Layout analysis
-[:::image type="icon" source="media/studio/w2.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)
+[:::image type="icon" source="media/studio/layout.png":::](https://formrecognizer.appliedai.azure.com/studio/layout)
-The W-2 model analyzes and extracts key information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including single and multiple forms on one page.
+The Layout analysis model analyzes and extracts text, tables, selection marks, and other structure elements like titles, section headings, page headers, page footers, and more.
-***Sample W-2 document processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***:
+***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***:
> [!div class="nextstepaction"]
-> [Learn more: W-2 model](concept-w2.md)
+>
+> [Learn more: layout model](concept-layout.md)
### General document [:::image type="icon" source="media/studio/general-document.png":::](https://formrecognizer.appliedai.azure.com/studio/document)
-* The general document API supports most form types and will analyze your documents and associate values to keys and entries to tables that it discovers. It's ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels.
-
-* The general document is a pre-trained model and can be directly invoked via the REST API.
-
-* The general document model supports named entity recognition (NER) for several entity categories. NER is the ability to identify different entities in text and categorize them into pre-defined classes or types such as: person, location, event, product, and organization. Extracting entities can be useful in scenarios where you want to validate extracted values. The entities are extracted from the entire content.
+The general document model is ideal for extracting common key-value pairs from forms and documents. It's a pre-trained model and can be directly invoked via the REST API and the SDKs. You can use the general document model as an alternative to training a custom model.
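As an illustrative sketch only, assuming the Python SDK (`azure-ai-formrecognizer` 3.2 or later) and placeholder endpoint, key, and document URL, the general document model can be invoked with the `prebuilt-document` model ID and its key-value pairs read from the result:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholders -- replace with your own resource values and document location.
client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
poller = client.begin_analyze_document_from_url("prebuilt-document",
                                                "https://<your-storage>/sample-form.pdf")
result = poller.result()

# Key-value pairs discovered across the document; the value can be empty for unfilled fields.
for pair in result.key_value_pairs:
    key_text = pair.key.content if pair.key else ""
    value_text = pair.value.content if pair.value else ""
    print(f"{key_text}: {value_text} (confidence {pair.confidence})")
```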
***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/document)***:
The W-2 model analyzes and extracts key information reported in each box on a W-
> [!div class="nextstepaction"] > [Learn more: general document model](concept-general-document.md)
-### Layout
-[:::image type="icon" source="media/studio/layout.png":::](https://formrecognizer.appliedai.azure.com/studio/layout)
+### W-2
-The Layout API analyzes and extracts text, tables and headers, selection marks, and structure information from documents.
+[:::image type="icon" source="media/studio/w2.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)
-***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***:
+The W-2 form model extracts key information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including single and multiple forms on one page.
+***Sample W-2 document processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***:
+ > [!div class="nextstepaction"]
->
-> [Learn more: layout model](concept-layout.md)
+> [Learn more: W-2 model](concept-w2.md)
### Invoice [:::image type="icon" source="media/studio/invoice.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)
-The invoice model analyzes and extracts key information from sales invoices. The API analyzes invoices in various formats and extracts key information such as customer name, billing address, due date, and amount due. Currently, the model supports English, Spanish, German, French, Italian, Portuguese, and Dutch invoices.
+The invoice model automates invoice processing to extract customer name, billing address, due date, amount due, line items, and other key data. Currently, the model supports English, Spanish, German, French, Italian, Portuguese, and Dutch invoices.
***Sample invoice processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)***:
The invoice model analyzes and extracts key information from sales invoices. The
[:::image type="icon" source="media/studio/receipt.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)
-* The receipt model analyzes and extracts key information from printed and handwritten sales receipts.
-
-* Version v3.0 also supports single-page hotel receipt processing.
+Use the receipt model to extract merchant name, dates, line items, quantities, and totals from printed and handwritten sales receipts. Version v3.0 also supports single-page hotel receipt processing.
***Sample receipt processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
The invoice model analyzes and extracts key information from sales invoices. The
> [!div class="nextstepaction"] > [Learn more: receipt model](concept-receipt.md)
-### ID document
+### Identity document (ID)
[:::image type="icon" source="media/studio/id-document.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)
- The ID document model analyzes and extracts key information from the following documents:
-
-* U.S. Driver's Licenses (all 50 states and District of Columbia)
-
-* Biographical pages from international passports (excluding visa and other travel documents). The API analyzes identity documents and extracts
+Use the Identity document (ID) model to process U.S. Driver's Licenses (all 50 states and District of Columbia) and biographical pages from international passports (excluding visa and other travel documents) to extract key fields.
***Sample U.S. Driver's License processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)***:
The invoice model analyzes and extracts key information from sales invoices. The
[:::image type="icon" source="media/studio/business-card.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)
-The business card model analyzes and extracts key information from business card images.
+Use the business card model to scan and extract key information from business card images.
***Sample business card processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)***:
The business card model analyzes and extracts key information from business card
> [!div class="nextstepaction"] > [Learn more: business card model](concept-business-card.md)
-### Custom
+### Custom models
[:::image type="icon" source="media/studio/custom.png":::](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)
-* Custom models analyze and extract data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
+Custom document models analyze and extract data from forms and documents specific to your business. They are trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started.
-* Version v3.0 custom model supports signature detection in custom forms (template model) and cross-page tables in both template and neural models.
+Version v3.0 custom model supports signature detection in custom forms (template model) and cross-page tables in both template and neural models.
***Sample custom template processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
The business card model analyzes and extracts key information from business card
> [!div class="nextstepaction"] > [Learn more: custom model](concept-custom.md)
-#### Composed custom model
+#### Composed models
-A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID. you can assign up to 100 trained custom models to a single composed model.
+A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID. You can assign up to 100 trained custom models to a single composed model.
***Composed model dialog window in [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
A composed model is created by taking a collection of custom models and assignin
## Model data extraction
-| **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** | **Key-Value pairs** | **Fields** |
+| **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Structure** | **Key-Value pairs** | **Fields** |
|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | [prebuilt-read](concept-read.md#data-extraction) | ✓ | ✓ | | | ✓ | | | | | [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Title: Read OCR - Form Recognizer
+ Title: OCR for documents - Form Recognizer
-description: Learn concepts related to Read OCR API analysis with Form Recognizer API—usage and limits.
+description: Extract print and handwritten text from scanned and digital documents with Form Recognizer's Read OCR model.
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# Form Recognizer Read OCR model
+# OCR for documents
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-Form Recognizer v3.0 includes the new Read Optical Character Recognition (OCR) model. The Read OCR model extracts typeface and handwritten text including mixed languages in documents. The Read OCR model can detect lines, words, locations, and languages and is the core of all other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the Read OCR model as a foundation for extracting texts from documents.
+> [!NOTE]
+>
+> For general, in-the-wild images like labels, street signs, and posters, use the [Computer Vision v4.0 preview Read](../../cognitive-services/Computer-vision/concept-ocr.md) feature optimized for general, non-document images with a performance-enhanced synchronous API that makes it easier to embed OCR in your user experience scenarios.
+>
+
+## What is OCR for documents?
+
+Optical Character Recognition (OCR) for documents is optimized for large, text-heavy documents in multiple file formats and global languages. It should include features like higher-resolution scanning of document images for better handling of small, dense text, paragraph detection, and fillable form handling, as well as support for advanced document scenarios like single-character boxes and accurate extraction of key fields commonly found in invoices, receipts, and other prebuilt scenarios.
+
+## Form Recognizer Read model
+
+Form Recognizer v3.0's Read Optical Character Recognition (OCR) model runs at a higher resolution than Computer Vision Read and extracts print and handwritten text from PDF documents and scanned images. It also includes preview support for extracting text from Microsoft Word, Excel, PowerPoint, and HTML documents. It detects paragraphs, text lines, words, locations, and languages, and is the underlying OCR engine for other Form Recognizer models like Layout, General Document, Invoice, Receipt, Identity (ID) document, and other prebuilt models, as well as custom models.
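A minimal sketch of calling the Read model follows, assuming the Python SDK (`azure-ai-formrecognizer` 3.2 or later) and placeholder endpoint, key, and document URL; it prints words with confidence scores, detected languages, and handwritten-style spans:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholders -- replace with your own resource values and document location.
client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
poller = client.begin_analyze_document_from_url("prebuilt-read",
                                                "https://<your-storage>/sample-letter.pdf")
result = poller.result()

# Words and lines are reported per page, with a confidence score for every word.
for page in result.pages:
    for word in page.words:
        print(word.content, word.confidence)

# Detected languages for text lines and any handwritten-style spans.
for language in result.languages:
    print("Detected language:", language.locale, language.confidence)
for style in result.styles:
    if style.is_handwritten:
        print("Handwritten text detected with confidence", style.confidence)
```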
## Supported document types
The following resources are supported by Form Recognizer v3.0:
|-||| |**Read model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-javascript)</li></ul>|**prebuilt-read**|
-## Try Form Recognizer
+## Try OCR in Form Recognizer
Try extracting text from forms and documents using the Form Recognizer Studio. You'll need the following assets:
Form Recognizer v3.0 version supports several languages for the read model. *See
## Data detection and extraction
-### Paragraphs <sup>🆕</sup>
+### Microsoft Office and HTML text extraction (preview) <sup>🆕</sup>
+Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDKs of that API version to preview text extraction from Microsoft Word, Excel, PowerPoint, and HTML files. The following illustration shows extraction of the digital text as well as text from the images embedded in the Word document by running OCR on the images.
++
+The page units in the model output are computed as shown:
+
+ **File format** | **Computed page unit** | **Total pages** |
+| --- | --- | --- |
+|Word (preview) | Up to 3,000 characters = 1 page unit, Each embedded image = 1 page unit | Total pages of up to 3,000 characters each + Total embedded images |
+|Excel (preview) | Each worksheet = 1 page unit, Each embedded image = 1 page unit | Total worksheets + Total images |
+|PowerPoint (preview)| Each slide = 1 page unit, Each embedded image = 1 page unit | Total slides + Total images |
+|HTML (preview)| Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
+
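As a hypothetical example of the computation above, a Word document containing roughly 7,500 characters of text and two embedded images would be counted as three page units for the text (3,000 + 3,000 + 1,500 characters) plus two page units for the images, for a total of five page units.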
+### Paragraphs extraction <sup>🆕</sup>
-The Read model extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as`content`and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top level `content` property that contains the full text from the document.
+The Read OCR model in Form Recognizer extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as `content` and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top-level `content` property that contains the full text from the document.
```json "paragraphs": [
The Read model extracts all identified blocks of text in the `paragraphs` collec
``` ### Language detection <sup>🆕</sup>
-Read adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read will predict the detected primary language for each text line along with the `confidence` in the `languages` collection under `analyzeResult`.
+The Read OCR model in Form Recognizer adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read will predict the detected primary language for each text line along with the `confidence` in the `languages` collection under `analyzeResult`.
```json "languages": [
Read adds [language detection](language-support.md#detected-languages-read-api)
}, ] ```
-### Microsoft Office and HTML support (preview) <sup>🆕</sup>
-Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDKs of that API version to preview the support for Microsoft Word, Excel, PowerPoint, and HTML files.
-
-The page units in the model output are computed as shown:
-
- **File format** | **Computed page unit** | **Total pages** |
-| | | |
-|Word (preview) | Up to 3,000 characters = 1 page unit, Each embedded image = 1 page unit | Total pages of up to 3,000 characters each + Total embedded images |
-|Excel (preview) | Each worksheet = 1 page unit, Each embedded image = 1 page unit | Total worksheets + Total images
-|PowerPoint (preview)| Each slide = 1 page unit, Each embedded image = 1 page unit | Total slides + Total images
-|HTML (preview)| Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
-
-### Pages
+### Extracting pages from documents
The page units in the model output are computed as shown:
The page units in the model output are computed as shown:
] ```
-### Text lines and words
+### Extract text lines and words
-Read extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
+The Read OCR model extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
For the preview of Microsoft Word, Excel, PowerPoint, and HTML file support, Read will extract all embedded text as is. For any embedded images, it will run OCR on the images to extract text and append the text from each image as an added entry to the `pages` collection. These added entries will include the extracted text lines and words, their bounding polygons, confidences, and the spans pointing to the associated text.
For large multi-page PDF documents, use the `pages` query parameter to indicate
> [!NOTE] > For the preview of Microsoft Word, Excel, PowerPoint, and HTML file support, the Read API ignores the pages parameter and extracts all pages by default.
+### Handwritten style for text lines (Latin languages only)
+
+The response includes a classification of whether each text line is of handwritten style, along with a confidence score. This feature is supported only for Latin languages. The following JSON snippet shows an example.
+
+```json
+"styles": [
+{
+ "confidence": 0.95,
+ "spans": [
+ {
+ "offset": 509,
+ "length": 24
+ }
+ "isHandwritten": true
+ ]
+}
+```
+ ## Next steps Complete a Form Recognizer quickstart:
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
Title: Form Recognizer receipt model
+ Title: Receipt data extraction - Form Recognizer
-description: Concepts related to data extraction and analysis using the prebuilt receipt model
+description: Use machine learning powered receipt data extraction model to digitize receipts.
recommendations: false
<!-- markdownlint-disable MD033 -->
-# Form Recognizer receipt model
+# Receipt data extraction
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
-The receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total and returns structured JSON data.
+## What is receipt digitization?
+
+Receipt digitization is the process of converting scanned receipts into digital form for downstream processing. OCR-powered receipt data extraction helps automate the conversion and saves time and effort. The output from receipt data extraction is used for accounts payable and receivables automation, sales data analytics, and other business scenarios.
+
+## Form Recognizer receipt model
+
+The Form Recognizer receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total and returns structured JSON data.
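The following sketch, assuming the Python SDK (`azure-ai-formrecognizer` 3.2 or later) and placeholder endpoint, key, and receipt URL, calls the `prebuilt-receipt` model and reads a few commonly returned receipt fields:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholders -- replace with your own resource values and receipt location.
client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
poller = client.begin_analyze_document_from_url("prebuilt-receipt",
                                                "https://<your-storage>/sample-receipt.jpg")
receipt = poller.result().documents[0]

# Commonly returned receipt fields; names follow the documented receipt field schema.
for name in ("MerchantName", "TransactionDate", "Subtotal", "TotalTax", "Total"):
    field = receipt.fields.get(name)
    if field:
        print(f"{name}: {field.content} (confidence {field.confidence})")
```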
***Sample receipt processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
The following tools are supported by Form Recognizer v2.1:
|-|-| |**Receipt model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-receipts)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-### Try Form Recognizer
+### Try receipt data extraction
See how data, including time and date of transactions, merchant information, and amount totals, is extracted from receipts using the Form Recognizer Studio. You'll need the following resources:
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
Title: Form Recognizer W-2 prebuilt model
+ Title: Automated W-2 form processing - Form Recognizer
-description: Data extraction and analysis extraction using the prebuilt W-2 model
+description: Use the Form Recognizer prebuilt W-2 model to automate extraction of W2 form data.
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# Form Recognizer W-2 model
+# Automated W-2 form processing
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
+## Why is automated W-2 form processing important?
+
+Form W-2, also known as the Wage and Tax Statement, is sent by an employer to each employee and the Internal Revenue Service (IRS) at the end of the year. A W-2 form reports employees' annual wages and the amount of taxes withheld from their paychecks. The IRS also uses W-2 forms to track individuals' tax obligations. The Social Security Administration (SSA) uses the information on this and other forms to compute the Social Security benefits for all workers.
+
+## Form Recognizer W-2 form model
 The Form Recognizer W-2 model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported on [US Internal Revenue Service (IRS) tax forms](https://www.irs.gov/forms-pubs/about-form-w-2). A W-2 tax form is a multipart form divided into state and federal sections consisting of more than 14 boxes detailing an employee's income from the previous year. The W-2 tax form is a key document used in employees' federal and state tax filings, as well as other processes like mortgage loans and Social Security Administration (SSA) benefits. The Form Recognizer W-2 model supports both single and multiple standard and customized forms from 2018 to the present. ***Sample W-2 tax form processed using Form Recognizer Studio***
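As a rough sketch only, assuming the Python SDK (`azure-ai-formrecognizer` 3.2 or later), a placeholder endpoint and key, and a local sample file name, the `prebuilt-tax.us.w2` model can be called against a local W-2 image or PDF and all returned fields printed generically:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholders -- replace with your own resource values and a local W-2 file.
client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))

with open("sample-w2.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-tax.us.w2", document=f)

# Print every extracted W-2 field with its confidence score.
for doc in poller.result().documents:
    for name, field in doc.fields.items():
        print(f"{name}: {field.content} (confidence {field.confidence})")
```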
The prebuilt W-2 model is supported by Form Recognizer v3.0 with the following t
|-|-|--| |**W-2 model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|**prebuilt-tax.us.w2**|
-### Try Form Recognizer
+### Try W-2 form data extraction
Try extracting data from W-2 forms using the Form Recognizer Studio. You'll need the following resources:
applied-ai-services Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/disaster-recovery.md
Request body
You'll get a `200` response code with response body that contains the JSON payload required to initiate the copy.
-```http
+```json
{
- "targetResourceId": "/subscriptions/{targetSub}/resourceGroups/{targetRG}/providers/Microsoft.CognitiveServices/accounts/{targetService}",
- "targetResourceRegion": "region",
- "targetModelId": "target-model-name",
- "targetModelLocation": "model path",
- "accessToken": "access token",
- "expirationDateTime": "timestamp"
+ "targetResourceId": "/subscriptions/{targetSub}/resourceGroups/{targetRG}/providers/Microsoft.CognitiveServices/accounts/{targetService}",
+ "targetResourceRegion": "region",
+ "targetModelId": "target-model-name",
+ "targetModelLocation": "model path",
+ "accessToken": "access token",
+ "expirationDateTime": "timestamp"
} ```
The body of your request is the response from the previous step.
```json {
- "targetResourceId": "/subscriptions/{targetSub}/resourceGroups/{targetRG}/providers/Microsoft.CognitiveServices/accounts/{targetService}",
- "targetResourceRegion": "region",
- "targetModelId": "target-model-name",
- "targetModelLocation": "model path",
- "accessToken": "access token",
- "expirationDateTime": "timestamp"
+ "targetResourceId": "/subscriptions/{targetSub}/resourceGroups/{targetRG}/providers/Microsoft.CognitiveServices/accounts/{targetService}",
+ "targetResourceRegion": "region",
+ "targetModelId": "target-model-name",
+ "targetModelLocation": "model path",
+ "accessToken": "access token",
+ "expirationDateTime": "timestamp"
} ```
The following code snippets use cURL to make API calls outlined in the steps abo
### Generate Copy authorization
- **Request**
+**Request**
- ```bash
- curl -i -X POST "{YOUR-ENDPOINT}formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31"
- -H "Content-Type: application/json"
- -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}"
- --data-ascii "{
- 'modelId': '{modelId}',
- 'description': '{description}'
- }"
- ```
+```bash
+curl -i -X POST "{YOUR-ENDPOINT}formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31"
+-H "Content-Type: application/json"
+-H "Ocp-Apim-Subscription-Key: {YOUR-KEY}"
+--data-ascii "{
+ 'modelId': '{modelId}',
+ 'description': '{description}'
+}"
+```
- **Successful response**
+**Successful response**
- ```http
- {
+```json
+{
"targetResourceId": "string", "targetResourceRegion": "string", "targetModelId": "string", "targetModelLocation": "string", "accessToken": "string", "expirationDateTime": "string"
- }
- ```
+}
+```
### Begin Copy operation
- **Request**
+**Request**
- ```bash
- curl -i -X POST "{YOUR-ENDPOINT}/formrecognizer/documentModels/{modelId}:copyTo?api-version=2022-08-31"
+```bash
+curl -i -X POST "{YOUR-ENDPOINT}/formrecognizer/documentModels/{modelId}:copyTo?api-version=2022-08-31"
-H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}" --data-ascii "{
The following code snippets use cURL to make API calls outlined in the steps abo
'expirationDateTime': '{expirationDateTime}' }"
- ```
+```
- **Successful response**
+**Successful response**
- ```http
- HTTP/1.1 202 Accepted
- Operation-Location: https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31
- ```
+```http
+HTTP/1.1 202 Accepted
+Operation-Location: https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31
+```
### Track copy operation progress
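
A minimal sketch of checking progress, assuming you poll the `Operation-Location` URL returned by the copy request (use the key of the resource that the URL points to):

```bash
# Sketch only: poll until the returned status is "succeeded" or "failed".
curl -i -X GET "https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31" \
  -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}"
```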
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Title: "Overview: What is Azure Form Recognizer?"
+ Title: Intelligent document processing - Form Recognizer
-description: Azure Form Recognizer service that analyzes and extracts text, table and data, maps field relationships as key-value pairs, and returns a structured JSON output from your forms and documents.
+description: Machine-learning based OCR and document understanding service to automate extraction of text, table and structure, and key-value pairs from your forms and documents.
recommendations: false
<!-- markdownlint-disable MD033 --> <!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD036 -->
-# What is Azure Form Recognizer?
+
+# What is Intelligent Document Processing?
+
+Intelligent Document Processing (IDP) refers to capturing, transforming, and processing data from documents such as PDFs, scanned documents, and Microsoft Office and HTML files. It typically uses advanced machine-learning technologies like computer vision, Optical Character Recognition (OCR), document layout analysis, and Natural Language Processing (NLP) to extract meaningful information and to process and integrate it with other systems.
+
+IDP solutions can extract data from structured documents with pre-defined layouts like a tax form, unstructured or free-form documents like a contract, and semi-structured documents. They have a wide variety of benefits spanning knowledge mining, business process automation, and industry-specific applications. Examples include invoice processing, medical claims processing, and contracts workflow automation.
+
+## What is Azure Form Recognizer?
::: moniker range="form-recog-3.0.0" [!INCLUDE [applies to v3.0](includes/applies-to-v3-0.md)]
recommendations: false
::: moniker range="form-recog-3.0.0"
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that analyzes forms and documents, extracts text and data, and maps field relationships as key-value pairs. To learn more about each model, *see* the Concepts articles:
+Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) for developers to build intelligent document processing solutions. Form Recognizer applies machine-learning based optical character recognition (OCR) and document understanding technologies to extract print and handwritten text, tables, structure, and key-value pairs from documents. You can also label and train custom models to automate data extraction from structured, semi-structured, and unstructured documents. To learn more about each model, *see* the Concepts articles:
| Model type | Model name | ||--|
-|**Document analysis models**| &#9679; [**Read model**](concept-read.md)</br> &#9679; [**General document model**](concept-general-document.md)</br> &#9679; [**Layout model**](concept-layout.md) </br> |
-| **Prebuilt models** | &#9679; [**W-2 form model**](concept-w2.md) </br>&#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**ID document model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br>
+|**Document analysis models**| &#9679; [**Read OCR model**](concept-read.md)</br> &#9679; [**General document model**](concept-general-document.md)</br> &#9679; [**Layout analysis model**](concept-layout.md) </br> |
+| **Prebuilt models** | &#9679; [**W-2 form model**](concept-w2.md) </br>&#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**Identity (ID) document model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br>
| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md)| ## Which Form Recognizer model should I use?
This section will help you decide which **Form Recognizer v3.0** supported model
| Type of document | Data to extract |Document format | Your best solution | | --|-| -|-|
-|**A text-based document** like a contract or letter.|You want to extract primarily text lines, words, locations, and detected languages.|</li></ul>The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).| [**Read model**](concept-read.md)|
-|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout model**](concept-layout.md)
+|**A generic document** like a contract or letter.|You want to extract primarily text lines, words, locations, and detected languages.|</li></ul>The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).| [**Read OCR model**](concept-read.md)|
+|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout analysis model**](concept-layout.md)
|**A structured or semi-structured document that includes content formatted as fields and values**, like a credit application or survey form.|You want to extract fields and values including ones not covered by the scenario-specific prebuilt models **without having to train a custom model**.| The form or document is a standardized format commonly used in your business or industry and printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).|[**General document model**](concept-general-document.md) |**U.S. W-2 form**|You want to extract key information such as salary, wages, and taxes withheld from US W2 tax forms.</li></ul> |The W-2 document is in United States English (en-US) text.|[**W-2 model**](concept-w2.md) |**Invoice**|You want to extract key information such as customer name, billing address, and amount due from invoices.</li></ul> |The invoice document is written or printed in a [supported language](language-support.md#invoice-model).|[**Invoice model**](concept-invoice.md) |**Receipt**|You want to extract key information such as merchant name, transaction date, and transaction total from a sales or single-page hotel receipt.</li></ul> |The receipt is written or printed in a [supported language](language-support.md#receipt-model). |[**Receipt model**](concept-receipt.md)|
-|**ID document** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**ID document model**](concept-id-document.md)|
+|**Identity document (ID)** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**Identity document (ID) model**](concept-id-document.md)|
|**Business card**|You want to extract key information such as first name, last name, company name, email address, and phone number from business cards.</li></ul>|The business card document is in English or Japanese text. | [**Business card model**](concept-business-card.md)| |**Mixed-type document(s)**| You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| You have various documents with structured, semi-structured, and/or unstructured elements.| [**Custom model**](concept-custom.md)| >[!Tip] >
-> * If you're still unsure which model to use, try the General Document model.
-> * The General Document model is powered by the Read OCR model to detect lines, words, locations, and languages.
-> * General document extracts all the same fields as Layout model (pages, tables, styles) and also extracts key-value pairs.
+> * If you're still unsure which model to use, try the General Document model to extract key-value pairs.
+> * The General Document model is powered by the Read OCR engine to detect text lines, words, locations, and languages.
+> * General document also extracts the same data as the document layout model (pages, tables, styles).
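
As a rough sketch of trying the general document model over the v3.0 REST API (the endpoint, key, and document URL are placeholders):

```bash
# Sketch only: analyze a document with the general document model (prebuilt-document).
# Poll the Operation-Location header in the 202 response for text, tables, and key-value pairs.
curl -i -X POST "{YOUR-ENDPOINT}/formrecognizer/documentModels/prebuilt-document:analyze?api-version=2022-08-31" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}" \
  --data-ascii '{"urlSource": "https://example.com/sample-form.pdf"}'
```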
-## Form Recognizer models and development options
+## Document processing models and development options
> [!NOTE]
->The following models and development options are supported by the Form Recognizer service v3.0.
+>The following document understanding models and development options are supported by the Form Recognizer service v3.0.
-You can Use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references.
+You can use Form Recognizer to automate your document processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references.
| Model | Description |Automation use cases | Development options | |-|--|-|--|
-|[**Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul> |
+|[**Read OCR model**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul> |
|[**General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li></ul> |
-|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li></ul>|
+|[**Layout analysis model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li></ul>|
|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:<ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>| |[ **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul> | |[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. 
|<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>| |[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
-|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**Identity document (ID) model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.|<ul><li>Sales lead and marketing management.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>| ::: moniker-end
You can Use Form Recognizer to automate your data processing in applications and
::: moniker range="form-recog-2.1.0"
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that analyzes forms and documents, extracts text and data, and maps field relationships as key-value pairs. To learn more about each model, *see* the Concepts articles:
+Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) for developers to build intelligent document processing solutions. Form Recognizer applies machine-learning based optical character recognition (OCR) and document understanding technologies to extract print and handwritten text, tables, structure, and key-value pairs from documents. You can also label and train custom models to automate data extraction from structured, semi-structured, and unstructured documents. To learn more about each model, *see* the Concepts articles:
| Model type | Model name | ||--|
-|**Document analysis model**| &#9679; [**Layout model**](concept-layout.md) </br> |
-| **Prebuilt models** | &#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**ID document model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br>
+|**Document analysis model**| &#9679; [**Layout analysis model**](concept-layout.md) </br> |
+| **Prebuilt models** | &#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**Identity document (ID) model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br>
| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md)|
-## Which Form Recognizer model should I use?
+## Which document processing model should I use?
This section will help you decide which Form Recognizer v2.1 supported model you should use for your application: | Type of document | Data to extract |Document format | Your best solution | | --|-| -|-|
-|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout model**](concept-layout.md)
+|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables and selection marks.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout analysis model**](concept-layout.md)
|**Invoice**|You want to extract key information such as customer name, billing address, and amount due from invoices.</li></ul> |The invoice document is written or printed in a [supported language](language-support.md#invoice-model).|[**Invoice model**](concept-invoice.md) |**Receipt**|You want to extract key information such as merchant name, transaction date, and transaction total from a sales or single-page hotel receipt.</li></ul> |The receipt is written or printed in a [supported language](language-support.md#receipt-model). |[**Receipt model**](concept-receipt.md)|
-|**ID document** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**ID document model**](concept-id-document.md)|
+|**Identity document (ID)** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**ID document model**](concept-id-document.md)|
|**Business card**|You want to extract key information such as first name, last name, company name, email address, and phone number from business cards.</li></ul>|The business card document is in English or Japanese text. | [**Business card model**](concept-business-card.md)| |**Mixed-type document(s)**| You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| You have various documents with structured, semi-structured, and/or unstructured elements.| [**Custom model**](concept-custom.md)|
Use the links in the table to learn more about each model and browse the API ref
| Model| Description | Development options | |-|--|-|
-|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Layout analysis**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Identity document (ID) model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end ## Data privacy and security
- As with all the cognitive services, developers using the Form Recognizer service should be aware of Microsoft policies on customer data. See our [Data, privacy, and security for Form Recognizer](/legal/cognitive-services/form-recognizer/fr-data-privacy-security) page.
+ As with all AI services, developers using the Form Recognizer service should be aware of Microsoft policies on customer data. See our [Data, privacy, and security for Form Recognizer](/legal/cognitive-services/form-recognizer/fr-data-privacy-security) page.
## Next steps
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Use the REST API parameter `api-version=2022-06-30-preview` when using the API o
### New Prebuilt Contract model
-A new prebuilt that extracts information from contracts such as parties, title, contract ID, execution date and more. Contracts is currenlty in preview, please request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
+A new prebuilt model that extracts information from contracts, such as parties, title, contract ID, execution date, and more. The contracts model is currently in preview; request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
### Region expansion for training custom neural models
The **2022-06-30-preview** release presents extensive updates across the feature
* [**Invoice language expansion**](concept-invoice.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales). * [**Prebuilt business card**](concept-business-card.md) now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales). * [**Prebuilt ID document model**](concept-id-document.md). The ID document model now extracts DateOfIssue, Height, Weight, EyeColor, HairColor, and DocumentDiscriminator from US driver's licenses. _See_ [field extraction](concept-id-document.md).
-* [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [page extraction](concept-read.md#pages).
+* [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [Microsoft Office and HTML text extraction (preview)](concept-read.md#microsoft-office-and-html-text-extraction-preview-).
#### Form Recognizer SDK beta June 2022 preview release
applied-ai-services Tutorial Ios Picture Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/tutorial-ios-picture-immersive-reader.md
Choose **Single View App**.
![New Single View App](./media/ios/xcode-single-view-app.png) ## Get the SDK CocoaPod+ The easiest way to use the Immersive Reader SDK is via CocoaPods. To install via Cocoapods:+ 1. [Install CocoaPods](http://guides.cocoapods.org/using/getting-started.html) - Follow the getting started guide to install Cocoapods.+ 2. Create a Podfile by running `pod init` in your Xcode project's root directory.
-3. Add the CocoaPod to your Podfile by adding `pod 'immersive-reader-sdk', :path => 'https://github.com/microsoft/immersive-reader-sdk/tree/master/iOS/immersive-reader-sdk'`. Your Podfile should look like the following, with your target's name replacing picture-to-immersive-reader-swift:
- ```ruby
- platform :ios, '9.0'
-
- target 'picture-to-immersive-reader-swift' do
- use_frameworks!
- # Pods for picture-to-immersive-reader-swift
- pod 'immersive-reader-sdk', :git => 'https://github.com/microsoft/immersive-reader-sdk.git'
- end
-```
+
+3. Add the CocoaPod to your Podfile by adding `pod 'immersive-reader-sdk', :path => 'https://github.com/microsoft/immersive-reader-sdk/tree/master/iOS/immersive-reader-sdk'`. Your Podfile should look like the following, with your target's name replacing picture-to-immersive-reader-swift:
+
+ ```ruby
+ platform :ios, '9.0'
+
+ target 'picture-to-immersive-reader-swift' do
+ use_frameworks!
+ # Pods for picture-to-immersive-reader-swift
+ pod 'immersive-reader-sdk', :git => 'https://github.com/microsoft/immersive-reader-sdk.git'
+ end
+ ```
+ 4. In the terminal, in the directory of your Xcode project, run the command `pod install` to install the Immersive Reader SDK pod.+ 5. Add `import immersive_reader_sdk` to all files that need to reference the SDK.+ 6. Ensure to open the project by opening the `.xcworkspace` file and not the `.xcodeproj` file. ## Acquire an Azure AD authentication token
applied-ai-services Diagnose An Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/diagnose-an-incident.md
The first section lists a summary of the current incident, including basic infor
- Analyzed root cause is an automatically analyzed result. Metrics Advisor analyzes all anomalies captured on time series within one metric with different dimension values at the same timestamp, then performs correlation and clustering to group related anomalies together and generates root cause advice. + For metrics with multiple dimensions, it's common for multiple anomalies to be detected at the same time. However, those anomalies may share the same root cause. Instead of analyzing all anomalies one by one, leveraging **Analyzed root cause** is the most efficient way to diagnose the current incident.
applied-ai-services Manage Data Feeds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/manage-data-feeds.md
Previously updated : 04/20/2021 Last updated : 10/25/2022
Select the **Backfill** button to trigger an immediate ingestion on a time-stam
## Manage permission of a data feed
-Workspace access is controlled by the Metrics Advisor resource, which uses Azure Active Directory for authentication. Another layer of permission control is applied to metric data.
+Azure operations can be divided into two categories: control plane and data plane. You use the control plane to manage resources in your subscription. You use the data plane to work with the capabilities exposed by your instance of a resource type.
+Metrics Advisor requires at least the 'Reader' role for a user to use its capabilities, but a Reader can't edit or delete the Metrics Advisor resource itself.
-Metrics Advisor lets you grant permissions to different groups of people on different data feeds. There are two types of roles:
+Within Metrics Advisor, there are other fine-grained roles that enable permission control on specific entities, such as data feeds, hooks, and credentials. There are two types of roles:
-- **Administrator**: Has full permissions to manage a data feed, including modify and delete.-- **Viewer**: Has access to a read-only view of the data feed.
-
+- **Administrator**: Has full permissions to manage a data feed, hook, credentials, and so on, including modify and delete.
+- **Viewer**: Has access to a read-only view of the data feed, hook, credentials, etc.
## Advanced settings
applied-ai-services Enable Anomaly Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/tutorials/enable-anomaly-notification.md
Previously updated : 05/20/2021 Last updated : 05/20/2021 # Tutorial: Enable anomaly notification in Metrics Advisor
There are several options to send email, both Microsoft hosted and 3rd-party off
Fill in the content that you'd like to include to 'Body', 'Subject' in the email and fill in an email address in 'To'. ![Screenshot of send an email](../media/tutorial/logic-apps-send-email.png)
-
+ #### [Teams Channel](#tab/teams)
-
-### Send anomaly notification through a Microsoft Teams channel
-This section will walk through the practice of sending anomaly notifications through a Microsoft Teams channel. This can help enable scenarios where team members are collaborating on analyzing anomalies that are detected by Metrics Advisor. The workflow is easy to configure and doesn't have a large number of prerequisites.
-
+### Send anomaly notification through a Microsoft Teams channel
+This section will walk through the practice of sending anomaly notifications through a Microsoft Teams channel. This can help enable scenarios where team members are collaborating on analyzing anomalies that are detected by Metrics Advisor. The workflow is easy to configure and doesn't have a large number of prerequisites.
**Step 1.** Add a 'Incoming Webhook' connector to your Teams channel
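
Once the incoming webhook URL from Step 1 is available, a quick sanity check is to post a simple payload to it; a sketch (the webhook URL is a placeholder):

```bash
# Sketch only: Teams incoming webhooks accept a simple JSON payload with a "text" field.
curl -H "Content-Type: application/json" \
  -d '{"text": "Metrics Advisor anomaly notification webhook is reachable."}' \
  "{INCOMING-WEBHOOK-URL}"
```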
automation Automation Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-disaster-recovery.md
+
+ Title: Disaster recovery for Azure Automation
+description: This article details on disaster recovery strategy to handle service outage or zone failure for Azure Automation
+keywords: automation disaster recovery
++ Last updated : 10/17/2022+++
+# Disaster recovery for Azure Automation
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs
+
+This article explains the disaster recovery strategy to handle a region-wide or zone-wide failure.
+
+You must have a disaster recovery strategy to handle a region-wide service outage or zone-wide failure, to help reduce the impact of unpredictable events on your business and customers. You are responsible for setting up disaster recovery of Automation accounts and their dependent resources, such as Modules, Connections, Credentials, Certificates, Variables, and Schedules. An important aspect of a disaster recovery plan is preparing to fail over to a replica of the Automation account, created in advance in the secondary region, if the Automation account in the primary region becomes unavailable. Ensure that your disaster recovery strategy considers your Automation account and its dependent resources.
+
+In addition to the high availability offered by availability zones, some regions are paired with another region to provide protection from regional or large geographical disasters. Whether or not the primary region has a regional pair, the disaster recovery strategy for the Automation account remains the same. For more information, see [cross-region replication in Azure](../availability-zones/cross-region-replication-azure.md).
++
+## Enable disaster recovery
+
+Every Automation account that you [create](https://learn.microsoft.com/azure/automation/quickstarts/create-azure-automation-account-portal)
+requires a location that you must use for deployment. This is the primary region for your Automation account, and it includes the assets, runbooks created for the Automation account, job execution data, and logs. For disaster recovery, the replica Automation account must already be deployed and ready in the secondary region.
+
+- Begin by [creating a replica Automation account](https://learn.microsoft.com/azure/automation/quickstarts/create-azure-automation-account-portal#create-automation-account) in any alternate [region](https://azure.microsoft.com/global-infrastructure/services/?products=automation&regions=all). A CLI sketch for this step follows this list.
+- Select the secondary region of your choice - paired region or any other region where Azure Automation is available.
+- Apart from creating a replica of the Automation account, replicate the dependent resources such as Runbooks, Modules, Connections, Credentials, Certificates, Variables, Schedules, and the permissions assigned for the Run As account and Managed Identities from the Automation account in the primary region to the Automation account in the secondary region. You can use the [PowerShell script](#script-to-migrate-automation-account-assets-from-one-region-to-another) to migrate assets of the Automation account from one region to another.
+- If you are using [ARM templates](../azure-resource-manager/management/overview.md) to define and deploy Automation runbooks, you can use these templates to deploy the same runbooks in any other Azure region where you create the replica Automation account. In case of a region-wide outage or zone-wide failure in the primary region, you can execute the runbooks replicated in the secondary region to continue business as usual. This ensures that the secondary region steps up to continue the work if the primary region has a disruption or failure.
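+
+A minimal sketch of the first step above with the Azure CLI, assuming the `automation` CLI extension is installed (the account name, resource group, and region are placeholders):
+
+```bash
+# Sketch only: create the replica Automation account in the chosen secondary region.
+az automation account create \
+  --automation-account-name "contoso-automation-replica" \
+  --resource-group "contoso-dr-rg" \
+  --location "westus2"
+```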
+
+>[!NOTE]
+> Due to data residency requirements, jobs data and logs present in the primary region are not available in the secondary region.
+
+## Scenarios for cloud and hybrid jobs
+
+### Scenario: Execute Cloud jobs in secondary region
+For Cloud jobs, there is negligible downtime, provided that a replica Automation account and all dependent resources and runbooks are already deployed and available in the secondary region. You can use the replica account to execute jobs as usual.
+
+### Scenario: Execute jobs on Hybrid Runbook Worker deployed in a region different from primary region of failure
+If the Windows or Linux Hybrid Runbook worker is deployed using the extension-based approach in a region *different* from the primary region of failure, follow these steps to continue executing the Hybrid jobs:
+
+1. [Delete](extension-based-hybrid-runbook-worker-install.md?tabs=windows#delete-a-hybrid-runbook-worker) the extension installed on Hybrid Runbook worker in the Automation account in the primary region.
+1. [Add](extension-based-hybrid-runbook-worker-install.md?tabs=windows#create-hybrid-worker-group) the same Hybrid Runbook worker to a Hybrid Worker group in the Automation account in the secondary region. The Hybrid worker extension is installed on the machine in the replica of the Automation account.
+1. Execute the jobs on the Hybrid Runbook worker created in Step 2.
+
+For a Hybrid Runbook Worker deployed using the agent-based approach, choose from the following options:
+
+#### [Windows Hybrid Runbook worker](#tab/win-hrw)
+
+If the Windows Hybrid Runbook worker is deployed using an agent-based approach in a region different from the primary region of failure, follow the steps to continue executing Hybrid jobs:
+1. [Uninstall](automation-windows-hrw-install.md#remove-windows-hybrid-runbook-worker) the agent from the Hybrid Runbook worker present in the Automation account in the primary region.
+1. [Re-install](automation-windows-hrw-install.md#installation-options) the agent on the same machine in the replica Automation account in the secondary region.
+1. You can now execute jobs on the Hybrid Runbook worker created in Step 2.
+
+#### [Linux Hybrid Runbook worker](#tab/linux-hrw)
+
+If the Linux Hybrid Runbook worker is deployed using agent-based approach in a region different from the primary region of failure, follow the below steps to continue executing Hybrid jobs:
+1. [Uninstall](automation-linux-hrw-install.md#remove-linux-hybrid-runbook-worker) the agent from the Hybrid Runbook worker present in Automation account in the primary region.
+1. [Re-install](automation-linux-hrw-install.md#install-a-linux-hybrid-runbook-worker) the agent on the same machine in the replica Automation account in the secondary region.
+1. You can now execute jobs on the Hybrid Runbook worker created in Step 2.
+++
+### Scenario: Execute jobs on Hybrid Runbook Worker deployed in the primary region of failure
+If the Hybrid Runbook worker is deployed in the primary region, and there is a compute failure in that region, the machine won't be available to execute Automation jobs. You must provision a new virtual machine in an alternate region and register it as a Hybrid Runbook Worker in the Automation account in the secondary region.
+
+- See the installation steps in [how to deploy an extension-based Windows or Linux User Hybrid Runbook Worker](extension-based-hybrid-runbook-worker-install.md?tabs=windows#create-hybrid-worker-group).
+- See the installation steps in [how to deploy an agent-based Windows Hybrid Worker](automation-windows-hrw-install.md#installation-options).
+- See the installation steps in [how to deploy an agent-based Linux Hybrid Worker](automation-linux-hrw-install.md#install-a-linux-hybrid-runbook-worker).
+
+## Script to migrate Automation account assets from one region to another
+
+You can use these scripts to migrate Automation account assets from the account in the primary region to the account in the secondary region. The scripts migrate only Runbooks, Modules, Connections, Credentials, Certificates, and Variables. Running them doesn't affect the Automation account and its assets in the primary region.
+
+### Prerequisites
+
+ 1. Ensure that the Automation account in the secondary region is created and available so that assets from the primary region can be migrated to it. It's preferable that the destination Automation account has no custom resources, because this prevents potential clashes between resources with the same name and the resulting loss of data.
+ 1. Ensure that the system assigned identities are enabled in the Automation account in the primary region.
+ 1. Ensure that the primary Automation account's Managed Identity has Contributor access with read and write permissions to the Automation account in the secondary region. To enable this, assign the primary account's managed identity the required role on the secondary Automation account. [Learn more](../role-based-access-control/quickstart-assign-role-user-portal.md). A CLI sketch for this step follows the list.
+ 1. Ensure that the script has access to the Automation account assets in the primary region. For a successful migration, execute it as a runbook in that Automation account.
+ 1. If the primary Automation account is deployed using a Run as account, then it must be switched to Managed Identity before migration. [Learn more](migrate-run-as-accounts-managed-identity.md).
+ 1. Modules required are:
+
+ - Az.Accounts version 2.8.0
+ - Az.Resources version 6.0.0
+ - Az.Automation version 1.7.3
+ - Az.Storage version 4.6.0
+1. Ensure that both the source and destination Automation accounts belong to the same Azure Active Directory tenant.
+
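+A minimal sketch of the third prerequisite with the Azure CLI (subscription, resource group, and account names are placeholders; `az resource show` is used only to read the primary account's system-assigned identity):
+
+```bash
+# Sketch only: grant the primary account's managed identity Contributor on the replica account.
+PRINCIPAL_ID=$(az resource show \
+  --ids "/subscriptions/<subId>/resourceGroups/<primary-rg>/providers/Microsoft.Automation/automationAccounts/<primary-account>" \
+  --query identity.principalId -o tsv)
+
+az role assignment create \
+  --assignee-object-id "$PRINCIPAL_ID" \
+  --assignee-principal-type ServicePrincipal \
+  --role "Contributor" \
+  --scope "/subscriptions/<subId>/resourceGroups/<secondary-rg>/providers/Microsoft.Automation/automationAccounts/<replica-account>"
+```
+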
+### Create and execute the runbook
+You can use the [PowerShell script](https://github.com/azureautomation/Migrate-automation-account-assets-from-one-region-to-another) or [PowerShell workflow](https://github.com/azureautomation/Migrate-automation-account-assets-from-one-region-to-another-PwshWorkflow/tree/main) runbook, or import it from the Runbook gallery, and execute it to migrate assets from one Automation account to another.
+
+Follow the steps to import and execute the runbook:
+
+#### [PowerShell script](#tab/ps-script)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to Automation account that you want to migrate to another region.
+1. Under **Process Automation**, select **Runbooks**.
+1. Select **Browse gallery** and in the search, enter *Migrate Automation account assets from one region to another* and select **PowerShell script**.
+1. In the **Import a runbook** page, enter a name for the runbook.
+1. Select **Runtime version** as either 5.1 or 7.1 (preview)
+1. Enter the description and select **Import**.
+1. In the **Edit PowerShell Runbook** page, edit the required parameters and execute it.
+
+You can choose either of the following options to edit and execute the script: provide the seven mandatory parameters given in Option 1 **or** the three mandatory parameters given in Option 2.
+
+#### [PowerShell Workflow](#tab/ps-workflow)
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to Automation account that you want to migrate to another region.
+1. Under **Process Automation**, select **Runbooks**.
+1. Select **Browse gallery** and in the search, enter *Migrate Automation account assets from one region to another* and Select **PowerShell workflow**.
+1. In the **Import a runbook** page, enter a name for the runbook.
+1. Select **Runtime version** as 5.1
+1. Enter the description and select **Import**.
+
+You can input the parameters during execution of the PowerShell Workflow runbook: provide the seven mandatory parameters given in Option 1 **or** the three mandatory parameters given in Option 2.
+++
+The options are:
+
+#### [Option 1](#tab/option-one)
+
+**Name** | **Required** | **Description**
+-- | - | --
+SourceAutomationAccountName | True | Name of automation account in primary region from where assets need to be migrated. |
+DestinationAutomationAccountName | True | Name of automation account in secondary region to which assets need to be migrated. |
+SourceResourceGroup | True | Resource group name of the Automation account in the primary region. |
+DestinationResourceGroup | True | Resource group name of the Automation account in the secondary region. |
+SourceSubscriptionId | True | Subscription ID of the Automation account in primary region |
+DestinationSubscriptionId | True | Subscription ID of the Automation account in secondary region. |
+Type[] | True | Array consisting of all the types of assets that need to be migrated, possible values are Certificates, Connections, Credentials, Modules, Runbooks, and Variables. |
+
+#### [Option 2](#tab/option-two)
+
+**Name** | **Required** | **Description**
+-- | - | --
+SourceAutomationAccountResourceId | True | Resource ID of the Automation account in primary region from where assets need to be migrated. |
+DestinationAutomationAccountResourceId | True | Resource ID of the Automation account in secondary region to which assets need to be migrated. |
+Type[] | True | Array consisting of all the types of assets that need to be migrated, possible values are Certificates, Connections, Credentials, Modules, Runbooks, and Variables. |
+++
+### Limitations
+- The script migrates only custom PowerShell modules. Default modules and Python packages aren't migrated to the replica Automation account.
+- The script doesn't migrate **Schedules** and **Managed identities** present in the Automation account in the primary region. These have to be created manually in the replica Automation account.
+- Jobs data and activity logs aren't migrated to the replica account.
+
+## Next steps
+
+- Learn more about [regions that support availability zones](../availability-zones/az-region.md).
automation Automation Secure Asset Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-secure-asset-encryption.md
For more information about Azure Key Vault, see [What is Azure Key Vault?](../ke
When you use encryption with customer-managed keys for an Automation account, Azure Automation wraps the account encryption key with the customer-managed key in the associated key vault. Enabling customer-managed keys doesn't impact performance, and the account is encrypted with the new key immediately, without any delay.
-A new Automation account is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the account is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Automation account. The managed identity is available only after the storage account is created.
+A new Automation account is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the account is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Automation account. The managed identity is available only after the Automation account is created.
When you modify the key being used for Azure Automation secure asset encryption, by enabling or disabling customer-managed keys, updating the key version, or specifying a different key, the encryption of the account encryption key changes but the secure assets in your Azure Automation account don't need to be re-encrypted.
automation Migrate Oms Update Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-oms-update-deployments.md
- Title: Migrate Azure Monitor logs update deployments to Azure portal
-description: This article tells how to migrate Azure Monitor logs update deployments to Azure portal.
-- Previously updated : 07/16/2018--
-# Migrate Azure Monitor logs update deployments to Azure portal
-
-The Operations Management Suite (OMS) portal is being [deprecated](../azure-monitor/logs/oms-portal-transition.md). All functionality that was available in the OMS portal for Update Management is available in the Azure portal, through Azure Monitor logs. This article provides the information you need to migrate to the Azure portal.
-
-## Key information
-
-* Existing deployments will continue to work. Once you have recreated the deployment in Azure, you can delete your old deployment.
-* All existing features that you had in OMS are available in Azure. To learn more about Update Management, see [Update Management overview](./update-management/overview.md).
-
-## Access the Azure portal
-
-1. From your workspace, click **Open in Azure**.
-
- ![Open in Azure - Log Analytics](media/migrate-oms-update-deployments/link-to-azure-portal.png)
-
-2. In the Azure portal, click **Automation Account**
-
- ![Azure Monitor logs](media/migrate-oms-update-deployments/log-analytics.png)
-
-3. In your Automation account, click **Update Management**.
-
- :::image type="content" source="media/migrate-oms-update-deployments/azure-automation.png" alt-text="Screenshot of the Update management page.":::
-
-4. In the Azure portal, select **Automation Accounts** under **All services**.
-
-5. Under **Management Tools**, select the appropriate Automation account, and click **Update Management**.
-
-## Recreate existing deployments
-
-All update deployments created in the OMS portal have a [saved search](../azure-monitor/logs/computer-groups.md) also known as a computer group, with the same name as the update deployment that exists. The saved search contains the list of machines that were scheduled in the update deployment.
--
-To use this existing saved search, follow these steps:
-
-1. To create a new update deployment, go to the Azure portal, select the Automation account that is used, and click **Update Management**. Click **Schedule update deployment**.
-
- ![Schedule update deployment](media/migrate-oms-update-deployments/schedule-update-deployment.png)
-
-2. The New Update Deployment pane opens. Enter values for the properties described in the following table and then click **Create**:
-
-3. For **Machines to update**, select the saved search used by the OMS deployment.
-
- | Property | Description |
- | | |
- |Name |Unique name to identify the update deployment. |
- |Operating System| Select **Linux** or **Windows**.|
- |Machines to update |Select a Saved search, Imported group, or pick Machine from the dropdown and select individual machines. If you choose **Machines**, the readiness of the machine is shown in the **UPDATE AGENT READINESS** column.</br> To learn about the different methods of creating computer groups in Azure Monitor logs, see [Computer groups in Azure Monitor logs](../azure-monitor/logs/computer-groups.md) |
- |Update classifications|Select all the update classifications that you need. CentOS does not support this out of the box.|
- |Updates to exclude|Enter the updates to exclude. For Windows, enter the KB article without the **KB** prefix. For Linux, enter the package name or use a wildcard character. |
- |Schedule settings|Select the time to start, and then select either **Once** or **Recurring** for the recurrence. |
- | Maintenance window |Number of minutes set for updates. The value can't be less than 30 minutes or more than 6 hours. |
- | Reboot control| Determines how reboots should be handled.</br>Available options are:</br>Reboot if required (Default)</br>Always reboot</br>Never reboot</br>Only reboot - will not install updates|
-
-4. Click **Scheduled update deployments** to view the status of the newly created update deployment.
-
- ![new update deployment](media/migrate-oms-update-deployments/new-update-deployment.png)
-
-5. As mentioned previously, once your new deployments are configured through the Azure portal, you can remove the existing deployments from the Azure portal.
-
-## Next steps
-
-To learn more about Update Management in Azure Automation, see [Update Management overview](./update-management/overview.md).
availability-zones Migrate App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-app-service.md
Availability zone support is a property of the App Service plan. The following a
- Central US - East US - East US 2
+ - South Central US
- Canada Central - Brazil South - North Europe - West Europe
+ - Sweden Central
- Germany West Central - France Central - UK South - Japan East - East Asia - Southeast Asia
+ - Qatar Central
+ - Central India
- Australia East - Availability zones can only be specified when creating a **new** App Service plan. A pre-existing App Service plan can't be converted to use availability zones. - Availability zones are only supported in the newer portion of the App Service footprint.
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
Learn more about the cluster extensions currently available for Azure Arc-enable
* [Azure Monitor](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json) * [Microsoft Defender for Cloud](../../security-center/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json) * [Azure Arc-enabled Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md)
-* [Microsoft Defender for Cloud](../../security-center/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json)
* [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md) * [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md) * [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md)
azure-arc Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/private-link.md
If you run into problems, the following suggestions may help:
nslookup gbl.his.arc.azure.com nslookup agentserviceapi.guestconfiguration.azure.com nslookup dp.kubernetesconfiguration.azure.com
- ```
+ ```
* If you are having trouble onboarding your Kubernetes cluster, confirm that you've added the Azure Active Directory, Azure Resource Manager, AzureFrontDoor.FirstParty and Microsoft Container Registry service tags to your local network firewall.
If you run into problems, the following suggestions may help:
* Learn more about [Azure Private Endpoint](../../private-link/private-link-overview.md). * Learn how to [troubleshoot Azure Private Endpoint connectivity problems](../../private-link/troubleshoot-private-endpoint-connectivity.md).
-* Learn how to [configure Private Link for Azure Monitor](../../azure-monitor/logs/private-link-security.md).
+* Learn how to [configure Private Link for Azure Monitor](../../azure-monitor/logs/private-link-security.md).
azure-cache-for-redis Cache How To Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
Title: Configure geo-replication for Premium Azure Cache for Redis instances
-description: Learn how to replicate your Azure Cache for Redis Premium instances across Azure regions
+ Title: Configure passive geo-replication for Premium Azure Cache for Redis instances
+description: Learn how to use cross-region replication to provide disaster recovery on the Premium tier of Azure Cache for Redis.
+ Previously updated : 05/24/2022+ Last updated : 10/20/2022
-# Configure geo-replication for Premium Azure Cache for Redis instances
+# Configure passive geo-replication for Premium Azure Cache for Redis instances
+
+In this article, you learn how to configure passive geo-replication on a pair of Azure Cache for Redis instances using the Azure portal.
+
+Passive geo-replication links together two Premium tier Azure Cache for Redis instances and creates an _active-passive_ data replication relationship. Active-passive means that there's a pair of caches, primary and secondary, that have their data synchronized. But you can only write to one side of the pair, the primary. The other side of the pair, the secondary cache, is read-only.
-In this article, you learn how to configure a geo-replicated Azure Cache using the Azure portal.
+Compare _active-passive_ to _active-active_, where you can write to either side of the pair, and it will synchronize with the other side.
-Geo-replication links together two Premium Azure Cache for Redis instances and creates a data replication relationship. These cache instances are typically located in different Azure regions, though that isn't required. One instance acts as the primary, and the other as the secondary. The primary handles read and write requests and propagate changes to the secondary. This process continues until the link between the two instances is removed.
+With passive geo-replication, the cache instances are typically located in different Azure regions, though that isn't required. One instance acts as the primary, and the other as the secondary. The primary handles read and write requests and propagates changes to the secondary.
+
+Failover isn't automatic. For more information, including how to initiate a failover, see [Initiate a failover from geo-primary to geo-secondary (preview)](#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
> [!NOTE] > Geo-replication is designed as a disaster-recovery solution. > >
+## Scope of availability
+
+|Tier | Basic, Standard | Premium |Enterprise, Enterprise Flash |
+|||||
+|Available | No | Yes | Yes |
+
+_Passive geo-replication_ is only available in the Premium tier of Azure Cache for Redis. The Enterprise and Enterprise Flash tiers also offer geo-replication, but those tiers use a more advanced version called _active geo-replication_.
## Geo-replication prerequisites
To configure geo-replication between two caches, the following prerequisites mus
- Both caches are [Premium tier](cache-overview.md#service-tiers) caches. - Both caches are in the same Azure subscription.-- The secondary linked cache is either the same cache size or a larger cache size than the primary linked cache.
+- The secondary linked cache is either the same cache size or a larger cache size than the primary linked cache. To use geo-failover, both caches must be the same size.
- Both caches are created and in a running state.-- Neither cache can have more than one replica. > [!NOTE]
-> Data transfer between Azure regions will be charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
+> Data transfer between Azure regions is charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
Some features aren't supported with geo-replication: - Zone Redundancy isn't supported with geo-replication. - Persistence isn't supported with geo-replication.
+- Caches with more than one replica can't be geo-replicated.
- Clustering is supported if both caches have clustering enabled and have the same number of shards. - Caches in the same Virtual Network (VNet) are supported. - Caches in different VNets are supported with caveats. See [Can I use geo-replication with my caches in a VNet?](#can-i-use-geo-replication-with-my-caches-in-a-vnet) for more information.-- Caches with more than one replica can't be geo-replicated. After geo-replication is configured, the following restrictions apply to your linked cache pair: -- The secondary linked cache is read-only; you can read from it, but you can't write any data to it. If you choose to read from the Geo-Secondary instance when a full data sync is happening between the Geo-Primary and the Geo-Secondary, the Geo-Secondary instance throws errors on any Redis operation against it until the full data sync is complete. The errors state that a full data sync is in progress. Also, the errors are thrown when either Geo-Primary or Geo-Secondary is updated and on some reboot scenarios. Applications reading from Geo-Secondary should be built to fall back to the Geo-Primary whenever the Geo-Secondary is throwing such errors.
+- The secondary linked cache is read-only. You can read from it, but you can't write any data to it. If you choose to read from the Geo-Secondary instance when a full data sync is happening between the Geo-Primary and the Geo-Secondary, the Geo-Secondary instance throws errors on any Redis operation against it until the full data sync is complete. The errors state that a full data sync is in progress. Also, the errors are thrown when either Geo-Primary or Geo-Secondary is updated and on some reboot scenarios. Applications reading from Geo-Secondary should be built to fall back to the Geo-Primary whenever the Geo-Secondary is throwing such errors.
- Any data that was in the secondary linked cache before the link was added is removed. If the geo-replication is later removed however, the replicated data remains in the secondary linked cache. - You can't [scale](cache-how-to-scale.md) either cache while the caches are linked. - You can't [change the number of shards](cache-how-to-premium-clustering.md) if the cache has clustering enabled.
After geo-replication is configured, the following restrictions apply to your li
- You can't [Import](cache-how-to-import-export-data.md#import) into the secondary linked cache. - You can't delete either linked cache, or the resource group that contains them, until you unlink the caches. For more information, see [Why did the operation fail when I tried to delete my linked cache?](#why-did-the-operation-fail-when-i-tried-to-delete-my-linked-cache) - If the caches are in different regions, network egress costs apply to the data moved across regions. For more information, see [How much does it cost to replicate my data across Azure regions?](#how-much-does-it-cost-to-replicate-my-data-across-azure-regions)-- Automatic failover doesn't occur between the primary and secondary linked cache. For more information and information on how to failover a client application, see [How does failing over to the secondary linked cache work?](#how-does-failing-over-to-the-secondary-linked-cache-work)
+- Failover isn't automatic. You must start the failover from the primary to the secondary linked cache. For more information, including how to use failover, see [Initiate a failover from geo-primary to geo-secondary (preview)](#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
+ - Private links can't be added to caches that are already geo-replicated. To add a private link to a geo-replicated cache: 1. Unlink the geo-replication. 2. Add a Private Link. 3. Last, relink the geo-replication. ## Add a geo-replication link
-1. To link two caches together for geo-replication, fist select **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, select **Add cache replication link** from **Geo-replication** on the left.
+1. To link two caches together for geo-replication, first select **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, select **Add cache replication link** from the working pane.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-menu.png" alt-text="Cache geo-replication menu":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-menu.png" alt-text="Screenshot showing the cache's Geo-replication menu.":::
1. Select the name of your intended secondary cache from the **Compatible caches** list. If your secondary cache isn't displayed in the list, verify that the [Geo-replication prerequisites](#geo-replication-prerequisites) for the secondary cache are met. To filter the caches by region, select the region in the map to display only those caches in the **Compatible caches** list.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link.png" alt-text="Select compatible cache":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link.png" alt-text="Screenshot showing compatible caches for linking with geo-replication.":::
You can also start the linking process or view details about the secondary cache by using the context menu.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link-context-menu.png" alt-text="Geo-replication context menu":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link-context-menu.png" alt-text="Screenshot showing the Geo-replication context menu.":::
1. Select **Link** to link the two caches together and begin the replication process.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-confirm-link.png" alt-text="Link caches":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-confirm-link.png" alt-text="Screenshot showing how to link caches for geo-replication.":::
1. You can view the progress of the replication process using **Geo-replication** on the left.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-linking.png" alt-text="Linking status":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-linking.png" alt-text="Screenshot showing the current Linking status.":::
You can also view the linking status on the left, using **Overview**, for both the primary and secondary caches.
After geo-replication is configured, the following restrictions apply to your li
Once the replication process is complete, the **Link status** changes to **Succeeded**.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-link-successful.png" alt-text="Cache status":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-link-successful.png" alt-text="Screenshot showing cache linking status as Succeeded.":::
The primary linked cache remains available for use during the linking process. The secondary linked cache isn't available until the linking process completes.
-> [!NOTE]
-> Geo-replication can be enabled for this cache if you scale it to 'Premium' pricing tier and disable data persistence. This feature is not available at this time when using extra replicas.
+## Geo-primary URLs (preview)
+
+Once the caches are linked, URLs are generated that always point to the geo-primary cache. If a failover is initiated from the geo-primary to the geo-secondary, the URL remains the same, and the underlying DNS record is updated automatically to point to the new geo-primary.
++
+Four URLs are shown:
+
+- **Geo-Primary URL** is a proxy URL with the format of `<cache-1-name>.geo.redis.cache.windows.net`. This URL always has the name of the first cache to be linked, but it always points to whichever cache is the current geo-primary.
+- **Linked cache Geo-Primary URL** is a proxy URL with the format of `<cache-2-name>.geo.redis.cache.windows.net`. This URL always has the name of the second cache to be linked, and it also always points to whichever cache is the current geo-primary.
+- **Current Geo Primary Cache** is the direct address of the cache that is currently the geo-primary. The address is `redis.cache.windows.net` not `geo.redis.cache.windows.net`. The address listed in this field changes if a failover is initiated.
+- **Current Geo Secondary Cache** is the direct address of the cache that is currently the geo-secondary. The address is `redis.cache.windows.net` not `geo.redis.cache.windows.net`. The address listed in this field changes if a failover is initiated.
+
+The goal of the two geo-primary URLs is to make updating the cache address easier on the application side in the event of a failover. Changing the address of either linked cache from `redis.cache.windows.net` to `geo.redis.cache.windows.net` ensures that your application is always pointing to the geo-primary, even if a failover is triggered.
+
+The URLs for the current geo-primary and current geo-secondary cache are provided in case you'd like to link directly to a cache resource without any automatic routing.
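+
+As a rough sketch of how the proxy URL is used from application code, the following example assumes the `redis` Python package (redis-py) and placeholder values for the cache name and access key; it isn't part of the official sample code.
+
+```python
+import redis
+
+# Hypothetical values: replace with the name of the first linked cache and the
+# access key of the current geo-primary.
+GEO_PRIMARY_HOST = "<cache-1-name>.geo.redis.cache.windows.net"
+ACCESS_KEY = "<access-key>"
+
+# The geo-primary proxy URL always resolves to whichever cache is currently the
+# geo-primary, so the host name doesn't change after a failover (although the
+# access key might, because the two caches have different keys).
+client = redis.Redis(
+    host=GEO_PRIMARY_HOST,
+    port=6380,     # SSL port for Azure Cache for Redis
+    ssl=True,
+    password=ACCESS_KEY,
+)
+
+client.set("greeting", "hello from the geo-primary")
+print(client.get("greeting"))
+```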
+
+## Initiate a failover from geo-primary to geo-secondary (preview)
+
+With one click, you can trigger a failover from the geo-primary to the geo-secondary.
++
+This causes the following steps to be taken:
+
+1. The geo-secondary cache is promoted to geo-primary.
+1. DNS records are updated to redirect the geo-primary URLs to the new geo-primary.
+1. The old geo-primary cache is demoted to secondary, and attempts to form a link to the new geo-primary cache.
+
+The geo-failover process takes a few minutes to complete.
+
+### Settings to check before initiating geo-failover
+
+When the failover is initiated, the geo-primary and geo-secondary caches will swap. If the new geo-primary is configured differently from the geo-secondary, it can create problems for your application.
+
+Be sure to check the following items:
+
+- If you're using a firewall in either cache, make sure that the firewall settings are similar so you have no connection issues.
+- Make sure both caches are using the same port and TLS/SSL settings.
+- The geo-primary and geo-secondary caches have different access keys. In the event of a failover being triggered, make sure your application can update the access key it's using to match the new geo-primary.
+
+### Failover with minimal data loss
+
+Geo-failover events can introduce data inconsistencies during the transition, especially if the client maintains a connection to the old geo-primary during the failover process. It's possible to minimize data loss in a planned geo-failover event using the following tips:
+
+- Check the geo-replication data sync offset metric. The metric is emitted by the current geo-primary cache. This metric indicates how much data has yet to be replicated to the geo-primary. If possible, only initiate failover if the metric indicates fewer than 14 bytes remain to be written.
+- Run the `CLIENT PAUSE` command in the current geo-primary before initiating failover. Running `CLIENT PAUSE` blocks any new write requests and instead returns timeout failures to the Azure Cache for Redis client. The `CLIENT PAUSE` command requires providing a timeout period in milliseconds. Make sure a long enough timeout period is provided to allow the failover to occur. Setting this to around 30 minutes (1,800,000 milliseconds) is a good place to start. You can always lower this number as needed.
+
+There's no need to run the `CLIENT UNPAUSE` command, because the new geo-primary doesn't retain the client pause from the old geo-primary.
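+
+As an illustration of the `CLIENT PAUSE` tip above, here's a minimal sketch that assumes the `redis` Python package and placeholder connection values for the current geo-primary; it isn't an official sample.
+
+```python
+import redis
+
+# Hypothetical connection details for the *current* geo-primary cache.
+primary = redis.Redis(
+    host="<current-geo-primary>.redis.cache.windows.net",
+    port=6380,
+    ssl=True,
+    password="<access-key>",
+)
+
+# Block new write commands for up to 30 minutes (1,800,000 ms) so the
+# geo-secondary can catch up before the failover is initiated.
+primary.client_pause(1_800_000)
+
+# Initiate the failover in the Azure portal after this point. No CLIENT UNPAUSE
+# is needed, because the pause isn't carried over to the new geo-primary.
+```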
## Remove a geo-replication link 1. To remove the link between two caches and stop geo-replication, select **Unlink caches** from the **Geo-replication** on the left.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-unlink.png" alt-text="Unlink caches":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-unlink.png" alt-text="Screenshot showing how to unlink caches.":::
When the unlinking process completes, the secondary cache is available for both reads and writes. >[!NOTE] >When the geo-replication link is removed, the replicated data from the primary linked cache remains in the secondary cache. >
->
## Geo-replication FAQ - [Can I use geo-replication with a Standard or Basic tier cache?](#can-i-use-geo-replication-with-a-standard-or-basic-tier-cache) - [Is my cache available for use during the linking or unlinking process?](#is-my-cache-available-for-use-during-the-linking-or-unlinking-process)
+- [Can I track the health of the geo-replication link?](#can-i-track-the-health-of-the-geo-replication-link)
- [Can I link more than two caches together?](#can-i-link-more-than-two-caches-together) - [Can I link two caches from different Azure subscriptions?](#can-i-link-two-caches-from-different-azure-subscriptions) - [Can I link two caches with different sizes?](#can-i-link-two-caches-with-different-sizes)
After geo-replication is configured, the following restrictions apply to your li
- [How much does it cost to replicate my data across Azure regions?](#how-much-does-it-cost-to-replicate-my-data-across-azure-regions) - [Why did the operation fail when I tried to delete my linked cache?](#why-did-the-operation-fail-when-i-tried-to-delete-my-linked-cache) - [What region should I use for my secondary linked cache?](#what-region-should-i-use-for-my-secondary-linked-cache)-- [How does failing over to the secondary linked cache work?](#how-does-failing-over-to-the-secondary-linked-cache-work) - [Can I configure Firewall with geo-replication?](#can-i-configure-a-firewall-with-geo-replication) ### Can I use geo-replication with a Standard or Basic tier cache?
-No, geo-replication is only available for Premium tier caches.
+No, passive geo-replication is only available in the Premium tier. A more advanced version of geo-replication, called _active geo-replication_, is available in the Enterprise and Enterprise Flash tiers.
### Is my cache available for use during the linking or unlinking process?
No, geo-replication is only available for Premium tier caches.
- The secondary linked cache isn't available until the linking process completes. - Both caches remain available until the unlinking process completes.
+### Can I track the health of the geo-replication link?
+
+Yes, there are several metrics available to help track the status of the geo-replication. These metrics are available in the Azure portal.
+
+- **Geo Replication Healthy** shows the status of the geo-replication link. The link shows as unhealthy if either the geo-primary or geo-secondary cache is down, which is typically due to standard patching operations but can also indicate a failure situation.
+- **Geo Replication Connectivity Lag** shows the time since the last successful data synchronization between geo-primary and geo-secondary.
+- **Geo Replication Data Sync Offset** shows the amount of data that has yet to be synchronized to the geo-secondary cache.
+- **Geo Replication Full Sync Event Started** indicates that a full synchronization action has been initiated between the geo-primary and geo-secondary caches. This occurs if standard replication can't keep up with the number of new writes.
+- **Geo Replication Full Sync Event Finished** indicates that a full synchronization action has been completed.
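+
+If you'd rather read these metrics programmatically than in the portal, a sketch along the following lines is possible with the `azure-identity` and `azure-monitor-query` packages. The metric names used here are assumptions inferred from the display names above, and the resource ID is a placeholder; verify the exact metric names in the portal before relying on them.
+
+```python
+from datetime import timedelta
+
+from azure.identity import DefaultAzureCredential
+from azure.monitor.query import MetricAggregationType, MetricsQueryClient
+
+# Hypothetical resource ID of the geo-primary cache.
+CACHE_RESOURCE_ID = (
+    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
+    "/providers/Microsoft.Cache/Redis/<cache-name>"
+)
+
+client = MetricsQueryClient(DefaultAzureCredential())
+
+# Metric names below are guesses based on the display names; confirm them in the portal.
+result = client.query_resource(
+    CACHE_RESOURCE_ID,
+    metric_names=["GeoReplicationHealthy", "GeoReplicationConnectivityLag"],
+    timespan=timedelta(hours=1),
+    aggregations=[MetricAggregationType.MAXIMUM],
+)
+
+for metric in result.metrics:
+    for series in metric.timeseries:
+        for point in series.data:
+            print(metric.name, point.timestamp, point.maximum)
+```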
+ ### Can I link more than two caches together? No, you can only link two caches together.
No, both caches must be in the same Azure subscription.
### Can I link two caches with different sizes?
-Yes, as long as the secondary linked cache is larger than the primary linked cache.
+Yes, as long as the secondary linked cache is larger than the primary linked cache. However, you can't use the failover feature if the caches are different sizes.
### Can I use geo-replication with clustering enabled?
Replication is continuous and asynchronous. It doesn't happen on a specific sche
### How long does geo-replication replication take?
-Replication is incremental, asynchronous, and continuous and the time taken isn't much different from the latency across regions. Under certain circumstances, the secondary cache can be required to do a full sync of the data from the primary. The replication time in this case depends on many factors like: load on the primary cache, available network bandwidth, and inter-region latency. We have found replication time for a full 53-GB geo-replicated pair can be anywhere between 5 to 10 minutes.
+Replication is incremental, asynchronous, and continuous, and the time taken isn't much different from the latency across regions. Under certain circumstances, the secondary cache can be required to do a full sync of the data from the primary. The replication time in this case depends on many factors, such as load on the primary cache, available network bandwidth, and inter-region latency. We have found replication time for a full 53-GB geo-replicated pair can be anywhere between 5 and 10 minutes. You can track the amount of data that has yet to be replicated using the `Geo Replication Data Sync Offset` metric in Azure Monitor.
### Is the replication recovery point guaranteed?
Geo-replicated caches and their resource groups can't be deleted while linked un
In general, it's recommended for your cache to exist in the same Azure region as the application that accesses it. For applications with separate primary and fallback regions, it's recommended your primary and secondary caches exist in those same regions. For more information about paired regions, see [Best Practices – Azure Paired regions](../availability-zones/cross-region-replication-azure.md).
-### How does failing over to the secondary linked cache work?
-
-Automatic failover across Azure regions isn't supported for geo-replicated caches. In a disaster-recovery scenario, customers should bring up the entire application stack in a coordinated manner in their backup region. Letting individual application components decide when to switch to their backups on their own can negatively affect performance.
-
-One of the key benefits of Redis is that it's a very low-latency store. If the customer's main application is in a different region than its cache, the added round-trip time would have a noticeable effect on performance. For this reason, we avoid failing over automatically because of transient availability issues.
-
-To start a customer-initiated failover, first unlink the caches. Then, change your Redis client to use the connection endpoint of the (formerly linked) secondary cache. When the two caches are unlinked, the secondary cache becomes a regular read-write cache again and accepts requests directly from Redis clients.
- ### Can I configure a firewall with geo-replication? Yes, you can configure a [firewall](./cache-configure.md#firewall) with geo-replication. For geo-replication to function alongside a firewall, ensure that the secondary cache's IP address is added to the primary cache's firewall rules.
azure-cache-for-redis Cache Moving Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-moving-resources.md
Title: Move Azure Cache for Redis instances to different regions description: How to move Azure Cache for Redis instances to a different Azure region. + - Previously updated : 11/17/2021
-#Customer intent: As an Azure developer, I want to move my Azure Cache for Redis resource to another Azure region.
+ Last updated : 10/20/2022+ # Move Azure Cache for Redis instances to different regions
-In this article, you learn how to move Azure Cache for Redis instances to a different Azure region. You might move your resources to another region for a number of reasons:
+In this article, you learn how to move Azure Cache for Redis instances to a different Azure region. You might move your resources to another region for many reasons:
+ - To take advantage of a new Azure region. - To deploy features or services available in specific regions only. - To meet internal policy and governance requirements.
In this article, you learn how to move Azure Cache for Redis instances to a diff
If you're looking to migrate to Azure Cache for Redis from on-premises, cloud-based VMs, or another hosting service, we recommend you see [Migrate to Azure Cache for Redis](cache-migration-guide.md).
-The tier of Azure Cache for Redis you use determines the option that's best for you.
+The tier of Azure Cache for Redis you use determines the option that's best for you.
-| Cache Tier | Options |
-| | - |
-| Premium | Geo-replication, create a new cache, dual-write to two caches, export and import data via RDB file, or migrate programmatically |
-| Basic or Standard | Create a new cache, dual-write to two caches, or migrate programmatically |
-| Enterprise or Enterprise Flash | Create a new cache or export and import data with an RDB file, or migrate programmatically |
+| Cache Tier | Options |
+| | - |
+| Premium | Geo-replication, create a new cache, dual-write to two caches, export and import data via RDB file, or migrate programmatically |
+| Basic or Standard | Create a new cache, dual-write to two caches, or migrate programmatically |
+| Enterprise or Enterprise Flash | Create a new cache or export and import data with an RDB file, or migrate programmatically |
-## Geo-replication (Premium)
+## Passive geo-replication (Premium)
-### Prerequisites
+### Prerequisites
To configure geo-replication between two caches, the following prerequisites must be met:
To configure geo-replication between two caches, the following prerequisites mus
### Prepare
-To move your cache instance to another region, you need to [create a second premium cache instance](quickstart-create-redis.md) in the desired region. Once both caches are running, you can set up geo-replication between the two cache instances.
+To move your cache instance to another region, you need to [create a second premium cache instance](quickstart-create-redis.md) in the desired region. Once both caches are running, you can set up geo-replication between the two cache instances.
> [!NOTE] > Data transfer between Azure regions is charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
Conditions for geo-replications support:
After geo-replication is configured, the following restrictions apply to your linked cache pair: -- The secondary linked cache is read-only. You can read from it, but you can't write any data to it.
- - If you choose to read from the Geo-Secondary instance, whenever a full data sync is happening between the Geo-Primary and the Geo-Secondary, such as when either Geo-Primary or Geo-Secondary is updated and on some reboot scenarios as well,
- the Geo-Secondary instance throws errors on any Redis operation against it until the full data sync between Geo-Primary and Geo-Secondary is complete.
- - Applications reading from Geo-Secondary should be built to fall back to the Geo-Primary whenever the Geo-Secondary is throwing such errors.
+- The secondary linked cache is read-only. You can read from it, but you can't write any data to it.
+ - If you choose to read from the Geo-Secondary instance when a full data sync is happening between the Geo-Primary and the Geo-Secondary, such as when either Geo-Primary or Geo-Secondary is updated and on some reboot scenarios as well, the Geo-Secondary instance throws errors on any Redis operation against it until the full data sync between Geo-Primary and Geo-Secondary is complete.
+ - Applications reading from Geo-Secondary should be built to fall back to the Geo-Primary whenever the Geo-Secondary is throwing such errors.
- Any data that was in the secondary linked cache before the link was added is removed. If the geo-replication is later removed however, the replicated data remains in the secondary linked cache. - You can't [scale](cache-how-to-scale.md) either cache while the caches are linked. - You can't [change the number of shards](cache-how-to-premium-clustering.md) if the cache has clustering enabled.
After geo-replication is configured, the following restrictions apply to your li
- You can't [Import](cache-how-to-import-export-data.md#import) into the secondary linked cache. - You can't delete either linked cache, or the resource group that contains them, until you unlink the caches. For more information, see [Why did the operation fail when I tried to delete my linked cache?](cache-how-to-geo-replication.md#why-did-the-operation-fail-when-i-tried-to-delete-my-linked-cache) - If the caches are in different regions, network egress costs apply to the data moved across regions. For more information, see [How much does it cost to replicate my data across Azure regions?](cache-how-to-geo-replication.md#how-much-does-it-cost-to-replicate-my-data-across-azure-regions)-- Automatic failover doesn't occur between the primary and secondary linked cache. For more information and information on how to failover a client application, see [How does failing over to the secondary linked cache work?](cache-how-to-geo-replication.md#how-does-failing-over-to-the-secondary-linked-cache-work)
+- Failover isn't automatic. You must start the failover from the primary to the secondary linked cache. For more information, including how to fail over a client application, see [Initiate a failover from geo-primary to geo-secondary (preview)](cache-how-to-geo-replication.md#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
### Move
-1. To link two caches together for geo-replication, fist click **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, click **Add cache replication link** from **Geo-replication** on the left.
+1. To link two caches together for geo-replication, first select **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, select **Add cache replication link** from **Geo-replication** on the left.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-menu.png" alt-text="Add link":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-menu.png" alt-text="Screenshot showing the cache's Geo-replication menu.":::
1. Select the name of your intended secondary cache from the **Compatible caches** list. If your secondary cache isn't displayed in the list, verify that the [Geo-replication prerequisites](#prerequisites) for the secondary cache are met. To filter the caches by region, select the region in the map to display only those caches in the **Compatible caches** list.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link.png" alt-text="Geo-replication compatible caches":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link.png" alt-text="Screenshot showing compatible caches for linking with geo-replication.":::
You can also start the linking process or view details about the secondary cache by using the context menu.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link-context-menu.png" alt-text="Geo-replication context menu":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link-context-menu.png" alt-text="Screenshot showing the Geo-replication context menu.":::
1. Select **Link** to link the two caches together and begin the replication process.
-
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-confirm-link.png" alt-text="Link caches":::
+
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-confirm-link.png" alt-text="Screenshot showing how to link caches for geo-replication.":::
### Verify 1. You can view the progress of the replication process using **Geo-replication** on the left.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-linking.png" alt-text="Linking status":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-linking.png" alt-text="Screenshot showing the current Linking status.":::
You can also view the linking status on the left, using **Overview**, for both the primary and secondary caches.
After geo-replication is configured, the following restrictions apply to your li
Once the replication process is complete, the **Link status** changes to **Succeeded**.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-link-successful.png" alt-text="Cache status":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-link-successful.png" alt-text="Screenshot showing cache linking status as Succeeded.":::
The primary linked cache remains available for use during the linking process. The secondary linked cache isn't available until the linking process completes.
-### Clean up source resources
+### Clean up source resources
Once your new cache in the targeted region is populated with all necessary data, remove the link between the two caches and delete the original instance.
-1. To remove the link between two caches and stop geo-replication, click **Unlink caches** from the **Geo-replication** on the left.
+1. To remove the link between two caches and stop geo-replication, select **Unlink caches** from the **Geo-replication** on the left.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-unlink.png" alt-text="Unlink caches":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-unlink.png" alt-text="Screenshot showing how to unlink caches.":::
When the unlinking process completes, the secondary cache is available for both reads and writes. >[!NOTE] >When the geo-replication link is removed, the replicated data from the primary linked cache remains in the secondary cache. >
->
-2. Delete the original instance.
+1. Delete the original instance.
## Create a new cache (All tiers) ### Prerequisites+ - Azure subscription - [create one for free](https://azure.microsoft.com/free/) ### Prepare+ If you don't need to maintain your data during the move, the easiest way to move regions is to create a new cache instance in the targeted region and connect your application to it. For example, if you use Redis as a look-aside cache of database records, you can easily rebuild the cache from scratch. ### Move [!INCLUDE [redis-cache-create](includes/redis-cache-create.md)]
-Finally, update your application to use the new instances.
+Finally, update your application to use the new instances.
-### Clean up source resources
-Once your new cache in the targeted region is running, delete the original instance.
+### Clean up source resources
+Once your new cache in the targeted region is running, delete the original instance.
## Export and import data with an RDB file (Premium, Enterprise, Enterprise Flash)+ Open-source Redis defines a standard mechanism for taking a snapshot of a cache's in-memory dataset and saving it to a file. This file, called RDB, can be read by another Redis cache. [Azure Cache for Redis Premium and Enterprise](cache-overview.md#service-tiers) supports importing data into a cache instance with RDB files. You can use an RDB file to transfer data from an existing cache to Azure Cache for Redis. > [!IMPORTANT]
Open-source Redis defines a standard mechanism for taking a snapshot of a cache'
> ### Prerequisites+ - Both caches are [Premium tier or Enterprise tier](cache-overview.md#service-tiers) caches. - The second cache is either the same cache size or a larger cache size than the original cache. - The Redis version of the cache you're exporting from should be the same or lower than the version of your new cache instance. ### Prepare+ To move your cache instance to another region, you'll need to create [a second premium cache instance](quickstart-create-redis.md) or [a second enterprise cache instance](quickstart-create-redis-enterprise.md) in the desired region. ### Move
-1. See [here](cache-how-to-import-export-data.md) for more information on how to import and export data in Azure Cache for Redis.
-2. Update your application to use the new cache instance.
+1. For more information on how to import and export data in Azure Cache for Redis, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
+
+1. Update your application to use the new cache instance.
### Verify+ You can monitor the progress of the import operation by following the notifications from the Azure portal, or by viewing the events in the [audit log](../azure-monitor/essentials/activity-log.md).
-### Clean up source resources
+### Clean up source resources
+ Once your new cache in the targeted region is running, delete the original instance. ## Dual-write to two caches (Basic, Standard, and Premium)+ Rather than moving data directly between caches, you can use your application to write data to both an existing cache and a new one you're setting up. The application initially reads data from the existing cache initially. When the new cache has the necessary data, you switch the application to that cache and retire the old one. Let's say, for example, you use Redis as a session store and the application sessions are valid for seven days. After writing to the two caches for a week, you'll be certain the new cache contains all non-expired session information. You can safely rely on it from that point onward without concern over data loss. ### Prerequisites+ - The second cache is either the same cache size or a larger cache size than the original cache. ### Prepare+ To move your cache instance to another region, you'll need to [create a second cache instance](quickstart-create-redis.md) in the desired region. ### Move+ General steps to implement this option are: 1. Modify application code to write to both the new and the original instances.
-2. Continue reading data from the original instance until the new instance is sufficiently populated with data.
+1. Continue reading data from the original instance until the new instance is sufficiently populated with data.
-3. Update the application code to reading and writing from the new instance only.
+1. Update the application code to read from and write to the new instance only.
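+
+The following sketch shows one way the dual-write step could look in application code. It assumes the `redis` Python package and placeholder host names, keys, and a seven-day session TTL; it isn't an official sample, and production code would add error handling and retries.
+
+```python
+import redis
+
+# Hypothetical connection details for the original cache and the new cache.
+old_cache = redis.Redis(host="<old-cache>.redis.cache.windows.net",
+                        port=6380, ssl=True, password="<old-key>")
+new_cache = redis.Redis(host="<new-cache>.redis.cache.windows.net",
+                        port=6380, ssl=True, password="<new-key>")
+
+SESSION_TTL_SECONDS = 7 * 24 * 3600  # sessions are valid for seven days
+
+def set_session(session_id: str, value: str) -> None:
+    """Write to both caches so the new cache fills up over time."""
+    old_cache.setex(session_id, SESSION_TTL_SECONDS, value)
+    new_cache.setex(session_id, SESSION_TTL_SECONDS, value)
+
+def get_session(session_id: str):
+    """Keep reading from the original cache until the new one is populated."""
+    return old_cache.get(session_id)
+```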
-### Clean up source resources
-Once your new cache in the targeted region is running, delete the original instance.
+### Clean up source resources
+Once your new cache in the targeted region is running, delete the original instance.
## Migrate programmatically (All tiers)
-You can create a custom migration process by programmatically reading data from an existing cache and writing them into Azure Cache for Redis. This [open-source tool](https://github.com/deepakverma/redis-copy) can be used to copy data from one Azure Cache for Redis instance to an another instance in a different Azure Cache region. A [compiled version](https://github.com/deepakverma/redis-copy/releases/download/alpha/Release.zip) is available as well. You may also find the source code to be a useful guide for writing your own migration tool.
+
+You can create a custom migration process by programmatically reading data from an existing cache and writing it into Azure Cache for Redis. This [open-source tool](https://github.com/deepakverma/redis-copy) can be used to copy data from one Azure Cache for Redis instance to another instance in a different Azure Cache region. A [compiled version](https://github.com/deepakverma/redis-copy/releases/download/alpha/Release.zip) is available as well. You may also find the source code to be a useful guide for writing your own migration tool.
> [!NOTE]
-> This tool isn't officially supported by Microsoft.
->
+> This tool isn't officially supported by Microsoft.
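+
+If you decide to write your own migration tool, the general idea looks like the following sketch. It assumes the `redis` Python package and placeholder connection values, copies keys with `DUMP`/`RESTORE`, and preserves TTLs; it doesn't handle clustered caches or keys that change while the copy runs, so treat it as a starting point rather than a complete solution.
+
+```python
+import redis
+
+# Hypothetical source (existing region) and target (new region) caches.
+source = redis.Redis(host="<source-cache>.redis.cache.windows.net",
+                     port=6380, ssl=True, password="<source-key>")
+target = redis.Redis(host="<target-cache>.redis.cache.windows.net",
+                     port=6380, ssl=True, password="<target-key>")
+
+copied = 0
+for key in source.scan_iter(count=1000):
+    payload = source.dump(key)      # serialized value; None if the key disappeared
+    if payload is None:
+        continue
+    ttl_ms = source.pttl(key)       # -1 means no expiry; RESTORE expects 0 in that case
+    target.restore(key, ttl_ms if ttl_ms > 0 else 0, payload, replace=True)
+    copied += 1
+
+print(f"Copied {copied} keys")
+```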
### Prerequisites+ - The second cache is either the same cache size or a larger cache size than the original cache. ### Prepare
You can create a custom migration process by programmatically reading data from
- To move your cache instance to another region, you'll need to [create a second cache instance](quickstart-create-redis.md) in the desired region. ### Move+ After creating a VM in the region where the existing cache is located and creating a new cache in the desired region, the general steps to implement this option are: 1. Flush data from the new cache to ensure that it's empty. This step is required because the copy tool itself doesn't overwrite any existing key in the target cache.
After creating a VM in the region where the existing cache is located and creati
2. Use an application such as the open-source tool above to automate the copying of data from the source cache to the target. Remember that the copy process could take a while to complete depending on the size of your dataset.
-### Clean up source resources
+### Clean up source resources
+ Once your new cache in the targeted region is running, delete the original instance. ## Next steps Learn more about Azure Cache for Redis features.+ - [Geo-replication FAQ](cache-how-to-geo-replication.md#geo-replication-faq) - [Azure Cache for Redis service tiers](cache-overview.md#service-tiers) - [High availability for Azure Cache for Redis](cache-high-availability.md)--
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-python.md
Last updated 06/15/2022
ms.devlang: python
-adobe-target: true
-adobe-target-activity: DocsExpΓÇô386541ΓÇôA/BΓÇôEnhanced-Readability-QuickstartsΓÇô2.19.2021
-adobe-target-experience: Experience B
-adobe-target-content: ./create-first-function-cli-python-uiex
+zone_pivot_groups: python-mode-functions
+ # Quickstart: Create a Python function in Azure from the command line
+In this article, you use command-line tools to create a Python function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+
+This article covers both Python programming models supported by Azure Functions. Use the selector at the top to choose your programming model.
-In this article, you use command-line tools to create a Python function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+>[!NOTE]
+>The Python v2 programming model for Functions is currently in Preview. To learn more about the Python v2 programming model, see the [Developer Reference Guide](functions-reference-python.md).
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
Before you begin, you must have the following requirements in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). + The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.x.-++ The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.0.4785 or later. + One of the following tools for creating Azure resources: + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
Before you begin, you must have the following requirements in place:
+ The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later. + [Python versions that are supported by Azure Functions](supported-languages.md#languages-by-runtime-version).++ The [Azurite storage emulator](../storage/common/storage-use-azurite.md?tabs=npm#install-azurite). While you can also use an actual Azure Storage account, the article assumes you're using this emulator. ### Prerequisite check
Verify your prerequisites, which depend on whether you're using Azure CLI or Azu
# [Azure CLI](#tab/azure-cli) + In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools version is 4.x.-++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools version is 4.0.4785 or later. + Run `az --version` to check that the Azure CLI version is 2.4 or later. + Run `az login` to sign in to Azure and verify an active subscription.
You run all subsequent commands in this activated virtual environment.
In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations. In this section, you create a function project that contains a single function. 1. Run the `func init` command as follows to create a functions project in a folder named *LocalFunctionProj* with the specified runtime. ```console
In Azure Functions, a function project is a container for one or more individual
```console func templates list -l python ```
+1. Run the `func init` command as follows to create a functions project in a folder named *LocalFunctionProj* with the specified runtime and the specified programming model version.
+
+ ```console
+ func init LocalFunctionProj --python -m V2
+ ```
+
+1. Go to the project folder.
+
+ ```console
+ cd LocalFunctionProj
+ ```
+
+ This folder contains various files for the project, including configuration files named *[local.settings.json](functions-develop-local.md#local-settings-file)* and *[host.json](functions-host-json.md)*. Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
+
+1. The file `function_app.py` can include all functions within your project. To start with, there's already an HTTP function stored in the file.
+
+```python
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="HttpTrigger1")
+@app.route(route="hello")
+def test_function(req: func.HttpRequest) -> func.HttpResponse:
+    return func.HttpResponse("HttpTrigger1 function processed a request!")
+```
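+
+Once the Functions host is running locally (for example, with `func start`), one quick way to exercise the route is a small script like the following. It assumes the `requests` package and the default local port and `api` route prefix.
+
+```python
+import requests
+
+# The "hello" route comes from the @app.route decorator in function_app.py;
+# port 7071 and the /api prefix are the local defaults.
+response = requests.get("http://localhost:7071/api/hello")
+print(response.status_code, response.text)
+```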
### (Optional) Examine the file contents If desired, you can skip to [Run the function locally](#run-the-function-locally) and examine the file contents later. #### \_\_init\_\_.py *\_\_init\_\_.py* contains a `main()` Python function that's triggered according to the configuration in *function.json*.
If desired, you can change `scriptFile` to invoke a different Python file.
:::code language="json" source="~/functions-quickstart-templates/Functions.Templates/Templates/HttpTrigger-Python/function.json"::: Each binding requires a direction, a type, and a unique name. The HTTP trigger has an input binding of type [`httpTrigger`](functions-bindings-http-webhook-trigger.md) and output binding of type [`http`](functions-bindings-http-webhook-output.md).
+`function_app.py` is the entry point for your functions and the place where functions are defined or referenced. The file configures triggers and bindings through decorators and contains the function code itself.
+
+For more information, see [Azure Functions HTTP triggers and bindings](./functions-bindings-http-webhook.md?tabs=python).
+
+## Start the storage emulator
+
+Before running the function locally, you must start the local Azurite storage emulator. You can skip this step if the `AzureWebJobsStorage` setting in the local.settings.json file is set to the connection string for an Azure Storage account.
+
+Use the following command to start the Azurite storage emulator:
+
+```cmd
+azurite
+```
+
+For more information, see [Run Azurite](../storage/common/storage-use-azurite.md?tabs=npm#run-azurite).
[!INCLUDE [functions-run-function-test-local-cli](../../includes/functions-run-function-test-local-cli.md)]
Use the following commands to create these items. Both Azure CLI and PowerShell
az login ```
- The [az login](/cli/azure/reference-index#az-login) command signs you into your Azure account.
+ The [`az login`](/cli/azure/reference-index#az-login) command signs you into your Azure account.
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
Use the following commands to create these items. Both Azure CLI and PowerShell
+ ::: zone pivot="python-mode-decorators"
+ In the current v2 programming model preview, choose a region from one of the following locations: France Central, West Central US, North Europe, China East, East US, or North Central US.
+ ::: zone-end
+ > [!NOTE] > You can't host Linux and Windows apps in the same resource group. If you have an existing resource group named `AzureFunctionsQuickstart-rg` with a Windows function app or web app, you must use a different resource group.
Use the following commands to create these items. Both Azure CLI and PowerShell
In the previous example, replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
- This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
+ This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also creates an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
[!INCLUDE [functions-publish-project-cli](../../includes/functions-publish-project-cli.md)]
+## Update app settings
+
+To use the Python v2 model in your function app, you need to add a new application setting in Azure named `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing`. This setting is already in your local.settings.json file.
+
+Run the following command to add this setting to your new function app in Azure.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"}
+```
+++
+In the previous example, replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively.
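+
+If you want to confirm that the setting was applied, you can optionally list the app settings. The following Azure CLI sketch uses the same placeholder names; with Azure PowerShell, the `Get-AzFunctionAppSetting` cmdlet returns similar information.
+
+```azurecli
+az functionapp config appsettings list --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME>
+```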
+
+## Verify in Azure
Run the following command to view near real-time [streaming logs](functions-run-local.md#enable-streaming-logs) in Application Insights in the Azure portal.
In a separate terminal window or in the browser, call the remote function again.
> [!div class="nextstepaction"] > [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-python)
-[Having issues? Let us know.](https://aka.ms/python-functions-qs-survey)
+Having issues with this article?
+++ [Troubleshoot Python function apps in Azure Functions](recover-python-functions.md)++ [Let us know](https://aka.ms/python-functions-qs-survey)
azure-functions Create First Function Vs Code Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md
Title: Create a Python function using Visual Studio Code - Azure Functions description: Learn how to create a Python function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 06/15/2022 Last updated : 10/24/2022 ms.devlang: python
-adobe-target: true
-adobe-target-activity: DocsExpΓÇô386541ΓÇôA/BΓÇôEnhanced-Readability-QuickstartsΓÇô2.19.2021
-adobe-target-experience: Experience B
-adobe-target-content: ./create-first-function-vs-code-python-uiex
+zone_pivot_groups: python-mode-functions
# Quickstart: Create a function in Azure with Python using Visual Studio Code - In this article, you use Visual Studio Code to create a Python function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+This article covers both Python programming models supported by Azure Functions. Use the selector at the top to choose your programming model.
+
+>[!NOTE]
+>The Python v2 programming model for Functions is currently in Preview. To learn more about the v2 programming model, see the [Developer Reference Guide](functions-reference-python.md).
+ Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. There's also a [CLI-based version](create-first-function-cli-python.md) of this article.
There's also a [CLI-based version](create-first-function-cli-python.md) of this
Before you begin, make sure that you have the following requirements in place: ++ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).+++ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 4.x.++ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools), version 4.0.4785 or a later version.++ Python versions that are [supported by Azure Functions](supported-languages.md#languages-by-runtime-version). For more information, see [How to install Python](https://wiki.python.org/moin/BeginnersGuide/Download).+++ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).+++ The [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.+++ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.++ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code, version 1.8.1 or later.+++ The [Azurite V3 extension](https://marketplace.visualstudio.com/items?itemName=Azurite.azurite) local storage emulator. While you can also use an actual Azure storage account, this article assumes you're using the Azurite emulator. ## <a name="create-an-azure-functions-project"></a>Create your local project
In this section, you use Visual Studio Code to create a local Azure Functions pr
:::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of create a new project window.":::
-1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
-
-1. Provide the following information at the prompts:
+2. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
+3. Provide the following information at the prompts:
|Prompt|Selection| |--|--|
In this section, you use Visual Studio Code to create a local Azure Functions pr
|**Authorization level**| Choose `Anonymous`, which lets anyone call your function endpoint. For more information about the authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).| |**Select how you would like to open your project**| Choose `Add to workspace`.|
-1. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=python#generated-project-files).
+4. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=python#generated-project-files).
+3. Provide the following information at the prompts:
+
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language**| Choose `Python (Programming Model V2)`.|
+ |**Select a Python interpreter to create a virtual environment**| Choose your preferred Python interpreter. If an option isn't shown, type in the full path to your Python binary.|
+ |**Select how you would like to open your project**| Choose `Add to workspace`.|
+
+4. Visual Studio Code uses the provided information and generates an Azure Functions project.
+
+5. Open the generated `function_app.py` project file, which contains your functions.
+
+6. Uncomment the `test_function` function, which is an HTTP triggered function.
+
+7. Replace the `app.route()` method call with the following code:
+
+ ```python
+ @app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
+ ```
+
+    This code enables your HTTP function endpoint to be called in Azure without having to provide an [authorization key](functions-bindings-http-webhook-trigger.md#authorization-keys). Local execution doesn't require authorization keys.
+
+ Your function code should now look like the following example:
+
+ ```python
+ @app.function_name(name="HttpTrigger1")
+ @app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
+ def test_function(req: func.HttpRequest) -> func.HttpResponse:
+ logging.info('Python HTTP trigger function processed a request.')
+
+ name = req.params.get('name')
+ if not name:
+ try:
+ req_body = req.get_json()
+ except ValueError:
+ pass
+ else:
+ name = req_body.get('name')
+
+ if name:
+ return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.")
+ else:
+ return func.HttpResponse(
+ "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
+ status_code=200
+ )
+ ```
+
+8. Open the local.settings.json project file and update the `AzureWebJobsStorage` setting as in the following example:
+
+ ```json
+ "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+ ```
+
+    This setting tells the local Functions host to use the storage emulator for the storage connection currently required by the v2 model. When you publish your project to Azure, the default storage account is used instead. If you're using an Azure Storage account for local development, set its connection string here.
+
+## Start the emulator
+
+1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azurite: Start`.
+
+1. Check the bottom bar and verify that Azurite emulation services are running. If so, you can now run your function locally.
[!INCLUDE [functions-run-function-test-local-vs-code](../../includes/functions-run-function-test-local-vs-code.md)]
After you've verified that the function runs correctly on your local computer, i
[!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)]
+<!-- Go back to the shared include after preview
[!INCLUDE [functions-publish-project-vscode](../../includes/functions-publish-project-vscode.md)]
+-->
+## <a name="publish-the-project-to-azure"></a>Create the function app in Azure
+
+In this section, you create a function app and related resources in your Azure subscription.
+
+1. Choose the Azure icon in the Activity bar. Then in the **Resources** area, select the **+** icon and choose the **Create Function App in Azure** option.
+
+ ![Create a resource in your Azure subscription](../../includes/media/functions-publish-project-vscode/function-app-create-resource.png)
+
+1. Provide the following information at the prompts:
+
+ |Prompt|Selection|
+ |--|--|
+ |**Select subscription**| Choose the subscription to use. You won't see this prompt when you have only one subscription visible under **Resources**. |
+ |**Enter a globally unique name for the function app**| Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.|
+ |**Select a runtime stack**| Choose the language version on which you've been running locally. |
+ |**Select a location for new resources**| Choose a region for your function app.|
+
+ ::: zone pivot="python-mode-decorators"
+ In the current v2 programming model preview, choose a region from one of the following locations: France Central, West Central US, North Europe, China East, East US, or North Central US.
+ ::: zone-end
+
+ The extension shows the status of individual resources as they're being created in Azure in the **Azure: Activity Log** panel.
+
+ ![Log of Azure resource creation](../../includes/media/functions-publish-project-vscode/resource-activity-log.png)
+
+1. When the creation is complete, the following Azure resources are created in your subscription. The resources are named based on your function app name:
+
+ [!INCLUDE [functions-vs-code-created-resources](../../includes/functions-vs-code-created-resources.md)]
+
+ A notification is displayed after your function app is created and the deployment package is applied.
+
+ [!INCLUDE [functions-vs-code-create-tip](../../includes/functions-vs-code-create-tip.md)]
+
+## Deploy the project to Azure
++
+## Update app settings
+
+To use the Python v2 model in your function app, you need to add a new application setting in Azure named `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing`. This setting is already in your local.settings.json file.
+
+1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`.
+
+1. Choose your new function app, type `AzureWebJobsFeatureFlags` for the new app setting name, and press <kbd>Enter</kbd>.
+
+1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>.
+
+The setting is added to your new function app, which enables it to run the v2 programming model in Azure.
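+
+If you prefer the command line, the same setting can also be applied with the Azure CLI. The following sketch uses placeholders for the function app and resource group names:
+
+```azurecli
+az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing
+```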
[!INCLUDE [functions-vs-code-run-remote](../../includes/functions-vs-code-run-remote.md)]
azure-functions Functions Bindings Http Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook.md
Functions 1.x apps automatically have a reference to the extension.
|Property |Default | Description | ||||
-| customHeaders|none|Allows you to set custom headers in the HTTP response. The previous example adds the `X-Content-Type-Options` header to the response to avoid content type sniffing. |
+| customHeaders|none|Allows you to set custom headers in the HTTP response. The previous example adds the `X-Content-Type-Options` header to the response to avoid content type sniffing. This custom header applies to all HTTP triggered functions in the function app. |
|dynamicThrottlesEnabled|true<sup>\*</sup>|When enabled, this setting causes the request processing pipeline to periodically check system performance counters like `connections/threads/processes/memory/cpu/etc` and if any of those counters are over a built-in high threshold (80%), requests will be rejected with a `429 "Too Busy"` response until the counter(s) return to normal levels.<br/><sup>\*</sup>The default in a Consumption plan is `true`. The default in a Dedicated plan is `false`.| |hsts|not enabled|When `isEnabled` is set to `true`, the [HTTP Strict Transport Security (HSTS) behavior of .NET Core](/aspnet/core/security/enforcing-ssl?tabs=visual-studio#hsts) is enforced, as defined in the [`HstsOptions` class](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions). The above example also sets the [`maxAge`](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions.maxage#Microsoft_AspNetCore_HttpsPolicy_HstsOptions_MaxAge) property to 10 days. Supported properties of `hsts` are: <table><tr><th>Property</th><th>Description</th></tr><tr><td>excludedHosts</td><td>A string array of host names for which the HSTS header isn't added.</td></tr><tr><td>includeSubDomains</td><td>Boolean value that indicates whether the includeSubDomain parameter of the Strict-Transport-Security header is enabled.</td></tr><tr><td>maxAge</td><td>String that defines the max-age parameter of the Strict-Transport-Security header.</td></tr><tr><td>preload</td><td>Boolean that indicates whether the preload parameter of the Strict-Transport-Security header is enabled.</td></tr></table>| |maxConcurrentRequests|100<sup>\*</sup>|The maximum number of HTTP functions that are executed in parallel. This value allows you to control concurrency, which can help manage resource utilization. For example, you might have an HTTP function that uses a large number of system resources (memory/cpu/sockets) such that it causes issues when concurrency is too high. Or you might have a function that makes outbound requests to a third-party service, and those calls need to be rate limited. In these cases, applying a throttle here can help. <br/><sup>*</sup>The default for a Consumption plan is 100. The default for a Dedicated plan is unbounded (`-1`).|
Functions 1.x apps automatically have a reference to the extension.
- [Return an HTTP response from a function](./functions-bindings-http-webhook-output.md) [extension bundle]: ./functions-bindings-register.md#extension-bundles
-[Update your extensions]: ./functions-bindings-register.md
+[Update your extensions]: ./functions-bindings-register.md
azure-functions Functions Bindings Triggers Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-triggers-python.md
+
+ Title: Python V2 model Azure Functions triggers and bindings
+description: Provides examples of how to define Python triggers and bindings in Azure Functions using the preview v2 model
+ Last updated : 10/25/2022
+ms.devlang: python
+++
+# Python V2 model Azure Functions triggers and bindings (preview)
+
+The new Python v2 programming model in Azure Functions is intended to provide better alignment with Python development principles and with commonly used Python frameworks.
+
+The improved v2 programming model requires fewer files than the default model (v1), and specifically eliminates the need for a configuration file (`function.json`). Instead, triggers and bindings are represented in the `function_app.py` file as decorators. Functions can also be logically organized, with support for storing multiple functions in the same file. Functions within the same function app can also be stored in different files and referenced as blueprints.
+
+To learn more about using the new Python programming model for Azure Functions, see the [Azure Functions Python developer guide](./functions-reference-python.md). In addition to the documentation, [hints](https://aka.ms/functions-python-hints) are available in code editors that support type checking with .pyi files.
+
+This article contains example code snippets that define various triggers and bindings using the Python v2 programming model. Before you can run the code snippets below, make sure that:
+
+- The function app is defined and named `app`.
+- The parameters within the trigger reflect values that correspond to your storage account.
+- The file that contains the functions is named `function_app.py`.
+
+To create your first function in the new v2 model, see one of these quickstart articles:
+++ [Get started with Visual Studio Code](./create-first-function-vs-code-python.md)++ [Get started with the command prompt](./create-first-function-cli-python.md)+
+## Blob trigger
+
+The following code snippet defines a function triggered from Azure Blob Storage:
+
+```python
+import logging
+import azure.functions as func
+app = func.FunctionApp()
+@app.function_name(name="BlobTrigger1")
+@app.blob_trigger(arg_name="myblob", path="samples-workitems/{name}",
+ connection="<STORAGE_CONNECTION_SETTING>")
+def test_function(myblob: func.InputStream):
+ logging.info(f"Python blob trigger function processed blob \n"
+ f"Name: {myblob.name}\n"
+ f"Blob Size: {myblob.length} bytes")
+```
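+
+The `connection` value is the name of an app setting that contains your storage connection string, not the connection string itself. When running locally, you might define that setting in *local.settings.json*; the following sketch assumes a hypothetical setting name of `MyStorageConnection`:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "FUNCTIONS_WORKER_RUNTIME": "python",
+    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+    "MyStorageConnection": "<your-storage-connection-string>"
+  }
+}
+```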
+
+## Azure Cosmos DB trigger
+
+The following code snippet defines a function triggered from an Azure Cosmos DB (SQL API) database:
+
+```python
+import logging
+import azure.functions as func
+app = func.FunctionApp()
+@app.function_name(name="CosmosDBTrigger1")
+@app.cosmos_db_trigger(arg_name="documents", database_name="<DB_NAME>", collection_name="<COLLECTION_NAME>", connection_string_setting="<COSMOS_CONNECTION_SETTING>",
+ lease_collection_name="leases", create_lease_collection_if_not_exists="true")
+def test_function(documents: func.DocumentList) -> str:
+ if documents:
+ logging.info('Document id: %s', documents[0]['id'])
+```
+
+## Azure EventHub trigger
+
+The following code snippet defines a function triggered from an event hub instance:
+
+```python
+import logging
+import azure.functions as func
+app = func.FunctionApp()
+@app.function_name(name="EventHubTrigger1")
+@app.event_hub_message_trigger(arg_name="myhub", event_hub_name="samples-workitems",
+ connection="<EVENT_HUB_CONNECTION_SETTING>")
+def test_function(myhub: func.EventHubEvent):
+ logging.info('Python EventHub trigger processed an event: %s',
+ myhub.get_body().decode('utf-8'))
+```
+
+## HTTP trigger
+
+The following code snippet defines an HTTP triggered function:
+
+```python
+import azure.functions as func
+import logging
+app = func.FunctionApp(auth_level=func.AuthLevel.ANONYMOUS)
+@app.function_name(name="HttpTrigger1")
+@app.route(route="hello")
+def test_function(req: func.HttpRequest) -> func.HttpResponse:
+ logging.info('Python HTTP trigger function processed a request.')
+ name = req.params.get('name')
+ if not name:
+ try:
+ req_body = req.get_json()
+ except ValueError:
+ pass
+ else:
+ name = req_body.get('name')
+ if name:
+ return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.")
+ else:
+ return func.HttpResponse(
+ "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
+ status_code=200
+ )
+```
+
+## Azure Queue Storage trigger
+
+```python
+import logging
+import azure.functions as func
+app = func.FunctionApp()
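+# The connection parameter is the name of an app setting that contains the storage
+# connection string; when it's empty, the runtime uses the default AzureWebJobsStorage setting.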
+@app.function_name(name="QueueTrigger1")
+@app.queue_trigger(arg_name="msg", queue_name="python-queue-items",
+ connection="")
+def test_function(msg: func.QueueMessage):
+    logging.info('Python Queue Storage trigger processed a message: %s',
+                 msg.get_body().decode('utf-8'))
+```
+
+## Azure Service Bus queue trigger
+
+```python
+import logging
+import azure.functions as func
+app = func.FunctionApp()
+@app.function_name(name="ServiceBusQueueTrigger1")
+@app.service_bus_queue_trigger(arg_name="msg", queue_name="myinputqueue", connection="")
+def test_function(msg: func.ServiceBusMessage):
+ logging.info('Python ServiceBus queue trigger processed message: %s',
+ msg.get_body().decode('utf-8'))
+```
+
+## Azure Service Bus topic trigger
+
+```python
+import logging
+import azure.functions as func
+app = func.FunctionApp()
+@app.function_name(name="ServiceBusTopicTrigger1")
+@app.service_bus_topic_trigger(arg_name="message", topic_name="mytopic", connection="", subscription_name="testsub")
+def test_function(message: func.ServiceBusMessage):
+ message_body = message.get_body().decode("utf-8")
+ logging.info("Python ServiceBus topic trigger processed message.")
+ logging.info("Message Body: " + message_body)
+```
+
+## Timer trigger
+
+```python
+import datetime
+import logging
+import azure.functions as func
+app = func.FunctionApp()
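+# The NCRONTAB expression has six fields (second minute hour day month day-of-week);
+# "0 */5 * * * *" runs the function every five minutes.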
+@app.function_name(name="mytimer")
+@app.schedule(schedule="0 */5 * * * *", arg_name="mytimer", run_on_startup=True,
+ use_monitor=False)
+def test_function(mytimer: func.TimerRequest) -> None:
+ utc_timestamp = datetime.datetime.utcnow().replace(
+ tzinfo=datetime.timezone.utc).isoformat()
+ if mytimer.past_due:
+ logging.info('The timer is past due!')
+ logging.info('Python timer trigger function ran at %s', utc_timestamp)
+```
+## Next steps
+++ [Python developer guide](./functions-reference-python.md)++ [Get started with Visual Studio Code](./create-first-function-vs-code-python.md)++ [Get started with the command prompt](./create-first-function-cli-python.md)
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
Replace `<TARGET_VERSION>` in the example with a specific version of the package
## Add a function to your project
-You can add a new function to an existing project by using one of the predefined Functions triggers templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
+You can add a new function to an existing project by using one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
The results of this action depend on your project's language:
A new folder is created in the project. The folder contains a new function.json
# [Python](#tab/python)
-A new folder is created in the project. The folder contains a new function.json file and the new Python code file.
+The results depend on the Python programming model. For more information, see the [Azure Functions Python developer guide](./functions-reference-python.md).
+
+**Python v1**: A new folder is created in the project. The folder contains a new function.json file and the new Python code file.
+
+**Python v2**: New function code is added either to the default function_app.py file or to another Python file you selected.
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Title: Python developer reference for Azure Functions
-description: Understand how to develop functions with Python.
+description: Understand how to develop functions with Python
Last updated 05/25/2022 ms.devlang: python
+zone_pivot_groups: python-mode-functions
# Azure Functions Python developer guide
-This article is an introduction to developing for Azure Functions by using Python. It assumes that you've already read the [Azure Functions developer guide](functions-reference.md).
+This article is an introduction to developing Azure Functions using Python. The content below assumes that you've already read the [Azure Functions developer guide](functions-reference.md).
-As a Python developer, you might also be interested in one of the following articles:
+> [!IMPORTANT]
+> This article supports both the v1 and v2 programming model for Python in Azure Functions.
+> The v2 programming model is currently in preview.
+> While the v1 model uses a *function.json* file to define functions, the new v2 model lets you instead use a decorator-based approach. This new approach results in a simpler file structure and a more code-centric programming experience. Choose the **v2** selector at the top of the article to learn about this new programming model.
+
+As a Python developer, you may also be interested in one of the following articles:
| Getting started | Concepts| Scenarios/Samples | |--|--|--|
-| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li></ul> | <ul><li>[Image classification with PyTorch](machine-learning-pytorch.md)</li><li>[Azure Automation sample](/samples/azure-samples/azure-functions-python-list-resource-groups/azure-functions-python-sample-list-resource-groups/)</li><li>[Machine learning with TensorFlow](functions-machine-learning-tensorflow.md)</li><li>[Browse Python samples](/samples/browse/?products=azure-functions&languages=python)</li></ul> |
+| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md?pivots=python-mode-configuration)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md?pivots=python-mode-configuration)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li></ul> | <ul><li>[Image classification with PyTorch](machine-learning-pytorch.md)</li><li>[Azure Automation sample](/samples/azure-samples/azure-functions-python-list-resource-groups/azure-functions-python-sample-list-resource-groups/)</li><li>[Machine learning with TensorFlow](functions-machine-learning-tensorflow.md)</li><li>[Browse Python samples](/samples/browse/?products=azure-functions&languages=python)</li></ul> |
+| Getting started | Concepts|
+|--|--|--|
+| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md?pivots=python-mode-decorators)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md?pivots=python-mode-decorators)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li></ul> |
> [!NOTE]
-> Although you can [develop your Python-based functions locally on Windows](create-first-function-vs-code-python.md#run-the-function-locally), Python functions are supported in Azure only when they're running on Linux. See the [list of supported operating system/runtime combinations](functions-scale.md#operating-systemruntime).
+> While you can develop your Python-based Azure Functions locally on Windows, Python is only supported on a Linux-based hosting plan when running in Azure. See the list of supported [operating system/runtime](functions-scale.md#operating-systemruntime) combinations.
## Programming model
-Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method called `main()` in the *\__init\__.py* file. You can also [specify an alternate entry point](#alternate-entry-point).
+Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method called `main()` in the `__init__.py` file. You can also [specify an alternate entry point](#alternate-entry-point).
-Data from triggers and bindings is bound to the function via method attributes that use the `name` property defined in the *function.json* file. For example, the following _function.json_ file describes a simple function triggered by an HTTP request named `req`:
+Data from triggers and bindings is bound to the function via method attributes using the `name` property defined in the *function.json* file. For example, the _function.json_ below describes a simple function triggered by an HTTP request named `req`:
:::code language="json" source="~/functions-quickstart-templates/Functions.Templates/Templates/HttpTrigger-Python/function.json":::
-Based on this definition, the *\__init\__.py* file that contains the function code might look like the following example:
+Based on this definition, the `__init__.py` file that contains the function code might look like the following example:
+
+```python
+def main(req):
+ user = req.params.get('user')
+ return f'Hello, {user}!'
+```
+
+You can also explicitly declare the attribute types and return type in the function using Python type annotations. This action helps you to use the IntelliSense and autocomplete features provided by many Python code editors.
+
+```python
+import azure.functions
++
+def main(req: azure.functions.HttpRequest) -> str:
+ user = req.params.get('user')
+ return f'Hello, {user}!'
+```
+
+Use the Python annotations included in the [azure.functions.*](/python/api/azure-functions/azure.functions) package to bind inputs and outputs to your methods.
+Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method in the `function_app.py` file.
+
+Triggers and bindings can be declared and used in a function in a decorator-based approach. They're defined in the same file, `function_app.py`, as the functions. As an example, the following _function_app.py_ file represents a function triggered by an HTTP request.
```python
+@app.function_name(name="HttpTrigger1")
+@app.route(route="req")
+ def main(req): user = req.params.get('user') return f'Hello, {user}!' ```
-You can also explicitly declare the attribute types and return type in the function by using Python type annotations. This action helps you to use the IntelliSense and autocomplete features that many Python code editors provide.
+You can also explicitly declare the attribute types and return type in the function using Python type annotations. This helps you use the IntelliSense and autocomplete features provided by many Python code editors.
```python import azure.functions
+@app.function_name(name="HttpTrigger1")
+@app.route(route="req")
def main(req: azure.functions.HttpRequest) -> str: user = req.params.get('user') return f'Hello, {user}!' ```
-Use the Python annotations included in the [azure.functions.*](/python/api/azure-functions/azure.functions) package to bind inputs and outputs to your methods.
+At this time, only specific triggers and bindings are supported by the v2 programming model. Supported triggers and bindings are as follows.
+
+| Type | Trigger | Input Binding | Output Binding |
+| | | | |
+| HTTP | x | | |
+| Timer | x | | |
+| Azure Queue Storage | x | | x |
+| Azure Service Bus Topic | x | | x |
+| Azure Service Bus Queue | x | | x |
+| Azure Cosmos DB | x | x | x |
+| Azure Blob Storage | x | x | x |
+| Azure Event Grid | x | | x |
+
+To learn about known limitations with the v2 model and their workarounds, see [Troubleshoot Python errors in Azure Functions](./recover-python-functions.md?pivots=python-mode-decorators).
## Alternate entry point
-You can change the default behavior of a function by optionally specifying the `scriptFile` and `entryPoint` properties in the *function.json* file. For example, the following _function.json_ file tells the runtime to use the `customentry()` method in the _main.py_ file as the entry point for your function:
+You can change the default behavior of a function by optionally specifying the `scriptFile` and `entryPoint` properties in the *function.json* file. For example, the _function.json_ below tells the runtime to use the `customentry()` method in the _main.py_ file as the entry point for your Azure Function.
```json {
You can change the default behavior of a function by optionally specifying the `
} ```
+During the preview, the entry point must be in the `function_app.py` file. However, functions within the project can be referenced in *function_app.py* by using [blueprints](#blueprints) or by importing them.
+ ## Folder structure
-The recommended folder structure for an Azure Functions project in Python looks like the following example:
+The recommended folder structure for a Python Functions project looks like the following example:
``` <project_root>/
The recommended folder structure for an Azure Functions project in Python looks
| - requirements.txt | - Dockerfile ```
-The main project folder (*<project_root>*) can contain the following files:
+The main project folder (<project_root>) can contain the following files:
+
+* *local.settings.json*: Used to store app settings and connection strings when running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file).
+* *requirements.txt*: Contains the list of Python packages the system installs when publishing to Azure.
+* *host.json*: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md).
+* *.vscode/*: (Optional) Contains stored Visual Studio Code configuration. To learn more, see [VS Code settings](https://code.visualstudio.com/docs/getstarted/settings).
+* *.venv/*: (Optional) Contains a Python virtual environment used by local development.
+* *Dockerfile*: (Optional) Used when publishing your project in a [custom container](functions-create-function-linux-custom-image.md).
+* *tests/*: (Optional) Contains the test cases of your function app.
+* *.funcignore*: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains `.vscode/` to ignore your editor setting, `.venv/` to ignore local Python virtual environment, `tests/` to ignore test cases, and `local.settings.json` to prevent local app settings being published.
+
+Each function has its own code file and binding configuration file (function.json).
+The recommended folder structure for a Python Functions project looks like the following example:
-* *local.settings.json*: Used to store app settings and connection strings when functions are running locally. This file isn't published to Azure. To learn more, see [Local settings file](functions-develop-local.md#local-settings-file).
-* *requirements.txt*: Contains the list of Python packages that the system installs when you're publishing to Azure.
-* *host.json*: Contains configuration options that affect all functions in a function app instance. This file is published to Azure. Not all options are supported when functions are running locally. To learn more, see the [host.json reference](functions-host-json.md).
-* *.vscode/*: (Optional) Contains stored Visual Studio Code configurations. To learn more, see [User and Workspace Settings](https://code.visualstudio.com/docs/getstarted/settings).
-* *.venv/*: (Optional) Contains a Python virtual environment that's used for local development.
-* *Dockerfile*: (Optional) Used when you're publishing your project in a [custom container](functions-create-function-linux-custom-image.md).
+```
+ <project_root>/
+ | - .venv/
+ | - .vscode/
+ | - function_app.py
+ | - additional_functions.py
+ | - tests/
+ | | - test_my_function.py
+ | - .funcignore
+ | - host.json
+ | - local.settings.json
+ | - requirements.txt
+ | - Dockerfile
+```
+
+The main project folder (<project_root>) can contain the following files:
+* *.venv/*: (Optional) Contains a Python virtual environment used by local development.
+* *.vscode/*: (Optional) Contains stored Visual Studio Code configuration. To learn more, see [VS Code settings](https://code.visualstudio.com/docs/getstarted/settings).
+* *function_app.py*: This is the default location for all functions and their related triggers and bindings.
+* *additional_functions.py*: (Optional) Any other Python files that contain functions (usually for logical grouping) that are referenced in `function_app.py` through blueprints.
* *tests/*: (Optional) Contains the test cases of your function app.
-* *.funcignore*: (Optional) Declares files that shouldn't be published to Azure. Usually, this file contains `.vscode/` to ignore your editor setting, `.venv/` to ignore the local Python virtual environment, `tests/` to ignore test cases, and `local.settings.json` to prevent local app settings from being published.
+* *.funcignore*: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains `.vscode/` to ignore your editor setting, `.venv/` to ignore local Python virtual environment, `tests/` to ignore test cases, and `local.settings.json` to prevent local app settings being published.
+* *host.json*: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md).
+* *local.settings.json*: Used to store app settings and connection strings when running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file).
+* *requirements.txt*: Contains the list of Python packages the system installs when publishing to Azure.
+* *Dockerfile*: (Optional) Used when publishing your project in a [custom container](functions-create-function-linux-custom-image.md).
+
+When you deploy your project to a function app in Azure, the entire contents of the main project (*<project_root>*) folder should be included in the package, but not the folder itself. That means `host.json` should be in the package root. We recommend that you maintain your tests in a folder along with other functions, in this example `tests/`. For more information, see [Unit Testing](#unit-testing).
+
+## Blueprints
+
+The v2 programming model introduces the concept of _blueprints_. A blueprint is a new class that's instantiated to register functions outside of the core function application. The functions registered in blueprint instances aren't indexed directly by the function runtime. To get these blueprint functions indexed, the function app needs to register the functions from the blueprint instances.
-Each function has its own code file and binding configuration file (*function.json*).
+Using blueprints provides the following benefits:
-When you deploy your project to a function app in Azure, the entire contents of the main project (*<project_root>*) folder should be included in the package, but not the folder itself. That means *host.json* should be in the package root. We recommend that you maintain your tests in a folder along with other functions. In this example, the folder is *tests/*. For more information, see [Unit testing](#unit-testing).
+* Lets you break up the function app into modular components, enabling you to define functions in multiple Python files and divide them into different components per file.
+* Provides extensible public function app interfaces to build and reuse your own APIs.
+
+The following example shows how to use blueprints:
+
+First, in an `http_blueprint.py` file, an HTTP triggered function is defined and added to a blueprint object.
+
+```python
+import logging
+
+import azure.functions as func
+
+bp = func.Blueprint()
+
+@bp.route(route="default_template")
+def default_template(req: func.HttpRequest) -> func.HttpResponse:
+ logging.info('Python HTTP trigger function processed a request.')
+
+ name = req.params.get('name')
+ if not name:
+ try:
+ req_body = req.get_json()
+ except ValueError:
+ pass
+ else:
+ name = req_body.get('name')
+
+ if name:
+ return func.HttpResponse(
+ f"Hello, {name}. This HTTP triggered function "
+ f"executed successfully.")
+ else:
+ return func.HttpResponse(
+ "This HTTP triggered function executed successfully. "
+ "Pass a name in the query string or in the request body for a"
+ " personalized response.",
+ status_code=200
+ )
+```
+
+Next, in `function_app.py`, the blueprint object is imported and its functions are registered to the function app.
+
+```python
+import azure.functions as func
+from http_blueprint import bp
+
+app = func.FunctionApp()
+
+app.register_functions(bp)
+```
+ ## Import behavior
-You can import modules in your function code by using both absolute and relative references. Based on the folder structure shown earlier, the following imports work from within the function file *<project_root>\my\_first\_function\\_\_init\_\_.py*:
+You can import modules in your function code using both absolute and relative references. Based on the folder structure shown above, the following imports work from within the function file *<project_root>\my\_first\_function\\_\_init\_\_.py*:
```python from shared_code import my_first_helper_function #(absolute)
from . import example #(relative)
``` > [!NOTE]
-> The *shared_code/* folder needs to contain an *\_\_init\_\_.py* file to mark it as a Python package when you're using absolute import syntax.
+> The *shared_code/* folder needs to contain an \_\_init\_\_.py file to mark it as a Python package when using absolute import syntax.
-The following *\_\_app\_\_* import and beyond top-level relative import are deprecated. The static type checker and the Python test frameworks don't support them.
+The following \_\_app\_\_ import and beyond top-level relative import are deprecated, because they aren't supported by the static type checker or by Python test frameworks:
```python from __app__.shared_code import my_first_helper_function #(deprecated __app__ import)
from __app__.shared_code import my_first_helper_function #(deprecated __app__ im
from ..shared_code import my_first_helper_function #(deprecated beyond top-level relative import) ``` + ## Triggers and inputs
-Inputs are divided into two categories in Azure Functions: trigger input and other binding input. Although they're different in the *function.json* file, usage is identical in Python code. When functions are running locally, connection strings or secrets required by trigger and input sources are maintained in the `Values` collection of the *local.settings.json* file. When functions are running in Azure, those same connection strings or secrets are stored securely as [application settings](functions-how-to-use-azure-function-app-settings.md#settings).
+Inputs are divided into two categories in Azure Functions: trigger input and other input. Although they're different in the `function.json` file, usage is identical in Python code. Connection strings or secrets for trigger and input sources map to values in the `local.settings.json` file when running locally, and the application settings when running in Azure.
-The following example code demonstrates the difference between the two:
+For example, the following code demonstrates the difference between the two:
```json // function.json
def main(req: func.HttpRequest,
logging.info(f'Python HTTP triggered function processed: {obj.read()}') ```
-When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from Azure Blob Storage based on the ID in the route URL and made available as `obj` in the function body. Here, the storage account specified is the connection string found in the `AzureWebJobsStorage` app setting, which is the same storage account that the function app uses.
+When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from Azure Blob Storage based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the specified storage account is identified by the connection string found in the `AzureWebJobsStorage` app setting, which is the same storage account used by the function app.
+Inputs are divided into two categories in Azure Functions: trigger input and other input. Although they're defined using different decorators, usage is similar in Python code. Connection strings or secrets for trigger and input sources map to values in the `local.settings.json` file when running locally, and the application settings when running in Azure.
+
+As an example, the following code demonstrates the difference between the two:
+
+```json
+// local.settings.json
+{
+ "IsEncrypted": false,
+ "Values": {
+ "FUNCTIONS_WORKER_RUNTIME": "python",
+ "AzureWebJobsStorage": "<azure-storage-connection-string>"
+ }
+}
+```
+
+```python
+# function_app.py
+import azure.functions as func
+import logging
+
+app = func.FunctionApp()
+
+@app.route(route="req")
+@app.read_blob(arg_name="obj", path="samples/{id}", connection="AzureWebJobsStorage")
+
+def main(req: func.HttpRequest,
+ obj: func.InputStream):
+ logging.info(f'Python HTTP triggered function processed: {obj.read()}')
+```
+
+When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from Azure Blob Storage based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the specified storage account is identified by the connection string found in the `AzureWebJobsStorage` app setting, which is the same storage account used by the function app.
+
+At this time, only specific triggers and bindings are supported by the v2 programming model. Supported triggers and bindings are as follows.
+
+| Type | Trigger | Input Binding | Output Binding |
+| | | | |
+| HTTP | x | | |
+| Timer | x | | |
+| Azure Queue Storage | x | | x |
+| Azure Service Bus topic | x | | x |
+| Azure Service Bus queue | x | | x |
+| Azure Cosmos DB | x | x | x |
+| Azure Blob Storage | x | x | x |
+| Azure Event Grid | x | | x |
+
+To learn more about defining triggers and bindings in the v2 model, see this [documentation](https://github.com/Azure/azure-functions-python-library/blob/dev/docs/ProgModelSpec.pyi).
+ ## Outputs
-Output can be expressed in the return value and in output parameters. If there's only one output, we recommend using the return value. For multiple outputs, you'll have to use output parameters.
+Output can be expressed both in return value and output parameters. If there's only one output, we recommend using the return value. For multiple outputs, you'll have to use output parameters.
-To use the return value of a function as the value of an output binding, set the `name` property of the binding to `$return` in *function.json*.
+To use the return value of a function as the value of an output binding, the `name` property of the binding should be set to `$return` in `function.json`.
-To produce multiple outputs, use the `set()` method provided by the [azure.functions.Out](/python/api/azure-functions/azure.functions.out) interface to assign a value to the binding. For example, the following function can push a message to a queue and return an HTTP response:
+To produce multiple outputs, use the `set()` method provided by the [`azure.functions.Out`](/python/api/azure-functions/azure.functions.out) interface to assign a value to the binding. For example, the following function can push a message to a queue and also return an HTTP response.
```json {
def main(req: func.HttpRequest,
return message ```
+Output can be expressed both in return value and output parameters. If there's only one output, we recommend using the return value. For multiple outputs, you'll have to use output parameters.
+
+To produce multiple outputs, use the `set()` method provided by the [`azure.functions.Out`](/python/api/azure-functions/azure.functions.out) interface to assign a value to the binding. For example, the following function can push a message to a queue and also return an HTTP response.
+
+```python
+# function_app.py
+import azure.functions as func
++
+@app.write_blob(arg_name="msg", path="output-container/{name}",
+ connection="AzureWebJobsStorage")
+
+def test_function(req: func.HttpRequest,
+ msg: func.Out[str]) -> str:
+
+ message = req.params.get('body')
+ msg.set(message)
+ return message
+```
+ ## Logging Access to the Azure Functions runtime logger is available via a root [`logging`](https://docs.python.org/3/library/logging.html#module-logging) handler in your function app. This logger is tied to Application Insights and allows you to flag warnings and errors that occur during the function execution.
-The following example logs an info message when the function is invoked via an HTTP trigger:
+The following example logs an info message when the function is invoked via an HTTP trigger.
```python import logging
More logging methods are available that let you write to the console at differen
| Method | Description | | - | |
-| `critical(_message_)` | Writes a message with level CRITICAL on the root logger. |
-| `error(_message_)` | Writes a message with level ERROR on the root logger. |
-| `warning(_message_)` | Writes a message with level WARNING on the root logger. |
-| `info(_message_)` | Writes a message with level INFO on the root logger. |
-| `debug(_message_)` | Writes a message with level DEBUG on the root logger. |
+| **`critical(_message_)`** | Writes a message with level CRITICAL on the root logger. |
+| **`error(_message_)`** | Writes a message with level ERROR on the root logger. |
+| **`warning(_message_)`** | Writes a message with level WARNING on the root logger. |
+| **`info(_message_)`** | Writes a message with level INFO on the root logger. |
+| **`debug(_message_)`** | Writes a message with level DEBUG on the root logger. |
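+
+For example, the following sketch shows several of these methods in use inside a hypothetical HTTP triggered function (the trigger configuration is omitted):
+
+```python
+import logging
+import azure.functions as func
+
+def main(req: func.HttpRequest) -> func.HttpResponse:
+    logging.debug('Query parameters: %s', dict(req.params))  # verbose diagnostic detail
+    logging.info('Python HTTP trigger function processed a request.')  # normal operation
+    if not req.params.get('name'):
+        logging.warning('No name was provided in the request.')  # unexpected but recoverable
+    return func.HttpResponse('OK')
+```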
To learn more about logging, see [Monitor Azure Functions](functions-monitoring.md). ### Log custom telemetry
-By default, the Azure Functions runtime collects logs and other telemetry data that your functions generate. This telemetry ends up as traces in Application Insights. By default, [triggers and bindings](functions-triggers-bindings.md#supported-bindings) also collect request and dependency telemetry for certain Azure services.
-
-To collect custom request and custom dependency telemetry outside bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure). The Azure Functions extension sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
+By default, the Functions runtime collects logs and other telemetry data generated by your functions. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services are also collected by default by [triggers and bindings](functions-triggers-bindings.md#supported-bindings). To collect custom request and custom dependency telemetry outside of bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure). This extension sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
>[!NOTE] >To use the OpenCensus Python extensions, you need to enable [Python worker extensions](#python-worker-extensions) in your function app by setting `PYTHON_ENABLE_WORKER_EXTENSIONS` to `1`. You also need to switch to using the Application Insights connection string by adding the [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string) setting to your [application settings](functions-how-to-use-azure-function-app-settings.md#settings), if it's not already there.
def main(req, context):
}) ```
-## HTTP trigger and bindings
+## HTTP trigger
-The HTTP trigger is defined in the *function.json* file. The `name` parameter of the binding must match the named parameter in the function.
+The HTTP trigger is defined in the function.json file. The `name` of the binding must match the named parameter in the function.
+In the previous examples, a binding name `req` is used. This parameter is an [HttpRequest] object, and an [HttpResponse] object is returned.
-The previous examples use the binding name `req`. This parameter is an [HttpRequest] object, and an [HttpResponse] object is returned.
+From the [HttpRequest] object, you can get request headers, query parameters, route parameters, and the message body.
-From the `HttpRequest` object, you can get request headers, query parameters, route parameters, and the message body.
-
-The following example is from the [HTTP trigger template for Python](https://github.com/Azure/azure-functions-templates/tree/dev/Functions.Templates/Templates/HttpTrigger-Python):
+The following example is from the [HTTP trigger template for Python](https://github.com/Azure/azure-functions-templates/tree/dev/Functions.Templates/Templates/HttpTrigger-Python).
```python def main(req: func.HttpRequest) -> func.HttpResponse:
def main(req: func.HttpRequest) -> func.HttpResponse:
) ```
-In this function, the value of the `name` query parameter is obtained from the `params` parameter of the `HttpRequest` object. The JSON-encoded message body is read using the `get_json` method.
+In this function, the value of the `name` query parameter is obtained from the `params` parameter of the [HttpRequest] object. The JSON-encoded message body is read using the `get_json` method.
+
+Likewise, you can set the `status_code` and `headers` for the response message in the returned [HttpResponse] object.
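
For example, here's a minimal sketch (not part of the original template) that returns a response with an explicit status code and a custom header:

```python
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Return a plain-text body with an explicit status code and a custom header.
    return func.HttpResponse(
        body="Resource created.",
        status_code=201,
        headers={"X-Request-Source": "docs-sample"},
        mimetype="text/plain"
    )
```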
+In the v2 programming model, the HTTP trigger is defined by decorating the function directly in your Python code, so a separate function.json entry isn't required.
+The binding name `req` is used as the named parameter. This parameter is an [HttpRequest] object, and an [HttpResponse] object is returned.
-Likewise, you can set the `status_code` and `headers` information for the response message in the returned `HttpResponse` object.
+From the [HttpRequest] object, you can get request headers, query parameters, route parameters, and the message body.
+
+The following example is from the HTTP trigger template for the Python v2 programming model. It's the sample code provided when you create a function by using Core Tools or Visual Studio Code.
+
+```python
+import logging
+
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="HttpTrigger1")
+@app.route(route="hello")
+def test_function(req: func.HttpRequest) -> func.HttpResponse:
+ logging.info('Python HTTP trigger function processed a request.')
+
+ name = req.params.get('name')
+ if not name:
+ try:
+ req_body = req.get_json()
+ except ValueError:
+ pass
+ else:
+ name = req_body.get('name')
+
+ if name:
+ return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.")
+ else:
+ return func.HttpResponse(
+ "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
+ status_code=200
+ )
+```
+
+In this function, the value of the `name` query parameter is obtained from the `params` parameter of the [HttpRequest] object. The JSON-encoded message body is read using the `get_json` method.
+
+Likewise, you can set the `status_code` and `headers` for the response message in the returned [HttpResponse] object.
+
+To pass in a name in this example, paste the URL provided when you run the function, and append it with `?name={name}`.
+ ## Web frameworks You can use WSGI and ASGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions. This section shows how to modify your functions to support these frameworks.
-First, the *function.json* file must be updated to include `route` in the HTTP trigger, as shown in the following example:
+First, the function.json file must be updated to include a `route` in the HTTP trigger, as shown in the following example:
```json {
First, the *function.json* file must be updated to include `route` in the HTTP t
} ```
-The *host.json* file must also be updated to include an HTTP `routePrefix` value, as shown in the following example:
+The host.json file must also be updated to include an HTTP `routePrefix`, as shown in the following example.
```json {
The *host.json* file must also be updated to include an HTTP `routePrefix` value
} ```
-Update the Python code file *__init__.py*, based on the interface that your framework uses. The following example shows either an ASGI handler approach or a WSGI wrapper approach for Flask:
+Update the Python code file `__init__.py`, depending on the interface used by your framework. The following example shows either an ASGI handler approach or a WSGI wrapper approach for Flask:
# [ASGI](#tab/asgi)
def main(req: func.HttpRequest, context) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request.') return func.WsgiMiddleware(app).handle(req, context) ```
-For a full example, see [Using the Flask framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/).
+For a full example, see [Using Flask Framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/).
+++
+You can use ASGI and WSGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions, as shown in the following examples:
+
+# [ASGI](#tab/asgi)
+
+`AsgiFunctionApp` is the top-level function app class for constructing ASGI HTTP functions.
+
+```python
+# function_app.py
+
+import azure.functions as func
+from fastapi import FastAPI, Request, Response
+
+fast_app = FastAPI()
+
+@fast_app.get("/return_http_no_body")
+async def return_http_no_body():
+ return Response(content='', media_type="text/plain")
+
+app = func.AsgiFunctionApp(app=fast_app,
+ http_auth_level=func.AuthLevel.ANONYMOUS)
+```
+
+# [WSGI](#tab/wsgi)
+
+`WsgiFunctionApp` is the top-level function app class for constructing WSGI HTTP functions.
+
+```python
+# function_app.py
+
+import logging
+
+import azure.functions as func
+from flask import Flask, request, Response, redirect, url_for
+
+flask_app = Flask(__name__)
+logger = logging.getLogger("my-function")
+
+@flask_app.get("/return_http")
+def return_http():
+    return Response('<h1>Hello World™</h1>', mimetype='text/html')
+
+app = func.WsgiFunctionApp(app=flask_app.wsgi_app,
+ http_auth_level=func.AuthLevel.ANONYMOUS)
+```
## Scaling and performance
-For scaling and performance best practices for Python function apps, see [Improve throughput performance of Python apps in Azure Functions](python-scale-performance-reference.md).
+For scaling and performance best practices for Python function apps, see the [Python scale and performance article](python-scale-performance-reference.md).
## Context
def main(req: azure.functions.HttpRequest,
return f'{context.invocation_id}' ```
-The [Context](/python/api/azure-functions/azure.functions.context) class has the following string attributes:
+The [**Context**](/python/api/azure-functions/azure.functions.context) class has the following string attributes:
-- `function_directory`: Directory in which the function is running.
+`function_directory`
+The directory in which the function is running.
-- `function_name`: Name of the function.
+`function_name`
+Name of the function.
-- `invocation_id`: ID of the current function invocation.
+`invocation_id`
+ID of the current function invocation.
-- `trace_context`: Context for distributed tracing. For more information, see [Trace Context](https://www.w3.org/TR/trace-context/) on the W3C website.
+`trace_context`
+Context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/).
-- `retry_context`: Context for retries to the function. For more information, see [Retry policies](./functions-bindings-errors.md#retry-policies).
+`retry_context`
+Context for retries to the function. For more information, see [Retry policies](./functions-bindings-errors.md#retry-policies).
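
As a minimal sketch of reading these attributes (assuming the `context` parameter is declared as shown earlier), consider the following:

```python
import azure.functions as func

def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    # Report where and how this invocation is running, using the string
    # attributes documented above.
    return func.HttpResponse(
        f"Function {context.function_name} (invocation {context.invocation_id}) "
        f"is running from {context.function_directory}."
    )
```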
## Global variables
-It isn't guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. To cache the results of an expensive computation, declare it as a global variable:
+It isn't guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. In order to cache the results of an expensive computation, declare it as a global variable.
```python CACHED_DATA = None
def main(req):
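
The snippet above appears truncated in this summary; a fuller minimal sketch of the same caching pattern (the `load_expensive_data` helper is hypothetical) looks like this:

```python
import json

import azure.functions as func

CACHED_DATA = None

def load_expensive_data():
    # Hypothetical expensive computation or I/O call.
    return {"answer": 42}

def main(req: func.HttpRequest) -> func.HttpResponse:
    global CACHED_DATA
    # Populate the cache only on the first invocation handled by this process;
    # later invocations in the same worker process reuse the value.
    if CACHED_DATA is None:
        CACHED_DATA = load_expensive_data()
    return func.HttpResponse(json.dumps(CACHED_DATA), mimetype="application/json")
```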
## Environment variables
-In Azure Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code:
+In Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code.
| Method | Description | | | |
-| `os.environ["myAppSetting"]` | Tries to get the application setting by key name. It raises an error when unsuccessful. |
-| `os.getenv("myAppSetting")` | Tries to get the application setting by key name. It returns `null` when unsuccessful. |
+| **`os.environ["myAppSetting"]`** | Tries to get the application setting by key name, raising an error when unsuccessful. |
+| **`os.getenv("myAppSetting")`** | Tries to get the application setting by key name, returning `None` when unsuccessful. |
Both of these ways require you to declare `import os`.
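
For example, a minimal sketch contrasting the two access methods (shown outside of any function for brevity):

```python
import os

# Raises a KeyError if the app setting isn't defined.
required_value = os.environ["myAppSetting"]

# Returns None (or the supplied default) if the app setting isn't defined.
optional_value = os.getenv("myAppSetting", "a-default-value")
```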
def main(req: func.HttpRequest) -> func.HttpResponse:
For local development, application settings are [maintained in the local.settings.json file](functions-develop-local.md#local-settings-file).
-## Python version
+In Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code.
-Azure Functions supports the following Python versions. These are official Python distributions.
+| Method | Description |
+| | |
+| **`os.environ["myAppSetting"]`** | Tries to get the application setting by key name, raising an error when unsuccessful. |
+| **`os.getenv("myAppSetting")`** | Tries to get the application setting by key name, returning `None` when unsuccessful. |
-| Functions version | Python versions |
-| -- | -- |
-| 4.x | 3.9<br/> 3.8<br/>3.7 |
-| 3.x | 3.9<br/> 3.8<br/>3.7<br/>3.6 |
-| 2.x | 3.7<br/>3.6 |
+Both of these ways require you to declare `import os`.
-To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The `--functions-version` option sets the Azure Functions runtime version.
+The following example uses `os.environ["myAppSetting"]` to get the [application setting](functions-how-to-use-azure-function-app-settings.md#settings), with the key named `myAppSetting`:
-### Changing Python version
+```python
+import logging
+import os
+import azure.functions as func
+
+app = func.FunctionApp()
-To set a Python function app to a specific language version, you need to specify the language and the version of the language in `linuxFxVersion` field in site configuration. For example, to change Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
+@app.function_name(name="HttpTrigger1")
+@app.route(route="req")
-To learn more about the Azure Functions runtime support policy, see [Language runtime support policy](./language-support-policy.md).
+def main(req: func.HttpRequest) -> func.HttpResponse:
-You can view and set `linuxFxVersion` from the Azure CLI by using the [az functionapp config show](/cli/azure/functionapp/config) command. Replace `<function_app>` with the name of your function app. Replace `<my_resource_group>` with the name of the resource group for your function app.
-```azurecli-interactive
-az functionapp config show --name <function_app> \
resource-group <my_resource_group>
+ # Get the setting named 'myAppSetting'
+ my_app_setting_value = os.environ["myAppSetting"]
+    logging.info(f'My app setting value:{my_app_setting_value}')
+
+    return func.HttpResponse(f'My app setting value: {my_app_setting_value}')
```
-You see `linuxFxVersion` in the following output, which has been truncated for clarity:
+For local development, application settings are [maintained in the local.settings.json file](functions-develop-local.md#local-settings-file).
-```output
-{
- ...
- "kind": null,
- "limits": null,
- "linuxFxVersion": <LINUX_FX_VERSION>,
- "loadBalancing": "LeastRequests",
- "localMySqlEnabled": false,
- "location": "West US",
- "logsDirectorySizeLimit": 35,
- ...
-}
+When using the new programming model, the following app setting needs to be enabled in the `local.settings.json` file, as follows:
+
+```json
+"AzureWebJobsFeatureFlags": "EnableWorkerIndexing"
```
-You can update the `linuxFxVersion` setting in the function app by using the [az functionapp config set](/cli/azure/functionapp/config) command. In the following code:
+When you deploy the function app, this setting isn't created automatically. You must explicitly create this setting in your function app in Azure for it to run by using the v2 model.
-- Replace `<FUNCTION_APP>` with the name of your function app. -- Replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. -- Replace `<LINUX_FX_VERSION>` with the Python version that you want to use, prefixed by `python|`. For example: `python|3.9`.
+Multiple Python workers aren't supported in v2 at this time. This means that setting `FUNCTIONS_WORKER_PROCESS_COUNT` to greater than 1 isn't supported for functions that use the v2 model.
-```azurecli-interactive
-az functionapp config set --name <FUNCTION_APP> \
resource-group <RESOURCE_GROUP> \linux-fx-version <LINUX_FX_VERSION>
-```
-You can run the command from [Azure Cloud Shell](../cloud-shell/overview.md) by selecting **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to run the command after you use [az login](/cli/azure/reference-index#az-login) to sign in.
+## Python version
+
+Azure Functions supports the following Python versions:
+
+| Functions version | Python<sup>*</sup> versions |
+| -- | -- |
+| 4.x | 3.9<br/> 3.8<br/>3.7 |
+| 3.x | 3.9<br/> 3.8<br/>3.7<br/>3.6 |
+| 2.x | 3.7<br/>3.6 |
-The function app restarts after you change the site configuration.
+<sup>*</sup>Official Python distributions
-### Local Python version
+To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The Functions runtime version is set by the `--functions-version` option. The Python version is set when the function app is created and can't be changed.
-When running locally, the Azure Functions Core Tools uses the available Python version.
+When you run your project locally, the runtime uses the available Python version.
+
+### Changing Python version
+
+To set a Python function app to a specific language version, you need to specify the language and the version of the language in the `linuxFxVersion` field in the site configuration. For example, to change a Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
+
+To learn how to view and change the `linuxFxVersion` site setting, see [How to target Azure Functions runtime versions](set-runtime-version.md#manual-version-updates-on-linux).
+
+For more general information, see the [Azure Functions runtime support policy](./language-support-policy.md) and [Supported languages in Azure Functions](./supported-languages.md).
## Package management
-When you're developing locally by using the Azure Functions Core Tools or Visual Studio Code, add the names and versions of the required packages to the *requirements.txt* file and install them by using `pip`.
+When developing locally using the Azure Functions Core Tools or Visual Studio Code, add the names and versions of the required packages to the `requirements.txt` file and install them using `pip`.
-For example, you can use the following requirements file and `pip` command to install the `requests` package from PyPI:
+For example, the following requirements file and pip command can be used to install the `requests` package from PyPI.
```txt requests==2.19.1
pip install -r requirements.txt
## Publishing to Azure
-When you're ready to publish, make sure that all your publicly available dependencies are listed in the *requirements.txt* file. This file is at the root of your project directory.
+When you're ready to publish, make sure that all your publicly available dependencies are listed in the requirements.txt file. You can locate this file at the root of your project directory.
-You can also find project files and folders that are excluded from publishing, including the virtual environment folder, in the root directory of your project.
+Project files and folders that are excluded from publishing, including the virtual environment folder, can be found in the root directory of your project.
-Three build actions are supported for publishing your Python project to Azure: remote build, local build, and builds that use custom dependencies.
+There are three build actions supported for publishing your Python project to Azure: remote build, local build, and builds using custom dependencies.
-You can also use Azure Pipelines to build your dependencies and publish by using continuous delivery (CD). To learn more, see [Continuous delivery by using Azure DevOps](functions-how-to-azure-devops.md).
+You can also use Azure Pipelines to build your dependencies and publish using continuous delivery (CD). To learn more, see [Continuous delivery with Azure Pipelines](functions-how-to-azure-devops.md).
### Remote build
-When you use a remote build, dependencies restored on the server and native dependencies match the production environment. This results in a smaller deployment package to upload. Use a remote build when you're developing Python apps on Windows. If your project has custom dependencies, you can [use a remote build with an extra index URL](#remote-build-with-extra-index-url).
+When you use remote build, dependencies restored on the server and native dependencies match the production environment. This results in a smaller deployment package to upload. Use remote build when developing Python apps on Windows. If your project has custom dependencies, you can [use remote build with extra index URL](#remote-build-with-extra-index-url).
-Dependencies are obtained remotely based on the contents of the *requirements.txt* file. [Remote build](functions-deployment-technologies.md#remote-build) is the recommended build method. By default, Azure Functions Core Tools requests a remote build when you use the following [func azure functionapp publish](functions-run-local.md#publish) command to publish your Python project to Azure. Replace `<APP_NAME>` with the name of your function app in Azure.
+Dependencies are obtained remotely based on the contents of the requirements.txt file. [Remote build](functions-deployment-technologies.md#remote-build) is the recommended build method. By default, the Azure Functions Core Tools requests a remote build when you use the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish your Python project to Azure.
```bash func azure functionapp publish <APP_NAME> ```
-The [Azure Functions extension for Visual Studio Code](./create-first-function-vs-code-csharp.md#publish-the-project-to-azure) also requests a remote build by default.
+Remember to replace `<APP_NAME>` with the name of your function app in Azure.
+
+The [Azure Functions Extension for Visual Studio Code](./create-first-function-vs-code-csharp.md#publish-the-project-to-azure) also requests a remote build by default.
### Local build
-Dependencies are obtained locally based on the contents of the *requirements.txt* file. You can prevent a remote build by using the following [func azure functionapp publish](functions-run-local.md#publish) command to publish with a local build. Replace `<APP_NAME>` with the name of your function app in Azure.
+Dependencies are obtained locally based on the contents of the requirements.txt file. You can prevent doing a remote build by using the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish with a local build.
```command func azure functionapp publish <APP_NAME> --build local ```
-When you use the `--build local` option, project dependencies are read from the *requirements.txt* file. Those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in the upload of a larger deployment package to Azure. If you can't get the *requirements.txt* file by using Core Tools, you must use the custom dependencies option for publishing.
+Remember to replace `<APP_NAME>` with the name of your function app in Azure.
+
+When you use the `--build local` option, project dependencies are read from the requirements.txt file, and those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in a larger deployment package being uploaded to Azure. If, for some reason, Core Tools can't process your requirements.txt file, you must use the custom dependencies option for publishing.
-We don't recommend using local builds when you're developing locally on Windows.
+We don't recommend using local builds when developing locally on Windows.
### Custom dependencies
-When your project has dependencies not found in the [Python Package Index](https://pypi.org/), there are two ways to build the project.
+When your project has dependencies not found in the [Python Package Index](https://pypi.org/), there are two ways to build the project. The build method you choose depends on how your dependencies are made available.
#### Remote build with extra index URL
-When your packages are available from an accessible custom package index, use a remote build. Before publishing, make sure to [create an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named `PIP_EXTRA_INDEX_URL`. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run `pip install` with the `--extra-index-url` option. To learn more, see the [Python pip install documentation](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format).
+When your packages are available from an accessible custom package index, use a remote build. Before publishing, make sure to [create an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named `PIP_EXTRA_INDEX_URL`. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run `pip install` using the `--extra-index-url` option. To learn more, see the [Python pip install documentation](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format).
You can also use basic authentication credentials with your extra package index URLs. To learn more, see [Basic authentication credentials](https://pip.pypa.io/en/stable/user_guide/#basic-authentication-credentials) in Python documentation.
-> [!NOTE]
-> If you need to change the base URL of the Python Package Index from the default of `https://pypi.org/simple`, you can do this by [creating an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named [`PIP_INDEX_URL`](functions-app-settings.md#pip_index_url) that points to a different package index URL. Like [`PIP_EXTRA_INDEX_URL`](functions-app-settings.md#pip_extra_index_url), [`PIP_INDEX_URL`](functions-app-settings.md#pip_index_url) is a pip-specific application setting that changes the source for pip to use.
-
+#### Install local packages
-#### Installing local packages
-
-If your project uses packages that aren't publicly available, you can make them available to your app by putting them in the *\_\_app\_\_/.python_packages* directory. Before publishing, run the following command to install the dependencies locally:
+If your project uses packages not publicly available to our tools, you can make them available to your app by putting them in the \_\_app\_\_/.python_packages directory. Before publishing, run the following command to install the dependencies locally:
```command pip install --target="<PROJECT_DIR>/.python_packages/lib/site-packages" -r requirements.txt ```
-When you're using custom dependencies, use the following `--no-build` publishing option because you've already installed the dependencies into the project folder. Replace `<APP_NAME>` with the name of your function app in Azure.
+When using custom dependencies, you should use the `--no-build` publishing option, since you've already installed the dependencies into the project folder.
```command func azure functionapp publish <APP_NAME> --no-build ```
+Remember to replace `<APP_NAME>` with the name of your function app in Azure.
+ ## Unit testing
-You can test functions written in Python the same way that you test other Python code: through standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the [azure.functions](https://pypi.org/project/azure-functions/) package. Because the `azure.functions` package isn't immediately available, be sure to install it via your *requirements.txt* file as described in the earlier [Package management](#package-management) section.
+Functions written in Python can be tested like other Python code using standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the `azure.functions` package. Since the [`azure.functions`](https://pypi.org/project/azure-functions/) package isn't immediately available, be sure to install it via your `requirements.txt` file as described in the [package management](#package-management) section above.
-Take *my_second_function* as an example. Following is a mock test of an HTTP triggered function.
+Take *my_second_function* as an example. The following is a mock test of an HTTP-triggered function:
-First, to create the *<project_root>/my_second_function/function.json* file and define this function as an HTTP trigger, use the following code:
+First, we need to create the *<project_root>/my_second_function/function.json* file and define this function as an HTTP trigger.
```json {
First, to create the *<project_root>/my_second_function/function.json* file and
} ```
-Now, you can implement *my_second_function* and *shared_code.my_second_helper_function*:
+Now, we can implement the *my_second_function* and the *shared_code.my_second_helper_function*.
```python # <project_root>/my_second_function/__init__.py
import logging
# Use absolute import to resolve shared_code modules from shared_code import my_second_helper_function
-# Define an HTTP trigger that accepts the ?value=<int> query parameter
+# Define an HTTP trigger that accepts the ?value=<int> query parameter
# Double the value and return the result in HttpResponse def main(req: func.HttpRequest) -> func.HttpResponse: logging.info('Executing my_second_function.')
def double(value: int) -> int:
return value * 2 ```
-You can start writing test cases for your HTTP trigger:
+We can start writing test cases for our HTTP trigger.
```python # <project_root>/tests/test_my_second_function.py
class TestFunction(unittest.TestCase):
) ```
-Inside your *.venv* Python virtual environment, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result.
+Inside your `.venv` Python virtual environment, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result.
+First, we need to create the *<project_root>/function_app.py* file and implement the *my_second_function* function as an HTTP trigger, along with *shared_code.my_second_helper_function*.
+
+```python
+# <project_root>/function_app.py
+import azure.functions as func
+import logging
+
+# Use absolute import to resolve shared_code modules
+from shared_code import my_second_helper_function
+
+app = func.FunctionApp()
++
+# Define an HTTP trigger that accepts the ?value=<int> query parameter
+# Double the value and return the result in HttpResponse
+@app.function_name(name="my_second_function")
+@app.route(route="hello")
+def main(req: func.HttpRequest) -> func.HttpResponse:
+ logging.info('Executing my_second_function.')
+
+ initial_value: int = int(req.params.get('value'))
+ doubled_value: int = my_second_helper_function.double(initial_value)
+
+ return func.HttpResponse(
+ body=f"{initial_value} * 2 = {doubled_value}",
+ status_code=200
+ )
+```
+
+```python
+# <project_root>/shared_code/__init__.py
+# Empty __init__.py file marks shared_code folder as a Python package
+```
+
+```python
+# <project_root>/shared_code/my_second_helper_function.py
+
+def double(value: int) -> int:
+ return value * 2
+```
+
+We can start writing test cases for our HTTP trigger.
+
+```python
+# <project_root>/tests/test_my_second_function.py
+import unittest
+import azure.functions as func
+from function_app import main
++
+class TestFunction(unittest.TestCase):
+ def test_my_second_function(self):
+ # Construct a mock HTTP request.
+ req = func.HttpRequest(
+ method='GET',
+ body=None,
+ url='/api/my_second_function',
+ params={'value': '21'})
+
+ # Call the function.
+ func_call = main.build().get_user_function()
+ resp = func_call(req)
+
+ # Check the output.
+ self.assertEqual(
+ resp.get_body(),
+ b'21 * 2 = 42',
+ )
+```
+
+Inside your `.venv` Python virtual environment, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result.
+ ## Temporary files
-The `tempfile.gettempdir()` method returns a temporary folder, which on Linux is */tmp*. Your application can use this directory to store temporary files that your functions generate and use during execution.
+The `tempfile.gettempdir()` method returns a temporary folder, which on Linux is `/tmp`. Your application can use this directory to store temporary files generated and used by your functions during execution.
> [!IMPORTANT]
-> Files written to the temporary directory aren't guaranteed to persist across invocations. During scale-out, temporary files aren't shared between instances.
+> Files written to the temporary directory aren't guaranteed to persist across invocations. During scale out, temporary files aren't shared between instances.
-The following example creates a named temporary file in the temporary directory (*/tmp*):
+The following example creates a named temporary file in the temporary directory (`/tmp`):
```python import logging
from os import listdir
filesDirListInTemp = listdir(tempFilePath) ```
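
Because the snippet above appears truncated in this summary, here's a minimal self-contained sketch of the same idea, writing a named temporary file and listing the temporary directory:

```python
import logging
import tempfile
from os import listdir

def write_temp_file() -> list:
    # On Linux-hosted function apps, tempfile.gettempdir() returns /tmp.
    temp_dir = tempfile.gettempdir()

    # Create a named temporary file in that directory and write to it.
    with tempfile.NamedTemporaryFile(dir=temp_dir, suffix='.txt', delete=False) as fp:
        fp.write(b'Hello from a temporary file!')
        logging.info('Wrote temporary file: %s', fp.name)

    # Return the current contents of the temporary directory.
    return listdir(temp_dir)
```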
-We recommend that you maintain your tests in a folder that's separate from the project folder. This action keeps you from deploying test code with your app.
+We recommend that you maintain your tests in a folder separate from the project folder. This action keeps you from deploying test code with your app.
## Preinstalled libraries
-A few libraries come with the runtime for Azure Functions on Python.
+There are a few libraries that come with the Python Functions runtime.
### Python Standard Library
-The Python Standard Library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems, these libraries are installed with Python. On Unix systems, package collections provide them.
+The Python Standard Library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems, these libraries are installed with Python. On Unix-based systems, they're provided by package collections.
-To view the full details of these libraries, use these links:
+To view the full details of these libraries, see the following links:
* [Python 3.6 Standard Library](https://docs.python.org/3.6/library/) * [Python 3.7 Standard Library](https://docs.python.org/3.7/library/) * [Python 3.8 Standard Library](https://docs.python.org/3.8/library/) * [Python 3.9 Standard Library](https://docs.python.org/3.9/library/)
-### Worker dependencies
+### Azure Functions Python worker dependencies
-The Python worker for Azure Functions requires a specific set of libraries. You can also use these libraries in your functions, but they aren't a part of the Python standard. If your functions rely on any of these libraries, they might not be available to your code when you're running outside Azure Functions. You can find a detailed list of dependencies in the `install\_requires` section in the [setup.py](https://github.com/Azure/azure-functions-python-worker/blob/dev/setup.py#L282) file.
+The Functions Python worker requires a specific set of libraries. You can also use these libraries in your functions, but they aren't a part of the Python standard. If your functions rely on any of these libraries, they may not be available to your code when running outside of Azure Functions. You can find a detailed list of dependencies in the **install\_requires** section in the [setup.py](https://github.com/Azure/azure-functions-python-worker/blob/dev/setup.py#L282) file.
> [!NOTE]
-> If your function app's *requirements.txt* file contains an `azure-functions-worker` entry, remove it. The Azure Functions platform automatically manages this worker, and we regularly update it with new features and bug fixes. Manually installing an old version of the worker in *requirements.txt* might cause unexpected problems.
+> If your function app's requirements.txt contains an `azure-functions-worker` entry, remove it. The functions worker is automatically managed by the Azure Functions platform, and we regularly update it with new features and bug fixes. Manually installing an old version of the worker in requirements.txt may cause unexpected issues.
> [!NOTE]
-> If your package contains certain libraries that might collide with the worker's dependencies (for example, protobuf, TensorFlow, or grpcio), configure [PYTHON_ISOLATE_WORKER_DEPENDENCIES](functions-app-settings.md#python_isolate_worker_dependencies-preview) to `1` in app settings to prevent your application from referring to the worker's dependencies. This feature is in preview.
+> If your package contains certain libraries that may collide with the worker's dependencies (for example, protobuf, tensorflow, or grpcio), configure [`PYTHON_ISOLATE_WORKER_DEPENDENCIES`](functions-app-settings.md#python_isolate_worker_dependencies-preview) to `1` in app settings to prevent your application from referring to the worker's dependencies. This feature is in preview.
-### Python library for Azure Functions
+### Azure Functions Python library
-Every Python worker update includes a new version of the [Python library for Azure Functions (azure.functions)](https://github.com/Azure/azure-functions-python-library). This approach makes it easier to continuously update your Python function apps, because each update is backward compatible. You can find a list of releases of this library in the [azure-functions information on the PyPi website](https://pypi.org/project/azure-functions/#history).
+Every Python worker update includes a new version of [Azure Functions Python library (azure.functions)](https://github.com/Azure/azure-functions-python-library). This approach makes it easier to continuously update your Python function apps, because each update is backwards-compatible. A list of releases of this library can be found in [azure-functions PyPi](https://pypi.org/project/azure-functions/#history).
-The runtime library version is fixed by Azure, and *requirements.txt* can't override it. The `azure-functions` entry in *requirements.txt* is only for linting and customer awareness.
+The runtime library version is fixed by Azure, and it can't be overridden by requirements.txt. The `azure-functions` entry in requirements.txt is only for linting and customer awareness.
-Use the following code to track the version of the Python library for Azure Functions in your runtime:
+Use the following code to track the actual version of the Python Functions library in your runtime:
```python getattr(azure.functions, '__version__', '< 1.2.1')
getattr(azure.functions, '__version__', '< 1.2.1')
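
As a minimal sketch of using that snippet inside a function (the logging call and response body are just for illustration):

```python
import logging

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Library versions earlier than 1.2.1 don't expose __version__.
    version = getattr(func, '__version__', '< 1.2.1')
    logging.info('azure.functions library version: %s', version)
    return func.HttpResponse(version)
```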
### Runtime system libraries
-The following table lists preinstalled system libraries in Docker images for the Python worker:
+For a list of preinstalled system libraries in Python worker Docker images, see the links below:
| Functions runtime | Debian version | Python versions | ||||
-| Version 2.x | Stretch | [Python 3.7](https://github.com/Azure/azure-functions-docker/blob/dev/host/4/bullseye/amd64/python/python37/python37.Dockerfile) |
| Version 3.x | Buster | [Python 3.6](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python36/python36.Dockerfile)<br/>[Python 3.7](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python37/python37.Dockerfile)<br />[Python 3.8](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python38/python38.Dockerfile)<br/> [Python 3.9](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python39/python39.Dockerfile)| ## Python worker extensions
Extensions are imported in your function code much like a standard Python librar
| Scope | Description | | | |
-| **Application level** | When the extension is imported into any function trigger, it applies to every function execution in the app. |
-| **Function level** | Execution is limited to only the specific function trigger into which it's imported. |
+| **Application-level** | When imported into any function trigger, the extension applies to every function execution in the app. |
+| **Function-level** | Execution is limited to only the specific function trigger into which it's imported. |
-Review the information for an extension to learn more about the scope in which the extension runs.
+Review the information for a given extension to learn more about the scope in which the extension runs.
-Extensions implement a Python worker extension interface. This action lets the Python worker process call into the extension code during the function execution lifecycle.
+Extensions implement a Python worker extension interface. This action lets the Python worker process call into the extension code during the function execution lifecycle. To learn more, see [Creating extensions](#creating-extensions).
### Using extensions You can use a Python worker extension library in your Python functions by following these basic steps:
-1. Add the extension package in the *requirements.txt* file for your project.
+1. Add the extension package in the requirements.txt file for your project.
1. Install the library into your app. 1. Add the application setting `PYTHON_ENABLE_WORKER_EXTENSIONS`:
- + To add the setting locally, add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file).
- + To add the setting in Azure, add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to your [app settings](functions-how-to-use-azure-function-app-settings.md#settings).
+ + Locally: add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file).
+ + Azure: add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to your [app settings](functions-how-to-use-azure-function-app-settings.md#settings).
1. Import the extension module into your function trigger.
-1. Configure the extension instance, if needed. Configuration requirements should be called out in the extension's documentation.
+1. Configure the extension instance, if needed. Configuration requirements should be called out in the extension's documentation.
> [!IMPORTANT]
-> Microsoft doesn't support or warranty third-party Python worker extension libraries. Make sure that any extensions you use in your function app are trustworthy. You bear the full risk of using a malicious or poorly written extension.
+> Third-party Python worker extension libraries are not supported or warrantied by Microsoft. You must make sure that any extensions you use in your function app are trustworthy, and you bear the full risk of using a malicious or poorly written extension.
-Third parties should provide specific documentation on how to install and consume their specific extension in your function app. For a basic example of how to consume an extension, see [Consuming your extension](develop-python-worker-extensions.md#consume-your-extension-locally).
+Third parties should provide specific documentation on how to install and consume their specific extension in your function app. For a basic example of how to consume an extension, see [Consuming your extension](develop-python-worker-extensions.md#consume-your-extension-locally).
Here are examples of using extensions in a function app, by scope:
-# [Application level](#tab/application-level)
+# [Application-level](#tab/application-level)
```python # <project_root>/requirements.txt
AppExtension.configure(key=value)
def main(req, context): # Use context.app_ext_attributes here ```
-# [Function level](#tab/function-level)
+# [Function-level](#tab/function-level)
```python # <project_root>/requirements.txt function-level-extension==1.0.0
def main(req, context):
### Creating extensions
-Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer designs, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution. These extensions can be published either to the PyPI registry or to GitHub repositories.
+Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer designs, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution. These extensions can be published either to the PyPI registry or to GitHub repositories.
To learn how to create, package, publish, and consume a Python worker extension package, see [Develop Python worker extensions for Azure Functions](develop-python-worker-extensions.md).
An extension inherited from [`AppExtensionBase`](https://github.com/Azure/azure-
| Method | Description | | | |
-| `init` | Called after the extension is imported. |
-| `configure` | Called from function code when it's needed to configure the extension. |
-| `post_function_load_app_level` | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only. Any attempt to write to a local file in this directory fails. |
-| `pre_invocation_app_level` | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
-| `post_invocation_app_level` | Called right after the function execution finishes. The function context, function invocation arguments, and invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
+| **`init`** | Called after the extension is imported. |
+| **`configure`** | Called from function code when needed to configure the extension. |
+| **`post_function_load_app_level`** | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only, and any attempt to write to a local file in this directory fails. |
+| **`pre_invocation_app_level`** | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
+| **`post_invocation_app_level`** | Called right after the function execution completes. The function context, function invocation arguments, and the invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
#### Function-level extensions
An extension that inherits from [FuncExtensionBase](https://github.com/Azure/azu
| Method | Description | | | |
-| `__init__` | Called when an extension instance is initialized in a specific function. This method is the constructor of the extension. When you're implementing this abstract method, you might want to accept a `filename` parameter and pass it to the parent's `super().__init__(filename)` method for proper extension registration. |
-| `post_function_load` | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only. Any attempt to write to a local file in this directory fails. |
-| `pre_invocation` | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
-| `post_invocation` | Called right after the function execution finishes. The function context, function invocation arguments, and invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
+| **`__init__`** | This method is the constructor of the extension. It's called when an extension instance is initialized in a specific function. When implementing this abstract method, you may want to accept a `filename` parameter and pass it to the parent's method `super().__init__(filename)` for proper extension registration. |
+| **`post_function_load`** | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only, and any attempt to write to a local file in this directory fails. |
+| **`pre_invocation`** | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
+| **`post_invocation`** | Called right after the function execution completes. The function context, function invocation arguments, and the invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
## Cross-origin resource sharing
By default, a host instance for Python can process only one function invocation
## <a name="shared-memory"></a>Shared memory (preview)
-To improve throughput, Azure Functions lets your out-of-process Python language worker share memory with the host process. When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB.
+To improve throughput, Functions lets your out-of-process Python language worker share memory with the Functions host process. When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB.
-For example, you might enable shared memory to reduce bottlenecks when using Azure Blob Storage bindings to transfer payloads larger than 1 MB.
+For example, you might enable shared memory to reduce bottlenecks when using Blob storage bindings to transfer payloads larger than 1 MB.
-This functionality is available only for function apps running in Premium and Dedicated (Azure App Service) plans. To learn more, see [Shared memory](https://github.com/Azure/azure-functions-python-worker/wiki/Shared-Memory).
+This functionality is available only for function apps running in Premium and Dedicated (App Service) plans. To learn more, see [Shared memory](https://github.com/Azure/azure-functions-python-worker/wiki/Shared-Memory).
-## Known issues and FAQs
+## Known issues and FAQ
-Here's a list of troubleshooting guides for common issues:
+The following is a list of troubleshooting guides for common issues:
* [ModuleNotFoundError and ImportError](recover-python-functions.md#troubleshoot-modulenotfounderror)
-* [Can't import 'cygrpc'](recover-python-functions.md#troubleshoot-cannot-import-cygrpc)
-* [Troubleshoot Errors with Protobuf](recover-python-functions.md#troubleshoot-errors-with-protocol-buffers)
+* [Can't import 'cygrpc'](recover-python-functions.md#troubleshoot-cannot-import-cygrpc).
+
+Following is a list of troubleshooting guides for known issues with the v2 programming model:
+
+* [Couldn't load file or assembly](recover-python-functions.md#troubleshoot-could-not-load-file-or-assembly)
+* [Unable to resolve the Azure Storage connection named Storage](recover-python-functions.md#troubleshoot-unable-to-resolve-the-azure-storage-connection).
-All known issues and feature requests are tracked through the [GitHub issues](https://github.com/Azure/azure-functions-python-worker/issues) list. If you run into a problem and can't find the issue in GitHub, open a new issue and include a detailed description of the problem.
+All known issues and feature requests are tracked in the [GitHub issues](https://github.com/Azure/azure-functions-python-worker/issues) list. If you run into a problem and can't find the issue in GitHub, open a new issue and include a detailed description of the problem.
## Next steps
For more information, see the following resources:
* [Azure Functions package API documentation](/python/api/azure-functions/azure.functions) * [Best practices for Azure Functions](functions-best-practices.md) * [Azure Functions triggers and bindings](functions-triggers-bindings.md)
-* [Blob Storage bindings](functions-bindings-storage-blob.md)
-* [HTTP and webhook bindings](functions-bindings-http-webhook.md)
-* [Azure Queue Storage bindings](functions-bindings-storage-queue.md)
+* [Blob storage bindings](functions-bindings-storage-blob.md)
+* [HTTP and Webhook bindings](functions-bindings-http-webhook.md)
+* [Queue storage bindings](functions-bindings-storage-queue.md)
* [Timer trigger](functions-bindings-timer.md) [Having issues? Let us know.](https://aka.ms/python-functions-ref-survey) [HttpRequest]: /python/api/azure-functions/azure.functions.httprequest
-[HttpResponse]: /python/api/azure-functions/azure.functions.httpresponse
+[HttpResponse]: /python/api/azure-functions/azure.functions.httpresponse
azure-functions Python Scale Performance Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/python-scale-performance-reference.md
When developing for Azure Functions using Python, you need to understand how your functions perform and how that performance affects the way your function app gets scaled. The need is more important when designing highly performant apps. The main factors to consider when designing, writing, and configuring your functions apps are horizontal scaling and throughput performance configurations. ## Horizontal scaling
-By default, Azure Functions automatically monitors the load on your application and creates additional host instances for Python as needed. Azure Functions uses built-in thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. These thresholds aren't user configurable. For more information, see [Event-driven scaling in Azure Functions](event-driven-scaling.md).
+By default, Azure Functions automatically monitors the load on your application and creates more host instances for Python as needed. Azure Functions uses built-in thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. These thresholds aren't user configurable. For more information, see [Event-driven scaling in Azure Functions](event-driven-scaling.md).
## Improving throughput performance
-The default configurations are suitable for most of Azure Functions applications. However, you can improve the performance of your applications' throughput by employing configurations based on your workload profile. The first step is to understand the type of workload that you are running.
+The default configurations are suitable for most of Azure Functions applications. However, you can improve the performance of your applications' throughput by employing configurations based on your workload profile. The first step is to understand the type of workload that you're running.
| Workload type | Function app characteristics | Examples | | - | - | - |
To run a function asynchronously, use the `async def` statement, which runs the
async def main(): await some_nonblocking_socket_io_op() ```
-Here is an example of a function with HTTP trigger that uses [aiohttp](https://pypi.org/project/aiohttp/) http client:
+Here's an example of a function with an HTTP trigger that uses the [aiohttp](https://pypi.org/project/aiohttp/) HTTP client:
```python import aiohttp
async def main(req: func.HttpRequest) -> func.HttpResponse:
```
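
Because that example appears truncated in this summary, here's a minimal self-contained sketch of the same pattern (the target URL is just a placeholder):

```python
import aiohttp
import azure.functions as func

async def main(req: func.HttpRequest) -> func.HttpResponse:
    # The awaited HTTP call doesn't block the event loop, so the worker can
    # process other invocations while waiting on the network.
    async with aiohttp.ClientSession() as client:
        async with client.get("https://example.com/") as response:
            body = await response.text()
            return func.HttpResponse(body=body, status_code=response.status)
```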
-A function without the `async` keyword is run automatically in an ThreadPoolExecutor thread pool:
+A function without the `async` keyword is run automatically in a ThreadPoolExecutor thread pool:
```python # Runs in an ThreadPoolExecutor threadpool. Number of threads is defined by PYTHON_THREADPOOL_THREAD_COUNT.
def main():
some_blocking_socket_io() ```
-In order to achieve the full benefit of running functions asynchronously, the I/O operation/library that is used in your code needs to have async implemented as well. Using synchronous I/O operations in functions that are defined as asynchronous **may hurt** the overall performance. If the libraries you are using do not have async version implemented, you may still benefit from running your code asynchronously by [managing event loop](#managing-event-loop) in your app.
+In order to achieve the full benefit of running functions asynchronously, the I/O operation or library that is used in your code needs to have async implemented as well. Using synchronous I/O operations in functions that are defined as asynchronous **may hurt** the overall performance. If the libraries you're using don't have an async version implemented, you may still benefit from running your code asynchronously by [managing the event loop](#managing-event-loop) in your app.
-Here are a few examples of client libraries that has implemented async pattern:
+Here are a few examples of client libraries that have implemented async patterns:
- [aiohttp](https://pypi.org/project/aiohttp/) - Http client/server for asyncio - [Streams API](https://docs.python.org/3/library/asyncio-stream.html) - High-level async/await-ready primitives to work with network connection - [Janus Queue](https://pypi.org/project/janus/) - Thread-safe asyncio-aware queue for Python
Here are a few examples of client libraries that has implemented async pattern:
##### Understanding async in Python worker
-When you define `async` in front of a function signature, Python will mark the function as a coroutine. When calling the coroutine, it can be scheduled as a task into an event loop. When you call `await` in an async function, it registers a continuation into the event loop and allow event loop to process next task during the wait time.
+When you define `async` in front of a function signature, Python marks the function as a coroutine. When you call the coroutine, it can be scheduled as a task on an event loop. When you call `await` in an async function, it registers a continuation into the event loop, which allows the event loop to process the next task during the wait time.
-In our Python Worker, the worker shares the event loop with the customer's `async` function and it is capable for handling multiple requests concurrently. We strongly encourage our customers to make use of asyncio compatible libraries (e.g. [aiohttp](https://pypi.org/project/aiohttp/), [pyzmq](https://pypi.org/project/pyzmq/)). Employing these recommendations will greatly increase your function's throughput compared to those libraries implemented in synchronous fashion.
+In our Python Worker, the worker shares the event loop with the customer's `async` function and it's capable of handling multiple requests concurrently. We strongly encourage our customers to make use of asyncio-compatible libraries, such as [aiohttp](https://pypi.org/project/aiohttp/) and [pyzmq](https://pypi.org/project/pyzmq/). Following these recommendations increases your function's throughput compared to using libraries that are implemented synchronously.
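As a standalone illustration of this behavior (a plain asyncio script, not tied to Functions), the two simulated I/O waits below overlap because each `await` hands control back to the event loop:

```python
import asyncio


async def simulated_io(seconds: int) -> int:
    # While this task awaits, the event loop is free to run other tasks.
    await asyncio.sleep(seconds)
    return seconds


async def main() -> None:
    # Both tasks run concurrently, so this takes about 2 seconds, not 3.
    results = await asyncio.gather(simulated_io(1), simulated_io(2))
    print(results)  # [1, 2]


if __name__ == "__main__":
    asyncio.run(main())
```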
> [!NOTE]
> If your function is declared as `async` without any `await` inside its implementation, the performance of your function will be severely impacted because the event loop is blocked, which prevents the Python worker from handling concurrent requests.

#### Use multiple language worker processes
-By default, every Functions host instance has a single language worker process. You can increase the number of worker processes per host (up to 10) by using the [FUNCTIONS_WORKER_PROCESS_COUNT](functions-app-settings.md#functions_worker_process_count) application setting. Azure Functions then tries to evenly distribute simultaneous function invocations across these workers.
+By default, every Functions host instance has a single language worker process. You can increase the number of worker processes per host (up to 10) by using the [`FUNCTIONS_WORKER_PROCESS_COUNT`](functions-app-settings.md#functions_worker_process_count) application setting. Azure Functions then tries to evenly distribute simultaneous function invocations across these workers.
-For CPU bound apps, you should set the number of language worker to be the same as or higher than the number of cores that are available per function app. To learn more, see [Available instance SKUs](functions-premium-plan.md#available-instance-skus).
+For CPU-bound apps, you should set the number of language workers to be the same as or higher than the number of cores that are available per function app. To learn more, see [Available instance SKUs](functions-premium-plan.md#available-instance-skus).
I/O-bound apps may also benefit from increasing the number of worker processes beyond the number of cores available. Keep in mind that setting the number of workers too high can impact overall performance due to the increased number of required context switches.
-The FUNCTIONS_WORKER_PROCESS_COUNT applies to each host that Functions creates when scaling out your application to meet demand.
+The `FUNCTIONS_WORKER_PROCESS_COUNT` applies to each host that Functions creates when scaling out your application to meet demand.
+
+> [!NOTE]
+> Multiple Python workers are not supported in V2 at this time. This means that enabling intelligent concurrency and setting `FUNCTIONS_WORKER_PROCESS_COUNT` greater than 1 is not supported for functions developed using the V2 model.
#### Set up max workers within a language worker process
-As mentioned in the async [section](#understanding-async-in-python-worker), the Python language worker treats functions and [coroutines](https://docs.python.org/3/library/asyncio-task.html#coroutines) differently. A coroutine is run within the same event loop that the language worker runs on. On the other hand, a function invocation is run within a [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor), that is maintained by the language worker, as a thread.
+As mentioned in the async [section](#understanding-async-in-python-worker), the Python language worker treats functions and [coroutines](https://docs.python.org/3/library/asyncio-task.html#coroutines) differently. A coroutine is run within the same event loop that the language worker runs on. On the other hand, a function invocation is run within a [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor), which is maintained by the language worker as a thread.
You can set the maximum number of workers allowed for running sync functions using the [PYTHON_THREADPOOL_THREAD_COUNT](functions-app-settings.md#python_threadpool_thread_count) application setting. This value sets the `max_worker` argument of the ThreadPoolExecutor object, which lets Python use a pool of at most `max_worker` threads to execute calls asynchronously. The `PYTHON_THREADPOOL_THREAD_COUNT` applies to each worker that the Functions host creates, and Python decides when to create a new thread or reuse the existing idle thread. For older Python versions (that is, `3.8`, `3.7`, and `3.6`), the `max_worker` value is set to 1. For Python version `3.9`, `max_worker` is set to `None`. For CPU-bound apps, you should keep the setting to a low number, starting from 1 and increasing as you experiment with your workload. This suggestion is to reduce the time spent on context switches and to allow CPU-bound tasks to finish.
-For I/O-bound apps, you should see substantial gains by increasing the number of threads working on each invocation. the recommendation is to start with the Python default - the number of cores + 4 and then tweak based on the throughput values you are seeing.
+For I/O-bound apps, you should see substantial gains by increasing the number of threads working on each invocation. The recommendation is to start with the Python default (the number of cores + 4) and then adjust based on the throughput values you're seeing.
-For mix workloads apps, you should balance both `FUNCTIONS_WORKER_PROCESS_COUNT` and `PYTHON_THREADPOOL_THREAD_COUNT` configurations to maximize the throughput. To understand what your function apps spend the most time on, we recommend to profile them and set the values according to the behavior they present. Also refer to this [section](#use-multiple-language-worker-processes) to learn about FUNCTIONS_WORKER_PROCESS_COUNT application settings.
+For apps with mixed workloads, you should balance both the `FUNCTIONS_WORKER_PROCESS_COUNT` and `PYTHON_THREADPOOL_THREAD_COUNT` configurations to maximize throughput. To understand what your function apps spend the most time on, we recommend profiling them and setting the values according to the behavior they present. Also refer to this [section](#use-multiple-language-worker-processes) to learn about the `FUNCTIONS_WORKER_PROCESS_COUNT` application setting.
> [!NOTE] > Although these recommendations apply to both HTTP and non-HTTP triggered functions, you might need to adjust other trigger-specific configurations for non-HTTP triggered functions to get the expected performance from your function apps. For more information, see this [article](functions-best-practices.md).
For mix workloads apps, you should balance both `FUNCTIONS_WORKER_PROCESS_COUNT`
You should use asyncio-compatible third-party libraries. If none of the third-party libraries meet your needs, you can also manage the event loops in Azure Functions. Managing event loops gives you more flexibility in compute resource management, and it also makes it possible to wrap synchronous I/O libraries into coroutines.
-There are many useful Python official documents discussing the [Coroutines and Tasks](https://docs.python.org/3/library/asyncio-task.html) and [Event Loop](https://docs.python.org/3.8/library/asyncio-eventloop.html) by using the built in **asyncio** library.
+The official Python documentation provides many useful resources on [Coroutines and Tasks](https://docs.python.org/3/library/asyncio-task.html) and the [Event Loop](https://docs.python.org/3.8/library/asyncio-eventloop.html) in the built-in **asyncio** library.
Take the [requests](https://github.com/psf/requests) library as an example: this code snippet uses the **asyncio** library to wrap the `requests.get()` method into a coroutine and runs multiple web requests to SAMPLE_URL concurrently.
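Here's a minimal sketch of that pattern. `SAMPLE_URL` is a placeholder and the helper name is illustrative; the key point is that the blocking `requests.get()` call is handed off to the default thread pool so it doesn't block the worker's event loop.

```python
import asyncio
import json

import requests
import azure.functions as func

# Placeholder URL used only for illustration.
SAMPLE_URL = "https://example.com"


async def get_async(url: str) -> requests.Response:
    # Run the synchronous requests.get() call in the default executor
    # so the event loop stays free while the request completes.
    loop = asyncio.get_event_loop()
    return await loop.run_in_executor(None, requests.get, url)


async def main(req: func.HttpRequest) -> func.HttpResponse:
    # Issue several requests concurrently; each one runs on its own thread.
    responses = await asyncio.gather(*(get_async(SAMPLE_URL) for _ in range(3)))
    body = json.dumps({"status_codes": [r.status_code for r in responses]})
    return func.HttpResponse(body, mimetype='application/json')
```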
async def main(req: func.HttpRequest) -> func.HttpResponse:
    mimetype='application/json')
```

#### Vertical scaling
-For more processing units especially in CPU-bound operation, you might be able to get this by upgrading to premium plan with higher specifications. With higher processing units, you can adjust the number of worker process count according to the number of cores available and achieve higher degree of parallelism.
+For more processing units, especially for CPU-bound operations, you might be able to get them by upgrading to a premium plan with higher specifications. With more processing units, you can adjust the number of worker processes according to the number of available cores and achieve a higher degree of parallelism.
## Next steps
azure-functions Recover Python Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/recover-python-functions.md
Title: Troubleshoot Python function apps in Azure Functions description: Learn how to troubleshoot Python functions.- Previously updated : 07/29/2020 Last updated : 10/25/2022 ms.devlang: python
+zone_pivot_groups: python-mode-functions
# Troubleshoot Python errors in Azure Functions
-Following is a list of troubleshooting guides for common issues in Python functions:
+This article provides information to help you troubleshoot errors with your Python functions in Azure Functions. This article supports both the v1 and v2 programming models. Choose your desired model from the selector at the top of the article. The v2 model is currently in preview. For more information on Python programming models, see the [Python developer guide](./functions-reference-python.md).
+
+The following is a list of troubleshooting sections for common issues in Python functions:
* [ModuleNotFoundError and ImportError](#troubleshoot-modulenotfounderror)
* [Cannot import 'cygrpc'](#troubleshoot-cannot-import-cygrpc)
* [Python exited with code 137](#troubleshoot-python-exited-with-code-137)
* [Python exited with code 139](#troubleshoot-python-exited-with-code-139)
+* [Troubleshoot errors with Protocol Buffers](#troubleshoot-errors-with-protocol-buffers)
+* [ModuleNotFoundError and ImportError](#troubleshoot-modulenotfounderror)
+* [Cannot import 'cygrpc'](#troubleshoot-cannot-import-cygrpc)
+* [Python exited with code 137](#troubleshoot-python-exited-with-code-137)
+* [Python exited with code 139](#troubleshoot-python-exited-with-code-139)
+* [Troubleshoot errors with Protocol Buffers](#troubleshoot-errors-with-protocol-buffers)
+* [Multiple Python workers not supported](#multiple-python-workers-not-supported)
+* [Could not load file or assembly](#troubleshoot-could-not-load-file-or-assembly)
+* [Unable to resolve the Azure Storage connection named Storage](#troubleshoot-unable-to-resolve-the-azure-storage-connection)
+* [Issues with deployment](#issue-with-deployment)
## Troubleshoot ModuleNotFoundError
This error occurs when a Python function app fails to load a Python module. The
To identify the actual cause of your issue, you need to get the Python project files that run on your function app. If you don't have the project files on your local computer, you can get them in one of the following ways:

* If the function app has the `WEBSITE_RUN_FROM_PACKAGE` app setting and its value is a URL, download the file by copying and pasting the URL into your browser.
-* If the function app has `WEBSITE_RUN_FROM_PACKAGE` and it is set to `1`, navigate to `https://<app-name>.scm.azurewebsites.net/api/vfs/data/SitePackages` and download the file from the latest `href` URL.
+* If the function app has `WEBSITE_RUN_FROM_PACKAGE` and it's set to `1`, navigate to `https://<app-name>.scm.azurewebsites.net/api/vfs/data/SitePackages` and download the file from the latest `href` URL.
* If the function app doesn't have the app setting mentioned above, navigate to `https://<app-name>.scm.azurewebsites.net/api/settings` and find the URL under `SCM_RUN_FROM_PACKAGE`. Download the file by copying and pasting the URL into your browser.
-* If none of these works for you, navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and reveal the content under `/home/site/wwwroot`.
+* If none of these suggestions resolve the issue, navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and view the content under `/home/site/wwwroot`.
The rest of this article helps you troubleshoot potential causes of this error by inspecting your function app's content, identifying the root cause, and resolving the specific issue.
See [Enable remote build](#enable-remote-build) or [Build native dependencies](#
Go to `.python_packages/lib/python3.6/site-packages/<package-name>-<version>-dist-info` or `.python_packages/lib/site-packages/<package-name>-<version>-dist-info`. Use your favorite text editor to open the **wheel** file and check the **Tag:** section. If the value of the tag doesn't contain **linux**, this could be the issue.
-Python functions run only on Linux in Azure: Functions runtime v2.x runs on Debian Stretch and the v3.x runtime on Debian Buster. The artifact is expected to contain the correct Linux binaries. Using `--build local` flag in Core Tools, third-party, or outdated tools may cause older binaries to be used.
+Python functions run only on Linux in Azure: Functions runtime v2.x runs on Debian Stretch and the v3.x runtime on Debian Buster. The artifact is expected to contain the correct Linux binaries. Using the `--build local` flag in Core Tools, or using third-party or outdated tools, may cause older binaries to be used.
See [Enable remote build](#enable-remote-build) or [Build native dependencies](#build-native-dependencies) for mitigation.
See [Update your package to the latest version](#update-your-package-to-the-late
#### The package conflicts with other packages
-If you have verified that the package is resolved correctly with the proper Linux wheels, there may be a conflict with other packages. In certain packages, the PyPi documentations may clarify the incompatible modules. For example in [`azure 4.0.0`](https://pypi.org/project/azure/4.0.0/), there's a statement as follows:
+If you've verified that the package is resolved correctly with the proper Linux wheels, there may be a conflict with other packages. In certain packages, the PyPI documentation may clarify the incompatible modules. For example, in [`azure 4.0.0`](https://pypi.org/project/azure/4.0.0/), there's a statement as follows:
<pre>This package isn't compatible with azure-storage. If you installed azure-storage, or if you installed azure 1.x/2.x and didn't uninstall azure-storage,
See [Update your package to the latest version](#update-your-package-to-the-late
Open the `requirements.txt` with a text editor and check the package in `https://pypi.org/project/<package-name>`. Some packages only run on Windows or macOS platforms. For example, pywin32 only runs on Windows.
-The `Module Not Found` error may not occur when you're using Windows or macOS for local development. However, the package fails to import on Azure Functions, which uses Linux at runtime. This is likely to be caused by using `pip freeze` to export virtual environment into requirements.txt from your Windows or macOS machine during project initialization.
+The `Module Not Found` error may not occur when you're using Windows or macOS for local development. However, the package fails to import on Azure Functions, which uses Linux at runtime. This issue is likely to be caused by using `pip freeze` to export virtual environment into requirements.txt from your Windows or macOS machine during project initialization.
See [Replace the package with equivalents](#replace-the-package-with-equivalents) or [Handcraft requirements.txt](#handcraft-requirementstxt) for mitigation.
The following are potential mitigations for module-related issues. Use the [diag
Make sure that remote build is enabled. The way that you do this depends on your deployment method.
-## [Visual Studio Code](#tab/vscode)
+# [Visual Studio Code](#tab/vscode)
Make sure that the latest version of the [Azure Functions extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) is installed. Verify that `.vscode/settings.json` exists and it contains the setting `"azureFunctions.scmDoBuildDuringDeployment": true`. If not, please create this file with the `azureFunctions.scmDoBuildDuringDeployment` setting enabled and redeploy the project.
-## [Azure Functions Core Tools](#tab/coretools)
+# [Azure Functions Core Tools](#tab/coretools)
Make sure that the latest version of [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools/releases) is installed. Go to your local function project folder, and use `func azure functionapp publish <app-name>` for deployment.
-## [Manual publishing](#tab/manual)
+# [Manual publishing](#tab/manual)
-If you're manually publishing your package into the `https://<app-name>.scm.azurewebsites.net/api/zipdeploy` endpoint, make sure that both **SCM_DO_BUILD_DURING_DEPLOYMENT** and **ENABLE_ORYX_BUILD** are set to **true**. To learn more, see [how to work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
+If you're manually publishing your package into the `https://<app-name>.scm.azurewebsites.net/api/zipdeploy` endpoint, make sure that both `SCM_DO_BUILD_DURING_DEPLOYMENT` and `ENABLE_ORYX_BUILD` are set to `true`. To learn more, see [how to work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
Make sure that the latest version of both **docker** and [Azure Functions Core T
#### Update your package to the latest version
-Browse the latest package version in `https://pypi.org/project/<package-name>` and check the **Classifiers:** section. The package should be `OS Independent`, or compatible with `POSIX` or `POSIX :: Linux` in **Operating System**. Also, the Programming Language should contains `Python :: 3`, `Python :: 3.6`, `Python :: 3.7`, `Python :: 3.8`, or `Python :: 3.9`.
+Browse the latest package version in `https://pypi.org/project/<package-name>` and check the **Classifiers:** section. The package should be `OS Independent`, or compatible with `POSIX` or `POSIX :: Linux` in **Operating System**. Also, the Programming Language should contain: `Python :: 3`, `Python :: 3.6`, `Python :: 3.7`, `Python :: 3.8`, or `Python :: 3.9`.
If these are correct, you can update the package to the latest version by changing the line `<package-name>~=<latest-version>` in requirements.txt.
The best practice is to check the import statement from each .py file in your pr
First, take a look at the latest version of the package in `https://pypi.org/project/<package-name>`. Usually, the package has its own GitHub page. Go to the **Issues** section on GitHub and search to see whether your issue has been fixed. If so, update the package to the latest version.
-Sometimes, the package may have been integrated into [Python Standard Library](https://docs.python.org/3/library/) (such as pathlib). If so, since we provide a certain Python distribution in Azure Functions (Python 3.6, Python 3.7, Python 3.8, and Python 3.9), the package in your requirements.txt should be removed.
+Sometimes, the package may have been integrated into [Python Standard Library](https://docs.python.org/3/library/) (such as `pathlib`). If so, since we provide a certain Python distribution in Azure Functions (Python 3.6, Python 3.7, Python 3.8, and Python 3.9), the package in your requirements.txt should be removed.
-However, if you're facing an issue that it has not been fixed and you're on a deadline. I encourage you to do some research and find a similar package for your project. Usually, the Python community will provide you with a wide variety of similar libraries that you can use.
+However, if you're facing an issue that hasn't been fixed and you're on a deadline, we encourage you to do some research and find a similar package for your project. Usually, the Python community provides a wide variety of similar libraries that you can use.
This section helps you troubleshoot 'cygrpc' related errors in your Python funct
This error occurs when a Python function app fails to start with a proper Python interpreter. The root cause for this error is one of the following issues: - [The Python interpreter mismatches OS architecture](#the-python-interpreter-mismatches-os-architecture)-- [The Python interpreter is not supported by Azure Functions Python Worker](#the-python-interpreter-is-not-supported-by-azure-functions-python-worker)
+- [The Python interpreter isn't supported by Azure Functions Python Worker](#the-python-interpreter-isnt-supported-by-azure-functions-python-worker)
### Diagnose 'cygrpc' reference error
On Unix-like shell: `python3 -c 'import platform; print(platform.architecture()[
If there's a mismatch between Python interpreter bitness and operating system architecture, please download a proper Python interpreter from [Python Software Foundation](https://www.python.org/downloads).
-#### The Python interpreter is not supported by Azure Functions Python Worker
+#### The Python interpreter isn't supported by Azure Functions Python Worker
The Azure Functions Python Worker only supports Python 3.6, 3.7, 3.8, and 3.9.
-Please check if your Python interpreter matches our expected version by `py --version` in Windows or `python3 --version` in Unix-like systems. Ensure the return result is Python 3.6.x, Python 3.7.x, Python 3.8.x, or Python 3.9.x.
+Check if your Python interpreter matches our expected version by `py --version` in Windows or `python3 --version` in Unix-like systems. Ensure the return result is Python 3.6.x, Python 3.7.x, Python 3.8.x, or Python 3.9.x.
-If your Python interpreter version does not meet our expectation, please download the Python 3.6, 3.7, 3.8, or 3.9 interpreter from [Python Software Foundation](https://www.python.org/downloads).
+If your Python interpreter version doesn't meet the requirements for Functions, instead download the Python 3.6, 3.7, 3.8, or 3.9 interpreter from [Python Software Foundation](https://www.python.org/downloads).
Code 137 errors are typically caused by out-of-memory issues in your Python func
This error occurs when a Python function app is forced to terminate by the operating system with a SIGKILL signal. This signal usually indicates an out-of-memory error in your Python process. The Azure Functions platform has a [service limitation](functions-scale.md#service-limits) that terminates any function app that exceeds this limit.
-Please visit the tutorial section in [memory profiling on Python functions](python-memory-profiler-reference.md#memory-profiling-process) to analyze the memory bottleneck in your function app.
+Visit the tutorial section in [memory profiling on Python functions](python-memory-profiler-reference.md#memory-profiling-process) to analyze the memory bottleneck in your function app.
This section helps you troubleshoot segmentation fault errors in your Python fun
> `Microsoft.Azure.WebJobs.Script.Workers.WorkerProcessExitException : python exited with code 139`
-This error occurs when a Python function app is forced to terminate by the operating system with a SIGSEGV signal. This signal indicates a memory segmentation violation which can be caused by unexpectedly reading from or writing into a restricted memory region. In the following sections, we provide a list of common root causes.
+This error occurs when a Python function app is forced to terminate by the operating system with a SIGSEGV signal. This signal indicates a memory segmentation violation, which can be caused by unexpectedly reading from or writing into a restricted memory region. In the following sections, we provide a list of common root causes.
### A regression from third-party packages
In your function app's requirements.txt, an unpinned package will be upgraded to
### Unpickling from a malformed .pkl file
-If your function app is using the Python pickel library to load Python object from .pkl file, it is possible that the .pkl contains malformed bytes string, or invalid address reference in it. To recover from this issue, try commenting out the pickle.load() function.
+If your function app is using the Python pickle library to load a Python object from a .pkl file, it's possible that the .pkl file contains a malformed byte string or an invalid address reference. To recover from this issue, try commenting out the pickle.load() function.
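As a diagnostic aid, the following sketch shows one way to make that step easy to switch off. The `model.pkl` path and the `SKIP_PICKLE_LOAD` app setting are hypothetical names used for illustration only.

```python
import logging
import os
import pickle

# Hypothetical path; replace with the .pkl file your app actually loads.
MODEL_PATH = os.path.join(os.path.dirname(__file__), "model.pkl")


def load_model():
    # A segmentation fault can't be caught with try/except, so the simplest
    # test is to skip the load entirely (set SKIP_PICKLE_LOAD=1 in app settings)
    # and check whether the worker stops exiting with code 139.
    if os.environ.get("SKIP_PICKLE_LOAD") == "1":
        logging.warning("Skipping pickle.load() while diagnosing the crash.")
        return None

    with open(MODEL_PATH, "rb") as f:
        return pickle.load(f)
```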
### Pyodbc connection collision
There are two ways to mitigate this issue.
+## Multiple Python workers not supported
+
+Multiple Python workers aren't supported in the v2 programming model at this time. This means that enabling intelligent concurrency by setting `FUNCTIONS_WORKER_PROCESS_COUNT` greater than 1 isn't supported for functions developed using the V2 model.
+
+## Troubleshoot could not load file or assembly
+
+If you're facing this error, you may be using the v2 programming model. This error is due to a known issue that will be resolved in an upcoming release.
+
+The specific error may read:
+
+> `DurableTask.Netherite.AzureFunctions: Could not load file or assembly 'Microsoft.Azure.WebJobs.Extensions.DurableTask, Version=2.0.0.0, Culture=neutral, PublicKeyToken=014045d636e89289'.`
+> `The system cannot find the file specified.`
+
+This error may occur because of an issue with how the extension bundle was cached. To detect whether this is the issue, run the command with the `--verbose` flag to see more details.
+
+> `func host start --verbose`
+
+When you run the command, if you notice that `Loading startup extension <>` isn't followed by `Loaded extension <>` for each extension, you're likely facing a caching issue.
+
+To resolve this issue:
+
+1. Find the path of `.azure-functions-core-tools` by running
+```console
+func GetExtensionBundlePath
+```
+
+2. Delete the directory `.azure-functions-core-tools`
+
+# [bash](#tab/bash)
+
+```bash
+rm -r <insert path>/.azure-functions-core-tools
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+Remove-Item -Recurse <insert path>/.azure-functions-core-tools
+```
+
+# [Cmd](#tab/cmd)
+
+```cmd
+rmdir <insert path>/.azure-functions-core-tools
+```
++
+## Troubleshoot unable to resolve the Azure Storage connection
+
+You may see this error in your local output as the following message:
+
+> `Microsoft.Azure.WebJobs.Extensions.DurableTask: Unable to resolve the Azure Storage connection named 'Storage'.`
+> `Value cannot be null. (Parameter 'provider')`
+
+This error is a result of how extensions are loaded from the bundle locally. To resolve this error, you can do one of the following:
+* Use a storage emulator such as [Azurite](../storage/common/storage-use-azurite.md). This is a good option when you aren't planning to use a storage account in your function application.
+* Create a storage account and add a connection string to the `AzureWebJobsStorage` environment variable in the `local.settings.json` file. Use this option when you're using a storage account trigger or binding with your application, or if you have an existing storage account. To get started, see [Create a storage account](../storage/common/storage-account-create.md).
+
+## Issue with deployment
+
+In the [Azure portal](https://portal.azure.com), navigate to **Settings** > **Configuration** and make sure that the `AzureWebJobsFeatureFlags` application setting has a value of `EnableWorkerIndexing`. If it is not found, add this setting to the function app.
## Next steps

If you're unable to resolve your issue, please report this to the Functions team:
azure-government Documentation Government Get Started Connect To Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-get-started-connect-to-storage.md
These endpoint differences must be taken into account when you connect to storag
- Read more about [Azure Storage](../storage/index.yml). - Subscribe to the [Azure Government blog](https://blogs.msdn.microsoft.com/azuregov/) - Get help on Stack Overflow by using the [azure-gov](https://stackoverflow.com/questions/tagged/azure-gov) tag-
azure-monitor Itsmc Dashboard Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-dashboard-errors.md
The following sections describe common errors that appear in the connector statu
**Cause**: The IP address of the ITSM application doesn't allow ITSM connections from partner ITSM tools.
-**Resolution**: To allow ITSM connections from partner ITSM tools, we recommend that the to list includes the entire public IP range of the Azure region of the LogAnalytics workspace. For more information, see this article about [Azure IP ranges](https://www.microsoft.com/download/details.aspx?id=56519). You can only list the ActionGroup network tag in these regions: EUS/WEU/EUS2/WUS2/US South Central.
-
+**Resolution**: To allow ITSM connections, make sure the ActionGroup network tag is allowed on your network.
## Authentication

**Error**: "User Not Authenticated"
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md
To create an action group:
When you create or edit an Azure alert rule, use an action group, which has an ITSM action. When the alert triggers, the work item is created or updated in the ITSM tool. > [!NOTE]
-> For information about the pricing of the ITSM action, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for action groups.
+> * For information about the pricing of the ITSM action, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for action groups.
>
-> The short description field in the alert rule definition is limited to 40 characters when you send it by using the ITSM action.
+> * The short description field in the alert rule definition is limited to 40 characters when you send it by using the ITSM action.
+>
+> * If you have policies for inbound traffic to your ServiceNow instances, add the ActionGroup service tag to the allowlist.
## Next steps
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
This section will guide you through manually adding Application Insights to a te
3. Copy the following XML configuration into your newly created file:
- ```xml
- <?xml version="1.0" encoding="utf-8"?>
+ ```xml
+ <?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings"> <TelemetryInitializers> <Add Type="Microsoft.ApplicationInsights.DependencyCollector.HttpDependenciesParsingTelemetryInitializer, Microsoft.AI.DependencyCollector" />
This section will guide you through manually adding Application Insights to a te
--> <ConnectionString>Copy connection string from Application Insights Resource Overview</ConnectionString> </ApplicationInsights>
- ```
+ ```
4. Before the closing `</ApplicationInsights>` tag, add the connection string for your Application Insights resource. You can find your connection string on the overview pane of the newly created Application Insights resource.
This section will guide you through manually adding Application Insights to a te
} } }
-
``` 6. In the *App_Start* folder, open the *FilterConfig.cs* file and change it to match the sample:
For the latest updates and bug fixes, [consult the release notes](./release-note
## Next steps * Add synthetic transactions to test that your website is available from all over the world with [availability monitoring](monitor-web-app-availability.md).
-* [Configure sampling](sampling.md) to help reduce telemetry traffic and data storage costs.
+* [Configure sampling](sampling.md) to help reduce telemetry traffic and data storage costs.
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
To create a new file, right click under your timer trigger function (for example
```xml <Project Sdk="Microsoft.NET.Sdk">
-     <PropertyGroup>
-         <TargetFramework>netstandard2.0</TargetFramework>
-     </PropertyGroup>
-     <ItemGroup>
-         <PackageReference Include="Microsoft.ApplicationInsights" Version="2.15.0" /> <!-- Ensure you’re using the latest version -->
-     </ItemGroup>
+ <PropertyGroup>
+ <TargetFramework>netstandard2.0</TargetFramework>
+ </PropertyGroup>
+ <ItemGroup>
+         <PackageReference Include="Microsoft.ApplicationInsights" Version="2.15.0" /> <!-- Ensure you're using the latest version -->
+ </ItemGroup>
</Project>
-
```
- :::image type="content" source="media/availability-azure-functions/function-proj.png" alt-text=" Screenshot of function.proj in App Service Editor." lightbox="media/availability-azure-functions/function-proj.png":::
+ :::image type="content" source="media/availability-azure-functions/function-proj.png" alt-text=" Screenshot of function.proj in App Service Editor." lightbox="media/availability-azure-functions/function-proj.png":::
2. Create a new file called "runAvailabilityTest.csx" and paste the following code:
To create a new file, right click under your timer trigger function (for example
public async static Task RunAvailabilityTestAsync(ILogger log) {
-     using (var httpClient = new HttpClient())
-     {
-         // TODO: Replace with your business logic
-         await httpClient.GetStringAsync("https://www.bing.com/");
-     }
+ using (var httpClient = new HttpClient())
+ {
+ // TODO: Replace with your business logic
+ await httpClient.GetStringAsync("https://www.bing.com/");
+ }
} ```
To create a new file, right click under your timer trigger function (for example
public async static Task Run(TimerInfo myTimer, ILogger log, ExecutionContext executionContext) {
-     if (telemetryClient == null)
-     {
-         // Initializing a telemetry configuration for Application Insights based on connection string
-
-         var telemetryConfiguration = new TelemetryConfiguration();
-         telemetryConfiguration.ConnectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING");
-         telemetryConfiguration.TelemetryChannel = new InMemoryChannel();
-         telemetryClient = new TelemetryClient(telemetryConfiguration);
-     }
-
-     string testName = executionContext.FunctionName;
-     string location = Environment.GetEnvironmentVariable("REGION_NAME");
-     var availability = new AvailabilityTelemetry
-     {
-         Name = testName,
-
-         RunLocation = location,
-
-         Success = false,
-     };
-
-     availability.Context.Operation.ParentId = Activity.Current.SpanId.ToString();
-     availability.Context.Operation.Id = Activity.Current.RootId;
-     var stopwatch = new Stopwatch();
-     stopwatch.Start();
-
-     try
-     {
-         using (var activity = new Activity("AvailabilityContext"))
-         {
-             activity.Start();
-             availability.Id = Activity.Current.SpanId.ToString();
-             // Run business logic
-             await RunAvailabilityTestAsync(log);
-         }
-         availability.Success = true;
-     }
-
-     catch (Exception ex)
-     {
-         availability.Message = ex.Message;
-         throw;
-     }
-
-     finally
-     {
-         stopwatch.Stop();
-         availability.Duration = stopwatch.Elapsed;
-         availability.Timestamp = DateTimeOffset.UtcNow;
-         telemetryClient.TrackAvailability(availability);
-         telemetryClient.Flush();
-     }
+ if (telemetryClient == null)
+ {
+ // Initializing a telemetry configuration for Application Insights based on connection string
+
+ var telemetryConfiguration = new TelemetryConfiguration();
+ telemetryConfiguration.ConnectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING");
+ telemetryConfiguration.TelemetryChannel = new InMemoryChannel();
+ telemetryClient = new TelemetryClient(telemetryConfiguration);
+ }
+
+ string testName = executionContext.FunctionName;
+ string location = Environment.GetEnvironmentVariable("REGION_NAME");
+ var availability = new AvailabilityTelemetry
+ {
+ Name = testName,
+
+ RunLocation = location,
+
+ Success = false,
+ };
+
+ availability.Context.Operation.ParentId = Activity.Current.SpanId.ToString();
+ availability.Context.Operation.Id = Activity.Current.RootId;
+ var stopwatch = new Stopwatch();
+ stopwatch.Start();
+
+ try
+ {
+ using (var activity = new Activity("AvailabilityContext"))
+ {
+ activity.Start();
+ availability.Id = Activity.Current.SpanId.ToString();
+ // Run business logic
+ await RunAvailabilityTestAsync(log);
+ }
+ availability.Success = true;
+ }
+
+ catch (Exception ex)
+ {
+ availability.Message = ex.Message;
+ throw;
+ }
+
+ finally
+ {
+ stopwatch.Stop();
+ availability.Duration = stopwatch.Elapsed;
+ availability.Timestamp = DateTimeOffset.UtcNow;
+ telemetryClient.TrackAvailability(availability);
+ telemetryClient.Flush();
+ }
} ```
azure-monitor Tutorial Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-alert.md
- Title: Send alerts from Azure Application Insights | Microsoft Docs
-description: Tutorial shows how to send alerts in response to errors in your application by using Application Insights.
- Previously updated : 04/10/2019----
-# Monitor and alert on application health with Application Insights
-
-Application Insights allows you to monitor your application and sends you alerts when it's unavailable, experiencing failures, or suffering from performance issues. This tutorial takes you through the process of creating tests to continuously check the availability of your application.
-
-You'll learn how to:
-
-> [!div class="checklist"]
-> * Create availability tests to continuously check the response of the application.
-> * Send mail to administrators when a problem occurs.
-
-## Prerequisites
-
-To complete this tutorial, create an [Application Insights resource](../app/create-new-resource.md).
-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create availability test
-
-Availability tests in Application Insights allow you to automatically test your application from various locations around the world. In this tutorial, you'll perform a URL test to ensure that your web application is available. You could also create a complete walkthrough to test its detailed operation.
-
-1. Select **Application Insights** and then select your subscription.
-
-1. Under the **Investigate** menu, select **Availability**. Then select **Create test**.
-
- ![Screenshot that shows adding an availability test.](media/tutorial-alert/add-test-001.png)
-
-1. Enter a name for the test and leave the other defaults. This selection will trigger requests for the application URL every 5 minutes from five different geographic locations.
-
-1. Select **Alerts** to open the **Alerts** dropdown where you can define details for how to respond if the test fails. Choose **Near-realtime** and set the status to **Enabled.**
-
- Enter an email address to send when the alert criteria are met. Optionally, you can enter the address of a webhook to call when the alert criteria are met.
-
- ![Screenshot that shows creating a test.](media/tutorial-alert/create-test-001.png)
-
-1. Return to the test panel, select the ellipses, and edit the alert to enter the configuration for your near-realtime alert.
-
- ![Screenshot that shows editing an alert.](media/tutorial-alert/edit-alert-001.png)
-
-1. Set failed locations to greater than or equal to 3. Create an [action group](../alerts/action-groups.md) to configure who gets notified when your alert threshold is breached.
-
- ![Screenshot that shows saving alert UI.](media/tutorial-alert/save-alert-001.png)
-
-1. After you've configured your alert, select the test name to view details from each location. Tests can be viewed in both line graph and scatter plot format to visualize the successes and failures for a given time range.
-
- ![Screenshot that shows test details.](media/tutorial-alert/test-details-001.png)
-
-1. To see the details of any test, select its dot in the scatter chart to open the **End-to-end transaction details** screen. The following example shows the details for a failed request.
-
- ![Screenshot that shows test results.](media/tutorial-alert/test-result-001.png)
-
-## Next steps
-
-Now that you've learned how to alert on issues, advance to the next tutorial to learn how to analyze how users are interacting with your application.
-
-> [!div class="nextstepaction"]
-> [Understand users](./tutorial-users.md)
azure-monitor Tutorial Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-performance.md
Like the data collected for server performance, Application Insights makes all c
Now that you've learned how to identify runtime exceptions, proceed to the next tutorial to learn how to create alerts in response to failures. > [!div class="nextstepaction"]
-> [Alert on application health](./tutorial-alert.md)
+> [Standard test](availability-standard-tests.md)
azure-monitor Container Insights Agent Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-agent-config.md
# Configure agent data collection for Container insights
-Container insights collects stdout, stderr, and environmental variables from container workloads deployed to managed Kubernetes clusters from the containerized agent. You can configure agent data collection settings by creating a custom Kubernetes ConfigMaps to control this experience.
+Container insights collects stdout, stderr, and environmental variables from container workloads deployed to managed Kubernetes clusters from the containerized agent. You can configure agent data collection settings by creating a custom Kubernetes ConfigMap to control this experience.
-This article demonstrates how to create ConfigMap and configure data collection based on your requirements.
+This article demonstrates how to create ConfigMaps and configure data collection based on your requirements.
## ConfigMap file settings overview
-A template ConfigMap file is provided that allows you to easily edit it with your customizations without having to create it from scratch. Before starting, you should review the Kubernetes documentation about [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) and familiarize yourself with how to create, configure, and deploy ConfigMaps. This will allow you to filter stderr and stdout per namespace or across the entire cluster, and environment variables for any container running across all pods/nodes in the cluster.
+A template ConfigMap file is provided so that you can easily edit it with your customizations without having to create it from scratch. Before you start, review the Kubernetes documentation about [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/). Familiarize yourself with how to create, configure, and deploy ConfigMaps. You need to know how to filter stderr and stdout per namespace or across the entire cluster. You also need to know how to filter environment variables for any container running across all pods/nodes in the cluster.
>[!IMPORTANT]
->The minimum agent version supported to collect stdout, stderr, and environmental variables from container workloads is ciprod06142019 or later. To verify your agent version, from the **Node** tab select a node, and in the properties pane note value of the **Agent Image Tag** property. For additional information about the agent versions and what's included in each release, see [agent release notes](https://github.com/microsoft/Docker-Provider/tree/ci_feature_prod).
+>The minimum agent version supported to collect stdout, stderr, and environmental variables from container workloads is **ciprod06142019** or later. To verify your agent version, on the **Node** tab, select a node. On the **Properties** pane, note the value of the **Agent Image Tag** property. For more information about the agent versions and what's included in each release, see [Agent release notes](https://github.com/microsoft/Docker-Provider/tree/ci_feature_prod).
### Data collection settings
-The following table describes the settings you can configure to control data collection:
+The following table describes the settings you can configure to control data collection.
| Key | Data type | Value | Description |
|--|--|--|--|
-| `schema-version` | String (case sensitive) | v1 | This is the schema version used by the agent<br> when parsing this ConfigMap.<br> Currently supported schema-version is v1.<br> Modifying this value is not supported and will be<br> rejected when ConfigMap is evaluated. |
-| `config-version` | String | | Supports ability to keep track of this config file's version in your source control system/repository.<br> Maximum allowed characters are 10, and all other characters are truncated. |
-| `[log_collection_settings.stdout] enabled =` | Boolean | true or false | This controls if stdout container log collection is enabled. When set to `true` and no namespaces are excluded for stdout log collection<br> (`log_collection_settings.stdout.exclude_namespaces` setting below), stdout logs will be collected from all containers across all pods/nodes in the cluster. If not specified in ConfigMaps,<br> the default value is `enabled = true`. |
-| `[log_collection_settings.stdout] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stdout logs will not be collected. This setting is effective only if<br> `log_collection_settings.stdout.enabled`<br> is set to `true`.<br> If not specified in ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system"]`. |
-| `[log_collection_settings.stderr] enabled =` | Boolean | true or false | This controls if stderr container log collection is enabled.<br> When set to `true` and no namespaces are excluded for stdout log collection<br> (`log_collection_settings.stderr.exclude_namespaces` setting), stderr logs will be collected from all containers across all pods/nodes in the cluster.<br> If not specified in ConfigMaps, the default value is<br> `enabled = true`. |
-| `[log_collection_settings.stderr] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stderr logs will not be collected.<br> This setting is effective only if<br> `log_collection_settings.stdout.enabled` is set to `true`.<br> If not specified in ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system"]`. |
-| `[log_collection_settings.env_var] enabled =` | Boolean | true or false | This setting controls environment variable collection<br> across all pods/nodes in the cluster<br> and defaults to `enabled = true` when not specified<br> in ConfigMaps.<br> If collection of environment variables is globally enabled, you can disable it for a specific container<br> by setting the environment variable<br> `AZMON_COLLECT_ENV` to **False** either with a Dockerfile setting or in the [configuration file for the Pod](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) under the **env:** section.<br> If collection of environment variables is globally disabled, then you cannot enable collection for a specific container (that is, the only override that can be applied at the container level is to disable collection when it's already enabled globally.). |
-| `[log_collection_settings.enrich_container_logs] enabled =` | Boolean | true or false | This setting controls container log enrichment to populate the Name and Image property values<br> for every log record written to the ContainerLog table for all container logs in the cluster.<br> It defaults to `enabled = false` when not specified in ConfigMap. |
-| `[log_collection_settings.collect_all_kube_events] enabled =` | Boolean | true or false | This setting allows the collection of Kube events of all types.<br> By default the Kube events with type *Normal* are not collected. When this setting is set to `true`, the *Normal* events are no longer filtered and all events are collected.<br> It defaults to `enabled = false` when not specified in the ConfigMap |
+| `schema-version` | String (case sensitive) | v1 | This schema version is used by the agent<br> when parsing this ConfigMap.<br> Currently supported schema-version is v1.<br> Modifying this value isn't supported and will be<br> rejected when the ConfigMap is evaluated. |
+| `config-version` | String | | Supports the ability to keep track of this config file's version in your source control system/repository.<br> Maximum allowed characters are 10, and all other characters are truncated. |
+| `[log_collection_settings.stdout] enabled =` | Boolean | True or false | Controls if stdout container log collection is enabled. When set to `true` and no namespaces are excluded for stdout log collection<br> (`log_collection_settings.stdout.exclude_namespaces` setting), stdout logs will be collected from all containers across all pods/nodes in the cluster. If not specified in the ConfigMap,<br> the default value is `enabled = true`. |
+| `[log_collection_settings.stdout] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stdout logs won't be collected. This setting is effective only if<br> `log_collection_settings.stdout.enabled`<br> is set to `true`.<br> If not specified in the ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system","gatekeeper-system"]`. |
+| `[log_collection_settings.stderr] enabled =` | Boolean | True or false | Controls if stderr container log collection is enabled.<br> When set to `true` and no namespaces are excluded for stdout log collection<br> (`log_collection_settings.stderr.exclude_namespaces` setting), stderr logs will be collected from all containers across all pods/nodes in the cluster.<br> If not specified in the ConfigMap, the default value is<br> `enabled = true`. |
+| `[log_collection_settings.stderr] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stderr logs won't be collected.<br> This setting is effective only if<br> `log_collection_settings.stdout.enabled` is set to `true`.<br> If not specified in the ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system","gatekeeper-system"]`. |
+| `[log_collection_settings.env_var] enabled =` | Boolean | True or false | This setting controls environment variable collection<br> across all pods/nodes in the cluster<br> and defaults to `enabled = true` when not specified<br> in the ConfigMap.<br> If collection of environment variables is globally enabled, you can disable it for a specific container<br> by setting the environment variable<br> `AZMON_COLLECT_ENV` to `False` either with a Dockerfile setting or in the [configuration file for the Pod](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) under the `env:` section.<br> If collection of environment variables is globally disabled, you can't enable collection for a specific container. The only override that can be applied at the container level is to disable collection when it's already enabled globally. |
+| `[log_collection_settings.enrich_container_logs] enabled =` | Boolean | True or false | This setting controls container log enrichment to populate the `Name` and `Image` property values<br> for every log record written to the **ContainerLog** table for all container logs in the cluster.<br> It defaults to `enabled = false` when not specified in the ConfigMap. |
+| `[log_collection_settings.collect_all_kube_events] enabled =` | Boolean | True or false | This setting allows the collection of Kube events of all types.<br> By default, the Kube events with type **Normal** aren't collected. When this setting is set to `true`, the **Normal** events are no longer filtered, and all events are collected.<br> It defaults to `enabled = false` when not specified in the ConfigMap. |
### Metric collection settings
-The following table describes the settings you can configure to control metric collection:
+The following table describes the settings you can configure to control metric collection.
| Key | Data type | Value | Description |
|--|--|--|--|
-| `[metric_collection_settings.collect_kube_system_pv_metrics] enabled =` | Boolean | true or false | This setting allows persistent volume (PV) usage metrics to be collected in the kube-system namespace. By default, usage metrics for persistent volumes with persistent volume claims in the kube-system namespace are not collected. When this setting is set to `true`, PV usage metrics for all namespaces are collected. By default, this is set to `false`. |
+| `[metric_collection_settings.collect_kube_system_pv_metrics] enabled =` | Boolean | True or false | This setting allows persistent volume (PV) usage metrics to be collected in the kube-system namespace. By default, usage metrics for persistent volumes with persistent volume claims in the kube-system namespace aren't collected. When this setting is set to `true`, PV usage metrics for all namespaces are collected. By default, this setting is set to `false`. |
-ConfigMaps is a global list and there can be only one ConfigMap applied to the agent. You cannot have another ConfigMaps overruling the collections.
+ConfigMap is a global list and there can be only one ConfigMap applied to the agent. You can't have another ConfigMap overruling the collections.
## Configure and deploy ConfigMaps
-Perform the following steps to configure and deploy your ConfigMap configuration file to your cluster.
+To configure and deploy your ConfigMap configuration file to your cluster:
-1. Download the [template ConfigMap YAML file](https://aka.ms/container-azm-ms-agentconfig) and save it as container-azm-ms-agentconfig.yaml.
+1. Download the [template ConfigMap YAML file](https://aka.ms/container-azm-ms-agentconfig) and save it as *container-azm-ms-agentconfig.yaml*.
-2. Edit the ConfigMap yaml file with your customizations to collect stdout, stderr, and/or environmental variables.
+1. Edit the ConfigMap YAML file with your customizations to collect stdout, stderr, and environmental variables:
- - To exclude specific namespaces for stdout log collection, you configure the key/value using the following example: `[log_collection_settings.stdout] enabled = true exclude_namespaces = ["my-namespace-1", "my-namespace-2"]`.
-
- - To disable environment variable collection for a specific container, set the key/value `[log_collection_settings.env_var] enabled = true` to enable variable collection globally, and then follow the steps [here](container-insights-manage-agent.md#how-to-disable-environment-variable-collection-on-a-container) to complete configuration for the specific container.
-
- - To disable stderr log collection cluster-wide, you configure the key/value using the following example: `[log_collection_settings.stderr] enabled = false`.
+ - To exclude specific namespaces for stdout log collection, configure the key/value by using the following example:
+ `[log_collection_settings.stdout] enabled = true exclude_namespaces = ["my-namespace-1", "my-namespace-2"]`.
+ - To disable environment variable collection for a specific container, set the key/value `[log_collection_settings.env_var] enabled = true` to enable variable collection globally. Then follow the steps [here](container-insights-manage-agent.md#how-to-disable-environment-variable-collection-on-a-container) to complete configuration for the specific container.
+ - To disable stderr log collection cluster-wide, configure the key/value by using the following example: `[log_collection_settings.stderr] enabled = false`.
Save your changes in the editor.
-3. Create ConfigMap by running the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+1. Create a ConfigMap by running the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`
-The configuration change can take a few minutes to finish before taking effect, and all Azure Monitor Agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor Agent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" created`.
+The configuration change can take a few minutes to finish before taking effect. Then all Azure Monitor Agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor Agent pods, so not all of them restart at the same time. When the restarts are finished, a message similar to this example includes the following result: `configmap "container-azm-ms-agentconfig" created`.
## Verify configuration
-To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n kube-system`. If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following:
+To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n kube-system`. If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following example:
``` ***************Start Config Processing******************** config::unsupported/missing config schema version - 'v21' , using defaults ```
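Because the agent pod name shown above (`ama-logs-fdf58`) is specific to a single cluster, you typically look up a pod name first. A minimal sketch, using the `kube-system` namespace and the `ama-logs` pod prefix referenced throughout this article:

```
# List the Azure Monitor Agent pods and pick one to inspect.
kubectl get pods --namespace=kube-system | grep ama-logs

# Review that pod's logs for configuration errors (replace the placeholder with a pod name from your cluster).
kubectl logs <ama-logs-pod-name> --namespace=kube-system
```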
-Errors related to applying configuration changes are also available for review. The following options are available to perform additional troubleshooting of configuration changes:
--- From an agent pod logs using the same `kubectl logs` command.
+Errors related to applying configuration changes are also available for review. Use the following options for more troubleshooting of configuration changes:
-- From Live logs. Live logs show errors similar to the following:
+- From an agent pod log by using the same `kubectl logs` command.
+- From live logs. Live logs show errors similar to the following example:
``` config::error::Exception while parsing config map for log collection/env variable settings: \nparse error on value \"$\" ($end), using defaults, please check config map for errors ``` -- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence and count in the last hour.
+- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with error severity for configuration errors. If there are no errors, the entry in the table will have data with severity info, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.
-After you correct the error(s) in ConfigMap, save the yaml file and apply the updated ConfigMaps by running the command: `kubectl apply -f <configmap_yaml_file.yaml`.
+After you correct the errors in the ConfigMap, save the YAML file and apply the updated ConfigMap by running the following command: `kubectl apply -f <configmap_yaml_file.yaml>`.
-## Applying updated ConfigMap
+## Apply updated ConfigMap
-If you have already deployed a ConfigMap on clusters and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used and then apply using the same command as before, `kubectl apply -f <configmap_yaml_file.yaml`.
+If you've already deployed a ConfigMap on clusters and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used. Then you can apply it by using the same command as before: `kubectl apply -f <configmap_yaml_file.yaml>`.
-The configuration change can take a few minutes to finish before taking effect, and all Azure Monitor Agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor Agent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" updated`.
+The configuration change can take a few minutes to finish before taking effect. Then all Azure Monitor Agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor Agent pods, so not all of them restart at the same time. When the restarts are finished, a message that includes the following result is displayed: `configmap "container-azm-ms-agentconfig" updated`.
-## Verifying schema version
+## Verify schema version
-Supported config schema versions are available as pod annotation (schema-versions) on the Azure Monitor Agent pod. You can see them with the following kubectl command: `kubectl describe pod ama-logs-fdf58 -n=kube-system`
+Supported config schema versions are available as pod annotation (schema-versions) on the Azure Monitor Agent pod. You can see them with the following kubectl command: `kubectl describe pod ama-logs-fdf58 -n=kube-system`.
-The output will show similar to the following with the annotation schema-versions:
+Output similar to the following example appears with the annotation schema-versions:
``` Name: ama-logs-fdf58
The output will show similar to the following with the annotation schema-version
## Next steps -- Container insights does not include a predefined set of alerts. Review the [Create performance alerts with Container insights](./container-insights-log-alerts.md) to learn how to create recommended alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures.--- With monitoring enabled to collect health and resource utilization of your AKS or hybrid cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.--- View [log query examples](container-insights-log-query.md) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
+- Container insights doesn't include a predefined set of alerts. Review the [Create performance alerts with Container insights](./container-insights-log-alerts.md) to learn how to create recommended alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures.
+- With monitoring enabled to collect health and resource utilization of your Azure Kubernetes Service or hybrid cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
+- View [log query examples](container-insights-log-query.md) to see predefined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
azure-monitor Container Insights Enable Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md
# Enable Container insights for Azure Kubernetes Service (AKS) cluster
-This article describes how to set up Container insights to monitor managed Kubernetes cluster hosted on an [Azure Kubernetes Service](../../aks/index.yml) cluster.
+
+This article describes how to set up Container insights to monitor a managed Kubernetes cluster hosted on [Azure Kubernetes Service (AKS)](../../aks/index.yml).
## Prerequisites If you're connecting an existing AKS cluster to a Log Analytics workspace in another subscription, the Microsoft.ContainerService resource provider must be registered in the subscription with the Log Analytics workspace. For more information, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider). ## New AKS cluster
-You can enable monitoring for an AKS cluster as when it's created using any of the following methods:
-- Azure CLI. Follow the steps in [Create AKS cluster](../../aks/learn/quick-kubernetes-deploy-cli.md). -- Azure Policy. Follow the steps in [Enable AKS monitoring addon using Azure Policy](container-insights-enable-aks-policy.md).-- Terraform. If you are [deploying a new AKS cluster using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), you specify the arguments required in the profile [to create a Log Analytics workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_workspace) if you do not choose to specify an existing one. To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) and complete the profile by including the [**addon_profile**](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster) and specify **oms_agent**.
+You can enable monitoring for an AKS cluster when it's created by using any of the following methods:
+
+- **Azure CLI**: Follow the steps in [Create AKS cluster](../../aks/learn/quick-kubernetes-deploy-cli.md). A command sketch appears after this list.
+- **Azure Policy**: Follow the steps in [Enable AKS monitoring add-on by using Azure Policy](container-insights-enable-aks-policy.md).
+- **Terraform**: If you're [deploying a new AKS cluster by using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), you specify the arguments required in the profile [to create a Log Analytics workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_workspace) if you don't choose to specify an existing one. To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution). Complete the profile by including the [addon_profile](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster) and specifying **oms_agent**.
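For the Azure CLI option, a minimal sketch of creating a cluster with monitoring enabled is shown below. The placeholder names follow the conventions used elsewhere in this article, and the linked quickstart remains the authoritative walkthrough; omit `--workspace-resource-id` if you want a default Log Analytics workspace created for you.

```azurecli
az aks create \
    --resource-group <cluster-resource-group-name> \
    --name <cluster-name> \
    --enable-addons monitoring \
    --workspace-resource-id <workspace-resource-id>
```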
## Existing AKS cluster+ Use any of the following methods to enable monitoring for an existing AKS cluster. ## [CLI](#tab/azure-cli) > [!NOTE]
-> Azure CLI version 2.39.0 or higher required for managed identity authentication.
+> Azure CLI version 2.39.0 or higher is required for managed identity authentication.
### Use a default Log Analytics workspace
-Use the following command to enable monitoring of your AKS cluster using a default Log Analytics workspace for the resource group. If a default workspace doesn't already exist in the cluster's region, then one will be created with a name in the format *DefaultWorkspace-\<GUID>-\<Region>*.
+Use the following command to enable monitoring of your AKS cluster by using a default Log Analytics workspace for the resource group. If a default workspace doesn't already exist in the cluster's region, one will be created with a name in the format *DefaultWorkspace-\<GUID>-\<Region>*.
```azurecli az aks enable-addons -a monitoring -n <cluster-name> -g <cluster-resource-group-name> ```
-The output will resemble the following:
+The output will resemble the following example:
```output provisioningState : Succeeded
Use the following command to enable monitoring of your AKS cluster on a specific
az aks enable-addons -a monitoring -n <cluster-name> -g <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id> ```
-The output will resemble the following:
+The output will resemble the following example:
```output provisioningState : Succeeded ``` ## [Terraform](#tab/terraform)
-Use the following steps to enable monitoring using Terraform:
-1. Add the **oms_agent** add-on profile to the existing [azurerm_kubernetes_cluster resource](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/kubernetes_cluster)
+To enable monitoring by using Terraform:
+
+1. Add the **oms_agent** add-on profile to the existing [azurerm_kubernetes_cluster resource](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/kubernetes_cluster).
``` addon_profile {
Use the following steps to enable monitoring using Terraform:
} ```
-2. Add the [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) following the steps in the Terraform documentation.
-3. Enable collection of custom metrics using the guidance at [Enable custom metrics](container-insights-custom-metrics.md)
+1. Add the [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) by following the steps in the Terraform documentation.
+1. Enable collection of custom metrics by using the guidance at [Enable custom metrics](container-insights-custom-metrics.md).
## [Azure portal](#tab/portal-azure-monitor) > [!NOTE] > You can initiate this same process from the **Insights** option in the AKS menu for your cluster in the Azure portal.
-To enable monitoring of your AKS cluster in the Azure portal from Azure Monitor, do the following:
+To enable monitoring of your AKS cluster in the Azure portal from Azure Monitor:
1. In the Azure portal, select **Monitor**.
-2. Select **Containers** from the list.
-3. On the **Monitor - containers** page, select **Unmonitored clusters**.
-4. From the list of unmonitored clusters, find the cluster in the list and click **Enable**.
-5. On the **Configure Container insights** page, click **Configure**
-
- :::image type="content" source="media/container-insights-enable-aks/container-insights-configure.png" lightbox="media/container-insights-enable-aks/container-insights-configure.png" alt-text="Screenshot of configuration screen for AKS cluster.":::
+1. Select **Containers** from the list.
+1. On the **Monitor - containers** page, select **Unmonitored clusters**.
+1. From the list of unmonitored clusters, find the cluster in the list and select **Enable**.
+1. On the **Configure Container insights** page, select **Configure**.
-6. On the **Configure Container insights**, fill in the following information:
+ :::image type="content" source="media/container-insights-enable-aks/container-insights-configure.png" lightbox="media/container-insights-enable-aks/container-insights-configure.png" alt-text="Screenshot that shows the configuration screen for an AKS cluster.":::
- | Option | Description |
- |:|:|
- | Log Analytics workspace | Select a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) from the drop-down list or click **Create new** to create a default Log Analytics workspace. The Log Analytics workspace must be in the same subscription as the AKS container. |
- | Enable Prometheus metrics | Select this option to collect Prometheus metrics for the cluster in [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). |
- | Azure Monitor workspace | If you select **Enable Prometheus metrics**, then you must select an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). The Azure Monitor workspace must be in the same subscription as the AKS container and the Log Analytics workspace. |
- | Grafana workspace | To use the collected Prometheus metrics with dashboards in [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace) to the Azure Monitor workspace if it isn't already. |
+1. On the **Configure Container insights** page, fill in the following information:
-7. Select **Use managed identity** if you want to use [managed identity authentication with the Azure Monitor agent](container-insights-onboard.md#authentication).
+ | Option | Description |
+ |:|:|
+ | Log Analytics workspace | Select a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) from the dropdown list or select **Create new** to create a default Log Analytics workspace. The Log Analytics workspace must be in the same subscription as the AKS container. |
+ | Enable Prometheus metrics | Select this option to collect Prometheus metrics for the cluster in [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). |
+ | Azure Monitor workspace | If you select **Enable Prometheus metrics**, you must select an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). The Azure Monitor workspace must be in the same subscription as the AKS container and the Log Analytics workspace. |
+ | Grafana workspace | To use the collected Prometheus metrics with dashboards in [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace), select a Grafana workspace. It's linked to the Azure Monitor workspace if it isn't already. |
+
+1. Select **Use managed identity** if you want to use [managed identity authentication with Azure Monitor Agent](container-insights-onboard.md#authentication).
After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster. ## [Resource Manager template](#tab/arm) >[!NOTE]
->The template needs to be deployed in the same resource group as the cluster.
-
+>The template must be deployed in the same resource group as the cluster.
### Create or download templates
-You will either download template and parameter files or create your own depending on what authentication mode you're using.
-**To enable [managed identity authentication (preview)](container-insights-onboard.md#authentication)**
+You'll either download template and parameter files or create your own depending on the authentication mode you're using.
-1. Download the template at [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file) and save it as **existingClusterOnboarding.json**.
+To enable [managed identity authentication (preview)](container-insights-onboard.md#authentication):
-2. Download the parameter file at [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file) and save it as **existingClusterParam.json**.
+1. Download the template in the [GitHub content file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file) and save it as **existingClusterOnboarding.json**.
-3. Edit the values in the parameter file.
+1. Download the parameter file in the [GitHub content file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file) and save it as **existingClusterParam.json**.
- - `aksResourceId`: Use the values on the **AKS Overview** page for the AKS cluster.
- - `aksResourceLocation`: Use the values on the **AKS Overview** page for the AKS cluster.
- - `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
- - `resourceTagValues`: Match the existing tag values specified for the existing Container insights extension DCR of the cluster and the name of the data collection rule, which will be MSCI-\<clusterName\>-\<clusterRegion\> and this resource created in AKS clusters Resource Group. If this is first-time onboarding, you can set the arbitrary tag values.
+1. Edit the values in the parameter file:
+ - `aksResourceId`: Use the values on the **AKS Overview** page for the AKS cluster.
+ - `aksResourceLocation`: Use the values on the **AKS Overview** page for the AKS cluster.
+ - `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
+ - `resourceTagValues`: Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will be *MSCI-\<clusterName\>-\<clusterRegion\>*, and this resource is created in the AKS cluster's resource group. If this is first-time onboarding, you can set arbitrary tag values.
-**To enable [managed identity authentication (preview)](container-insights-onboard.md#authentication)**
+To enable [managed identity authentication (preview)](container-insights-onboard.md#authentication):
1. Save the following JSON as **existingClusterOnboarding.json**.
You will either download template and parameter files or create your own dependi
} ```
-2. Save the following JSON as **existingClusterParam.json**.
+1. Save the following JSON as **existingClusterParam.json**.
```json {
You will either download template and parameter files or create your own dependi
} ```
-2. Download the parameter file at [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file) and save as **existingClusterParam.json**.
-
-3. Edit the values in the parameter file.
+1. Download the parameter file in the [GitHub content file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file) and save as **existingClusterParam.json**.
- - `aksResourceId`: Use the values on the **AKS Overview** page for the AKS cluster.
- - `aksResourceLocation`: Use the values on the **AKS Overview** page for the AKS cluster.
- - `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
- - `resourceTagValues`: Use the existing tag values specified for the AKS cluster.
+1. Edit the values in the parameter file:
-### Deploy template
+ - `aksResourceId`: Use the values on the **AKS Overview** page for the AKS cluster.
+ - `aksResourceLocation`: Use the values on the **AKS Overview** page for the AKS cluster.
+ - `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
+ - `resourceTagValues`: Use the existing tag values specified for the AKS cluster.
-Deploy the template with the parameter file using any valid method for deploying Resource Manager templates. See [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates) for examples of different methods.
+### Deploy the template
+Deploy the template with the parameter file by using any valid method for deploying Resource Manager templates. For examples of different methods, see [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates).
-
-#### To deploy with Azure PowerShell:
+#### Deploy with Azure PowerShell
```powershell New-AzResourceGroupDeployment -Name OnboardCluster -ResourceGroupName <ResourceGroupName> -TemplateFile .\existingClusterOnboarding.json -TemplateParameterFile .\existingClusterParam.json ```
-The configuration change can take a few minutes to complete. When it's completed, a message is displayed that's similar to the following and includes the result:
+The configuration change can take a few minutes to complete. When it's finished, a message that includes the following result is displayed:
```output provisioningState : Succeeded ```
-#### To deploy with Azure CLI, run the following commands:
+#### Deploy with Azure CLI
```azurecli az login
az account set --subscription "Subscription Name"
az deployment group create --resource-group <ResourceGroupName> --template-file ./existingClusterOnboarding.json --parameters @./existingClusterParam.json ```
-The configuration change can take a few minutes to complete. When it's completed, a message is displayed that's similar to the following and includes the result:
+The configuration change can take a few minutes to complete. When it's finished, a message that includes the following result is displayed:
```output provisioningState : Succeeded
After you've enabled monitoring, it might take about 15 minutes before you can v
## Verify agent and solution deployment+ Run the following command to verify that the agent is deployed successfully. ``` kubectl get ds ama-logs --namespace=kube-system ```
-The output should resemble the following, which indicates that it was deployed properly:
+The output should resemble the following example, which indicates that it was deployed properly:
```output User@aksuser:~$ kubectl get ds ama-logs --namespace=kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR
ama-logs 2 2 2 2 2 beta.kubernetes.io/os=linux 1d ```
-If there are Windows Server nodes on the cluster then you can run the following command to verify that the agent is deployed successfully.
+If there are Windows Server nodes on the cluster, run the following command to verify that the agent is deployed successfully:
``` kubectl get ds ama-logs-windows --namespace=kube-system ```
-The output should resemble the following, which indicates that it was deployed properly:
+The output should resemble the following example, which indicates that it was deployed properly:
```output User@aksuser:~$ kubectl get ds ama-logs-windows --namespace=kube-system
To verify deployment of the solution, run the following command:
kubectl get deployment ama-logs-rs -n=kube-system ```
-The output should resemble the following, which indicates that it was deployed properly:
+The output should resemble the following example, which indicates that it was deployed properly:
```output User@aksuser:~$ kubectl get deployment ama-logs-rs -n=kube-system
ama-logs-rs 1 1 1 1 3h
## View configuration with CLI
-Use the `aks show` command to get details such as is the solution enabled or not, what is the Log Analytics workspace resourceID, and summary details about the cluster.
+Use the `aks show` command to find out whether the solution is enabled or not, what the Log Analytics workspace resource ID is, and summary information about the cluster.
```azurecli az aks show -g <resourceGroupofAKSCluster> -n <nameofAksCluster> ```
-After a few minutes, the command completes and returns JSON-formatted information about solution. The results of the command should show the monitoring add-on profile and resembles the following example output:
+After a few minutes, the command completes and returns JSON-formatted information about the solution. The results of the command should show the monitoring add-on profile and resemble the following example output:
```output "addonProfiles": {
After a few minutes, the command completes and returns JSON-formatted informatio
## Migrate to managed identity authentication
-### Existing clusters with service principal
-AKS Clusters with service principal must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for this migration.
+This section explains two methods for migrating to managed identity authentication.
-1. Get the configured Log Analytics workspace resource ID:
+### Existing clusters with a service principal
-```cli
-az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
-```
+AKS clusters with a service principal must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for this migration.
+
+1. Get the configured Log Analytics workspace resource ID:
+
+ ```cli
+ az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
+ ```
-2. Disable monitoring with the following command:
+1. Disable monitoring with the following command:
- ```cli
- az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name>
- ```
+ ```cli
+ az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name>
+ ```
-3. Upgrade cluster to system managed identity with the following command:
+1. Upgrade cluster to system managed identity with the following command:
- ```cli
- az aks update -g <resource-group-name> -n <cluster-name> --enable-managed-identity
- ```
+ ```cli
+ az aks update -g <resource-group-name> -n <cluster-name> --enable-managed-identity
+ ```
-4. Enable Monitoring addon with managed identity authentication option using Log Analytics workspace resource ID obtained in the first step:
+1. Enable the monitoring add-on with the managed identity authentication option by using the Log Analytics workspace resource ID obtained in step 1:
- ```cli
- az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
- ```
+ ```cli
+ az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
+ ```
-### Existing clusters with system or user assigned identity
-AKS Clusters with system assigned identity must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for clusters with system identity. For clusters with user assigned identity, only Azure Public cloud is supported.
+### Existing clusters with system or user-assigned identity
-1. Get the configured Log Analytics workspace resource ID:
+AKS clusters with system-assigned identity must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for clusters with system identity. For clusters with user-assigned identity, only Azure public cloud is supported.
- ```cli
- az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
- ```
+1. Get the configured Log Analytics workspace resource ID:
-2. Disable monitoring with the following command:
+ ```cli
+ az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
+ ```
- ```cli
- az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name>
- ```
+1. Disable monitoring with the following command:
-3. Enable Monitoring addon with managed identity authentication option using Log Analytics workspace resource ID obtained in the first step:
+ ```cli
+ az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name>
+ ```
- ```cli
- az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
- ```
+1. Enable the monitoring add-on with the managed identity authentication option by using the Log Analytics workspace resource ID obtained in step 1:
+
+ ```cli
+ az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
+ ```
## Private link
-To enable network isolation by connecting your cluster to the Log Analytics workspace using [private link](../logs/private-link-security.md), your cluster must be using managed identity authentication with the Azure Monitor agent.
-1. Follow the steps in [Enable network isolation for the Azure Monitor agent](../agents/azure-monitor-agent-data-collection-endpoint.md) to create a data collection endpoint and add it to your AMPLS.
-2. Create an association between the cluster and the data collection endpoint using the following API call. See [Data Collection Rule Associations - Create](/rest/api/monitor/data-collection-rule-associations/create) for details on this call. The DCR association name must be **configurationAccessEndpoint**, `resourceUri` is the resource ID of the AKS cluster.
+To enable network isolation by connecting your cluster to the Log Analytics workspace by using [Azure Private Link](../logs/private-link-security.md), your cluster must be using managed identity authentication with Azure Monitor Agent.
+
+1. Follow the steps in [Enable network isolation for the Azure Monitor agent](../agents/azure-monitor-agent-data-collection-endpoint.md) to create a data collection endpoint and add it to your Azure Monitor Private Link Scope (AMPLS).
+
+1. Create an association between the cluster and the data collection endpoint by using the following API call. For information on this call, see [Data collection rule associations - Create](/rest/api/monitor/data-collection-rule-associations/create). The DCR association name must be **configurationAccessEndpoint**, and `resourceUri` is the resource ID of the AKS cluster.
```rest PUT https://management.azure.com/{cluster-resource-id}/providers/Microsoft.Insights/dataCollectionRuleAssociations/configurationAccessEndpoint?api-version=2021-04-01
To enable network isolation by connecting your cluster to the Log Analytics work
} ```
- Following is an example of this API call.
+ The following snippet is an example of this API call:
```rest PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/my-aks-cluster/providers/Microsoft.Insights/dataCollectionRuleAssociations/configurationAccessEndpoint?api-version=2021-04-01
To enable network isolation by connecting your cluster to the Log Analytics work
} ```
-3. Enable monitoring with managed identity authentication option using the steps in [Migrate to managed identity authentication](#migrate-to-managed-identity-authentication).
+1. Enable monitoring with the managed identity authentication option by using the steps in [Migrate to managed identity authentication](#migrate-to-managed-identity-authentication).
## Limitations -- Enabling managed identity authentication (preview) is not currently supported using Terraform or Azure Policy.-- When you enable managed identity authentication (preview), a data collection rule is created with the name *MSCI-\<cluster-name\>-\<cluster-region\>*. This name cannot currently be modified.
+- Enabling managed identity authentication (preview) isn't currently supported by using Terraform or Azure Policy.
+- When you enable managed identity authentication (preview), a data collection rule is created with the name *MSCI-\<cluster-name\>-\<cluster-region\>*. Currently, this name can't be modified.
## Next steps
-* If you experience issues while attempting to onboard the solution, review the [troubleshooting guide](container-insights-troubleshoot.md)
-
+* If you experience issues while you attempt to onboard the solution, review the [Troubleshooting guide](container-insights-troubleshoot.md).
* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Container Insights Livedata Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md
Title: View Live Data with Container insights
+ Title: View live data with Container insights
description: This article describes the real-time view of Kubernetes logs, events, and pod metrics without using kubectl in Container insights. Last updated 05/24/2022
-# How to view Kubernetes logs, events, and pod metrics in real-time
+# View Kubernetes logs, events, and pod metrics in real time
-Container insights includes the Live Data feature, which is an advanced diagnostic feature allowing you direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderror), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get` events, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to further assist in troubleshooting issues in real-time.
+Container insights includes the Live Data feature. You can use this advanced diagnostic feature for direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderr), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get events`, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to help with troubleshooting issues in real time.
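For reference, the kubectl operations that Live Data surfaces are roughly equivalent to the following sketch. The placeholder names are illustrative; the feature returns the same kind of data through the portal instead of a shell.

```
# Roughly equivalent commands (placeholders are illustrative):
kubectl logs <pod-name> -c <container-name> --namespace=<namespace>   # container stdout/stderr
kubectl get events --namespace=<namespace>                            # Kubernetes events
kubectl top pods --namespace=<namespace>                              # pod CPU and memory usage
```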
-This article provides a detailed overview and helps you understand how to use this feature.
+This article provides an overview of this feature and helps you understand how to use it.
-For help setting up or troubleshooting the Live Data feature, review our [setup guide](container-insights-livedata-setup.md). This feature directly accesses the Kubernetes API, and additional information about the authentication model can be found [here](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
+For help with setting up or troubleshooting the Live Data feature, see the [Setup guide](container-insights-livedata-setup.md). This feature directly accesses the Kubernetes API. For more information about the authentication model, see [The Kubernetes API](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
## View AKS resource live logs
-Use the following procedure to view the live logs for pods, deployments, and replica sets with or without Container insights from the AKS resource view.
+
+To view the live logs for pods, deployments, and replica sets with or without Container insights from the AKS resource view:
1. In the Azure portal, browse to the AKS cluster resource group and select your AKS resource.
-2. Select **Workloads** in the **Kubernetes resources** section of the menu.
+1. Select **Workloads** in the **Kubernetes resources** section of the menu.
-3. Select a pod, deployment, replica-set from the respective tab.
+1. Select a pod, deployment, or replica set from the respective tab.
-4. Select **Live Logs** from the resource's menu.
+1. Select **Live Logs** from the resource's menu.
-5. Select a pod to start collection of the live data.
+1. Select a pod to start collecting the live data.
- [![Deployment live logs](./media/container-insights-livedata-overview/live-data-deployment.png)](./media/container-insights-livedata-overview/live-data-deployment.png#lightbox)
+ [![Screenshot that shows the deployment of live logs.](./media/container-insights-livedata-overview/live-data-deployment.png)](./media/container-insights-livedata-overview/live-data-deployment.png#lightbox)
## View logs
-You can view real-time log data as they are generated by the container engine from the **Nodes**, **Controllers**, and **Containers** view. To view log data, perform the following steps.
+You can view real-time log data as it's generated by the container engine on the **Nodes**, **Controllers**, or **Containers** view. To view log data:
1. In the Azure portal, browse to the AKS cluster resource group and select your AKS resource.
-2. On the AKS cluster dashboard, under **Monitoring** on the left-hand side, choose **Insights**.
+1. On the AKS cluster dashboard, under **Monitoring** on the left side, select **Insights**.
-3. Select either the **Nodes**, **Controllers**, or **Containers** tab.
+1. Select the **Nodes**, **Controllers**, or **Containers** tab.
-4. Select an object from the performance grid, and on the properties pane found on the right side, select **View live data** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure.
+1. Select an object from the performance grid. In the **Properties** pane on the right side, select **View live data**. If the AKS cluster is configured with single sign-on by using Azure Active Directory (Azure AD), you're prompted to authenticate on first use during that browser session. Select your account and finish authentication with Azure.
>[!NOTE]
- >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review the [How to query logs from Container insights](container-insights-log-query.md) feature to learn more about viewing historical logs, events and metrics.
+ >To view the data from your Log Analytics workspace, select **View in analytics** in the **Properties** pane. The log search results potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers**. These logs might no longer exist. Attempting to search logs for a container that isn't available in `kubectl` will also fail here. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container insights](container-insights-log-query.md).
-After successfully authenticating, the Live Data console pane will appear below the performance data grid where you can view log data in a continuous stream. If the fetch status indicator shows a green check mark, which is on the far right of the pane, it means data can be retrieved and it begins streaming to your console.
+After successful authentication, the Live Data console pane appears below the performance data grid. You can view log data here in a continuous stream. If the fetch status indicator shows a green check mark at the far right, it means data can be retrieved, and it begins streaming to your console.
-![Node properties pane view data option](./media/container-insights-livedata-overview/node-properties-pane.png)
+![Screenshot that shows the Node properties pane view data option.](./media/container-insights-livedata-overview/node-properties-pane.png)
The pane title shows the name of the pod the container is grouped with. ## View events
-You can view real-time event data as they are generated by the container engine from the **Nodes**, **Controllers**, **Containers**, and **Deployments** view when a container, pod, node, ReplicaSet, DaemonSet, job, CronJob or Deployment is selected. To view events, perform the following steps.
+You can view real-time event data as it's generated by the container engine on the **Nodes**, **Controllers**, **Containers**, or **Deployments** view when a container, pod, node, ReplicaSet, DaemonSet, job, CronJob, or Deployment is selected. To view events:
1. In the Azure portal, browse to the AKS cluster resource group and select your AKS resource.
-2. On the AKS cluster dashboard, under **Monitoring** on the left-hand side, choose **Insights**.
+1. On the AKS cluster dashboard, under **Monitoring** on the left side, select **Insights**.
-3. Select either the **Nodes**, **Controllers**, **Containers**, or **Deployments** tab.
+1. Select the **Nodes**, **Controllers**, **Containers**, or **Deployments** tab.
-4. Select an object from the performance grid, and on the properties pane found on the right side, select **View live data** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure.
+1. Select an object from the performance grid. In the **Properties** pane on the right side, select **View live data**. If the AKS cluster is configured with single sign-on by using Azure AD, you're prompted to authenticate on first use during that browser session. Select your account and finish authentication with Azure.
>[!NOTE]
- >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review the [How to query logs from Container insights](container-insights-log-query.md) feature to learn more about viewing historical logs, events and metrics.
+ >To view the data from your Log Analytics workspace, select **View in analytics** in the **Properties** pane. The log search results potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers**. These logs might no longer exist. Attempting to search logs for a container that isn't available in `kubectl` will also fail here. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container insights](container-insights-log-query.md).
-After successfully authenticating, the Live Data console pane will appear below the performance data grid. If the fetch status indicator shows a green check mark, which is on the far right of the pane, it means data can be retrieved and it begins streaming to your console.
+After successful authentication, the Live Data console pane appears below the performance data grid. If the fetch status indicator shows a green check mark at the far right, it means data can be retrieved, and it begins streaming to your console.
-If the object you selected was a container, select the **Events** option in the pane. If you selected a Node, Pod, or controller, viewing events is automatically selected.
+If the object you selected was a container, select the **Events** option in the pane. If you selected a node, pod, or controller, viewing events is automatically selected.
-![Controller properties pane view events](./media/container-insights-livedata-overview/controller-properties-live-event.png)
+![Screenshot that shows the Controller properties pane view events.](./media/container-insights-livedata-overview/controller-properties-live-event.png)
The pane title shows the name of the Pod the container is grouped with. ### Filter events
-While viewing events, you can additionally limit the results using the **Filter** pill found to the right of the search bar. Depending on what resource you have selected, the pill lists a Pod, Namespace, or cluster to choose from.
+While you view events, you can also limit the results by using the **Filter** pill found to the right of the search bar. Depending on the resource you select, the pill lists a pod, namespace, or cluster to choose from.
## View metrics
-You can view real-time metric data as they are generated by the container engine from the **Nodes** or **Controllers** view only when a **Pod** is selected. To view metrics, perform the following steps.
+You can view real-time metric data as it's generated by the container engine from the **Nodes** or **Controllers** view only when a **Pod** is selected. To view metrics:
1. In the Azure portal, browse to the AKS cluster resource group and select your AKS resource.
-2. On the AKS cluster dashboard, under **Monitoring** on the left-hand side, choose **Insights**.
+1. On the AKS cluster dashboard, under **Monitoring** on the left side, select **Insights**.
-3. Select either the **Nodes** or **Controllers** tab.
+1. Select either the **Nodes** or **Controllers** tab.
-4. Select a **Pod** object from the performance grid, and on the properties pane found on the right side, select **View live data** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure.
+1. Select a **Pod** object from the performance grid. In the **Properties** pane on the right side, select **View live data**. If the AKS cluster is configured with single sign-on by using Azure AD, you're prompted to authenticate on first use during that browser session. Select your account and finish authentication with Azure.
>[!NOTE]
- >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review [How to query logs from Container insights](container-insights-log-query.md) to learn more about viewing historical logs, events and metrics.
+ >To view the data from your Log Analytics workspace, select the **View in analytics** option in the **Properties** pane. The log search results potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers**. These logs might no longer exist. Attempting to search logs for a container that isn't available in `kubectl` will also fail here. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container insights](container-insights-log-query.md).
+
+After successful authentication, the Live Data console pane appears below the performance data grid. Metric data is retrieved and begins streaming to your console for presentation in the two charts. The pane title shows the name of the pod the container is grouped with.
-After successfully authenticating, the Live Data console pane will appear below the performance data grid. Metric data is retrieved and begins streaming to your console for presentation in the two charts. The pane title shows the name of the pod the container is grouped with.
+![Screenshot that shows the View Pod metrics example.](./media/container-insights-livedata-overview/pod-properties-live-metrics.png)
-![View Pod metrics example](./media/container-insights-livedata-overview/pod-properties-live-metrics.png)
+## Use live data views
-## Using live data views
The following sections describe functionality that you can use in the different live data views. ### Search
-The Live Data feature includes search functionality. In the **Search** field, you can filter results by typing a key word or term and any matching results are highlighted to allow quick review. While viewing events, you can additionally limit the results using the **Filter** pill found to the right of the search bar. Depending on what resource you have selected, the pill lists a Pod, Namespace, or cluster to chose from.
-![Live Data console pane filter example](./media/container-insights-livedata-overview/livedata-pane-filter-example.png)
+The Live Data feature includes search functionality. In the **Search** box, you can filter results by entering a keyword or term. Any matching results are highlighted to allow quick review. While you view the events, you can also limit the results by using the **Filter** feature to the right of the search bar. Depending on what resource you've selected, you can choose from a pod, namespace, or cluster.
-![Live Data console pane filter example for deployment](./media/container-insights-livedata-overview/live-data-deployment-search.png)
+![Screenshot that shows the Live Data console pane filter example.](./media/container-insights-livedata-overview/livedata-pane-filter-example.png)
-### Scroll Lock and Pause
+![Screenshot that shows the Live Data console pane filter example for deployment.](./media/container-insights-livedata-overview/live-data-deployment-search.png)
-To suspend autoscroll and control the behavior of the pane, allowing you to manually scroll through the new data read, you can use the **Scroll** option. To re-enable autoscroll, simply select the **Scroll** option again. You can also pause retrieval of log or event data by selecting the **Pause** option, and when you are ready to resume, simply select **Play**.
+### Scroll lock and pause
-![Live Data console pane pause live view](./media/container-insights-livedata-overview/livedata-pane-scroll-pause-example.png)
+To suspend autoscroll and control the behavior of the pane so that you can manually scroll through the new data read, select the **Scroll** option. To re-enable autoscroll, select **Scroll** again. You can also pause retrieval of log or event data by selecting the **Pause** option. When you're ready to resume, select **Play**.
-![Live Data console pane pause live view for deployment](./media/container-insights-livedata-overview/live-data-deployment-pause.png)
+![Screenshot that shows the Live Data console pane pause live view.](./media/container-insights-livedata-overview/livedata-pane-scroll-pause-example.png)
+![Screenshot that shows the Live Data console pane pause live view for deployment.](./media/container-insights-livedata-overview/live-data-deployment-pause.png)
+Suspend or pause autoscroll for only a short period of time while you're troubleshooting an issue. These requests might affect the availability and throttling of the Kubernetes API on your cluster.
>[!IMPORTANT]
->We recommend only suspending or pausing autoscroll for a short period of time while troubleshooting an issue. These requests may impact the availability and throttling of the Kubernetes API on your cluster.
-
->[!IMPORTANT]
->No data is stored permanently during operation of this feature. All information captured during the session is deleted when you close your browser or navigate away from it. Data only remains present for visualization inside the five minute window of the metrics feature; any metrics older than five minutes are also deleted. The Live Data buffer queries within reasonable memory usage limits.
+>No data is stored permanently during the operation of this feature. All information captured during the session is deleted when you close your browser or navigate away from it. Data only remains present for visualization inside the five-minute window of the metrics feature. Any metrics older than five minutes are also deleted. The Live Data buffer queries within reasonable memory usage limits.
## Next steps - To continue learning how to use Azure Monitor and monitor other aspects of your AKS cluster, see [View Azure Kubernetes Service health](container-insights-analyze.md).--- View [How to query logs from Container insights](container-insights-log-query.md) to see predefined queries and examples to create alerts, visualizations, or perform further analysis of your clusters.
+- To see predefined queries and examples to create alerts and visualizations or perform further analysis of your clusters, see [How to query logs from Container insights](container-insights-log-query.md).
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
# Metric alert rules in Container insights (preview)
-Metric alerts in Azure Monitor proactively identify issues related to system resources of your Azure resources, including monitored Kubernetes clusters. Container insights provides pre-configured alert rules so that you don't have to create your own. This article describes the different types of alert rules you can create and how to enable and configure them.
+Metric alerts in Azure Monitor proactively identify issues related to system resources of your Azure resources, including monitored Kubernetes clusters. Container insights provides preconfigured alert rules so that you don't have to create your own. This article describes the different types of alert rules you can create and how to enable and configure them.
> [!IMPORTANT] > Container insights in Azure Monitor now supports alerts based on Prometheus metrics. If you already use alerts based on custom metrics, you should migrate to Prometheus alerts and disable the equivalent custom metric alerts.+ ## Types of metric alert rules+ There are two types of metric rules used by Container insights based on either Prometheus metrics or custom metrics. See a list of the specific alert rules for each at [Alert rule details](#alert-rule-details). | Alert rule type | Description | |:|:|
-| [Prometheus rules](#prometheus-alert-rules) | Alert rules that use metrics stored in [Azure Monitor managed service for Prometheus (preview)](../essentials/prometheus-metrics-overview.md). There are two sets of Prometheus alert rules that you can choose to enable.<br><br>- *Community alerts* are hand-picked alert rules from the Prometheus community. Use this set of alert rules if you don't have any other alert rules enabled.<br>-*Recommended alerts* are the equivalent of the custom metric alert rules. Use this set if you're migrating from custom metrics to Prometheus metrics and want to retain identical functionality.
-| [Metric rules](#metrics-alert-rules) | Alert rules that use [custom metrics collected for your Kubernetes cluster](container-insights-custom-metrics.md). Use these alert rules if you're not ready to move to Prometheus metrics yet or if you want to manage your alert rules in the Azure portal. |
-
+| [Prometheus rules](#prometheus-alert-rules) | Alert rules that use metrics stored in [Azure Monitor managed service for Prometheus (preview)](../essentials/prometheus-metrics-overview.md). There are two sets of Prometheus alert rules that you can choose to enable.<br><br>- *Community alerts* are handpicked alert rules from the Prometheus community. Use this set of alert rules if you don't have any other alert rules enabled.<br>- *Recommended alerts* are the equivalent of the custom metric alert rules. Use this set if you're migrating from custom metrics to Prometheus metrics and want to retain identical functionality.
+| [Metric rules](#metric-alert-rules) | Alert rules that use [custom metrics collected for your Kubernetes cluster](container-insights-custom-metrics.md). Use these alert rules if you're not ready to move to Prometheus metrics yet or if you want to manage your alert rules in the Azure portal. |
## Prometheus alert rules
-[Prometheus alert rules](../alerts/alerts-types.md#prometheus-alerts-preview) use metric data from your Kubernetes cluster sent to [Azure Monitor manage service for Prometheus](../essentials/prometheus-metrics-overview.md).
+
+[Prometheus alert rules](../alerts/alerts-types.md#prometheus-alerts-preview) use metric data from your Kubernetes cluster sent to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
### Prerequisites-- Your cluster must be configured to send metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). See [Collect Prometheus metrics from Kubernetes cluster with Container insights](container-insights-prometheus-metrics-addon.md).+
+Your cluster must be configured to send metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). For more information, see [Collect Prometheus metrics from Kubernetes cluster with Container insights](container-insights-prometheus-metrics-addon.md).
### Enable alert rules
-The only method currently available for creating Prometheus alert rules is a Resource Manager template.
+The only method currently available for creating Prometheus alert rules is an Azure Resource Manager template (ARM template).
-1. Download the template that includes the set of alert rules that you want to enable. See [Alert rule details](#alert-rule-details) for a listing of the rules for each.
+1. Download the template that includes the set of alert rules you want to enable. For a list of the rules for each, see [Alert rule details](#alert-rule-details).
- [Community alerts](https://aka.ms/azureprometheus-communityalerts) - [Recommended alerts](https://aka.ms/azureprometheus-recommendedalerts)
-2. Deploy the template using any standard methods for installing Resource Manager templates. See [Resource Manager template samples for Azure Monitor](../resource-manager-samples.md#deploy-the-sample-templates) for guidance.
+1. Deploy the template by using any standard methods for installing ARM templates. For guidance, see [ARM template samples for Azure Monitor](../resource-manager-samples.md#deploy-the-sample-templates).
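   For example, if you use the Azure CLI, a deployment of a downloaded template might look like the following sketch. The resource group and file name are placeholders for your own values.

   ```bash
   # Sketch: deploy a downloaded Prometheus alert rule template with the Azure CLI.
   # Replace the resource group and template file name with your own values.
   az deployment group create \
     --resource-group <cluster-resource-group> \
     --template-file ./community-alerts.json
   ```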
-> [!NOTE]
-> While the Prometheus alert could be created in a different resource group to the target resource, you should use the same resource group as your target resource.
+> [!NOTE]
+> Although you can create the Prometheus alert in a resource group different from the target resource, use the same resource group as your target resource.
### Edit alert rules
- To edit the query and threshold or configure an action group for your alert rules, edit the appropriate values in the ARM template and redeploy it using any deployment method.
+ To edit the query and threshold or configure an action group for your alert rules, edit the appropriate values in the ARM template and redeploy it by using any deployment method.
### Configure alertable metrics in ConfigMaps
-Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps are applicable only for the following alertable metrics:
+Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps only apply to the following alertable metrics:
-- *cpuExceededPercentage*-- *cpuThresholdViolated*-- *memoryRssExceededPercentage*-- *memoryRssThresholdViolated*-- *memoryWorkingSetExceededPercentage*-- *memoryWorkingSetThresholdViolated*-- *pvUsageExceededPercentage*-- *pvUsageThresholdViolated*
+- cpuExceededPercentage
+- cpuThresholdViolated
+- memoryRssExceededPercentage
+- memoryRssThresholdViolated
+- memoryWorkingSetExceededPercentage
+- memoryWorkingSetThresholdViolated
+- pvUsageExceededPercentage
+- pvUsageThresholdViolated
> [!TIP]
-> Download the new ConfigMap from [here](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
-
+> Download the new ConfigMap from [this GitHub content](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
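As one way to retrieve the file before you edit it, you can download it from a command line. The URL is the same one referenced in the preceding tip.

```bash
# Download the ConfigMap template referenced in the preceding tip
curl -o container-azm-ms-agentconfig.yaml https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml
```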
1. Edit the ConfigMap YAML file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]` or `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`.
- - **Example**. Use the following ConfigMap configuration to modify the *cpuExceededPercentage* threshold to 90%:
+ - **Example:** Use the following ConfigMap configuration to modify the `cpuExceededPercentage` threshold to 90%:
``` [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
Perform the following steps to configure your ConfigMap configuration file to ov
container_memory_working_set_threshold_percentage = 95.0 ```
- - **Example**. Use the following ConfigMap configuration to modify the *pvUsageExceededPercentage* threshold to 80%:
+ - **Example:** Use the following ConfigMap configuration to modify the `pvUsageExceededPercentage` threshold to 80%:
``` [alertable_metrics_configuration_settings.pv_utilization_thresholds]
Perform the following steps to configure your ConfigMap configuration file to ov
pv_usage_threshold_percentage = 80.0 ```
-2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+1. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods; they don't all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following example and includes the result: `configmap "container-azm-ms-agentconfig" created`.
+The configuration change can take a few minutes to finish before it takes effect. Then all omsagent pods in the cluster restart. The restart is a rolling restart for all omsagent pods, so they don't all restart at the same time. When the restarts are finished, a message that includes the result is displayed, similar to the following example: `configmap "container-azm-ms-agentconfig" created`.
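If you want to confirm the change, a quick check such as the following sketch can help. The namespace is the one Container insights uses for its agent pods; pod names vary by cluster.

```bash
# Confirm the ConfigMap exists and watch the omsagent pods roll (pod names vary by cluster)
kubectl get configmap container-azm-ms-agentconfig -n kube-system
kubectl get pods -n kube-system | grep omsagent
```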
-## Metrics alert rules
-[Metric alert rules](../alerts/alerts-types.md#metric-alerts) use [custom metric data from your Kubernetes cluster](container-insights-custom-metrics.md).
+## Metric alert rules
+[Metric alert rules](../alerts/alerts-types.md#metric-alerts) use [custom metric data from your Kubernetes cluster](container-insights-custom-metrics.md).
### Prerequisites
- - You may need to enable collection of custom metrics for your cluster. See [Metrics collected by Container insights](container-insights-custom-metrics.md).
- - See the supported regions for custom metrics at [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
+ - You might need to enable collection of custom metrics for your cluster. See [Metrics collected by Container insights](container-insights-custom-metrics.md).
+ - See the supported regions for custom metrics at [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
### Enable and configure alert rules
The configuration change can take a few minutes to finish before taking effect,
#### Enable alert rules
-1. From the **Insights** menu for your cluster, select **Recommended alerts**.
+1. On the **Insights** menu for your cluster, select **Recommended alerts**.
- :::image type="content" source="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" lightbox="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" alt-text="Screenshot showing recommended alerts option in Container insights.":::
+ :::image type="content" source="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" lightbox="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" alt-text="Screenshot that shows recommended alerts option in Container insights.":::
+1. Toggle the **Status** for each alert rule to enable it. The alert rule is created, and the rule name updates to include a link to the new alert resource.
-2. Toggle the **Status** for each alert rule to enable. The alert rule is created and the rule name updates to include a link to the new alert resource.
+ :::image type="content" source="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" lightbox="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" alt-text="Screenshot that shows a list of recommended alerts and options for enabling each.":::
- :::image type="content" source="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" lightbox="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" alt-text="Screenshot showing list of recommended alerts and option for enabling each.":::
+1. Alert rules aren't associated with an [action group](../alerts/action-groups.md) to notify users that an alert has been triggered. Select **No action group assigned** to open the **Action Groups** page. Specify an existing action group or create an action group by selecting **Create action group**.
-3. Alert rules aren't associated with an [action group](../alerts/action-groups.md) to notify users that an alert has been triggered. Select **No action group assigned** to open the **Action Groups** page, specify an existing or create an action group by selecting **Create action group**.
-
- :::image type="content" source="media/container-insights-metric-alerts/select-action-group.png" lightbox="media/container-insights-metric-alerts/select-action-group.png" alt-text="Screenshot showing selection of an action group.":::
+ :::image type="content" source="media/container-insights-metric-alerts/select-action-group.png" lightbox="media/container-insights-metric-alerts/select-action-group.png" alt-text="Screenshot that shows selecting an action group.":::
#### Edit alert rules
-To edit the threshold for a rule or configure an [action group](../alerts/action-groups.md) for your AKS cluster.
+To edit the threshold for a rule or configure an [action group](../alerts/action-groups.md) for your Azure Kubernetes Service (AKS) cluster:
1. From Container insights for your cluster, select **Recommended alerts**.
-2. Click the **Rule Name** to open the alert rule.
-3. See [Create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=metric) for details on the alert rule settings.
+2. Select the **Rule Name** to open the alert rule.
+3. See [Create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=metric) for information on the alert rule settings.
#### Disable alert rules+ 1. From Container insights for your cluster, select **Recommended alerts**.
-2. Change the status for the alert rule to **Disabled**.
+1. Change the status for the alert rule to **Disabled**.
### [Resource Manager](#tab/resource-manager)
-For custom metrics, a separate Resource Manager template is provided for each alert rule.
+
+For custom metrics, a separate ARM template is provided for each alert rule.
#### Enable alert rules 1. Download one or all of the available templates that describe how to create the alert from [GitHub](https://github.com/microsoft/Docker-Provider/tree/ci_dev/alerts/recommended_alerts_ARM).
-2. Create and use a [parameters file](../../azure-resource-manager/templates/parameter-files.md) as a JSON to set the values required to create the alert rule.
-3. Deploy the template using any standard methods for installing Resource Manager templates. See [Resource Manager template samples for Azure Monitor](../resource-manager-samples.md) for guidance.
+1. Create and use a [parameters file](../../azure-resource-manager/templates/parameter-files.md) as a JSON to set the values required to create the alert rule.
+1. Deploy the template by using any standard methods for installing ARM templates. For guidance, see [ARM template samples for Azure Monitor](../resource-manager-samples.md).
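   As a sketch, an Azure CLI deployment that combines one of the downloaded templates with its parameters file might look like the following. The file names are placeholders for the template you downloaded and the parameters file you created.

   ```bash
   # Sketch: deploy a recommended metric alert rule template together with a parameters file.
   # Template and parameters file names are placeholders.
   az deployment group create \
     --resource-group <cluster-resource-group> \
     --template-file ./recommended-alert-rule.json \
     --parameters @./recommended-alert-rule.parameters.json
   ```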
#### Disable alert rules
-To disable custom alert rules, use the same Resource Manager template to create the rule, but change the `isEnabled` value in the parameters file to `false`.
-
+To disable custom alert rules, use the same ARM template to create the rule, but change the `isEnabled` value in the parameters file to `false`.
+ ## Alert rule details
-The following sections provide details on the alert rules provided by Container insights.
+
+The following sections present information on the alert rules provided by Container insights.
### Community alert rules
-These are hand-picked alerts from Prometheus community. Source code for these mixin alerts can be found in [GitHub](https://aka.ms/azureprometheus-mixins).
+
+These handpicked alerts come from the Prometheus community. Source code for these mixin alerts can be found in [GitHub](https://aka.ms/azureprometheus-mixins):
- KubeJobNotCompleted - KubeJobFailed
These are hand-picked alerts from Prometheus community. Source code for these mi
- KubeNodeReadinessFlapping - KubeletTooManyPods - KubeNodeUnreachable+ ### Recommended alert rules+ The following table lists the recommended alert rules that you can enable for either Prometheus metrics or custom metrics. | Prometheus alert name | Custom metric alert name | Description | Default threshold |
The following table lists the recommended alert rules that you can enable for ei
| Average container working set memory % | Average container working set memory % | Calculates average working set memory used per container. | 95% | | Average CPU % | Average CPU % | Calculates average CPU used per node. | 80% | | Average Disk Usage % | Average Disk Usage % | Calculates average disk usage for a node. | 80% |
-| Average Persistent Volume Usage % | Average Persistent Volume Usage % | Calculates average PV usage per pod. | 80% |
+| Average Persistent Volume Usage % | Average Persistent Volume Usage % | Calculates average persistent volume usage per pod. | 80% |
| Average Working set memory % | Average Working set memory % | Calculates average Working set memory for a node. | 80% | | Restarting container count | Restarting container count | Calculates number of restarting containers. | 0 | | Failed Pod Counts | Failed Pod Counts | Calculates number of restarting containers. | 0 |
The following table lists the recommended alert rules that you can enable for ei
| Completed job count | Completed job count | Calculates number of jobs completed more than six hours ago. | 0 | > [!NOTE]
-> The recommended alert rules in the Azure portal also include a log alert rule called *Daily Data Cap Breach*. This rule alerts when the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md). This alert rule is not included with the Prometheus alert rules.
->
-> You can create this rule on your own by creating a [log alert rule](../alerts/alerts-types.md#log-alerts) using the query `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`.
+> The recommended alert rules in the Azure portal also include a log alert rule called *Daily Data Cap Breach*. This rule alerts when the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md). This alert rule isn't included with the Prometheus alert rules.
+>
+> You can create this rule on your own by creating a [log alert rule](../alerts/alerts-types.md#log-alerts) that uses the query `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`.
+Common properties across all these alert rules include:
-Common properties across all of these alert rules include:
--- All alert rules are evaluated once per minute and they look back at last 5 minutes of data.
+- All alert rules are evaluated once per minute, and they look back at the last five minutes of data.
- All alert rules are disabled by default.-- Alerts rules don't have an action group assigned to them by default. You can add an [action group](../alerts/action-groups.md) to the alert either by selecting an existing action group or creating a new action group while editing the alert rule.-- You can modify the threshold for alert rules by directly editing the template and redeploying it. Refer to the guidance provided in each alert rule before modifying its threshold.
+- Alerts rules don't have an action group assigned to them by default. To add an [action group](../alerts/action-groups.md) to the alert, either select an existing action group or create a new action group while you edit the alert rule.
+- You can modify the threshold for alert rules by directly editing the template and redeploying it. Refer to the guidance provided in each alert rule before you modify its threshold.
The following metrics have unique behavior characteristics: **Prometheus and custom metrics**-- `completedJobsCount` metric is only sent when there are jobs that are completed greater than six hours ago.-- `containerRestartCount` metric is only sent when there are containers restarting.-- `oomKilledContainerCount` metric is only sent when there are OOM killed containers.-- `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory Rss, and Memory Working set values exceed the configured threshold (the default threshold is 95%). cpuThresholdViolated, memoryRssThresholdViolated, and memoryWorkingSetThresholdViolated metrics are equal to 0 is the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule.-- `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). `pvUsageThresholdViolated` metric is equal to 0 when the PV usage percentage is below the threshold and is equal 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.-- `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). *pvUsageThresholdViolated* metric is equal to 0 when the PV usage percentage is below the threshold and is equal 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule. -
-
-**Prometheus only**
-- If you want to collect `pvUsageExceededPercentage` and analyze it from [metrics explorer](../essentials/metrics-getting-started.md), you should configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `alertable_metrics_configuration_settings.pv_utilization_thresholds`. See [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file. Collection of persistent volume metrics with claims in the *kube-system* namespace are excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. See [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings) for details.-- `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory Rss, and Memory Working set values exceed the configured threshold (the default threshold is 95%). *cpuThresholdViolated*, *memoryRssThresholdViolated*, and *memoryWorkingSetThresholdViolated* metrics are equal to 0 is the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. Meaning, if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for their container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file.
+- The `completedJobsCount` metric is only sent when there are jobs that are completed greater than six hours ago.
+- The `containerRestartCount` metric is only sent when there are containers restarting.
+- The `oomKilledContainerCount` metric is only sent when there are OOM killed containers.
+- The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold. The default threshold is 95%. The `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule.
+- The `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold. The default threshold is 60%. The `pvUsageThresholdViolated` metric is equal to 0 when the persistent volume usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.
+
+**Prometheus only**
+- If you want to collect `pvUsageExceededPercentage` and analyze it from [metrics explorer](../essentials/metrics-getting-started.md), configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `alertable_metrics_configuration_settings.pv_utilization_thresholds`. For details related to configuring your ConfigMap configuration file, see [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps). Collection of persistent volume metrics with claims in the `kube-system` namespace is excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. For more information, see [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings).
+- The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and Memory Working set values exceed the configured threshold. The default threshold is 95%. The `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. If you want to collect these metrics and analyze them from [metrics explorer](../essentials/metrics-getting-started.md), configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for their container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. For details related to configuring your ConfigMap configuration file, see the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps).
## View alerts
-View fired alerts for your cluster from [**Alerts** in the **Monitor menu** in the Azure portal] with other fired alerts in your subscription. You can also select **View in alerts** from the **Recommended alerts** pane to view alerts from custom metrics.
-> [!NOTE]
-> Prometheus alerts will not currently be displayed when you select **Alerts** from your AKs cluster since the alert rule doesn't use the cluster as its target.
+View fired alerts for your cluster from **Alerts** in the **Monitor** menu in the Azure portal with other fired alerts in your subscription. You can also select **View in alerts** on the **Recommended alerts** pane to view alerts from custom metrics.
+> [!NOTE]
+> Currently, Prometheus alerts won't be displayed when you select **Alerts** from your AKS cluster because the alert rule doesn't use the cluster as its target.
## Next steps -- [Read about the different alert rule types in Azure Monitor](../alerts/alerts-types.md).-- [Read about alerting rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md).
+- Read about the [different alert rule types in Azure Monitor](../alerts/alerts-types.md).
+- Read about [alerting rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md).
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
# Enable Container insights
-This article provides an overview of the requirements and options that are available for configuring Container insights to monitor the performance of workloads that are deployed to Kubernetes environments. You can enable Container insights for a new deployment or for one or more existing deployments of Kubernetes by using a number of supported methods.
+
+This article provides an overview of the requirements and options that are available for configuring Container insights to monitor the performance of workloads that are deployed to Kubernetes environments. You can enable Container insights for a new deployment or for one or more existing deployments of Kubernetes by using several supported methods.
## Supported configurations+ Container insights supports the following environments: -- [Azure Kubernetes Service (AKS)](../../aks/index.yml)
+- [Azure Kubernetes Service (AKS)](../../aks/index.yml)
- [Azure Arc-enabled Kubernetes cluster](../../azure-arc/kubernetes/overview.md) - [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises - [AKS engine](https://github.com/Azure/aks-engine) - [Red Hat OpenShift](https://docs.openshift.com/container-platform/latest/welcome/https://docsupdatetracker.net/index.html) version 4.x
-The versions of Kubernetes and support policy are the same as those [supported in Azure Kubernetes Service (AKS)](../../aks/supported-kubernetes-versions.md).
+The versions of Kubernetes and support policy are the same as those versions [supported in AKS](../../aks/supported-kubernetes-versions.md).
### Differences between Windows and Linux clusters The main differences in monitoring a Windows Server cluster compared to a Linux cluster include: -- Windows doesn't have a Memory RSS metric, and as a result it isn't available for Windows nodes and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.
+- Windows doesn't have a Memory RSS metric. As a result, it isn't available for Windows nodes and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.
- Disk storage capacity information isn't available for Windows nodes. - Only pod environments are monitored, not Docker environments. - With the preview release, a maximum of 30 Windows Server containers are supported. This limitation doesn't apply to Linux containers. >[!NOTE]
-> Container insights support for Windows Server 2022 operating system in public preview.
-
+> Container insights support for the Windows Server 2022 operating system is in preview.
## Installation options
The main differences in monitoring a Windows Server cluster compared to a Linux
- [Azure Arc-enabled cluster](container-insights-enable-arc-enabled-clusters.md) - [Hybrid Kubernetes clusters](container-insights-hybrid-setup.md) - ## Prerequisites+ Before you start, make sure that you've met the following requirements: ### Log Analytics workspace+ Container insights stores its data in a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md). It supports workspaces in the regions that are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). For a list of the supported mapping pairs to use for the default workspace, see [Region mappings supported by Container insights](container-insights-region-mapping.md).
-You can let the onboarding experience create a Log Analytics workspace in the default resource group of the AKS cluster subscription. If you already have a workspace though, then you will most likely want to use that one. See [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md) for details.
+You can let the onboarding experience create a Log Analytics workspace in the default resource group of the AKS cluster subscription. If you already have a workspace, you'll probably want to use that one. For more information, see [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md).
-An AKS cluster can be attached to a Log Analytics workspace in a different Azure subscription in the same Azure AD Tenant. This cannot currently be done with the Azure portal, but can be done with Azure CLI or Resource Manager template.
+You can attach an AKS cluster to a Log Analytics workspace in a different Azure subscription in the same Azure Active Directory tenant. Currently, you can't do it with the Azure portal, but you can use the Azure CLI or an Azure Resource Manager template.
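For example, with the Azure CLI you can enable the monitoring add-on and point it at a workspace in another subscription by passing the workspace resource ID, as in the following sketch. The cluster and workspace names are placeholders.

```bash
# Sketch: enable Container insights on an existing AKS cluster and attach it to a workspace in another subscription.
az aks enable-addons \
  --addons monitoring \
  --name <cluster-name> \
  --resource-group <cluster-resource-group> \
  --workspace-resource-id "/subscriptions/<workspace-subscription-id>/resourceGroups/<workspace-resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```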
### Azure Monitor workspace (preview)
-If you are going to configure the cluster to [collect Prometheus metrics](container-insights-prometheus-metrics-addon.md) with [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md), then you must have an Azure Monitor workspace which is where Prometheus metrics are stored. You can let the onboarding experience create an Azure Monitor workspace in the default resource group of the AKS cluster subscription or use an existing Azure Monitor workspace.
+
+If you're going to configure the cluster to [collect Prometheus metrics](container-insights-prometheus-metrics-addon.md) with [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md), you must have an Azure Monitor workspace where Prometheus metrics are stored. You can let the onboarding experience create an Azure Monitor workspace in the default resource group of the AKS cluster subscription or use an existing Azure Monitor workspace.
### Permissions+ To enable container monitoring, you require the following permissions: -- Member of the [Log Analytics contributor](../logs/manage-access.md#azure-rbac) role.-- Member of the [*Owner* group](../../role-based-access-control/built-in-roles.md#owner) on any AKS cluster resources.
+- You must be a member of the [Log Analytics contributor](../logs/manage-access.md#azure-rbac) role.
+- You must be a member of the [*Owner* group](../../role-based-access-control/built-in-roles.md#owner) on any AKS cluster resources.
-To view data once container monitoring is enabled, you require the following permissions:
+To view data after container monitoring is enabled, you require the following permissions:
-- Member of [Log Analytics reader](../logs/manage-access.md#azure-rbac) role if you aren't already a member of [Log Analytics contributor](../logs/manage-access.md#azure-rbac).
+- You must be a member of the [Log Analytics reader](../logs/manage-access.md#azure-rbac) role if you aren't already a member of the [Log Analytics contributor](../logs/manage-access.md#azure-rbac) role.
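If you need to grant one of these roles, an assignment with the Azure CLI might look like the following sketch. The assignee, subscription, and resource names are placeholders; adjust the role and scope to the workspace or cluster you're granting access to.

```bash
# Sketch: grant the Log Analytics Contributor role on a workspace (assignee and scope are placeholders)
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Log Analytics Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```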
### Kubelet secure port
-The containerized Linux agent (replicaset pod) makes API calls to all the Windows nodes on Kubelet Secure Port (10250) within the cluster to collect Node and Container Performance related Metrics. Kubelet secure port (:10250) should be opened in the cluster's virtual network for both inbound and outbound for Windows Node and container performance related metrics collection to work.
-If you have a Kubernetes cluster with Windows nodes, then please review and configure the Network Security Group and Network Policies to make sure the Kubelet secure port (:10250) is opened for both inbound and outbound in cluster's virtual network.
+The containerized Linux agent (replicaset pod) makes API calls to all the Windows nodes on Kubelet secure port (10250) within the cluster to collect node and container performance-related metrics. Kubelet secure port (:10250) should be opened in the cluster's virtual network for both inbound and outbound for Windows node and container performance-related metrics collection to work.
+If you have a Kubernetes cluster with Windows nodes, review and configure the network security group and network policies to make sure the Kubelet secure port (:10250) is opened for both inbound and outbound in the cluster's virtual network.
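If the cluster's subnet uses a network security group, rules similar to the following sketch open the port. The rule name and priority are placeholders; create a matching outbound rule as well.

```bash
# Sketch: allow inbound traffic on the Kubelet secure port (10250); the rule name and priority are placeholders.
az network nsg rule create \
  --resource-group <cluster-resource-group> \
  --nsg-name <nsg-name> \
  --name AllowKubeletSecurePortInbound \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 10250
# Repeat with --direction Outbound (and a distinct name and priority) for outbound traffic.
```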
### Network firewall requirements
-See [Network firewall requirements](#network-firewall-requirements) for details on the firewall requirements for the AKS cluster.
+
+For information on the firewall requirements for the AKS cluster, see [Network firewall requirements](#network-firewall-requirements).
## Authentication
-Container Insights now supports authentication using managed identity (preview). This is a secure and simplified authentication model where the monitoring agent uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster.
+
+Container insights now supports authentication by using managed identity (in preview). This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster.
> [!NOTE]
-> Container Insights preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. Container Insights previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see [Frequently asked questions about Azure Kubernetes Service (AKS)](../../aks/faq.md).
+> Container insights preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available." They're excluded from the service-level agreements and limited warranty. Container insights previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see [Frequently asked questions about Azure Kubernetes Service](../../aks/faq.md).
## Agent
-### Azure Monitor agent
-When using managed identity authentication (preview), Container insights relies on a containerized Azure Monitor agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster, and the agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
+This section reviews the agents used by Container insights.
+### Azure Monitor agent
-### Log Analytics agent
-When not using managed identity authentication, Container insights relies on a containerized Log Analytics agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster, and the agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
+When Container insights uses managed identity authentication (in preview), it relies on a containerized Azure Monitor agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster. The agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
-The agent version is *microsoft/oms:ciprod04202018* or later, and it's represented by a date in the following format: *mmddyyyy*. When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS). To track which versions are released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
+### Log Analytics agent
+When Container insights doesn't use managed identity authentication, it relies on a containerized Log Analytics agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster. The agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
->[!NOTE]
->With the general availability of Windows Server support for AKS, an AKS cluster with Windows Server nodes has a preview agent installed as a daemonset pod on each individual Windows server node to collect logs and forward it to Log Analytics. For performance metrics, a Linux node that's automatically deployed in the cluster as part of the standard deployment collects and forwards the data to Azure Monitor on behalf all Windows nodes in the cluster.
+The agent version is *microsoft/oms:ciprod04202018* or later. It's represented by a date in the following format: *mmddyyyy*. When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on AKS. To track which versions are released, see [Agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
+With the general availability of Windows Server support for AKS, an AKS cluster with Windows Server nodes has a preview agent installed as a daemonset pod on each individual Windows Server node to collect logs and forward them to Log Analytics. For performance metrics, a Linux node that's automatically deployed in the cluster as part of the standard deployment collects and forwards the data to Azure Monitor for all Windows nodes in the cluster.
> [!NOTE]
-> If you've already deployed an AKS cluster and enabled monitoring using either the Azure CLI or a Azure Resource Manager template, you can't use `kubectl` to upgrade, delete, redeploy, or deploy the agent. The template needs to be deployed in the same resource group as the cluster.
+> If you've already deployed an AKS cluster and enabled monitoring by using either the Azure CLI or a Resource Manager template, you can't use `kubectl` to upgrade, delete, redeploy, or deploy the agent. The template needs to be deployed in the same resource group as the cluster.
## Network firewall requirements
The following table lists the proxy and firewall configuration information requi
| `*.monitoring.azure.com` | 443 | | `login.microsoftonline.com` | 443 |
-The following table lists the additional firewall configuration required for managed identity authentication.
+The following table lists the extra firewall configuration required for managed identity authentication.
|Agent resource| Purpose | Port | |--|||
The following table lists the additional firewall configuration required for man
**Azure China 21Vianet cloud**
-The following table lists the proxy and firewall configuration information for Azure China 21Vianet:
+The following table lists the proxy and firewall configuration information for Azure China 21Vianet.
|Agent resource| Purpose | Port | |--||-|
The following table lists the proxy and firewall configuration information for A
| `*.oms.opinsights.azure.cn` | OMS onboarding | 443 | | `dc.services.visualstudio.com` | For agent telemetry that uses Azure Public Cloud Application Insights | 443 | -
-The following table lists the additional firewall configuration required for managed identity authentication.
+The following table lists the extra firewall configuration required for managed identity authentication.
|Agent resource| Purpose | Port | |--|||
The following table lists the additional firewall configuration required for man
**Azure Government cloud**
-The following table lists the proxy and firewall configuration information for Azure US Government:
+The following table lists the proxy and firewall configuration information for Azure US Government.
|Agent resource| Purpose | Port | |--||-|
The following table lists the proxy and firewall configuration information for A
| `*.oms.opinsights.azure.us` | OMS onboarding | 443 | | `dc.services.visualstudio.com` | For agent telemetry that uses Azure Public Cloud Application Insights | 443 |
-The following table lists the additional firewall configuration required for managed identity authentication.
+The following table lists the extra firewall configuration required for managed identity authentication.
|Agent resource| Purpose | Port | |--||| | `global.handler.control.monitor.azure.us` | Access control service | 443 | | `<cluster-region-name>.handler.control.monitor.azure.us` | Fetch data collection rules for specific AKS cluster | 443 | - ## Next steps
-Once you've enabled monitoring, you can begin analyzing the performance of your Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS), Azure Stack, or another environment. To learn how to use Container insights, see [View Kubernetes cluster performance](container-insights-analyze.md).
+
+After you've enabled monitoring, you can begin analyzing the performance of your Kubernetes clusters that are hosted on AKS, Azure Stack, or another environment.
+
+To learn how to use Container insights, see [View Kubernetes cluster performance](container-insights-analyze.md).
azure-monitor Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/partners.md
description: Learn about partners for Azure Monitor and how you can access docum
Previously updated : 10/27/2021 Last updated : 10/27/2022
The following partner products integrate with Azure Monitor. They're listed in alphabetical order.
+This isn't a complete list of partners. The number of partners keeps expanding, and maintaining this list is no longer scalable. As such, we aren't accepting new requests to be added to this list. Any GitHub changes opened will be closed without action. We suggest you use your favorite search engine to locate additional appropriate partners.
+ ## AIMS ![AIMS AIOps logo.](./media/partners/aims.jpg)
Grafana is an open-source application that enables you to visualize metric data
## InfluxData
-![InfluxData logo.](./media/partners/Influxdata.png)
+![InfluxData logo.](./media/partners/influxdata.png)
InfluxData is the creator of InfluxDB, the open-source time series database. Its technology is purpose built to handle the massive volumes of time-stamped data produced by Internet of Things (IoT) devices, applications, networks, containers, and computers.
For more information, see the [Moogsoft documentation](https://www.moogsoft.com/
## New Relic
-![New Relic logo.](./media/partners/newrelic.png)
+![New Relic logo.](./media/partners/newrelic-logo.png)
-See the [New Relic documentation](https://newrelic.com/solutions/partners/azure).
+Microsoft Azure integration monitoring from New Relic gives you an overview of your ecosystem, including cloud migrations, digital transformations, and cloud-native applications, by using the New Relic observability platform.
+
+**New Relic Azure monitoring helps you:**
+* Monitor the entire software stack with full-stack monitoring.
+* Reduce friction between engineers and IT operations teams by identifying, triaging, and delegating application and infrastructure issues quickly.
+* Identify service dependencies through cross-application tracing by using New Relic APM.
+
+For more information, see [New Relic Azure integration](https://newrelic.com/instant-observability/?category=azure&search=azure).
## OpsGenie
For more information, see the [SquaredUp website](https://squaredup.com/).
## Sumo Logic
-![Sumo Logic logo.](./media/partners/SumoLogic.png)
+![Sumo Logic logo.](./media/partners/sumologic.png)
Sumo Logic is a secure, cloud-native analytics service for machine data. It delivers real-time, continuous intelligence from structured, semistructured, and unstructured data across the entire application lifecycle and stack.
For more information, see the [Sumo Logic documentation](https://www.sumologic.c
## Turbonomic
-![Turbonomic logo.](./media/partners/Turbonomic.png)
+![Turbonomic logo.](./media/partners/turbonomic.png)
Turbonomic delivers workload automation for hybrid clouds by simultaneously optimizing performance, cost, and compliance in real time. Turbonomic helps organizations be elastic in their Azure estate by continuously optimizing the estate. Applications constantly get the resources they require to deliver their SLA, and nothing more, across compute, storage, and network for the IaaS and PaaS layer.
Organizations can simulate migrations, properly scale workloads, and retire data
For more information, see the [Turbonomic introduction](https://turbonomic.com/).
+## Zenduty
+
+![Zenduty logo.](./media/partners/zenduty.png)
+
+Zenduty is a collaborative incident management platform that provides end-to-end incident alerting, on-call management, and response orchestration, giving teams greater control and automation over the incident management lifecycle. Zenduty is ideal for always-on services. It helps teams orchestrate incident response to create better user experiences and brand value, and it centralizes all incoming alerts through predefined notification rules so that the right people are notified at the right time.
+
+Zenduty provides your NOC, SRE, and application engineers with detailed context around the Azure Monitor alert, along with playbooks and a complete incident command framework, to triage, remediate, and resolve incidents with speed.
+
+For more information, see the [Zenduty documentation](https://docs.zenduty.com/docs/microsoftazure).
+ ## Partner tools with Event Hubs integration If you use Azure Monitor to route monitoring data to an event hub, you can easily integrate with some external SIEM and monitoring tools. The following partners are known to have integration with the Event Hubs service.
azure-monitor Vminsights Enable Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-hybrid.md
You can download the Dependency agent from these locations:
| File | OS | Version | SHA-256 | |:--|:--|:--|:--|
-| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.14.20760 | D4DB398FAD36E86FEACCC41D7B8AF46711346A943806769B6CE017F0BF1625FF |
-| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.14.20760 | 3DE3B485BA79B57E74B3DFB60FD277A30C8A5D1BD898455AD77FECF20E0E2610 |
+| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.15.22060 | 39427C875E08BF13E1FD3B78E28C96666B722DA675FAA94D8014D8F1A42AE724 |
+| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.15.22060 | 5B99CDEA77C6328BDEF448EAC9A6DEF03CE5A732C5F7C98A4D4F4FFB6220EF58 |
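Before you install an agent, you can verify the downloaded file against the SHA-256 values listed in the table. For example:

```bash
# Verify the Linux installer; the output should match the SHA-256 value listed in the table
sha256sum InstallDependencyAgent-Linux64.bin

# On Windows, the equivalent check is:
#   certutil -hashfile InstallDependencyAgent-Windows.exe SHA256
```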
## Install the Dependency agent on Windows
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 10/14/2022 Last updated : 10/26/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files Standard network features are supported for the following reg
* France Central * Germany West Central * Japan East
-* Japan West
* Korea Central * North Central US * North Europe
Azure NetApp Files Standard network features are supported for the following reg
* South Central US * South India * Southeast Asia
+* Sweden Central
* Switzerland North * UAE Central
+* UAE North
* UK South * West Europe * West US
azure-resource-manager Template Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-object.md
Resource Manager provides several functions for working with objects in your Azu
* [createObject](#createobject) * [empty](#empty) * [intersection](#intersection)
+* [items](#items)
* [json](#json) * [length](#length) * [null](#null)
azure-resource-manager Test Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/test-toolkit.md
You can add the test toolkit to your Azure Pipeline. With a pipeline, you can ru
The easiest way to add the test toolkit to your pipeline is with third-party extensions. The following two extensions are available: -- [Run ARM template TTK Tests](https://marketplace.visualstudio.com/items?itemName=Sam-Cogan.ARMTTKExtension)
+- [Run ARM template TTK Tests](https://marketplace.visualstudio.com/items?itemName=Sam-Cogan.ARMTTKExtensionXPlatform)
- [ARM Template Tester](https://marketplace.visualstudio.com/items?itemName=maikvandergaag.maikvandergaag-arm-ttk) Or, you can implement your own tasks. The following example shows how to download the test toolkit.
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
Before the end of the 30 days of transition state, you can remove access from us
|**Subscription**| The subscription currently contains the classic account and other related resources such as the Media Services.| |**Resource Group**|Select an existing resource or create a new one. The resource group must be the same location as the classic account being connected| |**Azure Video Indexer account** (radio button)| Select the *"Connecting an existing classic account"*.|
- |**Existing account ID**| Enter the ID of existing Azure Video Indexer classic account.|
+ |**Existing account ID**|Select an existing Azure Video Indexer account from the dropdown.|
|**Resource name**|Enter the name of the new Azure Video Indexer account. Default value would be the same name the account had as classic.| |**Location**|The geographic region can't be changed in the connect process, the connected account must stay in the same region. | |**Media Services account name**|The original Media Services account name that was associated with classic account.|
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
This section describes languages supported by Azure Video Indexer API.
- Frame patterns (Only to Hebrew as of now) - Language customization
-| **Language** | **Code** | **Transcription** | **LID** | **MLID** | **Translation** | **Customization** (Language model) |
-|::|:--:|:--:|:-:|:-:|:-:|::|
-| Afrikaans | `af-ZA` | | ✔ | ✔ | ✔ | |
-| Arabic (Israel) | `ar-IL` | ✔ | | | ✔ | ✔ |
-| Arabic (Jordan) | `ar-JO` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Arabic (Kuwait) | `ar-KW` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Arabic (Lebanon) | `ar-LB` | ✔ | | | ✔ | ✔ |
-| Arabic (Oman) | `ar-OM` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Arabic (Palestinian Authority) | `ar-PS` | ✔ | | | ✔ | ✔ |
-| Arabic (Qatar) | `ar-QA` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Arabic (Saudi Arabia) | `ar-SA` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Arabic (United Arab Emirates) | `ar-AE` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Arabic Egypt | `ar-EG` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Arabic Modern Standard (Bahrain) | `ar-BH` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Arabic Syrian Arab Republic | `ar-SY` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Bangla | `bn-BD` | | ✔ | ✔ | ✔ | |
-| Bosnian | `bs-Latn` | | ✔ | ✔ | ✔ | |
-| Bulgarian | `bg-BG` | | ✔ | ✔ | ✔ | |
-| Catalan | `ca-ES` | | ✔ | ✔ | ✔ | |
-| Chinese (Cantonese Traditional) | `zh-HK` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Chinese (Simplified) | `zh-Hans` | ✔ | | | ✔ | ✔ |
-| Chinese (Traditional) | `zh-Hant` | | ✔ | ✔ | ✔ | |
-| Croatian | `hr-HR` | | ✔ | ✔ | ✔ | |
+| **Language** | **Code** | **Transcription** | **LID**\* | **MLID**\* | **Translation** | **Customization** (language model) |
+|::|:--:|:--:|:-:|:-:|:-:|::|
+| Afrikaans | `af-ZA` | | | | | ✔ |
+| Arabic (Israel) | `ar-IL` | ✔ | | | | ✔ |
+| Arabic (Jordan) | `ar-JO` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic (Kuwait) | `ar-KW` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic (Lebanon) | `ar-LB` | ✔ | | | ✔ | ✔ |
+| Arabic (Oman) | `ar-OM` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic (Palestinian Authority) | `ar-PS` | ✔ | | | ✔ | ✔ |
+| Arabic (Qatar) | `ar-QA` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic (Saudi Arabia) | `ar-SA` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic (United Arab Emirates) | `ar-AE` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic Egypt | `ar-EG` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic Modern Standard (Bahrain) | `ar-BH` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Arabic Syrian Arab Republic | `ar-SY` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Bangla | `bn-BD` | | | | ✔ | |
+| Bosnian | `bs-Latn` | | | | ✔ | |
+| Bulgarian | `bg-BG` | | | | ✔ | |
+| Catalan | `ca-ES` | | | | ✔ | |
+| Chinese (Cantonese Traditional) | `zh-HK` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Chinese (Simplified) | `zh-Hans` | ✔ | ✔\* | | ✔ | ✔ |
+| Chinese (Traditional) | `zh-Hant` | | | | ✔ | |
+| Croatian | `hr-HR` | | | | ✔ | |
| Czech | `cs-CZ` | ✔ | ✔ | ✔ | ✔ | ✔ | | Danish | `da-DK` | ✔ | ✔ | ✔ | ✔ | ✔ | | Dutch | `nl-NL` | ✔ | ✔ | ✔ | ✔ | ✔ | | English Australia | `en-AU` | ✔ | ✔ | ✔ | ✔ | ✔ | | English United Kingdom | `en-GB` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| English United States | `en-US` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Estonian | `et-EE` | | ✔ | ✔ | ✔ | |
-| Fijian | `en-FJ` | | ✔ | ✔ | ✔ | |
-| Filipino | `fil-PH` | | ✔ | ✔ | ✔ | |
+| English United States | `en-US` | ✔ | ✔\* | ✔\* | ✔ | ✔ |
+| Estonian | `et-EE` | | | | ✔ | |
+| Fijian | `en-FJ` | | | | ✔ | |
+| Filipino | `fil-PH` | | | | ✔ | |
| Finnish | `fi-FI` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| French | `fr-FR` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| French | `fr-FR` | ✔ | ✔\* | ✔\* | ✔ | ✔ |
| French (Canada) | `fr-CA` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| German | `de-DE` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Greek | `el-GR` | | ✔ | ✔ | ✔ | |
-| Haitian | `fr-HT` | | ✔ | ✔ | ✔ | |
+| German | `de-DE` | ✔ | ✔\* | ✔\* | ✔ | ✔ |
+| Greek | `el-GR` | | | | ✔ | |
+| Haitian | `fr-HT` | | | | ✔ | |
| Hebrew | `he-IL` | ✔ | ✔ | ✔ | ✔ | ✔ | | Hindi | `hi-IN` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Hungarian | `hu-HU` | | ✔ | ✔ | ✔ | |
-| Indonesian | `id-ID` | | ✔ | ✔ | ✔ | |
-| Italian | `it-IT` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Japanese | `ja-JP` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Kiswahili | `sw-KE` | | ✔ | ✔ | ✔ | |
+| Hungarian | `hu-HU` | | | | ✔ | |
+| Indonesian | `id-ID` | | | | ✔ | |
+| Italian | `it-IT` | ✔ | ✔\* | ✔ | ✔ | ✔ |
+| Japanese | `ja-JP` | ✔ | ✔\* | ✔ | ✔ | ✔ |
+| Kiswahili | `sw-KE` | | | | ✔ | |
| Korean | `ko-KR` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Latvian | `lv-LV` | | ✔ | ✔ | ✔ | |
-| Lithuanian | `lt-LT` | | ✔ | ✔ | ✔ | |
-| Malagasy | `mg-MG` | | ✔ | ✔ | ✔ | |
-| Malay | `ms-MY` | | ✔ | ✔ | ✔ | |
-| Maltese | `mt-MT` | | ✔ | ✔ | ✔ | |
+| Latvian | `lv-LV` | | | | ✔ | |
+| Lithuanian | `lt-LT` | | | | ✔ | |
+| Malagasy | `mg-MG` | | | | ✔ | |
+| Malay | `ms-MY` | | | | ✔ | |
+| Maltese | `mt-MT` | | | | ✔ | |
| Norwegian | `nb-NO` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Persian | `fa-IR` | ✔ | | | ✔ | ✔ |
+| Persian | `fa-IR` | ✔ | | | ✔ | ✔ |
| Polish | `pl-PL` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Portuguese | `pt-BR` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Portuguese | `pt-BR` | ✔ | ✔\* | ✔ | ✔ | ✔ |
| Portuguese (Portugal) | `pt-PT` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Romanian | `ro-RO` | | ✔ | ✔ | ✔ | |
-| Russian | `ru-RU` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Samoan | `en-WS` | | ✔ | ✔ | ✔ | |
-| Serbian (Cyrillic) | `sr-Cyrl-RS` | | ✔ | ✔ | ✔ | |
-| Serbian (Latin) | `sr-Latn-RS` | | ✔ | ✔ | ✔ | |
-| Slovak | `sk-SK` | | ✔ | ✔ | ✔ | |
-| Slovenian | `sl-SI` | | ✔ | ✔ | ✔ | |
-| Spanish | `es-ES` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Spanish (Mexico) | `es-MX` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Swedish | `sv-SE` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Tamil | `ta-IN` | | ✔ | ✔ | ✔ | |
-| Thai | `th-TH` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Tongan | `to-TO` | | ✔ | ✔ | ✔ | |
+| Romanian | `ro-RO` | | | | ✔ | |
+| Russian | `ru-RU` | ✔ | ✔\* | ✔ | ✔ | ✔ |
+| Samoan | `en-WS` | | | | ✔ | |
+| Serbian (Cyrillic) | `sr-Cyrl-RS` | | | | ✔ | |
+| Serbian (Latin) | `sr-Latn-RS` | | | | ✔ | |
+| Slovak | `sk-SK` | | | | ✔ | |
+| Slovenian | `sl-SI` | | | | ✔ | |
+| Spanish | `es-ES` | ✔ | ✔\* | ✔\* | ✔ | ✔ |
+| Spanish (Mexico) | `es-MX` | ✔ | | | ✔ | ✔ |
+| Swedish | `sv-SE` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Tamil | `ta-IN` | | | | ✔ | |
+| Thai | `th-TH` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Tongan | `to-TO` | | | | ✔ | |
| Turkish | `tr-TR` | ✔ | ✔ | ✔ | ✔ | ✔ | | Ukrainian | `uk-UA` | ✔ | ✔ | ✔ | ✔ | |
-| Urdu | `ur-PK` | | ✔ | ✔ | ✔ | |
+| Urdu | `ur-PK` | | | | ✔ | |
| Vietnamese | `vi-VN` | ✔ | ✔ | ✔ | ✔ | |
+\* By default, languages marked with \* are supported by language identification (LID) and/or multi-language identification (MLID) auto-detection. When you [upload a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) through the API, you can use the `customLanguages` parameter to have one or more of the other supported languages in the table above auto-detected by LID or MLID. The `customLanguages` parameter accepts up to 10 languages.
+
+> [!NOTE]
+> To change the default languages, set the `customLanguages` parameter. Setting this parameter replaces the default languages supported by language identification (LID) and multi-language identification (MLID).
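
As a rough illustration of how the `customLanguages` parameter might be passed to the upload API linked above, here's a minimal PowerShell sketch. The location, account ID, access token, video name, and video URL are placeholder assumptions; only the `customLanguages` parameter comes from this article, so check the API portal for the exact parameter set.

```powershell
# Illustrative sketch only: replace the placeholder values with your own.
$location    = "<location>"        # for example, your Azure region
$accountId   = "<account-id>"
$accessToken = "<access-token>"
$videoUrl    = [uri]::EscapeDataString("https://example.com/video.mp4")

# Up to 10 languages can be listed for LID/MLID detection.
$customLanguages = "ar-SA,he-IL,fa-IR"

$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos" +
       "?name=sample-video" +
       "&videoUrl=$videoUrl" +
       "&accessToken=$accessToken" +
       "&customLanguages=$customLanguages"

# Submit the upload request.
Invoke-RestMethod -Method Post -Uri $uri
```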
+ ## Language support in frontend experiences The following table describes language support in the Azure Video Indexer frontend experiences.
azure-vmware Concepts Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-run-command.md
+ # Run command in Azure VMware Solution In Azure VMware Solution, vCenter Server has a built-in local user called *cloudadmin* assigned to the CloudAdmin role. The CloudAdmin role has vCenter Server [privileges](concepts-identity.md#vcenter-server-access-and-identity) that differ from other VMware cloud solutions and on-premises deployments. The Run command feature lets you perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets.
azure-vmware Configure Alerts For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-alerts-for-azure-vmware-solution.md
Title: Configure alerts and work with metrics in Azure VMware Solution description: Learn how to use alerts to receive notifications. Also learn how to work with metrics to gain deeper insights into your Azure VMware Solution private cloud. + Last updated 07/23/2021
azure-vmware Configure Github Enterprise Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-github-enterprise-server.md
Title: Configure GitHub Enterprise Server on Azure VMware Solution
description: Learn how to Set up GitHub Enterprise Server on your Azure VMware Solution private cloud. Previously updated : 07/07/2021 Last updated : 10/25/2022+ # Configure GitHub Enterprise Server on Azure VMware Solution
azure-vmware Configure Hcx Network Extension High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-hcx-network-extension-high-availability.md
Title: Configure HCX network extension high availability
description: Learn how to configure HCX network extension high availability Previously updated : 05/06/2022 Last updated : 10/26/2022+ # HCX Network extension high availability (HA)
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md
Title: Configure VMware HCX in Azure VMware Solution
description: Configure the on-premises VMware HCX Connector for your Azure VMware Solution private cloud. Previously updated : 09/07/2021 Last updated : 10/26/2022+ # Configure on-premises VMware HCX Connector
azure-vmware Connect Multiple Private Clouds Same Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/connect-multiple-private-clouds-same-region.md
Title: Connect multiple Azure VMware Solution private clouds in the same region
description: Learn how to create a network connection between two or more Azure VMware Solution private clouds located in the same region. Previously updated : 09/20/2021 Last updated : 10/26/2022+ # Connect multiple Azure VMware Solution private clouds in the same region
azure-vmware Deploy Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-azure-vmware-solution.md
Title: Deploy and configure Azure VMware Solution description: Learn how to use the information gathered in the planning stage to deploy and configure the Azure VMware Solution private cloud. -+ Previously updated : 07/28/2021 Last updated : 10/22/2022
azure-vmware Deploy Disaster Recovery Using Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-vmware-hcx.md
Title: Deploy disaster recovery using VMware HCX
description: Learn how to deploy disaster recovery of your virtual machines (VMs) with VMware HCX Disaster Recovery. Also learn how to use Azure VMware Solution as the recovery or target site. Previously updated : 06/10/2021 Last updated : 10/26/2022+ # Deploy disaster recovery using VMware HCX
azure-vmware Deploy Traffic Manager Balance Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-traffic-manager-balance-workloads.md
Title: Deploy Traffic Manager to balance Azure VMware Solution workloads
description: Learn how to integrate Traffic Manager with Azure VMware Solution to balance application workloads across multiple endpoints in different regions. Previously updated : 02/08/2021 Last updated : 10/26/2022++ # Deploy Azure Traffic Manager to balance Azure VMware Solution workloads
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
Title: Deploy Zerto disaster recovery on Azure VMware Solution
description: Learn how to implement Zerto disaster recovery for on-premises VMware or Azure VMware Solution virtual machines. Previously updated : 10/25/2021- Last updated : 10/26/2022+ # Deploy Zerto disaster recovery on Azure VMware Solution
azure-vmware Ecosystem Back Up Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-back-up-vms.md
Title: Backup solutions for Azure VMware Solution virtual machines
description: Learn about leading backup and restore solutions for your Azure VMware Solution virtual machines. Previously updated : 04/21/2021 Last updated : 10/26/2022+ # Backup solutions for Azure VMware Solution virtual machines (VMs)
Back up network traffic between Azure VMware Solution VMs and the backup reposit
>[!NOTE] >For common questions, see [our third-party backup solution FAQ](./faq.yml). -- You can find more information on these backup solutions here: - [Cohesity](https://www.cohesity.com/blogs/expanding-cohesitys-support-for-microsofts-ecosystem-azure-stack-and-azure-vmware-solution/) - [Commvault](https://documentation.commvault.com/11.21/essential/128997_support_for_azure_vmware_solution.html)
azure-vmware Ecosystem Migration Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-migration-vms.md
Title: Migration solutions for Azure VMware Solution virtual machines
description: Learn about leading migration solutions for your Azure VMware Solution virtual machines. Previously updated : 03/22/2021 Last updated : 10/26/2022+ # Migration solutions for Azure VMware Solution virtual machines (VMs)
azure-vmware Move Azure Vmware Solution Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/move-azure-vmware-solution-across-regions.md
Title: Move Azure VMware Solution resources across regions description: This article describes how to move Azure VMware Solution resources from one Azure region to another. -+ Last updated 04/11/2022
azure-vmware Move Ea Csp Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/move-ea-csp-subscriptions.md
Title: Move Azure VMware Solution subscription to another subscription description: This article describes how to move Azure VMware Solution subscription to another subscription. You might move your resources for various reasons, such as billing. -+ Previously updated : 04/26/2021 Last updated : 10/26/2022 # Customer intent: As an Azure service administrator, I want to move my Azure VMware Solution subscription to another subscription.
Last updated 04/26/2021
This article describes how to move an Azure VMware Solution subscription to another subscription. You might move your subscription for various reasons, like billing. ## Prerequisites
-You should have at least contributor rights on both **source** and **target** subscriptions.
+
+You should have at least contributor rights on both **source** and **target** subscriptions.
>[!IMPORTANT]
->VNet and VNet gateway cannot be moved from one subscription to another. Additionally, moving your subscriptions has no impact on the management and workloads, like the vCenter, NSX, and workload virtual machines.
+>VNet and VNet gateway can't be moved from one subscription to another. Additionally, moving your subscriptions has no impact on the management and workloads, like the vCenter, NSX, and workload virtual machines.
-## Prepare and move
+## Prepare and move
1. In the Azure portal, select the private cloud you want to move.
- :::image type="content" source="media/move-subscriptions/source-subscription-id.png" alt-text="Screenshot that shows the overview details of the selected private cloud.":::
+   :::image type="content" source="media/move-subscriptions/source-subscription-id.png" alt-text="Screenshot that shows the overview details of the selected private cloud." lightbox="media/move-subscriptions/source-subscription-id.png":::
1. From a command prompt, ping the components and workloads to verify that they are pinging from the same subscription.
You should have at least contributor rights on both **source** and **target** su
1. Select the **Subscription (change)** link.
- :::image type="content" source="media/move-subscriptions/private-cloud-overview-subscription-id.png" alt-text="Screenshot showing the private cloud details.":::
+   :::image type="content" source="media/move-subscriptions/private-cloud-overview-subscription-id.png" alt-text="Screenshot showing the private cloud details." lightbox="media/move-subscriptions/private-cloud-overview-subscription-id.png":::
1. Provide the subscription details for **Target** and select **Next**.
- :::image type="content" source="media/move-subscriptions/move-resources-subscription-target.png" alt-text="Screenshot of the target resource.":::
+   :::image type="content" source="media/move-subscriptions/move-resources-subscription-target.png" alt-text="Screenshot of the target resource." lightbox="media/move-subscriptions/move-resources-subscription-target.png":::
-1. Confirm the validation of the resources you selected to move. During the validation, you'll see **Pending validation** for the status.
+1. Confirm the validation of the resources you selected to move. During the validation, you'll see *Pending validation* under **Validation status**.
- :::image type="content" source="media/move-subscriptions/pending-move-resources-subscription-target.png" alt-text="Screenshot showing the resource being moved.":::
+   :::image type="content" source="media/move-subscriptions/pending-move-resources-subscription-target.png" alt-text="Screenshot showing the resource being moved." lightbox="media/move-subscriptions/pending-move-resources-subscription-target.png":::
1. Once the validation is successful, select **Next** to start the migration of your private cloud.
- :::image type="content" source="media/move-subscriptions/move-resources-succeeded.png" alt-text=" Screenshot showing the validation status of Succeeded.":::
+   :::image type="content" source="media/move-subscriptions/move-resources-succeeded.png" alt-text="Screenshot showing the validation status of Succeeded." lightbox="media/move-subscriptions/move-resources-succeeded.png":::
1. Select the check box indicating you understand that the tools and scripts associated won't work until you update them to use the new resource IDs. Then select **Move**.
- :::image type="content" source="media/move-subscriptions/review-move-resources-subscription-target.png" alt-text="Screenshot showing the summary of the selected resource being moved.":::
+   :::image type="content" source="media/move-subscriptions/review-move-resources-subscription-target.png" alt-text="Screenshot showing the summary of the selected resource being moved." lightbox="media/move-subscriptions/review-move-resources-subscription-target.png":::
## Verify the move
-A notification appears once the resource move is complete.
+A notification appears once the resource move is complete.
The new subscription appears in the private cloud Overview. ## Next steps+ Learn more about: - [Move Azure VMware Solution across regions](move-azure-vmware-solution-across-regions.md)
azure-vmware Plan Private Cloud Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/plan-private-cloud-deployment.md
Title: Plan the Azure VMware Solution deployment description: Learn how to plan your Azure VMware Solution deployment. -+ Previously updated : 09/27/2021 Last updated : 10/26/2022 # Plan the Azure VMware Solution deployment Planning your Azure VMware Solution deployment is critical for a successful production-ready environment for creating virtual machines (VMs) and migration. During the planning process, you'll identify and gather what's needed for your deployment. As you plan, make sure to document the information you gather for easy reference during the deployment. A successful deployment results in a production-ready environment for creating virtual machines (VMs) and migration.
-In this how-to, you'll:
+In this how-to article, you'll do the following tasks:
> [!div class="checklist"] > * Identify the Azure subscription, resource group, region, and resource name
In this how-to, you'll:
> * Define the virtual network gateway > * Define VMware HCX network segments
-After you're finished, follow the recommended next steps at the end to continue with this getting started guide.
-
+After you're finished, follow the recommended [Next steps](#next-steps) at the end of this article to continue with this getting started guide.
## Identify the subscription
Identify the resource group you want to use for your Azure VMware Solution. Gen
## Identify the region or location
-Identify the [region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-vmware) you want Azure VMware Solution deployed.
+Identify the [region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-vmware) you want Azure VMware Solution deployed.
## Define the resource name The resource name is a friendly and descriptive name in which you title your Azure VMware Solution private cloud, for example, **MyPrivateCloud**. >[!IMPORTANT]
->The name must not exceed 40 characters. If the name exceeds this limit, you won't be able to create public IP addresses for use with the private cloud.
+>The name must not exceed 40 characters. If the name exceeds this limit, you won't be able to create public IP addresses for use with the private cloud.
## Identify the size hosts
The first Azure VMware Solution deployment you do consists of a private cloud co
[!INCLUDE [hosts-minimum-initial-deployment-statement](includes/hosts-minimum-initial-deployment-statement.md)] - >[!NOTE] >To learn about the limits for the number of hosts per cluster, the number of clusters per private cloud, and the number of hosts per private cloud, check [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-vmware-solution-limits).
-## Request a host quota
+## Request a host quota
It's crucial to request a host quota early, so after you've finished the planning process, you're ready to deploy your Azure VMware Solution private cloud. Before requesting a host quota, make sure you've identified the Azure subscription, resource group, and region. Also, make sure you've identified the size hosts and determine the number of clusters and hosts you'll need.
After the support team receives your request for a host quota, it takes up to fi
- [EA customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-ea-and-mca-customers) - [CSP customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-csp-customers) - ## Define the IP address segment for private cloud management
-Azure VMware Solution requires a /22 CIDR network, for example, `10.0.0.0/22`. This address space is carved into smaller network segments (subnets) and used for Azure VMware Solution management segments, including vCenter Server, VMware HCX, NSX-T Data Center, and vMotion functionality. The diagram highlights Azure VMware Solution management IP address segments.
+Azure VMware Solution requires a /22 CIDR network, for example, `10.0.0.0/22`. This address space is carved into smaller network segments (subnets) and used for Azure VMware Solution management segments, including: vCenter Server, VMware HCX, NSX-T Data Center, and vMotion functionality. The diagram highlights Azure VMware Solution management IP address segments.
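
For a quick sanity check of the sizing above, the following PowerShell sketch works out the arithmetic for the example `10.0.0.0/22` block. It's illustrative only; the actual per-segment breakdown used by the private cloud is described in the routing and subnet considerations article linked in the note that follows.

```powershell
# Illustrative arithmetic only: shows the size of the example /22 management block.
$prefixLength    = 22
$totalAddresses  = [math]::Pow(2, 32 - $prefixLength)   # 1024 addresses: 10.0.0.0 - 10.0.3.255
$possibleSlash24 = [math]::Pow(2, 24 - $prefixLength)   # the block could hold 4 /24 segments
Write-Output "A /$prefixLength block provides $totalAddresses addresses ($possibleSlash24 x /24 segments)"
```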
>[!IMPORTANT] >The /22 CIDR network address block shouldn't overlap with any existing network segment you already have on-premises or in Azure. For details of how the /22 CIDR network is broken down per private cloud, see [Routing and subnet considerations](tutorial-network-checklist.md#routing-and-subnet-considerations). -- ## Define the IP address segment for VM workloads
-Like with any VMware vSphere environment, the VMs must connect to a network segment. As the production deployment of Azure VMware Solution expands, there is often a combination of L2 extended segments from on-premises and local NSX-T network segments.
+Like with any VMware vSphere environment, the VMs must connect to a network segment. As the production deployment of Azure VMware Solution expands, there's often a combination of L2 extended segments from on-premises and local NSX-T network segments.
-For the initial deployment, identify a single network segment (IP network), for example, `10.0.4.0/24`. This network segment is used primarily for testing purposes during the initial deployment. The address block shouldn't overlap with any network segments on-premises or within Azure and shouldn't be within the /22 network segment already defined.
+For the initial deployment, identify a single network segment (IP network), for example, `10.0.4.0/24`. This network segment is used primarily for testing purposes during the initial deployment. The address block shouldn't overlap with any network segments on-premises or within Azure and shouldn't be within the /22 network segment already defined.
- ## Define the virtual network gateway
-Azure VMware Solution requires an Azure Virtual Network and an ExpressRoute circuit. Define whether you want to use an *existing* OR *new* ExpressRoute virtual network gateway. If you decide to use a *new* virtual network gateway, you'll create it after creating your private cloud. It's acceptable to use an existing ExpressRoute virtual network gateway, and for planning purposes, make a note of which ExpressRoute virtual network gateway you'll use.
+Azure VMware Solution requires an Azure Virtual Network and an ExpressRoute circuit. Define whether you want to use an *existing* OR *new* ExpressRoute virtual network gateway. If you decide to use a *new* virtual network gateway, you'll create it after creating your private cloud. It's acceptable to use an existing ExpressRoute virtual network gateway. For planning purposes, make a note of which ExpressRoute virtual network gateway you'll use.
>[!IMPORTANT] >You can connect to a virtual network gateway in an Azure Virtual WAN, but it is out of scope for this quick start. ## Define VMware HCX network segments
-VMware HCX is an application mobility platform that simplifies application migration, workload rebalancing, and business continuity across data centers and clouds. You can migrate your VMware vSphere workloads to Azure VMware Solution and other connected sites through various migration types.
+VMware HCX is an application mobility platform that simplifies application migration, workload rebalancing, and business continuity across data centers and clouds. You can migrate your VMware vSphere workloads to Azure VMware Solution and other connected sites through various migration types.
-VMware HCX Connector deploys a subset of virtual appliances (automated) that require multiple IP segments. When you create your network profiles, you use the IP segments. Identify the following for the VMware HCX deployment, which supports a pilot or small product use case. Depending on the needs of your migration, modify as necessary.
+VMware HCX Connector deploys a subset of virtual appliances (automated) that require multiple IP segments. When you create your network profiles, you use the IP segments. Identify the following listed items for the VMware HCX deployment, which supports a pilot or small product use case. Depending on the needs of your migration, modify as necessary.
- **Management network:** When deploying VMware HCX on-premises, you'll need to identify a management network for VMware HCX. Typically, it's the same management network used by your on-premises VMware vSphere cluster. At a minimum, identify **two** IPs on this network segment for VMware HCX. You might need larger numbers, depending on the scale of your deployment beyond the pilot or small use case. >[!NOTE] >Preparing for large environments, instead of using the management network used for the on-premises VMware vSphere cluster, create a new /26 network and present that network as a port group to your on-premises VMware vSphere cluster. You can then create up to 10 service meshes and 60 network extenders (-1 per service mesh). You can stretch **eight** networks per network extender by using Azure VMware Solution private clouds. -- **Uplink network:** When deploying VMware HCX on-premises, you'll need to identify an Uplink network for VMware HCX. Use the same network which you'll use for the Management network.
+- **Uplink network:** When deploying VMware HCX on-premises, you'll need to identify an Uplink network for VMware HCX. Use the same network you plan to use for the Management network.
- **vMotion network:** When deploying VMware HCX on-premises, you'll need to identify a vMotion network for VMware HCX. Typically, it's the same network used for vMotion by your on-premises VMware vSphere cluster. At a minimum, identify **two** IPs on this network segment for VMware HCX. You might need larger numbers, depending on the scale of your deployment beyond the pilot or small use case.
VMware HCX Connector deploys a subset of virtual appliances (automated) that req
>[!NOTE] >Many VMware vSphere environments use non-routed network segments for vMotion, which poses no problems. -- **Replication network:** When deploying VMware HCX on-premises, you'll need to define a replication network. Use the same network as you are using for your Management and Uplink networks. If the on-premises cluster hosts use a dedicated Replication VMkernel network, reserve **two** IP addresses in this network segment and use the Replication VMkernel network for the replication network.-
+- **Replication network:** When deploying VMware HCX on-premises, you'll need to define a replication network. Use the same network you're using for your Management and Uplink networks. If the on-premises cluster hosts use a dedicated Replication VMkernel network, reserve **two** IP addresses in this network segment and use the Replication VMkernel network for the replication network.
## Determine whether to extend your networks
Optionally, you can extend network segments from on-premises to Azure VMware Sol
>[!IMPORTANT] >These networks are extended as a final step of the configuration, not during deployment. - ## Next steps+ Now that you've gathered and documented the information needed, continue to the next tutorial to create your Azure VMware Solution private cloud. > [!div class="nextstepaction"]
azure-vmware Protect Azure Vmware Solution With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/protect-azure-vmware-solution-with-application-gateway.md
Title: Protect web apps on Azure VMware Solution with Azure Application Gateway
description: Configure Azure Application Gateway to securely expose your web apps running on Azure VMware Solution. Previously updated : 02/10/2021 Last updated : 10/26/2022+ # Protect web apps on Azure VMware Solution with Azure Application Gateway [Azure Application Gateway](https://azure.microsoft.com/services/application-gateway/) is a layer 7 web traffic load balancer that lets you manage traffic to your web applications. It's offered in both Azure VMware Solution v1.0 and v2.0. Both versions tested with web apps running on Azure VMware Solution.
-The capabilities include:
+The capabilities include:
+ - Cookie-based session affinity - URL-based routing - Web Application Firewall (WAF)
-For a complete list of features, see [Azure Application Gateway features](../application-gateway/features.md).
+For a complete list of features, see [Azure Application Gateway features](../application-gateway/features.md).
This article shows you how to use Application Gateway in front of a web server farm to protect a web app running on Azure VMware Solution. ## Topology
-The diagram shows how Application Gateway is used to protect Azure IaaS virtual machines (VMs), Azure virtual machine scale sets, or on-premises servers. Application Gateway treats Azure VMware Solution VMs as on-premises servers.
+The diagram shows how Application Gateway is used to protect Azure IaaS virtual machines (VMs), Azure Virtual Machine Scale Sets, or on-premises servers. Application Gateway treats Azure VMware Solution VMs as on-premises servers.
+ > [!IMPORTANT] > Azure Application Gateway is currently the only supported method to expose web apps running on Azure VMware Solution VMs. The diagram shows the testing scenario used to validate the Application Gateway with Azure VMware Solution web applications. The Application Gateway instance gets deployed on the hub in a dedicated subnet with an Azure public IP address. Activating the [Azure DDoS Protection Standard](../ddos-protection/ddos-protection-overview.md) for the virtual network is recommended. The web server is hosted on an Azure VMware Solution private cloud behind NSX T0 and T1 Gateways. Additionally, Azure VMware Solution uses [ExpressRoute Global Reach](../expressroute/expressroute-global-reach.md) to enable communication with the hub and on-premises systems. ## Prerequisites -- An Azure account with an active subscription.
+- An Azure account with an active subscription.
- An Azure VMware Solution private cloud deployed and running. ## Deployment and configuration 1. In the Azure portal, search for **Application Gateway** and select **Create application gateway**.
-2. Provide the basic details as in the following figure; then select **Next: Frontends>**.
+2. Provide the basic details as in the following figure; then select **Next: Frontends>**.
- :::image type="content" source="media/application-gateway/create-app-gateway.png" alt-text="Screenshot showing Create application gateway page in Azure portal.":::
+   :::image type="content" source="media/application-gateway/create-app-gateway.png" alt-text="Screenshot showing Create application gateway page in Azure portal." lightbox="media/application-gateway/create-app-gateway.png":::
3. Choose the frontend IP address type. For public, choose an existing public IP address or create a new one. Select **Next: Backends>**.
The Application Gateway instance gets deployed on the hub in a dedicated subnet
5. On the **Configuration** tab, select **Add a routing rule**.
-6. On the **Listener** tab, provide the details for the listener. If HTTPS is selected, you must provide a certificate, either from a PFX file or an existing Azure Key Vault certificate.
+6. On the **Listener** tab, provide the details for the listener. If HTTPS is selected, you must provide a certificate, either from a PFX file or an existing Azure Key Vault certificate.
7. Select the **Backend targets** tab and select the backend pool previously created. For the **HTTP settings** field, select **Add new**. 8. Configure the parameters for the HTTP settings. Select **Add**.
-9. If you want to configure path-based rules, select **Add multiple targets to create a path-based rule**.
+9. If you want to configure path-based rules, select **Add multiple targets to create a path-based rule**.
-10. Add a path-based rule and select **Add**. Repeat to add more path-based rules.
+10. Add a path-based rule and select **Add**. Repeat to add more path-based rules.
-11. When you have finished adding path-based rules, select **Add** again; then select **Next: Tags>**.
+11. When you have finished adding path-based rules, select **Add** again; then select **Next: Tags>**.
12. Add tags and then select **Next: Review + Create>**.
The Application Gateway instance gets deployed on the hub in a dedicated subnet
## Configuration examples
-Now you'll configure Application Gateway with Azure VMware Solution VMs as backend pools for the following use cases:
+Now you'll configure Application Gateway with Azure VMware Solution VMs as backend pools for the following use cases:
- [Hosting multiple sites](#hosting-multiple-sites) - [Routing by URL](#routing-by-url) ### Hosting multiple sites
-This procedure shows you how to define backend address pools using VMs running on an Azure VMware Solution private cloud on an existing application gateway.
+This procedure shows you how to define backend address pools using VMs running on an Azure VMware Solution private cloud on an existing application gateway.
>[!NOTE] >This procedure assumes you have multiple domains, so we'll use examples of www.contoso.com and www.fabrikam.com.
+1. In your private cloud, create two different pools of VMs. One represents Contoso and the second Fabrikam.
-1. In your private cloud, create two different pools of VMs. One represents Contoso and the second Fabrikam.
+   :::image type="content" source="media/application-gateway/app-gateway-multi-backend-pool.png" alt-text="Screenshot showing summary of a web server's details in VMware vSphere Client." lightbox="media/application-gateway/app-gateway-multi-backend-pool.png":::
- :::image type="content" source="media/application-gateway/app-gateway-multi-backend-pool.png" alt-text="Screenshot showing summary of a web server's details in VSphere Client.":::
-
- We've used Windows Server 2016 with the Internet Information Services (IIS) role installed. Once the VMs are installed, run the following PowerShell commands to configure IIS on each of the VMs.
+ We've used Windows Server 2016 with the Internet Information Services (IIS) role installed. Once the VMs are installed, run the following PowerShell commands to configure IIS on each of the VMs.
```powershell Install-WindowsFeature -Name Web-Server
This procedure shows you how to define backend address pools using VMs running o
The following steps define backend address pools using VMs running on an Azure VMware Solution private cloud. The private cloud is on an existing application gateway. You then create routing rules that make sure web traffic arrives at the appropriate servers in the pools.
-1. In your private cloud, create a virtual machine pool to represent the web farm.
+1. In your private cloud, create a virtual machine pool to represent the web farm.
- :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool.png" alt-text="Screenshot of page in VMSphere Client showing summary of another VM.":::
+   :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool.png" alt-text="Screenshot of page in VMware vSphere Client showing summary of another VM." lightbox="media/application-gateway/app-gateway-url-route-backend-pool.png":::
- Windows Server 2016 with IIS role installed has been used to illustrate this tutorial. Once the VMs are installed, run the following PowerShell commands to configure IIS for each VM tutorial.
+   Windows Server 2016 with the IIS role installed has been used to illustrate this tutorial. Once the VMs are installed, run the following PowerShell commands to configure IIS on each VM.
The first virtual machine, contoso-web-01, hosts the main website.
The following steps define backend address pools using VMs running on an Azure V
``` The second virtual machine, contoso-web-02, hosts the images site.
-
+ ```powershell Install-WindowsFeature -Name Web-Server New-Item -Path "C:\inetpub\wwwroot\" -Name "images" -ItemType "directory"
The following steps define backend address pools using VMs running on an Azure V
Add-Content -Path C:\inetpub\wwwroot\video\test.htm -Value $($env:computername) ```
-2. Add three new backend pools in an existing application gateway instance.
+2. Add three new backend pools in an existing application gateway instance.
1. Select **Backend pools** from the left menu.
- 1. Select **Add** and enter the details of the first pool, **contoso-web**.
- 1. Add one VM as the target.
- 1. Select **Add**.
- 1. Repeat this process for **contoso-images** and **contoso-video**, adding one unique VM as the target.
+ 1. Select **Add** and enter the details of the first pool, **contoso-web**.
+ 1. Add one VM as the target.
+ 1. Select **Add**.
+ 1. Repeat this process for **contoso-images** and **contoso-video**, adding one unique VM as the target.
:::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-02.png" alt-text="Screenshot of Backend pools page showing the addition of three new backend pools." lightbox="media/application-gateway/app-gateway-url-route-backend-pool-02.png":::
The following steps define backend address pools using VMs running on an Azure V
4. On the left navigation, select **HTTP settings** and select **Add** in the left pane. Fill in the details to create a new HTTP setting and select **Save**.
- :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-04.png" alt-text="Screenshot of Add HTTP setting page showing HTTP settings configuration.":::
+   :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-04.png" alt-text="Screenshot of Add HTTP setting page showing HTTP settings configuration." lightbox="media/application-gateway/app-gateway-url-route-backend-pool-04.png":::
5. Create the rules in the **Rules** section of the left menu and associate each rule with the previously created listener. Then configure the main backend pool and HTTP settings, and then select **Add**.
- :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-07.png" alt-text="Screenshot of Add a routing rule page to configure routing rules to a backend target.":::
+   :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-07.png" alt-text="Screenshot of Add a routing rule page to configure routing rules to a backend target." lightbox="media/application-gateway/app-gateway-url-route-backend-pool-07.png":::
-6. Test the configuration. Access the application gateway on the Azure portal and copy the public IP address in the **Overview** section.
+6. Test the configuration. Access the application gateway on the Azure portal and copy the public IP address in the **Overview** section. A PowerShell-based check is also sketched after these steps.
- 1. Open a new browser window and enter the URL `http://<app-gw-ip-address>:8080`.
+ 1. Open a new browser window and enter the URL `http://<app-gw-ip-address>:8080`.
- :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-08.png" alt-text="Screenshot of browser page showing successful test of the configuration.":::
+   :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-08.png" alt-text="Screenshot of browser page showing successful test of the configuration." lightbox="media/application-gateway/app-gateway-url-route-backend-pool-08.png":::
1. Change the URL to `http://<app-gw-ip-address>:8080/images/test.htm`.
- :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-09.png" alt-text="Screenshot of another successful test with the new URL.":::
+   :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-09.png" alt-text="Screenshot of another successful test with the new URL." lightbox="media/application-gateway/app-gateway-url-route-backend-pool-09.png":::
1. Change the URL again to `http://<app-gw-ip-address>:8080/video/test.htm`.
- :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-10.png" alt-text="Screenshot of successful test with the final URL.":::
+   :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-10.png" alt-text="Screenshot of successful test with the final URL." lightbox="media/application-gateway/app-gateway-url-route-backend-pool-10.png":::
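
As an alternative to a browser, you can run the same checks from PowerShell. This is a minimal sketch; `<app-gw-ip-address>` is the same placeholder used in the steps above.

```powershell
# Replace the placeholder with the public IP copied from the application gateway's Overview section.
$gatewayIp = "<app-gw-ip-address>"

# Each request should be answered by the corresponding backend pool.
Invoke-WebRequest -Uri "http://${gatewayIp}:8080/" -UseBasicParsing | Select-Object -ExpandProperty Content
Invoke-WebRequest -Uri "http://${gatewayIp}:8080/images/test.htm" -UseBasicParsing | Select-Object -ExpandProperty Content
Invoke-WebRequest -Uri "http://${gatewayIp}:8080/video/test.htm" -UseBasicParsing | Select-Object -ExpandProperty Content
```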
## Next Steps
azure-web-pubsub Tutorial Serverless Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-iot.md
Open function host index page: `http://localhost:7071/api/index` to view the rea
> [Azure Web PubSub bindings for Azure Functions](./reference-functions-bindings.md) > [!div class="nextstepaction"]
-> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
backup Backup Azure Backup Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-sharepoint.md
Title: Back up a SharePoint farm to Azure with DPM description: This article provides an overview of DPM/Azure Backup server protection of a SharePoint farm to Azure- Previously updated : 03/09/2020+ Last updated : 10/27/2022++++
-# Back up a SharePoint farm to Azure with DPM
+# Back up a SharePoint farm to Azure with Data Protection Manager
-You back up a SharePoint farm to Microsoft Azure by using System Center Data Protection Manager (DPM) in much the same way that you back up other data sources. Azure Backup provides flexibility in the backup schedule to create daily, weekly, monthly, or yearly backup points and gives you retention policy options for various backup points. DPM provides the capability to store local disk copies for quick recovery-time objectives (RTO) and to store copies to Azure for economical, long-term retention.
-Backing up SharePoint to Azure with DPM is a very similar process to backing up SharePoint to DPM locally. Particular considerations for Azure will be noted in this article.
+This article describes how to back up and restore SharePoint data using System Center Data Protection Manager (DPM). The backup operation of SharePoint to Azure with DPM is similar to SharePoint backup to DPM locally.
-## SharePoint supported versions and related protection scenarios
+System Center Data Protection Manager (DPM) enables you to back up a SharePoint farm to Microsoft Azure, which gives an experience similar to backing up other data sources. Azure Backup provides flexibility in the backup schedule to create daily, weekly, monthly, or yearly backup points, and gives you retention policy options for various backup points. DPM provides the capability to store local disk copies for quick recovery-time objectives (RTO) and to store copies to Azure for economical, long-term retention.
-For a list of supported SharePoint versions and the DPM versions required to back them up see [What can DPM back up?](/system-center/dpm/dpm-protection-matrix#applications-backup)
+In this article, you'll learn about:
-## Before you start
+> [!div class="checklist"]
+> - SharePoint supported scenarios
+> - Prerequisites
+> - Configure backup
+> - Monitor operations
+> - Restore SharePoint data
+> - Restore a SharePoint database from Azure using DPM
+> - Switch the Front-End Web Server
-There are a few things you need to confirm before you back up a SharePoint farm to Azure.
+## SharePoint supported scenarios
-### Prerequisites
+For information on the supported SharePoint versions and the DPM versions required to back them up, see [What can DPM back up?](/system-center/dpm/dpm-protection-matrix#applications-backup).
-Before you proceed, make sure that you have met all the [prerequisites for using Microsoft Azure Backup](backup-azure-dpm-introduction.md#prerequisites-and-limitations) to protect workloads. Some tasks for prerequisites include: create a backup vault, download vault credentials, install Azure Backup Agent, and register DPM/Azure Backup Server with the vault.
+## Prerequisites
-Additional prerequisites and limitations can be found on the [Back up SharePoint with DPM](/system-center/dpm/back-up-sharepoint#prerequisites-and-limitations) article.
+Before you proceed to back up a SharePoint farm to Azure, ensure that you've met all the [prerequisites for using Microsoft Azure Backup](backup-azure-dpm-introduction.md#prerequisites-and-limitations) to protect workloads. The prerequisite tasks include creating a backup vault, downloading vault credentials, installing the Azure Backup agent, and registering DPM/Azure Backup Server with the vault.
+
+For other prerequisites and limitations, see [Back up SharePoint with DPM](/system-center/dpm/back-up-sharepoint#prerequisites-and-limitations).
## Configure backup
-To back up SharePoint farm you configure protection for SharePoint by using ConfigureSharePoint.exe and then create a protection group in DPM. For instructions, see [Configure Backup](/system-center/dpm/back-up-sharepoint#configure-backup) in the DPM documentation.
+To back up the SharePoint farm, configure protection for SharePoint using *ConfigureSharePoint.exe*, and then create a protection group in DPM. See the DPM documentation to learn [how to configure backup](/system-center/dpm/back-up-sharepoint#configure-backup).
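
As a rough pointer, enabling SharePoint protection is typically done by running *ConfigureSharePoint.exe* with the `-EnableSharePointProtection` switch on the SharePoint front-end web server that has the DPM protection agent installed. The path below assumes a default agent installation, so confirm the exact steps against the linked DPM documentation.

```powershell
# Run on the SharePoint front-end web server where the DPM protection agent is installed.
# Default agent path shown; adjust it if your installation differs.
Set-Location "C:\Program Files\Microsoft Data Protection Manager\DPM\bin"
.\ConfigureSharePoint.exe -EnableSharePointProtection
# You'll be prompted for SharePoint farm administrator credentials.
```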
-## Monitoring
+## Monitor operations
-To monitor the backup job, follow the instructions in [Monitoring DPM backup](/system-center/dpm/back-up-sharepoint#monitoring)
+To monitor the backup job, see [Monitoring DPM backup](/system-center/dpm/back-up-sharepoint#monitoring).
## Restore SharePoint data To learn how to restore a SharePoint item from a disk with DPM, see [Restore SharePoint data](/system-center/dpm/back-up-sharepoint#restore-sharepoint-data).
-## Restore a SharePoint database from Azure by using DPM
+## Restore a SharePoint database from Azure using DPM
+
+To recover a SharePoint content database, follow these steps:
-1. To recover a SharePoint content database, browse through various recovery points (as shown previously), and select the recovery point that you want to restore.
+1. Browse through various recovery points (as shown previously), and select the recovery point that you want to restore.
- ![DPM SharePoint Protection8](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection9.png)
+ ![Screenshot showing how to select a recovery point from the list.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection9.png)
2. Double-click the SharePoint recovery point to show the available SharePoint catalog information. > [!NOTE]
- > Because the SharePoint farm is protected for long-term retention in Azure, no catalog information (metadata) is available on the DPM server. As a result, whenever a point-in-time SharePoint content database needs to be recovered, you need to catalog the SharePoint farm again.
- >
- >
+ > Because the SharePoint farm is protected for long-term retention in Azure, no catalog information (metadata) is available on the DPM server. So, whenever a point-in-time SharePoint content database needs to be recovered, you need to catalog the SharePoint farm again.
+ 3. Select **Re-catalog**.
- ![DPM SharePoint Protection10](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection12.png)
+   ![Screenshot showing how to select Re-catalog.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection12.png)
The **Cloud Recatalog** status window opens.
- ![DPM SharePoint Protection11](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection13.png)
+ ![Screenshot showing the Cloud Recatalog status window.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection13.png)
+
+ Once the cataloging is finished and the status changes to *Success*, select **Close**.
- After cataloging is finished, the status changes to *Success*. Select **Close**.
+ ![Screenshot showing the cataloging is complete with Success state.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection14.png)
- ![DPM SharePoint Protection12](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection14.png)
-4. Select the SharePoint object shown in the DPM **Recovery** tab to get the content database structure. Right-click the item, and then select **Recover**.
+4. On the DPM **Recovery** tab, select the *SharePoint object* to get the content database structure, right-click the item, and then select **Recover**.
- ![DPM SharePoint Protection13](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection15.png)
-5. At this point, follow the recovery steps earlier in this article to recover a SharePoint content database from disk.
+ ![Screenshot showing how to recover a SharePoint database from Azure.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection15.png)
+5. To recover a SharePoint content database from disk, see [this section](#restore-sharepoint-data).
-## Switching the Front-End Web Server
+## Switch the Front-End Web Server
-If you have more than one front-end web server, and want to switch the server that DPM uses to protect the farm, follow the instructions in [Switching the Front-End Web Server](/system-center/dpm/back-up-sharepoint#switching-the-front-end-web-server).
+If you have more than one front-end web server and want to switch the server that DPM uses to protect the farm, see [Switching the Front-End Web Server](/system-center/dpm/back-up-sharepoint#switching-the-front-end-web-server).
## Next steps
baremetal-infrastructure About Nc2 On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/about-nc2-on-azure.md
Title: About Nutanix Cloud Clusters on Azure
description: Learn about Nutanix Cloud Clusters on Azure and the benefits it offers. Previously updated : 03/31/2021+ Last updated : 10/13/2022 # About Nutanix Cloud Clusters on Azure
baremetal-infrastructure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/architecture.md
Title: Architecture of BareMetal Infrastructure for NC2 description: Learn about the architecture of several configurations of BareMetal Infrastructure for NC2. - Previously updated : 04/14/2021++ Last updated : 10/13/2022 # Architecture of BareMetal Infrastructure for Nutanix
baremetal-infrastructure Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/faq.md
Title: FAQ description: Questions frequently asked about NC2 on Azure - Previously updated : 07/01/2022-++ Last updated : 10/13/2022 # Frequently asked questions about NC2 on Azure
baremetal-infrastructure Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/get-started.md
Title: Getting started
description: Learn how to sign up, set up, and use Nutanix Cloud Clusters on Azure. Previously updated : 07/01/2021+ Last updated : 10/13/2022 # Getting started with Nutanix Cloud Clusters on Azure
baremetal-infrastructure Nc2 Baremetal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/nc2-baremetal-overview.md
Title: What is BareMetal Infrastructure for Nutanix Cloud Clusters on Azure?
description: Learn about the features BareMetal Infrastructure offers for NC2 workloads. Previously updated : 07/01/2022+ Last updated : 10/13/2022 # What is BareMetal Infrastructure for Nutanix Cloud Clusters on Azure?
baremetal-infrastructure Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/requirements.md
Title: Requirements
description: Learn what you need to run NC2 on Azure, including Azure, Nutanix, networking, and other requirements. Previously updated : 03/31/2021+ Last updated : 10/13/2022 # Requirements
baremetal-infrastructure Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/skus.md
Title: SKUs
description: Learn about SKU options for NC2 on Azure, including core, RAM, storage, and network. Previously updated : 07/01/2021+ Last updated : 10/13/2022 # SKUs
baremetal-infrastructure Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/solution-design.md
Title: Solution design description: Learn about topologies and constraints for NC2 on Azure. - Previously updated : 07/01/2022++ Last updated : 10/13/2022 # Solution design
baremetal-infrastructure Supported Instances And Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/supported-instances-and-regions.md
Title: Supported instances and regions description: Learn about instances and regions supported for NC2 on Azure. -- Previously updated : 03/31/2021++ Last updated : 10/13/2022 # Supported instances and regions
NC2 on Azure supports the following region using AN36P:
* North Central US * East US 2 - ## Next steps Learn more:
baremetal-infrastructure Use Cases And Supported Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/use-cases-and-supported-scenarios.md
Title: Use cases and supported scenarios description: Learn about use cases and supported scenarios for NC2 on Azure, including cluster management, disaster recovery, on-demand elasticity, and lift-and-shift. - Previously updated : 07/01/2022+ Last updated : 10/13/2022 # Use cases and supported scenarios
cloud-services Cloud Services Configure Ssl Certificate Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-configure-ssl-certificate-portal.md
Next, you must include information about the certificate in your service definit
## Step 2: Modify the service definition and configuration files Your application must be configured to use the certificate, and an HTTPS endpoint must be added. As a result, the service definition and service configuration files need to be updated.
-1. In your development environment, open the service definition file
- (CSDEF), add a **Certificates** section within the **WebRole**
- section, and include the following information about the
- certificate (and intermediate certificates):
+1. In your development environment, open the service definition file (CSDEF), add a **Certificates** section within the **WebRole** section, and include the following information about the certificate (and intermediate certificates):
- ```xml
+ ```xml
<WebRole name="CertificateTesting" vmsize="Small"> ... <Certificates>
Your application must be configured to use the certificate, and an HTTPS endpoin
2. In your service definition file, add an **InputEndpoint** element within the **Endpoints** section to enable HTTPS:
- ```xml
+ ```xml
<WebRole name="CertificateTesting" vmsize="Small"> ... <Endpoints>
Your application must be configured to use the certificate, and an HTTPS endpoin
the **Sites** section. This element adds an HTTPS binding to map the endpoint to your site:
- ```xml
+ ```xml
<WebRole name="CertificateTesting" vmsize="Small"> ... <Sites>
Your application must be configured to use the certificate, and an HTTPS endpoin
value with that of your certificate. The following code sample provides details of the **Certificates** section, except for the thumbprint value.
- ```xml
+ ```xml
<Role name="Deployment"> ... <Certificates>
connect to it using HTTPS.
* Learn how to [deploy a cloud service](cloud-services-how-to-create-deploy-portal.md). * Configure a [custom domain name](cloud-services-custom-domain-name-portal.md). * [Manage your cloud service](cloud-services-how-to-manage-portal.md).---
cloud-services Cloud Services Nodejs Chat App Socketio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-chat-app-socketio.md
The following steps create the cloud service project that will host the Socket.I
1. From the **Start Menu** or **Start Screen**, search for **Windows PowerShell**. Finally, right-click **Windows PowerShell** and select **Run As Administrator**. ![Azure PowerShell icon][powershell-menu]+ 2. Create a directory called **c:\\node**. ```powershell
The following steps create the cloud service project that will host the Socket.I
![The output of the new-azureservice and add-azurenodeworkerrolecmdlets](./media/cloud-services-nodejs-chat-app-socketio/socketio-1.png) ## Download the Chat Example+ For this project, we will use the chat example from the [Socket.IO GitHub repository]. Perform the following steps to download the example and add it to the project you previously created.
and add it to the project you previously created.
1. Create a local copy of the repository by using the **Clone** button. You may also use the **ZIP** button to download the project. ![A browser window viewing https://github.com/LearnBoost/socket.io/tree/master/examples/chat, with the ZIP download icon highlighted](./media/cloud-services-nodejs-chat-app-socketio/socketio-22.png)+ 2. Navigate the directory structure of the local repository until you arrive at the **examples\\chat** directory. Copy the contents of this directory to the **C:\\node\\chatapp\\WorkerRole1** directory created earlier.
and add it to the project you previously created.
![Explorer, displaying the contents of the examples\\chat directory extracted from the archive][chat-contents] The highlighted items in the screenshot above are the files copied from the **examples\\chat** directory+ 3. In the **C:\\node\\chatapp\\WorkerRole1** directory, delete the **server.js** file, and then rename the **app.js** file to **server.js**. This removes the default **server.js** file created previously by the **Add-AzureNodeWorkerRole** cmdlet and replaces it with the application file from the chat example. ### Modify Server.js and Install Modules
make some minor modifications. Perform the following steps to the
server.js file: 1. Open the **server.js** file in Visual Studio or any text editor.+ 2. Find the **Module dependencies** section at the beginning of server.js and change the line containing **sio = require('..//..//lib//socket.io')** to **sio = require('socket.io')** as shown below: ```js
Azure emulator:
following: ![The output of the npm install command][The-output-of-the-npm-install-command]+ 2. Since this example was originally a part of the Socket.IO GitHub repository, and directly referenced the Socket.IO library by relative path, Socket.IO was not referenced in the package.json
Azure emulator:
``` ### Test and Deploy+ 1. Launch the emulator by issuing the following command: ```powershell
Azure emulator:
> Reinstall AzureAuthoringTools v 2.7.1 and AzureComputeEmulator v 2.7 - make sure that version matches. 2. Open a browser and navigate to `http://127.0.0.1`.+ 3. When the browser window opens, enter a nickname and then hit enter. This will allow you to post messages as a specific nickname. To test multi-user functionality, open additional browser windows using the same URL and enter different nicknames. ![Two browser windows displaying chat messages from User1 and User2](./media/cloud-services-nodejs-chat-app-socketio/socketio-8.png)+ 4. After testing the application, stop the emulator by issuing the following command:
Azure emulator:
PS C:\node\chatapp\WorkerRole1> Stop-AzureEmulator ```
-5. To deploy the application to Azure, use the
- **Publish-AzureServiceProject** cmdlet. For example:
+5. To deploy the application to Azure, use the **Publish-AzureServiceProject** cmdlet. For example:
```powershell PS C:\node\chatapp\WorkerRole1> Publish-AzureServiceProject -ServiceName mychatapp -Location "East US" -Launch
Azure emulator:
> Be sure to use a unique name, otherwise the publish process will fail. After the deployment has completed, the browser will open and navigate to the deployed service. > > If you receive an error stating that the provided subscription name doesn't exist in the imported publish profile, you must download and import the publishing profile for your subscription before deploying to Azure. See the **Deploying the Application to Azure** section of [Build and deploy a Node.js application to an Azure Cloud Service](./cloud-services-nodejs-develop-deploy-app.md)
- >
- >
![A browser window displaying the service hosted on Azure][completed-app]
- >
- >
Your application is now running on Azure, and can relay chat messages between different clients using Socket.IO. > [!NOTE] > For simplicity, this sample is limited to chatting between users connected to the same instance. This means that if the cloud service creates two worker role instances, users will only be able to chat with others connected to the same worker role instance. To scale the application to work with multiple role instances, you could use a technology like Service Bus to share the Socket.IO store state across instances. For examples, see the Service Bus Queues and Topics usage samples in the [Azure SDK for Node.js GitHub repository](https://github.com/WindowsAzure/azure-sdk-for-node).
->
->
## Next steps+ In this tutorial you learned how to create a basic chat application hosted in an Azure Cloud Service. To learn how to host this application in an Azure Website, see [Build a Node.js Chat Application with Socket.IO on an Azure Web Site][chatwebsite]. For more information, see also the [Node.js Developer Center](/azure/developer/javascript/).
For more information, see also the [Node.js Developer Center](/azure/developer/j
[chat example]: https://github.com/LearnBoost/socket.io/tree/master/examples/chat [chat-example-view]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-22.png - [chat-contents]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-5.png [The-output-of-the-npm-install-command]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-7.png
-[The output of the Publish-AzureService command]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-9.png
+[The output of the Publish-AzureService command]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-9.png
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Previously updated : 08/24/2022 Last updated : 10/24/2022
The following table lists accepted data types, when each data type should be use
| Data type | Used for testing | Recommended quantity | Used for training | Recommended quantity | |--|--|-|-|-|
-| [Audio only](#audio-data-for-testing) | Yes (visual inspection) | 5+ audio files | No | Not applicable |
+| [Audio only](#audio-data-for-training-or-testing) | Yes (visual inspection) | 5+ audio files | Yes (Preview for `en-US`) | 1-20 hours of audio |
| [Audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing) | Yes (evaluation of accuracy) | 0.5-5 hours of audio | Yes | 1-20 hours of audio | | [Plain text](#plain-text-data-for-training) | No | Not applicable | Yes | 1-200 MB of related text | | [Structured text](#structured-text-data-for-training) (public preview) | No | Not applicable | Yes | Up to 10 classes with up to 4,000 items and up to 50,000 training sentences |
Refer to the following table to ensure that your pronunciation dataset files are
| Number of pronunciations per line | 1 | | Maximum file size | 1 MB (1 KB for free tier) |
-## Audio data for testing
+### Audio data for training or testing
Audio data is optimal for testing the accuracy of Microsoft's baseline speech-to-text model or a custom model. Keep in mind that audio data is used to inspect the accuracy of speech with regard to a specific model's performance. If you want to quantify the accuracy of a model, use [audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing).
+> [!NOTE]
+> Audio only data for training is available in preview for the `en-US` locale. For other locales, to train with audio data you must also provide [human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing).
+ Custom Speech projects require audio files with these properties: | Property | Value |
Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a>
| Check the audio file format. | `sox --i <filename>` | | Convert the audio file to single channel, 16-bit, 16 kHz. | `sox <input> -b 16 -e signed-integer -c 1 -r 16k -t wav <output>.wav` |
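As a quick local check to complement the SoX commands above (a hedged sketch, not part of the official tooling), Python's standard `wave` module can report whether a file already matches the conversion target of the SoX command (mono, 16-bit, 16 kHz); treat the properties table above as authoritative, and note that the file name below is hypothetical:

```python
import wave

def check_wav(path: str) -> None:
    """Report whether a WAV file is mono, 16-bit, 16 kHz PCM."""
    with wave.open(path, "rb") as wav:
        channels = wav.getnchannels()
        bits = wav.getsampwidth() * 8
        rate = wav.getframerate()
    ok = channels == 1 and bits == 16 and rate == 16000
    verdict = "OK" if ok else "needs conversion (see the SoX command above)"
    print(f"{path}: {channels} channel(s), {bits}-bit, {rate} Hz -> {verdict}")

check_wav("speech-sample.wav")  # hypothetical file name
```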
-### Audio data for training
-
-Not all base models support [training with audio data](language-support.md?tabs=stt-tts). For a list of base models that support training with audio data, see [Language support](language-support.md?tabs=stt-tts).
-
-Even if a base model supports training with audio data, the service might use only part of the audio. In [regions](regions.md#speech-service) with dedicated hardware available for training audio data, the Speech service will use up to 20 hours of your audio training data. In other regions, the Speech service uses up to 8 hours of your audio data.
- ## Next steps - [Upload your data](how-to-custom-speech-upload-data.md)
communication-services Program Brief Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/program-brief-guidelines.md
# Short Code Program Brief Filling Guidelines-- [!INCLUDE [Short code eligibility notice](../../includes/public-preview-include-short-code-eligibility.md)] Azure Communication Services allows you to apply for a short code for SMS programs. In this document, we'll review the guidelines on how to fill out a program brief for short code registration. A program brief application consists of 4 sections:
communication-services Apply For Short Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/apply-for-short-code.md
# Quickstart: Apply for a short code-- [!INCLUDE [Short code eligibility notice](../../includes/public-preview-include-short-code-eligibility.md)] ## Prerequisites
confidential-computing Confidential Nodes Aks Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-nodes-aks-addon.md
RUN apt-get update && apt-get install -y \
libsgx-quote-ex \ az-dcap-client \ open-enclave
-WORKDIR /opt/openenclave/share/openenclave/samples/remote_attestation
+WORKDIR /opt/openenclave/share/openenclave/samples/attestation
RUN . /opt/openenclave/share/openenclave/openenclaverc \ && make build # this sets the flag for out of proc attestation mode, alternatively you can set this flag on the deployment files
container-apps Get Started Existing Container Image Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image-portal.md
If you're not going to continue to use this application, you can delete the Azur
## Next steps > [!div class="nextstepaction"]
-> [Environments in Azure Container Apps](environment.md)
+> [Communication between microservices](communicate-between-microservices.md)
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md
Remove-AzResourceGroup -Name $ResourceGroupName -Force
## Next steps > [!div class="nextstepaction"]
-> [Environments in Azure Container Apps](environment.md)
+> [Communication between microservices](communicate-between-microservices.md)
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
Remove-AzResourceGroup -Name $ResourceGroupName -Force
## Next steps > [!div class="nextstepaction"]
-> [Environments in Azure Container Apps](environment.md)
+> [Communication between microservices](communicate-between-microservices.md)
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
ms.devlang: azurecli
You learn how to: > [!div class="checklist"]
-> - Create a Container Apps environment to host your container apps
-> - Create an Azure Blob Storage account
-> - Create a Dapr state store component for the Azure Blob storage
-> - Deploy two container apps: one that produces messages, and one that consumes messages and persists them in the state store
-> - Verify the solution is up and running
+> * Create a Container Apps environment for your container apps
+> * Create an Azure Blob Storage state store for the container app
+> * Deploy two apps that produce and consume messages and persist them in the state store
+> * Verify the interaction between the two microservices.
-In this tutorial, you deploy the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world) quickstart.
+With Azure Container Apps, you get a [fully managed version of the Dapr APIs](./dapr-overview.md) when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
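As a hedged, standalone illustration of the state store building block mentioned above (it isn't one of this tutorial's steps), an app container can call its Dapr sidecar over plain HTTP. The component name `statestore`, the key, and the default sidecar port 3500 are assumptions:

```python
import json
import urllib.request

# Assumed Dapr sidecar HTTP port and state store component name.
STATE_URL = "http://localhost:3500/v1.0/state/statestore"

def save_state(key: str, value: dict) -> None:
    """Persist a key/value pair through the Dapr state API."""
    body = json.dumps([{"key": key, "value": value}]).encode("utf-8")
    req = urllib.request.Request(
        STATE_URL, data=body,
        headers={"Content-Type": "application/json"}, method="POST")
    urllib.request.urlopen(req)

def get_state(key: str) -> dict:
    """Read a value back from the state store by key."""
    with urllib.request.urlopen(f"{STATE_URL}/{key}") as resp:
        data = resp.read()
    return json.loads(data) if data else {}

save_state("order_1", {"orderId": 1})
print(get_state("order_1"))  # {'orderId': 1}
```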
+
+In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) quickstart.
The application consists of:
There are multiple ways to authenticate to external resources via Dapr. This exa
# [Bash](#tab/bash)
-Create a config file named **statestore.yaml** with the properties that you sourced from the previous steps. This file helps enable your Dapr app to access your state store. Since the application is authenticating directly via Managed Identity, there's no need to include the storage account key directly within the component. The following example shows how your **statestore.yaml** file should look when configured for your Azure Blob Storage account:
+Open a text editor and create a config file named *statestore.yaml* with the properties that you sourced from the previous steps. This file helps enable your Dapr app to access your state store. The following example shows how your *statestore.yaml* file should look when configured for your Azure Blob Storage account:
```yaml # statestore.yaml for Azure Blob storage component
container-apps Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-portal.md
An Azure account with an active subscription is required. If you don't already h
## Setup
-> [!NOTE]
-> An Azure Container Apps environment can be deployed as a zone redundant resource in regions where support is available. This is a deployment-time only configuration option.
- <!-- Create --> [!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)]
If you're not going to continue to use this application, you can delete the Azur
## Next steps > [!div class="nextstepaction"]
-> [Environments in Azure Container Apps](environment.md)
+> [Communication between microservices](communicate-between-microservices.md)
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
Previously updated : 08/10/2022 Last updated : 10/25/2022
The following quotas are on a per subscription basis for Azure Container Apps.
-| Feature | Quantity | Scope | Limit can be extended | Remarks |
+To request an increase in quota amounts for your container app, [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
+
+| Feature | Scope | Default | Is Configurable<sup>1</sup> | Remarks |
|--|--|--|--|--|
-| Environments | 5 | For a subscription per region | Yes | |
-| Container Apps | 20 | Environment | Yes |
-| Revisions | 100 | Container app | No |
-| Replicas | 30 | Revision | No |
-| Cores | 2 | Replica | No | Maximum number of cores that can be requested by a revision replica. |
-| Memory | 4 GiB | Replica | No | Maximum amount of memory that can be requested by a revision replica. |
-| Cores | 20 | Environment | Yes| Calculated by the total cores an environment can accommodate. For instance, the sum of cores requested by each active replica of all revisions in an environment. |
+| Environments | Region | 5 | Yes | |
+| Container Apps | Environment | 20 | Yes | |
+| Revisions | Container app | 100 | No | |
+| Replicas | Revision | 30 | Yes | |
+| Cores | Replica | 2 | No | Maximum number of cores that can be requested by a revision replica. |
+| Cores | Environment | 20 | Yes | Maximum number of cores an environment can accommodate. Calculated by the sum of cores requested by each active replica of all revisions in an environment. |
+
+<sup>1</sup> The **Is Configurable** column denotes that a feature maximum may be increased through a [support request](https://azure.microsoft.com/support/create-ticket/).
## Considerations
-* Pay-as-you-go and trial subscriptions are limited to 1 environment per region per subscription.
* If an environment runs out of allowed cores: * Provisioning times out with a failure * The app silently refuses to scale out-
-To request an increase in quota amounts for your container app, [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
cosmos-db Burst Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/burst-capacity.md
Previously updated : 05/09/2022 Last updated : 10/26/2022 # Burst capacity in Azure Cosmos DB (preview)+ [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] Azure Cosmos DB burst capacity (preview) allows you to take advantage of your database or container's idle throughput capacity to handle spikes of traffic. With burst capacity, each physical partition can accumulate up to 5 minutes of idle capacity, which can be consumed at a rate up to 3000 RU/s. With burst capacity, requests that would have otherwise been rate limited can now be served with burst capacity while it's available.
Burst capacity applies only to Azure Cosmos DB accounts using provisioned throug
Let's take an example of a physical partition that has 100 RU/s of provisioned throughput and is idle for 5 minutes. With burst capacity, it can accumulate a maximum of 100 RU/s * 300 seconds = 30,000 RU of burst capacity. The capacity can be consumed at a maximum rate of 3000 RU/s, so if there's a sudden spike in request volume, the partition can burst up to 3000 RU/s for up to 30,000 RU / 3000 RU/s = 10 seconds. Without burst capacity, any requests that are consumed beyond the provisioned 100 RU/s would have been rate limited (429).
-After the 10 seconds is over, the burst capacity has been used up. If the workload continues to exceed the provisioned 100 RU/s, any requests that are consumed beyond the provisioned 100 RU/s would now be rate limited (429). The maximum amount of burst capacity a physical partition can accumulate at any point in time is equal to 300 seconds * the provisioned RU/s of the physical partition.
+After the 10 seconds is over, the burst capacity has been used up. If the workload continues to exceed the provisioned 100 RU/s, any requests that are consumed beyond the provisioned 100 RU/s would now be rate limited (429). The maximum amount of burst capacity a physical partition can accumulate at any point in time is equal to 300 seconds * the provisioned RU/s of the physical partition.
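A small illustrative calculation (a sketch for this article's example, not an API) that mirrors the arithmetic above; the 300-second accumulation window and 3,000 RU/s burst rate are the values described in this article:

```python
MAX_ACCUMULATION_SECONDS = 300   # up to 5 minutes of idle capacity
MAX_BURST_RATE_RUS = 3000        # maximum consumption rate per physical partition

def burst_duration_seconds(provisioned_rus: float, idle_seconds: float) -> float:
    """How long a single physical partition can burst at the maximum rate."""
    accumulated = provisioned_rus * min(idle_seconds, MAX_ACCUMULATION_SECONDS)
    return accumulated / MAX_BURST_RATE_RUS

# Example from the article: a 100 RU/s partition that is idle for 5 minutes.
print(burst_duration_seconds(100, 5 * 60))  # 10.0 seconds at 3,000 RU/s
```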
## Getting started
-To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
Before submitting your request:-- Ensure that you have at least 1 Azure Cosmos DB account in the subscription. This may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.-- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).+
+- Ensure that you have at least one Azure Cosmos DB account in the subscription. This account may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
+- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#limitations-preview-eligibility-criteria).
The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
To check whether an Azure Cosmos DB account is eligible for the preview, you can
:::image type="content" source="media/burst-capacity/burst-capacity-eligibility-check.png" alt-text="Burst capacity eligibility check with table of all preview eligibility criteria":::
-## Limitations
+## Limitations (preview eligibility criteria)
-### Preview eligibility criteria
To enroll in the preview, your Azure Cosmos DB account must meet all the following criteria:
- - Your Azure Cosmos DB account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts.
- - If you're using API for NoSQL, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When burst capacity is enabled on your account, all requests sent from non .NET SDKs, or older .NET SDK versions won't be accepted.
- - There are no SDK or driver requirements to use the feature with API for Cassandra, Gremlin, or MongoDB.
- - Your Azure Cosmos DB account isn't using any unsupported connectors
- - Azure Data Factory
- - Azure Stream Analytics
- - Logic Apps
- - Azure Functions
- - Azure Search
- - Azure Cosmos DB Spark connector
- - Azure Cosmos DB data migration tool
- - Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
-
-### SDK requirements (API for NoSQL and Table only)
-#### API for NoSQL
-For API for NoSQL accounts, burst capacity is supported only in the latest version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use burst capacity with API for Gremlin, Cassandra, or MongoDB.
-
-Find the latest version of the supported SDK:
-
-| SDK | Supported versions | Package manager link |
-| | | |
-| **.NET SDK v3** | *>= 3.27.0* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> |
-
-Support for other API for NoSQL SDKs is planned for the future.
-
-> [!TIP]
-> You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](sql/migrate-dotnet-v3.md).
-
-#### Table API
-For API for Table accounts, burst capacity is supported only when using the latest version of the Tables SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. The legacy SDK with namespace `Microsoft.Azure.CosmosDB.Table` isn't supported. Follow the [migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/tables/Azure.Data.Tables/MigrationGuide.md) to upgrade to the latest SDK.
-
-| SDK | Supported versions | Package manager link |
-| | | |
-| **Azure Tables client library for .NET** | *>= 12.0.0* | <https://www.nuget.org/packages/Azure.Data.Tables/> |
-| **Azure Tables client library for Java** | *>= 12.0.0* | <https://mvnrepository.com/artifact/com.azure/azure-data-tables> |
-| **Azure Tables client library for JavaScript** | *>= 12.0.0* | <https://www.npmjs.com/package/@azure/data-tables> |
-| **Azure Tables client library for Python** | *>= 12.0.0* | <https://pypi.org/project/azure-data-tables/> |
-
-### Unsupported connectors
-
-If you enroll in the preview, the following connectors will fail.
-
-* Azure Data Factory<sup>1</sup>
-* Azure Stream Analytics<sup>1</sup>
-* Logic Apps<sup>1</sup>
-* Azure Functions<sup>1</sup>
-* Azure Search<sup>1</sup>
-* Azure Cosmos DB Spark connector<sup>1</sup>
-* Azure Cosmos DB data migration tool
-* Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
-
-<sup>1</sup>Support for these connectors is planned for the future.
+
+- Your Azure Cosmos DB account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts.
+- Your Azure Cosmos DB account is using API for NoSQL, Cassandra, Gremlin, MongoDB, or Table.
## Next steps
-* See the FAQ on [burst capacity.](burst-capacity-faq.yml)
-* Learn more about [provisioned throughput.](set-throughput.md)
-* Learn more about [request units.](request-units.md)
-* Trying to decide between provisioned throughput and serverless? See [choose between provisioned throughput and serverless.](throughput-serverless.md)
-* Want to learn the best practices? See [best practices for scaling provisioned throughput.](scaling-provisioned-throughput-best-practices.md)
+- See the FAQ on [burst capacity.](burst-capacity-faq.yml)
+- Learn more about [provisioned throughput.](set-throughput.md)
+- Learn more about [request units.](request-units.md)
+- Trying to decide between provisioned throughput and serverless? See [choose between provisioned throughput and serverless.](throughput-serverless.md)
+- Want to learn the best practices? See [best practices for scaling provisioned throughput.](scaling-provisioned-throughput-best-practices.md)
cosmos-db Consistency Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/consistency-mapping.md
Title: Apache Cassandra and Azure Cosmos DB consistency levels
description: Apache Cassandra and Azure Cosmos DB consistency levels. + Previously updated : 03/24/2022- Last updated : 10/18/2022 # Apache Cassandra and Azure Cosmos DB for Apache Cassandra consistency levels
-Unlike Azure Cosmos DB, Apache Cassandra does not natively provide precisely defined consistency guarantees. Instead, Apache Cassandra provides a write consistency level and a read consistency level, to enable the high availability, consistency, and latency tradeoffs. When using Azure Cosmos DB's API for Cassandra:
-* The write consistency level of Apache Cassandra is mapped to the default consistency level configured on your Azure Cosmos DB account. Consistency for a write operation (CL) can't be changed on a per-request basis.
+Unlike Azure Cosmos DB, Apache Cassandra doesn't natively provide precisely defined consistency guarantees. Instead, Apache Cassandra provides a write consistency level and a read consistency level, to enable the high availability, consistency, and latency tradeoffs. When using Azure Cosmos DB for Cassandra:
-* Azure Cosmos DB will dynamically map the read consistency level specified by the Cassandra client driver to one of the Azure Cosmos DB consistency levels configured dynamically on a read request.
+- The write consistency level of Apache Cassandra is mapped to the default consistency level configured on your Azure Cosmos DB account. Consistency for a write operation (CL) can't be changed on a per-request basis.
+- Azure Cosmos DB dynamically maps the read consistency level specified by the Cassandra client driver to one of the Azure Cosmos DB consistency levels, configured dynamically on a read request.
## Multi-region writes vs single-region writes
-Apache Cassandra database is a multi-master system by default, and does not provide an out-of-box option for single-region writes with multi-region replication for reads. However, Azure Cosmos DB provides turnkey ability to have either single region, or [multi-region](../how-to-multi-master.md) write configurations. One of the advantages of being able to choose a single region write configuration across multiple regions is the avoidance of cross-region conflict scenarios, and the option of maintaining strong consistency across multiple regions.
+Apache Cassandra database is a multi-master system by default, and doesn't provide an out-of-box option for single-region writes with multi-region replication for reads. However, Azure Cosmos DB provides turnkey ability to have either single region, or [multi-region](../how-to-multi-master.md) write configurations. One of the advantages of being able to choose a single region write configuration across multiple regions is the avoidance of cross-region conflict scenarios, and the option of maintaining strong consistency across multiple regions.
-With single-region writes, you can maintain strong consistency, while still maintaining a level of high availability across regions with [service-managed failover](../high-availability.md#region-outages). In this configuration, you can still exploit data locality to reduce read latency by downgrading to eventual consistency on a per request basis. In addition to these capabilities, the Azure Cosmos DB platform also provides the ability to enable [zone redundancy](/azure/architecture/reliability/architect) when selecting a region. Thus, unlike native Apache Cassandra, Azure Cosmos DB allows you to navigate the CAP Theorem [trade-off spectrum](../consistency-levels.md#rto) with more granularity.
+With single-region writes, you can maintain strong consistency, while still maintaining a level of high availability across regions with [service-managed failover](../high-availability.md#region-outages). In this configuration, you can still exploit data locality to reduce read latency by downgrading to eventual consistency on a per request basis. In addition to these capabilities, the Azure Cosmos DB platform also offers the option of [zone redundancy](/azure/architecture/reliability/architect) when selecting a region. Thus, unlike native Apache Cassandra, Azure Cosmos DB allows you to navigate the CAP Theorem [trade-off spectrum](../consistency-levels.md#rto) with more granularity.
## Mapping consistency levels
-The Azure Cosmos DB platform provides a set of five well-defined, business use-case oriented consistency settings with respect to replication and the tradeoffs defined by the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem) and [PACLC theorem](https://en.wikipedia.org/wiki/PACELC_theorem). As this approach differs significantly from Apache Cassandra, we would recommend that you take time to review and understand [Azure Cosmos DB consistency](../consistency-levels.md), or watch this short [video guide to understanding consistency settings](https://aka.ms/docs.consistency-levels) in the Azure Cosmos DB platform.
+The Azure Cosmos DB platform provides a set of five well-defined, business use-case oriented consistency settings with respect to replication. The tradeoffs to these consistency settings are defined by the [CAP](https://en.wikipedia.org/wiki/CAP_theorem) and [PACELC](https://en.wikipedia.org/wiki/PACELC_theorem) theorems. As this approach differs significantly from Apache Cassandra, we would recommend that you take time to review and understand [Azure Cosmos DB consistency](../consistency-levels.md). Alternatively, you can review this short [video guide to understanding consistency settings](https://aka.ms/docs.consistency-levels) in the Azure Cosmos DB platform. The following table illustrates the possible mappings between Apache Cassandra and Azure Cosmos DB consistency levels when using API for Cassandra. This table shows configurations for single region, multi-region reads with single-region writes, and multi-region writes.
+
+### Mappings
+
+> [!NOTE]
+> These are not exact mappings. Rather, we have provided the closest analogues to Apache Cassandra, and disambiguated any qualitative differences in the rightmost column. As mentioned above, we recommend reviewing Azure Cosmos DB's [consistency settings](../consistency-levels.md).
+
+### `ALL`, `EACH_QUORUM`, `QUORUM`, `LOCAL_QUORUM`, or `THREE` write consistency in Apache Cassandra
+
+| Apache read consistency | Reading from | Closest Azure Cosmos DB consistency level to Apache Cassandra read/write settings |
+| | | |
+| `ALL` | Local region | `Strong` |
+| `EACH_QUORUM` | Local region | `Strong` |
+| `QUORUM` | Local region | `Strong` |
+| `LOCAL_QUORUM` | Local region | `Strong` |
+| `LOCAL_ONE` | Local region | `Eventual` |
+| `ONE` | Local region | `Eventual` |
+| `TWO` | Local region | `Strong` |
+| `THREE` | Local region | `Strong` |
+
+Unlike Apache and DSE Cassandra, Azure Cosmos DB durably commits a quorum write by default. At least three out of four (3/4) nodes commit the write to disk, not just to an in-memory commit log.
+
+### `ONE`, `LOCAL_ONE`, or `ANY` write consistency in Apache Cassandra
+
+| Apache read consistency | Reading from | Closest Azure Cosmos DB consistency level to Apache Cassandra read/write settings |
+| | | |
+| `ALL` | Local region | `Strong` |
+| `EACH_QUORUM` | Local region | `Eventual` |
+| `QUORUM` | Local region | `Eventual` |
+| `LOCAL_QUORUM` | Local region | `Eventual` |
+| `LOCAL_ONE` | Local region | `Eventual` |
+| `ONE` | Local region | `Eventual` |
+| `TWO` | Local region | `Eventual` |
+| `THREE` | Local region | `Eventual` |
+
+Azure Cosmos DB API for Cassandra always durably commits a quorum write by default, so all read consistency levels can be used.
+
+### `TWO` write consistency in Apache Cassandra
+
+| Apache read consistency | Reading from | Closest Azure Cosmos DB consistency level to Apache Cassandra read/write settings |
+| | | |
+| `ALL` | Local region | `Strong` |
+| `EACH_QUORUM` | Local region | `Strong` |
+| `QUORUM` | Local region | `Strong` |
+| `LOCAL_QUORUM` | Local region | `Strong` |
+| `LOCAL_ONE` | Local region | `Eventual` |
+| `ONE` | Local region | `Eventual` |
+| `TWO` | Local region | `Eventual` |
+| `THREE` | Local region | `Strong` |
+
+Azure Cosmos DB has no notion of write consistency to only two nodes, so this setting is treated like quorum in most cases. A read consistency of `TWO` is equivalent to writing with `QUORUM` and reading from `ONE`.
+
+### `Serial` or `Local_Serial` write consistency in Apache Cassandra
+
+| Apache read consistency | Reading from | Closest Azure Cosmos DB consistency level to Apache Cassandra read/write settings |
+| | | |
+| `ALL` | Local region | `Strong` |
+| `EACH_QUORUM` | Local region | `Strong` |
+| `QUORUM` | Local region | `Strong` |
+| `LOCAL_QUORUM` | Local region | `Strong` |
+| `LOCAL_ONE` | Local region | `Eventual` |
+| `ONE` | Local region | `Eventual` |
+| `TWO` | Local region | `Strong` |
+| `THREE` | Local region | `Strong` |
+
+Serial only applies to lightweight transactions. Azure Cosmos DB follows a [durably committed algorithm](https://www.microsoft.com/research/publication/revisiting-paxos-algorithm/) by default, and hence `Serial` consistency is similar to quorum.
+
+### Other regions for single-region write
+
+Azure Cosmos DB offers all five consistency settings, including strong, across multiple regions where single-region write is configured, as long as the regions are within 2,000 miles of each other.
+
+This configuration has no applicable mapping to Apache Cassandra, because in Apache Cassandra all nodes/regions accept writes and a strong consistency guarantee isn't possible across all regions.
+
+### Other regions for multi-region write
+
+Azure Cosmos DB offers only four consistency settings (`eventual`, `consistent prefix`, `session`, and `bounded staleness`) across multiple regions where multi-region write is configured.
+
+Apache Cassandra would only provide eventual consistency for reads across other regions regardless of settings.
+
+### Dynamic overrides supported
+
+| Azure Cosmos DB account setting | Override value in client request | Override effect |
+| | | |
+| `Strong` | `All` | No effect (remain as `strong`) |
+| `Strong` | `Quorum` | No effect (remain as `strong`) |
+| `Strong` | `LocalQuorum` | No effect (remain as `strong`) |
+| `Strong` | `Two` | No effect (remain as `strong`) |
+| `Strong` | `Three` | No effect (remain as `strong`) |
+| `Strong` | `Serial` | No effect (remain as `strong`) |
+| `Strong` | `LocalSerial` | No effect (remain as `strong`) |
+| `Strong` | `One` | Consistency changes to `Eventual` |
+| `Strong` | `LocalOne` | Consistency changes to `Eventual` |
+| `Strong` | `Any` | Not allowed (error) |
+| `Strong` | `EachQuorum` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `All` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `Quorum` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `LocalQuorum` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `Two` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `Three` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `Serial` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `LocalSerial` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `One` | Consistency changes to `Eventual` |
+| `Bounded staleness`, `session`, or `consistent prefix` | `LocalOne` | Consistency changes to `Eventual` |
+| `Bounded staleness`, `session`, or `consistent prefix` | `Any` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `EachQuorum` | Not allowed (error) |
+
+### Metrics
+
+If your Azure Cosmos DB account is configured with a consistency level other than strong consistency, review the *Probabilistically Bounded Staleness* (PBS) metric. The metric captures the probability that your clients may get strong and consistent reads for your workloads. This metric is exposed in the Azure portal. To find more information about the PBS metric, see [Monitor Probabilistically Bounded Staleness (PBS) metric](../how-to-manage-consistency.md#monitor-probabilistically-bounded-staleness-pbs-metric).
+
+Probabilistically bounded staleness shows how eventual your eventual consistency is. This metric provides an insight into how often you can get a stronger consistency than the consistency level that you've currently configured on your Azure Cosmos DB account. In other words, you can see the probability (measured in milliseconds) of getting consistent reads for a combination of write and read regions.
+
+## Global strong consistency for write requests in Apache Cassandra
+
+In Apache Cassandra, the setting of `EACH_QUORUM` or `QUORUM` provides strong consistency. When a write request is sent to a region, `EACH_QUORUM` persists the data in a quorum number of nodes in each data center. This persistence requires every data center to be available for the write operation to succeed. `QUORUM` is slightly less restrictive, requiring a quorum number of nodes across all the data centers to persist the data before the write is acknowledged as successful.
-The following table illustrates the possible mappings between Apache Cassandra and Azure Cosmos DB consistency levels when using API for Cassandra. This shows configurations for single region, multi-region reads with single-region writes, and multi-region writes.
+The following graphic illustrates a global strong consistency setting in Apache Cassandra between two regions, 1 and 2. After data is written to region 1, the write needs to be persisted in a quorum number of nodes in both region 1 and region 2 before an acknowledgment is received by the application.
++
+## Global strong consistency for write requests in Azure Cosmos DB for Apache Cassandra
+
+In Azure Cosmos DB, consistency is set at the account level. With `Strong` consistency in Azure Cosmos DB for Cassandra, data is replicated synchronously to the read regions for the account. The further apart the regions for the Azure Cosmos DB account are, the higher the latency of the consistent write operations.
++
+How the number of regions affects your read or write request (the sketch after this list illustrates the arithmetic):
+
+- Two regions: With strong consistency, quorum `(N/2 + 1) = 2`. So, if the read region goes down, the account can no longer accept writes with strong consistency since a quorum number of regions isn't available for the write to be replicated to.
+- Three or more regions: for `N = 3`, `quorum = 2`. If one of the read regions is down, the write region can still replicate the writes to a total of two regions that meet the quorum requirement. Similarly, with four regions, `quorum = 4/2 + 1 = 3`. Even with one read region being down, quorum can be met.
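A small illustrative calculation (not from the article) that mirrors the region-quorum arithmetic in the list above:

```python
def region_quorum(total_regions: int) -> int:
    """Quorum of regions required for a strongly consistent write: N/2 + 1."""
    return total_regions // 2 + 1

for n in (2, 3, 4):
    needed = region_quorum(n)
    # With one read region down, n - 1 regions remain reachable.
    survives_one_outage = (n - 1) >= needed
    print(f"{n} regions: quorum = {needed}, "
          f"strong writes tolerate one region outage: {survives_one_outage}")
```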
> [!NOTE]
-> These are not exact mappings. Rather, we have provided the closest analogues to Apache Cassandra, and disambiguated any qualitative differences in the rightmost column. As mentioned above, we recommend reviewing Azure Cosmos DB's [consistency settings](../consistency-levels.md).
+> If globally strong consistency is required for all write operations, the consistency for the Azure Cosmos DB for Cassandra account must be set to Strong. The consistency level for write operations cannot be overridden to a lower consistency level on a per request basis in Azure Cosmos DB.
+
+## Weaker consistency for write requests in Apache Cassandra
+
+Consider a write request with one of the weaker consistency levels: `ANY`, `ONE`, `TWO`, `THREE`, `LOCAL_QUORUM`, `Serial`, or `Local_Serial`. For example, take a write with `LOCAL_QUORUM` and a replication factor (`RF`) of `4` in a six-node datacenter: `Quorum = 4/2 + 1 = 3`, so three replicas must acknowledge the write.
++
+## Weaker consistency for write requests in Azure Cosmos DB for Apache Cassandra
+
+When a write request is sent with any of the consistency levels lower than `Strong`, a success response is returned as soon as the local region persists the write in at least three out of four replicas.
++
+## Global strong consistency for read requests in Apache Cassandra
+
+With a consistency of `EACH_QUORUM`, a consistent read can be achieved in Apache Cassandra. In a multi-region setup with `EACH_QUORUM`, if the quorum number of nodes isn't met in each region, the read is unsuccessful.
++
+## Global strong consistency for read requests in Azure Cosmos DB for Apache Cassandra
+
+The read request is served from two replicas in the specified region. Since the write already persisted the data to a quorum number of regions (and to all regions, if every region was available), reading from two replicas in the specified region provides strong consistency. This requires `EACH_QUORUM` to be specified in the driver when issuing the read against a region for the Cosmos DB account, along with Strong as the default consistency level for the account.
++
+## Local strong consistency in Apache Cassandra
+
+A read request with a consistency level of `TWO`, `THREE`, or `LOCAL_QUORUM` gives strong consistency when reading from the local region. With a consistency level of `LOCAL_QUORUM`, you need a response from two nodes in the specified datacenter for a successful read.
++
+## Local strong consistency in Azure Cosmos DB for Apache Cassandra
+
+In Azure Cosmos DB for Cassandra, a consistency level of `TWO`, `THREE`, or `LOCAL_QUORUM` gives local strong consistency for a read request. Since the write path guarantees replicating to a minimum of three out of four replicas, a read from two replicas in the specified region guarantees a quorum read of the data in that region.
++
+## Eventual consistency in Apache Cassandra
+
+A consistency level of `LOCAL_ONE`, `ONE`, or `ANY` (with `LOCAL_ONE` reads) results in eventual consistency. This consistency is used in cases where the focus is on latency.
++
+## Eventual consistency in Azure Cosmos DB for Apache Cassandra
+
+A consistency level of `LOCAL_ONE`, `ONE` or `Any` will give you eventual consistency. With eventual consistency, a read is served from just one of the replicas in the specified region.
+
+## Override consistency level for read operations in Azure Cosmos DB for Cassandra
+Previously, the consistency level for read requests could only be overridden to a lower consistency than the default set on the account. For instance, with the default consistency of Strong, read requests could be issued with Strong by default and overridden on a per request basis (if needed) to a consistency level weaker than Strong. However, read requests couldn't be issued with an overridden consistency level higher than the account's default. An account with Eventual consistency couldn't receive read requests with a consistency level higher than Eventual (which in the Apache Cassandra drivers translate to `TWO`, `THREE`, `LOCAL_QUORUM` or `QUORUM`).
-If your Azure Cosmos DB account is configured with a consistency level other than the strong consistency, you can find out the probability that your clients may get strong and consistent reads for your workloads by looking at the *Probabilistically Bounded Staleness* (PBS) metric. This metric is exposed in the Azure portal, to learn more, see [Monitor Probabilistically Bounded Staleness (PBS) metric](../how-to-manage-consistency.md#monitor-probabilistically-bounded-staleness-pbs-metric).
+Azure Cosmos DB for Cassandra now facilitates overriding the consistency on read requests to a value higher than the account's default consistency. For instance, with the default consistency on the Cosmos DB account set to Eventual (Apache Cassandra equivalent of `One` or `ANY`), read requests can be overridden on a per request basis to `LOCAL_QUORUM`. This override ensures that a quorum number of replicas within the specified region are consulted prior to returning the result set, as required by `LOCAL_QUORUM`.
-Probabilistic bounded staleness shows how eventual is your eventual consistency. This metric provides an insight into how often you can get a stronger consistency than the consistency level that you have currently configured on your Azure Cosmos DB account. In other words, you can see the probability (measured in milliseconds) of getting strongly consistent reads for a combination of write and read regions.
+This option also prevents the need to set a default consistency that is higher than `Eventual`, when it's only needed for read requests.
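As a hedged sketch (not from the article) of the per-request read override described above, the open-source DataStax Python driver lets you set `consistency_level` on an individual statement; the contact point, port, keyspace, credentials, and query below are placeholders for an Azure Cosmos DB for Apache Cassandra account:

```python
from ssl import PROTOCOL_TLSv1_2, SSLContext

from cassandra import ConsistencyLevel
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Placeholder connection details for an Azure Cosmos DB for Apache Cassandra account.
ssl_context = SSLContext(PROTOCOL_TLSv1_2)
auth = PlainTextAuthProvider(username="<account-name>", password="<account-key>")
cluster = Cluster(
    ["<account-name>.cassandra.cosmos.azure.com"],
    port=10350,
    auth_provider=auth,
    ssl_context=ssl_context,
)
session = cluster.connect("mykeyspace")

# The account default (for example, Eventual) stays in place; this one read is
# overridden to LOCAL_QUORUM so a quorum of replicas in the region is consulted.
stmt = SimpleStatement(
    "SELECT * FROM users WHERE user_id = %s",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
row = session.execute(stmt, ["user-1"]).one()
```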
## Next steps Learn more about global distribution and consistency levels for Azure Cosmos DB:
-* [Global distribution overview](../distribute-data-globally.md)
-* [Consistency Level overview](../consistency-levels.md)
-* [High availability](../high-availability.md)
+- [Global distribution overview](../distribute-data-globally.md)
+- [Consistency Level overview](../consistency-levels.md)
+- [High availability](../high-availability.md)
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
Azure Cosmos DB is a fully managed NoSQL database for modern app development. Az
## APIs in Azure Cosmos DB
-Azure Cosmos DB offers multiple database APIs, which include NoSQL, MongoDB, Cassandra, Gremlin, and Table. By using these APIs, you can model real world data using documents, key-value, graph, and column-family data models. These APIs allow your applications to treat Azure Cosmos DB as if it were various other databases technologies, without the overhead of management, and scaling approaches. Azure Cosmos DB helps you to use the ecosystems, tools, and skills you already have for data modeling and querying with its various APIs.
+Azure Cosmos DB offers multiple database APIs, which include NoSQL, MongoDB, PostgreSQL, Cassandra, Gremlin, and Table. By using these APIs, you can model real world data using documents, key-value, graph, and column-family data models. These APIs allow your applications to treat Azure Cosmos DB as if it were various other database technologies, without the overhead of management and scaling approaches. Azure Cosmos DB helps you to use the ecosystems, tools, and skills you already have for data modeling and querying with its various APIs.
All the APIs offer automatic scaling of storage and throughput, flexibility, and performance guarantees. There's no one best API, and you may choose any one of the APIs to build your application. This article will help you choose an API based on your workload and team requirements.
All the APIs offer automatic scaling of storage and throughput, flexibility, and
API for NoSQL is native to Azure Cosmos DB.
-API for MongoDB, Cassandra, Gremlin, and Table implement the wire protocol of open-source database engines. These APIs are best suited if the following conditions are true:
+API for MongoDB, PostgreSQL, Cassandra, Gremlin, and Table implement the wire protocol of open-source database engines. These APIs are best suited if the following conditions are true:
-* If you have existing MongoDB, Cassandra, or Gremlin applications
+* If you have existing MongoDB, PostgreSQL, Cassandra, or Gremlin applications
* If you don't want to rewrite your entire data access layer * If you want to use the open-source developer ecosystem, client-drivers, expertise, and resources for your database * If you want to use the Azure Cosmos DB core features such as:
Based on your workload, you must choose the API that fits your requirement. The
:::image type="content" source="./media/choose-api/choose-api-decision-tree.png" alt-text="Decision tree to choose an API in Azure Cosmos DB." lightbox="./media/choose-api/choose-api-decision-tree.png":::
+> [!NOTE]
+> This decision tree will be updated soon to include API for PostgreSQL.
+ ## <a id="coresql-api"></a> API for NoSQL The Azure Cosmos DB API for NoSQL stores data in document format. It offers the best end-to-end experience as we have full control over the interface, service, and the SDK client libraries. Any new feature that is rolled out to Azure Cosmos DB is first available on API for NoSQL accounts. NoSQL accounts provide support for querying items using the Structured Query Language (SQL) syntax, one of the most familiar and popular query languages to query JSON objects. To learn more, see the [Azure Cosmos DB API for NoSQL](/training/modules/intro-to-azure-cosmos-db-core-api/) training module and [getting started with SQL queries](nosql/query/getting-started.md) article.
The features that Azure Cosmos DB provides, that you don't have to compromise on
You can use your existing MongoDB apps with API for MongoDB by just changing the connection string. You can move any existing data using native MongoDB tools such as mongodump & mongorestore or using our Azure Database Migration tool. Tools, such as the MongoDB shell, [MongoDB Compass](mongodb/connect-using-compass.md), and [Robo3T](mongodb/connect-using-robomongo.md), can run queries and work with data as they do with native MongoDB. To learn more, see [API for MongoDB](mongodb/introduction.md) article.
+## API for PostgreSQL
+
+Azure Cosmos DB for PostgreSQL is a managed service for running PostgreSQL at any scale, with the [Citus open source](https://github.com/citusdata/citus) superpower of distributed tables. It stores data either on a single node, or distributed in a multi-node configuration.
+
+Azure Cosmos DB for PostgreSQL is built on native PostgreSQL--rather than a PostgreSQL fork--and lets you choose any major database versions supported by the PostgreSQL community. It's ideal for starting on a single-node database with rich indexing, geospatial capabilities, and JSONB support. Later, if your performance needs grow, you can add nodes to the cluster with zero downtime.
+
+If you're looking for a managed open source relational database with high performance and geo-replication, Azure Cosmos DB for PostgreSQL is the recommended choice. To learn more, see the [Azure Cosmos DB for PostgreSQL introduction](postgresql/introduction.md).
+ ## <a id="cassandra-api"></a> API for Apache Cassandra The Azure Cosmos DB API for Cassandra stores data in column-oriented schema. Apache Cassandra offers a highly distributed, horizontally scaling approach to storing large volumes of data while offering a flexible approach to a column-oriented schema. API for Cassandra in Azure Cosmos DB aligns with this philosophy to approaching distributed NoSQL databases. This API for Cassandra is wire protocol compatible with native Apache Cassandra. You should consider API for Cassandra if you want to benefit from the elasticity and fully managed nature of Azure Cosmos DB and still use most of the native Apache Cassandra features, tools, and ecosystem. This fully managed nature means on API for Cassandra you don't need to manage the OS, Java VM, garbage collector, read/write performance, nodes, clusters, etc.
The Azure Cosmos DB API for Table stores data in key/value format. If you're cur
Applications written for Azure Table storage can migrate to the API for Table with little code changes and take advantage of premium capabilities. To learn more, see [API for Table](table/introduction.md) article.
-## API for PostgreSQL
-
-Azure Cosmos DB for PostgreSQL is a managed service for running PostgreSQL at any scale, with the [Citus open source](https://github.com/citusdata/citus) superpower of distributed tables. It stores data either on a single node, or distributed in a multi-node configuration.
-
-Azure Cosmos DB for PostgreSQL is built on native PostgreSQL--rather than a PostgreSQL fork--and lets you choose any major database versions supported by the PostgreSQL community. It's ideal for starting on a single-node database with rich indexing, geospatial capabilities, and JSONB support. Later, if your performance needs grow, you can add nodes to the cluster with zero downtime.
-
-If youΓÇÖre looking for a managed open source relational database with high performance and geo-replication, Azure Cosmos DB for PostgreSQL is the recommended choice. To learn more, see the [Azure Cosmos DB for PostgreSQL introduction](postgresql/introduction.md).
- ## Capacity planning when migrating data Trying to do capacity planning for a migration to Azure Cosmos DB for NoSQL or MongoDB from an existing database cluster? You can use information about your existing database cluster for capacity planning.
Trying to do capacity planning for a migration to Azure Cosmos DB for NoSQL or M
* [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-dotnet.md) * [Get started with Azure Cosmos DB for MongoDB](mongodb/create-mongodb-nodejs.md)
+* [Get started with Azure Cosmos DB for PostgreSQL](postgresql/quickstart-create-portal.md)
* [Get started with Azure Cosmos DB for Cassandra](cassandr) * [Get started with Azure Cosmos DB for Gremlin](gremlin/quickstart-dotnet.md) * [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md)
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
You can provision throughput at a container-level or a database-level in terms o
| Maximum storage per container | Unlimited | | Maximum attachment size per Account (Attachment feature is being deprecated) | 2 GB | | Minimum RU/s required per 1 GB | 10 RU/s <sup>3</sup> |
-
+ <sup>1</sup> You can increase Maximum RUs per container or database by [filing an Azure support ticket](create-support-request-quota-increase.md). <sup>2</sup> To learn about best practices for managing workloads that have partition keys requiring higher limits for storage or throughput, see [Create a synthetic partition key](synthetic-partition-keys.md). If your workload has already reached the logical partition limit of 20 GB in production, it's recommended to rearchitect your application with a different partition key as a long-term solution. To help give time to rearchitect your application, you can request a temporary increase in the logical partition key limit for your existing application. [File an Azure support ticket](create-support-request-quota-increase.md) and select quota type **Temporary increase in container's logical partition key size**. Requesting a temporary increase is intended as a temporary mitigation and not recommended as a long-term solution, as **SLA guarantees are not honored when the limit is increased**. To remove the configuration, file a support ticket and select quota type **Restore containerΓÇÖs logical partition key size to default (20 GB)**. Filing this support ticket can be done after you have either deleted data to fit the 20-GB logical partition limit or have rearchitected your application with a different partition key.
An Azure Cosmos DB item can represent either a document in a collection, a row i
| Maximum size of an item | 2 MB (UTF-8 length of JSON representation) <sup>1</sup> | | Maximum length of partition key value | 2048 bytes | | Maximum length of ID value | 1023 bytes |
+| Allowed characters for ID value | On the service side, all Unicode characters except '/' and '\\' are allowed. <br/>**WARNING: For best interoperability, we STRONGLY RECOMMEND using only alphanumeric ASCII characters in the ID value**. <br/>There are several known limitations in some versions of the Cosmos DB SDKs, as well as connectors (ADF, Spark, Kafka, etc.) and HTTP drivers/libraries, that can prevent successful processing when the ID value contains non-alphanumeric ASCII characters. If you have to support non-alphanumeric ASCII characters in your service or application, encode the ID value to increase interoperability - [for example via Base64 plus custom encoding of the special characters allowed in Base64](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/78fc16c35c521b4f9a7aeef11db4df79c2545dee/Microsoft.Azure.Cosmos.Encryption/src/EncryptionProcessor.cs#L475-L489). A sketch after this table shows one possible encoding. |
| Maximum number of properties per item | No practical limit | | Maximum length of property name | No practical limit | | Maximum length of property value | No practical limit |
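A minimal sketch of the encoding approach recommended above, assuming a hypothetical raw ID value and a URL-safe-style character substitution (neither is prescribed by the article):

```powershell
# Hypothetical raw ID that contains '/' and other characters that are risky
# in Cosmos DB IDs and in URL paths.
$rawId = 'orders/2022-10-26 user+1'

# Base64-encode the UTF-8 bytes of the raw ID.
$encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($rawId))

# Base64 output may itself contain '/' (not allowed in IDs) and '+',
# so apply a custom, reversible substitution for those characters.
$safeId = $encoded.Replace('/', '_').Replace('+', '-')

Write-Output $safeId
```

Any reversible substitution works, as long as the resulting ID contains only characters that the SDKs and connectors you depend on can handle.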
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
Azure Cosmos DB accounts configured with multiple write regions cannot be config
To learn more about consistency concepts, read the following articles: -- [High-level TLA+ specifications for the five consistency levels offered by Azure Cosmos DB](https://github.com/Azure/azure-cosmos-tla)
+- [High-level TLA+ specifications for the five consistency levels offered by Azure Cosmos DB](https://github.com/tlaplus/azure-cosmos-tla)
- [Replicated Data Consistency Explained Through Baseball (video) by Doug Terry](https://www.youtube.com/watch?v=gluIh8zd26I) - [Replicated Data Consistency Explained Through Baseball (whitepaper) by Doug Terry](https://www.microsoft.com/research/publication/replicated-data-consistency-explained-through-baseball/) - [Session guarantees for weakly consistent replicated data](https://dl.acm.org/citation.cfm?id=383631)
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
The following table summarizes the high availability capability of various accou
|Zone failures – data loss | Data loss | No data loss | No data loss | No data loss | No data loss | |Zone failures – availability | Availability loss | No availability loss | No availability loss | No availability loss | No availability loss | |Regional outage – data loss | Data loss | Data loss | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information. | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information. | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information.
-|Regional outage – availability | Availability loss | Availability loss | No availability loss for read region failure, temporary for write region failure | No availability loss for read region failure, temporary for write region failure | No availability loss |
+|Regional outage – availability | Availability loss | Availability loss | No availability loss for read region failure, temporary for write region failure | No availability loss for read region failure, temporary for write region failure | No read availability loss, temporary write availability loss in the affected region |
|Price (***1***) | N/A | Provisioned RU/s x 1.25 rate | Provisioned RU/s x n regions | Provisioned RU/s x 1.25 rate x n regions (***2***) | Multi-region write rate x n regions | ***1*** For Serverless accounts, request units (RU) are multiplied by a factor of 1.25.
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
Previously updated : 05/09/2022 Last updated : 10/26/2022 # Merge partitions in Azure Cosmos DB (preview)+ [!INCLUDE[NoSQL, MongoDB](includes/appliesto-nosql-mongodb.md)] Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number of physical partitions used for your container in place. With merge, containers that are fragmented in throughput (have low RU/s per partition) or storage (have low storage per partition) can have their physical partitions reworked. If a container's throughput has been scaled up and needs to be scaled back down, merge can help resolve throughput fragmentation issues. For the same amount of provisioned RU/s, having fewer physical partitions means each physical partition gets more of the overall RU/s. Minimizing partitions reduces the chance of rate limiting if a large quantity of data is removed from a container and RU/s per partition is low. Merge can help clear out unused or empty partitions, effectively resolving storage fragmentation problems. ## Getting started
-To get started using partition merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+To get started using partition merge, navigate to the **Features** page in your Azure Cosmos DB account. Select and enable the **Partition merge (preview)** feature.
+
+Before enabling the feature, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria). Once you've enabled the feature, it will take 15-20 minutes to take effect.
-Before submitting your request:
-- Ensure that you have at least 1 Azure Cosmos DB account in the subscription. This may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.-- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+> [!CAUTION]
+> When merge is enabled on an account, only requests from .NET SDK version >= 3.27.0 will be allowed on the account, regardless of whether merges are ongoing or not. Requests from other SDKs (older .NET SDK, Java, JavaScript, Python, Go) or unsupported connectors (Azure Data Factory, Azure Search, Azure Cosmos DB Spark connector, Azure Functions, Azure Stream Analytics, and others) will be blocked and fail. Ensure you have upgraded to a supported SDK version before enabling the feature. After the feature is enabled or disabled, it may take 15-20 minutes to fully propagate to the account. If you plan to disable the feature after you've completed using it, it may take 15-20 minutes before requests from SDKs and connectors that are not supported for merge are allowed.
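As an illustrative sketch only (the command and project file name below are assumptions, not taken from the article), a .NET project can be moved onto a merge-supported SDK version by updating the `Microsoft.Azure.Cosmos` NuGet package:

```powershell
# Sketch: upgrade a project to a merge-supported .NET SDK version (>= 3.27.0).
# 'MyCosmosApp.csproj' is a placeholder project file name.
dotnet add ./MyCosmosApp.csproj package Microsoft.Azure.Cosmos --version 3.27.0
```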
-The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Partition Merge**. Run the **Check eligibility for partition merge preview** diagnostic. :::image type="content" source="media/merge/merge-eligibility-check.png" alt-text="Screenshot of merge eligibility check with table of all preview eligibility criteria."::: ### How to identify containers to merge Containers that meet both of these conditions are likely to benefit from merging partitions:-- Condition 1: The current RU/s per physical partition is <3000 RU/s-- Condition 2: The current average storage in GB per physical partition is <20 GB
-Condition 1 often occurs when you have previously scaled up the RU/s (often for a data ingestion) and now want to scale down in steady state.
+- **Condition 1**: The current RU/s per physical partition is <3000 RU/s
+- **Condition 2**: The current average storage in GB per physical partition is <20 GB
+
+Condition 1 often occurs when you've previously scaled up the RU/s (often for a data ingestion) and now want to scale down in steady state.
Condition 2 often occurs when you delete/TTL a large volume of data, leaving unused partitions. #### Criteria 1
-To determine the current RU/s per physical partition, from your Cosmos account, navigate to **Metrics**. Select the metric **Physical Partition Throughput** and filter to your database and container. Apply splitting by **PhysicalPartitionId**.
+To determine the current RU/s per physical partition, from your Cosmos account, navigate to **Metrics**. Select the metric **Physical Partition Throughput** and filter to your database and container. Apply splitting by **PhysicalPartitionId**.
-For containers using autoscale, this will show the max RU/s currently provisioned on each physical partition. For containers using manual throughput, this will show the manual RU/s on each physical partition.
+For containers using autoscale, this metric will show the max RU/s currently provisioned on each physical partition. For containers using manual throughput, this metric will show the manual RU/s on each physical partition.
-In the below example, we have an autoscale container provisioned with 5000 RU/s (scales between 500 - 5000 RU/s). It has 5 physical partitions and each physical partition has 1000 RU/s.
+In the below example, we have an autoscale container provisioned with 5000 RU/s (scales between 500 - 5000 RU/s). It has five physical partitions and each physical partition has 1000 RU/s.
:::image type="content" source="media/merge/RU-per-physical-partition-metric.png" alt-text="Screenshot of Azure Monitor metric Physical Partition Throughput in Azure portal.":::
Navigate to **Insights** > **Storage** > **Data & Index Usage**. The total stora
:::image type="content" source="media/merge/storage-per-container.png" alt-text="Screenshot of Azure Monitor storage (data + index) metric for container in Azure portal.":::
-Next, find the total number of physical partitions. This is the distinct number of **PhysicalPartitionIds** in the **PhysicalPartitionThroughput** chart we saw in Criteria 1. In our example, we have 5 physical partitions.
+Next, find the total number of physical partitions. This metric is the distinct number of **PhysicalPartitionIds** in the **PhysicalPartitionThroughput** chart we saw in Criteria 1. In our example, we have five physical partitions.
-Finally, calculate: Total storage in GB / number of physical partitions. In our example, we have an average of (74 GB / 5 physical partitions) = 14.8 GB per physical partition.
+Finally, calculate: Total storage in GB / number of physical partitions. In our example, we have an average of (74 GB / five physical partitions) = 14.8 GB per physical partition.
Based on criteria 1 and 2, our container can potentially benefit from merging partitions.
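A small sketch, assuming the example numbers from the walkthrough above, that checks both conditions programmatically; the variable names and output format are illustrative:

```powershell
# Example values from the walkthrough above; substitute your own container's numbers.
$provisionedRuPerSecond = 5000   # autoscale max RU/s (or manual RU/s) on the container
$physicalPartitionCount = 5      # distinct PhysicalPartitionId values in the metric
$totalStorageGb         = 74     # data + index storage for the container

$ruPerPartition        = $provisionedRuPerSecond / $physicalPartitionCount   # 1000 RU/s
$storageGbPerPartition = $totalStorageGb / $physicalPartitionCount           # 14.8 GB

# A container is likely to benefit from merge when both conditions hold.
$likelyMergeCandidate = ($ruPerPartition -lt 3000) -and ($storageGbPerPartition -lt 20)

Write-Output "RU/s per physical partition: $ruPerPartition"
Write-Output "Storage (GB) per physical partition: $storageGbPerPartition"
Write-Output "Likely merge candidate: $likelyMergeCandidate"
```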
In PowerShell, when the flag `-WhatIf` is passed in, Azure Cosmos DB will run a
```azurepowershell # Add the preview extension
-Install-Module -Name Az.CosmosDB -AllowPrerelease -Force
+$parameters = @{
+ Name = "Az.CosmosDB"
+ AllowPrerelease = $true
+ Force = $true
+}
+Install-Module @parameters
+```
+```azurepowershell
# API for NoSQL
-Invoke-AzCosmosDBSqlContainerMerge `
- -ResourceGroupName "<resource-group-name>" `
- -AccountName "<cosmos-account-name>" `
- -DatabaseName "<cosmos-database-name>" `
- -Name "<cosmos-container-name>" `
- -WhatIf
+$parameters = @{
+ ResourceGroupName = "<resource-group-name>"
+