Updates from: 10/27/2022 01:09:53
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication In Azure Static App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-static-app.md
Once you've added the app ID and secret, use the following steps to add the Azu
const { clientPrincipal } = payload; return clientPrincipal; }
-
+
await getUserInfo(); ``` - > [!TIP] > If you can't run the above JavaScript code in your browser, navigate to the following URL `https://<app-name>.azurewebsites.net/.auth/me`. Replace `<app-name>` with the name of your Azure Web App.
active-directory-b2c Custom Email Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-sendgrid.md
With a SendGrid account created and SendGrid API key stored in an Azure AD B2C p
1. Select **Blank Template** and then **Code Editor**. 1. In the HTML editor, paste the following HTML template or use your own. The `{{otp}}` and `{{email}}` parameters will be replaced dynamically with the one-time password value and the user email address.
- ```HTML
+ ```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" dir="ltr" lang="en"><head id="Head1">
With a SendGrid account created and SendGrid API key stored in an Azure AD B2C p
<td valign="top" width="50%"></td> </tr> </table>
- </body>
+ </body>
</html> ```
-1. Expand **Settings** on the left, and for **Version Name**, enter a template version.
+1. Expand **Settings** on the left, and for **Version Name**, enter a template version.
1. For **Subject**, enter `{{subject}}`. 1. At the top of the page, select **Save**. 1. Return to the **Transactional Templates** page by selecting the back arrow. 1. Record the **ID** of the template you created for use in a later step. For example, `d-989077fbba9746e89f3f6411f596fb96`. You specify this ID when you [add the claims transformation](#add-the-claims-transformation). - [!INCLUDE [active-directory-b2c-important-for-custom-email-provider](../../includes/active-directory-b2c-important-for-custom-email-provider.md)] ## Add Azure AD B2C claim types
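For orientation, the following is a minimal sketch of the kind of `GenerateJson` claims transformation that consumes this template ID later in the article. The claim names, sender address, and JSON paths are illustrative assumptions; only the template ID is the example value recorded above.

```xml
<!-- Sketch only: builds the SendGrid request body that carries the {{otp}} and {{email}} template parameters. -->
<ClaimsTransformation Id="GenerateEmailRequestBody" TransformationMethod="GenerateJson">
  <InputClaims>
    <!-- Recipient address and the dynamic template data consumed by the HTML template -->
    <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="personalizations.0.to.0.email" />
    <InputClaim ClaimTypeReferenceId="otp" TransformationClaimType="personalizations.0.dynamic_template_data.otp" />
    <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="personalizations.0.dynamic_template_data.email" />
  </InputClaims>
  <InputParameters>
    <!-- The transactional template ID recorded in the previous step -->
    <InputParameter Id="template_id" DataType="string" Value="d-989077fbba9746e89f3f6411f596fb96" />
    <!-- Illustrative sender and subject values -->
    <InputParameter Id="from.email" DataType="string" Value="my_email@mydomain.com" />
    <InputParameter Id="personalizations.0.dynamic_template_data.subject" DataType="string" Value="Contoso account email verification code" />
  </InputParameters>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="emailRequestBody" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```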
active-directory-b2c Localization String Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization-string-ids.md
Title: Localization string IDs - Azure Active Directory B2C
+ Title: Localization string IDs - Azure Active Directory B2C
description: Specify the IDs for a content definition with an ID of api.signuporsignin in a custom policy in Azure Active Directory B2C.
[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
-The **Localization** element enables you to support multiple locales or languages in the policy for the user journeys. This article provides the list of localization IDs that you can use in your policy. To get familiar with UI localization, see [Localization](localization.md).
+The **Localization** element enables you to support multiple locales or languages in the policy for the user journeys. This article provides the list of localization IDs that you can use in your policy. For more information about UI localization, see [Localization](localization.md).
## Sign-up or sign-in page elements
The following IDs are used for a content definition with an ID of `api.signupors
| ID | Default value | Page Layout Version | | | - | |
-| **forgotpassword_link** | Forgot your password? | `All` |
-| **createaccount_intro** | Don't have an account? | `All` |
-| **button_signin** | Sign in | `All` |
-| **social_intro** | Sign in with your social account | `All` |
-| **remember_me** |Keep me signed in. | `All` |
-| **unknown_error** | We are having trouble signing you in. Please try again later. | `All` |
-| **divider_title** | OR | `All` |
-| **local_intro_email** | Sign in with your existing account | `< 2.0.0` |
-| **logonIdentifier_email** | Email Address | `< 2.0.0` |
-| **requiredField_email** | Please enter your email | `< 2.0.0` |
-| **invalid_email** | Please enter a valid email address | `< 2.0.0` |
-| **email_pattern** | ^[a-zA-Z0-9.!#$%&''\*+/=?^\_\`{\|}~-]+@[a-zA-Z0-9-]+(?:\\.[a-zA-Z0-9-]+)\*$ | `< 2.0.0` |
-| **local_intro_username** | Sign in with your user name | `< 2.0.0` |
-| **logonIdentifier_username** | Username | `< 2.0.0` |
-| **requiredField_username** | Please enter your user name | `< 2.0.0` |
-| **password** | Password | `< 2.0.0` |
-| **requiredField_password** | Please enter your password | `< 2.0.0` |
-| **createaccount_link** | Sign up now | `< 2.0.0` |
-| **cancel_message** | The user has forgotten their password | `< 2.0.0` |
-| **invalid_password** | The password you entered is not in the expected format. | `< 2.0.0` |
-| **createaccount_one_link** | Sign up now | `>= 2.0.0` |
-| **createaccount_two_links** | Sign up with {0} or {1} | `>= 2.0.0` |
-| **createaccount_three_links** | Sign up with {0}, {1}, or {2} | `>= 2.0.0` |
-| **local_intro_generic** | Sign in with your {0} | `>= 2.1.0` |
-| **requiredField_generic** | Please enter your {0} | `>= 2.1.0` |
-| **invalid_generic** | Please enter a valid {0} | `>= 2.1.1` |
-| **heading** | Sign in | `>= 2.1.1` |
+| `forgotpassword_link` | Forgot your password? | `All` |
+| `createaccount_intro` | Don't have an account? | `All` |
+| `button_signin` | Sign in | `All` |
+| `social_intro` | Sign in with your social account | `All` |
+| `remember_me` |Keep me signed in. | `All` |
+| `unknown_error` | We are having trouble signing you in. Please try again later. | `All` |
+| `divider_title` | OR | `All` |
+| `local_intro_email` | Sign in with your existing account | `< 2.0.0` |
+| `logonIdentifier_email` | Email Address | `< 2.0.0` |
+| `requiredField_email` | Please enter your email | `< 2.0.0` |
+| `invalid_email` | Please enter a valid email address | `< 2.0.0` |
+| `email_pattern` | ```^[a-zA-Z0-9.!#$%&''\*+/=?^\_\`{\|}~-]+@[a-zA-Z0-9-]+(?:\\.[a-zA-Z0-9-]+)\*$``` | `< 2.0.0` |
+| `local_intro_username` | Sign in with your user name | `< 2.0.0` |
+| `logonIdentifier_username` | Username | `< 2.0.0` |
+| `requiredField_username` | Please enter your user name | `< 2.0.0` |
+| `password` | Password | `< 2.0.0` |
+| `requiredField_password` | Please enter your password | `< 2.0.0` |
+| `createaccount_link` | Sign up now | `< 2.0.0` |
+| `cancel_message` | The user has forgotten their password | `< 2.0.0` |
+| `invalid_password` | The password you entered is not in the expected format. | `< 2.0.0` |
+| `createaccount_one_link` | Sign up now | `>= 2.0.0` |
+| `createaccount_two_links` | Sign up with {0} or {1} | `>= 2.0.0` |
+| `createaccount_three_links` | Sign up with {0}, {1}, or {2} | `>= 2.0.0` |
+| `local_intro_generic` | Sign in with your {0} | `>= 2.1.0` |
+| `requiredField_generic` | Please enter your {0} | `>= 2.1.0` |
+| `invalid_generic` | Please enter a valid {0} | `>= 2.1.1` |
+| `heading` | Sign in | `>= 2.1.1` |
> [!NOTE]
-> * Placeholders like {0} will be filled automatically with the `DisplayName` value of `ClaimType`.
+> * Placeholders like `{0}` are populated automatically with the `DisplayName` value of `ClaimType`.
> * To learn how to localize `ClaimType`, see [Sign-up or sign-in example](#signupsigninexample).
-The following example shows the use of some of the user interface elements in the sign-up or sign-in page:
+The following example shows the use of some user interface elements in the sign-up or sign-in page:
:::image type="content" source="./media/localization-string-ids/localization-susi-2.png" alt-text="Screenshot that shows sign-up or sign-in page U X elements."::: ### Sign-up or sign-in identity providers
-The ID of the identity providers is configured in the user journey **ClaimsExchange** element. To localize the title of the identity provider, the **ElementType** is set to `ClaimsProvider`, while the **StringId** is set to the ID of the `ClaimsExchange`.
+The ID of the identity providers is configured in the user journey **ClaimsExchange** element. To localize the title of the identity provider, the **ElementType** is set to `ClaimsProvider`, while the **StringId** is set to the ID of the `ClaimsExchange`.
```xml <OrchestrationStep Order="2" Type="ClaimsExchange">
The following example localizes the Facebook identity provider to Arabic:
| ID | Default value | | | - |
-| **UserMessageIfInvalidPassword** | Your password is incorrect. |
-| **UserMessageIfPasswordExpired**| Your password has expired.|
-| **UserMessageIfClaimsPrincipalDoesNotExist** | We can't seem to find your account. |
-| **UserMessageIfOldPasswordUsed** | Looks like you used an old password. |
-| **DefaultMessage** | Invalid username or password. |
-| **UserMessageIfUserAccountDisabled** | Your account has been locked. Contact your support person to unlock it, then try again. |
-| **UserMessageIfUserAccountLocked** | Your account is temporarily locked to prevent unauthorized use. Try again later. |
-| **AADRequestsThrottled** | There are too many requests at this moment. Please wait for some time and try again. |
+| `UserMessageIfInvalidPassword` | Your password is incorrect. |
+| `UserMessageIfPasswordExpired`| Your password has expired.|
+| `UserMessageIfClaimsPrincipalDoesNotExist` | We can't seem to find your account. |
+| `UserMessageIfOldPasswordUsed` | Looks like you used an old password. |
+| `DefaultMessage` | Invalid username or password. |
+| `UserMessageIfUserAccountDisabled` | Your account has been locked. Contact your support person to unlock it, then try again. |
+| `UserMessageIfUserAccountLocked` | Your account is temporarily locked to prevent unauthorized use. Try again later. |
+| `AADRequestsThrottled` | There are too many requests at this moment. Please wait for some time and try again. |
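As a quick sketch of how these string IDs are consumed (the article's own example follows below), each override is a `LocalizedString` inside a `LocalizedResources` element tied to the `api.signuporsignin` content definition. The display text values here are illustrative, and `FacebookExchange` stands in for the ID of your `ClaimsExchange` element.

```xml
<LocalizedResources Id="api.signuporsignin.en">
  <LocalizedStrings>
    <!-- Page UI text -->
    <LocalizedString ElementType="UxElement" StringId="heading">Sign in to Contoso</LocalizedString>
    <LocalizedString ElementType="UxElement" StringId="createaccount_one_link">Sign up now</LocalizedString>
    <!-- Error message from the table above -->
    <LocalizedString ElementType="ErrorMessage" StringId="UserMessageIfInvalidPassword">Your password is incorrect.</LocalizedString>
    <!-- Claim display name used by {0} placeholders -->
    <LocalizedString ElementType="ClaimType" ElementId="email" StringId="DisplayName">Email Address</LocalizedString>
    <!-- Identity provider button title; StringId is the ClaimsExchange ID -->
    <LocalizedString ElementType="ClaimsProvider" StringId="FacebookExchange">Facebook</LocalizedString>
  </LocalizedStrings>
</LocalizedResources>
```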
<a name="signupsigninexample"></a> ### Sign-up or sign-in example ```xml
The following example localizes the Facebook identity provider to Arabic:
## Sign-up and self-asserted pages user interface elements
-The following are the IDs for a content definition with an ID of `api.localaccountsignup` or any content definition that starts with `api.selfasserted`, such as `api.selfasserted.profileupdate` and `api.localaccountpasswordreset`, and [self-asserted technical profile](self-asserted-technical-profile.md).
+The following IDs are used for a content definition with an ID of `api.localaccountsignup` or any content definition that starts with `api.selfasserted`, such as `api.selfasserted.profileupdate` and `api.localaccountpasswordreset`, and the [self-asserted technical profile](self-asserted-technical-profile.md).
| ID | Default value | | | - |
-| **ver_sent** | Verification code has been sent to: |
-| **ver_but_default** | Default |
-| **cancel_message** | The user has canceled entering self-asserted information |
-| **preloader_alt** | Please wait |
-| **ver_but_send** | Send verification code |
-| **alert_yes** | Yes |
-| **error_fieldIncorrect** | One or more fields are filled out incorrectly. Please check your entries and try again. |
-| **year** | Year |
-| **verifying_blurb** | Please wait while we process your information. |
-| **button_cancel** | Cancel |
-| **ver_fail_no_retry** | You've made too many incorrect attempts. Please try again later. |
-| **month** | Month |
-| **ver_success_msg** | E-mail address verified. You can now continue. |
-| **months** | January, February, March, April, May, June, July, August, September, October, November, December |
-| **ver_fail_server** | We are having trouble verifying your email address. Please enter a valid email address and try again. |
-| **error_requiredFieldMissing** | A required field is missing. Please fill out all required fields and try again. |
-| **heading** | User Details |
-| **initial_intro** | Please provide the following details. |
-| **ver_but_resend** | Send new code |
-| **button_continue** | Create |
-| **error_passwordEntryMismatch** | The password entry fields do not match. Please enter the same password in both fields and try again. |
-| **ver_incorrect_format** | Incorrect format. |
-| **ver_but_edit** | Change e-mail |
-| **ver_but_verify** | Verify code |
-| **alert_no** | No |
-| **ver_info_msg** | Verification code has been sent to your inbox. Please copy it to the input box below. |
-| **day** | Day |
-| **ver_fail_throttled** | There have been too many requests to verify this email address. Please wait a while, then try again. |
-| **helplink_text** | What is this? |
-| **ver_fail_retry** | That code is incorrect. Please try again. |
-| **alert_title** | Cancel Entering Your Details |
-| **required_field** | This information is required. |
-| **alert_message** | Are you sure that you want to cancel entering your details? |
-| **ver_intro_msg** | Verification is necessary. Please click Send button. |
-| **ver_input** | Verification code |
+| `ver_sent` | Verification code has been sent to: |
+| `ver_but_default` | Default |
+| `cancel_message` | The user has canceled entering self-asserted information |
+| `preloader_alt` | Please wait |
+| `ver_but_send` | Send verification code |
+| `alert_yes` | Yes |
+| `error_fieldIncorrect` | One or more fields are filled out incorrectly. Please check your entries and try again. |
+| `year` | Year |
+| `verifying_blurb` | Please wait while we process your information. |
+| `button_cancel` | Cancel |
+| `ver_fail_no_retry` | You've made too many incorrect attempts. Please try again later. |
+| `month` | Month |
+| `ver_success_msg` | E-mail address verified. You can now continue. |
+| `months` | January, February, March, April, May, June, July, August, September, October, November, December |
+| `ver_fail_server` | We are having trouble verifying your email address. Please enter a valid email address and try again. |
+| `error_requiredFieldMissing` | A required field is missing. Please fill out all required fields and try again. |
+| `heading` | User Details |
+| `initial_intro` | Please provide the following details. |
+| `ver_but_resend` | Send new code |
+| `button_continue` | Create |
+| `error_passwordEntryMismatch` | The password entry fields do not match. Please enter the same password in both fields and try again. |
+| `ver_incorrect_format` | Incorrect format. |
+| `ver_but_edit` | Change e-mail |
+| `ver_but_verify` | Verify code |
+| `alert_no` | No |
+| `ver_info_msg` | Verification code has been sent to your inbox. Please copy it to the input box below. |
+| `day` | Day |
+| `ver_fail_throttled` | There have been too many requests to verify this email address. Please wait a while, then try again. |
+| `helplink_text` | What is this? |
+| `ver_fail_retry` | That code is incorrect. Please try again. |
+| `alert_title` | Cancel Entering Your Details |
+| `required_field` | This information is required. |
+| `alert_message` | Are you sure that you want to cancel entering your details? |
+| `ver_intro_msg` | Verification is necessary. Please click Send button. |
+| `ver_input` | Verification code |
### Sign-up and self-asserted pages disclaimer links
The following `UxElement` string IDs will display disclaimer link(s) at the bott
| ID | Example value | | | - |
-| **disclaimer_msg_intro** | By providing your phone number, you consent to receiving a one-time passcode sent by text message to help you sign into {insert your application name}. Standard messsage and data rates may apply. |
-| **disclaimer_link_1_text** | Privacy Statement |
-| **disclaimer_link_1_url** | {insert your privacy statement URL} |
-| **disclaimer_link_2_text** | Terms and Conditions |
-| **disclaimer_link_2_url** | {insert your terms and conditions URL} |
+| `disclaimer_msg_intro` | By providing your phone number, you consent to receiving a one-time passcode sent by text message to help you sign into {insert your application name}. Standard message and data rates may apply. |
+| `disclaimer_link_1_text` | Privacy Statement |
+| `disclaimer_link_1_url` | {insert your privacy statement URL} |
+| `disclaimer_link_2_text` | Terms and Conditions |
+| `disclaimer_link_2_url` | {insert your terms and conditions URL} |
### Sign-up and self-asserted pages error messages | ID | Default value | | | - |
-| **UserMessageIfClaimsPrincipalAlreadyExists** | A user with the specified ID already exists. Please choose a different one. |
-| **UserMessageIfClaimNotVerified** | Claim not verified: {0} |
-| **UserMessageIfIncorrectPattern** | Incorrect pattern for: {0} |
-| **UserMessageIfMissingRequiredElement** | Missing required element: {0} |
-| **UserMessageIfValidationError** | Error in validation by: {0} |
-| **UserMessageIfInvalidInput** | {0} has invalid input. |
-| **ServiceThrottled** | There are too many requests at this moment. Please wait for some time and try again. |
+| `UserMessageIfClaimsPrincipalAlreadyExists` | A user with the specified ID already exists. Please choose a different one. |
+| `UserMessageIfClaimNotVerified` | Claim not verified: {0} |
+| `UserMessageIfIncorrectPattern` | Incorrect pattern for: {0} |
+| `UserMessageIfMissingRequiredElement` | Missing required element: {0} |
+| `UserMessageIfValidationError` | Error in validation by: {0} |
+| `UserMessageIfInvalidInput` | {0} has invalid input. |
+| `ServiceThrottled` | There are too many requests at this moment. Please wait for some time and try again. |
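A minimal sketch, under the same `LocalizedResources` pattern, of overriding self-asserted page text, a disclaimer link, and an error message for the `api.localaccountsignup` content definition; all display values and the URL are illustrative.

```xml
<LocalizedResources Id="api.localaccountsignup.en">
  <LocalizedStrings>
    <!-- Page UI text -->
    <LocalizedString ElementType="UxElement" StringId="heading">Create your account</LocalizedString>
    <LocalizedString ElementType="UxElement" StringId="ver_but_send">Send verification code</LocalizedString>
    <!-- Disclaimer link shown at the bottom of the page -->
    <LocalizedString ElementType="UxElement" StringId="disclaimer_link_1_text">Privacy Statement</LocalizedString>
    <LocalizedString ElementType="UxElement" StringId="disclaimer_link_1_url">https://contoso.com/privacy</LocalizedString>
    <!-- Error message from the table above -->
    <LocalizedString ElementType="ErrorMessage" StringId="UserMessageIfClaimsPrincipalAlreadyExists">A user with the specified ID already exists. Please choose a different one.</LocalizedString>
  </LocalizedStrings>
</LocalizedResources>
```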
The following example shows the use of some of the user interface elements in the sign-up page:
The following are the IDs for a content definition with an ID of `api.phonefacto
| ID | Default value | Page Layout Version | | | - | |
-| **button_verify** | Call Me | `All` |
-| **country_code_label** | Country Code | `All` |
-| **cancel_message** | The user has canceled multi-factor authentication | `All` |
-| **text_button_send_second_code** | send a new code | `All` |
-| **code_pattern** | \\d{6} | `All` |
-| **intro_mixed** | We have the following number on record for you. We can send a code via SMS or phone to authenticate you. | `All` |
-| **intro_mixed_p** | We have the following numbers on record for you. Choose a number that we can phone or send a code via SMS to authenticate you. | `All` |
-| **button_verify_code** | Verify Code | `All` |
-| **requiredField_code** | Please enter the verification code you received | `All` |
-| **invalid_code** | Please enter the 6-digit code you received | `All` |
-| **button_cancel** | Cancel | `All` |
-| **local_number_input_placeholder_text** | Phone number | `All` |
-| **button_retry** | Retry | `All` |
-| **alternative_text** | I don't have my phone | `All` |
-| **intro_phone_p** | We have the following numbers on record for you. Choose a number that we can phone to authenticate you. | `All` |
-| **intro_phone** | We have the following number on record for you. We will phone to authenticate you. | `All` |
-| **enter_code_text_intro** | Enter your verification code below, or | `All` |
-| **intro_entry_phone** | Enter a number below that we can phone to authenticate you. | `All` |
-| **intro_entry_sms** | Enter a number below that we can send a code via SMS to authenticate you. | `All` |
-| **button_send_code** | Send Code | `All` |
-| **invalid_number** | Please enter a valid phone number | `All` |
-| **intro_sms** | We have the following number on record for you. We will send a code via SMS to authenticate you. | `All` |
-| **intro_entry_mixed** | Enter a number below that we can send a code via SMS or phone to authenticate you. | `All` |
-| **number_pattern** | `^\\+(?:[0-9][\\x20-]?){6,14}[0-9]$` | `All` |
-| **intro_sms_p** |We have the following numbers on record for you. Choose a number that we can send a code via SMS to authenticate you. | `All` |
-| **requiredField_countryCode** | Please select your country code | `All` |
-| **requiredField_number** | Please enter your phone number | `All` |
-| **country_code_input_placeholder_text** |Country or region | `All` |
-| **number_label** | Phone Number | `All` |
-| **error_tryagain** | The phone number you provided is busy or unavailable. Please check the number and try again. | `All` |
-| **error_sms_throttled** | You hit the limit on the number of text messages. Try again shortly. | `>= 1.2.3` |
-| **error_phone_throttled** | You hit the limit on the number of call attempts. Try again shortly. | `>= 1.2.3` |
-| **error_throttled** | You hit the limit on the number of verification attempts. Try again shortly. | `>= 1.2.3` |
-| **error_incorrect_code** | The verification code you have entered does not match our records. Please try again, or request a new code. | `All` |
-| **countryList** | See [the countries list](#phone-factor-authentication-page-example). | `All` |
-| **error_448** | The phone number you provided is unreachable. | `All` |
-| **error_449** | User has exceeded the number of retry attempts. | `All` |
-| **verification_code_input_placeholder_text** | Verification code | `All` |
+| `button_verify` | Call Me | `All` |
+| `country_code_label` | Country Code | `All` |
+| `cancel_message` | The user has canceled multi-factor authentication | `All` |
+| `text_button_send_second_code` | send a new code | `All` |
+| `code_pattern` | \\d{6} | `All` |
+| `intro_mixed` | We have the following number on record for you. We can send a code via SMS or phone to authenticate you. | `All` |
+| `intro_mixed_p` | We have the following numbers on record for you. Choose a number that we can phone or send a code via SMS to authenticate you. | `All` |
+| `button_verify_code` | Verify Code | `All` |
+| `requiredField_code` | Please enter the verification code you received | `All` |
+| `invalid_code` | Please enter the 6-digit code you received | `All` |
+| `button_cancel` | Cancel | `All` |
+| `local_number_input_placeholder_text` | Phone number | `All` |
+| `button_retry` | Retry | `All` |
+| `alternative_text` | I don't have my phone | `All` |
+| `intro_phone_p` | We have the following numbers on record for you. Choose a number that we can phone to authenticate you. | `All` |
+| `intro_phone` | We have the following number on record for you. We will phone to authenticate you. | `All` |
+| `enter_code_text_intro` | Enter your verification code below, or | `All` |
+| `intro_entry_phone` | Enter a number below that we can phone to authenticate you. | `All` |
+| `intro_entry_sms` | Enter a number below that we can send a code via SMS to authenticate you. | `All` |
+| `button_send_code` | Send Code | `All` |
+| `invalid_number` | Please enter a valid phone number | `All` |
+| `intro_sms` | We have the following number on record for you. We will send a code via SMS to authenticate you. | `All` |
+| `intro_entry_mixed` | Enter a number below that we can send a code via SMS or phone to authenticate you. | `All` |
+| `number_pattern` | `^\\+(?:[0-9][\\x20-]?){6,14}[0-9]$` | `All` |
+| `intro_sms_p` |We have the following numbers on record for you. Choose a number that we can send a code via SMS to authenticate you. | `All` |
+| `requiredField_countryCode` | Please select your country code | `All` |
+| `requiredField_number` | Please enter your phone number | `All` |
+| `country_code_input_placeholder_text` |Country or region | `All` |
+| `number_label` | Phone Number | `All` |
+| `error_tryagain` | The phone number you provided is busy or unavailable. Please check the number and try again. | `All` |
+| `error_sms_throttled` | You hit the limit on the number of text messages. Try again shortly. | `>= 1.2.3` |
+| `error_phone_throttled` | You hit the limit on the number of call attempts. Try again shortly. | `>= 1.2.3` |
+| `error_throttled` | You hit the limit on the number of verification attempts. Try again shortly. | `>= 1.2.3` |
+| `error_incorrect_code` | The verification code you have entered does not match our records. Please try again, or request a new code. | `All` |
+| `countryList` | See [the countries list](#phone-factor-authentication-page-example). | `All` |
+| `error_448` | The phone number you provided is unreachable. | `All` |
+| `error_449` | User has exceeded the number of retry attempts. | `All` |
+| `verification_code_input_placeholder_text` | Verification code | `All` |
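A short sketch of the same pattern applied to the phone factor page; the content definition ID `api.phonefactor` comes from the section above, and the override values are illustrative.

```xml
<LocalizedResources Id="api.phonefactor.en">
  <LocalizedStrings>
    <LocalizedString ElementType="UxElement" StringId="button_verify">Call me</LocalizedString>
    <LocalizedString ElementType="UxElement" StringId="intro_entry_sms">Enter a number we can text a verification code to.</LocalizedString>
    <LocalizedString ElementType="UxElement" StringId="error_448">The phone number you provided is unreachable.</LocalizedString>
  </LocalizedStrings>
</LocalizedResources>
```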
The following example shows the use of some of the user interface elements in the MFA enrollment page:
The following example shows the use of some of the user interface elements in th
## Verification display control user interface elements
-The following are the IDs for a [Verification display control](display-control-verification.md) with [page layout version](page-layout.md) 2.1.0 or higher.
+The following IDs are used for a [Verification display control](display-control-verification.md) with [page layout version](page-layout.md) 2.1.0 or higher.
| ID | Default value | | | - |
-|intro_msg<sup>1</sup>| Verification is necessary. Please click Send button.|
-|success_send_code_msg | Verification code has been sent. Please copy it to the input box below.|
-|failure_send_code_msg | We are having trouble verifying your email address. Please enter a valid email address and try again.|
-|success_verify_code_msg | E-mail address verified. You can now continue.|
-|failure_verify_code_msg | We are having trouble verifying your email address. Please try again.|
-|but_send_code | Send verification code|
-|but_verify_code | Verify code|
-|but_send_new_code | Send new code|
-|but_change_claims | Change e-mail|
-| UserMessageIfVerificationControlClaimsNotVerified<sup>2</sup>| The claims for verification control have not been verified. |
+| `intro_msg` <sup>1</sup>| Verification is necessary. Please click Send button.|
+| `success_send_code_msg` | Verification code has been sent. Please copy it to the input box below.|
+| `failure_send_code_msg` | We are having trouble verifying your email address. Please enter a valid email address and try again.|
+| `success_verify_code_msg` | E-mail address verified. You can now continue.|
+| `failure_verify_code_msg` | We are having trouble verifying your email address. Please try again.|
+| `but_send_code` | Send verification code|
+| `but_verify_code` | Verify code|
+| `but_send_new_code` | Send new code|
+| `but_change_claims` | Change e-mail|
+| `UserMessageIfVerificationControlClaimsNotVerified` <sup>2</sup> | The claims for verification control have not been verified. |
<sup>1</sup> The `intro_msg` element is hidden, and not shown on the self-asserted page. To make it visible, use the [HTML customization](customize-ui-with-html.md) with Cascading Style Sheets. For example:
-```css
-.verificationInfoText div{display: block!important}
-```
+`.verificationInfoText div{display: block!important}`
<sup>2</sup> This error message is displayed to the user if they enter a verification code, but instead of completing the verification by selecting the **Verify** button, they select the **Continue** button.
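Before the article's own example below, here's a hedged sketch of how display control strings are localized; `emailVerificationControl` is a placeholder for whatever `Id` your `DisplayControl` element declares in the policy.

```xml
<LocalizedResources Id="api.selfasserted.en">
  <LocalizedStrings>
    <!-- ElementId matches the Id of the DisplayControl element in your policy (placeholder here) -->
    <LocalizedString ElementType="DisplayControl" ElementId="emailVerificationControl" StringId="but_send_code">Send verification code</LocalizedString>
    <LocalizedString ElementType="DisplayControl" ElementId="emailVerificationControl" StringId="success_send_code_msg">Verification code has been sent. Please copy it to the input box below.</LocalizedString>
  </LocalizedStrings>
</LocalizedResources>
```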
-
+ ### Verification display control example ```xml
The following are the IDs for a [Verification display control](display-control-v
## Verification display control user interface elements (deprecated)
-The following are the IDs for a [Verification display control](display-control-verification.md) with [page layout version](page-layout.md) 2.0.0.
+The following IDs are used for a [Verification display control](display-control-verification.md) with [page layout version](page-layout.md) 2.0.0.
| ID | Default value | | | - |
-|verification_control_but_change_claims |Change |
-|verification_control_fail_send_code |Failed to send the code, please try again later. |
-|verification_control_fail_verify_code |Failed to verify the code, please try again later. |
-|verification_control_but_send_code |Send Code |
-|verification_control_but_send_new_code |Send New Code |
-|verification_control_but_verify_code |Verify Code |
-|verification_control_code_sent| Verification code has been sent. Please copy it to the input box below. |
+| `verification_control_but_change_claims` |Change |
+| `verification_control_fail_send_code` |Failed to send the code, please try again later. |
+| `verification_control_fail_verify_code` |Failed to verify the code, please try again later. |
+| `verification_control_but_send_code` |Send Code |
+| `verification_control_but_send_new_code` |Send New Code |
+| `verification_control_but_verify_code` |Verify Code |
+| `verification_control_code_sent`| Verification code has been sent. Please copy it to the input box below. |
### Verification display control example (deprecated)
The following are the IDs for a [Verification display control](display-control-v
## TOTP MFA controls display control user interface elements
-The following are the IDs for a [time-based one-time password (TOTP) display control](display-control-time-based-one-time-password.md) with [page layout version](page-layout.md) 2.1.9 and later.
+The following IDs are used for a [time-based one-time password (TOTP) display control](display-control-time-based-one-time-password.md) with [page layout version](page-layout.md) 2.1.9 and later.
| ID | Default value | | | - |
-|title_text |Download the Microsoft Authenticator using the download links for iOS and Android or use any other authenticator app of your choice. |
-| DN |Once you've downloaded the Authenticator app, you can use any of the methods below to continue with enrollment. |
-|DisplayName |Once you've downloaded the Authenticator app, you can use any of the methods below to continue with enrollment. |
-|title_text |Scan the QR code |
-|info_msg |You can download the Microsoft Authenticator app or use any other authenticator app of your choice. |
-|link_text |Can't scan? Try this |
-|title_text| Enter the account details manually. |
-|account_name | Account Name: |
-|display_prefix | Secret |
-|collapse_text | Still having trouble? |
-|DisplayName | Enter the verification code from your authenticator app.|
-|DisplayName | Enter your code. |
-| button_continue | Verify |
+| `title_text` |Download the Microsoft Authenticator using the download links for iOS and Android or use any other authenticator app of your choice. |
+| `DN` |Once you've downloaded the Authenticator app, you can use any of the methods below to continue with enrollment. |
+| `DisplayName` |Once you've downloaded the Authenticator app, you can use any of the methods below to continue with enrollment. |
+| `title_text` |Scan the QR code |
+| `info_msg` |You can download the Microsoft Authenticator app or use any other authenticator app of your choice. |
+| `link_text` |Can't scan? Try this |
+| `title_text`| Enter the account details manually. |
+| `account_name` | Account Name: |
+| `display_prefix` | Secret |
+| `collapse_text` | Still having trouble? |
+| `DisplayName` | Enter the verification code from your authenticator app.|
+| `DisplayName` | Enter your code. |
+| `button_continue` | Verify |
### TOTP MFA controls display control example
The following are the IDs for a [time-based one-time password (TOTP) display con
## Restful service error messages
-The following are the IDs for [Restful service technical profile](restful-technical-profile.md) error messages:
+The following IDs are used for [Restful service technical profile](restful-technical-profile.md) error messages:
| ID | Default value | | | - |
-|DefaultUserMessageIfRequestFailed | Failed to establish connection to restful service end point. Restful service URL: {0} |
-|UserMessageIfCircuitOpen | {0} Restful Service URL: {1} |
-|UserMessageIfDnsResolutionFailed | Failed to resolve the hostname of the restful service endpoint. Restful service URL: {0} |
-|UserMessageIfRequestTimeout | Failed to establish connection to restful service end point within timeout limit {0} seconds. Restful service URL: {1} |
+| `DefaultUserMessageIfRequestFailed` | Failed to establish connection to restful service end point. Restful service URL: {0} |
+| `UserMessageIfCircuitOpen` | {0} Restful Service URL: {1} |
+| `UserMessageIfDnsResolutionFailed` | Failed to resolve the hostname of the restful service endpoint. Restful service URL: {0} |
+| `UserMessageIfRequestTimeout` | Failed to establish connection to restful service end point within timeout limit {0} seconds. Restful service URL: {1} |
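As a hedged sketch, these REST API error strings can typically be overridden as `ErrorMessage` entries in the localized resources of the self-asserted page that invokes the technical profile; the content definition ID and message text below are illustrative.

```xml
<LocalizedResources Id="api.localaccountsignup.en">
  <LocalizedStrings>
    <LocalizedString ElementType="ErrorMessage" StringId="DefaultUserMessageIfRequestFailed">We can't reach the service right now. Please try again in a few minutes.</LocalizedString>
    <LocalizedString ElementType="ErrorMessage" StringId="UserMessageIfRequestTimeout">The request timed out. Please try again.</LocalizedString>
  </LocalizedStrings>
</LocalizedResources>
```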
### Restful service example
The following are the IDs for [Restful service technical profile](restful-techni
## Azure AD MFA error messages
-The following are the IDs for an [Azure AD MFA technical profile](multi-factor-auth-technical-profile.md) error message:
+The following IDs are used for [Azure AD MFA technical profile](multi-factor-auth-technical-profile.md) error messages:
| ID | Default value | | | - |
-|UserMessageIfCouldntSendSms | Cannot Send SMS to the phone, please try another phone number. |
-|UserMessageIfInvalidFormat | Your phone number is not in a valid format, please correct it and try again.|
-|UserMessageIfMaxAllowedCodeRetryReached | Wrong code entered too many times, please try again later.|
-|UserMessageIfServerError | Cannot use MFA service, please try again later.|
-|UserMessageIfThrottled | Your request has been throttled, please try again later.|
-|UserMessageIfWrongCodeEntered|Wrong code entered, please try again.|
+| `UserMessageIfCouldntSendSms` | Cannot Send SMS to the phone, please try another phone number. |
+| `UserMessageIfInvalidFormat` | Your phone number is not in a valid format, please correct it and try again.|
+| `UserMessageIfMaxAllowedCodeRetryReached` | Wrong code entered too many times, please try again later.|
+| `UserMessageIfServerError` | Cannot use MFA service, please try again later.|
+| `UserMessageIfThrottled` | Your request has been throttled, please try again later.|
+| `UserMessageIfWrongCodeEntered` |Wrong code entered, please try again.|
### Azure AD MFA example
The following are the IDs for an [Azure AD MFA technical profile](multi-factor-a
## Azure AD SSPR
-The following are the IDs for [Azure AD SSPR technical profile](aad-sspr-technical-profile.md) error messages:
+The following IDs are used for [Azure AD SSPR technical profile](aad-sspr-technical-profile.md) error messages:
| ID | Default value | | | - |
-|UserMessageIfChallengeExpired | The code has expired.|
-|UserMessageIfInternalError | The email service has encountered an internal error, please try again later.|
-|UserMessageIfThrottled | You have sent too many requests, please try again later.|
-|UserMessageIfVerificationFailedNoRetry | You have exceeded maximum number of verification attempts.|
-|UserMessageIfVerificationFailedRetryAllowed | The verification has failed, please try again.|
+|`UserMessageIfChallengeExpired` | The code has expired.|
+|`UserMessageIfInternalError` | The email service has encountered an internal error, please try again later.|
+|`UserMessageIfThrottled` | You have sent too many requests, please try again later.|
+|`UserMessageIfVerificationFailedNoRetry` | You have exceeded maximum number of verification attempts.|
+|`UserMessageIfVerificationFailedRetryAllowed` | The verification has failed, please try again.|
### Azure AD SSPR example
The following are the IDs for [Azure AD SSPR technical profile](aad-sspr-technic
</LocalizedResources> ```
-## One time password error messages
+## One-time password error messages
-The following are the IDs for a [one-time password technical profile](one-time-password-technical-profile.md) error messages
+The following IDs are used for [one-time password technical profile](one-time-password-technical-profile.md) error messages:
| ID | Default value | Description | | | - | -- |
-| UserMessageIfSessionDoesNotExist | No | The message to display to the user if the code verification session has expired. It is either the code has expired or the code has never been generated for a given identifier. |
-| UserMessageIfMaxRetryAttempted | No | The message to display to the user if they've exceeded the maximum allowed verification attempts. |
-| UserMessageIfMaxNumberOfCodeGenerated | No | The message to display to the user if the code generation has exceeded the maximum allowed number of attempts. |
-| UserMessageIfInvalidCode | No | The message to display to the user if they've provided an invalid code. |
-| UserMessageIfVerificationFailedRetryAllowed | No | The message to display to the user if they've provided an invalid code, and user is allowed to provide the correct code. |
-|UserMessageIfSessionConflict|No| The message to display to the user if the code cannot be verified.|
+| `UserMessageIfSessionDoesNotExist` | No | The message to display to the user if the code verification session has expired. Either the code has expired or a code was never generated for the given identifier. |
+| `UserMessageIfMaxRetryAttempted` | No | The message to display to the user if they've exceeded the maximum allowed verification attempts. |
+| `UserMessageIfMaxNumberOfCodeGenerated` | No | The message to display to the user if the code generation has exceeded the maximum allowed number of attempts. |
+| `UserMessageIfInvalidCode` | No | The message to display to the user if they've provided an invalid code. |
+| `UserMessageIfVerificationFailedRetryAllowed` | No | The message to display to the user if they've provided an invalid code and are allowed to provide the correct code. |
+| `UserMessageIfSessionConflict` | No | The message to display to the user if the code cannot be verified.|
### One-time password example
The following are the IDs for a [one-time password technical profile](one-time-p
## Claims transformations error messages
-The following are the IDs for claims transformations error messages:
+The following IDs are used for claims transformations error messages:
| ID | Claims transformation | Default value | | | - |- |
-|UserMessageIfClaimsTransformationBooleanValueIsNotEqual |[AssertBooleanClaimIsEqualToValue](boolean-transformations.md#assertbooleanclaimisequaltovalue) | Boolean claim value comparison failed for claim type "inputClaim".|
-|DateTimeGreaterThan |[AssertDateTimeIsGreaterThan](date-transformations.md#assertdatetimeisgreaterthan) | Claim value comparison failed: The provided left operand is greater than the right operand.|
-|UserMessageIfClaimsTransformationStringsAreNotEqual |[AssertStringClaimsAreEqual](string-transformations.md#assertstringclaimsareequal) | Claim value comparison failed using StringComparison "OrdinalIgnoreCase".|
+| `UserMessageIfClaimsTransformationBooleanValueIsNotEqual` |[AssertBooleanClaimIsEqualToValue](boolean-transformations.md#assertbooleanclaimisequaltovalue) | Boolean claim value comparison failed for claim type "inputClaim".|
+| `DateTimeGreaterThan` |[AssertDateTimeIsGreaterThan](date-transformations.md#assertdatetimeisgreaterthan) | Claim value comparison failed: The provided left operand is greater than the right operand.|
+| `UserMessageIfClaimsTransformationStringsAreNotEqual` |[AssertStringClaimsAreEqual](string-transformations.md#assertstringclaimsareequal) | Claim value comparison failed using StringComparison "OrdinalIgnoreCase".|
### Claims transformations example
active-directory-b2c Partner Biocatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-biocatch.md
document.getElementById("clientSessionId").style.display = 'none';
</TechnicalProfile>
- </RelyingParty>
+ </RelyingParty>
``` ## Integrate with Azure AD B2C
active-directory-b2c Partner Datawiza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-datawiza.md
To integrate your legacy on-premises app with Azure AD B2C, contact [Datawiza](h
## Run DAB with a header-based application 1. You can use either Docker or Kubernetes to run DAB. The docker image is needed for users to create a sample header-based application. See instructions on how to [configure DAB and SSO integration](https://docs.datawiza.com/step-by-step/step3.html) for more details and how to [deploy DAB with Kubernetes](https://docs.datawiza.com/tutorial/web-app-AKS.html) for Kubernetes-specific instructions. A sample `docker-compose.yml` file is provided for you to download and use. Log in to the container registry to download the images of DAB and the header-based application. Follow [these instructions](https://docs.datawiza.com/step-by-step/step3.html#important-step).
-
- ```yaml
- version: '3'
+
+ ```yaml
+ version: '3'
datawiza-access-broker:
To integrate your legacy on-premises app with Azure AD B2C, contact [Datawiza](h
- "3001:3001" ```
- 2. After executing `docker-compose -f docker-compose.yml up`, the header-based application should have SSO enabled with Azure AD B2C. Open a browser and type in `http://localhost:9772/`.
+2. After executing `docker-compose -f docker-compose.yml up`, the header-based application should have SSO enabled with Azure AD B2C. Open a browser and type in `http://localhost:9772/`.
3. An Azure AD B2C login page will show up.
active-directory-b2c Partner Deduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-deduce.md
At this point, the **Deduce RESTful API** has been set up, but it's not yet ava
1. Open the `TrustFrameworkBase.xml` file from the starter pack.
-1. Find and copy the entire contents of the **UserJourneys** element that includes 'Id=SignUpOrSignIn`.
+1. Find and copy the entire contents of the **UserJourneys** element that includes `Id=SignUpOrSignIn`.
1. Open the `TrustFrameworkExtensions.xml` and find the **UserJourneys** element. If the element doesn't exist, add one.
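After this step, `TrustFrameworkExtensions.xml` should contain a user journey shell along these lines (a sketch; the orchestration steps themselves are copied verbatim from the base file).

```xml
<UserJourneys>
  <UserJourney Id="SignUpOrSignIn">
    <OrchestrationSteps>
      <!-- Orchestration steps copied from the SignUpOrSignIn journey in TrustFrameworkBase.xml -->
    </OrchestrationSteps>
  </UserJourney>
</UserJourneys>
```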
active-directory-b2c Partner Xid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md
Now that you have a user journey, add the new identity provider to the user journ
The following XML demonstrates the orchestration steps of a user journey with xID identity provider:
- ```xml
+ ```xml
<UserJourney Id="CombinedSignInAndSignUp"> <OrchestrationSteps>
active-directory-b2c Publish App To Azure Ad App Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/publish-app-to-azure-ad-app-gallery.md
Previously updated : 03/30/2022 Last updated : 09/30/2022
Here are some benefits of adding your Azure AD B2C app to the app gallery:
## Sign in flow overview
-The sign in flow involves the following steps:
+The sign-in flow involves the following steps:
-1. Users go to the [My Apps portal](https://myapps.microsoft.com/) and select your app. The app opens the app sign in URL.
-1. The app sign in URL starts an authorization request and redirects users to the Azure AD B2C authorization endpoint.
+1. Users go to the [My Apps portal](https://myapps.microsoft.com/) and select your app. The app opens the app sign-in URL.
+1. The app sign-in URL starts an authorization request and redirects users to the Azure AD B2C authorization endpoint.
1. Users choose to sign in with their Azure AD "Corporate" account. Azure AD B2C takes them to the Azure AD authorization endpoint, where they sign in with their work account. 1. If the Azure AD SSO session is active, Azure AD issues an access token without prompting users to sign in again. Otherwise, users are prompted to sign in again.
The sign in flow involves the following steps:
Depending on the users' SSO session and Azure AD identity settings, they might be prompted to: - Provide their email address or phone number. - Enter their password or sign in with the [Microsoft authenticator app](https://www.microsoft.com/p/microsoft-authenticator/9nblgggzmcj6). - Complete multifactor authentication. - Accept the consent page. Your customer's tenant administrator can [grant tenant-wide admin consent to an app](../active-directory/manage-apps/grant-admin-consent.md). When consent is granted, the consent page won't be presented to users.
-Upon successful sign in, Azure AD returns a token to Azure AD B2C. Azure AD B2C validates and reads the token claims, and then returns a token to your application.
+Upon successful sign-in, Azure AD returns a token to Azure AD B2C. Azure AD B2C validates and reads the token claims, and then returns a token to your application.
## Prerequisites
To enable sign in to your app with Azure AD B2C, register your app in the Azure
If you haven't already done so, [register a web application](tutorial-register-applications.md). Later, you'll register this app with the Azure app gallery.
-## Step 2: Set up sign in for multitenant Azure AD
+## Step 2: Set up sign-in for multitenant Azure AD
To allow employees and consumers from any Azure AD tenant to sign in by using Azure AD B2C, follow the guidance for [setting up sign in for multitenant Azure AD](identity-provider-azure-ad-multi-tenant.md?pivots=b2c-custom-policy). ## Step 3: Prepare your app
-In your app, copy the URL of the sign in endpoint. If you use the [web application sample](configure-authentication-sample-web-app.md), the sign in URL is `https://localhost:5001/MicrosoftIdentity/Account/SignIn?`. This URL is where the Azure AD app gallery takes users to sign in to your app.
+In your app, copy the URL of the sign-in endpoint. If you use the [web application sample](configure-authentication-sample-web-app.md), the sign-in URL is `https://localhost:5001/MicrosoftIdentity/Account/SignIn?`. This URL is where the Azure AD app gallery takes users to sign in to your app.
In production environments, the app registration redirect URI is ordinarily a publicly accessible endpoint where your app is running, such as `https://woodgrovedemo.com/Account/SignIn`. The reply URL must begin with `https`. ## Step 4: Publish your Azure AD B2C app
-Finally, add the multitenant app to the Azure AD app gallery. Follow the instructions in [Publish your app to the Azure AD app gallery](../active-directory/manage-apps/v2-howto-app-gallery-listing.md). To add your app to the app gallery, do the following:
+Finally, add the multitenant app to the Azure AD app gallery. Follow the instructions in [Publish your app to the Azure AD app gallery](../active-directory/manage-apps/v2-howto-app-gallery-listing.md). To add your app to the app gallery, use the following steps:
1. [Create and publish documentation](../active-directory/manage-apps/v2-howto-app-gallery-listing.md#create-and-publish-documentation). 1. [Submit your app](../active-directory/manage-apps/v2-howto-app-gallery-listing.md#submit-your-application) with the following information:
Finally, add the multitenant app to the Azure AD app gallery. Follow the instruc
|What feature would you like to enable when listing your application in the gallery? | Select **Federated SSO (SAML, WS-Fed & OpenID Connect)**. | | Select your application federation protocol| Select **OpenID Connect & OAuth 2.0**. | | Application (Client) ID | Provide the ID of [your Azure AD B2C application](#step-1-register-your-application-in-azure-ad-b2c). |
- | Application sign in URL|Provide the app sign in URL as it's configured in [Step 3. Prepare your app](#step-3-prepare-your-app).|
+ | Application sign in URL|Provide the app sign-in URL as it's configured in [Step 3. Prepare your app](#step-3-prepare-your-app).|
| Multitenant| Select **Yes**. |
- | | |
## Next steps -- Learn how to [Publish your app to the Azure AD app gallery](../active-directory/manage-apps/v2-howto-app-gallery-listing.md).
+- Learn how to [Publish your Azure AD app to the Azure AD app gallery](../active-directory/manage-apps/v2-howto-app-gallery-listing.md).
active-directory-b2c Register Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/register-apps.md
+
+ Title: Register apps in Azure Active Directory B2C
+
+description: Learn how to register different app types such as web apps, web APIs, single-page apps, mobile and desktop apps, daemon apps, Microsoft Graph apps, and SAML apps in Azure Active Directory B2C
+ Last updated : 09/30/2022
+# Register apps in Azure Active Directory B2C
+
+Before your [applications](application-types.md) can interact with Azure Active Directory B2C (Azure AD B2C), you must register them in a tenant that you manage.
+
+Azure AD B2C supports authentication for various modern application architectures. The interaction of every application type with Azure AD B2C is different. Hence, you need to specify the type of app that you want to register.
++
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+- If you haven't already created your own [Azure AD B2C Tenant](tutorial-create-tenant.md), create one now. You can use an existing Azure AD B2C tenant.
++
+## Select an app type to register
+
+You can register different app types in your Azure AD B2C Tenant. The how-to guides below show you how to register and configure the various app types:
++
+- [Single-page application (SPA)](tutorial-register-spa.md)
+- [Web application](tutorial-register-applications.md)
+- [Native client (for mobile and desktop)](add-native-application.md)
+- [Web API](add-web-api-application.md)
+- [Daemon apps](client-credentials-grant-flow.md)
+- [Microsoft Graph application](microsoft-graph-get-started.md)
+- [SAML application](saml-service-provider.md?tabs=windows&pivots=b2c-custom-policy)
+- [Publish app in Azure AD app gallery](publish-app-to-azure-ad-app-gallery.md)
+
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
Below are sample requests to help outline what the sync engine currently sends v
"value": "False" } ]
-}
+ }
``` **With feature flag**
Below are sample requests to help outline what the sync engine currently sends v
"value": false } ]
-}
+ }
``` **Requests made to add a single-value string attribute:**
active-directory Application Proxy Configure Complex Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-complex-application.md
If successful, this method returns a `204 No Content` response code and does not
##### Request Here is an example of the request. - ```http PATCH https://graph.microsoft.com/beta/applications/{<object-id-of--the-complex-app-under-APP-Registrations} Content-type: application/json
active-directory Application Proxy Register Connector Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-register-connector-powershell.md
There are two methods you can use to register the connector:
class Program {
- #region constants
- /// <summary>
- /// The AAD authentication endpoint uri
- /// </summary>
- static readonly string AadAuthenticationEndpoint = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize";
-
- /// <summary>
- /// The application ID of the connector in AAD
- /// </summary>
- static readonly string ConnectorAppId = "55747057-9b5d-4bd4-b387-abf52a8bd489";
-
- /// <summary>
- /// The AppIdUri of the registration service in AAD
- /// </summary>
- static readonly string RegistrationServiceAppIdUri = "https://proxy.cloudwebappproxy.net/registerapp/user_impersonation";
-
- #endregion
-
- #region private members
- private string token;
- private string tenantID;
- #endregion
-
- public void GetAuthenticationToken()
- {
-
- IPublicClientApplication clientApp = PublicClientApplicationBuilder
- .Create(ConnectorAppId)
- .WithDefaultRedirectUri() // will automatically use the default Uri for native app
- .WithAuthority(AadAuthenticationEndpoint)
- .Build();
-
- AuthenticationResult authResult = null;
-
- IAccount account = null;
-
- IEnumerable<string> scopes = new string[] { RegistrationServiceAppIdUri };
-
- try
- {
- authResult = await clientApp.AcquireTokenSilent(scopes, account).ExecuteAsync();
- }
- catch (MsalUiRequiredException ex)
- {
- authResult = await clientApp.AcquireTokenInteractive(scopes).ExecuteAsync();
- }
--
- if (authResult == null || string.IsNullOrEmpty(authResult.AccessToken) || string.IsNullOrEmpty(authResult.TenantId))
+ #region constants
+ /// <summary>
+ /// The AAD authentication endpoint uri
+ /// </summary>
+ static readonly string AadAuthenticationEndpoint = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize";
+
+ /// <summary>
+ /// The application ID of the connector in AAD
+ /// </summary>
+ static readonly string ConnectorAppId = "55747057-9b5d-4bd4-b387-abf52a8bd489";
+
+ /// <summary>
+ /// The AppIdUri of the registration service in AAD
+ /// </summary>
+ static readonly string RegistrationServiceAppIdUri = "https://proxy.cloudwebappproxy.net/registerapp/user_impersonation";
+
+ #endregion
+
+ #region private members
+ private string token;
+ private string tenantID;
+ #endregion
+
+ public void GetAuthenticationToken()
{
- Trace.TraceError("Authentication result, token or tenant id returned are null");
- throw new InvalidOperationException("Authentication result, token or tenant id returned are null");
+ IPublicClientApplication clientApp = PublicClientApplicationBuilder
+ .Create(ConnectorAppId)
+ .WithDefaultRedirectUri() // will automatically use the default Uri for native app
+ .WithAuthority(AadAuthenticationEndpoint)
+ .Build();
+
+ AuthenticationResult authResult = null;
+
+ IAccount account = null;
+
+ IEnumerable<string> scopes = new string[] { RegistrationServiceAppIdUri };
+
+ try
+ {
+ authResult = await clientApp.AcquireTokenSilent(scopes, account).ExecuteAsync();
+ }
+ catch (MsalUiRequiredException ex)
+ {
+ authResult = await clientApp.AcquireTokenInteractive(scopes).ExecuteAsync();
+ }
+
+ if (authResult == null || string.IsNullOrEmpty(authResult.AccessToken) || string.IsNullOrEmpty(authResult.TenantId))
+ {
+ Trace.TraceError("Authentication result, token or tenant id returned are null");
+ throw new InvalidOperationException("Authentication result, token or tenant id returned are null");
+ }
+
+ token = authResult.AccessToken;
+ tenantID = authResult.TenantId;
}-
- token = authResult.AccessToken;
- tenantID = authResult.TenantId;
- }
- ```
+ }
+ ```
**Using PowerShell:** ```powershell # Load MSAL (Tested with version 4.7.1)
- Add-Type -Path "..\MSAL\Microsoft.Identity.Client.dll"
-
+ Add-Type -Path "..\MSAL\Microsoft.Identity.Client.dll"
+ # The AAD authentication endpoint uri
-
+ $authority = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize" #The application ID of the connector in AAD
There are two methods you can use to register the connector:
#The AppIdUri of the registration service in AAD $registrationServiceAppIdUri = "https://proxy.cloudwebappproxy.net/registerapp/user_impersonation"
- # Define the resources and scopes you want to call
+ # Define the resources and scopes you want to call
$scopes = New-Object System.Collections.ObjectModel.Collection["string"]
There are two methods you can use to register the connector:
[Microsoft.Identity.Client.IAccount] $account = $null
- # Acquiring the token
+ # Acquiring the token
$authResult = $null
There are two methods you can use to register the connector:
# Check AuthN result If (($authResult) -and ($authResult.AccessToken) -and ($authResult.TenantId)) {
-
- $token = $authResult.AccessToken
- $tenantId = $authResult.TenantId
- Write-Output "Success: Authentication result returned."
-
+ $token = $authResult.AccessToken
+ $tenantId = $authResult.TenantId
+
+ Write-Output "Success: Authentication result returned."
} Else {
-
- Write-Output "Error: Authentication result, token or tenant id returned with null."
-
+
+ Write-Output "Error: Authentication result, token or tenant id returned with null."
+ } ```
There are two methods you can use to register the connector:
## Next steps * [Publish applications using your own domain name](application-proxy-configure-custom-domain.md) * [Enable single-sign on](application-proxy-configure-single-sign-on-with-kcd.md)
-* [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)
+* [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)
active-directory Concept Authentication Default Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md
Previously updated : 10/19/2022 Last updated : 10/26/2022
Azure Active Directory (Azure AD) adds and improves security features to better protect customers against increasing attacks. As new attack vectors become known, Azure AD may respond by enabling protection by default to help customers stay ahead of emerging security threats.
-For example, in response to increasing MFA fatigue attacks, Microsoft recommended ways for customers to [defend users](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/defend-your-users-from-mfa-fatigue-attacks/ba-p/2365677). One recommendation to prevent users from accidental multifactor authentication (MFA) approvals is to enable [number matching](how-to-mfa-number-match.md). As a result, default behavior for number matching will be explicitly **Enabled** for all Microsoft Authenticator users.
+For example, in response to increasing MFA fatigue attacks, Microsoft recommended ways for customers to [defend users](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/defend-your-users-from-mfa-fatigue-attacks/ba-p/2365677). One recommendation to prevent users from accidental multifactor authentication (MFA) approvals is to enable [number matching](how-to-mfa-number-match.md). As a result, default behavior for number matching will be explicitly **Enabled** for all Microsoft Authenticator users. You can learn more about new security features like number matching in our blog post [Advanced Microsoft Authenticator security features are now generally available!](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/advanced-microsoft-authenticator-security-features-are-now/ba-p/2365673).
There are two ways for protection of a security feature to be enabled by default:
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
To enable CBA and configure username bindings using Graph API, complete the foll
#### Request body: -
- ```http
+ ```http
PATCH https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/x509Certificate Content-Type: application/json
To enable CBA and configure username bindings using Graph API, complete the foll
} ] }
+ ```
1. You'll get a `204 No content` response code. Re-run the GET request to make sure the policies are updated correctly. 1. Test the configuration by signing in with a certificate that satisfies the policy.
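The previous step mentions re-running the GET request to confirm the update. As a minimal sketch (assuming you're already connected with `Connect-MgGraph` using an account and a permission that can read the authentication methods policy, for example `Policy.ReadWrite.AuthenticationMethod`), the check could look like this:

```powershell
# Sketch: read back the x509Certificate authentication method configuration
# to confirm the PATCH above was applied. The URI matches the endpoint used in the PATCH request.
$uri = "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/x509Certificate"
Invoke-MgGraphRequest -Method GET -Uri $uri -OutputType PSObject
```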
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
The Office 365 suite makes it possible to target these services all at once. We
Targeting this group of applications helps to avoid issues that may arise because of inconsistent policies and dependencies. For example: The Exchange Online app is tied to traditional Exchange Online data like mail, calendar, and contact information. Related metadata may be exposed through different resources like search. To ensure that all metadata is protected as intended, administrators should assign policies to the Office 365 app.
-Administrators can exclude the entire Office 365 suite or specific Office 365 client apps from the Conditional Access policy.
+Administrators can exclude the entire Office 365 suite or specific Office 365 cloud apps from the Conditional Access policy.
-The following key applications are included in the Office 365 client app:
+The following key applications are affected by the Office 365 cloud app:
- Exchange Online - Microsoft 365 Search Service
active-directory Howto Hybrid Azure Ad Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-hybrid-azure-ad-join.md
Previously updated : 04/06/2022 Last updated : 10/26/2022
Hybrid Azure AD join requires devices to have access to the following Microsoft
- Your organization's Security Token Service (STS) (**For federated domains**) > [!WARNING]
-> If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or Azure AD tenant restrictions, ensure that traffic to these URLs are excluded from TLS break-and-inspect. Failure to exclude these URLs may cause interference with client certificate authentication, cause issues with device registration, and device-based Conditional Access.
+> If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or Azure AD tenant restrictions, ensure that traffic to `https://devices.login.microsoftonline.com` is excluded from TLS break-and-inspect. Failure to exclude this URL may cause interference with client certificate authentication, cause issues with device registration, and device-based Conditional Access.
If your organization requires access to the internet via an outbound proxy, you can use [Web Proxy Auto-Discovery (WPAD)](/previous-versions/tn-archive/cc995261(v=technet.10)) to enable Windows 10 or newer computers for device registration with Azure AD. To address issues configuring and managing WPAD, see [Troubleshooting Automatic Detection](/previous-versions/tn-archive/cc302643(v=technet.10)).
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
To enable security defaults in your directory:
### Require all users to register for Azure AD Multi-Factor Authentication
-All users in your tenant must register for multifactor authentication (MFA) in the form of the Azure AD Multi-Factor Authentication. Users have 14 days to register for Azure AD Multi-Factor Authentication by using the Microsoft Authenticator app. After the 14 days have passed, the user can't sign in until registration is completed. A user's 14-day period begins after their first successful interactive sign-in after enabling security defaults.
+All users in your tenant must register for multifactor authentication (MFA) in the form of Azure AD Multi-Factor Authentication. Users have 14 days to register for Azure AD Multi-Factor Authentication by using the [Microsoft Authenticator app](../authentication/concept-authentication-authenticator-app.md) or any app supporting [OATH TOTP](../authentication/concept-authentication-oath-tokens.md). After the 14 days have passed, the user can't sign in until registration is completed. A user's 14-day period begins after their first successful interactive sign-in after enabling security defaults.
### Require administrators to do multifactor authentication
This policy applies to all users who are accessing Azure Resource Manager servic
### Authentication methods
-Security defaults users are required to register for and use Azure AD Multi-Factor Authentication **using the Microsoft Authenticator app using notifications**. Users may use verification codes from the Microsoft Authenticator app but can only register using the notification option.
+Security defaults users are required to register for and use Azure AD Multi-Factor Authentication using the [Microsoft Authenticator app using notifications](../authentication/concept-authentication-authenticator-app.md). Users may use verification codes from the Microsoft Authenticator app but can only register using the notification option. Users can also use any third party application using [OATH TOTP](../authentication/concept-authentication-oath-tokens.md) to generate codes.
> [!WARNING] > Do not disable methods for your organization if you are using security defaults. Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the [MFA service settings portal](../authentication/howto-mfa-getstarted.md#choose-authentication-methods-for-mfa).
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
If you are reviewing access to an application, then before creating the review,
1. In the **Enable review decision helpers** section choose whether you want your reviewer to receive recommendations during the review process: 1. If you select **No sign-in within 30 days**, users who have signed in during the previous 30-day period are recommended for approval. Users who haven't signed in during the past 30 days are recommended for denial. This 30-day interval is irrespective of whether the sign-ins were interactive or not. The last sign-in date for the specified user will also display along with the recommendation.
- 1. If you select User-to-Group Affiliation, reviewers will get the recommendation to Approve or Deny access for the users based on userΓÇÖs average distance in the organizationΓÇÖs reporting-structure. Users who are very distant from all the other users within the group are considered to have "low affiliation" and will get a deny recommendation in the group access reviews.
+ 1. If you select **(Preview) User-to-Group Affiliation**, reviewers will get the recommendation to Approve or Deny access for the users based on the user's average distance in the organization's reporting structure. Users who are very distant from all the other users within the group are considered to have "low affiliation" and will get a deny recommendation in the group access reviews.
> [!NOTE] > If you create an access review based on applications, your recommendations are based on the 30-day interval period depending on when the user last signed in to the application rather than the tenant.
active-directory Identity Governance Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-automation.md
To generate a self-signed certificate,
```powershell $cert | ft Thumbprint
+ ```
1. After you have exported the files, you can remove the certificate and key pair from your local user certificate store. In subsequent steps you will remove the `.pfx` and `.crt` files as well, once the certificate and private key have been uploaded to the Azure Automation and Azure AD services.
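As a minimal sketch of that cleanup (assuming the certificate object is still available in `$cert` from the earlier commands, and that the exported file names below are placeholders for the names you chose):

```powershell
# Sketch: remove the certificate and key pair from the current user's certificate store.
Get-ChildItem -Path Cert:\CurrentUser\My |
    Where-Object { $_.Thumbprint -eq $cert.Thumbprint } |
    Remove-Item

# Remove the exported files once they've been uploaded to Azure Automation and Azure AD.
# The file names are placeholders; use the names you exported to.
Remove-Item -Path .\AzureAutomation.pfx, .\AzureAutomation.crt
```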
Next, you will create an app registration in Azure AD, so that Azure AD will rec
1. Select each of the permissions that your Azure Automation account will require, then select **Add permissions**.
- * If your runbook is only performing queries or updates within a single catalog, then you do not need to assign it tenant-wide application permissions; instead you can assign the service principal to the catalog's **Catalog owner** or **Catalog reader** role.
- * If your runbook is only performing queries for entitlement management, then it can use the **EntitlementManagement.Read.All** permission.
- * If your runbook is making changes to entitlement management, for example to create assignments across multiple catalogs, then use the **EntitlementManagement.ReadWrite.All** permission.
- * For other APIs, ensure that the necessary permission is added. For example, for identity protection, the **IdentityRiskyUser.Read.All** permission should be added.
+ * If your runbook is only performing queries or updates within a single catalog, then you do not need to assign it tenant-wide application permissions; instead you can assign the service principal to the catalog's **Catalog owner** or **Catalog reader** role.
+ * If your runbook is only performing queries for entitlement management, then it can use the **EntitlementManagement.Read.All** permission.
+ * If your runbook is making changes to entitlement management, for example to create assignments across multiple catalogs, then use the **EntitlementManagement.ReadWrite.All** permission.
+ * For other APIs, ensure that the necessary permission is added. For example, for identity protection, the **IdentityRiskyUser.Read.All** permission should be added.
-10. Select **Grant admin permissions** to give your app those permissions.
+1. Select **Grant admin permissions** to give your app those permissions.
## Create Azure Automation variables
Import-Module Microsoft.Graph.Authentication
$ClientId = Get-AutomationVariable -Name 'ClientId' $TenantId = Get-AutomationVariable -Name 'TenantId' $Thumbprint = Get-AutomationVariable -Name 'Thumbprint'
-Connect-MgGraph -clientId $ClientId -tenantid $TenantId -certificatethumbprint $Thumbprint
+Connect-MgGraph -clientId $ClientId -tenantId $TenantId -certificatethumbprint $Thumbprint
``` 5. Select **Test pane**, and select **Start**. Wait a few seconds for the Azure Automation processing of your runbook script to complete.
You can also add input parameters to your runbook, by adding a `Param` section a
```powershell Param (
-  [String]$AccessPackageAssignmentId
+ [String] $AccessPackageAssignmentId
) ```
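With a `Param` section in place, a value can be passed when the runbook is started. A hedged sketch using the Az.Automation module (the Automation account, resource group, runbook name, and GUID below are placeholders):

```powershell
# Sketch: start the runbook and supply the AccessPackageAssignmentId parameter.
Start-AzAutomationRunbook -AutomationAccountName "ContosoAutomation" `
    -ResourceGroupName "ContosoRG" `
    -Name "MyRunbook" `
    -Parameters @{ AccessPackageAssignmentId = "00000000-0000-0000-0000-000000000000" }
```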
There are two places where you can see the expiration date in the Azure portal.
## Next steps -- [Create an Automation account using the Azure portal](../../automation/quickstarts/create-azure-automation-account-portal.md)
+- [Create an Automation account using the Azure portal](../../automation/quickstarts/create-azure-automation-account-portal.md)
active-directory Review Recommendations Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/review-recommendations-access-reviews.md
na Previously updated : 8/5/2022 Last updated : 10/25/2022
For more information, see [License requirements](access-reviews-overview.md#lice
## Inactive user recommendations A user is considered 'inactive' if they have not signed into the tenant within the last 30 days. This behavior is adjusted for reviews of application assignments, which checks each user's last activity in the app as opposed to the entire tenant. When inactive user recommendations are enabled for an access review, the last sign-in date for each user will be evaluated once the review starts, and any user that has not signed-in within 30 days will be given a recommended action of Deny. Additionally, when these decision helpers are enabled, reviewers will be able to see the last sign-in date for all users being reviewed. This sign-in date (as well as the resulting recommendation) is determined when the review begins and will not get updated while the review is in-progress.
+## User-to-Group Affiliation (preview)
+Making the review experience easier and more accurate empowers IT admins and reviewers to make more informed decisions. This machine learning based recommendation is a step toward automating access reviews, which enables intelligent automation and reduces access rights attestation fatigue.
+
+User-to-Group Affiliation in an organization's chart is defined as two or more users who share similar characteristics in an organization's reporting structure.
+
+This recommendation detects user affiliation with other users within the group, based on the organization's reporting-structure similarity. The recommendation relies on a scoring mechanism that computes the user's average distance from the remaining users in the group. Users who are very distant from all the other group members, based on the organization's chart, are considered to have "low affiliation" within the group.
+
+If this decision helper is enabled by the creator of the access review, reviewers can receive User-to-Group Affiliation recommendations for group access reviews.
+
+> [!NOTE]
+> This feature is only available for users in your directory. A user must have a manager attribute and must be part of an organizational hierarchy for User-to-Group Affiliation to work.
+
+The following image has an example of an organization's reporting structure in a cosmetics company:
+
+![Screenshot that shows a fictitious hierarchical organization chart for a cosmetics company.](./media/review-recommendations-group-access-reviews/org-chart-example.png)
+
+Based on the reporting structure in the example image, users who are a statistically significant distance away from the other users within the group would get a "Deny" recommendation from the system if the User-to-Group Affiliation recommendation was enabled for the group access review.
+
+For example, Phil, who works within the Personal care division, is in a group with Debby, Irwin, and Emily, who all work within the Cosmetics division. The group is called *Fresh Skin*. If an access review for the group Fresh Skin is performed, Phil would be considered to have low affiliation based on the reporting structure and his distance from the other group members. The system will create a **Deny** recommendation in the group access review.
+ ## Next Steps - [Create an access review](create-access-review.md)-- [Review access to groups or applications](perform-access-review.md)-
+- [Review access to groups or applications](perform-access-review.md)
active-directory Workflows Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/workflows-faqs.md
Yes, key user properties like employeeHireDate and employeeType are supported fo
### How do I see more details and parameters of tasks and the attributes that are being updated?
-Some tasks do update existing attributes; however, we donΓÇÖt currently share those specific details. As these tasks are updating attributes related to other Azure AD features, so you can find that info in those docs. For temporary access pass, weΓÇÖre writing to the appropriate attributes listed [here](/graph/api/resources/temporaryaccesspassauthenticationmethod).
+Some tasks do update existing attributes; however, we don't currently share those specific details. Because these tasks update attributes related to other Azure AD features, you can find that information in those docs. For temporary access pass, we're writing to the appropriate attributes listed [here](/graph/api/resources/temporaryaccesspassauthenticationmethod).
### Is it possible for me to create new tasks and how? For example, triggering other graph APIs/web hooks?
active-directory Reference Connect Adsync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adsync.md
The following documentation provides reference information for the ADSync.psm1 P
This cmdlet resets the password for the service account and updates it both in Azure AD and in the sync engine. ### SYNTAX+ #### byIdentifier
- ```
+ ```powershell
Add-ADSyncADDSConnectorAccount [-Identifier] <Guid> [-EACredential <PSCredential>] [<CommonParameters>]
- ```
+ ```
#### byName
- ```
+ ```powershell
Add-ADSyncADDSConnectorAccount [-Name] <String> [-EACredential <PSCredential>] [<CommonParameters>]
- ```
+ ```
### DESCRIPTION This cmdlet resets the password for the service account and updates it both in Azure AD and in the sync engine.
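Based on the syntax above, a typical invocation looks something like the following sketch (the connector name is a placeholder for your AD DS connector name, and you're prompted for an enterprise admin credential):

```powershell
# Sketch: reset the AD DS connector account password for the connector named "contoso.com".
$eaCredential = Get-Credential
Add-ADSyncADDSConnectorAccount -Name "contoso.com" -EACredential $eaCredential
```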
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
- Disable-ADSyncExportDeletionThreshold [[-AADCredential] <PSCredential>] [-WhatIf] [-Confirm]
+ ```powershell
+ Disable-ADSyncExportDeletionThreshold [[-AADCredential] <PSCredential>] [-WhatIf] [-Confirm]
[<CommonParameters>]
- ```
+ ```
### DESCRIPTION Disables feature for deletion threshold at Export stage.
The following documentation provides reference information for the ADSync.psm1 P
### EXAMPLES #### Example 1
- ```powershell
+ ```powershell
PS C:\> Disable-ADSyncExportDeletionThreshold -AADCredential $aadCreds
- ```
+ ```
Uses the provided AAD Credentials to disable the feature for export deletion threshold.
The following documentation provides reference information for the ADSync.psm1 P
#### -AADCredential The AAD credential.
- ```yaml
+ ```yaml
Type: PSCredential Parameter Sets: (All) Aliases:
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Enable-ADSyncExportDeletionThreshold [-DeletionThreshold] <UInt32> [[-AADCredential] <PSCredential>] [-WhatIf] [-Confirm] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncAutoUpgrade [-Detail] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### SearchByIdentifier
- ```
+ ```powershell
Get-ADSyncCSObject [-Identifier] <Guid> [<CommonParameters>] ``` #### SearchByConnectorIdentifierDistinguishedName
- ```
+ ```powershell
Get-ADSyncCSObject [-ConnectorIdentifier] <Guid> [-DistinguishedName] <String> [-SkipDNValidation] [-Transient] [<CommonParameters>] ``` #### SearchByConnectorIdentifier
- ```
+ ```powershell
Get-ADSyncCSObject [-ConnectorIdentifier] <Guid> [-Transient] [-StartIndex <Int32>] [-MaxResultCount <Int32>] [<CommonParameters>] ``` #### SearchByConnectorNameDistinguishedName
- ```
+ ```powershell
Get-ADSyncCSObject [-ConnectorName] <String> [-DistinguishedName] <String> [-SkipDNValidation] [-Transient] [<CommonParameters>] ``` #### SearchByConnectorName
- ```
+ ```powershell
Get-ADSyncCSObject [-ConnectorName] <String> [-Transient] [-StartIndex <Int32>] [-MaxResultCount <Int32>] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncCSObjectLog [-Identifier] <Guid> [-Count] <UInt32> [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncDatabaseConfiguration [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncExportDeletionThreshold [[-AADCredential] <PSCredential>] [-WhatIf] [-Confirm] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncMVObject -Identifier <Guid> [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncRunProfileResult [-RunHistoryId <Guid>] [-ConnectorId <Guid>] [-RunProfileId <Guid>] [-RunNumber <Int32>] [-NumberRequested <Int32>] [-RunStepDetails] [-StepNumber <Int32>] [-WhatIf] [-Confirm] [<CommonParameters>]
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncRunStepResult [-RunHistoryId <Guid>] [-StepHistoryId <Guid>] [-First] [-StepNumber <Int32>] [-WhatIf] [-Confirm] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncScheduler [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Get-ADSyncSchedulerConnectorOverride [-ConnectorIdentifier <Guid>] [-ConnectorName <String>] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### SearchByDistinguishedName
- ```
+ ```powershell
Invoke-ADSyncCSObjectPasswordHashSync [-ConnectorName] <String> [-DistinguishedName] <String> [<CommonParameters>] ``` #### SearchByIdentifier
- ```
+ ```powershell
Invoke-ADSyncCSObjectPasswordHashSync [-Identifier] <Guid> [<CommonParameters>] ``` #### CSObject
- ```
+ ```powershell
Invoke-ADSyncCSObjectPasswordHashSync [-CsObject] <CsObject> [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### ConnectorName
- ```
+ ```powershell
Invoke-ADSyncRunProfile -ConnectorName <String> -RunProfileName <String> [-Resume] [<CommonParameters>] ``` #### ConnectorIdentifier
- ```
+ ```powershell
Invoke-ADSyncRunProfile -ConnectorIdentifier <Guid> -RunProfileName <String> [-Resume] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### ServiceAccount
- ```
+ ```powershell
Remove-ADSyncAADServiceAccount [-AADCredential] <PSCredential> [-Name] <String> [-WhatIf] [-Confirm] [<CommonParameters>] ``` #### ServicePrincipal
- ```
+ ```powershell
Remove-ADSyncAADServiceAccount [-ServicePrincipal] [-WhatIf] [-Confirm] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Set-ADSyncAutoUpgrade [-AutoUpgradeState] <AutoUpgradeConfigurationState> [[-SuspensionReason] <String>] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Set-ADSyncScheduler [[-CustomizedSyncCycleInterval] <TimeSpan>] [[-SyncCycleEnabled] <Boolean>] [[-NextSyncCyclePolicyType] <SynchronizationPolicyType>] [[-PurgeRunHistoryInterval] <TimeSpan>] [[-MaintenanceEnabled] <Boolean>] [[-SchedulerSuspended] <Boolean>] [-Force] [<CommonParameters>]
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### ConnectorIdentifier
- ```
+ ```powershell
Set-ADSyncSchedulerConnectorOverride -ConnectorIdentifier <Guid> [-FullImportRequired <Boolean>] [-FullSyncRequired <Boolean>] [<CommonParameters>] ``` #### ConnectorName
- ```
+ ```powershell
Set-ADSyncSchedulerConnectorOverride -ConnectorName <String> [-FullImportRequired <Boolean>] [-FullSyncRequired <Boolean>] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### online
- ```
+ ```powershell
Start-ADSyncPurgeRunHistory [[-PurgeRunHistoryInterval] <TimeSpan>] [<CommonParameters>] ``` #### offline
- ```
+ ```powershell
Start-ADSyncPurgeRunHistory [-Offline] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Start-ADSyncSyncCycle [[-PolicyType] <SynchronizationPolicyType>] [[-InteractiveMode] <Boolean>] [<CommonParameters>] ```
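As a quick usage sketch based on the syntax above, a delta synchronization cycle can be requested like this:

```powershell
# Sketch: request a delta synchronization cycle.
Start-ADSyncSyncCycle -PolicyType Delta
```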
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Stop-ADSyncRunProfile [[-ConnectorName] <String>] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Stop-ADSyncSyncCycle [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### ConnectorName_ObjectDN
- ```
+ ```powershell
Sync-ADSyncCSObject -ConnectorName <String> -DistinguishedName <String> [-Commit] [<CommonParameters>] ``` #### ConnectorIdentifier_ObjectDN
- ```
+ ```powershell
Sync-ADSyncCSObject -ConnectorIdentifier <Guid> -DistinguishedName <String> [-Commit] [<CommonParameters>] ``` #### ObjectIdentifier
- ```
+ ```powershell
Sync-ADSyncCSObject -Identifier <Guid> [-Commit] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX #### ByEnvironment
- ```
+ ```powershell
Test-AdSyncAzureServiceConnectivity [-AzureEnvironment] <Identifier> [[-Service] <AzureService>] [-CurrentUser] [<CommonParameters>] ``` #### ByTenantName
- ```
+ ```powershell
Test-AdSyncAzureServiceConnectivity [-Domain] <String> [[-Service] <AzureService>] [-CurrentUser] [<CommonParameters>] ```
The following documentation provides reference information for the ADSync.psm1 P
### SYNTAX
- ```
+ ```powershell
Test-AdSyncUserHasPermissions [-ForestFqdn] <String> [-AdConnectorId] <Guid> [-AdConnectorCredential] <PSCredential> [-BaseDn] <String> [-PropertyType] <String> [-PropertyValue] <String> [-WhatIf] [-Confirm] [<CommonParameters>]
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
To assign users to an app using PowerShell, you need:
# Assign the user to the app role New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId -PrincipalId $user.ObjectId -ResourceId $sp.ObjectId -Id $appRole.Id
+ ```
To assign a group to an enterprise app, you must replace `Get-AzureADUser` with `Get-AzureADGroup` and replace `New-AzureADUserAppRoleAssignment` with `New-AzureADGroupAppRoleAssignment`.
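A minimal sketch of that group variant (the group display name is a placeholder, and `$sp` and `$appRole` are assumed to have been retrieved as shown in the user example):

```powershell
# Sketch: assign a group, rather than a user, to the app role.
$group = Get-AzureADGroup -Filter "displayName eq 'Sales Team'"
New-AzureADGroupAppRoleAssignment -ObjectId $group.ObjectId -PrincipalId $group.ObjectId -ResourceId $sp.ObjectId -Id $appRole.Id
```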
This example assigns the user Britta Simon to the Microsoft Workplace Analytics
# Assign the values to the variables $username = "britta.simon@contoso.com" $app_name = "Workplace Analytics"
+ ```
1. In this example, we don't know the exact name of the application role we want to assign to Britta Simon. Run the following commands to get the user ($user) and the service principal ($sp) using the user UPN and the service principal display names.
This example assigns the user Britta Simon to the Microsoft Workplace Analytics
# Get the user to assign, and the service principal for the app to assign to $user = Get-AzureADUser -ObjectId "$username" $sp = Get-AzureADServicePrincipal -Filter "displayName eq '$app_name'"
+ ```
1. Run the command `$sp.AppRoles` to display the roles available for the Workplace Analytics application. In this example, we want to assign Britta Simon the Analyst (Limited access) Role. ![Shows the roles available to a user using Workplace Analytics Role](./media/assign-user-or-group-access-portal/workplace-analytics-role.png)
This example assigns the user Britta Simon to the Microsoft Workplace Analytics
# Assign the values to the variables $app_role_name = "Analyst (Limited access)" $appRole = $sp.AppRoles | Where-Object { $_.DisplayName -eq $app_role_name }
+ ```
1. Run the following command to assign the user to the app role:
This example assigns the user Britta Simon to the Microsoft Workplace Analytics
## Remove all users who are assigned to the application
- ```powershell
-
- #Retrieve the service principal object ID.
- $app_name = "<Your App's display name>"
- $sp = Get-AzureADServicePrincipal -Filter "displayName eq '$app_name'"
- $sp.ObjectId
+```powershell
+#Retrieve the service principal object ID.
+$app_name = "<Your App's display name>"
+$sp = Get-AzureADServicePrincipal -Filter "displayName eq '$app_name'"
+$sp.ObjectId
# Get Service Principal using objectId $sp = Get-AzureADServicePrincipal -ObjectId "<ServicePrincipal objectID>"
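# A possible continuation (sketch): enumerate the app role assignments on the service
# principal and remove each one. The cmdlet names and parameters below are from the
# AzureAD module; verify them against your module version before running.
$assignments = Get-AzureADServiceAppRoleAssignment -ObjectId $sp.ObjectId -All $true

$assignments | ForEach-Object {
    if ($_.PrincipalType -eq "User") {
        Remove-AzureADUserAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.ObjectId
    } elseif ($_.PrincipalType -eq "Group") {
        Remove-AzureADGroupAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.ObjectId
    }
}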
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md
To delete an enterprise application, you need:
1. Get the list of enterprise applications in your tenant. ```powershell
- Get-MgServicePrincipal
+ Get-MgServicePrincipal
```+ 1. Record the object ID of the enterprise app you want to delete.+ 1. Delete the enterprise application. ```powershell Remove-MgServicePrincipal -ServicePrincipalId 'd4142c52-179b-4d31-b5b9-08940873507b'
+ ```
:::zone-end - :::zone pivot="ms-graph" Delete an enterprise application using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
active-directory Qs Configure Template Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-template-windows-vmss.md
In this section, you assign a user-assigned managed identity to a virtual machin
} }
- ```
+ ```
**Microsoft.Compute/virtualMachineScaleSets API version 2017-12-01**
In this section, you assign a user-assigned managed identity to a virtual machin
"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',variables('<USERASSIGNEDIDENTITY>'))]" ] }- }
+ ```
3. When you are done, your template should look similar to the following:
- **Microsoft.Compute/virtualMachineScaleSets API version 2018-06-01**
+ **Microsoft.Compute/virtualMachineScaleSets API version 2018-06-01**
```json "resources": [
In this section, you assign a user-assigned managed identity to a virtual machin
} ] ```+ ### Remove user-assigned managed identity from an Azure virtual machine scale set If you have a virtual machine scale set that no longer needs a user-assigned managed identity:
active-directory Tutorial Linux Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-cosmos-db.md
To gain access to the Azure Cosmos DB account access keys from the Resource Mana
```azurecli-interactive az resource show --id /subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.Compute/virtualMachines/<VM NAMe> --api-version 2017-12-01 ```+ The response includes the details of the system-assigned managed identity (note the principalID as it is used in the next section): ```output
To complete these steps, you need an SSH client. If you are using Windows, you c
> In the previous request, the value of the "resource" parameter must be an exact match for what is expected by Azure AD. When using the Azure Resource Manager resource ID, you must include the trailing slash on the URI. > In the following response, the access_token element has been shortened for brevity.
- ```bash
- {"access_token":"eyJ0eXAiOi...",
- "expires_in":"3599",
- "expires_on":"1518503375",
- "not_before":"1518499475",
- "resource":"https://management.azure.com/",
- "token_type":"Bearer",
- "client_id":"1ef89848-e14b-465f-8780-bf541d325cd5"}
- ```
-
+ ```json
+ {
+ "access_token":"eyJ0eXAiOi...",
+ "expires_in":"3599",
+ "expires_on":"1518503375",
+ "not_before":"1518499475",
+ "resource":"https://management.azure.com/",
+ "token_type":"Bearer",
+ "client_id":"1ef89848-e14b-465f-8780-bf541d325cd5"
+ }
+ ```
+ ### Get access keys from Azure Resource Manager to make Azure Cosmos DB calls Now use CURL to call Resource Manager using the access token retrieved in the previous section to retrieve the Azure Cosmos DB account access key. Once we have the access key, we can query Azure Cosmos DB. Be sure to replace the `<SUBSCRIPTION ID>`, `<RESOURCE GROUP>`, and `<COSMOS DB ACCOUNT NAME>` parameter values with your own values. Replace the `<ACCESS TOKEN>` value with the access token you retrieved earlier. If you want to retrieve read/write keys, use key operation type `listKeys`. If you want to retrieve read-only keys, use the key operation type `readonlykeys`:
active-directory Tutorial Linux Vm Access Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-access-key.md
To complete these steps, you will need an SSH client. If you are using Windows,
> In the previous request, the value of the "resource" parameter must be an exact match for what is expected by Azure AD. When using the Azure Resource Manager resource ID, you must include the trailing slash on the URI. > In the following response, the access_token element has been shortened for brevity.
- ```bash
- {"access_token":"eyJ0eXAiOiJ...",
- "refresh_token":"",
- "expires_in":"3599",
- "expires_on":"1504130527",
- "not_before":"1504126627",
- "resource":"https://management.azure.com",
- "token_type":"Bearer"}
- ```
-
+ ```json
+ {
+ "access_token": "eyJ0eXAiOiJ...",
+ "refresh_token": "",
+ "expires_in": "3599",
+ "expires_on": "1504130527",
+ "not_before": "1504126627",
+ "resource": "https://management.azure.com",
+ "token_type": "Bearer"
+ }
+ ```
+ ## Get storage account access keys from Azure Resource Manager to make storage calls Now use CURL to call Resource Manager using the access token we retrieved in the previous section, to retrieve the storage access key. Once we have the storage access key, we can call storage upload/download operations. Be sure to replace the `<SUBSCRIPTION ID>`, `<RESOURCE GROUP>`, and `<STORAGE ACCOUNT NAME>` parameter values with your own values. Replace the `<ACCESS TOKEN>` value with the access token you retrieved earlier:
The CURL response gives you the list of Keys:
```bash {"keys":[{"keyName":"key1","permissions":"Full","value":"iqDPNt..."},{"keyName":"key2","permissions":"Full","value":"U+uI0B..."}]} ```+ Create a sample blob file to upload to your blob storage container. On a Linux VM, you can do this with the following command. ```bash
Response:
In this tutorial, you learned how to use a Linux VM system-assigned managed identity to access Azure Storage using an access key. To learn more about Azure Storage access keys see: > [!div class="nextstepaction"]
->[Manage your storage access keys](../../storage/common/storage-account-create.md)
+>[Manage your storage access keys](../../storage/common/storage-account-create.md)
active-directory Tutorial Linux Vm Access Storage Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-sas.md
Now that you have your SSH client continue to the steps below:
> In the previous request, the value of the "resource" parameter must be an exact match for what is expected by Azure AD. When using the Azure Resource Manager resource ID, you must include the trailing slash on the URI. > In the following response, the access_token element has been shortened for brevity.
- ```bash
- {"access_token":"eyJ0eXAiOiJ...",
- "refresh_token":"",
- "expires_in":"3599",
- "expires_on":"1504130527",
- "not_before":"1504126627",
- "resource":"https://management.azure.com",
- "token_type":"Bearer"}
- ```
+ ```json
+ {
+ "access_token":"eyJ0eXAiOiJ...",
+ "refresh_token":"",
+ "expires_in":"3599",
+ "expires_on":"1504130527",
+ "not_before":"1504126627",
+ "resource":"https://management.azure.com",
+ "token_type":"Bearer"
+ }
+ ```
## Get a SAS credential from Azure Resource Manager to make storage calls
Response:
In this tutorial, you learned how to use a Linux VM system-assigned managed identity to access Azure Storage using a SAS credential. To learn more about Azure Storage SAS, see: > [!div class="nextstepaction"]
->[Using shared access signatures (SAS)](../../storage/common/storage-sas-overview.md)
+>[Using shared access signatures (SAS)](../../storage/common/storage-sas-overview.md)
active-directory Custom User Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-user-permissions.md
+
+ Title: User management permissions for Azure AD custom roles (preview) - Azure Active Directory
+description: User management permissions for Azure AD custom roles in the Azure portal, PowerShell, or Microsoft Graph API.
+++++++ Last updated : 10/26/2022+++++
+# User management permissions for Azure AD custom roles (preview)
+
+> [!IMPORTANT]
+> User management permissions for Azure AD custom roles is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+User management permissions can be used in custom role definitions in Azure Active Directory (Azure AD) to grant fine-grained access such as the following:
+
+- Read or update basic properties of users
+- Read or update identity of users
+- Read or update job information of users
+- Update contact information of users
+- Update parental controls of users
+- Update settings of users
+- Read direct reports of users
+- Update extension properties of users
+- Read device information of users
+- Read or manage licenses of users
+- Update password policies of users
+- Read assignments and memberships of users
+
+This article lists the permissions you can use in your custom roles for different user management scenarios. For information about how to create custom roles, see [Create and assign a custom role](custom-create.md).
+
+## License requirements
++
+## Read or update basic properties of users
+
+The following permissions are available to read or update basic properties of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/standard/read | Read basic properties on users. |
+> | microsoft.directory/users/basic/update | Update basic properties on users. |
+
+## Read or update identity of users
+
+The following permissions are available to read or update identity of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/identities/read | Read identities of users. |
+> | microsoft.directory/users/identities/update | Update the identity properties of users, such as name and user principal name. |
+
+## Read or update job information of users
+
+The following permissions are available to read or update job information of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/manager/read | Read manager of users. |
+> | microsoft.directory/users/manager/update | Update manager for users. |
+> | microsoft.directory/users/jobInfo/update | Update the job info properties of users, such as job title, department, and company name. |
+
+## Update contact information of users
+
+The following permissions are available to update contact information of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/contactInfo/update | Update the contact info properties of users, such as address, phone, and email. |
+
+## Update parental controls of users
+
+The following permissions are available to update parental controls of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/parentalControls/update | Update parental controls of users. |
+
+## Update settings of users
+
+The following permissions are available to update settings of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/usageLocation/update | Update usage location of users. |
+
+## Read direct reports of users
+
+The following permissions are available to read direct reports of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/directReports/read | Read the direct reports for users. |
+
+## Update extension properties of users
+
+The following permissions are available to update extension properties of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/extensionProperties/update | Update extension properties of users. |
+
+## Read device information of users
+
+The following permissions are available to read device information of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/ownedDevices/read | Read owned devices of users. |
+> | microsoft.directory/users/registeredDevices/read | Read registered devices of users. |
+> | microsoft.directory/users/deviceForResourceAccount/read | Read deviceForResourceAccount of users. |
+
+## Read or manage licenses of users
+
+The following permissions are available to read or manage licenses of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/licenseDetails/read | Read license details of users. |
+> | microsoft.directory/users/assignLicense | Manage user licenses. |
+> | microsoft.directory/users/reprocessLicenseAssignment | Reprocess license assignments for users. |
+
+## Update password policies of users
+
+The following permissions are available to update password policies of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/passwordPolicies/update | Update password policies properties of users. |
+
+## Read assignments and memberships of users
+
+The following permissions are available to read assignments and memberships of users.
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/appRoleAssignments/read | Read application role assignments for users. |
+> | microsoft.directory/users/scopedRoleMemberOf/read | Read user's membership of an Azure AD role, that is scoped to an administrative unit. |
+> | microsoft.directory/users/memberOf/read | Read the group memberships of users. |
+
+## Full list of permissions
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | - | -- |
+> | microsoft.directory/users/appRoleAssignments/read | Read application role assignments for users. |
+> | microsoft.directory/users/assignLicense | Manage user licenses. |
+> | microsoft.directory/users/basic/update | Update basic properties on users. |
+> | microsoft.directory/users/contactInfo/update | Update the contact info properties of users, such as address, phone, and email. |
+> | microsoft.directory/users/deviceForResourceAccount/read | Read deviceForResourceAccount of users. |
+> | microsoft.directory/users/directReports/read | Read the direct reports for users. |
+> | microsoft.directory/users/extensionProperties/update | Update extension properties of users. |
+> | microsoft.directory/users/identities/read | Read identities of users. |
+> | microsoft.directory/users/identities/update | Update the identity properties of users, such as name and user principal name. |
+> | microsoft.directory/users/jobInfo/update | Update the job info properties of users, such as job title, department, and company name. |
+> | microsoft.directory/users/licenseDetails/read | Read license details of users. |
+> | microsoft.directory/users/manager/read | Read manager of users. |
+> | microsoft.directory/users/manager/update | Update manager for users. |
+> | microsoft.directory/users/memberOf/read | Read the group memberships of users. |
+> | microsoft.directory/users/ownedDevices/read | Read owned devices of users. |
+> | microsoft.directory/users/parentalControls/update | Update parental controls of users. |
+> | microsoft.directory/users/passwordPolicies/update | Update password policies properties of users. |
+> | microsoft.directory/users/registeredDevices/read | Read registered devices of users. |
+> | microsoft.directory/users/reprocessLicenseAssignment | Reprocess license assignments for users. |
+> | microsoft.directory/users/scopedRoleMemberOf/read | Read user's membership of an Azure AD role, that is scoped to an administrative unit. |
+> | microsoft.directory/users/standard/read | Read basic properties on users. |
+> | microsoft.directory/users/usageLocation/update | Update usage location of users. |
+
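+As a hedged sketch of how a few of these permissions can be combined, the following example creates a custom role with some of the user management permissions listed above, using the `New-AzureADMSRoleDefinition` cmdlet from the AzureAD PowerShell module (the display name and description are placeholders; see [Create and assign a custom role](custom-create.md) for the full procedure):
+
+```powershell
+# Sketch: create a custom role that can read users and update their basic and manager properties.
+$allowedResourceActions = @(
+    "microsoft.directory/users/standard/read",
+    "microsoft.directory/users/basic/update",
+    "microsoft.directory/users/manager/update"
+)
+$rolePermissions = @{ "allowedResourceActions" = $allowedResourceActions }
+
+New-AzureADMSRoleDefinition -RolePermissions $rolePermissions `
+    -DisplayName "User Profile Updater (example)" `
+    -Description "Can read users and update basic profile and manager information." `
+    -TemplateId (New-Guid).Guid `
+    -IsEnabled $true
+```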
+## Next steps
+
+- [Create and assign a custom role in Azure Active Directory](custom-create.md)
+- [List Azure AD role assignments](view-assignments.md)
active-directory Ascentis Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ascentis-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
In the **Sign-on URL** text box, type a URL using any one of the following patterns:
- ```https
+ ```https
https://selfservice.ascentis.com/<clientname>/STS/signin.aspx?SAMLResponse=true https://selfservice2.ascentis.com/<clientname>/STS/signin.aspx?SAMLResponse=true ```
When you click the Ascentis tile in the Access Panel, you should be automaticall
- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
active-directory Cernercentral Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cernercentral-provisioning-tutorial.md
Before configuring and enabling the provisioning service, you should decide what
This section guides you through connecting your Azure AD to Cerner Central's User Roster using Cerner's SCIM user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in Cerner Central based on user and group assignment in Azure AD.
-> You may also choose to enabled SAML-based Single Sign-On for Cerner Central, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other. For more information, see the [Cerner Central single sign-on tutorial](cernercentral-tutorial.md).
+> You may also choose to enable SAML-based single sign-on for Cerner Central, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other. For more information, see the [Cerner Central single sign-on tutorial](cernercentral-tutorial.md).
### To configure automatic user account provisioning to Cerner Central in Azure AD:
In order to provision user accounts to Cerner Central, you'll need to request
* In the **Secret Token** field, enter the OAuth bearer token you generated in step #3 and click **Test Connection**.
- * You should see a success notification on the upper­right side of your portal.
+ * You should see a success notification on the upper-right side of your portal.
1. Enter the email address of a person or group who should receive provisioning error notifications in the **Notification Email** field, and check the checkbox below.
For more information on how to read the Azure AD provisioning logs, see [Reporti
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md).
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md).
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal:
d. In the **Logout URL** box, enter a URL in the pattern `https://<FortiGate IP or FQDN address>:<Custom SSL VPN port><FQDN>/remote/saml/logout`.
- > [!NOTE]
- > These values are just patterns. You need to use the actual **Sign on URL**, **Identifier**, **Reply URL**, and **Logout URL** that is configured on the FortiGate.
+ > [!NOTE]
+ > These values are just patterns. You need to use the actual **Sign on URL**, **Identifier**, **Reply URL**, and **Logout URL** that is configured on the FortiGate.
1. The FortiGate SSL VPN application expects SAML assertions in a specific format, which requires you to add custom attribute mappings to the configuration. The following screenshot shows the list of default attributes.
- ![Screenshot of showing Attributes and Claims section.](./media/fortigate-ssl-vpn-tutorial/claims.png)
-
+ ![Screenshot of showing Attributes and Claims section.](./media/fortigate-ssl-vpn-tutorial/claims.png)
1. The claims required by FortiGate SSL VPN are shown in the following table. The names of these claims must match the names used in the **Perform FortiGate command-line configuration** section of this tutorial. Names are case-sensitive.
Follow these steps to enable Azure AD SSO in the Azure portal:
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select the **Download** link next to **Certificate (Base64)** to download the certificate and save it on your computer:
- ![Screenshot that shows the certificate download link.](common/certificatebase64.png)
+ ![Screenshot that shows the certificate download link.](common/certificatebase64.png)
1. In the **Set up FortiGate SSL VPN** section, copy the appropriate URL or URLs, based on your requirements:
- ![Screenshot that shows the configuration URLs.](common/copy-configuration-urls.png)
+ ![Screenshot that shows the configuration URLs.](common/copy-configuration-urls.png)
#### Create an Azure AD test user
To complete these steps, you'll need the values you recorded earlier:
| FortiGate SAML CLI setting | Equivalent Azure configuration | | | |
- | SP entity ID (`entity-id`) | Identifier (Entity ID) |
-| SP Single Sign-On URL (`single-sign-on-url`) | Reply URL (Assertion Consumer Service URL) |
+ | SP entity ID (`entity-id`) | Identifier (Entity ID) |
+| SP Single Sign-On URL (`single-sign-on-url`) | Reply URL (Assertion Consumer Service URL) |
| SP Single Logout URL (`single-logout-url`) | Logout URL | | IdP Entity ID (`idp-entity-id`) | Azure AD Identifier | | IdP Single Sign-On URL (`idp-single-sign-on-url`) | Azure Login URL |
To complete these steps, you'll need the values you recorded earlier:
1. Establish an SSH session to your FortiGate appliance, and sign in with a FortiGate Administrator account. 1. Run these commands and substitute the `<values>` with the information that you collected previously:
- ```console
+ ```console
config user saml
- edit azure
- set cert <FortiGate VPN Server Certificate Name>
- set entity-id < Identifier (Entity ID)Entity ID>
- set single-sign-on-url < Reply URL Reply URL>
- set single-logout-url <Logout URL>
- set idp-entity-id <Azure AD Identifier>
- set idp-single-sign-on-url <Azure Login URL>
- set idp-single-logout-url <Azure Logout URL>
- set idp-cert <Base64 SAML Certificate Name>
- set user-name username
- set group-name group
- next
+ edit azure
+ set cert <FortiGate VPN Server Certificate Name>
+ set entity-id <Identifier (Entity ID)>
+ set single-sign-on-url <Reply URL>
+ set single-logout-url <Logout URL>
+ set idp-entity-id <Azure AD Identifier>
+ set idp-single-sign-on-url <Azure Login URL>
+ set idp-single-logout-url <Azure Logout URL>
+ set idp-cert <Base64 SAML Certificate Name>
+ set user-name username
+ set group-name group
+ next
end
-
- ```
+ ```
#### Configure FortiGate for group matching
In this section, you'll configure FortiGate to recognize the Object ID of the se
To complete these steps, you'll need the Object ID of the FortiGateAccess security group that you created earlier in this tutorial. 1. Establish an SSH session to your FortiGate appliance, and sign in with a FortiGate Administrator account.+ 1. Run these commands:
- ```console
+ ```console
config user group
- edit FortiGateAccess
- set member azure
- config match
- edit 1
- set server-name azure
- set group-name <Object Id>
- next
- end
- next
+ edit FortiGateAccess
+ set member azure
+ config match
+ edit 1
+ set server-name azure
+ set group-name <Object Id>
+ next
+ end
+ next
end
- ```
-
+ ```
+ #### Create a FortiGate VPN Portals and Firewall Policy In this section, you'll configure a FortiGate VPN Portals and Firewall Policy that grants access to the FortiGateAccess security group you created earlier in this tutorial.
active-directory Linkedinelevate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinelevate-provisioning-tutorial.md
The first step is to retrieve your LinkedIn access token. If you are an Enterpri
4. Click **+ Add new SCIM configuration** and follow the procedure by filling in each field. > [!NOTE]
- > When auto­assign licenses is not enabled, it means that only user data is synced.
+ > When auto-assign licenses is not enabled, it means that only user data is synced.
![Screenshot shows the LinkedIn Account Center Global Settings.](./media/linkedinelevate-provisioning-tutorial/linkedin_elevate1.PNG) > [!NOTE]
- > When auto­license assignment is enabled, you need to note the application instance and license type. Licenses are assigned on a first come, first serve basis until all the licenses are taken.
+ > When auto-license assignment is enabled, you need to note the application instance and license type. Licenses are assigned on a first come, first serve basis until all the licenses are taken.
![Screenshot shows the S C I M Setup page.](./media/linkedinelevate-provisioning-tutorial/linkedin_elevate2.PNG)
The first step is to retrieve your LinkedIn access token. If you are an Enterpri
* In the **Secret Token** field, enter the access token you generated in step 1 and click **Test Connection** .
- * You should see a success notification on the upper­right side of
+ * You should see a success notification on the upper-right side of
your portal. 12. Enter the email address of a person or group who should receive provisioning error notifications in the **Notification Email** field, and check the checkbox below.
active-directory Uber Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/uber-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Uber for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Uber.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: f16047ee-8ed6-4f8f-86e4-d9bc2cbd9016
+++
+ms.devlang: na
+ Last updated : 10/25/2022+++
+# Tutorial: Configure Uber for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Uber and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [Uber](https://www.uber.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Uber.
+> * Remove users in Uber when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Uber.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* You must be onboarded to an [Uber for Business](https://business.uber.com/) organization and have Admin access to it.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Uber](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Uber to support provisioning with Azure AD
+
+Before you start the setup, make sure the following requirements are met to enable SCIM provisioning end to end:
+
+* You must be onboarded to an [Uber for Business](https://business.uber.com/) organization and have Admin access to it.
+* You must allow syncing via identity providers. To find this setting, hover over your profile photo in the top-right corner and navigate to **Settings > Integrations section > toggle Allow**.
+* Grab your `organization-id` and substitute it into `https://api.uber.com/v1/scim/organizations/{organization-id}/v2` to create your **Tenant Url**, as shown in the sketch below. You enter this Tenant Url in the Provisioning tab of your Uber application in the Azure portal.
+
+ ![Screenshot of Grab Organization ID.](media/uber-provisioning-tutorial/organization-id.png)
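As a rough sketch, building the **Tenant Url** from a placeholder organization ID looks like this (the ID below is hypothetical; use the value from your own Uber for Business settings):

```bash
# Hypothetical organization ID copied from the Uber for Business settings page.
ORGANIZATION_ID="00000000-0000-0000-0000-000000000000"
# Substitute it into the SCIM base URL to form the Tenant Url used in the Azure portal Provisioning tab.
TENANT_URL="https://api.uber.com/v1/scim/organizations/${ORGANIZATION_ID}/v2"
echo "${TENANT_URL}"
```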
+
+## Step 3. Add Uber from the Azure AD application gallery
+
+Add Uber from the Azure AD application gallery to start managing provisioning to Uber. If you have previously set up Uber for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+
+## Step 5. Configure automatic user provisioning to Uber
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Uber based on user assignments in Azure AD.
+
+### To configure automatic user provisioning for Uber in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Uber**.
+
+ ![Screenshot of the Uber link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab,](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, enter the **Tenant Url**, and then click **Authorize**. Make sure that you enter your Uber account's Admin credentials. Click **Test Connection** to ensure Azure AD can connect to Uber. If the connection fails, ensure your Uber account has Admin permissions and try again.
+
+ ![Screenshot of Token.](media/uber-provisioning-tutorial/authorize.png)
+
+1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Uber**.
+
+1. Review the user attributes that are synchronized from Azure AD to Uber in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Uber for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Uber API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Uber|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |externalId|String||&check;
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Uber, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to Uber by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
The Admin API is served over HTTPS. All URLs referenced in the documentation hav
## Authentication
-The API is protected through Azure Active Directory and uses OAuth2 bearer tokens. The app registration needs to have the API Permission for `Verifiable Credentials Service Admin` and then when acquiring the access token the app should use scope `6a8b4b39-c021-437c-b060-5a14a3fd65f3/full_access`.
+The API is protected through Azure Active Directory and uses OAuth2 bearer tokens. The app registration needs the API permission for `Verifiable Credentials Service Admin`. Then, when acquiring the access token, the app should use the scope `6a8b4b39-c021-437c-b060-5a14a3fd65f3/full_access`. The access token must be for a user with the [global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) or the [authentication policy administrator](../../active-directory/roles/permissions-reference.md#authentication-policy-administrator) role.
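For ad-hoc testing, one way to request such a token is with the Azure CLI. This is a sketch; it assumes the signed-in user holds one of the required roles and that the Azure CLI is allowed to acquire tokens for this resource.

```azurecli-interactive
# Request an access token for the Verifiable Credentials Service Admin resource as the signed-in user.
az account get-access-token --resource 6a8b4b39-c021-437c-b060-5a14a3fd65f3 --query accessToken --output tsv
```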
## Onboarding
active-directory Issuance Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md
The issuance request payload contains information about your verifiable credenti
"clientName": "Verifiable Credential Expert Sample" }, "type": "VerifiedCredentialExpert",
- "manifest": "https://verifiedid.did.msidentity.com/v1.0/12345678-0000-0000-0000-000000000000/verifiableCredentials/contracts/VerifiedCredentialExpert",
+ "manifest": "https://verifiedid.did.msidentity.com/v1.0/tenants/12345678-0000-0000-0000-000000000000/verifiableCredentials/contracts/MTIzNDU2NzgtMDAwMC0wMDAwLTAwMDAtMDAwMDAwMDAwMDAwdmVyaWZpZWRjcmVkZW50aWFsZXhwZXJ0/manifest",
"claims": { "given_name": "Megan", "family_name": "Bowen"
aks Azure Csi Blob Storage Static https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-static.md
The following example demonstrates how to mount a Blob storage container as a pe
storage: 10Gi volumeName: pv-blob storageClassName: azureblob-nfs-premium
- ```
+ ```
4. Run the following command to create the persistent volume claim using the `kubectl create` command referencing the YAML file created earlier:
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Disks on Azure Kub
description: Learn how to use the Container Storage Interface (CSI) driver for Azure Disks in an Azure Kubernetes Service (AKS) cluster. Previously updated : 07/21/2022-- Last updated : 10/13/2022 # Use the Azure Disks Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
In addition to in-tree driver features, Azure Disks CSI driver supports the foll
- Performance improvements during concurrent disk attach and detach - In-tree drivers attach or detach disks in serial, while CSI drivers attach or detach disks in batch. There's significant improvement when there are multiple disks attaching to one node.
+- Premium SSD v1 and v2 are supported.
- Zone-redundant storage (ZRS) disk support - `Premium_ZRS`, `StandardSSD_ZRS` disk types are supported. ZRS disk could be scheduled on the zone or non-zone node, without the restriction that disk volume should be co-located in the same zone as a given node. For more information, including which regions are supported, see [Zone-redundant storage for managed disks](../virtual-machines/disks-redundancy.md). - [Snapshot](#volume-snapshots)
In addition to in-tree driver features, Azure Disks CSI driver supports the foll
|Name | Meaning | Available Value | Mandatory | Default value | | | | |
-|skuName | Azure Disks storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS` | No | `StandardSSD_LRS`|
+|skuName | Azure Disks storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `PremiumV2_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS` | No | `StandardSSD_LRS`|
|fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows| |cachingMode | [Azure Data Disk Host Cache Setting](../virtual-machines/windows/premium-storage-performance.md#disk-caching) | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`| |location | Specify Azure region where Azure Disks will be created | `eastus`, `westus`, etc. | No | If empty, driver will use the same location name as current AKS cluster|
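As a sketch of how these parameters fit together, the following creates a storage class that requests the `PremiumV2_LRS` SKU through the Azure Disks CSI driver. The class name is an assumption, and `cachingMode` is set to `None` because Premium SSD v2 disks don't support host caching.

```bash
# Sketch: StorageClass using the Azure Disks CSI driver with the PremiumV2_LRS SKU.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-v2        # assumed name
provisioner: disk.csi.azure.com
parameters:
  skuName: PremiumV2_LRS
  cachingMode: None               # Premium SSD v2 doesn't support host caching
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
```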
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
An up-to-date cluster avoids unnecessary performance issues and ensures you bene
Add-ons and extensions covered by the [AKS support policy](/azure/aks/support-policies) provide additional and supported functionality to your cluster while allowing you to benefit from the latest performance improvements and energy optimizations throughout your cluster lifecycle.
-* Ensure you install [Keda](/azure/aks/integrations#available-add-ons) as an add-on and [GitOps & Dapr](/azure/aks/cluster-extensions?tabs=azure-cli#currently-available-extensions) as extensions.
+* Ensure you install [KEDA](/azure/aks/integrations#available-add-ons) as an add-on and [GitOps & Dapr](/azure/aks/cluster-extensions?tabs=azure-cli#currently-available-extensions) as extensions.
### Containerize your workload where applicable
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
Title: Configure kubenet networking in Azure Kubernetes Service (AKS)
description: Learn how to configure kubenet (basic) network in Azure Kubernetes Service (AKS) to deploy an AKS cluster into an existing virtual network and subnet. Previously updated : 06/20/2022 Last updated : 10/26/2022
Limitations:
* For system-assigned managed identity, it's only supported to provide your own subnet and route table via Azure CLI. That's because CLI will add the role assignment automatically. If you are using an ARM template or other clients, you must use a [user-assigned managed identity][Create an AKS cluster with user-assigned managed identities], assign permissions before cluster creation, and ensure the user-assigned identity has write permissions to your custom subnet and custom route table. * Using the same route table with multiple AKS clusters isn't supported.
-After you create a custom route table and associate it to your subnet in your virtual network, you can create a new AKS cluster that uses your route table.
+> [!NOTE]
+> To create and use your own VNet and route table with the `kubenet` network plugin, you need to use a [user-assigned control plane identity][bring-your-own-control-plane-managed-identity]. With a system-assigned control plane identity, the identity ID can't be retrieved before the cluster is created, which causes a delay during role assignment.
+> To create and use your own VNet and route table with the `azure` network plugin, both system-assigned and user-assigned managed identities are supported. However, a user-assigned managed identity is recommended for bring-your-own (BYO) scenarios.
+
+After creating a custom route table and associating it with a subnet in your virtual network, you can create a new AKS cluster specifying your route table with a user-assigned managed identity.
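A minimal sketch of that preparation, with assumed resource names (`myControlPlaneIdentity`, `myRouteTable`), grants the identity write access to the custom route table before the cluster is created:

```azurecli-interactive
# Create a user-assigned control plane identity (name is an assumption).
az identity create --resource-group myResourceGroup --name myControlPlaneIdentity

# Grant the identity permissions on the custom route table before creating the cluster.
IDENTITY_PRINCIPAL_ID=$(az identity show --resource-group myResourceGroup --name myControlPlaneIdentity --query principalId --output tsv)
ROUTE_TABLE_ID=$(az network route-table show --resource-group myResourceGroup --name myRouteTable --query id --output tsv)
az role assignment create --assignee-object-id "$IDENTITY_PRINCIPAL_ID" --assignee-principal-type ServicePrincipal --role "Network Contributor" --scope "$ROUTE_TABLE_ID"
```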
You need to use the subnet ID for where you plan to deploy your AKS cluster. This subnet also must be associated with your custom route table. ```azurecli-interactive
az network vnet subnet list --resource-group
```azurecli-interactive # Create a Kubernetes cluster with a custom subnet preconfigured with a route table
-az aks create -g MyResourceGroup -n MyManagedCluster --vnet-subnet-id <MySubnetID-resource-id>
+az aks create -g myResourceGroup -n myManagedCluster --vnet-subnet-id mySubnetIDResourceID --enable-managed-identity --assign-identity controlPlaneIdentityResourceID
``` ## Next steps
With an AKS cluster deployed into your existing virtual network subnet, you can
[network-comparisons]: concepts-network.md#compare-network-models [custom-route-table]: ../virtual-network/manage-route-table.md [Create an AKS cluster with user-assigned managed identities]: configure-kubenet.md#create-an-aks-cluster-with-user-assigned-managed-identities
+[bring-your-own-control-plane-managed-identity]: ../aks/use-managed-identity.md#bring-your-own-control-plane-managed-identity
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
This scenario is intended for customers using Azure Monitor to monitor AKS. It d
## Container insights AKS generates [platform metrics and resource logs](monitor-aks-reference.md), like any other Azure resource, that you can use to monitor its basic health and performance. Enable [Container insights](../azure-monitor/containers/container-insights-overview.md) to expand on this monitoring. Container insights is a feature in Azure Monitor that monitors the health and performance of managed Kubernetes clusters hosted on AKS in addition to other cluster configurations. Container insights provides interactive views and workbooks that analyze collected data for a variety of monitoring scenarios.
-[Prometheus](https://aka.ms/azureprometheus-promio) and [Grafana](https://aka.ms/azureprometheus-promio-grafana) are CNCF-backed, widely popular open-source tools for Kubernetes monitoring. AKS exposes many metrics in Prometheus format, which makes Prometheus a popular choice for monitoring. [Container insights](../azure-monitor/containers/container-insights-overview.md) has native integration with AKS, collecting critical metrics and logs, alerting on identified issues, and providing visualization with workbooks. It also collects certain Prometheus metrics, and many native Azure Monitor insights are built on top of Prometheus metrics. Container insights complements and completes E2E monitoring of AKS, including log collection, which Prometheus as a stand-alone tool doesn't provide. Many customers use Prometheus integration and Azure Monitor together for E2E monitoring.
+[Prometheus](https://aka.ms/azureprometheus-promio) and [Grafana](https://aka.ms/azureprometheus-promio-grafana) are CNCF-backed, widely popular open-source tools for Kubernetes monitoring. AKS exposes many metrics in Prometheus format, which makes Prometheus a popular choice for monitoring. [Container insights](../azure-monitor/containers/container-insights-overview.md) has native integration with AKS, collecting critical metrics and logs, alerting on identified issues, and providing visualization with workbooks. It also collects certain Prometheus metrics, and many native Azure Monitor Insights are built on top of Prometheus metrics. Container insights complements and completes E2E monitoring of AKS, including log collection, which Prometheus as a stand-alone tool doesn't provide. Many customers use Prometheus integration and Azure Monitor together for E2E monitoring.
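If Container insights isn't already enabled on your cluster, one way to turn it on is the monitoring add-on; the resource names below are assumptions:

```azurecli-interactive
# Enable Container insights (the monitoring add-on) on an existing AKS cluster.
az aks enable-addons --addons monitoring --name myAKSCluster --resource-group myResourceGroup
```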
Learn more about using Container insights at [Container insights overview](../azure-monitor/containers/container-insights-overview.md). [Monitor layers of AKS with Container insights](#monitor-layers-of-aks-with-container-insights) below introduces various features of Container insights and the monitoring scenarios that they support.
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workl
description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity (preview). Previously updated : 10/03/2022 Last updated : 10/24/2022 # Deploy and configure workload identity (preview) on an Azure Kubernetes Service (AKS) cluster
When ready, refresh the registration of the *Microsoft.ContainerService* resourc
az provider register --namespace Microsoft.ContainerService ```
-## Register the 'EnableOIDCIssuerPreview' feature flag
-
-Register the `EnableOIDCIssuerPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnableOIDCIssuerPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableOIDCIssuerPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
- ## Create AKS cluster Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer. The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup*:
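A minimal sketch of that command, using the names from the description (`--node-count 1` and `--generate-ssh-keys` are assumptions beyond the required `--enable-oidc-issuer` parameter):

```azurecli-interactive
# Sketch: create a one-node AKS cluster with the OIDC issuer enabled.
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --enable-oidc-issuer --generate-ssh-keys
```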
In this article, you deployed a Kubernetes cluster and configured it to use a wo
<!-- INTERNAL LINKS --> [kubernetes-concepts]: concepts-clusters-workloads.md [az-feature-register]: /cli/azure/feature#az_feature_register
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-list]: /cli/azure/feature#az-feature-list
[workload-identity-overview]: workload-identity-overview.md [create-key-vault-azure-cli]: ../key-vault/general/quick-create-cli.md [az-keyvault-list]: /cli/azure/keyvault#az-keyvault-list
api-management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/overview.md
The following table lists all the upcoming breaking changes and feature retireme
| [Resource provider source IP address updates][bc1] | March 31, 2023 | | [Resource provider source IP address updates][rp2023] | September 30, 2023 | | [API version retirements][api2023] | September 30, 2023 |
-| [Deprecated (legacy) portal retirement][devportal2023] | October 2023 |
+| [Deprecated (legacy) portal retirement][devportal2023] | October 31, 2023 |
| [Self-hosted gateway v0/v1 retirement][shgwv0v1] | October 1, 2023 | | [stv1 platform retirement][stv12024] | August 31, 2024 | | [ADAL-based Azure AD or Azure AD B2C identity provider retirement][msal2025] | September 30, 2025 |
api-management How To Configure Service Fabric Backend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-service-fabric-backend.md
Add the [`set-backend-service`](api-management-transformation-policies.md#SetBac
1. On the **Design** tab, in the **Inbound processing** section, select the code editor (**</>**) icon. 1. Position the cursor inside the **&lt;inbound&gt;** element 1. Add the `set-service-backend` policy statement.
- * In `backend-id`, substitute the name of your Service Fabric backend.
+ * In `backend-id`, substitute the name of your Service Fabric backend.
- * The `sf-resolve-condition` is a condition for re-resolving a service location and resending a request. The number of retries was set when configuring the backend. For example:
+ * The `sf-resolve-condition` is a condition for re-resolving a service location and resending a request. The number of retries was set when configuring the backend. For example:
```xml <set-backend-service backend-id="mysfbackend" sf-resolve-condition="@(context.LastError?.Reason == "BackendConnectionFailure")"/>
- ```
+ ```
1. Select **Save**. :::image type="content" source="media/backends/set-backend-service.png" alt-text="Configure set-backend-service policy":::
app-service Deploy Container Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-container-github-action.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli). ```azurecli-interactive
- az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName
-/providers/Microsoft.Web/sites/
+ az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/
``` 1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
When you configure the workflow file later, you use the secret for the input `cr
with: creds: ${{ secrets.AZURE_CREDENTIALS }} ```+ # [OpenID Connect](#tab/openid) You need to provide your application's **Client ID**, **Tenant ID** and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
jobs:
publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }} images: 'mycontainer.azurecr.io/myapp:${{ github.sha }}' ```+ # [Service principal](#tab/service-principal) ```yaml
jobs:
run: | az logout ```+ ## Next steps
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 9/15/2022 Last updated : 10/26/2022
If your App Service Environment doesn't pass the validation checks or you try to
|Migration to ASEv3 is not allowed for this ASE. |You won't be able to migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). | |Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) has been met. |Remove unneeded environments or contact support to review your options. | |`<ZoneRedundant><DedicatedHosts><ASEv3/ASE>` is not available in this location. |You'll see this error if you're trying to migrate an App Service Environment in a region that doesn't support one of your requested features. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
-|Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](using-an-ase.md#upgrade-preference) from the Azure portal. |Wait until the upgrade finishes and then migrate. |
+|Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](how-to-upgrade-preference.md) from the Azure portal. In some cases, an upgrade will be initiated when visiting the migration page if your App Service Environment isn't on the current build. |Wait until the upgrade finishes and then migrate. |
|App Service Environment management operation in progress. |Your App Service Environment is undergoing a management operation. These operations can include activities such as deployments or upgrades. Migration is blocked until these operations are complete. |You'll be able to migrate once these operations are complete. | ## Overview of the migration process using the migration feature
There's no cost to migrate your App Service Environment. You'll stop being charg
> [Using an App Service Environment v3](using.md) > [!div class="nextstepaction"]
-> [Custom domain suffix](./how-to-custom-domain-suffix.md)
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
When an outdated runtime is hidden from the Portal, any of your existing sites u
If you need to create another web app with an outdated runtime version that is no longer shown on the Portal see the language configuration guides for instructions on how to get the runtime version of your site. You can use the Azure CLI to create another site with the same runtime. Alternatively, you can use the **Export Template** button on the web app blade in the Portal to export an ARM template of the site. You can reuse this template to deploy a new site with the same runtime and configuration.
-#### Debian 9 End of Life
-
-On June 30th 2022 Debian 9 (also known as "Stretch") will reach End-of-Life (EOL) status, which means security patches and updates will cease. As of June 2022, a platform update is rolling out to provide an upgrade path to Debian 11 (also known as "Bullseye"). The runtimes listed below are currently using Debian 9; if you are using one of the listed runtimes, follow the instructions below to upgrade your site to Bullseye.
--- Python 3.8-- Python 3.7-- .NET 3.1-- PHP 7.4-
-> [!NOTE]
-> To ensure customer applications are running on secure and supported Debian distributions, after February 2023 all Linux web apps still running on Debian 9 (Stretch) will be upgraded to Debian 11 (Bullseye) automatically.
->
-
-##### Verify the platform update
-
-First, validate that the new platform update which contains Debian 11 has reached your site.
-
-1. Navigate to the SCM site (also known as Kudu site) of your webapp. You can browse to this site at `http://<your-site-name>.scm.azurewebsites.net/Env` (replace `\<your-site-name>` with the name of your web app).
-1. Under "Environment Variables", search for `PLATFORM_VERSION`. The value of this environment variable is the current platform version of your web app.
-1. If the value of `PLATFORM_VERSION` starts with "99" or greater, then your site is on the latest platform update and you can continue to the section below. If the value does **not** show "99" or greater, then your site has not yet received the latest platform update--please check again at a later date.
-
-Next, create a deployment slot to test that your application works properly with Debian 11 before applying the change to production.
-
-1. [Create a deployment slot](deploy-staging-slots.md#add-a-slot) if you do not already have one, and clone your settings from the production slot. A deployment slot will allow you to safely test changes to your application (such as upgrading to Debian 11) and swap those changes into production after review.
-1. To upgrade to Debian 11 (Bullseye), create an app setting on your slot named `WEBSITE_LINUX_OS_VERSION` with a value of `DEBIAN|BULLSEYE`.
-
- ```bash
- az webapp config appsettings set -g MyResourceGroup -n MyUniqueApp --settings WEBSITE_LINUX_OS_VERSION="DEBIAN|BULLSEYE"
- ```
-1. Deploy your application to the deployment slot using the tool of your choice (VS Code, Azure CLI, GitHub Actions, etc.)
-1. Confirm your application is functioning as expected in the deployment slot.
-1. [Swap your production and staging slots](deploy-staging-slots.md#swap-two-slots). This will apply the `WEBSITE_LINUX_OS_VERSION=DEBIAN|BULLSEYE` app setting to production.
-1. Delete the deployment slot if you are no longer using it.
-
-##### Resources
--- [Debian Long Term Support schedule](https://wiki.debian.org/LTS)-- [Debian 11 (Bullseye) Release Notes](https://www.debian.org/releases/bullseye/)-- [Debain 9 (Stretch) Release Notes](https://www.debian.org/releases/stretch/)- ### Limitations > [!NOTE]
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
First, create an Azure SQL Server to host the database. A new Azure SQL Server is created by using the [az sql server create ](/cli/azure/sql/server#az-sql-server-create) command.
-Replace the *server-name* placeholder with a unique SQL Database name. The SQL Database name is used as part of the globally unique SQL Database endpoint. Also, replace *db-username* and *db-username* with a username and password of your choice.
+Replace the *server-name* placeholder with a unique SQL Database name. The SQL Database name is used as part of the globally unique SQL Database endpoint. Also, replace *db-username* and *db-password* with a username and password of your choice.
```azurecli-interactive az sql server create \
application-gateway Understanding Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/understanding-pricing.md
Azure Application Gateway is a layer 7 load-balancing solution, which enables scalable, highly available, and secure web application delivery on Azure. There are no upfront costs or termination costs associated with Application Gateway.
-You will be billed only for the resources pre-provisioned and utilized based on actual hourly consumption. Costs associated with Application Gateway are classified into two components: fixed costs and variable costs. Actual costs within each component will vary according to the SKU being utilized.
+You'll be billed only for the resources pre-provisioned and utilized based on actual hourly consumption. Costs associated with Application Gateway are classified into two components: fixed costs and variable costs. Actual costs within each component will vary according to the SKU being utilized.
-This article describes the costs associated with each SKU and it is recommended that users utilize this document for planning and managing costs associated with the Azure Application Gateway.
+This article describes the costs associated with each SKU and it's recommended that users utilize this document for planning and managing costs associated with the Azure Application Gateway.
## V2 SKUs
Compute Unit is the measure of compute capacity consumed. Factors affecting comp
Compute unit guidance: * Standard_v2 - Each compute unit is capable of approximately 50 connections per second with RSA 2048-bit key TLS certificate.
-* WAF_v2 - Each compute unit can support approximately 10 concurrent requests per second for 70-30% mix of traffic with 70% requests less than 2 KB GET/POST and remaining higher. WAF performance is not affected by response size currently.
+* WAF_v2 - Each compute unit can support approximately 10 concurrent requests per second for 70-30% mix of traffic with 70% requests less than 2 KB GET/POST and remaining higher. WAF performance isn't affected by response size currently.
##### Instance Count Pre-provisioning of resources for Application Gateway V2 SKUs is defined in terms of instance count. Each instance guarantees a minimum of 10 capacity units in terms of processing capability. The same instance could potentially support more than 10 capacity units for different traffic patterns depending upon the Capacity Unit parameters.
V2 SKUs are billed based on the consumption and constitute of two parts:
The fixed cost also includes the cost associated with the public IP attached to the Application Gateway.
- The number of instances running at any point of time is not considered as a factor for fixed costs for V2 SKUs. The fixed costs of running a Standard_V2 (or WAF_V2) would be same per hour regardless of the number of instances running within the same Azure region.
+ The number of instances running at any point of time isn't considered as a factor for fixed costs for V2 SKUs. The fixed costs of running a Standard_V2 (or WAF_V2) would be same per hour regardless of the number of instances running within the same Azure region.
* Capacity Unit Costs
Since 80 (reserved capacity) > 40 (required capacity), no additional CUs are req
Fixed Price = $0.246 * 730 (Hours) = $179.58
-Variable Costs = $0.008 * 8 (Instance Units) * 10(capacity units) * 730 (Hours) = $467.2
+Variable Costs = $0.008 * 8 (Instance Units) * 10 (capacity units) * 730 (Hours) = $467.2
Total Costs = $179.58 + $467.2 = $646.78
If processing capacity equivalent to 10 additional CUs was available for use wit
Fixed Price = $0.246 * 730 (Hours) = $179.58
-Variable Costs = $0.008 * ( 3(Instance Units) * 10(capacity units) + 10 (additional capacity units) ) * 730 (Hours) = $233.6
+Variable Costs = $0.008 * ( 3 (Instance Units) * 10 (capacity units) + 10 (additional capacity units) ) * 730 (Hours) = $233.6
Total Costs = $179.58 + $233.6 = $413.18
In this scenario the Application Gateway resource is under scaled and could pote
Fixed Price = $0.246 * 730 (Hours) = $179.58
-Variable Costs = $0.008 * ( 3(Instance Units) * 10(capacity units) + 7 (additional capacity units) ) * 730 (Hours) = $216.08
+Variable Costs = $0.008 * ( 3(Instance Units) * 10 (capacity units) + 7 (additional capacity units) ) * 730 (Hours) = $216.08
Total Costs = $179.58 + $216.08 = $395.66
Total Costs = $179.58 + $216.08 = $395.66
### Example 2 – WAF_V2 instance with Autoscaling
-Let's assume you've provisioned a WAF_V2 with autoscaling enabled and set the minimum instance count to 6 for the entire month. The request load has caused the WAF instance to scale out and utilize 65 Capacity units(scale out of 5 capacity units, while 60 units were reserved) for the entire month.
+Let's assume you've provisioned a WAF_V2 with autoscaling enabled and set the minimum instance count to 6 for the entire month. The request load has caused the WAF instance to scale out and utilize 65 Capacity units (scale out of 5 capacity units, while 60 units were reserved) for the entire month.
Your Application Gateway costs using the pricing mentioned above would be calculated as follows: Monthly price estimates are based on 730 hours of usage per month. Fixed Price = $0.443 * 730 (Hours) = $323.39
-Variable Costs = $0.0144 * 65(capacity units) * 730 (Hours) = $683.28
+Variable Costs = $0.0144 * 65 (capacity units) * 730 (Hours) = $683.28
Total Costs = $323.39 + $683.28 = $1006.67
Monthly price estimates are based on 730 hours of usage per month.
Fixed Price = $0.443 * 730 (Hours) = $323.39
-Variable Costs = $0.0144 * 1(capacity units) * 730 (Hours) = $10.512
+Variable Costs = $0.0144 * 1 (capacity units) * 730 (Hours) = $10.512
Total Costs = $323.39 + $10.512 = $333.902 ### Example 3 (b) ΓÇô WAF_V2 instance with Autoscaling with 0 Min instance count
-Let's assume you've provisioned a WAF_V2 with autoscaling enabled and set the minimum instance count to 0 for the entire month. However, there is 0 traffic directed to the WAF instance for the entire month.
+Let's assume you've provisioned a WAF_V2 with autoscaling enabled and set the minimum instance count to 0 for the entire month. However, there's 0 traffic directed to the WAF instance for the entire month.
Your Application Gateway costs using the pricing mentioned above would be calculated as follows: Fixed Price = $0.443 * 730 (Hours) = $323.39
-Variable Costs = $0.0144 * 0(capacity units) * 730 (Hours) = $0
+Variable Costs = $0.0144 * 0 (capacity units) * 730 (Hours) = $0
Total Costs = $323.39 + $0 = $323.39
-### Example 3 (C) – WAF_V2 instance with manual scaling set to 1 instance
+### Example 3 (c) – WAF_V2 instance with manual scaling set to 1 instance
-Let's assume you've provisioned a WAF_V2 and set it to manual scaling with the minimum acceptable value of 1 instance for the entire month. However, there is 0 traffic directed to the WAF for the entire month.
+Let's assume you've provisioned a WAF_V2 and set it to manual scaling with the minimum acceptable value of 1 instance for the entire month. However, there's 0 traffic directed to the WAF for the entire month.
Your Application Gateway costs using the pricing mentioned above would be calculated as follows: Monthly price estimates are based on 730 hours of usage per month. Fixed Price = $0.443 * 730 (Hours) = $323.39
-Variable Costs = $0.0144 * 1(Instance count) * 10(capacity units) * 730 (Hours) =
+Variable Costs = $0.0144 * 1 (Instance count) * 10 (capacity units) * 730 (Hours) =
$105.12 Total Costs = $323.39 + $105.12 = $428.51
Variable Costs = $0.0144 * 730 (Hours) * {Max (25/50, 8.88/2.22)} = $42.048 (4
Total Costs = $323.39 + $42.048 = $365.438
-### Example 5 (a) – Standard_V2 with Autoscaling, time-based calculations
+### Example 5 – Standard_V2 with Autoscaling, time-based calculations
Let's assume you've provisioned a standard_V2 with autoscaling enabled and set the minimum instance count to 0, and this application gateway is active for 2 hours. During the first hour, it receives traffic that can be handled by 10 Capacity Units, and during the second hour it receives traffic that requires 20 Capacity Units to handle the load.
Your Application Gateway costs using the pricing mentioned above would be calcul
Fixed Price = $0.246 * 2 (Hours) = $0.492
-Variable Costs = $0.008 * 10(capacity units) * 1 (Hours) + $0.008 * 20(capacity
+Variable Costs = $0.008 * 10 (capacity units) * 1 (Hours) + $0.008 * 20 (capacity
units) * 1 (Hours) = $0.24 Total Costs = $0.492 + $0.24 = $0.732
+### Example 6 – WAF_V2 with DDoS Protection Standard Plan, and with manual scaling set to 2 instances
+
+Let's assume you've provisioned a WAF_V2 and set it to manual scaling with 2 instances for the entire month with 2 CUs. Let's also assume that you've enabled DDoS Protection Standard Plan. In this example, since you're paying the monthly fee for DDoS Protection Standard, there are no additional charges for WAF, and you're charged at the lower Standard_V2 rates.
+
+Monthly price estimates are based on 730 hours of usage per month.
+
+Fixed Price = $0.246 * 730 (Hours) = $179.58
+
+Variable Costs = $0.008 * 2 (capacity units) * 730 (Hours) = $11.68
+
+DDoS Protection Standard Cost = $2,944 * 1 (month) = $2,944
+
+Total Costs = $179.58 + $11.68 + $2,944 = $3,135.26
++ ## V1 SKUs Standard Application Gateway and WAF V1 SKUs are billed as a combination of:
Total Costs = $9 + $120 = $129
###### Large instance WAF Application Gateway 24 Hours * 15 Days = 360 Hours
-Fixed Price = $0.448 * 360 (Hours) = $161.28
+Fixed Price = $0.448 * 360 (Hours) = $161.28
-Variable Costs = 60 * 1000 * $0.0035/GB = $210 (Large tier has no costs for the first 40 TB processed per month)
+Variable Costs = 60 * 1000 * $0.0035/GB = $210 (Large tier has no costs for the first 40 TB processed per month)
Total Costs = $161.28 + $210 = $371.28
+### Example 3 – WAF Application Gateway with DDoS Protection Standard Plan
+
+Let's assume you've provisioned a medium WAF Application Gateway, and you've enabled DDoS Protection Standard Plan. This medium WAF application gateway processes 40 TB while it's active. Your Application Gateway costs using the pricing method above would be calculated as follows:
+
+Monthly price estimates are based on 730 hours of usage per month.
+
+Fixed Price = $0.07 * 730 (Hours) = $51.1
+
+Variable Costs = 30 * 1000 * $0.007/GB = $210 (Medium tier has no cost for the first 10 TB processed per month)
+
+DDoS Protection Standard Costs = $2,944 * 1 (month) = $2,944
+
+Total Costs = $51.1 + $210 + $2,944 = $3,205.1
++
+## Azure DDoS Protection Standard Plan
+
+When Azure DDoS Protection Standard Plan is enabled on your application gateway with WAF, you'll be billed at the lower non-WAF rates. For more information, see [Azure DDoS Protection pricing](https://azure.microsoft.com/pricing/details/ddos-protection/).
+ ## Monitoring Billed Usage
More metrics such as throughput, current connections and compute units are also
* Compute Units = 17.38 * Throughput = 1.37M Bytes/sec - 10.96 Mbps * Current Connections = 123.08k
-* Capacity Units calculated = max(17.38, 10.96/2.22, 123.08k/2500) = 49.232
+* Capacity Units calculated = max (17.38, 10.96/2.22, 123.08k/2500) = 49.232
Observed Capacity Units in metrics = 49.23
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
Title: Form Recognizer business card model
+ Title: Business card data extraction - Form Recognizer
-description: Concepts related to data extraction and analysis using the prebuilt business card model.
+description: OCR and machine learning based business card scanning in Form Recognizer extracts key data from business cards.
recommendations: false
<!-- markdownlint-disable MD033 -->
-# Form Recognizer business card model
+# Business card data extraction
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
+## How business card data extraction works
+
+Business cards are a great way of representing a business or a professional. The company logo, fonts, and background images found in business cards reinforce the company's branding and differentiate it from others. Applying OCR and machine-learning-based techniques to automate the scanning of business cards is a common image processing scenario. Enterprise systems used by sales and marketing teams typically have business card data extraction capability integrated into them for the benefit of their users.
+
+## Form Recognizer Business Card model
+ The business card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from business card images. The API analyzes printed business cards; extracts key information such as first name, last name, company name, email address, and phone number; and returns a structured JSON data representation. ***Sample business card processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)***
The following tools are supported by Form Recognizer v2.1:
|-|-| |**Business card model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-business-cards)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-### Try Form Recognizer
+### Try business card data extraction
See how data, including name, job title, address, email, and company name, is extracted from business cards using the Form Recognizer Studio or our Sample Labeling tool. You'll need the following resources:
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
Title: Form Recognizer composed models
+ Title: Composed custom models - Form Recognizer
-description: Learn about composed custom models
+description: Compose several custom models into a single model for easier data extraction from groups of distinct form types.
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
Title: Form Recognizer custom neural model
+ Title: Custom neural document model - Form Recognizer
-description: Learn about custom neural (neural) model type, its features and how you train a model with high accuracy to extract data from structured and unstructured documents.
+description: Use the custom neural document model to train a model to extract data from structured, semistructured, and unstructured documents.
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# Form Recognizer custom neural model
+# Custom neural document model
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-Custom neural models or neural models are a deep learned model that combines layout and language features to accurately extract labeled fields from documents. The base custom neural model is trained on various document types that makes it suitable to be trained for extracting fields from structured, semi-structured and unstructured documents. The table below lists common document types for each category:
+Custom neural document models, or neural models, are a deep-learned model type that combines layout and language features to accurately extract labeled fields from documents. The base custom neural model is trained on various document types, which makes it suitable to be trained for extracting fields from structured, semi-structured, and unstructured documents. The table below lists common document types for each category:
|Documents | Examples | ||--|
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
Title: Form Recognizer custom template model
+ Title: Custom template document model - Form Recognizer
-description: Learn about the custom template model type, its features and how you train a model with high accuracy to extract data from structured or templated forms
+description: Use the custom template document model to train a model to extract data from structured or templated forms.
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# Form Recognizer custom template model
+# Custom template document model
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-Custom template (formerly custom form) is an easy-to-train model that accurately extracts labeled key-value pairs, selection marks, tables, regions, and signatures from documents. Template models use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.
+Custom template (formerly custom form) is an easy-to-train document model that accurately extracts labeled key-value pairs, selection marks, tables, regions, and signatures from documents. Template models use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.
Custom template models share the same labeling format and strategy as custom neural models, with support for more field types and languages.
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
Title: Form Recognizer custom and composed models
+ Title: Custom document models - Form Recognizer
-description: Learn to create, use, and manage Form Recognizer custom and composed models.
+description: Label and train customized models for your documents and compose multiple models into a single model identifier.
monikerRange: '>=form-recog-2.1.0' recommendations: false
-# Form Recognizer custom models
+# Custom document models
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
Form Recognizer uses advanced machine learning technology to detect and extract
To create a custom model, you label a dataset of documents with the values you want extracted and train the model on the labeled dataset. You only need five examples of the same form or document type to get started.
-## Custom model types
+## Custom document model types
-Custom models can be one of two types, [**custom template**](concept-custom-template.md ) or custom form and [**custom neural**](concept-custom-neural.md) or custom document models. The labeling and training process for both models is identical, but the models differ as follows:
+Custom document models can be one of two types: [**custom template**](concept-custom-template.md) (formerly custom form) models or [**custom neural**](concept-custom-neural.md) (custom document) models. The labeling and training process for both models is identical, but the models differ as follows:
### Custom template model (v3.0)
The following tools are supported by Form Recognizer v2.1:
|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
-### Try Form Recognizer
+### Try building a custom model
Try extracting data from your specific or unique documents using custom models. You need the following resources:
Try extracting data from your specific or unique documents using custom models.
> [!div class="nextstepaction"] > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)
-## Model capabilities
+## Custom model extraction summary
This table compares the supported data extraction areas:
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
Title: Form Recognizer ID document model
+ Title: Identity document (ID) processing – Form Recognizer
-description: Concepts related to data extraction and analysis using the prebuilt ID document model
+description: Automate identity document (ID) processing of driver licenses, passports, and more with Form Recognizer.
monikerRange: '>=form-recog-2.1.0' recommendations: false
-<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD033 -->
-# Form Recognizer ID document model
+# Identity document (ID) processing
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
-The ID document model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from US Drivers Licenses (all 50 states and District of Columbia), international passport biographical pages, US state IDs, social security cards, and permanent resident (green) cards. The API analyzes identity documents, extracts key information, and returns a structured JSON data representation.
+## What is identity document (ID) processing?
+
+Identity document (ID) processing involves extracting data from identity documents, whether manually or by using OCR-based techniques. Examples of identity documents include passports, driver licenses, resident cards, and national identity cards like the social security card in the US. It's an important step in any business process that requires proof of identity. Examples include customer verification in banks and other financial institutions, mortgage applications, medical visits, claim processing, the hospitality industry, and more. Individuals provide some proof of their identity via driver licenses, passports, and other similar documents so that the business can efficiently verify them before providing services and benefits.
+
+## Form Recognizer Identity document (ID) model
+
+The Form Recognizer Identity document (ID) model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from identity documents: US driver's licenses (all 50 states and District of Columbia), international passport biographical pages, US state IDs, social security cards, permanent resident cards, and more. The API analyzes identity documents, extracts key information, and returns a structured JSON data representation.
***Sample U.S. Driver's License processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)***
The following tools are supported by Form Recognizer v3.0:
|-|-|--| |**ID document model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-idDocument**|
+### Try Identity document (ID) extraction
+ The following tools are supported by Form Recognizer v2.1: | Feature | Resources | |-|-| |**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-identity-id-documents)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-### Try Form Recognizer
- Extract data, including name, birth date, machine-readable zone, and expiration date, from ID documents using the Form Recognizer Studio. You'll need the following resources: * An Azure subscriptionΓÇöyou can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
Extract data, including name, birth date, machine-readable zone, and expiration
| Model | LanguageΓÇöLocale code | Default | |--|:-|:|
-|ID document| <ul><li>English (United States)ΓÇöen-US (driver's license)</li><li>Biographical pages from international passports</br> (excluding visa and other travel documents)</li><li>English (United States)ΓÇöen-US (state ID)</li><li>English (United States)ΓÇöen-US (social security card)</li><li>English (United States)ΓÇöen-US (Green card)</li></ul></br>|English (United States)ΓÇöen-US|
+|ID document| <ul><li>English (United States)—en-US (driver's license)</li><li>Biographical pages from international passports</br> (excluding visa and other travel documents)</li><li>English (United States)—en-US (state ID)</li><li>English (United States)—en-US (social security card)</li><li>English (United States)—en-US (Residence permit card)</li></ul></br>|English (United States)—en-US|
## Field extractions
-|Name| Type | Description | Standardized output|
-|:--|:-|:-|:-|
-| CountryRegion | countryRegion | Country or region code compliant with ISO 3166 standard | |
-| DateOfBirth | Date | DOB | yyyy-mm-dd |
-| DateOfExpiration | Date | Expiration date DOB | yyyy-mm-dd |
-| DocumentNumber | String | Relevant passport number, driver's license number, etc. | |
-| FirstName | String | Extracted given name and middle initial if applicable | |
-| LastName | String | Extracted surname | |
-| Nationality | countryRegion | Country or region code compliant with ISO 3166 standard (Passport only) | |
-| Sex | String | Possible extracted values include "M", "F" and "X" | |
-| MachineReadableZone | Object | Extracted Passport MRZ including two lines of 44 characters each | "P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816" |
-| DocumentType | String | Document type, for example, Passport, Driver's License | "passport" |
-| Address | String | Extracted address (Driver's License only) ||
-| Region | String | Extracted region, state, province, etc. (Driver's License only) | |
-
-## Form Recognizer v3.0
-
- The Form Recognizer v3.0 introduces several new features and capabilities:
-
-* **ID document (v3.0)** prebuilt model supports extraction of endorsement, restriction, and vehicle class codes from US driver's licenses.
-
-* The ID Document **2022-06-30** and later releases support the following data extraction from US driver's licenses:
-
- * Date issued
- * Height
- * Weight
- * Eye color
- * Hair color
- * Document discriminator security code
-
-### ID document field extractions
-
-|Name| Type | Description | Standardized output|
-|:--|:-|:-|:-|
-| DateOfIssue | Date | Issue date | yyyy-mm-dd |
-| Height | String | Height of the holder. | |
-| Weight | String | Weight of the holder. | |
-| EyeColor | String | Eye color of the holder. | |
-| HairColor | String | Hair color of the holder. | |
-| DocumentDiscriminator | String | Document discriminator is a security code that identifies where and when the license was issued. | |
-| Endorsements | String | More driving privileges granted to a driver such as Motorcycle or School bus. | |
-| Restrictions | String | Restricted driving privileges applicable to suspended or revoked licenses.| |
-| VehicleClassification | String | Types of vehicles that can be driven by a driver. ||
-| CountryRegion | countryRegion | Country or region code compliant with ISO 3166 standard | |
-| DateOfBirth | Date | DOB | yyyy-mm-dd |
-| DateOfExpiration | Date | Expiration date DOB | yyyy-mm-dd |
-| DocumentNumber | String | Relevant passport number, driver's license number, etc. | |
-| FirstName | String | Extracted given name and middle initial if applicable | |
-| LastName | String | Extracted surname | |
-| Nationality | countryRegion | Country or region code compliant with ISO 3166 standard (Passport only) | |
-| Sex | String | Possible extracted values include "M", "F" and "X" | |
-| MachineReadableZone | Object | Extracted Passport MRZ including two lines of 44 characters each | "P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816" |
-| DocumentType | String | Document type, for example, Passport, Driver's License, Social security card and more | "passport" |
-| Address | String | Extracted address, address is also parsed to its components - address, city, state, country, zip code ||
-| Region | String | Extracted region, state, province, etc. (Driver's License only) | |
-
-### Migration guide and REST API v3.0
-
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
-
-* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
+The following fields are extracted per document type. The Azure Form Recognizer ID model `prebuilt-idDocument` returns these fields in `documents.*.fields`. It also extracts all the text in the document as words, lines, and styles, which are included in the JSON output in the following sections (see the sketch after this list for one way to read the output):
+ * `pages.*.words`
+ * `pages.*.lines`
+ * `paragraphs`
+ * `styles`
+ * `documents`
+ * `documents.*.fields`
+
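For illustration only, here is a minimal sketch of reading this output with the Python SDK (`azure-ai-formrecognizer`, 3.2.x for the 2022-08-31 API); the endpoint, key, and document URL are placeholders, not values from this article:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint, key, and document URL; replace with your own values.
client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

poller = client.begin_analyze_document_from_url(
    "prebuilt-idDocument", "https://<path-to-your>/driver-license.jpg"
)
result = poller.result()

# Each analyzed document reports its detected doc type and the fields listed below.
for doc in result.documents:
    print(doc.doc_type)  # for example, idDocument.driverLicense
    for name, field in doc.fields.items():
        print(f"{name}: {field.value} (confidence: {field.confidence})")
```

When calling the REST API directly, the same fields appear in the JSON response under `analyzeResult.documents[*].fields`.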
+#### Document type - `idDocument.driverLicense` fields extracted:
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`CountryRegion`|`countryRegion`|Country or region code|USA|
+|`Region`|`string`|State or province|Washington|
+|`DocumentNumber`|`string`|Driver license number|WDLABCD456DG|
+|`DocumentDiscriminator`|`string`|Driver license document discriminator|12645646464554646456464544|
+|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
+|`LastName`|`string`|Surname|TALBOT|
+|`Address`|`address`|Address|123 STREET ADDRESS YOUR CITY WA 99999-1234|
+|`DateOfBirth`|`date`|Date of birth|01/06/1958|
+|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
+|`DateOfIssue`|`date`|Date of issue|08/12/2012|
+|`EyeColor`|`string`|Eye color|BLU|
+|`HairColor`|`string`|Hair color|BRO|
+|`Height`|`string`|Height|5'11"|
+|`Weight`|`string`|Weight|185LB|
+|`Sex`|`string`|Sex|M|
+|`Endorsements`|`string`|Endorsements|L|
+|`Restrictions`|`string`|Restrictions|B|
+|`VehicleClassifications`|`string`|Vehicle classification|D|
+
+#### Document type - `idDocument.passport` fields extracted:
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`DocumentNumber`|`string`|Passport number|340020013|
+|`FirstName`|`string`|Given name and middle initial if applicable|JENNIFER|
+|`MiddleName`|`string`|Name between given name and surname|REYES|
+|`LastName`|`string`|Surname|BROOKS|
+|`Aliases`|`array`|||
+|`Aliases.*`|`string`|Also known as|MAY LIN|
+|`DateOfBirth`|`date`|Date of birth|1980-01-01|
+|`DateOfExpiration`|`date`|Date of expiration|2019-05-05|
+|`DateOfIssue`|`date`|Date of issue|2014-05-06|
+|`Sex`|`string`|Sex|F|
+|`CountryRegion`|`countryRegion`|Issuing country or organization|USA|
+|`DocumentType`|`string`|Document type|P|
+|`Nationality`|`countryRegion`|Nationality|USA|
+|`PlaceOfBirth`|`string`|Place of birth|MASSACHUSETTS, U.S.A.|
+|`PlaceOfIssue`|`string`|Place of issue|LA PAZ|
+|`IssuingAuthority`|`string`|Issuing authority|United States Department of State|
+|`PersonalNumber`|`string`|Personal Id. No.|A234567893|
+|`MachineReadableZone`|`object`|Machine readable zone (MRZ)|P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816|
+|`MachineReadableZone.FirstName`|`string`|Given name and middle initial if applicable|JENNIFER|
+|`MachineReadableZone.LastName`|`string`|Surname|BROOKS|
+|`MachineReadableZone.DocumentNumber`|`string`|Passport number|340020013|
+|`MachineReadableZone.CountryRegion`|`countryRegion`|Issuing country or organization|USA|
+|`MachineReadableZone.Nationality`|`countryRegion`|Nationality|USA|
+|`MachineReadableZone.DateOfBirth`|`date`|Date of birth|1980-01-01|
+|`MachineReadableZone.DateOfExpiration`|`date`|Date of expiration|2019-05-05|
+|`MachineReadableZone.Sex`|`string`|Sex|F|
+
+#### Document type - `idDocument.nationalIdentityCard` fields extracted:
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`CountryRegion`|`countryRegion`|Country or region code|USA|
+|`Region`|`string`|State or province|Washington|
+|`DocumentNumber`|`string`|National identity card number|WDLABCD456DG|
+|`DocumentDiscriminator`|`string`|National identity card document discriminator|12645646464554646456464544|
+|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
+|`LastName`|`string`|Surname|TALBOT|
+|`Address`|`address`|Address|123 STREET ADDRESS YOUR CITY WA 99999-1234|
+|`DateOfBirth`|`date`|Date of birth|01/06/1958|
+|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
+|`DateOfIssue`|`date`|Date of issue|08/12/2012|
+|`EyeColor`|`string`|Eye color|BLU|
+|`HairColor`|`string`|Hair color|BRO|
+|`Height`|`string`|Height|5'11"|
+|`Weight`|`string`|Weight|185LB|
+|`Sex`|`string`|Sex|M|
+
+#### Document type - `idDocument.residencePermit` fields extracted:
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`CountryRegion`|`countryRegion`|Country or region code|USA|
+|`DocumentNumber`|`string`|Residence permit number|WDLABCD456DG|
+|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
+|`LastName`|`string`|Surname|TALBOT|
+|`DateOfBirth`|`date`|Date of birth|01/06/1958|
+|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
+|`DateOfIssue`|`date`|Date of issue|08/12/2012|
+|`Sex`|`string`|Sex|M|
+|`PlaceOfBirth`|`string`|Place of birth|Germany|
+|`Category`|`string`|Permit category|DV2|
+
+#### Document type - `idDocument.usSocialSecurityCard` fields extracted:
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`DocumentNumber`|`string`|Social security card number|WDLABCD456DG|
+|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
+|`LastName`|`string`|Surname|TALBOT|
+|`DateOfIssue`|`date`|Date of issue|08/12/2012|
+ ## Next steps
+* Try the prebuilt ID model in the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument). Use the sample documents or bring your own documents.
+ * Complete a Form Recognizer quickstart: > [!div class="nextstepaction"]
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
Title: Form Recognizer invoice model
+ Title: Invoice data extraction – Form Recognizer
-description: Concepts related to data extraction and analysis using prebuilt invoice model
+description: Automate invoice data extraction with Form Recognizer's invoice model to extract accounts payable data including invoice line items.
recommendations: false
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
- The invoice model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key fields and line items from sales invoices. Invoices can be of various formats and quality including phone-captured images, scanned documents, and digital PDFs. The API analyzes invoice text; extracts key information such as customer name, billing address, due date, and amount due; and returns a structured JSON data representation. The model currently supports both English and Spanish invoices.
+## What is automated invoice processing?
+
+Automated invoice processing is the process of extracting key accounts payable fields, including invoice line items, from invoices and integrating that data with your accounts payable (AP) workflows for reviews and payments. Historically, the accounts payable process has been manual and time consuming. Accurate extraction of key data from invoices is typically the first and one of the most critical steps in the invoice automation process.
+
+## Form Recognizer Invoice model
+
+The machine learning based invoice model combines powerful Optical Character Recognition (OCR) capabilities with invoice understanding models to analyze and extract key fields and line items from sales invoices. Invoices can be of various formats and quality including phone-captured images, scanned documents, and digital PDFs. The API analyzes invoice text; extracts key information such as customer name, billing address, due date, and amount due; and returns a structured JSON data representation. The model currently supports both English and Spanish invoices.
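To make that output concrete, the following is a minimal sketch using the Python SDK (`azure-ai-formrecognizer`); the endpoint, key, and invoice URL are placeholders, and only a few commonly returned fields are shown:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

invoice = client.begin_analyze_document_from_url(
    "prebuilt-invoice", "https://<path-to-your>/sample-invoice.pdf"
).result().documents[0]

# A few commonly returned top-level invoice fields.
for name in ("VendorName", "CustomerName", "InvoiceDate", "InvoiceTotal"):
    field = invoice.fields.get(name)
    if field:
        print(f"{name}: {field.value} (confidence: {field.confidence})")

# Line items come back as a list field named "Items"; each item holds a dictionary of sub-fields.
items = invoice.fields.get("Items")
for item in (items.value if items else []):
    description = item.value.get("Description")
    amount = item.value.get("Amount")
    print(description.value if description else None,
          amount.value if amount else None)
```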
**Sample invoice processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)**:
The following tools are supported by Form Recognizer v2.1:
|**Invoice model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-invoices)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-### Try Form Recognizer
+### Try invoice data extraction
See how data, including customer information, vendor details, and line items, is extracted from invoices using the Form Recognizer Studio. You'll need the following resources:
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Title: Layouts - Form Recognizer
+ Title: Document layout analysis - Form Recognizer
-description: Learn concepts related to the Layout API with Form Recognizer REST API usage and limits.
+description: Extract text, tables, selections, titles, section headings, page headers, page footers, and more with layout analysis model from Form Recognizer.
monikerRange: '>=form-recog-2.1.0'
recommendations: false
-# Form Recognizer layout model
+# Document layout analysis
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
-The Form Recognizer Layout API extracts text, tables, selection marks, and structure information from documents (PDF, TIFF) and images (JPG, PNG, BMP).
+## What is document layout analysis?
+
+Document structure and layout analysis is the process of analyzing a document to extract regions of interest and their inter-relationships. The goal is to extract text and structural elements from the page to build better semantic understanding models. Extracted text plays one of two types of roles in a document layout: text, tables, and selection marks are examples of geometric roles, while titles, headings, and footers are examples of logical roles. For example, a reading system needs to differentiate text regions from non-textual ones and determine their reading order.
+
+The following illustration shows the typical components in an image of a sample page.
++
+## Form Recognizer Layout model
+
+The Form Recognizer Layout model is an advanced machine-learning based document layout analysis model available in the Form Recognizer cloud API. In version v2.1, the layout model extracted text lines, words, tables, and selection marks.
+
+**Starting with v3.0 GA**, it also extracts paragraphs and additional structure information like titles, section headings, page headers, page footers, page numbers, and footnotes from the document page. These are examples of the logical roles described in the previous section. This capability is supported for PDF documents and images (JPG, PNG, BMP, TIFF).
***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
The Form Recognizer Layout API extracts text, tables, selection marks, and struc
### Data extraction
-| **Model** | **Text** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** |
+| **Model** | **Text** | **Selection Marks** | **Tables** | **Paragraphs** | **Logical roles** |
| | | | | | | | Layout | Γ£ô | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
-**Supported paragraph roles**:
+**Supported logical roles for paragraphs**:
The paragraph roles are best used with unstructured documents. Paragraph roles help analyze the structure of the extracted content for better semantic search and analysis. * title
The following tools are supported by Form Recognizer v2.1:
|-|-| |**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-layout)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-## Try Form Recognizer
+## Try document layout analysis
Try extracting data from forms and documents using the Form Recognizer Studio. You'll need the following resources:
Try extracting data from forms and documents using the Form Recognizer Studio. Y
The layout model extracts text, selection marks, tables, paragraphs, and paragraph types (`roles`) from your documents.
-### Paragraphs <sup>🆕</sup>
+### Paragraph extraction <sup>🆕</sup>
The Layout model extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as`content`and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top level `content` property that contains the full text from the document.
The Layout model extracts all identified blocks of text in the `paragraphs` coll
### Paragraph roles<sup> 🆕</sup>
-The Layout model may flag certain paragraphs with their specialized type or `role` as predicted by the model. They're best used with unstructured documents to help understand the layout of the extracted content for a richer semantic analysis. The following paragraph roles are supported:
+The new machine-learning based page object detection extracts logical roles like titles, section headings, page headers, page footers, and more. The Form Recognizer Layout model assigns certain text blocks in the `paragraphs` collection with their specialized role or type predicted by the model. They're best used with unstructured documents to help understand the layout of the extracted content for a richer semantic analysis. The following paragraph roles are supported:
| **Predicted role** | **Description** | | | |
The Layout model may flag certain paragraphs with their specialized type or `rol
```
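As an illustration, here is a minimal sketch of reading paragraph roles with the Python SDK (`azure-ai-formrecognizer`); the endpoint, key, and document URL are placeholders:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

result = client.begin_analyze_document_from_url(
    "prebuilt-layout", "https://<path-to-your>/sample-report.pdf"
).result()

# Paragraphs with no predicted role are regular body text.
for paragraph in result.paragraphs:
    print(f"[{paragraph.role or 'body'}] {paragraph.content[:80]}")
```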
-### Pages
+### Pages extraction
The pages collection is the very first object you see in the service response.
The pages collection is the very first object you see in the service response.
] ```
-### Text lines and words
+### Text lines and words extraction
-Read extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
+The document layout model in Form Recognizer extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
```json "words": [
Read extracts print and handwritten style text as `lines` and `words`. The model
} ] ```
-### Selection marks
+### Selection marks extraction
-Layout API also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected/unselected`). Any associated text if extracted is also included as the starting index (`offset`) and `length` that references the top level `content` property that contains the full text from the document.
+The Layout model also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected/unselected`). Any associated text if extracted is also included as the starting index (`offset`) and `length` that references the top level `content` property that contains the full text from the document.
```json {
Layout API also extracts selection marks from documents. Extracted selection mar
} ```
-### Tables and table headers
+### Extract tables from documents and images
-Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding `polygon` is output along with information whether it's recognized as a `columnHeader` or not. The API also works with rotated tables. Each table cell contains the row and column index and bounding polygon coordinates. For the cell text, the model outputs the `span` information containing the starting index (`offset`). The model also outputs the `length` within the top level `content` that contains the full text from the document.
+Extracting tables is a key requirement for processing documents containing large volumes of data typically formatted as tables. The Layout model extracts tables in the `pageResults` section of the JSON output. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding polygon is output along with information whether it's recognized as a `columnHeader` or not. The model supports extracting tables that are rotated. Each table cell contains the row and column index and bounding polygon coordinates. For the cell text, the model outputs the `span` information containing the starting index (`offset`). The model also outputs the `length` within the top-level content that contains the full text from the document.
```json {
Layout API extracts tables in the `pageResults` section of the JSON output. Docu
] }
+```
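For illustration, a minimal sketch that iterates over the extracted tables with the Python SDK (`azure-ai-formrecognizer`); the endpoint, key, and document URL are placeholders:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

result = client.begin_analyze_document_from_url(
    "prebuilt-layout", "https://<path-to-your>/report-with-tables.pdf"
).result()

for table in result.tables:
    print(f"Table: {table.row_count} rows x {table.column_count} columns")
    for cell in table.cells:
        header = " (columnHeader)" if cell.kind == "columnHeader" else ""
        print(f"  ({cell.row_index}, {cell.column_index}){header}: {cell.content}")
```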
+### Handwritten style for text lines (Latin languages only)
+
+The response classifies whether each text line is of handwriting style, along with a confidence score. This feature is supported only for Latin languages. The following JSON snippet shows an example.
+
+```json
+"styles": [
+{
+ "confidence": 0.95,
+ "spans": [
+ {
+ "offset": 509,
+ "length": 24
+ }
+ "isHandwritten": true
+ ]
+}
```
-### Select page numbers or ranges for text extraction
+### Extract selected pages from documents
For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Title: Form Recognizer models
+ Title: Document processing models - Form Recognizer
-description: Concepts related to data extraction and analysis using prebuilt models.
+description: Document processing models for OCR, document layout, invoices, identity, custom models, and more to extract text, structure, and key-value pairs.
recommendations: false
<!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD033 -->
-# Form Recognizer models
+# Document processing models
::: moniker range="form-recog-3.0.0" [!INCLUDE [applies to v3.0](includes/applies-to-v3-0.md)]
recommendations: false
| **Model** | **Description** | | | |
-|**Document analysis**||
-| [Read](#read) | Extract typeface and handwritten text lines, words, locations, and detected languages.|
-| [General document](#general-document) | Extract text, tables, structure, key-value pairs, and named entities.|
-| [Layout](#layout) | Extract text and layout information from documents.|
-|**Prebuilt**||
-| [W-2](#w-2) | Extract employee, employer, wage information, etc. from US W-2 forms. |
-| [Invoice](#invoice) | Extract key information from English and Spanish invoices. |
-| [Receipt](#receipt) | Extract key information from English receipts. |
-| [ID document](#id-document) | Extract key information from US driver licenses and international passports. |
-| [Business card](#business-card) | Extract key information from English business cards. |
-|**Custom**||
-| [Custom](#custom) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
-| [Composed](#composed-custom-model) | Compose a collection of custom models and assign them to a single model built from your form types.
-
-### Read
+|**Document analysis models**||
+| [Read OCR](#read-ocr) | Extract print and handwritten text including words, locations, and detected languages.|
+| [Layout analysis](#layout-analysis) | Extract text and document layout elements like tables, selection marks, titles, section headings, and more.|
+| [General document](#general-document) | Extract key-value pairs in addition to text and document structure information.|
+|**Prebuilt models**||
+| [W-2](#w-2) | Process W2 forms to extract employee, employer, wage, and other information. |
+| [Invoice](#invoice) | Automate invoice processing for English and Spanish invoices. |
+| [Receipt](#receipt) | Extract receipt data from English receipts.|
+| [Identity document (ID)](#identity-document-id) | Extract identity (ID) fields from US driver licenses and international passports. |
+| [Business card](#business-card) | Scan business cards to extract key fields and data into your applications. |
+|**Custom models**||
+| [Custom models](#custom-models) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
+| [Composed models](#composed-models) | Combine several custom models into a single model to automate processing of diverse document types with a single composed model.
+
+### Read OCR
[:::image type="icon" source="media/studio/read-card.png" :::](https://formrecognizer.appliedai.azure.com/studio/read)
The Read API analyzes and extracts text lines, words, their locations, detected l
> [!div class="nextstepaction"] > [Learn more: read model](concept-read.md)
-### W-2
+### Layout analysis
-[:::image type="icon" source="media/studio/w2.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)
+[:::image type="icon" source="media/studio/layout.png":::](https://formrecognizer.appliedai.azure.com/studio/layout)
-The W-2 model analyzes and extracts key information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including single and multiple forms on one page.
+The Layout analysis model analyzes and extracts text, tables, selection marks, and other structure elements like titles, section headings, page headers, page footers, and more.
-***Sample W-2 document processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***:
+***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***:
> [!div class="nextstepaction"]
-> [Learn more: W-2 model](concept-w2.md)
+>
+> [Learn more: layout model](concept-layout.md)
### General document [:::image type="icon" source="media/studio/general-document.png":::](https://formrecognizer.appliedai.azure.com/studio/document)
-* The general document API supports most form types and will analyze your documents and associate values to keys and entries to tables that it discovers. It's ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels.
-
-* The general document is a pre-trained model and can be directly invoked via the REST API.
-
-* The general document model supports named entity recognition (NER) for several entity categories. NER is the ability to identify different entities in text and categorize them into pre-defined classes or types such as: person, location, event, product, and organization. Extracting entities can be useful in scenarios where you want to validate extracted values. The entities are extracted from the entire content.
+The general document model is ideal for extracting common key-value pairs from forms and documents. It's a pre-trained model and can be directly invoked via the REST API and the SDKs. You can use the general document model as an alternative to training a custom model.
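As a sketch of what consuming those key-value pairs can look like with the Python SDK (`azure-ai-formrecognizer`), using a placeholder endpoint, key, and document URL:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

result = client.begin_analyze_document_from_url(
    "prebuilt-document", "https://<path-to-your>/sample-form.pdf"
).result()

# Key-value pairs discovered by the general document model.
for pair in result.key_value_pairs:
    key = pair.key.content if pair.key else ""
    value = pair.value.content if pair.value else ""
    print(f"{key}: {value}")
```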
***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/document)***:
The W-2 model analyzes and extracts key information reported in each box on a W-
> [!div class="nextstepaction"] > [Learn more: general document model](concept-general-document.md)
-### Layout
-[:::image type="icon" source="media/studio/layout.png":::](https://formrecognizer.appliedai.azure.com/studio/layout)
+### W-2
-The Layout API analyzes and extracts text, tables and headers, selection marks, and structure information from documents.
+[:::image type="icon" source="media/studio/w2.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)
-***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***:
+The W-2 form model extracts key information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including single and multiple forms on one page.
+***Sample W-2 document processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***:
+ > [!div class="nextstepaction"]
->
-> [Learn more: layout model](concept-layout.md)
+> [Learn more: W-2 model](concept-w2.md)
### Invoice [:::image type="icon" source="media/studio/invoice.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)
-The invoice model analyzes and extracts key information from sales invoices. The API analyzes invoices in various formats and extracts key information such as customer name, billing address, due date, and amount due. Currently, the model supports English, Spanish, German, French, Italian, Portuguese, and Dutch invoices.
+The invoice model automates invoice processing to extract customer name, billing address, due date, amount due, line items, and other key data. Currently, the model supports English, Spanish, German, French, Italian, Portuguese, and Dutch invoices.
***Sample invoice processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)***:
The invoice model analyzes and extracts key information from sales invoices. The
[:::image type="icon" source="media/studio/receipt.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)
-* The receipt model analyzes and extracts key information from printed and handwritten sales receipts.
-
-* Version v3.0 also supports single-page hotel receipt processing.
+Use the receipt model to extract merchant name, dates, line items, quantities, and totals from printed and handwritten sales receipts. Version v3.0 also supports single-page hotel receipt processing.
***Sample receipt processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
The invoice model analyzes and extracts key information from sales invoices. The
> [!div class="nextstepaction"] > [Learn more: receipt model](concept-receipt.md)
-### ID document
+### Identity document (ID)
[:::image type="icon" source="media/studio/id-document.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)
- The ID document model analyzes and extracts key information from the following documents:
-
-* U.S. Driver's Licenses (all 50 states and District of Columbia)
-
-* Biographical pages from international passports (excluding visa and other travel documents). The API analyzes identity documents and extracts
+Use the Identity document (ID) model to process U.S. Driver's Licenses (all 50 states and District of Columbia) and biographical pages from international passports (excluding visa and other travel documents) to extract key fields.
***Sample U.S. Driver's License processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)***:
The invoice model analyzes and extracts key information from sales invoices. The
[:::image type="icon" source="media/studio/business-card.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)
-The business card model analyzes and extracts key information from business card images.
+Use the business card model to scan and extract key information from business card images.
***Sample business card processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)***:
The business card model analyzes and extracts key information from business card
> [!div class="nextstepaction"] > [Learn more: business card model](concept-business-card.md)
-### Custom
+### Custom models
[:::image type="icon" source="media/studio/custom.png":::](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)
-* Custom models analyze and extract data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
+Custom document models analyze and extract data from forms and documents specific to your business. They are trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started.
-* Version v3.0 custom model supports signature detection in custom forms (template model) and cross-page tables in both template and neural models.
+Version v3.0 custom model supports signature detection in custom forms (template model) and cross-page tables in both template and neural models.
***Sample custom template processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
The business card model analyzes and extracts key information from business card
> [!div class="nextstepaction"] > [Learn more: custom model](concept-custom.md)
-#### Composed custom model
+#### Composed models
-A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID. you can assign up to 100 trained custom models to a single composed model.
+A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID. You can assign up to 100 trained custom models to a single composed model.
***Composed model dialog window in [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
A composed model is created by taking a collection of custom models and assignin
## Model data extraction
-| **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** | **Key-Value pairs** | **Fields** |
+| **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Structure** | **Key-Value pairs** | **Fields** |
|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | [prebuilt-read](concept-read.md#data-extraction) | Γ£ô | Γ£ô | | | Γ£ô | | | | | [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | Γ£ô | | Γ£ô | | Γ£ô | | | Γ£ô |
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Title: Read OCR - Form Recognizer
+ Title: OCR for documents - Form Recognizer
-description: Learn concepts related to Read OCR API analysis with Form Recognizer APIΓÇöusage and limits.
+description: Extract print and handwritten text from scanned and digital documents with Form Recognizer's Read OCR model.
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# Form Recognizer Read OCR model
+# OCR for documents
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-Form Recognizer v3.0 includes the new Read Optical Character Recognition (OCR) model. The Read OCR model extracts typeface and handwritten text including mixed languages in documents. The Read OCR model can detect lines, words, locations, and languages and is the core of all other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the Read OCR model as a foundation for extracting texts from documents.
+> [!NOTE]
+>
+> For general, in-the-wild images like labels, street signs, and posters, use the [Computer Vision v4.0 preview Read](../../cognitive-services/Computer-vision/concept-ocr.md) feature optimized for general, non-document images with a performance-enhanced synchronous API that makes it easier to embed OCR in your user experience scenarios.
+>
+
+## What is OCR for documents?
+
+Optical Character Recognition (OCR) for documents is optimized for large, text-heavy documents in multiple file formats and global languages. It should include features like higher-resolution scanning of document images to handle smaller and denser text; paragraph detection; fillable form handling; and support for advanced document scenarios like single-character boxes and accurate extraction of key fields commonly found in invoices, receipts, and other prebuilt scenarios.
+
+## Form Recognizer Read model
+
+Form Recognizer v3.0's Read Optical Character Recognition (OCR) model runs at a higher resolution than Computer Vision Read and extracts print and handwritten text from PDF documents and scanned images. It also includes preview support for extracting text from Microsoft Word, Excel, PowerPoint, and HTML documents. It detects paragraphs, text lines, words, locations, and languages, and is the underlying OCR engine for other Form Recognizer models like Layout, General Document, Invoice, Receipt, Identity (ID) document, and other prebuilt models, as well as custom models.
## Supported document types
The following resources are supported by Form Recognizer v3.0:
|-||| |**Read model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-javascript)</li></ul>|**prebuilt-read**|
-## Try Form Recognizer
+## Try OCR in Form Recognizer
Try extracting text from forms and documents using the Form Recognizer Studio. You'll need the following assets:
Form Recognizer v3.0 version supports several languages for the read model. *See
## Data detection and extraction
-### Paragraphs <sup>🆕</sup>
+### Microsoft Office and HTML text extraction (preview) <sup>🆕</sup>
+Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDKs of that API version to preview text extraction from Microsoft Word, Excel, PowerPoint, and HTML files. The following illustration shows extraction of the digital text as well as text from the images embedded in the Word document by running OCR on the images.
++
+The page units in the model output are computed as shown:
+
+ **File format** | **Computed page unit** | **Total pages** |
+| | | |
+|Word (preview) | Up to 3,000 characters = 1 page unit, Each embedded image = 1 page unit | Total pages of up to 3,000 characters each + Total embedded images |
+|Excel (preview) | Each worksheet = 1 page unit, Each embedded image = 1 page unit | Total worksheets + Total images
+|PowerPoint (preview)| Each slide = 1 page unit, Each embedded image = 1 page unit | Total slides + Total images
+|HTML (preview)| Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
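For example, under this scheme a Word document (preview) containing 7,500 characters of text and two embedded images counts as 3 page units for the text (up to 3,000 characters each) plus 2 page units for the images, for a total of 5 page units.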
+
+### Paragraphs extraction <sup>🆕</sup>
-The Read model extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as`content`and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top level `content` property that contains the full text from the document.
+The Read OCR model in Form Recognizer extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as `content` and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top-level `content` property that contains the full text from the document.
```json "paragraphs": [
The Read model extracts all identified blocks of text in the `paragraphs` collec
``` ### Language detection <sup>🆕</sup>
-Read adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read will predict the detected primary language for each text line along with the `confidence` in the `languages` collection under `analyzeResult`.
+The Read OCR model in Form Recognizer adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read will predict the detected primary language for each text line along with the `confidence` in the `languages` collection under `analyzeResult`.
```json "languages": [
Read adds [language detection](language-support.md#detected-languages-read-api)
}, ] ```
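As an illustrative sketch of reading the detected languages with the Python SDK (`azure-ai-formrecognizer`), using a placeholder endpoint, key, and document URL:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

result = client.begin_analyze_document_from_url(
    "prebuilt-read", "https://<path-to-your>/multilingual-letter.pdf"
).result()

# Each detected language reports a locale code and a confidence score.
for language in result.languages:
    print(language.locale, language.confidence)
```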
-### Microsoft Office and HTML support (preview) <sup>🆕</sup>
-Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDKs of that API version to preview the support for Microsoft Word, Excel, PowerPoint, and HTML files.
-
-The page units in the model output are computed as shown:
-
- **File format** | **Computed page unit** | **Total pages** |
-| | | |
-|Word (preview) | Up to 3,000 characters = 1 page unit, Each embedded image = 1 page unit | Total pages of up to 3,000 characters each + Total embedded images |
-|Excel (preview) | Each worksheet = 1 page unit, Each embedded image = 1 page unit | Total worksheets + Total images
-|PowerPoint (preview)| Each slide = 1 page unit, Each embedded image = 1 page unit | Total slides + Total images
-|HTML (preview)| Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
-
-### Pages
+### Extracting pages from documents
The page units in the model output are computed as shown:
The page units in the model output are computed as shown:
] ```
-### Text lines and words
+### Extract text lines and words
-Read extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
+The Read OCR model extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
For the preview of Microsoft Word, Excel, PowerPoint, and HTML file support, Read will extract all embedded text as is. For any embedded images, it will run OCR on the images to extract text and append the text from each image as an added entry to the `pages` collection. These added entries will include the extracted text lines and words, their bounding polygons, confidences, and the spans pointing to the associated text.
For large multi-page PDF documents, use the `pages` query parameter to indicate
> [!NOTE] > For the preview of Microsoft Word, Excel, PowerPoint, and HTML file support, the Read API ignores the pages parameter and extracts all pages by default.
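With the Python SDK (`azure-ai-formrecognizer`), the same page selection is passed as the `pages` keyword argument; the endpoint, key, and document URL below are placeholders:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze only pages 1-3 and 5 of a large PDF.
result = client.begin_analyze_document_from_url(
    "prebuilt-read", "https://<path-to-your>/large-report.pdf", pages="1-3,5"
).result()
print([page.page_number for page in result.pages])
```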
+### Handwritten style for text lines (Latin languages only)
+
+The response classifies whether each text line is of handwriting style, along with a confidence score. This feature is supported only for Latin languages. The following JSON snippet shows an example.
+
+```json
+"styles": [
+{
+ "confidence": 0.95,
+ "spans": [
+ {
+ "offset": 509,
+ "length": 24
+ }
+ "isHandwritten": true
+ ]
+}
+```
+ ## Next steps Complete a Form Recognizer quickstart:
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
Title: Form Recognizer receipt model
+ Title: Receipt data extraction - Form Recognizer
-description: Concepts related to data extraction and analysis using the prebuilt receipt model
+description: Use the machine-learning-powered receipt data extraction model to digitize receipts.
recommendations: false
<!-- markdownlint-disable MD033 -->
-# Form Recognizer receipt model
+# Receipt data extraction
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
-The receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total and returns structured JSON data.
+## What is receipt digitization?
+
+Receipt digitization is the process of converting scanned receipts into digital form for downstream processing. OCR-powered receipt data extraction helps automate this conversion and saves time and effort. The output from receipt data extraction is used for accounts payable and receivables automation, sales data analytics, and other business scenarios.
+
+## Form Recognizer receipt model
+
+The Form Recognizer receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total and returns structured JSON data.
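As an illustration, here is a minimal sketch that analyzes a local receipt image with the Python SDK (`azure-ai-formrecognizer`); the endpoint, key, file path, and the two fields read are placeholders chosen for the example:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze a local receipt image (placeholder file path).
with open("receipt.jpg", "rb") as f:
    receipt = client.begin_analyze_document("prebuilt-receipt", f).result().documents[0]

merchant = receipt.fields.get("MerchantName")
total = receipt.fields.get("Total")
print(merchant.value if merchant else None, total.value if total else None)
```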
***Sample receipt processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
The following tools are supported by Form Recognizer v2.1:
|-|-| |**Receipt model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api?view=form-recog-2.1.0&preserve-view=true&tabs=windows&pivots=programming-language-rest-api#analyze-receipts)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-### Try Form Recognizer
+### Try receipt data extraction
See how data, including time and date of transactions, merchant information, and amount totals, is extracted from receipts using the Form Recognizer Studio. You'll need the following resources:
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
Title: Form Recognizer W-2 prebuilt model
+ Title: Automated W-2 form processing - Form Recognizer
-description: Data extraction and analysis extraction using the prebuilt W-2 model
+description: Use the Form Recognizer prebuilt W-2 model to automate extraction of W2 form data.
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# Form Recognizer W-2 model
+# Automated W-2 form processing
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
+## Why is automated W-2 form processing important?
+
+Form W-2, also known as the Wage and Tax Statement, is sent by an employer to each employee and the Internal Revenue Service (IRS) at the end of the year. A W-2 form reports employees' annual wages and the amount of taxes withheld from their paychecks. The IRS also uses W-2 forms to track individuals' tax obligations. The Social Security Administration (SSA) uses the information on this and other forms to compute the Social Security benefits for all workers.
+
+## Form Recognizer W-2 form model
 The Form Recognizer W-2 model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported on [US Internal Revenue Service (IRS) tax forms](https://www.irs.gov/forms-pubs/about-form-w-2). A W-2 tax form is a multipart form divided into state and federal sections consisting of more than 14 boxes detailing an employee's income from the previous year. The W-2 tax form is a key document used in employees' federal and state tax filings, as well as other processes like mortgage loans and Social Security Administration (SSA) benefits. The Form Recognizer W-2 model supports both single and multiple standard and customized forms from 2018 to the present. ***Sample W-2 tax form processed using Form Recognizer Studio***
The prebuilt W-2 model is supported by Form Recognizer v3.0 with the following t
|-|-|--| |**W-2 model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|**prebuilt-tax.us.w2**|
-### Try Form Recognizer
+### Try W-2 form data extraction
Try extracting data from W-2 forms using the Form Recognizer Studio. You'll need the following resources:
applied-ai-services Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/disaster-recovery.md
Request body
You'll get a `200` response code with response body that contains the JSON payload required to initiate the copy.
-```http
+```json
{
- "targetResourceId": "/subscriptions/{targetSub}/resourceGroups/{targetRG}/providers/Microsoft.CognitiveServices/accounts/{targetService}",
- "targetResourceRegion": "region",
- "targetModelId": "target-model-name",
- "targetModelLocation": "model path",
- "accessToken": "access token",
- "expirationDateTime": "timestamp"
+ "targetResourceId": "/subscriptions/{targetSub}/resourceGroups/{targetRG}/providers/Microsoft.CognitiveServices/accounts/{targetService}",
+ "targetResourceRegion": "region",
+ "targetModelId": "target-model-name",
+ "targetModelLocation": "model path",
+ "accessToken": "access token",
+ "expirationDateTime": "timestamp"
} ```
The body of your request is the response from the previous step.
```json {
- "targetResourceId": "/subscriptions/{targetSub}/resourceGroups/{targetRG}/providers/Microsoft.CognitiveServices/accounts/{targetService}",
- "targetResourceRegion": "region",
- "targetModelId": "target-model-name",
- "targetModelLocation": "model path",
- "accessToken": "access token",
- "expirationDateTime": "timestamp"
+ "targetResourceId": "/subscriptions/{targetSub}/resourceGroups/{targetRG}/providers/Microsoft.CognitiveServices/accounts/{targetService}",
+ "targetResourceRegion": "region",
+ "targetModelId": "target-model-name",
+ "targetModelLocation": "model path",
+ "accessToken": "access token",
+ "expirationDateTime": "timestamp"
} ```
The following code snippets use cURL to make API calls outlined in the steps abo
### Generate Copy authorization
- **Request**
+**Request**
- ```bash
- curl -i -X POST "{YOUR-ENDPOINT}formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31"
- -H "Content-Type: application/json"
- -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}"
- --data-ascii "{
- 'modelId': '{modelId}',
- 'description': '{description}'
- }"
- ```
+```bash
+curl -i -X POST "{YOUR-ENDPOINT}formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31"
+-H "Content-Type: application/json"
+-H "Ocp-Apim-Subscription-Key: {YOUR-KEY}"
+--data-ascii "{
+ 'modelId': '{modelId}',
+ 'description': '{description}'
+}"
+```
- **Successful response**
+**Successful response**
- ```http
- {
+```json
+{
"targetResourceId": "string", "targetResourceRegion": "string", "targetModelId": "string", "targetModelLocation": "string", "accessToken": "string", "expirationDateTime": "string"
- }
- ```
+}
+```
### Begin Copy operation
- **Request**
+**Request**
- ```bash
- curl -i -X POST "{YOUR-ENDPOINT}/formrecognizer/documentModels/{modelId}:copyTo?api-version=2022-08-31"
+```bash
+curl -i -X POST "{YOUR-ENDPOINT}/formrecognizer/documentModels/{modelId}:copyTo?api-version=2022-08-31"
-H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}" --data-ascii "{
The following code snippets use cURL to make API calls outlined in the steps abo
'expirationDateTime': '{expirationDateTime}' }"
- ```
+```
- **Successful response**
+**Successful response**
- ```http
- HTTP/1.1 202 Accepted
- Operation-Location: https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31
- ```
+```http
+HTTP/1.1 202 Accepted
+Operation-Location: https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31
+```
### Track copy operation progress
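A minimal sketch of tracking the copy, assuming you captured the `Operation-Location` URL returned by the previous step; the resource name, operation ID, and key are placeholders.

```bash
# Minimal sketch: poll the Operation-Location URL returned by the :copyTo call
# until the returned status reports success or failure.
curl -i -X GET "https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31" \
  -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}"
```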
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Title: "Overview: What is Azure Form Recognizer?"
+ Title: Intelligent document processing - Form Recognizer
-description: Azure Form Recognizer service that analyzes and extracts text, table and data, maps field relationships as key-value pairs, and returns a structured JSON output from your forms and documents.
+description: Machine-learning based OCR and document understanding service to automate extraction of text, table and structure, and key-value pairs from your forms and documents.
recommendations: false
<!-- markdownlint-disable MD033 --> <!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD036 -->
-# What is Azure Form Recognizer?
+
+# What is Intelligent Document Processing?
+
+Intelligent Document Processing (IDP) refers to capturing, transforming, and processing data from documents (for example, PDFs, scanned images, and Microsoft Office and HTML documents). It typically uses advanced machine-learning based technologies like computer vision, Optical Character Recognition (OCR), document layout analysis, and Natural Language Processing (NLP) to extract meaningful information, process it, and integrate it with other systems.
+
+IDP solutions can extract data from structured documents with pre-defined layouts like a tax form, unstructured or free-form documents like a contract, and semi-structured documents. They have a wide variety of benefits spanning knowledge mining, business process automation, and industry-specific applications. Examples include invoice processing, medical claims processing, and contracts workflow automation.
+
+## What is Azure Form Recognizer?
::: moniker range="form-recog-3.0.0" [!INCLUDE [applies to v3.0](includes/applies-to-v3-0.md)]
recommendations: false
::: moniker range="form-recog-3.0.0"
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that analyzes forms and documents, extracts text and data, and maps field relationships as key-value pairs. To learn more about each model, *see* the Concepts articles:
+Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) for developers to build intelligent document processing solutions. Form Recognizer applies machine-learning based optical character recognition (OCR) and document understanding technologies to extract print and handwritten text, tables, structure, and key-value pairs from documents. You can also label and train custom models to automate data extraction from structured, semi-structured, and unstructured documents. To learn more about each model, *see* the Concepts articles:
| Model type | Model name | ||--|
-|**Document analysis models**| &#9679; [**Read model**](concept-read.md)</br> &#9679; [**General document model**](concept-general-document.md)</br> &#9679; [**Layout model**](concept-layout.md) </br> |
-| **Prebuilt models** | &#9679; [**W-2 form model**](concept-w2.md) </br>&#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**ID document model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br>
+|**Document analysis models**| &#9679; [**Read OCR model**](concept-read.md)</br> &#9679; [**General document model**](concept-general-document.md)</br> &#9679; [**Layout analysis model**](concept-layout.md) </br> |
+| **Prebuilt models** | &#9679; [**W-2 form model**](concept-w2.md) </br>&#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**Identity (ID) document model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br>
| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md)| ## Which Form Recognizer model should I use?
This section will help you decide which **Form Recognizer v3.0** supported model
| Type of document | Data to extract |Document format | Your best solution | | --|-| -|-|
-|**A text-based document** like a contract or letter.|You want to extract primarily text lines, words, locations, and detected languages.|</li></ul>The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).| [**Read model**](concept-read.md)|
-|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout model**](concept-layout.md)
+|**A generic document** like a contract or letter.|You want to extract primarily text lines, words, locations, and detected languages.|</li></ul>The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).| [**Read OCR model**](concept-read.md)|
+|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout analysis model**](concept-layout.md)
|**A structured or semi-structured document that includes content formatted as fields and values**, like a credit application or survey form.|You want to extract fields and values including ones not covered by the scenario-specific prebuilt models **without having to train a custom model**.| The form or document is a standardized format commonly used in your business or industry and printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).|[**General document model**](concept-general-document.md) |**U.S. W-2 form**|You want to extract key information such as salary, wages, and taxes withheld from US W2 tax forms.</li></ul> |The W-2 document is in United States English (en-US) text.|[**W-2 model**](concept-w2.md) |**Invoice**|You want to extract key information such as customer name, billing address, and amount due from invoices.</li></ul> |The invoice document is written or printed in a [supported language](language-support.md#invoice-model).|[**Invoice model**](concept-invoice.md) |**Receipt**|You want to extract key information such as merchant name, transaction date, and transaction total from a sales or single-page hotel receipt.</li></ul> |The receipt is written or printed in a [supported language](language-support.md#receipt-model). |[**Receipt model**](concept-receipt.md)|
-|**ID document** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**ID document model**](concept-id-document.md)|
+|**Identity document (ID)** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**Identity document (ID) model**](concept-id-document.md)|
|**Business card**|You want to extract key information such as first name, last name, company name, email address, and phone number from business cards.</li></ul>|The business card document is in English or Japanese text. | [**Business card model**](concept-business-card.md)| |**Mixed-type document(s)**| You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| You have various documents with structured, semi-structured, and/or unstructured elements.| [**Custom model**](concept-custom.md)| >[!Tip] >
-> * If you're still unsure which model to use, try the General Document model.
-> * The General Document model is powered by the Read OCR model to detect lines, words, locations, and languages.
-> * General document extracts all the same fields as Layout model (pages, tables, styles) and also extracts key-value pairs.
+> * If you're still unsure which model to use, try the General Document model to extract key-value pairs.
+> * The General Document model is powered by the Read OCR engine to detect text lines, words, locations, and languages.
+> * General document also extracts the same data as the document layout model (pages, tables, styles).
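For instance, a minimal cURL sketch of calling the General Document model (`prebuilt-document`) with the v3.0 REST API might look like the following; the endpoint, key, and document URL are placeholders.

```bash
# Minimal sketch: extract text, tables, structure, and key-value pairs with the general document model.
curl -i -X POST "{your-endpoint}/formrecognizer/documentModels/prebuilt-document:analyze?api-version=2022-08-31" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: {your-key}" \
  --data-ascii '{"urlSource": "https://example.com/sample-form.pdf"}'
```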
-## Form Recognizer models and development options
+## Document processing models and development options
> [!NOTE]
->The following models and development options are supported by the Form Recognizer service v3.0.
+>The following document understanding models and development options are supported by the Form Recognizer service v3.0.
-You can Use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references.
+You can use Form Recognizer to automate your document processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references.
| Model | Description |Automation use cases | Development options | |-|--|-|--|
-|[**Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul> |
+|[**Read OCR model**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul> |
|[**General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li></ul> |
-|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li></ul>|
+|[**Layout analysis model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li></ul>|
|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:<ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>| |[ **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul> | |[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. 
|<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>| |[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
-|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**Identity document (ID) model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.|<ul><li>Sales lead and marketing management.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>| ::: moniker-end
You can Use Form Recognizer to automate your data processing in applications and
::: moniker range="form-recog-2.1.0"
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that analyzes forms and documents, extracts text and data, and maps field relationships as key-value pairs. To learn more about each model, *see* the Concepts articles:
+Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) for developers to build intelligent document processing solutions. Form Recognizer applies machine-learning based optical character recognition (OCR) and document understanding technologies to extract print and handwritten text, tables, structure, and key-value pairs from documents. You can also label and train custom models to automate data extraction from structured, semi-structured, and unstructured documents. To learn more about each model, *see* the Concepts articles:
| Model type | Model name | ||--|
-|**Document analysis model**| &#9679; [**Layout model**](concept-layout.md) </br> |
-| **Prebuilt models** | &#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**ID document model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br>
+|**Document analysis model**| &#9679; [**Layout analysis model**](concept-layout.md) </br> |
+| **Prebuilt models** | &#9679; [**Invoice model**](concept-invoice.md)</br>&#9679; [**Receipt model**](concept-receipt.md) </br>&#9679; [**Identity document (ID) model**](concept-id-document.md) </br>&#9679; [**Business card model**](concept-business-card.md) </br>
| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md)|
-## Which Form Recognizer model should I use?
+## Which document processing model should I use?
This section will help you decide which Form Recognizer v2.1 supported model you should use for your application: | Type of document | Data to extract |Document format | Your best solution | | --|-| -|-|
-|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout model**](concept-layout.md)
+|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables and selection marks.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout analysis model**](concept-layout.md)
|**Invoice**|You want to extract key information such as customer name, billing address, and amount due from invoices.</li></ul> |The invoice document is written or printed in a [supported language](language-support.md#invoice-model).|[**Invoice model**](concept-invoice.md) |**Receipt**|You want to extract key information such as merchant name, transaction date, and transaction total from a sales or single-page hotel receipt.</li></ul> |The receipt is written or printed in a [supported language](language-support.md#receipt-model). |[**Receipt model**](concept-receipt.md)|
-|**ID document** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**ID document model**](concept-id-document.md)|
+|**Identity document (ID)** like a passport or driver's license. |You want to extract key information such as first name, last name, and date of birth from US drivers' licenses or international passports. |Your ID document is a US driver's license or the biographical page from an international passport (not a visa).| [**ID document model**](concept-id-document.md)|
|**Business card**|You want to extract key information such as first name, last name, company name, email address, and phone number from business cards.</li></ul>|The business card document is in English or Japanese text. | [**Business card model**](concept-business-card.md)| |**Mixed-type document(s)**| You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| You have various documents with structured, semi-structured, and/or unstructured elements.| [**Custom model**](concept-custom.md)|
Use the links in the table to learn more about each model and browse the API ref
| Model| Description | Development options | |-|--|-|
-|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Layout analysis**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Identity document (ID) model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
::: moniker-end ## Data privacy and security
- As with all the cognitive services, developers using the Form Recognizer service should be aware of Microsoft policies on customer data. See our [Data, privacy, and security for Form Recognizer](/legal/cognitive-services/form-recognizer/fr-data-privacy-security) page.
+ As with all AI services, developers using the Form Recognizer service should be aware of Microsoft policies on customer data. See our [Data, privacy, and security for Form Recognizer](/legal/cognitive-services/form-recognizer/fr-data-privacy-security) page.
## Next steps
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Use the REST API parameter `api-version=2022-06-30-preview` when using the API o
### New Prebuilt Contract model
-A new prebuilt that extracts information from contracts such as parties, title, contract ID, execution date and more. Contracts is currenlty in preview, please request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
+A new prebuilt model extracts information from contracts such as parties, title, contract ID, execution date, and more. The contracts model is currently in preview; please request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
### Region expansion for training custom neural models
The **2022-06-30-preview** release presents extensive updates across the feature
* [**Invoice language expansion**](concept-invoice.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales). * [**Prebuilt business card**](concept-business-card.md) now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales). * [**Prebuilt ID document model**](concept-id-document.md). The ID document model now extracts DateOfIssue, Height, Weight, EyeColor, HairColor, and DocumentDiscriminator from US driver's licenses. _See_ [field extraction](concept-id-document.md).
-* [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [page extraction](concept-read.md#pages).
+* [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [Microsoft Office and HTML text extraction (preview)](concept-read.md#microsoft-office-and-html-text-extraction-preview-).
#### Form Recognizer SDK beta June 2022 preview release
applied-ai-services Tutorial Ios Picture Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/tutorial-ios-picture-immersive-reader.md
Choose **Single View App**.
![New Single View App](./media/ios/xcode-single-view-app.png) ## Get the SDK CocoaPod+ The easiest way to use the Immersive Reader SDK is via CocoaPods. To install via Cocoapods:+ 1. [Install CocoaPods](http://guides.cocoapods.org/using/getting-started.html) - Follow the getting started guide to install Cocoapods.+ 2. Create a Podfile by running `pod init` in your Xcode project's root directory.
-3. Add the CocoaPod to your Podfile by adding `pod 'immersive-reader-sdk', :path => 'https://github.com/microsoft/immersive-reader-sdk/tree/master/iOS/immersive-reader-sdk'`. Your Podfile should look like the following, with your target's name replacing picture-to-immersive-reader-swift:
- ```ruby
- platform :ios, '9.0'
-
- target 'picture-to-immersive-reader-swift' do
- use_frameworks!
- # Pods for picture-to-immersive-reader-swift
- pod 'immersive-reader-sdk', :git => 'https://github.com/microsoft/immersive-reader-sdk.git'
- end
-```
+
+3. Add the CocoaPod to your Podfile by adding `pod 'immersive-reader-sdk', :path => 'https://github.com/microsoft/immersive-reader-sdk/tree/master/iOS/immersive-reader-sdk'`. Your Podfile should look like the following, with your target's name replacing picture-to-immersive-reader-swift:
+
+ ```ruby
+ platform :ios, '9.0'
+
+ target 'picture-to-immersive-reader-swift' do
+ use_frameworks!
+ # Pods for picture-to-immersive-reader-swift
+ pod 'immersive-reader-sdk', :git => 'https://github.com/microsoft/immersive-reader-sdk.git'
+ end
+ ```
+ 4. In the terminal, in the directory of your Xcode project, run the command `pod install` to install the Immersive Reader SDK pod.+ 5. Add `import immersive_reader_sdk` to all files that need to reference the SDK.+ 6. Be sure to open the project by opening the `.xcworkspace` file and not the `.xcodeproj` file. ## Acquire an Azure AD authentication token
applied-ai-services Diagnose An Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/diagnose-an-incident.md
The first section lists a summary of the current incident, including basic infor
- Analyzed root cause is an automatically generated result. Metrics Advisor analyzes all anomalies that are captured on time series within one metric with different dimension values at the same timestamp, then performs correlation and clustering to group related anomalies together and generates root cause advice. + For metrics with multiple dimensions, it's common for multiple anomalies to be detected at the same time. However, those anomalies may share the same root cause. Instead of analyzing all anomalies one by one, leveraging **Analyzed root cause** should be the most efficient way to diagnose the current incident.
applied-ai-services Manage Data Feeds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/manage-data-feeds.md
Previously updated : 04/20/2021 Last updated : 10/25/2022
Select the **Backfill** button to trigger an immediate ingestion on a time-stam
## Manage permission of a data feed
-Workspace access is controlled by the Metrics Advisor resource, which uses Azure Active Directory for authentication. Another layer of permission control is applied to metric data.
+Azure operations can be divided into two categories - control plane and data plane. You use the control plane to manage resources in your subscription. You use the data plane to use capabilities exposed by your instance of a resource type.
+Metrics Advisor requires at least the 'Reader' role for a user to access its capabilities; however, a 'Reader' cannot edit or delete the Metrics Advisor resource itself.
-Metrics Advisor lets you grant permissions to different groups of people on different data feeds. There are two types of roles:
+Within Metrics Advisor, there are other fine-grained roles that enable permission control on specific entities, such as data feeds, hooks, and credentials. There are two types of roles:
-- **Administrator**: Has full permissions to manage a data feed, including modify and delete.-- **Viewer**: Has access to a read-only view of the data feed.
-
+- **Administrator**: Has full permissions to manage a data feed, hook, credentials, etc. including modify and delete.
+- **Viewer**: Has access to a read-only view of the data feed, hook, credentials, etc.
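Workspace access itself is governed by Azure role-based access control on the Metrics Advisor resource. A minimal Azure CLI sketch of granting the built-in 'Reader' role mentioned above follows; the user and resource IDs are placeholders.

```bash
# Minimal sketch: grant the built-in Reader role on the Metrics Advisor resource
# so a user can access the workspace without being able to edit or delete the resource.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<metrics-advisor-resource>"
```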
## Advanced settings
applied-ai-services Enable Anomaly Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/tutorials/enable-anomaly-notification.md
Previously updated : 05/20/2021 Last updated : 05/20/2021 # Tutorial: Enable anomaly notification in Metrics Advisor
There are several options to send email, both Microsoft hosted and 3rd-party off
Fill in the content that you'd like to include in 'Body' and 'Subject' of the email, and fill in an email address in 'To'. ![Screenshot of send an email](../media/tutorial/logic-apps-send-email.png)
-
+ #### [Teams Channel](#tab/teams)
-
-### Send anomaly notification through a Microsoft Teams channel
-This section will walk through the practice of sending anomaly notifications through a Microsoft Teams channel. This can help enable scenarios where team members are collaborating on analyzing anomalies that are detected by Metrics Advisor. The workflow is easy to configure and doesn't have a large number of prerequisites.
-
+### Send anomaly notification through a Microsoft Teams channel
+This section will walk through the practice of sending anomaly notifications through a Microsoft Teams channel. This can help enable scenarios where team members are collaborating on analyzing anomalies that are detected by Metrics Advisor. The workflow is easy to configure and doesn't have a large number of prerequisites.
**Step 1.** Add a 'Incoming Webhook' connector to your Teams channel
automation Automation Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-disaster-recovery.md
+
+ Title: Disaster recovery for Azure Automation
+description: This article describes the disaster recovery strategy to handle a service outage or zone failure for Azure Automation
+keywords: automation disaster recovery
++ Last updated : 10/17/2022+++
+# Disaster recovery for Azure Automation
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs
+
+This article explains the disaster recovery strategy to handle a region-wide or zone-wide failure.
+
+You must have a disaster recovery strategy to handle a region-wide service outage or zone-wide failure, to help reduce the impact on your business and customers arising from unpredictable events. You are responsible for setting up disaster recovery of Automation accounts and their dependent resources, such as Modules, Connections, Credentials, Certificates, Variables, and Schedules. An important aspect of a disaster recovery plan is preparing to fail over to a replica of the Automation account created in advance in the secondary region if the Automation account in the primary region becomes unavailable. Ensure that your disaster recovery strategy covers your Automation account and its dependent resources.
+
+In addition to the high availability offered by availability zones, some regions are paired with another region to provide protection from regional or large geographical disasters. Whether or not the primary region has a regional pair, the disaster recovery strategy for the Automation account remains the same. For more information about regional pairs, [learn more](../availability-zones/cross-region-replication-azure.md).
++
+## Enable disaster recovery
+
+Every Automation account that you [create](https://learn.microsoft.com/azure/automation/quickstarts/create-azure-automation-account-portal)
+requires a location that you must use for deployment. This is the primary region for your Automation account, and it includes assets, runbooks created for the Automation account, job execution data, and logs. For disaster recovery, the replica Automation account must already be deployed and ready in the secondary region.
+
+- Begin by [creating a replica Automation account](https://learn.microsoft.com/azure/automation/quickstarts/create-azure-automation-account-portal#create-automation-account) in any alternate [region](https://azure.microsoft.com/global-infrastructure/services/?products=automation&regions=all).
+- Select the secondary region of your choice - paired region or any other region where Azure Automation is available.
+- Apart from creating a replica of the Automation account, replicate the dependent resources such as Runbooks, Modules, Connections, Credentials, Certificates, Variables, Schedules and permissions assigned for the Run As account and Managed Identities in the Automation account in primary region to the Automation account in secondary region. You can use the [PowerShell script](#script-to-migrate-automation-account-assets-from-one-region-to-another) to migrate assets of the Automation account from one region to another.
+- If you are using [ARM templates](../azure-resource-manager/management/overview.md) to define and deploy Automation runbooks, you can use the same templates to deploy those runbooks to the replica Automation account in the secondary region (a minimal deployment sketch follows this list). If a region-wide outage or zone-wide failure occurs in the primary region, you can then execute the replicated runbooks in the secondary region to continue business as usual.
+
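+A minimal sketch, assuming the Azure CLI and a hypothetical template file named `automation-runbooks.json` that defines your runbooks, of redeploying the same template against the resource group that hosts the replica account:
+
+```bash
+# Minimal sketch: redeploy the runbook ARM template to the replica account's resource group.
+# The template file name, parameter name, and resource names are illustrative placeholders.
+az deployment group create \
+  --resource-group "<secondary-region-rg>" \
+  --template-file automation-runbooks.json \
+  --parameters automationAccountName="<replica-automation-account>"
+```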
+>[!NOTE]
+> Due to data residency requirements, jobs data and logs present in the primary region are not available in the secondary region.
+
+## Scenarios for cloud and hybrid jobs
+
+### Scenario: Execute Cloud jobs in secondary region
+For Cloud jobs, there would be negligible downtime, provided a replica Automation account and all dependent resources and runbooks are already deployed and available in the secondary region. You can use the replica account to execute jobs as usual.
+
+### Scenario: Execute jobs on Hybrid Runbook Worker deployed in a region different from primary region of failure
+If the Windows or Linux Hybrid Runbook worker is deployed using the extension-based approach in a region *different* from the primary region of failure, follow these steps to continue executing the Hybrid jobs:
+
+1. [Delete](extension-based-hybrid-runbook-worker-install.md?tabs=windows#delete-a-hybrid-runbook-worker) the extension installed on Hybrid Runbook worker in the Automation account in the primary region.
+1. [Add](extension-based-hybrid-runbook-worker-install.md?tabs=windows#create-hybrid-worker-group) the same Hybrid Runbook worker to a Hybrid Worker group in the Automation account in the secondary region. The Hybrid worker extension is installed on the machine in the replica of the Automation account.
+1. Execute the jobs on the Hybrid Runbook worker created in Step 2.
+
+For a Hybrid Runbook Worker deployed using the agent-based approach, choose from the following options:
+
+#### [Windows Hybrid Runbook worker](#tab/win-hrw)
+
+If the Windows Hybrid Runbook worker is deployed using an agent-based approach in a region different from the primary region of failure, follow the steps to continue executing Hybrid jobs:
+1. [Uninstall](automation-windows-hrw-install.md#remove-windows-hybrid-runbook-worker) the agent from the Hybrid Runbook worker present in the Automation account in the primary region.
+1. [Re-install](automation-windows-hrw-install.md#installation-options) the agent on the same machine in the replica Automation account in the secondary region.
+1. You can now execute jobs on the Hybrid Runbook worker created in Step 2.
+
+#### [Linux Hybrid Runbook worker](#tab/linux-hrw)
+
+If the Linux Hybrid Runbook worker is deployed using the agent-based approach in a region different from the primary region of failure, follow these steps to continue executing Hybrid jobs:
+1. [Uninstall](automation-linux-hrw-install.md#remove-linux-hybrid-runbook-worker) the agent from the Hybrid Runbook worker present in Automation account in the primary region.
+1. [Re-install](automation-linux-hrw-install.md#install-a-linux-hybrid-runbook-worker) the agent on the same machine in the replica Automation account in the secondary region.
+1. You can now execute jobs on the Hybrid Runbook worker created in Step 2.
+++
+### Scenario: Execute jobs on Hybrid Runbook Worker deployed in the primary region of failure
+If the Hybrid Runbook worker is deployed in the primary region and there is a compute failure in that region, the machine will not be available for executing Automation jobs. You must provision a new virtual machine in an alternate region and register it as a Hybrid Runbook Worker in the Automation account in the secondary region (a minimal provisioning sketch follows the list below).
+
+- See the installation steps in [how to deploy an extension-based Windows or Linux User Hybrid Runbook Worker](extension-based-hybrid-runbook-worker-install.md?tabs=windows#create-hybrid-worker-group).
+- See the installation steps in [how to deploy an agent-based Windows Hybrid Worker](automation-windows-hrw-install.md#installation-options).
+- See the installation steps in [how to deploy an agent-based Linux Hybrid Worker](automation-linux-hrw-install.md#install-a-linux-hybrid-runbook-worker).
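+A minimal sketch, assuming the Azure CLI and a Windows image alias, of provisioning the replacement machine in the secondary region before registering it by one of the methods linked above; all names are placeholders.
+
+```bash
+# Minimal sketch: create a replacement VM in the secondary region to act as the new Hybrid Runbook Worker.
+az vm create \
+  --resource-group "<secondary-region-rg>" \
+  --name "<replacement-hybrid-worker-vm>" \
+  --image Win2019Datacenter \
+  --location "<secondary-region>" \
+  --admin-username azureuser \
+  --admin-password "<secure-password>"
+```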
+
+## Script to migrate Automation account assets from one region to another
+
+You can use these scripts for migration of Automation account assets from the account in primary region to the account in the secondary region. These scripts are used to migrate only Runbooks, Modules, Connections, Credentials, Certificates and Variables. The execution of these scripts does not affect the Automation account and its assets present in the primary region.
+
+### Prerequisites
+
+ 1. Ensure that the Automation account in the secondary region is created and available so that assets from the primary region can be migrated to it. It's preferable that the destination Automation account has no custom resources, as this prevents potential resource clashes caused by identical names, as well as loss of data.
+ 1. Ensure that the system assigned identities are enabled in the Automation account in the primary region.
+ 1. Ensure that the primary Automation account's managed identity has Contributor access, with read and write permissions, to the Automation account in the secondary region. To enable this, assign the required role on the secondary Automation account to the primary account's managed identity (see the CLI sketch after this list). [Learn more](../role-based-access-control/quickstart-assign-role-user-portal.md).
+ 1. Ensure that the script has access to the Automation account assets in primary region. Hence, it should be executed as a runbook in that Automation account for successful migration.
+ 1. If the primary Automation account is deployed using a Run as account, then it must be switched to Managed Identity before migration. [Learn more](migrate-run-as-accounts-managed-identity.md).
+ 1. Modules required are:
+
+ - Az.Accounts version 2.8.0
+ - Az.Resources version 6.0.0
+ - Az.Automation version 1.7.3
+ - Az.Storage version 4.6.0
+1. Ensure that both the source and destination Automation accounts belong to the same Azure Active Directory tenant.
+
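+A minimal Azure CLI sketch of the role assignment described in the prerequisites, assuming you have the principal ID of the primary account's system-assigned managed identity; all IDs and names are placeholders.
+
+```bash
+# Minimal sketch: grant the primary account's managed identity Contributor access
+# scoped to the replica Automation account in the secondary region.
+az role assignment create \
+  --assignee "<primary-account-managed-identity-principal-id>" \
+  --role "Contributor" \
+  --scope "/subscriptions/<destination-subscription-id>/resourceGroups/<destination-resource-group>/providers/Microsoft.Automation/automationAccounts/<replica-account-name>"
+```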
+### Create and execute the runbook
+You can use the [PowerShell script](https://github.com/azureautomation/Migrate-automation-account-assets-from-one-region-to-another) or [PowerShell workflow](https://github.com/azureautomation/Migrate-automation-account-assets-from-one-region-to-another-PwshWorkflow/tree/main) runbook, or import it from the Runbook gallery, and execute it to migrate assets from one Automation account to another.
+
+Follow the steps to import and execute the runbook:
+
+#### [PowerShell script](#tab/ps-script)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to Automation account that you want to migrate to another region.
+1. Under **Process Automation**, select **Runbooks**.
+1. Select **Browse gallery** and in the search, enter *Migrate Automation account assets from one region to another* and select **PowerShell script**.
+1. In the **Import a runbook** page, enter a name for the runbook.
+1. Select **Runtime version** as either 5.1 or 7.1 (preview)
+1. Enter the description and select **Import**.
+1. In the **Edit PowerShell Runbook** page, edit the required parameters and execute it.
+
+You can choose either of the following options to edit and execute the script: provide the seven mandatory parameters given in Option 1 **or** the three mandatory parameters given in Option 2.
+
+#### [PowerShell Workflow](#tab/ps-workflow)
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to Automation account that you want to migrate to another region.
+1. Under **Process Automation**, select **Runbooks**.
+1. Select **Browse gallery** and in the search, enter *Migrate Automation account assets from one region to another* and Select **PowerShell workflow**.
+1. In the **Import a runbook** page, enter a name for the runbook.
+1. Select **Runtime version** as 5.1
+1. Enter the description and select **Import**.
+
+You can provide the parameters during execution of the PowerShell Workflow runbook: either the seven mandatory parameters given in Option 1 **or** the three mandatory parameters given in Option 2.
+++
+The options are:
+
+#### [Option 1](#tab/option-one)
+
+**Name** | **Required** | **Description**
+-- | - | --
+SourceAutomationAccountName | True | Name of automation account in primary region from where assets need to be migrated. |
+DestinationAutomationAccountName | True | Name of automation account in secondary region to which assets need to be migrated. |
+SourceResourceGroup | True | Resource group name of the Automation account in the primary region. |
+DestinationResourceGroup | True | Resource group name of the Automation account in the secondary region. |
+SourceSubscriptionId | True | Subscription ID of the Automation account in primary region |
+DestinationSubscriptionId | True | Subscription ID of the Automation account in secondary region. |
+Type[] | True | Array consisting of all the types of assets that need to be migrated, possible values are Certificates, Connections, Credentials, Modules, Runbooks, and Variables. |
+
+#### [Option 2](#tab/option-two)
+
+**Name** | **Required** | **Description**
+-- | - | --
+SourceAutomationAccountResourceId | True | Resource ID of the Automation account in primary region from where assets need to be migrated. |
+DestinationAutomationAccountResourceId | True | Resource ID of the Automation account in secondary region to which assets need to be migrated. |
+Type[] | True | Array consisting of all the types of assets that need to be migrated, possible values are Certificates, Connections, Credentials, Modules, Runbooks, and Variables. |
+++
+### Limitations
+- The script migrates only custom PowerShell modules. Default modules and Python packages are not migrated to the replica Automation account.
+- The script does not migrate **Schedules** and **Managed identities** present in the Automation account in the primary region. These must be created manually in the replica Automation account.
+- Jobs data and activity logs are not migrated to the replica account.
+
+## Next steps
+
+- Learn more about [regions that support availability zones](../availability-zones/az-region.md).
automation Automation Secure Asset Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-secure-asset-encryption.md
For more information about Azure Key Vault, see [What is Azure Key Vault?](../ke
When you use encryption with customer-managed keys for an Automation account, Azure Automation wraps the account encryption key with the customer-managed key in the associated key vault. Enabling customer-managed keys doesn't impact performance, and the account is encrypted with the new key immediately, without any delay.
-A new Automation account is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the account is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Automation account. The managed identity is available only after the storage account is created.
+A new Automation account is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time the account is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Automation account. The managed identity is available only after the Automation account is created.
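As a minimal sketch of that provisioning step, the following Az PowerShell command grants key permissions on the vault to the account's managed identity. The vault name and principal ID are placeholders, and the exact permission set shown is an assumption to adjust for your scenario.

```azurepowershell
# Sketch: allow the Automation account's managed identity to use the customer-managed key.
# <principal-id> is the object ID of the account's system-assigned managed identity.
Set-AzKeyVaultAccessPolicy -VaultName "<key-vault-name>" `
    -ObjectId "<principal-id>" `
    -PermissionsToKeys get, wrapKey, unwrapKey
```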
When you modify the key being used for Azure Automation secure asset encryption, by enabling or disabling customer-managed keys, updating the key version, or specifying a different key, the encryption of the account encryption key changes but the secure assets in your Azure Automation account don't need to be re-encrypted.
automation Migrate Oms Update Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-oms-update-deployments.md
- Title: Migrate Azure Monitor logs update deployments to Azure portal
-description: This article tells how to migrate Azure Monitor logs update deployments to Azure portal.
-- Previously updated : 07/16/2018--
-# Migrate Azure Monitor logs update deployments to Azure portal
-
-The Operations Management Suite (OMS) portal is being [deprecated](../azure-monitor/logs/oms-portal-transition.md). All functionality that was available in the OMS portal for Update Management is available in the Azure portal, through Azure Monitor logs. This article provides the information you need to migrate to the Azure portal.
-
-## Key information
-
-* Existing deployments will continue to work. Once you have recreated the deployment in Azure, you can delete your old deployment.
-* All existing features that you had in OMS are available in Azure. To learn more about Update Management, see [Update Management overview](./update-management/overview.md).
-
-## Access the Azure portal
-
-1. From your workspace, click **Open in Azure**.
-
- ![Open in Azure - Log Analytics](media/migrate-oms-update-deployments/link-to-azure-portal.png)
-
-2. In the Azure portal, click **Automation Account**
-
- ![Azure Monitor logs](media/migrate-oms-update-deployments/log-analytics.png)
-
-3. In your Automation account, click **Update Management**.
-
- :::image type="content" source="media/migrate-oms-update-deployments/azure-automation.png" alt-text="Screenshot of the Update management page.":::
-
-4. In the Azure portal, select **Automation Accounts** under **All services**.
-
-5. Under **Management Tools**, select the appropriate Automation account, and click **Update Management**.
-
-## Recreate existing deployments
-
-All update deployments created in the OMS portal have a [saved search](../azure-monitor/logs/computer-groups.md) also known as a computer group, with the same name as the update deployment that exists. The saved search contains the list of machines that were scheduled in the update deployment.
--
-To use this existing saved search, follow these steps:
-
-1. To create a new update deployment, go to the Azure portal, select the Automation account that is used, and click **Update Management**. Click **Schedule update deployment**.
-
- ![Schedule update deployment](media/migrate-oms-update-deployments/schedule-update-deployment.png)
-
-2. The New Update Deployment pane opens. Enter values for the properties described in the following table and then click **Create**:
-
-3. For **Machines to update**, select the saved search used by the OMS deployment.
-
- | Property | Description |
- | | |
- |Name |Unique name to identify the update deployment. |
- |Operating System| Select **Linux** or **Windows**.|
- |Machines to update |Select a Saved search, Imported group, or pick Machine from the dropdown and select individual machines. If you choose **Machines**, the readiness of the machine is shown in the **UPDATE AGENT READINESS** column.</br> To learn about the different methods of creating computer groups in Azure Monitor logs, see [Computer groups in Azure Monitor logs](../azure-monitor/logs/computer-groups.md) |
- |Update classifications|Select all the update classifications that you need. CentOS does not support this out of the box.|
- |Updates to exclude|Enter the updates to exclude. For Windows, enter the KB article without the **KB** prefix. For Linux, enter the package name or use a wildcard character. |
- |Schedule settings|Select the time to start, and then select either **Once** or **Recurring** for the recurrence. |
- | Maintenance window |Number of minutes set for updates. The value can't be less than 30 minutes or more than 6 hours. |
- | Reboot control| Determines how reboots should be handled.</br>Available options are:</br>Reboot if required (Default)</br>Always reboot</br>Never reboot</br>Only reboot - will not install updates|
-
-4. Click **Scheduled update deployments** to view the status of the newly created update deployment.
-
- ![new update deployment](media/migrate-oms-update-deployments/new-update-deployment.png)
-
-5. As mentioned previously, once your new deployments are configured through the Azure portal, you can remove the existing deployments from the Azure portal.
-
-## Next steps
-
-To learn more about Update Management in Azure Automation, see [Update Management overview](./update-management/overview.md).
availability-zones Migrate App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-app-service.md
Availability zone support is a property of the App Service plan. The following a
- Central US - East US - East US 2
+ - South Central US
- Canada Central - Brazil South - North Europe - West Europe
+ - Sweden Central
- Germany West Central - France Central - UK South - Japan East - East Asia - Southeast Asia
+ - Qatar Central
+ - Central India
- Australia East - Availability zones can only be specified when creating a **new** App Service plan. A pre-existing App Service plan can't be converted to use availability zones. - Availability zones are only supported in the newer portion of the App Service footprint.
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
Learn more about the cluster extensions currently available for Azure Arc-enable
* [Azure Monitor](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json) * [Microsoft Defender for Cloud](../../security-center/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json) * [Azure Arc-enabled Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md)
-* [Microsoft Defender for Cloud](../../security-center/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json)
* [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md) * [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md) * [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md)
azure-arc Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/private-link.md
If you run into problems, the following suggestions may help:
nslookup gbl.his.arc.azure.com nslookup agentserviceapi.guestconfiguration.azure.com nslookup dp.kubernetesconfiguration.azure.com
- ```
+ ```
* If you are having trouble onboarding your Kubernetes cluster, confirm that you've added the Azure Active Directory, Azure Resource Manager, AzureFrontDoor.FirstParty and Microsoft Container Registry service tags to your local network firewall.
If you run into problems, the following suggestions may help:
* Learn more about [Azure Private Endpoint](../../private-link/private-link-overview.md). * Learn how to [troubleshoot Azure Private Endpoint connectivity problems](../../private-link/troubleshoot-private-endpoint-connectivity.md).
-* Learn how to [configure Private Link for Azure Monitor](../../azure-monitor/logs/private-link-security.md).
+* Learn how to [configure Private Link for Azure Monitor](../../azure-monitor/logs/private-link-security.md).
azure-cache-for-redis Cache How To Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
Title: Configure geo-replication for Premium Azure Cache for Redis instances
-description: Learn how to replicate your Azure Cache for Redis Premium instances across Azure regions
+ Title: Configure passive geo-replication for Premium Azure Cache for Redis instances
+description: Learn how to use cross-region replication to provide disaster recovery on the Premium tier of Azure Cache for Redis.
+ Previously updated : 05/24/2022+ Last updated : 10/20/2022
-# Configure geo-replication for Premium Azure Cache for Redis instances
+# Configure passive geo-replication for Premium Azure Cache for Redis instances
+
+In this article, you learn how to configure passive geo-replication on a pair of Azure Cache for Redis instances using the Azure portal.
+
+Passive geo-replication links together two Premium tier Azure Cache for Redis instances and creates an _active-passive_ data replication relationship. Active-passive means that there's a pair of caches, primary and secondary, that have their data synchronized. But you can only write to one side of the pair, the primary. The other side of the pair, the secondary cache, is read-only.
-In this article, you learn how to configure a geo-replicated Azure Cache using the Azure portal.
+Compare _active-passive_ to _active-active_, where you can write to either side of the pair, and it will synchronize with the other side.
-Geo-replication links together two Premium Azure Cache for Redis instances and creates a data replication relationship. These cache instances are typically located in different Azure regions, though that isn't required. One instance acts as the primary, and the other as the secondary. The primary handles read and write requests and propagate changes to the secondary. This process continues until the link between the two instances is removed.
+With passive geo-replication, the cache instances are typically located in different Azure regions, though that isn't required. One instance acts as the primary, and the other as the secondary. The primary handles read and write requests and propagates changes to the secondary.
+
+Failover isn't automatic. For more information, including how to initiate a failover, see [Initiate a failover from geo-primary to geo-secondary (preview)](#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
> [!NOTE] > Geo-replication is designed as a disaster-recovery solution. > >
+## Scope of availability
+
+|Tier | Basic, Standard | Premium |Enterprise, Enterprise Flash |
+|||||
+|Available | No | Yes | Yes |
+
+_Passive geo-replication_ is only available in the Premium tier of Azure Cache for Redis. The Enterprise and Enterprise Flash tiers also offer geo-replication, but those tiers use a more advanced version called _active geo-replication_.
## Geo-replication prerequisites
To configure geo-replication between two caches, the following prerequisites mus
- Both caches are [Premium tier](cache-overview.md#service-tiers) caches. - Both caches are in the same Azure subscription.-- The secondary linked cache is either the same cache size or a larger cache size than the primary linked cache.
+- The secondary linked cache is either the same cache size or a larger cache size than the primary linked cache. To use geo-failover, both caches must be the same size.
- Both caches are created and in a running state.-- Neither cache can have more than one replica. > [!NOTE]
-> Data transfer between Azure regions will be charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
+> Data transfer between Azure regions is charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
Some features aren't supported with geo-replication: - Zone Redundancy isn't supported with geo-replication. - Persistence isn't supported with geo-replication.
+- Caches with more than one replica can't be geo-replicated.
- Clustering is supported if both caches have clustering enabled and have the same number of shards. - Caches in the same Virtual Network (VNet) are supported. - Caches in different VNets are supported with caveats. See [Can I use geo-replication with my caches in a VNet?](#can-i-use-geo-replication-with-my-caches-in-a-vnet) for more information.-- Caches with more than one replica can't be geo-replicated. After geo-replication is configured, the following restrictions apply to your linked cache pair: -- The secondary linked cache is read-only; you can read from it, but you can't write any data to it. If you choose to read from the Geo-Secondary instance when a full data sync is happening between the Geo-Primary and the Geo-Secondary, the Geo-Secondary instance throws errors on any Redis operation against it until the full data sync is complete. The errors state that a full data sync is in progress. Also, the errors are thrown when either Geo-Primary or Geo-Secondary is updated and on some reboot scenarios. Applications reading from Geo-Secondary should be built to fall back to the Geo-Primary whenever the Geo-Secondary is throwing such errors.
+- The secondary linked cache is read-only. You can read from it, but you can't write any data to it. If you choose to read from the Geo-Secondary instance when a full data sync is happening between the Geo-Primary and the Geo-Secondary, the Geo-Secondary instance throws errors on any Redis operation against it until the full data sync is complete. The errors state that a full data sync is in progress. Also, the errors are thrown when either Geo-Primary or Geo-Secondary is updated and on some reboot scenarios. Applications reading from Geo-Secondary should be built to fall back to the Geo-Primary whenever the Geo-Secondary is throwing such errors.
- Any data that was in the secondary linked cache before the link was added is removed. If the geo-replication is later removed however, the replicated data remains in the secondary linked cache. - You can't [scale](cache-how-to-scale.md) either cache while the caches are linked. - You can't [change the number of shards](cache-how-to-premium-clustering.md) if the cache has clustering enabled.
After geo-replication is configured, the following restrictions apply to your li
- You can't [Import](cache-how-to-import-export-data.md#import) into the secondary linked cache. - You can't delete either linked cache, or the resource group that contains them, until you unlink the caches. For more information, see [Why did the operation fail when I tried to delete my linked cache?](#why-did-the-operation-fail-when-i-tried-to-delete-my-linked-cache) - If the caches are in different regions, network egress costs apply to the data moved across regions. For more information, see [How much does it cost to replicate my data across Azure regions?](#how-much-does-it-cost-to-replicate-my-data-across-azure-regions)-- Automatic failover doesn't occur between the primary and secondary linked cache. For more information and information on how to failover a client application, see [How does failing over to the secondary linked cache work?](#how-does-failing-over-to-the-secondary-linked-cache-work)
+- Failover isn't automatic. You must start the failover from the primary to the secondary linked cache. For more information, including how to initiate a failover, see [Initiate a failover from geo-primary to geo-secondary (preview)](#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
+ - Private links can't be added to caches that are already geo-replicated. To add a private link to a geo-replicated cache: 1. Unlink the geo-replication. 2. Add a Private Link. 3. Last, relink the geo-replication. ## Add a geo-replication link
-1. To link two caches together for geo-replication, fist select **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, select **Add cache replication link** from **Geo-replication** on the left.
+1. To link two caches together for geo-replication, first select **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, select **Add cache replication link** from the working pane.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-menu.png" alt-text="Cache geo-replication menu":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-menu.png" alt-text="Screenshot showing the cache's Geo-replication menu.":::
1. Select the name of your intended secondary cache from the **Compatible caches** list. If your secondary cache isn't displayed in the list, verify that the [Geo-replication prerequisites](#geo-replication-prerequisites) for the secondary cache are met. To filter the caches by region, select the region in the map to display only those caches in the **Compatible caches** list.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link.png" alt-text="Select compatible cache":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link.png" alt-text="Screenshot showing compatible caches for linking with geo-replication.":::
You can also start the linking process or view details about the secondary cache by using the context menu.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link-context-menu.png" alt-text="Geo-replication context menu":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link-context-menu.png" alt-text="Screenshot showing the Geo-replication context menu.":::
1. Select **Link** to link the two caches together and begin the replication process.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-confirm-link.png" alt-text="Link caches":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-confirm-link.png" alt-text="Screenshot showing how to link caches for geo-replication.":::
1. You can view the progress of the replication process using **Geo-replication** on the left.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-linking.png" alt-text="Linking status":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-linking.png" alt-text="Screenshot showing the current Linking status.":::
You can also view the linking status on the left, using **Overview**, for both the primary and secondary caches.
After geo-replication is configured, the following restrictions apply to your li
Once the replication process is complete, the **Link status** changes to **Succeeded**.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-link-successful.png" alt-text="Cache status":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-link-successful.png" alt-text="Screenshot showing cache linking status as Succeeded.":::
The primary linked cache remains available for use during the linking process. The secondary linked cache isn't available until the linking process completes.
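If you'd rather script the link than use the portal, a rough equivalent with the Az.RedisCache module looks like the following sketch; the cache names are placeholders, and both caches must already meet the prerequisites above.

```azurepowershell
# Sketch: create the geo-replication link, then check its provisioning state.
New-AzRedisCacheLink -PrimaryServerName "contoso-cache-primary" -SecondaryServerName "contoso-cache-secondary"

# The link is ready when its state reports Succeeded.
Get-AzRedisCacheLink -Name "contoso-cache-primary"
```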
-> [!NOTE]
-> Geo-replication can be enabled for this cache if you scale it to 'Premium' pricing tier and disable data persistence. This feature is not available at this time when using extra replicas.
+## Geo-primary URLs (preview)
+
+Once the caches are linked, URLs are generated that always point to the geo-primary cache. If a failover is initiated from the geo-primary to the geo-secondary, the URL remains the same, and the underlying DNS record is updated automatically to point to the new geo-primary.
++
+Four URLs are shown:
+
+- **Geo-Primary URL** is a proxy URL with the format of `<cache-1-name>.geo.redis.cache.windows.net`. This URL always has the name of the first cache to be linked, but it always points to whichever cache is the current geo-primary.
+- **Linked cache Geo-Primary URL** is a proxy URL with the format of `<cache-2-name>.geo.redis.cache.windows.net`. This URL always has the name of the second cache to be linked, and it will also always point to whichever cache is the current geo-primary.
+- **Current Geo Primary Cache** is the direct address of the cache that is currently the geo-primary. The address is `redis.cache.windows.net` not `geo.redis.cache.windows.net`. The address listed in this field changes if a failover is initiated.
+- **Current Geo Secondary Cache** is the direct address of the cache that is currently the geo-secondary. The address is `redis.cache.windows.net` not `geo.redis.cache.windows.net`. The address listed in this field changes if a failover is initiated.
+
+The goal of the two geo-primary URLs is to make updating the cache address easier on the application side in the event of a failover. Changing the address of either linked cache from `redis.cache.windows.net` to `geo.redis.cache.windows.net` ensures that your application is always pointing to the geo-primary, even if a failover is triggered.
+
+The URLs for the current geo-primary and current geo-secondary cache are provided in case you'd like to link directly to a cache resource without any automatic routing.
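As a quick illustration of the host swap, the following sketch connects through the geo-primary proxy host with `redis-cli`. It assumes a TLS-capable redis-cli, the TLS port 6380, and placeholder cache name and access key.

```powershell
# Sketch: connect to whichever cache is currently geo-primary through the proxy host name.
redis-cli -h "contoso-cache-1.geo.redis.cache.windows.net" -p 6380 --tls -a "<access-key>" PING
```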
+
+## Initiate a failover from geo-primary to geo-secondary (preview)
+
+With one click, you can trigger a failover from the geo-primary to the geo-secondary.
++
+This causes the following steps to be taken:
+
+1. The geo-secondary cache is promoted to geo-primary.
+1. DNS records are updated to redirect the geo-primary URLs to the new geo-primary.
+1. The old geo-primary cache is demoted to secondary, and attempts to form a link to the new geo-primary cache.
+
+The geo-failover process takes a few minutes to complete.
+
+### Settings to check before initiating geo-failover
+
+When the failover is initiated, the geo-primary and geo-secondary caches swap roles. If the new geo-primary is configured differently from the old one, the difference can create problems for your application.
+
+Be sure to check the following items:
+
+- If you're using a firewall in either cache, make sure that the firewall settings are similar so you have no connection issues.
+- Make sure both caches are using the same port and TLS/SSL settings.
+- The geo-primary and geo-secondary caches have different access keys. If a failover is triggered, make sure your application can update the access key it uses to match the new geo-primary. (A sketch of checking these settings follows this list.)
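The following sketch compares a few of those settings and retrieves the keys with Az PowerShell. The names are placeholders, and the exact property names on the returned objects can vary slightly between Az module versions.

```azurepowershell
# Sketch: compare port and TLS settings on both caches before failing over.
Get-AzRedisCache -ResourceGroupName "<rg>" -Name "contoso-cache-primary" |
    Select-Object Name, Port, SslPort, EnableNonSslPort, MinimumTlsVersion
Get-AzRedisCache -ResourceGroupName "<rg>" -Name "contoso-cache-secondary" |
    Select-Object Name, Port, SslPort, EnableNonSslPort, MinimumTlsVersion

# Each cache has its own access keys; be ready to switch your app to the new geo-primary's key.
Get-AzRedisCacheKey -ResourceGroupName "<rg>" -Name "contoso-cache-secondary"
```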
+
+### Failover with minimal data loss
+
+Geo-failover events can introduce data inconsistencies during the transition, especially if the client maintains a connection to the old geo-primary during the failover process. It's possible to minimize data loss in a planned geo-failover event using the following tips:
+
+- Check the geo-replication data sync offset metric. The metric is emitted by the current geo-primary cache. This metric indicates how much data has yet to be replicated to the geo-primary. If possible, only initiate failover if the metric indicates fewer than 14 bytes remain to be written.
+- Run the `CLIENT PAUSE` command in the current geo-primary before initiating failover. Running `CLIENT PAUSE` blocks any new write requests and instead returns timeout failures to the Azure Cache for Redis client. The `CLIENT PAUSE` command requires providing a timeout period in milliseconds. Make sure a long enough timeout period is provided to allow the failover to occur. Setting this to around 30 minutes (1,800,000 milliseconds) is a good place to start. You can always lower this number as needed.
+
+There's no need to run the `CLIENT UNPAUSE` command, because the new geo-primary retains the client pause.
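A minimal sketch of issuing the pause with `redis-cli`, assuming TLS on port 6380 and placeholder host name and access key:

```powershell
# Sketch: pause writes on the current geo-primary for 30 minutes (1,800,000 ms) before failing over.
redis-cli -h "contoso-cache-primary.redis.cache.windows.net" -p 6380 --tls -a "<primary-access-key>" CLIENT PAUSE 1800000
```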
## Remove a geo-replication link 1. To remove the link between two caches and stop geo-replication, select **Unlink caches** from the **Geo-replication** on the left.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-unlink.png" alt-text="Unlink caches":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-unlink.png" alt-text="Screenshot showing how to unlink caches.":::
When the unlinking process completes, the secondary cache is available for both reads and writes. >[!NOTE] >When the geo-replication link is removed, the replicated data from the primary linked cache remains in the secondary cache. >
->
## Geo-replication FAQ - [Can I use geo-replication with a Standard or Basic tier cache?](#can-i-use-geo-replication-with-a-standard-or-basic-tier-cache) - [Is my cache available for use during the linking or unlinking process?](#is-my-cache-available-for-use-during-the-linking-or-unlinking-process)
+- [Can I track the health of the geo-replication link?](#can-i-track-the-health-of-the-geo-replication-link)
- [Can I link more than two caches together?](#can-i-link-more-than-two-caches-together) - [Can I link two caches from different Azure subscriptions?](#can-i-link-two-caches-from-different-azure-subscriptions) - [Can I link two caches with different sizes?](#can-i-link-two-caches-with-different-sizes)
After geo-replication is configured, the following restrictions apply to your li
- [How much does it cost to replicate my data across Azure regions?](#how-much-does-it-cost-to-replicate-my-data-across-azure-regions) - [Why did the operation fail when I tried to delete my linked cache?](#why-did-the-operation-fail-when-i-tried-to-delete-my-linked-cache) - [What region should I use for my secondary linked cache?](#what-region-should-i-use-for-my-secondary-linked-cache)-- [How does failing over to the secondary linked cache work?](#how-does-failing-over-to-the-secondary-linked-cache-work) - [Can I configure Firewall with geo-replication?](#can-i-configure-a-firewall-with-geo-replication) ### Can I use geo-replication with a Standard or Basic tier cache?
-No, geo-replication is only available for Premium tier caches.
+No, passive geo-replication is only available in the Premium tier. A more advanced version of geo-replication, called _active geo-replication_, is available in the Enterprise and Enterprise Flash tiers.
### Is my cache available for use during the linking or unlinking process?
No, geo-replication is only available for Premium tier caches.
- The secondary linked cache isn't available until the linking process completes. - Both caches remain available until the unlinking process completes.
+### Can I track the health of the geo-replication link?
+
+Yes, several metrics are available in the Azure portal to help you track the status of the geo-replication link. A scripted example of querying them follows this list.
+
+- **Geo Replication Healthy** shows the status of the geo-replication link. The link shows up as unhealthy if either the geo-primary or geo-secondary cache is down. This is typically caused by standard patching operations, but it could also indicate a failure situation.
+- **Geo Replication Connectivity Lag** shows the time since the last successful data synchronization between geo-primary and geo-secondary.
+- **Geo Replication Data Sync Offset** shows the amount of data that has yet to be synchronized to the geo-secondary cache.
+- **Geo Replication Fully Sync Event Started** indicates that a full synchronization action has been initiated between the geo-primary and geo-secondary caches. This occurs if standard replication can't keep up with the number of new writes.
+- **Geo Replication Full Sync Event Finished** indicates that a full synchronization action has been completed.
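If you want to query these metrics outside the portal, the following Az PowerShell sketch shows one way to do it. The resource ID is a placeholder, and the API-facing metric name (`GeoReplicationHealthy` here) is an assumption; list the metric definitions first and use the name reported there.

```azurepowershell
# Sketch: discover the metric names exposed by the cache, then query one for the last hour.
$cacheId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Cache/Redis/<cache-name>"

Get-AzMetricDefinition -ResourceId $cacheId | Select-Object -ExpandProperty Name

Get-AzMetric -ResourceId $cacheId -MetricName "GeoReplicationHealthy" `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) -AggregationType Maximum
```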
+ ### Can I link more than two caches together? No, you can only link two caches together.
No, both caches must be in the same Azure subscription.
### Can I link two caches with different sizes?
-Yes, as long as the secondary linked cache is larger than the primary linked cache.
+Yes, as long as the secondary linked cache is the same size as or larger than the primary linked cache. However, you can't use the failover feature if the caches are different sizes.
### Can I use geo-replication with clustering enabled?
Replication is continuous and asynchronous. It doesn't happen on a specific sche
### How long does geo-replication replication take?
-Replication is incremental, asynchronous, and continuous and the time taken isn't much different from the latency across regions. Under certain circumstances, the secondary cache can be required to do a full sync of the data from the primary. The replication time in this case depends on many factors like: load on the primary cache, available network bandwidth, and inter-region latency. We have found replication time for a full 53-GB geo-replicated pair can be anywhere between 5 to 10 minutes.
+Replication is incremental, asynchronous, and continuous, and the time taken isn't much different from the latency across regions. Under certain circumstances, the secondary cache can be required to do a full sync of the data from the primary. The replication time in this case depends on factors such as load on the primary cache, available network bandwidth, and inter-region latency. We have found that replication time for a full 53-GB geo-replicated pair can be anywhere between 5 to 10 minutes. You can track the amount of data that has yet to be replicated using the `Geo Replication Data Sync Offset` metric in Azure Monitor.
### Is the replication recovery point guaranteed?
Geo-replicated caches and their resource groups can't be deleted while linked un
In general, it's recommended for your cache to exist in the same Azure region as the application that accesses it. For applications with separate primary and fallback regions, it's recommended your primary and secondary caches exist in those same regions. For more information about paired regions, see [Best Practices - Azure Paired regions](../availability-zones/cross-region-replication-azure.md).
-### How does failing over to the secondary linked cache work?
-
-Automatic failover across Azure regions isn't supported for geo-replicated caches. In a disaster-recovery scenario, customers should bring up the entire application stack in a coordinated manner in their backup region. Letting individual application components decide when to switch to their backups on their own can negatively affect performance.
-
-One of the key benefits of Redis is that it's a very low-latency store. If the customer's main application is in a different region than its cache, the added round-trip time would have a noticeable effect on performance. For this reason, we avoid failing over automatically because of transient availability issues.
-
-To start a customer-initiated failover, first unlink the caches. Then, change your Redis client to use the connection endpoint of the (formerly linked) secondary cache. When the two caches are unlinked, the secondary cache becomes a regular read-write cache again and accepts requests directly from Redis clients.
- ### Can I configure a firewall with geo-replication? Yes, you can configure a [firewall](./cache-configure.md#firewall) with geo-replication. For geo-replication to function alongside a firewall, ensure that the secondary cache's IP address is added to the primary cache's firewall rules.
azure-cache-for-redis Cache Moving Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-moving-resources.md
Title: Move Azure Cache for Redis instances to different regions description: How to move Azure Cache for Redis instances to a different Azure region. + - Previously updated : 11/17/2021
-#Customer intent: As an Azure developer, I want to move my Azure Cache for Redis resource to another Azure region.
+ Last updated : 10/20/2022+ # Move Azure Cache for Redis instances to different regions
-In this article, you learn how to move Azure Cache for Redis instances to a different Azure region. You might move your resources to another region for a number of reasons:
+In this article, you learn how to move Azure Cache for Redis instances to a different Azure region. You might move your resources to another region for many reasons:
+ - To take advantage of a new Azure region. - To deploy features or services available in specific regions only. - To meet internal policy and governance requirements.
In this article, you learn how to move Azure Cache for Redis instances to a diff
If you're looking to migrate to Azure Cache for Redis from on-premises, cloud-based VMs, or another hosting service, we recommend you see [Migrate to Azure Cache for Redis](cache-migration-guide.md).
-The tier of Azure Cache for Redis you use determines the option that's best for you.
+The tier of Azure Cache for Redis you use determines the option that's best for you.
-| Cache Tier | Options |
-| | - |
-| Premium | Geo-replication, create a new cache, dual-write to two caches, export and import data via RDB file, or migrate programmatically |
-| Basic or Standard | Create a new cache, dual-write to two caches, or migrate programmatically |
-| Enterprise or Enterprise Flash | Create a new cache or export and import data with an RDB file, or migrate programmatically |
+| Cache Tier | Options |
+| | - |
+| Premium | Geo-replication, create a new cache, dual-write to two caches, export and import data via RDB file, or migrate programmatically |
+| Basic or Standard | Create a new cache, dual-write to two caches, or migrate programmatically |
+| Enterprise or Enterprise Flash | Create a new cache or export and import data with an RDB file, or migrate programmatically |
-## Geo-replication (Premium)
+## Passive geo-replication (Premium)
-### Prerequisites
+### Prerequisites
To configure geo-replication between two caches, the following prerequisites must be met:
To configure geo-replication between two caches, the following prerequisites mus
### Prepare
-To move your cache instance to another region, you need to [create a second premium cache instance](quickstart-create-redis.md) in the desired region. Once both caches are running, you can set up geo-replication between the two cache instances.
+To move your cache instance to another region, you need to [create a second premium cache instance](quickstart-create-redis.md) in the desired region. Once both caches are running, you can set up geo-replication between the two cache instances.
> [!NOTE] > Data transfer between Azure regions is charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
Conditions for geo-replications support:
After geo-replication is configured, the following restrictions apply to your linked cache pair: -- The secondary linked cache is read-only. You can read from it, but you can't write any data to it.
- - If you choose to read from the Geo-Secondary instance, whenever a full data sync is happening between the Geo-Primary and the Geo-Secondary, such as when either Geo-Primary or Geo-Secondary is updated and on some reboot scenarios as well,
- the Geo-Secondary instance throws errors on any Redis operation against it until the full data sync between Geo-Primary and Geo-Secondary is complete.
- - Applications reading from Geo-Secondary should be built to fall back to the Geo-Primary whenever the Geo-Secondary is throwing such errors.
+- The secondary linked cache is read-only. You can read from it, but you can't write any data to it.
+ - If you choose to read from the Geo-Secondary instance when a full data sync is happening between the Geo-Primary and the Geo-Secondary, such as when either Geo-Primary or Geo-Secondary is updated and on some reboot scenarios as well, the Geo-Secondary instance throws errors on any Redis operation against it until the full data sync between Geo-Primary and Geo-Secondary is complete.
+ - Applications reading from Geo-Secondary should be built to fall back to the Geo-Primary whenever the Geo-Secondary is throwing such errors.
- Any data that was in the secondary linked cache before the link was added is removed. If the geo-replication is later removed however, the replicated data remains in the secondary linked cache. - You can't [scale](cache-how-to-scale.md) either cache while the caches are linked. - You can't [change the number of shards](cache-how-to-premium-clustering.md) if the cache has clustering enabled.
After geo-replication is configured, the following restrictions apply to your li
- You can't [Import](cache-how-to-import-export-data.md#import) into the secondary linked cache. - You can't delete either linked cache, or the resource group that contains them, until you unlink the caches. For more information, see [Why did the operation fail when I tried to delete my linked cache?](cache-how-to-geo-replication.md#why-did-the-operation-fail-when-i-tried-to-delete-my-linked-cache) - If the caches are in different regions, network egress costs apply to the data moved across regions. For more information, see [How much does it cost to replicate my data across Azure regions?](cache-how-to-geo-replication.md#how-much-does-it-cost-to-replicate-my-data-across-azure-regions)-- Automatic failover doesn't occur between the primary and secondary linked cache. For more information and information on how to failover a client application, see [How does failing over to the secondary linked cache work?](cache-how-to-geo-replication.md#how-does-failing-over-to-the-secondary-linked-cache-work)
+- Failover isn't automatic. You must start the failover from the primary to the secondary linked cache. For more information, including how to fail over a client application, see [Initiate a failover from geo-primary to geo-secondary (preview)](cache-how-to-geo-replication.md#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
### Move
-1. To link two caches together for geo-replication, fist click **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, click **Add cache replication link** from **Geo-replication** on the left.
+1. To link two caches together for geo-replication, first select **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, select **Add cache replication link** from **Geo-replication** on the left.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-menu.png" alt-text="Add link":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-menu.png" alt-text="Screenshot showing the cache's Geo-replication menu.":::
1. Select the name of your intended secondary cache from the **Compatible caches** list. If your secondary cache isn't displayed in the list, verify that the [Geo-replication prerequisites](#prerequisites) for the secondary cache are met. To filter the caches by region, select the region in the map to display only those caches in the **Compatible caches** list.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link.png" alt-text="Geo-replication compatible caches":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link.png" alt-text="Screenshot showing compatible caches for linking with geo-replication.":::
You can also start the linking process or view details about the secondary cache by using the context menu.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link-context-menu.png" alt-text="Geo-replication context menu":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-select-link-context-menu.png" alt-text="Screenshot showing the Geo-replication context menu.":::
1. Select **Link** to link the two caches together and begin the replication process.
-
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-confirm-link.png" alt-text="Link caches":::
+
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-confirm-link.png" alt-text="Screenshot showing how to link caches for geo-replication.":::
### Verify 1. You can view the progress of the replication process using **Geo-replication** on the left.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-linking.png" alt-text="Linking status":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-linking.png" alt-text="Screenshot showing the current Linking status.":::
You can also view the linking status on the left, using **Overview**, for both the primary and secondary caches.
After geo-replication is configured, the following restrictions apply to your li
Once the replication process is complete, the **Link status** changes to **Succeeded**.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-link-successful.png" alt-text="Cache status":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-link-successful.png" alt-text="Screenshot showing cache linking status as Succeeded.":::
The primary linked cache remains available for use during the linking process. The secondary linked cache isn't available until the linking process completes.
-### Clean up source resources
+### Clean up source resources
Once your new cache in the targeted region is populated with all necessary data, remove the link between the two caches and delete the original instance.
-1. To remove the link between two caches and stop geo-replication, click **Unlink caches** from the **Geo-replication** on the left.
+1. To remove the link between two caches and stop geo-replication, select **Unlink caches** from the **Geo-replication** on the left.
- :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-unlink.png" alt-text="Unlink caches":::
+ :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-unlink.png" alt-text="Screenshot showing how to unlink caches.":::
When the unlinking process completes, the secondary cache is available for both reads and writes. >[!NOTE] >When the geo-replication link is removed, the replicated data from the primary linked cache remains in the secondary cache. >
->
-2. Delete the original instance.
+1. Delete the original instance.
## Create a new cache (All tiers) ### Prerequisites+ - Azure subscription - [create one for free](https://azure.microsoft.com/free/) ### Prepare+ If you don't need to maintain your data during the move, the easiest way to move regions is to create a new cache instance in the targeted region and connect your application to it. For example, if you use Redis as a look-aside cache of database records, you can easily rebuild the cache from scratch. ### Move [!INCLUDE [redis-cache-create](includes/redis-cache-create.md)]
-Finally, update your application to use the new instances.
+Finally, update your application to use the new instances.
-### Clean up source resources
-Once your new cache in the targeted region is running, delete the original instance.
+### Clean up source resources
+Once your new cache in the targeted region is running, delete the original instance.
## Export and import data with an RDB file (Premium, Enterprise, Enterprise Flash)+ Open-source Redis defines a standard mechanism for taking a snapshot of a cache's in-memory dataset and saving it to a file. This file, called RDB, can be read by another Redis cache. [Azure Cache for Redis Premium and Enterprise](cache-overview.md#service-tiers) supports importing data into a cache instance with RDB files. You can use an RDB file to transfer data from an existing cache to Azure Cache for Redis. > [!IMPORTANT]
Open-source Redis defines a standard mechanism for taking a snapshot of a cache'
> ### Prerequisites+ - Both caches are [Premium tier or Enterprise tier](cache-overview.md#service-tiers) caches. - The second cache is either the same cache size or a larger cache size than the original cache. - The Redis version of the cache you're exporting from should be the same or lower than the version of your new cache instance. ### Prepare+ To move your cache instance to another region, you'll need to create [a second premium cache instance](quickstart-create-redis.md) or [a second enterprise cache instance](quickstart-create-redis-enterprise.md) in the desired region. ### Move
-1. See [here](cache-how-to-import-export-data.md) for more information on how to import and export data in Azure Cache for Redis.
-2. Update your application to use the new cache instance.
+1. For more information on how to import and export data in Azure Cache for Redis, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md). A sketch of the export and import commands appears after this list.
+
+1. Update your application to use the new cache instance.
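As a sketch of what the export and import look like with Az PowerShell, assuming placeholder cache names and SAS URIs for the blob container and exported RDB file:

```azurepowershell
# Sketch: export the source cache to a blob container, then import the RDB file into the new cache.
Export-AzRedisCache -ResourceGroupName "<source-rg>" -Name "<source-cache>" `
    -Prefix "move" -Container "<container-sas-uri>"

Import-AzRedisCache -ResourceGroupName "<target-rg>" -Name "<target-cache>" `
    -Files @("<sas-uri-of-exported-rdb-blob>") -Force
```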
### Verify+ You can monitor the progress of the import operation by following the notifications from the Azure portal, or by viewing the events in the [audit log](../azure-monitor/essentials/activity-log.md).
-### Clean up source resources
+### Clean up source resources
+ Once your new cache in the targeted region is running, delete the original instance. ## Dual-write to two caches (Basic, Standard, and Premium)+ Rather than moving data directly between caches, you can use your application to write data to both an existing cache and a new one you're setting up. The application initially reads data from the existing cache initially. When the new cache has the necessary data, you switch the application to that cache and retire the old one. Let's say, for example, you use Redis as a session store and the application sessions are valid for seven days. After writing to the two caches for a week, you'll be certain the new cache contains all non-expired session information. You can safely rely on it from that point onward without concern over data loss. ### Prerequisites+ - The second cache is either the same cache size or a larger cache size than the original cache. ### Prepare+ To move your cache instance to another region, you'll need to [create a second cache instance](quickstart-create-redis.md) in the desired region. ### Move+ General steps to implement this option are: 1. Modify application code to write to both the new and the original instances.
-2. Continue reading data from the original instance until the new instance is sufficiently populated with data.
+1. Continue reading data from the original instance until the new instance is sufficiently populated with data.
-3. Update the application code to reading and writing from the new instance only.
+1. Update the application code to read from and write to the new instance only.
-### Clean up source resources
-Once your new cache in the targeted region is running, delete the original instance.
+### Clean up source resources
+Once your new cache in the targeted region is running, delete the original instance.
## Migrate programmatically (All tiers)
-You can create a custom migration process by programmatically reading data from an existing cache and writing them into Azure Cache for Redis. This [open-source tool](https://github.com/deepakverma/redis-copy) can be used to copy data from one Azure Cache for Redis instance to an another instance in a different Azure Cache region. A [compiled version](https://github.com/deepakverma/redis-copy/releases/download/alpha/Release.zip) is available as well. You may also find the source code to be a useful guide for writing your own migration tool.
+
+You can create a custom migration process by programmatically reading data from an existing cache and writing it into Azure Cache for Redis. This [open-source tool](https://github.com/deepakverma/redis-copy) can be used to copy data from one Azure Cache for Redis instance to another instance in a different Azure region. A [compiled version](https://github.com/deepakverma/redis-copy/releases/download/alpha/Release.zip) is available as well. You may also find the source code to be a useful guide for writing your own migration tool.
> [!NOTE]
-> This tool isn't officially supported by Microsoft.
->
+> This tool isn't officially supported by Microsoft.
### Prerequisites+ - The second cache is either the same cache size or a larger cache size than the original cache. ### Prepare
You can create a custom migration process by programmatically reading data from
- To move your cache instance to another region, you'll need to [create a second cache instance](quickstart-create-redis.md) in the desired region. ### Move+ After creating a VM in the region where the existing cache is located and creating a new cache in the desired region, the general steps to implement this option are: 1. Flush data from the new cache to ensure that it's empty. This step is required because the copy tool itself doesn't overwrite any existing key in the target cache.
After creating a VM in the region where the existing cache is located and creati
2. Use an application such as the open-source tool above to automate the copying of data from the source cache to the target. Remember that the copy process could take a while to complete depending on the size of your dataset.
-### Clean up source resources
+### Clean up source resources
+ Once your new cache in the targeted region is running, delete the original instance. ## Next steps Learn more about Azure Cache for Redis features.+ - [Geo-replication FAQ](cache-how-to-geo-replication.md#geo-replication-faq) - [Azure Cache for Redis service tiers](cache-overview.md#service-tiers) - [High availability for Azure Cache for Redis](cache-high-availability.md)--
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-python.md
Last updated 06/15/2022
ms.devlang: python
-adobe-target: true
-adobe-target-activity: DocsExpΓÇô386541ΓÇôA/BΓÇôEnhanced-Readability-QuickstartsΓÇô2.19.2021
-adobe-target-experience: Experience B
-adobe-target-content: ./create-first-function-cli-python-uiex
+zone_pivot_groups: python-mode-functions
+ # Quickstart: Create a Python function in Azure from the command line
+In this article, you use command-line tools to create a Python function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+
+This article covers both Python programming models supported by Azure Functions. Use the selector at the top to choose your programming model.
-In this article, you use command-line tools to create a Python function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+>[!NOTE]
+>The Python v2 programming model for Functions is currently in Preview. To learn more about the Python v2 programming model, see the [Developer Reference Guide](functions-reference-python.md).
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
Before you begin, you must have the following requirements in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). + The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.x.-++ The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.0.4785 or later. + One of the following tools for creating Azure resources: + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
Before you begin, you must have the following requirements in place:
+ The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later. + [Python versions that are supported by Azure Functions](supported-languages.md#languages-by-runtime-version).++ The [Azurite storage emulator](../storage/common/storage-use-azurite.md?tabs=npm#install-azurite). While you can also use an actual Azure Storage account, the article assumes you're using this emulator. ### Prerequisite check
Verify your prerequisites, which depend on whether you're using Azure CLI or Azu
# [Azure CLI](#tab/azure-cli) + In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools version is 4.x.-++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools version is 4.0.4785 or later. + Run `az --version` to check that the Azure CLI version is 2.4 or later. + Run `az login` to sign in to Azure and verify an active subscription.
You run all subsequent commands in this activated virtual environment.
In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations. In this section, you create a function project that contains a single function. 1. Run the `func init` command as follows to create a functions project in a folder named *LocalFunctionProj* with the specified runtime. ```console
In Azure Functions, a function project is a container for one or more individual
```console func templates list -l python ```
+1. Run the `func init` command as follows to create a functions project in a folder named *LocalFunctionProj* with the specified runtime and the specified programming model version.
+
+ ```console
+ func init LocalFunctionProj --python -m V2
+ ```
+
+1. Go to the project folder.
+
+ ```console
+ cd LocalFunctionProj
+ ```
+
+    This folder contains various files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
+
+1. The file `function_app.py` can contain all of the functions in your project. To start with, the file already contains one HTTP-triggered function.
+
+```python
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="HttpTrigger1")
+@app.route(route="hello")
+def test_function(req: func.HttpRequest) -> func.HttpResponse:
+    return func.HttpResponse("HttpTrigger1 function processed a request!")
+```
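The `@app.route(route="hello")` decorator means the Functions host serves this function at `/api/hello`. As a small sketch, after you start the host locally (described later in this article) you could call it like this, assuming the default local port 7071:

```powershell
# Sketch: call the locally running function (after `func start`) on the default port.
Invoke-RestMethod -Uri "http://localhost:7071/api/hello"
```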
### (Optional) Examine the file contents If desired, you can skip to [Run the function locally](#run-the-function-locally) and examine the file contents later. #### \_\_init\_\_.py *\_\_init\_\_.py* contains a `main()` Python function that's triggered according to the configuration in *function.json*.
If desired, you can change `scriptFile` to invoke a different Python file.
:::code language="json" source="~/functions-quickstart-templates/Functions.Templates/Templates/HttpTrigger-Python/function.json"::: Each binding requires a direction, a type, and a unique name. The HTTP trigger has an input binding of type [`httpTrigger`](functions-bindings-http-webhook-trigger.md) and output binding of type [`http`](functions-bindings-http-webhook-output.md).
+`function_app.py` is the entry point to the function app and the place where functions are stored or referenced. This file includes the configuration of triggers and bindings through decorators, together with the function code itself.
+
+For more information, see [Azure Functions HTTP triggers and bindings](./functions-bindings-http-webhook.md?tabs=python).
+
+## Start the storage emulator
+
+Before running the function locally, you must start the local Azurite storage emulator. You can skip this step if the `AzureWebJobsStorage` setting in the local.settings.json file is set to the connection string for an Azure Storage account.
+
+Use the following command to start the Azurite storage emulator:
+
+```cmd
+azurite
+```
+
+For more information, see [Run Azurite](../storage/common/storage-use-azurite.md?tabs=npm#run-azurite).
[!INCLUDE [functions-run-function-test-local-cli](../../includes/functions-run-function-test-local-cli.md)]
Use the following commands to create these items. Both Azure CLI and PowerShell
az login ```
- The [az login](/cli/azure/reference-index#az-login) command signs you into your Azure account.
+ The [`az login`](/cli/azure/reference-index#az-login) command signs you into your Azure account.
# [Azure PowerShell](#tab/azure-powershell)

```azurepowershell
Use the following commands to create these items. Both Azure CLI and PowerShell
+ ::: zone pivot="python-mode-decorators"
+ In the current v2 programming model preview, choose a region from one of the following locations: France Central, West Central US, North Europe, China East, East US, or North Central US.
+ ::: zone-end
+ > [!NOTE]
+ > You can't host Linux and Windows apps in the same resource group. If you have an existing resource group named `AzureFunctionsQuickstart-rg` with a Windows function app or web app, you must use a different resource group.
Use the following commands to create these items. Both Azure CLI and PowerShell
In the previous example, replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
- This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
+ This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also creates an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
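For reference, the Azure CLI command that creates such a function app generally looks like the following sketch. It assumes the resource group and storage account already exist, and the region, Python version, and names are placeholders rather than values prescribed by this article:

```azurecli
az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime python --runtime-version 3.9 --functions-version 4 --name <APP_NAME> --os-type linux --storage-account <STORAGE_NAME>
```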
[!INCLUDE [functions-publish-project-cli](../../includes/functions-publish-project-cli.md)]
+## Update app settings
+
+To use the Python v2 model in your function app, you need to add a new application setting in Azure named `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing`. This setting is already in your local.settings.json file.
+
+Run the following command to add this setting to your new function app in Azure.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"}
+```
+++
+In the previous example, replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively.
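If you want to confirm that the flag was applied, one option is to list the app settings and check for `AzureWebJobsFeatureFlags`; this is a hedged sketch using a standard Azure CLI command:

```azurecli
az functionapp config appsettings list --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --output table
```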
+
+## Verify in Azure
Run the following command to view near real-time [streaming logs](functions-run-local.md#enable-streaming-logs) in Application Insights in the Azure portal.
In a separate terminal window or in the browser, call the remote function again.
> [!div class="nextstepaction"]
> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-python)
-[Having issues? Let us know.](https://aka.ms/python-functions-qs-survey)
+Having issues with this article?
+ [Troubleshoot Python function apps in Azure Functions](recover-python-functions.md)
+ [Let us know](https://aka.ms/python-functions-qs-survey)
azure-functions Create First Function Vs Code Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md
Title: Create a Python function using Visual Studio Code - Azure Functions
description: Learn how to create a Python function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code.
Previously updated : 06/15/2022
Last updated : 10/24/2022
ms.devlang: python
-adobe-target: true
-adobe-target-activity: DocsExpΓÇô386541ΓÇôA/BΓÇôEnhanced-Readability-QuickstartsΓÇô2.19.2021
-adobe-target-experience: Experience B
-adobe-target-content: ./create-first-function-vs-code-python-uiex
+zone_pivot_groups: python-mode-functions
# Quickstart: Create a function in Azure with Python using Visual Studio Code

In this article, you use Visual Studio Code to create a Python function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+This article covers both Python programming models supported by Azure Functions. Use the selector at the top to choose your programming model.
+
+>[!NOTE]
+>The Python v2 programming model for Functions is currently in Preview. To learn more about the v2 programming model, see the [Developer Reference Guide](functions-reference-python.md).
+ Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. There's also a [CLI-based version](create-first-function-cli-python.md) of this article.
There's also a [CLI-based version](create-first-function-cli-python.md) of this
Before you begin, make sure that you have the following requirements in place:

+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools), version 4.0.4785 or a later version.
+ Python versions that are [supported by Azure Functions](supported-languages.md#languages-by-runtime-version). For more information, see [How to install Python](https://wiki.python.org/moin/BeginnersGuide/Download).
+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
+ The [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.
+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code, version 1.8.1 or later.
+ The [Azurite V3 extension](https://marketplace.visualstudio.com/items?itemName=Azurite.azurite) local storage emulator. While you can also use an actual Azure storage account, this article assumes you're using the Azurite emulator.

## <a name="create-an-azure-functions-project"></a>Create your local project
In this section, you use Visual Studio Code to create a local Azure Functions pr
:::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of create a new project window.":::
-1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
-
-1. Provide the following information at the prompts:
+2. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
+3. Provide the following information at the prompts:
|Prompt|Selection|
|--|--|
In this section, you use Visual Studio Code to create a local Azure Functions pr
|**Authorization level**| Choose `Anonymous`, which lets anyone call your function endpoint. For more information about the authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
|**Select how you would like to open your project**| Choose `Add to workspace`.|
-1. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=python#generated-project-files).
+4. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=python#generated-project-files).
+3. Provide the following information at the prompts:
+
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language**| Choose `Python (Programming Model V2)`.|
+ |**Select a Python interpreter to create a virtual environment**| Choose your preferred Python interpreter. If an option isn't shown, type in the full path to your Python binary.|
+ |**Select how you would like to open your project**| Choose `Add to workspace`.|
+
+4. Visual Studio Code uses the provided information and generates an Azure Functions project.
+
+5. Open the generated `function_app.py` project file, which contains your functions.
+
+6. Uncomment the `test_function` function, which is an HTTP triggered function.
+
+7. Replace the `app.route()` method call with the following code:
+
+ ```python
+ @app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
+ ```
+
+    This code enables your HTTP function endpoint to be called in Azure without having to provide an [authorization key](functions-bindings-http-webhook-trigger.md#authorization-keys). Local execution doesn't require authorization keys.
+
+ Your function code should now look like the following example:
+
+ ```python
+ @app.function_name(name="HttpTrigger1")
+ @app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
+ def test_function(req: func.HttpRequest) -> func.HttpResponse:
+ logging.info('Python HTTP trigger function processed a request.')
+
+ name = req.params.get('name')
+ if not name:
+ try:
+ req_body = req.get_json()
+ except ValueError:
+ pass
+ else:
+ name = req_body.get('name')
+
+ if name:
+ return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.")
+ else:
+ return func.HttpResponse(
+ "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
+ status_code=200
+ )
+ ```
+
+8. Open the local.settings.json project file and update the `AzureWebJobsStorage` setting as in the following example:
+
+ ```json
+ "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+ ```
+
+    This setting tells the local Functions host to use the storage emulator for the storage connection currently required by the v2 model. When you publish your project to Azure, the default storage account is used instead. If you're using an Azure Storage account during local development, set your storage account connection string here.
+
+## Start the emulator
+
+1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azurite: Start`.
+
+1. Check the bottom bar and verify that Azurite emulation services are running. If so, you can now run your function locally.
[!INCLUDE [functions-run-function-test-local-vs-code](../../includes/functions-run-function-test-local-vs-code.md)]
After you've verified that the function runs correctly on your local computer, i
[!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)]
+<!-- Go back to the shared include after preview
[!INCLUDE [functions-publish-project-vscode](../../includes/functions-publish-project-vscode.md)]
+-->
+## <a name="publish-the-project-to-azure"></a>Create the function app in Azure
+
+In this section, you create a function app and related resources in your Azure subscription.
+
+1. Choose the Azure icon in the Activity bar. Then in the **Resources** area, select the **+** icon and choose the **Create Function App in Azure** option.
+
+ ![Create a resource in your Azure subscription](../../includes/media/functions-publish-project-vscode/function-app-create-resource.png)
+
+1. Provide the following information at the prompts:
+
+ |Prompt|Selection|
+ |--|--|
+ |**Select subscription**| Choose the subscription to use. You won't see this prompt when you have only one subscription visible under **Resources**. |
+ |**Enter a globally unique name for the function app**| Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.|
+ |**Select a runtime stack**| Choose the language version on which you've been running locally. |
+ |**Select a location for new resources**| Choose a region for your function app.|
+
+ ::: zone pivot="python-mode-decorators"
+ In the current v2 programming model preview, choose a region from one of the following locations: France Central, West Central US, North Europe, China East, East US, or North Central US.
+ ::: zone-end
+
+ The extension shows the status of individual resources as they're being created in Azure in the **Azure: Activity Log** panel.
+
+ ![Log of Azure resource creation](../../includes/media/functions-publish-project-vscode/resource-activity-log.png)
+
+1. When the creation is complete, the following Azure resources are created in your subscription. The resources are named based on your function app name:
+
+ [!INCLUDE [functions-vs-code-created-resources](../../includes/functions-vs-code-created-resources.md)]
+
+ A notification is displayed after your function app is created and the deployment package is applied.
+
+ [!INCLUDE [functions-vs-code-create-tip](../../includes/functions-vs-code-create-tip.md)]
+
+## Deploy the project to Azure
++
+## Update app settings
+
+To use the Python v2 model in your function app, you need to add a new application setting in Azure named `AzureWebJobsFeatureFlags` with a value of `EnableWorkerIndexing`. This setting is already in your local.settings.json file.
+
+1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`.
+
+1. Choose your new function app, type `AzureWebJobsFeatureFlags` for the new app setting name, and press <kbd>Enter</kbd>.
+
+1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>.
+
+The setting is added to your new function app, which enables it to run the v2 model in Azure.
[!INCLUDE [functions-vs-code-run-remote](../../includes/functions-vs-code-run-remote.md)]
azure-functions Functions Bindings Http Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook.md
Functions 1.x apps automatically have a reference to the extension.
|Property |Default | Description |
|---|---|---|
-| customHeaders|none|Allows you to set custom headers in the HTTP response. The previous example adds the `X-Content-Type-Options` header to the response to avoid content type sniffing. |
+| customHeaders|none|Allows you to set custom headers in the HTTP response. The previous example adds the `X-Content-Type-Options` header to the response to avoid content type sniffing. This custom header applies to all HTTP triggered functions in the function app. |
|dynamicThrottlesEnabled|true<sup>\*</sup>|When enabled, this setting causes the request processing pipeline to periodically check system performance counters like `connections/threads/processes/memory/cpu/etc` and if any of those counters are over a built-in high threshold (80%), requests will be rejected with a `429 "Too Busy"` response until the counter(s) return to normal levels.<br/><sup>\*</sup>The default in a Consumption plan is `true`. The default in a Dedicated plan is `false`.| |hsts|not enabled|When `isEnabled` is set to `true`, the [HTTP Strict Transport Security (HSTS) behavior of .NET Core](/aspnet/core/security/enforcing-ssl?tabs=visual-studio#hsts) is enforced, as defined in the [`HstsOptions` class](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions). The above example also sets the [`maxAge`](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions.maxage#Microsoft_AspNetCore_HttpsPolicy_HstsOptions_MaxAge) property to 10 days. Supported properties of `hsts` are: <table><tr><th>Property</th><th>Description</th></tr><tr><td>excludedHosts</td><td>A string array of host names for which the HSTS header isn't added.</td></tr><tr><td>includeSubDomains</td><td>Boolean value that indicates whether the includeSubDomain parameter of the Strict-Transport-Security header is enabled.</td></tr><tr><td>maxAge</td><td>String that defines the max-age parameter of the Strict-Transport-Security header.</td></tr><tr><td>preload</td><td>Boolean that indicates whether the preload parameter of the Strict-Transport-Security header is enabled.</td></tr></table>| |maxConcurrentRequests|100<sup>\*</sup>|The maximum number of HTTP functions that are executed in parallel. This value allows you to control concurrency, which can help manage resource utilization. For example, you might have an HTTP function that uses a large number of system resources (memory/cpu/sockets) such that it causes issues when concurrency is too high. Or you might have a function that makes outbound requests to a third-party service, and those calls need to be rate limited. In these cases, applying a throttle here can help. <br/><sup>*</sup>The default for a Consumption plan is 100. The default for a Dedicated plan is unbounded (`-1`).|
Functions 1.x apps automatically have a reference to the extension.
- [Return an HTTP response from a function](./functions-bindings-http-webhook-output.md)

[extension bundle]: ./functions-bindings-register.md#extension-bundles
-[Update your extensions]: ./functions-bindings-register.md
+[Update your extensions]: ./functions-bindings-register.md
azure-functions Functions Bindings Triggers Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-triggers-python.md
+
+ Title: Python V2 model Azure Functions triggers and bindings
+description: Provides examples of how to define Python triggers and bindings in Azure Functions using the preview v2 model
+ Last updated : 10/25/2022
+ms.devlang: python
+++
+# Python V2 model Azure Functions triggers and bindings (preview)
+
+The new Python v2 programming model in Azure Functions is intended to provide better alignment with Python development principles and with commonly used Python frameworks.
+
+The improved v2 programming model requires fewer files than the default model (v1), and specifically eliminates the need for a configuration file (`function.json`). Instead, triggers and bindings are represented in the `function_app.py` file as decorators. Moreover, functions can be logically organized with support for multiple functions to be stored in the same file. Functions within the same function application can also be stored in different files, and be referenced as blueprints.
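As a minimal sketch of what this looks like in practice (function names, routes, and the timer schedule below are illustrative, not required values), a single `function_app.py` file might declare two functions side by side:

```python
import azure.functions as func

app = func.FunctionApp()

@app.function_name(name="HttpExample")
@app.route(route="hello")
def hello(req: func.HttpRequest) -> func.HttpResponse:
    # HTTP-triggered function declared entirely through decorators.
    return func.HttpResponse("Hello from the v2 programming model!")

@app.function_name(name="TimerExample")
@app.schedule(schedule="0 */5 * * * *", arg_name="mytimer",
              run_on_startup=False, use_monitor=False)
def timer_example(mytimer: func.TimerRequest) -> None:
    # Timer-triggered function living in the same file; runs every five minutes.
    pass
```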
+
+To learn more about using the new Python programming model for Azure Functions, see the [Azure Functions Python developer guide](./functions-reference-python.md). In addition to the documentation, [hints](https://aka.ms/functions-python-hints) are available in code editors that support type checking with .pyi files.
+
+This article contains example code snippets that define various triggers and bindings using the Python v2 programming model. To be able to run the code snippets below, ensure the following:
+
+- The function application is defined and named `app`.
+- Confirm that the parameters within the trigger reflect values that correspond with your storage account.
+- The name of the file the function is in must be `function_app.py`.
+
+To create your first function in the new v2 model, see one of these quickstart articles:
+++ [Get started with Visual Studio](./create-first-function-vs-code-python.md)++ [Get started command prompt](./create-first-function-cli-python.md)+
+## Blob trigger
+
+The following code snippet defines a function triggered from Azure Blob Storage:
+
+```python
+import logging
+import azure.functions as func
+app = func.FunctionApp()
+@app.function_name(name="BlobTrigger1")
+@app.blob_trigger(arg_name="myblob", path="samples-workitems/{name}",
+ connection="<STORAGE_CONNECTION_SETTING>")
+def test_function(myblob: func.InputStream):
+ logging.info(f"Python blob trigger function processed blob \n"
+ f"Name: {myblob.name}\n"
+ f"Blob Size: {myblob.length} bytes")
+```
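Note that the `connection` value names an application setting that holds the connection string, rather than the connection string itself. When running locally, that setting typically lives in *local.settings.json*; the following is a minimal sketch, assuming a setting named `STORAGE_CONNECTION_SETTING`:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsFeatureFlags": "EnableWorkerIndexing",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "STORAGE_CONNECTION_SETTING": "<your-storage-connection-string>"
  }
}
```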
+
+## Azure Cosmos DB trigger
+
+The following code snippet defines a function triggered from an Azure Cosmos DB (SQL API):
+
+```python
+import logging
+import azure.functions as func
+app = func.FunctionApp()
+@app.function_name(name="CosmosDBTrigger1")
+@app.cosmos_db_trigger(arg_name="documents", database_name="<DB_NAME>", collection_name="<COLLECTION_NAME>", connection_string_setting="<COSMOS_CONNECTION_SETTING>",
+ lease_collection_name="leases", create_lease_collection_if_not_exists="true")
+def test_function(documents: func.DocumentList) -> str:
+ if documents:
+ logging.info('Document id: %s', documents[0]['id'])
+```
+
+## Azure EventHub trigger
+
+The following code snippet defines a function triggered from an event hub instance:
+
+```python
+import logging
+import azure.functions as func
+app = func.FunctionApp()
+@app.function_name(name="EventHubTrigger1")
+@app.event_hub_message_trigger(arg_name="myhub", event_hub_name="samples-workitems",
+ connection="<EVENT_HUB_CONNECTION_SETTING>")
+def test_function(myhub: func.EventHubEvent):
+ logging.info('Python EventHub trigger processed an event: %s',
+ myhub.get_body().decode('utf-8'))
+```
+
+## HTTP trigger
+
+The following code snippet defines an HTTP triggered function:
+
+```python
+import azure.functions as func
+import logging
+app = func.FunctionApp(auth_level=func.AuthLevel.ANONYMOUS)
+@app.function_name(name="HttpTrigger1")
+@app.route(route="hello")
+def test_function(req: func.HttpRequest) -> func.HttpResponse:
+ logging.info('Python HTTP trigger function processed a request.')
+ name = req.params.get('name')
+ if not name:
+ try:
+ req_body = req.get_json()
+ except ValueError:
+ pass
+ else:
+ name = req_body.get('name')
+ if name:
+ return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.")
+ else:
+ return func.HttpResponse(
+ "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
+ status_code=200
+ )
+```
+
+## Azure Queue Storage trigger

The following code snippet defines a function triggered from a message in Azure Queue Storage:
+
+```python
+import logging
+import azure.functions as func
+app = func.FunctionApp()
+@app.function_name(name="QueueTrigger1")
+@app.queue_trigger(arg_name="msg", queue_name="python-queue-items",
+ connection="")
+def test_function(msg: func.QueueMessage):
+    logging.info('Python queue trigger processed a message: %s',
+                 msg.get_body().decode('utf-8'))
+```
+
+## Azure Service Bus queue trigger

The following code snippet defines a function triggered from a message in an Azure Service Bus queue:
+
+```python
+import logging
+import azure.functions as func
+app = func.FunctionApp()
+@app.function_name(name="ServiceBusQueueTrigger1")
+@app.service_bus_queue_trigger(arg_name="msg", queue_name="myinputqueue", connection="")
+def test_function(msg: func.ServiceBusMessage):
+ logging.info('Python ServiceBus queue trigger processed message: %s',
+ msg.get_body().decode('utf-8'))
+```
+
+## Azure Service Bus topic trigger

The following code snippet defines a function triggered from a message in an Azure Service Bus topic subscription:
+
+```python
+import logging
+import azure.functions as func
+app = func.FunctionApp()
+@app.function_name(name="ServiceBusTopicTrigger1")
+@app.service_bus_topic_trigger(arg_name="message", topic_name="mytopic", connection="", subscription_name="testsub")
+def test_function(message: func.ServiceBusMessage):
+ message_body = message.get_body().decode("utf-8")
+ logging.info("Python ServiceBus topic trigger processed message.")
+ logging.info("Message Body: " + message_body)
+```
+
+## Timer trigger

The following code snippet defines a function that runs on a timer schedule:
+
+```python
+import datetime
+import logging
+import azure.functions as func
+app = func.FunctionApp()
+@app.function_name(name="mytimer")
+@app.schedule(schedule="0 */5 * * * *", arg_name="mytimer", run_on_startup=True,
+ use_monitor=False)
+def test_function(mytimer: func.TimerRequest) -> None:
+ utc_timestamp = datetime.datetime.utcnow().replace(
+ tzinfo=datetime.timezone.utc).isoformat()
+ if mytimer.past_due:
+ logging.info('The timer is past due!')
+ logging.info('Python timer trigger function ran at %s', utc_timestamp)
+```
+## Next steps
+ [Python developer guide](./functions-reference-python.md)
+ [Get started with Visual Studio Code](./create-first-function-vs-code-python.md)
+ [Get started with the command prompt](./create-first-function-cli-python.md)
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
Replace `<TARGET_VERSION>` in the example with a specific version of the package
## Add a function to your project
-You can add a new function to an existing project by using one of the predefined Functions triggers templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
+You can add a new function to an existing project by using one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
The results of this action depend on your project's language:
A new folder is created in the project. The folder contains a new function.json
# [Python](#tab/python)
-A new folder is created in the project. The folder contains a new function.json file and the new Python code file.
+The results depend on the Python programming model. For more information, see the [Azure Functions Python developer guide](./functions-reference-python.md).
+
+**Python v1**: A new folder is created in the project. The folder contains a new function.json file and the new Python code file.
+
+**Python v2**: New function code is added either to the default function_app.py file or to another Python file you selected.
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Title: Python developer reference for Azure Functions
-description: Understand how to develop functions with Python.
+description: Understand how to develop functions with Python
Last updated 05/25/2022
ms.devlang: python
+zone_pivot_groups: python-mode-functions
# Azure Functions Python developer guide
-This article is an introduction to developing for Azure Functions by using Python. It assumes that you've already read the [Azure Functions developer guide](functions-reference.md).
+This article is an introduction to developing Azure Functions using Python. The content below assumes that you've already read the [Azure Functions developer guide](functions-reference.md).
-As a Python developer, you might also be interested in one of the following articles:
+> [!IMPORTANT]
+> This article supports both the v1 and v2 programming model for Python in Azure Functions.
+> The v2 programming model is currently in preview.
+> While the v1 model uses a function.json file to define functions, the new v2 model lets you instead use a decorator-based approach. This new approach results in a simpler file structure and a more code-centric approach. Choose the **v2** selector at the top of the article to learn about this new programming model.
+
+As a Python developer, you may also be interested in one of the following articles:
| Getting started | Concepts| Scenarios/Samples |
|--|--|--|
-| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li></ul> | <ul><li>[Image classification with PyTorch](machine-learning-pytorch.md)</li><li>[Azure Automation sample](/samples/azure-samples/azure-functions-python-list-resource-groups/azure-functions-python-sample-list-resource-groups/)</li><li>[Machine learning with TensorFlow](functions-machine-learning-tensorflow.md)</li><li>[Browse Python samples](/samples/browse/?products=azure-functions&languages=python)</li></ul> |
+| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md?pivots=python-mode-configuration)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md?pivots=python-mode-configuration)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li></ul> | <ul><li>[Image classification with PyTorch](machine-learning-pytorch.md)</li><li>[Azure Automation sample](/samples/azure-samples/azure-functions-python-list-resource-groups/azure-functions-python-sample-list-resource-groups/)</li><li>[Machine learning with TensorFlow](functions-machine-learning-tensorflow.md)</li><li>[Browse Python samples](/samples/browse/?products=azure-functions&languages=python)</li></ul> |
+| Getting started | Concepts|
+|--|--|
+| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md?pivots=python-mode-decorators)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md?pivots=python-mode-decorators)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li></ul> |
> [!NOTE]
-> Although you can [develop your Python-based functions locally on Windows](create-first-function-vs-code-python.md#run-the-function-locally), Python functions are supported in Azure only when they're running on Linux. See the [list of supported operating system/runtime combinations](functions-scale.md#operating-systemruntime).
+> While you can develop your Python-based Azure Functions locally on Windows, Python is only supported on a Linux-based hosting plan when running in Azure. See the list of supported [operating system/runtime](functions-scale.md#operating-systemruntime) combinations.
## Programming model
-Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method called `main()` in the *\__init\__.py* file. You can also [specify an alternate entry point](#alternate-entry-point).
+Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method called `main()` in the `__init__.py` file. You can also [specify an alternate entry point](#alternate-entry-point).
-Data from triggers and bindings is bound to the function via method attributes that use the `name` property defined in the *function.json* file. For example, the following _function.json_ file describes a simple function triggered by an HTTP request named `req`:
+Data from triggers and bindings is bound to the function via method attributes using the `name` property defined in the *function.json* file. For example, the _function.json_ below describes a simple function triggered by an HTTP request named `req`:
:::code language="json" source="~/functions-quickstart-templates/Functions.Templates/Templates/HttpTrigger-Python/function.json":::
-Based on this definition, the *\__init\__.py* file that contains the function code might look like the following example:
+Based on this definition, the `__init__.py` file that contains the function code might look like the following example:
+
+```python
+def main(req):
+ user = req.params.get('user')
+ return f'Hello, {user}!'
+```
+
+You can also explicitly declare the attribute types and return type in the function using Python type annotations. This action helps you to use the IntelliSense and autocomplete features provided by many Python code editors.
+
+```python
+import azure.functions
++
+def main(req: azure.functions.HttpRequest) -> str:
+ user = req.params.get('user')
+ return f'Hello, {user}!'
+```
+
+Use the Python annotations included in the [azure.functions.*](/python/api/azure-functions/azure.functions) package to bind input and outputs to your methods.
+Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method in the `function_app.py` file.
+
+Triggers and bindings can be declared and used in a function in a decorator-based approach. They're defined in the same file, `function_app.py`, as the functions. As an example, the following _function_app.py_ file represents a function triggered by an HTTP request.
```python
+@app.function_name(name="HttpTrigger1")
+@app.route(route="req")
def main(req):
    user = req.params.get('user')
    return f'Hello, {user}!'
```
-You can also explicitly declare the attribute types and return type in the function by using Python type annotations. This action helps you to use the IntelliSense and autocomplete features that many Python code editors provide.
+You can also explicitly declare the attribute types and return type in the function using Python type annotations. This helps you use the IntelliSense and autocomplete features provided by many Python code editors.
```python
import azure.functions
+@app.function_name(name="HttpTrigger1")
+@app.route(route="req")
def main(req: azure.functions.HttpRequest) -> str:
    user = req.params.get('user')
    return f'Hello, {user}!'
```
-Use the Python annotations included in the [azure.functions.*](/python/api/azure-functions/azure.functions) package to bind inputs and outputs to your methods.
+At this time, only specific triggers and bindings are supported by the v2 programming model. Supported triggers and bindings are as follows.
+
+| Type | Trigger | Input Binding | Output Binding |
+| | | | |
+| HTTP | x | | |
+| Timer | x | | |
+| Azure Queue Storage | x | | x |
+| Azure Service Bus Topic | x | | x |
+| Azure Service Bus Queue | x | | x |
+| Azure Cosmos DB | x | x | x |
+| Azure Blob Storage | x | x | x |
+| Azure Event Grid | x | | x |
+
+To learn about known limitations with the v2 model and their workarounds, see [Troubleshoot Python errors in Azure Functions](./recover-python-functions.md?pivots=python-mode-decorators).
## Alternate entry point
-You can change the default behavior of a function by optionally specifying the `scriptFile` and `entryPoint` properties in the *function.json* file. For example, the following _function.json_ file tells the runtime to use the `customentry()` method in the _main.py_ file as the entry point for your function:
+You can change the default behavior of a function by optionally specifying the `scriptFile` and `entryPoint` properties in the *function.json* file. For example, the _function.json_ below tells the runtime to use the `customentry()` method in the _main.py_ file, as the entry point for your Azure Function.
```json
{
You can change the default behavior of a function by optionally specifying the `
}
```
+During the preview, the entry point is only in the `function_app.py` file. However, functions within the project can be referenced in *function_app.py* by using [blueprints](#blueprints) or by importing.
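As a hedged sketch of the plain-import approach (the module and helper names here are hypothetical, not part of the generated project), helper code defined elsewhere in the project can be imported into `function_app.py` and used inside a decorated function:

```python
# function_app.py
import azure.functions as func
from shared_code.my_helpers import build_greeting  # hypothetical helper module

app = func.FunctionApp()

@app.route(route="greet")
def greet(req: func.HttpRequest) -> func.HttpResponse:
    # The helper is ordinary Python code; only the decorated function is indexed.
    name = req.params.get("name", "world")
    return func.HttpResponse(build_greeting(name))
```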
+ ## Folder structure
-The recommended folder structure for an Azure Functions project in Python looks like the following example:
+The recommended folder structure for a Python Functions project looks like the following example:
```
 <project_root>/
The recommended folder structure for an Azure Functions project in Python looks
 | - requirements.txt
 | - Dockerfile
```
-The main project folder (*<project_root>*) can contain the following files:
+The main project folder (<project_root>) can contain the following files:
+
+* *local.settings.json*: Used to store app settings and connection strings when running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file).
+* *requirements.txt*: Contains the list of Python packages the system installs when publishing to Azure.
+* *host.json*: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md).
+* *.vscode/*: (Optional) Contains stored VS Code configurations. To learn more, see [VS Code settings](https://code.visualstudio.com/docs/getstarted/settings).
+* *.venv/*: (Optional) Contains a Python virtual environment used by local development.
+* *Dockerfile*: (Optional) Used when publishing your project in a [custom container](functions-create-function-linux-custom-image.md).
+* *tests/*: (Optional) Contains the test cases of your function app.
+* *.funcignore*: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains `.vscode/` to ignore your editor setting, `.venv/` to ignore local Python virtual environment, `tests/` to ignore test cases, and `local.settings.json` to prevent local app settings being published.
+
+Each function has its own code file and binding configuration file (function.json).
+The recommended folder structure for a Python Functions project looks like the following example:
-* *local.settings.json*: Used to store app settings and connection strings when functions are running locally. This file isn't published to Azure. To learn more, see [Local settings file](functions-develop-local.md#local-settings-file).
-* *requirements.txt*: Contains the list of Python packages that the system installs when you're publishing to Azure.
-* *host.json*: Contains configuration options that affect all functions in a function app instance. This file is published to Azure. Not all options are supported when functions are running locally. To learn more, see the [host.json reference](functions-host-json.md).
-* *.vscode/*: (Optional) Contains stored Visual Studio Code configurations. To learn more, see [User and Workspace Settings](https://code.visualstudio.com/docs/getstarted/settings).
-* *.venv/*: (Optional) Contains a Python virtual environment that's used for local development.
-* *Dockerfile*: (Optional) Used when you're publishing your project in a [custom container](functions-create-function-linux-custom-image.md).
+```
+ <project_root>/
+ | - .venv/
+ | - .vscode/
+ | - function_app.py
+ | - additional_functions.py
+ | - tests/
+ | | - test_my_function.py
+ | - .funcignore
+ | - host.json
+ | - local.settings.json
+ | - requirements.txt
+ | - Dockerfile
+```
+
+The main project folder (<project_root>) can contain the following files:
+* *.venv/*: (Optional) Contains a Python virtual environment used by local development.
+* *.vscode/*: (Optional) Contains stored VS Code configurations. To learn more, see [VS Code settings](https://code.visualstudio.com/docs/getstarted/settings).
+* *function_app.py*: This is the default location for all functions and their related triggers and bindings.
+* *additional_functions.py*: (Optional) Any other Python files that contain functions (usually for logical grouping) that are referenced in `function_app.py` through blueprints.
* *tests/*: (Optional) Contains the test cases of your function app.
-* *.funcignore*: (Optional) Declares files that shouldn't be published to Azure. Usually, this file contains `.vscode/` to ignore your editor setting, `.venv/` to ignore the local Python virtual environment, `tests/` to ignore test cases, and `local.settings.json` to prevent local app settings from being published.
+* *.funcignore*: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains `.vscode/` to ignore your editor setting, `.venv/` to ignore local Python virtual environment, `tests/` to ignore test cases, and `local.settings.json` to prevent local app settings being published.
+* *host.json*: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md).
+* *local.settings.json*: Used to store app settings and connection strings when running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file).
+* *requirements.txt*: Contains the list of Python packages the system installs when publishing to Azure.
+* *Dockerfile*: (Optional) Used when publishing your project in a [custom container](functions-create-function-linux-custom-image.md).
+
+When you deploy your project to a function app in Azure, the entire contents of the main project (*<project_root>*) folder should be included in the package, but not the folder itself, which means `host.json` should be in the package root. We recommend that you maintain your tests in a folder along with other functions, in this example `tests/`. For more information, see [Unit Testing](#unit-testing).
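As an illustration of the kind of test that might live in *tests/*, here's a minimal sketch that builds a synthetic request with `azure.functions.HttpRequest`. It assumes an HTTP-triggered handler named `main` that you can import and call directly and that returns an `HttpResponse`; the module name `my_functions` is hypothetical:

```python
# tests/test_http_function.py
import azure.functions as func
from my_functions import main  # hypothetical module containing the function under test

def test_main_greets_by_name():
    # Build a fake HTTP request the same way the Functions host would.
    req = func.HttpRequest(
        method="GET",
        body=None,
        url="/api/hello",
        params={"name": "Test"},
    )

    resp = main(req)

    assert resp.status_code == 200
    assert "Test" in resp.get_body().decode()
```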
+
+## Blueprints
+
+The v2 programming model introduces the concept of _blueprints_. A blueprint is a new class that's instantiated to register functions outside of the core function application. The functions registered in blueprint instances aren't indexed directly by the function runtime. To get these blueprint functions indexed, the function app needs to register the functions from the blueprint instances.
-Each function has its own code file and binding configuration file (*function.json*).
+Using blueprints provides the following benefits:
-When you deploy your project to a function app in Azure, the entire contents of the main project (*<project_root>*) folder should be included in the package, but not the folder itself. That means *host.json* should be in the package root. We recommend that you maintain your tests in a folder along with other functions. In this example, the folder is *tests/*. For more information, see [Unit testing](#unit-testing).
+* Lets you break up the function app into modular components, enabling you to define functions in multiple Python files and divide them into different components per file.
+* Provides extensible public function app interfaces to build and reuse your own APIs.
+
+The following example shows how to use blueprints:
+
+First, in an `http_blueprint.py` file, an HTTP triggered function is defined and added to a blueprint object.
+
+```python
+import logging
+
+import azure.functions as func
+
+bp = func.Blueprint()
+
+@bp.route(route="default_template")
+def default_template(req: func.HttpRequest) -> func.HttpResponse:
+ logging.info('Python HTTP trigger function processed a request.')
+
+ name = req.params.get('name')
+ if not name:
+ try:
+ req_body = req.get_json()
+ except ValueError:
+ pass
+ else:
+ name = req_body.get('name')
+
+ if name:
+ return func.HttpResponse(
+ f"Hello, {name}. This HTTP triggered function "
+ f"executed successfully.")
+ else:
+ return func.HttpResponse(
+ "This HTTP triggered function executed successfully. "
+ "Pass a name in the query string or in the request body for a"
+ " personalized response.",
+ status_code=200
+ )
+```
+
+Next, in `function_app.py`, the blueprint object is imported and its functions are registered to the function app.
+
+```python
+import azure.functions as func
+from http_blueprint import bp
+
+app = func.FunctionApp()
+
+app.register_functions(bp)
+```
+ ## Import behavior
-You can import modules in your function code by using both absolute and relative references. Based on the folder structure shown earlier, the following imports work from within the function file *<project_root>\my\_first\_function\\_\_init\_\_.py*:
+You can import modules in your function code using both absolute and relative references. Based on the folder structure shown above, the following imports work from within the function file *<project_root>\my\_first\_function\\_\_init\_\_.py*:
```python from shared_code import my_first_helper_function #(absolute)
from . import example #(relative)
``` > [!NOTE]
-> The *shared_code/* folder needs to contain an *\_\_init\_\_.py* file to mark it as a Python package when you're using absolute import syntax.
+> The *shared_code/* folder needs to contain an \_\_init\_\_.py file to mark it as a Python package when using absolute import syntax.
-The following *\_\_app\_\_* import and beyond top-level relative import are deprecated. The static type checker and the Python test frameworks don't support them.
+The following \_\_app\_\_ import and beyond top-level relative import are deprecated, because they aren't supported by the static type checker or by Python test frameworks:
```python from __app__.shared_code import my_first_helper_function #(deprecated __app__ import)
from __app__.shared_code import my_first_helper_function #(deprecated __app__ im
from ..shared_code import my_first_helper_function #(deprecated beyond top-level relative import) ``` + ## Triggers and inputs
-Inputs are divided into two categories in Azure Functions: trigger input and other binding input. Although they're different in the *function.json* file, usage is identical in Python code. When functions are running locally, connection strings or secrets required by trigger and input sources are maintained in the `Values` collection of the *local.settings.json* file. When functions are running in Azure, those same connection strings or secrets are stored securely as [application settings](functions-how-to-use-azure-function-app-settings.md#settings).
+Inputs are divided into two categories in Azure Functions: trigger input and other input. Although they're different in the `function.json` file, usage is identical in Python code. Connection strings or secrets for trigger and input sources map to values in the `local.settings.json` file when running locally, and the application settings when running in Azure.
-The following example code demonstrates the difference between the two:
+For example, the following code demonstrates the difference between the two:
```json // function.json
def main(req: func.HttpRequest,
logging.info(f'Python HTTP triggered function processed: {obj.read()}') ```
-When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from Azure Blob Storage based on the ID in the route URL and made available as `obj` in the function body. Here, the storage account specified is the connection string found in the `AzureWebJobsStorage` app setting, which is the same storage account that the function app uses.
+When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from the Azure Blob Storage based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the storage account specified is the connection string found in the AzureWebJobsStorage app setting, which is the same storage account used by the function app.
+Inputs are divided into two categories in Azure Functions: trigger input and other input. Although they're defined using different decorators, usage is similar in Python code. Connection strings or secrets for trigger and input sources map to values in the `local.settings.json` file when running locally, and the application settings when running in Azure.
+
+As an example, the following code demonstrates the difference between the two:
+
+```json
+// local.settings.json
+{
+ "IsEncrypted": false,
+ "Values": {
+ "FUNCTIONS_WORKER_RUNTIME": "python",
+ "AzureWebJobsStorage": "<azure-storage-connection-string>"
+ }
+}
+```
+
+```python
+# function_app.py
+import azure.functions as func
+import logging
+
+app = func.FunctionApp()
+
+@app.route(route="req")
+@app.read_blob(arg_name="obj", path="samples/{id}", connection="AzureWebJobsStorage")
+
+def main(req: func.HttpRequest,
+ obj: func.InputStream):
+ logging.info(f'Python HTTP triggered function processed: {obj.read()}')
+```
+
+When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from the Azure Blob Storage based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the storage account specified is the connection string found in the AzureWebJobsStorage app setting, which is the same storage account used by the function app.
+
+At this time, only specific triggers and bindings are supported by the v2 programming model. Supported triggers and bindings are as follows.
+
+| Type | Trigger | Input Binding | Output Binding |
+| | | | |
+| HTTP | x | | |
+| Timer | x | | |
+| Azure Queue Storage | x | | x |
+| Azure Service Bus topic | x | | x |
+| Azure Service Bus queue | x | | x |
+| Azure Cosmos DB | x | x | x |
+| Azure Blob Storage | x | x | x |
+| Azure Event Grid | x | | x |
+
+To learn more about defining triggers and bindings in the v2 model, see this [documentation](https://github.com/Azure/azure-functions-python-library/blob/dev/docs/ProgModelSpec.pyi).
+ ## Outputs
-Output can be expressed in the return value and in output parameters. If there's only one output, we recommend using the return value. For multiple outputs, you'll have to use output parameters.
+Output can be expressed both in return value and output parameters. If there's only one output, we recommend using the return value. For multiple outputs, you'll have to use output parameters.
-To use the return value of a function as the value of an output binding, set the `name` property of the binding to `$return` in *function.json*.
+To use the return value of a function as the value of an output binding, the `name` property of the binding should be set to `$return` in `function.json`.
-To produce multiple outputs, use the `set()` method provided by the [azure.functions.Out](/python/api/azure-functions/azure.functions.out) interface to assign a value to the binding. For example, the following function can push a message to a queue and return an HTTP response:
+To produce multiple outputs, use the `set()` method provided by the [`azure.functions.Out`](/python/api/azure-functions/azure.functions.out) interface to assign a value to the binding. For example, the following function can push a message to a queue and also return an HTTP response.
```json
{
def main(req: func.HttpRequest,
    return message
```
+Output can be expressed both in return value and output parameters. If there's only one output, we recommend using the return value. For multiple outputs, you'll have to use output parameters.
+
+To produce multiple outputs, use the `set()` method provided by the [`azure.functions.Out`](/python/api/azure-functions/azure.functions.out) interface to assign a value to the binding. For example, the following function writes a message to a blob through an output binding and also returns an HTTP response.
+
+```python
+# function_app.py
+import azure.functions as func
+
+app = func.FunctionApp()
+
+# An HTTP route decorator is assumed here so that the function has a trigger
+# and can return an HTTP response; the route name is illustrative.
+@app.route(route="message")
+@app.write_blob(arg_name="msg", path="output-container/{name}",
+                connection="AzureWebJobsStorage")
+
+def test_function(req: func.HttpRequest,
+ msg: func.Out[str]) -> str:
+
+ message = req.params.get('body')
+ msg.set(message)
+ return message
+```
+ ## Logging Access to the Azure Functions runtime logger is available via a root [`logging`](https://docs.python.org/3/library/logging.html#module-logging) handler in your function app. This logger is tied to Application Insights and allows you to flag warnings and errors that occur during the function execution.
-The following example logs an info message when the function is invoked via an HTTP trigger:
+The following example logs an info message when the function is invoked via an HTTP trigger.
```python import logging
More logging methods are available that let you write to the console at differen
| Method | Description |
| - | - |
-| `critical(_message_)` | Writes a message with level CRITICAL on the root logger. |
-| `error(_message_)` | Writes a message with level ERROR on the root logger. |
-| `warning(_message_)` | Writes a message with level WARNING on the root logger. |
-| `info(_message_)` | Writes a message with level INFO on the root logger. |
-| `debug(_message_)` | Writes a message with level DEBUG on the root logger. |
+| **`critical(_message_)`** | Writes a message with level CRITICAL on the root logger. |
+| **`error(_message_)`** | Writes a message with level ERROR on the root logger. |
+| **`warning(_message_)`** | Writes a message with level WARNING on the root logger. |
+| **`info(_message_)`** | Writes a message with level INFO on the root logger. |
+| **`debug(_message_)`** | Writes a message with level DEBUG on the root logger. |
To learn more about logging, see [Monitor Azure Functions](functions-monitoring.md). ### Log custom telemetry
-By default, the Azure Functions runtime collects logs and other telemetry data that your functions generate. This telemetry ends up as traces in Application Insights. By default, [triggers and bindings](functions-triggers-bindings.md#supported-bindings) also collect request and dependency telemetry for certain Azure services.
-
-To collect custom request and custom dependency telemetry outside bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure). The Azure Functions extension sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
+By default, the Functions runtime collects logs and other telemetry data generated by your functions. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services are also collected by default by [triggers and bindings](functions-triggers-bindings.md#supported-bindings). To collect custom request and custom dependency telemetry outside of bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure). This extension sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
>[!NOTE] >To use the OpenCensus Python extensions, you need to enable [Python worker extensions](#python-worker-extensions) in your function app by setting `PYTHON_ENABLE_WORKER_EXTENSIONS` to `1`. You also need to switch to using the Application Insights connection string by adding the [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string) setting to your [application settings](functions-how-to-use-azure-function-app-settings.md#settings), if it's not already there.
def main(req, context):
}) ```
-## HTTP trigger and bindings
+## HTTP trigger
-The HTTP trigger is defined in the *function.json* file. The `name` parameter of the binding must match the named parameter in the function.
+The HTTP trigger is defined in the function.json file. The `name` of the binding must match the named parameter in the function.
+In the previous examples, a binding name `req` is used. This parameter is an [HttpRequest] object, and an [HttpResponse] object is returned.
-The previous examples use the binding name `req`. This parameter is an [HttpRequest] object, and an [HttpResponse] object is returned.
+From the [HttpRequest] object, you can get request headers, query parameters, route parameters, and the message body.
-From the `HttpRequest` object, you can get request headers, query parameters, route parameters, and the message body.
-
-The following example is from the [HTTP trigger template for Python](https://github.com/Azure/azure-functions-templates/tree/dev/Functions.Templates/Templates/HttpTrigger-Python):
+The following example is from the [HTTP trigger template for Python](https://github.com/Azure/azure-functions-templates/tree/dev/Functions.Templates/Templates/HttpTrigger-Python).
```python def main(req: func.HttpRequest) -> func.HttpResponse:
def main(req: func.HttpRequest) -> func.HttpResponse:
) ```
-In this function, the value of the `name` query parameter is obtained from the `params` parameter of the `HttpRequest` object. The JSON-encoded message body is read using the `get_json` method.
+In this function, the value of the `name` query parameter is obtained from the `params` parameter of the [HttpRequest] object. The JSON-encoded message body is read using the `get_json` method.
+
+Likewise, you can set the `status_code` and `headers` for the response message in the returned [HttpResponse] object.
+The HTTP trigger is defined in the function.json file. The `name` of the binding must match the named parameter in the function.
+In the previous examples, a binding name `req` is used. This parameter is an [HttpRequest] object, and an [HttpResponse] object is returned.
-Likewise, you can set the `status_code` and `headers` information for the response message in the returned `HttpResponse` object.
+From the [HttpRequest] object, you can get request headers, query parameters, route parameters, and the message body.
+
+The following example is from the HTTP trigger template for the Python v2 programming model. It's the sample code provided when you create a function by using Core Tools or Visual Studio Code.
+
+```python
+import logging
+
+import azure.functions as func
+
+# Function app instance required by the v2 programming model.
+app = func.FunctionApp()
+
+@app.function_name(name="HttpTrigger1")
+@app.route(route="hello")
+def test_function(req: func.HttpRequest) -> func.HttpResponse:
+ logging.info('Python HTTP trigger function processed a request.')
+
+ name = req.params.get('name')
+ if not name:
+ try:
+ req_body = req.get_json()
+ except ValueError:
+ pass
+ else:
+ name = req_body.get('name')
+
+ if name:
+ return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.")
+ else:
+ return func.HttpResponse(
+ "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
+ status_code=200
+ )
+```
+
+In this function, the value of the `name` query parameter is obtained from the `params` parameter of the [HttpRequest] object. The JSON-encoded message body is read using the `get_json` method.
+
+Likewise, you can set the `status_code` and `headers` for the response message in the returned [HttpResponse] object.
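As a small illustration (the helper name here is ours, not from the template), both values are passed straight to the `HttpResponse` constructor:

```python
import json

import azure.functions as func


def make_json_response(payload: dict) -> func.HttpResponse:
    # Return JSON with an explicit status code and custom response headers.
    return func.HttpResponse(
        body=json.dumps(payload),
        status_code=201,
        headers={"X-Sample-Header": "sample-value"},
        mimetype="application/json",
    )
```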
+
+To pass a name in this example, paste the URL provided when running the function, and append `?name={name}` to it.
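For example, assuming the Core Tools default local endpoint and the `hello` route shown above, you could call the function with a short script like this (the `requests` package is used only for illustration):

```python
import requests

# Assumes the default local host/port used by Core Tools and the "hello" route.
resp = requests.get("http://localhost:7071/api/hello", params={"name": "Azure"})
print(resp.status_code, resp.text)
```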
+ ## Web frameworks You can use WSGI and ASGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions. This section shows how to modify your functions to support these frameworks.
-First, the *function.json* file must be updated to include `route` in the HTTP trigger, as shown in the following example:
+First, the function.json file must be updated to include a `route` in the HTTP trigger, as shown in the following example:
```json {
First, the *function.json* file must be updated to include `route` in the HTTP t
} ```
-The *host.json* file must also be updated to include an HTTP `routePrefix` value, as shown in the following example:
+The host.json file must also be updated to include an HTTP `routePrefix`, as shown in the following example.
```json {
The *host.json* file must also be updated to include an HTTP `routePrefix` value
} ```
-Update the Python code file *__init__.py*, based on the interface that your framework uses. The following example shows either an ASGI handler approach or a WSGI wrapper approach for Flask:
+Update the Python code file `__init__.py`, depending on the interface used by your framework. The following example shows either an ASGI handler approach or a WSGI wrapper approach for Flask:
# [ASGI](#tab/asgi)
def main(req: func.HttpRequest, context) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request.') return func.WsgiMiddleware(app).handle(req, context) ```
-For a full example, see [Using the Flask framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/).
+For a full example, see [Using Flask Framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/).
+++
+You can use ASGI- and WSGI-compatible frameworks such as FastAPI and Flask with your HTTP-triggered Python functions, as shown in the following example:
+
+# [ASGI](#tab/asgi)
+
+`AsgiFunctionApp` is the top-level function app class for constructing ASGI HTTP functions.
+
+```python
+# function_app.py
+
+import azure.functions as func
+from fastapi import FastAPI, Request, Response
+
+fast_app = FastAPI()
+
+@fast_app.get("/return_http_no_body")
+async def return_http_no_body():
+ return Response(content='', media_type="text/plain")
+
+app = func.AsgiFunctionApp(app=fast_app,
+ http_auth_level=func.AuthLevel.ANONYMOUS)
+```
+
+# [WSGI](#tab/wsgi)
+
+`WsgiFunctionApp` is the top-level function app class for constructing WSGI HTTP functions.
+
+```python
+# function_app.py
+
+import logging
+
+import azure.functions as func
+from flask import Flask, request, Response, redirect, url_for
+
+flask_app = Flask(__name__)
+logger = logging.getLogger("my-function")
+
+@flask_app.get("/return_http")
+def return_http():
+    return Response('<h1>Hello World™</h1>', mimetype='text/html')
+
+app = func.WsgiFunctionApp(app=flask_app.wsgi_app,
+ http_auth_level=func.AuthLevel.ANONYMOUS)
+```
## Scaling and performance
-For scaling and performance best practices for Python function apps, see [Improve throughput performance of Python apps in Azure Functions](python-scale-performance-reference.md).
+For scaling and performance best practices for Python function apps, see the [Python scale and performance article](python-scale-performance-reference.md).
## Context
def main(req: azure.functions.HttpRequest,
return f'{context.invocation_id}' ```
-The [Context](/python/api/azure-functions/azure.functions.context) class has the following string attributes:
+The [**Context**](/python/api/azure-functions/azure.functions.context) class has the following string attributes:
-- `function_directory`: Directory in which the function is running.
+`function_directory`
+The directory in which the function is running.
-- `function_name`: Name of the function.
+`function_name`
+Name of the function.
-- `invocation_id`: ID of the current function invocation.
+`invocation_id`
+ID of the current function invocation.
-- `trace_context`: Context for distributed tracing. For more information, see [Trace Context](https://www.w3.org/TR/trace-context/) on the W3C website.
+`trace_context`
+Context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/).
-- `retry_context`: Context for retries to the function. For more information, see [Retry policies](./functions-bindings-errors.md#retry-policies).
+`retry_context`
+Context for retries to the function. For more information, see [Retry policies](./functions-bindings-errors.md#retry-policies).
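For illustration, a minimal sketch (v1-style signature) that logs a few of these attributes:

```python
import logging

import azure.functions as func


def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    # Each of these attributes is a plain string on the Context object.
    logging.info('Function name: %s', context.function_name)
    logging.info('Invocation ID: %s', context.invocation_id)
    logging.info('Function directory: %s', context.function_directory)
    return func.HttpResponse(context.invocation_id)
```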
## Global variables
-It isn't guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. To cache the results of an expensive computation, declare it as a global variable:
+It isn't guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. To cache the results of an expensive computation, declare it as a global variable.
```python CACHED_DATA = None
def main(req):
## Environment variables
-In Azure Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code:
+In Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code.
| Method | Description | | | |
-| `os.environ["myAppSetting"]` | Tries to get the application setting by key name. It raises an error when unsuccessful. |
-| `os.getenv("myAppSetting")` | Tries to get the application setting by key name. It returns `null` when unsuccessful. |
+| **`os.environ["myAppSetting"]`** | Tries to get the application setting by key name, raising an error when unsuccessful. |
+| **`os.getenv("myAppSetting")`** | Tries to get the application setting by key name, returning `None` when unsuccessful. |
Both of these ways require you to declare `import os`.
def main(req: func.HttpRequest) -> func.HttpResponse:
For local development, application settings are [maintained in the local.settings.json file](functions-develop-local.md#local-settings-file).
-## Python version
+In Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code.
-Azure Functions supports the following Python versions. These are official Python distributions.
+| Method | Description |
+| | |
+| **`os.environ["myAppSetting"]`** | Tries to get the application setting by key name, raising an error when unsuccessful. |
+| **`os.getenv("myAppSetting")`** | Tries to get the application setting by key name, returning `None` when unsuccessful. |
-| Functions version | Python versions |
-| -- | -- |
-| 4.x | 3.9<br/> 3.8<br/>3.7 |
-| 3.x | 3.9<br/> 3.8<br/>3.7<br/>3.6 |
-| 2.x | 3.7<br/>3.6 |
+Both of these ways require you to declare `import os`.
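Before the full trigger example that follows, here's a minimal comparison of the two access patterns (standard library only; `myAppSetting` is just an illustrative key):

```python
import os

# Raises KeyError if the setting isn't defined.
required_value = os.environ["myAppSetting"]

# Returns None, or the supplied default, if the setting isn't defined.
optional_value = os.getenv("myAppSetting", "a-default-value")
```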
-To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The `--functions-version` option sets the Azure Functions runtime version.
+The following example uses `os.environ["myAppSetting"]` to get the [application setting](functions-how-to-use-azure-function-app-settings.md#settings), with the key named `myAppSetting`:
-### Changing Python version
+```python
+import logging
+import os
+import azure.functions as func
-To set a Python function app to a specific language version, you need to specify the language and the version of the language in `linuxFxVersion` field in site configuration. For example, to change Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
+# Function app instance required by the v2 programming model (not shown in the original excerpt).
+app = func.FunctionApp()
+
+@app.function_name(name="HttpTrigger1")
+@app.route(route="req")
-To learn more about the Azure Functions runtime support policy, see [Language runtime support policy](./language-support-policy.md).
+def main(req: func.HttpRequest) -> func.HttpResponse:
-You can view and set `linuxFxVersion` from the Azure CLI by using the [az functionapp config show](/cli/azure/functionapp/config) command. Replace `<function_app>` with the name of your function app. Replace `<my_resource_group>` with the name of the resource group for your function app.
-```azurecli-interactive
-az functionapp config show --name <function_app> \
resource-group <my_resource_group>
+    # Get the setting named 'myAppSetting'
+    my_app_setting_value = os.environ["myAppSetting"]
+    logging.info(f'My app setting value: {my_app_setting_value}')
+
+    # Return a response so the annotated HttpResponse is actually produced.
+    return func.HttpResponse(f'My app setting value: {my_app_setting_value}')
```
-You see `linuxFxVersion` in the following output, which has been truncated for clarity:
+For local development, application settings are [maintained in the local.settings.json file](functions-develop-local.md#local-settings-file).
-```output
-{
- ...
- "kind": null,
- "limits": null,
- "linuxFxVersion": <LINUX_FX_VERSION>,
- "loadBalancing": "LeastRequests",
- "localMySqlEnabled": false,
- "location": "West US",
- "logsDirectorySizeLimit": 35,
- ...
-}
+When using the new programming model, enable the following app setting in the `local.settings.json` file, as shown here:
+
+```json
+"AzureWebJobsFeatureFlags": "EnableWorkerIndexing"
```
-You can update the `linuxFxVersion` setting in the function app by using the [az functionapp config set](/cli/azure/functionapp/config) command. In the following code:
+This setting isn't created automatically when you deploy the function app. You must explicitly create this setting in your function app in Azure for it to run using the v2 model.
-- Replace `<FUNCTION_APP>` with the name of your function app. -- Replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. -- Replace `<LINUX_FX_VERSION>` with the Python version that you want to use, prefixed by `python|`. For example: `python|3.9`.
+Multiple Python workers aren't supported in v2 at this time. This means that setting `FUNCTIONS_WORKER_PROCESS_COUNT` to greater than 1 isn't supported for functions that use the v2 model.
-```azurecli-interactive
-az functionapp config set --name <FUNCTION_APP> \
resource-group <RESOURCE_GROUP> \linux-fx-version <LINUX_FX_VERSION>
-```
-You can run the command from [Azure Cloud Shell](../cloud-shell/overview.md) by selecting **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to run the command after you use [az login](/cli/azure/reference-index#az-login) to sign in.
+## Python version
+
+Azure Functions supports the following Python versions:
+
+| Functions version | Python<sup>*</sup> versions |
+| -- | -- |
+| 4.x | 3.9<br/> 3.8<br/>3.7 |
+| 3.x | 3.9<br/> 3.8<br/>3.7<br/>3.6 |
+| 2.x | 3.7<br/>3.6 |
-The function app restarts after you change the site configuration.
+<sup>*</sup>Official Python distributions
-### Local Python version
+To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The Functions runtime version is set by the `--functions-version` option. The Python version is set when the function app is created and can't be changed.
-When running locally, the Azure Functions Core Tools uses the available Python version.
+When you run it locally, the runtime uses the available Python version.
+
+### Changing Python version
+
+To set a Python function app to a specific language version, you need to specify the language and the version of the language in the `linuxFxVersion` field in the site configuration. For example, to change a Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
+
+To learn how to view and change the `linuxFxVersion` site setting, see [How to target Azure Functions runtime versions](set-runtime-version.md#manual-version-updates-on-linux).
+
+For more general information, see the [Azure Functions runtime support policy](./language-support-policy.md) and [Supported languages in Azure Functions](./supported-languages.md).
## Package management
-When you're developing locally by using the Azure Functions Core Tools or Visual Studio Code, add the names and versions of the required packages to the *requirements.txt* file and install them by using `pip`.
+When developing locally using the Azure Functions Core Tools or Visual Studio Code, add the names and versions of the required packages to the `requirements.txt` file and install them using `pip`.
-For example, you can use the following requirements file and `pip` command to install the `requests` package from PyPI:
+For example, the following requirements file and pip command can be used to install the `requests` package from PyPI.
```txt requests==2.19.1
pip install -r requirements.txt
## Publishing to Azure
-When you're ready to publish, make sure that all your publicly available dependencies are listed in the *requirements.txt* file. This file is at the root of your project directory.
+When you're ready to publish, make sure that all your publicly available dependencies are listed in the requirements.txt file. You can locate this file at the root of your project directory.
-You can also find project files and folders that are excluded from publishing, including the virtual environment folder, in the root directory of your project.
+Project files and folders that are excluded from publishing, including the virtual environment folder, are located in the root directory of your project.
-Three build actions are supported for publishing your Python project to Azure: remote build, local build, and builds that use custom dependencies.
+There are three build actions supported for publishing your Python project to Azure: remote build, local build, and builds using custom dependencies.
-You can also use Azure Pipelines to build your dependencies and publish by using continuous delivery (CD). To learn more, see [Continuous delivery by using Azure DevOps](functions-how-to-azure-devops.md).
+You can also use Azure Pipelines to build your dependencies and publish using continuous delivery (CD). To learn more, see [Continuous delivery with Azure Pipelines](functions-how-to-azure-devops.md).
### Remote build
-When you use a remote build, dependencies restored on the server and native dependencies match the production environment. This results in a smaller deployment package to upload. Use a remote build when you're developing Python apps on Windows. If your project has custom dependencies, you can [use a remote build with an extra index URL](#remote-build-with-extra-index-url).
+When you use remote build, dependencies restored on the server and native dependencies match the production environment. This results in a smaller deployment package to upload. Use remote build when developing Python apps on Windows. If your project has custom dependencies, you can [use remote build with extra index URL](#remote-build-with-extra-index-url).
-Dependencies are obtained remotely based on the contents of the *requirements.txt* file. [Remote build](functions-deployment-technologies.md#remote-build) is the recommended build method. By default, Azure Functions Core Tools requests a remote build when you use the following [func azure functionapp publish](functions-run-local.md#publish) command to publish your Python project to Azure. Replace `<APP_NAME>` with the name of your function app in Azure.
+Dependencies are obtained remotely based on the contents of the requirements.txt file. [Remote build](functions-deployment-technologies.md#remote-build) is the recommended build method. By default, the Azure Functions Core Tools requests a remote build when you use the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish your Python project to Azure.
```bash func azure functionapp publish <APP_NAME> ```
-The [Azure Functions extension for Visual Studio Code](./create-first-function-vs-code-csharp.md#publish-the-project-to-azure) also requests a remote build by default.
+Remember to replace `<APP_NAME>` with the name of your function app in Azure.
+
+The [Azure Functions Extension for Visual Studio Code](./create-first-function-vs-code-csharp.md#publish-the-project-to-azure) also requests a remote build by default.
### Local build
-Dependencies are obtained locally based on the contents of the *requirements.txt* file. You can prevent a remote build by using the following [func azure functionapp publish](functions-run-local.md#publish) command to publish with a local build. Replace `<APP_NAME>` with the name of your function app in Azure.
+Dependencies are obtained locally based on the contents of the requirements.txt file. You can prevent doing a remote build by using the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish with a local build.
```command func azure functionapp publish <APP_NAME> --build local ```
-When you use the `--build local` option, project dependencies are read from the *requirements.txt* file. Those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in the upload of a larger deployment package to Azure. If you can't get the *requirements.txt* file by using Core Tools, you must use the custom dependencies option for publishing.
+Remember to replace `<APP_NAME>` with the name of your function app in Azure.
+
+When you use the `--build local` option, project dependencies are read from the requirements.txt file, and those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in a larger deployment package being uploaded to Azure. If for some reason you can't get the requirements.txt file by using Core Tools, you must use the custom dependencies option for publishing.
-We don't recommend using local builds when you're developing locally on Windows.
+We don't recommend using local builds when developing locally on Windows.
### Custom dependencies
-When your project has dependencies not found in the [Python Package Index](https://pypi.org/), there are two ways to build the project.
+When your project has dependencies not found in the [Python Package Index](https://pypi.org/), there are two ways to build the project. The way you build the project depends on how your dependencies are made available.
#### Remote build with extra index URL
-When your packages are available from an accessible custom package index, use a remote build. Before publishing, make sure to [create an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named `PIP_EXTRA_INDEX_URL`. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run `pip install` with the `--extra-index-url` option. To learn more, see the [Python pip install documentation](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format).
+When your packages are available from an accessible custom package index, use a remote build. Before publishing, make sure to [create an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named `PIP_EXTRA_INDEX_URL`. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run `pip install` using the `--extra-index-url` option. To learn more, see the [Python pip install documentation](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format).
You can also use basic authentication credentials with your extra package index URLs. To learn more, see [Basic authentication credentials](https://pip.pypa.io/en/stable/user_guide/#basic-authentication-credentials) in Python documentation.
-> [!NOTE]
-> If you need to change the base URL of the Python Package Index from the default of `https://pypi.org/simple`, you can do this by [creating an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named [`PIP_INDEX_URL`](functions-app-settings.md#pip_index_url) that points to a different package index URL. Like [`PIP_EXTRA_INDEX_URL`](functions-app-settings.md#pip_extra_index_url), [`PIP_INDEX_URL`](functions-app-settings.md#pip_index_url) is a pip-specific application setting that changes the source for pip to use.
-
+#### Install local packages
-#### Installing local packages
-
-If your project uses packages that aren't publicly available, you can make them available to your app by putting them in the *\_\_app\_\_/.python_packages* directory. Before publishing, run the following command to install the dependencies locally:
+If your project uses packages not publicly available to our tools, you can make them available to your app by putting them in the \_\_app\_\_/.python_packages directory. Before publishing, run the following command to install the dependencies locally:
```command pip install --target="<PROJECT_DIR>/.python_packages/lib/site-packages" -r requirements.txt ```
-When you're using custom dependencies, use the following `--no-build` publishing option because you've already installed the dependencies into the project folder. Replace `<APP_NAME>` with the name of your function app in Azure.
+When using custom dependencies, you should use the `--no-build` publishing option, since you've already installed the dependencies into the project folder.
```command func azure functionapp publish <APP_NAME> --no-build ```
+Remember to replace `<APP_NAME>` with the name of your function app in Azure.
+ ## Unit testing
-You can test functions written in Python the same way that you test other Python code: through standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the [azure.functions](https://pypi.org/project/azure-functions/) package. Because the `azure.functions` package isn't immediately available, be sure to install it via your *requirements.txt* file as described in the earlier [Package management](#package-management) section.
+Functions written in Python can be tested like other Python code using standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the `azure.functions` package. Since the [`azure.functions`](https://pypi.org/project/azure-functions/) package isn't immediately available, be sure to install it via your `requirements.txt` file as described in the [package management](#package-management) section above.
-Take *my_second_function* as an example. Following is a mock test of an HTTP triggered function.
+Take *my_second_function* as an example. Following is a mock test of an HTTP-triggered function:
-First, to create the *<project_root>/my_second_function/function.json* file and define this function as an HTTP trigger, use the following code:
+First, we need to create the *<project_root>/my_second_function/function.json* file and define this function as an HTTP trigger.
```json {
First, to create the *<project_root>/my_second_function/function.json* file and
} ```
-Now, you can implement *my_second_function* and *shared_code.my_second_helper_function*:
+Now, we can implement the *my_second_function* and the *shared_code.my_second_helper_function*.
```python # <project_root>/my_second_function/__init__.py
import logging
# Use absolute import to resolve shared_code modules from shared_code import my_second_helper_function
-# Define an HTTP trigger that accepts the ?value=<int> query parameter
+# Define an http trigger which accepts ?value=<int> query parameter
# Double the value and return the result in HttpResponse def main(req: func.HttpRequest) -> func.HttpResponse: logging.info('Executing my_second_function.')
def double(value: int) -> int:
return value * 2 ```
-You can start writing test cases for your HTTP trigger:
+We can start writing test cases for our HTTP trigger.
```python # <project_root>/tests/test_my_second_function.py
class TestFunction(unittest.TestCase):
) ```
-Inside your *.venv* Python virtual environment, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result.
+Inside your `.venv` Python virtual environment, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result.
+First, we need to create the *<project_root>/function_app.py* file and implement *my_second_function* as an HTTP trigger, along with *shared_code.my_second_helper_function*.
+
+```python
+# <project_root>/function_app.py
+import azure.functions as func
+import logging
+
+# Use absolute import to resolve shared_code modules
+from shared_code import my_second_helper_function
+
+app = func.FunctionApp()
+
+# Define http trigger which accepts ?value=<int> query parameter
+# Double the value and return the result in HttpResponse
+@app.function_name(name="my_second_function")
+@app.route(route="hello")
+def main(req: func.HttpRequest) -> func.HttpResponse:
+ logging.info('Executing my_second_function.')
+
+ initial_value: int = int(req.params.get('value'))
+ doubled_value: int = my_second_helper_function.double(initial_value)
+
+ return func.HttpResponse(
+ body=f"{initial_value} * 2 = {doubled_value}",
+ status_code=200
+ )
+```
+
+```python
+# <project_root>/shared_code/__init__.py
+# Empty __init__.py file marks shared_code folder as a Python package
+```
+
+```python
+# <project_root>/shared_code/my_second_helper_function.py
+
+def double(value: int) -> int:
+ return value * 2
+```
+
+We can start writing test cases for our HTTP trigger.
+
+```python
+# <project_root>/tests/test_my_second_function.py
+import unittest
+import azure.functions as func
+from function_app import main
+
+class TestFunction(unittest.TestCase):
+ def test_my_second_function(self):
+ # Construct a mock HTTP request.
+ req = func.HttpRequest(
+ method='GET',
+ body=None,
+ url='/api/my_second_function',
+ params={'value': '21'})
+
+ # Call the function.
+ func_call = main.build().get_user_function()
+ resp = func_call(req)
+
+ # Check the output.
+ self.assertEqual(
+ resp.get_body(),
+ b'21 * 2 = 42',
+ )
+```
+
+Inside your `.venv` Python virtual environment, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result.
+ ## Temporary files
-The `tempfile.gettempdir()` method returns a temporary folder, which on Linux is */tmp*. Your application can use this directory to store temporary files that your functions generate and use during execution.
+The `tempfile.gettempdir()` method returns a temporary folder, which on Linux is `/tmp`. Your application can use this directory to store temporary files generated and used by your functions during execution.
> [!IMPORTANT]
-> Files written to the temporary directory aren't guaranteed to persist across invocations. During scale-out, temporary files aren't shared between instances.
+> Files written to the temporary directory aren't guaranteed to persist across invocations. During scale out, temporary files aren't shared between instances.
-The following example creates a named temporary file in the temporary directory (*/tmp*):
+The following example creates a named temporary file in the temporary directory (`/tmp`):
```python import logging
from os import listdir
filesDirListInTemp = listdir(tempFilePath) ```
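Because the sample above is abbreviated in this view, here's an equivalent self-contained sketch of the same pattern (standard library only):

```python
import logging
import tempfile
from os import listdir


def write_temp_file(data: bytes) -> str:
    temp_dir = tempfile.gettempdir()  # /tmp on Linux
    # delete=False keeps the file after the handle closes, but it still won't
    # persist across invocations or be shared between scaled-out instances.
    with tempfile.NamedTemporaryFile(dir=temp_dir, delete=False) as handle:
        handle.write(data)
        temp_file_path = handle.name
    logging.info('Files in %s: %s', temp_dir, listdir(temp_dir))
    return temp_file_path
```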
-We recommend that you maintain your tests in a folder that's separate from the project folder. This action keeps you from deploying test code with your app.
+We recommend that you maintain your tests in a folder separate from the project folder. This action keeps you from deploying test code with your app.
## Preinstalled libraries
-A few libraries come with the runtime for Azure Functions on Python.
+There are a few libraries that come with the Python Functions runtime.
### Python Standard Library
-The Python Standard Library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems, these libraries are installed with Python. On Unix systems, package collections provide them.
+The Python Standard Library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems, these libraries are installed with Python. On Unix-based systems, they're provided by package collections.
-To view the full details of these libraries, use these links:
+To view the full details of these libraries, see the following links:
* [Python 3.6 Standard Library](https://docs.python.org/3.6/library/) * [Python 3.7 Standard Library](https://docs.python.org/3.7/library/) * [Python 3.8 Standard Library](https://docs.python.org/3.8/library/) * [Python 3.9 Standard Library](https://docs.python.org/3.9/library/)
-### Worker dependencies
+### Azure Functions Python worker dependencies
-The Python worker for Azure Functions requires a specific set of libraries. You can also use these libraries in your functions, but they aren't a part of the Python standard. If your functions rely on any of these libraries, they might not be available to your code when you're running outside Azure Functions. You can find a detailed list of dependencies in the `install\_requires` section in the [setup.py](https://github.com/Azure/azure-functions-python-worker/blob/dev/setup.py#L282) file.
+The Functions Python worker requires a specific set of libraries. You can also use these libraries in your functions, but they aren't a part of the Python standard. If your functions rely on any of these libraries, they may not be available to your code when running outside of Azure Functions. You can find a detailed list of dependencies in the **install\_requires** section in the [setup.py](https://github.com/Azure/azure-functions-python-worker/blob/dev/setup.py#L282) file.
> [!NOTE]
-> If your function app's *requirements.txt* file contains an `azure-functions-worker` entry, remove it. The Azure Functions platform automatically manages this worker, and we regularly update it with new features and bug fixes. Manually installing an old version of the worker in *requirements.txt* might cause unexpected problems.
+> If your function app's requirements.txt contains an `azure-functions-worker` entry, remove it. The functions worker is automatically managed by the Azure Functions platform, and we regularly update it with new features and bug fixes. Manually installing an old version of the worker in requirements.txt may cause unexpected issues.
> [!NOTE]
-> If your package contains certain libraries that might collide with the worker's dependencies (for example, protobuf, TensorFlow, or grpcio), configure [PYTHON_ISOLATE_WORKER_DEPENDENCIES](functions-app-settings.md#python_isolate_worker_dependencies-preview) to `1` in app settings to prevent your application from referring to the worker's dependencies. This feature is in preview.
+> If your package contains certain libraries that may collide with the worker's dependencies (for example, protobuf, tensorflow, or grpcio), configure [`PYTHON_ISOLATE_WORKER_DEPENDENCIES`](functions-app-settings.md#python_isolate_worker_dependencies-preview) to `1` in app settings to prevent your application from referring to the worker's dependencies. This feature is in preview.
-### Python library for Azure Functions
+### Azure Functions Python library
-Every Python worker update includes a new version of the [Python library for Azure Functions (azure.functions)](https://github.com/Azure/azure-functions-python-library). This approach makes it easier to continuously update your Python function apps, because each update is backward compatible. You can find a list of releases of this library in the [azure-functions information on the PyPi website](https://pypi.org/project/azure-functions/#history).
+Every Python worker update includes a new version of [Azure Functions Python library (azure.functions)](https://github.com/Azure/azure-functions-python-library). This approach makes it easier to continuously update your Python function apps, because each update is backwards-compatible. A list of releases of this library can be found in [azure-functions PyPi](https://pypi.org/project/azure-functions/#history).
-The runtime library version is fixed by Azure, and *requirements.txt* can't override it. The `azure-functions` entry in *requirements.txt* is only for linting and customer awareness.
+The runtime library version is fixed by Azure, and it can't be overridden by requirements.txt. The `azure-functions` entry in requirements.txt is only for linting and customer awareness.
-Use the following code to track the version of the Python library for Azure Functions in your runtime:
+Use the following code to track the actual version of the Python Functions library in your runtime:
```python getattr(azure.functions, '__version__', '< 1.2.1')
getattr(azure.functions, '__version__', '< 1.2.1')
### Runtime system libraries
-The following table lists preinstalled system libraries in Docker images for the Python worker:
+For a list of preinstalled system libraries in Python worker Docker images, see the links below:
| Functions runtime | Debian version | Python versions | ||||
-| Version 2.x | Stretch | [Python 3.7](https://github.com/Azure/azure-functions-docker/blob/dev/host/4/bullseye/amd64/python/python37/python37.Dockerfile) |
| Version 3.x | Buster | [Python 3.6](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python36/python36.Dockerfile)<br/>[Python 3.7](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python37/python37.Dockerfile)<br />[Python 3.8](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python38/python38.Dockerfile)<br/> [Python 3.9](https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/python/python39/python39.Dockerfile)| ## Python worker extensions
Extensions are imported in your function code much like a standard Python librar
| Scope | Description | | | |
-| **Application level** | When the extension is imported into any function trigger, it applies to every function execution in the app. |
-| **Function level** | Execution is limited to only the specific function trigger into which it's imported. |
+| **Application-level** | When imported into any function trigger, the extension applies to every function execution in the app. |
+| **Function-level** | Execution is limited to only the specific function trigger into which it's imported. |
-Review the information for an extension to learn more about the scope in which the extension runs.
+Review the information for a given extension to learn more about the scope in which the extension runs.
-Extensions implement a Python worker extension interface. This action lets the Python worker process call into the extension code during the function execution lifecycle.
+Extensions implement a Python worker extension interface. This action lets the Python worker process call into the extension code during the function execution lifecycle. To learn more, see [Creating extensions](#creating-extensions).
### Using extensions You can use a Python worker extension library in your Python functions by following these basic steps:
-1. Add the extension package in the *requirements.txt* file for your project.
+1. Add the extension package in the requirements.txt file for your project.
1. Install the library into your app. 1. Add the application setting `PYTHON_ENABLE_WORKER_EXTENSIONS`:
- + To add the setting locally, add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file).
- + To add the setting in Azure, add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to your [app settings](functions-how-to-use-azure-function-app-settings.md#settings).
+ + Locally: add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file).
+ + Azure: add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to your [app settings](functions-how-to-use-azure-function-app-settings.md#settings).
1. Import the extension module into your function trigger.
-1. Configure the extension instance, if needed. Configuration requirements should be called out in the extension's documentation.
+1. Configure the extension instance, if needed. Configuration requirements should be called out in the extension's documentation.
> [!IMPORTANT]
-> Microsoft doesn't support or warranty third-party Python worker extension libraries. Make sure that any extensions you use in your function app are trustworthy. You bear the full risk of using a malicious or poorly written extension.
+> Third-party Python worker extension libraries are not supported or warrantied by Microsoft. You must make sure that any extensions you use in your function app are trustworthy, and you bear the full risk of using a malicious or poorly written extension.
-Third parties should provide specific documentation on how to install and consume their specific extension in your function app. For a basic example of how to consume an extension, see [Consuming your extension](develop-python-worker-extensions.md#consume-your-extension-locally).
+Third parties should provide specific documentation on how to install and consume their specific extension in your function app. For a basic example of how to consume an extension, see [Consuming your extension](develop-python-worker-extensions.md#consume-your-extension-locally).
Here are examples of using extensions in a function app, by scope:
-# [Application level](#tab/application-level)
+# [Application-level](#tab/application-level)
```python # <project_root>/requirements.txt
AppExtension.configure(key=value)
def main(req, context): # Use context.app_ext_attributes here ```
-# [Function level](#tab/function-level)
+# [Function-level](#tab/function-level)
```python # <project_root>/requirements.txt function-level-extension==1.0.0
def main(req, context):
### Creating extensions
-Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer designs, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution. These extensions can be published either to the PyPI registry or to GitHub repositories.
+Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer designs, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution. These extensions can be published either to the PyPI registry or to GitHub repositories.
To learn how to create, package, publish, and consume a Python worker extension package, see [Develop Python worker extensions for Azure Functions](develop-python-worker-extensions.md).
An extension inherited from [`AppExtensionBase`](https://github.com/Azure/azure-
| Method | Description | | | |
-| `init` | Called after the extension is imported. |
-| `configure` | Called from function code when it's needed to configure the extension. |
-| `post_function_load_app_level` | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only. Any attempt to write to a local file in this directory fails. |
-| `pre_invocation_app_level` | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
-| `post_invocation_app_level` | Called right after the function execution finishes. The function context, function invocation arguments, and invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
+| **`init`** | Called after the extension is imported. |
+| **`configure`** | Called from function code when needed to configure the extension. |
+| **`post_function_load_app_level`** | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only, and any attempt to write to a local file in this directory fails. |
+| **`pre_invocation_app_level`** | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
+| **`post_invocation_app_level`** | Called right after the function execution completes. The function context, function invocation arguments, and the invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
#### Function-level extensions
An extension that inherits from [FuncExtensionBase](https://github.com/Azure/azu
| Method | Description | | | |
-| `__init__` | Called when an extension instance is initialized in a specific function. This method is the constructor of the extension. When you're implementing this abstract method, you might want to accept a `filename` parameter and pass it to the parent's `super().__init__(filename)` method for proper extension registration. |
-| `post_function_load` | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only. Any attempt to write to a local file in this directory fails. |
-| `pre_invocation` | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
-| `post_invocation` | Called right after the function execution finishes. The function context, function invocation arguments, and invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
+| **`__init__`** | This method is the constructor of the extension. It's called when an extension instance is initialized in a specific function. When implementing this abstract method, you may want to accept a `filename` parameter and pass it to the parent's method `super().__init__(filename)` for proper extension registration. |
+| **`post_function_load`** | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only, and any attempt to write to a local file in this directory fails. |
+| **`pre_invocation`** | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
+| **`post_invocation`** | Called right after the function execution completes. The function context, function invocation arguments, and the invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
## Cross-origin resource sharing
By default, a host instance for Python can process only one function invocation
## <a name="shared-memory"></a>Shared memory (preview)
-To improve throughput, Azure Functions lets your out-of-process Python language worker share memory with the host process. When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB.
+To improve throughput, Functions lets your out-of-process Python language worker share memory with the Functions host process. When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB.
-For example, you might enable shared memory to reduce bottlenecks when using Azure Blob Storage bindings to transfer payloads larger than 1 MB.
+For example, you might enable shared memory to reduce bottlenecks when using Blob storage bindings to transfer payloads larger than 1 MB.
-This functionality is available only for function apps running in Premium and Dedicated (Azure App Service) plans. To learn more, see [Shared memory](https://github.com/Azure/azure-functions-python-worker/wiki/Shared-Memory).
+This functionality is available only for function apps running in Premium and Dedicated (App Service) plans. To learn more, see [Shared memory](https://github.com/Azure/azure-functions-python-worker/wiki/Shared-Memory).
-## Known issues and FAQs
+## Known issues and FAQ
-Here's a list of troubleshooting guides for common issues:
+The following is a list of troubleshooting guides for common issues:
* [ModuleNotFoundError and ImportError](recover-python-functions.md#troubleshoot-modulenotfounderror)
-* [Can't import 'cygrpc'](recover-python-functions.md#troubleshoot-cannot-import-cygrpc)
-* [Troubleshoot Errors with Protobuf](recover-python-functions.md#troubleshoot-errors-with-protocol-buffers)
+* [Can't import 'cygrpc'](recover-python-functions.md#troubleshoot-cannot-import-cygrpc).
+
+Following is a list of troubleshooting guides for known issues with the v2 programming model:
+
+* [Couldn't load file or assembly](recover-python-functions.md#troubleshoot-could-not-load-file-or-assembly)
+* [Unable to resolve the Azure Storage connection named Storage](recover-python-functions.md#troubleshoot-unable-to-resolve-the-azure-storage-connection).
-All known issues and feature requests are tracked through the [GitHub issues](https://github.com/Azure/azure-functions-python-worker/issues) list. If you run into a problem and can't find the issue in GitHub, open a new issue and include a detailed description of the problem.
+All known issues and feature requests are tracked in the [GitHub issues](https://github.com/Azure/azure-functions-python-worker/issues) list. If you run into a problem and can't find the issue in GitHub, open a new issue and include a detailed description of the problem.
## Next steps
For more information, see the following resources:
* [Azure Functions package API documentation](/python/api/azure-functions/azure.functions) * [Best practices for Azure Functions](functions-best-practices.md) * [Azure Functions triggers and bindings](functions-triggers-bindings.md)
-* [Blob Storage bindings](functions-bindings-storage-blob.md)
-* [HTTP and webhook bindings](functions-bindings-http-webhook.md)
-* [Azure Queue Storage bindings](functions-bindings-storage-queue.md)
+* [Blob storage bindings](functions-bindings-storage-blob.md)
+* [HTTP and Webhook bindings](functions-bindings-http-webhook.md)
+* [Queue storage bindings](functions-bindings-storage-queue.md)
* [Timer trigger](functions-bindings-timer.md) [Having issues? Let us know.](https://aka.ms/python-functions-ref-survey) [HttpRequest]: /python/api/azure-functions/azure.functions.httprequest
-[HttpResponse]: /python/api/azure-functions/azure.functions.httpresponse
+[HttpResponse]: /python/api/azure-functions/azure.functions.httpresponse
azure-functions Python Scale Performance Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/python-scale-performance-reference.md
When developing for Azure Functions using Python, you need to understand how your functions perform and how that performance affects the way your function app gets scaled. The need is more important when designing highly performant apps. The main factors to consider when designing, writing, and configuring your functions apps are horizontal scaling and throughput performance configurations. ## Horizontal scaling
-By default, Azure Functions automatically monitors the load on your application and creates additional host instances for Python as needed. Azure Functions uses built-in thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. These thresholds aren't user configurable. For more information, see [Event-driven scaling in Azure Functions](event-driven-scaling.md).
+By default, Azure Functions automatically monitors the load on your application and creates more host instances for Python as needed. Azure Functions uses built-in thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. These thresholds aren't user configurable. For more information, see [Event-driven scaling in Azure Functions](event-driven-scaling.md).
## Improving throughput performance
-The default configurations are suitable for most of Azure Functions applications. However, you can improve the performance of your applications' throughput by employing configurations based on your workload profile. The first step is to understand the type of workload that you are running.
+The default configurations are suitable for most of Azure Functions applications. However, you can improve the performance of your applications' throughput by employing configurations based on your workload profile. The first step is to understand the type of workload that you're running.
| Workload type | Function app characteristics | Examples | | - | - | - |
To run a function asynchronously, use the `async def` statement, which runs the
async def main(): await some_nonblocking_socket_io_op() ```
-Here is an example of a function with HTTP trigger that uses [aiohttp](https://pypi.org/project/aiohttp/) http client:
+Here's an example of a function with HTTP trigger that uses [aiohttp](https://pypi.org/project/aiohttp/) http client:
```python import aiohttp
async def main(req: func.HttpRequest) -> func.HttpResponse:
```
-A function without the `async` keyword is run automatically in an ThreadPoolExecutor thread pool:
+A function without the `async` keyword is run automatically in a ThreadPoolExecutor thread pool:
```python # Runs in an ThreadPoolExecutor threadpool. Number of threads is defined by PYTHON_THREADPOOL_THREAD_COUNT.
def main():
some_blocking_socket_io() ```
-In order to achieve the full benefit of running functions asynchronously, the I/O operation/library that is used in your code needs to have async implemented as well. Using synchronous I/O operations in functions that are defined as asynchronous **may hurt** the overall performance. If the libraries you are using do not have async version implemented, you may still benefit from running your code asynchronously by [managing event loop](#managing-event-loop) in your app.
+To achieve the full benefit of running functions asynchronously, the I/O operation/library that is used in your code needs to have async implemented as well. Using synchronous I/O operations in functions that are defined as asynchronous **may hurt** the overall performance. If the libraries you're using don't have an async version implemented, you may still benefit from running your code asynchronously by [managing the event loop](#managing-event-loop) in your app.
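As one illustration of that event-loop approach (not the article's own sample), the following sketch offloads a blocking call to the default executor so the shared loop stays responsive; the synchronous `requests` call is just a stand-in for any library without an async API:

```python
import asyncio

import requests
import azure.functions as func


async def main(req: func.HttpRequest) -> func.HttpResponse:
    loop = asyncio.get_running_loop()
    # Run the blocking call on the default thread pool executor so the event
    # loop can keep serving other invocations while this one waits.
    response = await loop.run_in_executor(
        None, requests.get, 'https://httpbin.org/get')
    return func.HttpResponse(response.text)
```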
-Here are a few examples of client libraries that has implemented async pattern:
+Here are a few examples of client libraries that have implemented async patterns:
- [aiohttp](https://pypi.org/project/aiohttp/) - Http client/server for asyncio - [Streams API](https://docs.python.org/3/library/asyncio-stream.html) - High-level async/await-ready primitives to work with network connection - [Janus Queue](https://pypi.org/project/janus/) - Thread-safe asyncio-aware queue for Python
Here are a few examples of client libraries that has implemented async pattern:
##### Understanding async in Python worker
-When you define `async` in front of a function signature, Python will mark the function as a coroutine. When calling the coroutine, it can be scheduled as a task into an event loop. When you call `await` in an async function, it registers a continuation into the event loop and allow event loop to process next task during the wait time.
+When you define `async` in front of a function signature, Python will mark the function as a coroutine. When calling the coroutine, it can be scheduled as a task into an event loop. When you call `await` in an async function, it registers a continuation into the event loop, which allows the event loop to process the next task during the wait time.
-In our Python Worker, the worker shares the event loop with the customer's `async` function and it is capable for handling multiple requests concurrently. We strongly encourage our customers to make use of asyncio compatible libraries (e.g. [aiohttp](https://pypi.org/project/aiohttp/), [pyzmq](https://pypi.org/project/pyzmq/)). Employing these recommendations will greatly increase your function's throughput compared to those libraries implemented in synchronous fashion.
+In our Python Worker, the worker shares the event loop with the customer's `async` function and it's capable of handling multiple requests concurrently. We strongly encourage our customers to make use of asyncio compatible libraries, such as [aiohttp](https://pypi.org/project/aiohttp/) and [pyzmq](https://pypi.org/project/pyzmq/). Following these recommendations increases your function's throughput compared to libraries implemented in a synchronous fashion.
> [!NOTE]
> If your function is declared as `async` without any `await` inside its implementation, the performance of your function will be severely impacted, because the event loop will be blocked, which prohibits the Python worker from handling concurrent requests.

#### Use multiple language worker processes
-By default, every Functions host instance has a single language worker process. You can increase the number of worker processes per host (up to 10) by using the [FUNCTIONS_WORKER_PROCESS_COUNT](functions-app-settings.md#functions_worker_process_count) application setting. Azure Functions then tries to evenly distribute simultaneous function invocations across these workers.
+By default, every Functions host instance has a single language worker process. You can increase the number of worker processes per host (up to 10) by using the [`FUNCTIONS_WORKER_PROCESS_COUNT`](functions-app-settings.md#functions_worker_process_count) application setting. Azure Functions then tries to evenly distribute simultaneous function invocations across these workers.
-For CPU bound apps, you should set the number of language worker to be the same as or higher than the number of cores that are available per function app. To learn more, see [Available instance SKUs](functions-premium-plan.md#available-instance-skus).
+For CPU bound apps, you should set the number of language workers to be the same as or higher than the number of cores that are available per function app. To learn more, see [Available instance SKUs](functions-premium-plan.md#available-instance-skus).
I/O-bound apps may also benefit from increasing the number of worker processes beyond the number of cores available. Keep in mind that setting the number of workers too high can impact overall performance due to the increased number of required context switches.
-The FUNCTIONS_WORKER_PROCESS_COUNT applies to each host that Functions creates when scaling out your application to meet demand.
+The `FUNCTIONS_WORKER_PROCESS_COUNT` applies to each host that Functions creates when scaling out your application to meet demand.
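One way to observe this distribution (a minimal sketch, assuming an HTTP-triggered function) is to log the worker process ID from each invocation; with `FUNCTIONS_WORKER_PROCESS_COUNT` set greater than 1, concurrent invocations report several distinct process IDs.

```python
import logging
import os

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Each language worker process has its own PID, so invocations spread
    # across workers log different values here.
    logging.info("Invocation handled by worker process %s", os.getpid())
    return func.HttpResponse(f"pid={os.getpid()}")
```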
+
+> [!NOTE]
+> Multiple Python workers aren't supported in the v2 programming model at this time. This means that enabling intelligent concurrency by setting `FUNCTIONS_WORKER_PROCESS_COUNT` to a value greater than 1 isn't supported for functions developed by using the v2 model.
#### Set up max workers within a language worker process
-As mentioned in the async [section](#understanding-async-in-python-worker), the Python language worker treats functions and [coroutines](https://docs.python.org/3/library/asyncio-task.html#coroutines) differently. A coroutine is run within the same event loop that the language worker runs on. On the other hand, a function invocation is run within a [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor), that is maintained by the language worker, as a thread.
+As mentioned in the async [section](#understanding-async-in-python-worker), the Python language worker treats functions and [coroutines](https://docs.python.org/3/library/asyncio-task.html#coroutines) differently. A coroutine is run within the same event loop that the language worker runs on. On the other hand, a function invocation is run within a [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor), which is maintained by the language worker as a thread.
You can set the value of maximum workers allowed for running sync functions using the [PYTHON_THREADPOOL_THREAD_COUNT](functions-app-settings.md#python_threadpool_thread_count) application setting. This value sets the `max_worker` argument of the ThreadPoolExecutor object, which lets Python use a pool of at most `max_worker` threads to execute calls asynchronously. The `PYTHON_THREADPOOL_THREAD_COUNT` applies to each worker that the Functions host creates, and Python decides when to create a new thread or reuse the existing idle thread. For older Python versions (that is, `3.8`, `3.7`, and `3.6`), the `max_worker` value is set to 1. For Python version `3.9`, `max_worker` is set to `None`. For CPU-bound apps, you should keep the setting to a low number, starting from 1 and increasing as you experiment with your workload. This suggestion is to reduce the time spent on context switches and allow CPU-bound tasks to finish.
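For example, a synchronous function like the following sketch (the URL is a placeholder) holds one thread from the worker's pool for the duration of the blocking call, so `PYTHON_THREADPOOL_THREAD_COUNT` effectively caps how many such invocations one worker can run at the same time.

```python
import requests

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Blocking I/O: this call occupies one ThreadPoolExecutor thread until it returns.
    response = requests.get("https://example.com/")
    return func.HttpResponse(str(response.status_code))
```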
-For I/O-bound apps, you should see substantial gains by increasing the number of threads working on each invocation. the recommendation is to start with the Python default - the number of cores + 4 and then tweak based on the throughput values you are seeing.
+For I/O-bound apps, you should see substantial gains by increasing the number of threads working on each invocation. The recommendation is to start with the Python default (the number of cores + 4) and then adjust based on the throughput values you're seeing.
-For mix workloads apps, you should balance both `FUNCTIONS_WORKER_PROCESS_COUNT` and `PYTHON_THREADPOOL_THREAD_COUNT` configurations to maximize the throughput. To understand what your function apps spend the most time on, we recommend to profile them and set the values according to the behavior they present. Also refer to this [section](#use-multiple-language-worker-processes) to learn about FUNCTIONS_WORKER_PROCESS_COUNT application settings.
+For mixed workload apps, you should balance both the `FUNCTIONS_WORKER_PROCESS_COUNT` and `PYTHON_THREADPOOL_THREAD_COUNT` configurations to maximize throughput. To understand what your function apps spend the most time on, we recommend profiling them and setting the values according to the behavior they present. Also refer to this [section](#use-multiple-language-worker-processes) to learn about the `FUNCTIONS_WORKER_PROCESS_COUNT` application setting.
> [!NOTE]
> Although these recommendations apply to both HTTP and non-HTTP triggered functions, you might need to adjust other trigger-specific configurations for non-HTTP triggered functions to get the expected performance from your function apps. For more information, see this [article](functions-best-practices.md).
For mix workloads apps, you should balance both `FUNCTIONS_WORKER_PROCESS_COUNT`
You should use asyncio compatible third-party libraries. If none of the third-party libraries meet your needs, you can also manage the event loops in Azure Functions. Managing event loops gives you more flexibility in compute resource management, and it also makes it possible to wrap synchronous I/O libraries into coroutines.
-There are many useful Python official documents discussing the [Coroutines and Tasks](https://docs.python.org/3/library/asyncio-task.html) and [Event Loop](https://docs.python.org/3.8/library/asyncio-eventloop.html) by using the built in **asyncio** library.
+There are many useful Python official documents discussing the [Coroutines and Tasks](https://docs.python.org/3/library/asyncio-task.html) and [Event Loop](https://docs.python.org/3.8/library/asyncio-eventloop.html) by using the built-in **asyncio** library.
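As a minimal sketch of that idea (not the article's snippet; the URL is a placeholder), a blocking `requests.get()` call can be handed to the event loop's default executor so several calls run concurrently from async code:

```python
import asyncio

import requests

SAMPLE_URL = "https://example.com/"  # placeholder

async def fetch_status(loop: asyncio.AbstractEventLoop) -> int:
    # run_in_executor schedules the blocking call on a thread pool and awaits its result.
    response = await loop.run_in_executor(None, requests.get, SAMPLE_URL)
    return response.status_code

async def gather_statuses(count: int = 3):
    loop = asyncio.get_running_loop()
    return await asyncio.gather(*(fetch_status(loop) for _ in range(count)))
```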
Take the following [requests](https://github.com/psf/requests) library as an example: this code snippet uses the **asyncio** library to wrap the `requests.get()` method into a coroutine, running multiple web requests to SAMPLE_URL concurrently.
async def main(req: func.HttpRequest) -> func.HttpResponse:
mimetype='application/json') ``` #### Vertical scaling
-For more processing units especially in CPU-bound operation, you might be able to get this by upgrading to premium plan with higher specifications. With higher processing units, you can adjust the number of worker process count according to the number of cores available and achieve higher degree of parallelism.
+For more processing units, especially in CPU-bound operations, you might be able to get them by upgrading to a Premium plan with higher specifications. With more processing units, you can adjust the worker process count according to the number of cores available and achieve a higher degree of parallelism.
## Next steps
azure-functions Recover Python Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/recover-python-functions.md
Title: Troubleshoot Python function apps in Azure Functions description: Learn how to troubleshoot Python functions.- Previously updated : 07/29/2020 Last updated : 10/25/2022 ms.devlang: python
+zone_pivot_groups: python-mode-functions
# Troubleshoot Python errors in Azure Functions
-Following is a list of troubleshooting guides for common issues in Python functions:
+This article provides information to help you troubleshoot errors with your Python functions in Azure Functions. It supports both the v1 and v2 programming models. Choose your desired model from the selector at the top of the article. The v2 model is currently in preview. For more information on Python programming models, see the [Python developer guide](./functions-reference-python.md).
+
+The following is a list of troubleshooting sections for common issues in Python functions:
* [ModuleNotFoundError and ImportError](#troubleshoot-modulenotfounderror)
* [Cannot import 'cygrpc'](#troubleshoot-cannot-import-cygrpc)
* [Python exited with code 137](#troubleshoot-python-exited-with-code-137)
* [Python exited with code 139](#troubleshoot-python-exited-with-code-139)
+* [Troubleshoot errors with Protocol Buffers](#troubleshoot-errors-with-protocol-buffers)
+* [ModuleNotFoundError and ImportError](#troubleshoot-modulenotfounderror)
+* [Cannot import 'cygrpc'](#troubleshoot-cannot-import-cygrpc)
+* [Python exited with code 137](#troubleshoot-python-exited-with-code-137)
+* [Python exited with code 139](#troubleshoot-python-exited-with-code-139)
+* [Troubleshoot errors with Protocol Buffers](#troubleshoot-errors-with-protocol-buffers)
+* [Multiple Python workers not supported](#multiple-python-workers-not-supported)
+* [Could not load file or assembly](#troubleshoot-could-not-load-file-or-assembly)
+* [Unable to resolve the Azure Storage connection named Storage](#troubleshoot-unable-to-resolve-the-azure-storage-connection)
+* [Issues with deployment](#issue-with-deployment)
## Troubleshoot ModuleNotFoundError
This error occurs when a Python function app fails to load a Python module. The
To identify the actual cause of your issue, you need to get the Python project files that run on your function app. If you don't have the project files on your local computer, you can get them in one of the following ways:

* If the function app has the `WEBSITE_RUN_FROM_PACKAGE` app setting and its value is a URL, download the file by copying and pasting the URL into your browser.
-* If the function app has `WEBSITE_RUN_FROM_PACKAGE` and it is set to `1`, navigate to `https://<app-name>.scm.azurewebsites.net/api/vfs/data/SitePackages` and download the file from the latest `href` URL.
+* If the function app has `WEBSITE_RUN_FROM_PACKAGE` and it's set to `1`, navigate to `https://<app-name>.scm.azurewebsites.net/api/vfs/data/SitePackages` and download the file from the latest `href` URL.
* If the function app doesn't have the app setting mentioned above, navigate to `https://<app-name>.scm.azurewebsites.net/api/settings` and find the URL under `SCM_RUN_FROM_PACKAGE`. Download the file by copying and pasting the URL into your browser.
-* If none of these works for you, navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and reveal the content under `/home/site/wwwroot`.
+* If none of these suggestions resolve the issue, navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and view the content under `/home/site/wwwroot`.
The rest of this article helps you troubleshoot potential causes of this error by inspecting your function app's content, identifying the root cause, and resolving the specific issue.
See [Enable remote build](#enable-remote-build) or [Build native dependencies](#
Go to `.python_packages/lib/python3.6/site-packages/<package-name>-<version>-dist-info` or `.python_packages/lib/site-packages/<package-name>-<version>-dist-info`. Use your favorite text editor to open the **wheel** file and check the **Tag:** section. If the value of the tag doesn't contain **linux**, this could be the issue.
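If you have many packages to check, a small hypothetical helper like the following (the folder layout and names are illustrative, not part of the article) can scan the deployed `.python_packages` folder for wheels whose tags don't target Linux:

```python
import pathlib

root = pathlib.Path(".python_packages")
for wheel in root.rglob("*.dist-info/WHEEL"):
    for line in wheel.read_text().splitlines():
        # Wheels built for the wrong platform show tags without "linux" (and not "any").
        if line.startswith("Tag:") and "linux" not in line and "any" not in line:
            print(wheel.parent.name, line.strip())
```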
-Python functions run only on Linux in Azure: Functions runtime v2.x runs on Debian Stretch and the v3.x runtime on Debian Buster. The artifact is expected to contain the correct Linux binaries. Using `--build local` flag in Core Tools, third-party, or outdated tools may cause older binaries to be used.
+Python functions run only on Linux in Azure: Functions runtime v2.x runs on Debian Stretch and the v3.x runtime on Debian Buster. The artifact is expected to contain the correct Linux binaries. Using the `--build local` flag in Core Tools, or using third-party or outdated tools, may cause older binaries to be used.
See [Enable remote build](#enable-remote-build) or [Build native dependencies](#build-native-dependencies) for mitigation.
See [Update your package to the latest version](#update-your-package-to-the-late
#### The package conflicts with other packages
-If you have verified that the package is resolved correctly with the proper Linux wheels, there may be a conflict with other packages. In certain packages, the PyPi documentations may clarify the incompatible modules. For example in [`azure 4.0.0`](https://pypi.org/project/azure/4.0.0/), there's a statement as follows:
+If you've verified that the package is resolved correctly with the proper Linux wheels, there may be a conflict with other packages. In certain packages, the PyPI documentation may clarify the incompatible modules. For example, in [`azure 4.0.0`](https://pypi.org/project/azure/4.0.0/), there's a statement as follows:
<pre>This package isn't compatible with azure-storage. If you installed azure-storage, or if you installed azure 1.x/2.x and didn't uninstall azure-storage,
See [Update your package to the latest version](#update-your-package-to-the-late
Open the `requirements.txt` with a text editor and check the package in `https://pypi.org/project/<package-name>`. Some packages only run on Windows or macOS platforms. For example, pywin32 only runs on Windows.
-The `Module Not Found` error may not occur when you're using Windows or macOS for local development. However, the package fails to import on Azure Functions, which uses Linux at runtime. This is likely to be caused by using `pip freeze` to export virtual environment into requirements.txt from your Windows or macOS machine during project initialization.
+The `Module Not Found` error may not occur when you're using Windows or macOS for local development. However, the package fails to import on Azure Functions, which uses Linux at runtime. This issue is likely to be caused by using `pip freeze` to export virtual environment into requirements.txt from your Windows or macOS machine during project initialization.
See [Replace the package with equivalents](#replace-the-package-with-equivalents) or [Handcraft requirements.txt](#handcraft-requirementstxt) for mitigation.
The following are potential mitigations for module-related issues. Use the [diag
Make sure that remote build is enabled. The way that you do this depends on your deployment method.
-## [Visual Studio Code](#tab/vscode)
+# [Visual Studio Code](#tab/vscode)
Make sure that the latest version of the [Azure Functions extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) is installed. Verify that `.vscode/settings.json` exists and it contains the setting `"azureFunctions.scmDoBuildDuringDeployment": true`. If not, please create this file with the `azureFunctions.scmDoBuildDuringDeployment` setting enabled and redeploy the project.
-## [Azure Functions Core Tools](#tab/coretools)
+# [Azure Functions Core Tools](#tab/coretools)
Make sure that the latest version of [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools/releases) is installed. Go to your local function project folder, and use `func azure functionapp publish <app-name>` for deployment.
-## [Manual publishing](#tab/manual)
+# [Manual publishing](#tab/manual)
-If you're manually publishing your package into the `https://<app-name>.scm.azurewebsites.net/api/zipdeploy` endpoint, make sure that both **SCM_DO_BUILD_DURING_DEPLOYMENT** and **ENABLE_ORYX_BUILD** are set to **true**. To learn more, see [how to work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
+If you're manually publishing your package into the `https://<app-name>.scm.azurewebsites.net/api/zipdeploy` endpoint, make sure that both `SCM_DO_BUILD_DURING_DEPLOYMENT` and `ENABLE_ORYX_BUILD` are set to `true`. To learn more, see [how to work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
Make sure that the latest version of both **docker** and [Azure Functions Core T
#### Update your package to the latest version
-Browse the latest package version in `https://pypi.org/project/<package-name>` and check the **Classifiers:** section. The package should be `OS Independent`, or compatible with `POSIX` or `POSIX :: Linux` in **Operating System**. Also, the Programming Language should contains `Python :: 3`, `Python :: 3.6`, `Python :: 3.7`, `Python :: 3.8`, or `Python :: 3.9`.
+Browse the latest package version in `https://pypi.org/project/<package-name>` and check the **Classifiers:** section. The package should be `OS Independent`, or compatible with `POSIX` or `POSIX :: Linux` in **Operating System**. Also, the Programming Language should contain: `Python :: 3`, `Python :: 3.6`, `Python :: 3.7`, `Python :: 3.8`, or `Python :: 3.9`.
If these are correct, you can update the package to the latest version by changing the line `<package-name>~=<latest-version>` in requirements.txt.
The best practice is to check the import statement from each .py file in your pr
First, take a look at the latest version of the package in `https://pypi.org/project/<package-name>`. Usually, the package has its own GitHub page. Go to the **Issues** section on GitHub and search for whether your issue has been fixed. If so, update the package to the latest version.
-Sometimes, the package may have been integrated into [Python Standard Library](https://docs.python.org/3/library/) (such as pathlib). If so, since we provide a certain Python distribution in Azure Functions (Python 3.6, Python 3.7, Python 3.8, and Python 3.9), the package in your requirements.txt should be removed.
+Sometimes, the package may have been integrated into [Python Standard Library](https://docs.python.org/3/library/) (such as `pathlib`). If so, since we provide a certain Python distribution in Azure Functions (Python 3.6, Python 3.7, Python 3.8, and Python 3.9), the package in your requirements.txt should be removed.
-However, if you're facing an issue that it has not been fixed and you're on a deadline. I encourage you to do some research and find a similar package for your project. Usually, the Python community will provide you with a wide variety of similar libraries that you can use.
+However, if you're facing an issue that hasn't been fixed and you're on a deadline, we encourage you to do some research and find a similar package for your project. Usually, the Python community provides a wide variety of similar libraries that you can use.
This section helps you troubleshoot 'cygrpc' related errors in your Python funct
This error occurs when a Python function app fails to start with a proper Python interpreter. The root cause for this error is one of the following issues: - [The Python interpreter mismatches OS architecture](#the-python-interpreter-mismatches-os-architecture)-- [The Python interpreter is not supported by Azure Functions Python Worker](#the-python-interpreter-is-not-supported-by-azure-functions-python-worker)
+- [The Python interpreter isn't supported by Azure Functions Python Worker](#the-python-interpreter-isnt-supported-by-azure-functions-python-worker)
### Diagnose 'cygrpc' reference error
On Unix-like shell: `python3 -c 'import platform; print(platform.architecture()[
If there's a mismatch between Python interpreter bitness and operating system architecture, please download a proper Python interpreter from [Python Software Foundation](https://www.python.org/downloads).
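A quick way to confirm what a given interpreter reports (a standalone sketch; run it with the interpreter you're checking) is:

```python
import platform
import struct

print(platform.python_version())        # For example, 3.9.x.
print(struct.calcsize("P") * 8, "bit")  # 64 on a 64-bit interpreter, 32 on a 32-bit one.
print(platform.machine())               # For example, x86_64 or AMD64.
```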
-#### The Python interpreter is not supported by Azure Functions Python Worker
+#### The Python interpreter isn't supported by Azure Functions Python Worker
The Azure Functions Python Worker only supports Python 3.6, 3.7, 3.8, and 3.9.
-Please check if your Python interpreter matches our expected version by `py --version` in Windows or `python3 --version` in Unix-like systems. Ensure the return result is Python 3.6.x, Python 3.7.x, Python 3.8.x, or Python 3.9.x.
+Check if your Python interpreter matches our expected version by running `py --version` on Windows or `python3 --version` on Unix-like systems. Ensure the return result is Python 3.6.x, Python 3.7.x, Python 3.8.x, or Python 3.9.x.
-If your Python interpreter version does not meet our expectation, please download the Python 3.6, 3.7, 3.8, or 3.9 interpreter from [Python Software Foundation](https://www.python.org/downloads).
+If your Python interpreter version doesn't meet the requirements for Functions, instead download the Python 3.6, 3.7, 3.8, or 3.9 interpreter from [Python Software Foundation](https://www.python.org/downloads).
Code 137 errors are typically caused by out-of-memory issues in your Python func
This error occurs when a Python function app is forced to terminate by the operating system with a SIGKILL signal. This signal usually indicates an out-of-memory error in your Python process. The Azure Functions platform has a [service limitation](functions-scale.md#service-limits) which will terminate any function apps that exceeded this limit.
-Please visit the tutorial section in [memory profiling on Python functions](python-memory-profiler-reference.md#memory-profiling-process) to analyze the memory bottleneck in your function app.
+Visit the tutorial section in [memory profiling on Python functions](python-memory-profiler-reference.md#memory-profiling-process) to analyze the memory bottleneck in your function app.
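If you want a quick, dependency-free look before following that tutorial, the standard library's `tracemalloc` module can show how much memory a suspect code path allocates (a minimal sketch; the loop body is a placeholder for your own code):

```python
import tracemalloc

def measure_suspect_path():
    tracemalloc.start()
    data = [bytes(1024) for _ in range(10_000)]  # Placeholder for the work you suspect.
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return f"current={current} bytes, peak={peak} bytes, items={len(data)}"

print(measure_suspect_path())
```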
This section helps you troubleshoot segmentation fault errors in your Python fun
> `Microsoft.Azure.WebJobs.Script.Workers.WorkerProcessExitException : python exited with code 139`
-This error occurs when a Python function app is forced to terminate by the operating system with a SIGSEGV signal. This signal indicates a memory segmentation violation which can be caused by unexpectedly reading from or writing into a restricted memory region. In the following sections, we provide a list of common root causes.
+This error occurs when a Python function app is forced to terminate by the operating system with a SIGSEGV signal. This signal indicates a memory segmentation violation, which can be caused by unexpectedly reading from or writing into a restricted memory region. In the following sections, we provide a list of common root causes.
### A regression from third-party packages
In your function app's requirements.txt, an unpinned package will be upgraded to
### Unpickling from a malformed .pkl file
-If your function app is using the Python pickel library to load Python object from .pkl file, it is possible that the .pkl contains malformed bytes string, or invalid address reference in it. To recover from this issue, try commenting out the pickle.load() function.
+If your function app is using the Python pickle library to load a Python object from a .pkl file, it's possible that the .pkl file contains a malformed byte string or an invalid address reference. To recover from this issue, try commenting out the pickle.load() function.
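As a diagnostic step, a hypothetical helper like this (not part of the article) loads the file in isolation and logs the failure; note that a file corrupted badly enough to crash the interpreter can only be mitigated by removing the load entirely, as suggested above.

```python
import logging
import pickle

def try_load_pickle(path: str):
    try:
        with open(path, "rb") as handle:
            return pickle.load(handle)
    except Exception as exc:
        # Malformed byte strings typically raise UnpicklingError or EOFError here.
        logging.error("Failed to unpickle %s: %s", path, exc)
        return None
```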
### Pyodbc connection collision
There are two ways to mitigate this issue.
+## Multiple Python workers not supported
+
+Multiple Python workers aren't supported in the v2 programming model at this time. This means that enabling intelligent concurrency by setting `FUNCTIONS_WORKER_PROCESS_COUNT` to a value greater than 1 isn't supported for functions developed by using the v2 model.
+
+## Troubleshoot could not load file or assembly
+
+If you're facing this error, you may be using the v2 programming model. This error is due to a known issue that will be resolved in an upcoming release.
+
+This specific error may read:
+
+> `DurableTask.Netherite.AzureFunctions: Could not load file or assembly 'Microsoft.Azure.WebJobs.Extensions.DurableTask, Version=2.0.0.0, Culture=neutral, PublicKeyToken=014045d636e89289'.`
+> `The system cannot find the file specified.`
+
+This error may occur because of an issue with how the extension bundle was cached. To determine whether this is the issue, run the command with the `--verbose` flag to see more details.
+
+> `func host start --verbose`
+
+When you run the command, if you notice that `Loading startup extension <>` isn't followed by `Loaded extension <>` for each extension, you're likely facing a caching issue.
+
+To resolve this issue:
+
+1. Find the path of `.azure-functions-core-tools` by running:
+```console
+func GetExtensionBundlePath
+```
+
+2. Delete the `.azure-functions-core-tools` directory:
+
+# [bash](#tab/bash)
+
+```bash
+rm -r <insert path>/.azure-functions-core-tools
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+Remove-Item <insert path>/.azure-functions-core-tools
+```
+
+# [Cmd](#tab/cmd)
+
+```cmd
+rmdir <insert path>/.azure-functions-core-tools
+```
++
+## Troubleshoot unable to resolve the Azure Storage connection
+
+You may see this error in your local output as the following message:
+
+> `Microsoft.Azure.WebJobs.Extensions.DurableTask: Unable to resolve the Azure Storage connection named 'Storage'.`
+> `Value cannot be null. (Parameter 'provider')`
+
+This error is a result of how extensions are loaded from the bundle locally. To resolve this error, you can do one of the following:
+* Use a storage emulator such as [Azurite](../storage/common/storage-use-azurite.md). This is a good option when you aren't planning to use a storage account in your function application.
+* Create a storage account and add a connection string to the `AzureWebJobsStorage` environment variable in `local.settings.json`. Use this option when you're using a storage account trigger or binding with your application, or if you have an existing storage account. To get started, see [Create a storage account](../storage/common/storage-account-create.md).
+
+## Issue with deployment
+
+In the [Azure portal](https://portal.azure.com), navigate to **Settings** > **Configuration** and make sure that the `AzureWebJobsFeatureFlags` application setting has a value of `EnableWorkerIndexing`. If it is not found, add this setting to the function app.
+ ## Next steps If you're unable to resolve your issue, please report this to the Functions team:
azure-government Documentation Government Get Started Connect To Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-get-started-connect-to-storage.md
These endpoint differences must be taken into account when you connect to storag
- Read more about [Azure Storage](../storage/index.yml). - Subscribe to the [Azure Government blog](https://blogs.msdn.microsoft.com/azuregov/) - Get help on Stack Overflow by using the [azure-gov](https://stackoverflow.com/questions/tagged/azure-gov) tag-
azure-monitor Itsmc Dashboard Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-dashboard-errors.md
The following sections describe common errors that appear in the connector statu
**Cause**: The IP address of the ITSM application doesn't allow ITSM connections from partner ITSM tools.
-**Resolution**: To allow ITSM connections from partner ITSM tools, we recommend that the to list includes the entire public IP range of the Azure region of the LogAnalytics workspace. For more information, see this article about [Azure IP ranges](https://www.microsoft.com/download/details.aspx?id=56519). You can only list the ActionGroup network tag in these regions: EUS/WEU/EUS2/WUS2/US South Central.
-
+**Resolution**: To allow ITSM connections, make sure that the ActionGroup network tag is allowed on your network.
## Authentication **Error**: "User Not Authenticated"
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md
To create an action group:
When you create or edit an Azure alert rule, use an action group, which has an ITSM action. When the alert triggers, the work item is created or updated in the ITSM tool. > [!NOTE]
-> For information about the pricing of the ITSM action, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for action groups.
+> * For information about the pricing of the ITSM action, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for action groups.
>
-> The short description field in the alert rule definition is limited to 40 characters when you send it by using the ITSM action.
+> * The short description field in the alert rule definition is limited to 40 characters when you send it by using the ITSM action.
+>
+> * If you have policies for inbound traffic to your ServiceNow instances, add the ActionGroup service tag to the allowlist.
## Next steps
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
This section will guide you through manually adding Application Insights to a te
3. Copy the following XML configuration into your newly created file:
- ```xml
- <?xml version="1.0" encoding="utf-8"?>
+ ```xml
+ <?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings"> <TelemetryInitializers> <Add Type="Microsoft.ApplicationInsights.DependencyCollector.HttpDependenciesParsingTelemetryInitializer, Microsoft.AI.DependencyCollector" />
This section will guide you through manually adding Application Insights to a te
--> <ConnectionString>Copy connection string from Application Insights Resource Overview</ConnectionString> </ApplicationInsights>
- ```
+ ```
4. Before the closing `</ApplicationInsights>` tag, add the connection string for your Application Insights resource. You can find your connection string on the overview pane of the newly created Application Insights resource.
This section will guide you through manually adding Application Insights to a te
} } }
-
``` 6. In the *App_Start* folder, open the *FilterConfig.cs* file and change it to match the sample:
For the latest updates and bug fixes, [consult the release notes](./release-note
## Next steps * Add synthetic transactions to test that your website is available from all over the world with [availability monitoring](monitor-web-app-availability.md).
-* [Configure sampling](sampling.md) to help reduce telemetry traffic and data storage costs.
+* [Configure sampling](sampling.md) to help reduce telemetry traffic and data storage costs.
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
To create a new file, right click under your timer trigger function (for example
```xml <Project Sdk="Microsoft.NET.Sdk">
-     <PropertyGroup>
-         <TargetFramework>netstandard2.0</TargetFramework>
-     </PropertyGroup>
-     <ItemGroup>
-         <PackageReference Include="Microsoft.ApplicationInsights" Version="2.15.0" /> <!-- Ensure you’re using the latest version -->
-     </ItemGroup>
+ <PropertyGroup>
+ <TargetFramework>netstandard2.0</TargetFramework>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.ApplicationInsights" Version="2.15.0" /> <!-- Ensure you're using the latest version -->
+ </ItemGroup>
</Project>
-
```
- :::image type="content" source="media/availability-azure-functions/function-proj.png" alt-text=" Screenshot of function.proj in App Service Editor." lightbox="media/availability-azure-functions/function-proj.png":::
+ :::image type="content" source="media/availability-azure-functions/function-proj.png" alt-text=" Screenshot of function.proj in App Service Editor." lightbox="media/availability-azure-functions/function-proj.png":::
2. Create a new file called "runAvailabilityTest.csx" and paste the following code:
To create a new file, right click under your timer trigger function (for example
public async static Task RunAvailabilityTestAsync(ILogger log) {
-     using (var httpClient = new HttpClient())
-     {
-         // TODO: Replace with your business logic
-         await httpClient.GetStringAsync("https://www.bing.com/");
-     }
+ using (var httpClient = new HttpClient())
+ {
+ // TODO: Replace with your business logic
+ await httpClient.GetStringAsync("https://www.bing.com/");
+ }
} ```
To create a new file, right click under your timer trigger function (for example
public async static Task Run(TimerInfo myTimer, ILogger log, ExecutionContext executionContext) {
-     if (telemetryClient == null)
-     {
-         // Initializing a telemetry configuration for Application Insights based on connection string
-
-         var telemetryConfiguration = new TelemetryConfiguration();
-         telemetryConfiguration.ConnectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING");
-         telemetryConfiguration.TelemetryChannel = new InMemoryChannel();
-         telemetryClient = new TelemetryClient(telemetryConfiguration);
-     }
-
-     string testName = executionContext.FunctionName;
-     string location = Environment.GetEnvironmentVariable("REGION_NAME");
-     var availability = new AvailabilityTelemetry
-     {
-         Name = testName,
-
-         RunLocation = location,
-
-         Success = false,
-     };
-
-     availability.Context.Operation.ParentId = Activity.Current.SpanId.ToString();
-     availability.Context.Operation.Id = Activity.Current.RootId;
-     var stopwatch = new Stopwatch();
-     stopwatch.Start();
-
-     try
-     {
-         using (var activity = new Activity("AvailabilityContext"))
-         {
-             activity.Start();
-             availability.Id = Activity.Current.SpanId.ToString();
-             // Run business logic
-             await RunAvailabilityTestAsync(log);
-         }
-         availability.Success = true;
-     }
-
-     catch (Exception ex)
-     {
-         availability.Message = ex.Message;
-         throw;
-     }
-
-     finally
-     {
-         stopwatch.Stop();
-         availability.Duration = stopwatch.Elapsed;
-         availability.Timestamp = DateTimeOffset.UtcNow;
-         telemetryClient.TrackAvailability(availability);
-         telemetryClient.Flush();
-     }
+ if (telemetryClient == null)
+ {
+ // Initializing a telemetry configuration for Application Insights based on connection string
+
+ var telemetryConfiguration = new TelemetryConfiguration();
+ telemetryConfiguration.ConnectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING");
+ telemetryConfiguration.TelemetryChannel = new InMemoryChannel();
+ telemetryClient = new TelemetryClient(telemetryConfiguration);
+ }
+
+ string testName = executionContext.FunctionName;
+ string location = Environment.GetEnvironmentVariable("REGION_NAME");
+ var availability = new AvailabilityTelemetry
+ {
+ Name = testName,
+
+ RunLocation = location,
+
+ Success = false,
+ };
+
+ availability.Context.Operation.ParentId = Activity.Current.SpanId.ToString();
+ availability.Context.Operation.Id = Activity.Current.RootId;
+ var stopwatch = new Stopwatch();
+ stopwatch.Start();
+
+ try
+ {
+ using (var activity = new Activity("AvailabilityContext"))
+ {
+ activity.Start();
+ availability.Id = Activity.Current.SpanId.ToString();
+ // Run business logic
+ await RunAvailabilityTestAsync(log);
+ }
+ availability.Success = true;
+ }
+
+ catch (Exception ex)
+ {
+ availability.Message = ex.Message;
+ throw;
+ }
+
+ finally
+ {
+ stopwatch.Stop();
+ availability.Duration = stopwatch.Elapsed;
+ availability.Timestamp = DateTimeOffset.UtcNow;
+ telemetryClient.TrackAvailability(availability);
+ telemetryClient.Flush();
+ }
} ```
azure-monitor Tutorial Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-alert.md
- Title: Send alerts from Azure Application Insights | Microsoft Docs
-description: Tutorial shows how to send alerts in response to errors in your application by using Application Insights.
- Previously updated : 04/10/2019----
-# Monitor and alert on application health with Application Insights
-
-Application Insights allows you to monitor your application and sends you alerts when it's unavailable, experiencing failures, or suffering from performance issues. This tutorial takes you through the process of creating tests to continuously check the availability of your application.
-
-You'll learn how to:
-
-> [!div class="checklist"]
-> * Create availability tests to continuously check the response of the application.
-> * Send mail to administrators when a problem occurs.
-
-## Prerequisites
-
-To complete this tutorial, create an [Application Insights resource](../app/create-new-resource.md).
-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create availability test
-
-Availability tests in Application Insights allow you to automatically test your application from various locations around the world. In this tutorial, you'll perform a URL test to ensure that your web application is available. You could also create a complete walkthrough to test its detailed operation.
-
-1. Select **Application Insights** and then select your subscription.
-
-1. Under the **Investigate** menu, select **Availability**. Then select **Create test**.
-
- ![Screenshot that shows adding an availability test.](media/tutorial-alert/add-test-001.png)
-
-1. Enter a name for the test and leave the other defaults. This selection will trigger requests for the application URL every 5 minutes from five different geographic locations.
-
-1. Select **Alerts** to open the **Alerts** dropdown where you can define details for how to respond if the test fails. Choose **Near-realtime** and set the status to **Enabled.**
-
- Enter an email address to send when the alert criteria are met. Optionally, you can enter the address of a webhook to call when the alert criteria are met.
-
- ![Screenshot that shows creating a test.](media/tutorial-alert/create-test-001.png)
-
-1. Return to the test panel, select the ellipses, and edit the alert to enter the configuration for your near-realtime alert.
-
- ![Screenshot that shows editing an alert.](media/tutorial-alert/edit-alert-001.png)
-
-1. Set failed locations to greater than or equal to 3. Create an [action group](../alerts/action-groups.md) to configure who gets notified when your alert threshold is breached.
-
- ![Screenshot that shows saving alert UI.](media/tutorial-alert/save-alert-001.png)
-
-1. After you've configured your alert, select the test name to view details from each location. Tests can be viewed in both line graph and scatter plot format to visualize the successes and failures for a given time range.
-
- ![Screenshot that shows test details.](media/tutorial-alert/test-details-001.png)
-
-1. To see the details of any test, select its dot in the scatter chart to open the **End-to-end transaction details** screen. The following example shows the details for a failed request.
-
- ![Screenshot that shows test results.](media/tutorial-alert/test-result-001.png)
-
-## Next steps
-
-Now that you've learned how to alert on issues, advance to the next tutorial to learn how to analyze how users are interacting with your application.
-
-> [!div class="nextstepaction"]
-> [Understand users](./tutorial-users.md)
azure-monitor Tutorial Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-performance.md
Like the data collected for server performance, Application Insights makes all c
Now that you've learned how to identify runtime exceptions, proceed to the next tutorial to learn how to create alerts in response to failures. > [!div class="nextstepaction"]
-> [Alert on application health](./tutorial-alert.md)
+> [Standard test](availability-standard-tests.md)
azure-monitor Container Insights Agent Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-agent-config.md
# Configure agent data collection for Container insights
-Container insights collects stdout, stderr, and environmental variables from container workloads deployed to managed Kubernetes clusters from the containerized agent. You can configure agent data collection settings by creating a custom Kubernetes ConfigMaps to control this experience.
+Container insights collects stdout, stderr, and environmental variables from container workloads deployed to managed Kubernetes clusters from the containerized agent. You can configure agent data collection settings by creating a custom Kubernetes ConfigMap to control this experience.
-This article demonstrates how to create ConfigMap and configure data collection based on your requirements.
+This article demonstrates how to create ConfigMaps and configure data collection based on your requirements.
## ConfigMap file settings overview
-A template ConfigMap file is provided that allows you to easily edit it with your customizations without having to create it from scratch. Before starting, you should review the Kubernetes documentation about [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) and familiarize yourself with how to create, configure, and deploy ConfigMaps. This will allow you to filter stderr and stdout per namespace or across the entire cluster, and environment variables for any container running across all pods/nodes in the cluster.
+A template ConfigMap file is provided so that you can easily edit it with your customizations without having to create it from scratch. Before you start, review the Kubernetes documentation about [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/). Familiarize yourself with how to create, configure, and deploy ConfigMaps. You need to know how to filter stderr and stdout per namespace or across the entire cluster. You also need to know how to filter environment variables for any container running across all pods/nodes in the cluster.
>[!IMPORTANT]
->The minimum agent version supported to collect stdout, stderr, and environmental variables from container workloads is ciprod06142019 or later. To verify your agent version, from the **Node** tab select a node, and in the properties pane note value of the **Agent Image Tag** property. For additional information about the agent versions and what's included in each release, see [agent release notes](https://github.com/microsoft/Docker-Provider/tree/ci_feature_prod).
+>The minimum agent version supported to collect stdout, stderr, and environmental variables from container workloads is **ciprod06142019** or later. To verify your agent version, on the **Node** tab, select a node. On the **Properties** pane, note the value of the **Agent Image Tag** property. For more information about the agent versions and what's included in each release, see [Agent release notes](https://github.com/microsoft/Docker-Provider/tree/ci_feature_prod).
### Data collection settings
-The following table describes the settings you can configure to control data collection:
+The following table describes the settings you can configure to control data collection.
| Key | Data type | Value | Description | |--|--|--|--|
-| `schema-version` | String (case sensitive) | v1 | This is the schema version used by the agent<br> when parsing this ConfigMap.<br> Currently supported schema-version is v1.<br> Modifying this value is not supported and will be<br> rejected when ConfigMap is evaluated. |
-| `config-version` | String | | Supports ability to keep track of this config file's version in your source control system/repository.<br> Maximum allowed characters are 10, and all other characters are truncated. |
-| `[log_collection_settings.stdout] enabled =` | Boolean | true or false | This controls if stdout container log collection is enabled. When set to `true` and no namespaces are excluded for stdout log collection<br> (`log_collection_settings.stdout.exclude_namespaces` setting below), stdout logs will be collected from all containers across all pods/nodes in the cluster. If not specified in ConfigMaps,<br> the default value is `enabled = true`. |
-| `[log_collection_settings.stdout] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stdout logs will not be collected. This setting is effective only if<br> `log_collection_settings.stdout.enabled`<br> is set to `true`.<br> If not specified in ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system"]`. |
-| `[log_collection_settings.stderr] enabled =` | Boolean | true or false | This controls if stderr container log collection is enabled.<br> When set to `true` and no namespaces are excluded for stdout log collection<br> (`log_collection_settings.stderr.exclude_namespaces` setting), stderr logs will be collected from all containers across all pods/nodes in the cluster.<br> If not specified in ConfigMaps, the default value is<br> `enabled = true`. |
-| `[log_collection_settings.stderr] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stderr logs will not be collected.<br> This setting is effective only if<br> `log_collection_settings.stdout.enabled` is set to `true`.<br> If not specified in ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system"]`. |
-| `[log_collection_settings.env_var] enabled =` | Boolean | true or false | This setting controls environment variable collection<br> across all pods/nodes in the cluster<br> and defaults to `enabled = true` when not specified<br> in ConfigMaps.<br> If collection of environment variables is globally enabled, you can disable it for a specific container<br> by setting the environment variable<br> `AZMON_COLLECT_ENV` to **False** either with a Dockerfile setting or in the [configuration file for the Pod](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) under the **env:** section.<br> If collection of environment variables is globally disabled, then you cannot enable collection for a specific container (that is, the only override that can be applied at the container level is to disable collection when it's already enabled globally.). |
-| `[log_collection_settings.enrich_container_logs] enabled =` | Boolean | true or false | This setting controls container log enrichment to populate the Name and Image property values<br> for every log record written to the ContainerLog table for all container logs in the cluster.<br> It defaults to `enabled = false` when not specified in ConfigMap. |
-| `[log_collection_settings.collect_all_kube_events] enabled =` | Boolean | true or false | This setting allows the collection of Kube events of all types.<br> By default the Kube events with type *Normal* are not collected. When this setting is set to `true`, the *Normal* events are no longer filtered and all events are collected.<br> It defaults to `enabled = false` when not specified in the ConfigMap |
+| `schema-version` | String (case sensitive) | v1 | This schema version is used by the agent<br> when parsing this ConfigMap.<br> Currently supported schema-version is v1.<br> Modifying this value isn't supported and will be<br> rejected when the ConfigMap is evaluated. |
+| `config-version` | String | | Supports the ability to keep track of this config file's version in your source control system/repository.<br> Maximum allowed characters are 10, and all other characters are truncated. |
+| `[log_collection_settings.stdout] enabled =` | Boolean | True or false | Controls if stdout container log collection is enabled. When set to `true` and no namespaces are excluded for stdout log collection<br> (`log_collection_settings.stdout.exclude_namespaces` setting), stdout logs will be collected from all containers across all pods/nodes in the cluster. If not specified in the ConfigMap,<br> the default value is `enabled = true`. |
+| `[log_collection_settings.stdout] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stdout logs won't be collected. This setting is effective only if<br> `log_collection_settings.stdout.enabled`<br> is set to `true`.<br> If not specified in the ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system","gatekeeper-system"]`. |
+| `[log_collection_settings.stderr] enabled =` | Boolean | True or false | Controls if stderr container log collection is enabled.<br> When set to `true` and no namespaces are excluded for stdout log collection<br> (`log_collection_settings.stderr.exclude_namespaces` setting), stderr logs will be collected from all containers across all pods/nodes in the cluster.<br> If not specified in the ConfigMap, the default value is<br> `enabled = true`. |
+| `[log_collection_settings.stderr] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stderr logs won't be collected.<br> This setting is effective only if<br> `log_collection_settings.stdout.enabled` is set to `true`.<br> If not specified in the ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system","gatekeeper-system"]`. |
+| `[log_collection_settings.env_var] enabled =` | Boolean | True or false | This setting controls environment variable collection<br> across all pods/nodes in the cluster<br> and defaults to `enabled = true` when not specified<br> in the ConfigMap.<br> If collection of environment variables is globally enabled, you can disable it for a specific container<br> by setting the environment variable<br> `AZMON_COLLECT_ENV` to `False` either with a Dockerfile setting or in the [configuration file for the Pod](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) under the `env:` section.<br> If collection of environment variables is globally disabled, you can't enable collection for a specific container. The only override that can be applied at the container level is to disable collection when it's already enabled globally. |
+| `[log_collection_settings.enrich_container_logs] enabled =` | Boolean | True or false | This setting controls container log enrichment to populate the `Name` and `Image` property values<br> for every log record written to the **ContainerLog** table for all container logs in the cluster.<br> It defaults to `enabled = false` when not specified in the ConfigMap. |
+| `[log_collection_settings.collect_all_kube_events] enabled =` | Boolean | True or false | This setting allows the collection of Kube events of all types.<br> By default, the Kube events with type **Normal** aren't collected. When this setting is set to `true`, the **Normal** events are no longer filtered, and all events are collected.<br> It defaults to `enabled = false` when not specified in the ConfigMap. |
### Metric collection settings
-The following table describes the settings you can configure to control metric collection:
+The following table describes the settings you can configure to control metric collection.
| Key | Data type | Value | Description | |--|--|--|--|
-| `[metric_collection_settings.collect_kube_system_pv_metrics] enabled =` | Boolean | true or false | This setting allows persistent volume (PV) usage metrics to be collected in the kube-system namespace. By default, usage metrics for persistent volumes with persistent volume claims in the kube-system namespace are not collected. When this setting is set to `true`, PV usage metrics for all namespaces are collected. By default, this is set to `false`. |
+| `[metric_collection_settings.collect_kube_system_pv_metrics] enabled =` | Boolean | True or false | This setting allows persistent volume (PV) usage metrics to be collected in the kube-system namespace. By default, usage metrics for persistent volumes with persistent volume claims in the kube-system namespace aren't collected. When this setting is set to `true`, PV usage metrics for all namespaces are collected. By default, this setting is set to `false`. |
-ConfigMaps is a global list and there can be only one ConfigMap applied to the agent. You cannot have another ConfigMaps overruling the collections.
+The ConfigMap is a global list, and there can be only one ConfigMap applied to the agent. You can't have another ConfigMap overruling the collections.
## Configure and deploy ConfigMaps
-Perform the following steps to configure and deploy your ConfigMap configuration file to your cluster.
+To configure and deploy your ConfigMap configuration file to your cluster:
-1. Download the [template ConfigMap YAML file](https://aka.ms/container-azm-ms-agentconfig) and save it as container-azm-ms-agentconfig.yaml.
+1. Download the [template ConfigMap YAML file](https://aka.ms/container-azm-ms-agentconfig) and save it as *container-azm-ms-agentconfig.yaml*.
-2. Edit the ConfigMap yaml file with your customizations to collect stdout, stderr, and/or environmental variables.
+1. Edit the ConfigMap YAML file with your customizations to collect stdout, stderr, and environmental variables:
- - To exclude specific namespaces for stdout log collection, you configure the key/value using the following example: `[log_collection_settings.stdout] enabled = true exclude_namespaces = ["my-namespace-1", "my-namespace-2"]`.
-
- - To disable environment variable collection for a specific container, set the key/value `[log_collection_settings.env_var] enabled = true` to enable variable collection globally, and then follow the steps [here](container-insights-manage-agent.md#how-to-disable-environment-variable-collection-on-a-container) to complete configuration for the specific container.
-
- - To disable stderr log collection cluster-wide, you configure the key/value using the following example: `[log_collection_settings.stderr] enabled = false`.
+ - To exclude specific namespaces for stdout log collection, configure the key/value by using the following example:
+ `[log_collection_settings.stdout] enabled = true exclude_namespaces = ["my-namespace-1", "my-namespace-2"]`.
+ - To disable environment variable collection for a specific container, set the key/value `[log_collection_settings.env_var] enabled = true` to enable variable collection globally. Then follow the steps [here](container-insights-manage-agent.md#how-to-disable-environment-variable-collection-on-a-container) to complete configuration for the specific container.
+ - To disable stderr log collection cluster-wide, configure the key/value by using the following example: `[log_collection_settings.stderr] enabled = false`.
Save your changes in the editor.
-3. Create ConfigMap by running the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+1. Create a ConfigMap by running the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`
-The configuration change can take a few minutes to finish before taking effect, and all Azure Monitor Agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor Agent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" created`.
+The configuration change can take a few minutes to finish before taking effect. Then all Azure Monitor Agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor Agent pods, so not all of them restart at the same time. When the restarts are finished, a message similar to this example includes the following result: `configmap "container-azm-ms-agentconfig" created`.
## Verify configuration
-To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n kube-system`. If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following:
+To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n kube-system`. If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following example:
```
***************Start Config Processing********************
config::unsupported/missing config schema version - 'v21' , using defaults
```
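Because the agent prefixes its configuration messages with `config::`, as the preceding output shows, filtering an agent pod's logs for that string is a quick way to spot problems. The following is a sketch; substitute the name of one of your own agent pods.

```
# Find the agent pod names, then filter one pod's logs for configuration messages
kubectl get pods -n kube-system | grep ama-logs
kubectl logs <ama-logs-pod-name> -n kube-system | grep -i "config::"
```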
-Errors related to applying configuration changes are also available for review. The following options are available to perform additional troubleshooting of configuration changes:
--- From an agent pod logs using the same `kubectl logs` command.
+Errors related to applying configuration changes are also available for review. The following options are available for further troubleshooting of configuration changes:
-- From Live logs. Live logs show errors similar to the following:
+- From an agent pod log by using the same `kubectl logs` command.
+- From live logs. Live logs show errors similar to the following example:
```
config::error::Exception while parsing config map for log collection/env variable settings: \nparse error on value \"$\" ($end), using defaults, please check config map for errors
```
-- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence and count in the last hour.
+- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with error severity for configuration errors. If there are no errors, the entry in the table will have data with severity info, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.
-After you correct the error(s) in ConfigMap, save the yaml file and apply the updated ConfigMaps by running the command: `kubectl apply -f <configmap_yaml_file.yaml`.
+After you correct the errors in the ConfigMap, save the YAML file and apply the updated ConfigMap by running the following command: `kubectl apply -f <configmap_yaml_file.yaml>`.
-## Applying updated ConfigMap
+## Apply updated ConfigMap
-If you have already deployed a ConfigMap on clusters and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used and then apply using the same command as before, `kubectl apply -f <configmap_yaml_file.yaml`.
+If you've already deployed a ConfigMap on clusters and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used. Then you can apply it by using the same command as before: `kubectl apply -f <configmap_yaml_file.yaml>`.
-The configuration change can take a few minutes to finish before taking effect, and all Azure Monitor Agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor Agent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" updated`.
+The configuration change can take a few minutes to finish before taking effect. Then all Azure Monitor Agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor Agent pods, so not all of them restart at the same time. When the restarts are finished, you'll see a message similar to the following, which includes the result: `configmap "container-azm-ms-agentconfig" updated`.
-## Verifying schema version
+## Verify schema version
-Supported config schema versions are available as pod annotation (schema-versions) on the Azure Monitor Agent pod. You can see them with the following kubectl command: `kubectl describe pod ama-logs-fdf58 -n=kube-system`
+Supported config schema versions are available as a pod annotation (schema-versions) on the Azure Monitor Agent pod. You can see them with the following kubectl command: `kubectl describe pod ama-logs-fdf58 -n=kube-system`.
-The output will show similar to the following with the annotation schema-versions:
+Output similar to the following example appears with the annotation schema-versions:
``` Name: ama-logs-fdf58
The output will show similar to the following with the annotation schema-version
## Next steps -- Container insights does not include a predefined set of alerts. Review the [Create performance alerts with Container insights](./container-insights-log-alerts.md) to learn how to create recommended alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures.--- With monitoring enabled to collect health and resource utilization of your AKS or hybrid cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.--- View [log query examples](container-insights-log-query.md) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
+- Container insights doesn't include a predefined set of alerts. Review the [Create performance alerts with Container insights](./container-insights-log-alerts.md) to learn how to create recommended alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures.
+- With monitoring enabled to collect health and resource utilization of your Azure Kubernetes Service or hybrid cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
+- View [log query examples](container-insights-log-query.md) to see predefined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
azure-monitor Container Insights Enable Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md
# Enable Container insights for Azure Kubernetes Service (AKS) cluster
-This article describes how to set up Container insights to monitor managed Kubernetes cluster hosted on an [Azure Kubernetes Service](../../aks/index.yml) cluster.
+
+This article describes how to set up Container insights to monitor a managed Kubernetes cluster hosted on an [Azure Kubernetes Service (AKS)](../../aks/index.yml) cluster.
## Prerequisites If you're connecting an existing AKS cluster to a Log Analytics workspace in another subscription, the Microsoft.ContainerService resource provider must be registered in the subscription with the Log Analytics workspace. For more information, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider). ## New AKS cluster
-You can enable monitoring for an AKS cluster as when it's created using any of the following methods:
-- Azure CLI. Follow the steps in [Create AKS cluster](../../aks/learn/quick-kubernetes-deploy-cli.md). -- Azure Policy. Follow the steps in [Enable AKS monitoring addon using Azure Policy](container-insights-enable-aks-policy.md).-- Terraform. If you are [deploying a new AKS cluster using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), you specify the arguments required in the profile [to create a Log Analytics workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_workspace) if you do not choose to specify an existing one. To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) and complete the profile by including the [**addon_profile**](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster) and specify **oms_agent**.
+You can enable monitoring for an AKS cluster when it's created by using any of the following methods:
+
+- **Azure CLI**: Follow the steps in [Create AKS cluster](../../aks/learn/quick-kubernetes-deploy-cli.md).
+- **Azure Policy**: Follow the steps in [Enable AKS monitoring add-on by using Azure Policy](container-insights-enable-aks-policy.md).
+- **Terraform**: If you're [deploying a new AKS cluster by using Terraform](/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks), you specify the arguments required in the profile [to create a Log Analytics workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_workspace) if you don't choose to specify an existing one. To add Container insights to the workspace, see [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution). Complete the profile by including the [addon_profile](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster) and specifying **oms_agent**.
## Existing AKS cluster+ Use any of the following methods to enable monitoring for an existing AKS cluster. ## [CLI](#tab/azure-cli) > [!NOTE]
-> Azure CLI version 2.39.0 or higher required for managed identity authentication.
+> Azure CLI version 2.39.0 or higher is required for managed identity authentication.
### Use a default Log Analytics workspace
-Use the following command to enable monitoring of your AKS cluster using a default Log Analytics workspace for the resource group. If a default workspace doesn't already exist in the cluster's region, then one will be created with a name in the format *DefaultWorkspace-\<GUID>-\<Region>*.
+Use the following command to enable monitoring of your AKS cluster by using a default Log Analytics workspace for the resource group. If a default workspace doesn't already exist in the cluster's region, one will be created with a name in the format *DefaultWorkspace-\<GUID>-\<Region>*.
```azurecli
az aks enable-addons -a monitoring -n <cluster-name> -g <cluster-resource-group-name>
```
-The output will resemble the following:
+The output will resemble the following example:
```output provisioningState : Succeeded
Use the following command to enable monitoring of your AKS cluster on a specific
az aks enable-addons -a monitoring -n <cluster-name> -g <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id> ```
-The output will resemble the following:
+The output will resemble the following example:
```output
provisioningState : Succeeded
```
## [Terraform](#tab/terraform)
-Use the following steps to enable monitoring using Terraform:
-1. Add the **oms_agent** add-on profile to the existing [azurerm_kubernetes_cluster resource](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/kubernetes_cluster)
+To enable monitoring by using Terraform:
+
+1. Add the **oms_agent** add-on profile to the existing [azurerm_kubernetes_cluster resource](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/kubernetes_cluster).
``` addon_profile {
Use the following steps to enable monitoring using Terraform:
} ```
-2. Add the [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) following the steps in the Terraform documentation.
-3. Enable collection of custom metrics using the guidance at [Enable custom metrics](container-insights-custom-metrics.md)
+1. Add the [azurerm_log_analytics_solution](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) by following the steps in the Terraform documentation.
+1. Enable collection of custom metrics by using the guidance at [Enable custom metrics](container-insights-custom-metrics.md).
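After you've updated the Terraform configuration in the preceding steps, roll out the change with the standard Terraform workflow. The following is a minimal sketch; it assumes your working directory contains the cluster configuration.

```
terraform init    # ensure the azurerm provider is installed
terraform plan    # review the planned changes that enable the monitoring add-on
terraform apply   # apply the changes to the cluster
```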
## [Azure portal](#tab/portal-azure-monitor) > [!NOTE] > You can initiate this same process from the **Insights** option in the AKS menu for your cluster in the Azure portal.
-To enable monitoring of your AKS cluster in the Azure portal from Azure Monitor, do the following:
+To enable monitoring of your AKS cluster in the Azure portal from Azure Monitor:
1. In the Azure portal, select **Monitor**.
-2. Select **Containers** from the list.
-3. On the **Monitor - containers** page, select **Unmonitored clusters**.
-4. From the list of unmonitored clusters, find the cluster in the list and click **Enable**.
-5. On the **Configure Container insights** page, click **Configure**
-
- :::image type="content" source="media/container-insights-enable-aks/container-insights-configure.png" lightbox="media/container-insights-enable-aks/container-insights-configure.png" alt-text="Screenshot of configuration screen for AKS cluster.":::
+1. Select **Containers** from the list.
+1. On the **Monitor - containers** page, select **Unmonitored clusters**.
+1. From the list of unmonitored clusters, find the cluster in the list and select **Enable**.
+1. On the **Configure Container insights** page, select **Configure**.
-6. On the **Configure Container insights**, fill in the following information:
+ :::image type="content" source="media/container-insights-enable-aks/container-insights-configure.png" lightbox="media/container-insights-enable-aks/container-insights-configure.png" alt-text="Screenshot that shows the configuration screen for an AKS cluster.":::
- | Option | Description |
- |:|:|
- | Log Analytics workspace | Select a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) from the drop-down list or click **Create new** to create a default Log Analytics workspace. The Log Analytics workspace must be in the same subscription as the AKS container. |
- | Enable Prometheus metrics | Select this option to collect Prometheus metrics for the cluster in [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). |
- | Azure Monitor workspace | If you select **Enable Prometheus metrics**, then you must select an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). The Azure Monitor workspace must be in the same subscription as the AKS container and the Log Analytics workspace. |
- | Grafana workspace | To use the collected Prometheus metrics with dashboards in [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace) to the Azure Monitor workspace if it isn't already. |
+1. On the **Configure Container insights** page, fill in the following information:
-7. Select **Use managed identity** if you want to use [managed identity authentication with the Azure Monitor agent](container-insights-onboard.md#authentication).
+ | Option | Description |
+ |:|:|
+ | Log Analytics workspace | Select a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) from the dropdown list or select **Create new** to create a default Log Analytics workspace. The Log Analytics workspace must be in the same subscription as the AKS container. |
+ | Enable Prometheus metrics | Select this option to collect Prometheus metrics for the cluster in [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). |
+ | Azure Monitor workspace | If you select **Enable Prometheus metrics**, you must select an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). The Azure Monitor workspace must be in the same subscription as the AKS container and the Log Analytics workspace. |
+ | Grafana workspace | To use the collected Prometheus metrics with dashboards in [Azure-managed Grafana](../../managed-grafan#link-a-grafana-workspace), select a Grafana workspace. It will be linked to the Azure Monitor workspace if it isn't already. |
+
+1. Select **Use managed identity** if you want to use [managed identity authentication with Azure Monitor Agent](container-insights-onboard.md#authentication).
After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster. ## [Resource Manager template](#tab/arm) >[!NOTE]
->The template needs to be deployed in the same resource group as the cluster.
-
+>The template must be deployed in the same resource group as the cluster.
### Create or download templates
-You will either download template and parameter files or create your own depending on what authentication mode you're using.
-**To enable [managed identity authentication (preview)](container-insights-onboard.md#authentication)**
+You'll either download the template and parameter files or create your own, depending on the authentication mode you're using.
-1. Download the template at [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file) and save it as **existingClusterOnboarding.json**.
+To enable [managed identity authentication (preview)](container-insights-onboard.md#authentication):
-2. Download the parameter file at [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file) and save it as **existingClusterParam.json**.
+1. Download the template in the [GitHub content file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file) and save it as **existingClusterOnboarding.json**.
-3. Edit the values in the parameter file.
+1. Download the parameter file in the [GitHub content file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file) and save it as **existingClusterParam.json**.
- - `aksResourceId`: Use the values on the **AKS Overview** page for the AKS cluster.
- - `aksResourceLocation`: Use the values on the **AKS Overview** page for the AKS cluster.
- - `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
- - `resourceTagValues`: Match the existing tag values specified for the existing Container insights extension DCR of the cluster and the name of the data collection rule, which will be MSCI-\<clusterName\>-\<clusterRegion\> and this resource created in AKS clusters Resource Group. If this is first-time onboarding, you can set the arbitrary tag values.
+1. Edit the values in the parameter file:
+ - `aksResourceId`: Use the values on the **AKS Overview** page for the AKS cluster.
+ - `aksResourceLocation`: Use the values on the **AKS Overview** page for the AKS cluster.
+ - `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
+ - `resourceTagValues`: Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will be *MSCI-\<clusterName\>-\<clusterRegion\>*, and this resource is created in the AKS cluster's resource group. If you're onboarding for the first time, you can set arbitrary tag values.
-**To enable [managed identity authentication (preview)](container-insights-onboard.md#authentication)**
+To enable [managed identity authentication (preview)](container-insights-onboard.md#authentication):
1. Save the following JSON as **existingClusterOnboarding.json**.
You will either download template and parameter files or create your own dependi
} ```
-2. Save the following JSON as **existingClusterParam.json**.
+1. Save the following JSON as **existingClusterParam.json**.
```json {
You will either download template and parameter files or create your own dependi
} ```
-2. Download the parameter file at [https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file) and save as **existingClusterParam.json**.
-
-3. Edit the values in the parameter file.
+1. Download the parameter file in the [GitHub content file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file) and save as **existingClusterParam.json**.
- - `aksResourceId`: Use the values on the **AKS Overview** page for the AKS cluster.
- - `aksResourceLocation`: Use the values on the **AKS Overview** page for the AKS cluster.
- - `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
- - `resourceTagValues`: Use the existing tag values specified for the AKS cluster.
+1. Edit the values in the parameter file:
-### Deploy template
+ - `aksResourceId`: Use the values on the **AKS Overview** page for the AKS cluster.
+ - `aksResourceLocation`: Use the values on the **AKS Overview** page for the AKS cluster.
+ - `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
+ - `resourceTagValues`: Use the existing tag values specified for the AKS cluster.
-Deploy the template with the parameter file using any valid method for deploying Resource Manager templates. See [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates) for examples of different methods.
+### Deploy the template
+Deploy the template with the parameter file by using any valid method for deploying Resource Manager templates. For examples of different methods, see [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates).
-
-#### To deploy with Azure PowerShell:
+#### Deploy with Azure PowerShell
```powershell
New-AzResourceGroupDeployment -Name OnboardCluster -ResourceGroupName <ResourceGroupName> -TemplateFile .\existingClusterOnboarding.json -TemplateParameterFile .\existingClusterParam.json
```
-The configuration change can take a few minutes to complete. When it's completed, a message is displayed that's similar to the following and includes the result:
+The configuration change can take a few minutes to complete. When it's finished, you'll see a message similar to the following example, which includes this result:
```output provisioningState : Succeeded ```
-#### To deploy with Azure CLI, run the following commands:
+#### Deploy with Azure CLI
```azurecli
az login
az account set --subscription "Subscription Name"
az deployment group create --resource-group <ResourceGroupName> --template-file ./existingClusterOnboarding.json --parameters @./existingClusterParam.json
```
-The configuration change can take a few minutes to complete. When it's completed, a message is displayed that's similar to the following and includes the result:
+The configuration change can take a few minutes to complete. When it's finished, you'll see a message similar to the following example, which includes this result:
```output provisioningState : Succeeded
After you've enabled monitoring, it might take about 15 minutes before you can v
## Verify agent and solution deployment
+
Run the following command to verify that the agent is deployed successfully.
```
kubectl get ds ama-logs --namespace=kube-system
```
-The output should resemble the following, which indicates that it was deployed properly:
+The output should resemble the following example, which indicates that it was deployed properly:
```output User@aksuser:~$ kubectl get ds ama-logs --namespace=kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR
ama-logs 2 2 2 2 2 beta.kubernetes.io/os=linux 1d ```
-If there are Windows Server nodes on the cluster then you can run the following command to verify that the agent is deployed successfully.
+If there are Windows Server nodes on the cluster, run the following command to verify that the agent is deployed successfully:
```
kubectl get ds ama-logs-windows --namespace=kube-system
```
-The output should resemble the following, which indicates that it was deployed properly:
+The output should resemble the following example, which indicates that it was deployed properly:
```output User@aksuser:~$ kubectl get ds ama-logs-windows --namespace=kube-system
To verify deployment of the solution, run the following command:
kubectl get deployment ama-logs-rs -n=kube-system ```
-The output should resemble the following, which indicates that it was deployed properly:
+The output should resemble the following example, which indicates that it was deployed properly:
```output User@aksuser:~$ kubectl get deployment ama-logs-rs -n=kube-system
ama-logs-rs 1 1 1 1 3h
## View configuration with CLI
-Use the `aks show` command to get details such as is the solution enabled or not, what is the Log Analytics workspace resourceID, and summary details about the cluster.
+Use the `az aks show` command to find out whether the solution is enabled, to get the Log Analytics workspace resource ID, and to see summary information about the cluster.
```azurecli
az aks show -g <resourceGroupofAKSCluster> -n <nameofAksCluster>
```
-After a few minutes, the command completes and returns JSON-formatted information about solution. The results of the command should show the monitoring add-on profile and resembles the following example output:
+After a few minutes, the command completes and returns JSON-formatted information about the solution. The results of the command should show the monitoring add-on profile and resemble the following example output:
```output "addonProfiles": {
After a few minutes, the command completes and returns JSON-formatted informatio
## Migrate to managed identity authentication
-### Existing clusters with service principal
-AKS Clusters with service principal must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for this migration.
+This section explains two methods for migrating to managed identity authentication.
-1. Get the configured Log Analytics workspace resource ID:
+### Existing clusters with a service principal
-```cli
-az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
-```
+AKS clusters with a service principal must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for this migration.
+
+1. Get the configured Log Analytics workspace resource ID:
+
+ ```cli
+ az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
+ ```
-2. Disable monitoring with the following command:
+1. Disable monitoring with the following command:
- ```cli
- az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name>
- ```
+ ```cli
+ az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name>
+ ```
-3. Upgrade cluster to system managed identity with the following command:
+1. Upgrade cluster to system managed identity with the following command:
- ```cli
- az aks update -g <resource-group-name> -n <cluster-name> --enable-managed-identity
- ```
+ ```cli
+ az aks update -g <resource-group-name> -n <cluster-name> --enable-managed-identity
+ ```
-4. Enable Monitoring addon with managed identity authentication option using Log Analytics workspace resource ID obtained in the first step:
+1. Enable the monitoring add-on with the managed identity authentication option by using the Log Analytics workspace resource ID obtained in step 1:
- ```cli
- az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
- ```
+ ```cli
+ az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
+ ```
-### Existing clusters with system or user assigned identity
-AKS Clusters with system assigned identity must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for clusters with system identity. For clusters with user assigned identity, only Azure Public cloud is supported.
+### Existing clusters with system or user-assigned identity
-1. Get the configured Log Analytics workspace resource ID:
+AKS clusters with system-assigned identity must first disable monitoring and then upgrade to managed identity. Only Azure public cloud, Azure China cloud, and Azure Government cloud are currently supported for clusters with system identity. For clusters with user-assigned identity, only Azure public cloud is supported.
- ```cli
- az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
- ```
+1. Get the configured Log Analytics workspace resource ID:
-2. Disable monitoring with the following command:
+ ```cli
+ az aks show -g <resource-group-name> -n <cluster-name> | grep -i "logAnalyticsWorkspaceResourceID"
+ ```
- ```cli
- az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name>
- ```
+1. Disable monitoring with the following command:
-3. Enable Monitoring addon with managed identity authentication option using Log Analytics workspace resource ID obtained in the first step:
+ ```cli
+ az aks disable-addons -a monitoring -g <resource-group-name> -n <cluster-name>
+ ```
- ```cli
- az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
- ```
+1. Enable the monitoring add-on with the managed identity authentication option by using the Log Analytics workspace resource ID obtained in step 1:
+
+ ```cli
+ az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <resource-group-name> -n <cluster-name> --workspace-resource-id <workspace-resource-id>
+ ```
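After the add-on is re-enabled, you can confirm the monitoring profile on the cluster. The following sketch prints only the add-on profiles from the cluster details shown earlier in this article.

```azurecli
# Print only the add-on profiles so you can confirm that the monitoring add-on is enabled
az aks show -g <resource-group-name> -n <cluster-name> --query addonProfiles -o json
```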
## Private link
-To enable network isolation by connecting your cluster to the Log Analytics workspace using [private link](../logs/private-link-security.md), your cluster must be using managed identity authentication with the Azure Monitor agent.
-1. Follow the steps in [Enable network isolation for the Azure Monitor agent](../agents/azure-monitor-agent-data-collection-endpoint.md) to create a data collection endpoint and add it to your AMPLS.
-2. Create an association between the cluster and the data collection endpoint using the following API call. See [Data Collection Rule Associations - Create](/rest/api/monitor/data-collection-rule-associations/create) for details on this call. The DCR association name must be **configurationAccessEndpoint**, `resourceUri` is the resource ID of the AKS cluster.
+To enable network isolation by connecting your cluster to the Log Analytics workspace by using [Azure Private Link](../logs/private-link-security.md), your cluster must be using managed identity authentication with Azure Monitor Agent.
+
+1. Follow the steps in [Enable network isolation for the Azure Monitor agent](../agents/azure-monitor-agent-data-collection-endpoint.md) to create a data collection endpoint and add it to your Azure Monitor Private Link Scope (AMPLS).
+
+1. Create an association between the cluster and the data collection endpoint by using the following API call. For information on this call, see [Data collection rule associations - Create](/rest/api/monitor/data-collection-rule-associations/create). The DCR association name must be **configurationAccessEndpoint**, and `resourceUri` is the resource ID of the AKS cluster. (A CLI-based sketch of the same call appears after these steps.)
```rest PUT https://management.azure.com/{cluster-resource-id}/providers/Microsoft.Insights/dataCollectionRuleAssociations/configurationAccessEndpoint?api-version=2021-04-01
To enable network isolation by connecting your cluster to the Log Analytics work
} ```
- Following is an example of this API call.
+ The following snippet is an example of this API call:
```rest PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/my-aks-cluster/providers/Microsoft.Insights/dataCollectionRuleAssociations/configurationAccessEndpoint?api-version=2021-04-01
To enable network isolation by connecting your cluster to the Log Analytics work
} ```
-3. Enable monitoring with managed identity authentication option using the steps in [Migrate to managed identity authentication](#migrate-to-managed-identity-authentication).
+1. Enable monitoring with the managed identity authentication option by using the steps in [Migrate to managed identity authentication](#migrate-to-managed-identity-authentication).
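If you prefer to make the association call in step 2 from the Azure CLI rather than from a raw REST client, `az rest` can send the same request. The following is a sketch; `body.json` is a hypothetical local file that contains the `dataCollectionEndpointId` payload shown in the earlier example.

```azurecli
# Create the DCR association named configurationAccessEndpoint on the AKS cluster
az rest --method put \
  --url "https://management.azure.com/<cluster-resource-id>/providers/Microsoft.Insights/dataCollectionRuleAssociations/configurationAccessEndpoint?api-version=2021-04-01" \
  --body @body.json
```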
## Limitations -- Enabling managed identity authentication (preview) is not currently supported using Terraform or Azure Policy.-- When you enable managed identity authentication (preview), a data collection rule is created with the name *MSCI-\<cluster-name\>-\<cluster-region\>*. This name cannot currently be modified.
+- Enabling managed identity authentication (preview) isn't currently supported with Terraform or Azure Policy.
+- When you enable managed identity authentication (preview), a data collection rule is created with the name *MSCI-\<cluster-name\>-\<cluster-region\>*. Currently, this name can't be modified.
## Next steps
-* If you experience issues while attempting to onboard the solution, review the [troubleshooting guide](container-insights-troubleshoot.md)
-
+* If you experience issues while you attempt to onboard the solution, review the [Troubleshooting guide](container-insights-troubleshoot.md).
* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Container Insights Livedata Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md
Title: View Live Data with Container insights
+ Title: View live data with Container insights
description: This article describes the real-time view of Kubernetes logs, events, and pod metrics without using kubectl in Container insights. Last updated 05/24/2022
-# How to view Kubernetes logs, events, and pod metrics in real-time
+# View Kubernetes logs, events, and pod metrics in real time
-Container insights includes the Live Data feature, which is an advanced diagnostic feature allowing you direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderror), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get` events, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to further assist in troubleshooting issues in real-time.
+Container insights includes the Live Data feature. You can use this advanced diagnostic feature for direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderr), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get` events, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to help with troubleshooting issues in real time.
-This article provides a detailed overview and helps you understand how to use this feature.
+This article provides an overview of this feature and helps you understand how to use it.
-For help setting up or troubleshooting the Live Data feature, review our [setup guide](container-insights-livedata-setup.md). This feature directly accesses the Kubernetes API, and additional information about the authentication model can be found [here](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
+For help with setting up or troubleshooting the Live Data feature, see the [Setup guide](container-insights-livedata-setup.md). This feature directly accesses the Kubernetes API. For more information about the authentication model, see [The Kubernetes API](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
## View AKS resource live logs
-Use the following procedure to view the live logs for pods, deployments, and replica sets with or without Container insights from the AKS resource view.
+
+To view the live logs for pods, deployments, and replica sets with or without Container insights from the AKS resource view:
1. In the Azure portal, browse to the AKS cluster resource group and select your AKS resource.
-2. Select **Workloads** in the **Kubernetes resources** section of the menu.
+1. Select **Workloads** in the **Kubernetes resources** section of the menu.
-3. Select a pod, deployment, replica-set from the respective tab.
+1. Select a pod, deployment, or replica set from the respective tab.
-4. Select **Live Logs** from the resource's menu.
+1. Select **Live Logs** from the resource's menu.
-5. Select a pod to start collection of the live data.
+1. Select a pod to start collecting the live data.
- [![Deployment live logs](./media/container-insights-livedata-overview/live-data-deployment.png)](./media/container-insights-livedata-overview/live-data-deployment.png#lightbox)
+ [![Screenshot that shows the deployment of live logs.](./media/container-insights-livedata-overview/live-data-deployment.png)](./media/container-insights-livedata-overview/live-data-deployment.png#lightbox)
## View logs
-You can view real-time log data as they are generated by the container engine from the **Nodes**, **Controllers**, and **Containers** view. To view log data, perform the following steps.
+You can view real-time log data as it's generated by the container engine on the **Nodes**, **Controllers**, or **Containers** view. To view log data:
1. In the Azure portal, browse to the AKS cluster resource group and select your AKS resource.
-2. On the AKS cluster dashboard, under **Monitoring** on the left-hand side, choose **Insights**.
+1. On the AKS cluster dashboard, under **Monitoring** on the left side, select **Insights**.
-3. Select either the **Nodes**, **Controllers**, or **Containers** tab.
+1. Select the **Nodes**, **Controllers**, or **Containers** tab.
-4. Select an object from the performance grid, and on the properties pane found on the right side, select **View live data** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure.
+1. Select an object from the performance grid. In the **Properties** pane on the right side, select **View live data**. If the AKS cluster is configured with single sign-on by using Azure Active Directory (Azure AD), you're prompted to authenticate on first use during that browser session. Select your account and finish authentication with Azure.
>[!NOTE]
- >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review the [How to query logs from Container insights](container-insights-log-query.md) feature to learn more about viewing historical logs, events and metrics.
+ >To view the data from your Log Analytics workspace, select **View in analytics** in the **Properties** pane. The log search results potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers**. These logs might no longer exist. Attempting to search logs for a container that isn't available in `kubectl` will also fail here. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container insights](container-insights-log-query.md).
-After successfully authenticating, the Live Data console pane will appear below the performance data grid where you can view log data in a continuous stream. If the fetch status indicator shows a green check mark, which is on the far right of the pane, it means data can be retrieved and it begins streaming to your console.
+After successful authentication, the Live Data console pane appears below the performance data grid. You can view log data here in a continuous stream. If the fetch status indicator shows a green check mark at the far right, it means data can be retrieved, and it begins streaming to your console.
-![Node properties pane view data option](./media/container-insights-livedata-overview/node-properties-pane.png)
+![Screenshot that shows the Node properties pane view data option.](./media/container-insights-livedata-overview/node-properties-pane.png)
The pane title shows the name of the pod the container is grouped with. ## View events
-You can view real-time event data as they are generated by the container engine from the **Nodes**, **Controllers**, **Containers**, and **Deployments** view when a container, pod, node, ReplicaSet, DaemonSet, job, CronJob or Deployment is selected. To view events, perform the following steps.
+You can view real-time event data as it's generated by the container engine on the **Nodes**, **Controllers**, **Containers**, or **Deployments** view when a container, pod, node, ReplicaSet, DaemonSet, job, CronJob, or Deployment is selected. To view events:
1. In the Azure portal, browse to the AKS cluster resource group and select your AKS resource.
-2. On the AKS cluster dashboard, under **Monitoring** on the left-hand side, choose **Insights**.
+1. On the AKS cluster dashboard, under **Monitoring** on the left side, select **Insights**.
-3. Select either the **Nodes**, **Controllers**, **Containers**, or **Deployments** tab.
+1. Select the **Nodes**, **Controllers**, **Containers**, or **Deployments** tab.
-4. Select an object from the performance grid, and on the properties pane found on the right side, select **View live data** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure.
+1. Select an object from the performance grid. In the **Properties** pane on the right side, select **View live data**. If the AKS cluster is configured with single sign-on by using Azure AD, you're prompted to authenticate on first use during that browser session. Select your account and finish authentication with Azure.
>[!NOTE]
- >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review the [How to query logs from Container insights](container-insights-log-query.md) feature to learn more about viewing historical logs, events and metrics.
+ >To view the data from your Log Analytics workspace, select **View in analytics** in the **Properties** pane. The log search results potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers**. These logs might no longer exist. Attempting to search logs for a container that isn't available in `kubectl` will also fail here. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container insights](container-insights-log-query.md).
-After successfully authenticating, the Live Data console pane will appear below the performance data grid. If the fetch status indicator shows a green check mark, which is on the far right of the pane, it means data can be retrieved and it begins streaming to your console.
+After successful authentication, the Live Data console pane appears below the performance data grid. If the fetch status indicator shows a green check mark at the far right, it means data can be retrieved, and it begins streaming to your console.
-If the object you selected was a container, select the **Events** option in the pane. If you selected a Node, Pod, or controller, viewing events is automatically selected.
+If the object you selected was a container, select the **Events** option in the pane. If you selected a node, pod, or controller, viewing events is automatically selected.
-![Controller properties pane view events](./media/container-insights-livedata-overview/controller-properties-live-event.png)
+![Screenshot that shows the Controller properties pane view events.](./media/container-insights-livedata-overview/controller-properties-live-event.png)
The pane title shows the name of the Pod the container is grouped with. ### Filter events
-While viewing events, you can additionally limit the results using the **Filter** pill found to the right of the search bar. Depending on what resource you have selected, the pill lists a Pod, Namespace, or cluster to choose from.
+While you view events, you can also limit the results by using the **Filter** pill found to the right of the search bar. Depending on the resource you select, the pill lists a pod, namespace, or cluster to choose from.
## View metrics
-You can view real-time metric data as they are generated by the container engine from the **Nodes** or **Controllers** view only when a **Pod** is selected. To view metrics, perform the following steps.
+You can view real-time metric data as it's generated by the container engine from the **Nodes** or **Controllers** view only when a **Pod** is selected. To view metrics:
1. In the Azure portal, browse to the AKS cluster resource group and select your AKS resource.
-2. On the AKS cluster dashboard, under **Monitoring** on the left-hand side, choose **Insights**.
+1. On the AKS cluster dashboard, under **Monitoring** on the left side, select **Insights**.
-3. Select either the **Nodes** or **Controllers** tab.
+1. Select either the **Nodes** or **Controllers** tab.
-4. Select a **Pod** object from the performance grid, and on the properties pane found on the right side, select **View live data** option. If the AKS cluster is configured with single sign-on using Azure AD, you are prompted to authenticate on first use during that browser session. Select your account and complete authentication with Azure.
+1. Select a **Pod** object from the performance grid. In the **Properties** pane on the right side, select **View live data**. If the AKS cluster is configured with single sign-on by using Azure AD, you're prompted to authenticate on first use during that browser session. Select your account and finish authentication with Azure.
>[!NOTE]
- >When viewing the data from your Log Analytics workspace by selecting the **View in analytics** option from the properties pane, the log search results will potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers** which may no longer exist. Attempting to search logs for a container which isn't available in `kubectl` will also fail here. Review [How to query logs from Container insights](container-insights-log-query.md) to learn more about viewing historical logs, events and metrics.
+ >To view the data from your Log Analytics workspace, select the **View in analytics** option in the **Properties** pane. The log search results potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers**. These logs might no longer exist. Attempting to search logs for a container that isn't available in `kubectl` will also fail here. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container insights](container-insights-log-query.md).
+
+After successful authentication, the Live Data console pane appears below the performance data grid. Metric data is retrieved and begins streaming to your console for presentation in the two charts. The pane title shows the name of the pod the container is grouped with.
-After successfully authenticating, the Live Data console pane will appear below the performance data grid. Metric data is retrieved and begins streaming to your console for presentation in the two charts. The pane title shows the name of the pod the container is grouped with.
+![Screenshot that shows the View Pod metrics example.](./media/container-insights-livedata-overview/pod-properties-live-metrics.png)
-![View Pod metrics example](./media/container-insights-livedata-overview/pod-properties-live-metrics.png)
+## Use live data views
-## Using live data views
The following sections describe functionality that you can use in the different live data views. ### Search
-The Live Data feature includes search functionality. In the **Search** field, you can filter results by typing a key word or term and any matching results are highlighted to allow quick review. While viewing events, you can additionally limit the results using the **Filter** pill found to the right of the search bar. Depending on what resource you have selected, the pill lists a Pod, Namespace, or cluster to chose from.
-![Live Data console pane filter example](./media/container-insights-livedata-overview/livedata-pane-filter-example.png)
+The Live Data feature includes search functionality. In the **Search** box, you can filter results by entering a keyword or term. Any matching results are highlighted to allow quick review. While you view the events, you can also limit the results by using the **Filter** feature to the right of the search bar. Depending on what resource you've selected, you can choose from a pod, namespace, or cluster.
-![Live Data console pane filter example for deployment](./media/container-insights-livedata-overview/live-data-deployment-search.png)
+![Screenshot that shows the Live Data console pane filter example.](./media/container-insights-livedata-overview/livedata-pane-filter-example.png)
-### Scroll Lock and Pause
+![Screenshot that shows the Live Data console pane filter example for deployment.](./media/container-insights-livedata-overview/live-data-deployment-search.png)
-To suspend autoscroll and control the behavior of the pane, allowing you to manually scroll through the new data read, you can use the **Scroll** option. To re-enable autoscroll, simply select the **Scroll** option again. You can also pause retrieval of log or event data by selecting the **Pause** option, and when you are ready to resume, simply select **Play**.
+### Scroll lock and pause
-![Live Data console pane pause live view](./media/container-insights-livedata-overview/livedata-pane-scroll-pause-example.png)
+To suspend autoscroll and control the behavior of the pane so that you can manually scroll through the newly read data, select the **Scroll** option. To re-enable autoscroll, select **Scroll** again. You can also pause retrieval of log or event data by selecting the **Pause** option. When you're ready to resume, select **Play**.
-![Live Data console pane pause live view for deployment](./media/container-insights-livedata-overview/live-data-deployment-pause.png)
+![Screenshot that shows the Live Data console pane pause live view.](./media/container-insights-livedata-overview/livedata-pane-scroll-pause-example.png)
+![Screenshot that shows the Live Data console pane pause live view for deployment.](./media/container-insights-livedata-overview/live-data-deployment-pause.png)
+Suspend or pause autoscroll for only a short period of time while you're troubleshooting an issue. These requests might affect the availability and throttling of the Kubernetes API on your cluster.
>[!IMPORTANT]
->We recommend only suspending or pausing autoscroll for a short period of time while troubleshooting an issue. These requests may impact the availability and throttling of the Kubernetes API on your cluster.
-
->[!IMPORTANT]
->No data is stored permanently during operation of this feature. All information captured during the session is deleted when you close your browser or navigate away from it. Data only remains present for visualization inside the five minute window of the metrics feature; any metrics older than five minutes are also deleted. The Live Data buffer queries within reasonable memory usage limits.
+>No data is stored permanently during the operation of this feature. All information captured during the session is deleted when you close your browser or navigate away from it. Data only remains present for visualization inside the five-minute window of the metrics feature. Any metrics older than five minutes are also deleted. The Live Data buffer queries within reasonable memory usage limits.
## Next steps - To continue learning how to use Azure Monitor and monitor other aspects of your AKS cluster, see [View Azure Kubernetes Service health](container-insights-analyze.md).--- View [How to query logs from Container insights](container-insights-log-query.md) to see predefined queries and examples to create alerts, visualizations, or perform further analysis of your clusters.
+- To see predefined queries and examples to create alerts and visualizations or perform further analysis of your clusters, see [How to query logs from Container insights](container-insights-log-query.md).
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
# Metric alert rules in Container insights (preview)
-Metric alerts in Azure Monitor proactively identify issues related to system resources of your Azure resources, including monitored Kubernetes clusters. Container insights provides pre-configured alert rules so that you don't have to create your own. This article describes the different types of alert rules you can create and how to enable and configure them.
+Metric alerts in Azure Monitor proactively identify issues related to system resources of your Azure resources, including monitored Kubernetes clusters. Container insights provides preconfigured alert rules so that you don't have to create your own. This article describes the different types of alert rules you can create and how to enable and configure them.
> [!IMPORTANT] > Container insights in Azure Monitor now supports alerts based on Prometheus metrics. If you already use alerts based on custom metrics, you should migrate to Prometheus alerts and disable the equivalent custom metric alerts.+ ## Types of metric alert rules+ There are two types of metric rules used by Container insights based on either Prometheus metrics or custom metrics. See a list of the specific alert rules for each at [Alert rule details](#alert-rule-details). | Alert rule type | Description | |:|:|
-| [Prometheus rules](#prometheus-alert-rules) | Alert rules that use metrics stored in [Azure Monitor managed service for Prometheus (preview)](../essentials/prometheus-metrics-overview.md). There are two sets of Prometheus alert rules that you can choose to enable.<br><br>- *Community alerts* are hand-picked alert rules from the Prometheus community. Use this set of alert rules if you don't have any other alert rules enabled.<br>-*Recommended alerts* are the equivalent of the custom metric alert rules. Use this set if you're migrating from custom metrics to Prometheus metrics and want to retain identical functionality.
-| [Metric rules](#metrics-alert-rules) | Alert rules that use [custom metrics collected for your Kubernetes cluster](container-insights-custom-metrics.md). Use these alert rules if you're not ready to move to Prometheus metrics yet or if you want to manage your alert rules in the Azure portal. |
-
+| [Prometheus rules](#prometheus-alert-rules) | Alert rules that use metrics stored in [Azure Monitor managed service for Prometheus (preview)](../essentials/prometheus-metrics-overview.md). There are two sets of Prometheus alert rules that you can choose to enable.<br><br>- *Community alerts* are handpicked alert rules from the Prometheus community. Use this set of alert rules if you don't have any other alert rules enabled.<br>- *Recommended alerts* are the equivalent of the custom metric alert rules. Use this set if you're migrating from custom metrics to Prometheus metrics and want to retain identical functionality.
+| [Metric rules](#metric-alert-rules) | Alert rules that use [custom metrics collected for your Kubernetes cluster](container-insights-custom-metrics.md). Use these alert rules if you're not ready to move to Prometheus metrics yet or if you want to manage your alert rules in the Azure portal. |
## Prometheus alert rules
-[Prometheus alert rules](../alerts/alerts-types.md#prometheus-alerts-preview) use metric data from your Kubernetes cluster sent to [Azure Monitor manage service for Prometheus](../essentials/prometheus-metrics-overview.md).
+
+[Prometheus alert rules](../alerts/alerts-types.md#prometheus-alerts-preview) use metric data from your Kubernetes cluster sent to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
### Prerequisites
-- Your cluster must be configured to send metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). See [Collect Prometheus metrics from Kubernetes cluster with Container insights](container-insights-prometheus-metrics-addon.md).
+Your cluster must be configured to send metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). For more information, see [Collect Prometheus metrics from Kubernetes cluster with Container insights](container-insights-prometheus-metrics-addon.md).
### Enable alert rules
-The only method currently available for creating Prometheus alert rules is a Resource Manager template.
+The only method currently available for creating Prometheus alert rules is an Azure Resource Manager template (ARM template).
-1. Download the template that includes the set of alert rules that you want to enable. See [Alert rule details](#alert-rule-details) for a listing of the rules for each.
+1. Download the template that includes the set of alert rules you want to enable. For a list of the rules for each, see [Alert rule details](#alert-rule-details).
- [Community alerts](https://aka.ms/azureprometheus-communityalerts) - [Recommended alerts](https://aka.ms/azureprometheus-recommendedalerts)
-2. Deploy the template using any standard methods for installing Resource Manager templates. See [Resource Manager template samples for Azure Monitor](../resource-manager-samples.md#deploy-the-sample-templates) for guidance.
+1. Deploy the template by using any standard methods for installing ARM templates. For guidance, see [ARM template samples for Azure Monitor](../resource-manager-samples.md#deploy-the-sample-templates).
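
   For example, a deployment from the Azure CLI might look like the following sketch. The resource group name and template file name are placeholders, and the template might require additional parameter values such as your cluster's resource ID.

   ```bash
   # Sketch only: deploy a downloaded Prometheus alert rule template with the Azure CLI.
   # The resource group and file name are placeholders; add --parameters values
   # (for example, your cluster's resource ID) as the template requires.
   az deployment group create \
     --resource-group myMonitoringResourceGroup \
     --template-file community-alerts.json
   ```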
-> [!NOTE]
-> While the Prometheus alert could be created in a different resource group to the target resource, you should use the same resource group as your target resource.
+> [!NOTE]
+> Although you can create the Prometheus alert in a resource group different from the target resource, use the same resource group as your target resource.
### Edit alert rules
- To edit the query and threshold or configure an action group for your alert rules, edit the appropriate values in the ARM template and redeploy it using any deployment method.
+ To edit the query and threshold or configure an action group for your alert rules, edit the appropriate values in the ARM template and redeploy it by using any deployment method.
### Configure alertable metrics in ConfigMaps
-Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps are applicable only for the following alertable metrics:
+Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps only apply to the following alertable metrics:
-- *cpuExceededPercentage*
-- *cpuThresholdViolated*
-- *memoryRssExceededPercentage*
-- *memoryRssThresholdViolated*
-- *memoryWorkingSetExceededPercentage*
-- *memoryWorkingSetThresholdViolated*
-- *pvUsageExceededPercentage*
-- *pvUsageThresholdViolated*
+- cpuExceededPercentage
+- cpuThresholdViolated
+- memoryRssExceededPercentage
+- memoryRssThresholdViolated
+- memoryWorkingSetExceededPercentage
+- memoryWorkingSetThresholdViolated
+- pvUsageExceededPercentage
+- pvUsageThresholdViolated
> [!TIP]
-> Download the new ConfigMap from [here](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
-
+> Download the new ConfigMap from [this GitHub content](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
1. Edit the ConfigMap YAML file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]` or `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`.
- - **Example**. Use the following ConfigMap configuration to modify the *cpuExceededPercentage* threshold to 90%:
+ - **Example:** Use the following ConfigMap configuration to modify the `cpuExceededPercentage` threshold to 90%:
```
[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
Perform the following steps to configure your ConfigMap configuration file to ov
container_memory_working_set_threshold_percentage = 95.0
```
- - **Example**. Use the following ConfigMap configuration to modify the *pvUsageExceededPercentage* threshold to 80%:
+ - **Example:** Use the following ConfigMap configuration to modify the `pvUsageExceededPercentage` threshold to 80%:
```
[alertable_metrics_configuration_settings.pv_utilization_thresholds]
Perform the following steps to configure your ConfigMap configuration file to ov
pv_usage_threshold_percentage = 80.0
```
-2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+1. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods; they don't all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following example and includes the result: `configmap "container-azm-ms-agentconfig" created`.
+The configuration change can take a few minutes to finish before it takes effect. Then all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, so they don't all restart at the same time. When the restarts are finished, a message similar to the following example includes the result: `configmap "container-azm-ms-agentconfig" created`.
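
If you want to confirm the rollout, a quick check like the following sketch can help. It assumes the Container insights agent pods are named omsagent and run in the kube-system namespace.

```bash
# Sketch only: list the agent pods and watch them cycle through the rolling restart.
# Assumes the Container insights agent pods are named omsagent in kube-system.
kubectl get pods -n kube-system | grep omsagent
```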
-## Metrics alert rules
-[Metric alert rules](../alerts/alerts-types.md#metric-alerts) use [custom metric data from your Kubernetes cluster](container-insights-custom-metrics.md).
+## Metric alert rules
+[Metric alert rules](../alerts/alerts-types.md#metric-alerts) use [custom metric data from your Kubernetes cluster](container-insights-custom-metrics.md).
### Prerequisites
- - You may need to enable collection of custom metrics for your cluster. See [Metrics collected by Container insights](container-insights-custom-metrics.md).
- - See the supported regions for custom metrics at [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
+ - You might need to enable collection of custom metrics for your cluster. See [Metrics collected by Container insights](container-insights-custom-metrics.md).
+ - See the supported regions for custom metrics at [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
### Enable and configure alert rules
The configuration change can take a few minutes to finish before taking effect,
#### Enable alert rules
-1. From the **Insights** menu for your cluster, select **Recommended alerts**.
+1. On the **Insights** menu for your cluster, select **Recommended alerts**.
- :::image type="content" source="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" lightbox="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" alt-text="Screenshot showing recommended alerts option in Container insights.":::
+ :::image type="content" source="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" lightbox="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" alt-text="Screenshot that shows recommended alerts option in Container insights.":::
+1. Toggle the **Status** for each alert rule to enable. The alert rule is created and the rule name updates to include a link to the new alert resource.
-2. Toggle the **Status** for each alert rule to enable. The alert rule is created and the rule name updates to include a link to the new alert resource.
+ :::image type="content" source="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" lightbox="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" alt-text="Screenshot that shows a list of recommended alerts and options for enabling each.":::
- :::image type="content" source="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" lightbox="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" alt-text="Screenshot showing list of recommended alerts and option for enabling each.":::
+1. Alert rules aren't associated with an [action group](../alerts/action-groups.md) to notify users that an alert has been triggered. Select **No action group assigned** to open the **Action Groups** page. Specify an existing action group or create an action group by selecting **Create action group**.
-3. Alert rules aren't associated with an [action group](../alerts/action-groups.md) to notify users that an alert has been triggered. Select **No action group assigned** to open the **Action Groups** page, specify an existing or create an action group by selecting **Create action group**.
-
- :::image type="content" source="media/container-insights-metric-alerts/select-action-group.png" lightbox="media/container-insights-metric-alerts/select-action-group.png" alt-text="Screenshot showing selection of an action group.":::
+ :::image type="content" source="media/container-insights-metric-alerts/select-action-group.png" lightbox="media/container-insights-metric-alerts/select-action-group.png" alt-text="Screenshot that shows selecting an action group.":::
#### Edit alert rules
-To edit the threshold for a rule or configure an [action group](../alerts/action-groups.md) for your AKS cluster.
+To edit the threshold for a rule or configure an [action group](../alerts/action-groups.md) for your Azure Kubernetes Service (AKS) cluster, follow these steps:
1. From Container insights for your cluster, select **Recommended alerts**.
-2. Click the **Rule Name** to open the alert rule.
-3. See [Create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=metric) for details on the alert rule settings.
+2. Select the **Rule Name** to open the alert rule.
+3. See [Create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=metric) for information on the alert rule settings.
#### Disable alert rules

1. From Container insights for your cluster, select **Recommended alerts**.
-2. Change the status for the alert rule to **Disabled**.
+1. Change the status for the alert rule to **Disabled**.
### [Resource Manager](#tab/resource-manager)
-For custom metrics, a separate Resource Manager template is provided for each alert rule.
+
+For custom metrics, a separate ARM template is provided for each alert rule.
#### Enable alert rules

1. Download one or all of the available templates that describe how to create the alert from [GitHub](https://github.com/microsoft/Docker-Provider/tree/ci_dev/alerts/recommended_alerts_ARM).
-2. Create and use a [parameters file](../../azure-resource-manager/templates/parameter-files.md) as a JSON to set the values required to create the alert rule.
-3. Deploy the template using any standard methods for installing Resource Manager templates. See [Resource Manager template samples for Azure Monitor](../resource-manager-samples.md) for guidance.
+1. Create and use a [parameters file](../../azure-resource-manager/templates/parameter-files.md) as a JSON to set the values required to create the alert rule.
+1. Deploy the template by using any standard methods for installing ARM templates. For guidance, see [ARM template samples for Azure Monitor](../resource-manager-samples.md).
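
   As a sketch, a deployment that combines a template with its parameters file might look like the following. The resource group and file names are placeholders.

   ```bash
   # Sketch only: deploy one downloaded alert rule template with its parameters file.
   # The resource group and file names are placeholders.
   az deployment group create \
     --resource-group myClusterResourceGroup \
     --template-file recommended-alert.json \
     --parameters @recommended-alert.parameters.json
   ```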
#### Disable alert rules
-To disable custom alert rules, use the same Resource Manager template to create the rule, but change the `isEnabled` value in the parameters file to `false`.
-
+To disable custom alert rules, use the same ARM template to create the rule, but change the `isEnabled` value in the parameters file to `false`.
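
Instead of editing the parameters file by hand, you can also override the value at deployment time, as in the following sketch. The resource group and file names are placeholders.

```bash
# Sketch only: redeploy the same template and override isEnabled on the command line.
# The resource group and file names are placeholders.
az deployment group create \
  --resource-group myClusterResourceGroup \
  --template-file recommended-alert.json \
  --parameters @recommended-alert.parameters.json \
  --parameters isEnabled=false
```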
+ ## Alert rule details
-The following sections provide details on the alert rules provided by Container insights.
+
+The following sections present information on the alert rules provided by Container insights.
### Community alert rules
-These are hand-picked alerts from Prometheus community. Source code for these mixin alerts can be found in [GitHub](https://aka.ms/azureprometheus-mixins).
+
+These handpicked alerts come from the Prometheus community. Source code for these mixin alerts can be found in [GitHub](https://aka.ms/azureprometheus-mixins):
- KubeJobNotCompleted
- KubeJobFailed
These are hand-picked alerts from Prometheus community. Source code for these mi
- KubeNodeReadinessFlapping
- KubeletTooManyPods
- KubeNodeUnreachable

### Recommended alert rules

The following table lists the recommended alert rules that you can enable for either Prometheus metrics or custom metrics.

| Prometheus alert name | Custom metric alert name | Description | Default threshold |
The following table lists the recommended alert rules that you can enable for ei
| Average container working set memory % | Average container working set memory % | Calculates average working set memory used per container. | 95% |
| Average CPU % | Average CPU % | Calculates average CPU used per node. | 80% |
| Average Disk Usage % | Average Disk Usage % | Calculates average disk usage for a node. | 80% |
-| Average Persistent Volume Usage % | Average Persistent Volume Usage % | Calculates average PV usage per pod. | 80% |
+| Average Persistent Volume Usage % | Average Persistent Volume Usage % | Calculates average persistent volume usage per pod. | 80% |
| Average Working set memory % | Average Working set memory % | Calculates average working set memory for a node. | 80% |
| Restarting container count | Restarting container count | Calculates number of restarting containers. | 0 |
| Failed Pod Counts | Failed Pod Counts | Calculates number of pods in a failed state. | 0 |
The following table lists the recommended alert rules that you can enable for ei
| Completed job count | Completed job count | Calculates number of jobs completed more than six hours ago. | 0 |

> [!NOTE]
-> The recommended alert rules in the Azure portal also include a log alert rule called *Daily Data Cap Breach*. This rule alerts when the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md). This alert rule is not included with the Prometheus alert rules.
->
-> You can create this rule on your own by creating a [log alert rule](../alerts/alerts-types.md#log-alerts) using the query `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`.
+> The recommended alert rules in the Azure portal also include a log alert rule called *Daily Data Cap Breach*. This rule alerts when the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md). This alert rule isn't included with the Prometheus alert rules.
+>
+> You can create this rule on your own by creating a [log alert rule](../alerts/alerts-types.md#log-alerts) that uses the query `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`.
+Common properties across all these alert rules include:
-Common properties across all of these alert rules include:
--- All alert rules are evaluated once per minute and they look back at last 5 minutes of data.
+- All alert rules are evaluated once per minute, and they look back at the last five minutes of data.
- All alert rules are disabled by default.-- Alerts rules don't have an action group assigned to them by default. You can add an [action group](../alerts/action-groups.md) to the alert either by selecting an existing action group or creating a new action group while editing the alert rule.-- You can modify the threshold for alert rules by directly editing the template and redeploying it. Refer to the guidance provided in each alert rule before modifying its threshold.
+- Alerts rules don't have an action group assigned to them by default. To add an [action group](../alerts/action-groups.md) to the alert, either select an existing action group or create a new action group while you edit the alert rule.
+- You can modify the threshold for alert rules by directly editing the template and redeploying it. Refer to the guidance provided in each alert rule before you modify its threshold.
The following metrics have unique behavior characteristics: **Prometheus and custom metrics**-- `completedJobsCount` metric is only sent when there are jobs that are completed greater than six hours ago.-- `containerRestartCount` metric is only sent when there are containers restarting.-- `oomKilledContainerCount` metric is only sent when there are OOM killed containers.-- `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory Rss, and Memory Working set values exceed the configured threshold (the default threshold is 95%). cpuThresholdViolated, memoryRssThresholdViolated, and memoryWorkingSetThresholdViolated metrics are equal to 0 is the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule.-- `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). `pvUsageThresholdViolated` metric is equal to 0 when the PV usage percentage is below the threshold and is equal 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.-- `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). *pvUsageThresholdViolated* metric is equal to 0 when the PV usage percentage is below the threshold and is equal 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule. -
-
-**Prometheus only**
-- If you want to collect `pvUsageExceededPercentage` and analyze it from [metrics explorer](../essentials/metrics-getting-started.md), you should configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `alertable_metrics_configuration_settings.pv_utilization_thresholds`. See [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file. Collection of persistent volume metrics with claims in the *kube-system* namespace are excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. See [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings) for details.-- `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory Rss, and Memory Working set values exceed the configured threshold (the default threshold is 95%). *cpuThresholdViolated*, *memoryRssThresholdViolated*, and *memoryWorkingSetThresholdViolated* metrics are equal to 0 is the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. Meaning, if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for their container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file.
+- The `completedJobsCount` metric is only sent when there are jobs that are completed greater than six hours ago.
+- The `containerRestartCount` metric is only sent when there are containers restarting.
+- The `oomKilledContainerCount` metric is only sent when there are OOM killed containers.
+- The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold. The default threshold is 95%. The `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule.
+- The `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold. The default threshold is 60%. The `pvUsageThresholdViolated` metric is equal to 0 when the persistent volume usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.
+
+**Prometheus only**
+- If you want to collect `pvUsageExceededPercentage` and analyze it from [metrics explorer](../essentials/metrics-getting-started.md), configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `alertable_metrics_configuration_settings.pv_utilization_thresholds`. For details related to configuring your ConfigMap configuration file, see [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps). Collection of persistent volume metrics with claims in the `kube-system` namespace are excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. For more information, see [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings).
+- The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and Memory Working set values exceed the configured threshold. The default threshold is 95%. The `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. If you want to collect these metrics and analyze them from [metrics explorer](../essentials/metrics-getting-started.md), configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for their container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. For details related to configuring your ConfigMap configuration file, see the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps).
## View alerts
-View fired alerts for your cluster from [**Alerts** in the **Monitor menu** in the Azure portal] with other fired alerts in your subscription. You can also select **View in alerts** from the **Recommended alerts** pane to view alerts from custom metrics.
-> [!NOTE]
-> Prometheus alerts will not currently be displayed when you select **Alerts** from your AKs cluster since the alert rule doesn't use the cluster as its target.
+View fired alerts for your cluster from **Alerts** in the **Monitor** menu in the Azure portal with other fired alerts in your subscription. You can also select **View in alerts** on the **Recommended alerts** pane to view alerts from custom metrics.
+> [!NOTE]
+> Currently, Prometheus alerts won't be displayed when you select **Alerts** from your AKS cluster because the alert rule doesn't use the cluster as its target.
## Next steps

-- [Read about the different alert rule types in Azure Monitor](../alerts/alerts-types.md).
-- [Read about alerting rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md).
+- Read about the [different alert rule types in Azure Monitor](../alerts/alerts-types.md).
+- Read about [alerting rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md).
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
# Enable Container insights
-This article provides an overview of the requirements and options that are available for configuring Container insights to monitor the performance of workloads that are deployed to Kubernetes environments. You can enable Container insights for a new deployment or for one or more existing deployments of Kubernetes by using a number of supported methods.
+
+This article provides an overview of the requirements and options that are available for configuring Container insights to monitor the performance of workloads that are deployed to Kubernetes environments. You can enable Container insights for a new deployment or for one or more existing deployments of Kubernetes by using several supported methods.
## Supported configurations

Container insights supports the following environments:

-- [Azure Kubernetes Service (AKS)](../../aks/index.yml)
+- [Azure Kubernetes Service (AKS)](../../aks/index.yml)
- [Azure Arc-enabled Kubernetes cluster](../../azure-arc/kubernetes/overview.md) - [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises - [AKS engine](https://github.com/Azure/aks-engine) - [Red Hat OpenShift](https://docs.openshift.com/container-platform/latest/welcome/https://docsupdatetracker.net/index.html) version 4.x
-The versions of Kubernetes and support policy are the same as those [supported in Azure Kubernetes Service (AKS)](../../aks/supported-kubernetes-versions.md).
+The versions of Kubernetes and support policy are the same as those versions [supported in AKS](../../aks/supported-kubernetes-versions.md).
### Differences between Windows and Linux clusters

The main differences in monitoring a Windows Server cluster compared to a Linux cluster include:

-- Windows doesn't have a Memory RSS metric, and as a result it isn't available for Windows nodes and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.
+- Windows doesn't have a Memory RSS metric. As a result, it isn't available for Windows nodes and containers. The [Working Set](/windows/win32/memory/working-set) metric is available.
- Disk storage capacity information isn't available for Windows nodes.
- Only pod environments are monitored, not Docker environments.
- With the preview release, a maximum of 30 Windows Server containers are supported. This limitation doesn't apply to Linux containers.

>[!NOTE]
-> Container insights support for Windows Server 2022 operating system in public preview.
-
+> Container insights support for the Windows Server 2022 operating system is in preview.
## Installation options
The main differences in monitoring a Windows Server cluster compared to a Linux
- [Azure Arc-enabled cluster](container-insights-enable-arc-enabled-clusters.md)
- [Hybrid Kubernetes clusters](container-insights-hybrid-setup.md)

## Prerequisites

Before you start, make sure that you've met the following requirements:

### Log Analytics workspace

Container insights stores its data in a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md). It supports workspaces in the regions that are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). For a list of the supported mapping pairs to use for the default workspace, see [Region mappings supported by Container insights](container-insights-region-mapping.md).
-You can let the onboarding experience create a Log Analytics workspace in the default resource group of the AKS cluster subscription. If you already have a workspace though, then you will most likely want to use that one. See [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md) for details.
+You can let the onboarding experience create a Log Analytics workspace in the default resource group of the AKS cluster subscription. If you already have a workspace, you'll probably want to use that one. For more information, see [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md).
-An AKS cluster can be attached to a Log Analytics workspace in a different Azure subscription in the same Azure AD Tenant. This cannot currently be done with the Azure portal, but can be done with Azure CLI or Resource Manager template.
+ You can attach an AKS cluster to a Log Analytics workspace in a different Azure subscription in the same Azure Active Directory tenant. Currently, you can't do it with the Azure portal, but you can use the Azure CLI or an Azure Resource Manager template.
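
For example, a cross-subscription attach from the Azure CLI might look like the following sketch. The cluster name, resource group, and workspace resource ID are placeholders.

```bash
# Sketch only: enable monitoring on an existing AKS cluster and point it at a
# Log Analytics workspace in another subscription by using its full resource ID.
# All names and IDs are placeholders.
az aks enable-addons --addons monitoring \
  --name myAKSCluster \
  --resource-group myClusterResourceGroup \
  --workspace-resource-id "/subscriptions/<other-subscription-id>/resourceGroups/<workspace-resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```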
### Azure Monitor workspace (preview)
-If you are going to configure the cluster to [collect Prometheus metrics](container-insights-prometheus-metrics-addon.md) with [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md), then you must have an Azure Monitor workspace which is where Prometheus metrics are stored. You can let the onboarding experience create an Azure Monitor workspace in the default resource group of the AKS cluster subscription or use an existing Azure Monitor workspace.
+
+If you're going to configure the cluster to [collect Prometheus metrics](container-insights-prometheus-metrics-addon.md) with [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md), you must have an Azure Monitor workspace where Prometheus metrics are stored. You can let the onboarding experience create an Azure Monitor workspace in the default resource group of the AKS cluster subscription or use an existing Azure Monitor workspace.
### Permissions

To enable container monitoring, you require the following permissions:

-- Member of the [Log Analytics contributor](../logs/manage-access.md#azure-rbac) role.
-- Member of the [*Owner* group](../../role-based-access-control/built-in-roles.md#owner) on any AKS cluster resources.
+- You must be a member of the [Log Analytics contributor](../logs/manage-access.md#azure-rbac) role.
+- You must be a member of the [*Owner* group](../../role-based-access-control/built-in-roles.md#owner) on any AKS cluster resources.
-To view data once container monitoring is enabled, you require the following permissions:
+To view data after container monitoring is enabled, you require the following permissions:
-- Member of [Log Analytics reader](../logs/manage-access.md#azure-rbac) role if you aren't already a member of [Log Analytics contributor](../logs/manage-access.md#azure-rbac).
+- You must be a member of the [Log Analytics reader](../logs/manage-access.md#azure-rbac) role if you aren't already a member of the [Log Analytics contributor](../logs/manage-access.md#azure-rbac) role.
### Kubelet secure port
-The containerized Linux agent (replicaset pod) makes API calls to all the Windows nodes on Kubelet Secure Port (10250) within the cluster to collect Node and Container Performance related Metrics. Kubelet secure port (:10250) should be opened in the cluster's virtual network for both inbound and outbound for Windows Node and container performance related metrics collection to work.
-If you have a Kubernetes cluster with Windows nodes, then please review and configure the Network Security Group and Network Policies to make sure the Kubelet secure port (:10250) is opened for both inbound and outbound in cluster's virtual network.
+The containerized Linux agent (replicaset pod) makes API calls to all the Windows nodes on Kubelet secure port (10250) within the cluster to collect node and container performance-related metrics. Kubelet secure port (:10250) should be opened in the cluster's virtual network for both inbound and outbound for Windows node and container performance-related metrics collection to work.
+If you have a Kubernetes cluster with Windows nodes, review and configure the network security group and network policies to make sure the Kubelet secure port (:10250) is opened for both inbound and outbound in the cluster's virtual network.
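
For example, an inbound rule that opens the port might look like the following sketch. The resource group, NSG name, and priority value are placeholders; align them with your own network security policies, and add an equivalent outbound rule if your policies require it.

```bash
# Sketch only: allow inbound TCP 10250 (Kubelet secure port) on a network security group.
# The resource group, NSG name, and priority value are placeholders.
az network nsg rule create \
  --resource-group myClusterResourceGroup \
  --nsg-name myClusterNsg \
  --name AllowKubeletSecurePort \
  --priority 1010 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 10250
```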
### Network firewall requirements
-See [Network firewall requirements](#network-firewall-requirements) for details on the firewall requirements for the AKS cluster.
+
+For information on the firewall requirements for the AKS cluster, see [Network firewall requirements](#network-firewall-requirements).
## Authentication
-Container Insights now supports authentication using managed identity (preview). This is a secure and simplified authentication model where the monitoring agent uses the clusterΓÇÖs managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster.
+
+Container insights now supports authentication by using managed identity (in preview). This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster.
> [!NOTE]
-> Container Insights preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. Container Insights previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see [Frequently asked questions about Azure Kubernetes Service (AKS)](../../aks/faq.md).
+> Container insights preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available." They're excluded from the service-level agreements and limited warranty. Container insights previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see [Frequently asked questions about Azure Kubernetes Service](../../aks/faq.md).
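
As a sketch, enabling the monitoring add-on with managed identity authentication from the Azure CLI might look like the following. It assumes the preview `--enable-msi-auth-for-monitoring` flag is available in your CLI version; verify it against the current `az aks enable-addons` reference. The cluster and resource group names are placeholders.

```bash
# Sketch only: enable the monitoring add-on with managed identity authentication (preview).
# Assumes the preview flag --enable-msi-auth-for-monitoring exists in your CLI version.
az aks enable-addons --addons monitoring \
  --name myAKSCluster \
  --resource-group myClusterResourceGroup \
  --enable-msi-auth-for-monitoring
```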
## Agent
-### Azure Monitor agent
-When using managed identity authentication (preview), Container insights relies on a containerized Azure Monitor agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster, and the agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
+This section reviews the agents used by Container insights.
+### Azure Monitor agent
-### Log Analytics agent
-When not using managed identity authentication, Container insights relies on a containerized Log Analytics agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster, and the agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
+When Container insights uses managed identity authentication (in preview), it relies on a containerized Azure Monitor agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster. The agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
-The agent version is *microsoft/oms:ciprod04202018* or later, and it's represented by a date in the following format: *mmddyyyy*. When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS). To track which versions are released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
+### Log Analytics agent
+When Container insights doesn't use managed identity authentication, it relies on a containerized Log Analytics agent for Linux. This specialized agent collects performance and event data from all nodes in the cluster. The agent is automatically deployed and registered with the specified Log Analytics workspace during deployment.
->[!NOTE]
->With the general availability of Windows Server support for AKS, an AKS cluster with Windows Server nodes has a preview agent installed as a daemonset pod on each individual Windows server node to collect logs and forward it to Log Analytics. For performance metrics, a Linux node that's automatically deployed in the cluster as part of the standard deployment collects and forwards the data to Azure Monitor on behalf all Windows nodes in the cluster.
+The agent version is *microsoft/oms:ciprod04202018* or later. It's represented by a date in the following format: *mmddyyyy*. When a new version of the agent is released, it's automatically upgraded on your managed Kubernetes clusters that are hosted on AKS. To track which versions are released, see [Agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
+With the general availability of Windows Server support for AKS, an AKS cluster with Windows Server nodes has a preview agent installed as a daemonset pod on each individual Windows Server node to collect logs and forward them to Log Analytics. For performance metrics, a Linux node that's automatically deployed in the cluster as part of the standard deployment collects and forwards the data to Azure Monitor for all Windows nodes in the cluster.
> [!NOTE]
-> If you've already deployed an AKS cluster and enabled monitoring using either the Azure CLI or a Azure Resource Manager template, you can't use `kubectl` to upgrade, delete, redeploy, or deploy the agent. The template needs to be deployed in the same resource group as the cluster.
+> If you've already deployed an AKS cluster and enabled monitoring by using either the Azure CLI or a Resource Manager template, you can't use `kubectl` to upgrade, delete, redeploy, or deploy the agent. The template needs to be deployed in the same resource group as the cluster.
## Network firewall requirements
The following table lists the proxy and firewall configuration information requi
| `*.monitoring.azure.com` | 443 | | `login.microsoftonline.com` | 443 |
-The following table lists the additional firewall configuration required for managed identity authentication.
+The following table lists the extra firewall configuration required for managed identity authentication.
|Agent resource| Purpose | Port | |--|||
The following table lists the additional firewall configuration required for man
**Azure China 21Vianet cloud**
-The following table lists the proxy and firewall configuration information for Azure China 21Vianet:
+The following table lists the proxy and firewall configuration information for Azure China 21Vianet.
|Agent resource| Purpose | Port | |--||-|
The following table lists the proxy and firewall configuration information for A
| `*.oms.opinsights.azure.cn` | OMS onboarding | 443 | | `dc.services.visualstudio.com` | For agent telemetry that uses Azure Public Cloud Application Insights | 443 | -
-The following table lists the additional firewall configuration required for managed identity authentication.
+The following table lists the extra firewall configuration required for managed identity authentication.
|Agent resource| Purpose | Port | |--|||
The following table lists the additional firewall configuration required for man
**Azure Government cloud**
-The following table lists the proxy and firewall configuration information for Azure US Government:
+The following table lists the proxy and firewall configuration information for Azure US Government.
|Agent resource| Purpose | Port | |--||-|
The following table lists the proxy and firewall configuration information for A
| `*.oms.opinsights.azure.us` | OMS onboarding | 443 | | `dc.services.visualstudio.com` | For agent telemetry that uses Azure Public Cloud Application Insights | 443 |
-The following table lists the additional firewall configuration required for managed identity authentication.
+The following table lists the extra firewall configuration required for managed identity authentication.
|Agent resource| Purpose | Port | |--||| | `global.handler.control.monitor.azure.us` | Access control service | 443 | | `<cluster-region-name>.handler.control.monitor.azure.us` | Fetch data collection rules for specific AKS cluster | 443 | - ## Next steps
-Once you've enabled monitoring, you can begin analyzing the performance of your Kubernetes clusters that are hosted on Azure Kubernetes Service (AKS), Azure Stack, or another environment. To learn how to use Container insights, see [View Kubernetes cluster performance](container-insights-analyze.md).
+
+After you've enabled monitoring, you can begin analyzing the performance of your Kubernetes clusters that are hosted on AKS, Azure Stack, or another environment.
+
+To learn how to use Container insights, see [View Kubernetes cluster performance](container-insights-analyze.md).
azure-monitor Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/partners.md
description: Learn about partners for Azure Monitor and how you can access docum
Previously updated : 10/27/2021 Last updated : 10/27/2022
The following partner products integrate with Azure Monitor. They're listed in alphabetical order.
+This isn't a complete list of partners. The number keeps expanding, and maintaining this list is no longer scalable. As such, we aren't accepting new requests to be added to this list. Any GitHub changes opened will be closed without action. We suggest that you use your favorite search engine to locate additional appropriate partners.
+ ## AIMS ![AIMS AIOps logo.](./media/partners/aims.jpg)
Grafana is an open-source application that enables you to visualize metric data
## InfluxData
-![InfluxData logo.](./media/partners/Influxdata.png)
+![InfluxData logo.](./media/partners/influxdata.png)
InfluxData is the creator of InfluxDB, the open-source time series database. Its technology is purpose built to handle the massive volumes of time-stamped data produced by Internet of Things (IoT) devices, applications, networks, containers, and computers.
For more information, see the [Moogsoft documentation](https://www.moogsoft.com/
## New Relic
-![New Relic logo.](./media/partners/newrelic.png)
+![New Relic logo.](./media/partners/newrelic-logo.png)
-See the [New Relic documentation](https://newrelic.com/solutions/partners/azure).
+Microsoft Azure integration monitoring from New Relic gives you an overview of your ecosystem, including cloud migrations, digital transformations, and cloud-native applications, by using the New Relic Observability Platform.
+
+**New Relic Azure monitoring helps you to:**
+* Monitor the entire software stack with full-stack monitoring.
+* Reduce friction between engineers and ITOps teams by identifying, triaging, and delegating application and infrastructure issues quickly.
+* Identify service dependencies through cross-application tracing using New Relic APM.
+
+For more information, see [New Relic Azure integration](https://newrelic.com/instant-observability/?category=azure&search=azure).
## OpsGenie
For more information, see the [SquaredUp website](https://squaredup.com/).
## Sumo Logic
-![Sumo Logic logo.](./media/partners/SumoLogic.png)
+![Sumo Logic logo.](./media/partners/sumologic.png)
Sumo Logic is a secure, cloud-native analytics service for machine data. It delivers real-time, continuous intelligence from structured, semistructured, and unstructured data across the entire application lifecycle and stack.
For more information, see the [Sumo Logic documentation](https://www.sumologic.c
## Turbonomic
-![Turbonomic logo.](./media/partners/Turbonomic.png)
+![Turbonomic logo.](./media/partners/turbonomic.png)
Turbonomic delivers workload automation for hybrid clouds by simultaneously optimizing performance, cost, and compliance in real time. Turbonomic helps organizations be elastic in their Azure estate by continuously optimizing the estate. Applications constantly get the resources they require to deliver their SLA, and nothing more, across compute, storage, and network for the IaaS and PaaS layer.
Organizations can simulate migrations, properly scale workloads, and retire data
For more information, see the [Turbonomic introduction](https://turbonomic.com/).
+## Zenduty
+
+![Zenduty logo.](./media/partners/zenduty.png)
+
+Zenduty is a collaborative incident management platform that provides end-to-end incident alerting, on-call management, and response orchestration, which gives teams greater control and automation over the incident management lifecycle. Zenduty is ideal for always-on services. It helps teams orchestrate incident response to create better user experiences and brand value, and it centralizes all incoming alerts through predefined notification rules to ensure that the right people are notified at the right time.
+
+Zenduty provides your NOC, SRE, and application engineers with detailed context around the Azure Monitor alert along with playbooks and a complete incident command framework to triage, remediate, and resolve incidents with speed.
+
+For more information, see the [Zenduty documentation](https://docs.zenduty.com/docs/microsoftazure).
+ ## Partner tools with Event Hubs integration If you use Azure Monitor to route monitoring data to an event hub, you can easily integrate with some external SIEM and monitoring tools. The following partners are known to have integration with the Event Hubs service.
azure-monitor Vminsights Enable Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-hybrid.md
You can download the Dependency agent from these locations:
| File | OS | Version | SHA-256 |
|:--|:--|:--|:--|
-| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.14.20760 | D4DB398FAD36E86FEACCC41D7B8AF46711346A943806769B6CE017F0BF1625FF |
-| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.14.20760 | 3DE3B485BA79B57E74B3DFB60FD277A30C8A5D1BD898455AD77FECF20E0E2610 |
+| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.15.22060 | 39427C875E08BF13E1FD3B78E28C96666B722DA675FAA94D8014D8F1A42AE724 |
+| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.15.22060 | 5B99CDEA77C6328BDEF448EAC9A6DEF03CE5A732C5F7C98A4D4F4FFB6220EF58 |
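
As a sketch, you might verify a downloaded installer against the published hash as follows. The output of `sha256sum` should match the SHA-256 value in the preceding table.

```bash
# Sketch only: download the Linux Dependency agent installer and verify its SHA-256
# hash against the value published in the preceding table.
wget -O InstallDependencyAgent-Linux64.bin https://aka.ms/dependencyagentlinux
sha256sum InstallDependencyAgent-Linux64.bin
```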
## Install the Dependency agent on Windows
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 10/14/2022 Last updated : 10/26/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files Standard network features are supported for the following reg
* France Central * Germany West Central * Japan East
-* Japan West
* Korea Central * North Central US * North Europe
Azure NetApp Files Standard network features are supported for the following reg
* South Central US * South India * Southeast Asia
+* Sweden Central
* Switzerland North * UAE Central
+* UAE North
* UK South * West Europe * West US
azure-resource-manager Template Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-object.md
Resource Manager provides several functions for working with objects in your Azu
* [createObject](#createobject) * [empty](#empty) * [intersection](#intersection)
+* [items](#items)
* [json](#json) * [length](#length) * [null](#null)
azure-resource-manager Test Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/test-toolkit.md
You can add the test toolkit to your Azure Pipeline. With a pipeline, you can ru
The easiest way to add the test toolkit to your pipeline is with third-party extensions. The following two extensions are available: -- [Run ARM template TTK Tests](https://marketplace.visualstudio.com/items?itemName=Sam-Cogan.ARMTTKExtension)
+- [Run ARM template TTK Tests](https://marketplace.visualstudio.com/items?itemName=Sam-Cogan.ARMTTKExtensionXPlatform)
- [ARM Template Tester](https://marketplace.visualstudio.com/items?itemName=maikvandergaag.maikvandergaag-arm-ttk) Or, you can implement your own tasks. The following example shows how to download the test toolkit.
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
Before the end of the 30 days of transition state, you can remove access from us
|**Subscription**| The subscription currently contains the classic account and other related resources such as the Media Services.| |**Resource Group**|Select an existing resource or create a new one. The resource group must be the same location as the classic account being connected| |**Azure Video Indexer account** (radio button)| Select the *"Connecting an existing classic account"*.|
- |**Existing account ID**| Enter the ID of existing Azure Video Indexer classic account.|
+ |**Existing account ID**|Select an existing Azure Video Indexer account from the dropdown.|
|**Resource name**|Enter the name of the new Azure Video Indexer account. Default value would be the same name the account had as classic.| |**Location**|The geographic region can't be changed in the connect process, the connected account must stay in the same region. | |**Media Services account name**|The original Media Services account name that was associated with classic account.|
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
This section describes languages supported by Azure Video Indexer API.
- Frame patterns (Only to Hebrew as of now) - Language customization
-| **Language** | **Code** | **Transcription** | **LID** | **MLID** | **Translation** | **Customization** (Language model) |
-|::|:--:|:--:|:-:|:-:|:-:|::|
-| Afrikaans | `af-ZA` | | Γ£ö | Γ£ö | Γ£ö | |
-| Arabic (Israel) | `ar-IL` | Γ£ö | | | Γ£ö | Γ£ö |
-| Arabic (Jordan) | `ar-JO` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic (Kuwait) | `ar-KW` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic (Lebanon) | `ar-LB` | Γ£ö | | | Γ£ö | Γ£ö |
-| Arabic (Oman) | `ar-OM` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic (Palestinian Authority) | `ar-PS` | Γ£ö | | | Γ£ö | Γ£ö |
-| Arabic (Qatar) | `ar-QA` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic (Saudi Arabia) | `ar-SA` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic (United Arab Emirates) | `ar-AE` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic Egypt | `ar-EG` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic Modern Standard (Bahrain) | `ar-BH` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Arabic Syrian Arab Republic | `ar-SY` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Bangla | `bn-BD` | | Γ£ö | Γ£ö | Γ£ö | |
-| Bosnian | `bs-Latn` | | Γ£ö | Γ£ö | Γ£ö | |
-| Bulgarian | `bg-BG` | | Γ£ö | Γ£ö | Γ£ö | |
-| Catalan | `ca-ES` | | Γ£ö | Γ£ö | Γ£ö | |
-| Chinese (Cantonese Traditional) | `zh-HK` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Chinese (Simplified) | `zh-Hans` | Γ£ö | | | Γ£ö | Γ£ö |
-| Chinese (Traditional) | `zh-Hant` | | Γ£ö | Γ£ö | Γ£ö | |
-| Croatian | `hr-HR` | | Γ£ö | Γ£ö | Γ£ö | |
+| **Language** | **Code** | **Transcription** | **LID**\* | **MLID**\* | **Translation** | **Customization** (language model) |
+|::|:--:|:--:|:-:|:-:|:-:|::|
+| Afrikaans | `af-ZA` | | | | | Γ£ö |
+| Arabic (Israel) | `ar-IL` | Γ£ö | | | | Γ£ö |
+| Arabic (Jordan) | `ar-JO` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
+| Arabic (Kuwait) | `ar-KW` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
+| Arabic (Lebanon) | `ar-LB` | Γ£ö | | | Γ£ö | Γ£ö |
+| Arabic (Oman) | `ar-OM` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
+| Arabic (Palestinian Authority) | `ar-PS` | Γ£ö | | | Γ£ö | Γ£ö |
+| Arabic (Qatar) | `ar-QA` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
+| Arabic (Saudi Arabia) | `ar-SA` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
+| Arabic (United Arab Emirates) | `ar-AE` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
+| Arabic Egypt | `ar-EG` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
+| Arabic Modern Standard (Bahrain) | `ar-BH` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
+| Arabic Syrian Arab Republic | `ar-SY` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
+| Bangla | `bn-BD` | | | | Γ£ö | |
+| Bosnian | `bs-Latn` | | | | Γ£ö | |
+| Bulgarian | `bg-BG` | | | | Γ£ö | |
+| Catalan | `ca-ES` | | | | Γ£ö | |
+| Chinese (Cantonese Traditional) | `zh-HK` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
+| Chinese (Simplified) | `zh-Hans` | Γ£ö | Γ£ö\*| | Γ£ö | Γ£ö |
+| Chinese (Traditional) | `zh-Hant` | | | | Γ£ö | |
+| Croatian | `hr-HR` | | | | Γ£ö | |
| Czech | `cs-CZ` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö | | Danish | `da-DK` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö | | Dutch | `nl-NL` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö | | English Australia | `en-AU` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö | | English United Kingdom | `en-GB` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| English United States | `en-US` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Estonian | `et-EE` | | Γ£ö | Γ£ö | Γ£ö | |
-| Fijian | `en-FJ` | | Γ£ö | Γ£ö | Γ£ö | |
-| Filipino | `fil-PH` | | Γ£ö | Γ£ö | Γ£ö | |
+| English United States | `en-US` | Γ£ö | Γ£ö\* | Γ£ö\* | Γ£ö | Γ£ö |
+| Estonian | `et-EE` | | | | Γ£ö | |
+| Fijian | `en-FJ` | | | | Γ£ö | |
+| Filipino | `fil-PH` | | | | Γ£ö | |
| Finnish | `fi-FI` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| French | `fr-FR` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
+| French | `fr-FR` | Γ£ö | Γ£ö\* | Γ£ö\* | Γ£ö | Γ£ö |
| French (Canada) | `fr-CA` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| German | `de-DE` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Greek | `el-GR` | | Γ£ö | Γ£ö | Γ£ö | |
-| Haitian | `fr-HT` | | Γ£ö | Γ£ö | Γ£ö | |
+| German | `de-DE` | Γ£ö | Γ£ö \*| Γ£ö \*| Γ£ö | Γ£ö |
+| Greek | `el-GR` | | | | Γ£ö | |
+| Haitian | `fr-HT` | | | | Γ£ö | |
| Hebrew | `he-IL` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö | | Hindi | `hi-IN` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Hungarian | `hu-HU` | | Γ£ö | Γ£ö | Γ£ö | |
-| Indonesian | `id-ID` | | Γ£ö | Γ£ö | Γ£ö | |
-| Italian | `it-IT` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Japanese | `ja-JP` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Kiswahili | `sw-KE` | | Γ£ö | Γ£ö | Γ£ö | |
+| Hungarian | `hu-HU` | | | | Γ£ö | |
+| Indonesian | `id-ID` | | | | Γ£ö | |
+| Italian | `it-IT` | Γ£ö | Γ£ö\* | Γ£ö | Γ£ö | Γ£ö |
+| Japanese | `ja-JP` | Γ£ö | Γ£ö\* | Γ£ö | Γ£ö | Γ£ö |
+| Kiswahili | `sw-KE` | | | | Γ£ö | |
| Korean | `ko-KR` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Latvian | `lv-LV` | | Γ£ö | Γ£ö | Γ£ö | |
-| Lithuanian | `lt-LT` | | Γ£ö | Γ£ö | Γ£ö | |
-| Malagasy | `mg-MG` | | Γ£ö | Γ£ö | Γ£ö | |
-| Malay | `ms-MY` | | Γ£ö | Γ£ö | Γ£ö | |
-| Maltese | `mt-MT` | | Γ£ö | Γ£ö | Γ£ö | |
+| Latvian | `lv-LV` | | | | Γ£ö | |
+| Lithuanian | `lt-LT` | | | | Γ£ö | |
+| Malagasy | `mg-MG` | | | | Γ£ö | |
+| Malay | `ms-MY` | | | | Γ£ö | |
+| Maltese | `mt-MT` | | | | Γ£ö | |
| Norwegian | `nb-NO` | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-| Persian | `fa-IR` | ✔ | | | ✔ | ✔ |
+| Persian | `fa-IR` | ✔ | | | ✔ | ✔ |
| Polish | `pl-PL` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Portuguese | `pt-BR` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Portuguese | `pt-BR` | ✔ | ✔\* | ✔ | ✔ | ✔ |
| Portuguese (Portugal) | `pt-PT` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Romanian | `ro-RO` | | ✔ | ✔ | ✔ | |
-| Russian | `ru-RU` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Samoan | `en-WS` | | ✔ | ✔ | ✔ | |
-| Serbian (Cyrillic) | `sr-Cyrl-RS` | | ✔ | ✔ | ✔ | |
-| Serbian (Latin) | `sr-Latn-RS` | | ✔ | ✔ | ✔ | |
-| Slovak | `sk-SK` | | ✔ | ✔ | ✔ | |
-| Slovenian | `sl-SI` | | ✔ | ✔ | ✔ | |
-| Spanish | `es-ES` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Spanish (Mexico) | `es-MX` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Swedish | `sv-SE` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Tamil | `ta-IN` | | ✔ | ✔ | ✔ | |
-| Thai | `th-TH` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Tongan | `to-TO` | | ✔ | ✔ | ✔ | |
+| Romanian | `ro-RO` | | | | ✔ | |
+| Russian | `ru-RU` | ✔ | ✔\* | ✔ | ✔ | ✔ |
+| Samoan | `en-WS` | | | | ✔ | |
+| Serbian (Cyrillic) | `sr-Cyrl-RS` | | | | ✔ | |
+| Serbian (Latin) | `sr-Latn-RS` | | | | ✔ | |
+| Slovak | `sk-SK` | | | | ✔ | |
+| Slovenian | `sl-SI` | | | | ✔ | |
+| Spanish | `es-ES` | ✔ | ✔\* | ✔\* | ✔ | ✔ |
+| Spanish (Mexico) | `es-MX` | ✔ | | | ✔ | ✔ |
+| Swedish | `sv-SE` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Tamil | `ta-IN` | | | | ✔ | |
+| Thai | `th-TH` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Tongan | `to-TO` | | | | ✔ | |
| Turkish | `tr-TR` | ✔ | ✔ | ✔ | ✔ | ✔ |
| Ukrainian | `uk-UA` | ✔ | ✔ | ✔ | ✔ | |
-| Urdu | `ur-PK` | | ✔ | ✔ | ✔ | |
+| Urdu | `ur-PK` | | | | ✔ | |
| Vietnamese | `vi-VN` | ✔ | ✔ | ✔ | ✔ | |
+\*By default, the languages marked with \* are supported by language identification (LID) and/or multi-language identification (MLID) auto-detection. When [uploading a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) with the API, you can use the `customLanguages` parameter to have LID or MLID auto-detect other supported languages (see the table above). The `customLanguages` parameter accepts up to 10 languages.
+
+> [!NOTE]
+> To change the default languages, set the `customLanguages` parameter. Setting this parameter replaces the default languages supported by language identification (LID) and multi-language identification (MLID).
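For reference, here's a minimal sketch of an upload call that sets `customLanguages`. The endpoint shape, the `language=auto` value, and the other query parameters shown (account ID, access token, video URL) are assumptions for illustration only; confirm them against the Upload-Video operation linked above.

```powershell
# Minimal sketch (assumptions noted above): upload a video and restrict LID/MLID
# auto-detection to a custom set of languages taken from the table above.
$location    = "trial"                      # or your Azure region
$accountId   = "<account-id>"
$accessToken = "<account-access-token>"
$videoUrl    = [uri]::EscapeDataString("https://example.com/videos/town-hall.mp4")
$custom      = "he-IL,hi-IN,es-MX"          # up to 10 languages

$uploadUri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos" +
             "?name=town-hall&videoUrl=$videoUrl" +
             "&language=auto&customLanguages=$custom" +
             "&accessToken=$accessToken"

Invoke-RestMethod -Method Post -Uri $uploadUri
```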
+ ## Language support in frontend experiences The following table describes language support in the Azure Video Indexer frontend experiences.
azure-vmware Concepts Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-run-command.md
+ # Run command in Azure VMware Solution In Azure VMware Solution, vCenter Server has a built-in local user called *cloudadmin* assigned to the CloudAdmin role. The CloudAdmin role has vCenter Server [privileges](concepts-identity.md#vcenter-server-access-and-identity) that differ from other VMware cloud solutions and on-premises deployments. The Run command feature lets you perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets.
azure-vmware Configure Alerts For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-alerts-for-azure-vmware-solution.md
Title: Configure alerts and work with metrics in Azure VMware Solution description: Learn how to use alerts to receive notifications. Also learn how to work with metrics to gain deeper insights into your Azure VMware Solution private cloud. + Last updated 07/23/2021
azure-vmware Configure Github Enterprise Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-github-enterprise-server.md
Title: Configure GitHub Enterprise Server on Azure VMware Solution
description: Learn how to Set up GitHub Enterprise Server on your Azure VMware Solution private cloud. Previously updated : 07/07/2021 Last updated : 10/25/2022+ # Configure GitHub Enterprise Server on Azure VMware Solution
azure-vmware Configure Hcx Network Extension High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-hcx-network-extension-high-availability.md
Title: Configure HCX network extension high availability
description: Learn how to configure HCX network extension high availability Previously updated : 05/06/2022 Last updated : 10/26/2022+ # HCX Network extension high availability (HA)
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md
Title: Configure VMware HCX in Azure VMware Solution
description: Configure the on-premises VMware HCX Connector for your Azure VMware Solution private cloud. Previously updated : 09/07/2021 Last updated : 10/26/2022+ # Configure on-premises VMware HCX Connector
azure-vmware Connect Multiple Private Clouds Same Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/connect-multiple-private-clouds-same-region.md
Title: Connect multiple Azure VMware Solution private clouds in the same region
description: Learn how to create a network connection between two or more Azure VMware Solution private clouds located in the same region. Previously updated : 09/20/2021 Last updated : 10/26/2022+ # Connect multiple Azure VMware Solution private clouds in the same region
azure-vmware Deploy Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-azure-vmware-solution.md
Title: Deploy and configure Azure VMware Solution description: Learn how to use the information gathered in the planning stage to deploy and configure the Azure VMware Solution private cloud. -+ Previously updated : 07/28/2021 Last updated : 10/22/2022
azure-vmware Deploy Disaster Recovery Using Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-vmware-hcx.md
Title: Deploy disaster recovery using VMware HCX
description: Learn how to deploy disaster recovery of your virtual machines (VMs) with VMware HCX Disaster Recovery. Also learn how to use Azure VMware Solution as the recovery or target site. Previously updated : 06/10/2021 Last updated : 10/26/2022+ # Deploy disaster recovery using VMware HCX
azure-vmware Deploy Traffic Manager Balance Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-traffic-manager-balance-workloads.md
Title: Deploy Traffic Manager to balance Azure VMware Solution workloads
description: Learn how to integrate Traffic Manager with Azure VMware Solution to balance application workloads across multiple endpoints in different regions. Previously updated : 02/08/2021 Last updated : 10/26/2022++ # Deploy Azure Traffic Manager to balance Azure VMware Solution workloads
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
Title: Deploy Zerto disaster recovery on Azure VMware Solution
description: Learn how to implement Zerto disaster recovery for on-premises VMware or Azure VMware Solution virtual machines. Previously updated : 10/25/2021- Last updated : 10/26/2022+ # Deploy Zerto disaster recovery on Azure VMware Solution
azure-vmware Ecosystem Back Up Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-back-up-vms.md
Title: Backup solutions for Azure VMware Solution virtual machines
description: Learn about leading backup and restore solutions for your Azure VMware Solution virtual machines. Previously updated : 04/21/2021 Last updated : 10/26/2022+ # Backup solutions for Azure VMware Solution virtual machines (VMs)
Back up network traffic between Azure VMware Solution VMs and the backup reposit
>[!NOTE] >For common questions, see [our third-party backup solution FAQ](./faq.yml). -- You can find more information on these backup solutions here: - [Cohesity](https://www.cohesity.com/blogs/expanding-cohesitys-support-for-microsofts-ecosystem-azure-stack-and-azure-vmware-solution/) - [Commvault](https://documentation.commvault.com/11.21/essential/128997_support_for_azure_vmware_solution.html)
azure-vmware Ecosystem Migration Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-migration-vms.md
Title: Migration solutions for Azure VMware Solution virtual machines
description: Learn about leading migration solutions for your Azure VMware Solution virtual machines. Previously updated : 03/22/2021 Last updated : 10/26/2022+ # Migration solutions for Azure VMware Solution virtual machines (VMs)
azure-vmware Move Azure Vmware Solution Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/move-azure-vmware-solution-across-regions.md
Title: Move Azure VMware Solution resources across regions description: This article describes how to move Azure VMware Solution resources from one Azure region to another. -+ Last updated 04/11/2022
azure-vmware Move Ea Csp Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/move-ea-csp-subscriptions.md
Title: Move Azure VMware Solution subscription to another subscription description: This article describes how to move Azure VMware Solution subscription to another subscription. You might move your resources for various reasons, such as billing. -+ Previously updated : 04/26/2021 Last updated : 10/26/2022 # Customer intent: As an Azure service administrator, I want to move my Azure VMware Solution subscription to another subscription.
Last updated 04/26/2021
This article describes how to move an Azure VMware Solution subscription to another subscription. You might move your subscription for various reasons, like billing. ## Prerequisites
-You should have at least contributor rights on both **source** and **target** subscriptions.
+
+You should have at least contributor rights on both **source** and **target** subscriptions.
>[!IMPORTANT]
->VNet and VNet gateway cannot be moved from one subscription to another. Additionally, moving your subscriptions has no impact on the management and workloads, like the vCenter, NSX, and workload virtual machines.
+>VNet and VNet gateway can't be moved from one subscription to another. Additionally, moving your subscriptions has no impact on the management and workloads, like the vCenter, NSX, and workload virtual machines.
-## Prepare and move
+## Prepare and move
1. In the Azure portal, select the private cloud you want to move.
- :::image type="content" source="media/move-subscriptions/source-subscription-id.png" alt-text="Screenshot that shows the overview details of the selected private cloud.":::
+   :::image type="content" source="media/move-subscriptions/source-subscription-id.png" alt-text="Screenshot that shows the overview details of the selected private cloud." lightbox="media/move-subscriptions/source-subscription-id.png":::
1. From a command prompt, ping the components and workloads to verify that they are pinging from the same subscription.
You should have at least contributor rights on both **source** and **target** su
1. Select the **Subscription (change)** link.
- :::image type="content" source="media/move-subscriptions/private-cloud-overview-subscription-id.png" alt-text="Screenshot showing the private cloud details.":::
+   :::image type="content" source="media/move-subscriptions/private-cloud-overview-subscription-id.png" alt-text="Screenshot showing the private cloud details." lightbox="media/move-subscriptions/private-cloud-overview-subscription-id.png":::
1. Provide the subscription details for **Target** and select **Next**.
- :::image type="content" source="media/move-subscriptions/move-resources-subscription-target.png" alt-text="Screenshot of the target resource.":::
+   :::image type="content" source="media/move-subscriptions/move-resources-subscription-target.png" alt-text="Screenshot of the target resource." lightbox="media/move-subscriptions/move-resources-subscription-target.png":::
-1. Confirm the validation of the resources you selected to move. During the validation, youΓÇÖll see **Pending validation** for the status.
+1. Confirm the validation of the resources you selected to move. During the validation, you'll see *Pending validation* under **Validation status**.
- :::image type="content" source="media/move-subscriptions/pending-move-resources-subscription-target.png" alt-text="Screenshot showing the resource being moved.":::
+   :::image type="content" source="media/move-subscriptions/pending-move-resources-subscription-target.png" alt-text="Screenshot showing the resource being moved." lightbox="media/move-subscriptions/pending-move-resources-subscription-target.png":::
1. Once the validation is successful, select **Next** to start the migration of your private cloud.
- :::image type="content" source="media/move-subscriptions/move-resources-succeeded.png" alt-text=" Screenshot showing the validation status of Succeeded.":::
+   :::image type="content" source="media/move-subscriptions/move-resources-succeeded.png" alt-text="Screenshot showing the validation status of Succeeded." lightbox="media/move-subscriptions/move-resources-succeeded.png":::
1. Select the check box indicating you understand that the tools and scripts associated won't work until you update them to use the new resource IDs. Then select **Move**.
- :::image type="content" source="media/move-subscriptions/review-move-resources-subscription-target.png" alt-text="Screenshot showing the summary of the selected resource being moved.":::
+   :::image type="content" source="media/move-subscriptions/review-move-resources-subscription-target.png" alt-text="Screenshot showing the summary of the selected resource being moved." lightbox="media/move-subscriptions/review-move-resources-subscription-target.png":::
## Verify the move
-A notification appears once the resource move is complete.
+A notification appears once the resource move is complete.
The new subscription appears in the private cloud Overview. ## Next steps+ Learn more about: - [Move Azure VMware Solution across regions](move-azure-vmware-solution-across-regions.md)
azure-vmware Plan Private Cloud Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/plan-private-cloud-deployment.md
Title: Plan the Azure VMware Solution deployment description: Learn how to plan your Azure VMware Solution deployment. -+ Previously updated : 09/27/2021 Last updated : 10/26/2022 # Plan the Azure VMware Solution deployment Planning your Azure VMware Solution deployment is critical for a successful production-ready environment for creating virtual machines (VMs) and migration. During the planning process, you'll identify and gather what's needed for your deployment. As you plan, make sure to document the information you gather for easy reference during the deployment. A successful deployment results in a production-ready environment for creating virtual machines (VMs) and migration.
-In this how-to, you'll:
+In this how-to article, you'll do the following tasks:
> [!div class="checklist"] > * Identify the Azure subscription, resource group, region, and resource name
In this how-to, you'll:
> * Define the virtual network gateway > * Define VMware HCX network segments
-After you're finished, follow the recommended next steps at the end to continue with this getting started guide.
-
+After you're finished, follow the recommended [Next steps](#next-steps) at the end of this article to continue with this getting started guide.
## Identify the subscription
Identify the resource group you want to use for your Azure VMware Solution. Gen
## Identify the region or location
-Identify the [region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-vmware) you want Azure VMware Solution deployed.
+Identify the [region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-vmware) where you want Azure VMware Solution deployed.
## Define the resource name The resource name is a friendly and descriptive name in which you title your Azure VMware Solution private cloud, for example, **MyPrivateCloud**. >[!IMPORTANT]
->The name must not exceed 40 characters. If the name exceeds this limit, you won't be able to create public IP addresses for use with the private cloud.
+>The name must not exceed 40 characters. If the name exceeds this limit, you won't be able to create public IP addresses for use with the private cloud.
## Identify the size hosts
The first Azure VMware Solution deployment you do consists of a private cloud co
[!INCLUDE [hosts-minimum-initial-deployment-statement](includes/hosts-minimum-initial-deployment-statement.md)] - >[!NOTE] >To learn about the limits for the number of hosts per cluster, the number of clusters per private cloud, and the number of hosts per private cloud, check [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-vmware-solution-limits).
-## Request a host quota
+## Request a host quota
It's crucial to request a host quota early, so after you've finished the planning process, you're ready to deploy your Azure VMware Solution private cloud. Before requesting a host quota, make sure you've identified the Azure subscription, resource group, and region. Also, make sure you've identified the size hosts and determine the number of clusters and hosts you'll need.
After the support team receives your request for a host quota, it takes up to fi
- [EA customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-ea-and-mca-customers) - [CSP customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-csp-customers) - ## Define the IP address segment for private cloud management
-Azure VMware Solution requires a /22 CIDR network, for example, `10.0.0.0/22`. This address space is carved into smaller network segments (subnets) and used for Azure VMware Solution management segments, including vCenter Server, VMware HCX, NSX-T Data Center, and vMotion functionality. The diagram highlights Azure VMware Solution management IP address segments.
+Azure VMware Solution requires a /22 CIDR network, for example, `10.0.0.0/22`. This address space is carved into smaller network segments (subnets) and used for Azure VMware Solution management segments, including vCenter Server, VMware HCX, NSX-T Data Center, and vMotion functionality. The diagram highlights Azure VMware Solution management IP address segments.
>[!IMPORTANT] >The /22 CIDR network address block shouldn't overlap with any existing network segment you already have on-premises or in Azure. For details of how the /22 CIDR network is broken down per private cloud, see [Routing and subnet considerations](tutorial-network-checklist.md#routing-and-subnet-considerations). -- ## Define the IP address segment for VM workloads
-Like with any VMware vSphere environment, the VMs must connect to a network segment. As the production deployment of Azure VMware Solution expands, there is often a combination of L2 extended segments from on-premises and local NSX-T network segments.
+Like with any VMware vSphere environment, the VMs must connect to a network segment. As the production deployment of Azure VMware Solution expands, there's often a combination of L2 extended segments from on-premises and local NSX-T network segments.
-For the initial deployment, identify a single network segment (IP network), for example, `10.0.4.0/24`. This network segment is used primarily for testing purposes during the initial deployment. The address block shouldn't overlap with any network segments on-premises or within Azure and shouldn't be within the /22 network segment already defined.
+For the initial deployment, identify a single network segment (IP network), for example, `10.0.4.0/24`. This network segment is used primarily for testing purposes during the initial deployment. The address block shouldn't overlap with any network segments on-premises or within Azure and shouldn't be within the /22 network segment already defined.
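If you want to sanity-check a candidate segment before deployment, the following is a minimal, purely illustrative sketch that tests whether it overlaps the management /22 or any other range you list. The CIDR values are placeholders, not a recommended layout.

```powershell
# Illustrative only: verify that a candidate workload segment (for example, 10.0.4.0/24)
# doesn't overlap the /22 management block or other address space you already use.
function Get-CidrRange {
    param([string]$Cidr)
    $ip, $prefix = $Cidr.Split('/')
    $bytes = ([System.Net.IPAddress]::Parse($ip)).GetAddressBytes()
    [Array]::Reverse($bytes)                       # convert to host byte order for the math
    $start = [BitConverter]::ToUInt32($bytes, 0)
    $size  = [uint32][math]::Pow(2, 32 - [int]$prefix)
    [pscustomobject]@{ Start = $start; End = $start + $size - 1 }
}

function Test-CidrOverlap {
    param([string]$CidrA, [string]$CidrB)
    $a = Get-CidrRange $CidrA
    $b = Get-CidrRange $CidrB
    ($a.Start -le $b.End) -and ($b.Start -le $a.End)
}

# Placeholders: the management /22 and an example on-premises range.
'10.0.0.0/22', '192.168.0.0/16' | ForEach-Object {
    "{0} overlaps 10.0.4.0/24 : {1}" -f $_, (Test-CidrOverlap $_ '10.0.4.0/24')
}
```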
- ## Define the virtual network gateway
-Azure VMware Solution requires an Azure Virtual Network and an ExpressRoute circuit. Define whether you want to use an *existing* OR *new* ExpressRoute virtual network gateway. If you decide to use a *new* virtual network gateway, you'll create it after creating your private cloud. It's acceptable to use an existing ExpressRoute virtual network gateway, and for planning purposes, make a note of which ExpressRoute virtual network gateway you'll use.
+Azure VMware Solution requires an Azure Virtual Network and an ExpressRoute circuit. Define whether you want to use an *existing* OR *new* ExpressRoute virtual network gateway. If you decide to use a *new* virtual network gateway, you'll create it after creating your private cloud. It's acceptable to use an existing ExpressRoute virtual network gateway. For planning purposes, make a note of which ExpressRoute virtual network gateway you'll use.
>[!IMPORTANT] >You can connect to a virtual network gateway in an Azure Virtual WAN, but it is out of scope for this quick start. ## Define VMware HCX network segments
-VMware HCX is an application mobility platform that simplifies application migration, workload rebalancing, and business continuity across data centers and clouds. You can migrate your VMware vSphere workloads to Azure VMware Solution and other connected sites through various migration types.
+VMware HCX is an application mobility platform that simplifies application migration, workload rebalancing, and business continuity across data centers and clouds. You can migrate your VMware vSphere workloads to Azure VMware Solution and other connected sites through various migration types.
-VMware HCX Connector deploys a subset of virtual appliances (automated) that require multiple IP segments. When you create your network profiles, you use the IP segments. Identify the following for the VMware HCX deployment, which supports a pilot or small product use case. Depending on the needs of your migration, modify as necessary.
+VMware HCX Connector deploys a subset of virtual appliances (automated) that require multiple IP segments. When you create your network profiles, you use the IP segments. Identify the following items for the VMware HCX deployment, which supports a pilot or small product use case. Depending on the needs of your migration, modify as necessary.
- **Management network:** When deploying VMware HCX on-premises, you'll need to identify a management network for VMware HCX. Typically, it's the same management network used by your on-premises VMware vSphere cluster. At a minimum, identify **two** IPs on this network segment for VMware HCX. You might need larger numbers, depending on the scale of your deployment beyond the pilot or small use case. >[!NOTE] >Preparing for large environments, instead of using the management network used for the on-premises VMware vSphere cluster, create a new /26 network and present that network as a port group to your on-premises VMware vSphere cluster. You can then create up to 10 service meshes and 60 network extenders (-1 per service mesh). You can stretch **eight** networks per network extender by using Azure VMware Solution private clouds. -- **Uplink network:** When deploying VMware HCX on-premises, you'll need to identify an Uplink network for VMware HCX. Use the same network which youΓÇÖll use for the Management network.
+- **Uplink network:** When deploying VMware HCX on-premises, you'll need to identify an Uplink network for VMware HCX. Use the same network you plan to use for the Management network.
- **vMotion network:** When deploying VMware HCX on-premises, you'll need to identify a vMotion network for VMware HCX. Typically, it's the same network used for vMotion by your on-premises VMware vSphere cluster. At a minimum, identify **two** IPs on this network segment for VMware HCX. You might need larger numbers, depending on the scale of your deployment beyond the pilot or small use case.
VMware HCX Connector deploys a subset of virtual appliances (automated) that req
>[!NOTE] >Many VMware vSphere environments use non-routed network segments for vMotion, which poses no problems. -- **Replication network:** When deploying VMware HCX on-premises, you'll need to define a replication network. Use the same network as you are using for your Management and Uplink networks. If the on-premises cluster hosts use a dedicated Replication VMkernel network, reserve **two** IP addresses in this network segment and use the Replication VMkernel network for the replication network.-
+- **Replication network:** When deploying VMware HCX on-premises, you'll need to define a replication network. Use the same network you're using for your Management and Uplink networks. If the on-premises cluster hosts use a dedicated Replication VMkernel network, reserve **two** IP addresses in this network segment and use the Replication VMkernel network for the replication network.
## Determine whether to extend your networks
Optionally, you can extend network segments from on-premises to Azure VMware Sol
>[!IMPORTANT] >These networks are extended as a final step of the configuration, not during deployment. - ## Next steps+ Now that you've gathered and documented the information needed, continue to the next tutorial to create your Azure VMware Solution private cloud. > [!div class="nextstepaction"]
azure-vmware Protect Azure Vmware Solution With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/protect-azure-vmware-solution-with-application-gateway.md
Title: Protect web apps on Azure VMware Solution with Azure Application Gateway
description: Configure Azure Application Gateway to securely expose your web apps running on Azure VMware Solution. Previously updated : 02/10/2021 Last updated : 10/26/2022+ # Protect web apps on Azure VMware Solution with Azure Application Gateway [Azure Application Gateway](https://azure.microsoft.com/services/application-gateway/) is a layer 7 web traffic load balancer that lets you manage traffic to your web applications. It's offered in both Azure VMware Solution v1.0 and v2.0. Both versions tested with web apps running on Azure VMware Solution.
-The capabilities include:
+The capabilities include:
+ - Cookie-based session affinity - URL-based routing - Web Application Firewall (WAF)
-For a complete list of features, see [Azure Application Gateway features](../application-gateway/features.md).
+For a complete list of features, see [Azure Application Gateway features](../application-gateway/features.md).
This article shows you how to use Application Gateway in front of a web server farm to protect a web app running on Azure VMware Solution. ## Topology
-The diagram shows how Application Gateway is used to protect Azure IaaS virtual machines (VMs), Azure virtual machine scale sets, or on-premises servers. Application Gateway treats Azure VMware Solution VMs as on-premises servers.
+The diagram shows how Application Gateway is used to protect Azure IaaS virtual machines (VMs), Azure Virtual Machine Scale Sets, or on-premises servers. Application Gateway treats Azure VMware Solution VMs as on-premises servers.
+ > [!IMPORTANT] > Azure Application Gateway is currently the only supported method to expose web apps running on Azure VMware Solution VMs. The diagram shows the testing scenario used to validate the Application Gateway with Azure VMware Solution web applications. The Application Gateway instance gets deployed on the hub in a dedicated subnet with an Azure public IP address. Activating the [Azure DDoS Protection Standard](../ddos-protection/ddos-protection-overview.md) for the virtual network is recommended. The web server is hosted on an Azure VMware Solution private cloud behind NSX T0 and T1 Gateways. Additionally, Azure VMware Solution uses [ExpressRoute Global Reach](../expressroute/expressroute-global-reach.md) to enable communication with the hub and on-premises systems. ## Prerequisites -- An Azure account with an active subscription.
+- An Azure account with an active subscription.
- An Azure VMware Solution private cloud deployed and running. ## Deployment and configuration 1. In the Azure portal, search for **Application Gateway** and select **Create application gateway**.
-2. Provide the basic details as in the following figure; then select **Next: Frontends>**.
+2. Provide the basic details as in the following figure; then select **Next: Frontends>**.
- :::image type="content" source="media/application-gateway/create-app-gateway.png" alt-text="Screenshot showing Create application gateway page in Azure portal.":::
+   :::image type="content" source="media/application-gateway/create-app-gateway.png" alt-text="Screenshot showing Create application gateway page in Azure portal." lightbox="media/application-gateway/create-app-gateway.png":::
3. Choose the frontend IP address type. For public, choose an existing public IP address or create a new one. Select **Next: Backends>**.
The Application Gateway instance gets deployed on the hub in a dedicated subnet
5. On the **Configuration** tab, select **Add a routing rule**.
-6. On the **Listener** tab, provide the details for the listener. If HTTPS is selected, you must provide a certificate, either from a PFX file or an existing Azure Key Vault certificate.
+6. On the **Listener** tab, provide the details for the listener. If HTTPS is selected, you must provide a certificate, either from a PFX file or an existing Azure Key Vault certificate.
7. Select the **Backend targets** tab and select the backend pool previously created. For the **HTTP settings** field, select **Add new**. 8. Configure the parameters for the HTTP settings. Select **Add**.
-9. If you want to configure path-based rules, select **Add multiple targets to create a path-based rule**.
+9. If you want to configure path-based rules, select **Add multiple targets to create a path-based rule**.
-10. Add a path-based rule and select **Add**. Repeat to add more path-based rules.
+10. Add a path-based rule and select **Add**. Repeat to add more path-based rules.
-11. When you have finished adding path-based rules, select **Add** again; then select **Next: Tags>**.
+11. When you have finished adding path-based rules, select **Add** again; then select **Next: Tags>**.
12. Add tags and then select **Next: Review + Create>**.
The Application Gateway instance gets deployed on the hub in a dedicated subnet
## Configuration examples
-Now you'll configure Application Gateway with Azure VMware Solution VMs as backend pools for the following use cases:
+Now you'll configure Application Gateway with Azure VMware Solution VMs as backend pools for the following use cases:
- [Hosting multiple sites](#hosting-multiple-sites) - [Routing by URL](#routing-by-url) ### Hosting multiple sites
-This procedure shows you how to define backend address pools using VMs running on an Azure VMware Solution private cloud on an existing application gateway.
+This procedure shows you how to define backend address pools using VMs running on an Azure VMware Solution private cloud on an existing application gateway.
>[!NOTE] >This procedure assumes you have multiple domains, so we'll use examples of www.contoso.com and www.fabrikam.com.
+1. In your private cloud, create two different pools of VMs. One represents Contoso and the second Fabrikam.
-1. In your private cloud, create two different pools of VMs. One represents Contoso and the second Fabrikam.
+   :::image type="content" source="media/application-gateway/app-gateway-multi-backend-pool.png" alt-text="Screenshot showing summary of a web server's details in VMware vSphere Client." lightbox="media/application-gateway/app-gateway-multi-backend-pool.png":::
- :::image type="content" source="media/application-gateway/app-gateway-multi-backend-pool.png" alt-text="Screenshot showing summary of a web server's details in VSphere Client.":::
-
- We've used Windows Server 2016 with the Internet Information Services (IIS) role installed. Once the VMs are installed, run the following PowerShell commands to configure IIS on each of the VMs.
+ We've used Windows Server 2016 with the Internet Information Services (IIS) role installed. Once the VMs are installed, run the following PowerShell commands to configure IIS on each of the VMs.
```powershell Install-WindowsFeature -Name Web-Server
This procedure shows you how to define backend address pools using VMs running o
The following steps define backend address pools using VMs running on an Azure VMware Solution private cloud. The private cloud is on an existing application gateway. You then create routing rules that make sure web traffic arrives at the appropriate servers in the pools.
-1. In your private cloud, create a virtual machine pool to represent the web farm.
+1. In your private cloud, create a virtual machine pool to represent the web farm.
- :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool.png" alt-text="Screenshot of page in VMSphere Client showing summary of another VM.":::
+   :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool.png" alt-text="Screenshot of page in VMware vSphere Client showing summary of another VM." lightbox="media/application-gateway/app-gateway-url-route-backend-pool.png":::
- Windows Server 2016 with IIS role installed has been used to illustrate this tutorial. Once the VMs are installed, run the following PowerShell commands to configure IIS for each VM tutorial.
+   Windows Server 2016 with the IIS role installed has been used to illustrate this tutorial. Once the VMs are installed, run the following PowerShell commands to configure IIS for each VM.
The first virtual machine, contoso-web-01, hosts the main website.
The following steps define backend address pools using VMs running on an Azure V
``` The second virtual machine, contoso-web-02, hosts the images site.
-
+ ```powershell Install-WindowsFeature -Name Web-Server New-Item -Path "C:\inetpub\wwwroot\" -Name "images" -ItemType "directory"
The following steps define backend address pools using VMs running on an Azure V
Add-Content -Path C:\inetpub\wwwroot\video\test.htm -Value $($env:computername) ```
-2. Add three new backend pools in an existing application gateway instance.
+2. Add three new backend pools in an existing application gateway instance.
1. Select **Backend pools** from the left menu.
- 1. Select **Add** and enter the details of the first pool, **contoso-web**.
- 1. Add one VM as the target.
- 1. Select **Add**.
- 1. Repeat this process for **contoso-images** and **contoso-video**, adding one unique VM as the target.
+ 1. Select **Add** and enter the details of the first pool, **contoso-web**.
+ 1. Add one VM as the target.
+ 1. Select **Add**.
+ 1. Repeat this process for **contoso-images** and **contoso-video**, adding one unique VM as the target.
:::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-02.png" alt-text="Screenshot of Backend pools page showing the addition of three new backend pools." lightbox="media/application-gateway/app-gateway-url-route-backend-pool-02.png":::
The following steps define backend address pools using VMs running on an Azure V
4. On the left navigation, select **HTTP settings** and select **Add** in the left pane. Fill in the details to create a new HTTP setting and select **Save**.
- :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-04.png" alt-text="Screenshot of Add HTTP setting page showing HTTP settings configuration.":::
+   :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-04.png" alt-text="Screenshot of Add HTTP setting page showing HTTP settings configuration." lightbox="media/application-gateway/app-gateway-url-route-backend-pool-04.png":::
5. Create the rules in the **Rules** section of the left menu and associate each rule with the previously created listener. Then configure the main backend pool and HTTP settings, and then select **Add**.
- :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-07.png" alt-text="Screenshot of Add a routing rule page to configure routing rules to a backend target.":::
+   :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-07.png" alt-text="Screenshot of Add a routing rule page to configure routing rules to a backend target." lightbox="media/application-gateway/app-gateway-url-route-backend-pool-07.png":::
-6. Test the configuration. Access the application gateway on the Azure portal and copy the public IP address in the **Overview** section.
+6. Test the configuration. Access the application gateway in the Azure portal and copy the public IP address in the **Overview** section. (A scripted version of these URL checks is sketched after these steps.)
- 1. Open a new browser window and enter the URL `http://<app-gw-ip-address>:8080`.
+ 1. Open a new browser window and enter the URL `http://<app-gw-ip-address>:8080`.
- :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-08.png" alt-text="Screenshot of browser page showing successful test of the configuration.":::
+   :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-08.png" alt-text="Screenshot of browser page showing successful test of the configuration." lightbox="media/application-gateway/app-gateway-url-route-backend-pool-08.png":::
1. Change the URL to `http://<app-gw-ip-address>:8080/images/test.htm`.
- :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-09.png" alt-text="Screenshot of another successful test with the new URL.":::
+   :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-09.png" alt-text="Screenshot of another successful test with the new URL." lightbox="media/application-gateway/app-gateway-url-route-backend-pool-09.png":::
1. Change the URL again to `http://<app-gw-ip-address>:8080/video/test.htm`.
- :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-10.png" alt-text="Screenshot of successful test with the final URL.":::
+   :::image type="content" source="media/application-gateway/app-gateway-url-route-backend-pool-10.png" alt-text="Screenshot of successful test with the final URL." lightbox="media/application-gateway/app-gateway-url-route-backend-pool-10.png":::
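As referenced in step 6, here's a minimal sketch that scripts the same three URL checks. Replace `<app-gw-ip-address>` with the public IP copied from the gateway's **Overview** section; the `/images` and `/video` responses come from the test pages created earlier in this walkthrough.

```powershell
# Minimal sketch: exercise the path-based routing rules from a client machine.
$gateway = "http://<app-gw-ip-address>:8080"

foreach ($path in "/", "/images/test.htm", "/video/test.htm") {
    $response = Invoke-WebRequest -Uri ($gateway + $path) -UseBasicParsing
    # The /images and /video test pages return the backend server's computer name.
    "{0,-18} -> HTTP {1} body: {2}" -f $path, $response.StatusCode, $response.Content.Trim()
}
```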
## Next Steps
azure-web-pubsub Tutorial Serverless Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-iot.md
Open function host index page: `http://localhost:7071/api/index` to view the rea
> [Azure Web PubSub bindings for Azure Functions](./reference-functions-bindings.md) > [!div class="nextstepaction"]
-> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
backup Backup Azure Backup Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-sharepoint.md
Title: Back up a SharePoint farm to Azure with DPM description: This article provides an overview of DPM/Azure Backup server protection of a SharePoint farm to Azure- Previously updated : 03/09/2020+ Last updated : 10/27/2022++++
-# Back up a SharePoint farm to Azure with DPM
+# Back up a SharePoint farm to Azure with Data Protection Manager
-You back up a SharePoint farm to Microsoft Azure by using System Center Data Protection Manager (DPM) in much the same way that you back up other data sources. Azure Backup provides flexibility in the backup schedule to create daily, weekly, monthly, or yearly backup points and gives you retention policy options for various backup points. DPM provides the capability to store local disk copies for quick recovery-time objectives (RTO) and to store copies to Azure for economical, long-term retention.
-Backing up SharePoint to Azure with DPM is a very similar process to backing up SharePoint to DPM locally. Particular considerations for Azure will be noted in this article.
+This article describes how to back up and restore SharePoint data using System Center Data Protection Manager (DPM). The backup operation of SharePoint to Azure with DPM is similar to SharePoint backup to DPM locally.
-## SharePoint supported versions and related protection scenarios
+System Center Data Protection Manager (DPM) enables you back up a SharePoint farm to Microsoft Azure, which gives an experience similar to back up of other data sources. Azure Backup provides flexibility in the backup schedule to create daily, weekly, monthly, or yearly backup points, and gives you retention policy options for various backup points. DPM provides the capability to store local disk copies for quick recovery-time objectives (RTO) and to store copies to Azure for economical, long-term retention.
-For a list of supported SharePoint versions and the DPM versions required to back them up see [What can DPM back up?](/system-center/dpm/dpm-protection-matrix#applications-backup)
+In this article, you'll learn about:
-## Before you start
+> [!div class="checklist"]
+> - SharePoint supported scenarios
+> - Prerequisites
+> - Configure backup
+> - Monitor operations
+> - Restore SharePoint data
+> - Restore a SharePoint database from Azure using DPM
+> - Switch the Front-End Web Server
-There are a few things you need to confirm before you back up a SharePoint farm to Azure.
+## SharePoint supported scenarios
-### Prerequisites
+For information on the supported SharePoint versions and the DPM versions required to back them up, see [What can DPM back up?](/system-center/dpm/dpm-protection-matrix#applications-backup).
-Before you proceed, make sure that you have met all the [prerequisites for using Microsoft Azure Backup](backup-azure-dpm-introduction.md#prerequisites-and-limitations) to protect workloads. Some tasks for prerequisites include: create a backup vault, download vault credentials, install Azure Backup Agent, and register DPM/Azure Backup Server with the vault.
+## Prerequisites
-Additional prerequisites and limitations can be found on the [Back up SharePoint with DPM](/system-center/dpm/back-up-sharepoint#prerequisites-and-limitations) article.
+Before you proceed to back up a SharePoint farm to Azure, ensure that you've met all the [prerequisites for using Microsoft Azure Backup](backup-azure-dpm-introduction.md#prerequisites-and-limitations) to protect workloads. These prerequisite tasks include creating a backup vault, downloading vault credentials, installing the Azure Backup agent, and registering DPM/Azure Backup Server with the vault.
+
+For other prerequisites and limitations, see [Back up SharePoint with DPM](/system-center/dpm/back-up-sharepoint#prerequisites-and-limitations).
## Configure backup
-To back up SharePoint farm you configure protection for SharePoint by using ConfigureSharePoint.exe and then create a protection group in DPM. For instructions, see [Configure Backup](/system-center/dpm/back-up-sharepoint#configure-backup) in the DPM documentation.
+To back up the SharePoint farm, configure protection for SharePoint using *ConfigureSharePoint.exe*, and then create a protection group in DPM. See the DPM documentation to learn [how to configure backup](/system-center/dpm/back-up-sharepoint#configure-backup).
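For reference, here's a minimal sketch of the *ConfigureSharePoint.exe* step on the front-end web server. The install path and the available switches vary by DPM version, so treat this as an assumption and confirm the exact syntax in the DPM documentation linked above.

```powershell
# Run on the SharePoint front-end web server where the DPM protection agent is installed.
# The path and switches below are assumptions; verify them against your DPM version's documentation.
Set-Location "$env:ProgramFiles\Microsoft Data Protection Manager\DPM\bin"

# Enables SharePoint protection; you're prompted for a farm administrator account.
.\ConfigureSharePoint.exe -EnableSharePointProtection
```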
-## Monitoring
+## Monitor operations
-To monitor the backup job, follow the instructions in [Monitoring DPM backup](/system-center/dpm/back-up-sharepoint#monitoring)
+To monitor the backup job, see [Monitoring DPM backup](/system-center/dpm/back-up-sharepoint#monitoring).
## Restore SharePoint data To learn how to restore a SharePoint item from a disk with DPM, see [Restore SharePoint data](/system-center/dpm/back-up-sharepoint#restore-sharepoint-data).
-## Restore a SharePoint database from Azure by using DPM
+## Restore a SharePoint database from Azure using DPM
+
+To recover a SharePoint content database, follow these steps:
-1. To recover a SharePoint content database, browse through various recovery points (as shown previously), and select the recovery point that you want to restore.
+1. Browse through various recovery points (as shown previously), and select the recovery point that you want to restore.
- ![DPM SharePoint Protection8](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection9.png)
+ ![Screenshot showing how to select a recovery point from the list.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection9.png)
2. Double-click the SharePoint recovery point to show the available SharePoint catalog information. > [!NOTE]
- > Because the SharePoint farm is protected for long-term retention in Azure, no catalog information (metadata) is available on the DPM server. As a result, whenever a point-in-time SharePoint content database needs to be recovered, you need to catalog the SharePoint farm again.
- >
- >
+ > Because the SharePoint farm is protected for long-term retention in Azure, no catalog information (metadata) is available on the DPM server. So, whenever a point-in-time SharePoint content database needs to be recovered, you need to catalog the SharePoint farm again.
+ 3. Select **Re-catalog**.
- ![DPM SharePoint Protection10](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection12.png)
+   ![Screenshot showing how to select Re-catalog.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection12.png)
The **Cloud Recatalog** status window opens.
- ![DPM SharePoint Protection11](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection13.png)
+ ![Screenshot showing the Cloud Recatalog status window.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection13.png)
+
+ Once the cataloging is finished and the status changes to *Success*, select **Close**.
- After cataloging is finished, the status changes to *Success*. Select **Close**.
+ ![Screenshot showing the cataloging is complete with Success state.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection14.png)
- ![DPM SharePoint Protection12](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection14.png)
-4. Select the SharePoint object shown in the DPM **Recovery** tab to get the content database structure. Right-click the item, and then select **Recover**.
+4. On the DPM **Recovery** tab, select the *SharePoint object* to get the content database structure, right-click the item, and then select **Recover**.
- ![DPM SharePoint Protection13](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection15.png)
-5. At this point, follow the recovery steps earlier in this article to recover a SharePoint content database from disk.
+ ![Screenshot showing how to recover a SharePoint database from Azure.](./media/backup-azure-backup-sharepoint/dpm-sharepoint-protection15.png)
+5. To recover a SharePoint content database from disk, see [this section](#restore-sharepoint-data).
-## Switching the Front-End Web Server
+## Switch the Front-End Web Server
-If you have more than one front-end web server, and want to switch the server that DPM uses to protect the farm, follow the instructions in [Switching the Front-End Web Server](/system-center/dpm/back-up-sharepoint#switching-the-front-end-web-server).
+If you have more than one front-end web server and want to switch the server that DPM uses to protect the farm, see [Switching the Front-End Web Server](/system-center/dpm/back-up-sharepoint#switching-the-front-end-web-server).
## Next steps
baremetal-infrastructure About Nc2 On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/about-nc2-on-azure.md
Title: About Nutanix Cloud Clusters on Azure
description: Learn about Nutanix Cloud Clusters on Azure and the benefits it offers. Previously updated : 03/31/2021+ Last updated : 10/13/2022 # About Nutanix Cloud Clusters on Azure
baremetal-infrastructure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/architecture.md
Title: Architecture of BareMetal Infrastructure for NC2 description: Learn about the architecture of several configurations of BareMetal Infrastructure for NC2. - Previously updated : 04/14/2021++ Last updated : 10/13/2022 # Architecture of BareMetal Infrastructure for Nutanix
baremetal-infrastructure Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/faq.md
Title: FAQ description: Questions frequently asked about NC2 on Azure - Previously updated : 07/01/2022-++ Last updated : 10/13/2022 # Frequently asked questions about NC2 on Azure
baremetal-infrastructure Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/get-started.md
Title: Getting started
description: Learn how to sign up, set up, and use Nutanix Cloud Clusters on Azure. Previously updated : 07/01/2021+ Last updated : 10/13/2022 # Getting started with Nutanix Cloud Clusters on Azure
baremetal-infrastructure Nc2 Baremetal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/nc2-baremetal-overview.md
Title: What is BareMetal Infrastructure for Nutanix Cloud Clusters on Azure?
description: Learn about the features BareMetal Infrastructure offers for NC2 workloads. Previously updated : 07/01/2022+ Last updated : 10/13/2022 # What is BareMetal Infrastructure for Nutanix Cloud Clusters on Azure?
baremetal-infrastructure Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/requirements.md
Title: Requirements
description: Learn what you need to run NC2 on Azure, including Azure, Nutanix, networking, and other requirements. Previously updated : 03/31/2021+ Last updated : 10/13/2022 # Requirements
baremetal-infrastructure Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/skus.md
Title: SKUs
description: Learn about SKU options for NC2 on Azure, including core, RAM, storage, and network. Previously updated : 07/01/2021+ Last updated : 10/13/2022 # SKUs
baremetal-infrastructure Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/solution-design.md
Title: Solution design description: Learn about topologies and constraints for NC2 on Azure. - Previously updated : 07/01/2022++ Last updated : 10/13/2022 # Solution design
baremetal-infrastructure Supported Instances And Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/supported-instances-and-regions.md
Title: Supported instances and regions description: Learn about instances and regions supported for NC2 on Azure. -- Previously updated : 03/31/2021++ Last updated : 10/13/2022 # Supported instances and regions
NC2 on Azure supports the following region using AN36P:
* North Central US * East US 2 - ## Next steps Learn more:
baremetal-infrastructure Use Cases And Supported Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/use-cases-and-supported-scenarios.md
Title: Use cases and supported scenarios description: Learn about use cases and supported scenarios for NC2 on Azure, including cluster management, disaster recovery, on-demand elasticity, and lift-and-shift. - Previously updated : 07/01/2022+ Last updated : 10/13/2022 # Use cases and supported scenarios
cloud-services Cloud Services Configure Ssl Certificate Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-configure-ssl-certificate-portal.md
Next, you must include information about the certificate in your service definit
## Step 2: Modify the service definition and configuration files Your application must be configured to use the certificate, and an HTTPS endpoint must be added. As a result, the service definition and service configuration files need to be updated.
-1. In your development environment, open the service definition file
- (CSDEF), add a **Certificates** section within the **WebRole**
- section, and include the following information about the
- certificate (and intermediate certificates):
+1. In your development environment, open the service definition file (CSDEF), add a **Certificates** section within the **WebRole** section, and include the following information about the certificate (and intermediate certificates):
- ```xml
+ ```xml
<WebRole name="CertificateTesting" vmsize="Small"> ... <Certificates>
Your application must be configured to use the certificate, and an HTTPS endpoin
2. In your service definition file, add an **InputEndpoint** element within the **Endpoints** section to enable HTTPS:
- ```xml
+ ```xml
<WebRole name="CertificateTesting" vmsize="Small"> ... <Endpoints>
Your application must be configured to use the certificate, and an HTTPS endpoin
the **Sites** section. This element adds an HTTPS binding to map the endpoint to your site:
- ```xml
+ ```xml
<WebRole name="CertificateTesting" vmsize="Small"> ... <Sites>
Your application must be configured to use the certificate, and an HTTPS endpoin
value with that of your certificate. The following code sample provides details of the **Certificates** section, except for the thumbprint value.
- ```xml
+ ```xml
<Role name="Deployment"> ... <Certificates>
connect to it using HTTPS.
* Learn how to [deploy a cloud service](cloud-services-how-to-create-deploy-portal.md). * Configure a [custom domain name](cloud-services-custom-domain-name-portal.md). * [Manage your cloud service](cloud-services-how-to-manage-portal.md).---
cloud-services Cloud Services Nodejs Chat App Socketio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-chat-app-socketio.md
The following steps create the cloud service project that will host the Socket.I
1. From the **Start Menu** or **Start Screen**, search for **Windows PowerShell**. Finally, right-click **Windows PowerShell** and select **Run As Administrator**. ![Azure PowerShell icon][powershell-menu]+ 2. Create a directory called **c:\\node**. ```powershell
The following steps create the cloud service project that will host the Socket.I
![The output of the new-azureservice and add-azurenodeworkerrolecmdlets](./media/cloud-services-nodejs-chat-app-socketio/socketio-1.png) ## Download the Chat Example+ For this project, we will use the chat example from the [Socket.IO GitHub repository]. Perform the following steps to download the example and add it to the project you previously created.
and add it to the project you previously created.
1. Create a local copy of the repository by using the **Clone** button. You may also use the **ZIP** button to download the project. ![A browser window viewing https://github.com/LearnBoost/socket.io/tree/master/examples/chat, with the ZIP download icon highlighted](./media/cloud-services-nodejs-chat-app-socketio/socketio-22.png)+ 2. Navigate the directory structure of the local repository until you arrive at the **examples\\chat** directory. Copy the contents of this directory to the **C:\\node\\chatapp\\WorkerRole1** directory created earlier.
and add it to the project you previously created.
![Explorer, displaying the contents of the examples\\chat directory extracted from the archive][chat-contents] The highlighted items in the screenshot above are the files copied from the **examples\\chat** directory+ 3. In the **C:\\node\\chatapp\\WorkerRole1** directory, delete the **server.js** file, and then rename the **app.js** file to **server.js**. This removes the default **server.js** file created previously by the **Add-AzureNodeWorkerRole** cmdlet and replaces it with the application file from the chat example. ### Modify Server.js and Install Modules
make some minor modifications. Perform the following steps to the
server.js file: 1. Open the **server.js** file in Visual Studio or any text editor.+ 2. Find the **Module dependencies** section at the beginning of server.js and change the line containing **sio = require('..//..//lib//socket.io')** to **sio = require('socket.io')** as shown below: ```js
Azure emulator:
following: ![The output of the npm install command][The-output-of-the-npm-install-command]+ 2. Since this example was originally a part of the Socket.IO GitHub repository, and directly referenced the Socket.IO library by relative path, Socket.IO was not referenced in the package.json
Azure emulator:
``` ### Test and Deploy+ 1. Launch the emulator by issuing the following command: ```powershell
Azure emulator:
> Reinstall AzureAuthoringTools v 2.7.1 and AzureComputeEmulator v 2.7 - make sure that version matches. 2. Open a browser and navigate to `http://127.0.0.1`.+ 3. When the browser window opens, enter a nickname and then hit enter. This will allow you to post messages as a specific nickname. To test multi-user functionality, open additional browser windows using the same URL and enter different nicknames. ![Two browser windows displaying chat messages from User1 and User2](./media/cloud-services-nodejs-chat-app-socketio/socketio-8.png)+ 4. After testing the application, stop the emulator by issuing the following command:
Azure emulator:
PS C:\node\chatapp\WorkerRole1> Stop-AzureEmulator ```
-5. To deploy the application to Azure, use the
- **Publish-AzureServiceProject** cmdlet. For example:
+5. To deploy the application to Azure, use the **Publish-AzureServiceProject** cmdlet. For example:
```powershell PS C:\node\chatapp\WorkerRole1> Publish-AzureServiceProject -ServiceName mychatapp -Location "East US" -Launch
Azure emulator:
> Be sure to use a unique name, otherwise the publish process will fail. After the deployment has completed, the browser will open and navigate to the deployed service. > > If you receive an error stating that the provided subscription name doesn't exist in the imported publish profile, you must download and import the publishing profile for your subscription before deploying to Azure. See the **Deploying the Application to Azure** section of [Build and deploy a Node.js application to an Azure Cloud Service](./cloud-services-nodejs-develop-deploy-app.md)
- >
- >
![A browser window displaying the service hosted on Azure][completed-app] > [!NOTE] > If you receive an error stating that the provided subscription name doesn't exist in the imported publish profile, you must download and import the publishing profile for your subscription before deploying to Azure. See the **Deploying the Application to Azure** section of [Build and deploy a Node.js application to an Azure Cloud Service](./cloud-services-nodejs-develop-deploy-app.md)
- >
- >
Your application is now running on Azure, and can relay chat messages between different clients using Socket.IO. > [!NOTE] > For simplicity, this sample is limited to chatting between users connected to the same instance. This means that if the cloud service creates two worker role instances, users will only be able to chat with others connected to the same worker role instance. To scale the application to work with multiple role instances, you could use a technology like Service Bus to share the Socket.IO store state across instances. For examples, see the Service Bus Queues and Topics usage samples in the [Azure SDK for Node.js GitHub repository](https://github.com/WindowsAzure/azure-sdk-for-node).
->
->
## Next steps+ In this tutorial you learned how to create a basic chat application hosted in an Azure Cloud Service. To learn how to host this application in an Azure Website, see [Build a Node.js Chat Application with Socket.IO on an Azure Web Site][chatwebsite]. For more information, see also the [Node.js Developer Center](/azure/developer/javascript/).
For more information, see also the [Node.js Developer Center](/azure/developer/j
[chat example]: https://github.com/LearnBoost/socket.io/tree/master/examples/chat [chat-example-view]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-22.png - [chat-contents]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-5.png [The-output-of-the-npm-install-command]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-7.png
-[The output of the Publish-AzureService command]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-9.png
+[The output of the Publish-AzureService command]: ./media/cloud-services-nodejs-chat-app-socketio/socketio-9.png
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Previously updated : 08/24/2022 Last updated : 10/24/2022
The following table lists accepted data types, when each data type should be use
| Data type | Used for testing | Recommended quantity | Used for training | Recommended quantity | |--|--|-|-|-|
-| [Audio only](#audio-data-for-testing) | Yes (visual inspection) | 5+ audio files | No | Not applicable |
+| [Audio only](#audio-data-for-training-or-testing) | Yes (visual inspection) | 5+ audio files | Yes (Preview for `en-US`) | 1-20 hours of audio |
| [Audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing) | Yes (evaluation of accuracy) | 0.5-5 hours of audio | Yes | 1-20 hours of audio | | [Plain text](#plain-text-data-for-training) | No | Not applicable | Yes | 1-200 MB of related text | | [Structured text](#structured-text-data-for-training) (public preview) | No | Not applicable | Yes | Up to 10 classes with up to 4,000 items and up to 50,000 training sentences |
Refer to the following table to ensure that your pronunciation dataset files are
| Number of pronunciations per line | 1 | | Maximum file size | 1 MB (1 KB for free tier) |
-## Audio data for testing
+### Audio data for training or testing
Audio data is optimal for testing the accuracy of Microsoft's baseline speech-to-text model or a custom model. Keep in mind that audio data is used to inspect the accuracy of speech with regard to a specific model's performance. If you want to quantify the accuracy of a model, use [audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing).
+> [!NOTE]
+> Audio only data for training is available in preview for the `en-US` locale. For other locales, to train with audio data you must also provide [human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing).
+ Custom Speech projects require audio files with these properties: | Property | Value |
Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a>
| Check the audio file format. | `sox --i <filename>` | | Convert the audio file to single channel, 16-bit, 16 KHz. | `sox <input> -b 16 -e signed-integer -c 1 -r 16k -t wav <output>.wav` |
-### Audio data for training
-
-Not all base models support [training with audio data](language-support.md?tabs=stt-tts). For a list of base models that support training with audio data, see [Language support](language-support.md?tabs=stt-tts).
-
-Even if a base model supports training with audio data, the service might use only part of the audio. In [regions](regions.md#speech-service) with dedicated hardware available for training audio data, the Speech service will use up to 20 hours of your audio training data. In other regions, the Speech service uses up to 8 hours of your audio data.
- ## Next steps - [Upload your data](how-to-custom-speech-upload-data.md)
communication-services Program Brief Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/program-brief-guidelines.md
# Short Code Program Brief Filling Guidelines-- [!INCLUDE [Short code eligibility notice](../../includes/public-preview-include-short-code-eligibility.md)] Azure Communication Services allows you to apply for a short code for SMS programs. In this document, we'll review the guidelines on how to fill out a program brief for short code registration. A program brief application consists of 4 sections:
communication-services Apply For Short Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/apply-for-short-code.md
# Quickstart: Apply for a short code-- [!INCLUDE [Short code eligibility notice](../../includes/public-preview-include-short-code-eligibility.md)] ## Prerequisites
confidential-computing Confidential Nodes Aks Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-nodes-aks-addon.md
RUN apt-get update && apt-get install -y \
libsgx-quote-ex \ az-dcap-client \ open-enclave
-WORKDIR /opt/openenclave/share/openenclave/samples/remote_attestation
+WORKDIR /opt/openenclave/share/openenclave/samples/attestation
RUN . /opt/openenclave/share/openenclave/openenclaverc \ && make build # this sets the flag for out of proc attestation mode, alternatively you can set this flag on the deployment files
container-apps Get Started Existing Container Image Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image-portal.md
If you're not going to continue to use this application, you can delete the Azur
## Next steps > [!div class="nextstepaction"]
-> [Environments in Azure Container Apps](environment.md)
+> [Communication between microservices](communicate-between-microservices.md)
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md
Remove-AzResourceGroup -Name $ResourceGroupName -Force
## Next steps > [!div class="nextstepaction"]
-> [Environments in Azure Container Apps](environment.md)
+> [Communication between microservices](communicate-between-microservices.md)
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
Remove-AzResourceGroup -Name $ResourceGroupName -Force
## Next steps > [!div class="nextstepaction"]
-> [Environments in Azure Container Apps](environment.md)
+> [Communication between microservices](communicate-between-microservices.md)
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
ms.devlang: azurecli
You learn how to: > [!div class="checklist"]
-> - Create a Container Apps environment to host your container apps
-> - Create an Azure Blob Storage account
-> - Create a Dapr state store component for the Azure Blob storage
-> - Deploy two container apps: one that produces messages, and one that consumes messages and persists them in the state store
-> - Verify the solution is up and running
+> * Create a Container Apps environment for your container apps
+> * Create an Azure Blob Storage state store for the container app
+> * Deploy two apps that produce and consume messages and persist them in the state store
+> * Verify the interaction between the two microservices.
-In this tutorial, you deploy the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world) quickstart.
+With Azure Container Apps, you get a [fully managed version of the Dapr APIs](./dapr-overview.md) when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
+
+In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) quickstart.
The application consists of:
There are multiple ways to authenticate to external resources via Dapr. This exa
# [Bash](#tab/bash)
-Create a config file named **statestore.yaml** with the properties that you sourced from the previous steps. This file helps enable your Dapr app to access your state store. Since the application is authenticating directly via Managed Identity, there's no need to include the storage account key directly within the component. The following example shows how your **statestore.yaml** file should look when configured for your Azure Blob Storage account:
+Open a text editor and create a config file named *statestore.yaml* with the properties that you sourced from the previous steps. This file helps enable your Dapr app to access your state store. The following example shows how your *statestore.yaml* file should look when configured for your Azure Blob Storage account:
```yaml # statestore.yaml for Azure Blob storage component
container-apps Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-portal.md
An Azure account with an active subscription is required. If you don't already h
## Setup
-> [!NOTE]
-> An Azure Container Apps environment can be deployed as a zone redundant resource in regions where support is available. This is a deployment-time only configuration option.
- <!-- Create --> [!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)]
If you're not going to continue to use this application, you can delete the Azur
## Next steps > [!div class="nextstepaction"]
-> [Environments in Azure Container Apps](environment.md)
+> [Communication between microservices](communicate-between-microservices.md)
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
Previously updated : 08/10/2022 Last updated : 10/25/2022
The following quotas are on a per subscription basis for Azure Container Apps.
-| Feature | Quantity | Scope | Limit can be extended | Remarks |
+To request an increase in quota amounts for your container app, [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
+
+| Feature | Scope | Default | Is Configurable<sup>1</sup> | Remarks |
|--|--|--|--|--|
-| Environments | 5 | For a subscription per region | Yes | |
-| Container Apps | 20 | Environment | Yes |
-| Revisions | 100 | Container app | No |
-| Replicas | 30 | Revision | No |
-| Cores | 2 | Replica | No | Maximum number of cores that can be requested by a revision replica. |
-| Memory | 4 GiB | Replica | No | Maximum amount of memory that can be requested by a revision replica. |
-| Cores | 20 | Environment | Yes| Calculated by the total cores an environment can accommodate. For instance, the sum of cores requested by each active replica of all revisions in an environment. |
+| Environments | Region | 5 | Yes | |
+| Container Apps | Environment | 20 | Yes | |
+| Revisions | Container app | 100 | No | |
+| Replicas | Revision | 30 | Yes | |
+| Cores | Replica | 2 | No | Maximum number of cores that can be requested by a revision replica. |
+| Cores | Environment | 20 | Yes | Maximum number of cores an environment can accommodate. Calculated by the sum of cores requested by each active replica of all revisions in an environment. |
+
+<sup>1</sup> The **Is Configurable** column denotes that a feature maximum may be increased through a [support request](https://azure.microsoft.com/support/create-ticket/).
## Considerations
-* Pay-as-you-go and trial subscriptions are limited to 1 environment per region per subscription.
* If an environment runs out of allowed cores: * Provisioning times out with a failure * The app silently refuses to scale out-
-To request an increase in quota amounts for your container app, [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
cosmos-db Burst Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/burst-capacity.md
Previously updated : 05/09/2022 Last updated : 10/26/2022 # Burst capacity in Azure Cosmos DB (preview)+ [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] Azure Cosmos DB burst capacity (preview) allows you to take advantage of your database or container's idle throughput capacity to handle spikes of traffic. With burst capacity, each physical partition can accumulate up to 5 minutes of idle capacity, which can be consumed at a rate up to 3000 RU/s. With burst capacity, requests that would have otherwise been rate limited can now be served with burst capacity while it's available.
Burst capacity applies only to Azure Cosmos DB accounts using provisioned throug
Let's take an example of a physical partition that has 100 RU/s of provisioned throughput and is idle for 5 minutes. With burst capacity, it can accumulate a maximum of 100 RU/s * 300 seconds = 30,000 RU of burst capacity. The capacity can be consumed at a maximum rate of 3000 RU/s, so if there's a sudden spike in request volume, the partition can burst up to 3000 RU/s for up to 30,000 RU / 3000 RU/s = 10 seconds. Without burst capacity, any requests that are consumed beyond the provisioned 100 RU/s would have been rate limited (429).
-After the 10 seconds is over, the burst capacity has been used up. If the workload continues to exceed the provisioned 100 RU/s, any requests that are consumed beyond the provisioned 100 RU/s would now be rate limited (429). The maximum amount of burst capacity a physical partition can accumulate at any point in time is equal to 300 seconds * the provisioned RU/s of the physical partition.
+After the 10 seconds is over, the burst capacity has been used up. If the workload continues to exceed the provisioned 100 RU/s, any requests that are consumed beyond the provisioned 100 RU/s would now be rate limited (429). The maximum amount of burst capacity a physical partition can accumulate at any point in time is equal to 300 seconds * the provisioned RU/s of the physical partition.
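To make the arithmetic above easy to reuse, here's a minimal Python sketch of the same calculation (illustrative only; this isn't an Azure SDK call, and the function name is made up for this example). The constants mirror the 5-minute accumulation window and 3,000 RU/s maximum burst rate described above.

```python
# Illustrative arithmetic only (not an Azure SDK call): how much burst capacity a
# physical partition can accumulate, and how long a traffic spike can be sustained.

MAX_ACCUMULATION_SECONDS = 300   # up to 5 minutes of idle capacity
MAX_BURST_RATE_RUS = 3000        # RU/s at which accumulated capacity can be consumed

def burst_window(provisioned_rus: float, idle_seconds: float) -> tuple[float, float]:
    """Return (accumulated burst RU, seconds the partition can sustain the max burst rate)."""
    accumulated = provisioned_rus * min(idle_seconds, MAX_ACCUMULATION_SECONDS)
    return accumulated, accumulated / MAX_BURST_RATE_RUS

# Example from the article: 100 RU/s provisioned, idle for 5 minutes.
accumulated, seconds = burst_window(provisioned_rus=100, idle_seconds=300)
print(accumulated, seconds)  # 30000.0 RU, 10.0 seconds at 3000 RU/s
```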
## Getting started
-To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
Before submitting your request:-- Ensure that you have at least 1 Azure Cosmos DB account in the subscription. This may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.-- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).+
+- Ensure that you have at least one Azure Cosmos DB account in the subscription. This account may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.
+- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#limitations-preview-eligibility-criteria).
The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
To check whether an Azure Cosmos DB account is eligible for the preview, you can
:::image type="content" source="media/burst-capacity/burst-capacity-eligibility-check.png" alt-text="Burst capacity eligibility check with table of all preview eligibility criteria":::
-## Limitations
+## Limitations (preview eligibility criteria)
-### Preview eligibility criteria
To enroll in the preview, your Azure Cosmos DB account must meet all the following criteria:
- - Your Azure Cosmos DB account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts.
- - If you're using API for NoSQL, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When burst capacity is enabled on your account, all requests sent from non .NET SDKs, or older .NET SDK versions won't be accepted.
- - There are no SDK or driver requirements to use the feature with API for Cassandra, Gremlin, or MongoDB.
- - Your Azure Cosmos DB account isn't using any unsupported connectors
- - Azure Data Factory
- - Azure Stream Analytics
- - Logic Apps
- - Azure Functions
- - Azure Search
- - Azure Cosmos DB Spark connector
- - Azure Cosmos DB data migration tool
- - Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
-
-### SDK requirements (API for NoSQL and Table only)
-#### API for NoSQL
-For API for NoSQL accounts, burst capacity is supported only in the latest version of the .NET v3 SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. There are no driver or SDK requirements to use burst capacity with API for Gremlin, Cassandra, or MongoDB.
-
-Find the latest version of the supported SDK:
-
-| SDK | Supported versions | Package manager link |
-| | | |
-| **.NET SDK v3** | *>= 3.27.0* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> |
-
-Support for other API for NoSQL SDKs is planned for the future.
-
-> [!TIP]
-> You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](sql/migrate-dotnet-v3.md).
-
-#### Table API
-For API for Table accounts, burst capacity is supported only when using the latest version of the Tables SDK. When the feature is enabled on your account, you must only use the supported SDK. Requests sent from other SDKs or earlier versions won't be accepted. The legacy SDK with namespace `Microsoft.Azure.CosmosDB.Table` isn't supported. Follow the [migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/tables/Azure.Data.Tables/MigrationGuide.md) to upgrade to the latest SDK.
-
-| SDK | Supported versions | Package manager link |
-| | | |
-| **Azure Tables client library for .NET** | *>= 12.0.0* | <https://www.nuget.org/packages/Azure.Data.Tables/> |
-| **Azure Tables client library for Java** | *>= 12.0.0* | <https://mvnrepository.com/artifact/com.azure/azure-data-tables> |
-| **Azure Tables client library for JavaScript** | *>= 12.0.0* | <https://www.npmjs.com/package/@azure/data-tables> |
-| **Azure Tables client library for Python** | *>= 12.0.0* | <https://pypi.org/project/azure-data-tables/> |
-
-### Unsupported connectors
-
-If you enroll in the preview, the following connectors will fail.
-
-* Azure Data Factory<sup>1</sup>
-* Azure Stream Analytics<sup>1</sup>
-* Logic Apps<sup>1</sup>
-* Azure Functions<sup>1</sup>
-* Azure Search<sup>1</sup>
-* Azure Cosmos DB Spark connector<sup>1</sup>
-* Azure Cosmos DB data migration tool
-* Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
-
-<sup>1</sup>Support for these connectors is planned for the future.
+
+- Your Azure Cosmos DB account is using provisioned throughput (manual or autoscale). Burst capacity doesn't apply to serverless accounts.
+- Your Azure Cosmos DB account is using API for NoSQL, Cassandra, Gremlin, MongoDB, or Table.
## Next steps
-* See the FAQ on [burst capacity.](burst-capacity-faq.yml)
-* Learn more about [provisioned throughput.](set-throughput.md)
-* Learn more about [request units.](request-units.md)
-* Trying to decide between provisioned throughput and serverless? See [choose between provisioned throughput and serverless.](throughput-serverless.md)
-* Want to learn the best practices? See [best practices for scaling provisioned throughput.](scaling-provisioned-throughput-best-practices.md)
+- See the FAQ on [burst capacity.](burst-capacity-faq.yml)
+- Learn more about [provisioned throughput.](set-throughput.md)
+- Learn more about [request units.](request-units.md)
+- Trying to decide between provisioned throughput and serverless? See [choose between provisioned throughput and serverless.](throughput-serverless.md)
+- Want to learn the best practices? See [best practices for scaling provisioned throughput.](scaling-provisioned-throughput-best-practices.md)
cosmos-db Consistency Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/consistency-mapping.md
Title: Apache Cassandra and Azure Cosmos DB consistency levels
description: Apache Cassandra and Azure Cosmos DB consistency levels. + Previously updated : 03/24/2022- Last updated : 10/18/2022 # Apache Cassandra and Azure Cosmos DB for Apache Cassandra consistency levels
-Unlike Azure Cosmos DB, Apache Cassandra does not natively provide precisely defined consistency guarantees. Instead, Apache Cassandra provides a write consistency level and a read consistency level, to enable the high availability, consistency, and latency tradeoffs. When using Azure Cosmos DB's API for Cassandra:
-* The write consistency level of Apache Cassandra is mapped to the default consistency level configured on your Azure Cosmos DB account. Consistency for a write operation (CL) can't be changed on a per-request basis.
+Unlike Azure Cosmos DB, Apache Cassandra doesn't natively provide precisely defined consistency guarantees. Instead, Apache Cassandra provides a write consistency level and a read consistency level, to enable the high availability, consistency, and latency tradeoffs. When using Azure Cosmos DB for Cassandra:
-* Azure Cosmos DB will dynamically map the read consistency level specified by the Cassandra client driver to one of the Azure Cosmos DB consistency levels configured dynamically on a read request.
+- The write consistency level of Apache Cassandra is mapped to the default consistency level configured on your Azure Cosmos DB account. Consistency for a write operation (CL) can't be changed on a per-request basis.
+- Azure Cosmos DB will dynamically map the read consistency level specified by the Cassandra client driver. The consistency level will be mapped to one of the Azure Cosmos DB consistency levels configured dynamically on a read request.
## Multi-region writes vs single-region writes
-Apache Cassandra database is a multi-master system by default, and does not provide an out-of-box option for single-region writes with multi-region replication for reads. However, Azure Cosmos DB provides turnkey ability to have either single region, or [multi-region](../how-to-multi-master.md) write configurations. One of the advantages of being able to choose a single region write configuration across multiple regions is the avoidance of cross-region conflict scenarios, and the option of maintaining strong consistency across multiple regions.
+Apache Cassandra database is a multi-master system by default, and doesn't provide an out-of-box option for single-region writes with multi-region replication for reads. However, Azure Cosmos DB provides turnkey ability to have either single region, or [multi-region](../how-to-multi-master.md) write configurations. One of the advantages of being able to choose a single region write configuration across multiple regions is the avoidance of cross-region conflict scenarios, and the option of maintaining strong consistency across multiple regions.
-With single-region writes, you can maintain strong consistency, while still maintaining a level of high availability across regions with [service-managed failover](../high-availability.md#region-outages). In this configuration, you can still exploit data locality to reduce read latency by downgrading to eventual consistency on a per request basis. In addition to these capabilities, the Azure Cosmos DB platform also provides the ability to enable [zone redundancy](/azure/architecture/reliability/architect) when selecting a region. Thus, unlike native Apache Cassandra, Azure Cosmos DB allows you to navigate the CAP Theorem [trade-off spectrum](../consistency-levels.md#rto) with more granularity.
+With single-region writes, you can maintain strong consistency, while still maintaining a level of high availability across regions with [service-managed failover](../high-availability.md#region-outages). In this configuration, you can still exploit data locality to reduce read latency by downgrading to eventual consistency on a per request basis. In addition to these capabilities, the Azure Cosmos DB platform also offers the option of [zone redundancy](/azure/architecture/reliability/architect) when selecting a region. Thus, unlike native Apache Cassandra, Azure Cosmos DB allows you to navigate the CAP Theorem [trade-off spectrum](../consistency-levels.md#rto) with more granularity.
## Mapping consistency levels
-The Azure Cosmos DB platform provides a set of five well-defined, business use-case oriented consistency settings with respect to replication and the tradeoffs defined by the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem) and [PACLC theorem](https://en.wikipedia.org/wiki/PACELC_theorem). As this approach differs significantly from Apache Cassandra, we would recommend that you take time to review and understand [Azure Cosmos DB consistency](../consistency-levels.md), or watch this short [video guide to understanding consistency settings](https://aka.ms/docs.consistency-levels) in the Azure Cosmos DB platform.
+The Azure Cosmos DB platform provides a set of five well-defined, business use-case oriented consistency settings with respect to replication. The tradeoffs to these consistency settings are defined by the [CAP](https://en.wikipedia.org/wiki/CAP_theorem) and [PACELC](https://en.wikipedia.org/wiki/PACELC_theorem) theorems. As this approach differs significantly from Apache Cassandra, we would recommend that you take time to review and understand [Azure Cosmos DB consistency](../consistency-levels.md). Alternatively, you can review this short [video guide to understanding consistency settings](https://aka.ms/docs.consistency-levels) in the Azure Cosmos DB platform. The following table illustrates the possible mappings between Apache Cassandra and Azure Cosmos DB consistency levels when using API for Cassandra. This table shows configurations for single region, multi-region reads with single-region writes, and multi-region writes.
+
+### Mappings
+
+> [!NOTE]
+> These are not exact mappings. Rather, we have provided the closest analogues to Apache Cassandra, and disambiguated any qualitative differences in the rightmost column. As mentioned above, we recommend reviewing Azure Cosmos DB's [consistency settings](../consistency-levels.md).
+
+### `ALL`, `EACH_QUORUM`, `QUORUM`, `LOCAL_QUORUM`, or `THREE` write consistency in Apache Cassandra
+
+| Apache read consistency | Reading from | Closest Azure Cosmos DB consistency level to Apache Cassandra read/write settings |
+| | | |
+| `ALL` | Local region | `Strong` |
+| `EACH_QUORUM` | Local region | `Strong` |
+| `QUORUM` | Local region | `Strong` |
+| `LOCAL_QUORUM` | Local region | `Strong` |
+| `LOCAL_ONE` | Local region | `Eventual` |
+| `ONE` | Local region | `Eventual` |
+| `TWO` | Local region | `Strong` |
+| `THREE` | Local region | `Strong` |
+
+Unlike Apache and DSE Cassandra, Azure Cosmos DB durably commits a quorum write by default. At least three out of four (3/4) nodes commit the write to disk, not just to an in-memory commit log.
+
+### `ONE`, `LOCAL_ONE`, or `ANY` write consistency in Apache Cassandra
+
+| Apache read consistency | Reading from | Closest Azure Cosmos DB consistency level to Apache Cassandra read/write settings |
+| | | |
+| `ALL` | Local region | `Strong` |
+| `EACH_QUORUM` | Local region | `Eventual` |
+| `QUORUM` | Local region | `Eventual` |
+| `LOCAL_QUORUM` | Local region | `Eventual` |
+| `LOCAL_ONE` | Local region | `Eventual` |
+| `ONE` | Local region | `Eventual` |
+| `TWO` | Local region | `Eventual` |
+| `THREE` | Local region | `Eventual` |
+
+Azure Cosmos DB API for Cassandra always durably commits a quorum write by default, so all read consistency levels can be used.
+
+### `TWO` write consistency in Apache Cassandra
+
+| Apache read consistency | Reading from | Closest Azure Cosmos DB consistency level to Apache Cassandra read/write settings |
+| | | |
+| `ALL` | Local region | `Strong` |
+| `EACH_QUORUM` | Local region | `Strong` |
+| `QUORUM` | Local region | `Strong` |
+| `LOCAL_QUORUM` | Local region | `Strong` |
+| `LOCAL_ONE` | Local region | `Eventual` |
+| `ONE` | Local region | `Eventual` |
+| `TWO` | Local region | `Eventual` |
+| `THREE` | Local region | `Strong` |
+
+Azure Cosmos DB has no notion of a write consistency of exactly two nodes, so this setting is treated as similar to quorum in most cases. A read consistency of `TWO` is equivalent to writing with `QUORUM` and reading from `ONE`.
+
+### `Serial` or `Local_Serial` write consistency in Apache Cassandra
+
+| Apache read consistency | Reading from | Closest Azure Cosmos DB consistency level to Apache Cassandra read/write settings |
+| | | |
+| `ALL` | Local region | `Strong` |
+| `EACH_QUORUM` | Local region | `Strong` |
+| `QUORUM` | Local region | `Strong` |
+| `LOCAL_QUORUM` | Local region | `Strong` |
+| `LOCAL_ONE` | Local region | `Eventual` |
+| `ONE` | Local region | `Eventual` |
+| `TWO` | Local region | `Strong` |
+| `THREE` | Local region | `Strong` |
+
+Serial only applies to lightweight transactions. Azure Cosmos DB follows a [durably committed algorithm](https://www.microsoft.com/research/publication/revisiting-paxos-algorithm/) by default, and hence `Serial` consistency is similar to quorum.
+
+### Other regions for single-region write
+
+Azure Cosmos DB offers all five consistency settings, including strong, across multiple regions when single-region write is configured, as long as the regions are within 2,000 miles of each other.
+
+There's no applicable mapping to Apache Cassandra here, because in Apache Cassandra all nodes/regions accept writes and a strong consistency guarantee isn't possible across all regions.
+
+### Other regions for multi-region write
+
+Azure Cosmos DB offers only four consistency settings (`eventual`, `consistent prefix`, `session`, and `bounded staleness`) across multiple regions where multi-region write is configured.
+
+Apache Cassandra would only provide eventual consistency for reads across other regions regardless of settings.
+
+### Dynamic overrides supported
+
+| Azure Cosmos DB account setting | Override value in client request | Override effect |
+| | | |
+| `Strong` | `All` | No effect (remain as `strong`) |
+| `Strong` | `Quorum` | No effect (remain as `strong`) |
+| `Strong` | `LocalQuorum` | No effect (remain as `strong`) |
+| `Strong` | `Two` | No effect (remain as `strong`) |
+| `Strong` | `Three` | No effect (remain as `strong`) |
+| `Strong` | `Serial` | No effect (remain as `strong`) |
+| `Strong` | `LocalSerial` | No effect (remain as `strong`) |
+| `Strong` | `One` | Consistency changes to `Eventual` |
+| `Strong` | `LocalOne` | Consistency changes to `Eventual` |
+| `Strong` | `Any` | Not allowed (error) |
+| `Strong` | `EachQuorum` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `All` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `Quorum` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `LocalQuorum` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `Two` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `Three` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `Serial` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `LocalSerial` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `One` | Consistency changes to `Eventual` |
+| `Bounded staleness`, `session`, or `consistent prefix` | `LocalOne` | Consistency changes to `Eventual` |
+| `Bounded staleness`, `session`, or `consistent prefix` | `Any` | Not allowed (error) |
+| `Bounded staleness`, `session`, or `consistent prefix` | `EachQuorum` | Not allowed (error) |
+
+### Metrics
+
+If your Azure Cosmos DB account is configured with a consistency level other than strong consistency, review the *Probabilistically Bounded Staleness* (PBS) metric. The metric captures the probability that your clients may get strong and consistent reads for your workloads. This metric is exposed in the Azure portal. To find more information about the PBS metric, see [Monitor Probabilistically Bounded Staleness (PBS) metric](../how-to-manage-consistency.md#monitor-probabilistically-bounded-staleness-pbs-metric).
+
+Probabilistically bounded staleness shows how eventual your eventual consistency is. This metric provides insight into how often you can get a stronger consistency than the consistency level that you've currently configured on your Azure Cosmos DB account. In other words, you can see the probability (measured in milliseconds) of getting consistent reads for a combination of write and read regions.
+
+## Global strong consistency for write requests in Apache Cassandra
+
+In Apache Cassandra, the setting of `EACH_QUORUM` or `QUORUM` gives strong consistency. When a write request is sent to a region, `EACH_QUORUM` persists the data in a quorum number of nodes in each data center. This persistence requires every data center to be available for the write operation to succeed. `QUORUM` is slightly less restrictive, with a `QUORUM` number of nodes across all the data centers needed to persist the data prior to acknowledging the write to be successful.
-The following table illustrates the possible mappings between Apache Cassandra and Azure Cosmos DB consistency levels when using API for Cassandra. This shows configurations for single region, multi-region reads with single-region writes, and multi-region writes.
+The following graphic illustrates a global strong consistency setting in Apache Cassandra between two regions, region 1 and region 2. After data is written to region 1, the write needs to be persisted in a quorum number of nodes in both region 1 and region 2 before an acknowledgment is received by the application.
++
+## Global strong consistency for write requests in Azure Cosmos DB for Apache Cassandra
+
+In Azure Cosmos DB, consistency is set at the account level. With `Strong` consistency in Azure Cosmos DB for Cassandra, data is replicated synchronously to the read regions for the account. The further apart the regions for the Azure Cosmos DB account are, the higher the latency of consistent write operations.
++
+How the number of regions affects your read or write request (the short sketch after this list illustrates the quorum arithmetic):
+
+- Two regions: With strong consistency, quorum `(N/2 + 1) = 2`. So, if the read region goes down, the account can no longer accept writes with strong consistency since a quorum number of regions isn't available for the write to be replicated to.
+- Three or more regions: for `N = 3`, `quorum = 2`. If one of the read regions is down, the write region can still replicate the writes to a total of two regions that meet the quorum requirement. Similarly, with four regions, `quorum = 4/2 + 1 = 3`. Even with one read region being down, quorum can be met.
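The following minimal Python sketch (plain arithmetic, not an Azure SDK call) applies the `N/2 + 1` quorum formula from the list above and checks whether strongly consistent writes can continue with one read region down.

```python
# Illustrative arithmetic only: region-level quorum for strong consistency
# with single-region writes, as described in the list above.

def region_quorum(total_regions: int) -> int:
    """Quorum of regions required to acknowledge a strongly consistent write."""
    return total_regions // 2 + 1

for n in (2, 3, 4):
    quorum = region_quorum(n)
    # With one read region down, n - 1 regions remain available for replication.
    strong_writes_possible = (n - 1) >= quorum
    print(f"{n} regions: quorum={quorum}, strong writes with one region down: {strong_writes_possible}")
```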
> [!NOTE]
-> These are not exact mappings. Rather, we have provided the closest analogues to Apache Cassandra, and disambiguated any qualitative differences in the rightmost column. As mentioned above, we recommend reviewing Azure Cosmos DB's [consistency settings](../consistency-levels.md).
+> If global strong consistency is required for all write operations, the consistency for the Azure Cosmos DB for Cassandra account must be set to Strong. The consistency level for write operations cannot be overridden to a lower consistency level on a per request basis in Azure Cosmos DB.
+
+## Weaker consistency for write requests in Apache Cassandra
+
+What about a write consistency level of `ANY`, `ONE`, `TWO`, `THREE`, `LOCAL_QUORUM`, `Serial`, or `Local_Serial`? Consider a write request with `LOCAL_QUORUM` and a replication factor (`RF`) of `4` in a six-node datacenter: `Quorum = 4/2 + 1 = 3`.
++
+## Weaker consistency for write requests in Azure Cosmos DB for Apache Cassandra
+
+When a write request is sent with any of the consistency levels lower than `Strong`, a success response is returned as soon as the local region persists the write in at least three out of four replicas.
++
+## Global strong consistency for read requests in Apache Cassandra
+
+With a consistency of `EACH_QUORUM`, a consistent read can be achieved in Apache Cassandra. In a multi-region setup with `EACH_QUORUM`, if the quorum number of nodes isn't met in each region, the read is unsuccessful.
++
+## Global strong consistency for read requests in Azure Cosmos DB for Apache Cassandra
+
+The read request is served from two replicas in the specified region. Since the write already took care of persisting to a quorum number of regions (and all regions if every region was available), reading from two replicas in the specified region provides strong consistency. This requires `EACH_QUORUM` to be specified in the driver when issuing the read against a region of the Azure Cosmos DB account, along with Strong consistency as the default consistency level for the account.
++
+## Local strong consistency in Apache Cassandra
+
+A read request with a consistency level of `TWO`, `THREE`, or `LOCAL_QUORUM` gives strong consistency when reading from the local region. With a consistency level of `LOCAL_QUORUM`, you need a response from two nodes in the specified datacenter for a successful read.
++
+## Local strong consistency in Azure Cosmos DB for Apache Cassandra
+
+In Azure Cosmos DB for Cassandra, a consistency level of `TWO`, `THREE`, or `LOCAL_QUORUM` gives local strong consistency for a read request. Since the write path guarantees replicating to a minimum of three out of four replicas, a read from two replicas in the specified region guarantees a quorum read of the data in that region.
++
+## Eventual consistency in Apache Cassandra
+
+A consistency level of `LOCAL_ONE`, `ONE`, or `ANY` with `LOCAL_ONE` results in eventual consistency. This consistency is used in cases where the focus is on latency.
++
+## Eventual consistency in Azure Cosmos DB for Apache Cassandra
+
+A consistency level of `LOCAL_ONE`, `ONE`, or `ANY` gives you eventual consistency. With eventual consistency, a read is served from just one of the replicas in the specified region.
+
+## Override consistency level for read operations in Azure Cosmos DB for Cassandra
Previously, the consistency level for read requests could only be overridden to a lower consistency than the default set on the account. For instance, with the default consistency of Strong, read requests could be issued with Strong by default and overridden on a per request basis (if needed) to a consistency level weaker than Strong. However, read requests couldn't be issued with an overridden consistency level higher than the account's default. An account with Eventual consistency couldn't receive read requests with a consistency level higher than Eventual (which in the Apache Cassandra drivers translates to `TWO`, `THREE`, `LOCAL_QUORUM` or `QUORUM`).
-If your Azure Cosmos DB account is configured with a consistency level other than the strong consistency, you can find out the probability that your clients may get strong and consistent reads for your workloads by looking at the *Probabilistically Bounded Staleness* (PBS) metric. This metric is exposed in the Azure portal, to learn more, see [Monitor Probabilistically Bounded Staleness (PBS) metric](../how-to-manage-consistency.md#monitor-probabilistically-bounded-staleness-pbs-metric).
+Azure Cosmos DB for Cassandra now facilitates overriding the consistency on read requests to a value higher than the account's default consistency. For instance, with the default consistency on the Cosmos DB account set to Eventual (Apache Cassandra equivalent of `ONE` or `ANY`), read requests can be overridden on a per request basis to `LOCAL_QUORUM`. This override ensures that a quorum number of replicas within the specified region are consulted prior to returning the result set, as required by `LOCAL_QUORUM`.
-Probabilistic bounded staleness shows how eventual is your eventual consistency. This metric provides an insight into how often you can get a stronger consistency than the consistency level that you have currently configured on your Azure Cosmos DB account. In other words, you can see the probability (measured in milliseconds) of getting strongly consistent reads for a combination of write and read regions.
+This option also removes the need to set a default consistency higher than `Eventual` when stronger consistency is only needed for read requests.
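As an illustration of the per-request override described above, the following sketch uses the DataStax Python driver (`cassandra-driver`); the article doesn't prescribe a specific driver, and the host, credentials, keyspace, and table names below are placeholders for your own Azure Cosmos DB for Cassandra account.

```python
# A minimal sketch (assumptions: DataStax Python driver, placeholder account values)
# showing a per-request read consistency override to LOCAL_QUORUM.
import ssl

from cassandra import ConsistencyLevel
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

ssl_context = ssl.create_default_context()
auth = PlainTextAuthProvider(username="<account-name>", password="<account-key>")
# Azure Cosmos DB for Cassandra listens on port 10350 and requires TLS.
cluster = Cluster(["<account-name>.cassandra.cosmos.azure.com"], port=10350,
                  auth_provider=auth, ssl_context=ssl_context)
session = cluster.connect("<keyspace>")

# The account default can stay at Eventual; this read alone is upgraded to
# LOCAL_QUORUM, so a quorum of replicas in the region is consulted.
statement = SimpleStatement(
    "SELECT * FROM <keyspace>.<table> WHERE id = %s",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
row = session.execute(statement, ["some-id"]).one()
print(row)
```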
## Next steps Learn more about global distribution and consistency levels for Azure Cosmos DB:
-* [Global distribution overview](../distribute-data-globally.md)
-* [Consistency Level overview](../consistency-levels.md)
-* [High availability](../high-availability.md)
+- [Global distribution overview](../distribute-data-globally.md)
+- [Consistency Level overview](../consistency-levels.md)
+- [High availability](../high-availability.md)
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
Azure Cosmos DB is a fully managed NoSQL database for modern app development. Az
## APIs in Azure Cosmos DB
-Azure Cosmos DB offers multiple database APIs, which include NoSQL, MongoDB, Cassandra, Gremlin, and Table. By using these APIs, you can model real world data using documents, key-value, graph, and column-family data models. These APIs allow your applications to treat Azure Cosmos DB as if it were various other databases technologies, without the overhead of management, and scaling approaches. Azure Cosmos DB helps you to use the ecosystems, tools, and skills you already have for data modeling and querying with its various APIs.
+Azure Cosmos DB offers multiple database APIs, which include NoSQL, MongoDB, PostgreSQL, Cassandra, Gremlin, and Table. By using these APIs, you can model real world data using documents, key-value, graph, and column-family data models. These APIs allow your applications to treat Azure Cosmos DB as if it were various other database technologies, without the overhead of management and scaling approaches. Azure Cosmos DB helps you to use the ecosystems, tools, and skills you already have for data modeling and querying with its various APIs.
All the APIs offer automatic scaling of storage and throughput, flexibility, and performance guarantees. There's no one best API, and you may choose any one of the APIs to build your application. This article will help you choose an API based on your workload and team requirements.
All the APIs offer automatic scaling of storage and throughput, flexibility, and
API for NoSQL is native to Azure Cosmos DB.
-API for MongoDB, Cassandra, Gremlin, and Table implement the wire protocol of open-source database engines. These APIs are best suited if the following conditions are true:
+API for MongoDB, PostgreSQL, Cassandra, Gremlin, and Table implement the wire protocol of open-source database engines. These APIs are best suited if the following conditions are true:
-* If you have existing MongoDB, Cassandra, or Gremlin applications
+* If you have existing MongoDB, PostgreSQL, Cassandra, or Gremlin applications
* If you don't want to rewrite your entire data access layer * If you want to use the open-source developer ecosystem, client-drivers, expertise, and resources for your database * If you want to use the Azure Cosmos DB core features such as:
Based on your workload, you must choose the API that fits your requirement. The
:::image type="content" source="./media/choose-api/choose-api-decision-tree.png" alt-text="Decision tree to choose an API in Azure Cosmos DB." lightbox="./media/choose-api/choose-api-decision-tree.png":::
+> [!NOTE]
+> This decision tree will be updated soon to include API for PostgreSQL.
+ ## <a id="coresql-api"></a> API for NoSQL The Azure Cosmos DB API for NoSQL stores data in document format. It offers the best end-to-end experience as we have full control over the interface, service, and the SDK client libraries. Any new feature that is rolled out to Azure Cosmos DB is first available on API for NoSQL accounts. NoSQL accounts provide support for querying items using the Structured Query Language (SQL) syntax, one of the most familiar and popular query languages to query JSON objects. To learn more, see the [Azure Cosmos DB API for NoSQL](/training/modules/intro-to-azure-cosmos-db-core-api/) training module and [getting started with SQL queries](nosql/query/getting-started.md) article.
The features that Azure Cosmos DB provides, that you don't have to compromise on
You can use your existing MongoDB apps with API for MongoDB by just changing the connection string. You can move any existing data using native MongoDB tools such as mongodump & mongorestore or using our Azure Database Migration tool. Tools, such as the MongoDB shell, [MongoDB Compass](mongodb/connect-using-compass.md), and [Robo3T](mongodb/connect-using-robomongo.md), can run queries and work with data as they do with native MongoDB. To learn more, see [API for MongoDB](mongodb/introduction.md) article.
+## API for PostgreSQL
+
+Azure Cosmos DB for PostgreSQL is a managed service for running PostgreSQL at any scale, with the [Citus open source](https://github.com/citusdata/citus) superpower of distributed tables. It stores data either on a single node, or distributed in a multi-node configuration.
+
+Azure Cosmos DB for PostgreSQL is built on native PostgreSQL--rather than a PostgreSQL fork--and lets you choose any major database versions supported by the PostgreSQL community. It's ideal for starting on a single-node database with rich indexing, geospatial capabilities, and JSONB support. Later, if your performance needs grow, you can add nodes to the cluster with zero downtime.
+
+If you're looking for a managed open source relational database with high performance and geo-replication, Azure Cosmos DB for PostgreSQL is the recommended choice. To learn more, see the [Azure Cosmos DB for PostgreSQL introduction](postgresql/introduction.md).
+ ## <a id="cassandra-api"></a> API for Apache Cassandra The Azure Cosmos DB API for Cassandra stores data in column-oriented schema. Apache Cassandra offers a highly distributed, horizontally scaling approach to storing large volumes of data while offering a flexible approach to a column-oriented schema. API for Cassandra in Azure Cosmos DB aligns with this philosophy to approaching distributed NoSQL databases. This API for Cassandra is wire protocol compatible with native Apache Cassandra. You should consider API for Cassandra if you want to benefit from the elasticity and fully managed nature of Azure Cosmos DB and still use most of the native Apache Cassandra features, tools, and ecosystem. This fully managed nature means on API for Cassandra you don't need to manage the OS, Java VM, garbage collector, read/write performance, nodes, clusters, etc.
The Azure Cosmos DB API for Table stores data in key/value format. If you're cur
Applications written for Azure Table storage can migrate to the API for Table with little code changes and take advantage of premium capabilities. To learn more, see [API for Table](table/introduction.md) article.
-## API for PostgreSQL
-
-Azure Cosmos DB for PostgreSQL is a managed service for running PostgreSQL at any scale, with the [Citus open source](https://github.com/citusdata/citus) superpower of distributed tables. It stores data either on a single node, or distributed in a multi-node configuration.
-
-Azure Cosmos DB for PostgreSQL is built on native PostgreSQL--rather than a PostgreSQL fork--and lets you choose any major database versions supported by the PostgreSQL community. It's ideal for starting on a single-node database with rich indexing, geospatial capabilities, and JSONB support. Later, if your performance needs grow, you can add nodes to the cluster with zero downtime.
-
-If youΓÇÖre looking for a managed open source relational database with high performance and geo-replication, Azure Cosmos DB for PostgreSQL is the recommended choice. To learn more, see the [Azure Cosmos DB for PostgreSQL introduction](postgresql/introduction.md).
- ## Capacity planning when migrating data Trying to do capacity planning for a migration to Azure Cosmos DB for NoSQL or MongoDB from an existing database cluster? You can use information about your existing database cluster for capacity planning.
Trying to do capacity planning for a migration to Azure Cosmos DB for NoSQL or M
* [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-dotnet.md) * [Get started with Azure Cosmos DB for MongoDB](mongodb/create-mongodb-nodejs.md)
+* [Get started with Azure Cosmos DB for PostgreSQL](postgresql/quickstart-create-portal.md)
* [Get started with Azure Cosmos DB for Cassandra](cassandr) * [Get started with Azure Cosmos DB for Gremlin](gremlin/quickstart-dotnet.md) * [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md)
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
You can provision throughput at a container-level or a database-level in terms o
| Maximum storage per container | Unlimited | | Maximum attachment size per Account (Attachment feature is being deprecated) | 2 GB | | Minimum RU/s required per 1 GB | 10 RU/s <sup>3</sup> |
-
<sup>1</sup> You can increase Maximum RUs per container or database by [filing an Azure support ticket](create-support-request-quota-increase.md). <sup>2</sup> To learn about best practices for managing workloads that have partition keys requiring higher limits for storage or throughput, see [Create a synthetic partition key](synthetic-partition-keys.md). If your workload has already reached the logical partition limit of 20 GB in production, it's recommended to rearchitect your application with a different partition key as a long-term solution. To help give time to rearchitect your application, you can request a temporary increase in the logical partition key limit for your existing application. [File an Azure support ticket](create-support-request-quota-increase.md) and select quota type **Temporary increase in container's logical partition key size**. Requesting a temporary increase is intended as a temporary mitigation and not recommended as a long-term solution, as **SLA guarantees are not honored when the limit is increased**. To remove the configuration, file a support ticket and select quota type **Restore container's logical partition key size to default (20 GB)**. Filing this support ticket can be done after you have either deleted data to fit the 20-GB logical partition limit or have rearchitected your application with a different partition key.
An Azure Cosmos DB item can represent either a document in a collection, a row i
| Maximum size of an item | 2 MB (UTF-8 length of JSON representation) <sup>1</sup> | | Maximum length of partition key value | 2048 bytes | | Maximum length of ID value | 1023 bytes |
+| Allowed characters for ID value | Service-side, all Unicode characters except '/' and '\\' are allowed. <br/>**WARNING: For best interoperability, we strongly recommend using only alphanumeric ASCII characters in the ID value.** <br/>Several known limitations in some versions of the Cosmos DB SDKs, connectors (ADF, Spark, Kafka, and so on), and HTTP drivers/libraries can prevent successful processing when the ID value contains non-alphanumeric ASCII characters. If you have to support non-alphanumeric ASCII characters in your service or application, encode the ID value to increase interoperability - [for example, via Base64 plus custom encoding of the special characters allowed in Base64](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/78fc16c35c521b4f9a7aeef11db4df79c2545dee/Microsoft.Azure.Cosmos.Encryption/src/EncryptionProcessor.cs#L475-L489). A minimal sketch of one such encoding appears after this table. |
| Maximum number of properties per item | No practical limit | | Maximum length of property name | No practical limit | | Maximum length of property value | No practical limit |
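The row above links a Base64-plus-custom-escaping approach from the .NET encryption processor. As a minimal sketch of the same idea, a simpler alternative that also yields purely alphanumeric ASCII is hex encoding, shown here in Python; the function names are illustrative only and not part of any SDK:

```python
def encode_id(raw_id: str) -> str:
    # Hex output uses only 0-9 and a-f, so the resulting ID is alphanumeric ASCII
    # and avoids the '/', '\\', and special-character issues described above.
    return raw_id.encode("utf-8").hex()

def decode_id(safe_id: str) -> str:
    # Reverse the transformation when the original value is needed again.
    return bytes.fromhex(safe_id).decode("utf-8")

original = "sales/2024\\Q1#läger"
safe = encode_id(original)
assert decode_id(safe) == original
print(safe)  # purely alphanumeric, usable as an item ID
```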
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
Azure Cosmos DB accounts configured with multiple write regions cannot be config
To learn more about consistency concepts, read the following articles: -- [High-level TLA+ specifications for the five consistency levels offered by Azure Cosmos DB](https://github.com/Azure/azure-cosmos-tla)
+- [High-level TLA+ specifications for the five consistency levels offered by Azure Cosmos DB](https://github.com/tlaplus/azure-cosmos-tla)
- [Replicated Data Consistency Explained Through Baseball (video) by Doug Terry](https://www.youtube.com/watch?v=gluIh8zd26I) - [Replicated Data Consistency Explained Through Baseball (whitepaper) by Doug Terry](https://www.microsoft.com/research/publication/replicated-data-consistency-explained-through-baseball/) - [Session guarantees for weakly consistent replicated data](https://dl.acm.org/citation.cfm?id=383631)
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
The following table summarizes the high availability capability of various accou
|Zone failures – data loss | Data loss | No data loss | No data loss | No data loss | No data loss | |Zone failures – availability | Availability loss | No availability loss | No availability loss | No availability loss | No availability loss | |Regional outage – data loss | Data loss | Data loss | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information. | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information. | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information.
-|Regional outage – availability | Availability loss | Availability loss | No availability loss for read region failure, temporary for write region failure | No availability loss for read region failure, temporary for write region failure | No availability loss |
+|Regional outage – availability | Availability loss | Availability loss | No availability loss for read region failure, temporary for write region failure | No availability loss for read region failure, temporary for write region failure | No read availability loss, temporary write availability loss in the affected region |
|Price (***1***) | N/A | Provisioned RU/s x 1.25 rate | Provisioned RU/s x n regions | Provisioned RU/s x 1.25 rate x n regions (***2***) | Multi-region write rate x n regions | ***1*** For serverless accounts, request units (RU) are multiplied by a factor of 1.25.
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
Previously updated : 05/09/2022 Last updated : 10/26/2022 # Merge partitions in Azure Cosmos DB (preview)+ [!INCLUDE[NoSQL, MongoDB](includes/appliesto-nosql-mongodb.md)] Merging partitions in Azure Cosmos DB (preview) allows you to reduce the number of physical partitions used for your container in place. With merge, containers that are fragmented in throughput (have low RU/s per partition) or storage (have low storage per partition) can have their physical partitions reworked. If a container's throughput has been scaled up and needs to be scaled back down, merge can help resolve throughput fragmentation issues. For the same amount of provisioned RU/s, having fewer physical partitions means each physical partition gets more of the overall RU/s. Minimizing partitions reduces the chance of rate limiting if a large quantity of data is removed from a container and RU/s per partition is low. Merge can help clear out unused or empty partitions, effectively resolving storage fragmentation problems. ## Getting started
-To get started using partition merge, enroll in the preview by submitting a request for the **Azure Cosmos DB Partition Merge** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+To get started using partition merge, navigate to the **Features** page in your Azure Cosmos DB account. Select and enable the **Partition merge (preview)** feature.
+
+Before enabling the feature, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria). Once you've enabled the feature, it will take 15-20 minutes to take effect.
-Before submitting your request:
-- Ensure that you have at least 1 Azure Cosmos DB account in the subscription. This may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.-- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria).
+> [!CAUTION]
+> When merge is enabled on an account, only requests from .NET SDK version >= 3.27.0 will be allowed on the account, regardless of whether merges are ongoing or not. Requests from other SDKs (older .NET SDK, Java, JavaScript, Python, Go) or unsupported connectors (Azure Data Factory, Azure Search, Azure Cosmos DB Spark connector, Azure Functions, Azure Stream Analytics, and others) will be blocked and fail. Ensure you have upgraded to a supported SDK version before enabling the feature. After the feature is enabled or disabled, it may take 15-20 minutes to fully propagate to the account. If you plan to disable the feature after you've completed using it, it may take 15-20 minutes before requests from SDKs and connectors that are not supported for merge are allowed.
-The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Partition Merge**. Run the **Check eligibility for partition merge preview** diagnostic. :::image type="content" source="media/merge/merge-eligibility-check.png" alt-text="Screenshot of merge eligibility check with table of all preview eligibility criteria."::: ### How to identify containers to merge Containers that meet both of these conditions are likely to benefit from merging partitions:-- Condition 1: The current RU/s per physical partition is <3000 RU/s-- Condition 2: The current average storage in GB per physical partition is <20 GB
-Condition 1 often occurs when you have previously scaled up the RU/s (often for a data ingestion) and now want to scale down in steady state.
+- **Condition 1**: The current RU/s per physical partition is <3000 RU/s
+- **Condition 2**: The current average storage in GB per physical partition is <20 GB
+
+Condition 1 often occurs when you've previously scaled up the RU/s (often for a data ingestion) and now want to scale down in steady state.
Condition 2 often occurs when you delete/TTL a large volume of data, leaving unused partitions. #### Criteria 1
-To determine the current RU/s per physical partition, from your Cosmos account, navigate to **Metrics**. Select the metric **Physical Partition Throughput** and filter to your database and container. Apply splitting by **PhysicalPartitionId**.
+To determine the current RU/s per physical partition, from your Cosmos account, navigate to **Metrics**. Select the metric **Physical Partition Throughput** and filter to your database and container. Apply splitting by **PhysicalPartitionId**.
-For containers using autoscale, this will show the max RU/s currently provisioned on each physical partition. For containers using manual throughput, this will show the manual RU/s on each physical partition.
+For containers using autoscale, this metric will show the max RU/s currently provisioned on each physical partition. For containers using manual throughput, this metric will show the manual RU/s on each physical partition.
-In the below example, we have an autoscale container provisioned with 5000 RU/s (scales between 500 - 5000 RU/s). It has 5 physical partitions and each physical partition has 1000 RU/s.
+In the below example, we have an autoscale container provisioned with 5000 RU/s (scales between 500 - 5000 RU/s). It has five physical partitions and each physical partition has 1000 RU/s.
:::image type="content" source="media/merge/RU-per-physical-partition-metric.png" alt-text="Screenshot of Azure Monitor metric Physical Partition Throughput in Azure portal.":::
Navigate to **Insights** > **Storage** > **Data & Index Usage**. The total stora
:::image type="content" source="media/merge/storage-per-container.png" alt-text="Screenshot of Azure Monitor storage (data + index) metric for container in Azure portal.":::
-Next, find the total number of physical partitions. This is the distinct number of **PhysicalPartitionIds** in the **PhysicalPartitionThroughput** chart we saw in Criteria 1. In our example, we have 5 physical partitions.
+Next, find the total number of physical partitions. This value is the count of distinct **PhysicalPartitionId** values in the **PhysicalPartitionThroughput** chart from Criteria 1. In our example, we have five physical partitions.
-Finally, calculate: Total storage in GB / number of physical partitions. In our example, we have an average of (74 GB / 5 physical partitions) = 14.8 GB per physical partition.
+Finally, calculate: total storage in GB / number of physical partitions. In our example, we have an average of 74 GB / 5 physical partitions = 14.8 GB per physical partition.
Based on criteria 1 and 2, our container can potentially benefit from merging partitions.
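To make the two checks concrete, here's a minimal Python sketch that plugs in the example numbers from this section (5,000 RU/s provisioned, five physical partitions, 74 GB of data and index storage); the 3,000 RU/s and 20 GB thresholds come from conditions 1 and 2 above:

```python
provisioned_rus = 5000        # autoscale max RU/s from Criteria 1
physical_partitions = 5       # distinct PhysicalPartitionId values
total_storage_gb = 74         # data + index usage from Criteria 2

rus_per_partition = provisioned_rus / physical_partitions          # 1000.0 RU/s
storage_per_partition_gb = total_storage_gb / physical_partitions  # 14.8 GB

meets_condition_1 = rus_per_partition < 3000
meets_condition_2 = storage_per_partition_gb < 20
print(meets_condition_1 and meets_condition_2)  # True: a merge candidate
```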
In PowerShell, when the flag `-WhatIf` is passed in, Azure Cosmos DB will run a
```azurepowershell # Add the preview extension
-Install-Module -Name Az.CosmosDB -AllowPrerelease -Force
+$parameters = @{
+ Name = "Az.CosmosDB"
+ AllowPrerelease = $true
+ Force = $true
+}
+Install-Module @parameters
+```
+```azurepowershell
# API for NoSQL
-Invoke-AzCosmosDBSqlContainerMerge `
- -ResourceGroupName "<resource-group-name>" `
- -AccountName "<cosmos-account-name>" `
- -DatabaseName "<cosmos-database-name>" `
- -Name "<cosmos-container-name>" `
- -WhatIf
+$parameters = @{
+ ResourceGroupName = "<resource-group-name>"
+ AccountName = "<cosmos-account-name>"
+ DatabaseName = "<cosmos-database-name>"
+ Name = "<cosmos-container-name>"
+ WhatIf = $true
+}
+Invoke-AzCosmosDBSqlContainerMerge @parameters
+```
+```azurepowershell
# API for MongoDB
-Invoke-AzCosmosDBMongoDBCollectionMerge `
- -ResourceGroupName "<resource-group-name>" `
- -AccountName "<cosmos-account-name>" `
- -DatabaseName "<cosmos-database-name>" `
- -Name "<cosmos-collection-name>" `
- -WhatIf
+$parameters = @{
+ ResourceGroupName = "<resource-group-name>"
+ AccountName = "<cosmos-account-name>"
+ DatabaseName = "<cosmos-database-name>"
+ Name = "<cosmos-container-name>"
+ WhatIf = $true
+}
+Invoke-AzCosmosDBMongoDBCollectionMerge @parameters
``` #### [Azure CLI](#tab/azure-cli)
Invoke-AzCosmosDBMongoDBCollectionMerge `
```azurecli # Add the preview extension az extension add --name cosmosdb-preview
+```
+```azurecli
# API for NoSQL az cosmosdb sql container merge \ --resource-group '<resource-group-name>' \ --account-name '<cosmos-account-name>' \ --database-name '<cosmos-database-name>' \ --name '<cosmos-container-name>'
+```
+```azurecli
# API for MongoDB az cosmosdb mongodb collection merge \ --resource-group '<resource-group-name>' \
az cosmosdb mongodb collection merge \
### Monitor merge operations+ Partition merge is a long-running operation and there's no SLA on how long it takes to complete. The time depends on the amount of data in the container and the number of physical partitions. It's recommended to allow at least 5-6 hours for merge to complete. While partition merge is running on your container, it isn't possible to change the throughput or any container settings (TTL, indexing policy, unique keys, etc.). Wait until the merge operation completes before changing your container settings.
You can track whether merge is still in progress by checking the **Activity Log*
## Limitations ### Preview eligibility criteria+ To enroll in the preview, your Azure Cosmos DB account must meet all the following criteria:
-* Your Azure Cosmos DB account uses API for NoSQL or MongoDB with version >=3.6.
-* Your Azure Cosmos DB account is using provisioned throughput (manual or autoscale). Merge doesn't apply to serverless accounts.
- * Currently, merge isn't supported for shared throughput databases. You may enroll an account that has both shared throughput databases and containers with dedicated throughput (manual or autoscale).
- * However, only the containers with dedicated throughput will be able to be merged.
-* Your Azure Cosmos DB account is a single-write region account (merge isn't currently supported for multi-region write accounts).
-* Your Azure Cosmos DB account doesn't use any of the following features:
- * [Point-in-time restore](continuous-backup-restore-introduction.md)
- * [Customer-managed keys](how-to-setup-cmk.md)
- * [Analytical store](analytical-store-introduction.md)
-* Your Azure Cosmos DB account uses bounded staleness, session, consistent prefix, or eventual consistency (merge isn't currently supported for strong consistency).
-* If you're using API for NoSQL, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When merge preview enabled on your account, all requests sent from non .NET SDKs or older .NET SDK versions won't be accepted.
- * There are no SDK or driver requirements to use the feature with API for MongoDB.
-* Your Azure Cosmos DB account doesn't use any currently unsupported connectors:
- * Azure Data Factory
- * Azure Stream Analytics
- * Logic Apps
- * Azure Functions
- * Azure Search
- * Azure Cosmos DB Spark connector
- * Azure Cosmos DB data migration tool
- * Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
+
+- Your Azure Cosmos DB account uses API for NoSQL or MongoDB with version >=3.6.
+- Your Azure Cosmos DB account is using provisioned throughput (manual or autoscale). Merge doesn't apply to serverless accounts.
+ - Currently, merge isn't supported for shared throughput databases. You may enroll an account that has both shared throughput databases and containers with dedicated throughput (manual or autoscale).
+ - However, only the containers with dedicated throughput will be able to be merged.
+- Your Azure Cosmos DB account is a single-write region account (merge isn't currently supported for multi-region write accounts).
+- Your Azure Cosmos DB account doesn't use any of the following features:
+ - [Point-in-time restore](continuous-backup-restore-introduction.md)
+ - [Customer-managed keys](how-to-setup-cmk.md)
+ - [Analytical store](analytical-store-introduction.md)
+- Your Azure Cosmos DB account uses bounded staleness, session, consistent prefix, or eventual consistency (merge isn't currently supported for strong consistency).
+- If you're using API for NoSQL, your application must use the Azure Cosmos DB .NET V3 SDK, version 3.27.0 or higher. When the merge preview is enabled on your account, all requests sent from non-.NET SDKs or older .NET SDK versions won't be accepted.
+ - There are no SDK or driver requirements to use the feature with API for MongoDB.
+- Your Azure Cosmos DB account doesn't use any currently unsupported connectors:
+ - Azure Data Factory
+ - Azure Stream Analytics
+ - Logic Apps
+ - Azure Functions
+ - Azure Search
+ - Azure Cosmos DB Spark connector
+ - Azure Cosmos DB data migration tool
+  - Any third-party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET V3 SDK v3.27.0 or higher
### Account resources and configuration
-* Merge is only available for API for NoSQL and MongoDB accounts. For API for MongoDB accounts, the MongoDB account version must be 3.6 or greater.
-* Merge is only available for single-region write accounts. Multi-region write account support isn't available.
-* Accounts using merge functionality can't also use these features (if these features are added to a merge enabled account, resources in the account will no longer be able to be merged):
- * [Point-in-time restore](continuous-backup-restore-introduction.md)
- * [Customer-managed keys](how-to-setup-cmk.md)
- * [Analytical store](analytical-store-introduction.md)
-* Containers using merge functionality must have their throughput provisioned at the container level. Database-shared throughput support isn't available.
-* Merge is only available for accounts using bounded staleness, session, consistent prefix, or eventual consistency. It isn't currently supported for strong consistency.
-* After a container has been merged, it isn't possible to read the change feed with start time. Support for this feature is planned for the future.
+
+- Merge is only available for API for NoSQL and MongoDB accounts. For API for MongoDB accounts, the MongoDB account version must be 3.6 or greater.
+- Merge is only available for single-region write accounts. Multi-region write account support isn't available.
+- Accounts using merge functionality can't also use these features (if these features are added to a merge enabled account, resources in the account will no longer be able to be merged):
+ - [Point-in-time restore](continuous-backup-restore-introduction.md)
+ - [Customer-managed keys](how-to-setup-cmk.md)
+ - [Analytical store](analytical-store-introduction.md)
+- Containers using merge functionality must have their throughput provisioned at the container level. Database-shared throughput support isn't available.
+- Merge is only available for accounts using bounded staleness, session, consistent prefix, or eventual consistency. It isn't currently supported for strong consistency.
+- After a container has been merged, it isn't possible to read the change feed with start time. Support for this feature is planned for the future.
### SDK requirements (API for NoSQL only)
-Accounts with the merge feature enabled are supported only when you use the latest version of the .NET v3 SDK. When the feature is enabled on your account (regardless of whether you run the merge), you must only use the supported SDK using the account. Requests sent from other SDKs or earlier versions won't be accepted. As long as you're using the supported SDK, your application can continue to run while a merge is ongoing.
+Accounts with the merge feature enabled are supported only when you use the latest version of the .NET v3 SDK. When the feature is enabled on your account (regardless of whether you run the merge), you must use only the supported SDK with the account. Requests sent from other SDKs or earlier versions won't be accepted. As long as you're using the supported SDK, your application can continue to run while a merge is ongoing.
Find the latest version of the supported SDK:
Find the latest version of the supported SDK:
Support for other SDKs is planned for the future. > [!TIP]
-> You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](sql/migrate-dotnet-v3.md).
+> You should ensure that your application has been updated to use a compatible SDK version prior to enrolling in the preview. If you're using the legacy .NET V2 SDK, follow the [.NET SDK v3 migration guide](sql/migrate-dotnet-v3.md).
### Unsupported connectors If you enroll in the preview, the following connectors will fail.
-* Azure Data Factory<sup>1</sup>
-* Azure Stream Analytics<sup>1</sup>
-* Logic Apps<sup>1</sup>
-* Azure Functions<sup>1</sup>
-* Azure Search<sup>1</sup>
-* Azure Cosmos DB Spark connector<sup>1</sup>
-* Azure Cosmos DB data migration tool
-* Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher
+- Azure Data Factory <sup>1</sup>
+- Azure Stream Analytics <sup>1</sup>
+- Logic Apps <sup>1</sup>
+- Azure Functions <sup>1</sup>
+- Azure Search <sup>1</sup>
+- Azure Cosmos DB Spark connector <sup>1</sup>
+- Azure Cosmos DB data migration tool
+- Any third-party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET V3 SDK v3.27.0 or higher
-<sup>1</sup>Support for these connectors is planned for the future.
+<sup>1</sup> Support for these connectors is planned for the future.
## Next steps
-* Learn more about [using Azure CLI with Azure Cosmos DB.](/cli/azure/azure-cli-reference-for-cosmos-db)
-* Learn more about [using Azure PowerShell with Azure Cosmos DB.](/powershell/module/az.cosmosdb/)
-* Learn more about [partitioning in Azure Cosmos DB.](partitioning-overview.md)
+- Learn more about [using Azure CLI with Azure Cosmos DB](/cli/azure/azure-cli-reference-for-cosmos-db).
+- Learn more about [using Azure PowerShell with Azure Cosmos DB](/powershell/module/az.cosmosdb/).
+- Learn more about [partitioning in Azure Cosmos DB](partitioning-overview.md).
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/best-practice-dotnet.md
Watch the video below to learn more about using the .NET SDK from an Azure Cosmo
|<input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, we set the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property helps which control the time unused connections are closed. This will reduce the number of unused connections. By default, idle connections are kept open indefinitely. The value set must be greater than or equal to 10 minutes. We recommended values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. | |<input type="checkbox"/> | Use Async/Await | Avoid blocking calls: `Task.Result`, `Task.Wait`, and `Task.GetAwaiter().GetResult()`. The entire call stack is asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times. | |<input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use both `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Azure Cosmos DB [visit](troubleshoot-dotnet-sdk-request-timeout.md) |
-|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK won't retry on writes for transient failures as writes aren't idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors) |
+|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. For accounts configured with a single write region, the SDK won't retry on writes for transient failures as writes aren't idempotent. For accounts configured with multiple write regions, there are [some scenarios](troubleshoot-sdk-availability.md#transient-connectivity-issues-on-tcp-protocol) where the SDK will automatically retry writes on other regions. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors) |
|<input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. | |<input type="checkbox"/> | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. | | <input type="checkbox"/> | Parallel Queries | The Azure Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-csharp) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestsOptions` to the number of partitions you have. If you aren't aware of the number of partitions, start by using `int.MaxValue`, which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
cosmos-db Tutorial Create Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-create-notebook.md
Title: |
description: | Learn how to use built-in Jupyter notebooks to import data to Azure Cosmos DB for NoSQL, analyze the data, and visualize the output. + Last updated 09/29/2022
# Tutorial: Create a Jupyter Notebook in Azure Cosmos DB for NoSQL to analyze and visualize data (preview) > [!IMPORTANT] > The Jupyter Notebooks feature of Azure Cosmos DB is currently in a preview state and is progressively rolling out to all customers over time.
-This article describes how to use the Jupyter Notebooks feature of Azure Cosmos DB to import sample retail data to an Azure Cosmos DB for NoSQL account. You'll see how to use the Azure Cosmos DB magic commands to run queries, analyze the data, and visualize the results.
+This tutorial walks through how to use the Jupyter Notebooks feature of Azure Cosmos DB to import sample retail data to an Azure Cosmos DB for NoSQL account. You'll see how to use the Azure Cosmos DB magic commands to run queries, analyze the data, and visualize the results.
## Prerequisites -- [Azure Cosmos DB for NoSQL account](create-cosmosdb-resources-portal.md#create-an-azure-cosmos-db-account) (configured with serverless throughput)
+- [Azure Cosmos DB for NoSQL account](create-cosmosdb-resources-portal.md#create-an-azure-cosmos-db-account) (configured with serverless throughput).
## Create a new notebook
In this section, you'll create the Azure Cosmos database, container, and import
## Next steps - [Learn about the Jupyter Notebooks feature in Azure Cosmos DB](../notebooks-overview.md)
+- [Import notebooks from GitHub into an Azure Cosmos DB for NoSQL account](tutorial-import-notebooks.md)
- [Review the FAQ on Jupyter Notebook support](../notebooks-faq.yml)
cosmos-db Tutorial Deploy App Bicep Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-deploy-app-bicep-aks.md
Title: 'Tutorial: Deploy an ASP.NET web application using Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service using Bicep'
-description: Deploy an ASP.NET MVC web application with Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service using Bicep.
+ Title: 'Tutorial: Deploy an ASP.NET web application using Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service via Bicep'
+description: Learn how to deploy an ASP.NET MVC web application with Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service by using Bicep.
Last updated 10/17/2022
-# Tutorial: Deploy an ASP.NET web application using Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service using Bicep
+# Tutorial: Deploy an ASP.NET web application by using Azure Cosmos DB for NoSQL, managed identity, and AKS via Bicep
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] In this tutorial, you'll deploy a reference ASP.NET web application on an Azure Kubernetes Service (AKS) cluster that connects to Azure Cosmos DB for NoSQL.
-**[Azure Cosmos DB](../introduction.md)** is a fully managed distributed database platform for modern application development with NoSQL or relational databases.
+[Azure Cosmos DB](../introduction.md) is a fully managed distributed database platform for modern application development with NoSQL or relational databases.
-**[Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md)** is a managed Kubernetes service that lets you quickly deploy and manage clusters.
+[AKS](../../aks/intro-kubernetes.md) is a managed Kubernetes service that helps you quickly deploy and manage clusters.
> [!IMPORTANT]
->
-> - This article requires the latest version of Azure CLI. For more information, see [install Azure CLI](/cli/azure/install-azure-cli). If you are using the Azure Cloud Shell, the latest version is already installed.
-> - This article also requires the latest version of the Bicep CLI within Azure CLI. For more information, see [install Bicep tools](../../azure-resource-manager/bicep/install.md#azure-cli)
-> - If you are running the commands in this tutorial locally instead of in the Azure Cloud Shell, ensure you run the commands using an administrator account.
->
+> - This article requires the latest version of the Azure CLI. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli). If you're using Azure Cloud Shell, the latest version is already installed.
+> - This article also requires the latest version of the Bicep CLI within the Azure CLI. For more information, see [Install Bicep tools](../../azure-resource-manager/bicep/install.md#azure-cli).
+> - If you're running the commands in this tutorial locally instead of in Azure Cloud Shell, ensure that you use an administrator account.
-## Pre-requisites
+## Prerequisites
-The following tools are required to compile the ASP.NET web application and create its container image.
+The following tools are required to compile the ASP.NET web application and create its container image:
- [Docker Desktop](https://docs.docker.com/desktop/) - [Visual Studio Code](https://code.visualstudio.com/)
The following tools are required to compile the ASP.NET web application and crea
## Overview
-This tutorial uses an [Infrastructure as Code (IaC)](/devops/deliver/what-is-infrastructure-as-code) approach to deploy the resources to Azure. We'll use **[Bicep](../../azure-resource-manager/bicep/overview.md)**, which is a new declarative language that offers the same capabilities as [ARM templates](../../azure-resource-manager/templates/overview.md). However, bicep includes a syntax that is more concise and easier to use.
+This tutorial uses an [infrastructure as code (IaC)](/devops/deliver/what-is-infrastructure-as-code) approach to deploy the resources to Azure. You'll use [Bicep](../../azure-resource-manager/bicep/overview.md), which is a new declarative language that offers the same capabilities as [Azure Resource Manager templates](../../azure-resource-manager/templates/overview.md). However, Bicep includes a syntax that's more concise and easier to use.
-The Bicep modules will deploy the following Azure resources within the targeted subscription scope.
+The Bicep modules will deploy the following Azure resources within the targeted subscription scope:
-1. A [resource group](../../azure-resource-manager/management/overview.md#resource-groups) to organize the resources
-1. A [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for authentication
-1. An [Azure Container Registry (ACR)](../../container-registry/container-registry-intro.md) for storing container images
-1. An [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) cluster
-1. An [Azure Virtual Network (VNET)](../../virtual-network/network-overview.md) required for configuring AKS
-1. An [Azure Cosmos DB for NoSQL account](../introduction.md)) along with a database, container, and the [SQL role](/cli/azure/cosmosdb/sql/role)
-1. An [Azure Key Vault](../../key-vault/general/overview.md) to store secure keys
-1. (Optional) An [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-overview.md)f
+- A [resource group](../../azure-resource-manager/management/overview.md#resource-groups) to organize the resources
+- A [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for authentication
+- A [container registry](../../container-registry/container-registry-intro.md) for storing container images
+- An [AKS](../../aks/intro-kubernetes.md) cluster
+- A [virtual network](../../virtual-network/network-overview.md) for configuring AKS
+- An [Azure Cosmos DB for NoSQL account](../introduction.md), along with a database, a container, and the [SQL role](/cli/azure/cosmosdb/sql/role)
+- A [key vault](../../key-vault/general/overview.md) to store secure keys
+- (Optional) A [Log Analytics workspace](../../azure-monitor/logs/log-analytics-overview.md)
-This tutorial uses the following security best practices with Azure Cosmos DB.
+This tutorial uses the following security best practices with Azure Cosmos DB:
-1. Implements access control using [role-based access control](../../role-based-access-control/overview.md) and [managed identity](../../active-directory/managed-identities-azure-resources/overview.md). These features eliminate the need for developers to manage secrets, credentials, certificates, and keys used to secure communication between services.
-1. Limits Azure Cosmos DB access to the AKS subnet by [configuring a virtual network service endpoint](../how-to-configure-vnet-service-endpoint.md).
-1. Set `disableLocalAuth = true` in the **databaseAccount** resource to [enforce role-based access control as the only authentication method](../how-to-setup-rbac.md#disable-local-auth).
+- Implement access control by using [role-based access control (RBAC)](../../role-based-access-control/overview.md) and a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md). These features eliminate the need for developers to manage secrets, credentials, certificates, and keys for secure communication between services.
+- Limit Azure Cosmos DB access to the AKS subnet by [configuring a virtual network service endpoint](../how-to-configure-vnet-service-endpoint.md).
+- Set `disableLocalAuth = true` in the `databaseAccount` resource to [enforce RBAC as the only authentication method](../how-to-setup-rbac.md#disable-local-auth).
> [!TIP]
-> The steps in this tutorial uses [Azure Cosmos DB for NoSQL](./quickstart-dotnet.md). However, the same concepts can also be applied to **[Azure Cosmos DB for MongoDB](../mongodb/introduction.md)**.
+> The steps in this tutorial use [Azure Cosmos DB for NoSQL](./quickstart-dotnet.md). However, you can apply the same concepts to [Azure Cosmos DB for MongoDB](../mongodb/introduction.md).
## Download the Bicep modules
-Download or [clone](https://docs.github.com/repositories/creating-and-managing-repositories/cloning-a-repository) the Bicep modules from the **Bicep** folder of the [azure-samples/cosmos-aks-samples](https://github.com/Azure-Samples/cosmos-aks-samples/tree/main/Bicep) GitHub repository.
+Download or [clone](https://docs.github.com/repositories/creating-and-managing-repositories/cloning-a-repository) the Bicep modules from the *Bicep* folder of the [azure-samples/cosmos-aks-samples](https://github.com/Azure-Samples/cosmos-aks-samples/tree/main/Bicep) GitHub repository:
```bash git clone https://github.com/Azure-Samples/cosmos-aks-samples.git
cd Bicep/
## Connect to your Azure subscription
-Use [`az login`](/cli/azure/authenticate-azure-cli) to connect to your default Azure subscription.
+Use [az login](/cli/azure/authenticate-azure-cli) to connect to your default Azure subscription:
```azurecli az login ```
-Optionally, use [`az account set`](/cli/azure/account#az-account-set) with the name or ID of a specific subscription to set the active subscription if you have multiple subscriptions.
+Optionally, use [az account set](/cli/azure/account#az-account-set) with the name or ID of a specific subscription to set the active subscription if you have multiple subscriptions:
```azurecli az account set \
az account set \
## Initialize the deployment parameters
-Create a **param.json** file by using the JSON in this example. Replace the `{resource group name}`, `{Azure Cosmos DB account name}`, and `{Azure Container Registry instance name}` placeholders with your own values for resource group name, Azure Cosmos DB account name, and Azure Container Registry instance name respectively.
+Create a *param.json* file by using the JSON in the following example. Replace the `{resource group name}`, `{Azure Cosmos DB account name}`, and `{Azure Container Registry instance name}` placeholders with your own values.
> [!IMPORTANT]
-> All resource names used in the steps below should be compliant with the **[naming rules and restrictions for Azure resources](../../azure-resource-manager/management/resource-name-rules.md)**, also ensure that the placeholders values are replaced consistently and match with values supplied in **param.json**.
+> All resource names that you use in the following code should comply with the [naming rules and restrictions for Azure resources](../../azure-resource-manager/management/resource-name-rules.md). Also ensure that the placeholder values are replaced consistently and match the values in *param.json*.
```json {
Create a **param.json** file by using the JSON in this example. Replace the `{re
## Create a Bicep deployment
-Set shell variables using these commands replacing the `{deployment name}`, and `{location}` placeholders with your own values.
+Set shell variables by using the following commands. Replace the `{deployment name}` and `{location}` placeholders with your own values.
```bash
-deploymentName='{deployment name}' # Name of the Deployment
+deploymentName='{deployment name}' # Name of the deployment
location='{location}' # Location for deploying the resources ```
-Within the **Bicep** folder, use [`az deployment sub create`](/cli/azure/deployment/sub#az-deployment-sub-create) to deploy the template to the current subscription scope.
+Within the *Bicep* folder, use [az deployment sub create](/cli/azure/deployment/sub#az-deployment-sub-create) to deploy the template to the current subscription scope:
```azurecli az deployment sub create \
During deployment, the console will output a message indicating that the deploym
/ Running .. ```
-The deployment could take somewhere around 20 to 30 minutes. Once provisioning is completed, the console will output JSON with `Succeeded` as the provisioning state.
+The deployment could take 20 to 30 minutes. After provisioning is completed, the console will output JSON with `Succeeded` as the provisioning state:
```output }
The deployment could take somewhere around 20 to 30 minutes. Once provisioning i
} ```
-You can also see the deployment status in the resource group
+You can also see the deployment status in the resource group:
:::image type="content" source="./media/tutorial-deploy-app-bicep-aks/deployed-resource-group.png" lightbox="./media/tutorial-deploy-app-bicep-aks/deployed-resource-group.png" alt-text="Screenshot of the deployment status for the resource group in the Azure portal."::: > [!NOTE]
-> When creating an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [why are two resource groups created with AKS?](../../aks/faq.md#why-are-two-resource-groups-created-with-aks)
+> When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](../../aks/faq.md#why-are-two-resource-groups-created-with-aks).
## Link Azure Container Registry with AKS
-Replace the `{Azure Container Registry instance name}` and `{resource group name}` placeholders with your own values.
+Use the following commands to link your Azure Container Registry instance with AKS. Replace the `{Azure Container Registry instance name}` and `{resource group name}` placeholders with your own values.
```bash acrName='{Azure Container Registry instance name}'
rgName='{resource group name}'
aksName=$rgName'aks' ```
-Run [`az aks update`](/cli/azure/aks#az-aks-update) to attach the existing ACR resource with the AKS cluster.
+Run [az aks update](/cli/azure/aks#az-aks-update) to attach the existing Azure Container Registry resource with the AKS cluster:
```azurecli az aks update \
az aks update \
## Connect to the AKS cluster
-To manage a Kubernetes cluster, you use [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use [`az aks install-cli`](/cli/azure/aks#az-aks-install-cli):
+To manage a Kubernetes cluster, you use [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use [az aks install-cli](/cli/azure/aks#az-aks-install-cli):
```azurecli az aks install-cli ```
-To configure `kubectl` to connect to your Kubernetes cluster, use [`az aks get-credentials`](/cli/azure/aks#az-aks-get-credentials). This command downloads credentials and configures the Kubernetes CLI to use them.
+To configure `kubectl` to connect to your Kubernetes cluster, use [az aks get-credentials](/cli/azure/aks#az-aks-get-credentials). This command downloads credentials and configures the Kubernetes CLI to use them.
```azurecli az aks get-credentials \
az aks get-credentials \
## Connect the AKS pods to Azure Key Vault
-Azure Active Directory (Azure AD) pod-managed identities use AKS primitives to associate managed identities for Azure resources and identities in Azure AD with pods. We'll use these identities to grant access to the Azure Key Vault Secrets Provider for Secrets Store CSI driver.
+Azure Active Directory (Azure AD) pod-managed identities use AKS primitives to associate managed identities for Azure resources and identities in Azure AD with pods. You'll use these identities to grant access to the Azure Key Vault Provider for Secrets Store CSI Driver.
-Use the command below to find the values of the Tenant ID (homeTenantId).
+Use the following command to find the values of the tenant ID (`homeTenantId`):
```azurecli az account show ```
-Use this YAML template to create a **secretproviderclass.yml** file. Make sure to update your own values for `{Tenant Id}` and `{resource group name}` placeholders. Ensure that the below values for resource group name placeholder match with values supplied in **param.json**.
+Use the following YAML template to create a *secretproviderclass.yml* file. Replace the `{Tenant Id}` and `{resource group name}` placeholders with your own values. Also ensure that the value for `{resource group name}` matches the value in *param.json*.
```yml
-# This is a SecretProviderClass example using aad-pod-identity to access the key vault
+# This is a SecretProviderClass example that uses aad-pod-identity to access the key vault
apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata:
spec:
provider: azure parameters: usePodIdentity: "true"
- keyvaultName: "{resource group name}kv" # Replace resource group name. Key Vault name is generated by Bicep
- tenantId: "{Tenant Id}" # The tenant ID of your account, use 'homeTenantId' attribute value from the 'az account show' command output
+ keyvaultName: "{resource group name}kv" # Replace resource group name. Bicep generates the key vault name.
+ tenantId: "{Tenant Id}" # The tenant ID of your account. Use the 'homeTenantId' attribute value from the 'az account show' command output.
``` ## Apply the SecretProviderClass to the AKS cluster
-Use [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) to install the Secrets Store CSI Driver using the YAML.
+Use [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) to install the Secrets Store CSI Driver by using the YAML:
```bash kubectl apply \
kubectl apply \
## Build the ASP.NET web application
-Download or clone the web application source code from the **Application** folder of the [azure-samples/cosmos-aks-samples](https://github.com/Azure-Samples/cosmos-aks-samples/tree/main/Application) GitHub repository.
+Download or clone the web application's source code from the *Application* folder of the [azure-samples/cosmos-aks-samples](https://github.com/Azure-Samples/cosmos-aks-samples/tree/main/Application) GitHub repository:
```bash git clone https://github.com/Azure-Samples/cosmos-aks-samples.git
git clone https://github.com/Azure-Samples/cosmos-aks-samples.git
cd Application/ ```
-Open the **Application folder** in **Visual Studio Code**. Run the application using either **F5** or the **Debug: Start Debugging** command.
+Open the *Application* folder in Visual Studio Code. Run the application by using either the F5 key or the **Debug: Start Debugging** command.
## Push the Docker container image to Azure Container Registry
-1. To create a container image from the Explorer tab on **Visual Studio Code**, open the context menu on the **Dockerfile** and select **Build Image...**. You'll then get a prompt asking for the name and version to tag the image. Enter the name `todo:latest`.
+1. To create a container image from the **Explorer** tab in Visual Studio Code, right-click **Dockerfile**, and then select **Build Image**.
:::image type="content" source="./media/tutorial-deploy-app-bicep-aks/context-menu-build-docker-image.png" alt-text="Screenshot of the context menu in Visual Studio Code with the Build Image option selected.":::
-1. Use the Docker pane to push the built image to ACR. You'll find the built image under the **Images** node. Open the `todo` node, then open the context menu for the latest, and then finally select **Push...**.
+1. In the prompt that asks for the name and version to tag the image, enter the name **todo:latest**.
-1. You'll then get prompts to select your Azure subscription, ACR resource, and image tags. The image tag format should be `{acrname}.azurecr.io/todo:latest`.
+1. Use the **Docker** pane to push the built image to Azure Container Registry. You'll find the built image under the **Images** node. Open the **todo** node, right-click **latest**, and then select **Push**.
:::image type="content" source="./media/tutorial-deploy-app-bicep-aks/context-menu-push-docker-image.png" alt-text="Screenshot of the context menu in Visual Studio Code with the Push option selected.":::
-1. Wait for **Visual Studio Code** to push the container image to ACR.
+1. In the prompts, select your Azure subscription, Azure Container Registry resource, and image tags. The image tag format should be `{acrname}.azurecr.io/todo:latest`.
-## Prepare Deployment YAML
+1. Wait for Visual Studio Code to push the container image to Azure Container Registry.
-Use this YAML template to create an **akstododeploy.yml** file. Make sure to replace the values for `{ACR name}`, `{Image name}`, `{Version}`, and `{resource group name}` placeholders.
+## Prepare the deployment YAML
+
+Use the following YAML template to create an *akstododeploy.yml* file. Replace the `{ACR name}`, `{Image name}`, `{Version}`, and `{resource group name}` placeholders with your own values.
```yml apiVersion: apps/v1
spec:
spec: containers: - name: mycontainer
- image: "{ACR name}/{Image name}:{Version}" # update as per your environment, example myacrname.azurecr.io/todo:latest. Do NOT add https:// in ACR Name
+ image: "{ACR name}/{Image name}:{Version}" # Update per your environment; for example, myacrname.azurecr.io/todo:latest. Do not add https:// in ACR Name.
ports: - containerPort: 80 env: - name: KeyVaultName
- value: "{resource group name}kv" # Replace resource group name. Key Vault name is generated by Bicep
+ value: "{resource group name}kv" # Replace resource group name. Key Vault name is generated by Bicep.
nodeSelector: kubernetes.io/os: linux volumes:
spec:
targetPort: 80 ```
-## Apply deployment YAML
+## Apply the deployment YAML
-Use `kubectl apply` again to deploy the application pods and expose the pods via a load balancer.
+Use `kubectl apply` again to deploy the application pods and expose the pods via a load balancer:
```bash kubectl apply \
kubectl apply \
## Test the application
-When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete
+When the application runs, a Kubernetes service exposes the application's front end to the internet. This process can take a few minutes to complete.
-Use [`kubectl get`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) to view the external IP exposed by the load balancer.
+Use [kubectl get](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) to view the external IP that the load balancer exposes:
```bash kubectl get services \ --namespace "my-app" ```
-Open the IP received as output in a browser to access the application.
+To access the application, open the IP address that you received as output in a browser.
## Clean up the resources
-To avoid Azure charges, you should clean up unneeded resources when the cluster is no longer needed. Use [`az group delete`](/cli/azure/group#az-group-delete) and [`az deployment sub delete`](/cli/azure/deployment/sub#az-deployment-sub-delete) to delete the resource group and subscription deployment respectively.
+To avoid Azure charges, clean up unneeded resources when you no longer need the cluster. Use [az group delete](/cli/azure/group#az-group-delete) and [az deployment sub delete](/cli/azure/deployment/sub#az-deployment-sub-delete) to delete the resource group and subscription deployment, respectively:
```azurecli az group delete \
az deployment sub delete \
## Next steps -- Learn how to [Develop a web application with Azure Cosmos DB](./tutorial-dotnet-web-app.md)-- Learn how to [Query Azure Cosmos DB for NoSQL](./tutorial-query.md).-- Learn how to [upgrade your cluster](../../aks/tutorial-kubernetes-upgrade-cluster.md)-- Learn how to [scale your cluster](../../aks/tutorial-kubernetes-scale.md)-- Learn how to [enable continuous deployment](../../aks/deployment-center-launcher.md)
+- Learn how to [develop a web application with Azure Cosmos DB](./tutorial-dotnet-web-app.md).
+- Learn how to [query Azure Cosmos DB for NoSQL](./tutorial-query.md).
+- Learn how to [upgrade your cluster](../../aks/tutorial-kubernetes-upgrade-cluster.md).
+- Learn how to [scale your cluster](../../aks/tutorial-kubernetes-scale.md).
+- Learn how to [enable continuous deployment](../../aks/deployment-center-launcher.md).
cosmos-db Tutorial Import Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-import-notebooks.md
+
+ Title: |
+ Tutorial: Import Jupyter notebooks from GitHub into Azure Cosmos DB for NoSQL (preview)
+description: |
+ Learn how to connect to GitHub and import the notebooks from a GitHub repository to your Azure Cosmos DB for NoSQL account.
+++ Last updated : 09/29/2022+++++
+# Tutorial: Import Jupyter notebooks from GitHub into Azure Cosmos DB for NoSQL (preview)
++
+> [!IMPORTANT]
+> The Jupyter Notebooks feature of Azure Cosmos DB is currently in a preview state and is progressively rolling out to all customers over time.
+
+This tutorial walks through how to import Jupyter notebooks from a GitHub repository and run them in an Azure Cosmos DB for NoSQL account. After importing the notebooks, you can run them, edit them, and persist your changes back to the same GitHub repository.
+
+## Prerequisites
+
+- [Azure Cosmos DB for NoSQL account](create-cosmosdb-resources-portal.md#create-an-azure-cosmos-db-account) (configured with serverless throughput).
+
+## Create a copy of a GitHub repository
+
+1. Navigate to the [azure-samples/cosmos-db-nosql-notebooks](https://github.com/azure-samples/cosmos-db-nosql-notebooks/generate) template repository.
+
+1. Create a new copy of the template repository in your own GitHub account or organization.
+
+## Pull notebooks from GitHub
+
+Instead of creating new notebooks each time you start a workspace, you can import existing notebooks from GitHub. In this section, you'll connect to an existing GitHub repository with sample notebooks.
+
+1. Navigate to your Azure Cosmos DB account and open the **Data Explorer**.
+
+1. Select **Connect to GitHub**.
+
+ :::image type="content" source="media/tutorial-import-notebooks/connect-github-option.png" lightbox="media/tutorial-import-notebooks/connect-github-option.png" alt-text="Screenshot of the Data Explorer with the 'Connect to GitHub' option highlighted.":::
+
+1. In the **Connect to GitHub** dialog, select the access option appropriate to your GitHub repository and then select **Authorize access**.
+
+ :::image type="content" source="media/tutorial-import-notebooks/authorize-access.png" alt-text="Screenshot of the 'Connect to GitHub' dialog with options for various levels of access.":::
+
+1. Complete the GitHub third-party authorization workflow, granting access to the organization(s) required to access your GitHub repository. For more information, see [Authorizing GitHub Apps](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/authorizing-github-apps).
+
+1. In the **Manage GitHub settings** dialog, select the GitHub repository you created earlier.
+
+ :::image type="content" source="media/tutorial-import-notebooks/select-pinned-repositories.png" alt-text="Screenshot of the 'Manage GitHub settings' dialog with a list of unpinned and pinned repositories.":::
+
+1. Back in the Data Explorer, locate the new tree of nodes for your pinned repository and open the **website-metrics-python.ipynb** file.
+
+ :::image type="content" source="media/tutorial-import-notebooks/open-notebook-pinned-repositories.png" alt-text="Screenshot of the pinned repositories in the Data Explorer.":::
+
+1. In the editor for the notebook, locate the following cell.
+
+ ```python
+ import pandas as pd
+ pd.options.display.html.table_schema = True
+ pd.options.display.max_rows = None
+
+ df_cosmos.groupby("Item").size()
+ ```
+
+1. The cell currently outputs the number of unique items. Replace the final line of the cell with a new line to output the number of unique actions in the dataset.
+
+ ```python
+ df_cosmos.groupby("Action").size()
+ ```
+
+1. Run all the cells sequentially to see the new dataset. The new dataset should only include three potential values for the **Action** column. Optionally, you can select a data visualization for the results.
+
+ :::image type="content" source="media/tutorial-import-notebooks/updated-visualization.png" alt-text="Screenshot of the Pandas dataframe visualization for the data.":::
+
+## Push notebook changes to GitHub
+
+> [!TIP]
+> Currently, temporary workspaces will be de-allocated if left idle for 20 minutes. The maximum amount of usage time per day is 60 minutes. These limits are subject to change in the future.
+
+To save your work permanently, save your notebooks back to the GitHub repository. In this section, you'll persist your changes from the temporary workspace to GitHub as a new commit.
+
+1. Select **Save** to create a commit for your change to the notebook.
+
+ :::image type="content" source="media/tutorial-import-notebooks/save-option.png" alt-text="Screenshot of the 'Save' option in the Data Explorer menu.":::
+
+1. In the **Save** dialog, add a descriptive commit message.
+
+ :::image type="content" source="media/tutorial-import-notebooks/commit-message-dialog.png" alt-text="Screenshot of the 'Save' dialog with an example of a commit message.":::
+
+1. In your browser, navigate to the GitHub repository you created earlier. The new commit should now be visible in the online repository.
+
+ :::image type="content" source="media/tutorial-import-notebooks/updated-github-repository.png" alt-text="Screenshot of the updated notebook on the GitHub website.":::
+
+## Next steps
+
+- [Learn about the Jupyter Notebooks feature in Azure Cosmos DB](../notebooks-overview.md)
+- [Create your first notebook in an Azure Cosmos DB for NoSQL account](tutorial-create-notebook.md)
+- [Review the FAQ on Jupyter Notebook support](../notebooks-faq.yml)
cosmos-db Notebooks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/notebooks-overview.md
You can import the data from Azure Cosmos containers or the results of queries i
To get started with built-in Jupyter Notebooks in Azure Cosmos DB, see the following articles: - [Create your first notebook in an Azure Cosmos DB for NoSQL account](nosql/tutorial-create-notebook.md)
+- [Import notebooks from GitHub into an Azure Cosmos DB for NoSQL account](nosql/tutorial-import-notebooks.md)
- [Review the FAQ on Jupyter Notebook support](notebooks-faq.yml)
cosmos-db Concepts Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-nodes.md
Previously updated : 07/28/2019 Last updated : 10/26/2022 # Nodes and tables in Azure Cosmos DB for PostgreSQL
WHERE shardid = 102027;
## Next steps - [Determine your application's type](howto-app-type.md) to prepare for data modeling
+- Inspect shards and placements with [useful diagnostic queries](howto-useful-diagnostic-queries.md).
cosmos-db Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-extensions.md
Previously updated : 10/19/2022 Last updated : 10/26/2022 # PostgreSQL extensions in Azure Cosmos DB for PostgreSQL
The versions of each extension installed in a cluster sometimes differ based on
> [!div class="mx-tableFixed"] > | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | > ||||||
-> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
+> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 |
> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 | > | postgis\_sfcgal | PostGIS SFCGAL functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 | > | postgis\_topology | PostGIS topology spatial types and functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 |
cost-management-billing Discount Sql Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/discount-sql-edge.md
+
+ Title: Understand reservations discount for Azure SQL Edge
+description: Learn how a reservation discount is applied to Azure SQL Edge.
+++++ Last updated : 10/26/2022+++
+# How a reservation discount is applied to Azure SQL Edge
+
+After you buy Azure SQL Edge reserved capacity, the reservation discount is automatically applied to SQL Edge deployed to edge devices that match the attributes and quantity of the reservation. A reservation applies to the future use of Azure SQL Edge deployments. You're charged for software, storage, and networking at the normal rates.
+
+## How reservation discount is applied
+
+A reservation discount is "_use-it-or-lose-it_". So, if you don't have matching resources for any month, then you lose a reservation quantity for that month. You can't carry forward unused reserved months.
+
+When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved months are _lost_.
+
+Stopped resources are billed and continue to consume reserved months. Deallocate or delete resources, or scale in other resources, to use your available reserved months with other workloads.
+
+## Discount applied to deployed devices
+
+The reserved capacity discount is applied to deployed devices monthly. The reservation that you buy is matched to the usage emitted by the deployed device. For devices that don't run the full month, the reservation is automatically applied to other deployed devices matching the reservation attributes. The discount can apply to deployed devices that are running concurrently. If you don't have deployed devices that run for the full month that match the reservation attributes, you don't get the full benefit of the reservation discount for that month.
+
+If your number of devices deployed exceeds your reservation quantity, then you're charged the non-discounted cost for the number beyond the reservation quantity.
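As a rough illustration of that split (not an official pricing calculation; the quantities and rate below are hypothetical), the portion of usage covered by the reservation versus billed at the normal rate can be reasoned about like this:

```python
# Hypothetical example of how monthly usage splits between reserved and non-reserved SQL Edge deployments.
reserved_quantity = 10      # deployments covered by the reservation
deployed_devices = 14       # deployments that ran for the full month
normal_monthly_rate = 10.0  # hypothetical non-discounted price per deployment per month

covered = min(deployed_devices, reserved_quantity)      # receives the reservation discount
overage = max(deployed_devices - reserved_quantity, 0)  # billed at the normal, non-discounted rate
print(f"{covered} covered by the reservation; {overage} billed at {normal_monthly_rate:.2f}/month")
```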
+
+To understand and view the application of your Azure Reservations in billing usage reports, see [Understand Azure reservation usage](understand-reserved-instance-usage-ea.md).
+
+## Need help? Contact us
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+To learn more about Azure Reservations, see the following articles:
+
+- [What are Azure Reservations?](save-compute-costs-reservations.md)
+- [Manage Azure Reservations](manage-reserved-vm-instance.md)
+- [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md)
+- [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
+- [Understand reservation usage for CSP subscriptions](/partner-center/azure-reservations)
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
You can purchase reservations from Azure portal, APIs, PowerShell, CLI. Read the
- [Azure Files](../../storage/files/files-reserve-capacity.md?toc=/azure/cost-management-billing/reservations/toc.json) - [Azure VMware Solution](../../azure-vmware/reserved-instance.md?toc=/azure/cost-management-billing/reservations/toc.json) - [Azure Cosmos DB](../../cosmos-db/cosmos-db-reserved-capacity.md?toc=/azure/cost-management-billing/reservations/toc.json)
+- [Azure SQL Edge](prepay-sql-edge.md)
- [Databricks](prepay-databricks-reserved-capacity.md) - [Data Explorer](/azure/data-explorer/pricing-reserved-capacity?toc=/azure/cost-management-billing/reservations/toc.json) - [Dedicated Host](../../virtual-machines/prepay-dedicated-hosts-reserved-instances.md)
cost-management-billing Prepay Sql Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-sql-edge.md
+
+ Title: Prepay for Azure SQL Edge reservations
+description: Learn how you can prepay for Azure SQL Edge to save money over your pay-as-you-go costs.
+++++ Last updated : 10/26/2022+++
+# Prepay for Azure SQL Edge reservations
+
+When you prepay for your SQL Edge reserved capacity, you can save money over your pay-as-you-go costs. With reserved capacity, you make a commitment for SQL Edge device use for a period of one or three years to get a significant discount on usage costs. The discounts only apply to SQL Edge deployed devices and not on other software or other container usage. The reservation discount is applied automatically to the deployed devices in the selected reservation scope. Because of this automatic application, you don't need to assign a reservation to a specific deployed device to get the discounts.
+
+You can buy SQL Edge reserved capacity from the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](prepare-buy-reservation.md). To buy reserved capacity:
+
+- You must be in the Owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.
+- For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin on the subscription.
+- For the Cloud Solution Provider (CSP) program, only admin agents or sales agents can buy SQL Edge reserved capacity.
+
+## Buy a software plan
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Select **All services** > **Reservations**.
+3. Select **Add** and then in the **Purchase Reservations** pane, select **Azure SQL Edge**.
+4. Select a scope. The reservation's scope can cover one subscription or multiple subscriptions (shared scope):
+ - **Shared** - The reservation discount is applied to SQL Edge devices running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.
+ - **Management Group** - The reservation discount is applied to the matching resource in the list of subscriptions that are a part of both the management group and billing scope.
+ - **Single subscription** - The reservation discount is applied to the SQL Edge devices in this subscription.
+ - **Single resource group** - The reservation discount is applied to the SQL Edge devices in the selected subscription and the selected resource group within that subscription.
+5. Select a **Subscription**. The subscription used to pay for the capacity reservation. The subscription payment method is charged the upfront costs for the reservation.
+ - For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage.
+ - For an individual subscription with pay-as-you-go pricing, the charges are billed to the subscription's credit card or invoice payment method.
+6. Select a **Region**. The Azure region that's covered by the capacity reservation.
+7. Select the **Billing frequency**. It indicates how often the account is billed for the reservation. Options include _Monthly_ or _Upfront_.
+8. Select a **Term**. One year or three years.
+9. Add the product to the cart.
+10. Choose a quantity, which is the number of prepaid SQL Edge deployments that can get the billing discount.
+11. Review your selections and purchase.
+
+## Cancel, exchange, or refund reservations
+
+You can cancel, exchange, or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md).
+
+## Need help? Contact us.
+
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+To learn how to manage a reservation, see [Manage Azure reservations](manage-reserved-vm-instance.md).
+
+To learn more, see the following articles:
+
+- [What are Azure Reservations?](save-compute-costs-reservations.md)
+- [Manage Reservations in Azure](manage-reserved-vm-instance.md)
+- [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md)
+- [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
cost-management-billing Reservation Discount Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-discount-application.md
Read the following articles that apply to you to learn how discounts apply to a
- [App Service](reservation-discount-app-service.md) - [Azure Cache for Redis](understand-azure-cache-for-redis-reservation-charges.md) - [Azure Cosmos DB](understand-cosmosdb-reservation-charges.md)
+- [Azure SQL Edge](discount-sql-edge.md)
- [Database for MariaDB](understand-reservation-charges-mariadb.md) - [Database for MySQL](understand-reservation-charges-mysql.md) - [Database for PostgreSQL](understand-reservation-charges-postgresql.md)
data-factory Concepts Data Flow Performance Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-pipelines.md
Previously updated : 09/29/2021 Last updated : 10/26/2022
-# Using data flows in pipelines
+# Using data flows in pipelines
When building complex pipelines with multiple data flows, your logical flow can have a big impact on timing and cost. This section covers the impact of different architecture strategies.
Data flows allow you to group sinks together into groups from the data flow prop
On the pipeline's execute data flow activity, under the "Sink Properties" section, there's an option to turn on parallel sink loading. When you enable "run in parallel", you're instructing data flows to write to connected sinks at the same time rather than sequentially. To use the parallel option, the sinks must be grouped together and connected to the same stream via a New Branch or Conditional Split.
+## Access Azure Synapse database templates in pipelines
+
+You can use an [Azure Synapse database template](../synapse-analytics/database-designer/overview-database-templates.md) when creating a pipeline. When you create a new data flow, in the source or sink settings, select **Workspace DB**. The database dropdown lists the databases created through the database template. The Workspace DB option is only available for new data flows; it isn't available when you use an existing pipeline from the Synapse studio gallery.
+ ## Next steps - [Data flow performance overview](concepts-data-flow-performance.md)
data-factory Concepts Nested Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-nested-activities.md
Previously updated : 06/30/2021 Last updated : 10/24/2022 # Nested activities in Azure Data Factory and Azure Synapse Analytics
An example of this pattern would be if you had a file system that had a list of
5. In the child pipeline, you could then use another nested activity (such as ForEach) with the passed in array list to iterate over the files and perform one or more sets of inner activities. The parent pipeline would look similar to the below example.+ [ ![Screenshot showing an example parent pipeline calling a child pipeline in a ForEach loop.](media/concepts-pipelines-activities/nested-activity-execute-pipeline.png) ](media/concepts-pipelines-activities/nested-activity-execute-pipeline.png#lightbox) The child pipeline would look similar to the below example.+ :::image type="content" source="media/concepts-pipelines-activities/nested-activity-execute-child-pipeline.png" alt-text="Screenshot showing an example child pipeline with a ForEach loop."::: ## Next steps
The child pipeline would look similar to the below example.
See the following tutorials for step-by-step instructions for creating pipelines and datasets. - [Tutorial: Copy multiple tables in bulk by using Azure Data Factory in the Azure portal](tutorial-bulk-copy-portal.md)-- [Tutorial: Incrementally load data from a source data store to a destination data store](tutorial-incremental-copy-overview.md)
+- [Tutorial: Incrementally load data from a source data store to a destination data store](tutorial-incremental-copy-overview.md)
data-factory Connector Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-http.md
Previously updated : 09/09/2021 Last updated : 10/26/2022 # Copy data from an HTTP endpoint by using Azure Data Factory or Azure Synapse Analytics
data-factory Control Flow For Each Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-for-each-activity.md
Previously updated : 09/09/2021 Last updated : 10/26/2022 # ForEach activity in Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow If Condition Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-if-condition-activity.md
Previously updated : 09/09/2021 Last updated : 10/26/2022
data-factory Control Flow Until Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-until-activity.md
# Until activity in Azure Data Factory and Synapse Analytics [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-The Until activity provides the same functionality that a do-until looping structure provides in programming languages. It executes a set of activities in a loop until the condition associated with the activity evaluates to true. If a inner activity fails the Until activity does not stop. You can specify a timeout value for the until activity.
+The Until activity provides the same functionality that a do-until looping structure provides in programming languages. It executes a set of activities in a loop until the condition associated with the activity evaluates to true. If an inner activity fails, the Until activity does not stop. You can specify a timeout value for the until activity.
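As a control-flow analogy only (this is not pipeline JSON or an ADF API), the semantics resemble a do-until loop: the inner activities always run at least once, the condition is evaluated after each pass, and an overall timeout bounds the loop. A minimal Python sketch with hypothetical `do_work` and `is_done` callables:

```python
import time

def run_until(do_work, is_done, timeout_seconds=3600):
    """Run do_work repeatedly until is_done() returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout_seconds
    while True:
        do_work()                        # the inner activities execute at least once
        if is_done():                    # the condition is checked after each iteration
            break
        if time.monotonic() >= deadline:
            raise TimeoutError("Until loop timed out before the condition became true")
```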
## Create an Until activity with UI
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-sink.md
Previously updated : 09/01/2022 Last updated : 10/26/2022 # Sink transformation in mapping data flow
To use an inline dataset, select the format you want in the **Sink type** select
## Workspace DB (Synapse workspaces only)
-When using data flows in Azure Synapse workspaces, you will have an additional option to sink your data directly into a database type that is inside your Synapse workspace. This will alleviate the need to add linked services or datasets for those databases.
+When using data flows in Azure Synapse workspaces, you will have an additional option to sink your data directly into a database type that is inside your Synapse workspace. This will alleviate the need to add linked services or datasets for those databases. The databases created through the [Azure Synapse database templates](../synapse-analytics/database-designer/overview-database-templates.md) are also accessible when you select Workspace DB.
> [!NOTE] > The Azure Synapse Workspace DB connector is currently in public preview and can only work with Spark Lake databases at this time
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
Previously updated : 08/23/2022 Last updated : 10/26/2022 # Source transformation in mapping data flow
Because an inline dataset is defined inside the data flow, there is not a define
## Workspace DB (Synapse workspaces only)
-In Azure Synapse workspaces, an additional option is present in data flow source transformations called ```Workspace DB```. This will allow you to directly pick a workspace database of any available type as your source data without requiring additional linked services or datasets.
+In Azure Synapse workspaces, an additional option is present in data flow source transformations called ```Workspace DB```. This will allow you to directly pick a workspace database of any available type as your source data without requiring additional linked services or datasets. The databases created through the [Azure Synapse database templates](../synapse-analytics/database-designer/overview-database-templates.md) are also accessible when you select Workspace DB.
:::image type="content" source="media/data-flow/syms-source.png" alt-text="Screenshot that shows workspacedb selected.":::
data-factory Tutorial Hybrid Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-hybrid-copy-powershell.md
Install the latest version of Azure PowerShell if you don't already have it on y
```powershell Connect-AzAccount
- ```
+ ```
1. If you have multiple Azure subscriptions, run the following command to select the subscription that you want to work with. Replace **SubscriptionId** with the ID of your Azure subscription:
Install the latest version of Azure PowerShell if you don't already have it on y
1. To create the data factory, run the following `Set-AzDataFactoryV2` cmdlet:
- ```powershell
+ ```powershell
Set-AzDataFactoryV2 -ResourceGroupName $resourceGroupName -Location $location -Name $dataFactoryName ```
In this step, you link your Azure storage account to the data factory.
> [!IMPORTANT] > Before you save the file, replace \<accountName> and \<accountKey> with the name and key of your Azure storage account. You noted them in the [Prerequisites](#get-storage-account-name-and-account-key) section.
- ```json
+ ```json
{ "name": "AzureStorageLinkedService", "properties": {
In this step, you link your Azure storage account to the data factory.
} } }
- ```
+ ```
1. In PowerShell, switch to the *C:\ADFv2Tutorial* folder.+ ```powershell
- Set-Location 'C:\ADFv2Tutorial'
+ Set-Location 'C:\ADFv2Tutorial'
``` 1. To create the linked service, AzureStorageLinkedService, run the following `Set-AzDataFactoryV2LinkedService` cmdlet:
In this step, you link your SQL Server instance to the data factory.
} } }
- ```
+ ```
**Using Windows authentication:**
In this step, you create input and output datasets. They represent input and out
### Create a dataset for the source SQL Server database In this step, you define a dataset that represents data in the SQL Server database instance. The dataset is of type SqlServerTable. It refers to the SQL Server linked service that you created in the preceding step. The linked service has the connection information that the Data Factory service uses to connect to your SQL Server instance at runtime. This dataset specifies the SQL table in the database that contains the data. In this tutorial, the **emp** table contains the source data.
-1. Create a JSON file named *SqlServerDataset.json* in the *C:\ADFv2Tutorial* folder, with the following code:
+1. Create a JSON file named *SqlServerDataset.json* in the *C:\ADFv2Tutorial* folder, with the following code:
+ ```json { "name":"SqlServerDataset",
For a list of data stores that are supported by Data Factory, see [supported dat
To learn about copying data in bulk from a source to a destination, advance to the following tutorial: > [!div class="nextstepaction"]
->[Copy data in bulk](tutorial-bulk-copy.md)
+>[Copy data in bulk](tutorial-bulk-copy.md)
data-factory Data Factory Azure Ml Batch Execution Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-ml-batch-execution-activity.md
We recommend that you go through the [Build your first pipeline with Data Factor
} ```
- Both **start** and **end** datetimes must be in [ISO format](https://en.wikipedia.org/wiki/ISO_8601). For example: 2014-10-14T16:32:41Z. The **end** time is optional. If you do not specify value for the **end** property, it is calculated as "**start + 48 hours.**" To run the pipeline indefinitely, specify **9999-09-09** as the value for the **end** property. See [JSON Scripting Reference](/previous-versions/azure/dn835050(v=azure.100)) for details about JSON properties.
+ Both **start** and **end** datetime values must be in [ISO format](https://en.wikipedia.org/wiki/ISO_8601), such as `2014-10-14T16:32:41Z`. The **end** time is optional. If you do not specify value for the **end** property, it is calculated as "**start + 48 hours.**" To run the pipeline indefinitely, specify **9999-09-09** as the value for the **end** property. See [JSON Scripting Reference](/previous-versions/azure/dn835050(v=azure.100)) for details about JSON properties.
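If you generate these values programmatically, any ISO 8601 formatter works; for example, a small Python sketch that reproduces the timestamp shown above:

```python
from datetime import datetime, timezone

# Build an ISO 8601 UTC timestamp suitable for the pipeline's start/end properties.
start = datetime(2014, 10, 14, 16, 32, 41, tzinfo=timezone.utc)
print(start.strftime("%Y-%m-%dT%H:%M:%SZ"))  # 2014-10-14T16:32:41Z
```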
> [!NOTE] > Specifying input for the AzureMLBatchExecution activity is optional.
- >
- >
### Scenario: Experiments using Reader/Writer Modules to refer to data in various storages Another common scenario when creating Studio (classic) experiments is to use Reader and Writer modules. The reader module is used to load data into an experiment and the writer module is to save data from your experiments. For details about reader and writer modules, see [Reader](/azure/machine-learning/studio-module-reference/import-data) and [Writer](/azure/machine-learning/studio-module-reference/export-data) topics on MSDN Library.
-When using the reader and writer modules, it is good practice to use a Web service parameter for each property of these reader/writer modules. These web parameters enable you to configure the values during runtime. For example, you could create an experiment with a reader module that uses an Azure SQL Database: XXX.database.windows.net. After the web service has been deployed, you want to enable the consumers of the web service to specify another logical SQL server called YYY.database.windows.net. You can use a Web service parameter to allow this value to be configured.
+When using the reader and writer modules, it's good practice to use a Web service parameter for each property of these reader/writer modules. These web parameters enable you to configure the values during runtime. For example, you could create an experiment with a reader module that uses an Azure SQL Database instance: `XXX.database.windows.net`. After the web service has been deployed, you want to enable the consumers of the web service to specify another logical SQL Server instance called `YYY.database.windows.net`. You can use a Web service parameter to allow this value to be configured.
> [!NOTE] > Web service input and output are different from Web service parameters. In the first scenario, you have seen how an input and output can be specified for a Studio (classic) Web service. In this scenario, you pass parameters for a Web service that correspond to properties of reader/writer modules.
->
->
Let's look at a scenario for using Web service parameters. You have a deployed Studio (classic) web service that uses a reader module to read data from one of the data sources supported by Studio (classic) (for example: Azure SQL Database). After the batch execution is performed, the results are written using a Writer module (Azure SQL Database). No web service inputs and outputs are defined in the experiments. In this case, we recommend that you configure relevant web service parameters for the reader and writer modules. This configuration allows the reader/writer modules to be configured when using the AzureMLBatchExecution activity. You specify Web service parameters in the **globalParameters** section in the activity JSON as follows.
You can also use [Data Factory Functions](data-factory-functions-variables.md) i
> [!NOTE] > The Web service parameters are case-sensitive, so ensure that the names you specify in the activity JSON match the ones exposed by the Web service.
->
->
### Using a Reader module to read data from multiple files in Azure Blob Big data pipelines with activities such as Pig and Hive can produce one or more output files with no extensions. For example, when you specify an external Hive table, the data for the external Hive table can be stored in Azure blob storage with the following name 000000_0. You can use the reader module in an experiment to read multiple files, and use them for predictions.
When using the reader module in a Studio (classic) experiment, you can specify A
### Example #### Pipeline with AzureMLBatchExecution activity with Web Service Parameters
-```JSON
+```json
{ "name": "MLWithSqlReaderSqlWriter", "properties": {
If the web service takes multiple inputs, use the **webServiceInputs** property
In your ML Studio (classic) experiment, web service input and output ports and global parameters have default names ("input1", "input2") that you can customize. The names you use for webServiceInputs, webServiceOutputs, and globalParameters settings must exactly match the names in the experiments. You can view the sample request payload on the Batch Execution Help page for your Studio (classic) endpoint to verify the expected mapping.
-```JSON
+```json
{ "name": "PredictivePipeline", "properties": {
In your ML Studio (classic) experiment, web service input and output ports and g
#### Web Service does not require an input ML Studio (classic) batch execution web services can be used to run any workflows, for example R or Python scripts, that may not require any inputs. Or, the experiment might be configured with a Reader module that does not expose any GlobalParameters. In that case, the AzureMLBatchExecution Activity would be configured as follows:
-```JSON
+```json
{ "name": "scoring service", "type": "AzureMLBatchExecution",
ML Studio (classic) batch execution web services can be used to run any workflow
#### Web Service does not require an input/output The ML Studio (classic) batch execution web service might not have any Web Service output configured. In this example, there is no Web Service input or output, nor are any GlobalParameters configured. There is still an output configured on the activity itself, but it is not given as a webServiceOutput.
-```JSON
+```json
{ "name": "retraining", "type": "AzureMLBatchExecution",
The ML Studio (classic) batch execution web service might not have any Web Servi
#### Web Service uses readers and writers, and the activity runs only when other activities have succeeded The ML Studio (classic) web service reader and writer modules might be configured to run with or without any GlobalParameters. However, you may want to embed service calls in a pipeline that uses dataset dependencies to invoke the service only when some upstream processing has completed. You can also trigger some other action after the batch execution has completed using this approach. In that case, you can express the dependencies using activity inputs and outputs, without naming any of them as Web Service inputs or outputs.
-```JSON
+```json
{ "name": "retraining", "type": "AzureMLBatchExecution",
If you want to continue using the AzureMLBatchScoring activity, continue reading
### ML Studio (classic) Batch Scoring activity using Azure Storage for input/output
-```JSON
+```json
{ "name": "PredictivePipeline", "properties": {
If you want to continue using the AzureMLBatchScoring activity, continue reading
### Web Service Parameters To specify values for Web service parameters, add a **typeProperties** section to the **AzureMLBatchScoringActivity** section in the pipeline JSON as shown in the following example:
-```JSON
+```json
"typeProperties": { "webServiceParameters": { "Param 1": "Value 1",
To specify values for Web service parameters, add a **typeProperties** section t
} } ```+ You can also use [Data Factory Functions](data-factory-functions-variables.md) in passing values for the Web service parameters as shown in the following example:
-```JSON
+```json
"typeProperties": { "webServiceParameters": { "Database query": "$$Text.Format('SELECT * FROM myTable WHERE timeColumn = \\'{0:yyyy-MM-dd HH:mm:ss}\\'', Time.AddHours(WindowStart, 0))"
You can also use [Data Factory Functions](data-factory-functions-variables.md) i
> [!NOTE] > The Web service parameters are case-sensitive, so ensure that the names you specify in the activity JSON match the ones exposed by the Web service.
->
->
-## See Also
+## See also
* [Azure blog post: Getting started with Azure Data Factory and ML Studio (classic)](https://azure.microsoft.com/blog/getting-started-with-azure-data-factory-and-azure-machine-learning-4/) [adf-build-1st-pipeline]: data-factory-build-your-first-pipeline.md
-[azure-machine-learning]: https://azure.microsoft.com/services/machine-learning/
+[azure-machine-learning]: https://azure.microsoft.com/services/machine-learning/
data-share Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/overview.md
Title: What is Azure Data Share? description: Learn about sharing data simply and securely to multiple customers and partners using Azure Data Share.--++ Previously updated : 02/07/2022 Last updated : 10/26/2022 # What is Azure Data Share?
-In today's world, data is viewed as a key strategic asset that many organizations need to simply and securely share with their customers and partners. There are many ways that customers do this today, including through FTP, e-mail, APIs to name a few. Organizations can easily lose track of who they've shared their data with. Sharing data through FTP or through standing up their own API infrastructure is often expensive to provision and administer. There's management overhead associated with using these methods of sharing on a large scale.
+Azure Data Share enables organizations to securely share data with multiple customers and partners. Data providers are always in control of the data that they've shared and Azure Data Share makes it simple to manage and monitor what data was shared, when and by whom.
-Many organizations need to be accountable for the data that they've shared. In addition to accountability, many organizations would like to be able to control, manage, and monitor all of their data sharing in a simple way. In today's world, where data is expected to continue to grow at an exponential pace, organizations need a simple way to share big data. Customers demand the most up-to-date data to ensure that they're able to derive timely insights.
+In today's world, data is viewed as a key strategic asset that many organizations need to simply and securely share with their customers and partners. There are many ways that customers do this today, including FTP, e-mail, and APIs, to name a few. Organizations can easily lose track of who they've shared their data with. Sharing data through FTP or through standing up their own API infrastructure is often expensive to provision and administer, and there's management overhead associated with using these methods of sharing on a large scale. In addition to accountability, many organizations would like to be able to control, manage, and monitor all of their data sharing in a simple way that stays up to date, so they can derive timely insights.
-Azure Data Share enables organizations to simply and securely share data with multiple customers and partners. You can provision a new data share account, add datasets, and invite your customers and partners to your data share. Data providers are always in control of the data that they've shared. Azure Data Share makes it simple to manage and monitor what data was shared, when and by whom.
-
-A data provider can stay in control of how their data is handled by specifying terms of use for their data share. The data consumer must accept these terms before being able to receive the data. Data providers can specify the frequency at which their data consumers receive updates. Access to new updates can be revoked at any time by the data provider.
+Using Data Share, a data provider can share data and manage their shares all in one place. They can stay in control of how their data is handled by specifying terms of use for their data share. The data consumer must accept these terms before being able to receive the data. Data providers can specify the frequency at which their data consumers receive updates. Access to new updates can be revoked at any time by the data provider.
Azure Data Share helps enhance insights by making it easy to combine data from third parties to enrich analytics and AI scenarios. Easily use the power of Azure analytics tools to prepare, process, and analyze data shared with Azure Data Share.
Both the data provider and data consumer must have an Azure subscription to shar
## Scenarios for Azure Data Share
-Azure Data Share can be used in many different industries. For example, a retailer may want to share recent point of sales data with their suppliers. Using Azure Data Share, a retailer can set up a data share containing point of sales data for all of their suppliers and share sales on an hourly or daily basis.
+Azure Data Share can be used in many different industries. For example, a retailer may want to share recent point of sales data with their suppliers. Using Azure Data Share, a retailer can set up a data share containing point of sales data for all of their suppliers and share sales on an hourly or daily basis.
Azure Data Share can also be used to establish a data marketplace for a specific industry. For example, a government or a research institution that regularly shares anonymized data about population growth with third parties.
Another use case for Azure Data Share is establishing a data consortium. For exa
## How it works
-Azure Data Share currently offers snapshot-based sharing and in-place sharing.
+Azure Data Share currently offers [snapshot-based sharing](#snapshot-based-sharing) and [in-place sharing](#in-place-sharing).
+
+![data share flow](media/data-share-flow.png)
+
+### Snapshot-based sharing
In snapshot-based sharing, data moves from the data provider's Azure subscription and lands in the data consumer's Azure subscription. As a data provider, you provision a data share and invite recipients to the data share. Data consumers receive an invitation to your data share via e-mail. Once a data consumer accepts the invitation, they can trigger a full snapshot of the data shared with them. This data is received into the data consumer's storage account. Data providers can offer their data consumers regular, incremental updates to the shared data through a snapshot schedule, available on an hourly or a daily basis, so that consumers always have the latest version of the data. When a data consumer accepts and configures their data share, they can subscribe to a snapshot schedule. This is beneficial in scenarios where the shared data is updated regularly, and the data consumer needs the most up-to-date data.
-![data share flow](media/data-share-flow.png)
- When a data consumer accepts a data share, they're able to receive the data in a data store of their choice. For example, if the data provider shares data using Azure Blob Storage, the data consumer can receive this data in Azure Data Lake Store. Similarly, if the data provider shares data from an Azure Synapse Analytics, the data consumer can choose whether they want to receive the data into an Azure Data Lake Store, an Azure SQL Database or an Azure Synapse Analytics. If sharing from SQL-based sources, the data consumer can also choose whether they receive data in parquet or csv.
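Once a snapshot has been received and the files are available to the consumer (for example, downloaded locally from the target storage account), working with them is ordinary file handling. A minimal pandas sketch; the file names and paths are hypothetical, and reading Parquet requires `pyarrow` or `fastparquet`:

```python
import pandas as pd

# Hypothetical local copies of files received through a snapshot-based share.
sales_csv = pd.read_csv("received/sales_snapshot.csv")
sales_parquet = pd.read_parquet("received/sales_snapshot.parquet")

print(sales_csv.head())
print(sales_parquet.dtypes)
```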
+### In-place sharing
+ With in-place sharing, data providers can share data where it resides without copying the data. After a sharing relationship is established through the invitation flow, a symbolic link is created between the data provider's source data store and the data consumer's target data store. The data consumer can read and query the data in real time using their own data store. Changes to the source data store are available to the data consumer immediately. In-place sharing is currently available for Azure Data Explorer. ## Key capabilities
data-share Share Your Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/share-your-data-portal.md
Create an Azure Data Share resource in an Azure resource group.
1. Select the dataset type that you would like to add. You will see a different list of dataset types depending on the share type (snapshot or in-place) you have selected in the previous step.
- ![AddDatasets](./media/add-datasets-updated.png "Add Datasets")
+ ![AddDatasets](./media/add-datasets.png "Add Datasets")
1. Navigate to the object you would like to share and select 'Add Datasets'.
data-share Share Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/share-your-data.md
Title: 'Tutorial: Share outside your org - Azure Data Share' description: Tutorial - Share data with customers and partners using Azure Data Share --++ Previously updated : 11/12/2021 Last updated : 10/26/2022 # Tutorial: Share data using Azure Data Share
-In this tutorial, you will learn how to set up a new Azure Data Share and start sharing your data with customers and partners outside of your Azure organization.
+In this tutorial, you'll learn how to set up a new Azure Data Share and start sharing your data with customers and partners outside of your Azure organization.
In this tutorial, you'll learn how to:
In this tutorial, you'll learn how to:
* Azure Subscription: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. * Your recipient's Azure e-mail address (using their e-mail alias won't work).
-* If the source Azure data store is in a different Azure subscription than the one you will use to create Data Share resource, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where the Azure data store is located.
+* If the source Azure data store is in a different Azure subscription than the one you'll use to create Data Share resource, register the [Microsoft.DataShare resource provider](concepts-roles-permissions.md#resource-provider-registration) in the subscription where the Azure data store is located.
### Share from a storage account
Below is the list of prerequisites for sharing data from SQL source.
* SQL Server Firewall access. This can be done through the following steps: 1. In Azure portal, navigate to SQL server. Select *Firewalls and virtual networks* from left navigation. 1. Select **Yes** for *Allow Azure services and resources to access this server*.
- 1. Select **+Add client IP**. Client IP address is subject to change. This process might need to be repeated the next time you are sharing SQL data from Azure portal. You can also add an IP range.
+ 1. Select **+Add client IP**. Client IP address is subject to change. This process might need to be repeated the next time you're sharing SQL data from Azure portal. You can also add an IP range.
1. Select **Save**. #### Prerequisites for sharing from Azure Synapse Analytics (workspace) SQL pool
-* * An Azure Synapse Analytics (workspace) dedicated SQL pool with tables that you want to share. Sharing of view is not currently supported. Sharing from serverless SQL pool is not currently supported.
+* An Azure Synapse Analytics (workspace) dedicated SQL pool with tables that you want to share. Sharing of views isn't currently supported. Sharing from serverless SQL pool isn't currently supported.
* Permission to write to the SQL pool in Synapse workspace, which is present in *Microsoft.Synapse/workspaces/sqlPools/write*. This permission exists in the **Contributor** role. * Permission for the Data Share resource's managed identity to access Synapse workspace SQL pool. This can be done through the following steps: 1. In Azure portal, navigate to Synapse workspace. Select SQL Active Directory admin from left navigation and set yourself as the **Azure Active Directory admin**.
Below is the list of prerequisites for sharing data from SQL source.
create user "<share_acct_name>" from external provider; exec sp_addrolemember db_datareader, "<share_acct_name>"; ```
- The *<share_acc_name>* is the name of your Data Share resource. If you have not created a Data Share resource as yet, you can come back to this pre-requisite later.
+ The *<share_acct_name>* is the name of your Data Share resource. If you haven't created a Data Share resource yet, you can come back to this prerequisite later.
* Synapse workspace Firewall access. This can be done through the following steps: 1. In Azure portal, navigate to Synapse workspace. Select *Firewalls* from left navigation. 1. Select **ON** for *Allow Azure services and resources to access this workspace*.
- 1. Select **+Add client IP**. Client IP address is subject to change. This process might need to be repeated the next time you are sharing SQL data from Azure portal. You can also add an IP range.
+ 1. Select **+Add client IP**. Client IP address is subject to change. This process might need to be repeated the next time you're sharing SQL data from Azure portal. You can also add an IP range.
1. Select **Save**.
Create an Azure Data Share resource in an Azure resource group.
| Name | *datashareaccount* | Specify a name for your data share account. | | | |
-1. Select **Review + create**, then **Create** to provision your data share account. Provisioning a new data share account typically takes about 2 minutes or less.
+1. Select **Review + create**, then **Create** to create your data share account. Creating a new data share account typically takes about 2 minutes or less.
1. When the deployment is complete, select **Go to resource**.
Use these commands to create the resource:
1. Navigate to your Data Share Overview page.
- ![Share your data](./media/share-receive-data.png "Share your data")
+ :::image type="content" source="./media/share-receive-data.png" alt-text="Screenshot of the Azure Data Share overview page in the Azure portal.":::
1. Select **Start sharing your data**.
-1. Select **Create**.
+1. Select **Create**.
1. Fill out the details for your share. Specify a name, share type, description of share contents, and terms of use (optional).
- ![EnterShareDetails](./media/enter-share-details.png "Enter Share details")
+ :::image type="content" source="./media/enter-share-details.png " alt-text="Screenshot of the share creation page in Azure Data Share, showing the share name, type, description, and terms of used filled out.":::
1. Select **Continue**.
-1. To add Datasets to your share, select **Add Datasets**.
+1. To add Datasets to your share, select **Add Datasets**.
- ![Add Datasets to your share](./media/datasets.png "Datasets")
+ :::image type="content" source="./media/datasets.png" alt-text="Screenshot of the datasets page in share creation, the add datasets button is highlighted.":::
-1. Select the dataset type that you would like to add. You will see a different list of dataset types depending on the share type (snapshot or in-place) you have selected in the previous step. If sharing from an Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW), you will be prompted for authentication method to list tables. Select AAD authentication, and check the checkbox **Allow Data Share to run the above 'create user' script on my behalf**.
+1. Select the dataset type that you would like to add. You'll see a different list of dataset types depending on the share type (snapshot or in-place) you've selected in the previous step. If sharing from an Azure SQL Database or Azure Synapse Analytics (formerly Azure SQL DW), you'll be prompted for an authentication method to list tables. Select Azure Active Directory authentication, and check the checkbox **Allow Data Share to run the above 'create user' script on my behalf**.
- ![AddDatasets](./media/add-datasets.png "Add Datasets")
+ :::image type="content" source="./media/add-datasets.png" alt-text="Screenshot showing the available dataset types.":::
-1. Navigate to the object you would like to share and select 'Add Datasets'.
+1. Navigate to the object you would like to share and select 'Add Datasets'.
- ![SelectDatasets](./media/select-datasets.png "Select Datasets")
+ :::image type="content" source="./media/select-datasets.png" alt-text="Screenshot of the select datasets page, showing a folder selected.":::
-1. In the Recipients tab, enter in the email addresses of your Data Consumer by selecting '+ Add Recipient'.
+1. In the Recipients tab, enter in the email addresses of your Data Consumer by selecting '+ Add Recipient'.
- ![AddRecipients](./media/add-recipient.png "Add recipients")
+ :::image type="content" source="./media/add-recipient.png" alt-text="Screenshot of the recipients page, showing a recipient added.":::
1. Select **Continue**.
-1. If you have selected snapshot share type, you can configure snapshot schedule to provide updates of your data to your data consumer.
+1. If you have selected snapshot share type, you can configure snapshot schedule to provide updates of your data to your data consumer.
- ![EnableSnapshots](./media/enable-snapshots.png "Enable snapshots")
+ :::image type="content" source="./media/enable-snapshots.png" alt-text="Screenshot of the settings page, showing the snapshot toggle enabled.":::
-1. Select a start time and recurrence interval.
+1. Select a start time and recurrence interval.
1. Select **Continue**.
Use these commands to create the resource:
```azurecli az datashare invitation create --resource-group testresourcegroup \ --name DataShareInvite --share-name ContosoMarketplaceDataShare \
- --account-name ContosoMarketplaceAccount --target-email "jacob@fabrikam"
+ --account-name ContosoMarketplaceAccount --target-email "jacob@fabrikam.com"
``` ### [PowerShell](#tab/powershell)
-1. If you do not already have data you would like to share, you can follow these steps to create a storage account. If you already have storage, you may skip to step 2.
+1. If you don't already have data you would like to share, you can follow these steps to create a storage account. If you already have storage, you may skip to step 2.
1. Run the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) command to create an Azure Storage account:
Use these commands to create the resource:
New-AzStorageContainer -Name $containerName -Context $ctx -Permission blob ```
- 1. Run the [Set-AzStorageBlobContent](/powershell/module/az.storage/new-azstoragecontainer) command to upload a file. The follow example uploads _textfile.csv_ from the _D:\testFiles_ folder on local memory, to the container you created.
+ 1. Run the [Set-AzStorageBlobContent](/powershell/module/az.storage/new-azstoragecontainer) command to upload a file. The following example uploads _textfile.csv_ from the _D:\testFiles_ folder on local memory, to the container you created.
```azurepowershell Set-AzStorageBlobContent -File "D:\testFiles\textfile.csv" -Container $containerName -Blob "textfile.csv" -Context $ctx
Use these commands to create the resource:
1. Run the [New-AzDataShare](/powershell/module/az.datashare/new-azdatashare) command to create your Data Share: ```azurepowershell
- New-AzDataShare -ResourceGroupName <String> -AccountName <String> -Name <String> -ShareKind "CopyBased" -Description <String> -TermsOfUse <String>
+ New-AzDataShare -ResourceGroupName <String> -AccountName <String> -Name <String> -Description <String> -TermsOfUse <String>
``` 1. Use the [New-AzDataShareInvitation](/powershell/module/az.datashare/get-azdatasharereceivedinvitation) command to create the invitation for the specified address:
databox-online Azure Stack Edge Gpu Deploy Arc Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-arc-data-controller.md
- Title: Deploy an Azure Arc Data Controller on your Azure Stack Edge Pro GPU device| Microsoft Docs
-description: Describes how to deploy an Azure Arc Data Controller and Azure Data Services on your Azure Stack Edge Pro GPU device.
------ Previously updated : 04/15/2021--
-# Deploy Azure Data Services on your Azure Stack Edge Pro GPU device
--
-This article describes the process of creating an Azure Arc Data Controller and then deploying Azure Data Services on your Azure Stack Edge Pro GPU device.
-
-Azure Arc Data Controller is the local control plane that enables Azure Data Services in customer-managed environments. Once you have created the Azure Arc Data Controller on the Kubernetes cluster that runs on your Azure Stack Edge Pro GPU device, you can deploy Azure Data Services such as SQL Managed Instance on that data controller.
-
-The procedure to create Data Controller and then deploy an SQL Managed Instance involves the use of PowerShell and `kubectl` - a native tool that provides command-line access to the Kubernetes cluster on the device.
--
-## Prerequisites
-
-Before you begin, make sure that:
-
-1. You've access to an Azure Stack Edge Pro GPU device and you've activated your device as described in [Activate Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-activate.md).
-
-1. You've enabled the compute role on the device. A Kubernetes cluster was also created on the device when you configured compute on the device as per the instructions in [Configure compute on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-configure-compute.md).
-
-1. You have the Kubernetes API endpoint from the **Device** page of your local web UI. For more information, see the instructions in [Get Kubernetes API endpoint](azure-stack-edge-gpu-deploy-configure-compute.md#get-kubernetes-endpoints).
-
-1. You've access to a client that will connect to your device.
- 1. This article uses a Windows client system running PowerShell 5.0 or later to access the device. You can use any other client with a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device).
- 1. Install `kubectl` on your client. For the client version:
- 1. Identify the Kubernetes server version installed on the device. In the local UI of the device, go to **Software updates** page. Note the **Kubernetes server version** in this page.
- 1. Download a client that is skewed no more than one minor version from the master. The client version but may lead the master by up to one minor version. For example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes, and should work with v1.2, v1.3, and v1.4 clients. For more information on Kubernetes client version, see [Kubernetes version and version skew support policy](https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-version-skew).
-
-1. Optionally, [Install client tools for deploying and managing Azure Arc-enabled data services](../azure-arc/dat). These tools are not required but recommended.
-1. Make sure you have enough resources available on your device to provision a data controller and one SQL Managed Instance. For data controller and one SQL Managed Instance, you will need a minimum of 16 GB of RAM and 4 CPU cores. For detailed guidance, go to [Minimum requirements for Azure Arc-enabled data services deployment](../azure-arc/dat#minimum-deployment-requirements).
--
-## Configure Kubernetes external service IPs
-
-1. Go the local web UI of the device and then go to **Compute**.
-1. Select the network enabled for compute.
-
- ![Compute page in local UI 2](./media/azure-stack-edge-gpu-deploy-arc-data-controller/compute-network-1.png)
-
-1. Make sure that you provide three additional Kubernetes external service IPs (in addition to the IPs you have already configured for other external services or containers). The data controller will use two service IPs and the third IP is used when you create a SQL Managed Instance. You will need one IP for each additional Data Service you will deploy.
-
- ![Compute page in local UI 3](./media/azure-stack-edge-gpu-deploy-arc-data-controller/compute-network-2.png)
-
-1. Apply the settings and these new IPs will immediately take effect on an already existing Kubernetes cluster.
--
-## Deploy Azure Arc Data Controller
-
-Before you deploy a data controller, you'll need to create a namespace.
-
-### Create namespace
-
-Create a new, dedicated namespace where you will deploy the Data Controller. You'll also create a user and then grant user the access to the namespace that you created.
-
-> [!NOTE]
-> For both namespace and user names, the [DNS subdomain naming conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names) apply.
-
-1. [Connect to the PowerShell interface](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
-1. Create a namespace. Type:
-
- `New-HcsKubernetesNamespace -Namespace <Name of namespace>`
-
-1. Create a user. Type:
-
- `New-HcsKubernetesUser -UserName <User name>`
-
-1. A config file is displayed in plain text. Copy this content and save it as a file named *config*.
-
- > [!IMPORTANT]
- > Do not save the config file as a *.txt* file; save the file without any file extension.
-
-1. The config file should live in the `.kube` folder of your user profile on the local machine. Copy the file to that folder in your user profile.
-
- ![Location of config file on client](media/azure-stack-edge-gpu-create-kubernetes-cluster/location-config-file.png)
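   For example, if you saved the file as `C:\temp\config` (an assumed path used here only for illustration), a minimal PowerShell sketch to place it in the default `kubectl` location is:

   ```powershell
   # Create the .kube folder in your user profile (if it doesn't already exist)
   # and copy the saved config file there. The source path is an assumed example.
   New-Item -ItemType Directory -Path "$HOME\.kube" -Force | Out-Null
   Copy-Item -Path "C:\temp\config" -Destination "$HOME\.kube\config"
   ```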
-1. Grant the user access to the namespace that you created. Type:
-
- `Grant-HcsKubernetesNamespaceAccess -Namespace <Name of namespace> -UserName <User name>`
-
- Here's a sample output of the preceding commands. In this example, we create a `myadstest` namespace and a `myadsuser` user, and then grant the user access to the namespace.
-
- ```powershell
- [10.100.10.10]: PS>New-HcsKubernetesNamespace -Namespace myadstest
- [10.100.10.10]: PS>New-HcsKubernetesUser -UserName myadsuser
- apiVersion: v1
- clusters:
- - cluster:
- certificate-authority-data: LS0tLS1CRUdJTiBD=======//snipped//=======VSVElGSUNBVEUtLS0tLQo=
- server: https://compute.myasegpudev.wdshcsso.com:6443
- name: kubernetes
- contexts:
- - context:
- cluster: kubernetes
- user: myadsuser
- name: myadsuser@kubernetes
- current-context: myadsuser@kubernetes
- kind: Config
- preferences: {}
- users:
- - name: myadsuser
- user:
- client-certificate-data: LS0tLS1CRUdJTiBDRV=========//snipped//=====EE9PQotLS0kFURSBLRVktLS0tLQo=
-
- [10.100.10.10]: PS>Grant-HcsKubernetesNamespaceAccess -Namespace myadstest -UserName myadsuser
- [10.100.10.10]: PS>Set-HcsKubernetesAzureArcDataController -SubscriptionId db4e2fdb-6d80-4e6e-b7cd-736098270664 -ResourceGroupName myasegpurg -Location "EastUS" -UserName myadsuser -Password "Password1" -DataControllerName "arctestcontroller" -Namespace myadstest
- [10.100.10.10]: PS>
- ```
-1. Add a DNS entry to the hosts file on your system.
-
- 1. Run Notepad as administrator and open the `hosts` file located at `C:\windows\system32\drivers\etc\hosts`.
- 2. Use the information that you saved from the **Device** page in the local UI (prerequisite) to create the entry in the hosts file.
-
- For example, copy this endpoint `https://compute.myasegpudev.microsoftdatabox.com/[10.100.10.10]` to create the following entry with device IP address and DNS domain:
-
- `10.100.10.10 compute.myasegpudev.microsoftdatabox.com`
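   If you prefer to script this step, the following sketch (run from an elevated PowerShell session) appends the same sample entry to the hosts file. Substitute your own device IP address and DNS domain.

   ```powershell
   # Append the device entry to the hosts file (requires an elevated session).
   # The IP address and FQDN are the sample values used in this article.
   Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "10.100.10.10 compute.myasegpudev.microsoftdatabox.com"
   ```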
-
-1. To verify that you can connect to the Kubernetes pods, start a cmd prompt or a PowerShell session. Type:
-
- ```powershell
- PS C:\WINDOWS\system32> kubectl get pods -n "myadstest"
- No resources found.
- PS C:\WINDOWS\system32>
- ```
-You can now deploy your data controller and data services applications in the namespace, then view the applications and their logs.
-
-### Create data controller
-
-The data controller is a collection of pods that are deployed to your Kubernetes cluster to provide an API, the controller service, the bootstrapper, and the monitoring databases and dashboards. Follow these steps to create a data controller on the Kubernetes cluster that exists on your Azure Stack Edge device in the namespace that you created earlier.
-
-1. Gather the following information that you'll need to create a data controller:
-
-
- |Parameter |Description |
- |||
- |Data controller name |A descriptive name for your data controller. For example, `arctestdatacontroller`. |
- |Data controller username |Any username for the data controller administrator user. The data controller username and password are used to authenticate to the data controller API to perform administrative functions. |
- |Data controller password |A password for the data controller administrator user. Choose a secure password and share it with only those that need to have cluster administrator privileges. |
- |Name of your Kubernetes namespace |The name of the Kubernetes namespace that you want to create the data controller in. |
- |Azure subscription ID |The Azure subscription GUID for where you want the data controller resource in Azure to be created. |
- |Azure resource group name |The name of the resource group where you want the data controller resource in Azure to be created. |
- |Azure location |The Azure location where the data controller resource metadata will be stored in Azure. For a list of available regions, see Azure global infrastructure / Products by region.|
--
-1. Connect to the PowerShell interface. To create the data controller, type:
-
- ```powershell
- Set-HcsKubernetesAzureArcDataController -SubscriptionId <Subscription ID> -ResourceGroupName <Resource group name> -Location <Location without spaces> -UserName <User you created> -Password <Password to authenticate to Data Controller> -DataControllerName <Data Controller Name> -Namespace <Namespace you created>
- ```
- Here's a sample output of the preceding command.
-
- ```powershell
- [10.100.10.10]: PS>Set-HcsKubernetesAzureArcDataController -SubscriptionId db4e2fdb-6d80-4e6e-b7cd-736098270664 -ResourceGroupName myasegpurg -Location "EastUS" -UserName myadsuser -Password "Password1" -DataControllerName "arctestcontroller" -Namespace myadstest
- [10.100.10.10]: PS>
- ```
-
- The deployment may take approximately 5 minutes to complete.
-
- > [!NOTE]
- > The data controller created on the Kubernetes cluster on your Azure Stack Edge Pro GPU device works only in disconnected mode in the current release. Disconnected mode applies to the data controller, not to your device.
-
-### Monitor data controller creation status
-
-1. Open another PowerShell window.
-1. Use the following `kubectl` command to monitor the creation status of the data controller.
-
- ```powershell
- kubectl get datacontroller/<Data controller name> --namespace <Name of your namespace>
- ```
- When the controller is created, the status should be `Ready`.
- Here is a sample output of the preceding command:
-
- ```powershell
- PS C:\WINDOWS\system32> kubectl get datacontroller/arctestcontroller --namespace myadstest
- NAME STATE
- arctestcontroller Ready
- PS C:\WINDOWS\system32>
- ```
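   Instead of polling manually, you can also watch the resource until its state changes. Here's a sketch that uses the sample names from this article:

   ```powershell
   # Watch the data controller resource until its STATE column shows Ready.
   # Press Ctrl+C to stop watching. Names are the sample values used earlier.
   kubectl get datacontroller/arctestcontroller --namespace myadstest --watch
   ```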
-1. To identify the IPs assigned to the external services running on the data controller, use the `kubectl get svc -n <namespace>` command. Here is a sample output:
-
- ```powershell
- PS C:\WINDOWS\system32> kubectl get svc -n myadstest
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- controldb-svc ClusterIP 172.28.157.130 <none> 1433/TCP,8311/TCP,8411/TCP 3d21h
- controller-svc ClusterIP 172.28.123.251 <none> 443/TCP,8311/TCP,8301/TCP,8411/TCP,8401/TCP 3d21h
- controller-svc-external LoadBalancer 172.28.154.30 10.57.48.63 30080:31090/TCP 3d21h
- logsdb-svc ClusterIP 172.28.52.196 <none> 9200/TCP,8300/TCP,8400/TCP 3d20h
- logsui-svc ClusterIP 172.28.85.97 <none> 5601/TCP,8300/TCP,8400/TCP 3d20h
- metricsdb-svc ClusterIP 172.28.255.103 <none> 8086/TCP,8300/TCP,8400/TCP 3d20h
- metricsdc-svc ClusterIP 172.28.208.191 <none> 8300/TCP,8400/TCP 3d20h
- metricsui-svc ClusterIP 172.28.158.163 <none> 3000/TCP,8300/TCP,8400/TCP 3d20h
- mgmtproxy-svc ClusterIP 172.28.228.229 <none> 443/TCP,8300/TCP,8311/TCP,8400/TCP,8411/TCP 3d20h
- mgmtproxy-svc-external LoadBalancer 172.28.166.214 10.57.48.64 30777:30621/TCP 3d20h
- sqlex-svc ClusterIP None <none> 1433/TCP 3d20h
- PS C:\WINDOWS\system32>
- ```
-
-## Deploy SQL managed instance
-
-After you have successfully created the data controller, you can use a template to deploy a SQL Managed Instance on the data controller.
-
-### Deployment template
-
-Use the following deployment template to deploy a SQL Managed Instance on the data controller on your device.
-
-```yml
-apiVersion: v1
-data:
-  password: UGFzc3dvcmQx
-  username: bXlhZHN1c2Vy
-kind: Secret
-metadata:
-  name: sqlex-login-secret
-type: Opaque
----
-apiVersion: sql.arcdata.microsoft.com/v1alpha1
-kind: sqlmanagedinstance
-metadata:
-  name: sqlex
-spec:
-  limits:
-    memory: 4Gi
-    vcores: "4"
-  requests:
-    memory: 2Gi
-    vcores: "1"
-  service:
-    type: LoadBalancer
-  storage:
-    backups:
-      className: ase-node-local
-      size: 5Gi
-    data:
-      className: ase-node-local
-      size: 5Gi
-    datalogs:
-      className: ase-node-local
-      size: 5Gi
-    logs:
-      className: ase-node-local
-      size: 1Gi
-```
--
-#### Metadata name
-
-The metadata name is the name of the SQL Managed Instance. The associated pod in the preceding `deployment.yaml` will be named `sqlex-n` (`n` is the number of pods associated with the application).
-
-#### Password and username data
-
-The data controller username and password are used to authenticate to the data controller API to perform administrative functions. The Kubernetes secret for the data controller username and password in the deployment template are base64 encoded strings.
-
-You can use an online tool to Base64 encode your desired username and password, or you can use built-in CLI tools, depending on your platform. When using an online Base64 encoding tool, provide the username and password strings (that you entered while creating the data controller), and the tool generates the corresponding Base64-encoded strings.
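For example, on a Windows client you could generate the encoded values with PowerShell. This is a minimal sketch that uses the sample credentials from this article; replace them with your own.

```powershell
# Base64 encode the data controller username and password for the Kubernetes
# secret. The sample values below produce the strings used in the template.
[Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("myadsuser"))   # bXlhZHN1c2Vy
[Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("Password1"))   # UGFzc3dvcmQx
```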
-
-#### Service type
-
-Service type should be set to `LoadBalancer`.
-
-#### Storage class name
-
-You can identify the storage class on your Azure Stack Edge device that the deployment will use for data, backups, data logs and logs. Use the `kubectl get storageclass` command to get the storage class deployed on your device.
-
-```powershell
-PS C:\WINDOWS\system32> kubectl get storageclass
-NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
-ase-node-local rancher.io/local-path Delete WaitForFirstConsumer false 5d23h
-PS C:\WINDOWS\system32>
-```
-In the preceding sample output, `ase-node-local` is the storage class on your device; specify this storage class in the template.
- 
-#### Spec
-
-To create a SQL Managed Instance on your Azure Stack Edge device, you can specify your memory and CPU requirements in the spec section of the `deployment.yaml`. Each SQL Managed Instance must request a minimum of 2 GB of memory and 1 CPU core, as shown in the following example.
-
-```yml
-spec:
-  limits:
-    memory: 4Gi
-    vcores: "4"
-  requests:
-    memory: 2Gi
-    vcores: "1"
-```
-
-For detailed sizing guidance for the data controller and one SQL Managed Instance, review [SQL Managed Instance sizing details](../azure-arc/dat#sql-managed-instance-sizing-details).
-
-### Run deployment template
-
-Run the `deployment.yaml` using the following command:
-
-```powershell
-kubectl create -n <Name of namespace that you created> -f <Path to the deployment yaml>
-```
-
-Here's a sample output of the preceding command:
-
-```powershell
-PS C:\WINDOWS\system32> kubectl get pods -n "myadstest"
-No resources found.
-PS C:\WINDOWS\system32>
-PS C:\WINDOWS\system32> kubectl create -n myadstest -f C:\azure-arc-data-services\sqlex.yaml
-secret/sqlex-login-secret created
-sqlmanagedinstance.sql.arcdata.microsoft.com/sqlex created
-PS C:\WINDOWS\system32> kubectl get pods --namespace myadstest
-NAME READY STATUS RESTARTS AGE
-bootstrapper-mv2cd 1/1 Running 0 83m
-control-w9s9l 2/2 Running 0 78m
-controldb-0 2/2 Running 0 78m
-controlwd-4bmc5 1/1 Running 0 64m
-logsdb-0 1/1 Running 0 64m
-logsui-wpmw2 1/1 Running 0 64m
-metricsdb-0 1/1 Running 0 64m
-metricsdc-fb5r5 1/1 Running 0 64m
-metricsui-56qzs 1/1 Running 0 64m
-mgmtproxy-2ckl7 2/2 Running 0 64m
-sqlex-0 3/3 Running 0 13m
-PS C:\WINDOWS\system32>
-```
-
-The `sqlex-0` pod in the sample output indicates the status of the SQL Managed Instance.
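To find the external endpoint for connecting to the SQL Managed Instance, you can list the services in the namespace again and look for the LoadBalancer entry created for the instance. Here's a sketch that uses the sample namespace:

```powershell
# List services in the namespace; the LoadBalancer service created for the
# SQL Managed Instance shows the external IP and port to connect to.
kubectl get svc -n myadstest
```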
-
-## Remove data controller
-
-To remove the data controller, delete the dedicated namespace in which you deployed it.
--
-```powershell
-kubectl delete ns <Name of your namespace>
-```
--
-## Next steps
--- [Deploy a stateless application on your Azure Stack Edge Pro](./azure-stack-edge-gpu-deploy-stateless-application-kubernetes.md).
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Install Gpu Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md
$templateParameterFile = "<Path to addGPUExtWindowsVM.parameters.json>"
RGName = "<Name of your resource group>" New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "<Name for your deployment>" ```+ > [!NOTE] > The extension deployment is a long running job and takes about 10 minutes to complete.
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
description: Learn about the benefits and features of Microsoft Defender for con
Last updated 04/07/2022 --++ # Introduction to Microsoft Defender for container registries (deprecated)
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for Cloud description: Enable the container protections of Microsoft Defender for Containers ++ zone_pivot_groups: k8s-host Last updated 07/25/2022
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers ++ Last updated 09/11/2022
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud
description: Defend your AWS resources with Microsoft Defender for Cloud Last updated 09/20/2022++ zone_pivot_groups: connect-aws-accounts
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud
description: Monitoring your GCP resources from Microsoft Defender for Cloud Last updated 09/20/2022++ zone_pivot_groups: connect-gcp-accounts
defender-for-cloud Quickstart Onboard Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-machines.md
Title: Connect your non-Azure machines to Microsoft Defender for Cloud
description: Learn how to connect your non-Azure machines to Microsoft Defender for Cloud Last updated 02/27/2022++ zone_pivot_groups: non-azure-machines
defender-for-cloud Sql Information Protection Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-information-protection-policy.md
Title: SQL information protection policy in Microsoft Defender for Cloud description: Learn how to customize information protection policies in Microsoft Defender for Cloud. ++ Last updated 11/09/2021 # SQL information protection policy in Microsoft Defender for Cloud
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. ++ Last updated 10/24/2022
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
Title: Microsoft Defender for Cloud's servers features according to OS, machine type, and cloud description: Learn about the availability of Microsoft Defender for Cloud's servers features according to OS, machine type, and cloud deployment. ++ Last updated 10/23/2022
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
Title: Workflow automation in Microsoft Defender for Cloud description: Learn how to create and automate workflows in Microsoft Defender for Cloud ++ Last updated 09/21/2022
deployment-environments How To Configure Use Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-use-cli.md
Previously updated : 10/12/2022 Last updated : 10/26/2022
This article shows you how to use the Deployment Environments Azure CLI extensio
**Automated install**
- Execute the script https://aka.ms/DevCenterEnvironments/Install-DevCenterEnvironmentsCli.ps1 directly in PowerShell to install:
+ Execute the script https://aka.ms/DevCenter/Install-DevCenterCli.ps1 directly in PowerShell to install:
```powershell
- iex "& { $(irm https://aka.ms/DevCenterEnvironments/Install-DevCenterEnvironmentsCli.ps1 ) }"
+ iex "& { $(irm https://aka.ms/DevCenter/Install-DevCenterCli.ps1 ) }"
``` This will uninstall any existing dev center extension and install the latest version.
This article shows you how to use the Deployment Environments Azure CLI extensio
Run the following command in the Azure CLI: ```azurecli
- az extension add --source https://fidalgosetup.blob.core.windows.net/cli-extensions/devcenter-environments-0.1.0-py3-none-any.whl
+ az extension add --source https://fidalgosetup.blob.core.windows.net/cli-extensions/devcenter-0.1.0-py3-none-any.whl
``` 1. Sign in to Azure CLI. ```azurecli
deployment-environments Quickstart Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md
Previously updated : 10/12/2022 Last updated : 10/26/2022 # Quickstart: Create and access Environments
In this quickstart, you do the following actions:
2. Install the Deployment Environments AZ CLI extension: **Automated install**
- Execute the script https://aka.ms/DevCenterEnvironments/Install-DevCenterEnvironmentsCli.ps1 directly in PowerShell to install:
+ Execute the script https://aka.ms/DevCenter/Install-DevCenterCli.ps1 directly in PowerShell to install:
```powershell
- iex "& { $(irm https://aka.ms/DevCenterEnvironments/Install-DevCenterEnvironmentsCli.ps1 ) }"
+ iex "& { $(irm https://aka.ms/DevCenter/Install-DevCenterCli.ps1 ) }"
``` This will uninstall any existing dev center extension and install the latest version.
In this quickstart, you do the following actions:
Run the following command in the Azure CLI: ```azurecli
- az extension add --source https://fidalgosetup.blob.core.windows.net/cli-extensions/devcenter-environments-0.1.0-py3-none-any.whl
+ az extension add --source https://fidalgosetup.blob.core.windows.net/cli-extensions/devcenter-0.1.0-py3-none-any.whl
``` >[!NOTE]
dns Private Resolver Endpoints Rulesets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-endpoints-rulesets.md
Previously updated : 10/13/2022 Last updated : 10/26/2022 #Customer intent: As an administrator, I want to understand components of the Azure DNS Private Resolver.
For example, if you have the following rules:
A query for `secure.store.azure.contoso.com` will match the **AzurePrivate** rule for `azure.contoso.com` and also the **Contoso** rule for `contoso.com`, but the **AzurePrivate** rule takes precedence because the prefix `azure.contoso` is longer than `contoso`.
+> [!IMPORTANT]
+> - You can't enter the Azure DNS IP address of 168.63.129.16 as the destination IP address for a rule. Attempting to add this IP address will output the error: **Exception while making add request for rule**.
+> - Do not use the private resolver's inbound endpoint IP address as a forwarding destination for zones that are not linked to the virtual network where the private resolver is provisioned.
+ ## Next steps * Review components, benefits, and requirements for [Azure DNS Private Resolver](dns-private-resolver-overview.md).
energy-data-services How To Add More Data Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-add-more-data-partitions.md
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-The article describes how you can add data partitions to an existing Microsoft Energy Data Services (MEDS) instance. The concept of "data partitions" in MEDS is picked from [OSDU&trade;](https://osduforum.org/) where single deployment can contain multiple partitions.
+In this article, you'll learn how to add data partitions to an existing Microsoft Energy Data Services instance. The concept of "data partitions" is picked from [OSDU&trade;](https://osduforum.org/), where a single deployment can contain multiple partitions.
Each partition provides the highest level of data isolation within a single deployment. All access rights are governed at a partition level. Data is separated in a way that allows for the partition's life cycle and deployment to be handled independently. (See [Partition Service](https://community.opengroup.org/osdu/platform/home/-/issues/31) in OSDU&trade;)
-> [!NOTE]
-> You can create maximum five data partitions in one MEDS instance. Currently, in line with the data partition capabilities that are available in OSDU&trade;, you can only create data partitions but can't delete or rename data existing data partitions.
+
+You can create a maximum of five data partitions in one MEDS instance. Currently, in line with the data partition capabilities that are available in OSDU&trade;, you can only create data partitions; you can't delete or rename existing data partitions.
## Create a data partition
Each partition provides the highest level of data isolation within a single depl
2. Select "Create".
- The page shows a table of all data partitions in your MEDS instance with the status of the data partition next to it. Clicking "Create" option on the top opens a right-pane for next steps.
+ The page shows a table of all data partitions in your MEDS instance with the status of the data partition next to it. Clicking the "Create" option on the top opens a right-pane for next steps.
[![Screenshot to help you locate the create button on the data partitions page. The 'create' button to add a new data partition is highlighted.](media/how-to-add-more-data-partitions/start-create-data-partition.png)](media/how-to-add-more-data-partitions/start-create-data-partition.png#lightbox) 3. Choose a name for your data partition.
- Each data partition name needs to be - "1-10 characters long and be a combination of lowercase letters, numbers and hyphens only" The data partition name will be prepended with the name of the MEDS instance. Choose a name for your data partition and hit create. Soon as you hit create, the deployment of the underlying data partition resources such as Azure Cosmos DB and Azure Storage accounts is started.
+ Each data partition name needs to be 1-10 characters long and be a combination of lowercase letters, numbers and hyphens only. The data partition name will be prepended with the name of the MEDS instance. Choose a name for your data partition and hit create. As soon as you hit create, the deployment of the underlying data partition resources such as Azure Cosmos DB and Azure Storage accounts is started.
>[!NOTE] >It generally takes 15-20 minutes to create a data partition.
Each partition provides the highest level of data isolation within a single depl
## Delete a failed data partition
-The data-partition deployment triggered in the previous process might fail in some cases due to issues - quota limits reached, ARM template deployment transient issues, data seeding failures, and failure in connecting to underlying AKS clusters.
+The data-partition deployment triggered in the previous process might fail in some cases due to various issues. These issues include quota limits reached, ARM template deployment transient issues, data seeding failures, and failure in connecting to underlying AKS clusters.
The status of such data partitions shows as "Creation Failed". You can delete these deployments using the "delete" button that shows next to all failed data partition deployments. This deletion will clean up any records created in the backend. You can retry creating the data partitions later.
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
Microsoft Energy Data Services is updated on an ongoing basis. To stay up to dat
- Deprecated functionality - Plans for changes +
+<hr width=100%>
++
+## October 20, 2022
+
+### Support for Private Links
+
+Azure Private Link on Microsoft Energy Data Services provides private access to the service. This means traffic between your private network and Microsoft Energy Data Services travels over the Microsoft backbone network, limiting any exposure over the public internet. By using Azure Private Link, you can connect to a Microsoft Energy Data Services instance from your virtual network via a private endpoint, which is a set of private IP addresses in a subnet within the virtual network. You can then limit access to your Microsoft Energy Data Services instance over these private IP addresses. [Create a private endpoint for Microsoft Energy Data Services](how-to-set-up-private-links.md).
+
+### Encryption at Rest using Customer Managed Keys
+Microsoft Energy Data Services Preview supports customer-managed encryption keys (CMK). All data in Microsoft Energy Data Services is encrypted with Microsoft-managed keys by default. In addition to the Microsoft-managed key, you can use your own encryption key to protect the data in Microsoft Energy Data Services. When you specify a customer-managed key, that key is used to protect and control access to the Microsoft-managed key that encrypts your data. [Data security and encryption in Microsoft Energy Data Services](how-to-manage-data-security-and-encryption.md).
++
+<hr width=100%>
++ ## Microsoft Energy Data Services Preview Release
Microsoft Energy Data Services is developed in alignment with the emerging requi
### Partition & User Management -- New data partitions can be [created dynamically](how-to-add-more-data-partitions.md) as needed post provising of the platform (up to five). Earlier, data partitions could only be created when provisioning a new instance.
+- New data partitions can be [created dynamically](how-to-add-more-data-partitions.md) as needed post provisioning of the platform (up to five). Earlier, data partitions could only be created when provisioning a new instance.
- The domain name for entitlement groups for [user management](how-to-manage-users.md) has been changed to "dataservices.energy". ### Data Ingestion
event-grid Security Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-authorization.md
Title: Azure Event Grid security and authentication description: Describes Azure Event Grid and its concepts. Previously updated : 02/12/2021 Last updated : 10/25/2022 # Authorizing access to Event Grid resources
event-hubs Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authorize-access-azure-active-directory.md
Title: Authorize access with Azure Active Directory description: This article provides information on authorizing access to Event Hubs resources using Azure Active Directory. Previously updated : 09/20/2021 Last updated : 10/25/2022 # Authorize access to Event Hubs resources using Azure Active Directory
event-hubs Event Hubs Event Processor Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-event-processor-host.md
Title: Receive events using Event Processor Host - Azure Event Hubs | Microsoft Docs description: This article describes the Event Processor Host in Azure Event Hubs, which simplifies the management of checkpointing, leasing, and reading events ion parallel. Previously updated : 08/04/2021 Last updated : 10/25/2022 ms.devlang: csharp
event-hubs Event Hubs Exchange Events Different Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-exchange-events-different-protocols.md
Title: Azure Event Hubs - Exchange events using different protocols description: This article shows how consumers and producers that use different protocols (AMQP, Apache Kafka, and HTTPS) can exchange events when using Azure Event Hubs. Previously updated : 09/20/2021 Last updated : 10/25/2022 ms.devlang: csharp, java
For Kafka consumers that receive properties from AMQP or HTTPS producers, use th
As a best practice, we recommend that you include a property in messages sent via AMQP or HTTPS. The Kafka consumer can use it to determine whether header values need AMQP deserialization. The value of the property is not important. It just needs a well-known name that the Kafka consumer can find in the list of headers and adjust its behavior accordingly.
+> [!NOTE]
+> The Event Hubs service natively converts some of the Event Hubs-specific [AmqpMessage properties](http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-messaging-v1.0-os.html#type-properties) to [Kafka's record headers](https://kafka.apache.org/32/javadoc/org/apache/kafka/common/header/Headers.html) as **strings**. A Kafka message header is a list of &lt;key, value&gt; pairs where the key is a string and the value is always a byte array. For these supported properties, the byte array contains a UTF-8 encoded string.
+>
+> Here's the list of immutable properties that Event Hubs supports in this conversion today. If you set values for user properties with the names in this list, you don't need to deserialize them at the Kafka consumer side.
+>
+> - message-id
+> - user-id
+> - to
+> - reply-to
+> - content-type
+> - content-encoding
+> - creation-time
++++ ### AMQP to Kafka part 1: create and send an event in C# (.NET) with properties ```csharp // Create an event with properties "MyStringProperty" and "MyIntegerProperty"
event-hubs Event Hubs Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-scalability.md
Title: Scalability - Azure Event Hubs | Microsoft Docs description: This article provides information on how to scale Azure Event Hubs by using partitions and throughput units. Previously updated : 05/26/2021 Last updated : 10/25/2022 # Scaling with Event Hubs
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md
To enroll in the preview, send an email to **exrpm@microsoft.com**, providing th
**FastPath support for virtual network peering and UDRs is only available for ExpressRoute Direct connections**.
-> [!NOTE]
-> * Virtual network peering and UDR support is enabled by default for all new FastPath connections
-> * To enable virtual network peering and UDR support for FastPath connections configured before 9/19/2022, disable and enable FastPath on the target connection.
- ### FastPath and Private Link for 10 Gbps ExpressRoute Direct With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This preview supports connections associated to 10 Gbps ExpressRoute Direct circuits. This preview doesn't support ExpressRoute circuits managed by an ExpressRoute partner.
firewall Protect Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md
Previously updated : 07/28/2022 Last updated : 10/26/2022
You'll define the outbound type to use the UDR that already exists on the subnet
```azurecli az aks create -g $RG -n $AKSNAME -l $LOC \ --node-count 3 \
- --network-plugin $PLUGIN \
+ --network-plugin azure \
--outbound-type userDefinedRouting \ --vnet-subnet-id $SUBNETID \ --api-server-authorized-ip-ranges $FWPUBLIC_IP ``` > [!NOTE]
-> For creating and using your own VNet and route table where the resources are outside of the worker node resource group, the CLI will add the role assignment automatically. If you are using an ARM template or other client, you need to use the Principal ID of the cluster managed identity to perform a [role assignment.][add role to identity]
->
-> If you are not using the CLI but using your own VNet or route table which are outside of the worker node resource group, it's recommended to use [user-assigned control plane identity][Bring your own control plane managed identity]. For system-assigned control plane identity, we cannot get the identity ID before creating cluster, which causes delay for role assignment to take effect.
-
+> To create and use your own VNet and route table with the `kubenet` network plugin, you need to use a [user-assigned control plane identity][bring-your-own-control-plane-managed-identity]. With a system-assigned control plane identity, the identity ID can't be obtained before the cluster is created, which delays the role assignment from taking effect.
+> To create and use your own VNet and route table with the `azure` network plugin, both system-assigned and user-assigned managed identities are supported.
### Enable developer access to the API server
az group delete -g $RG
## Next steps - Learn more about Azure Kubernetes Service, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../aks/concepts-clusters-workloads.md).+
+<!-- LINKS - Internal -->
+[bring-your-own-control-plane-managed-identity]: ../aks/use-managed-identity.md#bring-your-own-control-plane-managed-identity
frontdoor Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-terraform.md
The steps in this article were tested with the following Terraform and Terraform
type = string default = "Standard_AzureFrontDoor" validation {
- condition = contains(["Standard_AzureFrontDoor", "Premium_AzureFrontDoor"], var. front_door_sku_name)
+ condition = contains(["Standard_AzureFrontDoor", "Premium_AzureFrontDoor"], var.front_door_sku_name)
error_message = "The SKU value must be Standard_AzureFrontDoor or Premium_AzureFrontDoor." } }
frontdoor Quickstart Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-terraform.md
Title: 'Quickstart: Create an Azure Front Door Service using Terraform'
+ Title: 'Quickstart: Create an Azure Front Door (classic) using Terraform'
description: This quickstart describes how to create an Azure Front Door Service using Terraform. documentationcenter:
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
Title: Get policy compliance data description: Azure Policy evaluations and effects determine compliance. Learn how to get the compliance details of your Azure resources. Previously updated : 08/05/2022 Last updated : 10/26/2022
operations of the Azure Policy Insights REST API, see
Evaluations of assigned policies and initiatives happen as the result of various events: -- A policy or initiative is newly assigned to a scope. It takes around 30 minutes for the assignment
+- A policy or initiative is newly assigned to a scope. It takes around five minutes for the assignment
to be applied to the defined scope. Once it's applied, the evaluation cycle begins for resources within that scope against the newly assigned policy or initiative and depending on the effects used by the policy or initiative, resources are marked as compliant, non-compliant, or exempt. A
pages have a green 'Try It' button on each operation that allows you to try it r
Use ARMClient or a similar tool to handle authentication to Azure for the REST API examples.
-> [!NOTE]
-> Currently "reason for non-compliance" cannot be retrieved from Command line. We are working on mapping the reason code to the "reason for non-compliance" and at this point there is no ETA on this.
- ### Summarize results With the REST API, summarization can be performed by container, definition, or assignment. Here is
governance General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/troubleshoot/general.md
Title: Troubleshoot common errors description: Learn how to troubleshoot problems with creating policy definitions, the various SDKs, and the add-on for Kubernetes. Previously updated : 06/17/2022 Last updated : 10/26/2022 # Troubleshoot errors with using Azure Policy
A resource is in the _Not Started_ state, or the compliance details aren't curre
#### Cause
-A new policy or initiative assignment takes about 30 minutes to be applied. New or updated
+A new policy or initiative assignment takes about five minutes to be applied. New or updated
resources within scope of an existing assignment become available in about 15 minutes. A standard compliance scan occurs every 24 hours. For more information, see [evaluation triggers](../how-to/get-compliance-data.md#evaluation-triggers).
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md
Title: Supported Azure Resource Manager resource types description: Provide a list of the Azure Resource Manager resource types supported by Azure Resource Graph and Change History. Previously updated : 08/11/2022 Last updated : 10/26/2022 - + # Azure Resource Graph table and resource type reference
For sample queries for this table, see [Resource Graph sample queries for resour
- Sample query: [List all Azure Arc-enabled Kubernetes resources](../samples/samples-by-category.md#list-all-azure-arc-enabled-kubernetes-resources) - Sample query: [List all ConnectedClusters and ManagedClusters that contain a Flux Configuration](../samples/samples-by-category.md#list-all-connectedclusters-and-managedclusters-that-contain-a-flux-configuration) - microsoft.Kusto/clusters (Azure Data Explorer Clusters)-- microsoft.Kusto/clusters/databases (Azure Data Explorer Databases)
+- microsoft.Kusto/clusters/databases (Azure Data Explorer databases)
- microsoft.LabServices/labAccounts (Lab accounts) - microsoft.LabServices/labPlans (Lab plans) - microsoft.LabServices/labs (Labs)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.MixedReality/remoteRenderingAccounts (Remote Rendering Accounts) - microsoft.MixedReality/spatialAnchorsAccounts (Spatial Anchors Accounts) - microsoft.mixedreality/surfacereconstructionaccounts-- microsoft.MobileNetwork/mobileNetworks (Mobile Networks)
+- microsoft.MobileNetwork/mobileNetworks (Mobile networks)
- microsoft.MobileNetwork/mobileNetworks/dataNetworks (Data Networks) - microsoft.MobileNetwork/mobileNetworks/services (Services)-- microsoft.MobileNetwork/mobileNetworks/simPolicies (Sim Policies)-- microsoft.MobileNetwork/mobileNetworks/sites (Mobile Network Sites)
+- microsoft.MobileNetwork/mobileNetworks/simPolicies (SIM policies)
+- microsoft.MobileNetwork/mobileNetworks/sites (Mobile network sites)
- microsoft.MobileNetwork/mobileNetworks/slices (Slices) - microsoft.mobilenetwork/networks - microsoft.mobilenetwork/networks/sites
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.Network/azureFirewalls (Firewalls) - microsoft.Network/bastionHosts (Bastions) - microsoft.Network/connections (Connections)-- microsoft.Network/customIpPrefixes (Custom IP Prefixes)
+- microsoft.Network/customIpPrefixes (Custom IP prefixes)
- microsoft.network/ddoscustompolicies - microsoft.Network/ddosProtectionPlans (DDoS protection plans)-- microsoft.Network/dnsForwardingRulesets (Dns Forwarding Rulesets)
+- microsoft.Network/dnsForwardingRulesets (DNS forwarding rulesets)
- microsoft.Network/dnsResolvers (DNS Private Resolvers) - microsoft.network/dnsresolvers/inboundendpoints - microsoft.network/dnsresolvers/outboundendpoints
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.network/expressroutecrossconnections - microsoft.network/expressroutegateways - microsoft.Network/expressRoutePorts (ExpressRoute Direct)-- microsoft.Network/firewallPolicies (Firewall Policies)-- microsoft.network/firewallpolicies/rulegroups
+- microsoft.Network/firewallPolicies (Firewall policies)
- microsoft.Network/frontdoors (Front Doors) - microsoft.Network/FrontDoorWebApplicationFirewallPolicies (Web Application Firewall policies (WAF)) - microsoft.network/ipallocations
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.network/vpnserverconfigurations - microsoft.network/vpnsites - microsoft.networkfunction/azuretrafficcollectors-- microsoft.NotificationHubs/namespaces (Notification Hub Namespaces)
+- microsoft.NotificationHubs/namespaces (Notification Hub namespaces)
- microsoft.NotificationHubs/namespaces/notificationHubs (Notification Hubs) - microsoft.nutanix/interfaces - microsoft.nutanix/nodes
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.OperationalInsights/workspaces (Log Analytics workspaces) - microsoft.OperationsManagement/solutions (Solutions) - microsoft.operationsmanagement/views-- microsoft.Orbital/contactProfiles (Contact Profiles)
+- microsoft.Orbital/contactProfiles (Contact profiles)
- microsoft.Orbital/EdgeSites (Edge Sites)-- microsoft.Orbital/GroundStations (Ground Stations)-- microsoft.Orbital/l2Connections (L2 Connections)
+- microsoft.Orbital/GroundStations (Ground stations)
+- microsoft.Orbital/l2Connections (L2 connections)
- microsoft.orbital/orbitalendpoints - microsoft.orbital/orbitalgateways - microsoft.orbital/orbitalgateways/orbitall2connections - microsoft.orbital/orbitalgateways/orbitall3connections - microsoft.Orbital/spacecrafts (Spacecrafts) - microsoft.Peering/peerings (Peerings)-- microsoft.Peering/peeringServices (Peering Services)
+- microsoft.Peering/peeringServices (Peering services)
- microsoft.PlayFab/playerAccountPools (PlayFab player account pools) - microsoft.PlayFab/titles (PlayFab titles) - microsoft.Portal/dashboards (Shared dashboards)
hdinsight Hdinsight Service Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-service-tags.md
Previously updated : 07/21/2022 Last updated : 10/24/2022 # NSG service tags for Azure HDInsight
If your cluster is located in a region listed in this table, you only need to ad
| India | Central India | HDInsight.CentralIndia | | &nbsp; | JIO India West | HDInsight.JioIndiaWest | | &nbsp; | South India | HDInsight.SouthIndia |
+| Qatar | Qatar Central | HDInsight.QatarCentral |
| South Africa | South Africa North | HDInsight.SouthAfricaNorth | | UAE | UAE North | HDInsight.UAENorth | | &nbsp; | UAE Central | HDInsight.UAECentral |
If your cluster is located in a region listed in this table, you only need to ad
| &nbsp; | West Europe | HDInsight.WestEurope | | France | France Central| HDInsight.FranceCentral | | Germany | Germany West Central| HDInsight.GermanyWestCentral |
+| &nbsp; | Germany North| HDInsight.GermanyNorth |
| Norway | Norway East | HDInsight.NorwayEast | | Sweden | Sweden Central | HDInsight.SwedenCentral | | &nbsp; | Sweden South | HDInsight.SwedenSouth |
For example, if your cluster is created in the `East US 2` region, you'll need t
| &nbsp; | Southeast Asia | HDInsight.SoutheastAsia | | Australia | Australia East | HDInsight.AustraliaEast | + #### Group 2 Clusters in the regions of *China North* and *China East* need to allow two service tags: `HDInsight.ChinaNorth` and `HDInsight.ChinaEast`.
hdinsight Hdinsight Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-availability-zones.md
description: Learn how to create an Azure HDInsight cluster that uses Availabili
Previously updated : 09/15/2022 Last updated : 10/25/2022 # Create an HDInsight cluster that uses Availability Zones (Preview)
HDInsight clusters can currently be created using availability zones in the foll
- Japan East - Korea Central - North Europe
+ - Qatar Central
- Southeast Asia - South Central US - UK South
iot-dps Concepts Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-deploy-at-scale.md
Recommended device logic when connecting to IoT Hub via DPS:
- When receiving any of the 500-series of server error responses, retry the connection using either cached credentials or the results of a Device Registration Status Lookup API call. - When receiving `401, Unauthorized` or `403, Forbidden` or `404, Not Found`, perform a full re-registration by calling the [DPS registration API](/rest/api/iot-dps/device/runtime-registration/register-device). - At any time, devices should be capable of responding to a user-initiated reprovisioning command.
+- If devices get disconnected from IoT Hub, they should try to reconnect directly to the same IoT Hub for at least 15 minutes (30 minutes or more, if the scenario permits) before attempting to go back to DPS.
Other IoT Hub scenarios when using DPS:
key-vault Multiline Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/multiline-secrets.md
You can then view the stored secret using the Azure CLI [az keyvault secret show
az keyvault secret show --name "MultilineSecret" --vault-name "<your-unique-keyvault-name>" --query "value" ```
-The secret will be returned with newlines embedded:
+The secret will be returned with `\n` in place of newline:
```bash "This is\nmy multi-line\nsecret" ```
+The `\n` above is a literal backslash followed by the character `n`, not a newline character. The quotes (`"`) are included in the returned string.
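If you want to turn the returned value back into a true multi-line string, one option is a small PowerShell conversion. This is a sketch that assumes the sample value shown above:

```powershell
# Convert the literal "\n" sequences returned by the CLI into real newlines.
# The sample value matches the output shown above.
$value = '"This is\nmy multi-line\nsecret"'
$value.Trim('"') -replace '\\n', "`n"
```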
+ ## Set the secret using Azure Powershell With Azure PowerShell, you must first read in the file using the [Get-Content](/powershell/module/microsoft.powershell.management/get-content) cmdlet, then convert it to a secure string using [ConvertTo-SecureString](/powershell/module/microsoft.powershell.security/convertto-securestring).
You can then view the stored secret using the Azure CLI [az keyvault secret show
az keyvault secret show --name "MultilineSecret" --vault-name "<your-unique-keyvault-name>" --query "value" ```
-The secret will be returned with newlines embedded:
+The secret will be returned with `\n` in place of newline:
```bash "This is\nmy multi-line\nsecret" ```
+The `\n` above is a literal backslash followed by the character `n`, not a newline character. The quotes (`"`) are included in the returned string.
+ ## Next steps - Read an [Overview of Azure Key Vault](../general/overview.md)
lab-services Administrator Guide 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide-1.md
The following list highlights scenarios where more than one lab account might be
When you set up a lab account, you set policies that apply to *all* labs under the lab account, such as: - The Azure virtual network with shared resources that the lab can access. For example, you might have a set of labs that need access to a shared data set within a virtual network.
- - The virtual machine images that the labs can use to create VMs. For example, you might have a set of labs that need access to the [Data Science VM for Linux](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804) Azure Marketplace image.
+ - The virtual machine images that the labs can use to create VMs. For example, you might have a set of labs that need access to the [Data Science VM for Linux](https://azuremarketplace.microsoft.com/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) Azure Marketplace image.
If each of your labs has unique policy requirements, it might be beneficial to create separate lab accounts for managing each lab separately.
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide.md
The following list highlights scenarios where more than one lab plan might be be
When you create a lab plan, you set policies that apply to all newly created labs, such as: - The Azure virtual network with shared resources that the lab can access. For example, you might have a set of labs that need access to a license server within a virtual network.
- - The virtual machine images that the labs can use to create VMs. For example, you might have a set of labs that need access to the [Data Science VM for Linux](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804) Azure Marketplace image.
+ - The virtual machine images that the labs can use to create VMs. For example, you might have a set of labs that need access to the [Data Science VM for Linux](https://azuremarketplace.microsoft.com/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) Azure Marketplace image.
If each of your labs has unique policy requirements, it might be beneficial to create separate lab plans for managing each lab separately.
lab-services Class Type Deep Learning Natural Language Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-deep-learning-natural-language-processing.md
For instructions on how to create a lab, see [Tutorial: Set up a lab](tutorial-s
| Lab settings | Value | | | | | Virtual machine (VM) size | **Small GPU (Compute)**. This size is best suited for compute-intensive and network-intensive applications like Artificial Intelligence and Deep Learning. |
-| VM image | [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804). This image provides deep learning frameworks and tools for machine learning and data science. To view the full list of installed tools on this image, see [What's included on the DSVM?](../machine-learning/data-science-virtual-machine/overview.md#whats-included-on-the-dsvm). |
+| VM image | [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux). This image provides deep learning frameworks and tools for machine learning and data science. To view the full list of installed tools on this image, see [What's included on the DSVM?](../machine-learning/data-science-virtual-machine/overview.md#whats-included-on-the-dsvm). |
| Enable remote desktop connection | Optionally, check **Enable remote desktop connection**. The Data Science image is already configured to use X2Go so that teachers and students can connect using a GUI remote desktop. X2Go *doesn't* require the **Enable remote desktop connection** setting to be enabled. | | Template Virtual Machine Settings | Optionally, choose **Use a virtual machine image without customization**. If you're using the [August 2022 Update](lab-services-whats-new.md) and the DSVM has all the tools that your class requires, you can skip the template customization step. |
lab-services Class Type Jupyter Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-jupyter-notebook.md
This article uses the Data Science virtual machine images available on the Azure
| Lab settings | Value | | | | | Virtual machine size | Select **Small** or **Medium** for a basic setup accessing Jupyter Notebooks. Select **Small GPU (Compute)** for compute-intensive and network-intensive applications used in Artificial Intelligence and Deep Learning classes. |
-| Virtual machine image | Choose **[Data Science Virtual Machine – Windows Server 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019)** or **[Data Science Virtual Machine – Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=Overview)** depending on your OS needs. |
+| Virtual machine image | Choose **[Data Science Virtual Machine – Windows Server 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019)** or **[Data Science Virtual Machine – Ubuntu](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux)** depending on your OS needs. |
| Template virtual machine settings | Select **Use virtual machine without customization.**. When you create a lab with the **Small GPU (Compute)** size, you can [install GPU drivers](./how-to-setup-lab-gpu.md#ensure-that-the-appropriate-gpu-drivers-are-installed). This option installs recent NVIDIA drivers and Compute Unified Device Architecture (CUDA) toolkit, which is required to enable high-performance computing with the GPU. For more information, see the article [Set up a lab with GPU virtual machines](./how-to-setup-lab-gpu.md).
lab-services Class Type Matlab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-matlab.md
To set up this lab, you need an Azure subscription and lab account to get starte
## License server
-Before creating the lab plan, you'll need to set up the server to run the [Network License Manager](https://www.mathworks.com/help/install/administer-network-licenses.html) software. These instructions are only applicable for institutions that choose the networking licensing option for MATLAB, which allows users to share a pool of license keys. You'll also need to save the license file and file installation key for later. For detailed instructions on how to download a license file, see the first step in [Install Network License Manager with Internet Connection](https://www.mathworks.com/help/install/ug/install-network-license-manager-with-internet-connection.html).
+Before creating the lab plan, you'll need to set up the server to run the [Network License Manager](https://www.mathworks.com/help/install/administer-network-licenses.html) software. These instructions are only applicable for institutions that choose the networking licensing option for MATLAB, which allows users to share a pool of license keys. You'll also need to save the license file and file installation key for later. For detailed instructions on how to download a license file, see the first step in [Install License Manager on License Server](https://www.mathworks.com/help/install/ug/install-license-manager-on-license-server.html).
-For detailed instructions on how to install a licensing server, see [Install Network License Manager with Internet Connection](https://www.mathworks.com/help/install/ug/install-network-license-manager-with-internet-connection.html). To enable borrowing, see [Borrow License](https://www.mathworks.com/help/install/license/borrow-licenses.html).
+For detailed instructions on how to install a licensing server, see [Install License Manager on License Server](https://www.mathworks.com/help/install/ug/install-license-manager-on-license-server.html). To enable borrowing, see [Borrow License](https://www.mathworks.com/help/install/license/borrow-licenses.html).
Assuming the license server is located in an on-premises network or a private network within Azure, you'll need to [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) when creating your [lab plan](./tutorial-setup-lab-plan.md).
lab-services Class Type Rstudio Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-linux.md
[R](https://www.r-project.org/https://docsupdatetracker.net/about.html) is an open-source language used for statistical computing and graphics. It's used in areas ranging from the statistical analysis of genetics to natural language processing to the analysis of financial data. R provides an [interactive command line](https://cran.r-project.org/doc/manuals/r-release/R-intro.html#Invoking-R-from-the-command-line) experience. [RStudio](https://www.rstudio.com/products/rstudio/) is an interactive development environment (IDE) available for the R language. The free version provides code editing tools, an integrated debugging experience, and package development tools. This article will focus solely on RStudio and R as a building block for a class that requires the use of statistical computing. The [deep learning](class-type-deep-learning-natural-language-processing.md) and [Python and Jupyter Notebooks](class-type-jupyter-notebook.md)
-class types setup RStudio differently. Each article describes how to use the [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/en-US/marketplace/apps/microsoft-dsvm.ubuntu-1804) marketplace image, which has many [data science related tools](../machine-learning/data-science-virtual-machine/tools-included.md), including RStudio, pre-installed.
+class types set up RStudio differently. Each article describes how to use the [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) marketplace image, which has many [data science related tools](../machine-learning/data-science-virtual-machine/tools-included.md), including RStudio, pre-installed.
## Lab configuration
lab-services Class Type Rstudio Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-windows.md
[R](https://www.r-project.org/about.html) is an open-source language used for statistical computing and graphics. It's used in fields ranging from the statistical analysis of genetics to natural language processing to the analysis of financial data. R provides an [interactive command line](https://cran.r-project.org/doc/manuals/r-release/R-intro.html#Invoking-R-from-the-command-line) experience. [RStudio](https://www.rstudio.com/products/rstudio/) is an interactive development environment (IDE) available for the R language. The free version provides code-editing tools, an integrated debugging experience, and package development tools. This article will focus solely on RStudio and R as a building block for a class that requires the use of statistical computing. The [deep learning](class-type-deep-learning-natural-language-processing.md) and [Python and Jupyter Notebooks](class-type-jupyter-notebook.md)
-class types set up RStudio differently. Each article describes how to use the [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/en-US/marketplace/apps/microsoft-dsvm.ubuntu-1804) marketplace image, which has many [data science related tools](../machine-learning/data-science-virtual-machine/tools-included.md), including RStudio, pre-installed.
+class types set up RStudio differently. Each article describes how to use the [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) marketplace image, which has many [data science related tools](../machine-learning/data-science-virtual-machine/tools-included.md), including RStudio, pre-installed.
## Lab configuration
lab-services How To Configure Lab Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-lab-accounts.md
You can enable several auto-shutdown cost control features to proactively preven
Review more details about the auto-shutdown features in the [Maximize cost control with auto-shutdown settings](cost-management-guide.md#automatic-shutdown-settings-for-cost-control) section. > [!IMPORTANT]
-> Linux labs only support automatic shut down when users disconnect and when VMs are started but users don't connect. Support also varies depending on [specific distributions and versions of Linux](../virtual-machines/extensions/diagnostics-linux.md#supported-linux-distributions). Shutdown settings are not supported by the [Data Science Virtual Machine - Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804) image.
+> Linux labs only support automatic shut down when users disconnect and when VMs are started but users don't connect. Support also varies depending on [specific distributions and versions of Linux](../virtual-machines/extensions/diagnostics-linux.md#supported-linux-distributions). Shutdown settings are not supported by the [Data Science Virtual Machine - Ubuntu](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) image.
## Enable automatic shutdown
lab-services How To Enable Remote Desktop Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-remote-desktop-linux.md
You can also connect to a Linux VM using a **GUI** (graphical user interface). T
In some cases, such as with Ubuntu LTS 18.04, X2Go provides better performance. If you use RDP and notice latency when interacting with the graphical desktop environment, consider trying X2Go since it may improve performance. > [!IMPORTANT]
-> Some marketplace images already have a graphical desktop environment and remote desktop server installed. For example, the [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804) already has [XFCE and X2Go Server installed and configured to accept client connections](../machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md#x2go).
+> Some marketplace images already have a graphical desktop environment and remote desktop server installed. For example, the [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) already has [XFCE and X2Go Server installed and configured to accept client connections](../machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md#x2go).
> [!WARNING] > If you need to use [GNOME](https://www.gnome.org/) or [MATE](https://mate-desktop.org/), ensure your lab VM is properly configured. There is a known networking conflict that can occur with the Azure Linux Agent which is needed for the VMs to work properly in Azure Lab Services. Instead, we recommend using a different graphical desktop environment, such as [XFCE](https://www.xfce.org/).
lab-services How To Enable Shutdown Disconnect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-shutdown-disconnect.md
This article shows you how you can configure [automatic shut-down](classroom-lab
A lab plan administrator can configure automatic shutdown policies for the lab plan that you use to create labs. For more information, see [Configure automatic shutdown of VMs for a lab plan](how-to-configure-auto-shutdown-lab-plans.md). As a lab owner, you can override the settings when creating a lab or after the lab is created. > [!IMPORTANT]
-> Prior to the [August 2022 Update](lab-services-whats-new.md), Linux labs only support automatic shut down when users disconnect and when VMs are started but users don't connect. Support also varies depending on [specific distributions and versions of Linux](../virtual-machines/extensions/diagnostics-linux.md#supported-linux-distributions). Shutdown settings are not supported by the [Data Science Virtual Machine - Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804) image.
+> Prior to the [August 2022 Update](lab-services-whats-new.md), Linux labs only support automatic shut down when users disconnect and when VMs are started but users don't connect. Support also varies depending on [specific distributions and versions of Linux](../virtual-machines/extensions/diagnostics-linux.md#supported-linux-distributions). Shutdown settings are not supported by the [Data Science Virtual Machine - Ubuntu](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) image.
## Configure for the lab level
lab-services How To Manage Lab Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-lab-accounts.md
The **Shut down virtual machines when users disconnect** setting supports both W
- The Secure Shell (SSH) connection is disconnected for a Linux VM. > [!IMPORTANT]
-> Only [specific distributions and versions of Linux](../virtual-machines/extensions/diagnostics-linux.md#supported-linux-distributions) are supported. Shutdown settings are not supported by the [Data Science Virtual Machine - Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804) image.
+> Only [specific distributions and versions of Linux](../virtual-machines/extensions/diagnostics-linux.md#supported-linux-distributions) are supported. Shutdown settings are not supported by the [Data Science Virtual Machine - Ubuntu](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) image.
You can specify how long the virtual machines should wait for the user to reconnect before automatically shutting down.
lab-services How To Prepare Windows Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-prepare-windows-template.md
To protect student data from being lost if a virtual machine is reset, we recomm
### Install OneDrive
-To manually download and install OneDrive, see the [OneDrive](https://onedrive.live.com/about/download/) or [OneDrive for Business](https://onedrive.live.com/about/business/) download pages.
+To manually download and install OneDrive, see the [OneDrive](https://onedrive.live.com/about/download/) or [OneDrive for Business](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business) download pages.
You can also use the following PowerShell script. It will automatically download and install the latest version of OneDrive. Once the OneDrive client is downloaded, run the installer. In our example, we use the `/allUsers` switch to install OneDrive for all users on the machine. We also use the `/silent` switch to silently install OneDrive.
lab-services How To Setup Lab Gpu 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-setup-lab-gpu-1.md
On the first page of the lab creation wizard, in the **Which virtual machine siz
In this process, you have the option of selecting either **Visualization** or **Compute** GPUs. It's important to choose the type of GPU based on the software that your students will use.
-As described in the following table, the *compute* GPU size is intended for compute-intensive applications. For example, the [Deep Learning in Natural Language Processing class type](./class-type-deep-learning-natural-language-processing.md) uses the **Small GPU (Compute)** size. The compute GPU is suitable for this type of class, because students use deep learning frameworks and tools that are provided by the [Data Science Virtual Machine image](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804) to train deep learning models with large sets of data.
+As described in the following table, the *compute* GPU size is intended for compute-intensive applications. For example, the [Deep Learning in Natural Language Processing class type](./class-type-deep-learning-natural-language-processing.md) uses the **Small GPU (Compute)** size. The compute GPU is suitable for this type of class, because students use deep learning frameworks and tools that are provided by the [Data Science Virtual Machine image](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) to train deep learning models with large sets of data.
| Size | vCPUs | RAM | Description | | - | -- | | -- |
lab-services How To Setup Lab Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-setup-lab-gpu.md
On the first page of the lab creation wizard, in the **Virtual machine size** dr
In this process, you have the option of selecting either **Visualization** or **Compute** GPUs. It's important to choose the type of GPU based on the software that your students will use.
-As described in the following table, the *compute* GPU size is intended for compute-intensive applications. For example, the [Deep Learning in Natural Language Processing class type](./class-type-deep-learning-natural-language-processing.md) uses the **Small GPU (Compute)** size. The compute GPU is suitable for this type of class, because students use deep learning frameworks and tools that are provided by the [Data Science Virtual Machine image](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804) to train deep learning models with large sets of data.
+As described in the following table, the *compute* GPU size is intended for compute-intensive applications. For example, the [Deep Learning in Natural Language Processing class type](./class-type-deep-learning-natural-language-processing.md) uses the **Small GPU (Compute)** size. The compute GPU is suitable for this type of class, because students use deep learning frameworks and tools that are provided by the [Data Science Virtual Machine image](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) to train deep learning models with large sets of data.
| Size | vCPUs | RAM | Description | | - | -- | | -- |
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
ms.suite: integration Previously updated : 08/15/2022 Last updated : 10/26/2022 #Customer intent: As a developer, I want to create an automated integration workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
For optimal designer responsiveness and performance, review and follow these gui
| **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. | | **Resource Group** | Yes | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) where you create your logic app and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a resource group named **Fabrikam-Workflows-RG**. | | **Logic App name** | Yes | <*logic-app-name*> | Your logic app name, which must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>**Note**: Your logic app's name automatically gets the suffix, `.azurewebsites.net`, because the **Logic App (Standard)** resource is powered by the single-tenant Azure Logic Apps runtime, which uses the Azure Functions extensibility model and is hosted as an extension on the Azure Functions runtime. Azure Functions uses the same app naming convention. <br><br>This example creates a logic app named **Fabrikam-Workflows**. |
- |||||
1. Before you continue making selections, go to the **Plan** section. For **Plan type**, select **Standard** so that you view only the settings that apply to the Standard plan-based logic app type. The **Plan type** property specifies the hosting plan and billing model to use for your logic app. For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md).
For optimal designer responsiveness and performance, review and follow these gui
|--|-| | **Standard** | This logic app type is the default selection and runs in single-tenant Azure Logic Apps and uses the [Standard billing model](logic-apps-pricing.md#standard-pricing). | | **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). |
- |||
| Property | Required | Value | Description | |-|-|-|-| | **Windows Plan** | Yes | <*plan-name*> | The plan name to use. Either select an existing plan name or provide a name for a new plan. <p><p>This example uses the name `Fabrikam-Service-Plan`. | | **SKU and size** | Yes | <*pricing-tier*> | The [pricing tier](../app-service/overview-hosting-plans.md) to use for your logic app. Your selection affects the pricing, compute, memory, and storage that your logic app and workflows use. <p><p>To change the default pricing tier, select **Change size**. You can then select other pricing tiers, based on the workload that you need. <p><p>For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md#standard-pricing). |
- |||||
1. Now continue making the following selections:
For optimal designer responsiveness and performance, review and follow these gui
|-|-|-|-| | **Publish** | Yes | **Workflow** | This option appears and applies only when **Plan type** is set to the **Standard** logic app type. By default, this option is set to **Workflow** and creates an empty logic app resource where you add your first workflow. <p><p>**Note**: Currently, the **Docker Container** option requires a [*custom location*](../azure-arc/kubernetes/conceptual-custom-locations.md) on an Azure Arc enabled Kubernetes cluster, which you can use with [Azure Arc enabled Logic Apps (Standard)](azure-arc-enabled-logic-apps-overview.md). The resource locations for your logic app, custom location, and cluster must all be the same. | | **Region** | Yes | <*Azure-region*> | The Azure datacenter region to use for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. <br><br>- If you previously chose **Docker Container**, select your custom location from the **Region** list. <br><br>- If you want to deploy your app to an existing [App Service Environment v3 resource](../app-service/environment/overview.md), you can select that environment from the **Region** list. |
- |||||
> [!NOTE] >
For optimal designer responsiveness and performance, review and follow these gui
|-|-|-|-| | **Storage type** | Yes | - **Azure Storage** <br>- **SQL and Azure Storage** | The storage type that you want to use for workflow-related artifacts and data. <p><p>- To deploy only to Azure, select **Azure Storage**. <p><p>- To use SQL as primary storage and Azure Storage as secondary storage, select **SQL and Azure Storage**, and review [Set up SQL database storage for Standard logic apps in single-tenant Azure Logic Apps](set-up-sql-db-storage-single-tenant-standard-workflows.md). <p><p>**Note**: If you're deploying to an Azure region, you still need an Azure storage account, which is used to complete the one-time hosting of the logic app's configuration on the Azure Logic Apps platform. The ongoing workflow state, run history, and other runtime artifacts are stored in your SQL database. <p><p>For deployments to a custom location that's hosted on an Azure Arc cluster, you only need SQL as your storage provider. | | **Storage account** | Yes | <*Azure-storage-account-name*> | The [Azure Storage account](../storage/common/storage-account-overview.md) to use for storage transactions. <p><p>This resource name must be unique across regions and have 3-24 characters with only numbers and lowercase letters. Either select an existing account or create a new account. <p><p>This example creates a storage account named `fabrikamstorageacct`. |
- |||||
1. Next, if your creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app.
For optimal designer responsiveness and performance, review and follow these gui
![Screenshot that shows the Azure portal and new logic app resource settings.](./media/create-single-tenant-workflows-azure-portal/check-logic-app-resource-settings.png)
- > [!TIP]
- > If you get a validation error after this step, open and review the error details. For example,
- > if your selected region reaches a quota for resources that you're trying to create, you might
- > have to try a different region.
+ > [!NOTE]
+ >
+ > The read-only **Runtime stack** property is automatically set at creation.
+ >
+ > If you get a validation error during this step, open and review the error details.
+ > For example, if your selected region reaches a quota for resources that you're
+ > trying to create, you might have to try a different region.
After Azure finishes deployment, your logic app is automatically live and running but doesn't do anything yet because the resource is empty, and you haven't added any workflows yet.
Before you can add a trigger to a blank workflow, make sure that the workflow de
| **To** | Yes | <*your-email-address*> | The email recipient, which can be your email address for test purposes. This example uses the fictitious email, `sophiaowen@fabrikam.com`. | | **Subject** | Yes | `An email from your example workflow` | The email subject | | **Body** | Yes | `Hello from your example workflow!` | The email body content |
- ||||
> [!NOTE] > When making any changes in the details pane on the **Settings**, **Static Result**, or **Run After** tabs,
For a stateful workflow, after each workflow run, you can view the run history,
| **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. | | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <p><p>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. | | **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |
- |||
1. To review the status for each step in a run, select the run that you want to review.
For a stateful workflow, after each workflow run, you can view the run history,
| **Succeeded with retries** | The action succeeded but only after a single or multiple retries. To review the retry history, in the run history details view, select that action so that you can view the inputs and outputs. | | **Timed out** | The action stopped due to the timeout limit specified by that action's settings. | | **Waiting** | Applies to a webhook action that's waiting for an inbound request from a caller. |
- |||
[aborted-icon]: ./media/create-single-tenant-workflows-azure-portal/aborted.png [canceled-icon]: ./media/create-single-tenant-workflows-azure-portal/cancelled.png
If you use source control, you can seamlessly redeploy a deleted **Logic App (St
| `AzureWebJobsStorage` | Replace the existing value with the previously copied connection string from your storage account. | | `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | Replace the existing value with the previously copied string from your storage account. | | `WEBSITE_CONTENTSHARE` | Replace the existing value with the previously copied file share name. |
- |||
1. On your logic app menu, under **Workflows**, select **Connections**.
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
If the [Client Certificate](../active-directory/authentication/active-directory-
| Property (designer) | Property (JSON) | Required | Value | Description | ||--|-|-|-| | **Authentication** | `type` | Yes | **Client Certificate** <br>or <br>`ClientCertificate` | The authentication type to use. You can manage certificates with [Azure API Management](../api-management/api-management-howto-mutual-certificates.md). <p></p>**Note**: Custom connectors don't support certificate-based authentication for both inbound and outbound calls. |
-| **Pfx** | `pfx` | Yes | <*encoded-pfx-file-content*> | The base64-encoded content from a Personal Information Exchange (PFX) file <p><p>To convert the PFX file into base64-encoded format, you can use PowerShell by following these steps: <p>1. Save the certificate content into a variable: <p> `$pfx_cert = get-content 'c:\certificate.pfx' -Encoding Byte` <p>2. Convert the certificate content by using the `ToBase64String()` function and save that content to a text file: <p> `[System.Convert]::ToBase64String($pfx_cert) | Out-File 'pfx-encoded-bytes.txt'` <p><p>**Troubleshooting**: If you use the `cert mmc/PowerShell` command, you might get this error: <p><p>`Could not load the certificate private key. Please check the authentication certificate password is correct and try again.` <p><p>To resolve this error, try converting the PFX file to a PEM file and back again by using the `openssl` command: <p><p>`openssl pkcs12 -in certificate.pfx -out certificate.pem` <br>`openssl pkcs12 -in certificate.pem -export -out certificate2.pfx` <p><p>Afterwards, when you get the base64-encoded string for the certificate's newly converted PFX file, the string now works in Azure Logic Apps. |
+| **Pfx** | `pfx` | Yes | <*encoded-pfx-file-content*> | The base64-encoded content from a Personal Information Exchange (PFX) file <p><p>To convert the PFX file into base64-encoded format, you can use PowerShell 7 by following these steps: <p>1. Save the certificate content into a variable: <p> `$pfx_cert = [System.IO.File]::ReadAllBytes('c:\certificate.pfx')` <p>2. Convert the certificate content by using the `ToBase64String()` function and save that content to a text file: <p> `[System.Convert]::ToBase64String($pfx_cert) | Out-File 'pfx-encoded-bytes.txt'` <p><p>**Troubleshooting**: If you use the `cert mmc/PowerShell` command, you might get this error: <p><p>`Could not load the certificate private key. Please check the authentication certificate password is correct and try again.` <p><p>To resolve this error, try converting the PFX file to a PEM file and back again by using the `openssl` command: <p><p>`openssl pkcs12 -in certificate.pfx -out certificate.pem` <br>`openssl pkcs12 -in certificate.pem -export -out certificate2.pfx` <p><p>Afterwards, when you get the base64-encoded string for the certificate's newly converted PFX file, the string now works in Azure Logic Apps. |
| **Password** | `password`| No | <*password-for-pfx-file*> | The password for accessing the PFX file | |||||
machine-learning Azure Machine Learning Release Notes Cli V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes-cli-v2.md
In this article, learn about Azure Machine Learning CLI (v2) releases.
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes-v2%22&locale=en-us`
+## 2022-10-10
+
+### Azure Machine Learning CLI (v2) v2.10.0
+
+- The CLI now depends on the GA version of azure-ai-ml.
+- Dropped support for Python 3.6.
+- `az ml registry`
+ - New command group added to manage ML asset registries.
+- `az ml job`
+ - Added `az ml job show-services` command.
+ - Added model sweeping and hyperparameter tuning to AutoML NLP jobs.
+- `az ml schedule`
+ - Added `month_days` property in recurrence schedule.
+- `az ml compute`
+ - Added custom setup scripts support for compute instances.
+
+## 2022-09-22
+
+### Azure Machine Learning CLI (v2) v2.8.0
+
+- `az ml job`
+ - Added spark job support.
+ - Added shm_size and docker_args to job.
+- `az ml compute`
+ - Compute instance supports managed identity.
+ - Added idle shutdown time support for compute instance.
+- `az ml online-deployment`
+ - Added support for data collection for eventhub and data storage.
+ - Added syntax validation for scoring script.
+- `az ml batch-deployment`
+ - Added syntax validation for scoring script.
+
+## 2022-08-10
+
+### Azure Machine Learning CLI (v2) v2.7.0
+
+- `az ml component`
+ - Added AutoML component.
+- `az ml dataset`
+ - Deprecated command group (Use `az ml data` instead).
+
+## 2022-07-16
+
+### Azure Machine Learning CLI (v2) v2.6.0
+
+- Added MoonCake cloud support.
+- `az ml job`
+ - Allow Git repo URLs to be used as code.
+ - AutoML jobs use the same input schema as other job types.
+ - Pipeline jobs now support registry assets.
+- `az ml component`
+ - Allow Git repo URLs to be used as code.
+- `az ml online-endpoint`
+ - MIR now supports registry assets.
+ ## 2022-05-24 ### Azure Machine Learning CLI (v2) v2.4.0
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
Previously updated : 09/26/2022 Last updated : 10/25/2022 # Azure Machine Learning Python SDK release notes
__RSS feed__: Get notified when this page is updated by copying and pasting the
`https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+## 2022-10-25
+
+### Azure Machine Learning SDK for Python v1.47.0
+ + **azureml-automl-dnn-nlp**
+ + Runtime changes for AutoML NLP to account for fixed training parameters, as part of the newly introduced model sweeping and hyperparameter tuning.
+ + **azureml-mlflow**
+ + AZUREML_ARTIFACTS_DEFAULT_TIMEOUT can be used to control the timeout for artifact uploads (see the sketch after this list).
+ + **azureml-train-automl-runtime**
+ + Many Models and Hierarchical Time Series training now enforces a check on timeout parameters to detect conflicts before submitting the experiment to run. This prevents experiment failures during the run by raising an exception before the experiment is submitted.
+ + Customers can now control the step size while using rolling forecast in Many Models inference.
+ + ManyModels inference with unpartitioned tabular data now supports forecast_quantiles.
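+
+As a hedged illustration of the `azureml-mlflow` item above, the environment variable might be set from Python before logging artifacts with MLflow; the variable name comes from the release note, while the artifact path and the assumption that the value is interpreted as seconds are ours:
+
+```python
+import os
+import mlflow
+
+# Assumption: the timeout value is interpreted as seconds (30 minutes here).
+os.environ["AZUREML_ARTIFACTS_DEFAULT_TIMEOUT"] = "1800"
+
+with mlflow.start_run():
+    # Hypothetical artifact path; replace with a file produced by your run.
+    mlflow.log_artifact("outputs/model.pkl")
+```
+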
+ ## 2022-09-26 ### Azure Machine Learning SDK for Python v1.46.0
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-deploy-model-custom-output.md
In any of those cases, Batch Deployments allow you to take control of the output
[!INCLUDE [basic cli prereqs](../../../includes/machine-learning-cli-prereqs.md)] * A model registered in the workspace. In this tutorial, we'll use an MLflow model. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
-* You must have an endpoint already created. If you don't, follow the instructions at [Use batch endpoints for batch scoring](../how-to-use-batch-endpoint.md). This example assumes the endpoint is named `heart-classifier-batch`.
-* You must have a compute created where to deploy the deployment. If you don't, follow the instructions at [Create compute](../how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
+* You must have an endpoint already created. If you don't, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md). This example assumes the endpoint is named `heart-classifier-batch`.
+* You must have a compute cluster created where the deployment will run. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
## About this sample
Follow the next steps to create a deployment using the previous scoring script:
2. MLflow models don't require you to indicate an environment or a scoring script when creating the deployments because they are created for you. However, in this case we are going to indicate a scoring script and environment since we want to customize how inference is executed. > [!NOTE]
- > This example assumes you have an endpoint created with the name `heart-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](../how-to-use-batch-endpoint.md).
+ > This example assumes you have an endpoint created with the name `heart-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
# [Azure ML CLI](#tab/cli)
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-image-processing-batch.md
Title: "Image processing tasks with batch deployments"
+ Title: "Image processing with batch deployments"
description: Learn how to deploy a model in batch endpoints that process images
-# Image processing tasks with batch deployments
+# Image processing with batch deployments
[!INCLUDE [ml v2](../../../includes/machine-learning-dev-v2.md)]
Batch Endpoints can be used for processing tabular data, but also any other file
[!INCLUDE [basic cli prereqs](../../../includes/machine-learning-cli-prereqs.md)]
-* You must have an endpoint already created. If you don't please follow the instructions at [Use batch endpoints for batch scoring](../how-to-use-batch-endpoint.md). This example assumes the endpoint is named `imagenet-classifier-batch`.
-* You must have a compute created where to deploy the deployment. If you don't please follow the instructions at [Create compute](../how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
+* You must have an endpoint already created. If you don't, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md). This example assumes the endpoint is named `imagenet-classifier-batch`.
+* You must have a compute cluster created where the deployment will run. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
## About the model used in the sample
Once the scoring script is created, it's time to create a batch deployment for it
1. Now, let's create the deployment. > [!NOTE]
- > This example assumes you have an endpoint created with the name `imagenet-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](../how-to-use-batch-endpoint.md).
+ > This example assumes you have an endpoint created with the name `imagenet-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
# [Azure ML CLI](#tab/cli)
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-mlflow-batch.md
-# Using MLflow models in batch deployments
+# Use MLflow models in batch deployments
[!INCLUDE [cli v2](../../../includes/machine-learning-dev-v2.md)]
The model has been trained using an `XGBBoost` classifier and all the required p
### Follow along in Jupyter Notebooks
-You can follow along this sample in the following notebooks. In the cloned repository, open the notebook: `azureml-examples/sdk/python/endpoints/batch/mlflow-for-batch-tabular.ipynb`.
+You can follow along with this sample in the following notebook. In the cloned repository, open the notebook: [mlflow-for-batch-tabular.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/mlflow-for-batch-tabular.ipynb).
## Steps
Use the following steps to deploy an MLflow model with a custom scoring script.
model = mlflow.pyfunc.load(model_path) def run(mini_batch):
- resultList = []
+ results = pd.DataFrame(columns=['file', 'predictions'])
for file_path in mini_batch: data = pd.read_csv(file_path)
Use the following steps to deploy an MLflow model with a custom scoring script.
df = pd.DataFrame(pred, columns=['predictions']) df['file'] = os.path.basename(file_path)
- resultList.extend(df.values)
+ results = pd.concat([results, df])
- return resultList
+ return results
``` 1. Let's create an environment where the scoring script can be executed:
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-nlp-processing-batch.md
Title: "NLP tasks with batch deployments"
+ Title: "Text processing with batch deployments"
description: Learn how to use batch deployments to process text and output results.
-# NLP tasks with batch deployments
+# Text processing with batch deployments
[!INCLUDE [cli v2](../../../includes/machine-learning-dev-v2.md)]
Batch Endpoints can be used for processing tabular data, but also any other file
[!INCLUDE [basic cli prereqs](../../../includes/machine-learning-cli-prereqs.md)]
-* You must have an endpoint already created. If you don't please follow the instructions at [Use batch endpoints for batch scoring](../how-to-use-batch-endpoint.md). This example assumes the endpoint is named `text-summarization-batch`.
-* You must have a compute created where to deploy the deployment. If you don't please follow the instructions at [Create compute](../how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
+* You must have an endpoint already created. If you don't, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md). This example assumes the endpoint is named `text-summarization-batch`.
+* You must have a compute cluster created where the deployment will run. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
## About the model used in the sample
Once the scoring script is created, it's time to create a batch deployment for it
2. Now, let's create the deployment. > [!NOTE]
- > This example assumes you have an endpoint created with the name `text-summarization-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](../how-to-use-batch-endpoint.md).
+ > This example assumes you have an endpoint created with the name `text-summarization-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
# [Azure ML CLI](#tab/cli)
machine-learning How To Secure Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-secure-batch-endpoint.md
When deploying a machine learning model to a batch endpoint, you can secure thei
All the batch endpoints created inside a secure workspace are deployed as private batch endpoints by default. No further configuration is required. > [!IMPORTANT]
-> When working on a private link-enabled workspaces, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Please use the Azure ML CLI v2 instead for job creation. For more details about how to use it see [Invoke the batch endpoint to start a batch scoring job](../how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-to-start-a-batch-scoring-job).
+> When working in a private link-enabled workspace, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Please use the Azure ML CLI v2 instead for job creation. For more details about how to use it, see [Invoke the batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-to-start-a-batch-scoring-job).
The following diagram shows how the networking looks like for batch endpoints when deployed in a private workspace:
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-troubleshoot-batch-endpoints.md
+
+ Title: "Troubleshooting batch endpoints"
+
+description: Learn how to troubleshoot and diagnose errors with batch endpoint jobs
+ Last updated : 10/10/2022
+# Troubleshooting batch endpoints
++
+Learn how to troubleshoot and solve, or work around, common errors you may come across when using [batch endpoints](how-to-use-batch-endpoint.md) for batch scoring.
+
+## Understanding logs of a batch scoring job
+
+### Get logs
+
+After you invoke a batch endpoint using the Azure CLI or REST, the batch scoring job will run asynchronously. There are two options to get the logs for a batch scoring job.
+
+Option 1: Stream logs to local console
+
+You can run the following command to stream system-generated logs to your console. Only logs in the `azureml-logs` folder will be streamed.
+
+```azurecli
+az ml job stream --name <job_name>
+```
+
+Option 2: View logs in studio
+
+To get the link to the run in studio, run:
+
+```azurecli
+az ml job show --name <job_name> --query interaction_endpoints.Studio.endpoint -o tsv
+```
+
+1. Open the job in studio using the value returned by the above command.
+1. Choose __batchscoring__
+1. Open the __Outputs + logs__ tab
+1. Choose the log(s) you wish to review
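+
+If you prefer the Azure ML Python SDK v2 over the CLI, a rough equivalent sketch (assuming an existing `MLClient` handle named `ml_client` and a hypothetical job name) would be:
+
+```python
+# Hypothetical job name; use the name returned when you invoked the endpoint.
+job_name = "batchjob-0123456789"
+
+ml_client.jobs.stream(job_name)                  # stream logs to the console
+print(ml_client.jobs.get(job_name).studio_url)   # link to the run in studio
+```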
+
+### Understand log structure
+
+There are two top-level log folders, `azureml-logs` and `logs`.
+
+The file `~/azureml-logs/70_driver_log.txt` contains information from the controller that launches the scoring script.
+
+Because of the distributed nature of batch scoring jobs, there are logs from several different sources. However, two combined files are created that provide high-level information:
+
+- `~/logs/job_progress_overview.txt`: This file provides high-level information about the number of mini-batches (also known as tasks) created so far and the number of mini-batches processed so far. As the mini-batches end, the log records the results of the job. If the job failed, it will show the error message and where to start the troubleshooting.
+
+- `~/logs/sys/master_role.txt`: This file provides the principal node (also known as the orchestrator) view of the running job. This log provides information on task creation, progress monitoring, and the job result.
+
+For a concise understanding of errors in your script, there is:
+
+- `~/logs/user/error.txt`: This file will try to summarize the errors in your script.
+
+For more information on errors in your script, there is:
+
+- `~/logs/user/error/`: This folder contains full stack traces of exceptions thrown while loading and running the entry script.
+
+When you need a full understanding of how each node executed the score script, look at the individual process logs for each node. The process logs can be found in the `sys/node` folder, grouped by worker nodes:
+
+- `~/logs/sys/node/<ip_address>/<process_name>.txt`: This file provides detailed info about each mini-batch as it's picked up or completed by a worker. For each mini-batch, this file includes:
+
+ - The IP address and the PID of the worker process.
+ - The total number of items, the number of successfully processed items, and the number of failed items.
+ - The start time, duration, process time, and run method time.
+
+You can also view the results of periodic checks of the resource usage for each node. The log files and setup files are in this folder:
+
+- `~/logs/perf`: Set `--resource_monitor_interval` to change the checking interval in seconds. The default interval is `600` seconds (10 minutes). To stop the monitoring, set the value to `0`. Each `<ip_address>` folder includes:
+
+ - `os/`: Information about all running processes in the node. One check runs an operating system command and saves the result to a file. On Linux, the command is `ps`.
+ - `%Y%m%d%H`: The subfolder name is the time of the check, down to the hour.
+ - `processes_%M`: The file ends with the minute of the checking time.
+ - `node_disk_usage.csv`: Detailed disk usage of the node.
+ - `node_resource_usage.csv`: Resource usage overview of the node.
+ - `processes_resource_usage.csv`: Resource usage overview of each process.
+
+### How to log in scoring script
+
+You can use Python logging in your scoring script. Logs are stored in `logs/user/stdout/<node_id>/processNNN.stdout.txt`.
+
+```python
+import argparse
+import logging
+
+# Get logging_level
+arg_parser = argparse.ArgumentParser(description="Argument parser.")
+arg_parser.add_argument("--logging_level", type=str, help="logging level")
+args, unknown_args = arg_parser.parse_known_args()
+print(args.logging_level)
+
+# Initialize Python logger
+logger = logging.getLogger(__name__)
+logger.setLevel(args.logging_level.upper())
+logger.info("Info log statement")
+logger.debug("Debug log statement")
+```
+
+## Common issues
+
+The following section contains common problems and solutions you may see during batch endpoint development and consumption.
+
+### No module named 'azureml'
+
+__Reason__: Azure Machine Learning Batch Deployments require the package `azureml-core` to be installed.
+
+__Solution__: Add `azureml-core` to your conda dependencies file.
+
+### Output already exists
+
+__Reason__: Azure Machine Learning Batch Deployments can't overwrite the `predictions.csv` file if it already exists in the output location.
+
+__Solution__: If you indicate an output location for the predictions, ensure the path points to a file that doesn't exist yet.
+
+### The run() function in the entry script had timeout for [number] times
+
+__Message logged__: `No progress update in [number] seconds. No progress update in this check. Wait [number] seconds since last update.`
+
+__Reason__: Batch Deployments can be configured with a `timeout` value that indicates the amount of time the deployment should wait for a single mini-batch to be processed. If the execution of the batch takes longer than this value, the task is aborted. Aborted tasks can be retried up to a configurable maximum number of times. If the `timeout` occurs on each retry, the deployment job fails. These properties can be configured for each deployment.
+
+__Solution__: Increase the `timeout` value of the deployment by updating the deployment. These properties are configured in the parameter `retry_settings`. By default, `timeout=30` and `retries=3` are configured. When deciding the value of the `timeout`, take into consideration the number of files being processed on each batch and the size of each of those files. You can also decrease the number of files per mini-batch so that each mini-batch is smaller and quicker to execute.
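+
+As a hedged sketch (not from the article) of how these properties might be updated with the Azure ML Python SDK v2, assuming an existing `MLClient` handle named `ml_client` and hypothetical endpoint and deployment names:
+
+```python
+from azure.ai.ml.entities import BatchRetrySettings
+
+# Hypothetical names; replace with your own endpoint and deployment.
+deployment = ml_client.batch_deployments.get(
+    name="heart-classifier-dpl", endpoint_name="heart-classifier-batch"
+)
+
+# Allow up to 5 minutes per mini-batch, with up to 3 retries.
+deployment.retry_settings = BatchRetrySettings(max_retries=3, timeout=300)
+ml_client.batch_deployments.begin_create_or_update(deployment)
+```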
+
+### Dataset initialization failed
+
+__Message logged__: Dataset initialization failed: UserErrorException: Message: Cannot mount Dataset(id='xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx', name='None', version=None). Source of the dataset is either not accessible or does not contain any data.
+
+__Reason__: The compute cluster where the deployment is running can't mount the storage where the data asset is located. The managed identity of the compute doesn't have permissions to perform the mount.
+
+__Solution__: Ensure the identity associated with the compute cluster where your deployment is running has at least [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../../storage/blobs/assign-azure-role-data-access.md).
+
+### Data set node [code] references parameter dataset_param which doesn't have a specified value or a default value
+
+__Message logged__: Data set node [code] references parameter dataset_param which doesn't have a specified value or a default value.
+
+__Reason__: The input data asset provided to the batch endpoint isn't supported.
+
+__Solution__: Ensure you are providing a data input that is supported for batch endpoints.
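+
+For reference, a minimal sketch (our own illustration, not taken from the article) of invoking a batch endpoint with a supported `uri_folder` input using the Azure ML Python SDK v2, assuming an existing `ml_client` plus a hypothetical endpoint name and datastore path:
+
+```python
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes
+
+# Hypothetical endpoint name and datastore path.
+input_data = Input(
+    type=AssetTypes.URI_FOLDER,
+    path="azureml://datastores/workspaceblobstore/paths/heart-dataset/",
+)
+job = ml_client.batch_endpoints.invoke(
+    endpoint_name="heart-classifier-batch", input=input_data
+)
+```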
+
+### User program failed with Exception: Run failed, please check logs for details
+
+__Message logged__: User program failed with Exception: Run failed, please check logs for details. You can check logs/readme.txt for the layout of logs.
+
+__Reason__: There was an error while running the `init()` or `run()` function of the scoring script.
+
+__Solution__: Go to __Outputs + Logs__ and open the file at `logs > user > error > 10.0.0.X > process000.txt`. You will see the error message generated by the `init()` or `run()` method.
+
+### There is no succeeded mini batch item returned from run()
+
+__Message logged__: There is no succeeded mini batch item returned from run(). Please check 'response: run()' in https://aka.ms/batch-inference-documentation.
+
+__Reason__: The batch endpoint failed to provide data in the expected format to the `run()` method. This may be due to corrupted files being read or incompatibility of the input data with the signature of the model (MLflow).
+
+__Solution__: To understand what may be happening, go to __Outputs + Logs__ and open the file at `logs > user > stdout > 10.0.0.X > process000.stdout.txt`. Look for error entries like `Error processing input file`. You should find details there about why the input file couldn't be read correctly.
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-use-batch-endpoint.md
+
+ Title: 'Use batch endpoints for batch scoring'
+
+description: In this article, learn how to create a batch endpoint to continuously batch score large data.
+ Last updated : 05/24/2022
+#Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
++
+# Use batch endpoints for batch scoring
++
+Batch endpoints provide a convenient way to run inference over large volumes of data. They simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. For more information, see [What are Azure Machine Learning endpoints?](../concept-endpoints.md).
+
+Use batch endpoints when:
+
+> [!div class="checklist"]
+> * You have expensive models that require a longer time to run inference.
+> * You need to perform inference over large amounts of data, distributed in multiple files.
+> * You don't have low latency requirements.
+> * You can take advantage of parallelization.
+
+In this article, you will learn how to use batch endpoints to do batch scoring.
+
+## Prerequisites
++
+### About this example
+
+In this example, we are going to deploy a model to solve the classic MNIST ("Modified National Institute of Standards and Technology") digit recognition problem to perform batch inferencing over large amounts of data (image files). In the first section of this tutorial, we are going to create a batch deployment with a model created using Torch. This deployment will become our default one in the endpoint. In the second half, [we are going to see how we can create a second deployment](#adding-deployments-to-an-endpoint) using a model created with TensorFlow (Keras), test it out, and then switch the endpoint to start using the new deployment as default.
+
+### Clone the example repository
++
+### Create compute
+
+Batch endpoints run on compute clusters. They support both [Azure Machine Learning Compute clusters (AmlCompute)](../how-to-create-attach-compute-cluster.md) and [Kubernetes clusters](../how-to-attach-kubernetes-anywhere.md). Clusters are a shared resource, so one cluster can host one or many batch deployments (along with other workloads if desired).
+
+Run the following code to create an Azure Machine Learning compute cluster. The following examples in this article use the compute created here named `batch-cluster`. Adjust as needed and reference your compute using `azureml:<your-compute-name>`.
+
+# [Azure ML CLI](#tab/cli)
++
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+from azure.ai.ml.entities import AmlCompute
+
+# `ml_client` is an existing MLClient handle to your workspace.
+compute_name = "batch-cluster"
+compute_cluster = AmlCompute(name=compute_name, description="amlcompute", min_instances=0, max_instances=5)
+ml_client.begin_create_or_update(compute_cluster)
+```
+
+# [studio](#tab/studio)
+
+*Create a compute cluster as explained in the following tutorial [Create an Azure Machine Learning compute cluster](../how-to-create-attach-compute-cluster.md?tabs=azure-studio).*
+++
+> [!NOTE]
+> You are not charged for compute at this point as the cluster will remain at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. Learn more about [manage and optimize cost for AmlCompute](../how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
++
+### Registering the model
+
+Batch Deployments can only deploy models registered in the workspace. You can skip this step if the model you are trying to deploy is already registered. In this case, we are registering a Torch model for the popular digit recognition problem (MNIST).
+
+> [!TIP]
+> Models are associated with the deployment rather than with the endpoint. This means that a single endpoint can serve different models or different model versions as long as they are deployed in different deployments.
+
+
+# [Azure ML CLI](#tab/cli)
+
+```azurecli
+MODEL_NAME='mnist'
+az ml model create --name $MODEL_NAME --type "custom_model" --path "./mnist/model/"
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+from azure.ai.ml.entities import Model
+from azure.ai.ml.constants import AssetTypes
+
+model_name = 'mnist'
+model = ml_client.models.create_or_update(
+    Model(name=model_name, path='./mnist/model/', type=AssetTypes.CUSTOM_MODEL)
+)
+```
+
+# [studio](#tab/studio)
+
+1. Navigate to the __Models__ tab on the side menu.
+1. Click on __Register__ > __From local files__.
+1. In the wizard, leave the option *Model type* as __Unspecified type__.
+1. Click on __Browse__ > __Browse folder__ > Select the folder `./mnist/model/` > __Next__.
+1. Configure the name of the model: `mnist`. You can leave the rest of the fields as they are.
+1. Click on __Register__.
+++
+## Create a batch endpoint
+
+A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch scoring job. A batch scoring job is a job that scores multiple inputs (for more, see [What are batch endpoints?](../concept-endpoints.md#what-are-batch-endpoints)). A batch deployment is a set of compute resources hosting the model that does the actual batch scoring. One batch endpoint can have multiple batch deployments.
+
+> [!TIP]
+> One of the batch deployments will serve as the default deployment for the endpoint. The default deployment will be used to do the actual batch scoring when the endpoint is invoked. Learn more about [batch endpoints and batch deployment](../concept-endpoints.md#what-are-batch-endpoints).
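+
+For illustration only, a sketch of how you might later switch the default deployment with the Azure ML Python SDK v2, assuming an existing `ml_client`, the endpoint created below, and a hypothetical deployment named `mnist-torch-dpl`:
+
+```python
+# The deployment must already exist under the endpoint.
+endpoint = ml_client.batch_endpoints.get("mnist-batch")
+endpoint.defaults.deployment_name = "mnist-torch-dpl"
+ml_client.batch_endpoints.begin_create_or_update(endpoint)
+```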
+
+### Steps
+
+1. Decide on the name of the endpoint. The name of the endpoint will end up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
+
+ # [Azure ML CLI](#tab/cli)
+
+ In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+
+ ```azurecli
+ ENDPOINT_NAME="mnist-batch"
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+
+ ```python
+ endpoint_name="mnist-batch"
+ ```
+
+ # [studio](#tab/studio)
+
+ *You will configure the name of the endpoint later in the creation wizard.*
+
+
+1. Configure your batch endpoint
+
+ # [Azure ML CLI](#tab/cli)
+
+ The following YAML file defines a batch endpoint, which you can include in the CLI command for [batch endpoint creation](#create-a-batch-endpoint). In the repository, this file is located at `/cli/endpoints/batch/batch-endpoint.yml`.
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-endpoint.yml":::
+
+ The following table describes the key properties of the endpoint. For the full batch endpoint YAML schema, see [CLI (v2) batch endpoint YAML schema](../reference-yaml-endpoint-batch.md).
+
+ | Key | Description |
+ | | -- |
+ | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
+ | `description` | The description of the batch endpoint. This property is optional. |
+ | `auth_mode` | The authentication method for the batch endpoint. Currently only Azure Active Directory token-based authentication (`aad_token`) is supported. |
+ | `defaults.deployment_name` | The name of the deployment that will serve as the default deployment for the endpoint. |
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ # create a batch endpoint
+ endpoint = BatchEndpoint(
+ name=endpoint_name,
+ description="A batch endpoint for scoring images from the MNIST dataset.",
+ )
+ ```
+
+ | Key | Description |
+ | | -- |
+ | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
+ | `description` | The description of the batch endpoint. This property is optional. |
+ | `auth_mode` | The authentication method for the batch endpoint. Currently only Azure Active Directory token-based authentication (`aad_token`) is supported. |
+ | `defaults.deployment_name` | The name of the deployment that will serve as the default deployment for the endpoint. |
+
+ # [studio](#tab/studio)
+
+ *You will create the endpoint in the same step you create the deployment.*
+
+
+1. Create the endpoint:
+
+ # [Azure ML CLI](#tab/cli)
+
+ Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_batch_endpoint" :::
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+ # [studio](#tab/studio)
+
+ *You will create the endpoint in the same step you are creating the deployment later.*
++
+## Create a scoring script
+
+Batch deployments require a scoring script that indicates how the given model should be executed and how input data must be processed. For MLflow models this scoring script is not required as it is automatically generated by Azure Machine Learning. If your model is an MLflow model, you can skip this step.
+
+> [!TIP]
+> For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+In this case, we are deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script looks as follows:
++
+### Understanding the scoring script
+
+The scoring script is a Python file (`.py`) that contains the logic about how to run the model and read the input data submitted by the batch deployment executor driver. It must contain two methods:
+
+#### The `init` method
+
+Use the `init()` method for any costly or common preparation. For example, use it to load the model into a global object. This function will be called once at the beginning of the process. The path to your model's files is available in an environment variable called `AZUREML_MODEL_DIR`. Use this variable to locate the files associated with the model.
+
+#### The `run` method
+
+Use the `run(mini_batch: List[str]) -> Union[List[Any], pandas.DataFrame]` method to perform the scoring of each mini-batch generated by the batch deployment. This method is called once for each `mini_batch` generated from your input data. Batch deployments read data in batches according to how the deployment is configured.
+
+> [!IMPORTANT]
+> Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files generates 10 batches of 10 files each. This happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism or decreasing the number of files per mini-batch. At this moment, batch deployments can't account for skews in the file size distribution.
+
+The method receives a list of file paths as a parameter (`mini_batch`). You can use this list to either iterate over each file and process it one by one, or to read the entire batch and process it at once. The best option will depend on your compute memory and the throughput you need to achieve. For an example of how to read entire batches of data at once, see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments).
+
+The `run()` method should return a pandas DataFrame or an array/list. Each returned output element indicates one successful run of an input element in the input `mini_batch`. For file datasets, each row/element will represent a single file processed. For a tabular dataset, each row/element will represent a row in a processed file.
+
+Use __arrays__ when you need to output a single prediction. Use __pandas DataFrames__ when you need to return multiple pieces of information. For instance, for tabular data, you may want to append your predictions to the original record. Use a pandas DataFrame for this case. For file datasets, __we still recommend outputting a pandas DataFrame__, as it provides a more robust way to read the results.
+
+> [!WARNING]
+> Do not output complex data types (or lists of complex data types) in the `run` function. Those outputs will be transformed to strings and will be hard to read.
+
+The resulting DataFrame or array is appended to the output file indicated. There's no requirement on the cardinality of the results (1 file can generate 1 or many rows/elements in the output). All elements in the result DataFrame or array will be written to the output file as-is (given that the `output_action` isn't `summary_only`).
+
+> [!TIP]
+> We suggest you read the Scenarios sections (see the navigation bar on the left) for case-by-case examples of how the scoring script can look.
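+The following is a minimal, framework-agnostic sketch of this structure (not the script used in this example); the model-loading and scoring steps are placeholders you would replace with your framework's own calls:
+
+```python
+import os
+from typing import List
+
+import pandas as pd
+
+
+def init():
+    """Called once per worker process at startup; do costly preparation here."""
+    global model_dir
+    # AZUREML_MODEL_DIR points to the folder containing the registered model's files.
+    model_dir = os.environ["AZUREML_MODEL_DIR"]
+    # Load your model from model_dir here (for example with torch.load or joblib.load).
+
+
+def run(mini_batch: List[str]) -> pd.DataFrame:
+    """Called once per mini-batch; mini_batch is a list of file paths."""
+    rows = []
+    for file_path in mini_batch:
+        # Placeholder "score": the file size in bytes. Replace with a real prediction.
+        rows.append({"file": os.path.basename(file_path), "score": os.path.getsize(file_path)})
+    # One row per processed file; with append_row these rows are written to the output file.
+    return pd.DataFrame(rows)
+```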
+
+## Create a batch deployment
+
+A deployment is a set of resources required for hosting the model that does the actual inferencing. To create a batch deployment, you need all the following items:
+
+* A registered model in the workspace.
+* The code to score the model.
+* The environment in which the model runs.
+* The pre-created compute and resource settings.
+
+1. Create an environment where your batch deployment will run. Include in the environment any dependencies your code requires to run. You also need to add the library `azureml-core`, as it is required for batch deployments to work.
+
+ # [Azure ML CLI](#tab/cli)
+
+ *No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file as an anonymous environment.*
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ env = Environment(
+ conda_file="./mnist/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ )
+ ```
+
+ # [studio](#tab/studio)
+
+ 1. Navigate to the __Environments__ tab on the side menu.
+ 1. Select the tab __Custom environments__ > __Create__.
+ 1. Enter the name of the environment, in this case `torch-batch-env`.
+    1. On __Select environment type__, select __Use existing docker image with conda__.
+    1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04`.
+    1. In the __Customize__ section, copy the content of the file `./mnist/environment/conda.yml` included in the repository into the portal. The conda file looks as follows:
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist/environment/conda.yml":::
+
+ 1. Click on __Next__ and then on __Create__.
+ 1. The environment is ready to be used.
+
+
+
+ > [!WARNING]
+    > Curated environments are not supported in batch deployments. You need to specify your own environment. You can always use the base image of a curated environment as the base of yours to simplify the process.
+
+ > [!IMPORTANT]
+ > Do not forget to include the library `azureml-core` in your deployment as it is required by the executor.
+
+
+1. Create a deployment definition
+
+ # [Azure ML CLI](#tab/cli)
+
+ __mnist-torch-deployment.yml__
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-torch-deployment.yml":::
+
+ For the full batch deployment YAML schema, see [CLI (v2) batch deployment YAML schema](../reference-yaml-deployment-batch.md).
+
+ | Key | Description |
+ | | -- |
+ | `name` | The name of the deployment. |
+ | `endpoint_name` | The name of the endpoint to create the deployment under. |
+ | `model` | The model to be used for batch scoring. The example defines a model inline using `path`. Model files will be automatically uploaded and registered with an autogenerated name and version. Follow the [Model schema](../reference-yaml-model.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the model separately and reference it here. To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. |
+ | `code_configuration.code.path` | The local directory that contains all the Python source code to score the model. |
+    | `code_configuration.scoring_script` | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at the beginning of the process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of an input element in the `mini_batch`. For more information on how to author a scoring script, see [Understanding the scoring script](#understanding-the-scoring-script). |
+ | `environment` | The environment to score the model. The example defines an environment inline using `conda_file` and `image`. The `conda_file` dependencies will be installed on top of the `image`. The environment will be automatically registered with an autogenerated name and version. Follow the [Environment schema](../reference-yaml-environment.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the environment separately and reference it here. To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. |
+ | `compute` | The compute to run batch scoring. The example uses the `batch-cluster` created at the beginning and references it using `azureml:<compute-name>` syntax. |
+ | `resources.instance_count` | The number of instances to be used for each batch scoring job. |
+ | `max_concurrency_per_instance` | [Optional] The maximum number of parallel `scoring_script` runs per instance. |
+ | `mini_batch_size` | [Optional] The number of files the `scoring_script` can process in one `run()` call. |
+ | `output_action` | [Optional] How the output should be organized in the output file. `append_row` will merge all `run()` returned output results into one single file named `output_file_name`. `summary_only` won't merge the output results and only calculate `error_threshold`. |
+ | `output_file_name` | [Optional] The name of the batch scoring output file for `append_row` `output_action`. |
+ | `retry_settings.max_retries` | [Optional] The number of max tries for a failed `scoring_script` `run()`. |
+ | `retry_settings.timeout` | [Optional] The timeout in seconds for a `scoring_script` `run()` for scoring a mini batch. |
+ | `error_threshold` | [Optional] The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. |
+ | `logging_level` | [Optional] Log verbosity. Values in increasing verbosity are: WARNING, INFO, and DEBUG. |
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ deployment = BatchDeployment(
+ name="mnist-torch-dpl",
+ description="A deployment using Torch to solve the MNIST classification dataset.",
+ endpoint_name=batch_endpoint_name,
+ model=model,
+ code_path="./mnist/code/",
+ scoring_script="batch_driver.py",
+ environment=env,
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=10,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
+ logging_level="info",
+ )
+ ```
+
+    This class allows you to configure the following key aspects:
+ * `name` - Name of the deployment.
+ * `endpoint_name` - Name of the endpoint to create the deployment under.
+ * `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
+ * `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
+ * `code_path`- Path to the source code directory for scoring the model
+ * `scoring_script` - Relative path to the scoring file in the source code directory
+ * `compute` - Name of the compute target to execute the batch scoring jobs on
+ * `instance_count`- The number of nodes to use for each batch scoring job.
+ * `max_concurrency_per_instance`- The maximum number of parallel scoring_script runs per instance.
+    * `mini_batch_size` - The number of files the `scoring_script` can process in one `run()` call.
+ * `retry_settings`- Retry settings for scoring each mini batch.
+ * `max_retries`- The maximum number of retries for a failed or timed-out mini batch (default is 3)
+ * `timeout`- The timeout in seconds for scoring a mini batch (default is 30)
+ * `output_action`- Indicates how the output should be organized in the output file. Allowed values are `append_row` or `summary_only`. Default is `append_row`
+ * `output_file_name`- Name of the batch scoring output file. Default is `predictions.csv`
+ * `environment_variables`- Dictionary of environment variable name-value pairs to set for each batch scoring job.
+ * `logging_level`- The log verbosity level. Allowed values are `warning`, `info`, `debug`. Default is `info`.
+
+ # [studio](#tab/studio)
+
+ 1. Navigate to the __Endpoints__ tab on the side menu.
+ 1. Select the tab __Batch endpoints__ > __Create__.
+ 1. Give the endpoint a name, in this case `mnist-batch`. You can configure the rest of the fields or leave them blank.
+ 1. Click on __Next__.
+ 1. On the model list, select the model `mnist` and click on __Next__.
+ 1. On the deployment configuration page, give the deployment a name.
+ 1. On __Output action__, ensure __Append row__ is selected.
+ 1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
+    1. On __Mini batch size__, adjust the number of files that will be included in each mini-batch. This controls the amount of data your scoring script receives per batch.
+    1. On __Scoring timeout (seconds)__, ensure you are giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning) may require high values in this field.
+    1. On __Max concurrency per instance__, configure the number of executors you want per compute instance in the deployment. A higher number here guarantees a higher degree of parallelization, but it also increases the memory pressure on the compute instance. Tune this value together with __Mini batch size__.
+ 1. Once done, click on __Next__.
+ 1. On environment, go to __Select scoring file and dependencies__ and click on __Browse__.
+ 1. Select the scoring script file on `/mnist/code/batch_driver.py`.
+    1. On the section __Choose an environment__, select the environment you created in a previous step.
+ 1. Click on __Next__.
+ 1. On the section __Compute__, select the compute cluster you created in a previous step.
+
+ > [!WARNING]
+    > Azure Kubernetes clusters are supported in batch deployments, but only when created using the Azure ML CLI or the Python SDK.
+
+ 1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we will use 2.
+ 1. Click on __Next__.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/review-batch-wizard.png" alt-text="Screenshot of batch endpoints/deployment review screen.":::
+
+ 1. Complete the wizard.
+
+1. Create the deployment:
+
+ # [Azure ML CLI](#tab/cli)
+
+ Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_batch_deployment_set_default" :::
+
+ > [!TIP]
+ > The `--set-default` parameter sets the newly created deployment as the default deployment of the endpoint. It's a convenient way to create a new default deployment of the endpoint, especially for the first deployment creation. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later. For more information, see the [Deploy a new model](#adding-deployments-to-an-endpoint) section.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+ Once the deployment is completed, we need to ensure the new deployment is the default deployment in the endpoint:
+
+ ```python
+ endpoint = ml_client.batch_endpoints.get(endpoint_name)
+ endpoint.defaults.deployment_name = deployment.name
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+ # [studio](#tab/studio)
+
+ In the wizard, click on __Create__ to start the deployment process.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
++
+1. Check batch endpoint and deployment details.
+
+ # [Azure ML CLI](#tab/cli)
+
+ Use `show` to check endpoint and deployment details. To check a batch deployment, run the following code:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="check_batch_deployment_detail" :::
+
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ To check a batch deployment, run the following code:
+
+ ```python
+ ml_client.batch_deployments.get(name=deployment.name, endpoint_name=endpoint.name)
+ ```
+
+ # [studio](#tab/studio)
+
+ 1. Navigate to the __Endpoints__ tab on the side menu.
+ 1. Select the tab __Batch endpoints__.
+ 1. Select the batch endpoint you want to get details from.
+ 1. In the endpoint page, you will see all the details of the endpoint along with all the deployments available.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
+
+## Invoke the batch endpoint to start a batch scoring job
+
+Invoking a batch endpoint triggers a batch scoring job. The invoke response returns a job `name` that can be used to track the batch scoring progress. The batch scoring job runs for a period of time. It splits the entire input into multiple `mini_batch`es and processes them in parallel on the compute cluster. The batch scoring job outputs are stored in cloud storage, either in the workspace's default blob storage, or in the storage you specified.
+
+# [Azure ML CLI](#tab/cli)
+
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint_name,
+ inputs=Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist", type=AssetTypes.URI_FOLDER)
+)
+```
+
+# [studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you just created.
+1. Click on __Create job__.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+
+1. On __Deployment__, select the deployment you want to execute.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job.":::
+
+1. Click on __Next__.
+1. On __Select data source__, select the data input you want to use. For this example, select __Datastore__ and in the section __Path__ enter the full URL `https://pipelinedata.blob.core.windows.net/sampledata/mnist`.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option.":::
+
+1. Start the job.
+++
+### Configure job's inputs
+
+Batch endpoints support reading files or folders that are located in different locations. To learn more about the supported types and how to specify them, read [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
+
+> [!TIP]
+> Local data folders/files can be used when executing batch endpoints from the Azure ML CLI or Azure ML SDK for Python. However, that operation results in the local data being uploaded to the default Azure Machine Learning data store of the workspace you are working on.
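+For illustration, a minimal Python SDK sketch of invoking the endpoint with a local folder (the folder path is hypothetical) looks like the following; the folder is uploaded to the workspace's default datastore before the job starts:
+
+```python
+# The local folder is uploaded to the default datastore when the endpoint is invoked.
+job = ml_client.batch_endpoints.invoke(
+    endpoint_name=endpoint_name,
+    inputs=Input(path="./my-local-images", type=AssetTypes.URI_FOLDER),
+)
+```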
+
+> [!IMPORTANT]
+> __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work but batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 dataset.
+
+### Configure the output location
+
+The batch scoring results are by default stored in the workspace's default blob store within a folder named after the job name (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint.
+
+# [Azure ML CLI](#tab/cli)
+
+Use `--output-path` to configure any folder in an Azure Machine Learning registered datastore. The syntax for `--output-path` is the same as `--input` when you're specifying a folder, that is, `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/`. Use `--set output_file_name=<your-file-name>` to configure a new output file name.
++
+# [Azure ML SDK for Python](#tab/sdk)
+
+Use `output_path` to configure any folder in an Azure Machine Learning registered datastore. The syntax for `output_path` is the same as for `inputs` when you're specifying a folder, that is, `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/`. Use `output_file_name=<your-file-name>` to configure a new output file name.
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint_name,
+ inputs={
+ "input": Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist", type=AssetTypes.URI_FOLDER)
+ },
+ output_path={
+ "score": Input(path=f"azureml://datastores/workspaceblobstore/paths/{endpoint_name}")
+ },
+ output_file_name="predictions.csv"
+)
+```
+
+# [studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you just created.
+1. Click on __Create job__.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+
+1. On __Deployment__, select the deployment you want to execute.
+1. Click on __Next__.
+1. Check the option __Override deployment settings__.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+
+1. You can now configure __Output file name__ and some extra properties of the deployment execution. Only this execution will be affected.
+1. On __Select data source__, select the data input you want to use.
+1. On __Configure output location__, check the option __Enable output configuration__.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/configure-output-location.png" alt-text="Screenshot of optionally configuring output location.":::
+
+1. Configure the __Blob datastore__ where the outputs should be placed.
+++
+> [!WARNING]
+> You must use a unique output location. If the output file exists, the batch scoring job will fail.
+
+> [!IMPORTANT]
+> Unlike inputs, only Azure Machine Learning data stores running on blob storage accounts are supported for outputs.
+
+## Overwrite deployment configuration per each job
+
+Some settings can be overwritten when invoking the endpoint, to make the best use of the compute resources and to improve performance. The following settings can be configured on a per-job basis:
+
+* Use __instance count__ to overwrite the number of instances to request from the compute cluster. For example, for a larger volume of data inputs, you may want to use more instances to speed up the end-to-end batch scoring.
+* Use __mini-batch size__ to overwrite the number of files to include in each mini-batch. The number of mini-batches is determined by the total input file count and `mini_batch_size`. A smaller `mini_batch_size` generates more mini-batches. Mini-batches can be run in parallel, but there might be extra scheduling and invocation overhead.
+* Other settings, including __max retries__, __timeout__, and __error threshold__, can also be overwritten. These settings might impact the end-to-end batch scoring time for different workloads.
+
+# [Azure ML CLI](#tab/cli)
++
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint_name,
+ input=Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist"),
+ params_override=[
+ { "mini_batch_size": "20" },
+ { "compute.instance_count": "5" }
+ ],
+)
+```
+
+# [studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you just created.
+1. Click on __Create job__.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+
+1. On __Deployment__, select the deployment you want to execute.
+1. Click on __Next__.
+1. Check the option __Override deployment settings__.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+
+1. Configure the job parameters. Only the current job execution will be affected by this configuration.
+++
+### Monitor batch scoring job execution progress
+
+Batch scoring jobs usually take some time to process the entire set of inputs.
+
+# [Azure ML CLI](#tab/cli)
+
+You can use the CLI command `az ml job show` to view the job. Run the following code to check the job status from the previous endpoint invoke. To learn more about job commands, run `az ml job -h`.
++
+# [Azure ML SDK for Python](#tab/sdk)
+
+The following code checks the job status and outputs a link to the Azure ML studio for further details.
+
+```python
+ml_client.jobs.get(job.name)
+```
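+
+If you prefer to block until the job finishes, a small sketch (assuming the `job` object returned by the earlier `invoke` call) is to stream the job logs:
+
+```python
+# Streams the job logs to the console and returns when the job completes.
+ml_client.jobs.stream(job.name)
+```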
+
+# [studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you want to monitor.
+1. Click on the tab __Jobs__.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint.":::
+
+1. You will see a list of the jobs created for the selected endpoint.
+1. Select the last job that is running.
+1. You will be redirected to the job monitoring page.
+++
+### Check batch scoring results
+
+Use the following steps to view the scoring results in Azure Storage Explorer when the job is completed:
+
+1. Run the following code to open the batch scoring job in Azure Machine Learning studio. The job studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="show_job_in_studio" :::
+
+1. In the graph of the job, select the `batchscoring` step.
+1. Select the __Outputs + logs__ tab and then select **Show data outputs**.
+1. From __Data outputs__, select the icon to open __Storage Explorer__.
++
+The scoring results in Storage Explorer are similar to the following sample page:
++
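+If you download the output file locally, a minimal sketch for inspecting it (assuming the default `append_row` action and the `predictions.csv` file name; the file typically has no header row, and the column layout depends on what your scoring script returns) could be:
+
+```python
+import pandas as pd
+
+# Column names are illustrative; adjust them to match your scoring script's output.
+df = pd.read_csv("predictions.csv", header=None, names=["file", "prediction"])
+print(df.head())
+```
+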
+## Adding deployments to an endpoint
+
+Once you have a batch endpoint with a deployment, you can continue to refine your model and add new deployments. Batch endpoints will continue serving the default deployment while you develop and deploy new models under the same endpoint. Deployments don't affect one another.
+
+### Adding a second deployment
+
+1. Create an environment where your batch deployment will run. Include in the environment any dependencies your code requires to run. You also need to add the library `azureml-core`, as it is required for batch deployments to work.
+
+ # [Azure ML CLI](#tab/cli)
+
+ *No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file as an anonymous environment.*
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Let's get a reference to the environment:
+
+ ```python
+ env = Environment(
+ conda_file="./mnist-keras/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ )
+ ```
+
+ # [studio](#tab/studio)
+
+ 1. Navigate to the __Environments__ tab on the side menu.
+ 1. Select the tab __Custom environments__ > __Create__.
+ 1. Enter the name of the environment, in this case `keras-batch-env`.
+    1. On __Select environment type__, select __Use existing docker image with conda__.
+    1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04`.
+    1. In the __Customize__ section, copy the content of the file `./mnist/environment/conda.yml` included in the repository into the portal. The conda file looks as follows:
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist/environment/conda.yml":::
+
+ 1. Click on __Next__ and then on __Create__.
+ 1. The environment is ready to be used.
+
+
+
+ > [!WARNING]
+    > Curated environments are not supported in batch deployments. You need to specify your own environment. You can always use the base image of a curated environment as the base of yours to simplify the process.
+
+ > [!IMPORTANT]
+ > Do not forget to include the library `azureml-core` in your deployment as it is required by the executor.
+
+
+1. Create a deployment definition
+
+ # [Azure ML CLI](#tab/cli)
+
+ __mnist-keras-deployment__
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-keras-deployment.yml":::
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ deployment = BatchDeployment(
+ name="non-mlflow-deployment",
+ description="this is a sample non-mlflow deployment",
+ endpoint_name=batch_endpoint_name,
+ model=model,
+ code_path="./mnist-keras/code/",
+ scoring_script="digit_identification.py",
+ environment=env,
+ compute=compute_name,
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=10,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
+ logging_level="info",
+ )
+ ```
+
+ # [studio](#tab/studio)
+
+ 1. Navigate to the __Endpoints__ tab on the side menu.
+ 1. Select the tab __Batch endpoints__.
+ 1. Select the existing batch endpoint where you want to add the deployment.
+ 1. Click on __Add deployment__.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/add-deployment-option.png" alt-text="Screenshot of add new deployment option.":::
+
+ 1. On the model list, select the model `mnist` and click on __Next__.
+ 1. On the deployment configuration page, give the deployment a name.
+ 1. On __Output action__, ensure __Append row__ is selected.
+ 1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
+    1. On __Mini batch size__, adjust the number of files that will be included in each mini-batch. This controls the amount of data your scoring script receives per batch.
+    1. On __Scoring timeout (seconds)__, ensure you are giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning) may require high values in this field.
+    1. On __Max concurrency per instance__, configure the number of executors you want per compute instance in the deployment. A higher number here guarantees a higher degree of parallelization, but it also increases the memory pressure on the compute instance. Tune this value together with __Mini batch size__.
+ 1. Once done, click on __Next__.
+ 1. On environment, go to __Select scoring file and dependencies__ and click on __Browse__.
+ 1. Select the scoring script file on `/mnist-keras/code/batch_driver.py`.
+    1. On the section __Choose an environment__, select the environment you created in a previous step.
+ 1. Click on __Next__.
+ 1. On the section __Compute__, select the compute cluster you created in a previous step.
+ 1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we will use 2.
+    1. Click on __Next__.
+
+1. Create the deployment:
+
+ # [Azure ML CLI](#tab/cli)
+
+ Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_new_deployment_not_default" :::
+
+ > [!TIP]
+ > The `--set-default` parameter is missing in this case. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
+
+ # [studio](#tab/studio)
+
+ In the wizard, click on __Create__ to start the deployment process.
++
+### Test a non-default batch deployment
+
+To test the new non-default deployment, you will need to know the name of the deployment you want to run.
+
+# [Azure ML CLI](#tab/cli)
++
+Notice `--deployment-name` is used to specify the deployment we want to execute. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ deployment_name=deployment.name,
+ endpoint_name=endpoint.name,
+ input=input,
+)
+```
+
+Notice `deployment_name` is used to specify the deployment we want to execute. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
+
+# [studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you just created.
+1. Click on __Create job__.
+1. On __Deployment__, select the deployment you want to execute. In this case, `mnist-keras`.
+1. Complete the job creation wizard to get the job started.
+++
+### Update the default batch deployment
+
+Although you can invoke a specific deployment inside an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such a deployment is called the "default" deployment. This gives you the ability to change the default deployment, and hence the model serving the requests, without changing the contract with the user invoking the endpoint. Use the following instructions to update the default deployment:
+
+# [Azure ML CLI](#tab/cli)
+
+```bash
+az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
++
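+A minimal sketch, following the same pattern used earlier when setting the default deployment (assuming `endpoint_name` and `deployment` from the previous steps):
+
+```python
+# Point the endpoint's default at the new deployment and persist the change.
+endpoint = ml_client.batch_endpoints.get(endpoint_name)
+endpoint.defaults.deployment_name = deployment.name
+ml_client.batch_endpoints.begin_create_or_update(endpoint)
+```
+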
+# [studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you want to configure.
+1. Click on __Update default deployment__.
+
+ :::image type="content" source="../media/how-to-use-batch-endpoints-studio/update-default-deployment.png" alt-text="Screenshot of updating default deployment.":::
+
+1. On __Select default deployment__, select the name of the deployment you want to be the default one.
+1. Click on __Update__.
+1. The selected deployment is now the default one.
+++
+## Delete the batch endpoint and the deployment
+
+# [Azure ML CLI](#tab/cli)
+
+If you aren't going to use the old batch deployment, you should delete it by running the following code. `--yes` is used to confirm the deletion.
++
+Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
++
+# [Azure ML SDK for Python](#tab/sdk)
+
+Delete endpoint:
+
+```python
+ml_client.batch_endpoints.begin_delete(name=batch_endpoint_name)
+```
+
+Delete compute: optional, as you may choose to reuse your compute cluster with later deployments.
+
+```python
+ml_client.compute.begin_delete(name=compute_name)
+```
+
+# [studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+1. Select the tab __Batch endpoints__.
+1. Select the batch endpoint you want to delete.
+1. Click on __Delete__.
+1. The endpoint, along with all its deployments, will be deleted.
+1. Notice that this won't affect the compute cluster where the deployment(s) run.
+++
+## Next steps
+
+* [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
+* [Authentication on batch endpoints](how-to-authenticate-batch-endpoint.md).
+* [Network isolation in batch endpoints](how-to-secure-batch-endpoint.md).
+* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
To create a batch deployment, you need to specify the following elements:
- Scoring script - code needed to do the scoring/inferencing - Environment - a Docker image with Conda dependencies
-If you're deploying [MLFlow models](how-to-train-model.md), there's no need to provide a scoring script and execution environment, as both are autogenerated.
+If you're deploying [MLFlow models in batch deployments](batch-inference/how-to-mlflow-batch.md), there's no need to provide a scoring script and execution environment, as both are autogenerated.
-Learn how to [deploy and use batch endpoints with the Azure CLI](how-to-use-batch-endpoint.md) and the [studio web portal](how-to-use-batch-endpoints-studio.md)
+Learn more about how to [deploy and use batch endpoints](batch-inference/how-to-use-batch-endpoint.md).
### Managed cost with autoscaling compute Invoking a batch endpoint triggers an asynchronous batch inference job. Compute resources are automatically provisioned when the job starts, and automatically de-allocated as the job completes. So you only pay for compute when you use it.
-You can [override compute resource settings](how-to-use-batch-endpoint.md#configure-the-output-location-and-overwrite-settings) (like instance count) and advanced settings (like mini batch size, error threshold, and so on) for each individual batch inference job to speed up execution and reduce cost.
+You can [override compute resource settings](batch-inference/how-to-use-batch-endpoint.md#overwrite-deployment-configuration-per-each-job) (like instance count) and advanced settings (like mini batch size, error threshold, and so on) for each individual batch inference job to speed up execution and reduce cost.
### Flexible data sources and storage
You can use the following options for input data when invoking a batch endpoint:
> - You can also extract the URI or path on datastore extracted from V1 FileDataset by using `az ml dataset show` command with `--query` parameter and use that information for invoke. > - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
-For more information on supported input options, see [Batch scoring with batch endpoint](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-with-different-input-options).
+For more information on supported input options, see [Accessing data from batch endpoints jobs](batch-inference/how-to-access-data-batch-endpoints-jobs.md).
Specify the storage output location to any datastore and path. By default, batch endpoints store their output to the workspace's default blob store, organized by the Job Name (a system-generated GUID).
Specify the storage output location to any datastore and path. By default, batch
- VNET support: Batch endpoints support ingress protection. A batch endpoint with ingress protection will accept scoring requests only from hosts inside a virtual network but not from the public internet. A batch endpoint that is created in a private-link enabled workspace will have ingress protection. To create a private-link enabled workspace, see [Create a secure workspace](tutorial-create-secure-workspace.md). > [!NOTE]
-Creating batch endpoints in a private-link enabled workspace is only supported in the following versions.
+> Creating batch endpoints in a private-link enabled workspace is only supported in the following versions.
> - CLI - version 2.15.1 or higher. > - REST API - version 2022-05-01 or higher. > - SDK V2 - version 0.1.0b3 or higher.
Creating batch endpoints in a private-link enabled workspace is only supported i
## Next steps - [How to deploy online endpoints with the Azure CLI](how-to-deploy-managed-online-endpoints.md)-- [How to deploy batch endpoints with the Azure CLI](how-to-use-batch-endpoint.md)
+- [How to deploy batch endpoints with the Azure CLI](batch-inference/how-to-use-batch-endpoint.md)
- [How to use online endpoints with the studio](how-to-use-managed-online-endpoint-studio.md) - [Deploy models with REST](how-to-deploy-with-rest.md) - [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md)
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
Title: Create Data Assets
-description: Learn how to create Azure Machine Learning data assets.
+description: Learn how to create Azure Machine Learning data assets
-+ Last updated 09/22/2022
-#Customer intent: As an experienced data scientist, I need to package my data into a consumable and reusable object to train my machine learning models.
# Create data assets
Last updated 09/22/2022
> * [v1](./v1/how-to-create-register-datasets.md) > * [v2 (current version)](how-to-create-data-assets.md)
-In this article, you learn how to create a data asset in Azure Machine Learning. By creating a data asset, you create a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. You can create Data from datastores, Azure Storage, public URLs, and local files.
+In this article, you learn how to create a data asset in Azure Machine Learning. By creating a data asset, you create a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. You can create data assets from AzureML datastores, Azure Storage, public URLs, and local files.
+
+> [!IMPORTANT]
+> If you don't create/register the data source as a data asset, you can still [consume the data by specifying the data path in a job](how-to-read-write-data-v2.md#read-data-in-a-job), but you won't get the benefits described below.
The benefits of creating data assets are:
The benefits of creating data assets are:
* You can **version** the data. + ## Prerequisites To create and work with data assets, you need:
When you create a data asset in Azure Machine Learning, you'll need to specify a
|Location | Examples | ||| |A path on your local computer | `./home/username/data/my_data` |
-|A path on a public http(s) server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
-|A path on Azure Storage | `https://<account_name>.blob.core.windows.net/<container_name>/path` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>` |
|A path on a datastore | `azureml://datastores/<data_store_name>/paths/<path>` |
+|A path on a public http(s) server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
+|A path on Azure Storage |`wasbs://<containername>@<accountname>.blob.core.windows.net/<path_to_data>/` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>` <br> `adl://<accountname>.azuredatalakestore.net/<path_to_data>/`<br> `https://<account_name>.blob.core.windows.net/<container_name>/path` |
> [!NOTE] > When you create a data asset from a local path, it will be automatically uploaded to the default Azure Machine Learning datastore in the cloud. +
+## Data asset types
+ - [**URIs**](#create-a-uri_folder-data-asset) - A **U**niform **R**esource **I**dentifier that is a reference to a storage location on your local computer or in the cloud, which makes it easy to access data in your jobs. Azure Machine Learning distinguishes two types of URIs: `uri_file` and `uri_folder`.
+
+ - [**MLTable**](#create-a-mltable-data-asset) - `MLTable` helps you abstract the schema definition for tabular data, so it is more suitable for complex or changing schemas or for use in AutoML. If you just want to create a data asset for a job, or you want to write your own parsing logic in Python, you could use `uri_file` or `uri_folder`.
+
+ The ideal scenarios to use `mltable` are:
+ - The schema of your data is complex and/or changes frequently.
+ - You only need a subset of data (for example: a sample of rows or files, specific columns, etc).
+ - AutoML jobs requiring tabular data.
+If your scenario does not fit the above, then URIs are likely a more suitable type.
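+For orientation, a minimal Python SDK (v2) sketch of registering a `uri_file` data asset from a public URL (the asset name and workspace identifiers are placeholders) might look like the following; the sections below cover each type in detail:
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.constants import AssetTypes
+from azure.ai.ml.entities import Data
+from azure.identity import DefaultAzureCredential
+
+# Placeholder identifiers; replace with your own subscription, resource group, and workspace.
+ml_client = MLClient(
+    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
+)
+
+my_data = Data(
+    name="titanic-csv",
+    version="1",
+    type=AssetTypes.URI_FILE,
+    path="https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv",
+    description="Titanic CSV referenced from a public URL.",
+)
+ml_client.data.create_or_update(my_data)
+```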
+ ## Create a `uri_folder` data asset Below shows you how to create a *folder* as an asset:
To create a File data asset in the Azure Machine Learning studio, use the follow
## Create a `mltable` data asset `mltable` is a way to abstract the schema definition for tabular data to make it easier to share data assets (an overview can be found in [MLTable](concept-data.md#mltable)).
+`mltable` supports tabular data coming from the following sources:
+- Delimited files (CSV, TSV, TXT)
+- Parquet files
+- JSON Lines
+- Delta Lake
+
+For more details about the capabilities provided via `mltable`, see [reference-yaml-mltable](reference-yaml-mltable.md).
In this section, we show you how to create a data asset when the type is an `mltable`. ### The MLTable file
-The MLTable file is a file that provides the specification of the data's schema so that the `mltable` *engine* can materialize the data into an in-memory object (Pandas/Dask/Spark). An *example* MLTable file is provided below:
+The MLTable file is a file that provides the specification of the data's schema so that the `mltable` *engine* can materialize the data into an in-memory object (Pandas/Dask/Spark).
+
+> [!NOTE]
+> This file needs to be named exactly `MLTable`.
+
+An *example* MLTable file is provided below:
```yml type: mltable
path: <path>
``` > [!NOTE]
-> The path points to the **folder** containing the MLTable artifact.
+> The path points to the **folder** containing the MLTable artifact.
Next, create the data asset using the CLI:
To create a Table data asset in the Azure Machine Learning studio, use the follo
+ ## Next steps - [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
from azure.ai.ml.entities import (
CodeConfiguration, Environment, )
-from azure.identity import DefaultAzureCredential, AzureCliCredential
+from azure.identity import DefaultAzureCredential
``` Set up variables for the workspace and endpoint:
Azure Machine Learning local endpoints use Docker and VS Code development contai
Get a handle to the workspace: ```python
-credential = AzureCliCredential()
+credential = DefaultAzureCredential()
ml_client = MLClient( credential, subscription_id=subscription_id,
To debug online endpoints locally in VS Code, set the `vscode-debug` and `local`
deployment = ManagedOnlineDeployment( name="blue", endpoint_name=endpoint_name,
- model=Model(path="../model-1/model"),
+ model=Model(path="../model-1/model/sklearn_regression_model.pkl"),
code_configuration=CodeConfiguration( code="../model-1/onlinescoring", scoring_script="score.py" ),
deployment = ManagedOnlineDeployment(
) deployment = ml_client.online_deployments.begin_create_or_update(
- deployment,
- local=True,
- vscode_debug=True,
+ deployment, local=True, vscode_debug=True
) ```
endpoint = ml_client.online_endpoints.get(name=endpoint_name, local=True)
request_file_path = "../model-1/sample-request.json"
-endpoint.invoke(endpoint_name, request_file_path, local=True)
+ml_client.online_endpoints.invoke(endpoint_name, request_file_path, local=True)
``` In this case, `<REQUEST-FILE>` is a JSON file that contains input data samples for the model to make predictions on similar to the following JSON:
In this case, `<REQUEST-FILE>` is a JSON file that contains input data samples f
> The scoring URI is the address where your endpoint listens for requests. The `as_dict` method of endpoint objects returns information similar to `show` in the Azure CLI. The endpoint object can be obtained through `.get`. > > ```python
-> endpoint = ml_client.online_endpoints.get(endpoint_name, local=True)
-> endpoint.as_dict()
+> print(endpoint)
> ``` > > The output should look similar to the following:
For more extensive changes involving updates to your environment and endpoint co
new_deployment = ManagedOnlineDeployment( name="green", endpoint_name=endpoint_name,
- model=Model(path="../model-2/model"),
+ model=Model(path="../model-2/model/sklearn_regression_model.pkl"),
code_configuration=CodeConfiguration( code="../model-2/onlinescoring", scoring_script="score.py" ),
new_deployment = ManagedOnlineDeployment(
instance_count=2, )
-ml_client.online_deployments.update(new_deployment, local=True, vscode_debug=True)
+deployment = ml_client.online_deployments.begin_create_or_update(
+ new_deployment, local=True, vscode_debug=True
+)
``` Once the updated image is built and your development container launches, use the VS Code debugger to test and troubleshoot your updated endpoint.
machine-learning How To Deploy Batch With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-batch-with-rest.md
In this article, you learn how to use the new REST APIs to:
[Batch endpoints](concept-endpoints.md#what-are-batch-endpoints) simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. In this article, you'll create a batch endpoint and deployment, and invoking it to start a batch scoring job. But first you'll have to register the assets needed for deployment, including model, code, and environment.
-There are many ways to create an Azure Machine Learning batch endpoint, including [the Azure CLI](how-to-use-batch-endpoint.md), and visually with [the studio](how-to-use-batch-endpoints-studio.md). The following example creates a batch endpoint and a batch deployment with the REST API.
+There are many ways to create an Azure Machine Learning batch endpoint, including the Azure CLI, Azure ML SDK for Python, and visually with the studio. The following example creates a batch endpoint and a batch deployment with the REST API.
## Create machine learning assets
You can use the tool [jq](https://stedolan.github.io/jq/) to parse the JSON resu
### Upload & register code
-Now that you have the datastore, you can upload the scoring script. For more information about how to author the scoring script, see [Understanding the scoring script](how-to-use-batch-endpoint.md#understanding-the-scoring-script). Use the Azure Storage CLI to upload a blob into your default container:
+Now that you have the datastore, you can upload the scoring script. For more information about how to author the scoring script, see [Understanding the scoring script](batch-inference/how-to-use-batch-endpoint.md#understanding-the-scoring-script). Use the Azure Storage CLI to upload a blob into your default container:
:::code language="rest-api" source="~/azureml-examples-main/cli/batch-score-rest.sh" id="upload_code":::
Batch scoring jobs usually take some time to process the entire set of inputs. M
### Check batch scoring results
-For information on checking the results, see [Check batch scoring results](how-to-use-batch-endpoint.md#check-batch-scoring-results).
+For information on checking the results, see [Check batch scoring results](batch-inference/how-to-use-batch-endpoint.md#check-batch-scoring-results).
## Delete the batch endpoint
If you aren't going use the batch endpoint, you should delete it with the below
## Next steps
-* Learn how to deploy your model for batch scoring [using the Azure CLI](how-to-use-batch-endpoint.md).
-* Learn how to deploy your model for batch scoring [using studio](how-to-use-batch-endpoints-studio.md).
-* Learn to [Troubleshoot batch endpoints](how-to-troubleshoot-batch-endpoints.md)
+* Learn [how to deploy your model for batch scoring](batch-inference/how-to-use-batch-endpoint.md).
+* Learn to [Troubleshoot batch endpoints](batch-inference/how-to-troubleshoot-batch-endpoints.md)
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
To learn more, review these articles:
- [Deploy models with REST](how-to-deploy-with-rest.md) - [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md) - [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)-- [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md)
+- [Use batch endpoints for batch scoring](batch-inference/how-to-use-batch-endpoint.md)
- [Access Azure resources from an online endpoint with a managed identity](how-to-access-resources-from-endpoints-managed-identities.md) - [Troubleshoot online endpoints deployment](how-to-troubleshoot-online-endpoints.md) - [Enable network isolation with managed online endpoints](how-to-secure-online-endpoint.md)
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
To learn more, review these articles:
- [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md) - [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md) - [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)-- [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md)
+- [Use batch endpoints for batch scoring](batch-inference/how-to-use-batch-endpoint.md)
- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md) - [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md) - [Troubleshoot online endpoint deployment](how-to-troubleshoot-managed-online-endpoints.md)
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
To learn more, review these articles:
- [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md) - [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md) - [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)-- [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md)
+- [Use batch endpoints for batch scoring](batch-inference/how-to-use-batch-endpoint.md)
- [View costs for an Azure Machine Learning managed online endpoint (preview)](how-to-view-online-endpoints-costs.md) - [Access Azure resources with an online endpoint and managed identity (preview)](how-to-access-resources-from-endpoints-managed-identities.md) - [Troubleshoot online endpoint deployment](how-to-troubleshoot-managed-online-endpoints.md)
machine-learning How To Devops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-devops-machine-learning.md
https://github.com/azure/azureml-examples
## Step 2: Sign in to Azure Pipelines ## Step 3: Create an Azure Resource Manager connection
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-jupyter-notebooks.md
Using the following keystroke shortcuts, you can more easily navigate and run co
* **Expired token**: If you run into an expired token issue, sign out of your Azure ML studio, sign back in, and then restart the notebook kernel.
-* **File upload limit**: When uploading a file through the notebook's file explorer, you are limited files that are smaller than 5TB. If you need to upload a file larger than this, we recommend that you use one of the following methods:
-
- * Use the SDK to upload the data to a datastore. For more information, see [Create data assets](how-to-create-data-assets.md?tabs=Python-SDK).
- * Use [Azure Data Factory](v1/how-to-data-ingest-adf.md) to create a data ingestion pipeline.
-
+* **File upload limit**: When uploading a file through the notebook's file explorer, you are limited to files that are smaller than 5 TB. If you need to upload a file larger than this, we recommend that you use the SDK to upload the data to a datastore. For more information, see [Create data assets](how-to-create-data-assets.md?tabs=Python-SDK).
## Next steps
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
Last updated 07/28/2022-+ ms.devlang: azurecli
machine-learning How To Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-model.md
As part of job submission, the training scripts and data must be uploaded to a c
}" ```
-1. Register a versioned reference to the training script for use with a job. In this case, the script would be located at `https://azuremlexamples.blob.core.windows.net/testjob`. This `testjob` is the folder in Blob storage that contains the training script and any dependencies needed by the script. In the following example, the ID of the versioned training code is returned and stored in the `$TRAIN_CODE` variable:
+1. Register a versioned reference to the training script for use with a job. In this example, the script is located at `https://azuremlexamples.blob.core.windows.net/azureml-blobstore-c8e832ae-e49c-4084-8d28-5e6c88502655/testjob`. This `testjob` is the folder in Blob storage that contains the training script and any dependencies needed by the script. In the following example, the ID of the versioned training code is returned and stored in the `$TRAIN_CODE` variable:
```bash TRAIN_CODE=$(curl --location --request PUT "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/codes/train-lightgbm/versions/1?api-version=$API_VERSION" \
As part of job submission, the training scripts and data must be uploaded to a c
--data-raw "{ \"properties\": { \"description\": \"Train code\",
- \"codeUri\": \"https://larrystore0912.blob.core.windows.net/azureml-blobstore-c8e832ae-e49c-4084-8d28-5e6c88502655/testjob\"
+ \"codeUri\": \"https://azuremlexamples.blob.core.windows.net/azureml-blobstore-c8e832ae-e49c-4084-8d28-5e6c88502655/testjob\"
} }" | jq -r '.id') ```
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
# Train TensorFlow models at scale with Azure Machine Learning > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](v1/how-to-train-tensorflow.md) > * [v2 (preview)](how-to-train-tensorflow.md)
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md
- Title: Troubleshooting batch endpoints-
-description: Tips to help you succeed with batch endpoints.
-------- Previously updated : 03/31/2022
-#Customer intent: As an ML Deployment Pro, I want to figure out why my batch endpoint doesn't run so that I can fix it.
-
-# Troubleshooting batch endpoints
---
-Learn how to troubleshoot and solve, or work around, common errors you may come across when using [batch endpoints](how-to-use-batch-endpoint.md) for batch scoring.
-
-The following table contains common problems and solutions you may see during batch endpoint development and consumption.
-
-| Problem | Possible solution |
-|--|--|
-| Code configuration or Environment is missing. | Ensure you provide the scoring script and an environment definition if you're using a non-MLflow model. No-code deployment is supported for the MLflow model only. For more, see [Track ML models with MLflow and Azure Machine Learning](how-to-use-mlflow.md)|
-| Unsupported input data. | Batch endpoint accepts input data in three forms: 1) registered data, 2) data in the cloud, 3) local data. Ensure you're using the right format. For more, see [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md)|
-| Output already exists. | If you configure your own output location, ensure you provide a new output for each endpoint invocation. |
-
-## Understanding logs of a batch scoring job
-
-### Get logs
-
-After you invoke a batch endpoint using the Azure CLI or REST, the batch scoring job will run asynchronously. There are two options to get the logs for a batch scoring job.
-
-Option 1: Stream logs to local console
-
-You can run the following command to stream system-generated logs to your console. Only logs in the `azureml-logs` folder will be streamed.
-
-```azurecli
-az ml job stream --name <job_name>
-```
-
-Option 2: View logs in studio
-
-To get the link to the run in studio, run:
-
-```azurecli
-az ml job show --name <job_name> --query interaction_endpoints.Studio.endpoint -o tsv
-```
-
-1. Open the job in studio using the value returned by the above command.
-1. Choose **batchscoring**
-1. Open the **Outputs + logs** tab
-1. Choose the log(s) you wish to review
-
-### Understand log structure
-
-There are two top-level log folders, `azureml-logs` and `logs`.
-
-The file `~/azureml-logs/70_driver_log.txt` contains information from the controller that launches the scoring script.
-
-Because of the distributed nature of batch scoring jobs, there are logs from several different sources. However, two combined files are created that provide high-level information:
-- `~/logs/job_progress_overview.txt`: This file provides high-level information about the number of mini-batches (also known as tasks) created so far and the number of mini-batches processed so far. As the mini-batches end, the log records the results of the job. If the job failed, it shows the error message and where to start troubleshooting.
-- `~/logs/sys/master_role.txt`: This file provides the principal node (also known as the orchestrator) view of the running job. This log provides information on task creation, progress monitoring, and the job result.
-For a concise understanding of errors in your script there is:
--- `~/logs/user/error.txt`: This file will try to summarize the errors in your script.-
-For more information on errors in your script, there is:
-- `~/logs/user/error/`: This folder contains full stack traces of exceptions thrown while loading and running the entry script.-
-When you need a full understanding of how each node executed the score script, look at the individual process logs for each node. The process logs can be found in the `sys/node` folder, grouped by worker nodes:
--- `~/logs/sys/node/<ip_address>/<process_name>.txt`: This file provides detailed info about each mini-batch as it's picked up or completed by a worker. For each mini-batch, this file includes:-
- - The IP address and the PID of the worker process.
- - The total number of items, the number of successfully processed items, and the number of failed items.
- - The start time, duration, process time, and run method time.
-
-You can also view the results of periodic checks of the resource usage for each node. The log files and setup files are in this folder:
-- `~/logs/perf`: Set `--resource_monitor_interval` to change the checking interval in seconds. The default interval is `600`, which is 10 minutes. To stop the monitoring, set the value to `0`. Each `<ip_address>` folder includes:-
- - `os/`: Information about all running processes in the node. One check runs an operating system command and saves the result to a file. On Linux, the command is `ps`.
- - `%Y%m%d%H`: The sub folder name is the time to hour.
- - `processes_%M`: The file ends with the minute of the checking time.
- - `node_disk_usage.csv`: Detailed disk usage of the node.
- - `node_resource_usage.csv`: Resource usage overview of the node.
- - `processes_resource_usage.csv`: Resource usage overview of each process.
-
-### How to log in scoring script
-
-You can use Python logging in your scoring script. Logs are stored in `logs/user/stdout/<node_id>/processNNN.stdout.txt`.
-
-```python
-import argparse
-import logging
-
-# Get logging_level
-arg_parser = argparse.ArgumentParser(description="Argument parser.")
-arg_parser.add_argument("--logging_level", type=str, help="logging level")
-args, unknown_args = arg_parser.parse_known_args()
-print(args.logging_level)
-
-# Initialize Python logger
-logger = logging.getLogger(__name__)
-logger.setLevel(args.logging_level.upper())
-logger.info("Info log statement")
-logger.debug("Debug log statement")
-```
machine-learning How To Use Batch Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint-sdk-v2.md
- Title: 'Use batch endpoints for batch scoring using Python SDK v2'-
-description: In this article, learn how to create a batch endpoint to continuously batch score large data using Python SDK v2.
------- Previously updated : 05/25/2022-
-#Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
--
-# Use batch endpoints for batch scoring using Python SDK v2
--
-Learn how to use batch endpoints to do batch scoring using Python SDK v2. Batch endpoints simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. For more information, see [What are Azure Machine Learning endpoints?](concept-endpoints.md).
-
-In this article, you'll learn to:
-
-* Connect to your Azure machine learning workspace from the Python SDK v2.
-* Create a batch endpoint from Python SDK v2.
-* Create deployments on that endpoint from Python SDK v2.
-* Test a deployment with a sample request.
-
-## Prerequisites
-
-* A basic understanding of Machine Learning.
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-* An Azure ML workspace with a compute cluster to run your batch scoring job.
-* The [Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2).
-
-### Clone examples repository
-
-To run the examples, first clone the examples repository and change into the `sdk` directory:
-
-```bash
-git clone --depth 1 https://github.com/Azure/azureml-examples
-cd azureml-examples/sdk
-```
-
-> [!TIP]
-> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation.
-
-## Connect to Azure Machine Learning workspace
-
-The [workspace](concept-workspace.md) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which the job will be run.
-
-1. Import the required libraries:
-
- ```python
- # import required libraries
- from azure.ai.ml import MLClient, Input
- from azure.ai.ml.entities import (
- AmlCompute,
- BatchEndpoint,
- BatchDeployment,
- Model,
- Environment,
- BatchRetrySettings,
- )
- from azure.ai.ml.entities._assets import Dataset
- from azure.identity import DefaultAzureCredential
- from azure.ai.ml.constants import BatchDeploymentOutputAction
- ```
-
-1. Configure workspace details and get a handle to the workspace:
-
- To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).
-
- ```python
- # enter details of your AzureML workspace
- subscription_id = "<SUBSCRIPTION_ID>"
- resource_group = "<RESOURCE_GROUP>"
- workspace = "<AZUREML_WORKSPACE_NAME>"
- ```
-
- ```python
- # get a handle to the workspace
- ml_client = MLClient(
- DefaultAzureCredential(), subscription_id, resource_group, workspace
- )
- ```
-
-## Create batch endpoint
-
-Batch endpoints are endpoints that are used for batch inferencing on large volumes of data over a period of time. Batch endpoints receive pointers to data and run jobs asynchronously to process the data in parallel on compute clusters. Batch endpoints store outputs to a data store for further analysis.
-
-To create a batch endpoint, we'll use `BatchEndpoint`. This class allows you to configure the following key aspects:
-
-* `name` - Name of the endpoint. Needs to be unique at the Azure region level
-* `auth_mode` - The authentication method for the endpoint. Currently only Azure Active Directory (Azure AD) token-based (`aad_token`) authentication is supported.
-* `identity`- The managed identity configuration for accessing Azure resources for endpoint provisioning and inference.
-* `defaults` - Default settings for the endpoint.
- * `deployment_name` - Name of the deployment that will serve as the default deployment for the endpoint.
-* `description`- Description of the endpoint.
-
-1. Configure the endpoint:
-
- ```python
- # Creating a unique endpoint name with current datetime to avoid conflicts
- import datetime
-
- batch_endpoint_name = "my-batch-endpoint-" + datetime.datetime.now().strftime(
- "%Y%m%d%H%M"
- )
-
- # create a batch endpoint
- endpoint = BatchEndpoint(
- name=batch_endpoint_name,
- description="this is a sample batch endpoint",
- tags={"foo": "bar"},
- )
- ```
-
-1. Create the endpoint:
-
- Using the `MLClient` created earlier, we'll now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
-
- ```python
- ml_client.begin_create_or_update(endpoint)
- ```
-
-## Create batch compute
-
-Batch endpoints run only on cloud computing resources, not locally. The cloud computing resource is a reusable virtual computer cluster. Run the following code to create an Azure Machine Learning compute cluster. The following examples in this article use the compute created here named `cpu-cluster`.
-
-```python
-compute_name = "cpu-cluster"
-compute_cluster = AmlCompute(name=compute_name, description="amlcompute", min_instances=0, max_instances=5)
-ml_client.begin_create_or_update(compute_cluster)
-```
-
-## Create a deployment
-
-A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `BatchDeployment` class. This class allows you to configure the following key aspects:
-
-* `name` - Name of the deployment.
-* `endpoint_name` - Name of the endpoint to create the deployment under.
-* `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
-* `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
-* `code_path`- Path to the source code directory for scoring the model
-* `scoring_script` - Relative path to the scoring file in the source code directory
-* `compute` - Name of the compute target to execute the batch scoring jobs on
-* `instance_count`- The number of nodes to use for each batch scoring job.
-* `max_concurrency_per_instance`- The maximum number of parallel scoring_script runs per instance.
-* `mini_batch_size` - The number of files the `code_configuration.scoring_script` can process in one `run()` call.
-* `retry_settings`- Retry settings for scoring each mini batch.
- * `max_retries`- The maximum number of retries for a failed or timed-out mini batch (default is 3)
- * `timeout`- The timeout in seconds for scoring a mini batch (default is 30)
-* `output_action`- Indicates how the output should be organized in the output file. Allowed values are `append_row` or `summary_only`. Default is `append_row`
-* `output_file_name`- Name of the batch scoring output file. Default is `predictions.csv`
-* `environment_variables`- Dictionary of environment variable name-value pairs to set for each batch scoring job.
-* `logging_level`- The log verbosity level. Allowed values are `warning`, `info`, `debug`. Default is `info`.
-
-1. Configure the deployment:
-
- ```python
- # create a batch deployment
- model = Model(path="./mnist/model/")
- env = Environment(
- conda_file="./mnist/environment/conda.yml",
- image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
- )
- deployment = BatchDeployment(
- name="non-mlflow-deployment",
- description="this is a sample non-mlflow deployment",
- endpoint_name=batch_endpoint_name,
- model=model,
- code_path="./mnist/code/",
- scoring_script="digit_identification.py",
- environment=env,
- compute=compute_name,
- instance_count=2,
- max_concurrency_per_instance=2,
- mini_batch_size=10,
- output_action=BatchDeploymentOutputAction.APPEND_ROW,
- output_file_name="predictions.csv",
- retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
- logging_level="info",
- )
- ```
-
-1. Create the deployment:
-
- Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
-
- ```python
- ml_client.begin_create_or_update(deployment)
- ```
-
-## Test the endpoint with sample data
-
-Using the `MLClient` created earlier, we'll get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:
-
-* `endpoint_name` - Name of the endpoint
-* `input` - Path where input data is present
-* `deployment_name` - Name of the specific deployment to test in an endpoint
-
-1. Invoke the endpoint:
-
- ```python
- # create a dataset from the folder path
- input = Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist")
-
- # invoke the endpoint for batch scoring job
- job = ml_client.batch_endpoints.invoke(
- endpoint_name=batch_endpoint_name,
- input=input,
- deployment_name="non-mlflow-deployment", # name is required as default deployment is not set
- params_override=[{"mini_batch_size": "20"}, {"compute.instance_count": "4"}],
- )
- ```
-
-1. Get the details of the invoked job:
-
- Let us get details and logs of the invoked job
-
- ```python
- # get the details of the job
- job_name = job.name
- batch_job = ml_client.jobs.get(name=job_name)
- print(batch_job.status)
- # stream the job logs
- ml_client.jobs.stream(name=job_name)
- ```
-
-## Clean up resources
-
-Delete endpoint
-
-```python
-ml_client.batch_endpoints.begin_delete(name=batch_endpoint_name)
-```
-
-Delete compute: optional, as you may choose to reuse your compute cluster with later deployments.
-
-```python
-ml_client.compute.begin_delete(name=compute_name)
-```
-
-## Next steps
-
-If you encounter problems using batch endpoints, see [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
- Title: 'Use batch endpoints for batch scoring'-
-description: In this article, learn how to create a batch endpoint to continuously batch score large data.
------- Previously updated : 05/24/2022-
-#Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
--
-# Use batch endpoints for batch scoring
---
-Learn how to use batch endpoints to do batch scoring. Batch endpoints simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. For more information, see [What are Azure Machine Learning endpoints?](concept-endpoints.md).
-
-In this article, you learn to do the following tasks:
-
-> [!div class="checklist"]
-> * Create a batch endpoint and a default batch deployment
-> * Start a batch scoring job using Azure CLI
-> * Monitor batch scoring job execution progress and check scoring results
-> * Deploy a new MLflow model with auto generated code and environment to an existing endpoint without impacting the existing flow
-> * Test the new deployment and set it as the default deployment
-> * Delete the not in-use endpoint and deployment
---
-## Prerequisites
-
-* You must have an Azure subscription to use Azure Machine Learning. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-
-* Install the Azure CLI and the `ml` extension. Follow the installation steps in [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-
-* Create an Azure resource group if you don't have one, and you (or the service principal you use) must have `Contributor` permission. For resource group creation, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-
-* Create an Azure Machine Learning workspace if you don't have one. For workspace creation, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-
-* Configure your default workspace and resource group for the Azure CLI. Machine Learning CLI commands require the `--workspace/-w` and `--resource-group/-g` parameters. Configuring the defaults avoids passing in the values multiple times. You can override these on the command line. Run the following code to set up your defaults. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-
-```azurecli
-az account set -s "<subscription ID>"
-az configure --defaults group="<resource group>" workspace="<workspace name>" location="<location>"
-```
-
-### Clone the example repository
-
-Run the following commands to clone the [AzureML Example repository](https://github.com/Azure/azureml-examples) and go to the `cli` directory. This article uses the assets in `/cli/endpoints/batch`, and the end to end working example is `/cli/batch-score.sh`.
-
-```azurecli
-git clone https://github.com/Azure/azureml-examples
-cd azureml-examples/cli
-```
-
-Set your endpoint name. Replace `YOUR_ENDPOINT_NAME` with a unique name within an Azure region.
-
-For Unix, run this command:
--
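The Unix snippet here is included from the examples repository in the source article; it amounts to exporting an environment variable, mirroring the Windows example below. A minimal sketch:

```azurecli
export ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>"
```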
-For Windows, run this command:
-
-```azurecli
-set ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>"
-```
-
-> [!NOTE]
-> Batch endpoint names need to be unique within an Azure region. For example, there can be only one batch endpoint with the name mybatchendpoint in westus2.
-
-### Create compute
-
-Batch endpoints run only on cloud computing resources, not locally. The cloud computing resource is a reusable virtual computer cluster. Run the following code to create an Azure Machine Learning compute cluster. The following examples in this article use the compute created here named `batch-cluster`. Adjust as needed and reference your compute using `azureml:<your-compute-name>`.
--
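The compute-creation snippet is included from the examples repository (`/cli/batch-score.sh`); a representative sketch of the command, with illustrative cluster name and instance counts, is:

```azurecli
az ml compute create --name batch-cluster --type amlcompute --min-instances 0 --max-instances 5
```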
-> [!NOTE]
-> You are not charged for compute at this point as the cluster will remain at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. Learn more about [manage and optimize cost for AmlCompute](how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
-
-## Understand batch endpoints and batch deployments
-
-A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch scoring job. A batch scoring job is a job that scores multiple inputs (for more, see [What are batch endpoints?](concept-endpoints.md#what-are-batch-endpoints)). A batch deployment is a set of compute resources hosting the model that does the actual batch scoring. One batch endpoint can have multiple batch deployments.
-
-> [!TIP]
-> One of the batch deployments will serve as the default deployment for the endpoint. The default deployment will be used to do the actual batch scoring when the endpoint is invoked. Learn more about [batch endpoints and batch deployment](concept-endpoints.md#what-are-batch-endpoints).
-
-The following YAML file defines a batch endpoint, which you can include in the CLI command for [batch endpoint creation](#create-a-batch-endpoint). In the repository, this file is located at `/cli/endpoints/batch/batch-endpoint.yml`.
--
-The following table describes the key properties of the endpoint YAML. For the full batch endpoint YAML schema, see [CLI (v2) batch endpoint YAML schema](./reference-yaml-endpoint-batch.md).
-
-| Key | Description |
-| | -- |
-| `$schema` | [Optional] The YAML schema. You can view the schema in the above example in a browser to see all available options for a batch endpoint YAML file. |
-| `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
-| `auth_mode` | The authentication method for the batch endpoint. Currently only Azure Active Directory token-based authentication (`aad_token`) is supported. |
-| `defaults.deployment_name` | The name of the deployment that will serve as the default deployment for the endpoint. |
-
-To create a batch deployment, you need all the following items:
-* Model files, or a registered model in your workspace referenced using `azureml:<model-name>:<model-version>`.
-* The code to score the model.
-* The environment in which the model runs. It can be a Docker image with Conda dependencies, or an environment already registered in your workspace referenced using `azureml:<environment-name>:<environment-version>`.
-* The pre-created compute referenced using `azureml:<compute-name>` and resource settings.
-
-For more information about how to reference an Azure ML entity, see [Referencing an Azure ML entity](reference-yaml-core-syntax.md#referencing-an-azure-ml-entity).
-
-The example repository contains all the required files. The following YAML file defines a batch deployment with all the required inputs and optional settings. You can include this file in your CLI command to [create your batch deployment](#create-a-batch-deployment). In the repository, this file is located at `/cli/endpoints/batch/nonmlflow-deployment.yml`.
--
-For the full batch deployment YAML schema, see [CLI (v2) batch deployment YAML schema](./reference-yaml-deployment-batch.md).
-
-| Key | Description |
-| | -- |
-| `$schema` | [Optional] The YAML schema. You can view the schema in the above example in a browser to see all available options for a batch deployment YAML file. |
-| `name` | The name of the deployment. |
-| `endpoint_name` | The name of the endpoint to create the deployment under. |
-| `model` | The model to be used for batch scoring. The example defines a model inline using `path`. Model files will be automatically uploaded and registered with an autogenerated name and version. Follow the [Model schema](reference-yaml-model.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the model separately and reference it here. To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. |
-| `code_configuration.code.path` | The local directory that contains all the Python source code to score the model. |
-| `code_configuration.scoring_script` | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at beginning of process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. For more information on how to author scoring script, see [Understanding the scoring script](how-to-use-batch-endpoint.md#understanding-the-scoring-script). |
-| `environment` | The environment to score the model. The example defines an environment inline using `conda_file` and `image`. The `conda_file` dependencies will be installed on top of the `image`. The environment will be automatically registered with an autogenerated name and version. Follow the [Environment schema](reference-yaml-environment.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the environment separately and reference it here. To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. |
-| `compute` | The compute to run batch scoring. The example uses the `batch-cluster` created at the beginning and references it using the `azureml:<compute-name>` syntax. |
-| `resources.instance_count` | The number of instances to be used for each batch scoring job. |
-| `max_concurrency_per_instance` | [Optional] The maximum number of parallel `scoring_script` runs per instance. |
-| `mini_batch_size` | [Optional] The number of files the `scoring_script` can process in one `run()` call. |
-| `output_action` | [Optional] How the output should be organized in the output file. `append_row` will merge all `run()` returned output results into one single file named `output_file_name`. `summary_only` won't merge the output results and only calculate `error_threshold`. |
-| `output_file_name` | [Optional] The name of the batch scoring output file for `append_row` `output_action`. |
-| `retry_settings.max_retries` | [Optional] The number of max tries for a failed `scoring_script` `run()`. |
-| `retry_settings.timeout` | [Optional] The timeout in seconds for a `scoring_script` `run()` for scoring a mini batch. |
-| `error_threshold` | [Optional] The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. |
-| `logging_level` | [Optional] Log verbosity. Values in increasing verbosity are: WARNING, INFO, and DEBUG. |
-
-### Understanding the scoring script
-
-As mentioned earlier, the `code_configuration.scoring_script` must contain two functions:
-- `init()`: Use this function for any costly or common preparation. For example, use it to load the model into a global object. This function will be called once at the beginning of the process.
-- `run(mini_batch)`: This function will be called for each `mini_batch` and do the actual scoring.
- - `mini_batch`: The `mini_batch` value is a list of file paths.
- `response`: The `run()` method should return a pandas DataFrame or an array. Each returned output element indicates one successful run of an input element in the input `mini_batch`. Make sure that enough data is included in your `run()` response to correlate the input with the output. The resulting DataFrame or array is populated according to this scoring script. It's up to you how much or how little information you'd like to output to correlate output values with the input value, for example, the array can represent a list of tuples containing both the model's output and input. There's no requirement on the cardinality of the results. All elements in the result DataFrame or array will be written to the output file as-is (given that the `output_action` isn't `summary_only`).
-
-The example uses `/cli/endpoints/batch/mnist/code/digit_identification.py`. The model is loaded in `init()` from `AZUREML_MODEL_DIR`, which is the path to the model folder created during deployment. `run(mini_batch)` iterates each file in `mini_batch`, does the actual model scoring and then returns output results.
-
-## Deploy with batch endpoints and run batch scoring
-
-Now, let's deploy the model with batch endpoints and run batch scoring.
-
-### Create a batch endpoint
-
-The simplest way to create a batch endpoint is to run the following code providing only a `--name`.
--
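The creation snippet is included from the examples repository (`/cli/batch-score.sh`); a representative form of the command, assuming the `ENDPOINT_NAME` variable set earlier, is:

```azurecli
az ml batch-endpoint create --name $ENDPOINT_NAME
```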
-You can also create a batch endpoint using a YAML file. Add `--file` parameter in above command and specify the YAML file path.
-
-### Create a batch deployment
-
-Run the following code to create a batch deployment named `nonmlflowdp` under the batch endpoint and set it as the default deployment.
--
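The deployment snippet is included from the examples repository; a representative form of the command, assuming the repository layout described above for the YAML path, is:

```azurecli
az ml batch-deployment create --name nonmlflowdp --endpoint-name $ENDPOINT_NAME --file endpoints/batch/nonmlflow-deployment.yml --set-default
```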
-> [!TIP]
-> The `--set-default` parameter sets the newly created deployment as the default deployment of the endpoint. It's a convenient way to create a new default deployment of the endpoint, especially for the first deployment creation. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later. For more information, see the [Deploy a new model](#deploy-a-new-model) section.
-
-### Check batch endpoint and deployment details
-
-Use `show` to check endpoint and deployment details.
-
-To check a batch deployment, run the following code:
--
-To check a batch endpoint, run the following code. As the newly created deployment is set as the default deployment, you should see `nonmlflowdp` in `defaults.deployment_name` from the response.
--
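Both `show` snippets are included from the examples repository; representative forms of the commands are:

```azurecli
# Check the batch deployment
az ml batch-deployment show --name nonmlflowdp --endpoint-name $ENDPOINT_NAME

# Check the batch endpoint; defaults.deployment_name should be nonmlflowdp
az ml batch-endpoint show --name $ENDPOINT_NAME
```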
-### Invoke the batch endpoint to start a batch scoring job
-
-Invoking a batch endpoint triggers a batch scoring job. A job `name` is returned from the invoke response and can be used to track the batch scoring progress. The batch scoring job runs for a period of time. It splits the entire input into multiple `mini_batch` sets and processes them in parallel on the compute cluster. One `scoring_script` `run()` call takes one `mini_batch` and processes it in a single process on an instance. The batch scoring job outputs are stored in cloud storage, either in the workspace's default blob storage or in the storage you specified.
-
-#### Invoke the batch endpoint with different input options
-
-You can use either the CLI or REST to `invoke` the endpoint. For the REST experience, see [Use batch endpoints with REST](how-to-deploy-batch-with-rest.md).
-
-There are several options to specify the data inputs in CLI `invoke`.
-
-* __Option 1-1: Data in the cloud__
-
- Use `--input` and `--input-type` to specify a file or folder on an Azure Machine Learning registered datastore or a publicly accessible path. When you're specifying a single file, use `--input-type uri_file`, and when you're specifying a folder, use `--input-type uri_folder`.
-
- When the file or folder is on an Azure ML registered datastore, the syntax for the URI is `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/` for a folder, and `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/<file-name>` for a specific file. When the file or folder is on a publicly accessible path, the syntax for the URI is `https://<public-path>/` for a folder, and `https://<public-path>/<file-name>` for a specific file.
-
- For more information about data URI, see [Azure Machine Learning data reference URI](reference-yaml-core-syntax.md#azure-ml-data-reference-uri).
-
- The example uses publicly available data in a folder from `https://pipelinedata.blob.core.windows.net/sampledata/mnist`, which contains thousands of hand-written digits. The name of the batch scoring job is returned from the invoke response. Run the following code to invoke the batch endpoint using this data. `--query name` is added to only return the job name from the invoke response, and it will be used later to [Monitor batch scoring job execution progress](#monitor-batch-scoring-job-execution-progress) and [Check batch scoring results](#check-batch-scoring-results). Remove `--query name -o tsv` if you want to see the full invoke response. For more information on the `--query` parameter, see [Query Azure CLI command output](/cli/azure/query-azure-cli).
-
- :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="start_batch_scoring_job" :::
-
-* __Option 1-2: Registered data asset__
-
- Use `--input` to pass in an Azure Machine Learning registered V2 data asset (with the type of either `uri_file` or `uri_folder`). You don't need to specify `--input-type` in this option. The syntax for this option is `azureml:<dataset-name>:<dataset-version>`.
-
- ```azurecli
- az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:<dataset-name>:<dataset-version>
- ```
-
-* __Option 2: Data stored locally__
-
- Use `--input` to pass in data files stored locally. You don't need to specify `--input-type` in this option. The data files will be automatically uploaded as a folder to Azure ML datastore, and passed to the batch scoring job.
-
- ```azurecli
- az ml batch-endpoint invoke --name $ENDPOINT_NAME --input <local-path>
- ```
-
-> [!NOTE]
-> - If you are using existing V1 FileDataset for batch endpoint, we recommend migrating them to V2 data assets and refer to them directly when invoking batch endpoints. Currently only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Dataset.
-> - You can also extract the URI or path on datastore extracted from V1 FileDataset by using `az ml dataset show` command with `--query` parameter and use that information for invoke.
-> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
-
-#### Configure the output location and overwrite settings
-
-The batch scoring results are by default stored in the workspace's default blob store within a folder named after the job name (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint. Use `--output-path` to configure any folder in an Azure Machine Learning registered datastore. The syntax for the `--output-path` is the same as `--input` when you're specifying a folder, that is, `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/`. The prefix `folder:` isn't required anymore. Use `--set output_file_name=<your-file-name>` to configure a new output file name if you prefer having one output file containing all scoring results (applicable when you specified `output_action=append_row` in your deployment YAML).
-
-> [!IMPORTANT]
-> You must use a unique output location. If the output file exists, the batch scoring job will fail.
-
-Some settings can be overwritten when you invoke the endpoint to make the best use of the compute resources and to improve performance:
-
-* Use `--instance-count` to overwrite `instance_count`. For example, for larger volume of data inputs, you may want to use more instances to speed up the end to end batch scoring.
-* Use `--mini-batch-size` to overwrite `mini_batch_size`. The number of mini batches is decided by total input file counts and mini_batch_size. Smaller mini_batch_size generates more mini batches. Mini batches can be run in parallel, but there might be extra scheduling and invocation overhead.
-* Use `--set` to overwrite other settings including `max_retries`, `timeout`, and `error_threshold`. These settings might impact the end to end batch scoring time for different workloads.
-
-To specify the output location and overwrite settings when you invoke the endpoint, run the following code. The example stores the outputs in a folder with the same name as the endpoint in the workspace's default blob storage, and also uses a random file name to ensure that the output location is unique. The code should work on Unix. Replace it with your own unique folder and file name.
--
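The snippet is included from the examples repository; a sketch of the command is shown below. The datastore path and the overridden values (`--mini-batch-size`, `--instance-count`, and the output file name) are illustrative:

```azurecli
export OUTPUT_FILE_NAME=predictions_$RANDOM.csv

az ml batch-endpoint invoke --name $ENDPOINT_NAME \
    --input https://pipelinedata.blob.core.windows.net/sampledata/mnist --input-type uri_folder \
    --output-path azureml://datastores/workspaceblobstore/paths/$ENDPOINT_NAME \
    --set output_file_name=$OUTPUT_FILE_NAME \
    --mini-batch-size 20 --instance-count 5 \
    --query name -o tsv
```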
-### Monitor batch scoring job execution progress
-
-Batch scoring jobs usually take some time to process the entire set of inputs.
-
-You can use CLI `job show` to view the job. Run the following code to check job status from the previous endpoint invoke. To learn more about job commands, run `az ml job -h`.
--
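The snippet is included from the examples repository; assuming the job name returned by `invoke` was stored in a `JOB_NAME` variable, a representative check is:

```azurecli
STATUS=$(az ml job show --name $JOB_NAME --query status -o tsv)
echo $STATUS
```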
-### Check batch scoring results
-
-Follow these steps to view the scoring results in Azure Storage Explorer when the job is completed:
-
-1. Run the following code to open batch scoring job in Azure Machine Learning studio. The job studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
-
- :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="show_job_in_studio" :::
-
-1. In the graph of the job, select the `batchscoring` step.
-1. Select the __Outputs + logs__ tab and then select **Show data outputs**.
-1. From __Data outputs__, select the icon to open __Storage Explorer__.
--
-The scoring results in Storage Explorer are similar to the following sample page:
--
-## Deploy a new model
-
-Once you have a batch endpoint, you can continue to refine your model and add new deployments.
-
-### Create a new batch deployment hosting an MLflow model
-
-To create a new batch deployment under the existing batch endpoint but not set it as the default deployment, run the following code:
--
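The snippet is included from the examples repository; a representative form of the command (note that `--set-default` is intentionally omitted) is:

```azurecli
az ml batch-deployment create --name mlflowdp --endpoint-name $ENDPOINT_NAME --file endpoints/batch/mlflow-deployment.yml
```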
-Notice that `--set-default` isn't used. If you `show` the batch endpoint again, you should see no change of the `defaults.deployment_name`.
-
-The example uses a model (`/cli/endpoints/batch/autolog_nyc_taxi`) trained and tracked with MLflow. `scoring_script` and `environment` can be auto generated using the model's metadata, so there's no need to specify them in the YAML file. For more about MLflow, see [Train and track ML models with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
-
-Below is the YAML file the example uses to deploy an MLflow model, which only contains the minimum required properties. The source file in repository is `/cli/endpoints/batch/mlflow-deployment.yml`.
--
-> [!NOTE]
-> `scoring_script` and `environment` auto generation only supports Python Function model flavor and column-based model signature.
-
-### Test a non-default batch deployment
-
-To test the new non-default deployment, run the following code. The example uses a different model that accepts a publicly available csv file from `https://pipelinedata.blob.core.windows.net/sampledata/nytaxi/taxi-tip-data.csv`.
--
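The snippet is included from the examples repository; a representative invocation against the non-default deployment is:

```azurecli
az ml batch-endpoint invoke --name $ENDPOINT_NAME --deployment-name mlflowdp \
    --input https://pipelinedata.blob.core.windows.net/sampledata/nytaxi/taxi-tip-data.csv --input-type uri_file \
    --query name -o tsv
```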
-Notice `--deployment-name` is used to specify the new deployment name. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
-
-### Update the default batch deployment
-
-To update the default batch deployment of the endpoint, run the following code:
--
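The snippet is included from the examples repository; a representative form of the update is:

```azurecli
az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=mlflowdp
```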
-Now, if you `show` the batch endpoint again, you should see `defaults.deployment_name` is set to `mlflowdp`. You can `invoke` the batch endpoint directly without the `--deployment-name` parameter.
-
-### (Optional) Update the deployment
-
-If you want to update the deployment (for example, update code, model, environment, or settings), update the YAML file, and then run `az ml batch-deployment update`. You can also update without the YAML file by using `--set`. Check `az ml batch-deployment update -h` for more information.
-
-## Delete the batch endpoint and the deployment
-
-If you aren't going to use the old batch deployment, you should delete it by running the following code. `--yes` is used to confirm the deletion.
--
-Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
--
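Both delete snippets are included from the examples repository; representative forms of the commands are:

```azurecli
# Delete the unused deployment
az ml batch-deployment delete --name nonmlflowdp --endpoint-name $ENDPOINT_NAME --yes

# Delete the endpoint and all underlying deployments (batch scoring jobs are not deleted)
az ml batch-endpoint delete --name $ENDPOINT_NAME --yes
```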
-## Next steps
-
-* [Batch endpoints in studio](how-to-use-batch-endpoints-studio.md)
-* [Deploy models with REST for batch scoring](how-to-deploy-batch-with-rest.md)
-* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md)
machine-learning How To Use Batch Endpoints Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoints-studio.md
- Title: 'How to use batch endpoints in studio'-
-description: In this article, learn how to create a batch endpoint in Azure Machine Learning studio. Batch endpoints are used to continuously batch score large data.
------- Previously updated : 08/03/2022---
-# How to use batch endpoints in Azure Machine Learning studio
-
-In this article, you learn how to use batch endpoints to do batch scoring in [Azure Machine Learning studio](https://ml.azure.com). For more, see [What are Azure Machine Learning endpoints?](concept-endpoints.md).
---
-In this article, you learn about:
-
-> [!div class="checklist"]
-> * Create a batch endpoint with a no-code experience for MLflow model
-> * Check batch endpoint details
-> * Start a batch scoring job
-> * Overview of batch endpoint features in Azure machine learning studio
-
-> [!IMPORTANT]
-> When working in a private link-enabled workspace, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI. Use the [Azure ML CLI (v2)](how-to-configure-cli.md) instead for job creation. For more details about how to use it, see [Invoke the batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-to-start-a-batch-scoring-job).
-
-## Prerequisites
-
-* An Azure subscription - If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-
-* The example repository - Clone the [AzureML Example repository](https://github.com/Azure/azureml-examples). This article uses the assets in `/cli/endpoints/batch`.
-
-* A compute target where you can run batch scoring workflows. For more information on creating a compute target, see [Create compute targets in Azure Machine Learning studio](how-to-create-attach-compute-studio.md).
-
-* A registered machine learning model.
-
-## Create a batch endpoint
-
-There are two ways to create Batch Endpoints in Azure Machine Learning studio:
-
-* From the **Endpoints** page, select **Batch Endpoints** and then select **+ Create**.
-
- :::image type="content" source="media/how-to-use-batch-endpoints-studio/create-batch-endpoints.png" alt-text="Screenshot of creating a batch endpoint/deployment from Endpoints page":::
-
-OR
-
-* From the **Models** page, select the model you want to deploy and then select **Deploy to batch endpoint**.
-
- :::image type="content" source="media/how-to-use-batch-endpoints-studio/models-page-deployment.png" alt-text="Screenshot of creating a batch endpoint/deployment from Models page":::
-
-> [!TIP]
-> If you're using an MLflow model, you can use no-code batch endpoint creation. That is, you don't need to prepare a scoring script and environment, both can be auto generated. For more, see [Train and track ML models with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
->
-> :::image type="content" source="media/how-to-use-batch-endpoints-studio/mlflow-model-wizard.png" alt-text="Screenshot of deploying an MLflow model":::
-
-Complete all the steps in the wizard to create a batch endpoint and deployment.
--
-## Check batch endpoint details
-
-After a batch endpoint is created, select it from the **Endpoints** page to view the details.
--
-## Start a batch scoring job
-
-A batch scoring workload runs as an offline job. By default, batch scoring stores the scoring outputs in blob storage. You can also configure the output location and overwrite some of the settings to get the best performance.
-
-1. Select **+ Create job**:
-
- :::image type="content" source="media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring":::
-
-1. You can update the default deployment while submitting a job from the drop-down:
-
- :::image type="content" source="media/how-to-use-batch-endpoints-studio/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job":::
-
-### Overwrite settings
-
-Some settings can be overwritten when you start a batch scoring job. For example, you might overwrite settings to make better use of the compute resource, or to improve performance. To override settings, select __Override deployment settings__ and provide the settings. For more information, see [Use batch endpoints](how-to-use-batch-endpoint.md#configure-the-output-location-and-overwrite-settings).
--
-### Start a batch scoring job with different input options
-
-You have two options to specify the data inputs in Azure machine learning studio:
-
-* Use a **registered dataset**:
-
- > [!NOTE]
- > During Preview, only FileDataset is supported.
-
- :::image type="content" source="media/how-to-use-batch-endpoints-studio/select-dataset-for-job.png" alt-text="Screenshot of selecting registered dataset as an input option":::
-
-OR
-
-* Use a **datastore**:
-
- You can specify an AzureML registered datastore or, if your data is publicly available, specify the public path.
-
- :::image type="content" source="media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option":::
-
-### Configure the output location
-
-By default, the batch scoring results are stored in the default blob store for the workspace. Results are in a folder named after the job name (a system-generated GUID).
-
-To change where the results are stored, provide a blob store and output path when you start a job.
-
-> [!IMPORTANT]
-> You must use a unique output location. If the output file exists, the batch scoring job will fail.
--
-### Summary of all submitted jobs
-
-To see a summary of all the submitted jobs for an endpoint, select the endpoint and then select the **Runs** tab.
-
-## Check batch scoring results
-
-To learn how to view the scoring results, see [Use batch endpoints](how-to-use-batch-endpoint.md#check-batch-scoring-results).
-
-## Add a deployment to an existing batch endpoint
-
-In Azure machine learning studio, there are two ways to add a deployment to an existing batch endpoint:
-
-* From the **Endpoints** page, select the batch endpoint to add a new deployment to. Select **+ Add deployment**, and complete the wizard to add a new deployment.
-
- :::image type="content" source="media/how-to-use-batch-endpoints-studio/add-deployment-option.png" alt-text="Screenshot of add new deployment option":::
-
-OR
-
-* From the **Models** page, select the model you want to deploy. Then select **Deploy to batch endpoint** option from the drop-down. In the wizard, on the **Endpoint** screen, select **Existing**. Complete the wizard to add the new deployment.
-
- :::image type="content" source="media/how-to-use-batch-endpoints-studio/add-deployment-models-page.png" alt-text="Screenshot of selecting an existing batch endpoint to add new deployment":::
-
-## Update the default deployment
-
-If an endpoint has multiple deployments, one of the deployments is the *default*. The default deployment receives 100% of the traffic to the endpoint. To change the default deployment, use the following steps:
-
-1. Select the endpoint from the **Endpoints** page.
-1. Select **Update default deployment**. From the **Details** tab, select the deployment you want to set as default and then select **Update**.
- :::image type="content" source="media/how-to-use-batch-endpoints-studio/update-default-deployment.png" alt-text="Screenshot of updating default deployment":::
-
-## Delete batch endpoint and deployments
-
-To delete an **endpoint**, select the endpoint from the **Endpoints** page and then select delete.
-
-> [!WARNING]
-> Deleting an endpoint also deletes all deployments to that endpoint.
-
-To delete a **deployment**, select the endpoint from the **Endpoints** page, select the deployment, and then select delete.
-
-## Next steps
-
-In this article, you learned how to create and call batch endpoints. See these other articles to learn more about Azure Machine Learning:
-
-* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md)
-* [Deploy and score a machine learning model with a managed online endpoint](how-to-deploy-managed-online-endpoints.md)
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
Last updated 07/01/2022 -+ # Track Azure Databricks ML experiments with MLflow and Azure Machine Learning
You have to configure the MLflow tracking URI to point exclusively to Azure Mach
# [Using the Azure ML SDK v2](#tab/azuremlsdk)
- [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v2.md)]
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you are using. The following sample gets the unique MLflow tracking URI associated with your workspace. Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI.
machine-learning How To Workspace Diagnostic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-workspace-diagnostic-api.md
Last updated 09/14/2022 -+ # How to use workspace diagnostics
machine-learning Overview What Is Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md
To bring a model into production, it's deployed. Azure Machine Learning's manage
See: * [Deploy a model with a real-time managed endpoint](how-to-deploy-managed-online-endpoints.md)
- * [Use batch endpoints for scoring](how-to-use-batch-endpoint.md)
+ * [Use batch endpoints for scoring](batch-inference/how-to-use-batch-endpoint.md)
## MLOps: DevOps for machine learning
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `model` | string or object | **Required.** The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. <br><br> To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. <br><br> To define a model inline, follow the [Model schema](reference-yaml-model.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the model separately and reference it here. | | | | `code_configuration` | object | Configuration for the scoring code logic. <br><br> This property is not required if your model is in MLflow format. | | | | `code_configuration.code` | string | The local directory that contains all the Python source code to score the model. | | |
-| `code_configuration.scoring_script` | string | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at beginning of process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. For more information on how to author scoring script, see [Understanding the scoring script](how-to-use-batch-endpoint.md#understanding-the-scoring-script).| | |
+| `code_configuration.scoring_script` | string | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, loading the model into memory). `init()` will be called only once at the beginning of the process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of an input element in the `mini_batch`. For more information on how to author a scoring script, see [Understanding the scoring script](batch-inference/how-to-use-batch-endpoint.md#understanding-the-scoring-script). A brief sketch of this contract follows the table. | | |
| `environment` | string or object | The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> This property is not required if your model is in MLflow format. <br><br> To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). <br><br> As a best practice for production scenarios, you should create the environment separately and reference it here. | | | | `compute` | string | **Required.** Name of the compute target to execute the batch scoring jobs on. This value should be a reference to an existing compute in the workspace using the `azureml:<compute-name>` syntax. | | | | `resources.instance_count` | integer | The number of nodes to use for each batch scoring job. | | `1` |
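
To make the `init()`/`run()` contract described in the table concrete, here's a minimal sketch of a batch scoring script. It isn't the article's own example; the model file name, the `AZUREML_MODEL_DIR` layout, and the CSV-based scoring logic are assumptions made for illustration:

```python
import os
from typing import List

import joblib
import pandas as pd

model = None


def init():
    """Called once per worker process; load the model into memory here."""
    global model
    # AZUREML_MODEL_DIR points at the deployed model folder; "model.pkl" is a placeholder name.
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model.pkl")
    model = joblib.load(model_path)


def run(mini_batch: List[str]) -> pd.DataFrame:
    """Score each file path in the mini batch and return one row per processed file."""
    results = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        predictions = model.predict(data)
        results.append(
            {"file": os.path.basename(file_path), "prediction_count": len(predictions)}
        )
    # One returned element per successfully processed input element.
    return pd.DataFrame(results)
```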
Examples are available in the [examples GitHub repository](https://github.com/Az
## YAML: basic (MLflow) ## YAML: custom model and scoring code ## Next steps
machine-learning Reference Yaml Endpoint Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-batch.md
Examples are available in the [examples GitHub repository](https://github.com/Az
## YAML: basic ## Next steps
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-log-view-metrics.md
Previously updated : 04/19/2021 Last updated : 10/26/2022
This example performs a parameter sweep over alpha values and captures the resul
1. Create a training script that includes the logging logic, `train.py`.
- [!code-python[](~/MachineLearningNotebooks/how-to-use-azureml/training/train-on-local/train.py)]
+ [!code-python[](~/MachineLearningNotebooks/how-to-use-azureml/training/train-on-local/scripts/train.py)]
1. Submit the ```train.py``` script to run in a user-managed environment. The entire script folder is submitted for training.
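
For orientation, here's a minimal sketch of the kind of logging logic such a `train.py` might contain, using the v1 `azureml-core` `Run` API. The dataset, model, and metric names are illustrative and not taken from the script referenced above:

```python
import numpy as np
from azureml.core import Run
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Get a handle to the current run (works offline and when submitted as an experiment).
run = Run.get_context()

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Sweep over alpha values and log a metric for each one.
for alpha in np.arange(0.1, 1.0, 0.1):
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    run.log("alpha", float(alpha))
    run.log("mse", float(mse))
```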
migrate How To Modify Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-modify-assessment.md
Here's what's included in Azure SQL assessment properties:
To edit assessment properties after creating an assessment, do the following:
-1. In the Azure Migrate project, select **Servers**.
+1. In the Azure Migrate project, select **Servers, databases and web apps**.
2. In **Azure Migrate: Discovery and assessment**, select the assessments count. 3. In **Assessment**, select the relevant assessment > **Edit properties**. 5. Customize the assessment properties in accordance with the tables above.
mysql Quickstart Mysql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-mysql-github-actions.md
Previously updated : 06/20/2022 Last updated : 10/26/2022 # Quickstart: Use GitHub Actions to connect to Azure MySQL
The file has two sections:
|**Deploy** | 1. Deploy the database. | ## Generate deployment credentials
-# [Service principal](#tab/userlevel)
-You can create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac&preserve-view=true) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
-
-Replace the placeholders `server-name` with the name of your MySQL server hosted on Azure. Replace the `subscription-id` and `resource-group` with the subscription ID and resource group connected to your MySQL server.
-
-```azurecli-interactive
- az ad sp create-for-rbac --name {server-name} --role contributor \
- --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} \
- --sdk-auth
-```
-
-The output is a JSON object with the role assignment credentials that provide access to your database similar to below. Copy this output JSON object for later.
-
-```output
- {
- "clientId": "<GUID>",
- "clientSecret": "<GUID>",
- "subscriptionId": "<GUID>",
- "tenantId": "<GUID>",
- (...)
- }
-```
-
-> [!IMPORTANT]
-> It's always a good practice to grant minimum access. The scope in the previous example is limited to the specific server and not the entire resource group.
-
-# [OpenID Connect](#tab/openid)
-
-OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is more complex process that offers hardened security.
-
-1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
-
- ```azurecli-interactive
- az ad app create --display-name myApp
- ```
-
- This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
-
- You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
-
-1. Create a service principal. Replace the `$appID` with the appId from your JSON output.
-
- This command generates JSON output with a different `objectId` and will be used in the next step. The new `objectId` is the `assignee-object-id`.
-
- Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
-
- ```azurecli-interactive
- az ad sp create --id $appId
- ```
-
-1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
-
- ```azurecli-interactive
- az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/
- ```
-
-1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
-
- * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
- * Set a value for `CREDENTIAL-NAME` to reference later.
- * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
- * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
- * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
- * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
-
- ```azurecli
- az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
- ```
-
- To learn how to create a Create an active directory application, service principal, and federated credentials in Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
-
- ## Copy the MySQL connection string
In the Azure portal, go to your Azure Database for MySQL server and open **Setti
You'll use the connection string as a GitHub secret. ## Configure GitHub secrets
-# [Service principal](#tab/userlevel)
-
-1. In [GitHub](https://github.com/), browse your repository.
-
-2. Select **Settings > Secrets > New secret**.
-3. Paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`.
-
- When you configure the workflow file later, you use the secret for the input `creds` of the Azure Login action. For example:
-
- ```yaml
- - uses: azure/login@v1
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
- ```
-
-4. Select **New secret** again.
-
-5. Paste the connection string value into the secret's value field. Give the secret the name `AZURE_MYSQL_CONNECTION_STRING`.
-
-# [OpenID Connect](#tab/openid)
-
-You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
-
-1. Open your GitHub repository and go to **Settings**.
-
-1. Select **Settings > Secrets > New secret**.
-
-1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
-
- |GitHub Secret | Active Directory Application |
- |||
- |AZURE_CLIENT_ID | Application (client) ID |
- |AZURE_TENANT_ID | Directory (tenant) ID |
- |AZURE_SUBSCRIPTION_ID | Subscription ID |
-
-1. Save each secret by selecting **Add secret**.
-- ## Add your workflow
network-watcher Network Insights Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-topology.md
# Topology (Preview)
-Topology provides a visualization of the entire hybrid network for understanding network configuration. It provides an interactive interface to view resources and their relationships in Azure spanning across multiple subscriptions, resource groups and locations. You can also drill down to a resource view for resources to view their component level visualization.
+Topology provides a visualization of the entire network for understanding network configuration. It provides an interactive interface to view resources and their relationships in Azure, spanning multiple subscriptions, resource groups, and locations. You can also drill down to a resource view to see a component-level visualization of an individual resource.
## Prerequisites
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
Connect-AzureAD -TenantId <customer tenant id>
```powershell New-AzureADServicePrincipal -AppId 5657e26c-cc92-45d9-bc47-9da6cfdb4ed9 ```
-This command will grant Azure Database for PostgreSQL Flexible Server Service Principal read access to customer tenant to request Graph API tokens for Azure AD validation tasks. AppID (5657e26c-cc92-45d9-bc47-9da6cfdb4ed) in the above command is the AppID for Azure Database for PostgreSQL Flexible Server Service.
+This command will grant the Azure Database for PostgreSQL Flexible Server service principal read access to the customer tenant so that it can request Graph API tokens for Azure AD validation tasks. The AppID (5657e26c-cc92-45d9-bc47-9da6cfdb4ed9) in the above command is the AppID for the Azure Database for PostgreSQL Flexible Server service.
### Step 3: Networking Requirements
postgresql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity-architecture.md
As client connects to the database, the connection string to the server resolves
The gateway service is hosted on a group of stateless compute nodes behind an IP address, which your client reaches first when connecting to an Azure Database for PostgreSQL server.
-As part of ongoing service maintenance, we will periodically refresh compute hardware hosting the gateways to ensure we provide the most secure and performant connectivity experience. When the gateway hardware is refreshed, a new ring of the compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for PostgreSQL servers and it will have a different IP address from older gateway rings in the same region to differentiate the traffic. The older gateway hardware continues serving existing servers but are planned for decommissioning in future. Before decommissioning a gateway hardware, customers running their servers and connecting to older gateway rings will be notified via email and in the Azure portal, three months in advance before decommissioning. The decommissioning of gateways can impact the connectivity to your servers if
+As part of ongoing service maintenance, we'll periodically refresh the compute hardware hosting the gateways to ensure we provide the most secure and performant connectivity experience. When the gateway hardware is refreshed, a new ring of compute nodes is built out first. This new ring serves the traffic for all newly created Azure Database for PostgreSQL servers, and it has a different IP address from older gateway rings in the same region to differentiate the traffic. The older gateway hardware continues to serve existing servers but is planned for decommissioning in the future. Before a gateway hardware ring is decommissioned, customers whose servers connect to the older gateway rings are notified via email and in the Azure portal three months in advance. The decommissioning of gateways can impact the connectivity to your servers if
* You hard code the gateway IP addresses in the connection string of your application. It is **not recommended**. You should use the fully qualified domain name (FQDN) of your server, in the format `<servername>.postgres.database.azure.com`, in the connection string for your application. * You do not update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to reach our new gateway rings.
The following table lists the gateway IP addresses of the Azure Database for Pos
* **Gateway IP addresses:** This column lists the current IP addresses of the gateways hosted on the latest generation of hardware. If you are provisioning a new server, we recommend that you open the client-side firewall to allow outbound traffic for the IP addresses listed in this column. * **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is being decommissioned right now. If you are provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound firewall rule for these IP addresses, as we have not decommissioned them yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, you are expected to proactively add the new IP addresses listed in the Gateway IP addresses column to the outbound firewall rule as soon as you receive the notification for decommissioning. This ensures that when your server is migrated to the latest gateway hardware, there are no interruptions in connectivity to your server.
-* **Gateway IP addresses (decommissioned):** This columns lists the IP addresses of the gateway rings, which are decommissioned and are no longer in operations. You can safely remove these IP addresses from your outbound firewall rule.
+* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings that are decommissioned and no longer in operation. You can safely remove these IP addresses from your outbound firewall rule.
| **Region name** | **Gateway IP addresses** |**Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** | |:-|:-|:-|:|
Only Gateway nodes will be decommissioned. When users connect to their servers,
Ping your server's FQDN, for example ``ping xxx.postgres.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the document above, it means your connection is going through the old gateway. Conversely, if the returned IP address is one of the IPs listed under Gateway IP addresses, it means your connection is going through the new gateway.
-You may also test by [PSPing](/sysinternals/downloads/psping) or TCPPing the database server from your client application with port 3306 and ensure that return IP address isn't one of the decommissioning IP addresses
+You may also test by [PSPing](/sysinternals/downloads/psping) or TCPPing the database server from your client application with port 5432 and ensure that the returned IP address isn't one of the decommissioning IP addresses.
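
For example, assuming `mydemoserver` is a placeholder for your server name, the checks could look like this:

```console
ping mydemoserver.postgres.database.azure.com
psping mydemoserver.postgres.database.azure.com:5432
```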
### How do I know when the maintenance is over and will I get another notification when old IP addresses are decommissioned?
-You will receive an email to inform you when we will start the maintenance work. The maintenance can take up to one month depending on the number of servers we need to migrate in al regions. Please prepare your client to connect to the database server using the FQDN or using the new IP address from the table above.
+You will receive an email to inform you when we'll start the maintenance work. The maintenance can take up to one month depending on the number of servers we need to migrate in all regions. Please prepare your client to connect to the database server using the FQDN or using the new IP address from the table above.
-### What do I do if my client applications are still connecting to old gateway server ?
+### What do I do if my client applications are still connecting to old gateway server?
This indicates that your applications connect to the server using a static IP address instead of the FQDN. Review connection strings, connection pooling settings, AKS settings, or even the source code. ### Is there any impact for my application connections?
-This maintenance is just a DNS change, so it is transparent to the client. Once the DNS cache is refreshed in the client (automatically done by operation system), all the new connection will connect to the new IP address and all the existing connection will still working fine until the old IP address fully get decommissioned, which usually several weeks later. And the retry logic is not required for this case, but it is good to see the application have retry logic configured. Please either use FQDN to connect to the database server or enable list the new 'Gateway IP addresses' in your application connection string.
+This maintenance is just a DNS change, so it is transparent to the client. Once the DNS cache is refreshed in the client (done automatically by the operating system), all new connections will connect to the new IP address, and all existing connections will keep working until the old IP address is fully decommissioned, which is usually several weeks later. Retry logic isn't required for this case, but it's good practice to have retry logic configured in the application. Please either use the FQDN to connect to the database server or allowlist the new 'Gateway IP addresses' in your application connection string.
This maintenance operation will not drop the existing connections. It only makes new connection requests go to the new gateway ring. ### Can I request a specific time window for the maintenance? As the migration should be transparent and have no impact on customers' connectivity, we expect there will be no issues for the majority of users. Review your application proactively and ensure that you either use the FQDN to connect to the database server or allowlist the new 'Gateway IP addresses' in your application connection string.
-### I am using private link, will my connections get affected?
+### I'm using private link, will my connections get affected?
No, this is a gateway hardware decommission and has no relation to private link or private IP addresses; it will only affect the public IP addresses mentioned under the decommissioning IP addresses.
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Migrate (Microsoft.Migrate) / migrate projects, assessment project and discovery site | privatelink.prod.migration.windowsazure.com | prod.migration.windowsazure.com | | Azure API Management (Microsoft.ApiManagement/service) / gateway | privatelink.azure-api.net </br> privatelink.developer.azure-api.net | azure-api.net </br> developer.azure-api.net | | Microsoft PowerBI (Microsoft.PowerBI/privateLinkServicesForPowerBI) | privatelink.analysis.windows.net </br> privatelink.pbidedicated.windows.net </br> privatelink.tip1.powerquery.microsoft.com | analysis.windows.net </br> pbidedicated.windows.net </br> tip1.powerquery.microsoft.com |
-| Azure Bot Service (Microsoft.BotService/botServices) / Bot | botplinks.botframework.com | directline.botframework.com </br> europe.directline.botframework.com |
-| Azure Bot Service (Microsoft.BotService/botServices) / Token | bottoken.botframework.com | token.botframework.com </br> europe.directline.botframework.com |
+| Azure Bot Service (Microsoft.BotService/botServices) / Bot | privatelink.directline.botframework.com | directline.botframework.com </br> europe.directline.botframework.com |
+| Azure Bot Service (Microsoft.BotService/botServices) / Token | privatelink.token.botframework.com | token.botframework.com </br> europe.token.botframework.com |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
private-link Private Link Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-link-service-overview.md
Title: What is Azure Private Link service? description: Learn about Azure Private Link service. -+ Previously updated : 09/16/2019- Last updated : 10/27/2022+ # What is Azure Private Link service?
-Azure Private Link service is the reference to your own service that is powered by Azure Private Link. Your service that is running behind [Azure Standard Load Balancer](../load-balancer/load-balancer-overview.md) can be enabled for Private Link access so that consumers to your service can access it privately from their own VNets. Your customers can create a private endpoint inside their VNet and map it to this service. This article explains concepts related to the service provider side.
+Azure Private Link service is the reference to your own service that is powered by Azure Private Link. Your service that is running behind [Azure Standard Load Balancer](../load-balancer/load-balancer-overview.md) can be enabled for Private Link access so that consumers of your service can access it privately from their own VNets. Your customers can create a private endpoint inside their virtual network and map it to this service. This article explains concepts related to the service provider side.
:::image type="content" source="./media/private-link-service-overview/consumer-provider-endpoint.png" alt-text="Private link service workflow" border="true":::
Azure Private Link service is the reference to your own service that is powered
![Private Link service workflow](media/private-link-service-overview/private-link-service-workflow.png) - *Figure: Azure Private Link service workflow.* ### Create your Private Link Service - Configure your application to run behind a standard load balancer in your virtual network. If you already have your application configured behind a standard load balancer, you can skip this step. -- Create a Private Link Service referencing the load balancer above. In the load balancer selection process, choose the frontend IP configuration where you want to receive the traffic. Choose a subnet for NAT IP addresses for the Private Link Service. It is recommended to have at least eight NAT IP addresses available in the subnet. All consumer traffic will appear to originate from this pool of private IP addresses to the service provider. Choose the appropriate properties/settings for the Private Link Service. +
+- Create a Private Link Service referencing the load balancer above. In the load balancer selection process, choose the frontend IP configuration where you want to receive the traffic. Choose a subnet for NAT IP addresses for the Private Link Service. It's recommended to have at least eight NAT IP addresses available in the subnet. All consumer traffic will appear to originate from this pool of private IP addresses to the service provider. Choose the appropriate properties/settings for the Private Link Service.
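
As a rough sketch (not part of the original article), the same step can also be done with the Azure CLI. Every resource name below is a placeholder, and the subnet and load balancer frontend IP configuration must already exist:

```azurecli
az network private-link-service create \
  --resource-group myResourceGroup \
  --name myPrivateLinkService \
  --vnet-name myVirtualNetwork \
  --subnet myNatSubnet \
  --lb-name myStandardLoadBalancer \
  --lb-frontend-ip-configs myFrontEndConfig \
  --location eastus
```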
> [!NOTE] > Azure Private Link Service is only supported on Standard Load Balancer. ### Share your service
-After you create a Private Link service, Azure will generate a globally unique named moniker called "alias" based on the name you provide for your service. You can share either the alias or resource URI of your service with your customers offline. Consumers can start a Private Link connection using the alias or the resource URI.
+After you create a Private Link service, Azure will generate a globally unique named moniker called **alias** based on the name you provide for your service. You can share either the alias or resource URI of your service with your customers offline. Consumers can start a Private Link connection using the alias or the resource URI.
### Manage your connection requests
A Private Link service specifies the following properties:
|Property |Explanation | |||
-|Provisioning State (provisioningState) |A read-only property that lists the current provisioning state for Private Link service. Applicable provisioning states are: "Deleting; Failed; Succeeded; Updating". When the provisioning state is "Succeeded", you have successfully provisioned your Private Link service. |
+|Provisioning State (provisioningState) |A read-only property that lists the current provisioning state for Private Link service. Applicable provisioning states are: **Deleting**, **Failed**, **Succeeded**, **Updating**. When the provisioning state is **Succeeded**, you've successfully provisioned your Private Link service. |
|Alias (alias) | Alias is a globally unique read-only string for your service. It helps you mask the customer data for your service and at the same time creates an easy-to-share name for your service. When you create a Private Link service, Azure generates the alias for your service that you can share with your customers. Your customers can use this alias to request a connection to your service. |
-|Visibility (visibility) | Visibility is the property that controls the exposure settings for your Private Link service. Service providers can choose to limit the exposure to their service to subscriptions with Azure role-based access control (Azure RBAC) permissions, a restricted set of subscriptions, or all Azure subscriptions. |
+|Visibility (visibility) | Visibility is the property that controls the exposure settings for your Private Link service. Service providers can choose to limit the exposure to their service to subscriptions with Azure role-based access control permissions. A restricted set of subscriptions can also be used to limit exposure. |
|Auto Approval (autoApproval) | Auto-approval controls the automated access to the Private Link service. The subscriptions specified in the auto-approval list are approved automatically when a connection is requested from private endpoints in those subscriptions. |
-|Load Balancer Frontend IP Configuration (loadBalancerFrontendIpConfigurations) | Private Link service is tied to the frontend IP address of a Standard Load Balancer. All traffic destined for the service will reach the frontend of the SLB. You can configure SLB rules to direct this traffic to appropriate backend pools where your applications are running. Load balancer frontend IP configurations are different than NAT IP configurations. |
-|NAT IP Configuration (ipConfigurations) | This property refers to the NAT (Network Address Translation) IP configuration for the Private Link service. The NAT IP can be chosen from any subnet in a service provider's virtual network. Private Link service performs destination side NAT-ing on the Private Link traffic. This ensures that there is no IP conflict between source (consumer side) and destination (service provider) address space. On the destination side (service provider side), the NAT IP address will show up as Source IP for all packets received by your service and destination IP for all packets sent by your service. |
+|Load balancer frontend IP configuration (loadBalancerFrontendIpConfigurations) | Private Link service is tied to the frontend IP address of a Standard Load Balancer. All traffic destined for the service will reach the frontend of the SLB. You can configure SLB rules to direct this traffic to appropriate backend pools where your applications are running. Load balancer frontend IP configurations are different than NAT IP configurations. |
+|NAT IP configuration (ipConfigurations) | This property refers to the NAT (Network Address Translation) IP configuration for the Private Link service. The NAT IP can be chosen from any subnet in a service provider's virtual network. Private Link service performs destination side NAT-ing on the Private Link traffic. This NAT ensures that there's no IP conflict between source (consumer side) and destination (service provider) address space. On the destination or service provider side, the NAT IP address appears as the **source IP** for all packets received by your service and as the **destination IP** for all packets sent by your service. |
|Private endpoint connections (privateEndpointConnections) | This property lists the private endpoints connecting to the Private Link service. Multiple private endpoints can connect to the same Private Link service, and the service provider can control the state for individual private endpoints. | |TCP Proxy V2 (EnableProxyProtocol) | This property lets the service provider use TCP proxy v2 to retrieve connection information about the service consumer. The service provider is responsible for setting up receiver configurations that can parse the proxy protocol v2 header. |
-|||
- ### Details -- Private Link service can be accessed from approved private endpoints in any public region. The private endpoint can be reached from the same virtual network, regionally peered VNets, globally peered VNets and on premises using private VPN or ExpressRoute connections.
+- Private Link service can be accessed from approved private endpoints in any public region. The private endpoint can be reached from the same virtual network and regionally peered virtual networks. The private endpoint can be reached from globally peered virtual networks and on premises using private VPN or ExpressRoute connections.
-- When creating a Private Link Service, a network interface is created for the lifecycle of the resource. This interface is not manageable by the customer.
+- Upon creation of a Private Link Service, a network interface is created for the lifecycle of the resource. This interface isn't manageable by the customer.
- The Private Link Service must be deployed in the same region as the virtual network and the Standard Load Balancer. -- A single Private Link Service can be accessed from multiple Private Endpoints belonging to different VNets, subscriptions and/or Active Directory tenants. The connection is established through a connection workflow.
+- A single Private Link Service can be accessed from multiple Private Endpoints belonging to different virtual networks, subscriptions and/or Active Directory tenants. The connection is established through a connection workflow.
- Multiple Private Link services can be created on the same Standard Load Balancer using different front-end IP configurations. There are limits to the number of Private Link services you can create per Standard Load Balancer and per subscription. For details, seeΓÇ»[Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits). -- Private Link service can have more than one NAT IP configurations linked to it. Choosing more than one NAT IP configurations can help service providers to scale. Today, service providers can assign up to eight NAT IP addresses per Private Link service. With each NAT IP address, you can assign more ports for your TCP connections and thus scale out. After you add multiple NAT IP addresses to a Private Link service, you can't delete the NAT IP addresses. This is done to ensure that active connections are not impacted while deleting the NAT IP addresses.-
+- Private Link service can have more than one NAT IP configuration linked to it. Choosing more than one NAT IP configuration can help service providers to scale. Today, service providers can assign up to eight NAT IP addresses per Private Link service. With each NAT IP address, you can assign more ports for your TCP connections and thus scale out. After you add multiple NAT IP addresses to a Private Link service, you can't delete the NAT IP addresses. This restriction is in place to ensure that active connections aren't impacted while deleting the NAT IP addresses.
## Alias
A Private Link service specifies the following properties:
The alias is composed of three parts: *Prefix*.*GUID*.*Suffix* - Prefix is the service name. You can pick your own prefix. After "Alias" is created, you can't change it, so select your prefix appropriately. -- GUID will be provided by platform. This helps make the name globally unique. +
+- GUID will be provided by the platform. This GUID makes the name globally unique.
+ - Suffix is appended by Azure: *region*.azure.privatelinkservice Complete alias: *Prefix*. {GUID}.*region*.azure.privatelinkservice
Complete alias: *Prefix*. {GUID}.*region*.azure.privatelinkservice
The Private Link service provides you with three options in the **Visibility** setting to control the exposure of your service. Your visibility setting determines whether a consumer can connect to your service. Here are the visibility setting options, from most restrictive to least restrictive: -- **Role-based access control only**: If your service is for private consumption from different VNets that you own, you can use RBAC as an access control mechanism inside subscriptions that are associated with the same Active Directory tenant. Note: Cross tenant visibility is permitted through RBAC.
+- **Role-based access control only**: If your service is for private consumption from different virtual networks that you own, use role-based access control inside subscriptions that are associated with the same Active Directory tenant. **Cross tenant visibility is permitted through role-based access control**.
+ - **Restricted by subscription**: If your service will be consumed across different tenants, you can restrict the exposure to a limited set of subscriptions that you trust. Authorizations can be pre-approved.+ - **Anyone with your alias**: If you want to make your service public and allow anyone with your Private Link service alias to request a connection, select this option. ## Control service access
-Consumers having exposure (controlled by visibility setting) to your Private Link service can create a private endpoint in their VNets and request a connection to your Private Link service. The private endpoint connection will be created in a "Pending" state on the Private Link service object. The service provider is responsible for acting on the connection request. You can either approve the connection, reject the connection, or delete the connection. Only connections that are approved can send traffic to the Private Link service.
+Consumers that have exposure to your Private Link service (controlled by the visibility setting) can create a private endpoint in their virtual networks and request a connection to your Private Link service. The private endpoint connection will be created in a **Pending** state on the Private Link service object. The service provider is responsible for acting on the connection request. You can either approve the connection, reject the connection, or delete the connection. Only connections that are approved can send traffic to the Private Link service.
+
+The action of approving the connections can be automated by using the auto-approval property on the Private Link service. Auto-Approval is an ability for service providers to preapprove a set of subscriptions for automated access to their service. Customers will need to share their subscriptions offline for service providers to add to the auto-approval list. Auto-approval is a subset of the visibility array.
-The action of approving the connections can be automated by using the auto-approval property on the Private Link service. Auto-Approval is an ability for service providers to preapprove a set of subscriptions for automated access to their service. Customers will need to share their subscriptions offline for service providers to add to the auto-approval list. Auto-approval is a subset of the visibility array. Visibility controls the exposure settings whereas auto-approval controls the approval settings for your service. If a customer requests a connection from a subscription in the auto-approval list, the connection is automatically approved and the connection is established. Service providers donΓÇÖt need to manually approve the request anymore. On the other hand, if a customer requests a connection from a subscription in the visibility array and not in the auto-approval array, the request will reach the service provider but the service provider has to manually approve the connections.
+Visibility controls the exposure settings whereas auto-approval controls the approval settings for your service. If a customer requests a connection from a subscription in the auto-approval list, the connection is automatically approved, and the connection is established. Service providers don't need to manually approve the request. If a customer requests a connection from a subscription in the visibility array and not in the auto-approval array, the request will reach the service provider. The service provider must manually approve the connections.
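
As an illustrative sketch only, both settings can be supplied when the service is created with the Azure CLI; the subscription IDs and resource names below are placeholders:

```azurecli
az network private-link-service create \
  --resource-group myResourceGroup \
  --name myPrivateLinkService \
  --vnet-name myVirtualNetwork \
  --subnet myNatSubnet \
  --lb-name myStandardLoadBalancer \
  --lb-frontend-ip-configs myFrontEndConfig \
  --visibility "00000000-0000-0000-0000-000000000001" "00000000-0000-0000-0000-000000000002" \
  --auto-approval "00000000-0000-0000-0000-000000000001"
```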
## Getting connection information using TCP Proxy v2
-When using private link service, the source IP address of the packets coming from private endpoint is network address translated (NAT) on the service provider side using the NAT IP allocated from provider's virtual network. Hence the applications receive the allocated NAT IP address instead of actual source IP address of the service consumers. If your application needs actual source IP address from consumer side, you can enable Proxy protocol on your service and retrieve the information from the proxy protocol header. In addition to source IP address, proxy protocol header also carries the LinkID of the private endpoint. Combination of source IP address and LinkID can help service providers uniquely identify their consumers. For more information on Proxy Protocol, visit [here](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt).
+In the private link service, the source IP address of the packets coming from the private endpoint is network address translated (NAT) on the service provider side using the NAT IP allocated from the provider's virtual network. The applications receive the allocated NAT IP address instead of the actual source IP address of the service consumers. If your application needs the actual source IP address from the consumer side, you can enable proxy protocol on your service and retrieve the information from the proxy protocol header. In addition to the source IP address, the proxy protocol header also carries the LinkID of the private endpoint. The combination of the source IP address and the LinkID can help service providers uniquely identify their consumers.
+
+For more information on Proxy Protocol, visit [here](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt).
This information is encoded using a custom Type-Length-Value (TLV) vector as follows:
Custom TLV details:
| |4 |UINT32 (4 bytes) representing the LINKID of the private endpoint. Encoded in little endian format.| > [!NOTE]
- > Service provider is responsible for making sure that the service behind the standard load balancer is configured to parse the proxy protocol header as per the [specification](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) when proxy protocol is enabled on private link service. The request will fail if proxy protocol setting is enabled on private link service but service provider's service is not configured to parse the header. Similarly, the request will fail if the service provider's service is expecting a proxy protocol header while the setting is not enabled on the private link service. Once proxy protocol setting is enabled, proxy protocol header will also be included in HTTP/TCP health probes from host to the backend virtual machines, even though there will be no client information in the header.
+ > The service provider is responsible for making sure that the service behind the standard load balancer is configured to parse the proxy protocol header as per the [specification](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) when proxy protocol is enabled on private link service. The request will fail if proxy protocol setting is enabled on private link service but the service provider's service is not configured to parse the header. The request will fail if the service provider's service is expecting a proxy protocol header while the setting is not enabled on the private link service. Once proxy protocol setting is enabled, proxy protocol header will also be included in HTTP/TCP health probes from host to the backend virtual machines. Client information isn't contained in the header.
+
+The matching `LINKID` that is part of the PROXYv2 (TLV) protocol can be found at the `PrivateEndpointConnection` as property `linkIdentifier`.
-The matching `LINKID` that is part of the PROXYv2 (TLV) protocol can be found at the `PrivateEndpointConnection` as property `linkIdentifier`, see
-[Private Link Services API](/../../../rest/api/virtualnetwork/private-link-services/get-private-endpoint-connection#privateendpointconnection) for more details.
+For more information, see [Private Link Services API](/../../../rest/api/virtualnetwork/private-link-services/get-private-endpoint-connection#privateendpointconnection).
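
To make the encoding concrete, here's a minimal sketch (Python, for illustration only) of decoding the `LINKID` once the 4-byte custom TLV value has been extracted from the PROXY protocol v2 header:

```python
import struct


def decode_link_id(tlv_value: bytes) -> int:
    """Decode the private endpoint LINKID from the custom TLV value (UINT32, little endian)."""
    if len(tlv_value) != 4:
        raise ValueError("Expected a 4-byte TLV value")
    return struct.unpack("<I", tlv_value)[0]


# Example: the byte sequence 0x2A 0x00 0x00 0x00 decodes to LINKID 42.
print(decode_link_id(bytes([0x2A, 0x00, 0x00, 0x00])))
```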
## Limitations The following are the known limitations when using the Private Link service:+ - Supported only on Standard Load Balancer. Not supported on Basic Load Balancer. + - Supported only on Standard Load Balancer where backend pool is configured by NIC when using VM/VMSS.+ - Supports IPv4 traffic only+ - Supports TCP and UDP traffic only-- Private Link Service has an idle timeout of ~5 minutes (300 seconds). To avoid hitting this limit, applications connecting through Private Link Service must leverage TCP Keep Alives lower than that time.
+- Private Link Service has an idle timeout of ~5 minutes (300 seconds). To avoid hitting this limit, applications connecting through Private Link Service must use TCP Keepalives lower than that time.
## Next steps+ - [Create a private link service using Azure PowerShell](create-private-link-service-powershell.md)+ - [Create a private link service using Azure CLI](create-private-link-service-cli.md)
role-based-access-control Role Assignments Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-powershell.md
Previously updated : 12/06/2021 Last updated : 10/26/2022
To assign roles, you must have:
- `Microsoft.Authorization/roleAssignments/write` permissions, such as [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner) - [PowerShell in Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-az-ps)-- The account you use to run the PowerShell command must have the Azure Active Directory Graph `Directory.Read.All` and Microsoft Graph `Directory.Read.All` permissions.
+- The account you use to run the PowerShell command must have the Microsoft Graph `Directory.Read.All` permission.
## Steps to assign an Azure role
search Knowledge Store Projection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-projection-overview.md
Previously updated : 10/15/2021 Last updated : 10/25/2022 # Knowledge store "projections" in Azure Cognitive Search
Recall that projections are exclusive to knowledge stores, and are not used to s
1. Check your results in Azure Storage. On subsequent runs, avoid naming collisions by deleting objects in Azure Storage or changing project names in the skillset.
+1. If you are using [Table projections](knowledge-store-projections-examples.md#define-a-table-projection), check [Understanding the Table Service data model](/rest/api/storageservices/Understanding-the-Table-Service-Data-Model) and [Scalability and performance targets for Table storage](/azure/storage/tables/scalability-targets) to make sure your data requirements are within the documented limits for Table storage.
+ ## Next steps Review syntax and examples for each projection type. > [!div class="nextstepaction"]
-> [Define projections in a knowledge store](knowledge-store-projections-examples.md)
+> [Define projections in a knowledge store](knowledge-store-projections-examples.md)
search Samples Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-rest.md
Learn about the REST API samples that demonstrate the functionality and workflow
REST is the definitive programming interface for Azure Cognitive Search, and all operations that can be invoked programmatically are available first in REST, and then in SDKs. For this reason, most examples in the documentation leverage the REST APIs to demonstrate or explain important concepts.
-REST samples are usually developed and tested on Postman, but you can use any client that supports HTTP calls:
-
-+ [Use Postman](search-get-started-rest.md). This quickstart explains how to formulate the HTTP request from end-to-end.
-+ [Use the Visual Studio Code extension for Azure Cognitive Search](search-get-started-vs-code.md), currently in preview. This quickstart uses Azure integration and builds the requests internally, which means you can complete tasks more quickly.
+REST samples are usually developed and tested on Postman, but you can use any client that supports HTTP calls, including the [Postman desktop app](https://www.postman.com/downloads/). [This quickstart](search-get-started-rest.md) explains how to formulate the HTTP request from end-to-end.
## Doc samples
search Search Blob Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-blob-storage-integration.md
You can start directly in your Storage Account portal page.
1. Use [Search explorer](search-explorer.md) in the search portal page to query your content.
-The wizard is the best place to start, but you'll discover more flexible options when you [configure a blob indexer](search-howto-indexing-azure-blob-storage.md) yourself. You can call the REST APIs using a tool like Postman or Visual Studio Code. [Tutorial: Index and search semi-structured data (JSON blobs) in Azure Cognitive Search](search-semi-structured-data.md) walks you through the steps of calling the REST API in Postman.
+The wizard is the best place to start, but you'll discover more flexible options when you [configure a blob indexer](search-howto-indexing-azure-blob-storage.md) yourself. You can call the REST APIs using a tool like Postman. [Tutorial: Index and search semi-structured data (JSON blobs) in Azure Cognitive Search](search-semi-structured-data.md) walks you through the steps of calling the REST API in Postman.
## How blobs are indexed
search Search Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-capacity-planning.md
One approach for estimating capacity is to start with the Free tier. Remember th
+ [Create a free service](search-create-service-portal.md). + Prepare a small, representative dataset.
-+ Create an index and load your data. If the dataset can be hosted in an Azure data source supported by indexers, you can use the [Import data wizard in the portal](search-get-started-portal.md) to both create and load the index. Otherwise, you should use REST and [Postman](search-get-started-rest.md) or [Visual Studio Code](search-get-started-vs-code.md) to create the index and push the data. The push model requires data to be in the form of JSON documents, where fields in the document correspond to fields in the index.
++ Create an index and load your data. If the dataset can be hosted in an Azure data source supported by indexers, you can use the [Import data wizard in the portal](search-get-started-portal.md) to both create and load the index. Otherwise, you could use [REST and Postman](search-get-started-rest.md) to create the index and push the data. The push model requires data to be in the form of JSON documents, where fields in the document correspond to fields in the index. + Collect information about the index, such as size. Features and attributes have an impact on storage. For example, adding suggesters (search-as-you-type queries) will increase storage requirements. Using the same data set, you might try creating multiple versions of an index, with different attributes on each field, to see how storage requirements vary. For more information, see ["Storage implications" in Create a basic index](search-what-is-an-index.md#index-size).
search Search File Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-file-storage-integration.md
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ Read permissions on Azure Storage. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles instead, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Data and Reader** permissions.
-+ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.
++ A REST client, such as [Postman](search-get-started-rest.md), to send REST calls that create the data source, index, and indexer. ## Supported document formats
search Search Get Started Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-rest.md
Title: 'Quickstart: Create a search index using REST APIs'
-description: In this REST API quickstart, learn how to call the Azure Cognitive Search REST APIs using either Postman or Visual Studio Code.
+description: In this REST API quickstart, learn how to call the Azure Cognitive Search REST APIs using Postman.
zone_pivot_groups: URL-test-interface-rest-apis
ms.devlang: rest-api Previously updated : 12/07/2021 Last updated : 10/25/2022
search Search How To Create Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-create-search-index.md
The following screenshot highlights where **Add index** and **Import data** appe
### [**REST**](#tab/index-rest)
-[**Create Index (REST API)**](/rest/api/searchservice/create-index) is used to create an index. Both Postman and Visual Studio Code (with an extension for Azure Cognitive Search) can function as a search index client. Using either tool, you can connect to your search service and send requests.
-
-The following links show you how to set up the request:
-
-+ [Create a search index using REST and Postman](search-get-started-rest.md)
-+ [Get started with Visual Studio Code and Azure Cognitive Search](search-get-started-vs-code.md)
+[**Create Index (REST API)**](/rest/api/searchservice/create-index) is used to create an index. The Postman desktop app can function as a search index client to connect to your search service and send requests. See [Create a search index using REST and Postman](search-get-started-rest.md) to get started.
The REST API provides defaults for field attribution. For example, all `Edm.String` fields are searchable by default. Attributes are shown in full below for illustrative purposes, but you can omit attribution in cases where the default values apply.
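
The article's full field definition isn't reproduced in this digest. As a rough sketch of the request shape only (the service name, admin key, index name, and fields are placeholders), a Create Index call looks like this:

```http
POST https://<your-service>.search.windows.net/indexes?api-version=2020-06-30
Content-Type: application/json
api-key: <your-admin-api-key>

{
  "name": "hotels-sample",
  "fields": [
    { "name": "HotelId", "type": "Edm.String", "key": true, "filterable": true },
    { "name": "HotelName", "type": "Edm.String", "searchable": true, "sortable": true },
    { "name": "Rating", "type": "Edm.Double", "filterable": true, "sortable": true }
  ]
}
```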
search Search Howto Create Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-create-indexers.md
When you're ready to create an indexer on a remote search service, you'll need a
### [**REST**](#tab/indexer-rest)
-Both Postman and Visual Studio Code (with an extension for Azure Cognitive Search) can function as an indexer client. Using either tool, you can connect to your search service and send [Create Indexer (REST)](/rest/api/searchservice/create-indexer) or [Update indexer](/rest/api/searchservice/update-indexer) requests.
+The Postman desktop app can function as an indexer client. Using the app, you can connect to your search service and send [Create Indexer (REST)](/rest/api/searchservice/create-indexer) or [Update indexer](/rest/api/searchservice/update-indexer) requests.
```http POST /indexers?api-version=[api-version]
POST /indexers?api-version=[api-version]
} ```
-There are numerous tutorials and examples that demonstrate REST clients for creating objects. Start with either of these articles to learn about each client:
-
-+ [Create a search index using REST and Postman](search-get-started-rest.md)
-+ [Get started with Visual Studio Code and Azure Cognitive Search](search-get-started-vs-code.md)
+There are numerous tutorials and examples that demonstrate REST clients for creating objects. [Create a search index using REST and Postman](search-get-started-rest.md) can get you started.
Refer to the [Indexer operations (REST)](/rest/api/searchservice/Indexer-operations) for help with formulating indexer requests.
search Search Howto Index Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-azure-data-lake-storage.md
For a code sample in C#, see [Index Data Lake Gen2 using Azure AD](https://githu
+ Read permissions on Azure Storage. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles instead, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Storage Blob Data Reader** permissions.
-+ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.
++ A REST client, such as [Postman](search-get-started-rest.md), to send REST calls that create the data source, index, and indexer. > [!NOTE] > ADLS Gen2 implements an [access control model](../storage/blobs/data-lake-storage-access-control.md) that supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs) at the blob level. Azure Cognitive Search does not support document-level permissions. All users have the same level of access to all searchable and retrievable content in the index. If document-level permissions are an application requirement, consider [security trimming](search-security-trimming-for-azure-search.md) as a potential solution.
search Search Howto Index Cosmosdb Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-gremlin.md
Because terminology can be confusing, it's worth noting that [Azure Cosmos DB in
+ Read permissions. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Cosmos DB Account Reader Role** permissions.
-+ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.
++ A REST client, such as [Postman](search-get-started-rest.md), to send REST calls that create the data source, index, and indexer. ## Define the data source
search Search Howto Index Cosmosdb Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-mongodb.md
Because terminology can be confusing, it's worth noting that [Azure Cosmos DB in
+ Read permissions. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Cosmos DB Account Reader Role** permissions.
-+ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.
++ A REST client, such as [Postman](search-get-started-rest.md), to send REST calls that create the data source, index, and indexer. ## Limitations
search Search Howto Index Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb.md
Because terminology can be confusing, it's worth noting that [Azure Cosmos DB in
+ Read permissions. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Cosmos DB Account Reader Role** permissions.
-+ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.
++ A REST client, such as [Postman](search-get-started-rest.md), to send REST calls that create the data source, index, and indexer. ## Define the data source
search Search Howto Index Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-mysql.md
When configured to include a high water mark and soft deletion, the indexer take
- Read permissions. A *full access* connection string includes a key that grants access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Reader** permissions on MySQL. -- A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.
+- A REST client, such as [Postman](search-get-started-rest.md), to send REST calls that create the data source, index, and indexer.
You can also use the [Azure SDK for .NET](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql). You can't use the portal for indexer creation, but you can manage indexers and data sources once they're created.
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-indexing-azure-blob-storage.md
Blob indexers are frequently used for both [AI enrichment](cognitive-search-conc
By default, both search and storage accept requests from public IP addresses. If network security isn't an immediate concern, you can index blob data using just the connection string and read permissions. When you're ready to add network protections, see [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md) for guidance about data access.
-+ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to make the requests described in this article.
++ A REST client, such as [Postman](search-get-started-rest.md), to make the requests described in this article. <a name="SupportedFormats"></a>
search Search Howto Indexing Azure Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-indexing-azure-tables.md
This article supplements [**Create an indexer**](search-howto-create-indexers.md
+ Read permissions to access Azure Storage. A "full access" connection string includes a key that gives access to the content, but if you're using Azure roles, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Data and Reader** permissions.
-+ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.
++ A REST client, such as [Postman](search-get-started-rest.md), to send REST calls that create the data source, index, and indexer. ## Define the data source
search Search Howto Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-reindex.md
If indexing workloads introduce unacceptable levels of query latency, conduct [p
You can begin querying an index as soon as the first document is loaded. If you know a document's ID, the [Lookup Document REST API](/rest/api/searchservice/lookup-document) returns the specific document. For broader testing, you should wait until the index is fully loaded, and then use queries to verify the content you expect to see.
-You can use [Search Explorer](search-explorer.md) or a Web testing tool like [Postman](search-get-started-rest.md) or [Visual Studio Code](search-get-started-vs-code.md) to check for updated content.
+You can use [Search Explorer](search-explorer.md) or a Web testing tool like [Postman](search-get-started-rest.md) to check for updated content.
If you added or renamed a field, use [$select](search-query-odata-select.md) to return that field: `search=*&$select=document-id,my-new-field,some-old-field&$count=true`
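As a hypothetical illustration of such a check, the GET request below sends that query string to the documents collection; the service name, index name, API version, and query key are assumptions you'd replace with your own values:

```http
GET https://[service-name].search.windows.net/indexes/my-index/docs?search=*&$select=document-id,my-new-field,some-old-field&$count=true&api-version=2020-06-30
api-key: [query-api-key]
```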
search Search Query Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-create.md
You can select any index and REST API version, including preview. A query string
### Use a REST client
-Both Postman and Visual Studio Code (with an extension for Azure Cognitive Search) can function as a query client. Using either tool, you can connect to your search service and send [Search Documents (REST)](/rest/api/searchservice/search-documents) requests. Numerous tutorials and examples demonstrate REST clients for querying indexing.
+The [Postman desktop app](https://www.postman.com/downloads/) can function as a query client. Using the app, you can connect to your search service and send [Search Documents (REST)](/rest/api/searchservice/search-documents) requests. Numerous tutorials and examples demonstrate REST clients for querying indexes.
-Start with either of these articles to learn about each client (both include instructions for queries):
-
-+ [Create a search index using REST and Postman](search-get-started-rest.md)
-+ [Get started with Visual Studio Code and Azure Cognitive Search](search-get-started-vs-code.md)
+Start with [Create a search index using REST and Postman](search-get-started-rest.md) for step-by-step instructions for setting up requests.
Each request is standalone, so you must provide the endpoint, index name, and API version on every request. Other properties, Content-Type and API key, are passed on the request header. For more information, see [Search Documents (REST)](/rest/api/searchservice/search-documents) for help with formulating query requests.
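A sketch of one such standalone request is shown below, assuming a hypothetical service name, the hotels-sample-index, a query key, and a stable API version:

```http
POST https://[service-name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30
Content-Type: application/json
api-key: [query-api-key]

{
  "search": "beach access",
  "select": "HotelName,Description",
  "count": true
}
```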
search Search Query Lucene Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-lucene-examples.md
The Lucene parser supports complex query formats, such as field-scoped queries,
The following queries are based on the hotels-sample-index, which you can create by following the instructions in this [quickstart](search-get-started-portal.md).
-Example queries are articulated using the REST API and POST requests. You can paste and run them in [Postman](search-get-started-rest.md) or in [Visual Studio Code with the Cognitive Search extension](search-get-started-vs-code.md).
+Example queries are articulated using the REST API and POST requests. You can paste and run them in [Postman](search-get-started-rest.md) or another web client.
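For example, a fielded query in full Lucene syntax against the hotels-sample-index might be posted as in the following sketch; the service name, API version, and key are placeholder assumptions:

```http
POST https://[service-name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30
Content-Type: application/json
api-key: [query-api-key]

{
  "search": "Category:Resort AND Description:luxury",
  "queryType": "full",
  "select": "HotelName,Category,Description",
  "count": true
}
```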
Request headers must have the following values:
search Search Query Simple Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-simple-examples.md
In Azure Cognitive Search, the [simple query syntax](query-simple-syntax.md) inv
The following queries are based on the hotels-sample-index, which you can create by following the instructions in this [quickstart](search-get-started-portal.md).
-Example queries are articulated using the REST API and POST requests. You can paste and run them in [Postman](search-get-started-rest.md) or in [Visual Studio Code with the Cognitive Search extension](search-get-started-vs-code.md).
+Example queries are articulated using the REST API and POST requests. You can paste and run them in [Postman](search-get-started-rest.md) or another web client.
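For example, a simple-syntax request against the hotels-sample-index could look like the following sketch; the service name, API version, and key are placeholder assumptions:

```http
POST https://[service-name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30
Content-Type: application/json
api-key: [query-api-key]

{
  "search": "pool +spa",
  "queryType": "simple",
  "searchMode": "all",
  "searchFields": "Description,Tags",
  "select": "HotelName,Description,Tags",
  "count": true
}
```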
Request headers must have the following values:
search Search Security Manage Encryption Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-manage-encryption-keys.md
The following tools and services are used in this scenario.
You should have a search client that can create the encrypted object. Into this code, you'll reference a key vault key and Active Directory registration information. This code could be a working app, or prototype code such as the [C# code sample DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK). > [!TIP]
-> You can use [Postman](search-get-started-rest.md), [Visual Studio Code](search-get-started-vs-code.md), or [Azure PowerShell](search-get-started-powershell.md), to call REST APIs that create indexes and synonym maps that include an encryption key parameter. You can also use Azure SDKs. Portal support for adding a key to indexes or synonym maps isn't supported.
+> You can use [Postman](search-get-started-rest.md) or [Azure PowerShell](search-get-started-powershell.md) to call REST APIs that create indexes and synonym maps that include an encryption key parameter. You can also use Azure SDKs. Portal support for adding a key to indexes or synonym maps isn't supported.
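As a rough sketch of what such a request can look like, the index definition below adds an `encryptionKey` section that points to a customer-managed key in Azure Key Vault; every name, version, and credential value shown is a placeholder assumption:

```http
POST https://[service-name].search.windows.net/indexes?api-version=2020-06-30
Content-Type: application/json
api-key: [admin-api-key]

{
  "name": "my-encrypted-index",
  "fields": [
    { "name": "Id", "type": "Edm.String", "key": true },
    { "name": "Description", "type": "Edm.String" }
  ],
  "encryptionKey": {
    "keyVaultUri": "https://[key-vault-name].vault.azure.net",
    "keyVaultKeyName": "[key-name]",
    "keyVaultKeyVersion": "[key-version]",
    "accessCredentials": {
      "applicationId": "[aad-application-id]",
      "applicationSecret": "[aad-application-secret]"
    }
  }
}
```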
## Key Vault tips
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md
Captions and answers are extracted verbatim from text in the search document. Th
+ An existing search index with content in a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage). Semantic search works best on content that is informational or descriptive.
-+ A search client for sending queries.
++ A search client for sending queries and updating indexes.
- The search client must support preview REST APIs on the query request. You can use [Postman](search-get-started-rest.md), [Visual Studio Code](search-get-started-vs-code.md), or code that makes REST calls to the preview APIs. You can also use [Search explorer](search-explorer.md) in Azure portal to submit a semantic query or use [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5).
-
-+ A search client for updating indexes.
-
- The search client must support preview REST APIs on the query request. You can use the Azure portal, [Postman](search-get-started-rest.md), [Visual Studio Code](search-get-started-vs-code.md), or code that makes REST calls to the preview APIs. You can also use [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5).
+ The search client must support preview REST APIs on the query request. You can use [Postman](search-get-started-rest.md), another web client, or code that makes REST calls to the preview APIs. You can also use [Search explorer](search-explorer.md) in the Azure portal to submit a semantic query, or use [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5).
+ A [query request](/rest/api/searchservice/preview-api/search-documents) must include `queryType=semantic` and other parameters described in this article.
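A hypothetical semantic query request, assuming a preview API version, the hotels-sample-index, and an index that has a semantic configuration named `my-semantic-config`, might look like this sketch:

```http
POST https://[service-name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2021-04-30-Preview
Content-Type: application/json
api-key: [query-api-key]

{
  "search": "hotels near the beach with free parking",
  "queryType": "semantic",
  "queryLanguage": "en-us",
  "semanticConfiguration": "my-semantic-config",
  "captions": "extractive",
  "answers": "extractive|count-3",
  "select": "HotelName,Description"
}
```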
search Speller How To Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/speller-how-to-add.md
To use spell check, you'll need the following:
+ [A query request](/rest/api/searchservice/preview-api/search-documents) that has "speller=lexicon", and "queryLanguage" set to a [supported language](#supported-languages). Spell check works on strings passed in the "search" parameter. It's not supported for filters.
-Use a search client that supports preview APIs on the query request. For REST, you can use [Postman](search-get-started-rest.md), [Visual Studio Code](search-get-started-vs-code.md), or code that you've modified to make REST calls to the preview APIs. You can also use beta releases of the Azure SDKs.
+Use a search client that supports preview APIs on the query request. For REST, you can use [Postman](search-get-started-rest.md), another web client, or code that you've modified to make REST calls to the preview APIs. You can also use beta releases of the Azure SDKs.
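A rough sketch of such a request, assuming a preview API version and placeholder service, index, and key names, follows; the misspelled search terms are there only to show what the speller corrects:

```http
POST https://[service-name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2021-04-30-Preview
Content-Type: application/json
api-key: [query-api-key]

{
  "search": "beachfrnt hotal",
  "speller": "lexicon",
  "queryLanguage": "en-us",
  "queryType": "simple",
  "select": "HotelName,Description"
}
```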
| Client library | Versions | |-|-|
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/customer-lockbox-overview.md
The following services are generally available for Customer Lockbox:
- Azure Edge Zone Platform Storage - Azure Functions - Azure HDInsight
+- Azure Health Bot
- Azure Intelligent Recommendations - Azure Kubernetes Service - Azure Monitor-- Azure Red Hat OpenShift - Azure Spring Apps - Azure SQL Database - Azure SQL managed Instance
sentinel Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-transformation.md
The following image shows where ingestion-time data transformation enters the da
Microsoft Sentinel collects data into the Log Analytics workspace from multiple sources. - Data from built-in data connectors is processed in Log Analytics using some combination of hardcoded workflows and ingestion-time transformations in the workspace DCR. This data can be stored in standard tables or in a specific set of custom tables.-- Data ingested directly into the Logs ingestion API endpoint is processed by a DCR that may include an ingestion-time transformation, and then stored in either standard or custom tables. This data can then be stored in either standard or custom tables of any kind.
+- Data ingested directly into the Logs ingestion API endpoint is processed by a standard DCR that may include an ingestion-time transformation. This data can then be stored in either standard or custom tables of any kind.
:::image type="content" source="media/data-transformation/data-transformation-architecture.png" alt-text="Diagram of the Microsoft Sentinel data transformation architecture.":::
For more in-depth information on ingestion-time transformation, the Custom Logs
- [Data collection transformations in Azure Monitor Logs (preview)](../azure-monitor/essentials/data-collection-transformations.md) - [Logs ingestion API in Azure Monitor Logs (Preview)](../azure-monitor/logs/logs-ingestion-api-overview.md) - [Data collection rules in Azure Monitor](../azure-monitor/essentials/data-collection-rule-overview.md)+
sentinel Extend Sentinel Across Workspaces Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/extend-sentinel-across-workspaces-tenants.md
You can use saved [functions](../azure-monitor/logs/functions.md) to simplify cr
A function can also simplify a commonly used union. For example, you can save the following expression as a function called `unionSecurityEvent`:
-`union workspace(ΓÇ£hard-to-remember-workspace-name-1ΓÇ¥).SecurityEvent, workspace(ΓÇ£hard-to-remember-workspace-name-2ΓÇ¥).SecurityEvent`
+`union workspace("hard-to-remember-workspace-name-1").SecurityEvent, workspace("hard-to-remember-workspace-name-2").SecurityEvent`
You can then write a query across both workspaces by beginning with `unionSecurityEvent | where ...` .
service-bus-messaging Message Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-sessions.md
Title: Azure Service Bus message sessions | Microsoft Docs description: This article explains how to use sessions to enable joint and ordered handling of unbounded sequences of related messages. Previously updated : 09/01/2021 Last updated : 10/25/2022 # Message sessions
service-bus-messaging Service Bus Azure And Service Bus Queues Compared Contrasted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md
Title: Compare Azure Storage queues and Service Bus queues
description: Analyzes differences and similarities between two types of queues offered by Azure. Previously updated : 06/15/2021 Last updated : 10/25/2022 # Storage queues and Service Bus queues - compared and contrasted
service-bus-messaging Service Bus Dead Letter Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dead-letter-queues.md
Title: Service Bus dead-letter queues | Microsoft Docs description: Describes dead-letter queues in Azure Service Bus. Service Bus queues and topic subscriptions provide a secondary subqueue, called a dead-letter queue. Previously updated : 08/30/2021 Last updated : 10/25/2022
As there can be valuable business data in messages that ended up in the dead-let
Tools like [Azure Service Bus Explorer](./explorer.md) enable manual moving of messages between queues and topics. If there are many messages in the dead-letter queue that need to be moved, [code like this](https://stackoverflow.com/a/68632602/151350) can help move them all at once. Operators will often prefer having a user interface so they can troubleshoot which message types have failed processing, from which source queues, and for what reasons, while still being able to resubmit batches of messages to be reprocessed. Tools like [ServicePulse with NServiceBus](https://docs.particular.net/servicepulse/intro-failed-messages) provide these capabilities. ## Next steps
-See [Enable dead lettering for a queue or subscription](enable-dead-letter.md) to learn about different ways of configuring the **dead lettering on message expiration** setting.
+See [Enable dead lettering for a queue or subscription](enable-dead-letter.md) to learn about different ways of configuring the **dead lettering on message expiration** setting.
service-bus-messaging Service Bus Resource Manager Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-exceptions.md
Title: Azure Service Bus Resource Manager exceptions | Microsoft Docs description: List of Service Bus exceptions surfaced by Azure Resource Manager and suggested actions. Previously updated : 09/15/2021 Last updated : 10/25/2022 # Service Bus Resource Manager exceptions
This class of errors indicates the absence of authorization to run the command.
| Error code | Error SubCode | Error message | Description | Recommendation | | - | - | - | -- | -- | | Unauthorized | none | Invalid operation on the Secondary namespace. Secondary namespace is read-only. | The operation was performed against the secondary namespace, which is setup as a readonly namespace. | Retry the command against the primary namespace. Learn more about [secondary namespace](service-bus-geo-dr.md) |
-| Unauthorized | none | MissingToken: The authorization header was not found. | This error occurs when the authorization has null or incorrect values. | Ensure that the token value mentioned in the authorization header is correct and not null. |
+| Unauthorized | none | MissingToken: The authorization header was not found. | This error occurs when the authorization has null or incorrect values. | Ensure that the token value mentioned in the authorization header is correct and not null. |
static-web-apps Deploy Nextjs Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nextjs-hybrid.md
In this tutorial, you learn to deploy a [Next.js](https://nextjs.org) website to
- [Node.js](https://nodejs.org) installed. - [Next.js CLI](https://nextjs.org/docs/getting-started) installed. Refer to the [Next.js Getting Started guide](https://nextjs.org/docs/getting-started) for details. + ## Set up a Next.js app Begin by initializing a new Next.js application.
static-web-apps Deploy Nuxtjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nuxtjs.md
In this tutorial, you learn to deploy a [Nuxt 3](https://v3.nuxtjs.org/) applica
You can set up a new Nuxt project using `npx nuxi init nuxt-app`. Instead of using a new project, this tutorial uses an existing repository set up to demonstrate how to deploy a Nuxt 3 site with universal rendering on Azure Static Web Apps. 1. Create a new repository under your GitHub account from a template repository.
-1. Go to [http://github.com/staticwebdev/nuxtjs-starter/generate](https://github.com/login?return_to=/staticwebdev/nuxtjs-starter/generate)
-1. Name the repository **nuxtjs-starter**.
+1. Go to [http://github.com/staticwebdev/nuxt-3-starter/generate](https://github.com/login?return_to=/staticwebdev/nuxt-3-starter/generate)
+1. Name the repository **nuxt-3-starter**.
1. Next, clone the new repo to your machine. Make sure to replace <YOUR_GITHUB_ACCOUNT_NAME> with your account name. ```bash
static-web-apps Nextjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/nextjs.md
Last updated 10/12/2022
+ # Deploy Next.js websites on Azure Static Web Apps+ Next.js support on Azure Static Web Apps can be categorized as two deployment models, [Static HTML Export](https://nextjs.org/docs/advanced-features/static-html-export) Next.js applications, and _hybrid_ rendering, which covers [Server-Side Rendering](https://nextjs.org/docs/advanced-features/react-18/streaming) and [Incremental Static Regeneration](https://nextjs.org/docs/basic-features/data-fetching/incremental-static-regeneration). ## Static HTML export
Key features that are available in the preview are:
Follow the [deploy hybrid Next.js applications](deploy-nextjs-hybrid.md) tutorial to learn how to deploy a hybrid Next.js application to Azure.
-### Unsupported features in preview
-
-During the preview, the following features of Static Web Apps are unsupported for Next.js with server-side rendering:
--- APIs using Azure Functions, Azure AppService, Azure Container Apps or Azure API Management.-- Deployment via the SWA CLI.--- Static Web Apps provided Authentication and Authorization.
- - Instead, you can use the Next.js [Authentication](https://nextjs.org/docs/authentication) feature.
-- The `staticwebapps.config.json` file.
- - Features such as custom headers and routing can be controlled using the `next.config.js` file.
-- `skip_app_build` and `skip_api_build` can't be used.-- The maximum app size for the hybrid Next.js application is 100 MB. Consider using Static HTML exported Next.js apps if your requirement is more than 100 MB.
storage Blob Containers Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-portal.md
To create a container in the [Azure portal](https://portal.azure.com), follow th
1. In the portal navigation pane on the left side of the screen, select **Storage accounts** and choose a storage account. If the navigation pane isn't visible, select the menu button to toggle its visibility.
- :::image type="content" source="media/blob-containers-portal/menu-expand-sml.png" alt-text="Screenshot of the Azure Portal homepage showing the location of the Menu button in the browser." lightbox="media/blob-containers-portal/menu-expand-lrg.png":::
+ :::image type="content" source="media/blob-containers-portal/menu-expand-sml.png" alt-text="Screenshot of the Azure portal homepage showing the location of the Menu button in the browser." lightbox="media/blob-containers-portal/menu-expand-lrg.png":::
1. In the navigation pane for the storage account, scroll to the **Data storage** section and select **Containers**. 1. Within the **Containers** pane, select the **+ Container** button to open the **New container** pane.
To generate an SAS token using the [Azure portal](https://portal.azure.com), fol
1. Select the checkbox next to the name of the container for which you'll generate an SAS token. 1. Select the container's **More** button (**...**), and select **Generate SAS** to display the **Generate SAS** pane.
- :::image type="content" source="media/blob-containers-portal/select-container-sas-sml.png" alt-text="Screenshot showing how to access container shared access signature settings within the Azure portal" lightbox="media/blob-containers-portal/select-container-sas-lrg.png":::
+ :::image type="content" source="media/blob-containers-portal/select-container-sas-sml.png" alt-text="Screenshot showing how to access container shared access signature settings in the Azure portal." lightbox="media/blob-containers-portal/select-container-sas-lrg.png":::
1. Within the **Generate SAS** pane, select the **Account key** value for the **Signing method** field. 1. In the **Signing method** field, select **Account key**. Choosing the account key will result in the creation of a service SAS.
Configuring a stored access policy is a two-step process: the policy must first
1. Select the checkbox next to the name of the container for which you'll generate an SAS token. 1. Select the container's **More** button (**...**), and select **Access policy** to display the **Access policy** pane.
- :::image type="content" source="media/blob-containers-portal/select-container-policy-sml.png" alt-text="Screenshot showing how to access container stored access policy settings within the Azure portal." lightbox="media/blob-containers-portal/select-container-policy-lrg.png":::
+ :::image type="content" source="media/blob-containers-portal/select-container-policy-sml.png" alt-text="Screenshot showing how to access container stored access policy settings in the Azure portal." lightbox="media/blob-containers-portal/select-container-policy-lrg.png":::
1. Within the **Access policy** pane, select **+ Add policy** in the **Stored access policies** section to display the **Add policy** pane. Any existing policies will be displayed in the appropriate section.
- :::image type="content" source="media/blob-containers-portal/select-add-policy-sml.png" alt-text="Screenshot showing how to add a stored access policy settings within the Azure portal." lightbox="media/blob-containers-portal/select-add-policy-lrg.png":::
+ :::image type="content" source="media/blob-containers-portal/select-add-policy-sml.png" alt-text="Screenshot showing how to add a stored access policy in the Azure portal." lightbox="media/blob-containers-portal/select-add-policy-lrg.png":::
1. Within the **Add policy** pane, select the **Identifier** box and add a name for your new policy. 1. Select the **Permissions** field, then select the check boxes corresponding to the permissions desired for your new policy.
Configuring a stored access policy is a two-step process: the policy must first
> [!CAUTION] > Although your policy is now displayed in the **Stored access policy** table, it is still not applied to the container. If you navigate away from the **Access policy** pane at this point, the policy will *not* be saved or applied and you will lose your work.
- :::image type="content" source="media/blob-containers-portal/select-save-policy-sml.png" alt-text="Screenshot showing how to define a stored access policy within the Azure portal." lightbox="media/blob-containers-portal/select-save-policy-lrg.png":::
+ :::image type="content" source="media/blob-containers-portal/select-save-policy-sml.png" alt-text="Screenshot showing how to create a stored access policy within the Azure portal." lightbox="media/blob-containers-portal/select-save-policy-lrg.png":::
1. In the **Access policy** pane, select **+ Add policy** to define another policy, or select **Save** to apply your new policy to the container. After creating at least one stored access policy, you'll be able to associate other shared access signatures (SAS) with it.
To acquire a lease using the Azure portal, follow these steps:
1. Select the checkbox next to the name of the container for which you'll acquire a lease. 1. Select the container's **More** button (**...**), and select **Acquire lease** to request a new lease and display the details in the **Lease status** pane.
- :::image type="content" source="media/blob-containers-portal/acquire-container-lease-sml.png" alt-text="Screenshot showing how to access container lease settings within the Azure portal." lightbox="media/blob-containers-portal/acquire-container-lease-lrg.png":::
+ :::image type="content" source="media/blob-containers-portal/acquire-container-lease-sml.png" alt-text="Screenshot showing how to access container lease settings in the Azure portal." lightbox="media/blob-containers-portal/acquire-container-lease-lrg.png":::
1. The **Container** and **Lease ID** property values of the newly requested lease are displayed within the **Lease status** pane. Copy and paste these values in a secure location. They'll only be displayed once and can't be retrieved after the pane is closed.
You can restore a soft-deleted container and its contents within the retention p
- [Create a storage account](../common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json) - [Manage blob containers using PowerShell](blob-containers-powershell.md)
-<!--Point-in-time restore: /azure/storage/blobs/point-in-time-restore-manage?tabs=portal-->
+<!--Point-in-time restore: /azure/storage/blobs/point-in-time-restore-manage?tabs=portal-->
storage Sas Expiration Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/sas-expiration-policy.md
Title: Create an expiration policy for shared access signatures
+ Title: Configure an expiration policy for shared access signatures (SAS)
-description: Create a policy on the storage account that defines the length of time that a shared access signature (SAS) should be valid. Learn how to monitor policy violations to remediate security risks.
+description: Configure a policy on the storage account that defines the length of time that a shared access signature (SAS) should be valid. Learn how to monitor policy violations to remediate security risks.
Previously updated : 04/18/2022 Last updated : 10/25/2022
-# Create an expiration policy for shared access signatures
+# Configure an expiration policy for shared access signatures
You can use a shared access signature (SAS) to delegate access to resources in your Azure Storage account. A SAS token includes the targeted resource, the permissions granted, and the interval over which access is permitted. Best practices recommend that you limit the interval for a SAS in case it is compromised. By setting a SAS expiration policy for your storage accounts, you can provide a recommended upper expiration limit when a user creates a service SAS or an account SAS.
A SAS expiration policy does not prevent a user from creating a SAS with an expi
When a SAS expiration policy is in effect for the storage account, the signed start field is required for every SAS. If the signed start field is not included on the SAS, and you have configured a diagnostic setting for logging with Azure Monitor, then Azure Storage writes a message to the **SasExpiryStatus** property in the logs whenever a user creates or uses a SAS without a value for the signed start field.
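As a purely illustrative example, a service SAS URL that includes both the signed start (`st`) and signed expiry (`se`) fields has the following general shape; every value shown is a placeholder:

```http
GET https://[account-name].blob.core.windows.net/[container]/[blob]?sv=[storage-service-version]&sr=b&sp=r&st=2022-10-25T08:00:00Z&se=2022-10-25T09:00:00Z&sig=[signature]
```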
-## Create a SAS expiration policy
+## Configure a SAS expiration policy
-When you create a SAS expiration policy on a storage account, the policy applies to each type of SAS that is signed with the account key. The types of shared access signatures that are signed with the account key are the service SAS and the account SAS.
+When you configure a SAS expiration policy on a storage account, the policy applies to each type of SAS that is signed with the account key. The types of shared access signatures that are signed with the account key are the service SAS and the account SAS.
> [!NOTE]
-> Before you can create a SAS expiration policy, you may need to rotate each of your account access keys at least once.
+> Before you can configure a SAS expiration policy, you may need to rotate each of your account access keys at least once.
### [Azure portal](#tab/azure-portal)
-To create a SAS expiration policy in the Azure portal, follow these steps:
+To configure a SAS expiration policy in the Azure portal, follow these steps:
1. Navigate to your storage account in the Azure portal. 1. Under **Settings**, select **Configuration**.
To create a SAS expiration policy in the Azure portal, follow these steps:
### [PowerShell](#tab/azure-powershell)
-To create a SAS expiration policy, use the [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) command, and then set the `-SasExpirationPeriod` parameter to the number of days, hours, minutes, and seconds that a SAS token can be active from the time that a SAS is signed. The string that you provide the `-SasExpirationPeriod` parameter uses the following format: `<days>.<hours>:<minutes>:<seconds>`. For example, if you wanted the SAS to expire 1 day, 12 hours, 5 minutes, and 6 seconds after it is signed, then you would use the string `1.12:05:06`.
+To configure a SAS expiration policy, use the [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) command, and then set the `-SasExpirationPeriod` parameter to the number of days, hours, minutes, and seconds that a SAS token can be active from the time that a SAS is signed. The string that you provide for the `-SasExpirationPeriod` parameter uses the following format: `<days>.<hours>:<minutes>:<seconds>`. For example, if you wanted the SAS to expire 1 day, 12 hours, 5 minutes, and 6 seconds after it is signed, then you would use the string `1.12:05:06`.
```powershell $account = Set-AzStorageAccount -ResourceGroupName <resource-group> `
The SAS expiration period appears in the console output.
### [Azure CLI](#tab/azure-cli)
-To create a SAS expiration policy, use the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command, and then set the `--key-exp-days` parameter to the number of days, hours, minutes, and seconds that a SAS token can be active from the time that a SAS is signed. The string that you provide the `--key-exp-days` parameter uses the following format: `<days>.<hours>:<minutes>:<seconds>`. For example, if you wanted the SAS to expire 1 day, 12 hours, 5 minutes, and 6 seconds after it is signed, then you would use the string `1.12:05:06`.
+To configure a SAS expiration policy, use the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command, and then set the `--key-exp-days` parameter to the number of days, hours, minutes, and seconds that a SAS token can be active from the time that a SAS is signed. The string that you provide for the `--key-exp-days` parameter uses the following format: `<days>.<hours>:<minutes>:<seconds>`. For example, if you wanted the SAS to expire 1 day, 12 hours, 5 minutes, and 6 seconds after it is signed, then you would use the string `1.12:05:06`.
```azurecli-interactive az storage account update \
To monitor your storage accounts for compliance with the key expiration policy,
:::image type="content" source="media/sas-expiration-policy/policy-compliance-report-portal-inline.png" alt-text="Screenshot showing how to view the compliance report for the SAS expiration built-in policy" lightbox="media/sas-expiration-policy/policy-compliance-report-portal-expanded.png":::
-To bring a storage account into compliance, configure a SAS expiration policy for that account, as described in [Create a SAS expiration policy](#create-a-sas-expiration-policy).
+To bring a storage account into compliance, configure a SAS expiration policy for that account, as described in [Configure a SAS expiration policy](#configure-a-sas-expiration-policy).
## See also
storage Storage Sas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-sas-overview.md
The following recommendations for using shared access signatures can help mitiga
- **Configure a SAS expiration policy for the storage account.** A SAS expiration policy specifies a recommended interval over which the SAS is valid. SAS expiration policies apply to a service SAS or an account SAS. When a user generates service SAS or an account SAS with a validity interval that is larger than the recommended interval, they'll see a warning. If Azure Storage logging with Azure Monitor is enabled, then an entry is written to the Azure Storage logs. To learn more, see [Create an expiration policy for shared access signatures](sas-expiration-policy.md). -- **Define a stored access policy for a service SAS.** Stored access policies give you the option to revoke permissions for a service SAS without having to regenerate the storage account keys. Set the expiration on these very far in the future (or infinite) and make sure it's regularly updated to move it farther into the future. There is a limit of five stored access policies per container.
+- **Create a stored access policy for a service SAS.** Stored access policies give you the option to revoke permissions for a service SAS without having to regenerate the storage account keys. Set the expiration on these very far in the future (or infinite) and make sure it's regularly updated to move it farther into the future. There is a limit of five stored access policies per container.
- **Use near-term expiration times on an ad hoc SAS (service SAS or account SAS).** In this way, even if a SAS is compromised, it's valid only for a short time. This practice is especially important if you cannot reference a stored access policy. Near-term expiration times also limit the amount of data that can be written to a blob by limiting the time available to upload to it.
The following recommendations for using shared access signatures can help mitiga
- **Be specific with the resource to be accessed.** A security best practice is to provide a user with the minimum required privileges. If a user only needs read access to a single entity, then grant them read access to that single entity, and not read/write/delete access to all entities. This also helps lessen the damage if a SAS is compromised because the SAS has less power in the hands of an attacker.
- There is no direct way to identify which clients have accessed a resource. However, you can use the unique fields in the SAS, the signed IP (`sip`), signed start (`st`), and signed expiry (`se`) fields, to track access. For example, you can generate a SAS token with a unique expiry time that you can then correlate with client to whom it was issued.
+ There is no direct way to identify which clients have accessed a resource. However, you can use the unique fields in the SAS, the signed IP (`sip`), signed start (`st`), and signed expiry (`se`) fields, to track access. For example, you can generate a SAS token with a unique expiry time that you can then correlate with the client to whom it was issued.
- **Understand that your account will be billed for any usage, including via a SAS.** If you provide write access to a blob, a user may choose to upload a 200 GB blob. If you've given them read access as well, they may choose to download it 10 times, incurring 2 TB in egress costs for you. Again, provide limited permissions to help mitigate the potential actions of malicious users. Use short-lived SAS to reduce this threat (but be mindful of clock skew on the end time).
The following recommendations for using shared access signatures can help mitiga
- **Use Azure Monitor and Azure Storage logs to monitor your application.** Authorization failures can occur because of an outage in your SAS provider service. They can also occur from an inadvertent removal of a stored access policy. You can use Azure Monitor and storage analytics logging to observe any spike in these types of authorization failures. For more information, see [Azure Storage metrics in Azure Monitor](../blobs/monitor-blob-storage.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json) and [Azure Storage Analytics logging](storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+- **Configure a SAS expiration policy for the storage account.** Best practices recommend that you limit the interval for a SAS in case it is compromised. By setting a SAS expiration policy for your storage accounts, you can provide a recommended upper expiration limit when a user creates a service SAS or an account SAS. For more information, see [Create an expiration policy for shared access signatures](sas-expiration-policy.md).
+ > [!NOTE] > Storage doesn't track the number of shared access signatures that have been generated for a storage account, and no API can provide this detail. If you need to know the number of shared access signatures that have been generated for a storage account, you must track the number manually.
storage Storage Stored Access Policy Define Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-stored-access-policy-define-dotnet.md
The following Azure Storage resources support stored access policies:
> > Stored access policies are supported for a service SAS only. Stored access policies are not supported for account SAS or user delegation SAS.
-For more information about stored access policies, see [Define a stored access policy](/rest/api/storageservices/define-stored-access-policy).
+For more information about stored access policies, see [Create a stored access policy](/rest/api/storageservices/define-stored-access-policy).
## Create a stored access policy
private static async Task CreateStoredAccessPolicyAsync(CloudBlobContainer conta
## See also - [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md)-- [Define a stored access policy](/rest/api/storageservices/define-stored-access-policy)
+- [Create a stored access policy](/rest/api/storageservices/define-stored-access-policy)
- [Configure Azure Storage connection strings](storage-configure-connection-string.md)
storage Elastic San Connect Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md
description: Learn how to connect to an Azure Elastic SAN (preview) volume from
Previously updated : 10/24/2022 Last updated : 10/25/2022
# Connect to Elastic SAN (preview) volumes - Linux
-This article explains how to connect to an elastic storage area network (SAN) volume from a Linux client. For details on connecting from a Windows client, see [Connect to Elastic SAN (preview) volumes - Windows](elastic-san-connect-windows.md)
+This article explains how to connect to an Elastic storage area network (SAN) volume from a Linux client. For details on connecting from a Windows client, see [Connect to Elastic SAN (preview) volumes - Windows](elastic-san-connect-windows.md).
+
+In this article, you'll add the Storage service endpoint to an Azure virtual network's subnet, then you'll configure your volume group to allow connections from your subnet. Finally, you'll configure your client environment to connect to an Elastic SAN volume and establish a connection.
## Prerequisites
This article explains how to connect to an elastic storage area network (SAN) vo
[!INCLUDE [elastic-san-regions](../../../includes/elastic-san-regions.md)]
-## Enable Storage service endpoint
+## Networking configuration
+
+To connect to a SAN volume, you need to enable the storage service endpoint on your Azure virtual network subnet, and then connect your volume groups to your Azure virtual network subnets.
+
+### Enable Storage service endpoint
+
In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable the service endpoint for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user who has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
-In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN.
+> [!NOTE]
+> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant is currently supported only through PowerShell, CLI, and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal.
# [Portal](#tab/azure-portal)
az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "my
```
-## Configure networking
+### Configure volume group networking
Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
-By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For more information on networking, see [Configure Elastic SAN networking (preview)](elastic-san-networking.md).
+By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For details on accessing your volumes from another region, see [Enabling access to virtual networks in other regions (preview)](elastic-san-networking.md#enabling-access-to-virtual-networks-in-other-regions-preview).
# [Portal](#tab/azure-portal)
You can either create single sessions or multiple-sessions to every Elastic SAN
When using multiple sessions, generally, you should aggregate them with Multipath I/O. It allows you to aggregate multiple sessions from an iSCSI initiator to the target into a single device, and can improve performance by optimally distributing I/O over all available paths based on a load balancing policy.
-## Environment setup
+### Environment setup
To create iSCSI connections from a Linux client, install the iSCSI initiator package. The exact command may vary depending on your distribution, and you should consult their documentation if necessary. As an example, with Ubuntu you'd use `sudo apt -y install open-iscsi` and with Red Hat Enterprise Linux (RHEL) you'd use `sudo yum install iscsi-initiator-utils -y`.
-### Multipath I/O
+#### Multipath I/O - for multi-session connectivity
Install the Multipath I/O package for your Linux distribution. The installation will vary based on your distribution, and you should consult their documentation. As an example, on Ubuntu the command would be `sudo apt install multipath-tools` and for RHEL the command would be `sudo yum install device-mapper-multipath`.
-Once you've installed the package, check if **/etc/multipath.conf** exists. If **/etc/multipath.conf** doesn't exist, create an empty file and use the settings in the following example for a general configuration. As an example, `mpathconf --enable` to create **/etc/multipath.conf** will create the file on RHEL.
+Once you've installed the package, check if **/etc/multipath.conf** exists. If **/etc/multipath.conf** doesn't exist, create an empty file and use the settings in the following example for a general configuration. As an example, `mpathconf --enable` will create **/etc/multipath.conf** on RHEL.
-You'll need to make some modifications to **/etc/multipath.conf**. You'll need to add the devices section in the following example, and the defaults section in the following example sets some defaults that'll generally be applicable. If you need to make any other specific configurations, such as excluding volumes from the multipath topology, see the man page for multipath.conf.
+You'll need to make some modifications to **/etc/multipath.conf**. Add the devices section from the following example; the defaults section in the example sets some defaults that are generally applicable. If you need to make any other specific configurations, such as excluding volumes from the multipath topology, see the man page for multipath.conf.
``` defaults {
You should see a list of output that looks like the following:
Note down the values for **targetIQN**, **targetPortalHostName**, and **targetPortalPort**, you'll need them for the next sections.
-## Multi-session connections
+## Determine sessions to create
+
+You can create either a single session or multiple sessions to every Elastic SAN volume, based on your application's multi-threaded capabilities and performance requirements. To achieve higher IOPS and throughput to a volume and reach its maximum limits, use multiple sessions and adjust the queue depth and IO size as needed, if your workload allows.
+
+For multi-session connections, install [Multipath I/O - for multi-session connectivity](#multipath-iofor-multi-session-connectivity).
+
+### Multi-session connections
To establish multiple sessions to a volume, first you'll need to create a single session with particular parameters.
for i in `seq 1 numberOfAdditionalSessions`; do sudo iscsiadm -m session -r sess
You can verify the number of sessions using `sudo multipath -ll`
-## Single-session connections
+### Single-session connections
To establish persistent iSCSI connections, modify **node.startup** in **/etc/iscsi/iscsid.conf** from **manual** to **automatic**.
storage Elastic San Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-windows.md
description: Learn how to connect to an Azure Elastic SAN (preview) volume from
Previously updated : 10/24/2022 Last updated : 10/25/2022
# Connect to Elastic SAN (preview) volumes - Windows
-This article explains how to connect to an elastic storage area network (SAN) volume from a Windows client. For details on connecting from a Linux client, see [Connect to Elastic SAN (preview) volumes - Linux](elastic-san-connect-linux.md).
+This article explains how to connect to an Elastic storage area network (SAN) volume from a Windows client. For details on connecting from a Linux client, see [Connect to Elastic SAN (preview) volumes - Linux](elastic-san-connect-linux.md).
+
+In this article, you'll add the Storage service endpoint to an Azure virtual network's subnet, then you'll configure your volume group to allow connections from your subnet. Finally, you'll configure your client environment to connect to an Elastic SAN volume and establish a connection.
## Prerequisites
This article explains how to connect to an elastic storage area network (SAN) vo
[!INCLUDE [elastic-san-regions](../../../includes/elastic-san-regions.md)]
-## Enable Storage service endpoint
+## Configure networking
+
+To connect to a SAN volume, you need to enable the storage service endpoint on your Azure virtual network subnet, and then connect your volume groups to your Azure virtual network subnets.
-In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN.
+### Enable Storage service endpoint
+
+In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable the service endpoint for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user who has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
+
+> [!NOTE]
+> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant is currently supported only through PowerShell, CLI, and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal.
# [Portal](#tab/azure-portal)
az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "my
```
-## Configure networking
+### Configure volume group networking
Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
-By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For more information on networking, see [Configure Elastic SAN networking (preview)](elastic-san-networking.md).
+By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For details on accessing your volumes from another region, see [Enabling access to virtual networks in other regions (preview)](elastic-san-networking.md#enabling-access-to-virtual-networks-in-other-regions-preview).
# [Portal](#tab/azure-portal)
You can either create single sessions or multiple-sessions to every Elastic SAN
When using multiple sessions, generally, you should aggregate them with Multipath I/O. It allows you to aggregate multiple sessions from an iSCSI initiator to the target into a single device, and can improve performance by optimally distributing I/O over all available paths based on a load balancing policy.
-## Set up your environment
+### Set up your environment
To create iSCSI connections from a Windows client, confirm the iSCSI service is running. If it's not, start the service, and set it to start automatically.
Start-Service -Name MSiSCSI
Set-Service -Name MSiSCSI -StartupType Automatic ```
-### Multipath I/O
-
-Multipath I/O enables highly available and fault-tolerant iSCSI network connections. It allows you to aggregate multiple sessions from an iSCSI initiator to the target into a single device, and can improve performance by optimally distributing I/O over all available paths based on a load balancing policy.
+#### Multipath I/O - for multi-session connectivity
+Install Multipath I/O, enable multipath support for iSCSI devices, and set a default load balancing policy. A minimal sketch of these steps is shown below.
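A minimal PowerShell sketch of those three steps, assuming Windows Server with the in-box MPIO feature (round robin is only an example policy; a restart may be required after installing the feature):

```powershell
# Install the Multipath I/O feature (a restart may be required afterwards).
Install-WindowsFeature -Name Multipath-IO

# Automatically claim iSCSI devices for MPIO.
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Set the default load balancing policy, for example round robin (RR).
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```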
$connectVolume.storagetargetportalport
Note down the values for **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort**; you'll need them for the next sections.
-## Multi-session configuration
+## Determine sessions to create
+
+You can create either a single session or multiple sessions to each Elastic SAN volume, based on your application's multi-threaded capabilities and performance requirements. To achieve higher IOPS and throughput to a volume and reach its maximum limits, use multiple sessions and adjust the queue depth and IO size as needed, if your workload allows.
+
+For multi-session connections, install [Multipath I/O - for multi-session connectivity](#multipath-iofor-multi-session-connectivity).
+
+### Multi-session configuration
To create multiple sessions to each volume, you must configure the target and connect to it multiple times, based on the number of sessions you want for that volume.
foreach ($Target in $TargetConfig.Targets.Target)
Verify the number of sessions your volume has with either `iscsicli SessionList` or `mpclaim -s -d`
-## Single-session configuration
+### Single-session configuration
Replace **yourStorageTargetIQN**, **yourStorageTargetPortalHostName**, and **yourStorageTargetPortalPort** with the values you kept, then run the following commands from your compute client to connect an Elastic SAN volume. If you'd like to modify these commands, run `iscsicli commandHere -?` for information on the command and its parameters.
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
description: An overview of Azure Elastic SAN (preview), a service that enables
Previously updated : 10/24/2022 Last updated : 10/25/2022
Each volume group supports up to 200 virtual network rules.
> [!IMPORTANT] > If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group.
-## Required permissions
+## Enable Storage service endpoint
-To enable service point for Azure Storage, the user must have the appropriate permissions for the virtual network. This operation can be performed by a user that has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role.
-
-An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
+In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable the service endpoint for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user who has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal.
+> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant is currently only supported through PowerShell, CLI, and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal.
+
+# [Portal](#tab/azure-portal)
+
+1. Navigate to your virtual network and select **Service Endpoints**.
+1. Select **+ Add** and for **Service** select **Microsoft.Storage**.
+1. Select any policies you like and the subnet you'll deploy your Elastic SAN into, then select **Add**.
++
+# [PowerShell](#tab/azure-powershell)
+
+```powershell
+$resourceGroupName = "yourResourceGroup"
+$vnetName = "yourVirtualNetwork"
+$subnetName = "yourSubnet"
+
+$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
+
+$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
+
+$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage"
+```
+ ## Available virtual network regions
virtual-desktop Azure Monitor Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-monitor-glossary.md
Previously updated : 03/25/2022 Last updated : 10/26/2022
Any active Azure Monitor alerts that you've configured on the subscription and c
Available sessions shows the number of available sessions in the host pool. The service calculates this number by multiplying the number of virtual machines (VMs) by the maximum number of sessions allowed per virtual machine, then subtracting the total sessions.
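For example, a host pool with 10 VMs that allows a maximum of 8 sessions per VM and currently has 25 total sessions would show 10 × 8 − 25 = 55 available sessions (the numbers here are purely illustrative).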
+## Client operating system (OS)
+
+The client operating system (OS) shows which OS versions end users are running when they access Azure Virtual Desktop resources. It also shows which versions of the web (HTML) client and the full Remote Desktop client the users have. For a full list of Windows OS versions, see [Operating System Version](/windows/win32/sysinfo/operating-system-version).
+
+>[!IMPORTANT]
+>Windows 7 support will end on January 10, 2023. The client OS version for Windows 7 is Windows 6.1.
+ ## Connection success This item shows connection health. "Connection success" means that the connection could reach the host, as confirmed by the stack on that virtual machine. A failed connection means that the connection couldn't reach the host.
virtual-machines Dasv5 Dadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dasv5-dadsv5-series.md
Dadsv5-series virtual machines support Standard SSD, Standard HDD, and Premium S
| Standard_D64ads_v5 | 64 | 256 | 2400 | 32 | 300000 / 4000 | 80000/1200 | 80000/2000 | 8 | 32000 | | Standard_D96ads_v5 | 96 | 384 | 3600 | 32 | 450000 / 4000 | 80000/1600 | 80000/2000 | 8 | 40000 |
-* These IOPs values can be achieved by using Gen2 VMs.<br>
+<sup>*</sup> These IOPs values can be achieved by using Gen2 VMs.<br>
<sup>1</sup> Dadsv5-series VMs can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
virtual-machines Ddv5 Ddsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ddv5-ddsv5-series.md
Ddv5-series virtual machines support Standard SSD and Standard HDD disk types. T
[Live Migration](maintenance-and-updates.md): Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Required <br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Max network bandwidth (Mbps) | |||||||||
-| Standard_D2d_v5<sup>1,2</sup> | 2 | 8 | 75 | 4 | 9000/125 | 2 | 12500 |
+| Standard_D2d_v5 | 2 | 8 | 75 | 4 | 9000/125 | 2 | 12500 |
| Standard_D4d_v5 | 4 | 16 | 150 | 8 | 19000/250 | 2 | 12500 | | Standard_D8d_v5 | 8 | 32 | 300 | 16 | 38000/500 | 4 | 12500 | | Standard_D16d_v5 | 16 | 64 | 600 | 32 | 75000/1000 | 8 | 12500 |
Ddv5-series virtual machines support Standard SSD and Standard HDD disk types. T
<sup>*</sup> These IOPs values can be guaranteed by using [Gen2 VMs](generation-2.md)<br> <sup>1</sup> Accelerated networking is required and turned on by default on all Ddv5 virtual machines.<br>
-<sup>2</sup> Accelerated networking can be applied to two NICs.
## Ddsv5-series
Ddsv5-series virtual machines support Standard SSD, Standard HDD, and Premium SS
[Live Migration](maintenance-and-updates.md): Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Required <br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br>
Ddsv5-series virtual machines support Standard SSD, Standard HDD, and Premium SS
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>3</sup> | Max NICs | Max network bandwidth (Mbps) | |||||||||||
-| Standard_D2ds_v5<sup>1,2</sup> | 2 | 8 | 75 | 4 | 9000/125 | 3750/85 | 10000/1200 | 2 | 12500 |
+| Standard_D2ds_v5 | 2 | 8 | 75 | 4 | 9000/125 | 3750/85 | 10000/1200 | 2 | 12500 |
| Standard_D4ds_v5 | 4 | 16 | 150 | 8 | 19000/250 | 6400/145 | 20000/1200 | 2 | 12500 | | Standard_D8ds_v5 | 8 | 32 | 300 | 16 | 38000/500 | 12800/290 | 20000/1200 | 4 | 12500 | | Standard_D16ds_v5 | 16 | 64 | 600 | 32 | 75000/1000 | 25600/600 | 40000/1200 | 8 | 12500 |
Ddsv5-series virtual machines support Standard SSD, Standard HDD, and Premium SS
<sup>*</sup> These IOPs values can be guaranteed by using [Gen2 VMs](generation-2.md)<br> <sup>1</sup> Accelerated networking is required and turned on by default on all Ddsv5 virtual machines.<br>
-<sup>2</sup> Accelerated networking can be applied to two NICs.<br>
-<sup>3</sup> Ddsv5-series virtual machines can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
+<sup>2</sup> Ddsv5-series virtual machines can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Disks Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-cross-tenant-customer-managed-keys.md
description: Learn how to use customer-managed keys with your Azure disks in dif
Previously updated : 10/04/2022 Last updated : 10/26/2022
If you have questions about cross-tenant customer-managed keys with managed disk
## Limitations -- Currently this feature is only available in the North Central US, West Central US, West US, East US 2, and North Europe regions.
+- Currently this feature is only available in the Central US, North Central US, West US, West Central US, East US, East US 2, and North Europe regions.
- Managed Disks and the customer's Key Vault must be in the same Azure region, but they can be in different subscriptions. - This feature doesn't support Ultra Disks or Azure Premium SSD v2 managed disks.
virtual-machines Dv5 Dsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv5-dsv5-series.md
Dv5-series virtual machines do not have any temporary storage thus lowering the
[Live Migration](maintenance-and-updates.md): Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Required <br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max NICs|Max network bandwidth (Mbps) | ||||||||
-| Standard_D2_v5<sup>1, 2</sup> | 2 | 8 | Remote Storage Only | 4 | 2 | 12500 |
+| Standard_D2_v5 | 2 | 8 | Remote Storage Only | 4 | 2 | 12500 |
| Standard_D4_v5 | 4 | 16 | Remote Storage Only | 8 | 2 | 12500 | | Standard_D8_v5 | 8 | 32 | Remote Storage Only | 16 | 4 | 12500 | | Standard_D16_v5 | 16 | 64 | Remote Storage Only | 32 | 8 | 12500 |
Dv5-series virtual machines do not have any temporary storage thus lowering the
| Standard_D96_v5 | 96 | 384 | Remote Storage Only | 32 | 8 | 35000 | <sup>1</sup> Accelerated networking is required and turned on by default on all Dv5 virtual machines.<br>
-<sup>2</sup> Accelerated networking can be applied to two NICs.
## Dsv5-series
Dsv5-series virtual machines do not have any temporary storage thus lowering the
[Live Migration](maintenance-and-updates.md): Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Required <br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>3</sup> | Max NICs | Max network bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>2</sup> | Max NICs | Max network bandwidth (Mbps) |
||||||||||
-| Standard_D2s_v5<sup>1,2</sup> | 2 | 8 | Remote Storage Only | 4 | 3750/85 | 10000/1200 | 2 | 12500 |
+| Standard_D2s_v5 | 2 | 8 | Remote Storage Only | 4 | 3750/85 | 10000/1200 | 2 | 12500 |
| Standard_D4s_v5 | 4 | 16 | Remote Storage Only | 8 | 6400/145 | 20000/1200 | 2 | 12500 | | Standard_D8s_v5 | 8 | 32 | Remote Storage Only | 16 | 12800/290 | 20000/1200 | 4 | 12500 | | Standard_D16s_v5 | 16 | 64 | Remote Storage Only | 32 | 25600/600 | 40000/1200 | 8 | 12500 |
Dsv5-series virtual machines do not have any temporary storage thus lowering the
| Standard_D96s_v5 | 96 | 384 | Remote Storage Only | 32 | 80000/2600 | 80000/4000 | 8 | 35000 | <sup>1</sup> Accelerated networking is required and turned on by default on all Dsv5 virtual machines.<br>
-<sup>2</sup> Accelerated networking can be applied to two NICs.<br>
-<sup>3</sup> Dsv5-series virtual machines can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
+<sup>2</sup> Dsv5-series virtual machines can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Ebdsv5 Ebsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md
The Ebdsv5 and Ebsv5 series run on the Intel® Xeon® Platinum 8370C (Ice Lake)
> [!IMPORTANT] > - Accelerated networking is required and turned on by default on all Ebsv5 and Ebdsv5 VMs.
-> - Accelerated networking can be applied to two NICs.
->- Ebsv5 and Ebdsv5-series VMs can [burst their disk performance](disk-bursting.md) and get up to their bursting max for up to 30 minutes at a time.
+> - Ebsv5 and Ebdsv5-series VMs can [burst their disk performance](disk-bursting.md) and get up to their bursting max for up to 30 minutes at a time.
## Ebdsv5 series
virtual-machines Edv5 Edsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/edv5-edsv5-series.md
Edv5-series virtual machines support Standard SSD and Standard HDD disk types. T
[Live Migration](maintenance-and-updates.md): Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Required <br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br><br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max NICs|Max network bandwidth (Mbps) | |||||||||
-| Standard_E2d_v5<sup>1,2</sup> | 2 | 16 | 75 | 4 | 9000/125 | 2 | 12500 |
+| Standard_E2d_v5 | 2 | 16 | 75 | 4 | 9000/125 | 2 | 12500 |
| Standard_E4d_v5 | 4 | 32 | 150 | 8 | 19000/250 | 2 | 12500 | | Standard_E8d_v5 | 8 | 64 | 300 | 16 | 38000/500 | 4 | 12500 | | Standard_E16d_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 8 | 12500 |
Edv5-series virtual machines support Standard SSD and Standard HDD disk types. T
<sup>*</sup> These IOPs values can be guaranteed by using [Gen2 VMs](generation-2.md)<br> <sup>1</sup> Accelerated networking is required and turned on by default on all Edv5 virtual machines.<br>
-<sup>2</sup> Accelerated networking can be applied to two NICs.<br>
-<sup>3</sup> Instance is [isolated](../security/fundamentals/isolation-choices.md#compute-isolation) to hardware dedicated to a single customer.
+<sup>2</sup> Instance is [isolated](../security/fundamentals/isolation-choices.md#compute-isolation) to hardware dedicated to a single customer.
## Edsv5-series
Edsv5-series virtual machines support Standard SSD and Standard HDD disk types.
[Live Migration](maintenance-and-updates.md): Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Required <br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>5</sup> | Max NICs | Max network bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps<sup>*</sup> | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>4</sup> | Max NICs | Max network bandwidth (Mbps) |
|||||||||||
-| Standard_E2ds_v5<sup>1,2</sup> | 2 | 16 | 75 | 4 | 9000/125 | 3750/85 | 10000/1200 | 2 | 12500 |
+| Standard_E2ds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 3750/85 | 10000/1200 | 2 | 12500 |
| Standard_E4ds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 6400/145 | 20000/1200 | 2 | 12500 | | Standard_E8ds_v5 | 8 | 64 | 300 | 16 | 38000/500 | 12800/290 | 20000/1200 | 4 | 12500 | | Standard_E16ds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 25600/600 | 40000/1200 | 8 | 12500 |
Edsv5-series virtual machines support Standard SSD and Standard HDD disk types.
<sup>1</sup> Accelerated networking is required and turned on by default on all Edsv5 virtual machines.
-<sup>2</sup> Accelerated networking can be applied to two NICs.
-
-<sup>3</sup> [Constrained Core](constrained-vcpu.md) sizes available.
+<sup>2</sup> [Constrained Core](constrained-vcpu.md) sizes available.
-<sup>4</sup> Instance is [isolated](../security/fundamentals/isolation-choices.md#compute-isolation) to hardware dedicated to a single customer.
+<sup>3</sup> Instance is [isolated](../security/fundamentals/isolation-choices.md#compute-isolation) to hardware dedicated to a single customer.
-<sup>5</sup> Edsv5-series virtual machines can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
+<sup>4</sup> Edsv5-series virtual machines can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
-<sup>6</sup> Attaching Ultra Disk or Premium v2 SSDs to **Standard_E104ids_v5** results in higher IOPs and MBps than standard premium disks:
+<sup>5</sup> Attaching Ultra Disk or Premium v2 SSDs to **Standard_E104ids_v5** results in higher IOPs and MBps than standard premium disks:
- Max uncached Ultra Disk and Premium v2 SSD throughput (IOPS/ MBps): 120000/4000 - Max burst uncached Ultra Disk and Premium v2 SSD disk throughput (IOPS/ MBps): 120000/4000
virtual-machines Ev5 Esv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ev5-esv5-series.md
Ev5-series supports Standard SSD and Standard HDD disk types. To use Premium SSD
[Live Migration](maintenance-and-updates.md): Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Required <br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max NICs|Max network bandwidth (Mbps) | ||||||||
-| Standard_E2_v5<sup>1,2</sup> | 2 | 16 | Remote Storage Only | 4 | 2 | 12500 |
+| Standard_E2_v5 | 2 | 16 | Remote Storage Only | 4 | 2 | 12500 |
| Standard_E4_v5 | 4 | 32 | Remote Storage Only | 8 | 2 | 12500 | | Standard_E8_v5 | 8 | 64 | Remote Storage Only | 16 | 4 | 12500 | | Standard_E16_v5 | 16 | 128 | Remote Storage Only | 32 | 8 | 12500 |
Ev5-series supports Standard SSD and Standard HDD disk types. To use Premium SSD
| Standard_E104i_v5<sup>3</sup> | 104 | 672 | Remote Storage Only | 64 | 8 | 100000 | <sup>1</sup> Accelerated networking is required and turned on by default on all Ev5 virtual machines.<br>
-<sup>2</sup> Accelerated networking can be applied to two NICs.<br>
-<sup>3</sup> Instance is [isolated](../security/fundamentals/isolation-choices.md#compute-isolation) to hardware dedicated to a single customer.<br>
+<sup>2</sup> Instance is [isolated](../security/fundamentals/isolation-choices.md#compute-isolation) to hardware dedicated to a single customer.<br>
## Esv5-series
Esv5-series supports Standard SSD, Standard HDD, and Premium SSD disk types. You
[Live Migration](maintenance-and-updates.md): Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Required <br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Required <br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>5</sup> | Max NICs | Max network bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>4</sup> | Max NICs | Max network bandwidth (Mbps) |
||||||||||
-| Standard_E2s_v5<sup>1,2</sup> | 2 | 16 | Remote Storage Only | 4 | 3750/85 | 10000/1200 | 2 | 12500 |
+| Standard_E2s_v5 | 2 | 16 | Remote Storage Only | 4 | 3750/85 | 10000/1200 | 2 | 12500 |
| Standard_E4s_v5 | 4 | 32 | Remote Storage Only | 8 | 6400/145 | 20000/1200 | 2 | 12500 | | Standard_E8s_v5 | 8 | 64 | Remote Storage Only | 16 | 12800/290 | 20000/1200 | 4 | 12500 | | Standard_E16s_v5 | 16 | 128 | Remote Storage Only | 32 | 25600/600 | 40000/1200 | 8 | 12500 |
Esv5-series supports Standard SSD, Standard HDD, and Premium SSD disk types. You
| Standard_E48s_v5 | 48 | 384 | Remote Storage Only | 32 | 76800/1315 | 80000/3000 | 8 | 24000 | | Standard_E64s_v5 | 64 | 512 | Remote Storage Only | 32 | 80000/1735 | 80000/3000 | 8 | 30000 | | Standard_E96s_v5<sup>3</sup> | 96 | 672 | Remote Storage Only | 32 | 80000/2600 | 80000/4000 | 8 | 35000 |
-| Standard_E104is_v5<sup>4,6</sup> | 104 | 672 | Remote Storage Only | 64 | 120000/4000 | 120000/4000 | 8 | 100000 |
+| Standard_E104is_v5<sup>3,5</sup> | 104 | 672 | Remote Storage Only | 64 | 120000/4000 | 120000/4000 | 8 | 100000 |
<sup>1</sup> Accelerated networking is required and turned on by default on all Esv5 virtual machines.
-<sup>2</sup> Accelerated networking can be applied to two NICs.
+<sup>2</sup> [Constrained core](constrained-vcpu.md) sizes available.
-<sup>3</sup> [Constrained core](constrained-vcpu.md) sizes available.
+<sup>3</sup> Instance is [isolated](../security/fundamentals/isolation-choices.md#compute-isolation) to hardware dedicated to a single customer.
-<sup>4</sup> Instance is [isolated](../security/fundamentals/isolation-choices.md#compute-isolation) to hardware dedicated to a single customer.
+<sup>4</sup> Esv5-series VMs can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
-<sup>5</sup> Esv5-series VMs can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
-
-<sup>6</sup> Attaching Ultra Disk or Premium SSDs V2 to **Standard_E104is_v5** results in higher IOPs and MBps than standard premium disks:
+<sup>5</sup> Attaching Ultra Disk or Premium SSDs V2 to **Standard_E104is_v5** results in higher IOPs and MBps than standard premium disks:
- Max uncached Ultra Disk and Premium SSD V2 throughput (IOPS/ MBps): 120000/4000 - Max burst uncached Ultra Disk and Premium SSD V2 disk throughput (IOPS/ MBps): 120000/4000
virtual-machines Time Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/time-sync.md
For more information about chrony, see [Using chrony](https://access.redhat.com/
On SUSE and Ubuntu releases before 19.10, time sync is configured using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). For more information about Ubuntu, see [Time Synchronization](https://help.ubuntu.com/lts/serverguide/NTP.html). For more information about SUSE, see Section 4.5.8 in [SUSE Linux Enterprise Server 12 SP3 Release Notes](https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12-SP3/#InfraPackArch.ArchIndependent.SystemsManagement).
+### cloud-init
+
+Images that use cloud-init to provision the VM can use the `ntp` section to set up a time sync service. The following example installs chrony and configures it to use the PTP clock source on Ubuntu VMs:
+
+```yaml
+#cloud-config
+ntp:
+ enabled: true
+ ntp_client: chrony
+ config:
+ confpath: /etc/chrony/chrony.conf
+ packages:
+ - chrony
+ service_name: chrony
+ template: |
+ ## template:jinja
+ driftfile /var/lib/chrony/chrony.drift
+ logdir /var/log/chrony
+ maxupdateskey 100.0
+ refclock PHC /dev/ptp_hyperv poll 3 dpoll -2
+ makestep 1.0 -1
+```
+
+You can then base64-encode the above cloud-config for use in the `osProfile` section of an ARM template:
+
+```powershell
+[Convert]::ToBase64String((Get-Content -Path ./cloud-config.txt -Encoding Byte))
+```
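If you're running PowerShell 7 or later, `Get-Content -Encoding Byte` is no longer available; an equivalent, assuming the same file path, is:

```powershell
# PowerShell 7+ : read the file as raw bytes and base64-encode it.
[Convert]::ToBase64String((Get-Content -Path ./cloud-config.txt -AsByteStream -Raw))
```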
+
+```json
+"osProfile": {
+ "customData": "I2Nsb3VkLWNvbmZpZwpudHA6CiAgZW5hYmxlZDogdHJ1ZQogIG50cF9jbGllbnQ6IGNocm9ueQogIGNvbmZpZzoKICAgIGNvbmZwYXRoOiAvZXRjL2Nocm9ueS9jaHJvbnkuY29uZgogICAgcGFja2FnZXM6CiAgICAgLSBjaHJvbnkKICAgIHNlcnZpY2VfbmFtZTogY2hyb255CiAgICB0ZW1wbGF0ZTogfAogICAgICAgIyMgdGVtcGxhdGU6amluamEKICAgICAgIGRyaWZ0ZmlsZSAvdmFyL2xpYi9jaHJvbnkvY2hyb255LmRyaWZ0CiAgICAgICBsb2dkaXIgL3Zhci9sb2cvY2hyb255CiAgICAgICBtYXh1cGRhdGVza2V5IDEwMC4wCiAgICAgICByZWZjbG9jayBQSEMgL2Rldi9wdHBfaHlwZXJ2IHBvbGwgMyBkcG9sbCAtMgogICAgICAgbWFrZXN0ZXAgMS4wIC0x"
+}
+```
+
+For more information about cloud-init on Azure, see [Overview of cloud-init support for Linux VMs in Azure](./using-cloud-init.md).
+ ## Next steps For more information, see [Accurate time for Windows Server 2016](/windows-server/networking/windows-time-service/accurate-time).
virtual-machines Tutorial Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-load-balancer.md
az network lb address-pool show \
--lb-name myLoadBalancer \ --name myBackEndPool \ --query backendIpConfigurations \
- --output tsv | cut -f4
+ --output tsv | cut -f5
``` The output is similar to the following example, which shows that the virtual NIC for VM 2 is no longer part of the backend address pool:
virtual-machines Windows Desktop Multitenant Hosting Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/windows-desktop-multitenant-hosting-deployment.md
Title: How to deploy Windows 10 on Azure
+ Title: How to deploy Windows 11 on Azure
description: Learn how to maximize your Windows Software Assurance benefits to bring on-premises licenses to Azure with Multitenant Hosting Rights. Previously updated : 2/2/2021 Last updated : 10/24/2022
-# How to deploy Windows 10 on Azure
+# How to deploy Windows 11 on Azure
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
-For customers with Windows 10 Enterprise E3/E5 per user or Azure Virtual Desktop Access per user (User Subscription Licenses or Add-on User Subscription Licenses), Multitenant Hosting Rights for Windows 10 allows you to bring your Windows 10 Licenses to the cloud and run Windows 10 Virtual Machines on Azure without paying for another license. Multitenant Hosting Rights are only available for Windows 10 (version 1703 or later).
+For customers with Windows 11 Enterprise E3/E5 per user or Azure Virtual Desktop Access per user (User Subscription Licenses or Add-on User Subscription Licenses), Multitenant Hosting Rights for Windows 11 allows you to bring your Windows 11 Licenses to the cloud and run Windows 11 Virtual Machines on Azure without paying for another license.
-For more information, see [Multitenant Hosting for Windows 10](https://www.microsoft.com/en-us/CloudandHosting).
+For more information, see [Multitenant Hosting for Windows 11](https://www.microsoft.com/en-us/CloudandHosting).
> [!NOTE] > - To use Windows 7, 8.1 and 10 images for development or testing see [Windows client in Azure for dev/test scenarios](client-images.md)
For more information, see [Multitenant Hosting for Windows 10](https://www.micro
## Subscription Licenses that qualify for Multitenant Hosting Rights
-For more details about subscription licenses that qualify to run Windows 10 on Azure, download the [Windows 10 licensing brief for Virtual Desktops](https://download.microsoft.com/download/3/D/4/3D42BDC2-6725-4B29-B75A-A5B04179958B/Licensing_brief_PLT_Windows_10_licensing_for_Virtual_Desktops.pdf)
+For more details about subscription licenses that qualify to run Windows 11 on Azure, download the [Windows 11 licensing brief for Virtual Desktops](https://download.microsoft.com/download/3/D/4/3D42BDC2-6725-4B29-B75A-A5B04179958B/Licensing_brief_PLT_Windows_10_licensing_for_Virtual_Desktops.pdf)
> [!IMPORTANT]
-> Users **must** have one of the below subscription licenses in order to use Windows 10 images in Azure for any production workload. If you do not have one of these subscription licenses, they can be purchased through your [Cloud Service Partner](https://azure.microsoft.com/overview/choosing-a-cloud-service-provider/) or directly through [Microsoft](https://www.microsoft.com/microsoft-365?rtc=1).
+> Users **must** have one of the below subscription licenses in order to use Windows 11 images in Azure for any production workload. If you do not have one of these subscription licenses, they can be purchased through your [Cloud Service Partner](https://azure.microsoft.com/overview/choosing-a-cloud-service-provider/) or directly through [Microsoft](https://www.microsoft.com/microsoft-365?rtc=1).
## Operating systems and licenses
You have a choice of operating systems that you can use for session hosts to pro
### Operating system licenses - Windows 11 Enterprise multi-session - Windows 11 Enterprise-- Windows 10 Enterprise, version 1909 and later
+- Windows 10 Enterprise, version 1909 and later (for Windows 10 deployments)
### License entitlement - Microsoft 365 E3, E5, A3, A5, F3, Business Premium, Student Use Benefit
You have a choice of operating systems that you can use for session hosts to pro
External users can use [per-user access pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/) instead of license entitlement.
-## Deploying Windows 10 Image from Azure Marketplace
-For PowerShell, CLI and Azure Resource Manager template deployments, Windows 10 images can be found using the `PublisherName: MicrosoftWindowsDesktop` and `Offer: Windows-10`. Windows 10 version Creators Update (1809) or later is supported for Multitenant Hosting Rights.
+## Deploying Windows 11 Image from Azure Marketplace
+For PowerShell, CLI and Azure Resource Manager template deployments, Windows 11 images can be found using the `PublisherName: MicrosoftWindowsDesktop` and `Offer: Windows-11`.
```powershell
-Get-AzVmImageSku -Location '$location' -PublisherName 'MicrosoftWindowsDesktop' -Offer 'Windows-10'
-
-Skus Offer PublisherName Location
-- -- - --
-rs4-pro Windows-10 MicrosoftWindowsDesktop eastus
-rs4-pron Windows-10 MicrosoftWindowsDesktop eastus
-rs5-enterprise Windows-10 MicrosoftWindowsDesktop eastus
-rs5-enterprisen Windows-10 MicrosoftWindowsDesktop eastus
-rs5-pron Windows-10 MicrosoftWindowsDesktop eastus
+Get-AzVmImageSku -Location 'West US' -PublisherName 'MicrosoftWindowsDesktop' -Offer 'Windows-11'
+
+Skus Offer PublisherName Location
+- -- - --
+win11-21h2-avd Windows-11 MicrosoftWindowsDesktop westus
+win11-21h2-ent Windows-11 MicrosoftWindowsDesktop westus
+win11-21h2-entn Windows-11 MicrosoftWindowsDesktop westus
+win11-21h2-pro Windows-11 MicrosoftWindowsDesktop westus
+win11-21h2-pron Windows-11 MicrosoftWindowsDesktop westus
+win11-22h2-avd Windows-11 MicrosoftWindowsDesktop westus
+win11-22h2-ent Windows-11 MicrosoftWindowsDesktop westus
+win11-22h2-entn Windows-11 MicrosoftWindowsDesktop westus
+win11-22h2-pro Windows-11 MicrosoftWindowsDesktop westus
+win11-22h2-pron Windows-11 MicrosoftWindowsDesktop westus
+ ```
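If you're working from the Azure CLI instead, a roughly equivalent query is the following (the region and output format are illustrative):

```azurecli
az vm image list-skus --location westus --publisher MicrosoftWindowsDesktop --offer Windows-11 --output table
```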
-For more information on available images see [Find and use Azure Marketplace VM images with Azure PowerShell](./cli-ps-findimage.md)
+For more information on available images, see [Find and use Azure Marketplace VM images with Azure PowerShell](./cli-ps-findimage.md)
-## Uploading Windows 10 VHD to Azure
-if you are uploading a generalized Windows 10 VHD, please note Windows 10 does not have built-in administrator account enabled by default. To enable the built-in administrator account, include the following command as part of the Custom Script extension.
+## Uploading Windows 11 VHD to Azure
+If you're uploading a generalized Windows 11 VHD, note that Windows 11 doesn't have the built-in administrator account enabled by default. To enable the built-in administrator account, include the following command as part of the Custom Script extension.
```powershell Net user <username> /active:yes
For more information:
* [How to prepare a Windows VHD to upload to Azure](prepare-for-upload-vhd-image.md)
-## Deploying Windows 10 with Multitenant Hosting Rights
-Make sure you have [installed and configured the latest Azure PowerShell](/powershell/azure/). Once you have prepared your VHD, upload the VHD to your Azure Storage account using the `Add-AzVhd` cmdlet as follows:
+## Deploying Windows 11 with Multitenant Hosting Rights
+Make sure you've [installed and configured the latest Azure PowerShell](/powershell/azure/). Once you've prepared your VHD, upload the VHD to your Azure Storage account using the `Add-AzVhd` cmdlet as follows:
```powershell Add-AzVhd -ResourceGroupName "myResourceGroup" -LocalFilePath "C:\Path\To\myvhd.vhd" `
Add-AzVhd -ResourceGroupName "myResourceGroup" -LocalFilePath "C:\Path\To\myvhd.
**Deploy using Azure Resource Manager Template Deployment**
-Within your Resource Manager templates, an additional parameter for `licenseType` can be specified. You can read more about [authoring Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md). Once you have your VHD uploaded to Azure, edit you Resource Manager template to include the license type as part of the compute provider and deploy your template as normal:
+Within your Resource Manager templates, an additional parameter for `licenseType` can be specified. You can read more about [authoring Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md). Once you've uploaded your VHD to Azure, edit your Resource Manager template to include the license type as part of the compute provider and deploy your template as normal:
```json "properties": { "licenseType": "Windows_Client",
Within your Resource Manager templates, an additional parameter for `licenseType
``` **Deploy via PowerShell**
-When deploying your Windows Server VM via PowerShell, you have an additional parameter for `-LicenseType`. Once you have your VHD uploaded to Azure, you create a VM using `New-AzVM` and specify the licensing type as follows:
+When deploying your Windows 11 VM via PowerShell, add the `-LicenseType` parameter. Once you have your VHD uploaded to Azure, you can create a VM using `New-AzVM` and specify the licensing type as follows:
+ ```powershell New-AzVM -ResourceGroupName "myResourceGroup" -Location "West US" -VM $vm -LicenseType "Windows_Client" ``` ## Verify your VM is utilizing the licensing benefit
-Once you have deployed your VM through either the PowerShell or Resource Manager deployment method, verify the license type with `Get-AzVM` as follows:
+Once you've deployed your VM through either the PowerShell or Resource Manager deployment method, verify the license type with `Get-AzVM`:
```powershell Get-AzVM -ResourceGroup "myResourceGroup" -Name "myVM" ```
LicenseType :
``` ## Additional Information about joining Azure Active Directory
-Azure provisions all Windows VMs with built-in administrator account, which cannot be used to join Azure Active Directory. For example, *Settings > Account > Access Work or School > +Connect* will not work. You must create and log on as a second administrator account to join Azure AD manually. You can also configure Azure AD using a provisioning package, use the link in the *Next Steps* section to learn more.
+Azure provisions all Windows VMs with a built-in administrator account, which can't be used to join Azure Active Directory. For example, *Settings > Account > Access Work or School > + Connect* won't work. You must create and log on as a second administrator account to join Azure AD manually. You can also configure Azure AD using a provisioning package; use the link in the *Next Steps* section to learn more.
## Next Steps-- Learn more about [Configuring VDA for Windows 10](/windows/deployment/vda-subscription-activation)-- Learn more about [Multitenant Hosting for Windows 10](https://www.microsoft.com/en-us/CloudandHosting)
+- Learn more about [Configuring VDA for Windows 11](/windows/deployment/vda-subscription-activation)
+- Learn more about [Multitenant Hosting for Windows 11](https://www.microsoft.com/en-us/CloudandHosting)
virtual-machines High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
Make sure to assign the custom role to the service principal at all VM (cluster
vm.dirty_bytes = 629145600 vm.dirty_background_bytes = 314572800 </code></pre>
+
+   c. Make sure vm.swappiness is set to 10 to avoid [hang issues with backups/compression on NetApp filesystems](https://me.sap.com/notes/2080199), and to reduce swap usage and favor memory.
+
+ <pre><code>sudo vi /etc/sysctl.conf
+ # Change/set the following setting
+ vm.swappiness = 10
+ </code></pre>
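   To apply the change without a reboot, you can also run `sudo sysctl -p`, which reloads /etc/sysctl.conf (standard sysctl behavior, not an SAP-specific step).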
1. **[A]** Configure *cloud-netconfig-azure* for the high availability cluster.
Azure offers [scheduled events](../../linux/scheduled-events.md). Scheduled even
* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide] * [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server][sles-nfs-guide] * [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications][sles-guide]
-* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High availability of SAP HANA on Azure Virtual Machines][sap-hana-ha]
+* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High availability of SAP HANA on Azure Virtual Machines][sap-hana-ha]
virtual-network Create Public Ip Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-cli.md
Create a resource group with [az group create](/cli/azure/group#az-group-create)
--name QuickStartCreateIP-rg \ --location eastus2 ```
+## Create public IP
# [**Standard SKU**](#tab/create-public-ip-standard)
virtual-network Manage Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-custom-ip-address-prefix.md
To fully remove a custom IP prefix, it must be deprovisioned and then deleted.
> [!NOTE] > If you need to migrate a provisioned range from one region to another, the original custom IP prefix must be fully removed from the first region before a new custom IP prefix with the same address range can be created in another region. >
-> The estimated time to complete the deprovisioning process can range from 30 minutes to 13 hours.
+> The estimated time to complete the deprovisioning process is anywhere from 30 to 60 minutes.
The following commands can be used in Azure CLI and Azure PowerShell to deprovision and remove the range from Microsoft. The deprovisioning operation is asynchronous. You can use the view commands to retrieve the status. The **CommissionedState** field will initially show the prefix as **Deprovisioning**, followed by **Deprovisioned** as it transitions to the earlier state. When the range is in the **Deprovisioned** state, it can be deleted by using the commands to remove.
virtual-network Public Ip Upgrade Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-cli.md
Title: Upgrade a public IP address - Azure CLI
+ Title: 'Upgrade a public IP address - Azure CLI'
description: In this article, learn how to upgrade a basic SKU public IP address using the Azure CLI. Previously updated : 05/20/2021 Last updated : 10/25/2022 ms.devlang: azurecli
In this article, you'll learn how to upgrade a static Basic SKU public IP addres
## Prerequisites
-* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-* A **static** basic SKU public IP address in your subscription. For more information, see [Create public IP address - Azure portal](./create-public-ip-portal.md#create-a-basic-sku-public-ip-address).
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* A **static** basic SKU public IP address in your subscription. For more information, see [Create a basic public IP address using the Azure CLI](./create-public-ip-cli.md?tabs=create-public-ip-basic%2Ccreate-public-ip-zonal%2Crouting-preference#create-public-ip).
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment-no-header.md)]
In this article, you'll learn how to upgrade a static Basic SKU public IP addres
In this section, you'll use the Azure CLI and upgrade your static Basic SKU public IP to the Standard SKU.
-In order to upgrade a public IP, it must not be associated with any resource (see [this page](/azure/virtual-network/virtual-network-public-ip-address#view-modify-settings-for-or-delete-a-public-ip-address) for more information about how to disassociate public IPs).
+In order to upgrade a public IP, it must not be associated with any resource. For more information, see [View, modify settings for, or delete a public IP address](./virtual-network-public-ip-address.md#view-modify-settings-for-or-delete-a-public-ip-address) to learn how to disassociate a public IP.
>[!IMPORTANT] >Public IPs upgraded from Basic to Standard SKU continue to have no [availability zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones). This means they cannot be associated with an Azure resource that is either zone-redundant or tied to a pre-specified zone in regions where this is offered.
az network public-ip update \
``` > [!NOTE]
-> The basic public IP you are upgrading must have the static allocation type. You'll receive a warning that the IP can't be upgraded if you try to upgrade a dynamically allocated IP address.
+> The basic public IP you are upgrading must have static assignment. You'll receive a warning that the IP can't be upgraded if you try to upgrade a dynamically allocated IP address. Change the IP address assignment to static before upgrading.
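For example, a minimal way to switch a dynamic basic public IP to static before upgrading (the resource names are illustrative):

```azurecli
az network public-ip update \
    --resource-group myResourceGroup \
    --name myBasicPublicIP \
    --allocation-method Static
```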
> [!WARNING] > Upgrading a basic public IP to standard SKU can't be reversed. Public IPs upgraded from basic to standard SKU continue to have no guaranteed [availability zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones).
In this article, you upgraded a basic SKU public IP address to standard SKU.
For more information on public IP addresses in Azure, see: - [Public IP addresses in Azure](public-ip-addresses.md)-- [Create a public IP - Azure portal](./create-public-ip-portal.md)
+- [Create a public IP address using the Azure CLI](./create-public-ip-cli.md)
virtual-network Virtual Network Deploy Static Pip Arm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-deploy-static-pip-arm-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
| Select inbound ports | Select **RDP (3389)** | > [!WARNING]
- > Portal 3389 is selected, to enable remote access to the Windows Server virtual machine from the internet. Opening port 3389 to the internet is not recommended to manage production workloads. </br> For secure access to Azure virtual machines, see **[What is Azure Bastion?](../../bastion/bastion-overview.md)**
+ > Port 3389 is selected to enable remote access to the Windows Server virtual machine from the internet. Opening port 3389 to the internet is not recommended for managing production workloads. </br> For secure access to Azure virtual machines, see **[What is Azure Bastion?](../../bastion/bastion-overview.md)**
3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
virtual-wan Virtual Wan Point To Site Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-azure-ad.md
A User VPN configuration defines the parameters for connecting remote clients. I
* **Authentication method** - Select Azure Active Directory. * **Audience** - Type in the Application ID of the [Azure VPN](openvpn-azure-ad-tenant.md) Enterprise Application registered in your Azure AD tenant. * **Issuer** - `https://sts.windows.net/<your Directory ID>/`
- * **AAD Tenant:** TenantID for the Azure AD tenant
+ * **AAD Tenant:** TenantID for the Azure AD tenant. Make sure there is no `/` at the end of the AAD tenant URL.
- * Enter `https://login.microsoftonline.com/{AzureAD TenantID}/` for Azure Public AD
- * Enter `https://login.microsoftonline.us/{AzureAD TenantID/` for Azure Government AD
- * Enter `https://login-us.microsoftonline.de/{AzureAD TenantID/` for Azure Germany AD
- * Enter `https://login.chinacloudapi.cn/{AzureAD TenantID/` for China 21Vianet AD
+ * Enter `https://login.microsoftonline.com/{AzureAD TenantID}` for Azure Public AD
+ * Enter `https://login.microsoftonline.us/{AzureAD TenantID}` for Azure Government AD
+ * Enter `https://login-us.microsoftonline.de/{AzureAD TenantID}` for Azure Germany AD
+ * Enter `https://login.chinacloudapi.cn/{AzureAD TenantID}` for China 21Vianet AD
1. Click **Create** to create the User VPN configuration. You'll select this configuration later in the exercise.
vpn-gateway Openvpn Azure Ad Tenant Multi App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant-multi-app.md
Title: 'How to create an Azure AD tenant for P2S OpenVPN protocol connections: Azure AD authentication'
+ Title: 'Configure P2S for different user and group access: Azure AD authentication and multi app'
description: Learn how to set up an Azure AD tenant for P2S OpenVPN authentication and register multiple apps in Azure AD to allow different access for different users and groups.- - Previously updated : 05/05/2021 Last updated : 10/25/2022
-# Create an Active Directory (AD) tenant for P2S OpenVPN protocol connections
+# Configure P2S for access based on users and groups - Azure AD authentication
-When you connect to your VNet using Point-to-Site, you have a choice of which protocol to use. The protocol you use determines the authentication options that are available to you. If you want to use Azure Active Directory authentication, you can do so when using the OpenVPN protocol. If you want different set of users to be able to connect to different VPN gateways, you can register multiple apps in AD and link them to different VPN gateways. This article helps you set up an Azure AD tenant for P2S OpenVPN and create and register multiple apps in Azure AD for allowing different access for different users and groups. For more information about Point-to-Site protocols and authentication, see [About Point-to-Site VPN](point-to-site-about.md).
+When you use Azure AD as the authentication method for P2S, you can configure P2S to allow different access for different users and groups. If you want different sets of users to be able to connect to different VPN gateways, you can register multiple apps in AD and link them to different VPN gateways. This article helps you set up an Azure AD tenant for P2S Azure AD authentication and create and register multiple apps in Azure AD for allowing different access for different users and groups. For more information about point-to-site protocols and authentication, see [About point-to-site VPN](point-to-site-about.md).
[!INCLUDE [OpenVPN note](../../includes/vpn-gateway-openvpn-auth-include.md)]
+## Azure AD tenant
+
+The steps in this article require an Azure AD tenant. If you don't have an Azure AD tenant, you can create one using the steps in the [Create a new tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md) article. Note the following fields when creating your directory:
+
+* Organizational name
+* Initial domain name
+
+## Create Azure AD tenant users
+
+1. Create two accounts in the newly created Azure AD tenant. For steps, see [Add or delete a new user](../active-directory/fundamentals/add-users-azure-active-directory.md).
+
+ * Global administrator account
+ * User account
+
+ The global administrator account will be used to grant consent to the Azure VPN app registration. The user account can be used to test OpenVPN authentication.
+
+1. Assign one of the accounts the **Global administrator** role. For steps, see [Assign administrator and non-administrator roles to users with Azure Active Directory](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+
+## Authorize the Azure VPN application
++
+## Register additional applications
+
+In this section, you register additional applications for various users and groups. Repeat the steps to create as many applications as your security requirements need. Each application is associated with a VPN gateway and can have a different set of users. Only one application can be associated with a gateway.
+
+### Add a scope
+
+1. In the Azure portal, select **Azure Active Directory**.
+1. In the left pane, select **App registrations**.
+1. At the top of the **App registrations** page, select **+ New registration**.
+1. On the **Register an application** page, enter the **Name**. For example, MarketingVPN. You can always change the name later.
+ * Select the desired **Supported account types**.
+ * At the bottom of the page, click **Register**.
+1. Once the new app has been registered, in the left pane, click **Expose an API**. Then click **+ Add a scope**.
+ * On the **Add a scope** page, leave the default **Application ID URI**.
+ * Click **Save and continue**.
+1. The page returns to the **Add a scope** page. Fill in the required fields and ensure that **State** is **Enabled**.
+
+ :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/add-scope.png" alt-text="Screenshot of Azure Active Directory add a scope page." lightbox="./media/openvpn-azure-ad-tenant-multi-app/add-scope.png":::
+1. When you're done filling out the fields, click **Add scope**.
+
+### Add a client application
+
+1. On the **Expose an API** page, click **+ Add a client application**.
+1. On the **Add a client application** page, for **Client ID**, enter the following values depending on the cloud:
+
+ * Azure Public: `41b23e61-6c1e-4545-b367-cd054e0ed4b4`
+ * Azure Government: `51bb15d4-3a4f-4ebf-9dca-40096fe32426`
+ * Azure Germany: `538ee9e6-310a-468d-afef-ea97365856a9`
+ * Azure China 21Vianet: `49f817b6-84ae-4cc0-928c-73f27289b3aa`
+1. Select the checkbox for the **Authorized scopes** to include.
+
+ :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/add-application.png" alt-text="Screenshot of Azure Active Directory add client application page." lightbox="./media/openvpn-azure-ad-tenant-multi-app/add-application.png":::
+
+1. Click **Add application**.
+
+### Copy Application (client) ID
+
+When you enable authentication on the VPN gateway, you'll need the **Application (client) ID** value in order to fill out the Audience value for the point-to-site configuration.
+
+1. Go to the **Overview** page.
+
+1. Copy the **Application (client) ID** from the **Overview** page and save it so that you can access this value later. You'll need this information to configure your VPN gateway(s).
+
+ :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/client-id.png" alt-text="Screenshot showing Client ID value." lightbox="./media/openvpn-azure-ad-tenant-multi-app/client-id.png":::
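+
+If you're following the hypothetical Graph sketch from the earlier sections, the same value can also be read programmatically; `app['id']` is the object ID returned when the application was registered.
+
+```python
+# Hedged sketch: read the Application (client) ID to use as the P2S Audience.
+resp = requests.get(
+    f"{GRAPH}/applications/{app['id']}?$select=appId,displayName",
+    headers=headers,
+)
+resp.raise_for_status()
+print("Audience value:", resp.json()["appId"])
+```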
+
+## Assign users to applications
+
+Assign the users to your applications.
+
+1. Go to your Azure Active Directory and select **Enterprise applications**.
+1. From the list, locate the application you just registered and click to open it.
+1. Click **Properties**. On the **Properties** page, verify that **Enabled for users to sign in** is set to **Yes**. If not, change the value to **Yes**, then **Save**.
+1. In the left pane, click **Users and groups**. On the **Users and groups** page, click **+ Add user/group** to open the **Add Assignment** page.
+1. Click the link under **Users and groups** to open the **Users and groups** page. Select the users and groups that you want to assign, then click **Select**.
+1. After you finish selecting users and groups, click **Assign**.
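+
+As an alternative illustration of the same assignment, the following hedged Graph sketch creates an app role assignment on the application's service principal so that only assigned users can sign in to this VPN application. It assumes a token with the `AppRoleAssignment.ReadWrite.All` permission; `<user-object-id>` is a placeholder, and `GRAPH`, `headers`, and `app` come from the earlier registration sketch.
+
+```python
+# Hedged sketch: assign a user to the enterprise application with Microsoft Graph.
+resp = requests.get(
+    f"{GRAPH}/servicePrincipals?$filter=appId eq '{app['appId']}'",
+    headers=headers,
+)
+resp.raise_for_status()
+sp = resp.json()["value"][0]   # the service principal backing the enterprise app
+
+requests.post(
+    f"{GRAPH}/servicePrincipals/{sp['id']}/appRoleAssignedTo",
+    headers=headers,
+    json={
+        "principalId": "<user-object-id>",                    # the user (or group) to assign
+        "resourceId": sp["id"],                               # the application's service principal
+        "appRoleId": "00000000-0000-0000-0000-000000000000",  # default access role
+    },
+).raise_for_status()
+```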
+
+## Configure authentication for the gateway
+
+In this step, you configure P2S Azure AD authentication for the virtual network gateway.
+
+1. Go to the virtual network gateway. In the left pane, click **Point-to-site configuration**.
-## <a name="enable-authentication"></a>6. Enable authentication on the gateway
+ :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/enable-authentication.png" alt-text="Screenshot showing point-to-site configuration page." lightbox="./media/openvpn-azure-ad-tenant-multi-app/enable-authentication.png":::
-In this step, you will enable Azure AD authentication on the VPN gateway.
+ Configure the following values:
-1. Enable Azure AD authentication on the VPN gateway by navigating to **Point-to-site configuration** and picking **OpenVPN (SSL)** as the **Tunnel type**. Select **Azure Active Directory** as the **Authentication type** then fill in the information under the **Azure Active Directory** section.
+ * **Address pool**: client address pool
+ * **Tunnel type:** OpenVPN (SSL)
+ * **Authentication type**: Azure Active Directory
- ![Azure portal view](./media/openvpn-azure-ad-tenant-multi-app/azure-ad-auth-portal.png)
+ For **Azure Active Directory** values, use the following guidelines for **Tenant**, **Audience**, and **Issuer** values.
- > [!NOTE]
- > Do not use the Azure VPN client's application ID: It will grant all users access to the VPN gateway. Use the ID of the application(s) you registered.
+ * **Tenant**: `https://login.microsoftonline.com/{TenantID}`
+ * **Audience**: The **Application (client) ID** value that you copied in the previous section for the application that you created and registered. Don't use the application ID of the "Azure VPN" Azure AD Enterprise App. That ID grants all users access to the VPN gateway (the default way to set up access), instead of granting access only to the users that you assigned to the application that you created and registered.
+ * **Issuer**: `https://sts.windows.net/{TenantID}/`. Make sure to include the trailing **/** at the end of the **Issuer** value.
-2. Create and download the profile by clicking on the **Download VPN client** link.
+1. Once you finish configuring settings, click **Save** at the top of the page.
-3. Extract the downloaded zip file.
+## Download the Azure VPN Client profile configuration package
-4. Browse to the unzipped “AzureVPN” folder.
+In this section, you generate and download the Azure VPN Client profile configuration package. This package contains the settings that you can use to configure the Azure VPN Client profile on client computers.
-5. Make a note of the location of the “azurevpnconfig.xml” file. The azurevpnconfig.xml contains the setting for the VPN connection and can be imported directly into the Azure VPN Client application. You can also distribute this file to all the users that need to connect via e-mail or other means. The user will need valid Azure AD credentials to connect successfully.
## Next steps
-In order to connect to your virtual network, you must create and configure a VPN client profile. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
+* To connect to your virtual network, you must configure the Azure VPN client on your client computers. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
+* For frequently asked questions, see the **Point-to-site** section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S).
vpn-gateway Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant.md
description: Learn how to set up an Azure AD tenant for P2S Azure AD authenticat
Previously updated : 09/06/2022 Last updated : 10/25/2022
This article helps you configure your AD tenant and P2S settings for Azure AD au
[!INCLUDE [OpenVPN note](../../includes/vpn-gateway-openvpn-auth-include.md)]
-## <a name="tenant"></a>1. Verify Azure AD tenant
+## <a name="tenant"></a> Azure AD tenant
-Verify that you have an Azure AD tenant. If you don't have an Azure AD tenant, you can create one using the steps in the [Create a new tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md) article. Note the following fields when creating your directory:
+The steps in this article require an Azure AD tenant. If you don't have an Azure AD tenant, you can create one using the steps in the [Create a new tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md) article. Note the following fields when creating your directory:
* Organizational name * Initial domain name
-## <a name="users"></a>2. Create Azure AD tenant users
+## Create Azure AD tenant users
1. Create two accounts in the newly created Azure AD tenant. For steps, see [Add or delete a new user](../active-directory/fundamentals/add-users-azure-active-directory.md).
Verify that you have an Azure AD tenant. If you don't have an Azure AD tenant, y
The global administrator account will be used to grant consent to the Azure VPN app registration. The user account can be used to test OpenVPN authentication. 1. Assign one of the accounts the **Global administrator** role. For steps, see [Assign administrator and non-administrator roles to users with Azure Active Directory](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
-## <a name="enable-authentication"></a>3. Enable Azure AD authentication on the VPN gateway
+## Authorize the Azure VPN application
-### Enable the application
+### Authorize the application
-### Configure point-to-site settings
+## <a name="enable-authentication"></a>Configure authentication for the gateway
1. Locate the tenant ID of the directory that you want to use for authentication. It's listed in the properties section of the Active Directory page. For help with finding your tenant ID, see [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
Verify that you have an Azure AD tenant. If you don't have an Azure AD tenant, y
> [!IMPORTANT] > The Basic SKU is not supported for OpenVPN.
-1. Enable Azure AD authentication on the VPN gateway by going to **Point-to-site configuration** and picking **OpenVPN (SSL)** as the **Tunnel type**. Select **Azure Active Directory** as the **Authentication type**, then fill in the information under the **Azure Active Directory** section. Replace {AzureAD TenantID} with your tenant ID.
+1. Go to the virtual network gateway. In the left pane, click **Point-to-site configuration**.
- * **Tenant:** TenantID for the Azure AD tenant
+ :::image type="content" source="./media/openvpn-create-azure-ad-tenant/configuration.png" alt-text="Screenshot showing settings for Tunnel type, Authentication type, and Azure Active Directory settings.":::
- * Enter `https://login.microsoftonline.com/{AzureAD TenantID}/` for Azure Public AD
- * Enter `https://login.microsoftonline.us/{AzureAD TenantID}/` for Azure Government AD
- * Enter `https://login-us.microsoftonline.de/{AzureAD TenantID}/` for Azure Germany AD
- * Enter `https://login.chinacloudapi.cn/{AzureAD TenantID}/` for China 21Vianet AD
-
- * **Audience:** Application ID of the "Azure VPN" Azure AD Enterprise App
+ Configure the following values:
- * Enter 41b23e61-6c1e-4545-b367-cd054e0ed4b4 for Azure Public
- * Enter 51bb15d4-3a4f-4ebf-9dca-40096fe32426 for Azure Government
- * Enter 538ee9e6-310a-468d-afef-ea97365856a9 for Azure Germany
- * Enter 49f817b6-84ae-4cc0-928c-73f27289b3aa for Azure China 21Vianet
+ * **Address pool**: client address pool
+ * **Tunnel type:** OpenVPN (SSL)
+ * **Authentication type**: Azure Active Directory
+ For **Azure Active Directory** values, use the following guidelines for **Tenant**, **Audience**, and **Issuer** values. Replace {AzureAD TenantID} with your tenant ID.
- * **Issuer**: URL of the Secure Token Service `https://sts.windows.net/{AzureAD TenantID}/`
+ * **Tenant:** The tenant URL for your Azure AD tenant. Enter the value that corresponds to your cloud environment, and make sure the tenant URL doesn't have a trailing `/` at the end.
+ * Azure Public AD: `https://login.microsoftonline.com/{AzureAD TenantID}`
+ * Azure Government AD: `https://login.microsoftonline.us/{AzureAD TenantID}`
+ * Azure Germany AD: `https://login-us.microsoftonline.de/{AzureAD TenantID}`
+ * China 21Vianet AD: `https://login.chinacloudapi.cn/{AzureAD TenantID}`
- :::image type="content" source="./media/openvpn-create-azure-ad-tenant/configuration.png" alt-text="Screenshot showing settings for Tunnel type, Authentication type, and Azure Active Directory settings.":::
+ * **Audience**: The Application ID of the "Azure VPN" Azure AD Enterprise App.
- > [!NOTE]
- > Make sure you include a trailing slash at the end of the **Issuer** value. Otherwise, the connection may fail.
- >
+ * Azure Public: `41b23e61-6c1e-4545-b367-cd054e0ed4b4`
+ * Azure Government: `51bb15d4-3a4f-4ebf-9dca-40096fe32426`
+ * Azure Germany: `538ee9e6-310a-468d-afef-ea97365856a9`
+ * Azure China 21Vianet: `49f817b6-84ae-4cc0-928c-73f27289b3aa`
-1. Save your changes.
+ * **Issuer**: URL of the Secure Token Service. Include a trailing slash at the end of the **Issuer** value. Otherwise, the connection may fail.
-1. At the top of the page, click **Download VPN client**. It takes a few minutes for the client configuration package to generate.
+ * `https://sts.windows.net/{AzureAD TenantID}/`
-1. Your browser indicates that a client configuration zip file is available. It's named the same name as your gateway.
+1. Once you finish configuring settings, click **Save** at the top of the page.
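+
+If you manage the gateway from code, the following sketch applies the same point-to-site values with the Azure SDK for Python. It assumes a recent `azure-mgmt-network` and `azure-identity` install; the subscription, resource group, gateway name, and address pool are placeholders, and the portal steps above remain the documented path.
+
+```python
+# Hedged sketch: set P2S Azure AD settings on an existing VPN gateway.
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.network import NetworkManagementClient
+from azure.mgmt.network.models import AddressSpace, VpnClientConfiguration
+
+subscription_id = "<subscription-id>"
+tenant_id = "<AzureAD TenantID>"
+client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)
+
+gw = client.virtual_network_gateways.get("<resource-group>", "<gateway-name>")
+gw.vpn_client_configuration = VpnClientConfiguration(
+    vpn_client_address_pool=AddressSpace(address_prefixes=["172.16.201.0/24"]),
+    vpn_client_protocols=["OpenVPN"],
+    vpn_authentication_types=["AAD"],
+    aad_tenant=f"https://login.microsoftonline.com/{tenant_id}",   # no trailing slash
+    aad_audience="41b23e61-6c1e-4545-b367-cd054e0ed4b4",           # "Azure VPN" app, Azure Public
+    aad_issuer=f"https://sts.windows.net/{tenant_id}/",            # keep the trailing slash
+)
+client.virtual_network_gateways.begin_create_or_update(
+    "<resource-group>", "<gateway-name>", gw
+).result()
+```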
-1. Extract the downloaded zip file.
+## Download the Azure VPN Client profile configuration package
-1. Browse to the unzipped “AzureVPN” folder.
+In this section, you generate and download the Azure VPN Client profile configuration package. This package contains the settings that you can use to configure the Azure VPN Client profile on client computers.
-1. Make a note of the location of the “azurevpnconfig.xml” file. The azurevpnconfig.xml contains the setting for the VPN connection. You can also distribute this file to all the users that need to connect via e-mail or other means. The user will need valid Azure AD credentials to connect successfully. For more information, see [Azure VPN client profile config files for Azure AD authentication](about-vpn-profile-download.md).
## Next steps
-[Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
+* To connect to your virtual network, you must configure the Azure VPN client on your client computers. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
+* For frequently asked questions, see the **Point-to-site** section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S).
web-application-firewall Cdn Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/cdn/cdn-overview.md
Custom rules can have match rules and rate control rules.
You can configure the following custom match rules: -- *IP allowlist and blocklist*: You can control access to your web applications based on a list of client IP addresses or IP address ranges. Both IPv4 and IPv6 address types are supported. This list can be configured to either block or allow those requests where the source IP matches an IP in the list.
+- *IP allowlist and blocklist*: You can control access to your web applications based on a list of client IP addresses or IP address ranges. Both IPv4 and IPv6 address types are supported. IP list rules use the RemoteAddress IP contained in the `X-Forwarded-For` request header, not the SocketAddress that the WAF sees. IP lists can be configured to either block or allow requests where the RemoteAddress IP matches an IP in the list. If you need to block requests based on the source IP address that the WAF sees, for example the proxy server address when the user is behind a proxy, use the Azure Front Door Standard or Premium tier. For more information, see [Configure an IP restriction rule with a Web Application Firewall for Azure Front Door](https://learn.microsoft.com/azure/web-application-firewall/afds/waf-front-door-configure-ip-restriction). A simplified sketch of this matching behavior follows this list.
- *Geographic based access control*: You can control access to your web applications based on the country code that's associated with a client's IP address.
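+
+The following simplified Python sketch (an illustration only, not Azure code) shows the matching behavior described in the IP allowlist and blocklist rule above: the decision is made on the RemoteAddress parsed from `X-Forwarded-For`, not on the socket address of the connection that reaches the WAF.
+
+```python
+# Simplified illustration: an IP block rule evaluates the RemoteAddress taken
+# from X-Forwarded-For, not the socket address of the connection.
+from ipaddress import ip_address, ip_network
+
+BLOCK_LIST = [ip_network("203.0.113.0/24"), ip_network("2001:db8::/32")]
+
+def client_ip_from_forwarded_for(x_forwarded_for: str) -> str:
+    # The first entry in X-Forwarded-For is the original client (RemoteAddress).
+    return x_forwarded_for.split(",")[0].strip()
+
+def is_blocked(x_forwarded_for: str) -> bool:
+    client = ip_address(client_ip_from_forwarded_for(x_forwarded_for))
+    return any(client in network for network in BLOCK_LIST)
+
+# The rule matches on 203.0.113.7 (RemoteAddress), not 198.51.100.1 (the proxy).
+print(is_blocked("203.0.113.7, 198.51.100.1"))   # True
+```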