Updates from: 03/10/2021 04:15:12
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Age Gating https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/age-gating.md
+
+ Title: Enable age gating in Azure Active Directory B2C | Microsoft Docs
+description: Learn about how to identify minors using your application.
+++++++ Last updated : 03/09/2021++
+zone_pivot_groups: b2c-policy-type
++
+# Enable age gating in Azure Active Directory B2C
++
+Age gating in Azure Active Directory B2C (Azure AD B2C) enables you to identify minors that want to use your application, with or without parental consent. You can choose to block minors from signing in to the application, or allow users to complete sign-in and provide the application with the minor status.
+
+>[!IMPORTANT]
+>This feature is in public preview. Do not use this feature for production applications.
+>
+
+When age gating is enabled for a user flow, users are asked for their date of birth and country of residence. If a user who hasn't previously entered this information signs in, they'll need to enter it the next time they sign in. The rules are applied every time a user signs in.
+
+![Screenshot of the age gating information gathering flow](./media/age-gating/age-gating-information-gathering.png)
+
+Azure AD B2C uses the information that the user enters to identify whether they're a minor. The **ageGroup** field is then updated in their account. The value can be `null`, `Undefined`, `Minor`, `Adult`, and `NotAdult`. The **ageGroup** and **consentProvidedForMinor** fields are then used to calculate the value of **legalAgeGroupClassification**.
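As a rough illustration of how these fields relate, the derivation could look like the following sketch. This is not the exact service logic (the service also factors in country-specific consent rules), and the classification value names are illustrative:

```python
def legal_age_group_classification(age_group, consent_provided_for_minor):
    """Sketch of deriving legalAgeGroupClassification from ageGroup and
    consentProvidedForMinor. Illustration only; the real Azure AD B2C
    computation also considers country-specific consent requirements."""
    if age_group in (None, "Undefined"):
        return None
    if age_group in ("Adult", "NotAdult"):
        return age_group
    if age_group == "Minor":
        if consent_provided_for_minor == "Granted":
            return "MinorWithParentalConsent"
        return "MinorWithoutParentalConsent"
    return None
```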
++
+## Prerequisites
++
+## Set up your tenant for age gating
+
+To use age gating in a user flow, you need to configure your tenant to have extra properties.
+
+1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu. Select the directory that contains your tenant.
+1. Select **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
+1. Select **Properties** for your tenant in the menu on the left.
+1. Under the **Age gating** section, select **Configure**.
+1. Wait for the operation to complete. Your tenant is then set up for age gating.
++
+## Enable age gating in your user flow
+
+After your tenant is set up for age gating, you can use this feature in [user flows](user-flow-versions.md) that support it. You enable age gating with the following steps:
+
+1. Create a user flow that has age gating enabled.
+1. After you create the user flow, select **Properties** in the menu.
+1. In the **Age gating** section, select **Enabled**.
+1. For **Sign-up or sign-in**, select how you want to manage users:
+ - Allow minors to access your application.
+ - Block only minors below the age of consent from accessing your application.
+ - Block all minors from accessing your application.
+1. For **On block**, select one of the following options:
+ - **Send a JSON back to the application** - this option sends a JSON response back to the application indicating that a minor was blocked.
+ - **Show an error page** - the user is shown a page informing them that they can't access the application.
+
+## Test your user flow
+
+1. To test your policy, select **Run user flow**.
+1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Select the **Run user flow** button.
+1. Sign in with a local or social account. Then select a country of residence and date of birth that simulate a minor.
+1. Repeat the test, and select a date of birth that simulates an adult.
+
+When you sign in as a minor, you should see the following error message: *Unfortunately, your sign on has been blocked. Privacy and online safety laws in your country prevent access to accounts belonging to children.*
+++
+## Enable age gating in your custom policy
+
+1. Get the example of an age gating policy on [GitHub](https://github.com/azure-ad-b2c/samples/tree/master/age-gating).
+1. In each file, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is *contosob2c*, all instances of `yourtenant.onmicrosoft.com` become `contosob2c.onmicrosoft.com`.
+1. Upload the policy files.
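The tenant-name replacement step can be scripted. A minimal sketch, assuming the policy files were downloaded to a local `age-gating` folder and your tenant is named *contosob2c* (folder and tenant names below are examples only):

```python
from pathlib import Path

def replace_tenant(policy_dir: Path, tenant: str, placeholder: str = "yourtenant") -> None:
    """Replace the placeholder tenant name in every policy file in a folder."""
    for xml_file in policy_dir.glob("*.xml"):
        text = xml_file.read_text(encoding="utf-8")
        xml_file.write_text(text.replace(placeholder, tenant), encoding="utf-8")

# Example: rewrite the downloaded sample policies for the contosob2c tenant.
replace_tenant(Path("age-gating"), "contosob2c")
```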
++
+## Next steps
+
+- Learn how to [Manage user access in Azure AD B2C](manage-user-access.md).
+
active-directory-b2c Basic Age Gating https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/basic-age-gating.md
- Title: Enable age gating in Azure Active Directory B2C | Microsoft Docs
-description: Learn about how to identify minors using your application.
------- Previously updated : 11/13/2018----
-# Enable Age Gating in Azure Active Directory B2C
-
->[!IMPORTANT]
->This feature is in public preview. Do not use feature for production applications.
->
-
-Age gating in Azure Active Directory B2C (Azure AD B2C) enables you to identify minors that want to use your application. You can choose to block the minor from signing into the application. Users can also go back to the application and identify their age group and their parental consent status. Azure AD B2C can block minors without parental consent. Azure AD B2C can also be set up to allow the application to decide what to do with minors.
-
-After you enable age gating in your [user flow](user-flow-overview.md), users are asked when they were born and what country/region they live in. If a user signs in that hasn't previously entered the information, they'll need to enter it the next time they sign in. The rules are applied every time a user signs in.
-
-Azure AD B2C uses the information that the user enters to identify whether they're a minor. The **ageGroup** field is then updated in their account. The value can be `null`, `Undefined`, `Minor`, `Adult`, and `NotAdult`. The **ageGroup** and **consentProvidedForMinor** fields are then used to calculate the value of **legalAgeGroupClassification**.
-
-Age gating involves two age values: the age that someone is no longer considered a minor, and the age at which a minor must have parental consent. The following table lists the age rules that are used for defining a minor and a minor requiring consent.
-
-| Country/Region | Country/Region name | Minor consent age | Minor age |
-| -- | - | -- | |
-| Default | None | None | 18 |
-| AE | United Arab Emirates | None | 21 |
-| AT | Austria | 14 | 18 |
-| BE | Belgium | 14 | 18 |
-| BG | Bulgaria | 16 | 18 |
-| BH | Bahrain | None | 21 |
-| CM | Cameroon | None | 21 |
-| CY | Cyprus | 16 | 18 |
-| CZ | Czech Republic | 16 | 18 |
-| DE | Germany | 16 | 18 |
-| DK | Denmark | 16 | 18 |
-| EE | Estonia | 16 | 18 |
-| EG | Egypt | None | 21 |
-| ES | Spain | 13 | 18 |
-| FR | France | 16 | 18 |
-| GB | United Kingdom | 13 | 18 |
-| GR | Greece | 16 | 18 |
-| HR | Croatia | 16 | 18 |
-| HU | Hungary | 16 | 18 |
-| IE | Ireland | 13 | 18 |
-| IT | Italy | 16 | 18 |
-| KR | Korea, Republic of | 14 | 18 |
-| LT | Lithuania | 16 | 18 |
-| LU | Luxembourg | 16 | 18 |
-| LV | Latvia | 16 | 18 |
-| MT | Malta | 16 | 18 |
-| NA | Namibia | None | 21 |
-| NL | Netherlands | 16 | 18 |
-| PL | Poland | 13 | 18 |
-| PT | Portugal | 16 | 18 |
-| RO | Romania | 16 | 18 |
-| SE | Sweden | 13 | 18 |
-| SG | Singapore | None | 21 |
-| SI | Slovenia | 16 | 18 |
-| SK | Slovakia | 16 | 18 |
-| TD | Chad | None | 21 |
-| TH | Thailand | None | 20 |
-| TW | Taiwan | None | 20 |
-| US | United States | 13 | 18 |
-
-## Age gating options
-
-### Allowing minors without parental consent
-
-For user flows that allow either sign-up, sign-in, or both, you can choose to allow minors without consent into your application. Minors without parental consent are allowed to sign in or sign up as normal and Azure AD B2C issues an ID token with the **legalAgeGroupClassification** claim. This claim defines the experience that users have, such as collecting parental consent and updating the **consentProvidedForMinor** field.
-
-### Blocking minors without parental consent
-
-For user flows that allow either sign-up, sign-in or both, you can choose to block minors without consent from the application. The following options are available for handling blocked users in Azure AD B2C:
-
-- Send a JSON back to the application - this option sends a response back to the application that a minor was blocked.
-- Show an error page - the user is shown a page informing them that they can't access the application.
-
-## Set up your tenant for age gating
-
-To use age gating in a user flow, you need to configure your tenant to have additional properties.
-
-1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu. Select the directory that contains your tenant.
-2. Select **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
-3. Select **Properties** for your tenant in the menu on the left.
-2. Under the **Age gating** section, click on **Configure**.
-3. Wait for the operation to complete and your tenant will be set up for age gating.
-
-## Enable age gating in your user flow
-
-After your tenant is set up to use age gating, you can then use this feature in [user flows](user-flow-versions.md) where it's enabled. You enable age gating with the following steps:
-
-1. Create a user flow that has age gating enabled.
-2. After you create the user flow, select **Properties** in the menu.
-3. In the **Age gating** section, select **Enabled**.
-4. You then decide how you want to manage users that identify as minors. For **Sign-up or sign-in**, you select `Allow minors to access your application` or `Block minors from accessing your application`. If blocking minors is selected, you select `Send a JSON back to the application` or `Show an error message`.
----
active-directory-b2c Custom Policy Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-get-started.md
- [Register your application](tutorial-register-applications.md) in the tenant that you created so that it can communicate with Azure AD B2C.
- Complete the steps in [Set up sign-up and sign-in with a Facebook account](identity-provider-facebook.md) to configure a Facebook application. Although a Facebook application is not required for using custom policies, it's used in this walkthrough to demonstrate enabling social login in a custom policy.
+> [!TIP]
+> This article explains how to set up your tenant manually. You can automate the entire process by deploying the Azure AD B2C [SocialAndLocalAccountsWithMFA starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack), which provides sign-up and sign-in, password reset, and profile edit journeys. To automate the walkthrough below, visit the [IEF Setup App](https://aka.ms/iefsetup) and follow the instructions.
++ ## Add signing and encryption keys 1. Sign in to the [Azure portal](https://portal.azure.com).
active-directory-b2c Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/faq.md
Previously updated : 10/14/2020 Last updated : 03/08/2021
The email signature contains the Azure AD B2C tenant's name that you provided wh
1. Change the **Name** field.
1. Click **Save** at the top of the page.
-Currently there is no way to change the "From:" field on the email.
+Currently you cannot change the "From:" field on the email.
+
+> [!TIP]
+> With Azure AD B2C [custom policy](custom-policy-overview.md), you can customize the email Azure AD B2C sends to users, including the "From:" field on the email. The custom email verification requires the use of a third-party email provider like [Mailjet](custom-email-mailjet.md), [SendGrid](custom-email-sendgrid.md), or [SparkPost](https://sparkpost.com).
### How can I migrate my existing user names, passwords, and profiles from my database to Azure AD B2C?
You can use our new unified **App registrations** experience or our legacy **Ap
1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD B2C tenant.
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**.
1. Delete all **User flows (policies)** in your Azure AD B2C tenant.
+1. Delete all **Identity Providers** in your Azure AD B2C tenant.
1. Select **App registrations**, then select the **All applications** tab.
1. Delete all applications that you registered.
1. Delete the **b2c-extensions-app**.
1. Under **Manage**, select **Users**.
1. Select each user in turn (exclude the *Subscription Administrator* user you are currently signed in as). Select **Delete** at the bottom of the page and select **Yes** when prompted.
1. Select **Azure Active Directory** on the left-hand menu.
-1. Under **Manage**, select **User settings**.
1. Under **Manage**, select **Properties**.
1. Under **Access management for Azure resources**, select **Yes**, and then select **Save**.
1. Sign out of the Azure portal and then sign back in to refresh your access.
No, Azure AD B2C is a pay-as-you-go Azure service and is not part of Enterprise
### How do I report issues with Azure AD B2C?
-See [File support requests for Azure Active Directory B2C](support-options.md).
+See [File support requests for Azure Active Directory B2C](support-options.md).
active-directory-b2c Identity Provider Apple Id https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-apple-id.md
Previously updated : 03/08/2021 Last updated : 03/09/2021
You can define an Apple ID as a claims provider by adding it to the **ClaimsProv
<Item Key="response_types">code</Item> <Item Key="external_user_identity_claim_id">sub</Item> <Item Key="response_mode">form_post</Item>
- <Item Key="ReadBodyClaimsOnIdpRedirect">user.name.firstName user.name.lastName user.email</Item>
+ <Item Key="ReadBodyClaimsOnIdpRedirect">user.firstName user.lastName user.email</Item>
<Item Key="client_id">You Apple ID</Item> <Item Key="UsePolicyInRedirectUri">false</Item> </Metadata>
You can define an Apple ID as a claims provider by adding it to the **ClaimsProv
<OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" /> <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="https://appleid.apple.com" AlwaysUseDefaultValue="true" /> <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
- <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="user.name.firstName"/>
- <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="user.name.lastName"/>
+ <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="user.firstName"/>
+ <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="user.lastName"/>
<OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="user.email"/> </OutputClaims> <OutputClaimsTransformations>
active-directory-b2c Manage User Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/manage-user-access.md
Previously updated : 10/15/2020 Last updated : 03/09/2021
If an application has reliably gathered DOB or country/region data by other meth
- If a user is known to be an adult, update the directory attribute **ageGroup** with a value of **Adult**.
- If a user is known to be a minor, update the directory attribute **ageGroup** with a value of **Minor** and set **consentProvidedForMinor**, as appropriate.
-For more information about gathering DOB data, see [Use age gating in Azure AD B2C](basic-age-gating.md).
+## Minor calculation rules
+
+Age gating involves two age values: the age at which someone is no longer considered a minor, and the age at which a minor must have parental consent. The following table lists the age rules that are used for defining a minor and a minor requiring consent.
+
+| Country/Region | Country/Region name | Minor consent age | Minor age |
+| -- | - | -- | |
+| Default | None | None | 18 |
+| AE | United Arab Emirates | None | 21 |
+| AT | Austria | 14 | 18 |
+| BE | Belgium | 14 | 18 |
+| BG | Bulgaria | 16 | 18 |
+| BH | Bahrain | None | 21 |
+| CM | Cameroon | None | 21 |
+| CY | Cyprus | 16 | 18 |
+| CZ | Czech Republic | 16 | 18 |
+| DE | Germany | 16 | 18 |
+| DK | Denmark | 16 | 18 |
+| EE | Estonia | 16 | 18 |
+| EG | Egypt | None | 21 |
+| ES | Spain | 13 | 18 |
+| FR | France | 16 | 18 |
+| GB | United Kingdom | 13 | 18 |
+| GR | Greece | 16 | 18 |
+| HR | Croatia | 16 | 18 |
+| HU | Hungary | 16 | 18 |
+| IE | Ireland | 13 | 18 |
+| IT | Italy | 16 | 18 |
+| KR | Korea, Republic of | 14 | 18 |
+| LT | Lithuania | 16 | 18 |
+| LU | Luxembourg | 16 | 18 |
+| LV | Latvia | 16 | 18 |
+| MT | Malta | 16 | 18 |
+| NA | Namibia | None | 21 |
+| NL | Netherlands | 16 | 18 |
+| PL | Poland | 13 | 18 |
+| PT | Portugal | 16 | 18 |
+| RO | Romania | 16 | 18 |
+| SE | Sweden | 13 | 18 |
+| SG | Singapore | None | 21 |
+| SI | Slovenia | 16 | 18 |
+| SK | Slovakia | 16 | 18 |
+| TD | Chad | None | 21 |
+| TH | Thailand | None | 20 |
+| TW | Taiwan | None | 20 |
+| US | United States | 13 | 18 |
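The rules above can be applied in code. The following sketch uses only a few rows of the table (the category names and row selection are illustrative, not the service's internal values):

```python
from datetime import date

# (minor consent age, minor age) per country code; None means no consent age.
# A few rows from the table above; "Default" applies to unlisted countries.
AGE_RULES = {
    "Default": (None, 18),
    "US": (13, 18),
    "DE": (16, 18),
    "SG": (None, 21),
}

def classify(birth_date: date, country: str, today: date) -> str:
    """Classify a user against the per-country age rules."""
    consent_age, minor_age = AGE_RULES.get(country, AGE_RULES["Default"])
    # Age in whole years, accounting for whether the birthday has passed.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age >= minor_age:
        return "Adult"
    if consent_age is not None and age < consent_age:
        return "MinorRequiringConsent"
    return "Minor"

# A 12-year-old in the United States is below the consent age of 13.
print(classify(date(2009, 1, 1), "US", date(2021, 3, 9)))  # prints MinorRequiringConsent
```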
++ ## Capture terms of use agreement
The following is an example of a version-based terms of use consent in a claim.
## Next steps
+- [Enable Age Gating in Azure AD B2C](age-gating.md).
- To learn how to delete and export user data, see [Manage user data](manage-user-data.md).
- For an example custom policy that implements a terms of use prompt, see [A B2C IEF Custom Policy - Sign Up and Sign In with 'Terms of Use' prompt](https://github.com/azure-ad-b2c/samples/tree/master/policies/sign-in-sign-up-versioned-tou).
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/page-layout.md
Previously updated : 03/04/2021 Last updated : 03/09/2021
Page layout packages are periodically updated to include fixes and improvements
## Unified sign-in sign-up page with password reset link (unifiedssp)
+> [!TIP]
+> If you localize your page to support multiple locales or languages in a user flow, the [localization IDs](localization-string-ids.md) article provides the list of localization IDs that you can use for the page version you select.
+ **2.1.2**
- Fixed the localization encoding issue for languages such as Spanish and French.
- Allowing the "forgot password" link to be used as a claims exchange. For more information, see [Self-service password reset](add-password-reset-policy.md#self-service-password-reset-recommended).
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-arkose-labs.md
# Tutorial: Configure Arkose Labs with Azure Active Directory B2C
-In this tutorial, learn how to integrate Azure Active Directory (AD) B2C authentication with [Arkose Labs](https://www.arkoselabs.com/). Arkose Labs help organizations against bot attacks, account takeover attacks, and fraudulent account openings.
+In this sample tutorial, learn how to integrate Azure Active Directory (AD) B2C authentication with [Arkose Labs](https://www.arkoselabs.com/). Arkose Labs helps protect organizations against bot attacks, account takeover attacks, and fraudulent account openings.
## Prerequisites
See [this article](https://docs.microsoft.com/azure/azure-functions/functions-de
[Create an API connector](https://docs.microsoft.com/azure/active-directory-b2c/add-api-connector) and enable it for your user flow. Your API connector configuration should look like:
-![Image shows search by app id](media/partner-arkose-labs/configure-api-connector.png)
+![Image shows how to configure the API connector](media/partner-arkose-labs/configure-api-connector.png)
- **Endpoint URL** - is the Function URL you copied earlier while you deployed Azure Function.
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/tutorial-create-tenant.md
Before your applications can interact with Azure Active Directory B2C (Azure AD
> [!NOTE] > You can create up to 20 tenants per subscription. This limit helps protect against threats to your resources, such as denial-of-service attacks, and is enforced in both the Azure portal and the underlying tenant creation API. If you need to create more than 20 tenants, please contact [Microsoft Support](support-options.md).
+>
+> If you want to reuse a tenant name that you previously tried to delete, but you see the error "Already in use by another directory" when you enter the domain name, you'll need to [follow these steps to fully delete the tenant first](https://docs.microsoft.com/azure/active-directory-b2c/faq?tabs=app-reg-ga#how-do-i-delete-my-azure-ad-b2c-tenant). A role of at least Subscription Administrator is required. After deleting the tenant, you might also need to sign out and sign back in before you can reuse the domain name.
In this article, you learn how to:
In this article, you learned how to:
Next, learn how to register a web application in your new tenant. > [!div class="nextstepaction"]
-> [Register your applications >](tutorial-register-applications.md)
+> [Register your applications >](tutorial-register-applications.md)
active-directory-b2c User Flow Versions Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-flow-versions-legacy.md
In the table below, unless a user flow is identified as **Recommended**, it is c
| User flow | Recommended | Description | | | -- | -- |
-| Password reset v2 | No | Enables a user to choose a new password after verifying their email. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>Token compatibility settings</li><li>[Age gating](basic-age-gating.md)</li><li>[password complexity requirements](password-complexity.md)</li></ul> |
+| Password reset v2 | No | Enables a user to choose a new password after verifying their email. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>Token compatibility settings</li><li>[Age gating](age-gating.md)</li><li>[password complexity requirements](password-complexity.md)</li></ul> |
| Profile editing v2 | Yes | Enables a user to configure their user attributes. Using this user flow, you can configure: <ul><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li></ul> |
-| Sign in v2 | No | Enables a user to sign in to their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>[Age gating](basic-age-gating.md)</li><li>Sign-in page customization</li></ul> |
-| Sign up v2 | No | Enables a user to create an account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>[Age gating](basic-age-gating.md)</li><li>[Password complexity requirements](password-complexity.md)</li></ul> |
-| Sign up and sign in v2 | No | Enables a user to create an account or sign in their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Age gating](basic-age-gating.md)</li><li>[Password complexity requirements](password-complexity.md)</li></ul> |
+| Sign in v2 | No | Enables a user to sign in to their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>[Age gating](age-gating.md)</li><li>Sign-in page customization</li></ul> |
+| Sign up v2 | No | Enables a user to create an account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>[Age gating](age-gating.md)</li><li>[Password complexity requirements](password-complexity.md)</li></ul> |
+| Sign up and sign in v2 | No | Enables a user to create an account or sign in their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Age gating](age-gating.md)</li><li>[Password complexity requirements](password-complexity.md)</li></ul> |
active-directory-b2c User Flow Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-flow-versions.md
Recommended user flows are preview versions that combine new features with legac
| User flow | Description | | | -- |
-| Password reset (preview) | Enables a user to choose a new password after verifying their email. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>Token compatibility settings</li><li>[Age gating](basic-age-gating.md)</li><li>[password complexity requirements](password-complexity.md)</li></ul> |
+| Password reset (preview) | Enables a user to choose a new password after verifying their email. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>Token compatibility settings</li><li>[Age gating](age-gating.md)</li><li>[password complexity requirements](password-complexity.md)</li></ul> |
| Profile editing (preview) | Enables a user to configure their user attributes. Using this user flow, you can configure: <ul><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li></ul> |
-| Sign in (preview) | Enables a user to sign in to their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>[Age gating](basic-age-gating.md)</li><li>Sign-in page customization</li></ul> |
-| Sign up (preview) | Enables a user to create an account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>[Age gating](basic-age-gating.md)</li><li>[Password complexity requirements](password-complexity.md)</li></ul> |
-| Sign up and sign in (preview) | Enables a user to create an account or sign in their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Age gating](basic-age-gating.md)</li><li>[Password complexity requirements](password-complexity.md)</li></ul> |
+| Sign in (preview) | Enables a user to sign in to their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>[Age gating](age-gating.md)</li><li>Sign-in page customization</li></ul> |
+| Sign up (preview) | Enables a user to create an account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>[Age gating](age-gating.md)</li><li>[Password complexity requirements](password-complexity.md)</li></ul> |
+| Sign up and sign in (preview) | Enables a user to create an account or sign in their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Age gating](age-gating.md)</li><li>[Password complexity requirements](password-complexity.md)</li></ul> |
## Standard user flows
active-directory-b2c User Profile Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-profile-attributes.md
Previously updated : 03/04/2021 Last updated : 03/09/2021
The table below lists the [user resource type](/graph/api/resources/user) attrib
|passwordPolicies |String|Policy of the password. It's a string consisting of different policy names separated by commas. For example, "DisablePasswordExpiration, DisableStrongPassword".|No|No|Persisted, Output|
|physicalDeliveryOfficeName (officeLocation)|String|The office location in the user's place of business. Max length 128.|Yes|No|Persisted, Output|
|postalCode |String|The postal code for the user's postal address. The postal code is specific to the user's country/region. In the United States of America, this attribute contains the ZIP code. Max length 40.|Yes|No|Persisted, Output|
-|preferredLanguage |String|The preferred language for the user. Should follow ISO 639-1 Code. Example: "en-US".|No|No|Persisted, Output|
-|refreshTokensValidFromDateTime|DateTime|Any refresh tokens issued before this time are invalid, and applications will get an error when using an invalid refresh token to acquire a new access token. If this happens, the application will need to acquire a new refresh token by making a request to the authorize endpoint. Read-only.|No|No|Output|
+|preferredLanguage |String|The preferred language for the user. The preferred language format is based on RFC 4646. The name is a combination of an ISO 639 two-letter lowercase culture code associated with the language, and an ISO 3166 two-letter uppercase subculture code associated with the country or region. Example: "en-US", or "es-ES".|No|No|Persisted, Output|
+|refreshTokensValidFromDateTime (signInSessionsValidFromDateTime)|DateTime|Any refresh tokens issued before this time are invalid, and applications will get an error when using an invalid refresh token to acquire a new access token. If this happens, the application will need to acquire a new refresh token by making a request to the authorize endpoint. Read-only.|No|No|Output|
|signInNames ([Identities](#identities-attribute)) |String|The unique sign-in name of the local account user of any type in the directory. Use this attribute to get a user with sign-in value without specifying the local account type.|No|No|Input|
|signInNames.userName ([Identities](#identities-attribute)) |String|The unique username of the local account user in the directory. Use this attribute to create or get a user with a specific sign-in username. Specifying this in PersistedClaims alone during Patch operation will remove other types of signInNames. If you would like to add a new type of signInNames, you also need to persist existing signInNames.|No|No|Input, Persisted, Output|
|signInNames.phoneNumber ([Identities](#identities-attribute)) |String|The unique phone number of the local account user in the directory. Use this attribute to create or get a user with a specific sign-in phone number. Specifying this attribute in PersistedClaims alone during Patch operation will remove other types of signInNames. If you would like to add a new type of signInNames, you also need to persist existing signInNames.|No|No|Input, Persisted, Output|
active-directory-b2c Userinfo Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/userinfo-endpoint.md
Previously updated : 03/04/2021 Last updated : 03/09/2021
The [UserJourney](userjourneys.md) element defines the path that the user takes
To include the UserInfo endpoint in the relying party application, add an [Endpoint](relyingparty.md#endpoints) element to the *SocialAndLocalAccounts/SignUpOrSignIn.xml* file.

```xml
-<Endpoints>
- <Endpoint Id="UserInfo" UserJourneyReferenceId="UserInfoJourney" />
-</Endpoints>
+<!--
+<RelyingParty> -->
+ <Endpoints>
+ <Endpoint Id="UserInfo" UserJourneyReferenceId="UserInfoJourney" />
+ </Endpoints>
+<!--
+</RelyingParty> -->
```

The completed relying party element will be as follows:
https://yourtenant.b2clogin.com/yourtenant.onmicrosoft.com/policy-name/v2.0/.wel
```http
GET /yourtenant.onmicrosoft.com/b2c_1a_signup_signin/openid/v2.0/userinfo
Host: b2cninja.b2clogin.com
-Authorization: Bearer <your ID token>
+Authorization: Bearer <your access token>
```

A successful response would look like:
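As a rough illustration of the request above, the helper below builds the UserInfo URL and bearer header. This is a hypothetical sketch (the tenant name, policy name, and token are placeholders), not part of the policy or the B2C service itself:

```python
# Hypothetical sketch: construct the UserInfo request shown above.
# Tenant, policy, and token values are placeholder assumptions.

def build_userinfo_request(tenant: str, policy: str, access_token: str) -> dict:
    """Return the URL and headers for a B2C UserInfo call."""
    url = (
        f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/"
        f"{policy}/openid/v2.0/userinfo"
    )
    # The endpoint expects the access token (not the ID token) as a bearer token.
    headers = {"Authorization": f"Bearer {access_token}"}
    return {"url": url, "headers": headers}

request = build_userinfo_request("yourtenant", "b2c_1a_signup_signin", "<your access token>")
print(request["url"])
```

A real client would send this with any HTTP library; the sketch only shows how the pieces of the endpoint path fit together.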
active-directory-domain-services Tutorial Configure Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/tutorial-configure-ldaps.md
Before you can use the digital certificate created in the previous step with you
On the **Security** page, choose the option for **Password** to protect the *.PFX* certificate file. The encryption algorithm must be *TripleDES-SHA1*. Enter and confirm a password, then select **Next**. This password is used in the next section to enable secure LDAP for your managed domain.
- If you export using the [PowerShell export-pfxcertificate cmdlet](https://docs.microsoft.com/powershell/module/pkiclient/export-pfxcertificate?view=win10-ps), you need to pass the *-CryptoAlgorithmOption* flag using TripleDES_SHA1.
+ If you export using the [PowerShell export-pfxcertificate cmdlet](https://docs.microsoft.com/powershell/module/pkiclient/export-pfxcertificate), you need to pass the *-CryptoAlgorithmOption* flag using TripleDES_SHA1.
![Screenshot of how to encrypt the password](./media/tutorial-configure-ldaps/encrypt.png)
active-directory Application Provisioning Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-log-analytics.md
We're taking an open source and community-based approach to application provisio
- [Get started with queries in Azure Monitor logs](../../azure-monitor/logs/get-started-queries.md)
- [Create and manage alert groups in the Azure portal](../../azure-monitor/alerts/action-groups.md)
- [Install and use the log analytics views for Azure Active Directory](../reports-monitoring/howto-install-use-log-analytics-views.md)
-- [Provisioning logs API](/graph/api/resources/provisioningobjectsummary?preserve-view=true&view=graph-rest-beta.md)
+- [Provisioning logs API](/graph/api/resources/provisioningobjectsummary?preserve-view=true&view=graph-rest-beta)
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-sspr-deployment.md
> > **If you're an end user and need to get back into your account, go to [https://aka.ms/sspr](https://aka.ms/sspr)**.
-[Self-Service Password Reset (SSPR)](https://www.youtube.com/watch?v=tnb2Qf4hTP8) is an Azure Active Directory (AD) feature that enables users to reset their passwords without contacting IT staff for help. The users can quickly unblock themselves and continue working no matter where they are or time of day. By allowing the employees to unblock themselves, your organization can reduce the non-productive time and high support costs for most common password-related issues.
+[Self-Service Password Reset (SSPR)](https://www.youtube.com/watch?v=pS3XwfxJrMo) is an Azure Active Directory (AD) feature that enables users to reset their passwords without contacting IT staff for help. The users can quickly unblock themselves and continue working no matter where they are or time of day. By allowing the employees to unblock themselves, your organization can reduce the non-productive time and high support costs for most common password-related issues.
SSPR has the following key capabilities:
active-directory How To Attribute Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-attribute-mapping.md
Title: 'Azure AD Connect cloud sync attribute editor'
-description: This article describes how to use the attribute editor.
+ Title: 'Attribute mapping in Azure AD Connect cloud sync'
+description: This article describes how to use the cloud sync feature of Azure AD Connect to map attributes.
-# Azure AD Connect cloud sync attribute mapping
+# Attribute mapping in Azure AD Connect cloud sync
-Azure AD Connect cloud sync has introduced a new feature, that will allow you easily map attributes between your on-premises user/group objects and the objects in Azure AD. This feature has been added to the cloud sync configuration.
+You can use the cloud sync feature of Azure Active Directory (Azure AD) Connect to map attributes between your on-premises user or group objects and the objects in Azure AD. This capability has been added to the cloud sync configuration.
-You can customize the default attribute-mappings according to your business needs. So, you can change or delete existing attribute-mappings, or create new attribute-mappings. For a list of attributes that are synchronized see [attributes that are synchronized](../hybrid/reference-connect-sync-attributes-synchronized.md?context=azure%2factive-directory%2fcloud-provisioning%2fcontext%2fcp-context/hybrid/reference-connect-sync-attributes-synchronized.md).
+You can customize (change, delete, or create) the default attribute mappings according to your business needs. For a list of attributes that are synchronized, see [Attributes synchronized to Azure Active Directory](../hybrid/reference-connect-sync-attributes-synchronized.md?context=azure%2factive-directory%2fcloud-provisioning%2fcontext%2fcp-context/hybrid/reference-connect-sync-attributes-synchronized.md).
-## Understanding attribute-mapping types
-With attribute-mappings, you control how attributes are populated in Azure AD.
-There are four different mapping types supported:
+## Understand types of attribute mapping
+With attribute mapping, you control how attributes are populated in Azure AD. Azure AD supports four mapping types:
-- **Direct** – the target attribute is populated with the value of an attribute of the linked object in AD.
-- **Constant** – the target attribute is populated with a specific string you specified.
-- **Expression** - the target attribute is populated based on the result of a script-like expression.
- For more information, see [Writing Expressions for Attribute-Mappings](reference-expressions.md).
-- **None** - the target attribute is left unmodified. However, if the target attribute is ever empty, it's populated with the Default value that you specify.
+- **Direct**: The target attribute is populated with the value of an attribute of the linked object in Active Directory.
+- **Constant**: The target attribute is populated with a specific string that you specify.
+- **Expression**: The target attribute is populated based on the result of a script-like expression. For more information, see [Writing expressions for attribute mappings in Azure Active Directory](reference-expressions.md).
+- **None**: The target attribute is left unmodified. However, if the target attribute is ever empty, it's populated with the default value that you specify.
-Along with these four basic types, custom attribute-mappings support the concept of an optional **default** value assignment. The default value assignment ensures that a target attribute is populated with a value if there's not a value in Azure AD or on the target object. The most common configuration is to leave this blank.
+Along with these basic types, custom attribute mappings support the concept of an optional *default* value assignment. The default value assignment ensures that a target attribute is populated with a value if Azure AD or the target object doesn't have a value. The most common configuration is to leave this blank.
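The four mapping types and the optional default value can be sketched as follows. This is an illustrative model only (the function name and signature are invented for this example), not the Azure AD provisioning implementation:

```python
# Illustrative sketch of how the four mapping types and the optional
# default value could resolve a target attribute's value.

def resolve_mapping(mapping_type, source_value=None, constant=None,
                    expression=None, current_target=None, default=None):
    if mapping_type == "Direct":
        result = source_value              # copy the linked AD attribute
    elif mapping_type == "Constant":
        result = constant                  # fixed string you specify
    elif mapping_type == "Expression":
        result = expression(source_value)  # script-like transformation
    elif mapping_type == "None":
        result = current_target            # leave the target unmodified
    else:
        raise ValueError(f"Unknown mapping type: {mapping_type}")
    # Default value assignment: applied only when no value is available.
    return result if result not in (None, "") else default

print(resolve_mapping("Direct", source_value="Jane"))              # Jane
print(resolve_mapping("None", current_target=None, default="NA"))  # NA
```

The last line shows the default-value behavior: because the target had no value, the default is used instead.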
-## Understanding attribute-mapping properties
+## Understand properties of attribute mapping
-In the previous section, you were already introduced to the attribute-mapping type property.
-Along with this property, attribute-mappings also support the following attributes:
+Along with the type property, attribute mappings support the following attributes:
-- **Source attribute** - The user attribute from the source system (example: Active Directory).
-- **Target attribute** – The user attribute in the target system (example: Azure Active Directory).
-- **Default value if null (optional)** - The value that will be passed to the target system if the source attribute is null. This value will only be provisioned when a user is created. The "default value when null" will not be provisioned when updating an existing user.
-- **Apply this mapping**
- - **Always** – Apply this mapping on both user creation and update actions.
- - **Only during creation** - Apply this mapping only on user creation actions.
+- **Source attribute**: The user attribute from the source system (example: Active Directory).
+- **Target attribute**: The user attribute in the target system (example: Azure Active Directory).
+- **Default value if null (optional)**: The value that will be passed to the target system if the source attribute is null. This value will be provisioned only when a user is created. It won't be provisioned when you're updating an existing user.
+- **Apply this mapping**:
+ - **Always**: Apply this mapping on both user-creation and update actions.
+ - **Only during creation**: Apply this mapping only on user-creation actions.
> [!NOTE]
-> This document describes how to use the Azure portal to map attributes. For information on using Graph see [Transformations](how-to-transformation.md)
+> This article describes how to use the Azure portal to map attributes. For information on using Microsoft Graph, see [Transformations](how-to-transformation.md).
-## Using attribute mapping
+## Add an attribute mapping
-To use the new feature, follow the steps below.
+To use the new capability, follow these steps:
1. In the Azure portal, select **Azure Active Directory**.
2. Select **Azure AD Connect**.
3. Select **Manage cloud sync**.
- ![Manage provisioning](media/how-to-install/install-6.png)
+ ![Screenshot that shows the link for managing cloud sync.](media/how-to-install/install-6.png)
4. Under **Configuration**, select your configuration.
-5. Select **Click to edit mappings**. This will open the attribute mapping screen.
+5. Select **Click to edit mappings**. This link opens the **Attribute mappings** screen.
- ![Adding attributes](media/how-to-attribute-mapping/mapping-6.png)
+ ![Screenshot that shows the link for adding attributes.](media/how-to-attribute-mapping/mapping-6.png)
-6. Click **Add Attribute**.
+6. Select **Add attribute**.
- ![Mapping type](media/how-to-attribute-mapping/mapping-1.png)
+ ![Screenshot that shows the button for adding an attribute, along with lists of attributes and mapping types.](media/how-to-attribute-mapping/mapping-1.png)
-7. Select the **Mapping type**. In this example we use Expression.
-8. Enter the expression in the box. For this example we are using: `Replace([mail], "@contoso.com", , ,"", ,).`
-9. Enter the target attribute. In this example we use ExtensionAttribute15.
-10. Select when to apply this and then click **Apply**
+7. Select the mapping type. For this example, we're using **Expression**.
+8. Enter the expression in the box. For this example, we're using `Replace([mail], "@contoso.com", , ,"", ,)`.
+9. Enter the target attribute. For this example, we're using **ExtensionAttribute15**.
+10. Select when to apply this mapping, and then select **Apply**.
- ![Edit mappings](media/how-to-attribute-mapping/mapping-2a.png)
+ ![Screenshot that shows the filled-in boxes for creating an attribute mapping.](media/how-to-attribute-mapping/mapping-2a.png)
-11. Back on the attribute mapping screen you should see your new attribute mapping.
-12. Click **Save Schema**.
+11. Back on the **Attribute mappings** screen, you should see your new attribute mapping.
+12. Select **Save schema**.
- ![Save Schema](media/how-to-attribute-mapping/mapping-3.png)
+ ![Screenshot that shows the Save schema button.](media/how-to-attribute-mapping/mapping-3.png)
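The expression used in step 8 strips the `@contoso.com` suffix from the **mail** attribute before writing the result to the target attribute. As a rough Python illustration (not the provisioning expression language itself, and assuming `Replace` substitutes the matched string with the empty replacement value):

```python
def replace_suffix(mail: str) -> str:
    # Mirrors Replace([mail], "@contoso.com", , , "", , ):
    # every occurrence of "@contoso.com" in mail is replaced with "".
    return mail.replace("@contoso.com", "")

print(replace_suffix("jdoe@contoso.com"))   # jdoe
print(replace_suffix("jdoe@fabrikam.com"))  # jdoe@fabrikam.com (unchanged)
```

So a user whose mail is `jdoe@contoso.com` would get `jdoe` synchronized into the target attribute; mail values without that suffix pass through unchanged.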
## Test your attribute mapping
-To test your attribute mapping, you can use [on-demand provisioning](how-to-on-demand-provision.md). From the
+To test your attribute mapping, you can use [on-demand provisioning](how-to-on-demand-provision.md):
1. In the Azure portal, select **Azure Active Directory**.
2. Select **Azure AD Connect**.
3. Select **Manage provisioning**.
4. Under **Configuration**, select your configuration.
-5. Under **Validate** click the **Provision a user** button.
-6. On the on-demand provisioning screen. Enter the **distinguished name** of a user or group and click the **Provision** button.
-7. Once it completes, you should see a success screen and 4 green check boxes indicating it was successfully provisioned.
+5. Under **Validate**, select the **Provision a user** button.
+6. On the **Provision on demand** screen, enter the distinguished name of a user or group and select the **Provision** button.
- ![Success for provisioning](media/how-to-attribute-mapping/mapping-4.png)
+ The screen shows that the provisioning is in progress.
-8. Under **Perform Action** click **View details**. On the right, you should see the new attribute synchronized and the expression applied.
+ ![Screenshot that shows provisioning in progress.](media/how-to-attribute-mapping/mapping-4.png)
- ![Perform action](media/how-to-attribute-mapping/mapping-5.png)
+8. After provisioning finishes, a success screen appears with four green check marks.
-## Next Steps
+ Under **Perform action**, select **View details**. On the right, you should see the new attribute synchronized and the expression applied.
+
+ ![Screenshot that shows success and export details.](media/how-to-attribute-mapping/mapping-5.png)
+
+## Next steps
- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
-- [Writing Expressions for Attribute-Mappings](reference-expressions.md)
-- [Attributes that are synchronized](../hybrid/reference-connect-sync-attributes-synchronized.md?context=azure%2factive-directory%2fcloud-provisioning%2fcontext%2fcp-context/hybrid/reference-connect-sync-attributes-synchronized.md)
+- [Writing expressions for attribute mappings](reference-expressions.md)
+- [Attributes synchronized to Azure Active Directory](../hybrid/reference-connect-sync-attributes-synchronized.md?context=azure%2factive-directory%2fcloud-provisioning%2fcontext%2fcp-context/hybrid/reference-connect-sync-attributes-synchronized.md)
active-directory How To On Demand Provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-on-demand-provision.md
Title: 'Azure AD Connect cloud sync on-demand provisioning'
-description: This article describes the on-demand provisioning feature.
+ Title: 'On-demand provisioning in Azure AD Connect cloud sync'
+description: This article describes how to use the cloud sync feature of Azure AD Connect to test configuration changes.
-# Azure AD Connect cloud sync on-demand provisioning
+# On-demand provisioning in Azure AD Connect cloud sync
-Azure AD Connect cloud sync has introduced a new feature, that will allow you to test configuration changes, by applying these changes to a single user. You can use this to validate and verify that the changes made to the configuration were applied properly and are being correctly synchronized to Azure AD.
+You can use the cloud sync feature of Azure Active Directory (Azure AD) Connect to test configuration changes by applying these changes to a single user. This on-demand provisioning helps you validate and verify that the changes made to the configuration were applied properly and are being correctly synchronized to Azure AD.
> [!IMPORTANT]
-> When you use on-demand provisioning, the scoping filters are not applied to the user you selected. This means that you can use on-demand provisioning on users that are outside the OUs you have specified.
--
-## Using on-demand provisioning
-To use the new feature, follow the steps below.
+> When you use on-demand provisioning, the scoping filters are not applied to the user that you selected. You can use on-demand provisioning on users who are outside the organization units that you specified.
+## Validate a user
+To use on-demand provisioning, follow these steps:
1. In the Azure portal, select **Azure Active Directory**.
2. Select **Azure AD Connect**.
3. Select **Manage cloud sync**.
- ![Manage provisioning](media/how-to-install/install-6.png)
+ ![Screenshot that shows the link for managing cloud sync.](media/how-to-install/install-6.png)
4. Under **Configuration**, select your configuration.
-5. Under **Validate** click the **Provision a user** button.
+5. Under **Validate**, select the **Provision a user** button.
- ![Provision a user](media/how-to-on-demand-provision/on-demand-2.png)
+ ![Screenshot that shows the button for provisioning a user.](media/how-to-on-demand-provision/on-demand-2.png)
-6. On the on-demand provisioning screen. Enter the **distinguished name** of a user and click the **Provision** button.
+6. On the **Provision on demand** screen, enter the distinguished name of a user and select the **Provision** button.
- ![Provisioning on-demand](media/how-to-on-demand-provision/on-demand-3.png)
-7. Once it completes, you should see a success screen and 4 green check boxes indicating it was successfully provisioned. Any errors will appear to the left.
+ ![Screenshot that shows a username and a Provision button.](media/how-to-on-demand-provision/on-demand-3.png)
+7. After provisioning finishes, a success screen appears with four green check marks. Any errors appear to the left.
+
+ ![Screenshot that shows successful provisioning.](media/how-to-on-demand-provision/on-demand-4.png)
+
+## Get details about provisioning
+Now you can look at the user information and determine if the changes that you made in the configuration have been applied. The rest of this article describes the individual sections that appear in the details of a successfully synchronized user.
+
+### Import user
+The **Import user** section provides information on the user who was imported from Active Directory. This is what the user looks like before provisioning into Azure AD. Select the **View details** link to display this information.
- ![Success](media/how-to-on-demand-provision/on-demand-4.png)
+![Screenshot of the button for viewing details about an imported user.](media/how-to-on-demand-provision/on-demand-5.png)
-Now you can look at the user and determine if the changes you made in the configuration have been applied. The remainder of this document will describe the individual sections that are displayed in the details of a successfully synchronized user.
+By using this information, you can see the various attributes (and their values) that were imported. If you created a custom attribute mapping, you can see the value here.
-## Import User details
-This section provides information on the user that was imported from Active Directory. This is what the user looks like before it is provisioned into Azure AD. Click the **View details** link to display this information.
+![Screenshot that shows user details.](media/how-to-on-demand-provision/on-demand-6.png)
-![Import user](media/how-to-on-demand-provision/on-demand-5.png)
+### Determine if user is in scope
+The **Determine if user is in scope** section provides information on whether the user who was imported to Azure AD is in scope. Select the **View details** link to display this information.
-Using this information, you can see the various attributes, and their values, that were imported. If you have created a custom attribute mapping, you will be able to see the value here.
-![Import user details](media/how-to-on-demand-provision/on-demand-6.png)
+![Screenshot of the button for viewing details about user scope.](media/how-to-on-demand-provision/on-demand-7.png)
-## Determine if user is in scope details
-This section provides information on whether the user that was imported to Azure AD is in scope. Click the **View details** link to display this information.
+By using this information, you can see if the user is in scope.
-![User scope](media/how-to-on-demand-provision/on-demand-7.png)
+![Screenshot that shows user scope details.](media/how-to-on-demand-provision/on-demand-10a.png)
-Using this information, you can see additional information about the scope of your users.
+### Match user between source and target system
+The **Match user between source and target system** section provides information on whether the user already exists in Azure AD and whether a join should occur instead of provisioning a new user. Select the **View details** link to display this information.
-![User scope details](media/how-to-on-demand-provision/on-demand-10a.png)
+![Screenshot of the button for viewing details about a matched user.](media/how-to-on-demand-provision/on-demand-8.png)
-## Match user between source and target system details
-This section provides information on whether the user already exists in Azure AD and should a join occur instead of provisioning a new user. Click the **View details** link to display this information.
-![View details](media/how-to-on-demand-provision/on-demand-8.png)
+By using this information, you can see whether a match was found or if a new user is going to be created.
-Using this information, you can see whether a match was found or if a new user is going to be created.
+![Screenshot that shows user information.](media/how-to-on-demand-provision/on-demand-11.png)
-![User information](media/how-to-on-demand-provision/on-demand-11.png)
+The matching details show a message with one of the three following operations:
+- **Create**: A user is created in Azure AD.
+- **Update**: A user is updated based on a change made in the configuration.
+- **Delete**: A user is removed from Azure AD.
-The Matching details will show a message with one of the three following operations. They are:
-- Create - a user is created in Azure AD
-- Update - a user is updated based on a change made in the configuration
-- Delete - a user is removed from Azure AD.
+Depending on the type of operation that you've performed, the message will vary.
-Depending on the type of operation you have performed, the message will vary.
+### Perform action
+The **Perform action** section provides information on the user who was provisioned or exported into Azure AD after the configuration was applied. This is what the user looks like after provisioning into Azure AD. Select the **View details** link to display this information.
-## Perform action details
-This section provides information on the user that was provisioned or exported into Azure AD after the configuration is applied. This is what the user looks like once it is provisioned into Azure AD. Click the **View details** link to display this information.
-![Perform action details](media/how-to-on-demand-provision/on-demand-9.png)
+![Screenshot of the button for viewing details about a performed action.](media/how-to-on-demand-provision/on-demand-9.png)
-Using this information, you can see the values of the attributes after the configuration is applied. Do they look similar to what was imported or are the different? Does the configuration apply successful?
+By using this information, you can see the values of the attributes after the configuration was applied. Do they look similar to what was imported, or are they different? Was the configuration applied successfully?
-This will process allow you to trace the attribute transformation as it moves through the cloud and into your Azure AD tenant.
+This process enables you to trace the attribute transformation as it moves through the cloud and into your Azure AD tenant.
-![trace attribute](media/how-to-on-demand-provision/on-demand-12.png)
+![Screenshot that shows traced attribute details.](media/how-to-on-demand-provision/on-demand-12.png)
## Next steps

- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
-- [How to install Azure AD Connect cloud sync](how-to-install.md)
+- [Install Azure AD Connect cloud sync](how-to-install.md)
active-directory How To Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-troubleshoot.md
You might get an error message when you install the cloud provisioning agent.
This problem is typically caused by the agent being unable to execute the PowerShell registration scripts due to local PowerShell execution policies.
-To resolve this problem, change the PowerShell execution policies on the server. You need to have Machine and User policies set as *Undefined* or *RemoteSigned*. If they're set as *Unrestricted*, you'll see this error. For more information, see [PowerShell execution policies](/powershell/module/microsoft.powershell.core/about/about_execution_policies?view=powershell-6).
+To resolve this problem, change the PowerShell execution policies on the server. You need to have Machine and User policies set as *Undefined* or *RemoteSigned*. If they're set as *Unrestricted*, you'll see this error. For more information, see [PowerShell execution policies](/powershell/module/microsoft.powershell.core/about/about_execution_policies).
### Log files
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
Sign-in frequency defines the time period before a user is asked to sign in agai
The Azure Active Directory (Azure AD) default configuration for user sign-in frequency is a rolling window of 90 days. Asking users for credentials often seems like a sensible thing to do, but it can backfire: users that are trained to enter their credentials without thinking can unintentionally supply them to a malicious credential prompt.
-It might sound alarming to not ask for a user to sign back in, in reality any violation of IT policies will revoke the session. Some examples include (but are not limited to) a password change, an incompliant device, or account disable. You can also explicitly [revoke users’ sessions using PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken?view=azureadps-2.0&preserve-view=true). The Azure AD default configuration comes down to “don’t ask users to provide their credentials if security posture of their sessions has not changed”.
+It might sound alarming to not ask for a user to sign back in, in reality any violation of IT policies will revoke the session. Some examples include (but are not limited to) a password change, an incompliant device, or account disable. You can also explicitly [revoke users’ sessions using PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken). The Azure AD default configuration comes down to “don’t ask users to provide their credentials if security posture of their sessions has not changed”.
The sign-in frequency setting works with apps that have implemented OAUTH2 or OIDC protocols according to the standards. Most Microsoft native apps for Windows, Mac, and Mobile including the following web applications comply with the setting.
active-directory Active Directory Configurable Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
Immediately afterward, the user tries to access Web Application B. The user is r
## Cmdlet reference
-These are the cmdlets in the [Azure Active Directory PowerShell for Graph Preview module](/powershell/module/azuread/?view=azureadps-2.0-preview#service-principals&preserve-view=true&preserve-view=true).
+These are the cmdlets in the [Azure Active Directory PowerShell for Graph Preview module](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#service-principals).
### Manage policies
active-directory Developer Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/developer-support-help-options.md
# Support and help options for developers
-If you're just starting to integrate with Azure Active Directory (Azure AD), Microsoft identities, or Microsoft Graph API, or when you're implementing a new feature to your application, there are times when you need to obtain help from the community or understand the support options that you have as a developer. This article helps you to understand these options, including:
+If you're just starting to integrate with Azure Active Directory (Azure AD), Microsoft identities, or Microsoft Graph API, or when you're implementing a new feature to your application, there are times when you need to obtain help from the community or understand the support options that you have as a developer. Here are suggestions for where you can get help when developing your Microsoft identity platform solutions.
-> [!div class="checklist"]
-> * How to search whether your question hasn't been answered by the community, or if an existing documentation for the feature you're trying to implement already exists
-> * In some cases, you just want to use our support tools to help you debug a specific problem
-> * If you can't find the answer that you need, you may want to ask a question on *Microsoft Q&A*
-> * If you find an issue with one of our authentication libraries, raise a *GitHub* issue
-> * Finally, if you need to talk to someone, you might want to open a support request
+## Create an Azure support request
-## Search
+<div class='icon is-large'>
+ <img alt='Azure support' src='https://docs.microsoft.com/media/logos/logo_azure.svg'>
+</div>
-If you have a development-related question, you may be able to find the answer in the documentation, [GitHub samples](https://github.com/azure-samples), or answers to [Microsoft Q&A](/answers/products/) questions.
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
-### Scoped search
+- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+- If you are not an Azure customer, you can also open a support request with Microsoft via [our commercial support](https://support.serviceshub.microsoft.com/supportforbusiness).
-For faster results, scope your search to [Microsoft Q&A](https://docs.microsoft.com/answers/products/) the documentation, and the code samples by using the following query in your favorite search engine:
+## Post a question to Microsoft Q&A
+<div class='icon is-large'>
+ <img alt='Microsoft Q&A' src='./media/common/question-mark-icon.png'>
+</div>
-For faster results, scope your search to [Microsoft Q&A](/answers/products/)the documentation, and the code samples by using the following query in your favorite search engine:
+Get answers to your identity app development questions directly from Microsoft engineers, Azure Most Valuable Professionals (MVPs), and members of our expert community.
+[Microsoft Q&A](/answers/products/) is Azure's recommended source of community support.
-```
-{Your Search Terms} (site:http://www.docs.microsoft.com/answers/products/ OR site:docs.microsoft.com OR site:github.com/azure-samples OR site:cloudidentity.com OR site:developer.microsoft.com/graph)
-```
+If you can't find an answer to your problem by searching Microsoft Q&A, submit a new question. Use one of the following tags when you ask your [high-quality question](https://docs.microsoft.com/answers/articles/24951/how-to-write-a-quality-question.html):
-Where *{Your Search Terms}* correspond to your search keywords.
+| Component/area| Tags |
+|||
+| Active Directory Authentication Library (ADAL) | [[adal]](https://docs.microsoft.com/answers/topics/azure-ad-adal-deprecation.html) |
+| Microsoft Authentication Library (MSAL) | [[msal]](https://docs.microsoft.com/answers/topics/azure-ad-msal.html) |
+| Open Web Interface for .NET (OWIN) middleware | [[azure-active-directory]](https://docs.microsoft.com/answers/topics/azure-active-directory.html) |
+| [Azure AD B2B / External Identities](../external-identities/what-is-b2b.md) | [[azure-ad-b2b]](https://docs.microsoft.com/answers/topics/azure-ad-b2b.html) |
+| [Azure AD B2C](https://azure.microsoft.com/services/active-directory-b2c/) | [[azure-ad-b2c]](https://docs.microsoft.com/answers/topics/azure-ad-b2c.html) |
+| [Microsoft Graph API](https://developer.microsoft.com/graph/) | [[azure-ad-graph]](https://docs.microsoft.com/answers/topics/azure-ad-graph.html) |
+| All other authentication and authorization areas | [[azure-active-directory]](https://docs.microsoft.com/answers/topics/azure-active-directory.html) |
-## Use the development support tools
+## Create a GitHub issue
-| Tool | Description |
-|||
-| [jwt.ms](https://jwt.ms) | Paste an ID or access token to decode the claims names and values. |
-| [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer)| Tool that lets you make requests and see responses against the Microsoft Graph API. |
+<div class='icon is-large'>
+ <img alt='GitHub-image' src='./media/common/github.svg'>
+</div>
-## Post a question to Microsoft Q&A
+If you need help with one of the Microsoft Authentication Libraries (MSAL), open an issue in its repository on GitHub.
-[Microsoft Q&A](/answers/products/) is the preferred channel for development-related questions. Here, members of the developer community and Microsoft team members are directly involved in helping you to solve your problems.
+| MSAL Library | GitHub issues URL|
+| | |
+| MSAL for Android | https://github.com/AzureAD/microsoft-authentication-library-for-android/issues |
+| MSAL Angular | https://github.com/AzureAD/microsoft-authentication-library-for-js/issues |
+| MSAL for iOS and macOS| https://github.com/AzureAD/microsoft-authentication-library-for-objc/issues |
+| MSAL Java | https://github.com/AzureAD/microsoft-authentication-library-for-java/issues |
+| MSAL.js | https://github.com/AzureAD/microsoft-authentication-library-for-js/issues |
+| MSAL.NET | https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues |
+| MSAL Node | https://github.com/AzureAD/microsoft-authentication-library-for-js/issues |
+| MSAL Python | https://github.com/AzureAD/microsoft-authentication-library-for-python/issues |
+| MSAL React | https://github.com/AzureAD/microsoft-authentication-library-for-js/issues |
-If you can't find an answer to your question through search, submit a new question to [Microsoft Q&A](/answers/products/) . Use one of the following tags when asking questions to help the community identify and answer your question more quickly:
+## Submit feedback on Azure Feedback
-|Component/area | Tags |
-|||
-| ADAL library | [[adal]](/answers/topics/azure-ad-adal-deprecation.html) |
-| MSAL library | [[msal]](/answers/topics/azure-ad-msal.html) |
-| OWIN middleware | [[azure-active-directory]](/answers/topics/azure-active-directory.html) |
-| [Azure B2B](../external-identities/what-is-b2b.md) | [[azure-ad-b2b]](/answers/topics/azure-ad-b2b.html) |
-| [Azure B2C](https://azure.microsoft.com/services/active-directory-b2c/) | [[azure-ad-b2c]](/answers/topics/azure-ad-b2c.html) |
-| [Microsoft Graph API](https://developer.microsoft.com/graph/) | [[azure-ad-graph]](/answers/topics/azure-ad-graph.html) |
-| Any other area related to authentication or authorization topics | [[azure-active-directory]](/answers/topics/azure-active-directory.html) |
+<div class='icon is-large'>
+ <img alt='UserVoice' src='https://docs.microsoft.com/media/logos/logo-uservoice.svg'>
+</div>
-The following posts from [Microsoft Q&A](/answers/products/) contain tips on how to ask questions and how to add source code. Follow these guidelines to increase the chances for community members to assess and respond to your question quickly:
+To request new features, post them on Azure Feedback. Share your ideas for making the Microsoft identity platform work better for the applications you develop.
-* [How do I ask a good question](/answers/articles/24951/how-to-write-a-quality-question.html)
-* [How to create a minimal, complete, and verifiable example](/answers/articles/24907/how-to-write-a-quality-answer.html)
-
-## Create a GitHub issue
+| Service | Azure Feedback URL |
+|||
+| Azure Active Directory | https://feedback.azure.com/forums/169401-azure-active-directory |
+| Azure Active Directory - Developer experiences | https://feedback.azure.com/forums/169401-azure-active-directory?category_id=164757 |
+| Azure Active Directory - Authentication | https://feedback.azure.com/forums/169401-azure-active-directory?category_id=167256 |
-If you find a bug or problem related to our libraries, raise an issue in our GitHub repositories. Because our libraries are open source, you can also submit a pull request.
+## Stay informed of updates and new releases
-For a list of libraries and their GitHub repositories, see the following:
+<div class='icon is-large'>
+ <img alt='Stay informed' src='https://docs.microsoft.com/media/common/i_blog.svg'>
+</div>
-* [Azure Active Directory Authentication Library (ADAL)](../azuread-dev/active-directory-authentication-libraries.md) libraries and GitHub repositories
-* [Microsoft Authentication Library (MSAL)](reference-v2-libraries.md) libraries and GitHub repositories
+- [Azure Updates](https://azure.microsoft.com/updates/?category=identity): Learn about important product updates, roadmap, and announcements.
-## Open a support request
+- [What's new in docs](https://docs.microsoft.com/azure/active-directory/develop/whats-new-docs): Get to know what's new in the Microsoft identity platform documentation.
-If you need to talk to someone, you can open a support request. If you are an Azure customer, there are several support options available. To compare plans, see [this page](https://azure.microsoft.com/support/plans/). Developer support is also available for Azure customers. For information on how to purchase Developer support plans, see [this page](https://azure.microsoft.com/support/plans/developer/).
+- [Azure Active Directory Identity Blog](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity): Get news and information about Azure AD.
-* If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest)
+- [Tech Community](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity/): Share your experiences, engage with, and learn from experts.
-* If you are not an Azure customer, you can also open a support request with Microsoft via [our commercial support](https://support.serviceshub.microsoft.com/supportforbusiness).
-You can also try a [virtual agent](https://support.microsoft.com/contactus/?ws=support) to obtain support or ask questions.
active-directory Quickstart V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-windows-desktop.md
See [How the sample works](#how-the-sample-works) for an illustration.
> 1. Select **Register** to create the application.
> 1. Under **Manage**, select **Authentication**.
> 1. Select **Add a platform** > **Mobile and desktop applications**.
-> 1. In the **Redirect URIs** section, select `https://login.microsoftonline.com/common/oauth2/nativeclient`.
+> 1. In the **Redirect URIs** section, select `https://login.microsoftonline.com/common/oauth2/nativeclient` and in **Custom redirect URIs** add `ms-appx-web://microsoft.aad.brokerplugin/{client_id}` where `{client_id}` is the application (client) ID of your application (the same GUID that appears in the `msal{client_id}://auth` checkbox).
> 1. Select **Configure**.
> [!div class="sxs-lookup" renderon="portal"]
> #### Step 1: Configure your application in Azure portal
-> For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient`.
+> For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient` and `ms-appx-web://microsoft.aad.brokerplugin/{client_id}`.
> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
> > [Make this change for me]()
>
active-directory Scenario Desktop App Registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-app-registration.md
The redirect URIs to use in a desktop application depend on the flow you want to
- If you use interactive authentication or device code flow, use `https://login.microsoftonline.com/common/oauth2/nativeclient`. To achieve this configuration, select the corresponding URL in the **Authentication** section for your application.
 > [!IMPORTANT]
- > Using `https://login.microsoftonline.com/common/oauth2/nativeclient` as the redirect URI is recommended as a security best practice. If no redirect URI is specified, MSAL.NET uses `urn:ietf:wg:oauth:2.0:oob` by default which is not recommneded. This default will be updated as a breaking change in the next major release.
+ > Using `https://login.microsoftonline.com/common/oauth2/nativeclient` as the redirect URI is recommended as a security best practice. If no redirect URI is specified, MSAL.NET uses `urn:ietf:wg:oauth:2.0:oob` by default which is not recommended. This default will be updated as a breaking change in the next major release.
- If you build a native Objective-C or Swift app for macOS, register the redirect URI based on your application's bundle identifier in the following format: `msauth.<your.app.bundle.id>://auth`. Replace `<your.app.bundle.id>` with your application's bundle identifier.
- If your app uses only Integrated Windows Authentication or a username and a password, you don't need to register a redirect URI for your application. These flows do a round trip to the Microsoft identity platform v2.0 endpoint. Your application won't be called back on any specific URI.
active-directory Scenario Protected Web Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-protected-web-api-app-configuration.md
services.Configure<JwtBearerOptions>(JwtBearerDefaults.AuthenticationScheme, opt
// Your code to add extra configuration that will be executed after the current event implementation.
options.TokenValidationParameters.ValidIssuers = new[] { /* list of valid issuers */ };
options.TokenValidationParameters.ValidAudiences = new[] { /* list of valid audiences */ };
- }
+ };
});
```
You can also validate incoming access tokens in Azure Functions. You can find ex
## Next steps

Move on to the next article in this scenario,
-[Verify scopes and app roles in your code](scenario-protected-web-api-verification-scope-app-roles.md).
+[Verify scopes and app roles in your code](scenario-protected-web-api-verification-scope-app-roles.md).
active-directory Troubleshoot Publisher Verification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/troubleshoot-publisher-verification.md
When a request to add a verified publisher is made, a number of signals are use
## Next steps
-If you have reviewed all of the previous information and are still receiving an error from Microsoft Graph, gather as much of the following information as possible related to the failing request and [contact Microsoft support](developer-support-help-options.md#open-a-support-request).
+If you have reviewed all of the previous information and are still receiving an error from Microsoft Graph, gather as much of the following information as possible related to the failing request and [contact Microsoft support](developer-support-help-options.md#create-an-azure-support-request).
- Timestamp - CorrelationId
active-directory Tutorial Blazor Webassembly https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-blazor-webassembly.md
Once registered, under **Manage**, select **Authentication** > **Implicit grant
To create the app you need the latest Blazor templates. You can install them for the .NET Core CLI with the following command:

```dotnetcli
-dotnet new --install Microsoft.AspNetCore.Components.WebAssembly.Templates::3.2.1
+dotnet new -i Microsoft.Identity.Web.ProjectTemplates::1.6.0
```

Then run the following command to create the application. Replace the placeholders in the command with the proper information from your app's overview page and execute the command in a command shell. The output location specified with the `-o|--output` option creates a project folder if it doesn't exist and becomes part of the app's name.
active-directory Hybrid Azuread Join Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/hybrid-azuread-join-plan.md
The table below provides details on support for these on-premises AD UPNs in Win
| -- | -- | -- | -- |
| Routable | Federated | From 1703 release | Generally available |
| Non-routable | Federated | From 1803 release | Generally available |
-| Routable | Managed | From 1803 release | Generally available, Azure AD SSPR on Windows lockscreen is not supported |
+| Routable | Managed | From 1803 release | Generally available, Azure AD SSPR on Windows lockscreen is not supported. The on-premises UPN must be synced to the `onPremisesUserPrincipalName` attribute in Azure AD |
| Non-routable | Managed | Not supported | |

## Next steps
active-directory Domains Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/domains-manage.md
If you find that any of the conditions haven't been met, manually clean up the
Most management tasks for domain names in Azure Active Directory can also be completed using Microsoft PowerShell, or programmatically using the Microsoft Graph API.

* [Using PowerShell to manage domain names in Azure AD](/powershell/module/azuread/#domains&preserve-view=true)
-* [Domain resource type](/graph/api/resources/domain?view=graph-rest-1.0&preserve-view=true)
+* [Domain resource type](/graph/api/resources/domain)
## Next steps
active-directory Groups Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-create-rule.md
If the rule you entered isn't valid, an explanation of why the rule couldn't be
## Turn on or off welcome email
-When a new Microsoft 365 group is created, a welcome email notification is sent the users who are added to the group. Later, if any attributes of a user or device change, all dynamic group rules in the organization are processed for membership changes. Users who are added then also receive the welcome notification. You can turn off this behavior in [Exchange PowerShell](/powershell/module/exchange/users-and-groups/Set-UnifiedGroup?view=exchange-ps&preserve-view=true).
+When a new Microsoft 365 group is created, a welcome email notification is sent to the users who are added to the group. Later, if any attributes of a user or device change, all dynamic group rules in the organization are processed for membership changes. Users who are added then also receive the welcome notification. You can turn off this behavior in [Exchange PowerShell](/powershell/module/exchange/users-and-groups/Set-UnifiedGroup).
## Check processing status for a rule
active-directory Licensing Groups Resolve Problems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-groups-resolve-problems.md
If you use Exchange Online, some users in your organization might be incorrectly
> Get-Recipient -ResultSize unlimited | where {$_.EmailAddresses -match "user@contoso.onmicrosoft.com"} | fL Name, RecipientType,emailaddresses
> ```
> For more information about this problem, see ["Proxy address
-> is already being used" error message in Exchange Online](https://support.microsoft.com/help/3042584/-proxy-address-address-is-already-being-used-error-message-in-exchange-online). The article also includes information on [how to connect to Exchange Online by using remote PowerShell](/powershell/exchange/connect-to-exchange-online-powershell?view=exchange-ps).
+> is already being used" error message in Exchange Online](https://support.microsoft.com/help/3042584/-proxy-address-address-is-already-being-used-error-message-in-exchange-online). The article also includes information on [how to connect to Exchange Online by using remote PowerShell](/powershell/exchange/connect-to-exchange-online-powershell).
After you resolve any proxy address problems for the affected users, make sure to force license processing on the group to make sure that the licenses can now be applied.
active-directory Reset Redemption Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/reset-redemption-status.md
New-AzureADMSInvitation -InvitedUserEmailAddress <<external email>> -SendInvitat
## Use Microsoft Graph API to reset redemption status
-Using the [Microsoft Graph invitation API](/graph/api/resources/invitation?view=graph-rest-1.0), set the `resetRedemption` property to `true` and specify the new email address in the `invitedUserEmailAddress` property.
+Using the [Microsoft Graph invitation API](/graph/api/resources/invitation), set the `resetRedemption` property to `true` and specify the new email address in the `invitedUserEmailAddress` property.
```json
POST https://graph.microsoft.com/beta/invitations
active-directory 2 Secure Access Current State https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/2-secure-access-current-state.md
Individuals in your organization are probably already collaborating with users f
The users initiating external collaboration best understand the applications most relevant for external collaboration, and when that access should end. Understanding these users can help you determine who should be delegated permission to invite external users, create access packages, and complete access reviews.
-To find users who are currently collaborating, review the [Microsoft 365 audit log for sharing and access request activities](/microsoft-365/compliance/search-the-audit-log-in-security-and-compliance?view=o365-worldwide#sharing-and-access-request-activities). You can also review the [Azure AD audit log for details on who invited B2B](../external-identities/auditing-and-reporting.md) users to your directory.
+To find users who are currently collaborating, review the [Microsoft 365 audit log for sharing and access request activities](/microsoft-365/compliance/search-the-audit-log-in-security-and-compliance#sharing-and-access-request-activities). You can also review the [Azure AD audit log for details on who invited B2B](../external-identities/auditing-and-reporting.md) users to your directory.
## Find current collaboration partners
-External users may be [Azure AD B2B users](../external-identities/what-is-b2b.md) (preferable) with partner-managed credentials, or external users with locally provisioned credentials. These users are typically (but not always) marked with a UserType of Guest. You can enumerate guest users through the [Microsoft Graph API](/graph/api/user-list?tabs=http&view=graph-rest-1.0), [PowerShell](/graph/api/user-list?tabs=http&view=graph-rest-1.0), or the [Azure portal](../enterprise-users/users-bulk-download.md).
+External users may be [Azure AD B2B users](../external-identities/what-is-b2b.md) (preferable) with partner-managed credentials, or external users with locally provisioned credentials. These users are typically (but not always) marked with a UserType of Guest. You can enumerate guest users through the [Microsoft Graph API](/graph/api/user-list?tabs=http), [PowerShell](/graph/api/user-list?tabs=http), or the [Azure portal](../enterprise-users/users-bulk-download.md).
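As a hedged sketch (not part of the original article), the guest-user enumeration via the Graph API mentioned above comes down to filtering the `/users` collection on `userType eq 'Guest'`; this example only builds the request URL, and the OAuth token acquisition is deliberately left out:

```python
import urllib.parse

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def guest_users_url(base: str = GRAPH_BASE) -> str:
    """Build the Microsoft Graph request URL that enumerates guest users.

    The actual request still needs an OAuth 2.0 bearer token in the
    Authorization header; only URL construction is shown here.
    """
    params = {
        "$filter": "userType eq 'Guest'",
        "$select": "displayName,mail,userType",
    }
    return f"{base}/users?{urllib.parse.urlencode(params)}"
```

The same filter works from PowerShell or any HTTP client; the `$select` fields here are just an illustrative subset.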
### Use email domains and companyName property
Consider whether your organization wants to allow collaboration with only specif
## Find access being granted to external users
-Once you have an inventory of external users and organizations,, you can determine the access granted to these users using the Microsoft Graph API to determine Azure AD [group membership](/graph/api/resources/groups-overview?view=graph-rest-1.0) or [direct application assignment](/graph/api/resources/approleassignment?view=graph-rest-1.0) in Azure AD.
+Once you have an inventory of external users and organizations, you can use the Microsoft Graph API to determine the access granted to these users through Azure AD [group membership](/graph/api/resources/groups-overview) or [direct application assignment](/graph/api/resources/approleassignment) in Azure AD.
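As an illustrative sketch (my example, not part of the article), the two Graph relationships the paragraph above names — `memberOf` and `appRoleAssignments` on the user resource — can be assembled into request URLs like this; the caller is assumed to supply the user's object ID and a bearer token separately:

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def external_user_access_urls(user_id: str, base: str = GRAPH_BASE) -> dict:
    """Return the Microsoft Graph endpoints that reveal what a given
    external user can reach: group membership (memberOf) and direct
    application role assignments (appRoleAssignments)."""
    return {
        "group_membership": f"{base}/users/{user_id}/memberOf",
        "app_assignments": f"{base}/users/{user_id}/appRoleAssignments",
    }
```

Each URL would then be requested with an `Authorization: Bearer <token>` header; only the URL construction is shown here.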
### Enumerate application-specific permissions
You may also be able to perform application-specific permission enumeration. For
Specifically investigate access to all of your business-sensitive and business-critical apps so that you are fully aware of any external access.

### Detect Ad Hoc Sharing
-If your email and network plans enable it, you can investigate content being shared through email or through unauthorized software as a service (SaaS) apps. [Microsoft 365 Data Loss Protection](/microsoft-365/compliance/data-loss-prevention-policies?view=o365-worldwide) helps you identify, prevent, and monitor the accidental sharing of sensitive information across your Microsoft 365 infrastructure. [Microsoft Cloud App Security](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/cloud-app-security) can help you identify the use of unauthorized SaaS apps in your environment.
+If your email and network plans enable it, you can investigate content being shared through email or through unauthorized software as a service (SaaS) apps. [Microsoft 365 Data Loss Protection](/microsoft-365/compliance/data-loss-prevention-policies) helps you identify, prevent, and monitor the accidental sharing of sensitive information across your Microsoft 365 infrastructure. [Microsoft Cloud App Security](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/cloud-app-security) can help you identify the use of unauthorized SaaS apps in your environment.
## Next steps
active-directory 3 Secure Access Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/3-secure-access-plan.md
While your policies will be highly customized to your needs, consider the follow
* If you are using [connected organizations](../governance/entitlement-management-organization.md) to group all users from a single partner, schedule regular reviews with the business owner and the partner representative.
-* **Microsoft 365 Groups**. Set a [group expiration policy](/microsoft-365/solutions/microsoft-365-groups-expiration-policy?view=o365-worldwide) for Microsoft 365 Groups to which external users are invited.
+* **Microsoft 365 Groups**. Set a [group expiration policy](/microsoft-365/solutions/microsoft-365-groups-expiration-policy) for Microsoft 365 Groups to which external users are invited.
* **Other options**. If external users have access outside of Entitlement Management access packages or Microsoft 365 groups, set up a business process to review when accounts should be made inactive or deleted. For example:
active-directory 4 Secure Access Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/4-secure-access-groups.md
As you develop your group strategy to secure external access to your resources,
* *By default all users can create M365 Groups and groups are open for all (internal and external) users in your tenant to join*.
- * [You can restrict Microsoft 365 Group creation](/microsoft-365/solutions/manage-creation-of-groups?view=o365-worldwide) to the members of a particular security group. Use Windows PowerShell to configure this setting.
+ * [You can restrict Microsoft 365 Group creation](/microsoft-365/solutions/manage-creation-of-groups) to the members of a particular security group. Use Windows PowerShell to configure this setting.
* **Who should be able to invite people to groups?** Can all group members be able to add other members, or can only group owners add members?
Hybrid organizations have both an on-premises infrastructure and an Azure AD clo
## Microsoft 365 Groups
-[Microsoft 365 Groups](/microsoft-365/admin/create-groups/office-365-groups?view=o365-worldwide) are the foundational membership service that drives all access across M365. They can be created from the [Azure portal](https://portal.azure.com/), or the [M365 portal](https://admin.microsoft.com/). When an M365 group is created, you grant access to a group of resources used to collaborate. See [Overview of Microsoft 365 Groups for administrators](/microsoft-365/admin/create-groups/office-365-groups?view=o365-worldwide) for a complete listing of these resources.
+[Microsoft 365 Groups](/microsoft-365/admin/create-groups/office-365-groups) are the foundational membership service that drives all access across M365. They can be created from the [Azure portal](https://portal.azure.com/), or the [M365 portal](https://admin.microsoft.com/). When an M365 group is created, you grant access to a group of resources used to collaborate. See [Overview of Microsoft 365 Groups for administrators](/microsoft-365/admin/create-groups/office-365-groups) for a complete listing of these resources.
M365 Groups have the following nuances for their roles:

* **Owners** - Group owners can add or remove members and have unique permissions like the ability to delete conversations from the shared inbox or change group settings. Group owners can rename the group, update the description or picture and more.
-* **Members** - Members can access everything in the group but can't change group settings. By default group members can invite guests to join your group, though you can [control that setting](/microsoft-365/admin/create-groups/manage-guest-access-in-groups?view=o365-worldwide).
+* **Members** - Members can access everything in the group but can't change group settings. By default group members can invite guests to join your group, though you can [control that setting](/microsoft-365/admin/create-groups/manage-guest-access-in-groups).
* **Guests** - Group guests are members who are from outside your organization. Guests by default have some limits to functionality in Teams.
active-directory 8 Secure Access Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/8-secure-access-sensitivity-labels.md
# Control access with sensitivity labels
-[Sensitivity labels](/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide) help you control access to your content in Office 365 applications, and in containers like Microsoft Teams, Microsoft 365 Groups, and SharePoint sites. They can protect your content without hindering your usersΓÇÖ collaboration and production abilities. Sensitivity labels allow you to send your organizationΓÇÖs content across devices, apps, and services, while protecting your data and meeting your compliance and security policies.
+[Sensitivity labels](/microsoft-365/compliance/sensitivity-labels) help you control access to your content in Office 365 applications, and in containers like Microsoft Teams, Microsoft 365 Groups, and SharePoint sites. They can protect your content without hindering your users' collaboration and production abilities. Sensitivity labels allow you to send your organization's content across devices, apps, and services, while protecting your data and meeting your compliance and security policies.
With sensitivity labels you can:

* **Classify content without adding any protection settings**. You can assign a classification to content (like a sticker) that persists and roams with your content as it's used and shared. You can use this classification to generate usage reports and see activity data for your sensitive content.
-* **Enforce protection settings such as encryption, watermarks, and access restrictions**. For example, users can apply a Confidential label to a document or email, and that label can [encrypt the content](/microsoft-365/compliance/encryption-sensitivity-labels?view=o365-worldwide) and add a ΓÇ£ConfidentialΓÇ¥ watermark. In addition, you can [apply a sensitivity label to a container](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites?view=o365-worldwide) like a SharePoint site, and enforce whether external users can access the content it contains.
+* **Enforce protection settings such as encryption, watermarks, and access restrictions**. For example, users can apply a Confidential label to a document or email, and that label can [encrypt the content](/microsoft-365/compliance/encryption-sensitivity-labels) and add a "Confidential" watermark. In addition, you can [apply a sensitivity label to a container](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites) like a SharePoint site, and enforce whether external users can access the content it contains.
Sensitivity labels on email and other content travel with the content. Sensitivity labels on containers can restrict access to the container, but content in the container doesn't inherit the label. For example, a user could take content from a protected site, download it, and then share it without restrictions unless the content also had a sensitivity label.
As you think about governing external access to your content, determine the foll
* How will you define what is High, Medium, or Low Business Impact (HBI, MBI, LBI)? Consider the impact to your organization if specific types of content are shared inappropriately.
- * Content with specific types of inherently [sensitive content](/microsoft-365/compliance/apply-sensitivity-label-automatically?view=o365-worldwide), such as credit cards or passport numbers
+ * Content with specific types of inherently [sensitive content](/microsoft-365/compliance/apply-sensitivity-label-automatically), such as credit cards or passport numbers
* Content created by specific groups or people (for example, compliance officers, financial officers, or executives)
As you think about governing external access to your content, determine the foll
* What defaults should be in place for HBI data, sites, or Microsoft 365 Groups?
-* Where will you use sensitivity labels to [label and monitor](/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide), versus to [enforce encryption](/microsoft-365/compliance/encryption-sensitivity-labels?view=o365-worldwide) or to [enforce container access restrictions](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites?view=o365-worldwide)?
+* Where will you use sensitivity labels to [label and monitor](/microsoft-365/compliance/sensitivity-labels), versus to [enforce encryption](/microsoft-365/compliance/encryption-sensitivity-labels) or to [enforce container access restrictions](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites)?
**For email and content**
-* Do you want to [automatically apply sensitivity labels](/microsoft-365/compliance/apply-sensitivity-label-automatically?view=o365-worldwide) to content, or do so manually?
+* Do you want to [automatically apply sensitivity labels](/microsoft-365/compliance/apply-sensitivity-label-automatically) to content, or do so manually?
- * If you choose to do so manually, do you want to [recommend that users apply a label](/microsoft-365/compliance/apply-sensitivity-label-automatically?view=o365-worldwide)?
+ * If you choose to do so manually, do you want to [recommend that users apply a label](/microsoft-365/compliance/apply-sensitivity-label-automatically)?
**For containers**
* What criteria will determine if M365 Groups, Teams, or SharePoint sites require access to be restricted by using sensitivity labels?
-* Do you want to only label content in these containers moving forward, or do you want to [automatically label](/microsoft-365/compliance/apply-sensitivity-label-automatically?view=o365-worldwide) existing files in SharePoint and OneDrive?
+* Do you want to only label content in these containers moving forward, or do you want to [automatically label](/microsoft-365/compliance/apply-sensitivity-label-automatically) existing files in SharePoint and OneDrive?
-See these [common scenarios for sensitivity labels](/microsoft-365/compliance/get-started-with-sensitivity-labels?view=o365-worldwide) for other ideas on how you can use sensitivity labels.
+See these [common scenarios for sensitivity labels](/microsoft-365/compliance/get-started-with-sensitivity-labels) for other ideas on how you can use sensitivity labels.
### Sensitivity labels on email and content
When you assign a sensitivity label to a document or email, it's like a stamp th
### Sensitivity labels on containers
-You can apply sensitivity labels on containers such as [Microsoft 365 Groups](../enterprise-users/groups-assign-sensitivity-labels.md), [Microsoft Teams](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites?view=o365-worldwide), and [SharePoint sites](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites?view=o365-worldwide). When you apply this sensitivity label to a supported container, the label automatically applies the classification and protection settings to the connected site or group. Sensitivity labels on these containers can control the following aspects of containers:
+You can apply sensitivity labels on containers such as [Microsoft 365 Groups](../enterprise-users/groups-assign-sensitivity-labels.md), [Microsoft Teams](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites), and [SharePoint sites](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites). When you apply this sensitivity label to a supported container, the label automatically applies the classification and protection settings to the connected site or group. Sensitivity labels on these containers can control the following aspects of containers:
* **Privacy**. You can choose who can see the site: specific users, all internal users, or anyone.
You can apply sensitivity labels on containers such as [Microsoft 365 Groups](..
When you apply a sensitivity label to a container such as a SharePoint site, it is not applied to content there: sensitivity labels on containers control access to the content within the container.
-* If you want to automatically apply labels to the content within the container, see [Apply a sensitivity label to content automatically](/microsoft-365/compliance/apply-sensitivity-label-automatically?view=o365-worldwide).
+* If you want to automatically apply labels to the content within the container, see [Apply a sensitivity label to content automatically](/microsoft-365/compliance/apply-sensitivity-label-automatically).
-* If you want users to be able to manually apply labels to this content, be sure that you've [enabled sensitivity labels for Office files in SharePoint and OneDrive](/microsoft-365/compliance/sensitivity-labels-sharepoint-onedrive-files?view=o365-worldwide).
+* If you want users to be able to manually apply labels to this content, be sure that you've [enabled sensitivity labels for Office files in SharePoint and OneDrive](/microsoft-365/compliance/sensitivity-labels-sharepoint-onedrive-files).
### Plan to implement sensitivity labels

Once you have determined how you want to use sensitivity labels, and to what content and sites you want to apply them, see the following documentation to help you perform your implementation.
-1. [Get started with sensitivity labels](/microsoft-365/compliance/get-started-with-sensitivity-labels?view=o365-worldwide)
+1. [Get started with sensitivity labels](/microsoft-365/compliance/get-started-with-sensitivity-labels)
-2. [Create a deployment strategy](/microsoft-365/compliance/get-started-with-sensitivity-labels?view=o365-worldwide)
+2. [Create a deployment strategy](/microsoft-365/compliance/get-started-with-sensitivity-labels)
-3. [Create and publish sensitivity labels](/microsoft-365/compliance/create-sensitivity-labels?view=o365-worldwide)
+3. [Create and publish sensitivity labels](/microsoft-365/compliance/create-sensitivity-labels)
-4. [Restrict access to content using sensitivity labels to apply encryption](/microsoft-365/compliance/encryption-sensitivity-labels?view=o365-worldwide)
+4. [Restrict access to content using sensitivity labels to apply encryption](/microsoft-365/compliance/encryption-sensitivity-labels)
-5. [Use sensitivity labels with teams, groups, and sites](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites?view=o365-worldwide)
+5. [Use sensitivity labels with teams, groups, and sites](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites)
-6. [Enable sensitivity labels for Office files in SharePoint and OneDrive](/microsoft-365/compliance/sensitivity-labels-sharepoint-onedrive-files?view=o365-worldwide)
+6. [Enable sensitivity labels for Office files in SharePoint and OneDrive](/microsoft-365/compliance/sensitivity-labels-sharepoint-onedrive-files)
### Next steps
active-directory 9 Secure Access Teams Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/9-secure-access-teams-sharepoint.md
To learn more about managing external access in Teams, see the following resourc
* [Manage external access in Microsoft Teams](/microsoftteams/manage-external-access)
-* [Microsoft 365 identity models and Azure Active Directory](/microsoft-365/enterprise/about-microsoft-365-identity?view=o365-worldwide)
+* [Microsoft 365 identity models and Azure Active Directory](/microsoft-365/enterprise/about-microsoft-365-identity)
* [Identity models and authentication for Microsoft Teams](/MicrosoftTeams/identify-models-authentication)
active-directory Protect M365 From On Premises Attacks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/protect-m365-from-on-premises-attacks.md
the principles illustrated in the following diagram:
These administrator accounts are restricted-use accounts. *No on-premises accounts should have administrative privileges in Microsoft 365.*
- For more information, see the [overview of Microsoft 365 administrator roles](/microsoft-365/admin/add-users/about-admin-roles?view=o365-worldwide). Also see [Roles for Microsoft 365 in Azure AD](../roles/m365-workload-docs.md).
+ For more information, see the [overview of Microsoft 365 administrator roles](/microsoft-365/admin/add-users/about-admin-roles). Also see [Roles for Microsoft 365 in Azure AD](../roles/m365-workload-docs.md).
1. **Manage devices from Microsoft 365.** Use Azure AD join and cloud-based mobile device management (MDM) to eliminate dependencies
on your on-premises infrastructure.
* **Collaboration**: Use Microsoft 365 Groups and Microsoft Teams for modern collaboration. Decommission on-premises distribution lists, and [upgrade distribution lists to Microsoft 365 Groups in
- Outlook](/office365/admin/manage/upgrade-distribution-lists?view=o365-worldwide).
+ Outlook](/office365/admin/manage/upgrade-distribution-lists).
* **Access**: Use Azure AD security groups or Microsoft 365 Groups to authorize access to applications in Azure AD.
authentication decisions. For more information, see the
For more information, see [Legacy authentication protocols](../fundamentals/auth-sync-overview.md). Or see specific details for [Exchange Online](/exchange/clients-and-mobile-in-exchange-online/disable-basic-authentication-in-exchange-online#how-basic-authentication-works-in-exchange-online) and [SharePoint Online](/powershell/module/sharepoint-online/set-spotenant?view=sharepoint-ps).
-* Implement the recommended [identity and device access configurations](/microsoft-365/security/office-365-security/identity-access-policies?view=o365-worldwide).
+* Implement the recommended [identity and device access configurations](/microsoft-365/security/office-365-security/identity-access-policies).
* If you're using a version of Azure AD that doesn't include Conditional Access, ensure that you're using the [Azure AD security defaults](../fundamentals/concept-fundamentals-security-defaults.md).
active-directory Service Accounts Govern On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-govern-on-premises.md
Use the following settings with user accounts used as service accounts:
* **LogonWorkstations**: restrict permissions for where the service account can sign in. If it runs locally on a machine and accesses only resources on that machine, restrict it from logging on anywhere else.
-* [**Cannot change password**](/powershell/module/addsadministration/set-aduser?view=win10-ps): prevent the service account from changing its own password by setting the parameter to false.
+* [**Cannot change password**](/powershell/module/addsadministration/set-aduser): prevent the service account from changing its own password by setting the parameter to false.
## Build a lifecycle management process
Create service account only after relevant information is documented in your CMD
* [Account Expiry](/powershell/module/activedirectory/set-adaccountexpiration?view=winserver2012-ps)
- * For all user accounts used as service accounts, define a realistic and definite end-date for use. Set this using the "Account Expires" flag. For more details, refer to [Set-ADAccountExpiration](/powershell/module/addsadministration/set-adaccountexpiration?view=win10-ps).
+ * For all user accounts used as service accounts, define a realistic and definite end-date for use. Set this using the "Account Expires" flag. For more details, refer to [Set-ADAccountExpiration](/powershell/module/addsadministration/set-adaccountexpiration).
-* Log On To ([LogonWorkstation](/powershell/module/addsadministration/set-aduser?view=win10-ps))
+* Log On To ([LogonWorkstation](/powershell/module/addsadministration/set-aduser))
* [Password Policy](../../active-directory-domain-services/password-policy.md) requirements
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-principal.md
Using PowerShell
`Get-AzureADServicePrincipal -All:$true`
-For more information, see [Get-AzureADServicePrincipal](https://docs.microsoft.com/powershell/module/azuread/get-azureadserviceprincipal?view=azureadps-2.0).
+For more information, see [Get-AzureADServicePrincipal](https://docs.microsoft.com/powershell/module/azuread/get-azureadserviceprincipal).
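The section that follows assesses the security of the service principals this cmdlet returns. As a hedged illustration only (in Python rather than PowerShell, with field names assumed to mirror the cmdlet's output), a first triage pass might flag service principals that still carry password credentials:

```python
# Hypothetical triage sketch: given service principal records (for example,
# Get-AzureADServicePrincipal output exported to JSON), flag the ones that
# still carry password credentials. Field names are assumptions.
def find_password_credential_sps(service_principals):
    """Return display names of service principals that have password credentials."""
    return [
        sp["displayName"]
        for sp in service_principals
        if sp.get("passwordCredentials")  # a non-empty list means a secret exists
    ]

sample = [
    {"displayName": "legacy-automation", "passwordCredentials": [{"keyId": "1"}]},
    {"displayName": "cert-only-app", "passwordCredentials": []},
]
print(find_password_credential_sps(sample))  # ['legacy-automation']
```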
## Assess service principal security
active-directory Service Accounts Standalone Managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-standalone-managed.md
sMSAs offer greater security than user accounts used as service accounts, while
* The DNS name of the host computer is changed.
- * When adding or removing additional sam-accountname or dns-hostname parameters using [PowerShell](/powershell/module/addsadministration/set-adserviceaccount?view=win10-ps).
+ * When adding or removing additional sam-accountname or dns-hostname parameters using [PowerShell](/powershell/module/addsadministration/set-adserviceaccount).
## When to use sMSAs
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-catalog-create.md
To include resources in an access package, the resources must exist in a catalog
### Add a Multi-geo SharePoint Site
-1. If you have [Multi-Geo](https://docs.microsoft.com/microsoft-365/enterprise/multi-geo-capabilities-in-onedrive-and-sharepoint-online-in-microsoft-365?view=o365-worldwide) enabled for SharePoint, select the environment you would like to select sites from.
+1. If you have [Multi-Geo](https://docs.microsoft.com/microsoft-365/enterprise/multi-geo-capabilities-in-onedrive-and-sharepoint-online-in-microsoft-365) enabled for SharePoint, select the environment you would like to select sites from.
:::image type="content" source="media/entitlement-management-catalog-create/sharepoint-multigeo-select.png" alt-text="Access package - Add resource roles - Select SharePoint Multi-geo sites":::
active-directory Entitlement Management Logs And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-logs-and-reporting.md
To set the role assignment and create a query, do the following steps:
### Install Azure PowerShell module
-Once you have the appropriate role assignment, launch PowerShell, and [install the Azure PowerShell module](/powershell/azure/install-az-ps?view=azps-3.3.0) (if you haven't already), by typing:
+Once you have the appropriate role assignment, launch PowerShell, and [install the Azure PowerShell module](/powershell/azure/install-az-ps) (if you haven't already), by typing:
```azurepowershell
install-module -Name az -allowClobber -Scope CurrentUser
$wks | ft CustomerId, Name
```

### Send the query to the Log Analytics workspace
-Finally, once you have a workspace identified, you can use [Invoke-AzOperationalInsightsQuery](/powershell/module/az.operationalinsights/Invoke-AzOperationalInsightsQuery?view=azps-3.3.0
-) to send a Kusto query to that workspace. These queries are written in [Kusto query language](/azure/kusto/query/).
+Finally, once you have a workspace identified, you can use [Invoke-AzOperationalInsightsQuery](/powershell/module/az.operationalinsights/Invoke-AzOperationalInsightsQuery) to send a Kusto query to that workspace. These queries are written in [Kusto query language](/azure/kusto/query/).
For example, you can retrieve the date range of the audit event records from the Log Analytics workspace, with PowerShell cmdlets to send a query like:
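The query itself is elided in this hunk. As an illustration only, a query of the shape the sentence describes, with the table and column names assumed, could be composed like this before being passed to Invoke-AzOperationalInsightsQuery:

```python
# Illustration only: compose a Kusto query of the shape the article describes.
# The table and column names (AuditLogs, TimeGenerated) are assumptions here.
def date_range_query(table="AuditLogs"):
    return (
        f"{table} "
        "| summarize earliest=min(TimeGenerated), latest=max(TimeGenerated)"
    )

print(date_range_query())
```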
active-directory How To Connect Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-preview.md
# More details about features in preview

This topic describes how to use features currently in preview.
-## Azure AD Connect sync V2 endpoint API (public preview)
+## Azure AD Connect sync V2 endpoint API
-We have deployed a new endpoint (API) for Azure AD Connect that improves the performance of the synchronization service operations to Azure Active Directory. By utilizing the new V2 endpoint, you will experience noticeable performance gains on export and import to Azure AD. This new endpoint also supports syncing groups with up to 250k members. Using this endpoint also allows you to write back Microsoft 365 unified groups, with no maximum membership limit, to your on-premises Active Directory, when group writeback is enabled. For more information see [Azure AD Connect sync V2 endpoint API (public preview)](how-to-connect-sync-endpoint-api-v2.md).
+We have deployed a new endpoint (API) for Azure AD Connect that improves the performance of the synchronization service operations to Azure Active Directory. By utilizing the new V2 endpoint, you will experience noticeable performance gains on export and import to Azure AD. This new endpoint also supports syncing groups with up to 250k members. Using this endpoint also allows you to write back Microsoft 365 unified groups, with no maximum membership limit, to your on-premises Active Directory, when group writeback is enabled. For more information see [Azure AD Connect sync V2 endpoint API](how-to-connect-sync-endpoint-api-v2.md).
## User writeback

> [!IMPORTANT]
active-directory How To Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso.md
Seamless SSO can be combined with either the [Password Hash Synchronization](how
## SSO via primary refresh token vs. Seamless SSO
-For Windows 10, it's recommended to use SSO via primary refresh token (PRT). For Windows 7 and 8.1, it's recommended to use Seamless SSO.
+For Windows 10, Windows Server 2016 and later versions, it's recommended to use SSO via primary refresh token (PRT). For Windows 7 and 8.1, it's recommended to use Seamless SSO.
Seamless SSO needs the user's device to be domain-joined, but it is not used on Windows 10 [Azure AD joined devices](../devices/concept-azure-ad-join.md) or [hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md). SSO on Azure AD joined, hybrid Azure AD joined, and Azure AD registered devices works based on the [Primary Refresh Token (PRT)](../devices/concept-primary-refresh-token.md). SSO via PRT works once devices are registered with Azure AD for hybrid Azure AD joined, Azure AD joined, or personal registered devices via Add Work or School Account.
active-directory Reference Connect Ports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-ports.md
This table describes the ports and protocols that are required for communication
| LDAP |389 (TCP/UDP) |Used for data import from AD. Data is encrypted with Kerberos Sign & Seal. |
| SMB | 445 (TCP) |Used by Seamless SSO to create a computer account in the AD forest. |
| LDAP/SSL |636 (TCP/UDP) |Used for data import from AD. The data transfer is signed and encrypted. Only used if you are using TLS. |
-| RPC |49152- 65535 (Random high RPC Port)(TCP) |Used during the initial configuration of Azure AD Connect when it binds to the AD forests, and during Password synchronization. See [KB929851](https://support.microsoft.com/kb/929851), [KB832017](https://support.microsoft.com/kb/832017), and [KB224196](https://support.microsoft.com/kb/224196) for more information. |
+| RPC |49152- 65535 (Random high RPC Port)(TCP) |Used during the initial configuration of Azure AD Connect when it binds to the AD forests, and during Password synchronization. If the dynamic port has been changed, you need to open that port. See [KB929851](https://support.microsoft.com/kb/929851), [KB832017](https://support.microsoft.com/kb/832017), and [KB224196](https://support.microsoft.com/kb/224196) for more information. |
|WinRM | 5985 (TCP) |Only used if you are installing AD FS with gMSA by Azure AD Connect Wizard |
|AD DS Web Services | 9389 (TCP) |Only used if you are installing AD FS with gMSA by Azure AD Connect Wizard |
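The note added in this hunk about changed dynamic ports can be made concrete with a small sketch (Python; the helper name is hypothetical): the default Windows dynamic RPC range is 49152-65535, and a custom range outside it needs its own firewall opening:

```python
# Sketch (assumed helper): the default Windows dynamic RPC range is 49152-65535.
# If an admin has moved it (netsh int ipv4 set dynamicport ...), firewall rules
# for Azure AD Connect must cover the custom range instead.
DEFAULT_RPC_RANGE = range(49152, 65536)

def extra_ports_needed(start, count):
    """Return the custom dynamic range if it falls outside the default, else None."""
    custom = range(start, start + count)
    if custom.start >= DEFAULT_RPC_RANGE.start and custom.stop <= DEFAULT_RPC_RANGE.stop:
        return None  # covered by the default firewall rule
    return (custom.start, custom.stop - 1)

print(extra_ports_needed(49152, 16384))  # None -- default range
print(extra_ports_needed(10000, 1000))   # (10000, 10999) -- must be opened
```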
active-directory App Management Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/app-management-powershell-samples.md
The following table includes links to PowerShell script examples for Azure AD Ap
- The [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) or,
- The [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true), unless otherwise noted.
-For more information about the cmdlets used in these samples, see [Applications](/powershell/module/azuread/?view=azureadps-2.0#applications&preserve-view=true).
+For more information about the cmdlets used in these samples, see [Applications](/powershell/module/azuread/#applications).
| Link | Description |
|||
|**Application Management scripts**||
-| [Export all app registrations, secrets, and certificates](scripts/powershell-export-all-app-registrations-secrets-and-certs.md) | Exports all app registrations, secrets, and certificates for the specified apps in your directory. |
+| [Export secrets and certs (app registrations)](scripts/powershell-export-all-app-registrations-secrets-and-certs.md) | Export secrets and certificates for app registrations in Azure Active Directory tenant. |
+| [Export secrets and certs (enterprise apps)](scripts/powershell-export-all-enterprise-apps-secrets-and-certs.md) | Export secrets and certificates for enterprise apps in Azure Active Directory tenant. |
+| [Export expiring secrets and certs](scripts/powershell-export-apps-with-expriring-secrets.md) | Export apps with expiring secrets and certificates in Azure Active Directory tenant. |
+| [Export secrets and certs expiring beyond required date](scripts/powershell-export-apps-with-secrets-beyond-required.md) | Export apps with secrets and certificates expiring beyond the required date in Azure Active Directory tenant. |
active-directory Application Proxy Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-powershell-samples.md
# Azure AD PowerShell examples for Azure AD Application Proxy
-The following table includes links to PowerShell script examples for Azure AD Application Proxy. These samples require either the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview), unless otherwise noted.
+The following table includes links to PowerShell script examples for Azure AD Application Proxy. These samples require either the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true), unless otherwise noted.
For more information about the cmdlets used in these samples, see [Application Proxy Application Management](/powershell/module/azuread/#application_proxy_application_management) and [Application Proxy Connector Management](/powershell/module/azuread/#application_proxy_connector_management).
active-directory Configure User Consent Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-user-consent-groups.md
In this example, all group owners are allowed to consent to apps accessing their
You can use the Azure AD PowerShell Preview module, [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview), to enable or disable group owners' ability to consent to applications accessing your organization's data for the groups they own.
-1. Make sure you're using the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module. This step is important if you have installed both the [AzureAD](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0) module and the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module.
+1. Make sure you're using the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module. This step is important if you have installed both the [AzureAD](/powershell/module/azuread/) module and the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module.
```powershell
Remove-Module AzureAD
```
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
Title: Azure AD secure hybrid access with F5 | Microsoft Docs
-description: F5 BIG-IP Access Policy Manager and Azure Active Directory integration for Secure hybrid access
+description: F5 BIG-IP Access Policy Manager and Azure Active Directory integration for Secure Hybrid Access
active-directory Manage App Consent Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-app-consent-policies.md
App consent policies where the ID begins with "microsoft-" are built-in policies
## Pre-requisites
-1. Make sure you're using the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module. This step is important if you have installed both the [AzureAD](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0) module and the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module.
+1. Make sure you're using the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module. This step is important if you have installed both the [AzureAD](/powershell/module/azuread/) module and the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module.
```powershell
Remove-Module AzureAD -ErrorAction SilentlyContinue
```
active-directory Powershell Assign Group To App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-assign-group-to-app.md
This PowerShell script example allows you to assign a specific group to an Azure
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script
active-directory Powershell Assign User To App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-assign-user-to-app.md
This PowerShell script example allows you to assign a user to a specific Azure A
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script
active-directory Powershell Display Users Group Of App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-display-users-group-of-app.md
This PowerShell script example lists the users and groups assigned to a specific
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script
active-directory Powershell Export All App Registrations Secrets And Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-export-all-app-registrations-secrets-and-certs.md
Title: PowerShell sample - Export app registrations, secrets, and certificates in Azure Active Directory tenant.
-description: PowerShell example that exports all app registrations, secrets, and certificates for the specified apps in your Azure Active Directory tenant.
+ Title: PowerShell sample - Export secrets and certificates for app registrations in Azure Active Directory tenant.
+description: PowerShell example that exports all secrets and certificates for the specified app registrations in your Azure Active Directory tenant.
Previously updated : 02/18/2021 Last updated : 03/09/2021
-# Export app registrations, secrets, and certificates
+# Export secrets and certificates for app registrations
-This PowerShell script example exports all app registrations, secrets, and certificates for the specified apps in your directory.
+This PowerShell script example exports all secrets and certificates for the specified app registrations from your directory into a CSV file.
[!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/az
## Sample script
-[!code-azurepowershell[main](~/powershell_scripts/application-management/export-all-app-registrations-secrets-and-certs.ps1 "Exports all app registrations, secrets, and certificates for the specified apps in your directory.")]
+[!code-azurepowershell[main](~/powershell_scripts/application-management/export-all-app-registrations-secrets-and-certs.ps1 "Exports all secrets and certificates for the specified app registrations in your directory.")]
## Script explanation
+The "Add-Member" command is responsible for creating the columns in the CSV file.
+If you'd prefer the export to be non-interactive, set the "$Path" variable directly in PowerShell to a CSV file path.
+| Command | Notes |
+|||
-| [Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication?view=azureadps-2.0&preserve-view=true) | Exports all app registrations, secrets, and certificates for the specified apps in your directory. |
+| [Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) | Retrieves an application from your directory. |
+| [Get-AzureADApplicationOwner](/powershell/module/azuread/Get-AzureADApplicationOwner) | Retrieves the owners of an application from your directory. |
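The script explanation above describes a build-columns-then-write-CSV shape. A minimal cross-language sketch of that shape (Python; the column names are assumptions, not the script's actual output):

```python
import csv
import io

# Sketch of the export shape described above: each credential becomes a row,
# the Add-Member step maps to defining the field names, and $Path maps to
# where the file is written. Column names here are assumptions.
FIELDS = ["DisplayName", "KeyId", "EndDate"]

def write_report(rows, fh):
    writer = csv.DictWriter(fh, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)

buf = io.StringIO()
write_report([{"DisplayName": "demo-app", "KeyId": "abc", "EndDate": "2021-12-31"}], buf)
print(buf.getvalue().splitlines()[0])  # DisplayName,KeyId,EndDate
```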
## Next steps
active-directory Powershell Export All Enterprise Apps Secrets And Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-export-all-enterprise-apps-secrets-and-certs.md
+
+ Title: PowerShell sample - Export secrets and certificates for enterprise apps in Azure Active Directory tenant.
+description: PowerShell example that exports all secrets and certificates for the specified enterprise apps in your Azure Active Directory tenant.
+++++++ Last updated : 03/09/2021++++
+# Export secrets and certificates for enterprise apps
+This PowerShell script example exports all secrets and certificates for the specified enterprise apps from your directory into a CSV file.
++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-management/export-all-enterprise-apps-secrets-and-certs.ps1 "Exports all secrets and certificates for the specified enterprise apps in your directory.")]
+
+## Script explanation
+
+The "Add-Member" command creates the columns in the CSV file.
+If you'd prefer the export to be non-interactive, set the "$Path" variable directly in PowerShell to a CSV file path.
+
+| Command | Notes |
+|||
+| [Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication?view=azureadps-2.0&preserve-view=true) | Retrieves an application from your directory. |
+| [Get-AzureADApplicationOwner](/powershell/module/azuread/Get-AzureADApplicationOwner?view=azureadps-2.0&preserve-view=true) | Retrieves the owners of an application from your directory. |
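To illustrate how the pieces described above fit together, here is a minimal sketch of the pattern: enumerate applications, attach values with `Add-Member` so they become CSV columns, and write to `$Path`. The property names and prompt text are illustrative assumptions, not the sample script's actual code, and a `Connect-AzureAD` session is assumed.

```powershell
# Assumes Connect-AzureAD has already been run against your tenant.
# Property names and the prompt below are illustrative assumptions.
$Path = Read-Host -Prompt "Enter the CSV output path"

$Results = foreach ($App in Get-AzureADApplication -All $true) {
    $Owner = Get-AzureADApplicationOwner -ObjectId $App.ObjectId -Top 1

    # Add-Member attaches each value as a NoteProperty; these
    # properties become the columns of the exported CSV file.
    $Row = New-Object -TypeName PSObject
    $Row | Add-Member -MemberType NoteProperty -Name "DisplayName" -Value $App.DisplayName
    $Row | Add-Member -MemberType NoteProperty -Name "ObjectId" -Value $App.ObjectId
    $Row | Add-Member -MemberType NoteProperty -Name "Owner" -Value $Owner.UserPrincipalName
    $Row
}

$Results | Export-Csv -Path $Path -NoTypeInformation
```

Because the columns come from `Add-Member` calls, adding another column is just one more `Add-Member` line per object.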
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Management, see [Azure AD PowerShell examples for Application Management](../app-management-powershell-samples.md).
active-directory Powershell Export Apps With Expriring Secrets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-expriring-secrets.md
+
+ Title: PowerShell sample - Export apps with expiring secrets and certificates in an Azure Active Directory tenant
+description: PowerShell example that exports all apps with expiring secrets and certificates for the specified apps in your Azure Active Directory tenant.
+++++++ Last updated : 03/09/2021++++
+# Export apps with expiring secrets and certificates
+
+This PowerShell script example exports all apps with expiring secrets and certificates for the specified apps in your directory into a CSV file.
++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-management/export-apps-with-expiring-secrets.ps1 "Exports all apps with expiring secrets and certificates for the specified apps in your directory.")]
+
+## Script explanation
+
+The "Add-Member" command creates the columns in the CSV file.
+If you'd prefer the export to be non-interactive, set the "$Path" variable directly in PowerShell to a CSV file path.
+
+| Command | Notes |
+|||
+| [Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication?view=azureadps-2.0&preserve-view=true) | Retrieves an application from your directory. |
+| [Get-AzureADApplicationOwner](/powershell/module/azuread/Get-AzureADApplicationOwner?view=azureadps-2.0&preserve-view=true) | Retrieves the owners of an application from your directory. |
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Management, see [Azure AD PowerShell examples for Application Management](../app-management-powershell-samples.md).
active-directory Powershell Export Apps With Secrets Beyond Required https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md
+
+ Title: PowerShell sample - Export apps with secrets and certificates expiring beyond the required date in an Azure Active Directory tenant
+description: PowerShell example that exports all apps with secrets and certificates expiring beyond the required date for the specified apps in your Azure Active Directory tenant.
+++++++ Last updated : 03/09/2021++++
+# Export apps with secrets and certificates expiring beyond the required date
+
+This PowerShell script example exports all app secrets and certificates expiring beyond the required date for the specified apps in your directory into a CSV file.
++
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+
+## Sample script
+
+[!code-azurepowershell[main](~/powershell_scripts/application-management/export-apps-with-secrets-beyond-required.ps1 "Exports all apps with secrets and certificates expiring beyond the required date for the specified apps in your directory.")]
+
+## Script explanation
+
+The "Add-Member" command creates the columns in the CSV file.
+If you'd prefer the export to be non-interactive, set the "$Path" variable directly in PowerShell to a CSV file path.
+
+| Command | Notes |
+|||
+| [Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication?view=azureadps-2.0&preserve-view=true) | Retrieves an application from your directory. |
+| [Get-AzureADApplicationOwner](/powershell/module/azuread/Get-AzureADApplicationOwner?view=azureadps-2.0&preserve-view=true) | Retrieves the owners of an application from your directory. |
+
+## Next steps
+
+For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview).
+
+For other PowerShell examples for Application Management, see [Azure AD PowerShell examples for Application Management](../app-management-powershell-samples.md).
active-directory Powershell Get All App Proxy Apps Basic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-basic.md
This PowerShell script example lists information about all Azure Active Director
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script

[!code-azurepowershell[main](~/powershell_scripts/application-proxy/get-all-appproxy-apps-basic.ps1 "Get all Application Proxy apps")]
active-directory Powershell Get All App Proxy Apps By Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-by-connector-group.md
This PowerShell script example lists information about all Azure Active Director
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script
active-directory Powershell Get All App Proxy Apps With Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-app-proxy-apps-with-policy.md
This PowerShell script example lists all the Azure Active Directory (Azure AD) A
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script
active-directory Powershell Get All Connectors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-connectors.md
This PowerShell script example lists all Azure Active Directory (Azure AD) Appli
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script
active-directory Powershell Get All Custom Domain No Cert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-custom-domain-no-cert.md
This PowerShell script example lists all Azure Active Directory (Azure AD) Appli
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script
active-directory Powershell Get All Custom Domains And Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-custom-domains-and-certs.md
This PowerShell script example lists all Azure Active Directory (Azure AD) Appli
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script
active-directory Powershell Get All Default Domain Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-default-domain-apps.md
This PowerShell script example lists all the Azure Active Directory (Azure AD) A
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script
active-directory Powershell Get All Wildcard Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-all-wildcard-apps.md
This PowerShell script example lists all Azure Active Directory (Azure AD) Appli
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script
active-directory Powershell Get Custom Domain Identical Cert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-custom-domain-identical-cert.md
This PowerShell script example lists all Azure Active Directory (Azure AD) Appli
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script
active-directory Powershell Get Custom Domain Replace Cert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-get-custom-domain-replace-cert.md
This PowerShell script example allows you to replace the certificate in bulk for
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script
active-directory Powershell Move All Apps To Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-move-all-apps-to-connector-group.md
This PowerShell script example moves all Azure Active Directory (Azure AD) Appli
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview) (AzureADPreview).
+This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
## Sample script
active-directory Pim Resource Roles Start Access Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-start-access-review.md
na
ms.devlang: na Previously updated : 02/11/2021 Last updated : 03/09/2021
# Create an access review of Azure resource roles in Privileged Identity Management
-Access to privileged Azure resource roles for employees changes over time. To reduce the risk associated with stale role assignments, you should regularly review access. You can use Azure Active Directory (Azure AD) Privileged Identity Management (PIM) to create access reviews for privileged Azure resource roles. You can also configure recurring access reviews that occur automatically.
+The need for access to privileged Azure resource roles by employees changes over time. To reduce the risk associated with stale role assignments, you should regularly review access. You can use Azure Active Directory (Azure AD) Privileged Identity Management (PIM) to create access reviews for privileged access to Azure resource roles. You can also configure recurring access reviews that occur automatically. This article describes how to create one or more access reviews.
-This article describes how to create one or more access reviews for privileged Azure resource roles.
-
-## Prerequisites
+## Prerequisite role
To create access reviews, you must be assigned to the [Owner](../../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) Azure role for the resource.

## Open access reviews
-1. Sign in to [Azure portal](https://portal.azure.com/) with a user that is a member of the Privileged Role Administrator role.
+1. Sign in to [Azure portal](https://portal.azure.com/) with a user that is assigned to one of the prerequisite roles.
1. Open **Azure AD Privileged Identity Management**.
active-directory Powershell For Azure Ad Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/powershell-for-azure-ad-roles.md
This article contains instructions for using Azure Active Directory (Azure AD) P
![Find the organization ID in the properties for the Azure AD organization](./media/powershell-for-azure-ad-roles/tenant-id-for-Azure-ad-org.png)

> [!Note]
-> The following sections are simple examples that can help get you up and running. You can find more detailed documentation regarding the following cmdlets at [https://docs.microsoft.com/powershell/module/azuread/?view=azureadps-2.0-preview#privileged_role_management&preserve-view=true](/powershell/module/azuread/?view=azureadps-2.0-preview#privileged_role_management&preserve-view=true). However, you must replace "azureResources" in the providerID parameter with "aadRoles". You will also need to remember to use the Tenant ID for your Azure AD organization as the resourceId parameter.
+> The following sections are simple examples that can help get you up and running. You can find more detailed documentation regarding the following cmdlets at [https://docs.microsoft.com/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#privileged_role_management](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#privileged_role_management). However, you must replace "azureResources" in the providerID parameter with "aadRoles". You will also need to remember to use the Tenant ID for your Azure AD organization as the resourceId parameter.
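As a sketch of the substitutions the note above describes (a placeholder tenant ID is used here; replace it with the ID of your Azure AD organization), retrieving Azure AD role definitions looks like this:

```powershell
# Requires the AzureADPreview module and an existing Connect-AzureAD session.
# The tenant ID below is a placeholder; use your Azure AD organization's ID.
$tenantId = "00000000-0000-0000-0000-000000000000"

# "aadRoles" replaces "azureResources" as the providerId, and the
# tenant ID is passed as the resourceId, as described in the note above.
Get-AzureADMSPrivilegedRoleDefinition -ProviderId "aadRoles" -ResourceId $tenantId
```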
## Retrieving role definitions
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
The JSON file is downloaded in minified format to reduce the size of the downloa
Here are some sample commands to work with the JSON file by using PowerShell. You can use any programming language that you're comfortable with.
-First, [read the JSON file](/powershell/module/microsoft.powershell.utility/convertfrom-json?view=powershell-7.1) by running this command:
+First, [read the JSON file](/powershell/module/microsoft.powershell.utility/convertfrom-json) by running this command:
`$JSONContent = Get-Content -Path "<PATH TO THE PROVISIONING LOGS FILE>" | ConvertFrom-Json`
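Once the file is parsed, the resulting objects can be inspected like any other PowerShell objects. The snippet below is a minimal sketch; the property name used for grouping is an assumption about the log schema, not a documented field.

```powershell
# Assumes $JSONContent was produced by the Get-Content | ConvertFrom-Json
# command above; the file path placeholder is unchanged.
$JSONContent = Get-Content -Path "<PATH TO THE PROVISIONING LOGS FILE>" | ConvertFrom-Json

# Count the records in the download.
$JSONContent.Count

# Group records by an assumed property to summarize outcomes.
$JSONContent | Group-Object -Property resultType | Select-Object Name, Count
```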
active-directory Reference Powershell Reporting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/reference-powershell-reporting.md
# Azure AD PowerShell cmdlets for reporting > [!NOTE]
-> These PowerShell cmdlets currently only work with the [Azure AD Preview](/powershell/module/azuread/?view=azureadps-2.0-preview#directory_auditing) Module. Please note that the preview module is not suggested for production use.
+> These PowerShell cmdlets currently only work with the [Azure AD Preview](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#directory_auditing) Module. Please note that the preview module is not suggested for production use.
To install the public preview release, use the following.
active-directory Admin Units Faq Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-faq-troubleshoot.md
# Azure AD administrative units: Troubleshooting and FAQ
-For more granular administrative control in Azure Active Directory (Azure AD), you can assign users to an Azure AD role with a scope that's limited to one or more administrative units. For sample PowerShell scripts for common tasks, see [Work with administrative units](/powershell/azure/active-directory/working-with-administrative-units?view=azureadps-2.0&preserve-view=true).
+For more granular administrative control in Azure Active Directory (Azure AD), you can assign users to an Azure AD role with a scope that's limited to one or more administrative units. For sample PowerShell scripts for common tasks, see [Work with administrative units](/powershell/azure/active-directory/working-with-administrative-units).
## Frequently asked questions
Administrative units, such as organizational units in Windows Server Active Dire
**Q: Are administrative units supported in PowerShell and the Graph API?**
-**A:** Yes. You'll find support for administrative units in [PowerShell cmdlet documentation](/powershell/module/Azuread/?view=azureadps-2.0&preserve-view=true) and [sample scripts](/powershell/azure/active-directory/working-with-administrative-units?view=azureadps-2.0&preserve-view=true).
+**A:** Yes. You'll find support for administrative units in [PowerShell cmdlet documentation](/powershell/module/Azuread/) and [sample scripts](/powershell/azure/active-directory/working-with-administrative-units).
-Find support for the [administrativeUnit resource type](/graph/api/resources/administrativeunit?view=graph-rest-1.0&preserve-view=true) in Microsoft Graph.
+Find support for the [administrativeUnit resource type](/graph/api/resources/administrativeunit) in Microsoft Graph.
## Next steps
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/administrative-units.md
To use administrative units, you need an Azure Active Directory Premium license
You can manage administrative units by using the Azure portal, PowerShell cmdlets and scripts, or Microsoft Graph. For more information, see:

- [Create, remove, populate, and add roles to administrative units](admin-units-manage.md): Includes complete how-to procedures.
-- [Work with administrative units](/powershell/azure/active-directory/working-with-administrative-units?view=azureadps-2.0&preserve-view=true): Covers how to work with administrative units by using PowerShell.
-- [Administrative unit Graph support](/graph/api/resources/administrativeunit?view=graph-rest-1.0&preserve-view=true): Provides detailed documentation on Microsoft Graph for administrative units.
+- [Work with administrative units](/powershell/azure/active-directory/working-with-administrative-units): Covers how to work with administrative units by using PowerShell.
+- [Administrative unit Graph support](/graph/api/resources/administrativeunit): Provides detailed documentation on Microsoft Graph for administrative units.
### Plan your administrative units
active-directory M365 Workload Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/m365-workload-docs.md
All products in Microsoft 365 can be managed with administrative roles in Azure
Microsoft 365 service | Role content | API content
- | | --
-Admin roles in Office 365 and Microsoft 365 business plans | [Microsoft 365 admin roles](/office365/admin/add-users/about-admin-roles?view=o365-worldwide&preserve-view=true) | Not available
-Azure Active Directory (Azure AD) and Azure AD Identity Protection| [Azure AD admin roles](permissions-reference.md) | [Graph API](/graph/api/overview?view=graph-rest-1.0&preserve-view=true&preserve-view=true)<br>[Fetch role assignments](/graph/api/directoryrole-list?view=graph-rest-1.0&preserve-view=true)
-Exchange Online| [Exchange role-based access control](/exchange/understanding-role-based-access-control-exchange-2013-help) | [PowerShell for Exchange](/powershell/module/exchange/role-based-access-control/add-managementroleentry?view=exchange-ps&preserve-view=true)<br>[Fetch role assignments](/powershell/module/exchange/role-based-access-control/get-rolegroup?view=exchange-ps&preserve-view=true)
-SharePoint Online | [Azure AD admin roles](permissions-reference.md)<br>Also [About the SharePoint admin role in Microsoft 365](/sharepoint/sharepoint-admin-role) | [Graph API](/graph/api/overview?view=graph-rest-1.0&preserve-view=true)<br>[Fetch role assignments](/graph/api/directoryrole-list?view=graph-rest-1.0&preserve-view=true)
-Teams/Skype for Business | [Azure AD admin roles](permissions-reference.md) | [Graph API](/graph/api/overview?view=graph-rest-1.0&preserve-view=true)<br>[Fetch role assignments](/graph/api/directoryrole-list?view=graph-rest-1.0&preserve-view=true)
-Security & Compliance Center (Office 365 Advanced Threat Protection, Exchange Online Protection, Information Protection) | [Office 365 admin roles](/office365/SecurityCompliance/permissions-in-the-security-and-compliance-center) | [Exchange PowerShell](/powershell/module/exchange/role-based-access-control/add-managementroleentry?view=exchange-ps&preserve-view=true)<br>[Fetch role assignments](/powershell/module/exchange/role-based-access-control/get-rolegroup?view=exchange-ps&preserve-view=true)
-Secure Score | [Azure AD admin roles](permissions-reference.md) | [Graph API](/graph/api/overview?view=graph-rest-1.0&preserve-view=true)<br>[Fetch role assignments](/graph/api/directoryrole-list?view=graph-rest-1.0&preserve-view=true)
+Admin roles in Office 365 and Microsoft 365 business plans | [Microsoft 365 admin roles](/office365/admin/add-users/about-admin-roles) | Not available
+Azure Active Directory (Azure AD) and Azure AD Identity Protection| [Azure AD admin roles](permissions-reference.md) | [Graph API](/graph/api/overview)<br>[Fetch role assignments](/graph/api/directoryrole-list)
+Exchange Online| [Exchange role-based access control](/exchange/understanding-role-based-access-control-exchange-2013-help) | [PowerShell for Exchange](/powershell/module/exchange/role-based-access-control/add-managementroleentry)<br>[Fetch role assignments](/powershell/module/exchange/role-based-access-control/get-rolegroup)
+SharePoint Online | [Azure AD admin roles](permissions-reference.md)<br>Also [About the SharePoint admin role in Microsoft 365](/sharepoint/sharepoint-admin-role) | [Graph API](/graph/api/overview)<br>[Fetch role assignments](/graph/api/directoryrole-list)
+Teams/Skype for Business | [Azure AD admin roles](permissions-reference.md) | [Graph API](/graph/api/overview)<br>[Fetch role assignments](/graph/api/directoryrole-list)
+Security & Compliance Center (Office 365 Advanced Threat Protection, Exchange Online Protection, Information Protection) | [Office 365 admin roles](/office365/SecurityCompliance/permissions-in-the-security-and-compliance-center) | [Exchange PowerShell](/powershell/module/exchange/role-based-access-control/add-managementroleentry)<br>[Fetch role assignments](/powershell/module/exchange/role-based-access-control/get-rolegroup)
+Secure Score | [Azure AD admin roles](permissions-reference.md) | [Graph API](/graph/api/overview)<br>[Fetch role assignments](/graph/api/directoryrole-list)
Compliance Manager | [Compliance Manager roles](/office365/securitycompliance/meet-data-protection-and-regulatory-reqs-using-microsoft-cloud#permissions-and-role-based-access-control) | Not available
-Azure Information Protection | [Azure AD admin roles](permissions-reference.md) | [Graph API](/graph/api/overview?view=graph-rest-1.0&preserve-view=true)<br>[Fetch role assignments](/graph/api/directoryrole-list?view=graph-rest-1.0&preserve-view=true)
+Azure Information Protection | [Azure AD admin roles](permissions-reference.md) | [Graph API](/graph/api/overview)<br>[Fetch role assignments](/graph/api/directoryrole-list)
Microsoft Cloud App Security | [Role-based access control](/cloud-app-security/manage-admins) | [API reference](/cloud-app-security/api-tokens)
Azure Advanced Threat Protection | [Azure ATP role groups](/azure-advanced-threat-protection/atp-role-groups) | Not available
Windows Defender Advanced Threat Protection | [Windows Defender ATP role-based access control](/windows/security/threat-protection/windows-defender-atp/rbac-windows-defender-advanced-threat-protection) | Not available
-Privileged Identity Management | [Azure AD admin roles](permissions-reference.md) | [Graph API](/graph/api/overview?view=graph-rest-1.0&preserve-view=true)<br>[Fetch role assignments](/graph/api/directoryrole-list?view=graph-rest-1.0&preserve-view=true)
+Privileged Identity Management | [Azure AD admin roles](permissions-reference.md) | [Graph API](/graph/api/overview)<br>[Fetch role assignments](/graph/api/directoryrole-list)
Intune | [Intune role-based access control](/intune/role-based-access-control) | [Graph API](/graph/api/resources/intune-rbac-conceptual?view=graph-rest-beta&preserve-view=true)<br>[Fetch role assignments](/graph/api/intune-rbac-roledefinition-list?view=graph-rest-beta&preserve-view=true)
-Managed Desktop | [Azure AD admin roles](permissions-reference.md) | [Graph API](/graph/api/overview?view=graph-rest-1.0&preserve-view=true)<br>[Fetch role assignments](/graph/api/directoryrole-list?view=graph-rest-1.0&preserve-view=true)
+Managed Desktop | [Azure AD admin roles](permissions-reference.md) | [Graph API](/graph/api/overview)<br>[Fetch role assignments](/graph/api/directoryrole-list)
## Next steps
active-directory Bpanda Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/bpanda-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
3. To establish a successful connection between Azure AD and Bpanda, retrieve an access token in either of the following ways.
-Use this command on **Linux**
+* Use this command in **Linux**
```
curl -u scim:{Your client secret} --location --request POST '{Your tenant specific authentication endpoint}/protocol/openid-connect/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=client_credentials'
+```
+
+* or this command in **PowerShell**
-or this command using **PowerShell**
-
+```
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("scim:{0}" -f {Your client secret})))
$headers=@{}
$headers.Add("Content-Type", "application/x-www-form-urlencoded")
active-directory Linkedin Learning Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/linkedin-learning-provisioning-tutorial.md
- Title: 'Tutorial: Configure LinkedIn Learning for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to automatically provision and de-provision user accounts from Azure AD to LinkedIn Learning.
--
-writer: Zhchia
------ Previously updated : 06/30/2020---
-# Tutorial: Configure LinkedIn Learning for automatic user provisioning
-
-This tutorial describes the steps you need to perform in both LinkedIn Learning and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [LinkedIn Learning](https://learning.linkedin.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
--
-## Capabilities supported
-> [!div class="checklist"]
-> * Create users in LinkedIn Learning
-> * Remove users in LinkedIn Learning when they do not require access anymore
-> * Keep user attributes synchronized between Azure AD and LinkedIn Learning
-> * Provision groups and group memberships in LinkedIn Learning
-> * [Single sign-on](linkedinlearning-tutorial.md) to LinkedIn Learning (recommended)
-
-## Prerequisites
-
-The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-
-* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
-* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
-* Approval and SCIM enabled for LinkedIn Learning (contact by email).
-
-## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and LinkedIn Learning](../app-provisioning/customize-application-attributes.md).
-
-## Step 2. Configure LinkedIn Learning to support provisioning with Azure AD
-1. Sign into [LinkedIn Learning Settings](https://www.linkedin.com/learning-admin/settings/global). Select **SCIM Setup** then select **Add new SCIM configuration**.
-
- ![SCIM Setup configuration](./media/linkedin-learning-provisioning-tutorial/learning-scim-settings.png)
-
-2. Enter a name for the configuration, and set **Auto-assign licenses** to On. Then click **Generate token**.
-
- ![SCIM configuration name](./media/linkedin-learning-provisioning-tutorial/learning-scim-configuration.png)
-
-3. After the configuration is created, an **Access token** should be generated. Copy this token for later use.
-
- ![SCIM access token](./media/linkedin-learning-provisioning-tutorial/learning-scim-token.png)
-
-4. You may reissue any existing configurations (which will generate a new token) or remove them.
-
-## Step 3. Add LinkedIn Learning from the Azure AD application gallery
-
-Add LinkedIn Learning from the Azure AD application gallery to start managing provisioning to LinkedIn Learning. If you have previously set up LinkedIn Learning for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
-
-## Step 4. Define who will be in scope for provisioning
-
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-
-* When assigning users and groups to LinkedIn Learning, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
--
-## Step 5. Configure automatic user provisioning to LinkedIn Learning
-
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in LinkedIn Learning based on user and/or group assignments in Azure AD.
-
-### To configure automatic user provisioning for LinkedIn Learning in Azure AD:
-
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **LinkedIn Learning**.
-
- ![The LinkedIn Learning link in the Applications list](common/all-applications.png)
-
-3. Select the **Provisioning** tab.
-
- ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-
-4. Set the **Provisioning Mode** to **Automatic**.
-
- ![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-
-5. Under the **Admin Credentials** section, input `https://api.linkedin.com/scim` in **Tenant URL**. Input the access token value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to LinkedIn Learning. If the connection fails, ensure your LinkedIn Learning account has Admin permissions and try again.
-
- ![Screenshot shows the Admin Credentials dialog box, where you can enter your Tenant U R L and Secret Token.](./media/linkedin-learning-provisioning-tutorial/provisioning.png)
-
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
-
- ![Notification Email](common/provisioning-notification-email.png)
-
-7. Select **Save**.
-
-8. Under the **Mappings** section, select **Provision Azure Active Directory Users**.
-
-9. Review the user attributes that are synchronized from Azure AD to LinkedIn Learning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in LinkedIn Learning for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the LinkedIn Learning API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
-
- |Attribute|Type|Supported for filtering|
- ||||
- |externalId|String|&check;|
- |userName|String|
- |name.givenName|String|
- |name.familyName|String|
- |displayName|String|
- |addresses[type eq "work"].locality|String|
- |title|String|
- |emails[type eq "work"].value|String|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
-
-10. Under the **Mappings** section, select **Provision Azure Active Directory Groups**.
-
-11. Review the group attributes that are synchronized from Azure AD to LinkedIn Learning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in LinkedIn Learning for update operations. Select the **Save** button to commit any changes.
-
- |Attribute|Type|Supported for filtering|
- ||||
- |displayName|String|&check;|
- |members|Reference|
- |externalId|String|
-
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-
-13. To enable the Azure AD provisioning service for LinkedIn Learning, change the **Provisioning Status** to **On** in the **Settings** section.
-
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-
-14. Define the users and/or groups that you would like to provision to LinkedIn Learning by choosing the desired values in **Scope** in the **Settings** section.
-
- ![Provisioning Scope](common/provisioning-scope.png)
-
-15. When you are ready to provision, click **Save**.
-
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
-
-This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
-
-## Step 6. Monitor your deployment
-Once you've configured provisioning, use the following resources to monitor your deployment:
-
-1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
-
-## Additional resources
-
-* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
-* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-
-## Next steps
-
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Tutorial List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tutorial-list.md
-+ Previously updated : 12/04/2020 Last updated : 03/09/2021 -+
To find more tutorials, use the table of contents on the left.
## Next steps
-To learn more about application management, see [What is application management](../manage-apps/what-is-application-management.md).
+To learn more about application management, see [What is application management](../manage-apps/what-is-application-management.md).
advisor Advisor Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/advisor/advisor-cost-recommendations.md
Although certain application scenarios can result in low utilization by design,
The recommended actions are shut down or resize, specific to the resource being evaluated. The advanced evaluation model in Advisor considers shutting down virtual machines when all of these statements are true: -- P95th of the maximum of maximum value of CPU utilization is less than 3%.
+- P95th of the maximum value of CPU utilization is less than 3%.
- Network utilization is less than 2% over a seven-day period. - Memory pressure is lower than the threshold values
To learn more about Advisor recommendations, see:
* [Advisor performance recommendations](advisor-performance-recommendations.md) * [Advisor high availability recommendations](advisor-high-availability-recommendations.md) * [Advisor security recommendations](advisor-security-recommendations.md)
-* [Advisor operational excellence recommendations](advisor-operational-excellence-recommendations.md)
+* [Advisor operational excellence recommendations](advisor-operational-excellence-recommendations.md)
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-disk-customer-managed-keys.md
When new node pools are added to the cluster created above, the customer-managed
OS disk encryption key will be used to encrypt data disk if key is not provided for data disk from v1.17.2, and you can also encrypt AKS data disks with your other keys. > [!IMPORTANT]
-> Ensure you have the proper AKS credentials. The Service principal will need to have contributor access to the resource group where the diskencryptionset is deployed. Otherwise, you will get an error suggesting that the service principal does not have permissions.
+> Ensure you have the proper AKS credentials. The managed identity will need to have contributor access to the resource group where the diskencryptionset is deployed. Otherwise, you will get an error suggesting that the managed identity does not have permissions.
```azurecli-interactive # Retrieve your Azure Subscription Id from id property as shown below
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-disk-volume.md
You also need the Azure CLI version 2.0.59 or later installed and configured. Ru
## Create an Azure disk
-When you create an Azure disk for use with AKS, you can create the disk resource in the **node** resource group. This approach allows the AKS cluster to access and manage the disk resource. If you instead create the disk in a separate resource group, you must grant the Azure Kubernetes Service (AKS) service principal for your cluster the `Contributor` role to the disk's resource group. Alternatively, you can use the system assigned managed identity for permissions instead of the service principal. For more information, see [Use managed identities](use-managed-identity.md).
+When you create an Azure disk for use with AKS, you can create the disk resource in the **node** resource group. This approach allows the AKS cluster to access and manage the disk resource. If you instead create the disk in a separate resource group, you must grant the Azure Kubernetes Service (AKS) managed identity for your cluster the `Contributor` role to the disk's resource group.
For this article, create the disk in the node resource group. First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster name *myAKSCluster* in the resource group name *myResourceGroup*:
aks Cluster Container Registry Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-container-registry-integration.md
Last updated 01/08/2021
When you're using Azure Container Registry (ACR) with Azure Kubernetes Service (AKS), an authentication mechanism needs to be established. This operation is implemented as part of the CLI and Portal experience by granting the required permissions to your ACR. This article provides examples for configuring authentication between these two Azure services.
-You can set up the AKS to ACR integration in a few simple commands with the Azure CLI. This integration assigns the AcrPull role to the service principal associated to the AKS Cluster.
+You can set up the AKS to ACR integration in a few simple commands with the Azure CLI. This integration assigns the AcrPull role to the managed identity associated with the AKS cluster.
> [!NOTE] > This article covers automatic authentication between AKS and ACR. If you need to pull an image from a private external registry, use an [image pull secret][Image Pull Secret].
These examples require:
* **Owner** or **Azure account administrator** role on the **Azure subscription** * Azure CLI version 2.7.0 or later
-To avoid needing an **Owner** or **Azure account administrator** role, you can configure a service principal manually or use an existing service principal to authenticate ACR from AKS. For more information, see [ACR authentication with service principals](../container-registry/container-registry-auth-service-principal.md) or [Authenticate from Kubernetes with a pull secret](../container-registry/container-registry-auth-kubernetes.md).
+To avoid needing an **Owner** or **Azure account administrator** role, you can configure a managed identity manually or use an existing managed identity to authenticate ACR from AKS. For more information, see [Use an Azure managed identity to authenticate to an Azure container registry](../container-registry/container-registry-authentication-managed-identity.md).
## Create a new AKS cluster with ACR integration
-You can set up AKS and ACR integration during the initial creation of your AKS cluster. To allow an AKS cluster to interact with ACR, an Azure Active Directory **service principal** is used. The following CLI command allows you to authorize an existing ACR in your subscription and configures the appropriate **ACRPull** role for the service principal. Supply valid values for your parameters below.
+You can set up AKS and ACR integration during the initial creation of your AKS cluster. To allow an AKS cluster to interact with ACR, an Azure Active Directory **managed identity** is used. The following CLI command allows you to authorize an existing ACR in your subscription and configures the appropriate **ACRPull** role for the managed identity. Supply valid values for your parameters below.
```azurecli # set this to the name of your Azure Container Registry. It must be globally unique
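As a hedged sketch of the full flow (the resource names below are placeholders, and `--attach-acr` is the parameter that performs the AcrPull role assignment for the cluster's managed identity):

```shell
# Placeholder names -- substitute your own. ACR names must be globally unique.
RG=myResourceGroup
ACR=myContainerRegistry007
AKS=myAKSCluster

# Create the registry, then create the cluster with ACR integration enabled.
az acr create --resource-group "$RG" --name "$ACR" --sku Basic
az aks create --resource-group "$RG" --name "$AKS" --attach-acr "$ACR"

# For an existing cluster, the same integration can be added afterwards.
az aks update --resource-group "$RG" --name "$AKS" --attach-acr "$ACR"
```

These commands assume an authenticated Azure CLI session with sufficient permissions on the subscription.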
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/configure-azure-cni.md
This article shows you how to use *Azure CNI* networking to create and use a vir
* The virtual network for the AKS cluster must allow outbound internet connectivity. * AKS clusters may not use `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range, pod address range, or cluster virtual network address range.
-* The service principal used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
+* The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
* `Microsoft.Network/virtualNetworks/subnets/join/action` * `Microsoft.Network/virtualNetworks/subnets/read`
-* Instead of a service principal, you can use the system assigned managed identity for permissions. For more information, see [Use managed identities](use-managed-identity.md).
* The subnet assigned to the AKS node pool cannot be a [delegated subnet](../virtual-network/subnet-delegation-overview.md). ## Plan IP addressing for your cluster
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/configure-kubenet.md
This article shows you how to use *kubenet* networking to create and use a virtu
* The virtual network for the AKS cluster must allow outbound internet connectivity. * Don't create more than one AKS cluster in the same subnet. * AKS clusters may not use `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range, pod address range or cluster virtual network address range.
-* The service principal used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) role on the subnet within your virtual network. You must also have the appropriate permissions, such as the subscription owner, to create a service principal and assign it permissions. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
+* The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) role on the subnet within your virtual network. You must also have the appropriate permissions, such as the subscription owner, to create a cluster identity and assign it permissions. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
* `Microsoft.Network/virtualNetworks/subnets/join/action` * `Microsoft.Network/virtualNetworks/subnets/read`
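The two subnet permissions above can be packaged as a custom role. A minimal sketch of such a role definition (the role name is hypothetical, and the `<subscription-id>` scope is a placeholder to fill in):

```shell
# Write a custom role definition covering only the two subnet permissions
# required by the cluster identity. Name and scope are placeholders.
cat > aks-subnet-role.json <<'EOF'
{
  "Name": "AKS Subnet Joiner",
  "Description": "Join and read subnets for AKS cluster networking.",
  "Actions": [
    "Microsoft.Network/virtualNetworks/subnets/join/action",
    "Microsoft.Network/virtualNetworks/subnets/read"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}
EOF

# Register the custom role with Azure RBAC.
az role definition create --role-definition @aks-subnet-role.json
```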
aks Ingress Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-static-ip.md
az network public-ip create --resource-group MC_myResourceGroup_myAKSCluster_eas
``` > [!NOTE]
-> The above commands create an IP address that will be deleted if you delete your AKS cluster. Alternatively, you can create an IP address in a different resource group which can be managed separately from your AKS cluster. If you create an IP address in a different resource group, ensure the service principal used by the AKS cluster has delegated permissions to the other resource group, such as *Network Contributor*. For more information, see [Use a static public IP address and DNS label with the AKS load balancer][aks-static-ip].
+> The above commands create an IP address that will be deleted if you delete your AKS cluster. Alternatively, you can create an IP address in a different resource group which can be managed separately from your AKS cluster. If you create an IP address in a different resource group, ensure the cluster identity used by the AKS cluster has delegated permissions to the other resource group, such as *Network Contributor*. For more information, see [Use a static public IP address and DNS label with the AKS load balancer][aks-static-ip].
Now deploy the *nginx-ingress* chart with Helm. For added redundancy, two replicas of the NGINX ingress controllers are deployed with the `--set controller.replicaCount` parameter. To fully benefit from running replicas of the ingress controller, make sure there's more than one node in your AKS cluster.
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/internal-lb.md
This article assumes that you have an existing AKS cluster. If you need an AKS c
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-The AKS cluster service principal needs permission to manage network resources if you use an existing subnet or resource group. For information see [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)][use-kubenet] or [Configure Azure CNI networking in Azure Kubernetes Service (AKS)][advanced-networking]. If you are configuring your load balancer to use an [IP address in a different subnet][different-subnet], ensure the the AKS cluster service principal also has read access to that subnet.
+The AKS cluster identity needs permission to manage network resources if you use an existing subnet or resource group. For information, see [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)][use-kubenet] or [Configure Azure CNI networking in Azure Kubernetes Service (AKS)][advanced-networking]. If you are configuring your load balancer to use an [IP address in a different subnet][different-subnet], ensure the AKS cluster identity also has read access to that subnet.
-Instead of a service principal, you can also use the system assigned managed identity for permissions. For more information, see [Use managed identities](use-managed-identity.md). For more information on permissions, see [Delegate AKS access to other Azure resources][aks-sp].
+For more information on permissions, see [Delegate AKS access to other Azure resources][aks-sp].
## Create an internal load balancer
internal-app LoadBalancer 10.1.15.188 10.0.0.35 80:31669/TCP 1m
``` > [!NOTE]
-> You may need to grant the service principal for your AKS cluster the *Network Contributor* role to the resource group where your Azure virtual network resources are deployed. View the service principal with [az aks show][az-aks-show], such as `az aks show --resource-group myResourceGroup --name myAKSCluster --query "servicePrincipalProfile.clientId"`. To create a role assignment, use the [az role assignment create][az-role-assignment-create] command.
+> You may need to grant the cluster identity for your AKS cluster the *Network Contributor* role to the resource group where your Azure virtual network resources are deployed. View the cluster identity with [az aks show][az-aks-show], such as `az aks show --resource-group myResourceGroup --name myAKSCluster --query "identity"`. To create a role assignment, use the [az role assignment create][az-role-assignment-create] command.
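A sketch of that role assignment, assuming a cluster that uses a managed identity (the resource and group names below are placeholders):

```shell
# Placeholder names -- substitute your own.
RG=myResourceGroup
AKS=myAKSCluster
VNET_RG=myNetworkResourceGroup

# Look up the principal ID of the cluster's managed identity.
IDENTITY_ID=$(az aks show --resource-group "$RG" --name "$AKS" \
  --query "identity.principalId" --output tsv)

# Grant the identity Network Contributor on the resource group that holds
# the virtual network resources.
SCOPE=$(az group show --name "$VNET_RG" --query id --output tsv)
az role assignment create --assignee "$IDENTITY_ID" \
  --role "Network Contributor" --scope "$SCOPE"
```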
## Specify a different subnet
aks Kubernetes Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-service-principal.md
For information on how to update the credentials, see [Update or rotate the cred
[aad-service-principal]:../active-directory/develop/app-objects-and-service-principals.md [acr-intro]: ../container-registry/container-registry-intro.md [az-ad-sp-create]: /cli/azure/ad/sp#az-ad-sp-create-for-rbac
+[az-ad-sp-delete]: /cli/azure/ad/sp#az_ad_sp_delete
[azure-load-balancer-overview]: ../load-balancer/load-balancer-overview.md [install-azure-cli]: /cli/azure/install-azure-cli [service-principal]:../active-directory/develop/app-objects-and-service-principals.md
aks Kubernetes Walkthrough Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-portal.md
To create an AKS cluster, complete the following steps:
4. On the **Node pools** page, keep the default options. At the bottom of the screen, click **Next: Authentication**. > [!CAUTION]
- > Creating new AAD Service Principals may take multiple minutes to propagate and become available causing Service Principal not found errors and validation failures in Azure portal. If you hit this please visit [Troubleshoot common Azure Kubernetes Service problems](troubleshooting.md#received-an-error-saying-my-service-principal-wasnt-found-or-is-invalid-when-i-try-to-create-a-new-cluster) for mitigation.
+ > Creating a new cluster identity may take multiple minutes to propagate and become available, causing "Service Principal not found" errors and validation failures in the Azure portal. If you hit this, see [Troubleshoot common Azure Kubernetes Service problems](troubleshooting.md#received-an-error-saying-my-service-principal-wasnt-found-or-is-invalid-when-i-try-to-create-a-new-cluster) for mitigation.
5. On the **Authentication** page, configure the following options:
- - Create a new service principal by leaving the **Service Principal** field with **(new) default service principal**. Or you can choose *Configure service principal* to use an existing one. If you use an existing one, you will need to provide the SPN client ID and secret.
+ - Create a new cluster identity by leaving the **Authentication** field set to **System-assigned managed identity**. Alternatively, you can choose **Service Principal** to use a service principal. Select *(new) default service principal* to create a default service principal, or *Configure service principal* to use an existing one. If you use an existing one, you will need to provide the SPN client ID and secret.
- Enable the option for Kubernetes role-based access control (Kubernetes RBAC). This will provide more fine-grained control over access to the Kubernetes resources deployed in your AKS cluster.
- Alternatively, you can use a managed identity instead of a service principal. See [use managed identities](use-managed-identity.md) for more information.
- By default, *Basic* networking is used, and Azure Monitor for containers is enabled. Click **Review + create** and then **Create** when validation completes. It takes a few minutes to create the AKS cluster. When your deployment is complete, click **Go to resource**, or browse to the AKS cluster resource group, such as *myResourceGroup*, and select the AKS resource, such as *myAKSCluster*. The AKS cluster dashboard is shown, as in this example:
aks Kubernetes Walkthrough Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-rm-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
- This article requires version 2.0.61 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. -- To create an AKS cluster using a Resource Manager template, you provide an SSH public key and Azure Active Directory service principal. Alternatively, you can use a [managed identity](use-managed-identity.md) instead of a service principal for permissions. If you need either of these resources, see the following section; otherwise skip to the [Review the template](#review-the-template) section.
+- To create an AKS cluster using a Resource Manager template, you provide an SSH public key. If you need this resource, see the following section; otherwise skip to the [Review the template](#review-the-template) section.
### Create an SSH key pair
ssh-keygen -t rsa -b 2048
For more information about creating SSH keys, see [Create and manage SSH keys for authentication in Azure][ssh-keys].
-### Create a service principal
-
-To allow an AKS cluster to interact with other Azure resources, an Azure Active Directory service principal is used. Create a service principal using the [az ad sp create-for-rbac][az-ad-sp-create-for-rbac] command. The `--skip-assignment` parameter limits any additional permissions from being assigned. By default, this service principal is valid for one year. Note that you can use a managed identity instead of a service principal. For more information, see [Use managed identities](use-managed-identity.md).
-
-```azurecli-interactive
-az ad sp create-for-rbac --skip-assignment
-```
-
-The output is similar to the following example:
-
-```json
-{
- "appId": "8b1ede42-d407-46c2-a1bc-6b213b04295f",
- "displayName": "azure-cli-2019-04-19-21-42-11",
- "name": "http://azure-cli-2019-04-19-21-42-11",
- "password": "27e5ac58-81b0-46c1-bd87-85b4ef622682",
- "tenant": "73f978cf-87f2-41bf-92ab-2e7ce012db57"
-}
-```
-
-Make a note of the *appId* and *password*. These values are used in the following steps.
- ## Review the template The template used in this quickstart is from [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/101-aks/).
For more AKS samples, see the [AKS quickstart templates][aks-quickstart-template
* **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myakscluster*.
* **Linux Admin Username**: Enter a username to connect using SSH, such as *azureuser*.
* **SSH RSA Public Key**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*).
- * **Service Principal Client Id**: Copy and paste the *appId* of your service principal from the `az ad sp create-for-rbac` command.
- * **Service Principal Client Secret**: Copy and paste the *password* of your service principal from the `az ad sp create-for-rbac` command.
- * **I agree to the terms and conditions state above**: Check this box to agree.
![Resource Manager template to create an Azure Kubernetes Service cluster in the portal](./media/kubernetes-walkthrough-rm-template/create-aks-cluster-using-template-portal.png)
-3. Select **Purchase**.
+3. Select **Review + Create**.
It takes a few minutes to create the AKS cluster. Wait for the cluster to be successfully deployed before you move on to the next step.
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/limit-egress-traffic.md
Now an AKS cluster can be deployed into the existing virtual network. We'll also
### Create a service principal with access to provision inside the existing virtual network
-A service principal is used by AKS to create cluster resources. The service principal that is passed at create time is used to create underlying AKS resources such as Storage resources, IPs, and Load Balancers used by AKS (you may also use a [managed identity](use-managed-identity.md) instead). If not granted the appropriate permissions below, you won't be able to provision the AKS Cluster.
+A cluster identity (managed identity or service principal) is used by AKS to create cluster resources. A service principal passed at create time is used to create underlying AKS resources such as storage resources, IPs, and load balancers (you may also use a [managed identity](use-managed-identity.md) instead). If it isn't granted the appropriate permissions below, you won't be able to provision the AKS cluster.
```azurecli
# Create SP and Assign Permission to Virtual Network
aks Operator Best Practices Container Image Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-container-image-management.md
This article focused on how to secure your containers. To implement some of thes
* [Automate image builds on base image update with Azure Container Registry Tasks][acr-base-image-update] <!-- EXTERNAL LINKS -->
-[azure-pipelines]: /azure/devops/pipelines/?view=vsts
+[azure-pipelines]: /azure/devops/pipelines/
[twistlock]: https://www.twistlock.com/
[aqua]: https://www.aquasec.com/
aks Operator Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-network.md
The Container Networking Interface (CNI) is a vendor-neutral protocol that lets
A notable benefit of Azure CNI networking for production is the network model allows for separation of control and management of resources. From a security perspective, you often want different teams to manage and secure those resources. Azure CNI networking lets you connect to existing Azure resources, on-premises resources, or other services directly via IP addresses assigned to each pod.
-When you use Azure CNI networking, the virtual network resource is in a separate resource group to the AKS cluster. Delegate permissions for the AKS service principal to access and manage these resources. The service principal used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
+When you use Azure CNI networking, the virtual network resource is in a separate resource group from the AKS cluster. Delegate permissions for the AKS cluster identity to access and manage these resources. The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
* `Microsoft.Network/virtualNetworks/subnets/join/action` * `Microsoft.Network/virtualNetworks/subnets/read`
-For more information about AKS service principal delegation, see [Delegate access to other Azure resources][sp-delegation]. Instead of a service principal, you can also use the system assigned managed identity for permissions. For more information, see [Use managed identities](use-managed-identity.md).
+By default, AKS uses a managed identity for its cluster identity, but you have the option to use a service principal instead. For more information about AKS service principal delegation, see [Delegate access to other Azure resources][sp-delegation]. For more information about managed identities, see [Use managed identities](use-managed-identity.md).
As each node and pod receives its own IP address, plan out the address ranges for the AKS subnets. The subnet must be large enough to provide IP addresses for every node, pod, and network resource that you deploy. Each AKS cluster must be placed in its own subnet. To allow connectivity to on-premises or peered networks in Azure, don't use IP address ranges that overlap with existing network resources.

There are default limits to the number of pods that each node runs with both kubenet and Azure CNI networking. To handle scale-out events or cluster upgrades, you also need extra IP addresses available for use in the assigned subnet. This extra address space is especially important if you use Windows Server containers, as those node pools require an upgrade to apply the latest security patches. For more information on Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade].
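As a back-of-the-envelope sketch of the sizing advice above (the node count and the one spare node are illustrative assumptions; 30 pods per node is the Azure CNI default):

```shell
# With Azure CNI, every node and every pod consumes one IP from the subnet,
# so a node accounts for (max_pods + 1) addresses. Reserve one extra node's
# worth of addresses to absorb an upgrade or scale-out event.
nodes=3
max_pods=30
spare_nodes=1
ips_needed=$(( (nodes + spare_nodes) * (max_pods + 1) ))
echo "$ips_needed"   # 124 -> a /25 subnet (126 usable addresses) would fit
```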
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
As mentioned, virtual network peering is one way to access your private cluster.
[virtual-network-peering]: ../virtual-network/virtual-network-peering-overview.md
[azure-bastion]: ../bastion/tutorial-create-host-portal.md
[express-route-or-vpn]: ../expressroute/expressroute-about-virtual-network-gateways.md
-[devops-agents]: /azure/devops/pipelines/agents/agents?view=azure-devops
+[devops-agents]: /azure/devops/pipelines/agents/agents
[availability-zones]: availability-zones.md
aks Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/static-ip.md
$ az network public-ip show --resource-group myResourceGroup --name myAKSPublicI
## Create a service using the static IP address
-Before creating a service, ensure the service principal used by the AKS cluster has delegated permissions to the other resource group. For example:
+Before creating a service, ensure the cluster identity used by the AKS cluster has delegated permissions to the other resource group. For example:
```azurecli-interactive
az role assignment create \
- --assignee <SP Client ID> \
+ --assignee <Client ID> \
--role "Network Contributor" \ --scope /subscriptions/<subscription id>/resourceGroups/<resource group name> ```
-Alternatively, you can use the system assigned managed identity for permissions instead of the service principal. For more information, see [Use managed identities](use-managed-identity.md).
-
> [!IMPORTANT]
> If you customized your outbound IP, make sure your cluster identity has permissions to both the outbound public IP and this inbound public IP.
aks Tutorial Kubernetes Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/tutorial-kubernetes-deploy-cluster.md
AKS clusters can use Kubernetes role-based access control (Kubernetes RBAC). The
Create an AKS cluster using [az aks create][]. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. The following example does not specify a region, so the AKS cluster is also created in the *eastus* region. For more information about resource limits and region availability for AKS, see [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)][quotas-skus-regions].
-To allow an AKS cluster to interact with other Azure resources, an Azure Active Directory service principal is automatically created, since you did not specify one. Here, this service principal is [granted the right to pull images][container-registry-integration] from the Azure Container Registry (ACR) instance you created in the previous tutorial. To execute the command successfully, you're required to have an **Owner** or **Azure account administrator** role on the Azure subscription.
+To allow an AKS cluster to interact with other Azure resources, a cluster identity is automatically created, since you did not specify one. Here, this cluster identity is [granted the right to pull images][container-registry-integration] from the Azure Container Registry (ACR) instance you created in the previous tutorial. To execute the command successfully, you're required to have an **Owner** or **Azure account administrator** role on the Azure subscription.
```azurecli
az aks create \
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/update-credentials.md
Last updated 03/11/2019
# Update or rotate the credentials for Azure Kubernetes Service (AKS)
-By default, AKS clusters are created with a service principal that has a one-year expiration time. As you near the expiration date, you can reset the credentials to extend the service principal for an additional period of time. You may also want to update, or rotate, the credentials as part of a defined security policy. This article details how to update these credentials for an AKS cluster.
+AKS clusters created with a service principal have a one-year expiration time. As you near the expiration date, you can reset the credentials to extend the service principal for an additional period of time. You may also want to update, or rotate, the credentials as part of a defined security policy. This article details how to update these credentials for an AKS cluster.
You may also have [integrated your AKS cluster with Azure Active Directory][aad-integration], and use it as an authentication provider for your cluster. In that case, you will have two more identities created for your cluster, the AAD Server App and the AAD Client App; you may also reset those credentials.
aks Use Ultra Disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-ultra-disks.md
If you want to create clusters without ultra disk support, you can do so by omit
## Enable Ultra disks on an existing cluster
-You can enable ultra disks on existing clusters by adding a new node pool to your cluster that support ultra disks. Configure a new node pool to use host-based encryption by using the `--aks-custom-headers` flag.
+You can enable ultra disks on existing clusters by adding a new node pool to your cluster that supports ultra disks. Configure a new node pool to use ultra disks by using the `--aks-custom-headers` flag.
```azurecli
az aks nodepool add --name ultradisk --cluster-name myAKSCluster --resource-group myResourceGroup --node-vm-size Standard_L8s_v2 --zones 1 2 --node-count 2 --aks-custom-headers EnableUltraSSD=true
aks Virtual Nodes Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/virtual-nodes-cli.md
az network vnet subnet create \
## Create a service principal or use a managed identity
-To allow an AKS cluster to interact with other Azure resources, an Azure Active Directory service principal is used. This service principal can be automatically created by the Azure CLI or portal, or you can pre-create one and assign additional permissions. Alternatively, you can use a managed identity for permissions instead of a service principal. For more information, see [Use managed identities](use-managed-identity.md).
+To allow an AKS cluster to interact with other Azure resources, a cluster identity is used. This cluster identity can be automatically created by the Azure CLI or portal, or you can pre-create one and assign additional permissions. By default, this cluster identity is a managed identity. For more information, see [Use managed identities](use-managed-identity.md). You can also use a service principal as your cluster identity. The following steps show you how to manually create and assign the service principal to your cluster.
Create a service principal using the [az ad sp create-for-rbac][az-ad-sp-create-for-rbac] command. The `--skip-assignment` parameter limits any additional permissions from being assigned.
aks Virtual Nodes Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/virtual-nodes-portal.md
On the **Scale** page, select *Enabled* under **Virtual nodes**.
![Create AKS cluster and enable the virtual nodes](media/virtual-nodes-portal/enable-virtual-nodes.png)
-By default, an Azure Active Directory service principal is created. This service principal is used for cluster communication and integration with other Azure services. Alternatively, you can use a managed identity for permissions instead of a service principal. For more information, see [Use managed identities](use-managed-identity.md).
+By default, a cluster identity is created and used for cluster communication and integration with other Azure services. This cluster identity is a managed identity by default. For more information, see [Use managed identities](use-managed-identity.md). You can also use a service principal as your cluster identity.
The cluster is also configured for advanced networking. The virtual nodes are configured to use their own Azure virtual network subnet. This subnet has delegated permissions to connect Azure resources between the AKS cluster. If you don't already have a delegated subnet, the Azure portal creates and configures the Azure virtual network and subnet for use with the virtual nodes.
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-use-managed-service-identity.md
editor: ''
Previously updated : 11/14/2020 Last updated : 03/09/2021
The following example shows an Azure Resource Manager template that contains the
You can use the system-assigned identity to authenticate to the back end through the [authentication-managed-identity](api-management-authentication-policies.md#ManagedIdentity) policy.
+### <a name="apim-as-trusted-service"></a>Connect to Azure resources behind IP Firewall using System Assigned Managed Identity
++
+API Management is a trusted Microsoft service for the resources listed below, which allows it to connect to them from behind a firewall. After you explicitly assign the appropriate Azure role to the [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) for that resource instance, the scope of access for the instance corresponds to the Azure role assigned to the managed identity.
++
+|Azure Service | Link|
+|--|--|
+|Azure Storage | [Trusted access to Azure Storage](../storage/common/storage-network-security.md?tabs=azure-portal#trusted-access-based-on-system-assigned-managed-identity)|
+|Azure Service Bus | [Trusted access to Azure Service Bus](../service-bus-messaging/service-bus-ip-filtering.md#trusted-microsoft-services)|
+|Azure Event Hubs | [Trusted access to Azure Event Hubs](../event-hubs/event-hubs-ip-filtering.md#trusted-microsoft-services)|
++
## Create a user-assigned managed identity
> [!NOTE]
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-internal-vnet.md
na Previously updated : 07/31/2019 Last updated : 03/09/2021
After the deployment succeeds, you should see **private** virtual IP address and
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F201-api-management-create-with-internal-vnet%2Fazuredeploy.json)
You can also enable virtual network connectivity by using PowerShell cmdlets.
* Update an existing deployment of an API Management service inside a virtual network: Use the cmdlet [Update-AzApiManagementRegion](/powershell/module/az.apimanagement/update-azapimanagementregion) to move an existing API Management service inside a virtual network and configure it to use the internal virtual network type.

## <a name="apim-dns-configuration"></a>DNS configuration
-When API Management is in external virtual network mode, the DNS is managed by Azure. For internal virtual network mode, you have to manage your own DNS.
+When API Management is in external virtual network mode, the DNS is managed by Azure. For internal virtual network mode, you have to manage your own DNS. The recommended option is to configure an Azure DNS private zone and link it to the virtual network that the API Management service is deployed into. To learn [how to set up a private zone in Azure DNS](../dns/private-dns-getstarted-portal.md), see the linked article.
> [!NOTE]
> API Management service does not listen to requests coming from IP addresses. It only responds to requests to the host name configured on its service endpoints. These endpoints include gateway, the Azure portal and the Developer portal, direct management endpoint, and Git.
To learn more, see the following articles:
[Create API Management service]: get-started-create-service-instance.md [Common network configuration problems]: api-management-using-with-vnet.md#network-configuration-issues
-[ServiceTags]: ../virtual-network/network-security-groups-overview.md#service-tags
+[ServiceTags]: ../virtual-network/network-security-groups-overview.md#service-tags
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-ip-restrictions.md
You can add access restrictions programmatically by doing either of the followin
--rule-name 'IP example rule' --action Allow --ip-address 122.133.144.0/24 --priority 100 ```
-* Use [Azure PowerShell](/powershell/module/Az.Websites/Add-AzWebAppAccessRestrictionRule?view=azps-5.2.0&preserve-view=true). For example:
+* Use [Azure PowerShell](/powershell/module/Az.Websites/Add-AzWebAppAccessRestrictionRule). For example:
```azurepowershell-interactive
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-provider-aad.md
[!INCLUDE [app-service-mobile-selector-authentication](../../includes/app-service-mobile-selector-authentication.md)]
-This article shows you how to configure Azure App Service or Azure Functions to use Azure Active Directory (Azure AD) as an authentication provider.
+This article shows you how to configure authentication for Azure App Service or Azure Functions so that your app signs in users with Azure Active Directory (Azure AD) as the authentication provider.
-> [!NOTE]
-> The express settings flow sets up an AAD V1 application registration. If you wish to use [Azure Active Directory v2.0](../active-directory/develop/v2-overview.md) (including [MSAL](../active-directory/develop/msal-overview.md)), please follow the [advanced configuration instructions](#advanced).
-
-Follow these best practices when setting up your app and authentication:
-
-- Give each App Service app its own permissions and consent.
-- Configure each App Service app with its own registration.
-- Avoid permission sharing between environments by using separate app registrations for separate deployment slots. When testing new code, this practice can help prevent issues from affecting the production app.
+## <a name="express"> </a>Configure with express settings
-> [!NOTE]
-> This feature is currently not available on Linux Consumption plan for Azure Functions
+The **Express** option is designed to make enabling authentication simple and requires just a few clicks.
-## <a name="express"> </a>Configure with express settings
+The express settings will automatically create an application registration that uses the Azure Active Directory V1 endpoint. To use [Azure Active Directory v2.0](../active-directory/develop/v2-overview.md) (including [MSAL](../active-directory/develop/msal-overview.md)), follow the [advanced configuration instructions](#advanced).
> [!NOTE]
> The **Express** option is not available for government clouds.
+To enable authentication using the **Express** option, follow these steps:
+
1. In the [Azure portal], search for and select **App Services**, and then select your app.
2. From the left navigation, select **Authentication / Authorization** > **On**.
3. Select **Azure Active Directory** > **Express**.
For an example of configuring Azure AD login for a web app that accesses Azure S
## <a name="advanced"> </a>Configure with advanced settings
-You can configure app settings manually if you want to use an app registration from a different Azure AD tenant. To complete this custom configuration:
-
-1. Create a registration in Azure AD.
-2. Provide some of the registration details to App Service.
+In order for Azure AD to act as the authentication provider for your app, you must register your app with it. The Express option does this for you automatically. The Advanced option lets you register your app manually, customize the registration, and enter the registration details back into App Service. This is useful, for example, if you want to use an app registration from a different Azure AD tenant than the one your App Service is in.
### <a name="register"> </a>Create an app registration in Azure AD for your App Service app
-You'll need the following information when you configure your App Service app:
+First, you will create your app registration. As you do so, collect the following information, which you will need later when you configure authentication in the App Service app:
- Client ID
- Tenant ID
- Client secret (optional)
- Application ID URI
-Perform the following steps:
+To register the app, perform the following steps:
1. Sign in to the [Azure portal], search for and select **App Services**, and then select your app. Note your app's **URL**. You'll use it to configure your Azure Active Directory app registration.
-1. Select **Azure Active Directory** > **App registrations** > **New registration**.
+1. From the portal menu, select **Azure Active Directory**, then go to the **App registrations** tab and select **New registration**.
1. In the **Register an application** page, enter a **Name** for your app registration.
1. In **Redirect URI**, select **Web** and type `<app-url>/.auth/login/aad/callback`. For example, `https://contoso.azurewebsites.net/.auth/login/aad/callback`.
-1. Select **REGISTER**.
+1. Select **Register**.
1. After the app registration is created, copy the **Application (client) ID** and the **Directory (tenant) ID** for later.
1. Select **Authentication**. Under **Implicit grant**, enable **ID tokens** to allow OpenID Connect user sign-ins from App Service.
1. (Optional) Select **Branding**. In **Home page URL**, enter the URL of your App Service app and select **Save**.
Perform the following steps:
You're now ready to use Azure Active Directory for authentication in your App Service app.
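The redirect URI entered during registration above follows a fixed pattern under the app's own URL; a quick sketch, using the example app URL from the steps:

```shell
# App Service's built-in authentication handles the AAD reply at this
# well-known path under the app's URL.
APP_URL="https://contoso.azurewebsites.net"
REDIRECT_URI="${APP_URL}/.auth/login/aad/callback"
echo "$REDIRECT_URI"   # https://contoso.azurewebsites.net/.auth/login/aad/callback
```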
-## Configure a native client application
+## Configure client apps to access your App Service
+
+In the prior section, you registered your App Service or Azure Function to authenticate users. This section explains how to register native client or daemon apps so that they can request access to APIs exposed by your App Service on behalf of users or themselves. Completing the steps in this section is not required if you only wish to authenticate users.
+
+### Native client application
-You can register native clients to allow authentication to Web API's hosted in your app using a client library such as the **Active Directory Authentication Library**.
+You can register native clients to request access to your App Service app's APIs on behalf of a signed-in user.
1. In the [Azure portal], select **Active Directory** > **App registrations** > **New registration**.
1. In the **Register an application** page, enter a **Name** for your app registration.
You can register native clients to allow authentication to Web API's hosted in y
1. Select the app registration you created earlier for your App Service app. If you don't see the app registration, make sure that you've added the **user_impersonation** scope in [Create an app registration in Azure AD for your App Service app](#register).
1. Under **Delegated permissions**, select **user_impersonation**, and then select **Add permissions**.
-You have now configured a native client application that can access your App Service app on behalf of a user.
+You have now configured a native client application that can request access to your App Service app on behalf of a user.
-## Configure a daemon client application for service-to-service calls
+### Daemon client application (service-to-service calls)
Your application can acquire a token to call a Web API hosted in your App Service or Function app on behalf of itself (not on behalf of a user). This scenario is useful for non-interactive daemon applications that perform tasks without a logged in user. It uses the standard OAuth 2.0 [client credentials](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md) grant.
At present, this allows _any_ client application in your Azure AD tenant to requ
You have now configured a daemon client application that can access your App Service app using its own identity.
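As a sketch of the client credentials grant described above, the following prints (but does not send) the token request a daemon app would make to the Azure AD v1.0 token endpoint; every bracketed value is a placeholder:

```shell
# The v1.0 client credentials request: the daemon authenticates as itself and
# names the protected API via its Application ID URI in the `resource` field.
TENANT_ID="<tenant-id>"
TOKEN_ENDPOINT="https://login.microsoftonline.com/${TENANT_ID}/oauth2/token"
BODY="grant_type=client_credentials&client_id=<client-id>&client_secret=<client-secret>&resource=<app-id-uri>"
echo "POST ${TOKEN_ENDPOINT}"
echo "${BODY}"
```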
+## Best practices
+
+Regardless of the configuration you use to set up authentication, the following best practices will keep your tenant and applications more secure:
+
+- Give each App Service app its own permissions and consent.
+- Configure each App Service app with its own registration.
+- Avoid permission sharing between environments by using separate app registrations for separate deployment slots. When testing new code, this practice can help prevent issues from affecting the production app.
+
## <a name="related-content"> </a>Next steps
[!INCLUDE [app-service-mobile-related-content-get-started-users](../../includes/app-service-mobile-related-content-get-started-users.md)]
app-service Deploy Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-continuous-deployment.md
For Windows apps, you can manually configure continuous deployment from a cloud
## More resources
-* [Deploy from Azure Pipelines to Azure App Services](/azure/devops/pipelines/apps/cd/deploy-webdeploy-webapps?view=azure-devops&preserve-view=true)
+* [Deploy from Azure Pipelines to Azure App Services](/azure/devops/pipelines/apps/cd/deploy-webdeploy-webapps)
* [Investigate common issues with continuous deployment](https://github.com/projectkudu/kudu/wiki/Investigating-continuous-deployment)
* [Use Azure PowerShell](/powershell/azure/)
* [Project Kudu](https://github.com/projectkudu/kudu/wiki)
app-service App Service Web How To Create A Web App In An Ase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/app-service-web-how-to-create-a-web-app-in-an-ase.md
After creating your web app and App Service plan it is a good idea to scale it u
[HowtoScale]: app-service-web-scale-a-web-app-in-an-app-service-environment.md
[HowtoConfigureASE]: app-service-web-configure-an-app-service-environment.md
[ResourceGroups]: ../../azure-resource-manager/management/overview.md
-[AzurePowershell]: /powershell/azure/?view=azps-3.8.0
+[AzurePowershell]: /powershell/azure/
app-service Cli Continuous Deployment Vsts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scripts/cli-continuous-deployment-vsts.md
This sample script creates an app in App Service with its related resources, and then sets up continuous deployment from an Azure DevOps repository. For this sample, you need:

* An Azure DevOps repository with application code that you have administrative permissions for.
-* A [Personal Access Token (PAT)](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=vsts) for your Azure DevOps organization.
+* A [Personal Access Token (PAT)](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate) for your Azure DevOps organization.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
application-gateway Configuration Front End Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/configuration-front-end-ip.md
For more information, see [Frequently asked questions about Application Gateway]
A public IP address isn't required for an internal endpoint that's not exposed to the Internet. That's known as an *internal load-balancer* (ILB) endpoint or private frontend IP. An application gateway ILB is useful for internal line-of-business applications that aren't exposed to the Internet. It's also useful for services and tiers in a multi-tier application within a security boundary that aren't exposed to the Internet but that require round-robin load distribution, session stickiness, or TLS termination.
-Only one public IP address or one private IP address is supported. You choose the front-end IP when you create the application gateway.
+Only one public IP address and one private IP address are supported. You choose the front-end IP when you create the application gateway.
- For a public IP address, you can create a new public IP address or use an existing public IP in the same location as the application gateway. For more information, see [static vs. dynamic public IP address](./application-gateway-components.md#static-versus-dynamic-public-ip-address).
A front-end IP address is associated to a *listener*, which checks for incoming
## Next steps -- [Learn about listener configuration](configuration-listeners.md)
+- [Learn about listener configuration](configuration-listeners.md)
application-gateway Tutorial Url Route Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/tutorial-url-route-powershell.md
for ($i=1; $i -le 3; $i++)
$vmssConfig = New-AzVmssConfig `
 -Location eastus `
 -SkuCapacity 2 `
- -SkuName Standard_DS2 `
+ -SkuName Standard_DS2_v2 `
 -UpgradePolicyMode Automatic

Set-AzVmssStorageProfile $vmssConfig `
attestation Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/quickstart-powershell.md
Note that all semantic manipulation of the policy signer certificate must be don
For policy signer certificate sample, see [examples of policy signer certificate](policy-signer-examples.md).
-For more information on the cmdlets and its parameters, see [Azure Attestation PowerShell cmdlets](/powershell/module/az.attestation/?view=azps-4.3.0#attestation)
+For more information on the cmdlets and its parameters, see [Azure Attestation PowerShell cmdlets](/powershell/module/az.attestation/#attestation)
## Next steps
automation Automation Dsc Remediate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-remediate.md
For hybrid nodes, you can correct drift using the Python scripts. See [Performin
## Next steps
-- For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation/?view=azps-3.7.0#automation).
+- For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation/#automation).
- To see an example of using Azure Automation State Configuration in a continuous deployment pipeline, see [Set up continuous deployment with Chocolatey](automation-dsc-cd-chocolatey.md).
automation Automation Graphical Authoring Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-graphical-authoring-intro.md
Select an activity on the canvas to configure its properties and parameters in t
A parameter set defines the mandatory and optional parameters that accept values for a particular cmdlet. All cmdlets have at least one parameter set, and some have several sets. If a cmdlet has multiple parameter sets, you must select the one to use before you can configure parameters. You can change the parameter set used by an activity by selecting **Parameter Set** and choosing another set. In this case, any parameter values that you have already configured are lost.
-In the following example, the [Get-AzVM](/powershell/module/az.compute/get-azvm?view=azps-3.5.0&preserve-view=true) cmdlet has three parameter sets. The example uses one set called **ListVirtualMachineInResourceGroupParamSet**, with a single optional parameter, for returning all virtual machines in a resource group. The example also uses the **GetVirtualMachineInResourceGroupParamSet** parameter set for specifying the virtual machine to return. This set has two mandatory parameters and one optional parameter.
+In the following example, the [Get-AzVM](/powershell/module/az.compute/get-azvm) cmdlet has three parameter sets. The example uses one set called **ListVirtualMachineInResourceGroupParamSet**, with a single optional parameter, for returning all virtual machines in a resource group. The example also uses the **GetVirtualMachineInResourceGroupParamSet** parameter set for specifying the virtual machine to return. This set has two mandatory parameters and one optional parameter.
![Parameter set](media/automation-graphical-authoring-intro/get-azvm-parameter-sets.png)
You have the option to revert to the Published version of a runbook. This operat
* To get started with graphical runbooks, see [Tutorial: Create a graphical runbook](learn/automation-tutorial-runbook-graphical.md).
* To know more about runbook types and their advantages and limitations, see [Azure Automation runbook types](automation-runbook-types.md).
* To understand how to authenticate using the Automation Run As account, see [Run As account](automation-security-overview.md#run-as-account).
-* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation/?view=azps-3.7.0&preserve-view=true#automation).
+* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation/#automation).
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-solution-vm-management.md
# Start/Stop VMs during off-hours overview
-The Start/Stop VMs during off-hours feature start or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/alerts/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.
+The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/alerts/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.
This feature uses [Start-AzVm](/powershell/module/az.compute/start-azvm) cmdlet to start VMs. It uses [Stop-AzVM](/powershell/module/az.compute/stop-azvm) for stopping VMs.
The following are limitations with the current feature:
- It manages VMs in any region, but can only be used in the same subscription as your Azure Automation account.
- It is available in Azure and Azure Government for any region that supports a Log Analytics workspace, an Azure Automation account, and alerts. Azure Government regions currently don't support email functionality.
+> [!NOTE]
+> Before you install this version, we would like you to know about the [next version](https://github.com/microsoft/startstopv2-deployments), which is in preview right now. This new version (V2) offers all the same functionality as this one, but is designed to take advantage of newer technology in Azure. It adds some of the commonly requested features from customers, such as multi-subscription support from a single Start/Stop instance.
+## Prerequisites
+
+- The runbooks for the Start/Stop VMs during off hours feature work with an [Azure Run As account](./automation-security-overview.md#run-as-accounts). The Run As account is the preferred authentication method because it uses certificate authentication instead of a password that might expire or change frequently.
automation Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/modules.md
Importing an Az module into your Automation account doesn't automatically import
* When a runbook invokes a cmdlet from a module. * When a runbook imports the module explicitly with the [Import-Module](/powershell/module/microsoft.powershell.core/import-module) cmdlet.
-* When a runbook imports the module explicitly with the [using module](https://docs.microsoft.com/powershell/module/microsoft.powershell.core/about/about_using?view=powershell-7.1#module-syntax) statement. The using statement is supported starting with Windows PowerShell 5.0 and supports classes and enum type import.
+* When a runbook imports the module explicitly with the [using module](https://docs.microsoft.com/powershell/module/microsoft.powershell.core/about/about_using#module-syntax) statement. The using statement is supported starting with Windows PowerShell 5.0 and supports classes and enum type import.
* When a runbook imports another dependent module.

You can import the Az modules in the Azure portal. Remember to import only the Az modules that you need, not the entire Az.Automation module. Because [Az.Accounts](https://www.powershellgallery.com/packages/Az.Accounts/1.1.0) is a dependency for the other Az modules, be sure to import this module before any others.
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
PowerShell
Linux/macOS

```console
-echo '<your string to encode here>' | base64
+echo -n '<your string to encode here>' | base64
#Example
-# echo 'example' | base64
+# echo -n 'example' | base64
```

Once you have encoded the username and password you can create a file based on the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/controller-login-secret.yaml) and replace the username and password values with your own.
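The `-n` flag matters here: without it, `echo` appends a trailing newline that gets encoded into the secret and silently corrupts the credential. A quick sketch of the same encoding with Python's standard `base64` module shows the difference:

```python
import base64

value = "example"

# Exact bytes only -- this matches `echo -n 'example' | base64`
encoded = base64.b64encode(value.encode()).decode()
print(encoded)  # ZXhhbXBsZQ==

# With a trailing newline (what plain `echo` produces), the encoding differs
with_newline = base64.b64encode((value + "\n").encode()).decode()
print(with_newline)  # ZXhhbXBsZQo=
```

The two outputs decode to different byte strings, which is why a secret encoded with plain `echo` fails authentication.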
azure-arc Create Postgresql Hyperscale Server Group Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-kubernetes-native-tools.md
PowerShell
Linux/macOS

```console
-echo '<your string to encode here>' | base64
+echo -n '<your string to encode here>' | base64
#Example
-# echo 'example' | base64
+# echo -n 'example' | base64
```

### Customizing the name
azure-cache-for-redis Cache Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices.md
By following these best practices, you can help maximize the performance and cos
* **Reuse connections.** Creating new connections is expensive and increases latency, so reuse connections as much as possible. If you choose to create new connections, make sure to close the old connections before you release them (even in managed memory languages like .NET or Java).
+* **Use pipelining.** Try to choose a Redis client that supports [Redis pipelining](https://redis.io/topics/pipelining) in order to make most efficient use of the network to get the best throughput you can.
+* **Configure your client library to use a *connect timeout* of at least 15 seconds**, giving the system time to connect even under higher CPU conditions. A small connection timeout value doesn't guarantee that the connection is established in that time frame. If something goes wrong (high client CPU, high server CPU, and so on), then a short connection timeout value will cause the connection attempt to fail. This behavior often makes a bad situation worse. Instead of helping, shorter timeouts aggravate the problem by forcing the system to restart the process of trying to reconnect, which can lead to a *connect -> fail -> retry* loop. We generally recommend that you leave your connection timeout at 15 seconds or higher. It's better to let your connection attempt succeed after 15 or 20 seconds than to have it fail quickly only to retry. Such a retry loop can cause your outage to last longer than if you let the system just take longer initially.
+
+> [!NOTE]
+> This guidance is specific to the *connection attempt* and not related to the time you're willing to wait for an *operation* like GET or SET to complete.
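To see why pipelining reduces round trips, here's a minimal plain-Python sketch (not a real Redis client) that batches several commands into one RESP (REdis Serialization Protocol) payload, which a pipelining client sends in a single network write instead of one write and read per command:

```python
def encode_resp(*args):
    """Encode one Redis command as a RESP array of bulk strings."""
    out = f"*{len(args)}\r\n"
    for arg in args:
        out += f"${len(arg)}\r\n{arg}\r\n"
    return out

# A pipeline concatenates many encoded commands, sends them in one network
# write, then reads all replies back -- paying one round trip for the batch
# rather than one per command.
pipeline = "".join([
    encode_resp("SET", "counter", "0"),
    encode_resp("INCR", "counter"),
    encode_resp("GET", "counter"),
])
print(pipeline.count("*"))  # 3 commands in a single payload
```

Real clients such as redis-py and StackExchange.Redis expose this batching through a pipeline or batch API, so you never build the payload by hand.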
azure-cache-for-redis Cache Event Grid Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-event-grid-quickstart-powershell.md
New-AzRedisCache
[-Confirm] [<CommonParameters>]
```
-For more information on creating a cache instance in PowerShell, see the [Azure PowerShell reference](/powershell/module/az.rediscache/new-azrediscache?view=azps-5.2.0).
+For more information on creating a cache instance in PowerShell, see the [Azure PowerShell reference](/powershell/module/az.rediscache/new-azrediscache).
## Create a message endpoint
Import-AzRedisCache
[-Confirm] [<CommonParameters>]
```
-For more information on importing in PowerShell, see the [Azure PowerShell reference](/powershell/module/az.rediscache/import-azrediscache?view=azps-5.2.0).
+For more information on importing in PowerShell, see the [Azure PowerShell reference](/powershell/module/az.rediscache/import-azrediscache).
You've triggered the event, and Event Grid sent the message to the endpoint you configured when subscribing. View your web app to see the event you just sent.
azure-cache-for-redis Cache How To Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
To obtain a recovery point, [Export](cache-how-to-import-export-data.md#export)
### Can I use PowerShell or Azure CLI to manage geo-replication?
-Yes, geo-replication can be managed using the Azure portal, PowerShell, or Azure CLI. For more information, see the [PowerShell docs](/powershell/module/az.rediscache/?view=azps-1.4.0#redis_cache) or [Azure CLI docs](/cli/azure/redis/server-link).
+Yes, geo-replication can be managed using the Azure portal, PowerShell, or Azure CLI. For more information, see the [PowerShell docs](/powershell/module/az.rediscache/#redis_cache) or [Azure CLI docs](/cli/azure/redis/server-link).
### How much does it cost to replicate my data across Azure regions?
azure-cache-for-redis Cache How To Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-zone-redundancy.md
Last updated 08/11/2020
# Enable zone redundancy for Azure Cache for Redis (Preview)

In this article, you'll learn how to configure a zone-redundant Azure Cache instance using the Azure portal.
-Azure Cache for Redis Standard, Premium, and Enterprise tiers provide built-in redundancy by hosting each cache on two dedicated virtual machines (VMs). Even though these VMs are located in separate [Azure fault and update domains](../virtual-machines/manage-availability.md) and highly available, they're susceptible to datacenter level failures. Azure Cache for Redis also supports zone redundancy in its Premium and Enterprise tiers. A zone-redundant cache runs on VMs spread across multiple [availability zones](../virtual-machines/manage-availability.md#use-availability-zones-to-protect-from-datacenter-level-failures). It provides higher resilience and availability.
+Azure Cache for Redis Standard, Premium, and Enterprise tiers provide built-in redundancy by hosting each cache on two dedicated virtual machines (VMs). Even though these VMs are located in separate [Azure fault and update domains](../virtual-machines/availability.md) and highly available, they're susceptible to datacenter level failures. Azure Cache for Redis also supports zone redundancy in its Premium and Enterprise tiers. A zone-redundant cache runs on VMs spread across multiple [availability zones](../availability-zones/az-overview.md). It provides higher resilience and availability.
## Prerequisites

* Azure subscription - [create one for free](https://azure.microsoft.com/free/)
azure-cache-for-redis Cache Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-ml.md
For more information on entry script, see [Define scoring code.](../machine-lear
These entities are encapsulated into an __inference configuration__. The inference configuration references the entry script and other dependencies.

> [!IMPORTANT]
-> When creating an inference configuration for use with Azure Functions, you must use an [Environment](/python/api/azureml-core/azureml.core.environment%28class%29?preserve-view=true&view=azure-ml-py) object. Please note that if you are defining a custom environment, you must add azureml-defaults with version >= 1.0.45 as a pip dependency. This package contains the functionality needed to host the model as a web service. The following example demonstrates creating an environment object and using it with an inference configuration:
+> When creating an inference configuration for use with Azure Functions, you must use an [Environment](/python/api/azureml-core/azureml.core.environment%28class%29) object. Please note that if you are defining a custom environment, you must add azureml-defaults with version >= 1.0.45 as a pip dependency. This package contains the functionality needed to host the model as a web service. The following example demonstrates creating an environment object and using it with an inference configuration:
> > ```python > from azureml.core.environment import Environment
pip install azureml-contrib-functions
## Create the image
-To create the Docker image that is deployed to Azure Functions, use [azureml.contrib.functions.package](/python/api/azureml-contrib-functions/azureml.contrib.functions?preserve-view=true&view=azure-ml-py) or the specific package function for the trigger you are interested in using. The following code snippet demonstrates how to create a new package with a HTTP trigger from the model and inference configuration:
+To create the Docker image that is deployed to Azure Functions, use [azureml.contrib.functions.package](/python/api/azureml-contrib-functions/azureml.contrib.functions) or the specific package function for the trigger you are interested in using. The following code snippet demonstrates how to create a new package with an HTTP trigger from the model and inference configuration:
> [!NOTE]
> The code snippet assumes that `model` contains a registered model, and that `inference_config` contains the configuration for the inference environment. For more information, see [Deploy models with Azure Machine Learning](../machine-learning/how-to-deploy-and-where.md).
After a few moments, the resource group and all of its resources are deleted.
* Learn more about [Azure Cache for Redis](./cache-overview.md)
* Learn to configure your function app in the [Functions](../azure-functions/functions-create-function-linux-custom-image.md) documentation.
-* [API Reference](/python/api/azureml-contrib-functions/azureml.contrib.functions?preserve-view=true&view=azure-ml-py)
+* [API Reference](/python/api/azureml-contrib-functions/azureml.contrib.functions)
* Create a [Python app that uses Azure Cache for Redis](./cache-python-get-started.md)
azure-cache-for-redis Quickstart Create Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/quickstart-create-redis-enterprise.md
Last updated 02/08/2021 #Customer intent: As a developer new to Azure Cache for Redis, I want to create an instance of Azure Cache for Redis Enterprise tier.
-# Quickstart: Create a Redis Enterprise cache (Preview)
+# Quickstart: Create a Redis Enterprise cache
Azure Cache for Redis' Enterprise tiers provide fully integrated and managed [Redis Enterprise](https://redislabs.com/redis-enterprise/) on Azure. They're currently available as a preview. There are two new tiers in this preview:

* Enterprise, which uses volatile memory (DRAM) on a virtual machine to store data
azure-functions Dotnet Isolated Process Developer Howtos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-developer-howtos.md
+
+ Title: Develop and publish .NET 5 functions using Azure Functions
+description: Learn how to create and debug C# functions using .NET 5.0, then deploy the local project to serverless hosting in Azure Functions.
Last updated : 03/03/2021+
+#Customer intent: As a developer, I need to know how to create functions that run in an isolated process so that I can run my function code on current (not LTS) releases of .NET.
+zone_pivot_groups: development-environment-functions
++
+# Develop and publish .NET 5 functions using Azure Functions
+
+This article shows you how to work with C# functions using .NET 5.0, which run out-of-process from the Azure Functions runtime. You'll learn how to create, debug locally, and publish these .NET isolated process functions to Azure. In Azure, these functions run in an isolated process that supports .NET 5.0. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+
+If you don't need to support .NET 5.0 or run your functions out-of-process, you might want to instead [create a C# class library function](functions-create-your-first-function-visual-studio.md).
+
+>[!NOTE]
+>Developing .NET isolated process functions in the Azure portal isn't currently supported. You must use either the Azure CLI or Visual Studio Code publishing to create a function app in Azure that supports running .NET 5.0 apps out-of-process.
+
+## Prerequisites
++ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
++ [.NET SDK 5.0](https://www.microsoft.com/net/download)
++ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3381, or a later version.
++ [Azure CLI](/cli/azure/install-azure-cli) version 2.20, or a later version.
++ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
++ The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
++ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code, version 1.3.0 or newer.
++ [Visual Studio 2019](https://azure.microsoft.com/downloads/), including the **Azure development** workload.
+.NET isolated function project templates and publishing aren't currently available in Visual Studio.
+
+## Create a local function project
+
+In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations. In this section, you create a function project that contains a single function.
++
+>[!NOTE]
+> At this time, there are no Visual Studio project templates that support creating .NET isolated function projects. This article shows you how to use Core Tools to create your C# project, which you can then run locally and debug in Visual Studio.
++
+1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, select the **Create new project...** icon.
+
+ ![Choose Create a new project](./media/functions-create-first-function-vs-code/create-new-project.png)
+
+1. Choose a directory location for your project workspace and choose **Select**.
+
+ > [!NOTE]
+ > These steps were designed to be completed outside of a workspace. In this case, do not select a project folder that is part of a workspace.
+
+1. Provide the following information at the prompts:
+
+ + **Select a language for your function project**: Choose `C#`.
+
+ + **Select a .NET runtime**: Choose `.NET 5 isolated`.
+
+ + **Select a template for your project's first function**: Choose `HTTP trigger`.
+
+ + **Provide a function name**: Type `HttpExample`.
+
+ + **Provide a namespace**: Type `My.Functions`.
+
+ + **Authorization level**: Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
+
+ + **Select how you would like to open your project**: Choose `Add to workspace`.
+
+1. Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md#generated-project-files).
+
+1. Run the `func init` command, as follows, to create a functions project in a folder named *LocalFunctionProj*:
+
+ ```console
+ func init LocalFunctionProj --worker-runtime dotnetisolated
+ ```
+
+ Specifying `dotnetisolated` creates a project that runs on .NET 5.0.
++
+1. Navigate into the project folder:
+
+ ```console
+ cd LocalFunctionProj
+ ```
+
+ This folder contains various files for the project, including the [local.settings.json](functions-run-local.md#local-settings-file) and [host.json](functions-host-json.md) configurations files. Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
+
+1. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP).
+
+ ```console
+ func new --name HttpExample --template "HTTP trigger" --authlevel "anonymous"
+ ```
+
+ `func new` creates an HttpExample.cs code file.
+++++++
+## Run the function locally
+
+At this point, you can run the `func start` command from the root of your project folder to compile and run the C# isolated functions project. Currently, if you want to debug your out-of-process function code in Visual Studio, you need to manually attach a debugger to the running Functions runtime process by using the following steps:
+
+1. Open the project file (.csproj) in Visual Studio. You can review and modify your project code and set any desired break points in the code.
+
+1. From the root project folder, use the following command from the terminal or a command prompt to start the runtime host:
+
+ ```console
+ func start --dotnet-isolated-debug
+ ```
+
+ The `--dotnet-isolated-debug` option tells the process to wait for a debugger to attach before continuing. Towards the end of the output, you should see something like the following lines:
+
+ <pre>
+ ...
+
+ Functions:
+
+ HttpExample: [GET,POST] http://localhost:7071/api/HttpExample
+
+ For detailed output, run func with --verbose flag.
+ [2021-03-09T08:41:41.904Z] Azure Functions .NET Worker (PID: 81720) initialized in debug mode. Waiting for debugger to attach...
+ ...
+
+ </pre>
+
+ The `PID: XXXXXX` indicates the process ID (PID) of the dotnet.exe process that is the running Functions host.
+
+1. In the Azure Functions runtime output, make a note of the process ID of the host process, to which you'll attach a debugger. Also note the URL of your local function.
+
+1. From the **Debug** menu in Visual Studio, select **Attach to Process...**, locate the dotnet.exe process that matches the process ID, and select **Attach**.
+
+ :::image type="content" source="media/dotnet-isolated-process-developer-howtos/attach-to-process.png" alt-text="Attach the debugger to the Functions host process":::
+
    With the debugger attached, you can debug your function code as normal.
+
+1. In your browser's address bar, enter your local function URL, which looks like the following, and run the request.
+
+ <http://localhost:7071/api/HttpExample>
+
+ You should see trace output from the request written to the running terminal. Code execution stops at any break points you set in your function code.
+
+1. When you're done, go to the terminal and press Ctrl + C to stop the host process.
+
+After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure.
+
+> [!NOTE]
+> Visual Studio publishing isn't currently available for .NET isolated process apps. After you've finished developing your project in Visual Studio, you must use the Azure CLI to create the remote Azure resources. Then, you can again use Azure Functions Core Tools from the command line to publish your project to Azure.
+
+## Create supporting Azure resources for your function
+
+Before you can deploy your function code to Azure, you need to create three resources:
+
+- A [resource group](../azure-resource-manager/management/overview.md), which is a logical container for related resources.
+- A [Storage account](../storage/common/storage-account-create.md), which is used to maintain state and other information about your functions.
+- A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources.
+
+Use the following commands to create these items.
+
+1. If you haven't done so already, sign in to Azure:
+
+ ```azurecli
+ az login
+ ```
+
+ The [az login](/cli/azure/reference-index#az_login) command signs you into your Azure account.
+
+1. Create a resource group named `AzureFunctionsQuickstart-rg` in the `westeurope` region:
+
+ ```azurecli
+ az group create --name AzureFunctionsQuickstart-rg --location westeurope
+ ```
+
+ The [az group create](/cli/azure/group#az_group_create) command creates a resource group. You generally create your resource group and resources in a region near you, using an available region returned from the `az account list-locations` command.
+
+1. Create a general-purpose storage account in your resource group and region:
+
+ ```azurecli
+ az storage account create --name <STORAGE_NAME> --location westeurope --resource-group AzureFunctionsQuickstart-rg --sku Standard_LRS
+ ```
+
+ The [az storage account create](/cli/azure/storage/account#az_storage_account_create) command creates the storage account.
+
+    In the previous example, replace `<STORAGE_NAME>` with a name that is appropriate to you and unique in Azure Storage. Names must contain three to 24 characters, numbers and lowercase letters only. `Standard_LRS` specifies a general-purpose account, which is [supported by Functions](storage-considerations.md#storage-account-requirements).
+
+1. Create the function app in Azure:
+
+ ```azurecli
+ az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime dotnet-isolated --runtime-version 5.0 --functions-version 3 --name <APP_NAME> --storage-account <STORAGE_NAME>
+ ```
+
+ The [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command creates the function app in Azure.
+
+ In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
+
+ This command creates a function app running .NET 5.0 under the [Azure Functions Consumption Plan](consumption-plan.md). This plan should be free for the amount of usage you incur in this article. The command also provisions an associated Azure Application Insights instance in the same resource group. Use this instance to monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
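The storage account naming rule above (3 to 24 characters, lowercase letters and numbers only) can be sanity-checked before running the command. This is a hypothetical helper for illustration, not part of the Azure CLI:

```python
import re

def is_valid_storage_account_name(name: str) -> bool:
    # Azure Storage account names: 3-24 characters, lowercase letters and digits only
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None

print(is_valid_storage_account_name("mystorage123"))  # True
print(is_valid_storage_account_name("My-Storage"))    # False (uppercase, hyphen)
```

Note this checks only the character rules; uniqueness across Azure Storage is still verified by the service when you run `az storage account create`.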
+++++
+## Publish the project to Azure
+
+In this section, you create a function app and related resources in your Azure subscription and then deploy your code.
+
+> [!IMPORTANT]
+> Publishing to an existing function app overwrites the content of that app in Azure.
++
+1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app...** button.
+
+ ![Publish your project to Azure](../../includes/media/functions-publish-project-vscode/function-app-publish-project.png)
+
+1. Provide the following information at the prompts:
+
+ - **Select folder**: Choose a folder from your workspace or browse to one that contains your function app. You won't see this prompt when you already have a valid function app opened.
+
+ - **Select subscription**: Choose the subscription to use. You won't see this prompt when you only have one subscription.
+
+ - **Select Function App in Azure**: Choose `- Create new Function App`. (Don't choose the `Advanced` option, which isn't covered in this article.)
+
+ - **Enter a globally unique name for the function app**: Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
+
+ - **Select a runtime stack**: Choose `.NET 5 (non-LTS)`.
+
+ - **Select a location for new resources**: For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.
+
+ In the notification area, you see the status of individual resources as they're created in Azure.
+
+ :::image type="content" source="../../includes/media/functions-publish-project-vscode/resource-notification.png" alt-text="Notification of Azure resource creation":::
+
+1. When completed, the following Azure resources are created in your subscription, using names based on your function app name:
+
+ [!INCLUDE [functions-vs-code-created-resources](../../includes/functions-vs-code-created-resources.md)]
+
+ A notification is displayed after your function app is created and the deployment package is applied.
+
+ [!INCLUDE [functions-vs-code-create-tip](../../includes/functions-vs-code-create-tip.md)]
+
+4. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
+
+ ![Create complete notification](../../includes/media/functions-publish-project-vscode/function-create-notifications.png)
++++
+## Clean up resources
+
+You created resources to complete this article. You may be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/).
+
+Use the following command to delete the resource group and all its contained resources to avoid incurring further costs.
+
+```azurecli
+az group delete --name AzureFunctionsQuickstart-rg
+```
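If you script this cleanup, the delete can be made non-interactive. A sketch shown as a dry run (the `echo` prints the command instead of executing it); `--yes` skips the confirmation prompt and `--no-wait` returns without blocking on the deletion:

```shell
# Dry run: print the non-interactive delete command rather than executing it.
# Remove the leading echo to actually delete the resource group.
RESOURCE_GROUP="AzureFunctionsQuickstart-rg"
echo az group delete --name "$RESOURCE_GROUP" --yes --no-wait
```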
+
+Use the following steps to delete the function app and its related resources to avoid incurring any further costs.
+
+1. In the Cloud Explorer, expand your subscription > **App Services**, right-click your function app, and choose **Open in Portal**.
+
+1. In the function app page, select the **Overview** tab and then select the link under **Resource group**.
+
+ :::image type="content" source="media/functions-create-your-first-function-visual-studio/functions-app-delete-resource-group.png" alt-text="Select the resource group to delete from the function app page":::
+
+2. In the **Resource group** page, review the list of included resources, and verify that they're the ones you want to delete.
+
+3. Select **Delete resource group**, and follow the instructions.
+
+ Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can also select the bell icon at the top of the page to view the notification.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about .NET isolated functions](dotnet-isolated-process-guide.md)
+
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-guide.md
# Guide for running functions on .NET 5.0 in Azure
-_.NET 5.0 support is currently in preview._
- This article is an introduction to using C# to develop .NET isolated process functions, which run out-of-process in Azure Functions. Running out-of-process lets you decouple your function code from the Azure Functions runtime. It also provides a way for you to create and run functions that target the current .NET 5.0 release.
+| Getting started | Concepts| Samples |
+|--|--|--|
+| <ul><li>[Using Visual Studio Code](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-vscode)</li><li>[Using command line tools](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-cli)</li><li>[Using Visual Studio](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-vs)</li></ul> | <ul><li>[Hosting options](functions-scale.md)</li><li>[Monitoring](functions-monitoring.md)</li> | <ul><li>[Reference samples](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples)</li></ul> |
+ If you don't need to support .NET 5.0 or run your functions out-of-process, you might want to instead [develop C# class library functions](functions-dotnet-class-library.md).

## Why .NET isolated process?
The following example injects a singleton service dependency:
To learn more, see [Dependency injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-5.0&preserve-view=true).
-### Middleware
+<!--### Middleware
.NET isolated also supports middleware registration, again by using a model similar to what exists in ASP.NET. This model gives you the ability to inject logic into the invocation pipeline, and before and after functions execute.
-While the full middleware registration set of APIs is not yet exposed, middleware registration is supported and we've added an example to the sample application under the Middleware folder.
+While the full middleware registration set of APIs is not yet exposed, we do support middleware registration and have added an example to the sample application under the Middleware folder. -->
## Execution context
This section describes the current state of the functional and behavioral differ
| function.json artifact | Generated | Not generated |
| Configuration | [host.json](functions-host-json.md) | [host.json](functions-host-json.md) and [custom initialization](#configuration) |
| Dependency injection | [Supported](functions-dotnet-dependency-injection.md) | [Supported](#dependency-injection) |
-| Middleware | Not supported | [Supported](#middleware) |
+| Middleware | Not supported | Supported |
| Cold start times | Typical | Longer, because of just-in-time start-up. Run on Linux instead of Windows to reduce potential delays. |
| ReadyToRun | [Supported](functions-dotnet-class-library.md#readytorun) | _TBD_ |
+## Known issues
+
+For information on workarounds to known issues when running .NET isolated process functions, see [this known issues page](https://aka.ms/AAbh18e). To report problems, [create an issue in this GitHub repository](https://github.com/Azure/azure-functions-dotnet-worker/issues/new/choose).
## Next steps
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-overview.md
Durable Functions currently supports the following languages:
* **C#**: both [precompiled class libraries](../functions-dotnet-class-library.md) and [C# script](../functions-reference-csharp.md).
* **JavaScript**: supported only for version 2.x of the Azure Functions runtime. Requires version 1.7.0 of the Durable Functions extension, or a later version.
-* **Python**: requires version 2.3.1 of the Durable Functions extension, or a later version. Support for Durable Functions is currently in public preview.
+* **Python**: requires version 2.3.1 of the Durable Functions extension, or a later version.
* **F#**: precompiled class libraries and F# script. F# script is only supported for version 1.x of the Azure Functions runtime.
* **PowerShell**: support for Durable Functions is currently in public preview. Supported only for version 3.x of the Azure Functions runtime and PowerShell 7. Requires version 2.2.2 of the Durable Functions extension, or a later version. Only the following patterns are currently supported: [Function chaining](#chaining), [Fan-out/fan-in](#fan-in-out), [Async HTTP APIs](#async-http).
azure-functions Durable Functions Zero Downtime Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-zero-downtime-deployment.md
public static async Task<IActionResult> StatusCheck(
} ```
-Next, configure the staging gate to wait until no orchestrations are running. For more information, see [Release deployment control using gates](/azure/devops/pipelines/release/approvals/gates?view=azure-devops)
+Next, configure the staging gate to wait until no orchestrations are running. For more information, see [Release deployment control using gates](/azure/devops/pipelines/release/approvals/gates)
![Deployment gate](media/durable-functions-zero-downtime-deployment/deployment-gate.png)
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/quickstart-python-vscode.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-run-local.md#local-settings-file) configuration files.
-A requirements.txt file is also created in the root folder. It specifies the Python packages needed to run your function app.
-
-## Update Azure Functions extension bundles version
-
-Python Azure Functions require version 2.x of [Azure Functions extension bundles](../functions-bindings-register.md#access-extensions-in-non-net-languages). Extension bundles are configured in *host.json*.
-
-1. Open *host.json* in the project. Update the extension bundle `version` to `[2.*, 3.0.0)`. This specifies a version range that is greater than or equal to 2.0, and less than 3.0.
-
- ```json
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[2.*, 3.0.0)"
- }
- ```
-
-1. VS Code must be reloaded before the updated extension bundle version is reflected. In the command palette, run search for the *Developer: Reload Window* command and run it.
+A *requirements.txt* file is also created in the root folder. It specifies the Python packages needed to run your function app.
## Install azure-functions-durable from PyPI
When you created the project, the Azure Functions VS Code extension automaticall
```
   azure-functions
- azure-functions-durable>=1.0.0b12
+ azure-functions-durable
```

1. Open the editor's integrated terminal in the current folder (<kbd>Ctrl+Shift+`</kbd>).
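The resulting *requirements.txt* is small enough to reproduce by hand. A shell sketch that writes the two unpinned entries shown above and confirms both are present (the check is only illustrative):

```shell
# Write the requirements.txt the quickstart ends up with (both packages
# unpinned, as shown above), then confirm both entries are present.
cat > requirements.txt <<'EOF'
azure-functions
azure-functions-durable
EOF
grep -c '^azure-functions' requirements.txt   # prints 2
```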
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-dotnet-class-library.md
Title: Develop C# functions using Azure Functions
-description: Understand how to use C# to develop and publish code that runs in-process with the Azure Functions runtime.
+ Title: Develop C# class library functions using Azure Functions
+description: Understand how to use C# to develop and publish code as class libraries that run in-process with the Azure Functions runtime.
Last updated 07/24/2020
-# Develop C# functions using Azure Functions
+# Develop C# class library functions using Azure Functions
<!-- When updating this article, make corresponding changes to any duplicate content in functions-reference-csharp.md -->
azure-functions Functions Recover Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-recover-storage-account.md
In the preceding step, if you can't find a storage account connection string, it
* Required:
  * [`AzureWebJobsStorage`](./functions-app-settings.md#azurewebjobsstorage)
-* Required for Consumption and Premium plan functions:
+* Required for Premium plan functions:
  * [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](./functions-app-settings.md)
  * [`WEBSITE_CONTENTSHARE`](./functions-app-settings.md)
azure-functions Functions Recover Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-glossary-cloud-terminology.md
The compute resources that [Azure App Service](app-service/overview.md) provides
## availability set

A collection of virtual machines that are managed together to provide application redundancy and reliability. The use of an availability set ensures that during either a planned or unplanned maintenance event at least one virtual machine is available.
-See [Manage the availability of Windows virtual machines](./virtual-machines/manage-availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) and [Manage the availability of Linux virtual machines](./virtual-machines/manage-availability.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
+See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) and [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
## <a name="classic-model"></a>Azure classic deployment model

One of two [deployment models](./azure-resource-manager/management/deployment-models.md) used to deploy resources in Azure (the new model is Azure Resource Manager). Some Azure services support only the Resource Manager deployment model, some support only the classic deployment model, and some support both. The documentation for each Azure service specifies which model(s) they support.
One of two [deployment models](./azure-resource-manager/management/deployment-mo
## fault domain

The collection of virtual machines in an availability set that can possibly fail at the same time. An example is a group of machines in a rack that share a common power source and network switch. In Azure, the virtual machines in an availability set are automatically separated across multiple fault domains.
-See [Manage the availability of Windows virtual machines](./virtual-machines/manage-availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) or [Manage the availability of Linux virtual machines](./virtual-machines/manage-availability.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
+See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) or [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
## geo

A defined boundary for data residency that typically contains two or more regions. The boundaries may be within or beyond national borders and are influenced by tax regulation. Every geo has at least one region. Examples of geos are Asia Pacific and Japan. Also called *geography*.
See [Using tags to organize your Azure resources](./azure-resource-manager/manag
## update domain

The collection of virtual machines in an availability set that are updated at the same time. Virtual machines in the same update domain are restarted together during planned maintenance. Azure never restarts more than one update domain at a time. Also referred to as an upgrade domain.
-See [Manage the availability of Windows virtual machines](./virtual-machines/manage-availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) and [Manage the availability of Linux virtual machines](./virtual-machines/manage-availability.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
+See [Manage the availability of Windows virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) and [Manage the availability of Linux virtual machines](./virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
## <a name="vm"></a>virtual machine

The software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in a variety of sizes.
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
If you are using the IoT Hub connection string (instead of the Event Hub-compati
This section outlines variations and considerations when using Management and Governance services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=managed-applications,azure-policy,network-watcher,monitor,traffic-manager,automation,scheduler,site-recovery,cost-management,backup,blueprints,advisor&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).

> [!NOTE]
->This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM compatibility, see [**Introducing the new Azure PowerShell Az module**](/powershell/azure/new-azureps-module-az?preserve-view=true&view=azps-3.3.0). For Az module installation instructions, see [**Install Azure PowerShell**](/powershell/azure/install-az-ps?preserve-view=true&view=azps-3.3.0).
+>This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM compatibility, see [**Introducing the new Azure PowerShell Az module**](/powershell/azure/new-azureps-module-az). For Az module installation instructions, see [**Install Azure PowerShell**](/powershell/azure/install-az-ps).
### [Application Insights](../azure-monitor/overview.md)
azure-government Connect With Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/connect-with-azure-pipelines.md
This article helps you use Azure Pipelines to set up continuous integration (CI)
> [!NOTE]
> Azure Pipelines is not available as part of Azure Government. While this tutorial shows how to configure the CI/CD capabilities of Azure Pipelines in order to deploy an app to a service inside Azure Government, be aware that Azure Pipelines runs its pipelines outside of Azure Government. Research your organization's security and service policies before using it as part of your deployment tools.
-[Azure Pipelines](/azure/devops/pipelines/get-started/?view=vsts) is used by teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints?view=vsts) for Azure Government.
+[Azure Pipelines](/azure/devops/pipelines/get-started/) is used by teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints) for Azure Government.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
This article helps you use Azure Pipelines to set up continuous integration (CI)
Before starting this tutorial, you must have the following:
-+ [Create an organization in Azure DevOps](/azure/devops/organizations/accounts/create-organization?view=vsts)
-+ [Create and add a project to the Azure DevOps organization](/azure/devops/organizations/projects/create-project?;bc=%2fazure%2fdevops%2fuser-guide%2fbreadcrumb%2ftoc.json&tabs=new-nav&toc=%2fazure%2fdevops%2fuser-guide%2ftoc.json&view=vsts)
++ [Create an organization in Azure DevOps](/azure/devops/organizations/accounts/create-organization)
++ [Create and add a project to the Azure DevOps organization](/azure/devops/organizations/projects/create-project?;bc=%2fazure%2fdevops%2fuser-guide%2fbreadcrumb%2ftoc.json&tabs=new-nav&toc=%2fazure%2fdevops%2fuser-guide%2ftoc.json)
+ Install and set up [Azure PowerShell](/powershell/azure/install-az-ps)

If you don't have an active Azure Government subscription, create a [free account](https://azure.microsoft.com/overview/clouds/government/) before you begin.
The following steps will set up a CD process to deploy to this Web App.
Follow through one of the quickstarts below to set up a Build for your specific type of app:

-- [ASP.NET 4 app](/azure/devops/pipelines/apps/aspnet/build-aspnet-4?view=vsts)
-- [ASP.NET Core app](/azure/devops/pipelines/languages/dotnet-core?tabs=yaml&view=vsts)
-- [Node.js app with Gulp](/azure/devops/pipelines/languages/javascript?tabs=yaml&view=vsts)
+- [ASP.NET 4 app](/azure/devops/pipelines/apps/aspnet/build-aspnet-4)
+- [ASP.NET Core app](/azure/devops/pipelines/languages/dotnet-core?tabs=yaml)
+- [Node.js app with Gulp](/azure/devops/pipelines/languages/javascript?tabs=yaml)
## Generate a service principal
AzureUSGovernment." This sets the service principal to be created in Azure Gover
Follow the instructions in [Service connections for builds and releases](/azure/devops/pipelines/library/service-endpoints) to set up the Azure Pipelines service connection.
-Make one change specific to Azure Government: In step #3 of [Service connections for builds and releases](/azure/devops/pipelines/library/service-endpoints?view=vsts), click on "use the full version of the service connection catalog" and set **Environment** to **AzureUSGovernment**.
+Make one change specific to Azure Government: In step #3 of [Service connections for builds and releases](/azure/devops/pipelines/library/service-endpoints), click on "use the full version of the service connection catalog" and set **Environment** to **AzureUSGovernment**.
## Define a release process
-Follow [Deploy a web app to Azure App Services](/azure/devops/pipelines/apps/cd/deploy-webdeploy-webapps?view=vsts) instructions to set up your release pipeline and deploy to your application in Azure Government.
+Follow [Deploy a web app to Azure App Services](/azure/devops/pipelines/apps/cd/deploy-webdeploy-webapps) instructions to set up your release pipeline and deploy to your application in Azure Government.
## Q&A

Q: Do I need a build agent?<br/>
-A: You need at least one [agent](/azure/devops/pipelines/agents/agents?view=vsts) to run your deployments. By default, the build and deployment processes are configured to use the [hosted agents](/azure/devops/pipelines/agents/agents?view=vsts#microsoft-hosted-agents). Configuring a private agent would limit data sharing outside of Azure Government.
+A: You need at least one [agent](/azure/devops/pipelines/agents/agents) to run your deployments. By default, the build and deployment processes are configured to use the [hosted agents](/azure/devops/pipelines/agents/agents#microsoft-hosted-agents). Configuring a private agent would limit data sharing outside of Azure Government.
Q: I use Team Foundation Server on-premises. Can I configure CD on my server to target Azure Government?<br/> A: Currently, Team Foundation Server cannot be used to deploy to an Azure Government Cloud.
azure-government Documentation Government Get Started Connect With Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-get-started-connect-with-ps.md
When you start PowerShell, you have to tell Azure PowerShell to connect to Azure
| Connection type | Command |
| --- | --- |
| [Azure](/powershell/module/az.accounts/Connect-AzAccount) commands |`Connect-AzAccount -EnvironmentName AzureUSGovernment` |
-| [Azure Active Directory](/powershell/module/azuread/connect-azuread?view=azureadps-2.0) commands |`Connect-AzureAD -AzureEnvironmentName AzureUSGovernment` |
+| [Azure Active Directory](/powershell/module/azuread/connect-azuread) commands |`Connect-AzureAD -AzureEnvironmentName AzureUSGovernment` |
| [Azure (Classic deployment model)](/powershell/module/servicemanagement/azure.service/add-azureaccount?view=azuresmps-3.7.0) commands |`Add-AzureAccount -Environment AzureUSGovernment` |
| [Azure Active Directory (Classic deployment model)](/previous-versions/azure/jj151815(v=azure.100)) commands |`Connect-MsolService -AzureEnvironment UsGovernment` |
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agents-overview.md
The following tables list the operating systems that are supported by the Azure
### Linux
-| Operating system | Azure Monitor agent | Log Analytics agent | Dependency agent | Diagnostics extension |
+| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Dependency agent | Diagnostics extension <sup>2</sup>|
|:|::|::|::|::|
| Amazon Linux 2017.09 | | X | | |
-| CentOS Linux 8 <sup>1</sup> <sup>2</sup> | X | X | X | |
+| CentOS Linux 8 | X <sup>3</sup> | X | X | |
| CentOS Linux 7 | X | X | X | X |
| CentOS Linux 6 | | X | | |
| CentOS Linux 6.5+ | | X | X | X |
The following tables list the operating systems that are supported by the Azure
| Debian 8 | | X | X | |
| Debian 7 | | | | X |
| OpenSUSE 13.1+ | | | | X |
-| Oracle Linux 8 <sup>1</sup> <sup>2</sup> | X | X | | |
+| Oracle Linux 8 | X <sup>3</sup> | X | | |
| Oracle Linux 7 | X | X | | X |
| Oracle Linux 6 | | X | | |
| Oracle Linux 6.4+ | | X | | X |
-| Red Hat Enterprise Linux Server 8 <sup>1</sup> <sup>2</sup> | X | X | X | |
+| Red Hat Enterprise Linux Server 8 | X <sup>3</sup> | X | X | |
| Red Hat Enterprise Linux Server 7 | X | X | X | X |
| Red Hat Enterprise Linux Server 6 | | X | X | |
| Red Hat Enterprise Linux Server 6.7+ | | X | X | X |
-| SUSE Linux Enterprise Server 15.2 <sup>1</sup> <sup>2</sup> | X | | | |
-| SUSE Linux Enterprise Server 15.1 <sup>1</sup> <sup>2</sup> | X | X | | |
+| SUSE Linux Enterprise Server 15.2 | X <sup>3</sup> | | | |
+| SUSE Linux Enterprise Server 15.1 | X <sup>3</sup> | X | | |
| SUSE Linux Enterprise Server 15 | X | X | X | |
| SUSE Linux Enterprise Server 12 | X | X | X | X |
-| Ubuntu 20.04 LTS <sup>1</sup> | X | X | X | |
+| Ubuntu 20.04 LTS | X | X | X | |
| Ubuntu 18.04 LTS | X | X | X | X |
| Ubuntu 16.04 LTS | X | X | X | X |
| Ubuntu 14.04 LTS | | X | | X |
-<sup>1</sup> Requires Python 3 to be installed on the machine.
+<sup>1</sup> Requires Python (2 or 3) to be installed on the machine.
+
+<sup>2</sup> Requires Python 2 to be installed on the machine.
-<sup>2</sup> Known issue collecting Syslog events. Only performance data is currently supported.
+<sup>3</sup> Known issue collecting Syslog events. Only performance data is currently supported.
#### Dependency agent Linux kernel support

Since the Dependency agent works at the kernel level, support is also dependent on the kernel version. The following table lists the major and minor Linux OS release and supported kernel versions for the Dependency agent.
Get more details on each of the agents at the following:
- [Overview of the Log Analytics agent](./log-analytics-agent.md)
- [Azure Diagnostics extension overview](./diagnostics-extension-overview.md)
-- [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md)
+- [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md)
azure-monitor Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/gateway.md
To learn how to design and deploy a Windows Server 2016 network load balancing c
To learn how to design and deploy an Azure Load Balancer, see [What is Azure Load Balancer?](../../load-balancer/load-balancer-overview.md). To deploy a basic load balancer, follow the steps outlined in this [quickstart](../../load-balancer/quickstart-load-balancer-standard-public-portal.md) excluding the steps outlined in the section **Create back-end servers**.

> [!NOTE]
-> Configuring the Azure Load Balancer using the **Basic SKU**, requires that Azure virtual machines belong to an Availability Set. To learn more about availability sets, see [Manage the availability of Windows virtual machines in Azure](../../virtual-machines/manage-availability.md). To add existing virtual machines to an availability set, refer to [Set Azure Resource Manager VM Availability Set](https://gallery.technet.microsoft.com/Set-Azure-Resource-Manager-f7509ec4).
+> Configuring the Azure Load Balancer using the **Basic SKU**, requires that Azure virtual machines belong to an Availability Set. To learn more about availability sets, see [Manage the availability of Windows virtual machines in Azure](../../virtual-machines/availability.md). To add existing virtual machines to an availability set, refer to [Set Azure Resource Manager VM Availability Set](https://gallery.technet.microsoft.com/Set-Azure-Resource-Manager-f7509ec4).
> After the load balancer is created, a backend pool needs to be created, which distributes traffic to one or more gateway servers. Follow the steps described in the quickstart article section [Create resources for the load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
azure-monitor Status Monitor V2 Detailed Instructions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/status-monitor-v2-detailed-instructions.md
To get started, you need an instrumentation key. For more information, see [Crea
PowerShell needs Administrator-level permissions to make changes to your computer.

### Execution policy

- Description: By default, running PowerShell scripts is disabled. We recommend allowing RemoteSigned scripts for only the Current scope.
-- Reference: [About Execution Policies](/powershell/module/microsoft.powershell.core/about/about_execution_policies?view=powershell-6) and [Set-ExecutionPolicy](/powershell/module/microsoft.powershell.security/set-executionpolicy?view=powershell-6).
+- Reference: [About Execution Policies](/powershell/module/microsoft.powershell.core/about/about_execution_policies) and [Set-ExecutionPolicy](/powershell/module/microsoft.powershell.security/set-executionpolicy).
- Command: `Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process`.
- Optional parameter:
  - `-Force`. Bypasses the confirmation prompt.
These instructions were written and tested on a computer running Windows 10 and
These steps will prepare your server to download modules from PowerShell Gallery.

> [!NOTE]
-> PowerShell Gallery is supported on Windows 10, Windows Server 2016, and PowerShell 6.
+> PowerShell Gallery is supported on Windows 10, Windows Server 2016, and PowerShell 6+.
> For information about earlier versions, see [Installing PowerShellGet](/powershell/scripting/gallery/installing-psget).

1. Run PowerShell as Admin with an elevated execution policy.
2. Install the NuGet package provider.
   - Description: You need this provider to interact with NuGet-based repositories like PowerShell Gallery.
- - Reference: [Install-PackageProvider](/powershell/module/packagemanagement/install-packageprovider?view=powershell-6).
+ - Reference: [Install-PackageProvider](/powershell/module/packagemanagement/install-packageprovider).
   - Command: `Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201`.
   - Optional parameters:
     - `-Proxy`. Specifies a proxy server for the request.
These steps will prepare your server to download modules from PowerShell Gallery
3. Configure PowerShell Gallery as a trusted repository.
   - Description: By default, PowerShell Gallery is an untrusted repository.
- - Reference: [Set-PSRepository](/powershell/module/powershellget/set-psrepository?view=powershell-6).
+ - Reference: [Set-PSRepository](/powershell/module/powershellget/set-psrepository).
   - Command: `Set-PSRepository -Name "PSGallery" -InstallationPolicy Trusted`.
   - Optional parameter:
     - `-Proxy`. Specifies a proxy server for the request.
These steps will download the Az.ApplicationMonitor module from PowerShell Galle
1. Ensure that all prerequisites for PowerShell Gallery are met.
2. Run PowerShell as Admin with an elevated execution policy.
3. Install the Az.ApplicationMonitor module.
- - Reference: [Install-Module](/powershell/module/powershellget/install-module?view=powershell-6).
+ - Reference: [Install-Module](/powershell/module/powershellget/install-module).
   - Command: `Install-Module -Name Az.ApplicationMonitor`.
   - Optional parameters:
     - `-Proxy`. Specifies a proxy server for the request.
For more information, see [Installing a PowerShell Module](/powershell/scripting
#### Unzip nupkg as a zip file by using Expand-Archive (v1.0.1.0)

- Description: The base version of Microsoft.PowerShell.Archive (v1.0.1.0) can't unzip nupkg files. Rename the file with the .zip extension.
-- Reference: [Expand-Archive](/powershell/module/microsoft.powershell.archive/expand-archive?view=powershell-6).
+- Reference: [Expand-Archive](/powershell/module/microsoft.powershell.archive/expand-archive).
- Command: ```console
For more information, see [Installing a PowerShell Module](/powershell/scripting
#### Unzip nupkg by using Expand-Archive (v1.1.0.0)

- Description: Use a current version of Expand-Archive to unzip nupkg files without changing the extension.
-- Reference: [Expand-Archive](/powershell/module/microsoft.powershell.archive/expand-archive?view=powershell-6) and [Microsoft.PowerShell.Archive](https://www.powershellgallery.com/packages/Microsoft.PowerShell.Archive/1.1.0.0).
+- Reference: [Expand-Archive](/powershell/module/microsoft.powershell.archive/expand-archive) and [Microsoft.PowerShell.Archive](https://www.powershellgallery.com/packages/Microsoft.PowerShell.Archive/1.1.0.0).
- Command: ```console
For more information, see [Installing a PowerShell Module](/powershell/scripting
Install the manually downloaded PowerShell module into a PowerShell directory so it will be discoverable by PowerShell sessions. For more information, see [Installing a PowerShell Module](/powershell/scripting/developer/module/installing-a-powershell-module).
-If you're installing the module into any other directory, manually import the module by using [Import-Module](/powershell/module/microsoft.powershell.core/import-module?view=powershell-6).
+If you're installing the module into any other directory, manually import the module by using [Import-Module](/powershell/module/microsoft.powershell.core/import-module).
> [!IMPORTANT]
> DLLs will install via relative paths.
azure-monitor Status Monitor V2 Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/status-monitor-v2-troubleshoot.md
You can use troubleshooting tools to see symptomatic behavior:
This product was written and tested using PowerShell v5.1. This module isn't compatible with PowerShell versions 6 or 7. We recommend using PowerShell v5.1 alongside newer versions.
-For more information, see [Using PowerShell 7 side by side with PowerShell 5.1](/powershell/scripting/install/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7.1#using-powershell-7-side-by-side-with-windows-powershell-51).
+For more information, see [Using PowerShell 7 side by side with PowerShell 5.1](/powershell/scripting/install/migrating-from-windows-powershell-51-to-powershell-7#using-powershell-7-side-by-side-with-windows-powershell-51).
### Conflict with IIS shared configuration
azure-monitor Container Insights Azure Redhat4 Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-azure-redhat4-setup.md
Title: Configure Azure Red Hat OpenShift v4.x with Container insights | Microsoft Docs description: This article describes how to configure monitoring for a Kubernetes cluster with Azure Monitor that's hosted on Azure Red Hat OpenShift version 4 or later. Previously updated : 06/30/2020 Last updated : 03/05/2021 # Configure Azure Red Hat OpenShift v4.x with Container insights
To enable monitoring for an Azure Red Hat OpenShift version 4 or later cluster t
`curl -o enable-monitoring.sh -L https://aka.ms/enable-monitoring-bash-script`
-1. To identify the *kubeContext* of your cluster, run the following commands
+1. Connect to your ARO v4 cluster by using the instructions in [Tutorial: Connect to an Azure Red Hat OpenShift 4 cluster](../../openshift/tutorial-connect-cluster.md).
- ```
- adminUserName=$(az aro list-credentials -g $clusterResourceGroup -n $clusterName --query 'kubeadminUsername' -o tsv)
- adminPassword=$(az aro list-credentials -g $clusterResourceGroup -n $clusterName --query 'kubeadminPassword' -o tsv)
- apiServer=$(az aro show -g $clusterResourceGroup -n $clusterName --query apiserverProfile.url -o tsv)
- oc login $apiServer -u $adminUserName -p $adminPassword
- # openshift project name for Container insights
- openshiftProjectName="azure-monitor-for-containers"
- oc new-project $openshiftProjectName
- # get the kube config context
- kubeContext=$(oc config current-context)
- ```
-
-1. Copy the value for later use.
### Integrate with an existing workspace
If you don't have a workspace to specify, you can skip to the [Integrate with th
1. In the output, find the workspace name, and then copy the full resource ID of that Log Analytics workspace under the field **ID**.
-1. To enable monitoring, run the following command. Replace the values for the `azureAroV4ClusterResourceId`, `logAnalyticsWorkspaceResourceId`, and `kubeContext` parameters.
+1. To enable monitoring, run the following command. Replace the values for the `azureAroV4ClusterResourceId` and `logAnalyticsWorkspaceResourceId` parameters.
```bash
- export azureAroV4ClusterResourceId=“/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/<clusterName>”
- export logAnalyticsWorkspaceResourceId=“/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/microsoft.operationalinsights/workspaces/<workspaceName>”
- export kubeContext="<kubeContext name of your ARO v4 cluster>"
+ export azureAroV4ClusterResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/<clusterName>"
+ export logAnalyticsWorkspaceResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/microsoft.operationalinsights/workspaces/<workspaceName>"
``` Here is the command to run once you've populated the two variables with the export commands:
- `bash enable-monitoring.sh --resource-id $azureAroV4ClusterResourceId --kube-context $kubeContext --workspace-id $logAnalyticsWorkspaceResourceId`
+ `bash enable-monitoring.sh --resource-id $azureAroV4ClusterResourceId --workspace-id $logAnalyticsWorkspaceResourceId`
After you've enabled monitoring, it might take about 15 minutes before you can view the health metrics for the cluster.
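The two exported values above are plain Azure resource ID strings. As a minimal sketch of how the cluster resource ID is composed (the subscription ID, resource group, and cluster name below are hypothetical placeholders, not values from this article):

```shell
# Hypothetical values for illustration only
subscriptionId="00000000-0000-0000-0000-000000000000"
resourceGroupName="my-aro-rg"
clusterName="my-aro-cluster"

# Compose the cluster resource ID in the shape the export command above expects
azureAroV4ClusterResourceId="/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/${clusterName}"
echo "$azureAroV4ClusterResourceId"
```

In practice you'd substitute the real IDs, for example from `az aro show --query id -o tsv`.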
In this example, you're not required to pre-create or specify an existing worksp
The default workspace that's created is in the format of *DefaultWorkspace-\<GUID>-\<Region>*.
-Replace the values for the `azureAroV4ClusterResourceId` and `kubeContext` parameters.
+Replace the value for the `azureAroV4ClusterResourceId` parameter.
```bash export azureAroV4ClusterResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/<clusterName>"
-export kubeContext="<kubeContext name of your ARO v4 cluster>"
``` For example:
-`bash enable-monitoring.sh --resource-id $azureAroV4ClusterResourceId --kube-context $kubeContext`
+ `bash enable-monitoring.sh --resource-id $azureAroV4ClusterResourceId`
After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
The multi-cluster view in Container insights highlights your Azure Red Hat OpenS
- To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus-integration.md). -- To learn how to stop monitoring your cluster by using Container insights, see [How to stop monitoring your Azure Red Hat OpenShift cluster](./container-insights-optout-openshift-v3.md).
+- To learn how to stop monitoring your cluster by using Container insights, see [How to stop monitoring your Azure Red Hat OpenShift cluster](./container-insights-optout-openshift-v3.md).
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs.md
See [Create diagnostic settings to send platform logs and metrics to different d
[Create a diagnostic setting](../essentials/diagnostic-settings.md) to send resource logs to a Log Analytics workspace. This data is stored in tables as described in [Structure of Azure Monitor Logs](../logs/data-platform-logs.md). The tables used by resource logs depend on what type of collection the resource is using: -- Azure diagnostics - All data written is to the _AzureDiagnostics_ table.
+- Azure diagnostics - All data is written to the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table.
- Resource-specific - Data is written to an individual table for each category of the resource. ### Azure diagnostics mode
-In this mode, all data from any diagnostic setting will be collected in the _AzureDiagnostics_ table. This is the legacy method used today by most Azure services. Since multiple resource types send data to the same table, its schema is the superset of the schemas of all the different data types being collected.
+In this mode, all data from any diagnostic setting will be collected in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. This is the legacy method used today by most Azure services. Since multiple resource types send data to the same table, its schema is the superset of the schemas of all the different data types being collected. See [AzureDiagnostics reference](/azure/azure-monitor/reference/tables/azurediagnostics) for details on the structure of this table and how it works with this potentially large number of columns.
Consider the following example where diagnostic settings are being collected in the same workspace for the following data types:
You can modify an existing diagnostic setting to resource-specific mode. In this
Continue to watch [Azure Updates](https://azure.microsoft.com/updates/) blog for announcements about Azure services supporting Resource-Specific mode.
-### Column limit in AzureDiagnostics
-There is a 500 property limit for any table in Azure Monitor Logs. Once this limit is reached, any rows containing data with any property outside of the first 500 will be dropped at ingestion time. The *AzureDiagnostics* table is in particular susceptible to this limit since it includes properties for all Azure services writing to it.
-
-If you're collecting resource logs from multiple services, _AzureDiagnostics_ may exceed this limit, and data will be missed. Until all Azure services support resource-specific mode, you should configure resources to write to multiple workspaces to reduce the possibility of reaching the 500 column limit.
-
-### Azure Data Factory
-Azure Data Factory, because of a detailed set of logs, is a service that is known to write a large number of columns and potentially cause _AzureDiagnostics_ to exceed its limit. For any diagnostic settings configured before the resource-specific mode was enabled, there will be a new column created for every uniquely named user parameter against any activity. More columns will be created because of the verbose nature of activity inputs and outputs.
-
-You should migrate your logs to use the resource-specific mode as soon as possible. If you are unable to do so immediately, an interim alternative is to isolate Azure Data Factory logs into their own workspace to minimize the chance of these logs impacting other log types being collected in your workspaces.
- ## Send to Azure Event Hubs Send resource logs to an event hub to send them outside of Azure, for example to a third-party SIEM or other log analytics solutions. Resource logs from event hubs are consumed in JSON format with a `records` element containing the records in each payload. The schema depends on the resource type as described in [Common and service-specific schema for Azure Resource Logs](resource-logs-schema.md).
Within the PT1H.json file, each event is stored with the following format. This
## Next steps * [Read more about resource logs](../essentials/platform-logs-overview.md).
-* [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md).
+* [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md).
azure-monitor Data Ingestion Time https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/data-ingestion-time.md
Heartbeat
``` ## Next steps
-* Read the [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/log-analytics/v1_1/) for Azure Monitor.
+* Read the [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/monitor/v1_3/) for Azure Monitor.
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 01/21/2021 Last updated : 03/09/2021 # FAQs About Azure NetApp Files
Make sure that `CaseSensitiveLookup` is enabled on the Windows client to speed u
Example: `Mount -o rsize=1024 -o wsize=1024 -o mtype=hard \\10.x.x.x\testvol X:*`
+### How does Azure NetApp Files support NFSv4.1 file-locking?
+
+For NFSv4.1 clients, Azure NetApp Files supports the NFSv4.1 file-locking mechanism that maintains the state of all file locks under a lease-based model.
+
+Per RFC 3530, Azure NetApp Files defines a single lease period for all state held by an NFS client. If the client does not renew its lease within the defined period, all states associated with the client's lease will be released by the server.
+
+For example, if a client mounting a volume becomes unresponsive or crashes beyond the timeouts, the locks will be released. The client can renew its lease explicitly or implicitly by performing operations such as reading a file.
+
+A grace period defines a period of special processing in which clients can try to reclaim their locking state during a server recovery. The default timeout for the leases is 30 seconds with a grace period of 45 seconds. After that time, the client's lease will be released.
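As a hedged illustration of the client side, a Linux client negotiates NFSv4.1 locking automatically when it mounts with version 4.1; the server IP, export path, and mount point below are placeholders:

```
# Placeholder server IP, export path, and mount point; NFSv4.1 lease-based
# locking is negotiated when the client mounts with vers=4.1.
sudo mount -t nfs -o rw,hard,vers=4.1 10.0.0.4:/testvol /mnt/testvol
```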
+ ## SMB FAQs ### Which SMB versions are supported by Azure NetApp Files?
azure-netapp-files Azure Netapp Files Sdk Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-sdk-cli.md
The table below lists the supported CLI tools and their command reference.
| Tool | Command reference | ||--| | Azure CLI | [az netappfiles](/cli/azure/netappfiles) |
-| PowerShell | [Azure PowerShell for Azure NetApp Files](/powershell/module/az.netappfiles/?view=azps-2.5.0#netapp_files&preserve-view=true) |
+| PowerShell | [Azure PowerShell for Azure NetApp Files](/powershell/module/az.netappfiles/#netapp_files) |
## Code samples
azure-percept Dev Tools Installer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/dev-tools-installer.md
The Dev Tools Pack Installer is a one-stop solution that installs and configures
* [Docker 19.03](https://www.docker.com/) * [PIP3](https://pip.pypa.io/en/stable/user_guide/) * [TensorFlow 1.13](https://www.tensorflow.org/)
-* [Azure Machine Learning SDK 1.1](https://docs.microsoft.com/python/api/overview/azure/ml/?view=azure-ml-py)
+* [Azure Machine Learning SDK 1.1](https://docs.microsoft.com/python/api/overview/azure/ml/)
## Optional Tools Available for Installation
azure-portal How To Manage Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/how-to-manage-azure-support-request.md
Follow these guidelines when you use the file upload option:
* Files can't be larger than 4 MB. * All files must have a file name extension, such as *.docx* or *.xlsx*. The following table shows the filename extensions that are allowed for upload.
-| 0-9, A-C | D-G | H-M | N-P | R-T | U-W | X-Z |
+| 0-9, A-C | D-G | H-N | O-Q | R-T | U-W | X-Z |
|-|-|-|-|-|||
-| .7z | .dat | .har | .odx | .rar | .tdb | .xlam |
-| .a | .db | .hwl | .oft | .rdl | .tdf | .xlr |
-| .abc | .DMP | .ics | .old | .rdlc | .text | .xls |
-| .adm | .do_ | .ini | .one | .re_ | .thmx | .xlsb |
-| .aspx | .doc | .java | .osd | .reg | .tif | .xlsm |
-| .ATF | .docm | .jpg | .OUT | .remove | .trc | .xlsx |
-| .b | .docx | .LDF | .p1 | .ren | .TTD | .xlt |
-| .ba_ | .dotm | .letterhead | .pcap | .rename | .tx_ | .xltx |
-| .bak | .dotx | .lnk | .pdb | .rft | .txt | .xml |
-| .bat | .dtsx | .lo_ | .pdf | .rpt | .uccapilog | .xmla |
-| .blg | .eds | .log | .piz | .rte | .uccplog | .xps |
-| .CA_ | .emf | .lpk | .pmls | .rtf | .udcx | .xsd |
-| .CAB | .eml | .manifest | .png | .run | .vb_ | .xsn |
-| .cap | .emz | .master | .potx | .saz | .vbs_ | .xxx |
-| .catx | .err | .mdmp | .ppt | .sql | .vcf | .z_ |
-| .CFG | .etl | .mof | .pptm | .sqlplan | .vsd | .z01 |
-| .compressed | .evt | .mp3 | .pptx | .stp | .wdb | .z02 |
-| .Config | .evtx | .mpg | .prn | .svclog | .wks | .zi |
-| .cpk | .EX | .ms_ | .psf | - | .wma | .zi_ |
-| .cpp | .ex_ | .msg | .pst | - | .wmv | .zip |
-| .cs | .ex0 | .msi | .pub | - | .wmz | .zip_ |
-| .CSV | .FRD | .mso | - | - | .wps | .zipp |
-| .cvr | .gif | .msu | - | - | .wpt | .zipped |
-| - | .guid | .nfo | - | - | .wsdl | .zippy |
-| - | .gz | - | - | - | .wsp | .zipx |
-| - | - | - | - | - | .wtl | .zit |
+| .7z | .dat | .har | .odx | .rar | .uccapilog | .xlam |
+| .a | .db | .hwl | .oft | .rdl | .uccplog | .xlr |
+| .abc | .DMP | .ics | .old | .rdlc | .udcx | .xls |
+| .adm | .do_ | .ini | .one | .re_ | .vb_ | .xlsb |
+| .aspx | .doc | .java | .osd | .remove | .vbs_ | .xlsm |
+| .ATF | .docm | .jpg | .OUT | .ren | .vcf | .xlsx |
+| .b | .docx | .LDF | .p1 | .rename | .vsd | .xlt |
+| .ba_ | .dotm | .letterhead | .pcap | .rft | .wdb | .xltx |
+| .bak | .dotx | .lo_ | .pdb | .rpt | .wks | .xml |
+| .blg | .dtsx | .log | .pdf | .rte | .wma | .xmla |
+| .CA_ | .eds | .lpk | .piz | .rtf | .wmv | .xps |
+| .CAB | .emf | .manifest | .pmls | .run | .wmz | .xsd |
+| .cap | .eml | .master | .png | .saz | .wps | .xsn |
+| .catx | .emz | .mdmp | .potx | .sql | .wpt | .xxx |
+| .CFG | .err | .mof | .ppt | .sqlplan | .wsdl | .z_ |
+| .compressed | .etl | .mp3 | .pptm | .stp | .wsp | .z01 |
+| .Config | .evt | .mpg | .pptx | .svclog | .wtl | .z02 |
+| .cpk | .evtx | .ms_ | .prn | .tdb | - | .zi |
+| .cpp | .EX | .msg | .psf | .tdf | - | .zi_ |
+| .cs | .ex_ | .mso | .pst | .text | - | .zip |
+| .CSV | .ex0 | .msu | .pub | .thmx | - | .zip_ |
+| .cvr | .FRD | .nfo | - | .tif | - | .zipp |
+| - | .gif | - | - | .trc | - | .zipped |
+| - | .guid | - | - | .TTD | - | .zippy |
+| - | .gz | - | - | .tx_ | - | .zipx |
+| - | - | - | - | .txt | - | .zit |
| - | - | - | - | - | - | .zix | | - | - | - | - | - | - | .zzz |
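The two upload rules above (files must be 4 MB or smaller and must have a file name extension) can be pre-checked locally before you attach a file. A minimal POSIX-shell sketch; `check_upload` is a hypothetical helper, not part of the Azure portal:

```shell
# Hypothetical pre-flight check for the upload rules above:
# the file must be 4 MB or smaller and must have a file name extension.
check_upload() {
  file="$1"
  size=$(wc -c < "$file")
  if [ "$size" -gt $((4 * 1024 * 1024)) ]; then
    echo "rejected: larger than 4 MB"
    return 1
  fi
  case "${file##*/}" in
    *.*) echo "ok" ;;
    *)   echo "rejected: no file name extension"; return 1 ;;
  esac
}

printf 'log line\n' > support.log
check_upload support.log
```

This only checks size and the presence of an extension; checking against the allowed-extension table itself is left out for brevity.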
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/add-template-to-azure-pipelines.md
Title: CI/CD with Azure Pipelines and templates description: Describes how to configure continuous integration in Azure Pipelines by using Azure Resource Manager templates. It shows how to use a PowerShell script, or copy files to a staging location and deploy from there. Previously updated : 02/05/2021 Last updated : 03/09/2021 # Integrate ARM templates with Azure Pipelines
When you select **Save**, the build pipeline is automatically run. Go back to th
## Next steps
-To learn about using ARM templates with GitHub Actions, see [Deploy Azure Resource Manager templates by using GitHub Actions](deploy-github-actions.md).
+* To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/).
+* To learn about using ARM templates with GitHub Actions, see [Deploy Azure Resource Manager templates by using GitHub Actions](deploy-github-actions.md).
azure-resource-manager Copy Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/copy-properties.md
The following example shows how to apply `copy` to the `dataDisks` property on a
"type": "int", "minValue": 0, "maxValue": 16,
- "defaultValue": 16,
+ "defaultValue": 3,
"metadata": { "description": "The number of dataDisks to create." }
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cli.md
Title: Deploy resources with Azure CLI and template description: Use Azure Resource Manager and Azure CLI to deploy resources to Azure. The resources are defined in a Resource Manager template or a Bicep file. Previously updated : 03/02/2021 Last updated : 03/04/2021 # Deploy resources with ARM templates and Azure CLI
-This article explains how to use Azure CLI with Azure Resource Manager templates (ARM templates) or Bicep file to deploy your resources to Azure. If you aren't familiar with the concepts of deploying and managing your Azure solutions, see [template deployment overview](overview.md) or [Bicep overview](bicep-overview.md).
+This article explains how to use Azure CLI with Azure Resource Manager templates (ARM templates) or Bicep files to deploy your resources to Azure. If you aren't familiar with the concepts of deploying and managing your Azure solutions, see [template deployment overview](overview.md) or [Bicep overview](bicep-overview.md).
-The deployment commands changed in Azure CLI version 2.2.0. The examples in this article require Azure CLI version 2.2.0 or later.
+The deployment commands changed in Azure CLI version 2.2.0. The examples in this article require Azure CLI version 2.2.0 or later. To deploy Bicep files, you need [Azure CLI version 2.20.0 or later](/cli/azure/install-azure-cli).
[!INCLUDE [sample-cli-install](../../../includes/sample-cli-install.md)]
The deployment can take a few minutes to complete. When it finishes, you see a m
## Deploy remote template > [!NOTE]
-> Currently, Azure CLI doesn't support deploying remove Bicep files.
+> Currently, Azure CLI doesn't support deploying remote Bicep files. To deploy a remote Bicep file, use the Bicep CLI to compile the file to a JSON template first.
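A sketch of that compile-then-deploy workflow (the file and resource group names are placeholders; requires Azure CLI 2.20.0 or later):

```azurecli
# Compile main.bicep to main.json, then deploy the emitted JSON template
az bicep build --file main.bicep
az deployment group create \
  --resource-group ExampleGroup \
  --template-file main.json
```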
Instead of storing ARM templates on your local machine, you may prefer to store them in an external location. You can store templates in a source control repository (such as GitHub). Or, you can store them in an Azure storage account for shared access in your organization.
To avoid conflicts with concurrent deployments and to ensure unique entries in t
## Deploy template spec > [!NOTE]
-> Currently, Azure CLI doesn't support creating template specs by providing Bicep files. However you can create an ARM template or a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. Here is an [example](https://github.com/Azure/azure-docs-json-samples/blob/master/create-template-spec-using-template/azuredeploy.bicep).
+> Currently, Azure CLI doesn't support creating template specs by providing Bicep files. However, you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. Here is an [example](https://github.com/Azure/azure-docs-json-samples/blob/master/create-template-spec-using-template/azuredeploy.bicep).
Instead of deploying a local or remote template, you can create a [template spec](template-specs.md). The template spec is a resource in your Azure subscription that contains an ARM template. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec. This feature is currently in preview.
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-powershell.md
Title: Deploy resources with PowerShell and template
-description: Use Azure Resource Manager and Azure PowerShell to deploy resources to Azure. The resources are defined in a Resource Manager template.
+description: Use Azure Resource Manager and Azure PowerShell to deploy resources to Azure. The resources are defined in a Resource Manager template or a Bicep file.
Previously updated : 01/26/2021 Last updated : 03/04/2021 # Deploy resources with ARM templates and Azure PowerShell
-This article explains how to use Azure PowerShell with Azure Resource Manager templates (ARM templates) to deploy your resources to Azure. If you aren't familiar with the concepts of deploying and managing your Azure solutions, see [template deployment overview](overview.md).
+This article explains how to use Azure PowerShell with Azure Resource Manager templates (ARM templates) or Bicep files to deploy your resources to Azure. If you aren't familiar with the concepts of deploying and managing your Azure solutions, see [template deployment overview](overview.md) or [Bicep overview](bicep-overview.md).
+
+To deploy Bicep files, you need [Azure PowerShell version 5.6.0 or later](/powershell/azure/install-az-ps).
## Prerequisites
You can target your deployment to a resource group, subscription, management gro
- To deploy to a **resource group**, use [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment): ```azurepowershell
- New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateFile <path-to-template>
+ New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateFile <path-to-template-or-bicep>
``` - To deploy to a **subscription**, use [New-AzSubscriptionDeployment](/powershell/module/az.resources/new-azdeployment) which is an alias of the `New-AzDeployment` cmdlet: ```azurepowershell
- New-AzSubscriptionDeployment -Location <location> -TemplateFile <path-to-template>
+ New-AzSubscriptionDeployment -Location <location> -TemplateFile <path-to-template-or-bicep>
``` For more information about subscription level deployments, see [Create resource groups and resources at the subscription level](deploy-to-subscription.md).
You can target your deployment to a resource group, subscription, management gro
- To deploy to a **management group**, use [New-AzManagementGroupDeployment](/powershell/module/az.resources/New-AzManagementGroupDeployment). ```azurepowershell
- New-AzManagementGroupDeployment -Location <location> -TemplateFile <path-to-template>
+ New-AzManagementGroupDeployment -Location <location> -TemplateFile <path-to-template-or-bicep>
``` For more information about management group level deployments, see [Create resources at the management group level](deploy-to-management-group.md).
You can target your deployment to a resource group, subscription, management gro
- To deploy to a **tenant**, use [New-AzTenantDeployment](/powershell/module/az.resources/new-aztenantdeployment). ```azurepowershell
- New-AzTenantDeployment -Location <location> -TemplateFile <path-to-template>
+ New-AzTenantDeployment -Location <location> -TemplateFile <path-to-template-or-bicep>
``` For more information about tenant level deployments, see [Create resources at the tenant level](deploy-to-tenant.md).
When you specify a unique name for each deployment, you can run them concurrentl
To avoid conflicts with concurrent deployments and to ensure unique entries in the deployment history, give each deployment a unique name.
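One common way to keep deployment names unique is to suffix them with a timestamp. Sketched here in bash (the name prefix is arbitrary; the equivalent in PowerShell would use `Get-Date`):

```shell
# Suffix the deployment name with a UTC timestamp so concurrent runs don't collide
deploymentName="ExampleDeployment-$(date -u +%Y%m%d%H%M%S)"
echo "$deploymentName"
```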
-## Deploy local template
+## Deploy local template or Bicep file
You can deploy a template from your local machine or one that is stored externally. This section describes deploying a local template.
If you're deploying to a resource group that doesn't exist, create the resource
New-AzResourceGroup -Name ExampleGroup -Location "Central US" ```
-To deploy a local template, use the `-TemplateFile` parameter in the deployment command. The following example also shows how to set a parameter value that comes from the template.
+To deploy a local template or Bicep file, use the `-TemplateFile` parameter in the deployment command. The following example also shows how to set a parameter value that comes from the template.
```azurepowershell New-AzResourceGroupDeployment ` -Name ExampleDeployment ` -ResourceGroupName ExampleGroup `
- -TemplateFile c:\MyTemplates\azuredeploy.json
+ -TemplateFile <path-to-template-or-bicep>
``` The deployment can take several minutes to complete. ## Deploy remote template
+> [!NOTE]
+> Currently, Azure PowerShell doesn't support deploying remote Bicep files. To deploy a remote Bicep file, use the Bicep CLI to compile the file to a JSON template first.
+ Instead of storing ARM templates on your local machine, you may prefer to store them in an external location. You can store templates in a source control repository (such as GitHub). Or, you can store them in an Azure storage account for shared access in your organization. If you're deploying to a resource group that doesn't exist, create the resource group. The name of the resource group can only include alphanumeric characters, periods, underscores, hyphens, and parenthesis. It can be up to 90 characters. The name can't end in a period.
For more information, see [Use relative path for linked templates](./linked-temp
## Deploy template spec
+> [!NOTE]
+> Currently, Azure PowerShell doesn't support creating template specs by providing Bicep files. However, you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. Here is an [example](https://github.com/Azure/azure-docs-json-samples/blob/master/create-template-spec-using-template/azuredeploy.bicep).
Instead of deploying a local or remote template, you can create a [template spec](template-specs.md). The template spec is a resource in your Azure subscription that contains an ARM template. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec. This feature is currently in preview. The following examples show how to create and deploy a template spec.
To pass inline parameters, provide the names of the parameter with the `New-AzRe
```powershell $arrayParam = "value1", "value2" New-AzResourceGroupDeployment -ResourceGroupName testgroup `
- -TemplateFile c:\MyTemplates\demotemplate.json `
+ -TemplateFile <path-to-template-or-bicep> `
-exampleString "inline string" ` -exampleArray $arrayParam ```
You can also get the contents of file and provide that content as an inline para
```powershell $arrayParam = "value1", "value2" New-AzResourceGroupDeployment -ResourceGroupName testgroup `
- -TemplateFile c:\MyTemplates\demotemplate.json `
+ -TemplateFile <path-to-template-or-bicep> `
-exampleString $(Get-Content -Path c:\MyTemplates\stringcontent.txt -Raw) ` -exampleArray $arrayParam ```
$hash1 = @{ Name = "firstSubnet"; AddressPrefix = "10.0.0.0/24"}
$hash2 = @{ Name = "secondSubnet"; AddressPrefix = "10.0.1.0/24"} $subnetArray = $hash1, $hash2 New-AzResourceGroupDeployment -ResourceGroupName testgroup `
- -TemplateFile c:\MyTemplates\demotemplate.json `
+ -TemplateFile <path-to-template-or-bicep> `
-exampleArray $subnetArray ``` ### Parameter files
-Rather than passing parameters as inline values in your script, you may find it easier to use a JSON file that contains the parameter values. The parameter file can be a local file or an external file with an accessible URI.
+Rather than passing parameters as inline values in your script, you may find it easier to use a JSON file that contains the parameter values. The parameter file can be a local file or an external file with an accessible URI. Both ARM templates and Bicep files use JSON parameter files.
For more information about the parameter file, see [Create Resource Manager parameter file](parameter-files.md).
To pass a local parameter file, use the `TemplateParameterFile` parameter:
```powershell New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
- -TemplateFile c:\MyTemplates\azuredeploy.json `
+ -TemplateFile <path-to-template-or-bicep> `
-TemplateParameterFile c:\MyTemplates\storage.parameters.json ```
azure-resource-manager Quickstart Create Bicep Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/quickstart-create-bicep-use-visual-studio-code.md
The Bicep extension for Visual Studio Code provides language support and resource autocompletion. These tools help create and validate [Bicep](./bicep-overview.md) files. In this quickstart, you use the extension to create a Bicep file from scratch. While doing so, you experience the extension's capabilities, such as validation and completions.
-To complete this quickstart, you need [Visual Studio Code](https://code.visualstudio.com/), with the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) installed. You also need either the latest [Azure CLI](/cli/azure/) or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az?view=azps-3.7.0&preserve-view=true) installed and authenticated.
+To complete this quickstart, you need [Visual Studio Code](https://code.visualstudio.com/), with the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) installed. You also need either the latest [Azure CLI](/cli/azure/) or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az) installed and authenticated.
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
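As a hedged sketch of the kind of file you build in this quickstart (the resource name and values are illustrative assumptions, not taken from the article), a minimal Bicep file that declares a storage account looks like this:

```bicep
// Location defaults to the resource group's location
param location string = resourceGroup().location

resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: 'uniquestorage001' // must be globally unique
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```

The extension offers completions for resource types, API versions, and property names as you type each line.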
azure-resource-manager Quickstart Create Templates Use The Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md
Title: Deploy template - Azure portal description: Learn how to create your first Azure Resource Manager template (ARM template) using the Azure portal, and how to deploy it. Previously updated : 01/26/2021 Last updated : 03/09/2021 #Customer intent: As a developer new to Azure deployment, I want to learn how to use the Azure portal to create and edit Resource Manager templates, so I can use the templates to deploy Azure resources.
Many experienced template developers use this method to generate templates when
![Select Create a resource from Azure portal menu](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-template-tutorial-create-a-resource.png) 1. In the search box, type **storage account**, and then press **[ENTER]**.
-1. Select **Create**.
+1. Select the down arrow next to **Create**, and then select **Storage account**.
![Create an Azure storage account](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-template-tutorial-create-storage-account-portal.png)
Many experienced template developers use this method to generate templates when
The main pane shows the template. It's a JSON file with six top-level elements: `schema`, `contentVersion`, `parameters`, `variables`, `resources`, and `outputs`. For more information, see [Understand the structure and syntax of ARM templates](./template-syntax.md).
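For orientation, an empty template showing those top-level elements looks like this (a skeleton, not the template the portal generates):

```json
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}
```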
- There are eight parameters defined. One of them is called **storageAccountName**. The second highlighted part on the previous screenshot shows how to reference this parameter in the template. In the next section, you edit the template to use a generated name for the storage account.
+ There are nine parameters defined. One of them is called **storageAccountName**. The second highlighted part on the previous screenshot shows how to reference this parameter in the template. In the next section, you edit the template to use a generated name for the storage account.
In the template, one Azure resource is defined. The type is `Microsoft.Storage/storageAccounts`. Take a look at how the resource is defined and at the definition structure. 1. Select **Download** from the top of the screen.
Azure requires that each Azure service has a unique name. The deployment could f
- Remove the **storageAccountName** parameter as shown in the previous screenshot. - Add one variable called **storageAccountName** as shown in the previous screenshot:
- ```json
- "storageAccountName": "[concat(uniqueString(subscription().subscriptionId), 'storage')]"
- ```
+ ```json
+ "storageAccountName": "[concat(uniqueString(subscription().subscriptionId), 'storage')]"
+ ```
- Two template functions are used here: `concat()` and `uniqueString()`.
+ Two template functions are used here: `concat()` and `uniqueString()`.
- Update the name element of the **Microsoft.Storage/storageAccounts** resource to use the newly defined variable instead of the parameter:
- ```json
- "name": "[variables('storageAccountName')]",
- ```
-
- The final template shall look like:
-
- ```json
- {
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string"
- },
- "accountType": {
- "type": "string"
- },
- "kind": {
- "type": "string"
- },
- "accessTier": {
- "type": "string"
- },
- "minimumTlsVersion": {
- "type": "string"
- },
- "supportsHttpsTrafficOnly": {
- "type": "bool"
- },
- "allowBlobPublicAccess": {
- "type": "bool"
- }
- },
- "variables": {
- "storageAccountName": "[concat(uniqueString(subscription().subscriptionId), 'storage')]"
- },
- "resources": [
- {
- "name": "[variables('storageAccountName')]",
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-06-01",
- "location": "[parameters('location')]",
- "properties": {
- "accessTier": "[parameters('accessTier')]",
- "minimumTlsVersion": "[parameters('minimumTlsVersion')]",
- "supportsHttpsTrafficOnly": "[parameters('supportsHttpsTrafficOnly')]",
- "allowBlobPublicAccess": "[parameters('allowBlobPublicAccess')]"
- },
- "dependsOn": [],
- "sku": {
- "name": "[parameters('accountType')]"
- },
- "kind": "[parameters('kind')]",
- "tags": {}
- }
- ],
- "outputs": {}
- }
- ```
+ ```json
+ "name": "[variables('storageAccountName')]",
+ ```
+
+     The final template should look like this:
+
+ ```json
+ {
+ "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string"
+ },
+ "accountType": {
+ "type": "string"
+ },
+ "kind": {
+ "type": "string"
+ },
+ "accessTier": {
+ "type": "string"
+ },
+ "minimumTlsVersion": {
+ "type": "string"
+ },
+ "supportsHttpsTrafficOnly": {
+ "type": "bool"
+ },
+ "allowBlobPublicAccess": {
+ "type": "bool"
+ },
+ "allowSharedKeyAccess": {
+ "type": "bool"
+ }
+ },
+ "variables": {
+ "storageAccountName": "[concat(uniqueString(subscription().subscriptionId), 'storage')]"
+ },
+ "resources": [
+ {
+ "name": "[variables('storageAccountName')]",
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2019-06-01",
+ "location": "[parameters('location')]",
+ "properties": {
+ "accessTier": "[parameters('accessTier')]",
+ "minimumTlsVersion": "[parameters('minimumTlsVersion')]",
+ "supportsHttpsTrafficOnly": "[parameters('supportsHttpsTrafficOnly')]",
+ "allowBlobPublicAccess": "[parameters('allowBlobPublicAccess')]",
+ "allowSharedKeyAccess": "[parameters('allowSharedKeyAccess')]"
+ },
+ "dependsOn": [],
+ "sku": {
+ "name": "[parameters('accountType')]"
+ },
+ "kind": "[parameters('kind')]",
+ "tags": {}
+ }
+ ],
+ "outputs": {}
+ }
+ ```
1. Select **Save**. 1. Enter the following values:
Azure requires that each Azure service has a unique name. The deployment could f
|**Minimum TLS Version**|Enter **TLS1_0**. | |**Supports Https Traffic Only**| Select **true** for this quickstart. | |**Allow Blob Public Access**| Select **false** for this quickstart. |
+ |**Allow Shared Key Access**| Select **true** for this quickstart. |
1. Select **Review + create**. 1. Select **Create**.
azure-resource-manager Quickstart Create Templates Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md
The Azure Resource Manager Tools for Visual Studio Code provide language support, resource snippets, and resource autocompletion. These tools help create and validate Azure Resource Manager templates (ARM templates). In this quickstart, you use the extension to create an ARM template from scratch. While doing so, you experience the extension's capabilities, such as ARM template snippets, validation, completions, and parameter file support.
-To complete this quickstart, you need [Visual Studio Code](https://code.visualstudio.com/), with the [Azure Resource Manager tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools) installed. You also need either the [Azure CLI](/cli/azure/) or the [Azure PowerShell module](/powershell/azure/new-azureps-module-az?view=azps-3.7.0) installed and authenticated.
+To complete this quickstart, you need [Visual Studio Code](https://code.visualstudio.com/), with the [Azure Resource Manager tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools) installed. You also need either the [Azure CLI](/cli/azure/) or the [Azure PowerShell module](/powershell/azure/new-azureps-module-az) installed and authenticated.
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
azure-resource-manager Template Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-deploy-what-if.md
Title: Template deployment what-if
description: Determine what changes will happen to your resources before deploying an Azure Resource Manager template. Previously updated : 02/05/2021 Last updated : 03/09/2021 # ARM template deployment what-if operation
You can use the what-if operation through the Azure SDKs.
## Next steps
+- To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/).
- If you notice incorrect results from the what-if operation, please report the issues at [https://aka.ms/whatifissues](https://aka.ms/whatifissues). - For a Microsoft Learn module that covers using what if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).-- To deploy templates with Azure PowerShell, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).-- To deploy templates with Azure CLI, see [Deploy resources with ARM templates and Azure CLI](deploy-cli.md).-- To deploy templates with REST, see [Deploy resources with ARM templates and Resource Manager REST API](deploy-rest.md).
azure-sql Auditing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auditing-overview.md
Previously updated : 03/03/2021 Last updated : 03/09/2021 # Auditing for Azure SQL Database and Azure Synapse Analytics
azure-sql Database Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-export.md
while ($exportStatus.Status -eq "InProgress")
[Console]::WriteLine("") $exportStatus ```
+## Cancel the export request
+
+Use the [Database Operations - Cancel API](https://docs.microsoft.com/rest/api/sql/databaseoperations/cancel)
+or the PowerShell [Stop-AzSqlDatabaseActivity](https://docs.microsoft.com/powershell/module/az.sql/Stop-AzSqlDatabaseActivity?view=azps-5.5.0) cmdlet. Here's an example of the PowerShell command:
+
+```powershell
+Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -OperationId $Operation.OperationId
+```
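The `$Operation.OperationId` value used above has to come from the list of operations currently running against the database. As a sketch (assuming the Az.Sql module is installed and the same variables are in scope; the `State` property name is an assumption based on the cmdlet's output), you could retrieve it with `Get-AzSqlDatabaseActivity`:

```powershell
# List operations on the database and pick the one still in progress
$Operation = Get-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName `
    -ServerName $ServerName -DatabaseName $DatabaseName |
    Where-Object { $_.State -eq "InProgress" }

# Cancel it by OperationId
Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName `
    -ServerName $ServerName -DatabaseName $DatabaseName `
    -OperationId $Operation.OperationId
```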
## Next steps
azure-sql Database Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-import.md
az sql db import --resource-group "<resourceGroup>" --server "<server>" --name "
> [!TIP] > For another script example, see [Import a database from a BACPAC file](scripts/import-from-bacpac-powershell.md).
+## Cancel the import request
+
+Use the [Database Operations - Cancel API](https://docs.microsoft.com/rest/api/sql/databaseoperations/cancel)
+or the PowerShell [Stop-AzSqlDatabaseActivity](https://docs.microsoft.com/powershell/module/az.sql/Stop-AzSqlDatabaseActivity?view=azps-5.5.0) cmdlet. Here's an example of the PowerShell command:
+
+```powershell
+Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -OperationId $Operation.OperationId
+```
++ ## Limitations - Importing to a database in elastic pool isn't supported. You can import data into a single database and then move the database to an elastic pool.
azure-sql Manage Application Rolling Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/manage-application-rolling-upgrade.md
ALTER DATABASE <Prod_DB>
SET (ALLOW_CONNECTIONS = NO) ```
-2. Terminate geo-replication by disconnecting the secondary (11). This action creates an independent but fully synchronized copy of the production database. This database will be upgraded. The following example uses Transact-SQL but [PowerShell](/powershell/module/az.sql/remove-azsqldatabasesecondary?view=azps-1.5.0&preserve-view=true) is also available.
+2. Terminate geo-replication by disconnecting the secondary (11). This action creates an independent but fully synchronized copy of the production database. This database will be upgraded. The following example uses Transact-SQL but [PowerShell](/powershell/module/az.sql/remove-azsqldatabasesecondary) is also available.
```sql -- Disconnect the secondary, terminating geo-replication
azure-sql Secure Database Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/secure-database-tutorial.md
For information about configuring Azure AD, see:
- [Add your own domain name to Azure AD](../../active-directory/fundamentals/add-custom-domain.md) - [Microsoft Azure now supports federation with Windows Server AD](https://azure.microsoft.com/blog/20../../windows-azure-now-supports-federation-with-windows-server-active-directory/) - [Administer your Azure AD directory](../../active-directory/fundamentals/active-directory-whatis.md)-- [Manage Azure AD using PowerShell](/powershell/azure/?view=azureadps-2.0)
+- [Manage Azure AD using PowerShell](/powershell/azure/)
- [Hybrid identity required ports and protocols](../../active-directory/hybrid/reference-connect-ports.md) ## Manage database access
azure-sql Transparent Data Encryption Byok Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-configure.md
For adding permissions to your server on a Managed HSM, add the 'Managed HSM Cry
## Add the Key Vault key to the server and set the TDE Protector -- Use the [Get-AzKeyVaultKey](/powershell/module/az.keyvault/get-azkeyvaultkey?view=azps-2.4.0) cmdlet to retrieve the key ID from key vault
+- Use the [Get-AzKeyVaultKey](/powershell/module/az.keyvault/get-azkeyvaultkey) cmdlet to retrieve the key ID from key vault
- Use the [Add-AzSqlServerKeyVaultKey](/powershell/module/az.sql/add-azsqlserverkeyvaultkey) cmdlet to add the key from the Key Vault to the server. - Use the [Set-AzSqlServerTransparentDataEncryptionProtector](/powershell/module/az.sql/set-azsqlservertransparentdataencryptionprotector) cmdlet to set the key as the TDE protector for all server resources. - Use the [Get-AzSqlServerTransparentDataEncryptionProtector](/powershell/module/az.sql/get-azsqlservertransparentdataencryptionprotector) cmdlet to confirm that the TDE protector was configured as intended.
azure-sql Frequently Asked Questions Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/frequently-asked-questions-faq.md
Yes, you can. For instructions, see [Move resources across regions](../database/
**How can I delete my Managed Instance?**
-You can delete Managed Instances via Azure portal, [PowerShell](/powershell/module/az.sql/remove-azsqlinstance?preserve-view=true&view=azps-4.3.0), [Azure CLI](/cli/azure/sql/mi#az-sql-mi-delete) or [Resource Manager REST APIs](/rest/api/sql/managedinstances/delete).
+You can delete Managed Instances via Azure portal, [PowerShell](/powershell/module/az.sql/remove-azsqlinstance), [Azure CLI](/cli/azure/sql/mi#az-sql-mi-delete) or [Resource Manager REST APIs](/rest/api/sql/managedinstances/delete).
**How much time does it take to create or update an instance, or to restore a database?**
azure-sql Migrate To Instance From Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/migrate-to-instance-from-sql-server.md
If you have resolved all identified migration blockers and are continuing the mi
SQL Managed Instance guarantees 99.99% availability even in critical scenarios, so overhead caused by these features cannot be disabled. For more information, see [the root causes that might cause different performance on SQL Server and Azure SQL Managed Instance](https://azure.microsoft.com/blog/key-causes-of-performance-differences-between-sql-managed-instance-and-sql-server/).
+#### In-Memory OLTP (Memory-optimized tables)
+
+SQL Server provides the In-Memory OLTP capability, which allows the use of memory-optimized tables, memory-optimized table types, and natively compiled SQL modules to run workloads that have high-throughput, low-latency transactional processing requirements.
+
+> [!IMPORTANT]
+> In-Memory OLTP is only supported in the Business Critical tier in Azure SQL Managed Instance (and not supported in the General Purpose tier).
+
+If you have memory-optimized tables or memory-optimized table types in your on-premises SQL Server and you are looking to migrate to Azure SQL Managed Instance, you should either:
+
+- Choose the Business Critical tier for your target Azure SQL Managed Instance, which supports In-Memory OLTP, or
+- If you want to migrate to the General Purpose tier in Azure SQL Managed Instance, remove memory-optimized tables, memory-optimized table types, and natively compiled SQL modules that interact with memory-optimized objects before migrating your database(s). The following T-SQL queries can be used to identify all objects that need to be removed before migration to the General Purpose tier:
+
+```tsql
+SELECT * FROM sys.tables WHERE is_memory_optimized=1
+SELECT * FROM sys.table_types WHERE is_memory_optimized=1
+SELECT * FROM sys.sql_modules WHERE uses_native_compilation=1
+```
+
+To learn more about in-memory technologies, see [Optimize performance by using in-memory technologies in Azure SQL Database and Azure SQL Managed Instance](https://docs.microsoft.com/azure/azure-sql/in-memory-oltp-overview).
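If you take the General Purpose route, converting a memory-optimized table to a disk-based one can be sketched as follows. Table and column names here are illustrative assumptions; validate the approach against your actual schema, indexes, and constraints before migrating:

```tsql
-- Create a disk-based copy of the memory-optimized table (illustrative schema)
CREATE TABLE dbo.MyTable_Disk (
    Id INT NOT NULL PRIMARY KEY,
    Payload NVARCHAR(200) NOT NULL
);

-- Copy the data, then drop the memory-optimized original and reuse its name
INSERT INTO dbo.MyTable_Disk (Id, Payload)
SELECT Id, Payload FROM dbo.MyTable;

DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_Disk', 'MyTable';
```

Natively compiled modules that referenced the original table must be dropped and re-created as regular interpreted T-SQL modules.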
+ ### Create a performance baseline If you need to compare the performance of your workload on a managed instance with your original workload running on SQL Server, you would need to create a performance baseline that will be used for comparison.
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
You can choose compute and storage resources during deployment and then change t
> [!IMPORTANT] > Any discrepancy in the [managed instance virtual network requirements](../../managed-instance/connectivity-architecture-overview.md#network-requirements) can prevent you from creating new instances or using existing ones. Learn more about [creating new](../../managed-instance/virtual-network-subnet-create-arm-template.md) and [configuring existing](../../managed-instance/vnet-existing-add-subnet.md) networks.
+Another key consideration when selecting the target service tier in Azure SQL Managed Instance (General Purpose vs. Business Critical) is the availability of certain features, such as In-Memory OLTP, which is available only in the Business Critical tier.
+ ### SQL Server VM alternative Your business may have requirements that make SQL Server on Azure VMs a more suitable target than Azure SQL Managed Instance.
When migrating databases protected byΓÇ»[Transparent Data Encryption](../../data
Restore of system databases is not supported. To migrate instance-level objects (stored in master or msdb databases), script them using Transact-SQL (T-SQL) and then recreate them on the target managed instance.
+#### In-Memory OLTP (Memory-optimized tables)
+
+SQL Server provides the In-Memory OLTP capability, which allows the use of memory-optimized tables, memory-optimized table types, and natively compiled SQL modules to run workloads that have high-throughput, low-latency transactional processing requirements.
+
+> [!IMPORTANT]
+> In-Memory OLTP is only supported in the Business Critical tier in Azure SQL Managed Instance (and not supported in the General Purpose tier).
+
+If you have memory-optimized tables or memory-optimized table types in your on-premises SQL Server and you are looking to migrate to Azure SQL Managed Instance, you should either:
+
+- Choose the Business Critical tier for your target Azure SQL Managed Instance, which supports In-Memory OLTP, or
+- If you want to migrate to the General Purpose tier in Azure SQL Managed Instance, remove memory-optimized tables, memory-optimized table types, and natively compiled SQL modules that interact with memory-optimized objects before migrating your database(s). The following T-SQL queries can be used to identify all objects that need to be removed before migration to the General Purpose tier:
+
+```tsql
+SELECT * FROM sys.tables WHERE is_memory_optimized=1
+SELECT * FROM sys.table_types WHERE is_memory_optimized=1
+SELECT * FROM sys.sql_modules WHERE uses_native_compilation=1
+```
+
+To learn more about in-memory technologies, see [Optimize performance by using in-memory technologies in Azure SQL Database and Azure SQL Managed Instance](https://docs.microsoft.com/azure/azure-sql/in-memory-oltp-overview).
+ ## Leverage advanced features Be sure to take advantage of the advanced cloud-based features offered by SQL Managed Instance. For example, you no longer need to worry about managing backups as the service does it for you. You can restore to any [point in time within the retention period](../../database/recovery-using-backups.md#point-in-time-restore). Additionally, you do not need to worry about setting up high availability, as [high availability is built in](../../database/high-availability-sla.md).
azure-sql Availability Group Manually Configure Multiple Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-manually-configure-multiple-regions.md
For more information, see the following topics:
* [Always On Availability Groups](/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server) * [Azure Virtual Machines](../../../virtual-machines/index.yml) * [Azure Load Balancers](availability-group-manually-configure-tutorial.md#configure-internal-load-balancer)
-* [Azure Availability Sets](../../../virtual-machines/manage-availability.md)
+* [Azure Availability Sets](../../../virtual-machines/availability.md)
azure-sql Availability Group Manually Configure Prerequisites Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-manually-configure-prerequisites-tutorial.md
The following table summarizes the network configuration settings:
## Create availability sets
-Before you create virtual machines, you need to create availability sets. Availability sets reduce the downtime for planned or unplanned maintenance events. An Azure availability set is a logical group of resources that Azure places on physical fault domains and update domains. A fault domain ensures that the members of the availability set have separate power and network resources. An update domain ensures that members of the availability set aren't brought down for maintenance at the same time. For more information, see [Manage the availability of virtual machines](../../../virtual-machines/manage-availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json).
+Before you create virtual machines, you need to create availability sets. Availability sets reduce the downtime for planned or unplanned maintenance events. An Azure availability set is a logical group of resources that Azure places on physical fault domains and update domains. A fault domain ensures that the members of the availability set have separate power and network resources. An update domain ensures that members of the availability set aren't brought down for maintenance at the same time. For more information, see [Manage the availability of virtual machines](../../../virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json).
You need two availability sets. One is for the domain controllers. The second is for the SQL Server VMs.
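As a hedged sketch (resource group, names, and domain counts are illustrative assumptions, not values from the tutorial), the two availability sets could also be created with Azure PowerShell instead of the portal:

```powershell
# Availability set for the domain controllers
New-AzAvailabilitySet -ResourceGroupName "sql-ha-rg" -Name "adavailabilityset" `
    -Location "East US" -Sku Aligned `
    -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5

# Availability set for the SQL Server VMs
New-AzAvailabilitySet -ResourceGroupName "sql-ha-rg" -Name "sqlavailabilityset" `
    -Location "East US" -Sku Aligned `
    -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5
```

The `Aligned` SKU is what enables managed disks in the availability set, matching the Managed Disks recommendation later in this tutorial.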
The following table shows the settings for these two machines:
| **Diagnostics storage account** |*Automatically created* | >[!IMPORTANT]
- >You can only place a VM in an availability set when you create it. You can't change the availability set after a VM is created. See [Manage the availability of virtual machines](../../../virtual-machines/manage-availability.md).
+ >You can only place a VM in an availability set when you create it. You can't change the availability set after a VM is created. See [Manage the availability of virtual machines](../../../virtual-machines/availability.md).
Azure creates the virtual machines.
Before you proceed consider the following design decisions.
* **Storage - Azure Managed Disks**
- For the virtual machine storage, use Azure Managed Disks. Microsoft recommends Managed Disks for SQL Server virtual machines. Managed Disks handles storage behind the scenes. In addition, when virtual machines with Managed Disks are in the same availability set, Azure distributes the storage resources to provide appropriate redundancy. For additional information, see [Azure Managed Disks Overview](../../../virtual-machines/managed-disks-overview.md). For specifics about managed disks in an availability set, see [Use Managed Disks for VMs in an availability set](../../../virtual-machines/manage-availability.md#use-managed-disks-for-vms-in-an-availability-set).
+ For the virtual machine storage, use Azure Managed Disks. Microsoft recommends Managed Disks for SQL Server virtual machines. Managed Disks handles storage behind the scenes. In addition, when virtual machines with Managed Disks are in the same availability set, Azure distributes the storage resources to provide appropriate redundancy. For additional information, see [Azure Managed Disks Overview](../../../virtual-machines/managed-disks-overview.md). For specifics about managed disks in an availability set, see [Use Managed Disks for VMs in an availability set](../../../virtual-machines/availability.md).
* **Network - Private IP addresses in production**
Repeat the steps on the other SQL Server VM.
### Tuning Failover Cluster Network Thresholds
-When running Windows Failover Cluster nodes in Azure Vms with SQL Server availability groups, change the cluster setting to a more relaxed monitoring state. This will make the cluster much more stable and reliable. For details on this, see [IaaS with SQL Server - Tuning Failover Cluster Network Thresholds](/windows-server/troubleshoot/iaas-sql-failover-cluster).
+When running Windows Failover Cluster nodes in Azure VMs with SQL Server availability groups, change the cluster setting to a more relaxed monitoring state. This will make the cluster much more stable and reliable. For details on this, see [IaaS with SQL Server - Tuning Failover Cluster Network Thresholds](/windows-server/troubleshoot/iaas-sql-failover-cluster).
## <a name="endpoint-firewall"></a> Configure the firewall on each SQL Server VM
azure-sql Availability Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-overview.md
The following diagram illustrates an availability group for SQL Server on Azure
## VM redundancy
-To increase redundancy and high availability, SQL Server VMs should either be in the same [availability set](../../../virtual-machines/windows/tutorial-availability-sets.md#availability-set-overview), or different [availability zones](../../../availability-zones/az-overview.md).
+To increase redundancy and high availability, SQL Server VMs should either be in the same [availability set](../../../virtual-machines/availability-set-overview.md), or different [availability zones](../../../availability-zones/az-overview.md).
Placing a set of VMs in the same availability set protects from outages within a datacenter caused by equipment failure (VMs within an Availability Set do not share resources) or from updates (VMs within an Availability Set are not updated at the same time). Availability Zones protect against the failure of an entire datacenter, with each Zone representing a set of datacenters within a region. By ensuring resources are placed in different Availability Zones, no datacenter-level outage can take all of your VMs offline.
azure-sql Business Continuity High Availability Disaster Recovery Hadr Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/business-continuity-high-availability-disaster-recovery-hadr-overview.md
Azure VMs, storage, and networking have different operational characteristics th
### High-availability nodes in an availability set Availability sets in Azure enable you to place the high-availability nodes into separate fault domains and update domains. The Azure platform assigns an update domain and a fault domain to each virtual machine in your availability set. This configuration within a datacenter ensures that during either a planned or unplanned maintenance event, at least one virtual machine is available and meets the Azure SLA of 99.95 percent.
-To configure a high-availability setup, place all participating SQL Server virtual machines in the same availability set to avoid application or data loss during a maintenance event. Only nodes in the same cloud service can participate in the same availability set. For more information, see [Manage the availability of virtual machines](../../../virtual-machines/manage-availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json).
+To configure a high-availability setup, place all participating SQL Server virtual machines in the same availability set to avoid application or data loss during a maintenance event. Only nodes in the same cloud service can participate in the same availability set. For more information, see [Manage the availability of virtual machines](../../../virtual-machines/availability.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json).
### High-availability nodes in an availability zone Availability zones are unique physical locations within an Azure region. Each zone consists of one or more datacenters equipped with independent power, cooling, and networking. The physical separation of availability zones within a region helps protect applications and data from datacenter failures by ensuring that at least one virtual machine is available and meets the Azure SLA of 99.99 percent.
azure-sql Create Sql Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/create-sql-vm-portal.md
On the **Disks** tab, configure your disk options.
* Under **Advanced**, select **Yes** under use **Managed Disks**. > [!NOTE]
- > Microsoft recommends Managed Disks for SQL Server. Managed Disks handles storage behind the scenes. In addition, when virtual machines with Managed Disks are in the same availability set, Azure distributes the storage resources to provide appropriate redundancy. For more information, see [Azure Managed Disks Overview](../../../virtual-machines/managed-disks-overview.md). For specifics about managed disks in an availability set, see [Use managed disks for VMs in availability set](../../../virtual-machines/manage-availability.md).
+ > Microsoft recommends Managed Disks for SQL Server. Managed Disks handles storage behind the scenes. In addition, when virtual machines with Managed Disks are in the same availability set, Azure distributes the storage resources to provide appropriate redundancy. For more information, see [Azure Managed Disks Overview](../../../virtual-machines/managed-disks-overview.md). For specifics about managed disks in an availability set, see [Use managed disks for VMs in availability set](../../../virtual-machines/availability.md).
![SQL VM Disk settings](./media/create-sql-vm-portal/azure-sqlvm-disks.png)
azure-vmware Windows Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/windows-server-failover-cluster.md
Title: Windows Server Failover Cluster on Azure VMware Solution vSAN with native shared disks description: Set up Windows Server Failover Cluster (WSFC) on Azure VMware Solution and take advantage of solutions requiring WSFC capability. Previously updated : 03/08/2021 Last updated : 03/09/2021 # Windows Server Failover Cluster on Azure VMware Solution vSAN with native shared disks
-In this article, we'll walk through setting up Windows Server Failover Cluster on Azure VMware Solution. The implementation in this article is for proof of concept and pilot purposes.
+In this article, we'll walk through setting up Windows Server Failover Cluster on Azure VMware Solution. The implementation in this article is for proof of concept and pilot purposes. We recommend using a Cluster-in-a-Box (CIB) configuration until placement policies are available.
Windows Server Failover Cluster (WSFC), previously known as Microsoft Service Cluster Service (MSCS), is a feature of the Windows Server Operating System (OS). WSFC is a business-critical feature, and for many applications is required. For example, WSFC is required for the following configurations:
The following activities aren't supported and might cause WSFC node failover:
- **Validate Network Communication**. The Cluster Validation test will throw a warning that only one network interface per cluster node is available. You may ignore this warning. Azure VMware Solution provides the required availability and performance needed, since the nodes are connected to one of the NSX-T segments. However, keep this item as part of the Cluster Validation test, as it will validate other aspects of network communication.
-16. Create a DRS rule to separate the WSFC VMs cross Azure VMware Solution nodes. Use the following rules: one host-to-VM affinity and one VM-to-VM anti-affinity rule. This way cluster nodes won't run on the same Azure VMware Solution host.
+16. Create a DRS rule to place the WSFC VMs on the same Azure VMware Solution nodes. To do so, you need a host-to-VM affinity rule. This way, cluster nodes will run on the same Azure VMware Solution host. Again, this is for pilot purposes until placement policies are available.
>[!NOTE]
> To create this rule, you need to open a Support Request ticket. The Azure support organization can help you with this.
backup Azure Backup Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/azure-backup-glossary.md
Backs up operating system files. This backup allows you to recover when a comput
A tenant is a representation of an organization. It's a dedicated instance of Azure AD that an organization or app developer receives when the organization or app developer creates a relationship with Microsoft, like signing up for Azure, Microsoft Intune, or Microsoft 365.
+## Tier
+
+Currently, Azure Backup supports the following backup storage tiers:
+
+### Snapshot tier
+
+(Workload-specific term) In the first phase of VM backup, the snapshot taken is stored along with the disk. This form of storage is referred to as the snapshot tier. Snapshot tier restores are faster than restoring from a vault because they eliminate the wait time for snapshots to be copied from the vault before the restore operation is triggered.
+
+### Vault-Standard tier
+
+Backup data for all workloads supported by Azure Backup is stored in vaults, which hold backup storage: an auto-scaling set of storage accounts managed by Azure Backup. The Vault-Standard tier is an online storage tier that enables you to store an isolated copy of backup data in a Microsoft-managed tenant, creating an additional layer of protection. For workloads where the snapshot tier is supported, a copy of the backup data exists in both the snapshot tier and the Vault-Standard tier. The Vault-Standard tier ensures that backup data is available even if the datasource being backed up is deleted or compromised.
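The restore behavior the two tier descriptions imply can be sketched as a simple selection rule. The function and tier names below are illustrative, not the Azure Backup API:

```python
def pick_restore_tier(snapshot_available, snapshot_tier_supported):
    """Prefer the faster snapshot tier; otherwise restore from the vault."""
    if snapshot_tier_supported and snapshot_available:
        # No copy from the vault is needed, so the restore starts sooner.
        return "snapshot"
    # Isolated copy held in backup storage managed by Azure Backup.
    return "vault-standard"
```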
+ ## Unmanaged disk
+
+Refer to the [Unmanaged disks documentation](../storage/common/storage-disaster-recovery-guidance.md#azure-unmanaged-disks).
backup Back Up Hyper V Virtual Machines Mabs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/back-up-hyper-v-virtual-machines-mabs.md
When you can recover a backed up virtual machine, you use the Recovery wizard to
4. On the **Select Recovery Type** screen, select where you want to restore the data and then select **Next**.
- - **Recover to original instance**: When you recover to the original instance, the original VHD is deleted. MABS recovers the VHD and other configuration files to the original location using Hyper-V VSS writer. At the end of the recovery process, virtual machines are still highly available.
+ - **Recover to original instance**: When you recover to the original instance, the original VHD and all associated checkpoints are deleted. MABS recovers the VHD and other configuration files to the original location using Hyper-V VSS writer. At the end of the recovery process, virtual machines are still highly available.
The resource group must be present for recovery. If it isn't available, recover to an alternate location and then make the virtual machine highly available.

- **Recover as virtual machine to any host**: MABS supports alternate location recovery (ALR), which provides a seamless recovery of a protected Hyper-V virtual machine to a different Hyper-V host, independent of processor architecture. Hyper-V virtual machines that are recovered to a cluster node won't be highly available. If you choose this option, the Recovery Wizard presents you with an additional screen for identifying the destination and destination path.
+
+ >[!NOTE]
+ >If you select the original host, the behavior is the same as **Recover to original instance**. The original VHD and all associated checkpoints will be deleted.
- **Copy to a network folder**: MABS supports item-level recovery (ILR), which allows you to do item-level recovery of files, folders, volumes, and virtual hard disks (VHDs) from a host-level backup of Hyper-V virtual machines to a network share or a volume on a MABS protected server. The MABS protection agent doesn't have to be installed inside the guest to perform item-level recovery. If you choose this option, the Recovery Wizard presents you with an additional screen for identifying the destination and destination path.
backup Backup Azure Diagnostics Mode Data Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-diagnostics-mode-data-model.md
Use the Log Analytics data model to create custom alerts from Log Analytics.
> [!NOTE] >
-> This data model is in reference to the Azure Diagnostics Mode of sending diagnostic
+> * This data model is in reference to the Azure Diagnostics Mode of sending diagnostic
> events to Log Analytics (LA). To learn the data model for the new Resource Specific Mode, you can refer to the following article: [Data Model for Azure Backup Diagnostic Events](./backup-azure-reports-data-model.md)
+> * To create custom reporting views, we recommend using [system functions on Azure Monitor logs](backup-reports-system-functions.md) instead of working with the raw tables listed below.
## Using Azure Backup data model
backup Backup Azure Reports Data Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-reports-data-model.md
Last updated 10/30/2019
# Data Model for Azure Backup Diagnostics Events
+> [!NOTE]
+>
+> To create custom reporting views, we recommend using [system functions on Azure Monitor logs](backup-reports-system-functions.md) instead of working with the raw tables listed below.
+ ## CoreAzureBackup
+
+This table provides information about core backup entities, such as vaults and backup items.
backup Backup Azure Troubleshoot Vm Backup Fails Snapshot Timeout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md
Most common backup failures can be self-resolved by following the troubleshootin
### Step 1: Check Azure VM health

-- **Ensure Azure VM provisioning state is 'Running'**: If the [VM provisioning state](../virtual-machines/states-lifecycle.md#provisioning-states) is in the **Stopped/Deallocated/Updating** state, then it will interfere with the backup operation. Open *Azure portal > VM > Overview >* and check the VM status to ensure it's **Running** and retry the backup operation.
+- **Ensure Azure VM provisioning state is 'Running'**: If the [VM provisioning state](../virtual-machines/states-billing.md) is in the **Stopped/Deallocated/Updating** state, then it will interfere with the backup operation. Open *Azure portal > VM > Overview >* and check the VM status to ensure it's **Running** and retry the backup operation.
- **Review pending OS updates or reboots**: Ensure there are no pending OS updates or reboots on the VM.

### Step 2: Check Azure VM Guest Agent service health
After you register and schedule a VM for the Azure Backup service, Backup starts
**Error code**: UserErrorVmProvisioningStateFailed<br> **Error message**: The VM is in failed provisioning state<br>
-This error occurs when one of the extension failures puts the VM into provisioning failed state.<br>**Open  Azure portal > VM > Settings > Extensions > Extensions status** and check if all extensions are in **provisioning succeeded** state. To learn more, see [Provisioning states](../virtual-machines/states-lifecycle.md#provisioning-states).
+This error occurs when one of the extension failures puts the VM into provisioning failed state.<br>**Open  Azure portal > VM > Settings > Extensions > Extensions status** and check if all extensions are in **provisioning succeeded** state. To learn more, see [Provisioning states](../virtual-machines/states-billing.md).
- If any extension is in a failed state, then it can interfere with the backup. Ensure those extension issues are resolved and retry the backup operation.
- If the VM provisioning state is in an updating state, it can interfere with the backup. Ensure that it's healthy and retry the backup operation.
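The two manual checks above (VM provisioning state, then extension status) amount to a single pre-backup gate, which can be sketched as follows. The function name is illustrative; the state strings are the portal labels quoted in this section:

```python
def vm_ready_for_backup(provisioning_state, extension_states):
    """Return True only if the VM is Running and every extension succeeded."""
    if provisioning_state != "Running":
        # Stopped/Deallocated/Updating states interfere with backup.
        return False
    # Any extension not in 'Provisioning succeeded' can block the backup.
    return all(state == "Provisioning succeeded" for state in extension_states)
```

If the gate returns False, resolve the VM or extension issue first and then retry the backup operation, as the troubleshooting steps describe.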
backup Backup Center Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-actions.md
Title: Perform actions using Backup Center
-description: This article explains how to perform actions using Backup Center
+ Title: Perform actions using Backup center
+description: This article explains how to perform actions using Backup center
Last updated 09/07/2020
-# Perform actions using Backup Center (Preview)
+# Perform actions using Backup center
-Backup Center allows you to perform key backup-related actions from a central interface without needing to navigate to an individual vault. Some actions that you can perform from Backup Center are:
+Backup center allows you to perform key backup-related actions from a central interface without needing to navigate to an individual vault. Some actions that you can perform from Backup center are:
* Configure backup for your datasources * Restore a backup instance
Backup Center allows you to perform key backup-related actions from a central in
## Supported scenarios
-* Backup Center is currently supported for Azure VM backup, SQL in Azure VM backup, SAP HANA in Azure VM backup, Azure Files backup, and Azure Database for PostgreSQL Server backup.
+* Backup center is currently supported for Azure VM backup, SQL in Azure VM backup, SAP HANA in Azure VM backup, Azure Files backup, and Azure Database for PostgreSQL Server backup.
* Refer to the [support matrix](backup-center-support-matrix.md) for a detailed list of supported and unsupported scenarios.

## Configure backup
Depending on the type of datasource you wish to back up, follow the appropriate
### Configure backup to a Recovery Services vault
-1. Navigate to the Backup Center and select **+ Backup** at the top of the **Overview** tab.
+1. Navigate to the Backup center and select **+ Backup** at the top of the **Overview** tab.
![Backup Center overview](./media/backup-center-actions/backup-center-overview-configure-backup.png)
Depending on the type of datasource you wish to back up, follow the appropriate
### Configure backup to a Backup vault
-1. Navigate to the Backup Center and select **+ Backup** at the top of the **Overview** tab.
+1. Navigate to the Backup center and select **+ Backup** at the top of the **Overview** tab.
2. Select the type of datasource you wish to back up (Azure Database for PostgreSQL server in this case). ![Select datasource to configure Azure Database for PostgreSQL Server backup](./media/backup-center-actions/backup-select-datasource-type-postgresql.png)
Depending on the type of datasource you wish to restore, follow the appropriate
### If you're restoring from a Recovery Services vault
-1. Navigate to the Backup Center and select **Restore** at the top of the **Overview** tab.
+1. Navigate to the Backup center and select **Restore** at the top of the **Overview** tab.
![Backup Center Overview to restore VM](./media/backup-center-actions/backup-center-overview-restore.png)
Depending on the type of datasource you wish to restore, follow the appropriate
### If you're restoring from a Backup vault
-1. Navigate to the Backup Center and select **Restore** at the top of the **Overview** tab.
+1. Navigate to the Backup center and select **Restore** at the top of the **Overview** tab.
2. Select the type of datasource you wish to restore (Azure Database for PostgreSQL Server in this case). ![Select datasource for Azure Database for PostgreSQL Server restore](./media/backup-center-actions/restore-select-datasource-postgresql.png)
Depending on the type of datasource you wish to restore, follow the appropriate
## Create a new vault
-You can create a new vault by navigating to Backup Center and selecting **+ Vault** at the top of the **Overview** tab.
+You can create a new vault by navigating to Backup center and selecting **+ Vault** at the top of the **Overview** tab.
![Create vault](./media/backup-center-actions/backup-center-create-vault.png)
Depending on the type of datasource you wish to back up, follow the appropriate
### If you're backing up to a Recovery Services vault
-1. Navigate to the Backup Center and select **+ Policy** at the top of the **Overview** tab.
+1. Navigate to the Backup center and select **+ Policy** at the top of the **Overview** tab.
![Backup Center Overview for backup policy](./media/backup-center-actions/backup-center-overview-policy.png)
Depending on the type of datasource you wish to back up, follow the appropriate
### If you're backing up to a Backup vault
-1. Navigate to the Backup Center and select **+ Policy** at the top of the **Overview** tab.
+1. Navigate to the Backup center and select **+ Policy** at the top of the **Overview** tab.
2. Select the type of datasource you wish to back up (Azure Database for PostgreSQL Server in this case). ![Select datasource for policy for Azure Database for PostgreSQL Server backup](./media/backup-center-actions/policy-select-datasource-postgresql.png)
Depending on the type of datasource you wish to back up, follow the appropriate
## Execute an on-demand backup for a backup instance
-Backup Center allows you to search for backup instances across your backup estate and execute backup operations on demand.
+Backup center allows you to search for backup instances across your backup estate and execute backup operations on demand.
-To trigger an on-demand backup, navigate to Backup Center and select the **Backup Instances** menu item. Selecting this lets you view details of all the backup instances that you have access to. You can search for the backup instance you wish to back up. Right-clicking on an item in the grid opens up a list of available actions. Select the **Backup Now** option to execute an on-demand backup.
+To trigger an on-demand backup, navigate to Backup center and select the **Backup Instances** menu item. Selecting this lets you view details of all the backup instances that you have access to. You can search for the backup instance you wish to back up. Right-clicking on an item in the grid opens up a list of available actions. Select the **Backup Now** option to execute an on-demand backup.
![On-demand backup](./media/backup-center-actions/backup-center-on-demand-backup.png)
To trigger an on-demand backup, navigate to Backup Center and select the **Backu
There are scenarios when you might want to stop backup for a backup instance, such as when the underlying resource being backed up doesn't exist anymore.
-To trigger an on-demand backup, navigate to Backup Center and select the **Backup Instances** menu item. Select this lets you view details of all the backup instances that you have access to. You can search for the backup instance you wish to back up. Right-clicking on an item in the grid opens up a list of available actions. Select the **Stop Backup** option to stop backup for the backup instance.
+To stop backup for a backup instance, navigate to Backup center and select the **Backup Instances** menu item. Selecting this lets you view details of all the backup instances that you have access to. You can search for the backup instance you wish to stop backing up. Right-clicking on an item in the grid opens up a list of available actions. Select the **Stop Backup** option to stop backup for the backup instance.
![Stop protection](./media/backup-center-actions/backup-center-stop-protection.png)
backup Backup Center Community https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-community.md
Title: Access community resources using Backup Center
-description: Use Backup Center to access sample templates, scripts, and feature requests
+ Title: Access community resources using Backup center
+description: Use Backup center to access sample templates, scripts, and feature requests
Last updated 02/18/2021
-# Access community resources using Backup Center
+# Access community resources using Backup center
-You can use Backup Center to access various community resources useful for a backup admin or operator.
+You can use Backup center to access various community resources useful for a backup admin or operator.
## Using Community Hub
Some of the resources available via the Community Hub are:
## Next Steps

-- [Learn More about Backup Center](backup-center-overview.md)
+- [Learn More about Backup center](backup-center-overview.md)
backup Backup Center Govern Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-govern-environment.md
Last updated 09/01/2020
-# Govern your backup estate using Backup Center (Preview)
+# Govern your backup estate using Backup Center
-Backup Center helps you govern your Azure environment to ensure that all your resources are compliant from a backup perspective. Below are some of the governance capabilities of Backup Center:
+Backup center helps you govern your Azure environment to ensure that all your resources are compliant from a backup perspective. Below are some of the governance capabilities of Backup center:
* View and assign Azure Policies for backup
backup Backup Center Monitor Operate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-monitor-operate.md
Last updated 09/01/2020
-# Monitor and operate backups using Backup Center
+# Monitor and operate backups using Backup center
-As a backup admin, you can use Backup Center as a single pane of glass to monitor your jobs and backup inventory on a day-to-day basis. You can also use Backup Center to perform your regular operations, such as responding to on-demand backup requests, restoring backups, creating backup policies, and so on.
+As a backup admin, you can use Backup center as a single pane of glass to monitor your jobs and backup inventory on a day-to-day basis. You can also use Backup center to perform your regular operations, such as responding to on-demand backup requests, restoring backups, creating backup policies, and so on.
## Supported scenarios
-* Backup Center is currently supported for Azure VM backup, SQL in Azure VM backup, SAP HANA in Azure VM backup, Azure Files backup, Azure Blobs backup, Azure Managed Disks backup and Azure Database for PostgreSQL Server backup.
+* Backup center is currently supported for Azure VM backup, SQL in Azure VM backup, SAP HANA in Azure VM backup, Azure Files backup, Azure Blobs backup, Azure Managed Disks backup and Azure Database for PostgreSQL Server backup.
* Refer to the [support matrix](backup-center-support-matrix.md) for a detailed list of supported and unsupported scenarios.

## Backup instances
-Backup Center allows for easy search and discoverability of backup instances across your backup estate.
+Backup center allows for easy search and discoverability of backup instances across your backup estate.
-Selecting the **Backup Instances** tab in Backup Center lets you view details of all the backup instances that you have access to.
+Selecting the **Backup Instances** tab in Backup center lets you view details of all the backup instances that you have access to.
You can view the following information about each of your backup instances:
Right-clicking on any of the items in the grid lets you perform actions on the g
## Backup jobs
-Backup Center allows you to view detailed information on all jobs that were created in your backup estate and take appropriate action for failing jobs.
+Backup center allows you to view detailed information on all jobs that were created in your backup estate and take appropriate action for failing jobs.
-Selecting the **Backup jobs** menu item in Backup Center provides a view of all your jobs. Each job contains the following information:
+Selecting the **Backup jobs** menu item in Backup center provides a view of all your jobs. Each job contains the following information:
* Backup instance associated with the job * Datasource subscription
Using the **Backup jobs** tab, you can view jobs up to the last seven days. To v
## Vaults
-Selecting the **Vaults** menu item in Backup Center allows you to see a list of all [Recovery Services vaults](backup-azure-recovery-services-vault-overview.md) and [Backup vaults](backup-vault-overview.md) that you have access to. You can filter the list with the following parameters:
+Selecting the **Vaults** menu item in Backup center allows you to see a list of all [Recovery Services vaults](backup-azure-recovery-services-vault-overview.md) and [Backup vaults](backup-vault-overview.md) that you have access to. You can filter the list with the following parameters:
* Vault subscription * Vault resource group
Selecting any item in the list allows you to navigate to a given vault.
## Backup policies
-Backup Center allows you to view and edit key information for any of your backup policies.
+Backup center allows you to view and edit key information for any of your backup policies.
Selecting the **Backup Policies** menu item allows you to view all the policies that you've created across your backup estate. You can filter the list by vault subscription, resource group, datasource type, and vault. Right-clicking on an item in the grid lets you view associated items for that policy, edit the policy, or even delete it if necessary.
Selecting the **Backup Policies** menu item allows you to view all the policies
## Next steps

* [Govern your backup estate](backup-center-govern-environment.md)
-* [Perform actions using Backup Center](backup-center-actions.md)
+* [Perform actions using Backup center](backup-center-actions.md)
* [Obtain insights on your backups](backup-center-obtain-insights.md)
backup Backup Center Obtain Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-obtain-insights.md
Title: Obtain insights using Backup Center
-description: Learn how to analyze historical trends and gain deeper insights on your backups with Backup Center.
+ Title: Obtain insights using Backup center
+description: Learn how to analyze historical trends and gain deeper insights on your backups with Backup center.
Last updated 09/01/2020
-# Obtain Insights using Backup Center
+# Obtain Insights using Backup center
For analyzing historical trends and gaining deeper insights on your backups, Backup Center provides an interface to [Backup Reports](configure-reports.md), which uses [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md) and [Azure Workbooks](../azure-monitor/visualize/workbooks-overview.md). Backup Reports offers the following capabilities:
For analyzing historical trends and gaining deeper insights on your backups, Bac
[Learn how to configure diagnostics settings at scale for your vaults](./configure-reports.md#get-started)
-### View Backup Reports in the Backup Center portal
+### View Backup Reports in the Backup center portal
-Selecting the **Backup Reports** menu item in Backup Center opens up the reports. Choose one or more Log Analytics workspaces to view and analyze key information on your backups.
+Selecting the **Backup Reports** menu item in Backup center opens up the reports. Choose one or more Log Analytics workspaces to view and analyze key information on your backups.
![Backup reports in Backup Center](./media/backup-center-obtain-insights/backup-center-backup-reports.png)
Following are the views available:
1. **Summary** - Use this tab to get a high-level overview of your backup estate. [Learn more](./configure-reports.md#summary)
-1. **Backup Items** - Use this tab to see information and trends on cloud storage consumed at a Backup-item level. [Learn more](./configure-reports.md#backup-items)
+2. **Backup Items** - Use this tab to see information and trends on cloud storage consumed at a Backup-item level. [Learn more](./configure-reports.md#backup-items)
-1. **Usage** - Use this tab to view key billing parameters for your backups. [Learn more](./configure-reports.md#usage)
+3. **Usage** - Use this tab to view key billing parameters for your backups. [Learn more](./configure-reports.md#usage)
-1. **Jobs** - Use this tab to view long-running trends on jobs, such as the number of failed jobs per day and the top causes of job failure. [Learn more](./configure-reports.md#jobs)
+4. **Jobs** - Use this tab to view long-running trends on jobs, such as the number of failed jobs per day and the top causes of job failure. [Learn more](./configure-reports.md#jobs)
-1. **Policies** - Use this tab to view information on all of your active policies, such as the number of associated items and the total cloud storage consumed by items backed up under a given policy. [Learn more](./configure-reports.md#policies)
+5. **Policies** - Use this tab to view information on all of your active policies, such as the number of associated items and the total cloud storage consumed by items backed up under a given policy. [Learn more](./configure-reports.md#policies)
-1. **Optimize** - Use this tab to gain visibility into potential cost-optimization opportunities for your backups. [Learn more](./configure-reports.md#optimize)
+6. **Optimize** - Use this tab to gain visibility into potential cost-optimization opportunities for your backups. [Learn more](./configure-reports.md#optimize)
-1. **Policy adherence** - Use this tab to gain visibility into whether every backup instance has had at least one successful backup per day.
+7. **Policy adherence** - Use this tab to gain visibility into whether every backup instance has had at least one successful backup per day. [Learn more](./configure-reports.md#policy-adherence)
+
+You can also configure emails for any of these reports using the [Email Report](backup-reports-email.md) feature.
## Next steps

- [Monitor and Operate backups](backup-center-monitor-operate.md)
- [Govern your backup estate](backup-center-govern-environment.md)
-- [Perform actions using Backup Center](backup-center-actions.md)
+- [Perform actions using Backup center](backup-center-actions.md)
backup Backup Center Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-overview.md
Title: Overview of Backup Center
-description: This article provides an overview of Backup Center for Azure.
+ Title: Overview of Backup center
+description: This article provides an overview of Backup center for Azure.
Last updated 09/30/2020
-# Overview of Backup Center
+# Overview of Backup center
Backup Center provides a **single unified management experience** in Azure for enterprises to govern, monitor, operate, and analyze backups at scale. As such, it's consistent with Azure's native management experiences.
-Some of the key benefits of Backup Center include:
+Some of the key benefits of Backup center include:
-* **Single pane of glass to manage backups** – Backup Center is designed to function well across a large and distributed Azure environment. You can use Backup Center to efficiently manage backups spanning multiple workload types, vaults, subscriptions, regions, and [Azure Lighthouse](../lighthouse/overview.md) tenants.
-* **Datasource-centric management** – Backup Center provides views and filters that are centered on the datasources that you're backing up (for example, VMs and databases). This allows a resource owner or a backup admin to monitor and operate backups of items without needing to focus on which vault an item is backed up to. A key feature of this design is the ability to filter views by datasource-specific properties, such as datasource subscription, datasource resource group, and datasource tags. For example, if your organization follows a practice of assigning different tags to VMs belonging to different departments, you can use Backup Center to filter backup information based on the tags of the underlying VMs being backed up without needing to focus on the tag of the vault.
-* **Connected experiences** – Backup Center provides native integrations to existing Azure services that enable management at scale. For example, Backup Center uses the [Azure Policy](../governance/policy/overview.md) experience to help you govern your backups. It also leverages [Azure workbooks](../azure-monitor/visualize/workbooks-overview.md) and [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md) to help you view detailed reports on backups. So you don't need to learn any new principles to use the varied features that Backup Center offers. You can also discover community resources from the Backup Center.
+* **Single pane of glass to manage backups** – Backup center is designed to function well across a large and distributed Azure environment. You can use Backup center to efficiently manage backups spanning multiple workload types, vaults, subscriptions, regions, and [Azure Lighthouse](../lighthouse/overview.md) tenants.
+* **Datasource-centric management** – Backup center provides views and filters that are centered on the datasources that you're backing up (for example, VMs and databases). This allows a resource owner or a backup admin to monitor and operate backups of items without needing to focus on which vault an item is backed up to. A key feature of this design is the ability to filter views by datasource-specific properties, such as datasource subscription, datasource resource group, and datasource tags. For example, if your organization follows a practice of assigning different tags to VMs belonging to different departments, you can use Backup center to filter backup information based on the tags of the underlying VMs being backed up without needing to focus on the tag of the vault.
+* **Connected experiences** – Backup center provides native integrations to existing Azure services that enable management at scale. For example, Backup center uses the [Azure Policy](../governance/policy/overview.md) experience to help you govern your backups. It also leverages [Azure workbooks](../azure-monitor/visualize/workbooks-overview.md) and [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md) to help you view detailed reports on backups. So you don't need to learn any new principles to use the varied features that Backup center offers. You can also discover community resources from the Backup center.
## Supported scenarios
-* Backup Center is currently supported for Azure VM backup, SQL in Azure VM backup, SAP HANA in Azure VM backup, Azure Files backup, Azure Blobs backup, Azure Managed Disks backup, and Azure Database for PostgreSQL Server backup.
+* Backup center is currently supported for Azure VM backup, SQL in Azure VM backup, SAP HANA in Azure VM backup, Azure Files backup, Azure Blobs backup, Azure Managed Disks backup, and Azure Database for PostgreSQL Server backup.
* Refer to the [support matrix](backup-center-support-matrix.md) for a detailed list of supported and unsupported scenarios.

## Get started
-To get started with using Backup Center, search for **Backup Center** in the Azure portal and navigate to the **Backup Center** dashboard.
+To get started with using Backup center, search for **Backup center** in the Azure portal and navigate to the **Backup center** dashboard.
![Backup Center Search](./media/backup-center-overview/backup-center-search.png)
In the **Jobs** tile, you get a summarized view of all backup and restore relate
In the **Backup Instances** tile, you get a summarized view of all backup instances across your backup estate. For example, you can see the number of backup instances that are in soft-deleted state compared to the number of instances that are still configured for protection. Selecting any of the numbers in this tile allows you to view more information on backup instances for a particular datasource type and protection state. You can also view all backup instances whose underlying datasource is not found (the datasource might be deleted, or you may not have access to the datasource).
-Watch the following video to understand the capabilities of Backup Center:
+Watch the following video to understand the capabilities of Backup center:
> [!VIDEO https://www.youtube.com/embed/pFRMBSXZcUk?t=497]
-Follow the [next steps](#next-steps) to understand the different capabilities that Backup Center provides, and how you can use these capabilities to manage your backup estate efficiently.
+Follow the [next steps](#next-steps) to understand the different capabilities that Backup center provides, and how you can use these capabilities to manage your backup estate efficiently.
## Next steps

* [Monitor and Operate backups](backup-center-monitor-operate.md)
* [Govern your backup estate](backup-center-govern-environment.md)
* [Obtain insights on your backups](backup-center-obtain-insights.md)
-* [Perform actions using Backup Center](backup-center-actions.md)
+* [Perform actions using Backup center](backup-center-actions.md)
backup Backup Center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-support-matrix.md
Title: Support matrix for Backup Center
-description: This article summarizes the scenarios that Backup Center supports for each workload type
+ Title: Support matrix for Backup center
+description: This article summarizes the scenarios that Backup center supports for each workload type
Last updated 09/07/2020
-# Support matrix for Backup Center
+# Support matrix for Backup center
-Backup Center provides a single pane of glass for enterprises to [govern, monitor, operate, and analyze backups at scale](backup-center-overview.md). This article summarizes the scenarios that Backup Center supports for each workload type.
+Backup center provides a single pane of glass for enterprises to [govern, monitor, operate, and analyze backups at scale](backup-center-overview.md). This article summarizes the scenarios that Backup center supports for each workload type.
## Supported scenarios

| **Category** | **Scenario** | **Supported workloads** | **Limits** |
| --- | --- | --- | --- |
-| Monitoring | View all jobs | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | <li> 7 days worth of jobs available out of the box. <br> <li> Each filter/drop-down supports a maximum of 1000 items. So Backup Center can be used to monitor a maximum of 1000 subscriptions and 1000 vaults across tenants. |
+| Monitoring | View all jobs | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | <li> 7 days' worth of jobs available out of the box. <br> <li> Each filter/drop-down supports a maximum of 1000 items. So Backup center can be used to monitor a maximum of 1000 subscriptions and 1000 vaults across tenants. |
| Monitoring | View all backup instances | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Same as above |
| Monitoring | View all backup policies | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Same as above |
| Monitoring | View all vaults | <li> Azure Virtual Machine <br><br> <li> Azure Database for PostgreSQL server <br><br> <li> SQL in Azure VM <br><br> <li> SAP HANA in Azure VM <br><br> <li> Azure Files<br/><br/> <li>Azure Blobs<br/><br/> <li>Azure Managed Disks | Same as above |
Backup Center provides a single pane of glass for enterprises to [govern, monito
| --- | --- |
| Monitoring | View alerts at scale |
| Actions | Configure vault settings at scale |
-| Actions | Execute cross-region restore job from Backup Center |
+| Actions | Execute cross-region restore job from Backup center |
## Next steps
backup Backup Rbac Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-rbac-rs-vault.md
Title: Manage Backups with Azure role-based access control
description: Use Azure role-based access control to manage access to backup management operations in Recovery Services vault. Previously updated : 06/24/2019 Last updated : 03/09/2021 # Use Azure role-based access control to manage Azure Backup recovery points
If you're looking to define your own roles for even more control, see how to [bu
## Mapping Backup built-in roles to backup management actions
+### Minimum role requirements for Azure VM backup
+ The following table captures the Backup management actions and corresponding minimum Azure role required to perform that operation.
-| Management Operation | Minimum Azure role required | Scope Required |
-| | | |
-| Create Recovery Services vault | Backup Contributor | Resource group containing the vault |
-| Enable backup of Azure VMs | Backup Operator | Resource group containing the vault |
-| | Virtual Machine Contributor | VM resource |
-| On-demand backup of VM | Backup Operator | Recovery Services vault |
-| Restore VM | Backup Operator | Recovery Services vault |
-| | Contributor | Resource group in which VM will be deployed |
-| | Virtual Machine Contributor | Source VM that got backed up |
+| Management Operation | Minimum Azure role required | Scope Required | Alternative |
+| --- | --- | --- | --- |
+| Create Recovery Services vault | Backup Contributor | Resource group containing the vault | |
+| Enable backup of Azure VMs | Backup Operator | Resource group containing the vault | |
+| | Virtual Machine Contributor | VM resource | Alternatively, instead of a built-in role, you can consider a custom role that has the following permission: Microsoft.Compute/virtualMachines/write |
+| On-demand backup of VM | Backup Operator | Recovery Services vault | |
+| Restore VM | Backup Operator | Recovery Services vault | |
+| | Contributor | Resource group in which VM will be deployed | Alternatively, instead of a built-in role, you can consider a custom role that has the following permissions: Microsoft.Resources/subscriptions/resourceGroups/write, Microsoft.DomainRegistration/domains/write, Microsoft.Compute/virtualMachines/write, Microsoft.Network/virtualNetworks/read, Microsoft.Network/virtualNetworks/subnets/join/action |
+| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in role, you can consider a custom role that has the following permission: Microsoft.Compute/virtualMachines/write |
| Restore unmanaged disks VM backup | Backup Operator | Recovery Services vault |
-| | Virtual Machine Contributor | Source VM that got backed up |
-| | Storage Account Contributor | Storage account resource where disks are going to be restored |
+| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in role, you can consider a custom role that has the following permission: Microsoft.Compute/virtualMachines/write |
+| | Storage Account Contributor | Storage account resource where disks are going to be restored | Alternatively, instead of a built-in role, you can consider a custom role that has the following permission: Microsoft.Storage/storageAccounts/write |
| Restore managed disks from VM backup | Backup Operator | Recovery Services vault |
-| | Virtual Machine Contributor | Source VM that got backed up |
-| | Storage Account Contributor | Temporary Storage account selected as part of restore to hold data from vault before converting them to managed disks |
-| | Contributor | Resource group to which managed disk(s) will be restored |
+| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in role, you can consider a custom role that has the following permission: Microsoft.Compute/virtualMachines/write |
+| | Storage Account Contributor | Temporary Storage account selected as part of restore to hold data from vault before converting them to managed disks | Alternatively, instead of a built-in role, you can consider a custom role that has the following permission: Microsoft.Storage/storageAccounts/write |
+| | Contributor | Resource group to which managed disk(s) will be restored | Alternatively, instead of a built-in role, you can consider a custom role that has the following permission: Microsoft.Resources/subscriptions/resourceGroups/write |
| Restore individual files from VM backup | Backup Operator | Recovery Services vault |
-| | Virtual Machine Contributor | Source VM that got backed up |
+| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in role, you can consider a custom role that has the following permission: Microsoft.Compute/virtualMachines/write |
| Create backup policy for Azure VM backup | Backup Contributor | Recovery Services vault |
| Modify backup policy of Azure VM backup | Backup Contributor | Recovery Services vault |
| Delete backup policy of Azure VM backup | Backup Contributor | Recovery Services vault |
The following table captures the Backup management actions and corresponding min
> [!IMPORTANT] > If you specify VM Contributor at a VM resource scope and select **Backup** as part of VM settings, it will open the **Enable Backup** screen, even though the VM is already backed up. This is because the call to verify backup status works only at the subscription level. To avoid this, either go to the vault and open the backup item view of the VM or specify the VM Contributor role at a subscription level.
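The custom-role alternative that appears repeatedly in the table above can be sketched as an Azure custom role definition. This is a minimal illustration, not a definitive recommendation; the role name, description, and subscription ID below are placeholders:

```json
{
  "Name": "Backup VM Write (example)",
  "IsCustom": true,
  "Description": "Example custom role granting only the VM write permission referenced in the table above.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/write"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
```

You could create such a role with `az role definition create --role-definition <file>.json` and assign it at the VM resource scope in place of Virtual Machine Contributor.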
-## Minimum role requirements for the Azure File share backup
+### Minimum role requirements for Azure workload backups (SQL and HANA DB backups)
+
+The following table captures the Backup management actions and corresponding minimum Azure role required to perform that operation.
+
+| Management Operation | Minimum Azure role required | Scope Required | Alternative |
+| --- | --- | --- | --- |
+| Create Recovery Services vault | Backup Contributor | Resource group containing the vault | |
+| Enable backup of SQL and/or HANA databases | Backup Operator | Resource group containing the vault | |
+| | Virtual Machine Contributor | VM resource where DB is installed | Alternatively, instead of a built-in role, you can consider a custom role that has the following permission: Microsoft.Compute/virtualMachines/write |
+| On-demand backup of DB | Backup Operator | Recovery Services vault | |
+| Restore database or Restore as files | Backup Operator | Recovery Services vault | |
+| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in role, you can consider a custom role that has the following permission: Microsoft.Compute/virtualMachines/write |
+| | Virtual Machine Contributor | Target VM in which DB will be restored or files are created | Alternatively, instead of a built-in role, you can consider a custom role that has the following permission: Microsoft.Compute/virtualMachines/write |
+| Create backup policy for SQL and/or HANA database backup | Backup Contributor | Recovery Services vault |
+| Modify backup policy of SQL and/or HANA database backup | Backup Contributor | Recovery Services vault |
+| Delete backup policy of SQL and/or HANA database backup | Backup Contributor | Recovery Services vault |
+| Stop backup (with retain data or delete data) on database backup | Backup Contributor | Recovery Services vault |
+
+### Minimum role requirements for the Azure File share backup
The following table captures the Backup management actions and corresponding role required to perform Azure File share operation.
backup Backup Reports Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-reports-email.md
+
+ Title: Email Azure Backup Reports
+description: Create automated tasks to receive periodic reports via email
+ Last updated : 03/01/2021++
+# Email Azure Backup Reports
+
+Using the **Email Report** feature available in Backup Reports, you can create automated tasks to receive periodic reports via email. This feature works by deploying a logic app in your Azure environment that queries data from your selected Log Analytics (LA) workspaces, based on the inputs that you provide. [Learn more about Logic apps and their pricing](https://azure.microsoft.com/pricing/details/logic-apps/).
+
+## Getting Started
+
+To configure email tasks via Backup Reports, perform the following steps:
+
+1. Navigate to **Backup Center** > **Backup Reports** and select the **Email Report** tab.
+2. Create a task by specifying the following information:
+ * **Task Details** - The name of the logic app to be created, and the subscription, resource group, and location in which it should be created. Note that the logic app can query data across multiple subscriptions, resource groups, and locations (as selected in the Report Filters section), but is created in the context of a single subscription, resource group and location.
+ * **Data To Export** - The tab that you want to export. You can either create a single task app per tab, or email all tabs using a single task, by selecting the **All Tabs** option.
+ * **Email options**: The email frequency, recipient email ID(s), and the email subject.
+
+ ![Email Tab](./media/backup-azure-configure-backup-reports/email-tab.png)
+
+3. After you select **Submit** and **Confirm**, the logic app is created. The logic app and the associated API connections are created with the tag **UsedByBackupReports: true** for easy discoverability. You'll need to perform a one-time authorization step for the logic app to run successfully, as described in the following section.
+
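+As a convenience, the **UsedByBackupReports: true** tag mentioned above can be used to locate the created resources. This is a hedged sketch using the Azure CLI; it assumes the tag was applied exactly as shown:
+
+```azurecli
+# List the logic app and API connections created by the Email Report feature.
+az resource list --tag UsedByBackupReports=true --output table
+```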
+## Authorize connections to Azure Monitor Logs and Office 365
+
+The logic app uses the [azuremonitorlogs](https://docs.microsoft.com/connectors/azuremonitorlogs/) connector for querying the LA workspace(s) and uses the [Office365 Outlook](https://docs.microsoft.com/connectors/office365connector/) connector for sending emails. You will need to perform a one-time authorization for these two connectors.
+
+To perform the authorization, follow the steps below:
+
+1. Navigate to **Logic Apps** in the Azure portal.
+2. Search for the name of the logic app you have created and navigate to the resource.
+
+ ![Logic Apps](./media/backup-azure-configure-backup-reports/logic-apps.png)
+
+3. Select the **API connections** menu item.
+
+ ![API Connections](./media/backup-azure-configure-backup-reports/api-connections.png)
+
+4. You will see two connections with the format `<location>-azuremonitorlogs` and `<location>-office365` - for example, _eastus-azuremonitorlogs_ and _eastus-office365_.
+5. Navigate to each of these connections and select the **Edit API connection** menu item. In the screen that appears, select **Authorize**, and save the connection once authorization is complete.
+
+ ![Authorize connection](./media/backup-azure-configure-backup-reports/authorize-connections.png)
+
+6. To test whether the logic app works after authorization, navigate back to the logic app, open **Overview**, and select **Run Trigger** in the top pane to verify that an email is generated successfully.
+
+## Contents of the email
+
+* All the charts and graphs shown in the portal are available as inline content in the email.
+* The grids shown in the portal are available as *.csv attachments in the email.
+* The data shown in the email uses all the report-level filters selected by the user in the report, at the time of creating the email task.
+* Tab-level filters such as **Backup Instance Name**, **Policy Name** and so on, are not applied. The only exception to this is the **Retention Optimizations** grid in the **Optimize** tab, where the filters for **Daily**, **Weekly**, **Monthly** and **Yearly** RP retention are applied.
+* The time range and aggregation type (for charts) are based on the user's time range selection in the reports. For example, if the time range selection is last 60 days (translating to weekly aggregation type), and email frequency is daily, the recipient will receive an email every day with charts spanning data taken over the last 60-day period, with data aggregated at a weekly level.
+
+## Troubleshooting issues
+
+If you aren't receiving emails as expected even after successful deployment of the logic app, you can follow the steps below to troubleshoot the configuration:
+
+### Scenario 1: Receiving neither a successful email nor an error email
+
+* This issue could be occurring because the Outlook API connector is not authorized. To authorize the connection, follow the authorization steps provided above.
+
+* This issue could also be occurring if you specified an incorrect email recipient while creating the logic app. To verify that the email recipient is specified correctly, navigate to the logic app in the Azure portal, open the Logic App designer, and select the email step to see whether the correct email IDs are being used.
+
+### Scenario 2: Receiving an error email that says that the logic app failed to execute to completion
+
+To troubleshoot this issue:
+1. Navigate to the logic app in the Azure portal.
+2. At the bottom of the **Overview** screen, you will see a **Runs History** section. You can open the latest run and view which steps in the workflow failed. Some possible causes could be:
+ * **Azure Monitor Logs Connector hasn't been authorized**: To fix this issue, follow the authorization steps as provided above.
+ * **Error in the LA query**: If you have customized the logic app with your own queries, an error in any of the LA queries might be causing the logic app to fail. You can select the relevant step and view the error that's causing the query to run incorrectly.
+
+If the issues persist, contact Microsoft support.
+
+## Next steps
+[Learn more about Backup Reports](https://docs.microsoft.com/azure/backup/configure-reports)
backup Backup Reports System Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-reports-system-functions.md
+
+ Title: System functions on Azure Monitor Logs
+description: Write custom queries on Azure Monitor Logs using system functions
+ Last updated : 03/01/2021++
+# System functions on Azure Monitor Logs
+
+Azure Backup provides a set of functions, called system functions or solution functions, that are available by default in your Log Analytics (LA) workspaces.
+
+These functions operate on data in the [raw Azure Backup tables](https://docs.microsoft.com/azure/backup/backup-azure-reports-data-model) in LA and return formatted data that helps you easily retrieve information of all your backup-related entities, using simple queries. Users can pass parameters to these functions to filter the data that is returned by these functions.
+
+We recommend using system functions to query your backup data in LA workspaces when you create custom reports, as they provide a number of benefits, detailed in the section below.
+
+## Benefits of using system functions
+
+* **Simpler queries**: Using functions helps you reduce the number of joins needed in your queries. By default, the functions return 'flattened' schemas that incorporate all information pertaining to the entity (backup instance, job, vault, and so on) being queried. For example, if you need to get a list of successful backup jobs by backup item name and its associated container, a simple call to the **_AzureBackup_getJobs()** function will give you all of this information for each job. On the other hand, querying the raw tables directly would require you to perform multiple joins between [AddonAzureBackupJobs](https://docs.microsoft.com/azure/backup/backup-azure-reports-data-model#addonazurebackupjobs) and [CoreAzureBackup](https://docs.microsoft.com/azure/backup/backup-azure-reports-data-model#coreazurebackup) tables.
+
+* **Smoother transition from the legacy diagnostics event**: Using system functions helps you transition smoothly from the [legacy diagnostics event](https://docs.microsoft.com/azure/backup/backup-azure-diagnostic-events#legacy-event) (AzureBackupReport in AzureDiagnostics mode) to the [resource-specific events](https://docs.microsoft.com/azure/backup/backup-azure-diagnostic-events#diagnostics-events-available-for-azure-backup-users). All the system functions provided by Azure Backup allow you to specify a parameter that lets you choose whether the function should query data only from the resource-specific tables, or query data from both the legacy table and the resource-specific tables (with deduplication of records).
+ * If you have successfully migrated to the resource-specific tables, you can choose to exclude the legacy table from being queried by the function.
+ * If you are currently in the process of migration and have some data in the legacy tables which you require for analysis, you can choose to include the legacy table. When the transition is complete, and you no longer need data from the legacy table, you can simply update the value of the parameter passed to the function in your queries, to exclude the legacy table.
+ * If you are still using only the legacy table, the functions will still work if you choose to include the legacy table via the same parameter. However, it is recommended to [switch to the resource-specific tables](https://docs.microsoft.com/azure/backup/backup-azure-diagnostic-events#steps-to-move-to-new-diagnostics-settings-for-a-log-analytics-workspace) at the earliest.
+
+* **Reduced possibility of custom queries breaking**: If Azure Backup introduces improvements to the schema of the underlying LA tables to accommodate future reporting scenarios, the definition of the functions will also be updated to take the schema changes into account. Thus, if you use system functions for creating custom queries, your queries won't break, even if there are changes in the underlying schema of the tables.
+
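+As a sketch of the simpler-queries benefit described above, a single function call replaces the multi-table join. The filter value "Completed" and the projected column names are assumptions for illustration; the actual returned schema is documented with each function:
+
+```kusto
+// Successful backup and restore jobs over one week, from the flattened schema
+// returned by the system function (no joins against raw tables needed).
+_AzureBackup_GetJobs("2021-03-03 00:00:00", "2021-03-10 00:00:00")
+| where JobStatus == "Completed"
+| project JobOperation, JobStartDateTime, JobDurationInSecs, VaultName
+```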
+> [!NOTE]
+> System functions are maintained by Microsoft and their definitions cannot be edited by users. If you require editable functions, you can create [saved functions](https://docs.microsoft.com/azure/azure-monitor/logs/functions) in LA.
+
+## Types of system functions offered by Azure Backup
+
+* **Core functions**: These are functions that help you query any of the key Azure Backup entities, such as Backup Instances, Vaults, Policies, Jobs and Billing Entities. For example, the **_AzureBackup_getBackupInstances** function returns a list of all the backup instances that exist in your environment as of the latest completed day (in UTC). The parameters and returned schema for each of these core functions are summarized below in this article.
+
+* **Trend functions**: These are functions that return historical records for your backup-related entities (for example, backup instances, billing groups) and allow you to get daily, weekly and monthly trend information on key metrics (for example, Count, Storage consumed) pertaining to these entities. The parameters and returned schema for each of these trend functions are summarized below in this article.
+
+> [!NOTE]
+> Currently, system functions return data for up to the last completed day (in UTC). Data for the current partial day isn't returned. So if you are looking to retrieve records for the current day, you'll need to use the raw LA tables.
++
+## List of system functions
+
+### Core Functions
+
+#### _AzureBackup_GetVaults()
+
+This function returns the list of all Recovery Services vaults in your Azure environment that are associated with the LA workspace.
+
+**Parameters**
+
+| **Parameter Name** | **Description** | **Required?** | **Example value** |
+| --- | --- | --- | --- |
+| RangeStart | Use this parameter along with RangeEnd parameter only if you need to fetch all vault-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each vault. | N | "2021-03-03 00:00:00" |
+| RangeEnd | Use this parameter along with RangeStart parameter only if you need to fetch all vault-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each vault. | N |"2021-03-10 00:00:00"|
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those vaults that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those vaults that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
+| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records across all vaults. | N |"vault1,vault2,vault3"|
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter | N | "Microsoft.RecoveryServices/vaults"|
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
+
+**Returned Fields**
+
+| **Field Name** | **Description** |
+| --- | --- |
+| UniqueId | Primary key denoting unique ID of the vault |
+| Id | Azure Resource Manager (ARM) ID of the vault |
+| Name | Name of the vault |
+| SubscriptionId | ID of the subscription in which the vault exists |
+| Location | Location in which the vault exists |
+| VaultStore_StorageReplicationType | Storage Replication Type associated with the vault |
+| Tags | Tags of the vault |
+| TimeGenerated | Timestamp of the record |
+| Type | Type of the vault, which is "Microsoft.RecoveryServices/vaults"|
+
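+Putting the parameters above together, a call might look like the following. This is a hedged sketch: parameter order is assumed to follow the table above, and the string form of the range values mirrors the example values shown there:
+
+```kusto
+// Vault records in one subscription over the given period,
+// querying resource-specific tables only (ExcludeLegacyEvent = true).
+_AzureBackup_GetVaults("2021-03-03 00:00:00", "2021-03-10 00:00:00", "00000000-0000-0000-0000-000000000000", "*", "*", "Microsoft.RecoveryServices/vaults", true)
+| project Name, SubscriptionId, Location, VaultStore_StorageReplicationType
+```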
+#### _AzureBackup_GetPolicies()
+
+This function returns the list of backup policies that are being used in your Azure environment along with detailed information about each policy such as the datasource type, storage replication type, and so on.
+
+**Parameters**
+
+| **Parameter Name** | **Description** | **Required?** | **Example value** |
+| --- | --- | --- | --- |
+| RangeStart | Use this parameter along with the RangeEnd parameter only if you need to fetch all policy-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each policy. | N | "2021-03-03 00:00:00" |
+| RangeEnd | Use this parameter along with RangeStart parameter only if you need to fetch all policy-related records in the time period from RangeStart to RangeEnd. By default, the value of RangeStart and RangeEnd are null, which will make the function retrieve only the latest record for each policy. | N |"2021-03-10 00:00:00"|
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those policies that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those policies that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
+| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of policies pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of policies across all vaults. | N |"vault1,vault2,vault3"|
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter. | N | "Microsoft.RecoveryServices/vaults"|
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
+| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM" as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server" or a comma-separated combination of any of these values). | N | "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent" |
+
+**Returned Fields**
+
+| **Field Name** | **Description** |
+| -- | |
+| UniqueId | Primary key denoting unique ID of the policy |
+| Id | Azure Resource Manager (ARM) ID of the policy |
+| Name | Name of the policy |
+| BackupSolution | Backup Solution that the policy is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. |
+| TimeGenerated | Timestamp of the record |
+| VaultUniqueId | Foreign key that refers to the vault associated with the policy |
+| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the policy |
+| VaultName | Name of the vault associated with the policy |
+| VaultTags | Tags of the vault associated with the policy |
+| VaultLocation | Location of the vault associated with the policy |
+| VaultSubscriptionId | Subscription ID of the vault associated with the policy |
+| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the policy |
+| VaultType | Type of the vault, which is "Microsoft.RecoveryServices/vaults" |
+| ExtendedProperties | Additional properties of the policy |
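+
+**Sample query**
+
+The following query is an illustrative sketch rather than an official sample. It assumes the function can be invoked with no arguments (the parameters shown above are optional with the listed defaults) and that field names match the Returned Fields table.
+
+```kusto
+// Illustrative: list all backup policies with their associated vault details.
+// Field names (for example, BackupSolution) are assumed from the table above.
+_AzureBackup_GetPolicies()
+| project Name, BackupSolution, VaultName, VaultLocation
+| sort by VaultName asc
+```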
+
+#### _AzureBackup_GetJobs()
+
+This function returns a list of all backup and restore related jobs that were triggered in a specified time range, along with detailed information about each job, such as job status, job duration, data transferred, and so on.
+
+**Parameters**
+
+| **Parameter Name** | **Description** | **Required?** | **Example value** |
+| -- | - | | -- |
+| RangeStart | Use this parameter along with RangeEnd parameter to retrieve the list of all jobs that started in the time period from RangeStart to RangeEnd. | Y | "2021-03-03 00:00:00" |
+| RangeEnd | Use this parameter along with RangeStart parameter to retrieve the list of all jobs that started in the time period from RangeStart to RangeEnd. | Y |"2021-03-10 00:00:00"|
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those jobs that are associated with vaults in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those jobs that are associated with vaults in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
+| VaultList | Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve jobs pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for jobs across all vaults. | N |"vault1,vault2,vault3"|
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter. | N | "Microsoft.RecoveryServices/vaults"|
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
+| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM" as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server" or a comma-separated combination of any of these values). | N | "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent" |
+| JobOperationList | Use this parameter to filter the output of the function for a specific type of job. For example, Backup or Restore. By default, the value of this parameter is "*", which makes the function search for both Backup and Restore jobs. | N | "Backup" |
+| JobStatusList | Use this parameter to filter the output of the function for a specific job status. For example, Completed, Failed, and so on. By default, the value of this parameter is "*", which makes the function search for all jobs irrespective of status. | N | "Failed,CompletedWithWarnings" |
+| JobFailureCodeList | Use this parameter to filter the output of the function for a specific failure code. By default, the value of this parameter is "*", which makes the function search for all jobs irrespective of failure code. | N | "Success" |
+| DatasourceSetName | Use this parameter to filter the output of the function to a particular parent resource. For example, to return SQL in Azure VM backup instances belonging to the virtual machine "testvm", specify _testvm_ as the value of this parameter. By default, the value is "*", which makes the function search for records across all backup instances. | N | "testvm" |
+| BackupInstanceName | Use this parameter to search for jobs on a particular backup instance by name. By default, the value is "*", which makes the function search for records across all backup instances. | N | "testvm" |
+| ExcludeLog | Use this parameter to exclude log jobs from being returned by the function (helps in query performance). By default, the value of this parameter is true, which makes the function exclude log jobs. | N | true |
+
+**Returned Fields**
+
+| **Field Name** | **Description** |
+| -- | |
+| UniqueId | Primary key denoting unique ID of the job |
+| OperationCategory | Category of the operation being performed. For example, Backup, Restore |
+| Operation | Details of the operation being performed. For example, Log (for log backup)|
+| Status | Status of the job. For example, Completed, Failed, CompletedWithWarnings |
+| ErrorTitle | Failure code of the job |
+| StartTime | Date and time at which the job started |
+| DurationInSecs | Duration of the job in seconds |
+| DataTransferredInMBs | Data transferred by the job in MBs |
+| RestoreJobRPDateTime | The date and time when the recovery point that's being recovered was created |
+| RestoreJobRPLocation | The location where the recovery point that's being recovered was stored |
+| BackupInstanceUniqueId | Foreign key that refers to the backup instance associated with the job |
+| BackupInstanceId | Azure Resource Manager (ARM) ID of the backup instance associated with the job |
+| BackupInstanceFriendlyName | Name of the backup instance associated with the job |
+| DatasourceResourceId | Azure Resource Manager (ARM) ID of the underlying datasource associated with the job. For example, Azure Resource Manager (ARM) ID of the VM |
+| DatasourceFriendlyName | Friendly name of the underlying datasource associated with the job |
+| DatasourceType | Type of the datasource associated with the job. For example, "Microsoft.Compute/virtualMachines" |
+| BackupSolution | Backup Solution that the job is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. |
+| DatasourceSetResourceId | Azure Resource Manager (ARM) ID of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the Azure Resource Manager (ARM) ID of the VM in which the SQL Database exists |
+| DatasourceSetType | Type of the parent resource of the datasource (wherever applicable). For example, for an SAP HANA in Azure VM datasource, this field will be Microsoft.Compute/virtualMachines since the parent resource is an Azure VM |
+| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the job |
+| VaultUniqueId | Foreign key that refers to the vault associated with the job |
+| VaultName | Name of the vault associated with the job |
+| VaultTags | Tags of the vault associated with the job |
+| VaultSubscriptionId | Subscription ID of the vault associated with the job |
+| VaultLocation | Location of the vault associated with the job |
+| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the job |
+| VaultType | Type of the vault, which is "Microsoft.RecoveryServices/vaults" |
+| TimeGenerated | Timestamp of the record |
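+
+**Sample query**
+
+The following query is an illustrative sketch rather than an official sample. It assumes RangeStart and RangeEnd are the first two positional parameters, in the order listed in the Parameters table above, and uses only fields from the Returned Fields table.
+
+```kusto
+// Illustrative: count failed jobs by error code over a one-week window.
+_AzureBackup_GetJobs("2021-03-03 00:00:00", "2021-03-10 00:00:00")
+| where Status == "Failed"
+| summarize FailedJobCount = count() by ErrorTitle, BackupSolution
+| sort by FailedJobCount desc
+```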
+
+#### _AzureBackup_GetBackupInstances()
+
+This function returns the list of backup instances that are associated with your Recovery Services vaults, along with detailed information about each backup instance, such as cloud storage consumption, associated policy, and so on.
+
+**Parameters**
+
+| **Parameter Name** | **Description** | **Required?** | **Example value** |
+| -- | - | | -- |
+| RangeStart | Use this parameter along with RangeEnd parameter only if you need to fetch all backup instance-related records in the time period from RangeStart to RangeEnd. By default, the values of RangeStart and RangeEnd are null, which makes the function retrieve only the latest record for each backup instance. | N | "2021-03-03 00:00:00" |
+| RangeEnd | Use this parameter along with RangeStart parameter only if you need to fetch all backup instance-related records in the time period from RangeStart to RangeEnd. By default, the values of RangeStart and RangeEnd are null, which makes the function retrieve only the latest record for each backup instance. | N |"2021-03-10 00:00:00"|
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those backup instances that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those backup instances that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
+| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of backup instances pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of backup instances across all vaults. | N |"vault1,vault2,vault3"|
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter. | N | "Microsoft.RecoveryServices/vaults"|
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
+| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM" as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server" or a comma-separated combination of any of these values). | N | "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent" |
+| ProtectionInfoList | Use this parameter to choose whether to include only those backup instances that are actively protected, or to also include those instances for which protection has been stopped and instances for which initial backup is pending. Supported values are "Protected", "ProtectionStopped", "InitialBackupPending" or a comma-separated combination of any of these values. By default, the value is "*", which makes the function search for all backup instances irrespective of protection details. | N | "Protected" |
+| DatasourceSetName | Use this parameter to filter the output of the function to a particular parent resource. For example, to return SQL in Azure VM backup instances belonging to the virtual machine "testvm", specify _testvm_ as the value of this parameter. By default, the value is "*", which makes the function search for records across all backup instances. | N | "testvm" |
+| BackupInstanceName | Use this parameter to search for a particular backup instance by name. By default, the value is "*", which makes the function search for all backup instances. | N | "testvm" |
+| DisplayAllFields | Use this parameter to choose whether to retrieve only a subset of the fields returned by the function. If the value of this parameter is false, the function eliminates storage and retention point-related information from the output of the function. This is useful if you are using this function as an intermediate step in a larger query and need to optimize the performance of the query by eliminating columns that you do not require for analysis. By default, the value of this parameter is true, which makes the function return all fields pertaining to the backup instance. | N | true |
+
+**Returned Fields**
+
+| **Field Name** | **Description** |
+| -- | |
+| UniqueId | Primary key denoting unique ID of the backup instance |
+| Id | Azure Resource Manager (ARM) ID of the backup instance |
+| FriendlyName | Friendly name of the backup instance |
+| ProtectionInfo | Information about the protection settings of the backup instance. For example, protection configured, protection stopped, initial backup pending |
+| LatestRecoveryPoint | Date and time of the latest recovery point associated with the backup instance |
+| OldestRecoveryPoint | Date and time of the oldest recovery point associated with the backup instance |
+| SourceSizeInMBs | Frontend size of the backup instance in MBs |
+| VaultStore_StorageConsumptionInMBs | Total cloud storage consumed by the backup instance in the vault-standard tier |
+| DataSourceFriendlyName | Friendly name of the datasource corresponding to the backup instance |
+| BackupSolution | Backup Solution that the backup instance is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. |
+| DatasourceType | Type of the datasource corresponding to the backup instance. For example, "Microsoft.Compute/virtualMachines" |
+| DatasourceResourceId | Azure Resource Manager (ARM) ID of the underlying datasource corresponding to the backup instance. For example, Azure Resource Manager (ARM) ID of the VM |
+| DatasourceSetFriendlyName | Friendly name of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the name of the VM in which the SQL Database exists |
+| DatasourceSetResourceId | Azure Resource Manager (ARM) ID of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the Azure Resource Manager (ARM) ID of the VM in which the SQL Database exists |
+| DatasourceSetType | Type of the parent resource of the datasource (wherever applicable). For example, for an SAP HANA in Azure VM datasource, this field will be Microsoft.Compute/virtualMachines since the parent resource is an Azure VM |
+| PolicyName | Name of the policy associated with the backup instance |
+| PolicyUniqueId | Foreign key that refers to the policy associated with the backup instance |
+| PolicyId | Azure Resource Manager (ARM) ID of the policy associated with the backup instance |
+| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the backup instance |
+| VaultUniqueId | Foreign key that refers to the vault associated with the backup instance |
+| VaultName | Name of the vault associated with the backup instance |
+| VaultTags | Tags of the vault associated with the backup instance |
+| VaultSubscriptionId | Subscription ID of the vault associated with the backup instance |
+| VaultLocation | Location of the vault associated with the backup instance |
+| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the backup instance |
+| VaultType | Type of the vault, which is "Microsoft.RecoveryServices/vaults" |
+| TimeGenerated | Timestamp of the record |
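+
+**Sample query**
+
+The following query is an illustrative sketch rather than an official sample. It relies on the default behavior described above (no time range passed, so only the latest record per backup instance is returned) and uses only fields from the Returned Fields table.
+
+```kusto
+// Illustrative: top 10 backup instances by cloud storage consumed,
+// using the latest record for each instance.
+_AzureBackup_GetBackupInstances()
+| project FriendlyName, BackupSolution, SourceSizeInMBs, VaultStore_StorageConsumptionInMBs, VaultName
+| sort by VaultStore_StorageConsumptionInMBs desc
+| take 10
+```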
+
+#### _AzureBackup_GetBillingGroups()
+
+This function returns a list of all backup-related billing entities (billing groups) along with information on key billing components such as frontend size and total cloud storage.
+
+**Parameters**
+
+| **Parameter Name** | **Description** | **Required?** | **Example value** |
+| -- | - | | -- |
+| RangeStart | Use this parameter along with RangeEnd parameter only if you need to fetch all billing group related records in the time period from RangeStart to RangeEnd. By default, the values of RangeStart and RangeEnd are null, which makes the function retrieve only the latest record for each billing group. | N | "2021-03-03 00:00:00" |
+| RangeEnd | Use this parameter along with RangeStart parameter only if you need to fetch all billing group related records in the time period from RangeStart to RangeEnd. By default, the values of RangeStart and RangeEnd are null, which makes the function retrieve only the latest record for each billing group. | N |"2021-03-10 00:00:00"|
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those billing groups that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those billing groups that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
+| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of backup instances pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of billing groups across all vaults. | N |"vault1,vault2,vault3"|
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter. | N | "Microsoft.RecoveryServices/vaults"|
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
+| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM" as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server" or a comma-separated combination of any of these values). | N | "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent" |
+| BillingGroupName | Use this parameter to search for a particular billing group by name. By default, the value is "*", which makes the function search for all billing groups. | N | "testvm" |
+
+**Returned Fields**
+
+| **Field Name** | **Description** |
+| -- | |
+| UniqueId | Primary key denoting unique ID of the billing group |
+| FriendlyName | Friendly name of the billing group |
+| Name | Name of the billing group |
+| Type | Type of billing group. For example, ProtectedContainer or BackupItem |
+| SourceSizeInMBs | Frontend size of the billing group in MBs |
+| VaultStore_StorageConsumptionInMBs | Total cloud storage consumed by the billing group in the vault-standard tier |
+| BackupSolution | Backup Solution that the billing group is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. |
+| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the billing group |
+| VaultUniqueId | Foreign key that refers to the vault associated with the billing group |
+| VaultName | Name of the vault associated with the billing group |
+| VaultTags | Tags of the vault associated with the billing group |
+| VaultSubscriptionId | Subscription ID of the vault associated with the billing group |
+| VaultLocation | Location of the vault associated with the billing group |
+| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the billing group |
+| VaultType | Type of the vault, which is "Microsoft.RecoveryServices/vaults" |
+| TimeGenerated | Timestamp of the record |
+| ExtendedProperties | Additional properties of the billing group |
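+
+**Sample query**
+
+The following query is an illustrative sketch rather than an official sample. It relies on the default behavior described above (no time range passed, so only the latest record per billing group is returned) and uses only fields from the Returned Fields table.
+
+```kusto
+// Illustrative: total frontend size and cloud storage per backup solution,
+// aggregated from the latest record of each billing group.
+_AzureBackup_GetBillingGroups()
+| summarize TotalFrontendSizeInMBs = sum(SourceSizeInMBs),
+            TotalStorageInMBs = sum(VaultStore_StorageConsumptionInMBs) by BackupSolution
+```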
+
+### Trend Functions
+
+#### _AzureBackup_GetBackupInstancesTrends()
+
+This function returns historical records for each backup instance, allowing you to view key daily, weekly and monthly trends related to backup instance count and storage consumption, at multiple levels of granularity.
+
+**Parameters**
+
+| **Parameter Name** | **Description** | **Required?** | **Example value** |
+| -- | - | | -- |
+| RangeStart | Use this parameter along with RangeEnd parameter to retrieve all backup instance related records in the time period from RangeStart to RangeEnd. | Y | "2021-03-03 00:00:00" |
+| RangeEnd | Use this parameter along with RangeStart parameter to retrieve all backup instance related records in the time period from RangeStart to RangeEnd. | Y |"2021-03-10 00:00:00"|
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those backup instances that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those backup instances that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
+| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of backup instances pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of backup instances across all vaults. | N |"vault1,vault2,vault3"|
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter. | N | "Microsoft.RecoveryServices/vaults"|
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
+| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM" as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server" or a comma-separated combination of any of these values). | N | "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent" |
+| ProtectionInfoList | Use this parameter to choose whether to include only those backup instances that are actively protected, or to also include those instances for which protection has been stopped and instances for which initial backup is pending. Supported values are "Protected", "ProtectionStopped", "InitialBackupPending" or a comma-separated combination of any of these values. By default, the value is "*", which makes the function search for all backup instances irrespective of protection details. | N | "Protected" |
+| DatasourceSetName | Use this parameter to filter the output of the function to a particular parent resource. For example, to return SQL in Azure VM backup instances belonging to the virtual machine "testvm", specify _testvm_ as the value of this parameter. By default, the value is "*", which makes the function search for records across all backup instances. | N | "testvm" |
+| BackupInstanceName | Use this parameter to search for a particular backup instance by name. By default, the value is "*", which makes the function search for all backup instances. | N | "testvm" |
+| DisplayAllFields | Use this parameter to choose whether to retrieve only a subset of the fields returned by the function. If the value of this parameter is false, the function eliminates storage and retention point-related information from the output of the function. This is useful if you are using this function as an intermediate step in a larger query and need to optimize the performance of the query by eliminating columns that you do not require for analysis. By default, the value of this parameter is true, which makes the function return all fields pertaining to the backup instance. | N | true |
+| AggregationType | Use this parameter to specify the time granularity at which data should be retrieved. If the value of this parameter is "Daily", the function returns a record per backup instance per day, allowing you to analyze daily trends of storage consumption and backup instance count. If the value of this parameter is "Weekly", the function returns a record per backup instance per week, allowing you to analyze weekly trends. Similarly, you can specify "Monthly" to analyze monthly trends. Default value is "Daily". If you are viewing data across larger time ranges, it is recommended to use "Weekly" or "Monthly" for better query performance and ease of trend analysis. | N | "Weekly" |
+
+**Returned Fields**
+
+| **Field Name** | **Description** |
+| -- | |
+| UniqueId | Primary key denoting unique ID of the backup instance |
+| Id | Azure Resource Manager (ARM) ID of the backup instance |
+| FriendlyName | Friendly name of the backup instance |
+| ProtectionInfo | Information about the protection settings of the backup instance. For example, protection configured, protection stopped, initial backup pending |
+| LatestRecoveryPoint | Date and time of the latest recovery point associated with the backup instance |
+| OldestRecoveryPoint | Date and time of the oldest recovery point associated with the backup instance |
+| SourceSizeInMBs | Frontend size of the backup instance in MBs |
+| VaultStore_StorageConsumptionInMBs | Total cloud storage consumed by the backup instance in the vault-standard tier |
+| DataSourceFriendlyName | Friendly name of the datasource corresponding to the backup instance |
+| BackupSolution | Backup Solution that the backup instance is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. |
+| DatasourceType | Type of the datasource corresponding to the backup instance. For example, "Microsoft.Compute/virtualMachines" |
+| DatasourceResourceId | Azure Resource Manager (ARM) ID of the underlying datasource corresponding to the backup instance. For example, Azure Resource Manager (ARM) ID of the VM |
+| DatasourceSetFriendlyName | Friendly name of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the name of the VM in which the SQL Database exists |
+| DatasourceSetResourceId | Azure Resource Manager (ARM) ID of the parent resource of the datasource (wherever applicable). For example, for a SQL in Azure VM datasource, this field will contain the Azure Resource Manager (ARM) ID of the VM in which the SQL Database exists |
+| DatasourceSetType | Type of the parent resource of the datasource (wherever applicable). For example, for an SAP HANA in Azure VM datasource, this field will be Microsoft.Compute/virtualMachines since the parent resource is an Azure VM |
+| PolicyName | Name of the policy associated with the backup instance |
+| PolicyUniqueId | Foreign key that refers to the policy associated with the backup instance |
+| PolicyId | Azure Resource Manager (ARM) ID of the policy associated with the backup instance |
+| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the backup instance |
+| VaultUniqueId | Foreign key which refers to the vault associated with the backup instance |
+| VaultName | Name of the vault associated with the backup instance |
+| VaultTags | Tags of the vault associated with the backup instance |
+| VaultSubscriptionId | Subscription ID of the vault associated with the backup instance |
+| VaultLocation | Location of the vault associated with the backup instance |
+| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the backup instance |
+| VaultType | Type of the vault, which is "Microsoft.RecoveryServices/vaults" |
+| TimeGenerated | Timestamp of the record |
+
+#### _AzureBackup_GetBillingGroupsTrends()
+
+This function returns historical records for each billing group, allowing you to view key trends in frontend size and storage consumption at daily, weekly, and monthly granularity.
+
+**Parameters**
+
+| **Parameter Name** | **Description** | **Required?** | **Example value** |
+| -- | - | -- | -- |
+| RangeStart | Use this parameter along with RangeEnd parameter to retrieve all billing group related records in the time period from RangeStart to RangeEnd. | Y | "2021-03-03 00:00:00" |
+| RangeEnd | Use this parameter along with RangeStart parameter to retrieve all billing group related records in the time period from RangeStart to RangeEnd. | Y |"2021-03-10 00:00:00"|
+| VaultSubscriptionList | Use this parameter to filter the output of the function for a certain set of subscriptions where backup data exists. Specifying a comma-separated list of subscription IDs as a parameter to this function helps you retrieve only those billing groups that are in the specified subscriptions. By default, the value of this parameter is '*', which makes the function search for records across all subscriptions. | N | "00000000-0000-0000-0000-000000000000,11111111-1111-1111-1111-111111111111"|
+| VaultLocationList | Use this parameter to filter the output of the function for a certain set of regions where backup data exists. Specifying a comma-separated list of regions as a parameter to this function helps you retrieve only those billing groups that are in the specified regions. By default, the value of this parameter is '*', which makes the function search for records across all regions. | N | "eastus,westus"|
+| VaultList |Use this parameter to filter the output of the function for a certain set of vaults. Specifying a comma-separated list of vault names as a parameter to this function helps you retrieve records of backup instances pertaining only to the specified vaults. By default, the value of this parameter is '*', which makes the function search for records of billing groups across all vaults. | N |"vault1,vault2,vault3"|
+| VaultTypeList | Use this parameter to filter the output of the function to records pertaining to a particular vault type. Currently the only supported vault type is "Microsoft.RecoveryServices/vaults", which is the default value of this parameter. | N | "Microsoft.RecoveryServices/vaults"|
+| ExcludeLegacyEvent | Use this parameter to choose whether to query data in the legacy AzureDiagnostics table or not. If the value of this parameter is false, the function queries data from both the AzureDiagnostics table and the Resource specific tables. If the value of this parameter is true, the function queries data from only the Resource specific tables. Default value is true. | N | true |
+| BackupSolutionList | Use this parameter to filter the output of the function for a certain set of backup solutions used in your Azure environment. For example, if you specify "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM" as the value of this parameter, the function only returns records that are related to items backed up using Azure Virtual Machine backup, SQL in Azure VM backup or DPM to Azure backup. By default, the value of this parameter is '*', which makes the function return records pertaining to all backup solutions that are supported by Backup Reports (supported values are "Azure Virtual Machine Backup", "SQL in Azure VM Backup", "SAP HANA in Azure VM Backup", "Azure Storage (Azure Files) Backup", "Azure Backup Agent", "DPM", "Azure Backup Server" or a comma-separated combination of any of these values). | N | "Azure Virtual Machine Backup,SQL in Azure VM Backup,DPM,Azure Backup Agent" |
+| BillingGroupName | Use this parameter to search for a particular billing group by name. By default, the value is "*", which makes the function search for all billing groups. | N | "testvm" |
+| AggregationType | Use this parameter to specify the time granularity at which data should be retrieved. If the value of this parameter is "Daily", the function returns a record per billing group per day, allowing you to analyze daily trends of storage consumption and frontend size. If the value of this parameter is "Weekly", the function returns a record per billing group per week, allowing you to analyze weekly trends. Similarly, you can specify "Monthly" to analyze monthly trends. Default value is "Daily". If you are viewing data across larger time ranges, it is recommended to use "Weekly" or "Monthly" for better query performance and ease of trend analysis. | N | "Weekly" |
+
+**Returned Fields**
+
+| **Field Name** | **Description** |
+| -- | -- |
+| UniqueId | Primary key denoting unique ID of the billing group |
+| FriendlyName | Friendly name of the billing group |
+| Name | Name of the billing group |
+| Type | Type of billing group. For example, ProtectedContainer or BackupItem |
+| SourceSizeInMBs | Frontend size of the billing group in MBs |
+| VaultStore_StorageConsumptionInMBs | Total cloud storage consumed by the billing group in the vault-standard tier |
+| BackupSolution | Backup Solution that the billing group is associated with. For example, Azure VM Backup, SQL in Azure VM Backup, and so on. |
+| VaultResourceId | Azure Resource Manager (ARM) ID of the vault associated with the billing group |
+| VaultUniqueId | Foreign key which refers to the vault associated with the billing group |
+| VaultName | Name of the vault associated with the billing group |
+| VaultTags | Tags of the vault associated with the billing group |
+| VaultSubscriptionId | Subscription ID of the vault associated with the billing group |
+| VaultLocation | Location of the vault associated with the billing group |
+| VaultStore_StorageReplicationType | Storage Replication Type of the vault associated with the billing group |
+| VaultType | Type of the vault, which is "Microsoft.RecoveryServices/vaults" |
+| TimeGenerated | Timestamp of the record |
+| ExtendedProperties | Additional properties of the billing group |
+
+## Sample Queries
+
+Below are some sample queries to help you get started with the system functions.
+
+- All failed Azure VM backup jobs in a given time range
+
+ ````Kusto
+ _AzureBackup_GetJobs("2021-03-05", "2021-03-06") //call function with RangeStart and RangeEnd parameters set, and other parameters with default value
+ | where BackupSolution=="Azure Virtual Machine Backup" and Status=="Failed"
+ | project BackupInstanceFriendlyName, BackupInstanceId, OperationCategory, Status, JobStartDateTime=StartTime, JobDuration=DurationInSecs/3600, ErrorTitle, DataTransferred=DataTransferredInMBs
+ ````
+
+- All SQL log backup jobs in a given time range
+
+ ````Kusto
+ _AzureBackup_GetJobs("2021-03-05", "2021-03-06","*","*","*","*",true,"*","*","*","*","*","*",false) //call function with RangeStart and RangeEnd parameters set, ExcludeLog parameter as false, and other parameters with default value
+ | where BackupSolution=="SQL in Azure VM Backup" and Operation=="Log"
+ | project BackupInstanceFriendlyName, BackupInstanceId, OperationCategory, Status, JobStartDateTime=StartTime, JobDuration=DurationInSecs/3600, ErrorTitle, DataTransferred=DataTransferredInMBs
+ ````
+
+- Weekly trend of backup storage consumed for VM "testvm"
+
+ ````Kusto
+ _AzureBackup_GetBackupInstancesTrends("2021-01-01", "2021-03-06","*","*","*","*",false,"*","*","*","*",true, "Weekly") //call function with RangeStart and RangeEnd parameters set, AggregationType parameter as Weekly, and other parameters with default value
+ | where BackupSolution == "Azure Virtual Machine Backup"
+ | where FriendlyName == "testvm"
+ | project TimeGenerated, VaultStore_StorageConsumptionInMBs
+ | render timechart
+ ````
+
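+- Weekly trend of frontend size for billing group "testvm" (an illustrative sketch; it assumes the parameters are passed positionally in the order listed in the parameters table above, as in the other samples)
+
+ ````Kusto
+ _AzureBackup_GetBillingGroupsTrends("2021-01-01", "2021-03-06","*","*","*","*",true,"*","testvm","Weekly") //call function with RangeStart and RangeEnd parameters set, BillingGroupName parameter as testvm, AggregationType parameter as Weekly, and other parameters with default value
+ | where BackupSolution == "Azure Virtual Machine Backup"
+ | project TimeGenerated, SourceSizeInMBs
+ | render timechart
+ ````
+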
+## Next steps
+[Learn more about Backup Reports](https://docs.microsoft.com/azure/backup/configure-reports)
backup Configure Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/configure-reports.md
Today, Azure Backup provides a reporting solution that uses [Azure Monitor logs]
## Supported scenarios -- Backup reports are supported for Azure VMs, SQL in Azure VMs, SAP HANA in Azure VMs, Microsoft Azure Recovery Services (MARS) agent, Microsoft Azure Backup Server (MABS), and System Center Data Protection Manager (DPM). For Azure File share backup, data is displayed for all records created on or after June 1, 2020.-- For Azure File share backup, data on protected instances is currently not displayed in the reports (defaults to zero for all backup items).
+- Backup reports are supported for Azure VMs, SQL in Azure VMs, SAP HANA in Azure VMs, Microsoft Azure Recovery Services (MARS) agent, Microsoft Azure Backup Server (MABS), and System Center Data Protection Manager (DPM). For Azure File share backup, data is displayed for records created on or after June 1, 2020.
+- For Azure File share backup, data on protected instances is displayed for records created after February 1, 2021 (defaults to zero for older records).
- For DPM workloads, Backup reports are supported for DPM Version 5.1.363.0 and above and Agent Version 2.0.9127.0 and above. - For MABS workloads, Backup reports are supported for MABS Version 13.0.415.0 and above and Agent Version 2.0.9170.0 and above. - Backup reports can be viewed across all backup items, vaults, subscriptions, and regions as long as their data is being sent to a Log Analytics workspace that the user has access to. To view reports for a set of vaults, you only need to have reader access to the Log Analytics workspace to which the vaults are sending their data. You don't need to have access to the individual vaults.
The **Backup Management Type** filter at the top of the tab should have the item
###### Policy adherence
-Using this tab, you can identify whether all of your backup instances have had at least one successful backup every day. You can view policy adherence by time period, or by backup instance.
+Using this tab, you can identify whether all of your backup instances have had at least one successful backup every day. For items with a weekly backup policy, you can use this tab to determine whether all backup instances have had at least one successful backup per week.
+
+There are two types of policy adherence views available:
+
+* **Policy Adherence by Time Period**: Using this view, you can identify how many items have had at least one successful backup in a given day and how many have not had a successful backup in that day. You can click on a row to see details of all backup jobs that have been triggered on the selected day. Note that if you increase the time range to a larger value, such as the last 60 days, the grid is rendered in weekly view, and displays the count of all items that have had at least one successful backup on every day in the given week. Similarly, there is a monthly view for larger time ranges.
+
+In the case of items backed up weekly, this grid helps you identify all items that have had at least one successful backup in the given week. For a larger time range, such as the last 120 days, the grid is rendered in monthly view, and displays the count of all items that have had at least one successful backup in every week in the given month. Refer to [Conventions used in Backup Reports](https://docs.microsoft.com/azure/backup/configure-reports#conventions-used-in-backup-reports) for more details about the daily, weekly, and monthly views.
+
+![Policy Adherence By Time Period](./media/backup-azure-configure-backup-reports/policy-adherence-by-time-period.png)
+
+* **Policy Adherence by Backup Instance**: Using this view, you can see policy adherence details at the backup instance level. A green cell denotes that the backup instance had at least one successful backup on the given day; a red cell denotes that it didn't have any successful backup on that day. Daily, weekly, and monthly aggregations follow the same behavior as the Policy Adherence by Time Period view. You can click on any row to view all backup jobs for the given backup instance in the selected time range.
+
+![Policy Adherence By Backup Instance](./media/backup-azure-configure-backup-reports/policy-adherence-by-backup-instance.png)
###### Email Azure Backup reports Using the **Email Report** feature available in Backup Reports, you can create automated tasks to receive periodic reports via email. This feature works by deploying a logic app in your Azure environment that queries data from your selected Log Analytics (LA) workspaces, based on the inputs that you provide.
-Once the logic app is created, you'll need to authorize connections to Azure Monitor Logs and Office 365. To do this, navigate to **Logic Apps** in the Azure portal and search for the name of the task you've created. Selecting the **API connections** menu item opens up the list of API connections that you need to authorize.
+Once the logic app is created, you'll need to authorize connections to Azure Monitor Logs and Office 365. To do this, navigate to **Logic Apps** in the Azure portal and search for the name of the task you've created. Selecting the **API connections** menu item opens up the list of API connections that you need to authorize. [Learn more about how to configure emails and troubleshoot issues](backup-reports-email.md).
###### Customize Azure Backup reports
-Backup Reports uses functions on Azure Monitor logs. These functions operate on data in the raw Azure Backup tables in LA and return formatted data that helps you easily retrieve information of all your backup-related entities, using simple queries.
+Backup Reports uses [system functions on Azure Monitor logs](backup-reports-system-functions.md). These functions operate on data in the raw Azure Backup tables in LA and return formatted data that helps you easily retrieve information of all your backup-related entities, using simple queries.
+
+To create your own reporting workbooks using Backup Reports as a base, you can navigate to Backup Reports, click on **Edit** at the top of the report, and view/edit the queries being used in the reports. Refer to [Azure workbooks documentation](https://docs.microsoft.com/azure/azure-monitor/visualize/workbooks-overview) to learn more about how to create custom reports.
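+
+For example, a minimal custom query (illustrative; it reuses the fields shown in the sample queries above) that charts the daily count of failed jobs could look like:
+
+ ````Kusto
+ _AzureBackup_GetJobs("2021-03-01", "2021-03-10")
+ | where Status == "Failed"
+ | summarize FailedJobCount = count() by bin(StartTime, 1d)
+ | render columnchart
+ ````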
## Export to Excel
If you use [Azure Lighthouse](../lighthouse/index.yml) with delegated access to
- The report shows details of jobs (apart from log jobs) that were *triggered* in the selected time range. - The values shown for **Cloud Storage** and **Protected Instances** are at the *end* of the selected time range. - The Backup items displayed in the reports are those items that exist at the *end* of the selected time range. Backup items that were deleted in the middle of the selected time range aren't displayed. The same convention applies for Backup policies as well.
+- If the selected time range spans a period of 30 days or less, charts are rendered in daily view, where there is one data point for every day. If the time range spans a period greater than 30 days and less than (or equal to) 90 days, charts are rendered in weekly view. For larger time ranges, charts are rendered in monthly view. Aggregating data weekly or monthly improves query performance and makes the data in charts easier to read.
+- The Policy Adherence grids follow a similar aggregation logic to the one described above, with a couple of minor differences. First, for items with a weekly backup policy, there is no daily view (only weekly and monthly views are available). Second, in the grids for items with a weekly backup policy, a 'month' is considered a 4-week period (28 days) rather than 30 days, to eliminate partial weeks from consideration.
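+
+The chart granularity rule above can be sketched in Kusto (illustrative only; the dates stand for the selected time range, and this is not the exact logic the reports use):
+
+ ````Kusto
+ let RangeStart = datetime(2021-01-01);
+ let RangeEnd = datetime(2021-03-06);
+ print Granularity = case(
+     datetime_diff('day', RangeEnd, RangeStart) <= 30, "Daily",
+     datetime_diff('day', RangeEnd, RangeStart) <= 90, "Weekly",
+     "Monthly")
+ ````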
## Query load times
backup Selective Disk Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/selective-disk-backup-restore.md
Enable-AzRecoveryServicesBackupProtection -Policy $pol -Name "V2VM" -ResourceGro
### Get backup item object to be passed in modify protection with PowerShell ```azurepowershell
-$item= Get-AzRecoveryServicesBackupItem -BackupManagementType "AzureVM" -WorkloadType "AzureVM" -VaultId $Vault.ID -FriendlyName "V2VM"
+$item= Get-AzRecoveryServicesBackupItem -BackupManagementType "AzureVM" -WorkloadType "AzureVM" -VaultId $targetVault.ID -FriendlyName "V2VM"
``` You need to pass the **$item** object obtained above to the **-Item** parameter in the following cmdlets.
Enable-AzRecoveryServicesBackupProtection -Item $item -ResetExclusionSettings -V
### Restore selective disks with PowerShell ```azurepowershell
-Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] -StorageAccountName "DestAccount" -StorageAccountResourceGroupName "DestRG" -TargetResourceGroupName "DestRGforManagedDisks" -VaultId $targetVault.ID -RestoreDiskList [Strings]
+$startDate = (Get-Date).AddDays(-7)
+$endDate = Get-Date
+$rp = Get-AzRecoveryServicesBackupRecoveryPoint -Item $item -StartDate $startDate.ToUniversalTime() -EndDate $endDate.ToUniversalTime() -VaultId $targetVault.ID
+$disks = ("1","2") # Example LUNs of the disks to restore; replace with your own
+Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] -StorageAccountName "DestAccount" -StorageAccountResourceGroupName "DestRG" -TargetResourceGroupName "DestRGforManagedDisks" -VaultId $targetVault.ID -RestoreDiskList $disks
``` ### Restore only OS disk with PowerShell
cdn Cdn Create A Storage Account With Cdn https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-create-a-storage-account-with-cdn.md
Title: Quickstart - Integrate an Azure Storage account with Azure CDN
-description: Learn how to use the Azure Content Delivery Network (CDN) to deliver high-bandwidth content by caching blobs from Azure Storage.
+ Title: 'Quickstart: Integrate an Azure Storage account with Azure CDN'
+description: In this quickstart, learn how to use the Azure Content Delivery Network (CDN) to deliver high-bandwidth content by caching blobs from Azure Storage.
-- - Last updated 04/30/2020
In the preceding steps, you created a CDN profile and an endpoint in a resource
## Next steps
-> [!div class="nextstepaction"]
-> [Create an Azure CDN profile and endpoint](cdn-create-new-endpoint.md)
- > [!div class="nextstepaction"] > [Tutorial: Use CDN to server static content from a web app](cdn-add-to-web-app.md)
cdn Cdn Create New Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-create-new-endpoint.md
Title: Quickstart - Create an Azure CDN profile and endpoint description: This quickstart shows how to enable Azure CDN by creating a new CDN profile and CDN endpoint.- -- ms.assetid: 4ca51224-5423-419b-98cf-89860ef516d2 - Last updated 04/30/2020
In the preceding steps, you created a CDN profile and an endpoint in a resource
> [!div class="nextstepaction"] > [Tutorial: Use CDN to serve static content from a web app](cdn-add-to-web-app.md)-
-> [!div class="nextstepaction"]
-> [Tutorial: Add a custom domain to your Azure CDN endpoint](cdn-map-content-to-custom-domain.md)
cdn Cdn Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-overview.md
Azure CDN offers the following key features:
For a complete list of features that each Azure CDN product supports, see [Compare Azure CDN product features](cdn-features.md). ## Next steps+ - To get started with CDN, see [Create an Azure CDN profile and endpoint](cdn-create-new-endpoint.md). - Manage your CDN endpoints through the [Microsoft Azure portal](https://portal.azure.com) or with [PowerShell](cdn-manage-powershell.md). - Learn how to automate Azure CDN with [.NET](cdn-app-dev-net.md) or [Node.js](cdn-app-dev-node.md).-- To see Azure CDN in action, watch the [Azure CDN videos](https://azure.microsoft.com/resources/videos/index/?services=cdn&sort=newest).-- For information about the latest Azure CDN features, see [Azure CDN blog](https://azure.microsoft.com/blog/tag/azure-cdn/).
cdn Cdn Standard Rules Engine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-standard-rules-engine.md
Title: Use a rules engine to enforce HTTPS in Standard Azure CDN | Microsoft Doc
description: Use the rules engine for Microsoft Standard Azure Content Delivery Network (Azure CDN) to customize how Azure CDN handles HTTP requests, including blocking the delivery of certain types of content, defining a caching policy, and modifying HTTP headers. In this article, learn how to create a rule to redirect users to HTTPS. - Last updated 11/01/2019
cdn Cdn Storage Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-storage-custom-domain-https.md
Title: Access storage blobs using an Azure CDN custom domain over HTTPS
+ Title: 'Tutorial: Access storage blobs using an Azure CDN custom domain over HTTPS'
description: Learn how to add an Azure CDN custom domain and enable HTTPS on that domain for your custom blob storage endpoint. documentationcenter: '' -- - Last updated 06/15/2018
cdn Create Profile Endpoint Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/create-profile-endpoint-template.md
Title: 'Quickstart: Create a profile and endpoint - Resource Manager template'
-description: Learn how to create an Azure Content Delivery Network profile and endpoint a Resource Manager template
+description: In this quickstart, learn how to create an Azure Content Delivery Network profile and endpoint by using a Resource Manager template
When no longer needed, you can use the [az group delete](/cli/azure/group#az-gro
### PowerShell
-When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup?view=latest) command to remove the resource group and all resources contained within.
+When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group and all resources contained within.
```azurepowershell-interactive Remove-AzResourceGroup -Name myResourceGroupCDN
In this quickstart, you created a:
To learn more about Azure CDN and Azure Resource Manager, continue to the articles below.
-* Read an [Overview of Azure CDN](cdn-overview.md)
-* Learn more about [Azure Resource Manager](../azure-resource-manager/management/overview.md)
+> [!div class="nextstepaction"]
+> [Tutorial: Use CDN to serve static content from a web app](cdn-add-to-web-app.md)
cloud-services-extended-support Schema Csdef File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/schema-csdef-file.md
The following table describes the attributes of the `ServiceDefinition` element.
| name |Required. The name of the service. The name must be unique within the service account.| | topologyChangeDiscovery | Optional. Specifies the type of topology change notification. Possible values are:<br /><br /> - `Blast` - Sends the update as soon as possible to all role instances. If you choose this option, the role should be able to handle the topology update without being restarted.<br />- `UpgradeDomainWalk` - Sends the update to each role instance in a sequential manner after the previous instance has successfully accepted the update.| | schemaVersion | Optional. Specifies the version of the service definition schema. The schema version allows Visual Studio to select the correct SDK tools to use for schema validation if more than one version of the SDK is installed side-by-side.|
-| upgradeDomainCount | Optional. Specifies the number of upgrade domains across which roles in this service are allocated. Role instances are allocated to an upgrade domain when the service is deployed. For more information, see [Update a Cloud Service role or deployment](sample-update-cloud-service.md) and [Manage the availability of virtual machines](../virtual-machines/manage-availability.md) You can specify up to 20 upgrade domains. If not specified, the default number of upgrade domains is 5.|
+| upgradeDomainCount | Optional. Specifies the number of upgrade domains across which roles in this service are allocated. Role instances are allocated to an upgrade domain when the service is deployed. For more information, see [Update a Cloud Service role or deployment](sample-update-cloud-service.md) and [Manage the availability of virtual machines](../virtual-machines/availability.md) You can specify up to 20 upgrade domains. If not specified, the default number of upgrade domains is 5.|
## See also
cloud-services Schema Csdef File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-file.md
The following table describes the attributes of the `ServiceDefinition` element.
| name |Required. The name of the service. The name must be unique within the service account.| | topologyChangeDiscovery | Optional. Specifies the type of topology change notification. Possible values are:<br /><br /> - `Blast` - Sends the update as soon as possible to all role instances. If you choose this option, the role should be able to handle the topology update without being restarted.<br />- `UpgradeDomainWalk` - Sends the update to each role instance in a sequential manner after the previous instance has successfully accepted the update.| | schemaVersion | Optional. Specifies the version of the service definition schema. The schema version allows Visual Studio to select the correct SDK tools to use for schema validation if more than one version of the SDK is installed side-by-side.|
-| upgradeDomainCount | Optional. Specifies the number of upgrade domains across which roles in this service are allocated. Role instances are allocated to an upgrade domain when the service is deployed. For more information, see [Update a cloud service role or deployment](cloud-services-how-to-manage-portal.md#update-a-cloud-service-role-or-deployment), [Manage the availability of virtual machines](../virtual-machines/manage-availability.md) and [What is a Cloud Service Model](./cloud-services-model-and-package.md).<br /><br /> You can specify up to 20 upgrade domains. If not specified, the default number of upgrade domains is 5.|
+| upgradeDomainCount | Optional. Specifies the number of upgrade domains across which roles in this service are allocated. Role instances are allocated to an upgrade domain when the service is deployed. For more information, see [Update a cloud service role or deployment](cloud-services-how-to-manage-portal.md#update-a-cloud-service-role-or-deployment), [Manage the availability of virtual machines](../virtual-machines/availability.md) and [What is a Cloud Service Model](./cloud-services-model-and-package.md).<br /><br /> You can specify up to 20 upgrade domains. If not specified, the default number of upgrade domains is 5.|
cloud-shell Cloud Shell Windows Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/cloud-shell-windows-users.md
Under `$HOME/.config/PowerShell`, you can create your profile files - `profile.p
## What's new in PowerShell Core 6
-For more information about what is new in PowerShell Core 6, reference the [PowerShell docs](/powershell/scripting/whats-new/what-s-new-in-powershell-70?view=powershell-7.1) and the [Getting Started with PowerShell Core](https://blogs.msdn.microsoft.com/powershell/2017/06/09/getting-started-with-powershell-core-on-windows-mac-and-linux/) blog post.
+For more information about what is new in PowerShell Core 6, reference the [PowerShell docs](/powershell/scripting/whats-new/what-s-new-in-powershell-70) and the [Getting Started with PowerShell Core](https://blogs.msdn.microsoft.com/powershell/2017/06/09/getting-started-with-powershell-core-on-windows-mac-and-linux/) blog post.
cloudfoundry Cloudfoundry Deploy Your First App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloudfoundry/cloudfoundry-deploy-your-first-app.md
Title: Deploy your first app to Cloud Foundry on Microsoft Azure description: Deploy an application to Cloud Foundry on Azure -+ Last updated 06/14/2017
cloudfoundry Cloudfoundry Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloudfoundry/cloudfoundry-get-started.md
Title: Getting Started with Cloud Foundry on Microsoft Azure description: Run OSS or Pivotal Cloud Foundry on Microsoft Azure -+ Last updated 01/19/2017
cloudfoundry Cloudfoundry Oms Nozzle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloudfoundry/cloudfoundry-oms-nozzle.md
Title: Deploy Azure Log Analytics Nozzle for Cloud Foundry monitoring description: Step-by-step guidance on deploying the Cloud Foundry loggregator Nozzle for Azure Log Analytics. Use the Nozzle to monitor the Cloud Foundry system health and performance metrics.-+ tags: Cloud-Foundry ms.assetid: 00c76c49-3738-494b-b70d-344d8efc0853
cloudfoundry Create Cloud Foundry On Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloudfoundry/create-cloud-foundry-on-azure.md
documentationcenter: CloudFoundry
editor: ruyakubu- ms.assetid: Last updated 09/13/2018
cloudfoundry How Cloud Foundry Integrates With Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloudfoundry/how-cloud-foundry-integrates-with-azure.md
documentationcenter: ''
tags: Cloud-Foundry ms.assetid: 00c76c49-3738-494b-b70d-344d8efc0853-+ vm-linux
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/language-support.md
Computer Vision's OCR APIs support several languages. They do not require you to
|Italian | `it` |✔ |✔ |✔ | |Japanese | `ja` |✔ | |✔ | |Javanese | `jv` | | |✔ |
+|K'iche' | `quc` | | |✔ |
|Kabuverdianu | `kea` | | |✔ | |Kachin (Latin) | `kac` | | |✔ | |Kara-Kalpak | `kaa` | | |✔ | |Kashubian | `csb` | | |✔ | |Khasi | `kha` | | |✔ | |Korean | `ko` |✔ | |✔ |
-|K'iche' | `quc` | | |✔ |
|Kurdish (Latin) | `kur` | | |✔ | |Luxembourgish | `lb` | | |✔ | |Malay (Latin) | `ms` | | |✔ |
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
When the Edge compute role is set up on the Edge device, it creates two devices:
3. Assign a variable to the device IP address. ```powershell
- $ip = "" Replace with the IP address of your device.
+ $ip = "<device-IP-address>"
``` 4. To add the IP address of your device to the client's trusted hosts list, use the following command:
Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters
```bash curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
+```
+```bash
sudo az login
-sudo az account set --subscription <name or ID of Azure Subscription>
-sudo az group create --name "test-resource-group" --location "WestUS"
-
-sudo az iot hub create --name "test-iot-hub-123" --sku S1 --resource-group "test-resource-group"
-
-sudo az iot hub device-identity create --hub-name "test-iot-hub-123" --device-id "my-edge-device" --edge-enabled
+```
+```bash
+sudo az account set --subscription "<name or ID of Azure Subscription>"
+```
+```bash
+sudo az group create --name "<resource-group-name>" --location "<your-region>"
+```
+See [Region Support](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services) for available regions.
+```bash
+sudo az iot hub create --name "<iothub-group-name>" --sku S1 --resource-group "<resource-group-name>"
+```
+```bash
+sudo az iot hub device-identity create --hub-name "<iothub-name>" --device-id "<device-name>" --edge-enabled
``` You will need to install [Azure IoT Edge](../../iot-edge/how-to-install-iot-edge.md) version 1.0.9. Follow these steps to download the correct version:
Install the Microsoft GPG public key.
```bash curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
+```
+```bash
sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/ ```
Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters
```bash curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
+```
+```bash
sudo az login
-sudo az account set --subscription <name or ID of Azure Subscription>
-sudo az group create --name "test-resource-group" --location "WestUS"
-
-sudo az iot hub create --name "test-iot-hub-123" --sku S1 --resource-group "test-resource-group"
-
-sudo az iot hub device-identity create --hub-name "test-iot-hub-123" --device-id "my-edge-device" --edge-enabled
+```
+```bash
+sudo az account set --subscription "<name or ID of Azure Subscription>"
+```
+```bash
+sudo az group create --name "<resource-group-name>" --location "<your-region>"
+```
+See [Region Support](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services) for available regions.
+```bash
+sudo az iot hub create --name "<iothub-group-name>" --sku S1 --resource-group "<resource-group-name>"
+```
+```bash
+sudo az iot hub device-identity create --hub-name "<iothub-name>" --device-id "<device-name>" --edge-enabled
``` You will need to install [Azure IoT Edge](../../iot-edge/how-to-install-iot-edge.md) version 1.0.9. Follow these steps to download the correct version:
Install the Microsoft GPG public key.
```bash curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
+```
+```bash
sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/ ```
Once you update the Deployment manifest for [Azure Stack Edge devices](https://g
```azurecli sudo az login sudo az extension add --name azure-iot
-sudo az iot edge set-modules --hub-name "<IoT Hub name>" --device-id "<IoT Edge device name>" --content DeploymentManifest.json --subscription "<subscriptionId>"
+sudo az iot edge set-modules --hub-name "<iothub-name>" --device-id "<device-name>" --content DeploymentManifest.json --subscription "<name or ID of Azure Subscription>"
``` |Parameter |Description |
cognitive-services Copy Move Projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/copy-move-projects.md
If your app or business depends on the use of a Custom Vision project, we recomm
- Two Azure Custom Vision resources. If you don't have them, go to the Azure portal and [create a new Custom Vision resource](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_customvision#create/Microsoft.CognitiveServicesCustomVision?azure-portal=true). - The training keys and endpoint URLs of your Custom Vision resources. You can find these values on the resource's **Overview** tab on the Azure portal. - A created Custom Vision project. See [Build a classifier](./getting-started-build-a-classifier.md) for instructions on how to do this.
-* [PowerShell version 6.0+](https://docs.microsoft.com/powershell/scripting/install/installing-powershell-core-on-windows?view=powershell-7.1), or a similar command-line utility.
+* [PowerShell version 6.0+](https://docs.microsoft.com/powershell/scripting/install/installing-powershell-core-on-windows), or a similar command-line utility.
## Process overview
cognitive-services Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/storage-integration.md
This guide shows you how to use these REST APIs with cURL. You can also use an H
- A Custom Vision resource in Azure. If you don't have one, go to the Azure portal and [create a new Custom Vision resource](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_customvision#create/Microsoft.CognitiveServicesCustomVision?azure-portal=true). This feature doesn't currently support the Cognitive Service resource (all in one key). - An Azure Storage account with a blob container. Follow [Exercises 1 of the Azure Storage Lab](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise1) if you need help with this step.
-* [PowerShell version 6.0+](https://docs.microsoft.com/powershell/scripting/install/installing-powershell-core-on-windows?view=powershell-7.1), or a similar command-line application.
+* [PowerShell version 6.0+](https://docs.microsoft.com/powershell/scripting/install/installing-powershell-core-on-windows), or a similar command-line application.
## Set up Azure storage integration
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/QuickStarts/client-libraries.md
keywords: face search by image, facial recognition search, facial recognition, f
[!INCLUDE [cURL quickstart](../includes/quickstarts/rest-api.md)] ::: zone-end---
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
**New features** - **All**: New 48KHz output formats available for the private preview of custom neural voice through the TTS speech synthesis API: Audio48Khz192KBitRateMonoMp3, audio-48khz-192kbitrate-mono-mp3, Audio48Khz96KBitRateMonoMp3, audio-48khz-96kbitrate-mono-mp3, Raw48Khz16BitMonoPcm, raw-48khz-16bit-mono-pcm, Riff48Khz16BitMonoPcm, riff-48khz-16bit-mono-pcm.-- **All**: Custom voice is also easier to use. Added support for setting custom voice via `EndpointId` ([C++](/cpp/cognitive-services/speech/speechconfig#setendpointid), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.endpointid#Microsoft_CognitiveServices_Speech_SpeechConfig_EndpointId), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setendpointid#com_microsoft_cognitiveservices_speech_SpeechConfig_setEndpointId_String_), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#endpointId), [Objective-C](/objectivec/cognitive-services/speech/spxspeechconfiguration#endpointid), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig?view=azure-python#endpoint-id)). Before this change, custom voice users needed to set the endpoint URL via the `FromEndpoint` method. Now customers can use the `FromSubscription` method just like public voices, and then provide the deployment id by setting `EndpointId`. This simplifies setting up custom voices.
+- **All**: Custom voice is also easier to use. Added support for setting custom voice via `EndpointId` ([C++](/cpp/cognitive-services/speech/speechconfig#setendpointid), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.endpointid#Microsoft_CognitiveServices_Speech_SpeechConfig_EndpointId), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setendpointid#com_microsoft_cognitiveservices_speech_SpeechConfig_setEndpointId_String_), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#endpointId), [Objective-C](/objectivec/cognitive-services/speech/spxspeechconfiguration#endpointid), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig#endpoint-id)). Before this change, custom voice users needed to set the endpoint URL via the `FromEndpoint` method. Now customers can use the `FromSubscription` method just like public voices, and then provide the deployment id by setting `EndpointId`. This simplifies setting up custom voices.
- **C++/C#/Jav#add-a-languageunderstandingmodel-and-intents). - **C++/C#/Java**: Make your voice assistant or bot stop listening immediately. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector)) now has a `StopListeningAsync()` method to accompany `ListenOnceAsync()`. This will immediately stop audio capture and gracefully wait for a result, making it perfect for use with "stop now" button-press scenarios. - **C++/C#/Java/JavaScript**: Make your voice assistant or bot react better to underlying system errors. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/dialogserviceconnector)) now has a new `TurnStatusReceived` event handler. These optional events correspond to every [`ITurnContext`](/dotnet/api/microsoft.bot.builder.iturncontext?view=botbuilder-dotnet-stable) resolution on the Bot and will report turn execution failures when they happen, e.g. as a result of an unhandled exception, timeout, or network drop between Direct Line Speech and the bot. `TurnStatusReceived` makes it easier to respond to failure conditions. For example, if a bot takes too long on a backend database query (e.g. looking up a product), `TurnStatusReceived` allows the client to know to reprompt with "sorry, I didn't quite get that, could you please try again" or something similar.
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
speechConfig.setProperty(
# [Python](#tab/python)
-For more information, see <a href="https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig?view=azure-python#set-property-by-name-property-name--str--value--str-" target="_blank"> `set_property_by_name` </a>.
+For more information, see <a href="https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig#set-property-by-name-property-name--str--value--str-" target="_blank"> `set_property_by_name` </a>.
```python speech_config.set_property_by_name(
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/spx-basics.md
spx synthesize --text "Testing synthesis using the Speech CLI" --speakers
You can also save the synthesized output to a file. In this example, we'll create a file named `my-sample.wav` in the directory where the command is run. ```console
-spx synthesize --text "We hope that you enjoy using the Speech CLI." --audio output my-sample.wav
+spx synthesize --text "Enjoy using the Speech CLI." --audio output my-sample.wav
``` These examples presume that you're testing in English. However, we support speech synthesis in many languages. You can pull down a full list of voices with this command, or by visiting the [language support page](./language-support.md).
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/privacy.md
Azure Communication Services will feed into Azure Monitor logging data for under
## Additional resources -- [Azure Data Subject Requests for the GDPR and CCPA](/microsoft-365/compliance/gdpr-dsr-azure?preserve-view=true&view=o365-worldwide)
+- [Azure Data Subject Requests for the GDPR and CCPA](/microsoft-365/compliance/gdpr-dsr-azure)
- [Microsoft Trust Center](https://www.microsoft.com/trust-center/privacy/data-location) - [Azure Interactive Map - Where is my customer data?](https://azuredatacentermap.azurewebsites.net/)
confidential-computing Application Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/application-development.md
- Title: Azure confidential computing development tools
- description: Use tools and libraries to develop applications for confidential computing
-
-
-
-
-
- Last updated 09/22/2020
-
+ Title: Azure confidential computing development tools
+description: Use tools and libraries to develop applications for confidential computing
+++++ Last updated : 09/22/2020+ # Application development on Intel SGX
confidential-computing Attestation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/attestation.md
description: Learn how you can use attestation to verify that your confidential
-+ Last updated 9/22/2020
confidential-computing Confidential Computing Enclaves https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-computing-enclaves.md
description: Learn about Intel SGX hardware to enable your confidential computin
-+ Last updated 9/3/2020
confidential-computing Confidential Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-containers.md
- Title: Confidential containers on Azure Kubernetes Service (AKS)
- description: Learn about unmodified container support on confidential containers.
-
-
-
- Last updated 2/11/2020
-
-
+ Title: Confidential containers on Azure Kubernetes Service (AKS)
+description: Learn about unmodified container support on confidential containers.
+++ Last updated : 2/11/2020++ # Confidential Containers
confidential-computing Confidential Nodes Aks Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-overview.md
- Title: Confidential computing nodes on Azure Kubernetes Service (AKS)
- description: Confidential computing nodes on AKS
-
-
-
-
- Last updated 2/08/2021
-
+ Title: Confidential computing nodes on Azure Kubernetes Service (AKS)
+description: Confidential computing nodes on AKS
++++ Last updated : 2/08/2021+
confidential-computing Confidential Nodes Out Of Proc Attestation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-out-of-proc-attestation.md
- Title: Out-of-proc attestation support with Intel SGX quote helper Daemonset on Azure (preview)
- description: DaemonSet for generating the quote outside of the SGX application process. This article explains how the out-of-proc attestation facility is provided for confidential workloads running inside a container.
-
-
-
- Last updated 2/12/2021
-
+ Title: Out-of-proc attestation support with Intel SGX quote helper Daemonset on Azure (preview)
+description: DaemonSet for generating the quote outside of the SGX application process. This article explains how the out-of-proc attestation facility is provided for confidential workloads running inside a container.
+++ Last updated : 2/12/2021+ # Platform Software Management with SGX quote helper daemon set (preview)
confidential-computing Enclave Aware Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/enclave-aware-containers.md
- Title: Enclave aware containers on Azure
- description: enclave ready application containers support on Azure Kubernetes Service (AKS)
-
-
-
- Last updated 9/22/2020
-
+ Title: Enclave aware containers on Azure
+description: enclave ready application containers support on Azure Kubernetes Service (AKS)
+++ Last updated : 9/22/2020+ # Enclave Aware Containers
confidential-computing Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/faq.md
-+ Last updated 4/17/2020
confidential-computing How To Fortanix Confidential Computing Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/how-to-fortanix-confidential-computing-manager.md
Title: Fortanix Confidential Computing Manager in an Azure managed application
description: Learn how to deploy Fortanix Confidential Computing Manager (CCM) in a managed application in the Azure portal. -+ Last updated 02/03/2021
confidential-computing How To Fortanix Enclave Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/how-to-fortanix-enclave-manager.md
description: Learn how to use Fortanix Confidential Computing Manager to convert
-+ Last updated 8/12/2020
confidential-computing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/overview.md
- Title: Azure Confidential Computing Overview
- description: Overview of Azure Confidential (ACC) Computing
-
-
-
-
-
- Last updated 09/22/2020
-
+ Title: Azure Confidential Computing Overview
+description: Overview of Azure Confidential (ACC) Computing
+++++ Last updated : 09/22/2020+ # Confidential computing on Azure
confidential-computing Quick Create Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/quick-create-marketplace.md
Title: Quickstart - Create an Azure confidential computing virtual machine with
description: Get started with your deployments by learning how to quickly create a confidential computing virtual machine with Marketplace. -+ Last updated 04/06/2020
confidential-computing Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/quick-create-portal.md
Title: Quickstart - Create an Azure confidential computing virtual machine in th
description: Get started with your deployments by learning how to quickly create a confidential computing virtual machine in the Azure portal. -+ Last updated 04/23/2020
confidential-computing Use Cases Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/use-cases-scenarios.md
- Title: Common Azure confidential computing scenarios and use cases
- description: Understand how to use confidential computing in your scenario.
-
-
-
-
- Last updated 9/22/2020
-
+ Title: Common Azure confidential computing scenarios and use cases
+description: Understand how to use confidential computing in your scenario.
+++++ Last updated : 9/22/2020+ # Common scenarios for Azure confidential computing
confidential-computing Virtual Machine Solutions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/virtual-machine-solutions.md
Title: Azure confidential computing solutions on virtual machines
description: Learn about Azure confidential computing solutions on virtual machines. -+ Last updated 04/06/2020
Follow a quickstart tutorial to deploy a DCsv2-Series virtual machine in less th
When using virtual machines in Azure, you're responsible for implementing a high availability and disaster recovery solution to avoid any downtime.
-Azure confidential computing doesn't support zone-redundancy via Availability Zones at this time. For the highest availability and redundancy for confidential computing, use [Availability Sets](../virtual-machines/manage-availability.md#configure-multiple-virtual-machines-in-an-availability-set-for-redundancy). Because of hardware restrictions, Availability Sets for confidential computing instances can only have a maximum of 10 update domains.
+Azure confidential computing doesn't support zone-redundancy via Availability Zones at this time. For the highest availability and redundancy for confidential computing, use [Availability Sets](../virtual-machines/availability-set-overview.md). Because of hardware restrictions, Availability Sets for confidential computing instances can only have a maximum of 10 update domains.
## Deployment with Azure Resource Manager (ARM) Template
cosmos-db Attachments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/attachments.md
# Azure Cosmos DB Attachments Azure Cosmos DB attachments are special items that contain references to, and associated metadata for, an external blob or media file.
Azure Cosmos DB's managed attachments are distinct from its support for standa
- Managed attachments are limited to 2 GB of storage per database account. - Managed attachments aren't compatible with Azure Cosmos DB's global distribution, and they aren't replicated across regions.
+> [!NOTE]
+> Azure Cosmos DB API for MongoDB version 3.2 utilizes managed attachments for GridFS, and these are subject to the same limitations.
+>
+> We recommend developers using the MongoDB GridFS feature set to upgrade to Azure Cosmos DB API for MongoDB version 3.6 or higher, which is decoupled from attachments and provides a better experience. Alternatively, developers using the MongoDB GridFS feature set should also consider using Azure Blob Storage - which is purpose-built for storing blob content and offers expanded functionality at lower cost compared to GridFS.
+ ## Migrating Attachments to Azure Blob Storage We recommend migrating Azure Cosmos DB attachments to Azure Blob Storage by following these steps:
namespace attachments
- Get started with [Azure Blob storage](../storage/blobs/storage-quickstart-blobs-dotnet.md) - Get references for using attachments via [Azure Cosmos DB's .NET SDK v2](/dotnet/api/microsoft.azure.documents.attachment) - Get references for using attachments via [Azure Cosmos DB's Java SDK v2](/java/api/com.microsoft.azure.documentdb.attachment)-- Get references for using attachments via [Azure Cosmos DB's REST API](/rest/api/cosmos-db/attachments)
+- Get references for using attachments via [Azure Cosmos DB's REST API](/rest/api/cosmos-db/attachments)
cosmos-db Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/certificate-based-authentication.md
In this step, you will install the Azure AD PowerShell module. This module is re
Set-AzContext $context ```
-1. Install and import the [AzureAD](/powershell/module/azuread/?view=azureadps-2.0&preserve-view=true) module
+1. Install and import the [AzureAD](/powershell/module/azuread/) module
```powershell Install-Module AzureAD
cosmos-db Configure Periodic Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-periodic-backup-restore.md
If you provision throughput at the database level, the backup and restore proces
Principals who are part of the role [CosmosdbBackupOperator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator), owner, or contributor are allowed to request a restore or change the retention period. ## Understanding Costs of extra backups
-Two backups are provided free and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/en-us/pricing/details/cosmos-db/). For example if Backup Retention is configured to 240 hrs that is, 10 days and Backup Interval to 24 hrs. This implies 10 copies of the backup data. Assuming 1 TB of data in West US 2, the would be 1000 * 0.12 ~ $ 120 for backup storage in given month.
+Two backups are provided free and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/en-us/pricing/details/cosmos-db/). For example, if the backup retention is configured to 240 hours (that is, 10 days) and the backup interval to 24 hours, this implies 10 copies of the backup data. Assuming 1 TB of data in West US 2, the cost would be 0.12 * 1000 * 8, about $960, for backup storage in a given month.
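The retention arithmetic can be sketched in Python (a minimal illustration assuming 1,000 GB of data and a $0.12 per GB per month backup storage rate; check the pricing page for your region's actual rate):

```python
# Number of backup copies retained = retention window / backup interval.
retention_hours = 240        # 10 days
interval_hours = 24
copies = retention_hours // interval_hours      # 10 copies of the data

free_copies = 2                                 # two backups are free
billable_copies = copies - free_copies          # 8 billable copies

data_gb = 1000                                  # ~1 TB of data
rate_per_gb_month = 0.12                        # assumed West US 2 rate

monthly_cost = round(billable_copies * data_gb * rate_per_gb_month, 2)
print(monthly_cost)  # 960.0
```

Shortening the retention window or lengthening the backup interval reduces the number of billable copies, and with it the monthly backup storage charge.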
## Options to manage your own backups
It is advised that you delete the container or database immediately after migrat
* To make a restore request, contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). * Configure and manage continuous backup using [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md), [CLI](continuous-backup-restore-command-line.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
-* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
+* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
cosmos-db Create Mongodb Rust https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-mongodb-rust.md
fn list_todos(self, status_filter: &str) {
} ```
-A `todo` status can be updated (from `pending` to `completed` or vice versa) using. The `todo` is converted to a
+A `todo` status can be updated (from `pending` to `completed` or vice versa). The `todo` is converted to a
[bson::oid::ObjectId](https://docs.rs/bson/1.1.0/bson/oid/struct.ObjectId.html), which is then used by the [Collection.update_one](https://docs.rs/mongodb/1.1.1/mongodb/struct.Collection.html#method.update_one) method to locate the document that needs to be updated.
fn delete_todo(self, todo_id: &str) {
In this quickstart, you learned how to create an Azure Cosmos DB MongoDB API account using the Azure Cloud Shell, and create and run a Rust command-line app to manage `todo`s. You can now import additional data to your Azure Cosmos DB account. > [!div class="nextstepaction"]
-> [Import MongoDB data into Azure Cosmos DB](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
+> [Import MongoDB data into Azure Cosmos DB](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Local Emulator Export Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator-export-ssl-certificates.md
You need to export the emulator certificate to successfully use the emulator end
When running Java applications or MongoDB applications that use a Java-based client, it is easier to install the certificate into the Java default certificate store than passing the `-Djavax.net.ssl.trustStore=<keystore> -Djavax.net.ssl.trustStorePassword="<password>"` flags. For example, the included Java Demo application (`https://localhost:8081/_explorer/index.html`) depends on the default certificate store.
-Follow the instructions in the [Adding a Certificate to the Java Certificates Store](/azure/developer/java/sdk/java-sdk-add-certificate-ca-store) to import the X.509 certificate into the default Java certificate store. Keep in mind you will be working in the *%JAVA_HOME%* directory when running keytool. After the certificate is imported into the certificate store, clients for SQL and Azure Cosmos DB's API for MongoDB will be able to connect to the Azure Cosmos DB Emulator.
+Follow the instructions in the [Adding a Certificate to the Java Certificates Store](https://docs.oracle.com/cd/E54932_01/doc.705/e54936/cssg_create_ssl_cert.htm) to import the X.509 certificate into the default Java certificate store. Keep in mind you will be working in the *%JAVA_HOME%* directory when running keytool. After the certificate is imported into the certificate store, clients for SQL and Azure Cosmos DB's API for MongoDB will be able to connect to the Azure Cosmos DB Emulator.
Alternatively you can run the following bash script to import the certificate:
cosmos-db Mongodb Feature Support 36 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-feature-support-36.md
Azure Cosmos DB's API for MongoDB supports the following database commands:
| $limit | Yes | | $listLocalSessions | No | | $listSessions | No |
-| $lookup | Yes |
+| $lookup | Partial |
| $match | Yes | | $out | Yes | | $project | Yes |
Azure Cosmos DB's API for MongoDB supports the following database commands:
| $sortByCount | Yes | | $unwind | Yes |
+> [!NOTE]
+> `$lookup` does not yet support the [uncorrelated subqueries](https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/#join-conditions-and-uncorrelated-sub-queries) feature introduced in server version 3.6. You will receive an error with a message containing `let is not supported` if you attempt to use the `$lookup` operator with `let` and `pipeline` fields.
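For contrast, the supported correlated form of `$lookup` is a plain equality join on `localField`/`foreignField`, with the matches embedded under the `as` field. A pure-Python sketch of those semantics, using hypothetical `orders`/`inventory` sample data (not from this article):

```python
# Sketch of what the supported, correlated form of $lookup computes:
# for each local document, collect the foreign documents whose
# foreign_field equals the local document's local_field, and embed
# them under as_field.

def lookup(local_docs, foreign_docs, local_field, foreign_field, as_field):
    joined = []
    for doc in local_docs:
        matches = [f for f in foreign_docs
                   if f.get(foreign_field) == doc.get(local_field)]
        joined.append({**doc, as_field: matches})
    return joined

orders = [{"_id": 1, "item": "abc"}, {"_id": 2, "item": "xyz"}]
inventory = [{"sku": "abc", "qty": 120}, {"sku": "def", "qty": 80}]

result = lookup(orders, inventory, "item", "sku", "inventory_docs")
print(result[0]["inventory_docs"])  # [{'sku': 'abc', 'qty': 120}]
print(result[1]["inventory_docs"])  # []
```

The unsupported uncorrelated form instead runs an arbitrary sub-pipeline parameterized by `let` variables rather than a single field-equality condition, which is why pipelines using `let` and `pipeline` fail with `let is not supported`.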
+ ### Boolean expressions | Command | Supported |
cosmos-db Mongodb Post Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-post-migration.md
Previously updated : 03/20/2020 Last updated : 02/14/2021
cosmos-db Mongodb Pre Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-pre-migration.md
Title: Pre-migration steps for data migration to Azure Cosmos DB's API for MongoDB description: This doc provides an overview of the prerequisites for a data migration from MongoDB to Cosmos DB.-+ Last updated 03/02/2021-+ # Pre-migration steps for data migrations from MongoDB to Azure Cosmos DB's API for MongoDB [!INCLUDE[appliesto-mongodb-api](includes/appliesto-mongodb-api.md)]
+> [!IMPORTANT]
+> This MongoDB pre-migration guide is the first in a series on migrating MongoDB to Azure Cosmos DB Mongo API at scale. Customers licensing and deploying MongoDB on self-managed infrastructure may want to reduce and manage the cost of their data estate by migrating to a managed cloud service like Azure Cosmos DB with pay-as-you-go pricing and elastic scalability. The goal of this series is to guide the customer through the migration process:
+>
+> 1. [Pre-migration](mongodb-pre-migration.md) - inventory the existing MongoDB data estate, plan migration, and choose the appropriate migration tool(s).
+> 2. Execution - migrate from MongoDB to Azure Cosmos DB using the provided [tutorials]().
+> 3. [Post-migration](mongodb-post-migration.md) - update and optimize existing applications to execute against your new Azure Cosmos DB data estate.
+>
+
+A solid pre-migration plan can have an outsize impact on the timeliness and success of your team's migration. A good analogy for pre-migration is starting a new project - you may begin by defining requirements, then outline the tasks involved and also prioritize the biggest tasks to be tackled first. This helps to make your project schedule predictable - but of course, unanticipated requirements can arise and complicate the project schedule. Coming back to migration - building a comprehensive execution plan in the pre-migration phase minimizes the chance that you will discover unexpected migration tasks late in the process, saving you time during migration and helping you to ensure goals are met.
+ Before you migrate your data from MongoDB (either on-premises or in the cloud) to Azure Cosmos DB's API for MongoDB, you should: 1. [Read the key considerations about using Azure Cosmos DB's API for MongoDB](#considerations)
cosmos-db Powershell Samples Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/powershell-samples-cassandra.md
# Azure PowerShell samples for Azure Cosmos DB Cassandra API [!INCLUDE[appliesto-cassandra-api](includes/appliesto-cassandra-api.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps?preserve-view=true&view=azps-5.4.0) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples
cosmos-db Powershell Samples Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/powershell-samples-gremlin.md
# Azure PowerShell samples for Azure Cosmos DB Gremlin API [!INCLUDE[appliesto-gremlin-api](includes/appliesto-gremlin-api.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps?preserve-view=true&view=azps-5.4.0) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples
cosmos-db Powershell Samples Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/powershell-samples-mongodb.md
# Azure PowerShell samples for Azure Cosmos DB API for MongoDB [!INCLUDE[appliesto-mongodb-api](includes/appliesto-mongodb-api.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps?preserve-view=true&view=azps-5.4.0) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples
cosmos-db Powershell Samples Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/powershell-samples-table.md
# Azure PowerShell samples for Azure Cosmos DB Table API [!INCLUDE[appliesto-table-api](includes/appliesto-table-api.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps?preserve-view=true&view=azps-5.4.0) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
## Common Samples
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/powershell-samples.md
# Azure PowerShell samples for Azure Cosmos DB Core (SQL) API [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
-The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps?preserve-view=true&view=azps-5.4.0) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
+The following table includes links to commonly used Azure PowerShell scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB PowerShell cmdlets are available in the [Azure PowerShell Reference](/powershell/module/az.cosmosdb). The `Az.CosmosDB` module is now part of the `Az` module. [Download and install](/powershell/azure/install-az-ps) the latest version of Az module to get the Azure Cosmos DB cmdlets. You can also get the latest version from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az/5.4.0). You can also fork these PowerShell samples for Cosmos DB from our GitHub repository, [Cosmos DB PowerShell Samples on GitHub](https://github.com/Azure/azure-docs-powershell-samples/tree/master/cosmosdb).
For PowerShell cmdlets for other APIs see [PowerShell Samples for Cassandra](powershell-samples-cassandra.md), [PowerShell Samples for MongoDB API](powershell-samples-mongodb.md), [PowerShell Samples for Gremlin](powershell-samples-gremlin.md), [PowerShell Samples for Table](powershell-samples-table.md)
cosmos-db Sql Api Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-python.md
| | | ||| |**Download SDK**|[PyPI](https://pypi.org/project/azure-cosmos)|
-|**API documentation**|[Python API reference documentation](/python/api/azure-cosmos/?preserve-view=true&view=azure-python)|
+|**API documentation**|[Python API reference documentation](/python/api/azure-cosmos/)|
|**SDK installation instructions**|[Python SDK installation instructions](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos)| |**Get started**|[Get started with the Python SDK](create-sql-api-python.md)| |**Current supported platform**|[Python 2.7](https://www.python.org/downloads/) and [Python 3.5.3+](https://www.python.org/downloads/)|
cosmos-db Sql Query Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-getting-started.md
Here are some examples of how to do **Point reads** with each SDK:
- [.NET SDK](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) - [Java SDK](/java/api/com.azure.cosmos.cosmoscontainer.readitem#com_azure_cosmos_CosmosContainer__T_readItem_java_lang_String_com_azure_cosmos_models_PartitionKey_com_azure_cosmos_models_CosmosItemRequestOptions_java_lang_Class_T__) - [Node.js SDK](/javascript/api/@azure/cosmos/item#read-requestoptions-)-- [Python SDK](/python/api/azure-cosmos/azure.cosmos.containerproxy?preserve-view=true&view=azure-python#read-item-item--partition-key--populate-query-metrics-none--post-trigger-include-none-kwargs-)
+- [Python SDK](/python/api/azure-cosmos/azure.cosmos.containerproxy#read-item-item--partition-key--populate-query-metrics-none--post-trigger-include-none-kwargs-)
**SQL queries** - You can query data by writing queries using the Structured Query Language (SQL) as a JSON query language. Queries always cost at least 2.3 request units and, in general, will have a higher and more variable latency than point reads. Queries can return many items.
cosmos-db Table Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-sdk-python.md
| | | ||| |**SDK download**|[PyPI](https://pypi.python.org/pypi/azure-cosmosdb-table/)|
-|**API documentation**|[Python API reference documentation](/python/api/overview/azure/cosmosdb?preserve-view=true&view=azure-python)|
+|**API documentation**|[Python API reference documentation](/python/api/overview/azure/cosmosdb)|
|**SDK installation instructions**|[Python SDK installation instructions](https://github.com/Azure/azure-cosmosdb-python/tree/master/azure-cosmosdb-table)| |**Contribute to SDK**|[GitHub](https://github.com/Azure/azure-cosmosdb-python/tree/master/azure-cosmosdb-table)| |**Current supported platform**|[Python 2.7](https://www.python.org/downloads/) or [Python 3.3, 3.4, 3.5, or 3.6](https://www.python.org/downloads/)|
cosmos-db Table Storage How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-how-to-use-python.md
This sample shows you how to use the [Azure Cosmos DB Table SDK for Python](http
* Insert and query entities * Modify entities
-While working through the scenarios in this sample, you may want to refer to the [Azure Cosmos DB SDK for Python API reference](/python/api/overview/azure/cosmosdb?preserve-view=true&view=azure-python).
+While working through the scenarios in this sample, you may want to refer to the [Azure Cosmos DB SDK for Python API reference](/python/api/overview/azure/cosmosdb).
## Prerequisites
table_service.delete_table('tasktable')
## Next steps * [FAQ - Develop with the Table API](./faq.md)
-* [Azure Cosmos DB SDK for Python API reference](/python/api/overview/azure/cosmosdb?preserve-view=true&view=azure-python)
+* [Azure Cosmos DB SDK for Python API reference](/python/api/overview/azure/cosmosdb)
* [Python Developer Center](https://azure.microsoft.com/develop/python/) * [Microsoft Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md): A free, cross-platform application for working visually with Azure Storage data on Windows, macOS, and Linux. * [Working with Python in Visual Studio (Windows)](/visualstudio/python/overview-of-python-tools-for-visual-studio)
-[py_commit_batch]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice?preserve-view=true&view=azure-python
-[py_create_table]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice?preserve-view=true&view=azure-python
-[py_delete_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice?preserve-view=true&view=azure-python
-[py_get_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice?preserve-view=true&view=azure-python
-[py_insert_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice?preserve-view=true&view=azure-python
-[py_insert_or_replace_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice?preserve-view=true&view=azure-python
-[py_Entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.models.entity?preserve-view=true&view=azure-python
-[py_merge_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice?preserve-view=true&view=azure-python
-[py_update_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice?preserve-view=true&view=azure-python
-[py_delete_table]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice?preserve-view=true&view=azure-python
-[py_TableService]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice?preserve-view=true&view=azure-python
-[py_TableBatch]: https://docs.microsoft.com/python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice?view=azure-python&preserve-view=true
+[py_commit_batch]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_create_table]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_delete_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_get_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_insert_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_insert_or_replace_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_Entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.models.entity
+[py_merge_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_update_entity]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_delete_table]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_TableService]: /python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
+[py_TableBatch]: https://docs.microsoft.com/python/api/azure-cosmosdb-table/azure.cosmosdb.table.tableservice.tableservice
cosmos-db Tutorial Setup Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-setup-ci-cd.md
To use the build task, we first need to install it onto our Azure DevOps organiz
Next, choose the organization in which to install the extension. > [!NOTE]
-> To install an extension to an Azure DevOps organization, you must be an account owner or project collection administrator. If you do not have permissions, but you are an account member, you can request extensions instead. [Learn more.](/azure/devops/marketplace/faq-extensions?preserve-view=true&view=vsts)
+> To install an extension to an Azure DevOps organization, you must be an account owner or project collection administrator. If you do not have permissions, but you are an account member, you can request extensions instead. [Learn more.](/azure/devops/marketplace/faq-extensions)
:::image type="content" source="./media/tutorial-setup-ci-cd/addExtension_2.png" alt-text="Choose an Azure DevOps organization in which to install an extension"::: ## Create a build definition
-Now that the extension is installed, sign in to your Azure DevOps organization and find your project from the projects dashboard. You can add a [build pipeline](/azure/devops/pipelines/get-started-designer?preserve-view=true&tabs=new-nav&view=vsts) to your project or modify an existing build pipeline. If you already have a build pipeline, you can skip ahead to [Add the Emulator build task to a build definition](#addEmulatorBuildTaskToBuildDefinition).
+Now that the extension is installed, sign in to your Azure DevOps organization and find your project from the projects dashboard. You can add a [build pipeline](/azure/devops/pipelines/get-started-designer?preserve-view=true&tabs=new-nav) to your project or modify an existing build pipeline. If you already have a build pipeline, you can skip ahead to [Add the Emulator build task to a build definition](#addEmulatorBuildTaskToBuildDefinition).
1. To create a new build definition, navigate to the **Builds** tab in Azure DevOps. Select **+New.** \> **New build pipeline**
Now that the extension is installed, sign in to your Azure DevOps organization a
3. Finally, select the desired template for the build pipeline. We'll select the **ASP.NET** template in this tutorial. Now you have a build pipeline that you can set up to use the Azure Cosmos DB Emulator build task. > [!NOTE]
-> The agent pool to be selected for this CI should have Docker for Windows installed unless the installation is done manually in a prior task as a part of the CI. See [Microsoft hosted agents](/azure/devops/pipelines/agents/hosted?preserve-view=true&tabs=yaml&view=azure-devops) article for a selection of agent pools; we recommend to start with `Hosted VS2017`.
+> The agent pool to be selected for this CI should have Docker for Windows installed unless the installation is done manually in a prior task as a part of the CI. See [Microsoft hosted agents](/azure/devops/pipelines/agents/hosted?tabs=yaml) article for a selection of agent pools; we recommend to start with `Hosted VS2017`.
Azure Cosmos DB Emulator currently doesn't support the hosted VS2019 agent pool. However, the emulator already comes with VS2019 installed, and you can use it by starting the emulator with the following PowerShell cmdlets. If you run into any issues when using VS2019, reach out to the [Azure DevOps](https://developercommunity.visualstudio.com/spaces/21/index.html) team for help:
cost-management-billing Export Cost Data Storage Account Sas Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/export-cost-data-storage-account-sas-key.md
+
+ Title: Export cost data with an Azure Storage account SAS key
+description: This article helps partners create a SAS key and configure Cost Management exports.
++ Last updated : 03/08/2021++++++
+# Export cost data with an Azure Storage account SAS key
+
+The following information applies to Microsoft partners only.
+
+Often, partners don't have their own Azure subscriptions in the tenant that's associated with their own Microsoft Partner Agreement. Partners with a Microsoft Partner Agreement plan who are global admins of their billing account can export and copy cost data into a storage account in a different tenant using a shared access signature (SAS) key. In other words, a storage account with a SAS key allows the partner to use a storage account that's outside of their partner agreement to receive exported information. This article helps partners create a SAS key and configure Cost Management exports.
+
+## Requirements
+
+- You must be a partner with a Microsoft Partner Agreement and have customers on the Azure Plan.
+- You must be global admin for your partner organization's billing account.
+- You must have access to configure a storage account that's in a different tenant of your partner organization. You're responsible for maintaining permissions and data access when you export data to your storage account.
+
+## Configure Azure Storage with a SAS key
+
+Get a storage account SAS token or create one using the Azure portal. To create one in the Azure portal, use the following steps. To learn more about SAS keys, see [Grant limited access to data with shared access signatures (SAS)](../../storage/common/storage-sas-overview.md).
+
+1. Navigate to the storage account in the Azure portal.
+ - If your account has access to multiple tenants, switch directories to access the storage account. Select your account in the upper right corner of the Azure portal and then select **Switch directories**.
+ - You might need to sign in to the Azure portal with the corresponding tenant account to access the storage account.
+1. In the left menu, select **Shared access signature**.
+ :::image type="content" source="./media/export-cost-data-storage-account-sas-key/storage-shared-access-signature.png" alt-text="Screenshot showing a configured Azure storage shared access signature." lightbox="./media/export-cost-data-storage-account-sas-key/storage-shared-access-signature.png" :::
+1. Configure the token with the same settings as identified in the preceding image.
+ 1. Select **Blob** for _Allowed services_.
+ 1. Select **Service**, **Container**, and **Object** for _Allowed resource types_.
+ 1. Select **Read**, **Write**, **Delete**, **List**, **Add**, and **Create** for _Allowed permissions_.
    1. Choose the start and expiration dates. Make sure to update your export SAS token before it expires. The longer the time period you configure before expiration, the longer your export runs before needing a new SAS token.
+1. Select **HTTPS only** for _Allowed protocols_.
+1. Select **Basic** for _Preferred routing tier_.
+1. Select **key1** for _Signing key_. If you rotate or update the key that's used to sign the SAS token, you'll need to regenerate a new SAS token for your export.
+1. Select **Generate SAS and connection string**.
+ The **SAS token** value shown is the token that you need when you configure exports.
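The portal steps above can also be understood in terms of how an account SAS token is signed. The following is a minimal, illustrative sketch of the account SAS signing scheme (HMAC-SHA256 over the newline-joined signed fields, keyed with the decoded account key); the account name and key are placeholders, and in practice you'd generate the token with the portal or an Azure SDK rather than by hand — verify the field order against the official account SAS reference before relying on it.

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def make_account_sas(account, key_b64, start, expiry,
                     services="b", resource_types="sco",
                     permissions="rwdlac", protocol="https"):
    """Sketch of account-SAS signing: HMAC-SHA256 over the newline-joined
    signed fields (name, permissions, service, resource types, start,
    expiry, IP range, protocol, version), keyed with the decoded key."""
    version = "2020-08-04"
    string_to_sign = "\n".join([
        account, permissions, services, resource_types,
        start, expiry, "", protocol, version, ""  # empty IP range; trailing newline
    ])
    sig = base64.b64encode(
        hmac.new(base64.b64decode(key_b64),
                 string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode()
    # Assemble the SAS token query string the way the portal displays it.
    return urlencode({"sv": version, "ss": services, "srt": resource_types,
                      "sp": permissions, "st": start, "se": expiry,
                      "spr": protocol, "sig": sig})

# Placeholder account name and key, for illustration only.
token = make_account_sas(
    "mystorageaccount",
    base64.b64encode(b"not-a-real-key").decode(),
    "2021-03-09T00:00:00Z", "2021-06-09T00:00:00Z")
```

The `ss=b`, `srt=sco`, and `sp=rwdlac` fields correspond to the Blob service, the Service/Container/Object resource types, and the Read/Write/Delete/List/Add/Create permissions selected in the portal steps above.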
+
+## Create a new export with a SAS token
+
+Navigate to **Exports** at the billing account scope and create a new export using the following steps.
+
+1. Select **Create**.
+1. Configure the export details as you would for a normal export. You can use an existing directory or container, or specify new ones and the export will create them for you.
+1. When configuring Storage, select **Use a SAS token**.
+ :::image type="content" source="./media/export-cost-data-storage-account-sas-key/new-export.png" alt-text="Screenshot showing the New export where you select SAS token." lightbox="./media/export-cost-data-storage-account-sas-key/new-export.png" :::
+1. Enter the name of the storage account and paste in your SAS token.
+1. Specify an existing container or directory, or identify new ones to be created.
+1. Select **Create**.
+
+The SAS token-based export only works while the token remains valid. Reset the token before the current one expires, or your export will stop working. Because the token provides access to your storage account, protect the token as carefully as you would any other sensitive information. You're responsible for maintaining permissions and data access when you export data to your storage account.
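Because an expired token silently stops the export, it can help to monitor the token's `se` (signed expiry) field and renew ahead of time. A small illustrative sketch (the token value below is a made-up example, not a real signature):

```python
from datetime import datetime, timezone
from urllib.parse import parse_qs

def sas_expiry(sas_token: str) -> datetime:
    """Read the 'se' (signed expiry) field from a SAS token query string."""
    fields = parse_qs(sas_token.lstrip("?"))
    return datetime.fromisoformat(fields["se"][0].replace("Z", "+00:00"))

# Hypothetical token value for illustration only.
token = "sv=2020-08-04&ss=b&srt=sco&sp=rwdlac&se=2021-06-30T00:00:00Z&spr=https&sig=abc123"
days_left = (sas_expiry(token) - datetime.now(timezone.utc)).days
```

A scheduled job could compare `days_left` against a renewal threshold and alert when the export's token is close to expiring.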
+
+## Troubleshoot exports using SAS tokens
+
+The following are common issues that might happen when you configure or use SAS token-based exports.
+
+- You don't see the SAS key option in the Azure portal.
+  - Verify that you're a partner with a Microsoft Partner Agreement and that you have global admin permission for the billing account. Only those users can export with a SAS key.
+
+- You get the following error message when trying to configure your export:
+
+ **Please ensure the SAS token is valid for blob service, is valid for container and object resource types, and has permissions: add create read write delete. (Storage service error code: AuthorizationResourceTypeMismatch)**
+
+ - Make sure that you're configuring and generating the SAS key correctly in Azure Storage.
+
+- You can't see the full SAS key after you create an export.
+  - Not seeing the key is expected behavior. After the SAS export is configured, the key is hidden for security reasons.
+
+- You can't access the storage account from the tenant where the export is configured.
+ - It's expected behavior. If the storage account is in another tenant, you need to navigate to that tenant first in the Azure portal to find the storage account.
+
+- Your export fails because of a SAS token-related error.
+ - Your export works only while the SAS token remains valid. Create a new key and run the export.
+
+## Next steps
+
+- For more information about exporting Cost Management data, see [Create and export data](tutorial-export-acm-data.md).
+- For information about exporting large amounts of usage data, see [Retrieve large datasets with exports](ingest-azure-usage-at-scale.md).
cost-management-billing Ingest Azure Usage At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/ingest-azure-usage-at-scale.md
+
+ Title: Retrieve large cost datasets recurringly with exports
+description: This article helps you regularly export large amounts of data with exports from Azure Cost Management.
++ Last updated : 03/08/2021+++++
+# Retrieve large cost datasets recurringly with exports
+
+This article helps you regularly export large amounts of data with exports from Azure Cost Management. Exporting is the recommended way to retrieve unaggregated cost data, especially when usage files are too large to reliably call and download using the Usage Details API. Exported data is placed in the Azure Storage account that you choose. From there, you can load it into your own systems and analyze it as needed. To configure exports in the Azure portal, see [Export data](tutorial-export-acm-data.md).
+
+If you want to automate exports at various scopes, the sample API request in the next section is a good starting point. You can use the Exports API to create automatic exports as a part of your general environment configuration. Automatic exports help ensure that you have the data you need to use in your own organization's systems as you expand your Azure use.
+
+## Common export configurations
+
+Before you create your first export, think about your scenario and the configuration options needed to enable it. Consider the following export options:
+
+- **Recurrence** - Determines how frequently the export job runs and when a file is put in your Azure Storage account. Choose between Daily, Weekly, and Monthly. Try to configure your recurrence to match the data import jobs used by your organization's internal system.
+- **Recurrence Period** - Determines how long the export remains valid. Files are only exported during the recurrence period.
+- **Time Frame** - Determines the amount of data that's generated by the export on a given run. Common options are MonthToDate and WeekToDate.
+- **StartDate** - Configures when you want the export schedule to begin. An export is created on the StartDate and then runs later based on your Recurrence.
+- **Type** - There are three export types:
+  - ActualCost - Shows the total usage and costs for the period specified, as they're accrued and show on your bill.
+  - AmortizedCost - Shows the total usage and costs for the period specified, with amortization applied to the applicable reservation purchase costs.
+  - Usage - All exports created before July 20, 2020 are of type Usage. Update all your scheduled exports to use either ActualCost or AmortizedCost.
+- **Columns** - Defines the data fields you want included in your export file. They correspond with the fields available in the Usage Details API. For more information, see [Usage Details API](/rest/api/consumption/usagedetails/list).
+
+## Create a daily month-to-date export for a subscription
+
+Request URL: `PUT https://management.azure.com/{scope}/providers/Microsoft.CostManagement/exports/{exportName}?api-version=2020-06-01`
+
+```json
+{
+ "properties": {
+ "schedule": {
+ "status": "Active",
+ "recurrence": "Daily",
+ "recurrencePeriod": {
+ "from": "2020-06-01T00:00:00Z",
+ "to": "2020-10-31T00:00:00Z"
+ }
+ },
+ "format": "Csv",
+ "deliveryInfo": {
+ "destination": {
+        "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MYDEVTESTRG/providers/Microsoft.Storage/storageAccounts/{yourStorageAccount}",
+ "container": "{yourContainer}",
+ "rootFolderPath": "{yourDirectory}"
+ }
+ },
+ "definition": {
+ "type": "ActualCost",
+ "timeframe": "MonthToDate",
+ "dataSet": {
+ "granularity": "Daily",
+ "configuration": {
+ "columns": [
+ "Date",
+ "MeterId",
+ "ResourceId",
+ "ResourceLocation",
+ "Quantity"
+ ]
+ }
+ }
+ }
+}
+```
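The request above can be issued from any HTTP client. As a sketch, the following Python snippet builds the same PUT request with the standard library; the scope, export name, and bearer token are placeholders you'd supply (the token comes from an Azure AD authentication flow not shown here).

```python
import json
import urllib.request

# Placeholder scope and export name; substitute your own values.
scope = "subscriptions/00000000-0000-0000-0000-000000000000"
export_name = "DailyMonthToDateExport"
url = (f"https://management.azure.com/{scope}/providers/"
       f"Microsoft.CostManagement/exports/{export_name}?api-version=2020-06-01")

# Same body as the JSON example above.
export_body = {
    "properties": {
        "schedule": {
            "status": "Active",
            "recurrence": "Daily",
            "recurrencePeriod": {"from": "2020-06-01T00:00:00Z",
                                 "to": "2020-10-31T00:00:00Z"},
        },
        "format": "Csv",
        "deliveryInfo": {"destination": {
            "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000"
                          "/resourceGroups/MYDEVTESTRG/providers/Microsoft.Storage"
                          "/storageAccounts/{yourStorageAccount}",
            "container": "{yourContainer}",
            "rootFolderPath": "{yourDirectory}",
        }},
        "definition": {
            "type": "ActualCost",
            "timeframe": "MonthToDate",
            "dataSet": {
                "granularity": "Daily",
                "configuration": {"columns": ["Date", "MeterId", "ResourceId",
                                              "ResourceLocation", "Quantity"]},
            },
        },
    }
}

def create_export(bearer_token: str):
    """PUT the export definition; requires a valid Azure AD bearer token."""
    req = urllib.request.Request(
        url, method="PUT",
        data=json.dumps(export_body).encode("utf-8"),
        headers={"Authorization": f"Bearer {bearer_token}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Calling `create_export(token)` on a schedule, or once per scope during environment setup, is one way to automate export creation across many subscriptions.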
+
+## Copy large Azure storage blobs
+
+You can use Cost Management to schedule exports of your Azure usage details into your Azure Storage accounts as blobs. The resulting blobs can be multiple gigabytes in size. The Azure Cost Management team worked with the Azure Storage team to test copying large Azure storage blobs. The results are documented in the following sections. You can expect similar results as you copy storage blobs from one Azure region to another.
+
+To test copy performance, the team transferred blobs from storage accounts in the US West region to the same and other regions. The team measured speeds that ranged from 2 GB per second within the same region to 150 MB per second to storage accounts in the South East Asia region.
+
+### Test configuration
+
+To measure blob transfer speeds, the team created a simple .NET console application referencing the latest version (v2.0.1) of the Azure Data Movement Library (DML) via NuGet. DML is an SDK provided by the Azure Storage team that facilitates programmatic access to their transfer services. The team then created Standard V2 storage accounts in multiple regions, using West US as the source region. They populated the storage accounts there with containers, where each held ten 2-GB block blobs. They copied the containers to other storage accounts using DML's _TransferManager.CopyDirectoryAsync()_ method with the _CopyMethod.ServiceSideSyncCopy_ option. Tests were conducted on a computer running Windows 10 with 12 cores and a 1-GbE network.
+
+Application settings used:
+
+- _TransferManager.Configurations.ParallelOperations_ = _Environment.ProcessorCount \* 32_. The team found this setting to have the greatest effect on overall throughput. A value of 32 times the number of cores provided the best throughput for the test client.
+- _ServicePointManager.DefaultConnectionLimit = int.MaxValue_. Setting it to the maximum value effectively passes full control of transfer parallelism to the _ParallelOperations_ setting above.
+- _TransferManager.Configurations.BlockSize = 4,194,304_. Block size had a modest effect on transfer rates; 4 MB proved best in testing.
+
+For more information and sample code, see links in the [Next steps](#next-steps) section.
+
+### Test results
+
+| **Test number** | **To region** | **Blobs** | **Time (secs)** | **MB/s** | **Comments** |
+| --- | --- | --- | --- | --- | --- |
+| 1 | WestUS | 2 GB x 10 | 10 | 2,000 | |
+| 2 | WestUS2 | 2 GB x 10 | 33 | 600 | |
+| 3 | EastUS | 2 GB x 10 | 67 | 300 | |
+| 4 | EastUS | 2 GB x 10 x 4 | 99 | 200 | 4 parallel transfers using 8 storage accounts: 4 West to 4 East, average per transfer |
+| 5 | EastUS | 2 GB x 10 x 8 | 148 | 135 | 8 parallel transfers using 8 storage accounts: 4 West to 4x2 East, average per transfer |
+| 6 | EastUS | 2 GB x 10 x 4 | 92 | 870 | 4 parallel transfers from 1 storage account to another |
+| 7 | SE Asia | 2 GB x 10 | 133 | 150 | |
+| 8 | SE Asia | 2 GB x 10 x 4 | 444 | 180 | 4 parallel transfers from 1 storage account to another |
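The MB/s figures follow from simple arithmetic: total megabytes moved divided by elapsed seconds. A quick Python check of two single-transfer rows (using 1 GB = 1,024 MB, so the table's values are rounded):

```python
def throughput_mb_per_s(blob_count: int, blob_size_gb: int, seconds: int) -> float:
    """Aggregate transfer throughput in MB/s (1 GB = 1,024 MB)."""
    total_mb = blob_count * blob_size_gb * 1024
    return total_mb / seconds

# Test 1: ten 2-GB blobs within West US in 10 seconds -> reported as 2,000 MB/s
print(round(throughput_mb_per_s(10, 2, 10)))   # 2048
# Test 7: the same payload to SE Asia in 133 seconds -> reported as 150 MB/s
print(round(throughput_mb_per_s(10, 2, 133)))  # 154
```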
+
+### Sync transfer characteristics
+
+Here are some of the characteristics of the service-side sync transfer used with DML that are relevant to its use:
+
+- DML can transfer a single blob or a directory. For directory transfer, you can use a search pattern to match on blob prefix.
+- Block blob transfers happen in parallel, and the individual blocks within each blob are also transferred in parallel; all blobs complete near the end of the transfer process.
+- The transfer is executed asynchronously on the client. The transfer status is available periodically via a callback to a method that can be defined in a _TransferContext_ object.
+- The transfer creates checkpoints during its progress and exposes a _TransferCheckpoint_ object. The object represents the latest checkpoint via the _TransferContext_ object. If the _TransferCheckpoint_ is saved before a transfer is cancelled/aborted, the transfer can be resumed from the checkpoint for up to seven days. The transfer can be resumed from any checkpoint, not just the latest.
+- If the transfer client process is killed and restarted without implementing the checkpoint feature:
+ - Before any blob transfers have completed, the transfer restarts.
+ - After some of the blobs have completed, the transfer restarts for only the incomplete blobs.
+- Pausing the client execution pauses the transfers.
+- The blob transfer feature abstracts the client from transient failures. For instance, storage account throttling won't normally cause a transfer to fail but will slow the transfer.
+- Service-side transfers use few client resources: low CPU and memory, plus some network bandwidth and connections.
+
+### Async transfer characteristics
+
+You can invoke the _TransferManager.CopyDirectoryAsync()_ method with the _CopyMethod.ServiceSideAsyncCopy_ option. From the client's perspective, it operates similarly to the sync transfer mechanism, but with the following differences in operation:
+
+- Transfer rates are much slower than the equivalent sync transfer (typically 10 MB/s or less).
+- The transfer continues even if the client process terminates.
+- Although checkpoints are supported, resuming a transfer using a _TransferCheckpoint_ won't resume at the checkpoint time but at the current state of the transfer.
+
+### Test summary
+
+Azure Blob storage supports high global transfer rates with its service-side sync transfer feature. Using the feature in .NET applications is straightforward with the Data Movement Library. It's possible for Cost Management exports to reliably copy hundreds of gigabytes of data to a storage account in any region in less than an hour.
+
+## Next steps
+
+- See the [Microsoft Azure Storage Data Movement Library](https://github.com/Azure/azure-storage-net-data-movement) source.
+- [Transfer data with the Data Movement library](../../storage/common/storage-use-data-movement-library.md).
+- See the [AzureDmlBackup sample application](https://github.com/markjbrown/AzureDmlBackup) source sample.
+- Read [High-Throughput with Azure Blob Storage](https://azure.microsoft.com/blog/high-throughput-with-azure-blob-storage).
cost-management-billing Manage Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/manage-automation.md
Title: Manage Azure costs with automation
description: This article explains how you can manage Azure costs with automation. Previously updated : 01/06/2021 Last updated : 03/08/2021
Consider using the [Usage Details API](/rest/api/consumption/usageDetails) if yo
The [Usage Details API](/rest/api/consumption/usageDetails) provides an easy way to get raw, unaggregated cost data that corresponds to your Azure bill. The API is useful when your organization needs a programmatic data retrieval solution. Consider using the API if you're looking to analyze smaller cost data sets. However, you should use other solutions identified previously if you have larger datasets. The data in Usage Details is provided on a per meter basis, per day. It's used when calculating your monthly bill. The general availability (GA) version of the APIs is `2019-10-01`. Use `2019-04-01-preview` to access the preview version for reservation and Azure Marketplace purchases with the APIs.
+If you want to get large amounts of exported data on a regular basis, see [Retrieve large cost datasets recurringly with exports](ingest-azure-usage-at-scale.md).
+
+### Usage Details API suggestions
+
+**Request schedule**
If you need actual costs to show purchases as they're accrued, change the *metri
GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDetails?metric=AmortizedCost&$filter=properties/usageStart+ge+'2019-04-01'+AND+properties/usageEnd+le+'2019-04-30'&api-version=2019-04-01-preview ```
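As a sketch of how a request like the one above might be issued from code, the URL can be assembled as follows. The scope value is a caller-supplied assumption, and token acquisition is omitted; this only builds the URL (the documented examples encode spaces as `+`; `%20` is equivalent):

```python
from urllib.parse import quote

def usage_details_url(scope: str, start: str, end: str,
                      metric: str = "AmortizedCost",
                      api_version: str = "2019-04-01-preview") -> str:
    """Build a Usage Details request URL with a usage date-range $filter."""
    flt = f"properties/usageStart ge '{start}' AND properties/usageEnd le '{end}'"
    return ("https://management.azure.com/" + scope.strip("/") +
            "/providers/Microsoft.Consumption/usageDetails"
            f"?metric={metric}&$filter={quote(flt)}&api-version={api_version}")

url = usage_details_url("subscriptions/00000000-0000-0000-0000-000000000000",
                        "2019-04-01", "2019-04-30")
```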
-## Retrieve large cost datasets recurringly with Exports
-
-You can regularly export large amounts of data with exports from Cost Management. Exporting is the recommended way to retrieve unaggregated cost data. Especially when usage files are too large to reliably call and download using the Usage Details API. Exported data is placed in the Azure Storage account that you choose. From there, you can load it into your own systems and analyze it as needed. To configure exports in the Azure portal, see [Export data](tutorial-export-acm-data.md).
-
-If you want to automate exports at various scopes, the sample API request in the next section is a good starting point. You can use the Exports API to create automatic exports as a part of your general environment configuration. Automatic exports help ensure that you have the data that you need. You can use in your own organization's systems as you expand your Azure use.
-
-### Common export configurations
-
-Before you create your first export, consider your scenario and the configuration options need to enable it. Consider the following export options:
-
-- **Recurrence** - Determines how frequently the export job runs and when a file is put in your Azure Storage account. Choose between Daily, Weekly, and Monthly. Try to configure your recurrence to match the data import jobs used by your organization's internal system.
-- **Recurrence Period** - Determines how long the Export remains valid. Files are only exported during the recurrence period.
-- **Time Frame** - Determines the amount of data that's generated by the export on a given run. Common options are MonthToDate and WeekToDate.
-- **StartDate** - Configures when you want the export schedule to begin. An export is created on the StartDate and then later based on your Recurrence.
-- **Type** - There are three export types:
- - ActualCost - Shows the total usage and costs for the period specified, as they're accrued and shows on your bill.
- - AmortizedCost - Shows the total usage and costs for the period specified, with amortization applied to the reservation purchase costs that are applicable.
- - Usage - All exports created before July 20 2020 are of type Usage. Update all your scheduled exports as either ActualCost or AmortizedCost.
-- **Columns** - Defines the data fields you want included in your export file. They correspond with the fields available in the Usage Details API. For more information, see [Usage Details API](/rest/api/consumption/usagedetails/list).
-
-### Create a daily month-to-date export for a subscription
-
-Request URL: `PUT https://management.azure.com/{scope}/providers/Microsoft.CostManagement/exports/{exportName}?api-version=2020-06-01`
-
-```json
-{
- "properties": {
- "schedule": {
- "status": "Active",
- "recurrence": "Daily",
- "recurrencePeriod": {
- "from": "2020-06-01T00:00:00Z",
- "to": "2020-10-31T00:00:00Z"
- }
- },
- "format": "Csv",
- "deliveryInfo": {
- "destination": {
- "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MYDEVTESTRG/providers/Microsoft.Storage/storageAccounts/{yourStorageAccount} ",
- "container": "{yourContainer}",
- "rootFolderPath": "{yourDirectory}"
- }
- },
- "definition": {
- "type": "ActualCost",
- "timeframe": "MonthToDate",
- "dataSet": {
- "granularity": "Daily",
- "configuration": {
- "columns": [
- "Date",
- "MeterId",
- "ResourceId",
- "ResourceLocation",
- "Quantity"
- ]
- }
- }
- }
-}
-```
-
-### Automate alerts and actions with Budgets
+## Automate alerts and actions with budgets
There are two critical components to maximizing the value of your investment in the cloud. One is automatic budget creation. The other is configuring cost-based orchestration in response to budget alerts. There are different ways to automate Azure budget creation. Various alert responses happen when your configured alert thresholds are exceeded. The following sections cover available options and provide sample API requests to get you started with budget automation.
-#### How costs are evaluated against your budget threshold
+### How costs are evaluated against your budget threshold
Your costs are evaluated against your budget threshold once per day. When you create a new budget, or on your budget reset day, the cost compared to the threshold is zero or null because the evaluation might not have occurred yet. When Azure detects that your costs have crossed the threshold, a notification is triggered within the hour of the detection period.
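The daily evaluation described above amounts to comparing accumulated cost to each configured alert threshold. A rough Python illustration (the threshold percentages here are hypothetical examples, not an API):

```python
def crossed_thresholds(current_cost: float, budget_amount: float,
                       thresholds_pct: list) -> list:
    """Return the alert thresholds (as % of the budget) the current spend has crossed."""
    if budget_amount <= 0:
        return []
    spent_pct = current_cost / budget_amount * 100
    return [t for t in thresholds_pct if spent_pct >= t]

# A $1,000 budget with $820 accrued crosses the 50% and 80% thresholds.
print(crossed_thresholds(820, 1000, [50, 80, 100]))  # [50, 80]
```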
-#### View your current cost
+### View your current cost
To view your current costs, you need to make a GET call using the [Query API](/rest/api/cost-management/query).
A GET call to the Budgets API won't return the current costs shown in Cost Analy
You can automate budget creation using the [Budgets API](/rest/api/consumption/budgets). You can also create a budget with a [budget template](quick-create-budget-template.md). Templates are an easy way for you to standardize Azure deployments while ensuring cost control is properly configured and enforced.
-#### Supported locales for budget alert emails
+### Supported locales for budget alert emails
With budgets, you're alerted when costs cross a set threshold. You can set up to five email recipients per budget. Recipients receive the email alerts within 24 hours of crossing the budget threshold. However, your recipient might need to receive an email in a different language. You can use the following language culture codes with the Budgets API. Set the culture code with the `locale` parameter similar to the following example.
Languages supported by a culture code:
| pt-pt | Portuguese (Portugal) |
| sv-se | Swedish (Sweden) |
-#### Common Budgets API configurations
+### Common Budgets API configurations
There are many ways to configure a budget in your Azure environment. Consider your scenario first and then identify the configuration options that enable it. Review the following options:
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/understand-cost-mgt-data.md
If you don't see a specific tag in Cost Management, consider the following quest
- Data Factory
- Databricks
- Load balancers
+ - Machine Learning workspace Compute instances
- Network Watcher
- Notification Hubs
- Service Bus
Historical data for credit-based and pay-in-advance offers might not match your
## Next steps
-- If you haven't already completed the first quickstart for Cost Management, read it at [Start analyzing costs](./quick-acm-cost-analysis.md).
+- If you haven't already completed the first quickstart for Cost Management, read it at [Start analyzing costs](./quick-acm-cost-analysis.md).
cost-management-billing Assign Roles Azure Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md
You can manage your Enterprise Agreement (EA) enrollment in the [Azure Enterpris
Before you begin, ensure that you're familiar with the following articles: - [Enterprise agreement roles](understand-ea-roles.md)-- [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps?view=azps-5.5.0&preserve-view=true)
+- [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps)
- [How to call REST APIs with Postman](/rest/api/azure/#how-to-call-azure-rest-apis-with-postman) ## Create and authenticate your service principal
The parameter is the Billing account ID. You can find it in the Azure portal on
**billingRoleAssignmentName**
-The parameter is a unique GUID that you need to provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid?view=powershell-7.1&preserve-view=true) PowerShell command.
+The parameter is a unique GUID that you need to provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid) PowerShell command.
Or, you can use the [Online GUID / UUID Generator](https://guidgenerator.com/) website to generate a unique GUID.
It's the Billing account ID. You can find it in the Azure portal on the Cost Man
**billingRoleAssignmentName**
-The parameter is a unique GUID that you need to provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid?view=powershell-7.1&preserve-view=true) PowerShell command.
+The parameter is a unique GUID that you need to provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid) PowerShell command.
Or, you can use the [Online GUID / UUID Generator](https://guidgenerator.com/) website to generate a unique GUID.
The parameter is the Billing account ID. You can find it in the Azure portal on
**billingRoleAssignmentName**
-The parameter is a unique GUID that you need to provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid?view=powershell-7.1&preserve-view=true) PowerShell command.
+The parameter is a unique GUID that you need to provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid) PowerShell command.
Or, you can use the [Online GUID / UUID Generator](https://guidgenerator.com/) website to generate a unique GUID.

**enrollmentAccountName**
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
New-AzSubscription -OfferType MS-AZR-0017P -Name "Dev Team Subscription" -Enroll
| `EnrollmentAccountObjectId` | Yes | String | The Object ID of the enrollment account that the subscription is created under and billed to. The value is a GUID that you get from `Get-AzEnrollmentAccount`. |
| `OwnerObjectId` | No | String | The Object ID of any user to add as an Azure RBAC Owner on the subscription when it's created. |
| `OwnerSignInName` | No | String | The email address of any user to add as an Azure RBAC Owner on the subscription when it's created. You can use the parameter instead of `OwnerObjectId`.|
-| `OwnerApplicationId` | No | String | The application ID of any service principal to add as an Azure RBAC Owner on the subscription when it's created. You can use the parameter instead of `OwnerObjectId`. When using the parameter, the service principal must have [read access to the directory](/powershell/azure/active-directory/signing-in-service-principal?view=azureadps-2.0#give-the-service-principal-reader-access-to-the-current-tenant-get-azureaddirectoryrole&preserve-view=true).|
+| `OwnerApplicationId` | No | String | The application ID of any service principal to add as an Azure RBAC Owner on the subscription when it's created. You can use the parameter instead of `OwnerObjectId`. When using the parameter, the service principal must have [read access to the directory](/powershell/azure/active-directory/signing-in-service-principal#give-the-service-principal-reader-access-to-the-current-tenant-get-azureaddirectoryrole).|
To see a full list of all parameters, see [New-AzSubscription](/powershell/module/az.subscription/New-AzSubscription).
az account create --offer-type "MS-AZR-0017P" --display-name "Dev Team Subscript
| `enrollment-account-object-id` | Yes | String | The Object ID of the enrollment account that the subscription is created under and billed to. The value is a GUID that you get from `az billing enrollment-account list`. |
| `owner-object-id` | No | String | The Object ID of any user to add as an Azure RBAC Owner on the subscription when it's created. |
| `owner-upn` | No | String | The email address of any user to add as an Azure RBAC Owner on the subscription when it's created. You can use the parameter instead of `owner-object-id`.|
-| `owner-spn` | No | String | The application ID of any service principal to add as an Azure RBAC Owner on the subscription when it's created. You can use the parameter instead of `owner-object-id`. When using the parameter, the service principal must have [read access to the directory](/powershell/azure/active-directory/signing-in-service-principal?view=azureadps-2.0#give-the-service-principal-reader-access-to-the-current-tenant-get-azureaddirectoryrole&preserve-view=true).|
+| `owner-spn` | No | String | The application ID of any service principal to add as an Azure RBAC Owner on the subscription when it's created. You can use the parameter instead of `owner-object-id`. When using the parameter, the service principal must have [read access to the directory](/powershell/azure/active-directory/signing-in-service-principal#give-the-service-principal-reader-access-to-the-current-tenant-get-azureaddirectoryrole).|
To see a full list of all parameters, see [az account create](/cli/azure/ext/subscription/account#-ext-subscription-az-account-create).
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-guide.md
Last updated 09/11/2020 + # Troubleshoot mapping data flows in Azure Data Factory [!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
For more help with troubleshooting, see these resources:
* [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory)
* [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
+
data-factory How To Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-expression-language-functions.md
+
+ Title: How to use parameters and expressions in Azure Data Factory
+description: This How To article provides information about expressions and functions that you can use in creating data factory entities.
+++++ Last updated : 03/08/2020++
+# How to use parameters, expressions and functions in Azure Data Factory
+
+> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
+> * [Version 1](v1/data-factory-functions-variables.md)
+> * [Current version](how-to-expression-language-functions.md)
+
+In this document, we focus on fundamental concepts, illustrated with examples, for creating parameterized data pipelines within Azure Data Factory. Parameterization and dynamic expressions are notable additions to ADF because they can save a tremendous amount of time and allow for a much more flexible Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) solution, dramatically reducing the cost of solution maintenance and speeding up the implementation of new features into existing pipelines. These gains come about because parameterization minimizes the amount of hard coding and increases the number of reusable objects and processes in a solution.
+
+## Azure data factory UI and parameters
+
+If you are new to parameter usage in the Azure Data Factory user interface, review [Data factory UI for linked services with parameters](https://docs.microsoft.com/azure/data-factory/parameterize-linked-services#data-factory-ui) and [Data factory UI for metadata driven pipeline with parameters](https://docs.microsoft.com/azure/data-factory/how-to-use-trigger-parameterization#data-factory-ui) for a visual explanation.
+
+## Parameter and expression concepts
+
+You can use parameters to pass external values into pipelines, datasets, linked services, and data flows. Once the parameter has been passed into the resource, it cannot be changed. By parameterizing resources, you can reuse them with different values each time. Parameters can be used individually or as a part of expressions. JSON values in the definition can be literal or expressions that are evaluated at runtime.
+
+For example:
+
+```json
+"name": "value"
+```
+
+ or
+
+```json
+"name": "@pipeline().parameters.password"
+```
+
+Expressions can appear anywhere in a JSON string value and always result in another JSON value. Here, *password* is a pipeline parameter in the expression. If a JSON value is an expression, the body of the expression is extracted by removing the at-sign (\@). If a literal string is needed that starts with \@, it must be escaped by using \@\@. The following examples show how expressions are evaluated.
+
+|JSON value|Result|
+|---|---|
+|"parameters"|The characters 'parameters' are returned.|
+|"parameters[1]"|The characters 'parameters[1]' are returned.|
+|"\@\@"|A 1 character string that contains '\@' is returned.|
+|" \@"|A 2 character string that contains ' \@' is returned.|
+
+ Expressions can also appear inside strings, using a feature called *string interpolation* where expressions are wrapped in `@{ ... }`. For example: `"name" : "First Name: @{pipeline().parameters.firstName} Last Name: @{pipeline().parameters.lastName}"`
+
+ Using string interpolation, the result is always a string. Say I have defined `myNumber` as `42` and `myString` as `foo`:
+
+|JSON value|Result|
+|---|---|
+|"\@pipeline().parameters.myString"| Returns `foo` as a string.|
+|"\@{pipeline().parameters.myString}"| Returns `foo` as a string.|
+|"\@pipeline().parameters.myNumber"| Returns `42` as a *number*.|
+|"\@{pipeline().parameters.myNumber}"| Returns `42` as a *string*.|
+|"Answer is: @{pipeline().parameters.myNumber}"| Returns the string `Answer is: 42`.|
+|"\@concat('Answer is: ', string(pipeline().parameters.myNumber))"| Returns the string `Answer is: 42`|
+|"Answer is: \@\@{pipeline().parameters.myNumber}"| Returns the string `Answer is: @{pipeline().parameters.myNumber}`.|
+
+## Examples of using parameters in expressions
+
+### Complex expression example
+The following example references a deep sub-field of activity output. To reference a pipeline parameter that evaluates to a sub-field, use the [] syntax instead of the dot (.) operator (as with subfield1 and subfield2):
+
+`@activity('*activityName*').output.*subfield1*.*subfield2*[pipeline().parameters.*subfield3*].*subfield4*`
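The bracket syntax works like ordinary dictionary indexing, letting the key itself come from a parameter. A rough Python analogy (all field names and values here are hypothetical):

```python
output = {"subfield1": {"subfield2": {"runA": {"subfield4": 99}}}}
parameters = {"subfield3": "runA"}

# Dot access in ADF (.subfield1.subfield2) is static; [] lets the key be
# dynamic, as with parameters["subfield3"] selecting which sub-object to read.
value = output["subfield1"]["subfield2"][parameters["subfield3"]]["subfield4"]
print(value)  # 99
```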
+
+### Dynamic content editor
+
+The dynamic content editor automatically escapes characters in your content when you finish editing. For example, the following content in the editor is a string interpolation with two expression functions.
+
+```json
+{
+ "type": "@{if(equals(1, 2), 'Blob', 'Table' )}",
+ "name": "@{toUpper('myData')}"
+}
+```
+
+The dynamic content editor converts the above content to the expression `"{ \n \"type\": \"@{if(equals(1, 2), 'Blob', 'Table' )}\",\n \"name\": \"@{toUpper('myData')}\"\n}"`. The result of evaluating this expression is the JSON-format string shown below.
+
+```json
+{
+ "type": "Table",
+ "name": "MYDATA"
+}
+```
+
+### A dataset with parameters
+
+In the following example, the BlobDataset takes a parameter named **path**. Its value is used to set a value for the **folderPath** property by using the expression: `dataset().path`.
+
+```json
+{
+ "name": "BlobDataset",
+ "properties": {
+ "type": "AzureBlob",
+ "typeProperties": {
+ "folderPath": "@dataset().path"
+ },
+ "linkedServiceName": {
+ "referenceName": "AzureStorageLinkedService",
+ "type": "LinkedServiceReference"
+ },
+ "parameters": {
+ "path": {
+ "type": "String"
+ }
+ }
+ }
+}
+```
+
+### A pipeline with parameters
+
+In the following example, the pipeline takes **inputPath** and **outputPath** parameters. The **path** for the parameterized blob dataset is set by using values of these parameters. The syntax used here is: `pipeline().parameters.parametername`.
+
+```json
+{
+ "name": "Adfv2QuickStartPipeline",
+ "properties": {
+ "activities": [
+ {
+ "name": "CopyFromBlobToBlob",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "BlobDataset",
+ "parameters": {
+ "path": "@pipeline().parameters.inputPath"
+ },
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "BlobDataset",
+ "parameters": {
+ "path": "@pipeline().parameters.outputPath"
+ },
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "BlobSource"
+ },
+ "sink": {
+ "type": "BlobSink"
+ }
+ }
+ }
+ ],
+ "parameters": {
+ "inputPath": {
+ "type": "String"
+ },
+ "outputPath": {
+ "type": "String"
+ }
+ }
+ }
+}
+```
+
+
+## Calling functions within expressions
+
+You can call functions within expressions. The following sections provide information about the functions that can be used in an expression.
+
+### String functions
+
+To work with strings, you can use these string functions
+and also some [collection functions](#collection-functions).
+String functions work only on strings.
+
+| String function | Task |
+| | - |
+| [concat](control-flow-expression-language-functions.md#concat) | Combine two or more strings, and return the combined string. |
+| [endsWith](control-flow-expression-language-functions.md#endswith) | Check whether a string ends with the specified substring. |
+| [guid](control-flow-expression-language-functions.md#guid) | Generate a globally unique identifier (GUID) as a string. |
+| [indexOf](control-flow-expression-language-functions.md#indexof) | Return the starting position for a substring. |
+| [lastIndexOf](control-flow-expression-language-functions.md#lastindexof) | Return the starting position for the last occurrence of a substring. |
+| [replace](control-flow-expression-language-functions.md#replace) | Replace a substring with the specified string, and return the updated string. |
+| [split](control-flow-expression-language-functions.md#split) | Return an array that contains substrings, separated by commas, from a larger string based on a specified delimiter character in the original string. |
+| [startsWith](control-flow-expression-language-functions.md#startswith) | Check whether a string starts with a specific substring. |
+| [substring](control-flow-expression-language-functions.md#substring) | Return characters from a string, starting from the specified position. |
+| [toLower](control-flow-expression-language-functions.md#tolower) | Return a string in lowercase format. |
+| [toUpper](control-flow-expression-language-functions.md#toupper) | Return a string in uppercase format. |
+| [trim](control-flow-expression-language-functions.md#trim) | Remove leading and trailing whitespace from a string, and return the updated string. |
+
+### Collection functions
+
+To work with collections (generally arrays, strings, and
+sometimes dictionaries), you can use these collection functions.
+
+| Collection function | Task |
+| - | - |
+| [contains](control-flow-expression-language-functions.md#contains) | Check whether a collection has a specific item. |
+| [empty](control-flow-expression-language-functions.md#empty) | Check whether a collection is empty. |
+| [first](control-flow-expression-language-functions.md#first) | Return the first item from a collection. |
+| [intersection](control-flow-expression-language-functions.md#intersection) | Return a collection that has *only* the common items across the specified collections. |
+| [join](control-flow-expression-language-functions.md#join) | Return a string that has *all* the items from an array, separated by the specified character. |
+| [last](control-flow-expression-language-functions.md#last) | Return the last item from a collection. |
+| [length](control-flow-expression-language-functions.md#length) | Return the number of items in a string or array. |
+| [skip](control-flow-expression-language-functions.md#skip) | Remove items from the front of a collection, and return *all the other* items. |
+| [take](control-flow-expression-language-functions.md#take) | Return items from the front of a collection. |
+| [union](control-flow-expression-language-functions.md#union) | Return a collection that has *all* the items from the specified collections. |
+
+### Logical functions
+
+These functions are useful inside conditions; they can be used to evaluate any type of logic.
+
+| Logical comparison function | Task |
+| | - |
+| [and](control-flow-expression-language-functions.md#and) | Check whether all expressions are true. |
+| [equals](control-flow-expression-language-functions.md#equals) | Check whether both values are equivalent. |
+| [greater](control-flow-expression-language-functions.md#greater) | Check whether the first value is greater than the second value. |
+| [greaterOrEquals](control-flow-expression-language-functions.md#greaterorequals) | Check whether the first value is greater than or equal to the second value. |
+| [if](control-flow-expression-language-functions.md#if) | Check whether an expression is true or false. Based on the result, return a specified value. |
+| [less](control-flow-expression-language-functions.md#less) | Check whether the first value is less than the second value. |
+| [lessOrEquals](control-flow-expression-language-functions.md#lessorequals) | Check whether the first value is less than or equal to the second value. |
+| [not](control-flow-expression-language-functions.md#not) | Check whether an expression is false. |
+| [or](control-flow-expression-language-functions.md#or) | Check whether at least one expression is true. |
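As a rough guide to their semantics, several of these logical functions behave like their Python counterparts. These one-liners are illustrative analogues, not the ADF implementation:

```python
def equals(a, b):  return a == b
def if_(cond, when_true, when_false):  return when_true if cond else when_false
def and_(*exprs):  return all(exprs)
def or_(*exprs):   return any(exprs)
def not_(expr):    return not expr

# Mirrors the earlier example @{if(equals(1, 2), 'Blob', 'Table')}
print(if_(equals(1, 2), "Blob", "Table"))  # Table
```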
+
+### Conversion functions
+
+ These functions are used to convert between each of the native types in the language:
+- string
+- integer
+- float
+- boolean
+- arrays
+- dictionaries
+
+| Conversion function | Task |
+| - | - |
+| [array](control-flow-expression-language-functions.md#array) | Return an array from a single specified input. For multiple inputs, see [createArray](control-flow-expression-language-functions.md#createarray). |
+| [base64](control-flow-expression-language-functions.md#base64) | Return the base64-encoded version for a string. |
+| [base64ToBinary](control-flow-expression-language-functions.md#base64tobinary) | Return the binary version for a base64-encoded string. |
+| [base64ToString](control-flow-expression-language-functions.md#base64tostring) | Return the string version for a base64-encoded string. |
+| [binary](control-flow-expression-language-functions.md#binary) | Return the binary version for an input value. |
+| [bool](control-flow-expression-language-functions.md#bool) | Return the Boolean version for an input value. |
+| [coalesce](control-flow-expression-language-functions.md#coalesce) | Return the first non-null value from one or more parameters. |
+| [createArray](control-flow-expression-language-functions.md#createarray) | Return an array from multiple inputs. |
+| [dataUri](control-flow-expression-language-functions.md#datauri) | Return the data URI for an input value. |
+| [dataUriToBinary](control-flow-expression-language-functions.md#datauritobinary) | Return the binary version for a data URI. |
+| [dataUriToString](control-flow-expression-language-functions.md#datauritostring) | Return the string version for a data URI. |
+| [decodeBase64](control-flow-expression-language-functions.md#decodebase64) | Return the string version for a base64-encoded string. |
+| [decodeDataUri](control-flow-expression-language-functions.md#decodedatauri) | Return the binary version for a data URI. |
+| [decodeUriComponent](control-flow-expression-language-functions.md#decodeuricomponent) | Return a string that replaces escape characters with decoded versions. |
+| [encodeUriComponent](control-flow-expression-language-functions.md#encodeuricomponent) | Return a string that replaces URL-unsafe characters with escape characters. |
+| [float](control-flow-expression-language-functions.md#float) | Return a floating point number for an input value. |
+| [int](control-flow-expression-language-functions.md#int) | Return the integer version for a string. |
+| [json](control-flow-expression-language-functions.md#json) | Return the JavaScript Object Notation (JSON) type value or object for a string or XML. |
+| [string](control-flow-expression-language-functions.md#string) | Return the string version for an input value. |
+| [uriComponent](control-flow-expression-language-functions.md#uriComponent) | Return the URI-encoded version for an input value by replacing URL-unsafe characters with escape characters. |
+| [uriComponentToBinary](control-flow-expression-language-functions.md#uriComponentToBinary) | Return the binary version for a URI-encoded string. |
+| [uriComponentToString](control-flow-expression-language-functions.md#uriComponentToString) | Return the string version for a URI-encoded string. |
+| [xml](control-flow-expression-language-functions.md#xml) | Return the XML version for a string. |
+| [xpath](control-flow-expression-language-functions.md#xpath) | Check XML for nodes or values that match an XPath (XML Path Language) expression, and return the matching nodes or values. |
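+
+For example (an illustrative sketch, not part of the function reference), conversion functions can be nested to round-trip a string through Base64 encoding:
+
+```
+@base64('hello')
+@base64ToString(base64('hello'))
+```
+
+The first expression returns the Base64-encoded form of `hello` (`aGVsbG8=`), and the second decodes it back to the original string.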
+
+### Math functions
+
+These functions can be used with both number types: **integers** and **floats**.
+
+| Math function | Task |
+| - | - |
+| [add](control-flow-expression-language-functions.md#add) | Return the result from adding two numbers. |
+| [div](control-flow-expression-language-functions.md#div) | Return the result from dividing two numbers. |
+| [max](control-flow-expression-language-functions.md#max) | Return the highest value from a set of numbers or an array. |
+| [min](control-flow-expression-language-functions.md#min) | Return the lowest value from a set of numbers or an array. |
+| [mod](control-flow-expression-language-functions.md#mod) | Return the remainder from dividing two numbers. |
+| [mul](control-flow-expression-language-functions.md#mul) | Return the product from multiplying two numbers. |
+| [rand](control-flow-expression-language-functions.md#rand) | Return a random integer from a specified range. |
+| [range](control-flow-expression-language-functions.md#range) | Return an integer array that starts from a specified integer. |
+| [sub](control-flow-expression-language-functions.md#sub) | Return the result from subtracting the second number from the first number. |
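+
+As an illustration, math functions can be nested like any other expression functions (the numeric values here are arbitrary):
+
+```
+@add(mul(2, 3), mod(10, 4))
+```
+
+This evaluates `mul(2, 3)` to `6` and `mod(10, 4)` to `2`, so the expression returns `8`.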
+
+### Date functions
+
+| Date or time function | Task |
+| | - |
+| [addDays](control-flow-expression-language-functions.md#addDays) | Add a number of days to a timestamp. |
+| [addHours](control-flow-expression-language-functions.md#addHours) | Add a number of hours to a timestamp. |
+| [addMinutes](control-flow-expression-language-functions.md#addMinutes) | Add a number of minutes to a timestamp. |
+| [addSeconds](control-flow-expression-language-functions.md#addSeconds) | Add a number of seconds to a timestamp. |
+| [addToTime](control-flow-expression-language-functions.md#addToTime) | Add a number of time units to a timestamp. See also [getFutureTime](control-flow-expression-language-functions.md#getFutureTime). |
+| [convertFromUtc](control-flow-expression-language-functions.md#convertFromUtc) | Convert a timestamp from Universal Time Coordinated (UTC) to the target time zone. |
+| [convertTimeZone](control-flow-expression-language-functions.md#convertTimeZone) | Convert a timestamp from the source time zone to the target time zone. |
+| [convertToUtc](control-flow-expression-language-functions.md#convertToUtc) | Convert a timestamp from the source time zone to Universal Time Coordinated (UTC). |
+| [dayOfMonth](control-flow-expression-language-functions.md#dayOfMonth) | Return the day of the month component from a timestamp. |
+| [dayOfWeek](control-flow-expression-language-functions.md#dayOfWeek) | Return the day of the week component from a timestamp. |
+| [dayOfYear](control-flow-expression-language-functions.md#dayOfYear) | Return the day of the year component from a timestamp. |
+| [formatDateTime](control-flow-expression-language-functions.md#formatDateTime) | Return the timestamp as a string, optionally in a specified format. |
+| [getFutureTime](control-flow-expression-language-functions.md#getFutureTime) | Return the current timestamp plus the specified time units. See also [addToTime](control-flow-expression-language-functions.md#addToTime). |
+| [getPastTime](control-flow-expression-language-functions.md#getPastTime) | Return the current timestamp minus the specified time units. See also [subtractFromTime](control-flow-expression-language-functions.md#subtractFromTime). |
+| [startOfDay](control-flow-expression-language-functions.md#startOfDay) | Return the start of the day for a timestamp. |
+| [startOfHour](control-flow-expression-language-functions.md#startOfHour) | Return the start of the hour for a timestamp. |
+| [startOfMonth](control-flow-expression-language-functions.md#startOfMonth) | Return the start of the month for a timestamp. |
+| [subtractFromTime](control-flow-expression-language-functions.md#subtractFromTime) | Subtract a number of time units from a timestamp. See also [getPastTime](control-flow-expression-language-functions.md#getPastTime). |
+| [ticks](control-flow-expression-language-functions.md#ticks) | Return the `ticks` property value for a specified timestamp. |
+| [utcNow](control-flow-expression-language-functions.md#utcNow) | Return the current timestamp as a string. |
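+
+For example, a common pattern is to combine date functions to build a date-partitioned folder path. This is a sketch; the format string is an assumption, so adjust it to your own naming convention:
+
+```
+@formatDateTime(addDays(utcNow(), -1), 'yyyy/MM/dd')
+```
+
+This takes the current UTC timestamp, subtracts one day, and formats the result as a path segment such as `2021/03/09`.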
+
+## Detailed examples for practice
+
+### Detailed Azure Data Factory copy pipeline with parameters
+
+This [Azure Data Factory copy pipeline parameter passing tutorial](https://azure.microsoft.com/mediahandler/files/resourcefiles/azure-data-factory-passing-parameters/Azure%20data%20Factory-Whitepaper-PassingParameters.pdf) walks you through how to pass parameters between a pipeline and an activity, as well as between activities.
+
+### Detailed Mapping data flow pipeline with parameters
+
+See [Mapping data flow with parameters](https://docs.microsoft.com/azure/data-factory/parameters-data-flow) for a comprehensive example of how to use parameters in a data flow.
+
+### Detailed metadata-driven pipeline with parameters
+
+See [Metadata-driven pipeline with parameters](https://docs.microsoft.com/azure/data-factory/how-to-use-trigger-parameterization) to learn how to use parameters to design metadata-driven pipelines. This is a popular use case for parameters.
+
+## Next steps
+For a list of system variables you can use in expressions, see [System variables](control-flow-system-variables.md).
data-factory Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-using-azure-monitor.md
Create or add diagnostic settings for your data factory.
![Name your settings and select a log-analytics workspace](media/data-factory-monitor-oms/monitor-oms-image2.png) > [!NOTE]
- > Because an Azure log table can't have more than 500 columns, we **highly recommended** you select _Resource-Specific mode_. For more information, see [Log Analytics Known Limitations](../azure-monitor/essentials/resource-logs.md#column-limit-in-azurediagnostics).
+ > Because an Azure log table can't have more than 500 columns, we **highly recommend** that you select _Resource-Specific mode_. For more information, see [AzureDiagnostics Logs reference](/azure-monitor/reference/tables/azurediagnostics#additionalfields-column).
1. Select **Save**.
When querying SSIS package execution logs on Log Analytics, you can join them u
![Querying SSIS package execution logs on Log Analytics](media/data-factory-monitor-oms/log-analytics-query2.png) ## Next steps
-[Monitor and manage pipelines programmatically](monitor-programmatically.md)
+[Monitor and manage pipelines programmatically](monitor-programmatically.md)
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-rest-api.md
Here is the sample output:
In this example, this pipeline contains one Copy activity. The Copy activity refers to the "InputDataset" and the "OutputDataset" created in the previous step as input and output. ```powershell
-$request = "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${dataFactoryName}/pipelines/Adfv2QuickStartPipeline?api-version=${apiVersion}"
+$request = "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/pipelines/Adfv2QuickStartPipeline?api-version=${apiVersion}"
$body = @" { "name": "Adfv2QuickStartPipeline",
databox-online Azure Stack Edge Gpu Deploy Stateful Application Dynamic Provision Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-stateful-application-dynamic-provision-kubernetes.md
Before you can deploy the stateful application, complete the following prerequis
### For client accessing the device - You have a Windows client system that will be used to access the Azure Stack Edge Pro device.
- - The client is running Windows PowerShell 5.0 or later. To download the latest version of Windows PowerShell, go to [Install Windows PowerShell](/powershell/scripting/install/installing-windows-powershell?view=powershell-7&preserve-view=true).
+ - The client is running Windows PowerShell 5.0 or later. To download the latest version of Windows PowerShell, go to [Install Windows PowerShell](/powershell/scripting/install/installing-windows-powershell).
- You can have any other client with a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device) as well. This article describes the procedure when using a Windows client.
databox-online Azure Stack Edge Gpu Deploy Stateful Application Static Provision Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-stateful-application-static-provision-kubernetes.md
Before you can deploy the stateful application, complete the following prerequis
### For client accessing the device - You have a Windows client system that will be used to access the Azure Stack Edge Pro device.
- - The client is running Windows PowerShell 5.0 or later. To download the latest version of Windows PowerShell, go to [Install Windows PowerShell](/powershell/scripting/install/installing-windows-powershell?view=powershell-7&preserve-view=true).
+ - The client is running Windows PowerShell 5.0 or later. To download the latest version of Windows PowerShell, go to [Install Windows PowerShell](/powershell/scripting/install/installing-windows-powershell).
- You can have any other client with a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device) as well. This article describes the procedure when using a Windows client.
databox-online Azure Stack Edge Gpu Deploy Stateless Application Git Ops Guestbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-stateless-application-git-ops-guestbook.md
Before you can deploy the stateless application, make sure that you have complet
1. You have a Windows client system that will be used to access the Azure Stack Edge Pro device.
- - The client is running Windows PowerShell 5.0 or later. To download the latest version of Windows PowerShell, go to [Install Windows PowerShell](/powershell/scripting/install/installing-windows-powershell?view=powershell-7&preserve-view = true).
+ - The client is running Windows PowerShell 5.0 or later. To download the latest version of Windows PowerShell, go to [Install Windows PowerShell](/powershell/scripting/install/installing-windows-powershell).
- You can have any other client with a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device) as well. This article describes the procedure when using a Windows client.
databox-online Azure Stack Edge Gpu Manage Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-certificates.md
Previously updated : 02/22/2021 Last updated : 03/08/2021 # Use certificates with Azure Stack Edge Pro GPU device [!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-This article describes the types of certificates that can be installed on your Azure Stack Edge Pro device. The article also includes the details for each certificate type along with the procedure to install and identify the expiration date.
+This article describes the types of certificates that can be installed on your Azure Stack Edge Pro device. The article also includes the details for each certificate type along with the procedure to install and identify the expiration date.
## About certificates
The .pfx file backup is now saved in the location you selected and is ready to b
## Supported certificate algorithms
- Only the Rivest–Shamir–Adleman (RSA) certificates are supported with your Azure Stack Edge Pro device. If Elliptic Curve Digital Signature Algorithm (ECDSA) certificates are used, then the device behavior is indeterminate.
+ Only Rivest–Shamir–Adleman (RSA) certificates are supported with your Azure Stack Edge Pro device. Elliptic Curve Digital Signature Algorithm (ECDSA) certificates are not supported.
Certificates that contain an RSA public key are referred to as RSA certificates. Certificates that contain an Elliptic Curve Cryptographic (ECC) public key are referred to as ECDSA (Elliptic Curve Digital Signature Algorithm) certificates.
databox-online Azure Stack Edge J Series Deploy Stateless Application Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-deploy-stateless-application-kubernetes.md
Before you can create a Kubernetes cluster and use the `kubectl` command-line to
- You have sign-in credentials to a 1-node Azure Stack Edge Pro device. -- Windows PowerShell 5.0 or later is installed on a Windows client system to access the Azure Stack Edge Pro device. You can have any other client with a Supported operating system as well. This article describes the procedure when using a Windows client. To download the latest version of Windows PowerShell, go to [Installing Windows PowerShell](/powershell/scripting/install/installing-windows-powershell?view=powershell-7&preserve-view=true).
+- Windows PowerShell 5.0 or later is installed on a Windows client system to access the Azure Stack Edge Pro device. You can have any other client with a Supported operating system as well. This article describes the procedure when using a Windows client. To download the latest version of Windows PowerShell, go to [Installing Windows PowerShell](/powershell/scripting/install/installing-windows-powershell).
- Compute is enabled on the Azure Stack Edge Pro device. To enable compute, go to the **Compute** page in the local UI of the device. Then select a network interface that you want to enable for compute. Select **Enable**. Enabling compute results in the creation of a virtual switch on your device on that network interface. For more information, see [Enable compute network on your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md).
databox Data Box Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-ordered.md
Previously updated : 01/13/2021 Last updated : 03/08/2021 #Customer intent: As an IT admin, I need to be able to order Data Box to upload on-premises data from my server onto Azure.
You will see the following output:
WSManStackVersion 3.0 ```
-If your version is lower than 6.2.4, you need to upgrade your version of Windows PowerShell. To install the latest version of Windows PowerShell, see [Install Azure PowerShell](/powershell/scripting/install/installing-powershell?view=powershell-7&preserve-view=true).
+If your version is lower than 6.2.4, you need to upgrade your version of Windows PowerShell. To install the latest version of Windows PowerShell, see [Install Azure PowerShell](/powershell/scripting/install/installing-powershell).
**Install Azure PowerShell and Data Box modules**
Do the following steps in the Azure portal to order a device.
![Expanded Bring your own password options for a Data Box import order](media/data-box-deploy-ordered/select-data-box-import-security-02.png) - To use your own password for your new device, by **Set preference for the device password**, select **Use your own password**, and type a password that meets the security requirements.
+
+ The password must contain from 12 to 15 characters, including at least one uppercase letter, one lowercase letter, one special character, and one number.
+
+ - Allowed special characters: @ # - $ % ^ ! + = ; : _ ( )
+ - Characters not allowed: I i L o O 0
![Options for using your own device password on the Security screen for a Data Box import order](media/data-box-deploy-ordered/select-data-box-import-security-03.png) - To use your own passwords for shares:
- - By **Set preference for share passwords**, select **Use your own passwords** and then **Select passwords for the shares**.
+ 1. By **Set preference for share passwords**, select **Use your own passwords** and then **Select passwords for the shares**.
- ![Options for using your own share passwords on the Security screen for a Data Box import order](media/data-box-deploy-ordered/select-data-box-import-security-04.png)
+ ![Options for using your own share passwords on the Security screen for a Data Box import order](media/data-box-deploy-ordered/select-data-box-import-security-04.png)
- - Type a password for each storage account in the order. The password will be used on all shares for the storage account.
+ 1. Type a password for each storage account in the order. The password will be used on all shares for the storage account.
+
+ The password must contain from 12 to 64 characters, including at least one uppercase letter, one lowercase letter, one special character, and one number.
+
+ - Allowed special characters: @ # - $ % ^ ! + = ; : _ ( )
+ - Characters not allowed: I i L o O 0
- To use the same password for all of the storage accounts, select **Copy to all**. When you finish, select **Save**.
+ 1. To use the same password for all of the storage accounts, select **Copy to all**.
+
+ 1. When you finish, select **Save**.
- ![Screen for entering share passwords for a Data Box import order](media/data-box-deploy-ordered/select-data-box-import-security-05.png)
+ ![Screen for entering share passwords for a Data Box import order](media/data-box-deploy-ordered/select-data-box-import-security-05.png)
- On the **Security** screen, you can use **View or change passwords** to change the passwords.
+ On the **Security** screen, you can use **View or change passwords** to change the passwords.
16. In **Security**, if you want to enable software-based double encryption, expand **Double-encryption (for highly secure environments)**, and select **Enable double encryption for the order**.
defender-for-iot Concept Agent Portfolio Overview Os Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-agent-portfolio-overview-os-support.md
Title: Agent portfolio overview and OS support
+ Title: Agent portfolio overview and OS support (Preview)
description: Azure Defender for IoT provides a large portfolio of agents based on the device type.
-# Agent portfolio overview and OS support
+# Agent portfolio overview and OS support (Preview)
Azure Defender for IoT provides a large portfolio of agents based on the device type.
The Azure Defender for IoT micro agent comes built in as part of the Azure RTOS
## Next steps
-Learn more about the [Standalone micro agent overview ](concept-standalone-micro-agent-overview.md).
+Learn more about the [Standalone micro agent overview (Preview)](concept-standalone-micro-agent-overview.md).
defender-for-iot Concept Event Aggregation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-event-aggregation.md
Title: Event aggregation
+ Title: Event aggregation (Preview)
+description: Defender for IoT security agents collect data and system events from your local device and send the data to the Azure cloud for processing and analytics.
-# Event aggregation
+# Event aggregation (Preview)
+Defender for IoT security agents collect data and system events from your local device and send the data to the Azure cloud for processing and analytics. The Defender for IoT micro agent collects many types of device events, including new process and new connection events. Both new process and new connection events may occur frequently on a device within a second. This capability is important for comprehensive security; however, the number of messages security agents send may quickly meet or exceed your IoT Hub quota and cost limits. Nevertheless, these events contain highly valuable security information that is crucial to protecting your device.
defender-for-iot Concept Security Agent Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-security-agent-authentication.md
Title: Security agent authentication
+ Title: Security agent authentication (Preview)
description: Perform micro agent authentication with two possible methods.
-# Micro agent authentication methods
+# Micro agent authentication methods (Preview)
There are two options for authentication with the Defender for IoT Micro Agent:
defender-for-iot Concept Standalone Micro Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-standalone-micro-agent-overview.md
Title: Standalone micro agent overview
+ Title: Standalone micro agent overview (Preview)
description: The Azure Defender for IoT security agents allows you to build security directly into your new IoT devices and Azure IoT projects.
-# Standalone micro agent overview
+# Standalone micro agent overview (Preview)
Security is a near-universal concern for IoT implementers. IoT devices have unique needs for endpoint monitoring, security posture management, and threat detection – all with highly specific performance requirements.
The Azure Defender for IoT micro agent is easy to deploy, and has minimal perfor
## Next steps
-Check your [Micro agent authentication methods ](concept-security-agent-authentication.md).
+Check your [Micro agent authentication methods (Preview)](concept-security-agent-authentication.md).
defender-for-iot Quickstart Building The Defender Micro Agent From Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-building-the-defender-micro-agent-from-source.md
Title: Build the Defender micro agent from source code
+ Title: Build the Defender micro agent from source code (Preview)
description: Micro Agent includes an infrastructure, which can be used to customize your distribution.
-# Build the Defender micro agent from source code
+# Build the Defender micro agent from source code (Preview)
The Micro Agent includes an infrastructure that can be used to customize your distribution. For a list of the available configuration parameters, see the `configs/LINUX_BASE.conf` file.
defender-for-iot Quickstart Create Micro Agent Module Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-create-micro-agent-module-twin.md
Title: Create a Defender IoT micro agent module twin
+ Title: Create a Defender IoT micro agent module twin (Preview)
description: Learn how to create individual DefenderIotMicroAgent module twins for new devices.
-# Create a Defender IoT micro agent module twin
+# Create a Defender IoT micro agent module twin (Preview)
You can create individual **DefenderIotMicroAgent** module twins for new devices. You can also batch create module twins for all devices in an IoT Hub.
defender-for-iot Quickstart Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-standalone-agent-binary-installation.md
Title: Install Defender for IoT micro agent
+ Title: Install Defender for IoT micro agent (Preview)
description: Learn how to install, and authenticate the Defender Micro Agent.
-# Install Defender for IoT micro agent
+# Install Defender for IoT micro agent (Preview)
+This article explains how to install and authenticate the Defender micro agent. ## Prerequisites
-Prior to installing the Defender for IoT module you must create a module identity in the IoT Hub. For more information on how to create a module identity, see [Create a Defender IoT micro agent module twin ](quickstart-create-micro-agent-module-twin.md).
+Prior to installing the Defender for IoT module you must create a module identity in the IoT Hub. For more information on how to create a module identity, see [Create a Defender IoT micro agent module twin (Preview)](quickstart-create-micro-agent-module-twin.md).
## Install the package
defender-for-iot Troubleshoot Defender Micro Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/troubleshoot-defender-micro-agent.md
Title: Defender IoT micro agent troubleshooting
+ Title: Defender IoT micro agent troubleshooting (Preview)
description: Learn how to handle unexpected or unexplained errors.
-# Defender IoT micro agent troubleshooting
+# Defender IoT micro agent troubleshooting (Preview)
If you have unexpected or unexplained errors, use the following troubleshooting methods to try to resolve your issues. You can also reach out to the Azure Defender for IoT product team for assistance as needed.
dev-spaces Setup Cicd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dev-spaces/how-to/setup-cicd.md
Although this article guides you with Azure DevOps, the same concepts would appl
## Prerequisites * Azure Kubernetes Service (AKS) cluster with Azure Dev Spaces enabled * [Azure Dev Spaces CLI installed](upgrade-tools.md)
-* [Azure DevOps organization with a project](/azure/devops/user-guide/sign-up-invite-teammates?view=vsts)
+* [Azure DevOps organization with a project](/azure/devops/user-guide/sign-up-invite-teammates)
* [Azure Container Registry (ACR)](../../container-registry/container-registry-get-started-azure-cli.md) * Azure Container Registry [administrator account](../../container-registry/container-registry-authentication.md#admin-account) details available * [Authorize your AKS cluster to pull from your Azure Container Registry](../../aks/cluster-container-registry-integration.md)
The option to disable:
> [!Note] > The Azure DevOps _New YAML pipeline creation experience_ preview feature conflicts with creating pre-defined build pipelines at this time. You need to disable it for now in order to deploy our pre-defined build pipeline.
-In the _azds_updates_ branch we've included a simple [Azure Pipeline YAML](/azure/devops/pipelines/yaml-schema?view=vsts&tabs=schema) that defines the build steps required for *mywebapi* and *webfrontend*.
+In the _azds_updates_ branch we've included a simple [Azure Pipeline YAML](/azure/devops/pipelines/yaml-schema?tabs=schema) that defines the build steps required for *mywebapi* and *webfrontend*.
Depending on the language you've chosen, the pipeline YAML has been checked-in at a path similar to: `samples/dotnetcore/getting-started/azure-pipelines.dotnetcore.yml`
devops-project Azure Devops Project Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devops-project/azure-devops-project-aspnet-core.md
You can delete Azure App Service and other related resources that you created wh
To learn more about modifying the build and release pipelines to meet the needs of your team, see this tutorial: > [!div class="nextstepaction"]
-> [Customize CD process](/azure/devops/pipelines/release/define-multistage-release-process?view=vsts)
+> [Customize CD process](/azure/devops/pipelines/release/define-multistage-release-process)
## Videos
devops-project Azure Devops Project Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devops-project/azure-devops-project-cosmos-db.md
You can modify these build and release pipelines to meet the needs of your team.
> * Commit changes to Git and automatically deploy them to Azure > * Clean up resources
-See [Define your multi-stage continuous deployment (CD) pipeline](/azure/devops/pipelines/release/define-multistage-release-process?view=azure-devops&viewFallbackFrom=vsts) for more information and next steps.
+See [Define your multi-stage continuous deployment (CD) pipeline](/azure/devops/pipelines/release/define-multistage-release-process) for more information and next steps.
devops-project Azure Devops Project Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devops-project/azure-devops-project-github.md
When you configured your CI/CD process in this tutorial, you automatically creat
To learn more about the CI/CD pipeline, see: > [!div class="nextstepaction"]
-> [Define your multi-stage continuous deployment (CD) pipeline](/azure/devops/pipelines/release/define-multistage-release-process?view=vsts)
+> [Define your multi-stage continuous deployment (CD) pipeline](/azure/devops/pipelines/release/define-multistage-release-process)
To learn more about application monitoring, see:
devops-project Azure Devops Project Go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devops-project/azure-devops-project-go.md
When they are no longer needed, you can delete the Azure App Service instance an
To learn more about modifying the build and release pipelines to meet the needs of your team, see: > [!div class="nextstepaction"]
-> [Define your multi-stage continuous deployment (CD) pipeline](/azure/devops/pipelines/release/define-multistage-release-process?view=vsts)
+> [Define your multi-stage continuous deployment (CD) pipeline](/azure/devops/pipelines/release/define-multistage-release-process)
devops-project Azure Devops Project Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devops-project/azure-devops-project-java.md
You can delete Azure App Service and other related resources when you don't need
When you configured your CI/CD process, build and release pipelines were automatically created. You can modify these build and release pipelines to meet the needs of your team. To learn more about the CI/CD pipeline, see: > [!div class="nextstepaction"]
-> [Customize CD process](/azure/devops/pipelines/release/define-multistage-release-process?view=vsts)
+> [Customize CD process](/azure/devops/pipelines/release/define-multistage-release-process)
devops-project Azure Devops Project Php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devops-project/azure-devops-project-php.md
You can delete Azure App Service and other related resources when you don't need
When you configured your CI/CD process, build and release pipelines were automatically created. You can modify these build and release pipelines to meet the needs of your team. To learn more about the CI/CD pipeline, see this tutorial: > [!div class="nextstepaction"]
-> [Customize CD process](/azure/devops/pipelines/release/define-multistage-release-process?view=vsts)
+> [Customize CD process](/azure/devops/pipelines/release/define-multistage-release-process)
devops-project Azure Devops Project Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devops-project/azure-devops-project-python.md
You can delete Azure App Service and related resources when you don't need them
When you configured your CI/CD process, build and release pipelines were automatically created. You can modify these build and release pipelines to meet the needs of your team. To learn more about the CI/CD pipeline, see: > [!div class="nextstepaction"]
-> [Customize CD process](/azure/devops/pipelines/release/define-multistage-release-process?view=vsts)
+> [Customize CD process](/azure/devops/pipelines/release/define-multistage-release-process)
devops-project Azure Devops Project Ruby https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devops-project/azure-devops-project-ruby.md
When they are no longer needed, you can delete the Azure App Service instance an
To learn more about modifying the build and release pipelines to meet the needs of your team, see: > [!div class="nextstepaction"]
-> [Define your multi-stage continuous deployment (CD) pipeline](/azure/devops/pipelines/release/define-multistage-release-process?view=vsts)
+> [Define your multi-stage continuous deployment (CD) pipeline](/azure/devops/pipelines/release/define-multistage-release-process)
devops-project Azure Devops Project Service Fabric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devops-project/azure-devops-project-service-fabric.md
You can optionally modify the Azure CI/CD pipeline to meet the needs of your tea
To learn more about Service Fabric and microservices, see: > [!div class="nextstepaction"]
-> [Use a microservices approach for building applications](/azure/devops/pipelines/release/define-multistage-release-process?view=vsts)
+> [Use a microservices approach for building applications](/azure/devops/pipelines/release/define-multistage-release-process)
devops-project Azure Devops Project Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devops-project/azure-devops-project-sql-database.md
You can optionally modify these build and release pipelines to meet the needs of
To learn more about the CI/CD pipeline, see: > [!div class="nextstepaction"]
-> [Define your multi-stage continuous deployment (CD) pipeline](/azure/devops/pipelines/release/define-multistage-release-process?view=vsts)
+> [Define your multi-stage continuous deployment (CD) pipeline](/azure/devops/pipelines/release/define-multistage-release-process)
## Videos
devops-project Azure Devops Project Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devops-project/azure-devops-project-vms.md
In this tutorial, you learned how to:
To learn more about the CI/CD pipeline, see: > [!div class="nextstepaction"]
-> [Define your multi-stage continuous deployment (CD) pipeline](/azure/devops/pipelines/release/define-multistage-release-process?view=vsts)
+> [Define your multi-stage continuous deployment (CD) pipeline](/azure/devops/pipelines/release/define-multistage-release-process)
devtest-labs Add Artifact Repository https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/add-artifact-repository.md
New-AzResourceGroupDeployment `
After New-AzResourceGroupDeployment runs successfully, the command outputs important information like the provisioning state (should be succeeded) and any outputs for the template. ## Use Azure PowerShell
-This section provides you a sample PowerShell script that can be used to add an artifact repository to a lab. If you don't have Azure PowerShell, see [How to install and configure Azure PowerShell](/powershell/azure/?view=azps-1.2.0) for detailed instructions to install it.
+This section provides a sample PowerShell script that you can use to add an artifact repository to a lab. If you don't have Azure PowerShell, see [How to install and configure Azure PowerShell](/powershell/azure/) for detailed installation instructions.
### Full script Here is the full script, including some verbose messages and comments:
devtest-labs Automate Add Lab User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/automate-add-lab-user.md
The role definition ID is the string identifier for the existing role definition
The subscription ID is obtained by using `subscription().subscriptionId` template function.
-You need to get the role definition for the `DevTest Labs User` built-in role. To get the GUID for the [DevTest Labs User](../role-based-access-control/built-in-roles.md#devtest-labs-user) role, you can use the [Role Assignments REST API](/rest/api/authorization/roleassignments) or the [Get-AzRoleDefinition](/powershell/module/az.resources/get-azroledefinition?view=azps-1.8.0) cmdlet.
+You need to get the role definition for the `DevTest Labs User` built-in role. To get the GUID for the [DevTest Labs User](../role-based-access-control/built-in-roles.md#devtest-labs-user) role, you can use the [Role Assignments REST API](/rest/api/authorization/roleassignments) or the [Get-AzRoleDefinition](/powershell/module/az.resources/get-azroledefinition) cmdlet.
```powershell $dtlUserRoleDefId = (Get-AzRoleDefinition -Name "DevTest Labs User").Id
New-AzureRmResourceGroupDeployment -Name "MyLabResourceGroup-$(New-Guid)" -Resou
It's important to note that the group deployment name and role assignment GUID need to be unique. If you try to deploy a resource assignment with a non-unique GUID, then you'll get a `RoleAssignmentUpdateNotPermitted` error.
-If you plan to use the template several times to add several Active Directory objects to the DevTest Labs User role for your lab, consider using dynamic objects in your PowerShell command. The following example uses the [New-Guid](/powershell/module/Microsoft.PowerShell.Utility/New-Guid?view=powershell-5.0) cmdlet to specify the resource group deployment name and role assignment GUID dynamically.
+If you plan to use the template several times to add several Active Directory objects to the DevTest Labs User role for your lab, consider using dynamic objects in your PowerShell command. The following example uses the [New-Guid](/powershell/module/Microsoft.PowerShell.Utility/New-Guid) cmdlet to specify the resource group deployment name and role assignment GUID dynamically.
```powershell New-AzureRmResourceGroupDeployment -Name "MyLabResourceGroup-$(New-Guid)" -ResourceGroupName 'MyLabResourceGroup' -TemplateFile .\azuredeploy.json -roleAssignmentGuid "$(New-Guid)" -labName "MyLab" -principalId "11111111-1111-1111-1111-111111111111"
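Tying the template parameters above together, the role assignment resource inside *azuredeploy.json* can be sketched roughly as follows. This is an illustrative fragment, not the article's exact template: the `apiVersion` and the `roleDefId` parameter name are assumptions, while `labName`, `roleAssignmentGuid`, and `principalId` match the parameters passed in the PowerShell command above.

```json
{
  "type": "Microsoft.DevTestLab/labs/providers/roleAssignments",
  "apiVersion": "2015-07-01",
  "name": "[concat(parameters('labName'), '/Microsoft.Authorization/', parameters('roleAssignmentGuid'))]",
  "properties": {
    "roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', parameters('roleDefId'))]",
    "principalId": "[parameters('principalId')]"
  }
}
```

The `subscription().subscriptionId` template function supplies the subscription ID at deployment time, so the same template works across subscriptions.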
devtest-labs Best Practices Distributive Collaborative Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/best-practices-distributive-collaborative-development-environment.md
You can have a common source of custom images that are deployed to labs on a nig
[Formulas](devtest-lab-manage-formulas.md) are lab-specific and don't have a distribution mechanism. The lab members do all the development of formulas. ## Code repository-based resources
-There are two different features that are based on code repositories, artifacts and environments. This article goes over the features and how to most effectively set up repositories and workflow to allow the ability to customize the available artifacts and environments at the organization level or team level. This workflow is based on standard [source code control branching strategy](/azure/devops/repos/tfvc/branching-strategies-with-tfvc?view=azure-devops).
+There are two different features that are based on code repositories: artifacts and environments. This article goes over these features and how to most effectively set up repositories and a workflow so that you can customize the available artifacts and environments at the organization level or team level. This workflow is based on a standard [source code control branching strategy](/azure/devops/repos/tfvc/branching-strategies-with-tfvc).
### Key concepts The source information for artifacts includes metadata and scripts. The source information for environments includes metadata and Resource Manager templates with any supporting files like PowerShell scripts, DSC scripts, Zip files, and so on.
The most common configuration for source code control (SCC) is to set up a multi
- Business unit/Division-wide resources - Team-specific resources.
-Each of these levels link to a different repository where the main branch is required to be of the production quality. The [branches](/azure/devops/repos/git/git-branching-guidance?view=azure-devops) in each repository would be for development of those specific resources (artifacts or templates). This structure aligns well with DevTest Labs as you can easily connect multiple repositories and multiple branches at the same time to the organization's labs. The repository name is included in the user interface (UI) to avoid confusion when there are identical names, description, and publisher.
+Each of these levels links to a different repository where the main branch is required to be of production quality. The [branches](/azure/devops/repos/git/git-branching-guidance) in each repository would be for development of those specific resources (artifacts or templates). This structure aligns well with DevTest Labs as you can easily connect multiple repositories and multiple branches at the same time to the organization's labs. The repository name is included in the user interface (UI) to avoid confusion when there are identical names, descriptions, and publishers.
The following diagram shows two repositories: a company repository that is maintained by the IT Division, and a division repository maintained by the R&D division.
devtest-labs Devtest Lab Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-vm-powershell.md
This article shows you how to create a virtual machine in Azure DevTest Labs by
Before you begin: - [Create a lab](devtest-lab-create-lab.md) if you don't want to use an existing lab to test the script or commands in this article. -- [Install Azure PowerShell](/powershell/azure/install-az-ps?view=azps-1.7.0) or use Azure Cloud Shell that's integrated into the Azure portal.
+- [Install Azure PowerShell](/powershell/azure/install-az-ps) or use Azure Cloud Shell that's integrated into the Azure portal.
## PowerShell script
-The sample script in this section uses the [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction?view=azps-1.7.0) cmdlet. This cmdlet takes the lab's resource ID, name of the action to perform (`createEnvironment`), and the parameters necessary perform that action. The parameters are in a hash table that contains all the virtual machine description properties.
+The sample script in this section uses the [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction) cmdlet. This cmdlet takes the lab's resource ID, the name of the action to perform (`createEnvironment`), and the parameters necessary to perform that action. The parameters are in a hash table that contains all the virtual machine description properties.
```powershell [CmdletBinding()]
devtest-labs Extend Devtest Labs Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/extend-devtest-labs-azure-functions.md
There's an additional action that can be taken, for any VMs on which the Windo
This section provides step-by-step instructions for setting up the Azure resources needed to update the **Internal support** page. This walkthrough provides one example of extending DevTest Labs. You can use this pattern for other scenarios. ### Step 1: Create a service principal
-The first step is to get a service principal with permission to the subscription that contains the lab. The service principal must use the password-based authentication. It can be done with [Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli), [Azure PowerShell](/powershell/azure/create-azure-service-principal-azureps?view=azps-2.5.0), or the [Azure portal](../active-directory/develop/howto-create-service-principal-portal.md). If you already have a service principal to use, you can skip this step.
+The first step is to get a service principal with permission to the subscription that contains the lab. The service principal must use password-based authentication. You can create one with the [Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli), [Azure PowerShell](/powershell/azure/create-azure-service-principal-azureps), or the [Azure portal](../active-directory/develop/howto-create-service-principal-portal.md). If you already have a service principal to use, you can skip this step.
Note down the **application ID**, **key**, and **tenant ID** for the service principal. You will need them later in this walkthrough.
devtest-labs Image Factory Set Retention Policy Cleanup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/image-factory-set-retention-policy-cleanup.md
Adding a new image to your factory is also simple. When you want to include a ne
## Next steps
-1. [Schedule your build/release](/azure/devops/pipelines/build/triggers?view=azure-devops&tabs=designer) to run the image factory periodically. It refreshes your factory-generated images on a regular basis.
+1. [Schedule your build/release](/azure/devops/pipelines/build/triggers?tabs=designer) to run the image factory periodically. It refreshes your factory-generated images on a regular basis.
2. Make more golden images for your factory. You may also consider [creating artifacts](devtest-lab-artifact-author.md) to script additional pieces of your VM setup tasks and include the artifacts in your factory images. 4. Create a [separate build/release](/azure/devops/pipelines/overview?view=azure-devops-2019) to run the **DistributeImages** script separately. You can run this script when you make changes to Labs.json and get images copied to target labs without having to recreate all the images again.
devtest-labs Use Command Line Start Stop Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/use-command-line-start-stop-virtual-machines.md
However, in some scenarios, you may want to automate starting and stopping of VM
> [!NOTE] > The following script uses the Azure PowerShell Az module.
-The following PowerShell script starts a VM in a lab. [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction?view=azps-1.7.0) is the primary focus for this script. The **ResourceId** parameter is the fully qualified resource ID for the VM in the lab. The **Action** parameter is where the **Start** or **Stop** options are set depending on what is needed.
+The following PowerShell script starts a VM in a lab. [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction) is the primary focus for this script. The **ResourceId** parameter is the fully qualified resource ID for the VM in the lab. The **Action** parameter is where the **Start** or **Stop** options are set depending on what is needed.
```powershell # The id of the subscription
digital-twins How To Authenticate Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-authenticate-client.md
To proceed, you will need a client app project in which you write your code. If
* [.NET (C#)](/dotnet/api/azure.identity) * [Java](/java/api/overview/azure/identity-readme) * [JavaScript](/javascript/api/overview/azure/identity-readme)
-* [Python](/python/api/overview/azure/identity-readme?preserve-view=true&view=azure-python)
+* [Python](/python/api/overview/azure/identity-readme)
Three common credential-obtaining methods in `Azure.Identity` are:
digital-twins How To Create Custom Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-custom-sdks.md
# Mandatory fields. Title: Create custom SDKs for Azure Digital Twins with AutoRest
+ Title: Create custom-language SDKs with AutoRest
-description: See how to generate custom SDKs, to use Azure Digital Twins with languages other than C#.
+description: Learn how to use AutoRest to generate custom-language SDKs, for writing Azure Digital Twins code in other languages that don't have published SDKs.
Previously updated : 4/24/2020 Last updated : 3/9/2021 -+ # Optional fields. Don't forget to remove # if you need a field. #
#
-# Create custom SDKs for Azure Digital Twins using AutoRest
+# Create custom-language SDKs for Azure Digital Twins using AutoRest
-Right now, the only published data plane SDKs for interacting with the Azure Digital Twins APIs are for .NET (C#), JavaScript, and Java. You can read about these SDKs, and the APIs in general, in [*How-to: Use the Azure Digital Twins APIs and SDKs*](how-to-use-apis-sdks.md). If you are working in another language, this article will show you how to generate your own data plane SDK in the language of your choice, using AutoRest.
+If you need to work with Azure Digital Twins using a language that does not have a [published Azure Digital Twins SDK](how-to-use-apis-sdks.md), this article will show you how to use AutoRest to generate your own SDK in the language of your choice.
->[!NOTE]
-> You can also use AutoRest to generate a control plane SDK if you would like. To do this, complete the steps in this article using the latest **control plane Swagger** (OpenAPI) file from the [control plane Swagger folder](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins/) instead of the data plane one.
+The examples in this article show the creation of a [data plane SDK](how-to-use-apis-sdks.md#overview-data-plane-apis), but this process will work for generating a [control plane SDK](how-to-use-apis-sdks.md#overview-control-plane-apis) as well.
-## Set up your machine
+## Prerequisites
-To generate an SDK, you will need:
-* [AutoRest](https://github.com/Azure/autorest), version 2.0.4413 (version 3 isn't currently supported)
-* [Node.js](https://nodejs.org) as a pre-requisite to AutoRest
-* The latest Azure Digital Twins **data plane Swagger** (OpenAPI) file from the [data plane Swagger folder](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/data-plane/Microsoft.DigitalTwins), and its accompanying folder of examples. Download the Swagger file *digitaltwins.json* and its folder of examples to your local machine.
+To generate an SDK, you'll first need to complete the following setup on your local machine:
+* Install [**AutoRest**](https://github.com/Azure/autorest), version 2.0.4413 (version 3 isn't currently supported)
+* Install [**Node.js**](https://nodejs.org), which is a prerequisite for using AutoRest
+* Install [**Visual Studio**](https://visualstudio.microsoft.com/downloads/)
+* Download the latest Azure Digital Twins **data plane Swagger** (OpenAPI) file from the [data plane Swagger folder](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/data-plane/Microsoft.DigitalTwins), along with its accompanying folder of examples. The Swagger file is the one called *digitaltwins.json*.
-Once your machine is equipped with everything from the list above, you're ready to use AutoRest to create the SDK.
+>[!TIP]
+> To create a **control plane SDK** instead, complete the steps in this article using the latest **control plane Swagger** (OpenAPI) file from the [control plane Swagger folder](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins/) instead of the data plane one.
+
+Once your machine is equipped with everything from the list above, you're ready to use AutoRest to create an SDK.
-## Create the SDK with AutoRest
+## Create the SDK using AutoRest
-If you have Node.js installed, you can run this command to make sure you have the right version of AutoRest installed:
+Once you have Node.js installed, you can run this command to make sure you have the required version of AutoRest installed:
```cmd/sh npm install -g autorest@2.0.4413 ```
As a result, you'll see a new folder named *DigitalTwinsApi* in your working dir
AutoRest supports a wide range of language code generators.
-## Add the SDK to a Visual Studio project
+## Make the SDK into a class library
-You can include the files generated by AutoRest directly into a .NET solution. However, it's likely that you'll want to include the Azure Digital Twins SDK in several separate projects (your client apps, Azure Functions apps, and so on). For this reason, it can be useful to build a separate project (a .NET class library) from the generated files. Then, you can include this class library project into several solutions as a project reference.
+You can include the files generated by AutoRest directly into a .NET solution. However, it's likely that you'll want to include the Azure Digital Twins SDK in several separate projects (your client apps, Azure Functions apps, and more). For this reason, it can be useful to build a separate project (a .NET class library) from the generated files. Then, you can include this class library project into several solutions as a project reference.
-This section gives instructions on how to build the SDK as a class library, which is its own project and can be included into other projects. These steps rely on **Visual Studio** (you can install the latest version from [here](https://visualstudio.microsoft.com/downloads/)).
+This section gives instructions on how to build the SDK as a class library, which is its own project and can be included into other projects. These steps rely on **Visual Studio**.
Here are the steps:
To add these, open *Tools > NuGet Package Manager > Manage NuGet Packages for So
You can now build the project, and include it as a project reference in any Azure Digital Twins application you write.
-## General guidelines for generated SDKs
+## Tips for using the SDK
This section contains general information and guidelines for using the generated SDK.
digital-twins How To Use Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-apis-sdks.md
To use the data plane APIs:
- you can find the SDK source in GitHub: [Azure Digital Twins Core client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/digitaltwins/digital-twins-core) * You can use the **Python SDK**. To use the Python SDK... - you can view and install the package from PyPi: [Azure Digital Twins Core client library for Python](https://pypi.org/project/azure-digitaltwins-core/).
- - you can view the [SDK reference documentation](/python/api/azure-digitaltwins-core/azure.digitaltwins.core?view=azure-python&preserve-view=true).
+ - you can view the [SDK reference documentation](/python/api/azure-digitaltwins-core/azure.digitaltwins.core).
- you can find the SDK source in GitHub: [Azure Digital Twins Core client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/digitaltwins/azure-digitaltwins-core) * You can generate an SDK for another language using AutoRest. Follow the instructions in [*How-to: Create custom SDKs for Azure Digital Twins with AutoRest*](how-to-create-custom-sdks.md).
You can also find additional samples in the [GitHub repo for the .NET (C#) SDK](
Serialization helpers are helper functions available within the SDK for quickly creating or deserializing twin data for access to basic information. Since the core SDK methods return twin data as JSON by default, it can be helpful to use these helper classes to break the twin data down further. The available helper classes are:
-* `BasicDigitalTwin`: Represents the core data of a digital twin
-* `BasicRelationship`: Represents the core data of a relationship
-* `UpdateOperationUtility`: Represents JSON Patch information used in update calls
-* `WriteableProperty`: Represents property metadata
+* `BasicDigitalTwin`: Generically represents the core data of a digital twin
+* `BasicDigitalTwinComponent`: Generically represents a component in the `Contents` properties of a `BasicDigitalTwin`
+* `BasicRelationship`: Generically represents the core data of a relationship
+* `DigitalTwinsJsonPropertyName`: Contains the string constants for use in JSON serialization and deserialization for custom digital twin types
##### Deserialize a digital twin
-You can always deserialize twin data using the JSON library of your choice, like `System.Test.Json` or `Newtonsoft.Json`. For basic access to a twin, the helper classes make this a bit more convenient.
+You can always deserialize twin data using the JSON library of your choice, like `System.Text.Json` or `Newtonsoft.Json`. For basic access to a twin, the helper classes can make this more convenient.
The `BasicDigitalTwin` helper class also gives you access to properties defined on the twin, through a `Dictionary<string, object>`. To list properties of the twin, you can use: :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="GetTwin":::
+> [!NOTE]
+> `BasicDigitalTwin` uses `System.Text.Json` attributes. In order to use `BasicDigitalTwin` with your [DigitalTwinsClient](/dotnet/api/azure.digitaltwins.core.digitaltwinsclient?view=azure-dotnet&preserve-view=true), you must either initialize the client with the default constructor, or, if you want to customize the serializer option, use the [JsonObjectSerializer](/dotnet/api/azure.core.serialization.jsonobjectserializer?view=azure-dotnet&preserve-view=true).
+ ##### Create a digital twin Using the `BasicDigitalTwin` class, you can prepare data for creating a twin instance:
dms Howto Sql Server To Azure Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/howto-sql-server-to-azure-sql-powershell.md
To complete these steps, you need:
* [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) v3.3 or later. * To have created a Microsoft Azure Virtual Network by using the Azure Resource Manager deployment model, which provides the Azure Database Migration Service with site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). * To have completed assessment of your on-premises database and schema migration using Data Migration Assistant as described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem)
-* To download and install the Az.DataMigration module from the PowerShell Gallery by using [Install-Module PowerShell cmdlet](/powershell/module/powershellget/Install-Module?view=powershell-5.1); be sure to open the PowerShell command window using run as an Administrator.
+* To download and install the Az.DataMigration module from the PowerShell Gallery by using the [Install-Module PowerShell cmdlet](/powershell/module/powershellget/Install-Module); be sure to open the PowerShell command window using Run as Administrator.
* To ensure that the credentials used to connect to the source SQL Server instance have the [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permission. * To ensure that the credentials used to connect to the target Azure SQL DB instance have the CONTROL DATABASE permission on the target Azure SQL Database databases. * An Azure subscription. If you don't have one, create a [free](https://azure.microsoft.com/free/) account before you begin.
dns Private Dns Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/private-dns-migration-guide.md
This step will delete the legacy DNS zones and should be executed only after you
If you're using automation including templates, PowerShell scripts or custom code developed using SDK, you must update your automation to use the new resource model for the private DNS zones. Below are the links to new private DNS CLI/PS/SDK documentation. * [Azure DNS private zones REST API](/rest/api/dns/privatedns/privatezones) * [Azure DNS private zones CLI](/cli/azure/ext/privatedns/network/private-dns)
-* [Azure DNS private zones PowerShell](/powershell/module/az.privatedns/?view=azps-2.3.2)
+* [Azure DNS private zones PowerShell](/powershell/module/az.privatedns/)
* [Azure DNS private zones SDK](/dotnet/api/overview/azure/privatedns/management?view=azure-dotnet-preview) ## Need further help
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-faqs.md
The ExpressRoute gateway will advertise the *Address Space(s)* of the Azure VNet
### How many prefixes can be advertised from a VNet to on-premises on ExpressRoute Private Peering?
-There is a maximum of 200 prefixes advertised on a single ExpressRoute connection, or through VNet peering using gateway transit. For example, if you have 199 address spaces on a single VNet connected to an ExpressRoute circuit, all 199 of those prefixes will be advertised to on-premises. Alternatively, if you have a VNet enabled to allow gateway transit with 1 address space and 150 spoke VNets enabled using the "Allow Remote Gateway" option, the VNet deployed with the gateway will advertise 151 prefixes to on-premises.
+There is a maximum of 1000 prefixes advertised on a single ExpressRoute connection, or through VNet peering using gateway transit. For example, if you have 199 address spaces on a single VNet connected to an ExpressRoute circuit, all 199 of those prefixes will be advertised to on-premises. Alternatively, if you have a VNet enabled to allow gateway transit with 1 address space and 150 spoke VNets enabled using the "Allow Remote Gateway" option, the VNet deployed with the gateway will advertise 151 prefixes to on-premises.
### What happens if I exceed the prefix limit on an ExpressRoute connection?
expressroute How To Custom Route Alert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/how-to-custom-route-alert.md
Verify that you have met the following criteria before beginning your configurat
* You are familiar with [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
-* You are familiar with using Azure PowerShell. Azure PowerShell is required to collect the network prefixes in ExpressRoute gateway. For more information about Azure PowerShell in general, see the [Azure PowerShell documentation](/powershell/azure/?view=azps-4.1.0).
+* You are familiar with using Azure PowerShell. Azure PowerShell is required to collect the network prefixes in ExpressRoute gateway. For more information about Azure PowerShell in general, see the [Azure PowerShell documentation](/powershell/azure/).
### <a name="limitations"></a>Notes and limitations
firewall-manager Ip Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/ip-groups.md
You can have a maximum of 100 IP Groups per firewall with a maximum 5000 individ
The following Azure PowerShell cmdlets can be used to create and manage IP Groups:
-- [New-AzIpGroup](/powershell/module/az.network/new-azipgroup?view=azps-3.4.0)
-- [Remove-AzIPGroup](/powershell/module/az.network/remove-azipgroup?view=azps-3.4.0)
-- [Get-AzIpGroup](/powershell/module/az.network/get-azipgroup?view=azps-3.4.0)
-- [Set-AzIpGroup](/powershell/module/az.network/set-azipgroup?view=azps-3.4.0)
-- [New-AzFirewallPolicyNetworkRule](/powershell/module/az.network/new-azfirewallpolicynetworkrule?view=azps-3.4.0)
-- [New-AzFirewallPolicyApplicationRule](/powershell/module/az.network/new-azfirewallpolicyapplicationrule?view=azps-3.4.0)
-- [New-AzFirewallPolicyNatRule](/powershell/module/az.network/new-azfirewallpolicynatrule?view=azps-3.4.0)
+- [New-AzIpGroup](/powershell/module/az.network/new-azipgroup)
+- [Remove-AzIPGroup](/powershell/module/az.network/remove-azipgroup)
+- [Get-AzIpGroup](/powershell/module/az.network/get-azipgroup)
+- [Set-AzIpGroup](/powershell/module/az.network/set-azipgroup)
+- [New-AzFirewallPolicyNetworkRule](/powershell/module/az.network/new-azfirewallpolicynetworkrule)
+- [New-AzFirewallPolicyApplicationRule](/powershell/module/az.network/new-azfirewallpolicyapplicationrule)
+- [New-AzFirewallPolicyNatRule](/powershell/module/az.network/new-azfirewallpolicynatrule)
## Next steps
firewall Active Ftp Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/active-ftp-support.md
By default, Active FTP support is disabled on Azure Firewall to protect against
## Azure PowerShell
-To deploy using Azure PowerShell, use the `AllowActiveFTP` parameter. For more information, see [Create a Firewall with Allow Active FTP](/powershell/module/az.network/new-azfirewall?view=azps-5.4.0#16create-a-firewall-with-allow-active-ftp-).
+To deploy using Azure PowerShell, use the `AllowActiveFTP` parameter. For more information, see [Create a Firewall with Allow Active FTP](/powershell/module/az.network/new-azfirewall#16create-a-firewall-with-allow-active-ftp-).
## Azure CLI
firewall Ip Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/ip-groups.md
You can have a maximum of 100 IP Groups per firewall with a maximum 5000 individ
The following Azure PowerShell cmdlets can be used to create and manage IP Groups:
-- [New-AzIpGroup](/powershell/module/az.network/new-azipgroup?view=azps-3.4.0)
-- [Remove-AzIPGroup](/powershell/module/az.network/remove-azipgroup?view=azps-3.4.0)
-- [Get-AzIpGroup](/powershell/module/az.network/get-azipgroup?view=azps-3.4.0)
-- [Set-AzIpGroup](/powershell/module/az.network/set-azipgroup?view=azps-3.4.0)
-- [New-AzFirewallNetworkRule](/powershell/module/az.network/new-azfirewallnetworkrule?view=azps-3.4.0)
-- [New-AzFirewallApplicationRule](/powershell/module/az.network/new-azfirewallapplicationrule?view=azps-3.4.0)
-- [New-AzFirewallNatRule](/powershell/module/az.network/new-azfirewallnatrule?view=azps-3.4.0)
+- [New-AzIpGroup](/powershell/module/az.network/new-azipgroup)
+- [Remove-AzIPGroup](/powershell/module/az.network/remove-azipgroup)
+- [Get-AzIpGroup](/powershell/module/az.network/get-azipgroup)
+- [Set-AzIpGroup](/powershell/module/az.network/set-azipgroup)
+- [New-AzFirewallNetworkRule](/powershell/module/az.network/new-azfirewallnetworkrule)
+- [New-AzFirewallApplicationRule](/powershell/module/az.network/new-azfirewallapplicationrule)
+- [New-AzFirewallNatRule](/powershell/module/az.network/new-azfirewallnatrule)
## Next steps
firewall Premium Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-certificates.md
Previously updated : 02/16/2021 Last updated : 03/09/2021
To configure a CA certificate in your Firewall Premium policy, select your polic
To help you test and verify TLS inspection, you can use the following scripts to create your own self-signed Root CA and Intermediate CA.

> [!IMPORTANT]
-> For production, you should use your corporate PKI to create an Intermediate CA certificate. A corporate PKI leverages the existing infrastructure and handles the Root CA distribution to all endpoint machines.
+> For production, you should use your corporate PKI to create an Intermediate CA certificate. A corporate PKI leverages the existing infrastructure and handles the Root CA distribution to all endpoint machines. For more information, see [Deploy and configure Enterprise CA certificates for Azure Firewall Premium Preview](premium-deploy-certificates-enterprise-ca.md).
There are two versions of this script:
- a bash script `cert.sh`
firewall Premium Deploy Certificates Enterprise Ca https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-deploy-certificates-enterprise-ca.md
+
+ Title: Deploy and configure Enterprise CA certificates for Azure Firewall Premium Preview
+description: Learn how to deploy and configure Enterprise CA certificates for Azure Firewall Premium Preview.
++++ Last updated : 03/09/2021+++
+# Deploy and configure Enterprise CA certificates for Azure Firewall Premium Preview
+
+> [!IMPORTANT]
+> Azure Firewall Premium is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
++
+Azure Firewall Premium Preview includes a TLS inspection feature, which requires a certificate authentication chain. For production deployments, you should use an Enterprise PKI to generate the certificates that you use with Azure Firewall Premium. Use this article to create and manage an Intermediate CA certificate for Azure Firewall Premium Preview.
+
+For more information about certificates used by Azure Firewall Premium Preview, see [Azure Firewall Premium Preview certificates](premium-certificates.md).
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+To use an Enterprise CA to generate a certificate to use with Azure Firewall Premium Preview, you must have the following resources:
+
+- an Active Directory Forest
+- an Active Directory Certification Services Root CA with Web Enrollment enabled
+- an Azure Firewall Premium with Premium tier Firewall Policy
+- an Azure Key Vault
+- a Managed Identity with Read permissions to **Certificates and Secrets** defined in the Key Vault Access Policy
+
+## Request and export a certificate
+
+1. Access the web enrollment site on the Root CA, usually `https://<servername>/certsrv` and select **Request a Certificate**.
+1. Select **Advanced Certificate Request**.
+1. Select **Create and Submit a Request to this CA**.
+1. Fill out the form using the Subordinate Certification Authority template as shown:
+1. Submit the request and install the certificate.
+1. Assuming this request is made from a Windows Server using Internet Explorer, open **Internet Options**.
+1. Navigate to the **Content** tab and select **Certificates**.
+1. Select the certificate that was just issued and then select **Export**.
+1. Select **Next** to begin the wizard. Select **Yes, export the private key**, and then select **Next**.
+1. The .pfx file format is selected by default. Uncheck **Include all certificates in the certification path if possible**. If you export the entire certificate chain, the import process to Azure Firewall will fail.
+1. Assign and confirm a password to protect the key, and then select **Next**.
+1. Choose a file name and export location and then select **Next**.
+1. Select **Finish** and move the exported certificate to a secure location.
+
+## Add the certificate to a Firewall Policy
+
+1. In the Azure portal, navigate to the Certificates page of your Key Vault, and select **Generate/Import**.
+1. Select **Import** as the method of creation, name the certificate, select the exported .pfx file, enter the password, and then select **Create**.
+1. Navigate to the **TLS Inspection (preview)** page of your Firewall policy and select your Managed identity, Key Vault, and certificate.
+1. Select **Save**.
+ :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/tls-inspection.png" alt-text="TLS inspection":::
+
+## Validate TLS inspection
+
+1. Create an Application Rule using TLS inspection to the destination URL or FQDN of your choice. For example: `*bing.com`.
+1. From a domain-joined machine within the Source range of the rule, navigate to your Destination and select the lock symbol next to the address bar in your browser. The certificate should show that it was issued by your Enterprise CA rather than a public CA.
+1. Show the certificate to display more details, including the certificate path.
+ :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/certificate-details.png" alt-text="certificate details":::
+1. In Log Analytics, run the following KQL query to return all requests that have been subject to TLS Inspection:
+ ```
+ AzureDiagnostics
+ | where ResourceType == "AZUREFIREWALLS"
+ | where Category == "AzureFirewallApplicationRule"
+ | where msg_s contains "Url:"
+ | sort by TimeGenerated desc
+ ```
+ The result shows the full URL of inspected traffic:
+ :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/kql-query.png" alt-text="KQL query":::
+
+## Next steps
+
+[Azure Firewall Premium Preview in the Azure portal](premium-portal.md)
firewall Snat Private Range https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/snat-private-range.md
New-AzFirewall @azFw
> [!NOTE]
> IANAPrivateRanges is expanded to the current defaults on Azure Firewall while the other ranges are added to it. To keep the IANAPrivateRanges default in your private range specification, it must remain in your `PrivateRange` specification as shown in the following examples.
-For more information, see [New-AzFirewall](/powershell/module/az.network/new-azfirewall?view=azps-3.3.0).
+For more information, see [New-AzFirewall](/powershell/module/az.network/new-azfirewall).
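For intuition about what the `IANAPrivateRanges` keyword covers, the IANA (RFC 1918) private blocks can be explored with Python's standard `ipaddress` module. This is an illustrative sketch only; the actual expansion is performed by Azure Firewall, and the extra range below is a hypothetical example of a custom addition:

```python
import ipaddress

# RFC 1918 private blocks - the ranges the IANAPrivateRanges keyword stands for.
iana_private = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
# A hypothetical extra range kept alongside the defaults in PrivateRange.
custom = ["100.64.0.0/10"]

private_range = [ipaddress.ip_network(r) for r in iana_private + custom]

def is_private_range(addr):
    """True if addr falls inside any configured private range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in private_range)

print(is_private_range("192.168.1.5"))   # True (RFC 1918 block)
print(is_private_range("100.64.0.1"))    # True (custom range)
print(is_private_range("8.8.8.8"))       # False (public address)
```

Traffic to destinations outside every configured private range is SNATed to one of the firewall's public IP addresses.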
### Existing firewall
frontdoor Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/resource-manager-template-samples.md
+
+ Title: Resource Manager template samples - Azure Front Door
+description: Information about sample Azure Resource Manager templates provided for Azure Front Door.
+++++ Last updated : 03/05/2021 ++
+# Azure Resource Manager templates for Azure Front Door
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+The following table includes links to Azure Resource Manager templates for Azure Front Door, with reference architectures including other Azure services.
+
+| App Service | Description |
+|-|-|
+| [App Service](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-app-service-public) | Creates an App Service app with a public endpoint, and a Front Door profile. |
+| [App Service with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-app-service-private-link) | Creates an App Service app with a private endpoint, and a Front Door profile. |
+| [App Service environment with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-app-service-environment-internal-private-link) | Creates an App Service environment, an app with a private endpoint, and a Front Door profile. |
+|**Azure Functions**| **Description** |
+| [Azure Functions](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-function-public/) | Creates an Azure Functions app with a public endpoint, and a Front Door profile. |
+| [Azure Functions with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-function-private-link) | Creates an Azure Functions app with a private endpoint, and a Front Door profile. |
+|**API Management**| **Description** |
+| [API Management (external)](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-api-management-external) | Creates an API Management instance with external VNet integration, and a Front Door profile. |
+|**Storage**| **Description** |
+| [Storage static website](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-storage-static-website) | Creates an Azure Storage account and static website with a public endpoint, and a Front Door profile. |
+| [Storage blobs with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-storage-blobs-private-link) | Creates an Azure Storage account and blob container with a private endpoint, and a Front Door profile. |
+| | |
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/index.md
The following are the [Regulatory Compliance](../concepts/regulatory-compliance.
- [CMMC Level 3](./cmmc-l3.md)
- [HIPAA HITRUST 9.2](./hipaa-hitrust-9-2.md)
- [ISO 27001:2013](./iso-27001.md)
-- [New Zealand Information Security Manual](./new-zealand-ism.md)
+- [New Zealand ISM Restricted](./new-zealand-ism.md)
- [NIST SP 800-53 R4](./nist-sp-800-53-r4.md)
- [NIST SP 800-171 R2](./nist-sp-800-171-r2.md)
hdinsight Cluster Management Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/cluster-management-best-practices.md
Learn best practices for managing HDInsight clusters.
| Azure CLI | [Create HDInsight clusters using the Azure CLI](./hdinsight-hadoop-create-linux-clusters-azure-cli.md) |
| Azure PowerShell | [Create Linux-based clusters in HDInsight using Azure PowerShell](./hdinsight-hadoop-create-linux-clusters-azure-powershell.md) |
| cURL | [Create Apache Hadoop clusters using the Azure REST API](./hdinsight-hadoop-create-linux-clusters-curl-rest.md) |
-| SDKs (.NET, Python, Java) | [.NET](/dotnet/api/overview/azure/hdinsight), [Python](/python/api/overview/azure/hdinsight?preserve-view=true&view=azure-python), [Java](/jav) |
+| SDKs (.NET, Python, Java) | [.NET](/dotnet/api/overview/azure/hdinsight), [Python](/python/api/overview/azure/hdinsight), [Java](/jav) |
> [!Note]
> If you are creating a cluster and re-using the cluster name from a previously created cluster, wait until the previous cluster deletion is completed before creating your cluster.
hdinsight Apache Kafka Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/kafka/apache-kafka-get-started.md
To create an Apache Kafka cluster on HDInsight, use the following steps:
Each Azure region (location) provides _fault domains_. A fault domain is a logical grouping of underlying hardware in an Azure data center. Each fault domain shares a common power source and network switch. The virtual machines and managed disks that implement the nodes within an HDInsight cluster are distributed across these fault domains. This architecture limits the potential impact of physical hardware failures.
- For high availability of data, select a region (location) that contains __three fault domains__. For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/manage-availability.md#use-managed-disks-for-vms-in-an-availability-set) document.
+ For high availability of data, select a region (location) that contains __three fault domains__. For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/availability.md) document.
Select the **Next: Storage >>** tab to advance to the storage settings.
Kafka stores streams of data in *topics*. You can use the `kafka-topics.sh` util
* In regions with three fault domains, a replication factor of 3 allows replicas to be spread across the fault domains. In regions with two fault domains, a replication factor of four spreads the replicas evenly across the domains.
- * For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/manage-availability.md#use-managed-disks-for-vms-in-an-availability-set) document.
+ * For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/availability.md) document.
* Apache Kafka is not aware of Azure fault domains. When creating partition replicas for topics, it may not distribute replicas properly for high availability.
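The fault-domain coverage concern above can be illustrated with a small sketch that maps each replica's broker to its fault domain and counts the distinct domains covered. The broker-to-domain mapping here is hypothetical; HDInsight's actual placement is managed by the partition rebalance tooling it provides:

```python
# Check how many distinct fault domains a partition's replicas cover.
# The broker-to-fault-domain mapping is hypothetical and for illustration only.

def domains_covered(replica_brokers, broker_fault_domain):
    """Return the set of fault domains hosting at least one replica."""
    return {broker_fault_domain[b] for b in replica_brokers}

# Six brokers spread round-robin across three fault domains.
broker_fault_domain = {b: b % 3 for b in range(6)}

# Replication factor 3 with replicas on brokers 0, 1, 2 covers all 3 domains.
print(domains_covered([0, 1, 2], broker_fault_domain))   # {0, 1, 2}

# A bad assignment: brokers 0 and 3 both live in fault domain 0,
# so losing that domain loses both replicas.
print(domains_covered([0, 3], broker_fault_domain))      # {0}
```

A replica set whose covered-domain count is lower than the replication factor has replicas sharing hardware, which is exactly what the rebalancing step guards against.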
hdinsight Apache Kafka Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/kafka/apache-kafka-quickstart-powershell.md
Kafka stores streams of data in *topics*. You can use the `kafka-topics.sh` util
In regions with three fault domains, a replication factor of 3 allows replicas to be spread across the fault domains. In regions with two fault domains, a replication factor of four spreads the replicas evenly across the domains.
- For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/manage-availability.md#use-managed-disks-for-vms-in-an-availability-set) document.
+ For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/availability.md) document.
Kafka is not aware of Azure fault domains. When creating partition replicas for topics, it may not distribute replicas properly for high availability.
hdinsight Apache Kafka Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/kafka/apache-kafka-quickstart-resource-manager-template.md
Kafka stores streams of data in *topics*. You can use the `kafka-topics.sh` util
In regions with three fault domains, a replication factor of 3 allows replicas to be spread across the fault domains. In regions with two fault domains, a replication factor of four spreads the replicas evenly across the domains.
- For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/manage-availability.md#use-managed-disks-for-vms-in-an-availability-set) document.
+ For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/availability.md) document.
Kafka isn't aware of Azure fault domains. When creating partition replicas for topics, it may not distribute replicas properly for high availability.
hdinsight Kafka Troubleshoot Insufficient Domains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/kafka/kafka-troubleshoot-insufficient-domains.md
Receive error message similar to `not sufficient fault domains in region` when a
A fault domain is a logical grouping of underlying hardware in an Azure data center. Each fault domain shares a common power source and network switch. The virtual machines and managed disks that implement the nodes within an HDInsight cluster are distributed across these fault domains. This architecture limits the potential impact of physical hardware failures.
-Each Azure region has a specific number of fault domains. For a list of domains and the number of fault domains they contain, refer to documentation on [Availability Sets](../../virtual-machines/manage-availability.md).
+Each Azure region has a specific number of fault domains. For a list of domains and the number of fault domains they contain, refer to documentation on [Availability Sets](../../virtual-machines/availability.md).
In HDInsight, Kafka clusters are required to be provisioned in a region with at least three Fault domains.
iot-central Howto Create Custom Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-custom-analytics.md
Title: Extend Azure IoT Central with custom analytics | Microsoft Docs description: As a solution developer, configure an IoT Central application to do custom analytics and visualizations. This solution uses Azure Databricks.-+ Last updated 02/18/2020
Your Event Hubs namespace looks like the following screenshot:
On the [Azure IoT Central application manager](https://aka.ms/iotcentral) website, navigate to the IoT Central application you created from the Contoso template. In this section, you configure the application to stream the telemetry from its simulated devices to your event hub. To configure the export:
-1. Navigate to the **Data Export (Legacy)** page, select **+ New**, and then **Azure Event Hubs**.
+1. Navigate to the **Data Export** page, select **+ New**, and then **Azure Event Hubs**.
1. Use the following settings to configure the export, then select **Save**:

   | Setting | Value |
iot-central Howto Create Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-custom-rules.md
You can configure an IoT Central application to continuously export telemetry to
Your Event Hubs namespace looks like the following screenshot: -
+:::image type="content" source="media/howto-create-custom-rules/event-hubs-namespace.png" alt-text="Screenshot of Event Hubs namespace." border="false":::
## Define the function
This solution uses an Azure Functions app to send an email notification when the
The portal creates a default function called **HttpTrigger1**: -
+:::image type="content" source="media/howto-create-custom-rules/default-function.png" alt-text="Screenshot of Edit HTTP trigger function.":::
1. Replace the C# code with the following code:
To test the function in the portal, first choose **Logs** at the bottom of the c
The function log messages appear in the **Logs** panel:
+:::image type="content" source="media/howto-create-custom-rules/function-app-logs.png" alt-text="Function log output":::
After a few minutes, the **To** email address receives an email with the following content:
iot-central Howto Manage Iot Central From Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-iot-central-from-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
If you prefer to run Azure PowerShell on your local machine, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps). When you run Azure PowerShell locally, use the **Connect-AzAccount** cmdlet to sign in to Azure before you try the cmdlets in this article.

> [!TIP]
-> If you need to run your PowerShell commands in a different Azure subscription, see [Change the active subscription](/powershell/azure/manage-subscriptions-azureps?view=azps-3.4.0#change-the-active-subscription&preserve-view=true).
+> If you need to run your PowerShell commands in a different Azure subscription, see [Change the active subscription](/powershell/azure/manage-subscriptions-azureps#change-the-active-subscription).
## Install the IoT Central module
iot-develop About Iot Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/about-iot-sdks.md
These SDKs can run on any device that can support a higher-order language runtim
* [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples) * [Package](https://pypi.org/project/azure-iot-device/) * [Reference Documentation](/python/api/azure-iot-device)
-* [Edge Module Reference Documentation](/python/api/azure-iot-device/azure.iot.device.iothubmoduleclient?view=azure-python&preserve-view=true)
+* [Edge Module Reference Documentation](/python/api/azure-iot-device/azure.iot.device.iothubmoduleclient)
### Service SDKs Azure IoT also offers service SDKs that enable you to build solution-side applications to manage devices, gain insights, visualize data, and more. These SDKs are specific to each Azure IoT service and are available in C#, Java, JavaScript, and Python to simplify your development experience.
Azure Digital Twins is a platform as a service (PaaS) offering that enables the
**Node.js ADT Service SDK**: [GitHub Repository](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/digitaltwins/digital-twins-core) | [Package](https://www.npmjs.com/package/@azure/digital-twins-core) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/digitaltwins/digital-twins-core/samples) | [Reference Documentation](/javascript/api/@azure/digital-twins-core/)
-**Python ADT Service SDK**: [GitHub Repository](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/digitaltwins/azure-digitaltwins-core) | [Package](https://pypi.org/project/azure-digitaltwins-core/) | [Samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/digitaltwins/azure-digitaltwins-core/samples) | [Reference Documentation](/python/api/azure-digitaltwins-core/azure.digitaltwins.core?view=azure-python&preserve-view=true)
+**Python ADT Service SDK**: [GitHub Repository](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/digitaltwins/azure-digitaltwins-core) | [Package](https://pypi.org/project/azure-digitaltwins-core/) | [Samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/digitaltwins/azure-digitaltwins-core/samples) | [Reference Documentation](/python/api/azure-digitaltwins-core/azure.digitaltwins.core)
#### Device Provisioning Service
iot-edge How To Access Built In Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-access-built-in-metrics.md
Access metrics from the host by exposing and mapping the metrics port from the m
Choose different and unique host port numbers if you are mapping both the edgeHub and edgeAgent's metrics endpoints.

> [!NOTE]
-> If you wish to disable metrics, set the `MetricsEnabled` environment variable to `false` for **edgeAgent**.
+> The environment variable `httpSettings__enabled` should not be set to `false` for built-in metrics to be available for collection.
+>
+> Environment variables that can be used to disable metrics are listed in the [azure/iotedge repo doc](https://github.com/Azure/iotedge/blob/master/doc/EnvironmentVariables.md).
## Available metrics
iot-hub-device-update Device Update Ubuntu Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-ubuntu-agent.md
-# Device Update for Azure IoT Hub tutorial using the Ubuntu Server 18.04 x64 Package agent
+# Device Update for Azure IoT Hub tutorial using the package agent on Ubuntu Server 18.04 x64
-Device Update for IoT Hub supports two forms of updates – image-based
-and package-based.
+Device Update for IoT Hub supports two forms of updates – image-based and package-based.
-Package-based updates are targeted updates that alter only a specific component
-or application on the device. This leads to lower consumption of
-bandwidth and helps reduce the time to download and install the update. Package
-updates typically allow for less downtime of devices when applying an update and
-avoid the overhead of creating images.
+Package-based updates are targeted updates that alter only a specific component or application on the device. This leads to lower consumption of bandwidth and helps reduce the time to download and install the update. Package updates typically allow for less downtime of devices when applying an update and avoid the overhead of creating images.
-This tutorial walks you through the steps to complete an end-to-end package-based update through Device Update for IoT Hub. We will use a sample package agent for Ubuntu Server 18.04 x64 for this tutorial. Even if you plan on using a different OS platform configuration, this tutorial is still useful to learn about the tools and concepts in Device Update for IoT Hub. Complete this introduction to an end-to-end update process, then choose your preferred form of updating and OS platform to dive into the details. You can use Device Update for IoT Hub to update an Azure IoT or Azure IoT Edge device using this tutorial.
+This tutorial walks you through the steps to complete an end-to-end package-based update through Device Update for IoT Hub. For this tutorial, we use an Ubuntu Server 18.04 x64 device running Azure IoT Edge and the Device Update package agent. The tutorial demonstrates updating a sample package, but you can use similar steps to update other packages, such as Azure IoT Edge or the container engine it uses.
+
+The tools and concepts in this tutorial still apply even if you plan to use a different OS platform configuration. Complete this introduction to an end-to-end update process, then choose your preferred form of updating and OS platform to dive into the details.
In this tutorial you will learn how to:

> [!div class="checklist"]
-> * Configure device update package repository
-> * Download and install device update agent and its dependencies
-> * Add a tag to your IoT device
+> * Download and install the Device Update agent and its dependencies
+> * Add a tag to your device
> * Import an update
> * Create a device group
> * Deploy a package update
In this tutorial you will learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

## Prerequisites
+
* Access to an IoT Hub. It is recommended that you use an S1 (Standard) tier or above.
-* An Azure IoT or Azure IoT Edge device running Ubuntu Server 18.04 x64, connected to IoT Hub.
- * If you are using an Azure IoT Edge device, make sure it is on v1.2.0 of the Edge runtime or higher
-* If you are not using an Azure IoT Edge device, then [install the latest `aziot-identity-service` package (preview) on your IoT device](https://github.com/Azure/iot-identity-service/actions/runs/575919358)
-* [Device Update account and instance linked to the same IoT Hub as above.](create-device-update-account.md)
+* A Device Update instance and account linked to your IoT Hub.
+ * Follow the guide to [create and link a device update account](create-device-update-account.md) if you have not done so previously.
+* The [connection string for an IoT Edge device](../iot-edge/how-to-register-device.md?view=iotedge-2020-11&preserve-view=true#view-registered-devices-and-retrieve-connection-strings).
-## Configure device update package repository
+## Prepare a device
+### Using the Automated Deploy to Azure Button
-1. Install the repository configuration that matches your device operating system. For this tutorial, this will be Ubuntu Server 18.04.
-
- ```shell
- curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
- ```
+For convenience, this tutorial uses a [cloud-init](../virtual-machines/linux/using-cloud-init.md)-based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) to help you quickly set up an Ubuntu 18.04 LTS virtual machine. It installs both the Azure IoT Edge runtime and the Device Update package agent, and then automatically configures the device with provisioning information using the IoT Edge device connection string (from the prerequisites) that you supply. This avoids the need to start an SSH session to complete setup.
-2. Copy the generated list to the sources.list.d directory.
+1. To begin, click the button below:
- ```shell
- sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
- ```
-
-3. Install the Microsoft GPG public key.
+ [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2F1.2.0-rc4%2FedgeDeploy.json)
- ```shell
- curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
- sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
- ```
+1. On the newly launched window, fill in the available form fields:
-## Install Device Update .deb agent packages
+ > [!div class="mx-imgBorder"]
+ > [![Screenshot showing the iotedge-vm-deploy template](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)
-1. Update package lists on your device
+ **Subscription**: The active Azure subscription to deploy the virtual machine into.
- ```shell
- sudo apt-get update
- ```
+ **Resource group**: An existing or newly created Resource Group to contain the virtual machine and its associated resources.
-2. Install the deviceupdate-agent package and its dependencies
+ **DNS Label Prefix**: A required value of your choosing that is used to prefix the hostname of the virtual machine.
- ```shell
- sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt
- ```
+ **Admin Username**: A username that will be granted root privileges on deployment.
-Device Update for Azure IoT Hub software packages are subject to the following license terms:
- * [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE.md)
- * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE.md)
-
-Read the license terms prior to using a package. Your installation and use of a package constitutes your acceptance of these terms. If you do not agree with the license terms, do not use that package.
+ **Device Connection String**: A [device connection string](../iot-edge/how-to-register-device.md) for a device that was created within your intended [IoT Hub](../iot-hub/about-iot-hub.md).
-## Configure Device Update Agent using Azure IoT Identity service (Preview)
+ **VM Size**: The [size](../cloud-services/cloud-services-sizes-specs.md) of the virtual machine to be deployed.
-Once you have the required packages installed, you need to provision the device with its cloud identity and authentication information.
+ **Ubuntu OS Version**: The version of the Ubuntu OS to install on the base virtual machine. Leave the default value of Ubuntu 18.04-LTS unchanged.
-1. Open the configuration file
+ **Location**: The [geographic region](https://azure.microsoft.com/global-infrastructure/locations/) to deploy the virtual machine into. This value defaults to the location of the selected Resource Group.
- ```shell
- sudo nano /etc/aziot/config.toml
- ```
+ **Authentication Type**: Choose **sshPublicKey** or **password** depending on your preference.
-2. Find the provisioning configuration section of the file. Uncomment the "Manual provisioning with connection string" section. Update the value of the connection_string with the connection string for your IoT (Edge) device. Ensure that all other provisioning sections are commented out.
+ **Admin Password or Key**: The value of the SSH Public Key or the value of the password, depending on the choice of Authentication Type.
+ When all fields have been filled in, select the checkbox at the bottom of the page to accept the terms and select **Purchase** to begin the deployment.
- ```toml
- # Manual provisioning configuration using a connection string
- [provisioning]
- source = "manual"
- iothub_hostname = "<REQUIRED IOTHUB HOSTNAME>"
- device_id = "<REQUIRED DEVICE ID PROVISIONED IN IOTHUB>"
- dynamic_reprovisioning = false
- ```
+1. Verify that the deployment has completed successfully. Allow a few minutes after the deployment completes for the post-installation configuration to finish installing IoT Edge and the Device Update package agent.
-3. Save and close the file using Ctrl+X, Y
+ A virtual machine resource should have been deployed into the selected resource group. Take note of the machine name, which should be in the format `vm-0000000000000`. Also, take note of the associated **DNS Name**, which should be in the format `<dnsLabelPrefix>`.`<location>`.cloudapp.azure.com.
-4. Apply the configuration.
+ The **DNS Name** can be obtained from the **Overview** section of the newly deployed virtual machine within the Azure portal.
- If you are using an IoT Edge device, use the following command.
-
- ```shell
- sudo iotedge config apply
- ```
-
- If you are using an IoT device, with the `aziot-identity-service` package installed, then use the following command.
-
- ```shell
- sudo aziotctl config apply
- ```
+ > [!div class="mx-imgBorder"]
+ > [![Screenshot showing the dns name of the iotedge vm](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)
+
+ > [!TIP]
+ > If you want to SSH into this VM after setup, use the associated **DNS Name** with the command:
+ `ssh <adminUsername>@<DNS_Name>`
+
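Once you can connect, a quick way to confirm that the post-installation step finished is to check for the two agent services it installs (a sketch; assumes the Ubuntu VM deployed above and the `adu-agent` and `deliveryoptimization-agent` service names used by the Device Update and Delivery Optimization agents):

```shell
# Both services should be listed as "loaded active running" once setup completes.
sudo systemctl list-units --type=service | grep 'adu-agent\.service\|deliveryoptimization-agent\.service'
```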
+### (Optional) Manually prepare a device
+The following manual steps to install and configure the device are equivalent to those that were automated by this [cloud-init script](https://github.com/Azure/iotedge-vm-deploy/blob/1.2.0-rc4/cloud-init.txt). They can be used to prepare a physical device.
-5. Optionally, you can verify that the services are running by
+1. Follow the instructions to [Install the Azure IoT Edge runtime](../iot-edge/how-to-install-iot-edge.md?view=iotedge-2020-11&preserve-view=true).
+ > [!NOTE]
+ > The Device Update package agent doesn't depend on IoT Edge. However, it relies on the IoT Identity Service daemon that is installed with IoT Edge (1.2.0 and higher) to obtain an identity and connect to IoT Hub.
+ >
+ > Although not covered in this tutorial, the [IoT Identity Service daemon can be installed standalone on Linux-based IoT devices](https://azure.github.io/iot-identity-service/packaging.html). The sequence of installation matters. The Device Update package agent must be installed _after_ the IoT Identity Service. Otherwise, the package agent will not be registered as an authorized component to establish a connection to IoT Hub.
- ```shell
- sudo systemctl list-units --type=service | grep 'adu-agent\.service\|deliveryoptimization-agent\.service'
- ```
+1. Then, install the Device Update agent .deb packages.
- The output should read:
+ ```bash
+ sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt
+ ```
- ```markdown
- adu-agent.service loaded active running Device Update for IoT Hub Agent daemon.
+Device Update for Azure IoT Hub software packages are subject to the following license terms:
+ * [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE.md)
+ * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE.md)
- deliveryoptimization-agent.service loaded active running deliveryoptimization-agent.service: Performs content delivery optimization tasks `
- ```
+Read the license terms prior to using a package. Your installation and use of a package constitutes your acceptance of these terms. If you do not agree with the license terms, do not use that package.
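On a manually prepared device you also need to provision the device identity, which is the step that cloud-init automates on the VM. A sketch of manual provisioning in `/etc/aziot/config.toml`; the placeholders stand for your IoT Hub hostname and device ID, and all other provisioning sections in the file should remain commented out:

```toml
# Manual provisioning configuration
[provisioning]
source = "manual"
iothub_hostname = "<REQUIRED IOTHUB HOSTNAME>"
device_id = "<REQUIRED DEVICE ID PROVISIONED IN IOTHUB>"
dynamic_reprovisioning = false
```

Apply the configuration with `sudo iotedge config apply` on an IoT Edge device, or with `sudo aziotctl config apply` on a device that uses only the `aziot-identity-service` package.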
## Add a tag to your device

1. Log in to the [Azure portal](https://portal.azure.com) and navigate to the IoT Hub.
-2. From 'IoT Devices' or 'IoT Edge' on the left navigation pane find your IoT device and navigate to the Device Twin.
+2. From 'IoT Edge' on the left navigation pane, find your IoT Edge device and navigate to its Device Twin.
3. In the Device Twin, delete any existing Device Update tag value by setting them to null.
```JSON
"tags": {
    "ADUGroup": "<CustomTagValue>"
- }
+ },
```

## Import update
-1. Download the following [apt manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/libcurl4-doc-apt-manifest.json) and [import manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/sample-package-update-1.0.1-importManifest.json). This apt manifest will install the latest available version of `libcurl4-doc package` to your IoT device.
+1. Download the following [apt manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/libcurl4-doc-apt-manifest.json) and [import manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/sample-package-update-1.0.1-importManifest.json). This apt manifest will install the latest available version of the `libcurl4-doc` package on your device.
- Alternatively, you can download this [apt manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/libcurl4-doc-7.58-apt-manifest.json) and [import manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/sample-package-update-2-2.0.1-importManifest.json). This will install specific version v7.58.0 of the `libcurl4-doc package` to your IoT device.
+ Alternatively, you can download this [apt manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/libcurl4-doc-7.58-apt-manifest.json) and [import manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/sample-package-update-2-2.0.1-importManifest.json). This will install the specific version v7.58.0 of the `libcurl4-doc` package on your device.
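An apt manifest is a small JSON file that lists the packages to install. If you author your own, you can sanity-check it locally before import; the file below is a minimal sketch with hypothetical `name` and `version` values, following the same shape as the samples above:

```shell
# Write a minimal apt-manifest-style JSON file; the name/version values are examples only.
printf '%s\n' \
  '{' \
  '  "name": "contoso-libcurl-doc",' \
  '  "version": "1.0.1",' \
  '  "packages": [ { "name": "libcurl4-doc" } ]' \
  '}' > my-apt-manifest.json

# python3 -m json.tool exits non-zero if the file is not valid JSON.
python3 -m json.tool my-apt-manifest.json
```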
2. In Azure portal, select the Device Updates option under Automatic Device Management from the left-hand navigation bar in your IoT Hub.
4. Select "+ Import New Update".
5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the Import Manifest you downloaded previously. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the apt manifest update file you downloaded previously.
-
+ :::image type="content" source="media/import-update/select-update-files.png" alt-text="Screenshot showing update file selection." lightbox="media/import-update/select-update-files.png":::

6. Select the folder icon or text box under "Select a storage container". Then select the appropriate storage account.
8. Select "Submit" to start the import process.
9. The import process begins, and the screen changes to the "Import History" section. Select "Refresh" to view progress until the import process completes. Depending on the size of the update, this may complete in a few minutes but could take longer.
-
+ :::image type="content" source="media/import-update/update-publishing-sequence-2.png" alt-text="Screenshot showing update import sequence." lightbox="media/import-update/update-publishing-sequence-2.png":::

10. When the Status column indicates the import has succeeded, select the "Ready to Deploy" header. You should see your imported update in the list now.
1. Go to the IoT Hub you previously connected to your Device Update instance.
-2. Select the Device Updates option under Automatic Device Management from the left-hand navigation bar.
+1. Select the Device Updates option under Automatic Device Management from the left-hand navigation bar.
-3. Select the Groups tab at the top of the page.
+1. Select the Groups tab at the top of the page.
-4. Select the Add button to create a new group.
+1. Select the Add button to create a new group.
-5. Select the IoT Hub tag you created in the previous step from the list. Select Create update group.
+1. Select the IoT Hub tag you created in the previous step from the list. Select Create update group.
:::image type="content" source="media/create-update-group/select-tag.PNG" alt-text="Screenshot showing tag selection." lightbox="media/create-update-group/select-tag.PNG":::

[Learn more](create-update-group.md) about adding tags and creating update groups.

## Deploy update
-1. Once the group is created, you should see a new update available for your device group, with a link to the update under Pending Updates. You may need to Refresh once.
+1. Once the group is created, you should see a new update available for your device group, with a link to the update in the _Available updates_ column. You may need to Refresh once.
-2. Click on the available update.
+1. Click on the link to the available update.
-3. Confirm the correct group is selected as the target group. Schedule your deployment, then select Deploy update.
+1. Confirm the correct group is selected as the target group and schedule your deployment.
:::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
-4. View the compliance chart. You should see the update is now in progress.
+ > [!TIP]
+ > By default, the Start date/time is 24 hours from your current time. Be sure to select a different date/time if you want the deployment to begin earlier.
+
+1. Select Deploy update.
+
+1. View the compliance chart. You should see the update is now in progress.
:::image type="content" source="media/deploy-update/update-in-progress.png" alt-text="Update in progress" lightbox="media/deploy-update/update-in-progress.png":::
-5. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
+1. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
:::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Update succeeded" lightbox="media/deploy-update/update-succeeded.png":::
:::image type="content" source="media/deploy-update/deployments-tab.png" alt-text="Deployments tab" lightbox="media/deploy-update/deployments-tab.png":::
-2. Select the deployment you created to view the deployment details.
+1. Select the deployment you created to view the deployment details.
:::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Deployment details" lightbox="media/deploy-update/deployment-details.png":::
-3. Select Refresh to view the latest status details. Continue this process until the status changes to Succeeded.
+1. Select Refresh to view the latest status details. Continue this process until the status changes to Succeeded.
You have now completed a successful end-to-end package update using Device Update for IoT Hub on an Ubuntu Server 18.04 x64 device.

## Bonus steps
-1. Download the following [apt manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/libcurl4-doc-remove-apt-manifest.json) and [import manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/sample-package-update-1.0.2-importManifest.json). This apt manifest will remove the installed `libcurl4-doc package` from your IoT device.
+1. Download the following [apt manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/libcurl4-doc-remove-apt-manifest.json) and [import manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/sample-package-update-1.0.2-importManifest.json). This apt manifest will remove the installed `libcurl4-doc package` from your device.
-2. Repeat the "Import update" and "Deploy update" sections
+1. Repeat the "Import update" and "Deploy update" sections.
## Clean up resources
-When no longer needed, clean up your device update account, instance, IoT Hub and IoT device. You can do so, by going to each individual resource and selecting "Delete". Note that you need to clean up a device update instance before cleaning up the device update account.
+When no longer needed, clean up your device update account, instance, IoT Hub, and the IoT Edge device (if you created the VM via the Deploy to Azure button). You can do so by going to each individual resource and selecting "Delete". Note that you need to clean up a device update instance before cleaning up the device update account.
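If everything you created for this tutorial lives in a single resource group (an assumption; substitute your own group name), the Azure CLI can remove it all in one step instead of deleting each resource individually:

```shell
# Deletes the resource group and every resource inside it; "myResourceGroup" is a placeholder.
az group delete --name myResourceGroup --yes --no-wait
```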
## Next steps

> [!div class="nextstepaction"]
> [Image Update on Raspberry Pi 3 B+ tutorial](device-update-raspberry-pi.md)
iot-hub-device-update Import Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-update.md
a location accessible from PowerShell (once the zip file is downloaded, right cl
| Parameter | Description |
| --- | --- |
- | deviceManufacturer | Manufacturer of the device the update is compatible with, for example, Contoso
- | deviceModel | Model of the device the update is compatible with, for example, Toaster
+ | deviceManufacturer | Manufacturer of the device the update is compatible with, for example, Contoso. Must match _manufacturer_ [device property](https://docs.microsoft.com/azure/iot-hub-device-update/device-update-plug-and-play#device-properties)
+ | deviceModel | Model of the device the update is compatible with, for example, Toaster. Must match _model_ [device property](https://docs.microsoft.com/azure/iot-hub-device-update/device-update-plug-and-play#device-properties)
| updateProvider | Entity who is creating or directly responsible for the update. It will often be a company name. |
| updateName | Identifier for a class of updates. The class can be anything you choose. It will often be a device or model name. |
| updateVersion | Version number distinguishing this update from others that have the same Provider and Name. May or may not match a version of an individual software component on the device. |
iot-hub Iot Hub Python Python File Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-python-python-file-upload.md
In this section, you create the device app to upload a file to IoT hub.
return (False, ex) ```
- This function parses the *blob_info* structure passed into it to create a URL that it uses to initialize an [azure.storage.blob.BlobClient](/python/api/azure-storage-blob/azure.storage.blob.blobclient?view=azure-python). Then it uploads your file to Azure blob storage using this client.
+ This function parses the *blob_info* structure passed into it to create a URL that it uses to initialize an [azure.storage.blob.BlobClient](/python/api/azure-storage-blob/azure.storage.blob.blobclient). Then it uploads your file to Azure blob storage using this client.
1. Add the following code to connect the client and upload the file:
Learn more about Azure Blob Storage with the following links:
* [Azure Blob Storage documentation](../storage/blobs/index.yml)
-* [Azure Blob Storage for Python API documentation](/python/api/overview/azure/storage-blob-readme?view=azure-python)
+* [Azure Blob Storage for Python API documentation](/python/api/overview/azure/storage-blob-readme)
iot-pnp Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/libraries-sdks.md
The IoT Plug and Play libraries and SDKs enable developers to build IoT solution
| C - Device | [vcpkg 1.3.9](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/setting_up_vcpkg.md) | [GitHub](https://github.com/Azure/azure-iot-sdk-c) | [Samples](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/pnp) | [Connect to IoT Hub](quickstart-connect-device.md) | [Reference](/azure/iot-hub/iot-c-sdk-ref/) |
| .NET - Device | [NuGet 1.31.0](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp/tree/master/) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/master/iot-hub/Samples/device/PnpDeviceSamples) | [Connect to IoT Hub](quickstart-connect-device.md) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
| Java - Device | [Maven 1.26.0](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-device-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-jav) | [Reference](/java/api/com.microsoft.azure.sdk.iot.device) |
-| Python - Device | [pip 2.3.0](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python/tree/master/) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/pnp) | [Connect to IoT Hub](quickstart-connect-device.md) | [Reference](/python/api/azure-iot-device/azure.iot.device?preserve-view=true&view=azure-python) |
+| Python - Device | [pip 2.3.0](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python/tree/master/) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/pnp) | [Connect to IoT Hub](quickstart-connect-device.md) | [Reference](/python/api/azure-iot-device/azure.iot.device) |
| Node - Device | [npm 1.17.2](https://www.npmjs.com/package/azure-iot-device)  | [GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/master/) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/master/device/samples/pnp) | [Connect to IoT Hub](quickstart-connect-device.md) | [Reference](/javascript/api/azure-iot-device/) |
| Embedded C - Device | N/A | [GitHub](https://github.com/Azure/azure-sdk-for-c/)| [Samples](howto-use-embedded-c.md#samples) | [How to use Embedded C](howto-use-embedded-c.md) | N/A
key-vault About Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/about-certificates.md
A certificate policy contains information on how to create and manage lifecycle
When a Key Vault certificate is created from scratch, a policy needs to be supplied. The policy specifies how to create this Key Vault certificate version, or the next Key Vault certificate version. Once a policy has been established, it isn't required with successive create operations for future versions. There's only one instance of a policy for all the versions of a Key Vault certificate.
-At a high level, a certificate policy contains the following information (their definitions can be found [here](/powershell/module/az.keyvault/set-azkeyvaultcertificatepolicy?view=azps-4.4.0)):
+At a high level, a certificate policy contains the following information (their definitions can be found [here](/powershell/module/az.keyvault/set-azkeyvaultcertificatepolicy)):
- X509 certificate properties: Contains subject name, subject alternate names, and other properties used to create an x509 certificate request.
- Key Properties: contains key type, key length, exportable, and ReuseKeyOnRenewal fields. These fields instruct key vault on how to generate a key.
key-vault How To Export Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/how-to-export-certificate.md
$pfxFileByte = $x509Cert.Export($type, $password)
```

This command exports the entire chain of certificates with private key. The certificate is password protected.
-For more information on the **Get-AzKeyVaultCertificate** command and parameters, see [Get-AzKeyVaultCertificate - Example 2](/powershell/module/az.keyvault/Get-AzKeyVaultCertificate?view=azps-4.4.0).
+For more information on the **Get-AzKeyVaultCertificate** command and parameters, see [Get-AzKeyVaultCertificate - Example 2](/powershell/module/az.keyvault/Get-AzKeyVaultCertificate).
# [Portal](#tab/azure-portal)
key-vault Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/client-libraries.md
Each SDK has separate client libraries for key vault, secrets, keys, and certifi
| Language | Secrets | Keys | Certificates | Key Vault (Management plane) |
|--|--|--|--|--|
| .NET | - [API Reference](/dotnet/api/azure.security.keyvault.secrets)<br>- [NuGet package](https://www.nuget.org/packages/Azure.Security.KeyVault.Secrets/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/Azure.Security.KeyVault.Secrets)<br>- [Quickstart](../secrets/quick-create-net.md) | - [API Reference](/dotnet/api/azure.security.keyvault.keys)<br>- [NuGet package](https://www.nuget.org/packages/Azure.Security.KeyVault.Keys/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/Azure.Security.KeyVault.Keys)<br>- [Quickstart](../keys/quick-create-net.md) | - [API Reference](/dotnet/api/azure.security.keyvault.certificates)<br>- [NuGet package](https://www.nuget.org/packages/Azure.Security.KeyVault.Certificates/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/Azure.Security.KeyVault.Certificates)<br>- [Quickstart](../certificates/quick-create-net.md) | - [API Reference](/dotnet/api/microsoft.azure.management.keyvault)<br>- [NuGet Package](https://www.nuget.org/packages/Microsoft.Azure.Management.KeyVault/)<br> - [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/Microsoft.Azure.Management.KeyVault)|
-| Python| - [API Reference](/python/api/overview/azure/keyvault-secrets-readme?view=azure-python)<br>- [PyPi package](https://pypi.org/project/azure-keyvault-secrets/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-keyvault-secrets)<br>- [Quickstart](../secrets/quick-create-python.md) |- [API Reference](/python/api/overview/azure/keyvault-keys-readme?view=azure-python)<br>- [PyPi package](https://pypi.org/project/azure-keyvault-keys/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-keyvault-keys)<br>- [Quickstart](../keys/quick-create-python.md) | - [API Reference](/python/api/overview/azure/keyvault-certificates-readme?view=azure-python)<br>- [PyPi package](https://pypi.org/project/azure-keyvault-certificates/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-keyvault-certificates)<br>- [Quickstart](../certificates/quick-create-python.md) | - [API Reference](/python/api/azure-mgmt-keyvault/azure.mgmt.keyvault?view=azure-python)<br> - [PyPi package](https://pypi.org/project/azure-mgmt-keyvault/)<br> - [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-mgmt-keyvault)|
+| Python| - [API Reference](/python/api/overview/azure/keyvault-secrets-readme)<br>- [PyPi package](https://pypi.org/project/azure-keyvault-secrets/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-keyvault-secrets)<br>- [Quickstart](../secrets/quick-create-python.md) |- [API Reference](/python/api/overview/azure/keyvault-keys-readme)<br>- [PyPi package](https://pypi.org/project/azure-keyvault-keys/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-keyvault-keys)<br>- [Quickstart](../keys/quick-create-python.md) | - [API Reference](/python/api/overview/azure/keyvault-certificates-readme)<br>- [PyPi package](https://pypi.org/project/azure-keyvault-certificates/)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-keyvault-certificates)<br>- [Quickstart](../certificates/quick-create-python.md) | - [API Reference](/python/api/azure-mgmt-keyvault/azure.mgmt.keyvault)<br> - [PyPi package](https://pypi.org/project/azure-mgmt-keyvault/)<br> - [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-mgmt-keyvault)|
| Java | - [API Reference](https://azuresdkdocs.blob.core.windows.net/$web/jav) |- [API Reference](/java/api/com.microsoft.azure.management.keyvault)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/mgmt-v2016_10_01)| | Node.js | - [API Reference](/javascript/api/@azure/keyvault-secrets/)<br>- [npm package](https://www.npmjs.com/package/@azure/keyvault-secrets)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/keyvault/keyvault-secrets)<br>- [Quickstart](../secrets/quick-create-node.md) |- [API Reference](/javascript/api/@azure/keyvault-keys/)<br>- [npm package](https://www.npmjs.com/package/@azure/keyvault-keys)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/keyvault/keyvault-keys)<br>- [Quickstart](../keys/quick-create-node.md)| - [API Reference](/javascript/api/@azure/keyvault-certificates/)<br>- [npm package](https://www.npmjs.com/package/@azure/keyvault-certificates)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/keyvault/keyvault-certificates)<br>- [Quickstart](../certificates/quick-create-node.md) | - [API Reference](/javascript/api/@azure/arm-keyvault/)<br>- [npm package](https://www.npmjs.com/package/@azure/arm-keyvault)<br>- [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/keyvault/arm-keyvault)
key-vault Howto Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/howto-logging.md
az account list
az account set --subscription "<subscriptionID>"
```
-With Azure PowerShell, you can first list your subscriptions using the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription?view=azps-4.7.0) cmdlet, and then connect to one using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext?view=azps-4.7.0) cmdlet:
+With Azure PowerShell, you can first list your subscriptions using the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) cmdlet, and then connect to one using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet:
```powershell-interactive
Get-AzSubscription
With the Azure CLI, use the [az storage account create](/cli/azure/storage/accou
az storage account create --name "<your-unique-storage-account-name>" -g "myResourceGroup" --sku "Standard_LRS"
```
-With Azure PowerShell, use the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount?view=azps-4.7.0) cmdlet. You will need to provide the location that corresponds to the resource group.
+With Azure PowerShell, use the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) cmdlet. You will need to provide the location that corresponds to the resource group.
```powershell
New-AzStorageAccount -ResourceGroupName myResourceGroup -Name "<your-unique-storage-account-name>" -SkuName "Standard_LRS" -Location "eastus"
```
-In either case, note the "id" of the storage account. The Azure CLI operation returns the "id" in the output. To obtain the "id" with Azure PowerShell, use [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount?view=azps-4.7.0) and assigned the output to a the variable $sa. You can then see the storage account with $sa.id. (The "$sa.Context" property will also be used, later in this article.)
+In either case, note the "id" of the storage account. The Azure CLI operation returns the "id" in the output. To obtain the "id" with Azure PowerShell, use [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount) and assign the output to the variable $sa. You can then see the storage account ID with $sa.id. (The "$sa.Context" property will also be used later in this article.)
```powershell-interactive
$sa = Get-AzStorageAccount -Name "<your-unique-storage-account-name>" -ResourceGroupName "myResourceGroup"
The "id" of the storage account will be in the format "/subscriptions/<your-subs
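That resource ID follows the standard Azure Resource Manager pattern of alternating key/value path segments, so it can be split programmatically if you need individual components in a script. A minimal Python sketch (the helper name and sample ID are illustrative, not part of this article):

```python
# Split an Azure Resource Manager (ARM) resource ID into its components.
# ARM IDs alternate key/value segments: /subscriptions/<id>/resourceGroups/<rg>/...
def parse_resource_id(resource_id):
    parts = resource_id.strip("/").split("/")
    # Pair alternating segments: ("subscriptions", "<id>"), ("resourceGroups", "<rg>"), ...
    return dict(zip(parts[::2], parts[1::2]))

sample = ("/subscriptions/00000000-0000-0000-0000-000000000000"
          "/resourceGroups/myResourceGroup"
          "/providers/Microsoft.Storage/storageAccounts/mystorageaccount")
info = parse_resource_id(sample)
print(info["resourceGroups"])   # myResourceGroup
print(info["storageAccounts"])  # mystorageaccount
```

Note that this simple pairing works for IDs like the one above; deeply nested child-resource IDs repeat keys and would need more careful handling.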
## Obtain your key vault Resource ID
-In the [CLI quickstart](quick-create-cli.md) and [PowerShell quickstart](quick-create-powershell.md), you created a key with a unique name. Use that name again in the steps below. If you cannot remember the name of your key vault, you can use the Azure CLI [az keyvault list](/cli/azure/keyvault#az_keyvault_list) command or the Azure PowerShell [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault?view=azps-4.7.0) cmdlet to list them.
+In the [CLI quickstart](quick-create-cli.md) and [PowerShell quickstart](quick-create-powershell.md), you created a key vault with a unique name. Use that name again in the steps below. If you cannot remember the name of your key vault, you can use the Azure CLI [az keyvault list](/cli/azure/keyvault#az_keyvault_list) command or the Azure PowerShell [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault) cmdlet to list them.
Use the name of your key vault to find its Resource ID. With Azure CLI, use the [az keyvault show](/cli/azure/keyvault#az_keyvault_show) command.
```azurecli-interactive
az keyvault show --name "<your-unique-keyvault-name>"
```
-With Azure PowerShell, use the [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault?view=azps-4.7.0) cmdlet.
+With Azure PowerShell, use the [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault) cmdlet.
```powershell-interactive
Get-AzKeyVault -VaultName "<your-unique-keyvault-name>"
```

The Resource ID for your key vault will be in the format "/subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group-name>/providers/Microsoft.KeyVault/vaults/<your-unique-keyvault-name>".
## Enable logging using Azure PowerShell
-To enable logging for Key Vault, we'll use the Azure CLI [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings) command, or the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting?view=azps-4.7.0) cmdlet, together with the storage account ID and the key vault Resource ID.
+To enable logging for Key Vault, we'll use the Azure CLI [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings) command, or the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) cmdlet, together with the storage account ID and the key vault Resource ID.
```azurecli-interactive
az monitor diagnostic-settings create --storage-account "<storage-account-id>" --resource "<key-vault-resource-id>" --name "Key vault logs" --logs '[{"category": "AuditEvent","enabled": true}]' --metrics '[{"category": "AllMetrics","enabled": true}]'
```
-With Azure PowerShell, we'll use the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting?view=azps-4.7.0) cmdlet, with the **-Enabled** flag set to **$true** and the category set to `AuditEvent` (the only category for Key Vault logging):
+With Azure PowerShell, we'll use the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) cmdlet, with the **-Enabled** flag set to **$true** and the category set to `AuditEvent` (the only category for Key Vault logging):
```powershell-interactive
Set-AzDiagnosticSetting -ResourceId "<key-vault-resource-id>" -StorageAccountId $sa.id -Enabled $true -Category "AuditEvent"
```
-With Azure PowerShell, use the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting?view=azps-4.7.0) cmdlet.
+With Azure PowerShell, use the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) cmdlet.
```powershell-interactive
Set-AzDiagnosticSetting "<key-vault-resource-id>" -StorageAccountId $sa.id -Enabled $true -Category AuditEvent -RetentionEnabled $true -RetentionInDays 90
```

First, list all the blobs in the container. With the Azure CLI, use the [az storage blob list](/cli/azure/storage/blob#az_storage_blob_list) command:
```azurecli-interactive
az storage blob list --account-name "<your-unique-storage-account-name>" --container-name "insights-logs-auditevent"
```
-With Azure PowerShell, use the [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob?view=azps-4.7.0) list all the blobs in this container, enter:
+With Azure PowerShell, use the [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob) cmdlet. To list all the blobs in this container, enter:
```powershell
Get-AzStorageBlob -Container "insights-logs-auditevent" -Context $sa.Context
```

With the Azure CLI, use the [az storage blob download](/cli/azure/storage/blob#az_storage_blob_download) command to download a blob:

```azurecli-interactive
az storage blob download --container-name "insights-logs-auditevent" --file <path-to-file> --name "<blob-name>" --account-name "<your-unique-storage-account-name>"
```
-With Azure PowerShell, use the [Gt-AzStorageBlobs](/powershell/module/az.storage/get-azstorageblob?view=azps-4.7.0) cmdlet to get a list of the blobs, then pipe that to the [Get-AzStorageBlobContent](/powershell/module/az.storage/get-azstorageblobcontent?view=azps-4.7.0) cmdlet to download the logs to your chosen path.
+With Azure PowerShell, use the [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob) cmdlet to get a list of the blobs, then pipe that to the [Get-AzStorageBlobContent](/powershell/module/az.storage/get-azstorageblobcontent) cmdlet to download the logs to your chosen path.
```powershell-interactive
$blobs = Get-AzStorageBlob -Container "insights-logs-auditevent" -Context $sa.Context | Get-AzStorageBlobContent -Destination "<path-to-file>"
```
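The downloaded log files contain JSON audit records that you can post-process with any tool you like. A minimal Python sketch of pulling the operation names out of such records; the sample lines and field names here are illustrative assumptions based on the typical diagnostic-log shape, not the exact Key Vault log schema:

```python
import json

# Diagnostic logs written to a storage account are JSON records.
# These sample lines stand in for the contents of a downloaded blob.
sample_lines = [
    '{"time": "2021-03-09T10:00:00Z", "operationName": "SecretGet", "resultSignature": "OK"}',
    '{"time": "2021-03-09T10:05:00Z", "operationName": "VaultGet", "resultSignature": "OK"}',
]

# Parse each line and collect the operation performed against the vault.
records = [json.loads(line) for line in sample_lines]
operations = [r["operationName"] for r in records]
print(operations)  # ['SecretGet', 'VaultGet']
```

In practice you would read the lines from the file you downloaded in the previous step instead of a hard-coded list, and inspect the full record for caller identity and result details.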
key-vault Logging https://gi