Updates from: 08/12/2022 01:15:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-twitter.md
zone_pivot_groups: b2c-policy-type
To enable sign-in for users with a Twitter account in Azure AD B2C, you need to create a Twitter application. If you don't already have a Twitter account, you can sign up at [`https://twitter.com/signup`](https://twitter.com/signup). You also need to [Apply for a developer account](https://developer.twitter.com/). For more information, see [Apply for access](https://developer.twitter.com/en/apply-for-access).
+1. Sign in to the [Twitter Developer Portal](https://developer.twitter.com/portal/projects-and-apps) with your Twitter account credentials.
+1. Select the **+ Create Project** button.
+1. Under the **Project name** tab, enter a name for your project, and then select **Next**.
+1. Under the **Use case** tab, select your preferred use case, and then select **Next**.
+1. Under the **Project description** tab, enter your project description, and then select **Next**.
+1. Under the **App name** tab, enter a name for your app, such as *azureadb2c*, and then select **Next**.
+1. Under the **Keys & Tokens** tab, copy the values of **API Key** and **API Key Secret** for later. You use both of them to configure Twitter as an identity provider in your Azure AD B2C tenant.
+1. Select **App settings** to open the app settings.
+1. At the lower part of the page, under **User authentication settings**, select **Set up**.
+1. On the **User authentication settings** page, select the **OAuth 2.0** option.
+1. Under **OAUTH 2.0 SETTINGS**, for **Type of app**, select the appropriate app type, such as *Web App*.
+1. Under **GENERAL AUTHENTICATION SETTINGS**:
+ 1. For the **Callback URI/Redirect URL**, enter `https://your-tenant.b2clogin.com/your-tenant-name.onmicrosoft.com/your-policy-id/oauth1/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Use all lowercase letters when entering your tenant name and user flow ID even if they are defined with uppercase letters in Azure AD B2C. Replace:
+ - `your-tenant-name` with the name of your tenant.
+ - `your-domain-name` with your custom domain.
+ - `your-policy-id` with the identifier of your user flow. For example, `b2c_1a_signup_signin_twitter`.
+ 1. For the **Website URL**, enter `https://your-tenant.b2clogin.com`. Replace `your-tenant` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name`.
+ 1. Enter a URL for the **Terms of service**, for example `http://www.contoso.com/tos`. The policy URL is a page you maintain to provide terms and conditions for your application.
+ 1. Enter a URL for the **Privacy policy**, for example `http://www.contoso.com/privacy`. The policy URL is a page you maintain to provide privacy information for your application.
+1. Select **Save**.
+ ::: zone-end
+1. Sign in to the [Twitter Developer Portal](https://developer.twitter.com/portal/projects-and-apps) with your Twitter account credentials.
+1. Select the **+ Create Project** button.
+1. Under the **Project name** tab, enter a name for your project, and then select **Next**.
To enable sign-in for users with a Twitter account in Azure AD B2C, you need to
1. On the **User authentication settings** page, select the **OAuth 2.0** option.
1. Under **OAUTH 2.0 SETTINGS**, for **Type of app**, select the appropriate app type, such as *Web App*.
1. Under **GENERAL AUTHENTICATION SETTINGS**:
- 1. For the **Callback URI/Redirect URL**, enter `https://your-tenant.b2clogin.com/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Use all lowercase letters when entering your tenant name and user flow ID even if they are defined with uppercase letters in Azure AD B2C. Replace:
+ 1. For the **Callback URI/Redirect URL**, enter `https://your-tenant.b2clogin.com/your-tenant-name.onmicrosoft.com/your-user-flow-name/oauth1/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Use all lowercase letters when entering your tenant name and user flow ID even if they are defined with uppercase letters in Azure AD B2C. Replace:
 - `your-tenant-name` with the name of your tenant.
 - `your-domain-name` with your custom domain.
- - `your-user-flow-Id` with the identifier of your user flow. For example, `b2c_1a_signup_signin_twitter`.
-
+ - `your-user-flow-name` with the identifier of your user flow. For example, `b2c_1_signup_signin_twitter`.
 1. For the **Website URL**, enter `https://your-tenant.b2clogin.com`. Replace `your-tenant` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name`.
 1. Enter a URL for the **Terms of service**, for example `http://www.contoso.com/tos`. The policy URL is a page you maintain to provide terms and conditions for your application.
 1. Enter a URL for the **Privacy policy**, for example `http://www.contoso.com/privacy`. The policy URL is a page you maintain to provide privacy information for your application.
1. Select **Save**.
+
::: zone pivot="b2c-user-flow"
At this point, the Twitter identity provider has been set up, but it's not yet a
1. Select the **Run user flow** button.
1. From the sign-up or sign-in page, select **Twitter** to sign in with a Twitter account.
-If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
::: zone-end

::: zone pivot="b2c-custom-policy"
You can define a Twitter account as a claims provider by adding it to the **Clai
1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button.
1. From the sign-up or sign-in page, select **Twitter** to sign in with a Twitter account.

If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+> [!TIP]
+> If you face an `unauthorized` error while testing this identity provider, make sure you use the correct Twitter API Key and API Key Secret, or try to apply for [elevated](https://developer.twitter.com/en/portal/products/elevated) access. Also, we recommend you have a look at [Twitter's projects structure](https://developer.twitter.com/en/docs/projects/overview) if you registered your app before the feature was available.
active-directory-b2c Localization String Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization-string-ids.md
The following are the IDs for a [Verification display control](display-control-v
| ID | Default value |
| --- | --- |
-|intro_msg <sup>*</sup>| Verification is necessary. Please click Send button.|
+|intro_msg<sup>1</sup>| Verification is necessary. Please click Send button.|
|success_send_code_msg | Verification code has been sent. Please copy it to the input box below.|
|failure_send_code_msg | We are having trouble verifying your email address. Please enter a valid email address and try again.|
|success_verify_code_msg | E-mail address verified. You can now continue.|
The following are the IDs for a [Verification display control](display-control-v
|but_verify_code | Verify code|
|but_send_new_code | Send new code|
|but_change_claims | Change e-mail|
+| UserMessageIfVerificationControlClaimsNotVerified<sup>2</sup>| The claims for verification control have not been verified. |
-Note: The `intro_msg` element is hidden, and not shown on the self-asserted page. To make it visible, use the [HTML customization](customize-ui-with-html.md) with Cascading Style Sheets. For example:
+<sup>1</sup> The `intro_msg` element is hidden, and not shown on the self-asserted page. To make it visible, use the [HTML customization](customize-ui-with-html.md) with Cascading Style Sheets. For example:
```css
.verificationInfoText div{display: block!important}
```
+<sup>2</sup> This error message is displayed to the user if they enter a verification code, but instead of completing the verification by selecting the **Verify** button, they select the **Continue** button.
+
### Verification display control example

```xml
Note: The `intro_msg` element is hidden, and not shown on the self-asserted page
<LocalizedString ElementType="DisplayControl" ElementId="emailVerificationControl" StringId="but_verify_code">Verify code</LocalizedString>
<LocalizedString ElementType="DisplayControl" ElementId="emailVerificationControl" StringId="but_send_new_code">Send new code</LocalizedString>
<LocalizedString ElementType="DisplayControl" ElementId="emailVerificationControl" StringId="but_change_claims">Change e-mail</LocalizedString>
+ <LocalizedString ElementType="ErrorMessage" StringId="UserMessageIfVerificationControlClaimsNotVerified">The claims for verification control have not been verified.</LocalizedString>
</LocalizedStrings>
</LocalizedResources>
```
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/page-layout.md
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
|Element |Page layout version range |jQuery version |Handlebars Runtime version |Handlebars Compiler version |
|---|---|---|---|---|
-|multifactor |>= 1.2.4 | 3.5.1 | 4.7.6 |4.7.7 |
+|multifactor |>= 1.2.8 | 3.5.1 | 4.7.7 |4.7.7 |
+| |>= 1.2.4 | 3.5.1 | 4.7.6 |4.7.7 |
| |< 1.2.4 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 | | |
-|selfasserted |>= 2.1.4 | 3.5.1 |4.7.6 |4.7.7 |
+|selfasserted |>= 2.1.11 | 3.5.1 |4.7.7 |4.7.7 |
+| |>= 2.1.4 | 3.5.1 |4.7.6 |4.7.7 |
| |< 2.1.4 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 | | |
-|unifiedssp |>= 2.1.4 | 3.5.1 |4.7.6 |4.7.7 |
+|unifiedssp |>= 2.1.7 | 3.5.1 |4.7.7 |4.7.7 |
+| |>= 2.1.4 | 3.5.1 |4.7.6 |4.7.7 |
| |< 2.1.4 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 | | |
-|globalexception |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
+|globalexception |>= 1.2.3 | 3.5.1 |4.7.7 |4.7.7 |
+| |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
| |< 1.2.1 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 | | |
-|providerselection |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
+|providerselection |>= 1.2.3 | 3.5.1 |4.7.7 |4.7.7 |
+| |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
| |< 1.2.1 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 | | |
-|claimsconsent |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
+|claimsconsent |>= 1.2.2 | 3.5.1 |4.7.7 |4.7.7 |
+| |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
| |< 1.2.1 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 | | |
-|unifiedssd |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
+|unifiedssd |>= 1.2.3 | 3.5.1 |4.7.7 |4.7.7 |
+| |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
| |< 1.2.1 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 | | |
active-directory Accidental Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/accidental-deletions.md
The Azure AD provisioning service includes a feature to help avoid accidental de
The feature lets you specify a deletion threshold, above which an admin needs to explicitly choose to allow the deletions to be processed.
-> [!NOTE]
-> Accidental deletions are not supported for our Workday / SuccessFactors integrations. It is also not supported for changes in scoping (e.g. changing a scoping filter or changing from "sync all users and groups" to "sync assigned users and groups"). Until the accidental deletions prevention feature is fully released, you'll need to access the Azure portal using this URL: https://aka.ms/AccidentalDeletionsPreview
## Configure accidental deletion prevention

To enable accidental deletion prevention:

1. In the Azure portal, select **Azure Active Directory**.
threshold. Also, be sure the notification email address is completed. If the del
When the deletion threshold is met, the job will go into quarantine and a notification email will be sent. The quarantined job can then be allowed or rejected. To learn more about quarantine behavior, see [Application provisioning in quarantine status](application-provisioning-quarantine-status.md).
-## Known limitations
-There are two key limitations to be aware of and are actively working to address:
-- HR-driven provisioning from Workday and SuccessFactors don't support the accidental deletions feature.
-- Changes to your provisioning configuration (e.g. changing scoping) isn't supported by the accidental deletions feature.

## Recovering from an accidental deletion

If you encounter an accidental deletion, you'll see it on the provisioning status page. It will say **Provisioning has been quarantined. See quarantine details for more information.**
active-directory Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md
This article describes how to onboard a Google Cloud Platform (GCP) project on P
> [!NOTE]
> 1. To confirm that the app was created, open **App registrations** in Azure and, on the **All applications** tab, locate your app.
- > 1. Select the app name to open the **Expose an API** page. The **Application ID URI** displayed in the **Overview** page is the *audience value* used while making an OIDC connection with your AWS account.
+ > 1. Select the app name to open the **Expose an API** page. The **Application ID URI** displayed in the **Overview** page is the *audience value* used while making an OIDC connection with your GCP account.
1. Return to Permissions Management, and in the **Permissions Management Onboarding - Azure AD OIDC App Creation**, select **Next**.
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Continuous access evaluation is implemented by enabling services, like Exchange
This process enables the scenario where users lose access to organizational SharePoint Online files, email, calendar, or tasks, and Teams from Microsoft 365 client apps within minutes after a critical event.

> [!NOTE]
-> Teams and SharePoint Online do not support user risk events.
+> SharePoint Online doesn't support user risk events.
### Conditional Access policy evaluation
active-directory Plan Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/plan-conditional-access.md
Previously updated : 1/19/2022
Last updated : 08/11/2022
With this evaluation and enforcement, Conditional Access defines the basis of [M
![Conditional Access overview](./media/plan-conditional-access/conditional-access-overview-how-it-works.png)
-Microsoft provides [security defaults](../fundamentals/concept-fundamentals-security-defaults.md) that ensure a basic level of security enabled in tenants that do not have Azure AD Premium. With Conditional Access, you can create policies that provide the same protection as security defaults, but with granularity. Conditional Access and security defaults are not meant to be combined as creating Conditional Access policies will prevent you from enabling security defaults.
+Microsoft provides [security defaults](../fundamentals/concept-fundamentals-security-defaults.md) that ensure a basic level of security enabled in tenants that don't have Azure AD Premium. With Conditional Access, you can create policies that provide the same protection as security defaults, but with granularity. Conditional Access and security defaults aren't meant to be combined as creating Conditional Access policies will prevent you from enabling security defaults.
### Prerequisites

* A working Azure AD tenant with Azure AD Premium or trial license enabled. If needed, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An account with Conditional Access administrator privileges.
* A test user (non-administrator) that allows you to verify policies work as expected before you impact real users. If you need to create a user, see [Quickstart: Add new users to Azure Active Directory](../fundamentals/add-users-azure-active-directory.md).
* A group that the non-administrator user is a member of. If you need to create a group, see [Create a group and add members in Azure Active Directory](../fundamentals/active-directory-groups-create-azure-portal.md).

## Understand Conditional Access policy components
Here are some common questions about [Assignments and Access Controls](concept-c
**Users or workload identities**

* Which users, groups, directory roles and workload identities will be included in or excluded from the policy?
* What emergency access accounts or groups should be excluded from policy?

**Cloud apps or actions**
Will this policy apply to any application, user action, or authentication contex
**Conditions**

* Which device platforms will be included in or excluded from the policy?
* What are the organization's trusted locations?
* What locations will be included in or excluded from the policy?
* What client app types will be included in or excluded from the policy?
* Do you have policies that would drive excluding Azure AD joined devices or Hybrid Azure AD joined devices from policies?
* If using [Identity Protection](../identity-protection/concept-identity-protection-risks.md), do you want to incorporate sign-in risk protection?

**Grant or Block**
Will this policy apply to any application, user action, or authentication contex
Do you want to grant access to resources by requiring one or more of the following?

* Require MFA
* Require device to be marked as compliant
* Require hybrid Azure AD joined device
* Require approved client app
* Require app protection policy
* Require password change
* Use Terms of Use

**Session control**
Do you want to grant access to resources by requiring one or more of the followi
Do you want to enforce any of the following access controls on cloud apps?

* Use app enforced restrictions
* Use Conditional Access App control
* Enforce sign-in frequency
* Use persistent browser sessions
* Customize continuous access evaluation

### Access token issuance
Do you want to enforce any of the following access controls on cloud apps?
This doesn't prevent the app from having separate authorization to block access. For example, consider a policy where:

* IF user is in finance team, THEN force MFA to access their payroll app.
* IF a user not in finance team attempts to access the payroll app, the user will be issued an access token.
- * To ensure users outside of finance group cannot access the payroll app, a separate policy should be created to block all other users. If all users except for finance team and emergency access accounts group, accessing payroll app, then block access.
+ * To ensure users outside of the finance group can't access the payroll app, create a separate policy to block all other users: IF a user isn't in the finance team or the emergency access accounts group and attempts to access the payroll app, THEN block access.
## Follow best practices
Conditional Access provides you with great configuration flexibility. However, g
**If you misconfigure a policy, it can lock the organization out of the Azure portal**.
-Mitigate the impact of accidental administrator lock out by creating two or more [emergency access accounts](../roles/security-emergency-access.md) in your organization. Create a user account dedicated to policy administration and excluded from all your policies.
+Mitigate the impact of accidental administrator lockout by creating two or more [emergency access accounts](../roles/security-emergency-access.md) in your organization. Create a user account dedicated to policy administration and excluded from all your policies.
### Apply Conditional Access policies to every app
-**Ensure that every app has at least one conditional access policy applied**. From a security perspective it is better to create a policy that encompasses All cloud apps and then exclude applications that you do not want the policy to apply to. This ensures you do not need to update Conditional Access policies every time you onboard a new application.
+**Ensure that every app has at least one Conditional Access policy applied**. From a security perspective it's better to create a policy that encompasses **All cloud apps**, and then exclude applications that you don't want the policy to apply to. This ensures you don't need to update Conditional Access policies every time you onboard a new application.
> [!IMPORTANT]
> Be very careful in using block and all apps in a single policy. This could lock admins out of the Azure portal, and exclusions cannot be configured for important endpoints such as Microsoft Graph.

### Minimize the number of Conditional Access policies
-Creating a policy for each app isn't efficient and leads to difficult administration. Conditional Access will only apply to the first 195 policies per user. We recommend that you **analyze your apps and group them into applications that have the same resource requirements for the same users**. For example, if all Microsoft 365 apps or all HR apps have the same requirements for the same users, create a single policy and include all the apps to which it applies.
+Creating a policy for each app isn't efficient and leads to difficult administration. Conditional Access has a limit of 195 policies per tenant. We recommend that you **analyze your apps and group them into applications that have the same resource requirements for the same users**. For example, if all Microsoft 365 apps or all HR apps have the same requirements for the same users, create a single policy and include all the apps to which it applies.
### Set up report-only mode

It can be difficult to predict the number and names of users affected by common deployment initiatives such as:
-* blocking legacy authentication
-* requiring MFA
-* implementing sign-in risk policies
+* Blocking legacy authentication
+* Requiring MFA
+* Implementing sign-in risk policies
[Report-only mode](concept-conditional-access-report-only.md) allows administrators to evaluate the impact of Conditional Access policies before enabling them in their environment. **First configure your policies in report-only mode and let them run for an interval before enforcing them in your environment**.

### Plan for disruption
-If you rely on a single access control, such as MFA or a network location, to secure your IT systems, you are susceptible to access failures if that single access control becomes unavailable or misconfigured.
+If you rely on a single access control such as MFA or a network location to secure your IT systems, you're susceptible to access failures if that single access control becomes unavailable or misconfigured.
**To reduce the risk of lockout during unforeseen disruptions, [plan strategies](../authentication/concept-resilient-controls.md) to adopt for your organization**.
If you rely on a single access control, such as MFA or a network location, to se
**A naming standard helps you to find policies and understand their purpose without opening them in the Azure admin portal**. We recommend that you name your policy to show: * A Sequence Number- * The cloud app(s) it applies to- * The response- * Who it applies to- * When it applies (if applicable) ![Screenshot that shows the naming standards for policies.](media/plan-conditional-access/11.png)
A descriptive name helps you to keep an overview of your Conditional Access impl
In addition to your active policies, implement disabled policies that act as secondary [resilient access controls in outage or emergency scenarios](../authentication/concept-resilient-controls.md). Your naming standard for the contingency policies should include: * ENABLE IN EMERGENCY at the beginning to make the name stand out among the other policies.- * The name of disruption it should apply to.- * An ordering sequence number to help the administrator to know in which order policies should be enabled. **Example**
The following name indicates that this policy is the first of four policies to e
### Block countries from which you never expect a sign-in
-Azure active directory allows you to create [named locations](location-condition.md). Create the list of countries that are allowed, and then create a network block policy with these "allowed countries" as an exclusion. This is less overhead for customers who are mainly based in smaller geographic locations.**Be sure to exempt your emergency access accounts from this policy**.
+Azure Active Directory allows you to create [named locations](location-condition.md). Create the list of countries that are allowed, and then create a network block policy with these "allowed countries" as an exclusion. This is less overhead for customers who are based in smaller geographic locations. **Be sure to exempt your emergency access accounts from this policy**.
## Deploy Conditional Access policy
-When new policies are ready, deploy your conditional access policies in phases.
+When new policies are ready, deploy your Conditional Access policies in phases.
### Build your Conditional Access policy
Before you see the impact of your Conditional Access policy in your production e
#### Set up report-only mode
-By default, each policy is created in report-only mode, we recommended organizations test and monitor usage, to ensure intended result, before turning each policy on.
+By default, each policy is created in report-only mode. We recommend organizations test and monitor usage to ensure the intended result before turning on each policy.
[Enable the policy in report-only mode](howto-conditional-access-insights-reporting.md). Once you save the policy in report-only mode, you can see the impact on real-time sign-ins in the sign-in logs. From the sign-in logs, select an event and navigate to the Report-only tab to see the result of each report-only policy.
-You can view the aggregate impact of your Conditional Access policies in the Insights and Reporting workbook. To access the workbook, you need an Azure Monitor subscription and you will need to [stream your sign-in logs to a log analytics workspace](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) .
+You can view the aggregate impact of your Conditional Access policies in the Insights and Reporting workbook. To access the workbook, you need an Azure Monitor subscription and you'll need to [stream your sign-in logs to a log analytics workspace](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
#### Simulate sign-ins using the What If tool
Perform each test in your test plan with test users. The test plan is important
| [Password change for risky users](../identity-protection/howto-identity-protection-configure-risk-policies.md)| Authorized user attempts to sign in with compromised credentials (high risk sign in)| User is prompted to change password or access is blocked based on your policy |

### Deploy in production

After confirming impact using **report-only mode**, an administrator can move the **Enable policy** toggle from **Report-only** to **On**.

### Roll back policies

In case you need to roll back your newly implemented policies, use one or more of the following options:
-* **Disable the policy.** Disabling a policy makes sure it does not apply when a user tries to sign in. You can always come back and enable the policy when you would like to use it.
+* **Disable the policy.** Disabling a policy makes sure it doesn't apply when a user tries to sign in. You can always come back and enable the policy when you would like to use it.
![enable policy image](media/plan-conditional-access/enable-policy.png)
In case you need to roll back your newly implemented policies, use one or more o
When a user is having an issue with a Conditional Access policy, collect the following information to facilitate troubleshooting.
-* User Principle Name
-
+* User Principal Name
* User display name
* Operating system name
* Time stamp (approximate is ok)
* Target application
* Client application type (browser vs client)
* Correlation ID (this is unique to the sign-in)

If the user received a message with a More details link, they can collect most of this information for you.

![Can't get to app error message](media/plan-conditional-access/cant-get-to-app.png)
-Once you have collected the information, See the following resources:
+Once you've collected the information, see the following resources:
* [Sign-in problems with Conditional Access](troubleshoot-conditional-access.md) - Understand unexpected sign-in outcomes related to Conditional Access using error messages and Azure AD sign-ins log.
* [Using the What-If tool](troubleshoot-conditional-access-what-if.md) - Understand why a policy was or wasn't applied to a user in a specific circumstance or if a policy would apply in a known state.

## Next Steps
-[Learn more about Multi-factor authentication](../authentication/concept-mfa-howitworks.md)
+[Learn more about Multifactor authentication](../authentication/concept-mfa-howitworks.md)
[Learn more about Identity Protection](../identity-protection/overview-identity-protection.md)
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
In this article, we walk through a few common scenarios that can help you unders
In the following examples, you create, update, link, and delete policies for service principals. Claims-mapping policies can only be assigned to service principal objects. If you are new to Azure AD, we recommend that you [learn about how to get an Azure AD tenant](quickstart-create-new-tenant.md) before you proceed with these examples.
-When creating a claims-mapping policy, you can also emit a claim from a directory schema extension attribute in tokens. Use *ExtensionID* for the extension attribute instead of *ID* in the `ClaimsSchema` element. For more info on extension attributes, see [Using directory schema extension attributes](active-directory-schema-extensions.md).
+When creating a claims-mapping policy, you can also emit a claim from a directory extension attribute in tokens. Use *ExtensionID* for the extension attribute instead of *ID* in the `ClaimsSchema` element. For more info on extension attributes, see [Using directory extension attributes](active-directory-schema-extensions.md).
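For illustration, here's a minimal sketch of a claims-mapping policy definition that emits a directory extension attribute as a JWT claim. The extension name (including its placeholder app ID GUID) and the `skypeId` claim type are hypothetical; the property names follow the claims-mapping policy schema:

```json
{
    "ClaimsMappingPolicy": {
        "Version": 1,
        "IncludeBasicClaimSet": "true",
        "ClaimsSchema": [
            {
                "Source": "user",
                "ExtensionID": "extension_ab603c56068041afb2f6832e2a17e237_skypeId",
                "JwtClaimType": "skypeId"
            }
        ]
    }
}
```

Note how `ExtensionID` takes the place of `ID` for the extension attribute, as described above.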
> [!NOTE] > The [Azure AD PowerShell Module public preview release](https://www.powershellgallery.com/packages/AzureADPreview) is required to configure claims-mapping policies. The PowerShell module is in preview, while the claims mapping and token creation runtime in Azure is generally available. Updates to the preview PowerShell module could require you to update or change your configuration scripts.
If you're not using a verified domain, Azure AD will return an `AADSTS501461` er
- Read the [claims-mapping policy type](reference-claims-mapping-policy-type.md) reference article to learn more.
- To learn how to customize claims issued in the SAML token through the Azure portal, see [How to: Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)
-- To learn more about extension attributes, see [Using directory schema extension attributes in claims](active-directory-schema-extensions.md).
+- To learn more about extension attributes, see [Using directory extension attributes in claims](active-directory-schema-extensions.md).
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
You can use optional claims to:
For the lists of standard claims, see the [access token](access-tokens.md) and [id_token](id-tokens.md) claims documentation.
-While optional claims are supported in both v1.0 and v2.0 format tokens, as well as SAML tokens, they provide most of their value when moving from v1.0 to v2.0. One of the goals of the [Microsoft identity platform](./v2-overview.md) is smaller token sizes to ensure optimal performance by clients. As a result, several claims formerly included in the access and ID tokens are no longer present in v2.0 tokens and must be asked for specifically on a per-application basis.
+While optional claims are supported in both v1.0 and v2.0 format tokens and SAML tokens, they provide most of their value when moving from v1.0 to v2.0. One of the goals of the [Microsoft identity platform](./v2-overview.md) is smaller token sizes to ensure optimal performance by clients. As a result, several claims formerly included in the access and ID tokens are no longer present in v2.0 tokens and must be asked for specifically on a per-application basis.
**Table 1: Applicability**
While optional claims are supported in both v1.0 and v2.0 format tokens, as well
## v1.0 and v2.0 optional claims set
-The set of optional claims available by default for applications to use are listed below. To add custom optional claims for your application, see [Directory Extensions](#configuring-directory-extension-optional-claims), below. When adding claims to the **access token**, the claims apply to access tokens requested *for* the application (a web API), not claims requested *by* the application. No matter how the client accesses your API, the right data is present in the access token that is used to authenticate against your API.
+The set of optional claims available by default for applications to use are listed below. You can use custom data in extension attributes and directory extensions to add optional claims for your application. To use directory extensions, see [Directory Extensions](#configuring-directory-extension-optional-claims), below. When adding claims to the **access token**, the claims apply to access tokens requested *for* the application (a web API), not claims requested *by* the application. No matter how the client accesses your API, the right data is present in the access token that is used to authenticate against your API.
> [!NOTE]
> The majority of these claims can be included in JWTs for v1.0 and v2.0 tokens, but not SAML tokens, except where noted in the Token Type column. Consumer accounts support a subset of these claims, marked in the "User Type" column. Many of the claims listed do not apply to consumer users (they have no tenant, so `tenant_ctry` has no value).
The set of optional claims available by default for applications to use are list
| Name | Description | Token Type | User Type | Notes |
|---|---|---|---|---|
-| `acct` | Users account status in tenant | JWT, SAML | | If the user is a member of the tenant, the value is `0`. If they are a guest, the value is `1`. |
+| `acct` | User's account status in tenant | JWT, SAML | | If the user is a member of the tenant, the value is `0`. If they're a guest, the value is `1`. |
| `auth_time` | Time when the user last authenticated. See OpenID Connect spec.| JWT | | |
| `ctry` | User's country/region | JWT | | Azure AD returns the `ctry` optional claim if it's present and the value of the field is a standard two-letter country/region code, such as FR, JP, SZ, and so on. |
-| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value is not guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or pre-fill in your UX. |
+| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value isn't guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or pre-fill in your UX. |
| `fwd` | IP address.| JWT | | Adds the original IPv4 address of the requesting client (when inside a VNET) |
| `groups`| Optional formatting for group claims |JWT, SAML| |For details see [Group claims](#configuring-groups-optional-claims) below. For more information about group claims, see [How to configure group claims](../hybrid/how-to-connect-fed-group-claims.md). Used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well. |
-| `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | Value is `app` when the token is an app-only token. This is the most accurate way for an API to determine if a token is an app token or an app+user token.|
-| `login_hint` | Login hint | JWT | MSA, Azure AD | An opaque, reliable login hint claim. This claim is the best value to use for the `login_hint` OAuth parameter in all flows to get SSO. It can be passed between applications to help them silently SSO as well - application A can sign in a user, read the `login_hint` claim, and then send the claim and the current tenant context to application B in the query string or fragment when the user clicks on a link that takes them to application B. To avoid race conditions and reliability issues, the `login_hint` claim *doesn't* include the current tenant for the user, and defaults to the user's home tenant when used. If you are operating in a guest scenario, where the user is from another tenant, then you must still provide a tenant identifier in the sign-in request, and pass the same to apps you partner with. This claim is intended for use with your SDK's existing `login_hint` functionality, however that it exposed. |
+| `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | Value is `app` when the token is an app-only token. This claim is the most accurate way for an API to determine if a token is an app token or an app+user token.|
+| `login_hint` | Login hint | JWT | MSA, Azure AD | An opaque, reliable login hint claim. This claim is the best value to use for the `login_hint` OAuth parameter in all flows to get SSO. It can be passed between applications to help them silently SSO as well - application A can sign in a user, read the `login_hint` claim, and then send the claim and the current tenant context to application B in the query string or fragment when the user selects a link that takes them to application B. To avoid race conditions and reliability issues, the `login_hint` claim *doesn't* include the current tenant for the user, and defaults to the user's home tenant when used. If you're operating in a guest scenario where the user is from another tenant, you must provide a tenant identifier in the sign-in request, and pass the same to apps you partner with. This claim is intended for use with your SDK's existing `login_hint` functionality, however it's exposed. |
| `sid` | Session ID, used for per-session user sign-out. | JWT | Personal and Azure AD accounts. | |
| `tenant_ctry` | Resource tenant's country/region | JWT | | Same as `ctry` except set at a tenant level by an admin. Must also be a standard two-letter value. |
| `tenant_region_scope` | Region of the resource tenant | JWT | | |
-| `upn` | UserPrincipalName | JWT, SAML | | An identifier for the user that can be used with the username_hint parameter. Not a durable identifier for the user and should not be used for authorization or to uniquely identity user information (for example, as a database key). Instead, use the user object ID (`oid`) as a database key. For more information, see [Validate the user has permission to access this data](access-tokens.md). Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) should not be shown their User Principal Name (UPN). Instead, use the following ID token claims for displaying sign-in state to the user: `preferred_username` or `unique_name` for v1 tokens and `preferred_username` for v2 tokens. Although this claim is automatically included, you can specify it as an optional claim to attach additional properties to modify its behavior in the guest user case. You should use the `login_hint` claim for `login_hint` use - human-readable identifiers like UPN are unreliable.|
+| `upn` | UserPrincipalName | JWT, SAML | | An identifier for the user that can be used with the username_hint parameter. Not a durable identifier for the user and shouldn't be used for authorization or to uniquely identify user information (for example, as a database key). Instead, use the user object ID (`oid`) as a database key. For more information, see [Validate the user has permission to access this data](access-tokens.md). Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) shouldn't be shown their User Principal Name (UPN). Instead, use the following ID token claims for displaying sign-in state to the user: `preferred_username` or `unique_name` for v1 tokens and `preferred_username` for v2 tokens. Although this claim is automatically included, you can specify it as an optional claim to attach additional properties to modify its behavior in the guest user case. You should use the `login_hint` claim for `login_hint` use - human-readable identifiers like UPN are unreliable.|
| `verified_primary_email` | Sourced from the user's PrimaryAuthoritativeEmail | JWT | | |
| `verified_secondary_email` | Sourced from the user's SecondaryAuthoritativeEmail | JWT | | |
| `vnet` | VNET specifier information. | JWT | | |
These claims are always included in v1.0 Azure AD tokens, but not included in v2
| JWT Claim | Name | Description | Notes |
|---|---|---|---|
| `ipaddr` | IP Address | The IP address the client logged in from. | |
-| `onprem_sid` | On-Premises Security Identifier | | |
+| `onprem_sid` | On-premises Security Identifier | | |
| `pwd_exp` | Password Expiration Time | The number of seconds after the time in the iat claim at which the password expires. This claim is only included when the password is expiring soon (as defined by "notification days" in the password policy). | |
| `pwd_url` | Change Password URL | A URL that the user can visit to change their password. This claim is only included when the password is expiring soon (as defined by "notification days" in the password policy). | |
| `in_corp` | Inside Corporate Network | Signals if the client is logging in from the corporate network. If they're not, the claim isn't included. | Based off of the [trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips) settings in MFA. |
| `family_name` | Last Name | Provides the last name, surname, or family name of the user as defined in the user object. <br>"family_name":"Miller" | Supported in MSA and Azure AD. Requires the `profile` scope. |
| `given_name` | First name | Provides the first or "given" name of the user, as set on the user object.<br>"given_name": "Frank" | Supported in MSA and Azure AD. Requires the `profile` scope. |
-| `upn` | User Principal Name | An identifer for the user that can be used with the username_hint parameter. Not a durable identifier for the user and should not be used for authorization or to uniquely identity user information (for example, as a database key). For more information, see [Validate the user has permission to access this data](access-tokens.md). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) should not be shown their User Principal Name (UPN). Instead, use the following `preferred_username` claim for displaying sign-in state to the user. | See [additional properties](#additional-properties-of-optional-claims) below for configuration of the claim. Requires the `profile` scope.|
+| `upn` | User Principal Name | An identifier for the user that can be used with the username_hint parameter. Not a durable identifier for the user and shouldn't be used for authorization or to uniquely identify user information (for example, as a database key). For more information, see [Validate the user has permission to access this data](access-tokens.md). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) shouldn't be shown their User Principal Name (UPN). Instead, use the following `preferred_username` claim for displaying sign-in state to the user. | See [additional properties](#additional-properties-of-optional-claims) below for configuration of the claim. Requires the `profile` scope.|
## v1.0-specific optional claims set
-Some of the improvements of the v2 token format are available to apps that use the v1 token format, as they help improve security and reliability. These will not take effect for ID tokens requested from the v2 endpoint, nor access tokens for APIs that use the v2 token format. These only apply to JWTs, not SAML tokens.
+Some of the improvements of the v2 token format are available to apps that use the v1 token format, as they help improve security and reliability. These improvements won't take effect for ID tokens requested from the v2 endpoint, nor access tokens for APIs that use the v2 token format. These improvements only apply to JWTs, not SAML tokens.
**Table 4: v1.0-only optional claims**

| JWT Claim | Name | Description | Notes |
|---|---|---|---|
-|`aud` | Audience | Always present in JWTs, but in v1 access tokens it can be emitted in a variety of ways - any appID URI, with or without a trailing slash, as well as the client ID of the resource. This randomization can be hard to code against when performing token validation. Use the [additional properties for this claim](#additional-properties-of-optional-claims) to ensure it's always set to the resource's client ID in v1 access tokens. | v1 JWT access tokens only|
-|`preferred_username` | Preferred username | Provides the preferred username claim within v1 tokens. This makes it easier for apps to provide username hints and show human readable display names, regardless of their token type. It's recommended that you use this optional claim instead of using e.g. `upn` or `unique_name`. | v1 ID tokens and access tokens |
+|`aud` | Audience | Always present in JWTs, but in v1 access tokens it can be emitted in various ways - any appID URI, with or without a trailing slash, and the client ID of the resource. This randomization can be hard to code against when performing token validation. Use the [additional properties for this claim](#additional-properties-of-optional-claims) to ensure it's always set to the resource's client ID in v1 access tokens. | v1 JWT access tokens only|
+|`preferred_username` | Preferred username | Provides the preferred username claim within v1 tokens. This claim makes it easier for apps to provide username hints and show human readable display names, regardless of their token type. It's recommended that you use this optional claim instead of using, for example, `upn` or `unique_name`. | v1 ID tokens and access tokens |
### Additional properties of optional claims
-Some optional claims can be configured to change the way the claim is returned. These additional properties are mostly used to help migration of on-premises applications with different data expectations. For example, `include_externally_authenticated_upn_without_hash` helps with clients that cannot handle hash marks (`#`) in the UPN.
+Some optional claims can be configured to change the way the claim is returned. These additional properties are mostly used to help migration of on-premises applications with different data expectations. For example, `include_externally_authenticated_upn_without_hash` helps with clients that can't handle hash marks (`#`) in the UPN.
**Table 4: Values for configuring optional claims**
Some optional claims can be configured to change the way the claim is returned.
| `upn` | | Can be used for both SAML and JWT responses, and for v1.0 and v2.0 tokens. |
| | `include_externally_authenticated_upn` | Includes the guest UPN as stored in the resource tenant. For example, `foo_hometenant.com#EXT#@resourcetenant.com` |
| | `include_externally_authenticated_upn_without_hash` | Same as above, except that the hash marks (`#`) are replaced with underscores (`_`), for example `foo_hometenant.com_EXT_@resourcetenant.com`|
-| `aud` | | In v1 access tokens, this is used to change the format of the `aud` claim. This has no effect in v2 tokens or either version's ID tokens, where the `aud` claim is always the client ID. Use this configuration to ensure that your API can more easily perform audience validation. Like all optional claims that affect the access token, the resource in the request must set this optional claim, since resources own the access token.|
-| | `use_guid` | Emits the client ID of the resource (API) in GUID format as the `aud` claim always instead of it being runtime dependent. For example, if a resource sets this flag, and its client ID is `bb0a297b-6a42-4a55-ac40-09a501456577`, any app that requests an access token for that resource will receive an access token with `aud` : `bb0a297b-6a42-4a55-ac40-09a501456577`. </br></br> Without this claim set, an API could get tokens with an `aud` claim of `api://MyApi.com`, `api://MyApi.com/`, `api://myapi.com/AdditionalRegisteredField` or any other value set as an app ID URI for that API, as well as the client ID of the resource. |
+| `aud` | | In v1 access tokens, this claim is used to change the format of the `aud` claim. This claim has no effect in v2 tokens or either version's ID tokens, where the `aud` claim is always the client ID. Use this configuration to ensure that your API can more easily perform audience validation. Like all optional claims that affect the access token, the resource in the request must set this optional claim, since resources own the access token.|
+| | `use_guid` | Emits the client ID of the resource (API) in GUID format as the `aud` claim always instead of it being runtime dependent. For example, if a resource sets this flag, and its client ID is `bb0a297b-6a42-4a55-ac40-09a501456577`, any app that requests an access token for that resource will receive an access token with `aud` : `bb0a297b-6a42-4a55-ac40-09a501456577`. </br></br> Without this claim set, an API could get tokens with an `aud` claim of `api://MyApi.com`, `api://MyApi.com/`, `api://myapi.com/AdditionalRegisteredField` or any other value set as an app ID URI for that API, and the client ID of the resource. |
#### Additional properties example
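As a hedged sketch (structure per the application manifest's `optionalClaims` property), requesting the `upn` claim in ID tokens together with the `include_externally_authenticated_upn` additional property could look like this:

```json
"optionalClaims": {
    "idToken": [
        {
            "name": "upn",
            "essential": false,
            "additionalProperties": [
                "include_externally_authenticated_upn"
            ]
        }
    ]
}
```

An analogous entry under `accessToken` with `"name": "aud"` and `"additionalProperties": ["use_guid"]` would cover the `aud` configuration from the table above.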
You can configure optional claims for your application through the UI or applica
[![Configure optional claims in the UI](./media/active-directory-optional-claims/token-configuration.png)](./media/active-directory-optional-claims/token-configuration.png)

1. Under **Manage**, select **Token configuration**.
- - The UI option **Token configuration** blade is not available for apps registered in an Azure AD B2C tenant which can be configured by modifying the application manifest. For more information see [Add claims and customize user input using custom policies in Azure Active Directory B2C](../../active-directory-b2c/configure-user-input.md)
+ - The UI option **Token configuration** blade isn't available for apps registered in an Azure AD B2C tenant, which can be configured by modifying the application manifest. For more information, see [Add claims and customize user input using custom policies in Azure Active Directory B2C](../../active-directory-b2c/configure-user-input.md)
1. Select **Add optional claim**.
1. Select the token type you want to configure.
If supported by a specific claim, you can also modify the behavior of the Option
## Configuring directory extension optional claims
-In addition to the standard optional claims set, you can also configure tokens to include extensions. For more info, see [the Microsoft Graph extensionProperty documentation](/graph/api/resources/extensionproperty).
+In addition to the standard optional claims set, you can also configure tokens to include Microsoft Graph extensions. For more info, see [Add custom data to resources using extensions](/graph/extensibility-overview).
-Schema and open extensions are not supported by optional claims, only the AAD-Graph style directory extensions. This feature is useful for attaching additional user information that your app can use ΓÇô for example, an additional identifier or important configuration option that the user has set. See the bottom of this page for an example.
+Schema and open extensions aren't supported by optional claims, only extension attributes and directory extensions. This feature is useful for attaching additional user information that your app can use ΓÇô for example, an additional identifier or important configuration option that the user has set. See the bottom of this page for an example.
-Directory schema extensions are an Azure AD-only feature. If your application manifest requests a custom extension and an MSA user logs in to your app, these extensions will not be returned.
+Directory extensions are an Azure AD-only feature. If your application manifest requests a custom extension and an MSA user logs in to your app, these extensions won't be returned.
### Directory extension formatting
-When configuring directory extension optional claims using the application manifest, use the full name of the extension (in the format: `extension_<appid>_<attributename>`). The `<appid>` must match the ID of the application requesting the claim.
+When configuring directory extension optional claims using the application manifest, use the full name of the extension (in the format: `extension_<appid>_<attributename>`). The `<appid>` is the stripped version of the **appId** (or Client ID) of the application requesting the claim.
Within the JWT, these claims will be emitted with the following name format: `extn.<attributename>`.
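For example, a hypothetical directory extension attribute named `skypeId` would surface in the decoded JWT payload roughly as follows; all values shown are illustrative placeholders:

```json
{
    "aud": "ab603c56-0680-41af-b2f6-832e2a17e237",
    "iss": "https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0",
    "extn.skypeId": "contoso-skype-id"
}
```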
Within the SAML tokens, these claims will be emitted with the following URI form
This section covers the configuration options under optional claims for changing the group attributes used in group claims from the default group objectID to attributes synced from on-premises Windows Active Directory. You can configure groups optional claims for your application through the UI or application manifest.

> [!IMPORTANT]
-> Azure AD limits the number of groups emitted in a token to 150 for SAML assertions and 200 for JWT, including nested groups. For more details on group limits and important caveats for group claims from on-premises attributes, see [Configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
+> Azure AD limits the number of groups emitted in a token to 150 for SAML assertions and 200 for JWT, including nested groups. For more information on group limits and important caveats for group claims from on-premises attributes, see [Configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
**Configuring groups optional claims through the UI:**
This section covers the configuration options under optional claims for changing
Some applications require group information about the user in the role claim. To change the claim type from a group claim to a role claim, add "emit_as_roles" to additional properties. The group values will be emitted in the role claim.
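A minimal sketch of this option in the application manifest, assuming the manifest's `optionalClaims` schema (whether the claim belongs under `accessToken` or `idToken` depends on your scenario):

```json
"optionalClaims": {
    "accessToken": [
        {
            "name": "groups",
            "additionalProperties": [
                "sam_account_name",
                "emit_as_roles"
            ]
        }
    ]
}
```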
- If "emit_as_roles" is used, any application roles configured that the user is assigned will not appear in the role claim.
+ If "emit_as_roles" is used, any application roles configured that the user is assigned won't appear in the role claim.
**Examples:**
There are multiple options available for updating the properties on an applicati
**Example:**
-In the example below, you will use the **Token configuration** UI and **Manifest** to add optional claims to the access, ID, and SAML tokens intended for your application. Different optional claims will be added to each type of token that the application can receive:
+In the example below, you'll use the **Token configuration** UI and **Manifest** to add optional claims to the access, ID, and SAML tokens intended for your application. Different optional claims will be added to each type of token that the application can receive:
- The ID tokens will now contain the UPN for federated users in the full form (`<upn>_<homedomain>#EXT#@<resourcedomain>`).
- The access tokens that other clients request for this application will now include the auth_time claim.
-- The SAML tokens will now contain the skypeId directory schema extension (in this example, the app ID for this app is ab603c56068041afb2f6832e2a17e237). The SAML tokens will expose the Skype ID as `extension_skypeId`.
+- The SAML tokens will now contain the skypeId directory schema extension (in this example, the app ID for this app is ab603c56068041afb2f6832e2a17e237). The SAML tokens will expose the Skype ID as `extension_ab603c56068041afb2f6832e2a17e237_skypeId`.
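Hedging as a sketch only (the `essential` values are assumptions, and the extension name reuses the app ID called out above), the resulting **optionalClaims** section of the manifest might look like this:

```json
"optionalClaims": {
    "idToken": [
        {
            "name": "upn",
            "essential": false,
            "additionalProperties": [ "include_externally_authenticated_upn" ]
        }
    ],
    "accessToken": [
        {
            "name": "auth_time",
            "essential": false
        }
    ],
    "saml2Token": [
        {
            "name": "extension_ab603c56068041afb2f6832e2a17e237_skypeId",
            "source": "user",
            "essential": false
        }
    ]
}
```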
**UI configuration:**
active-directory Active Directory Schema Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-schema-extensions.md
Title: Use Azure AD schema extension attributes in claims
-description: Describes how to use directory schema extension attributes for sending user data to applications in token claims.
+ Title: Use Azure AD directory extension attributes in claims
+description: Describes how to use directory extension attributes for sending user data to applications in token claims.
Last updated 07/29/2020
-# Using directory schema extension attributes in claims
+# Using directory extension attributes in claims
-Directory schema extension attributes provide a way to store additional data in Azure Active Directory on user objects and other directory objects such as groups, tenant details, service principals. Only extension attributes on user objects can be used for emitting claims to applications. This article describes how to use directory schema extension attributes for sending user data to applications in token claims.
+Directory extension attributes, also called Azure AD extensions, provide a way to store additional data in Azure Active Directory on user objects and other directory objects such as groups, tenant details, and service principals. Only extension attributes on user objects can be used for emitting claims to applications. This article describes how to use directory extension attributes for sending user data to applications in token claims.
> [!NOTE]
-> Microsoft Graph provides two other extension mechanisms to customize Graph objects. These are known as Microsoft Graph open extensions and Microsoft Graph schema extensions. See the [Microsoft Graph documentation](/graph/extensibility-overview) for details. Data stored on Microsoft Graph objects using these capabilities are not available as sources for claims in tokens.
+> Microsoft Graph provides three other extension mechanisms to customize Graph objects. These are the extension attributes 1-15, open extensions, and schema extensions. See the [Microsoft Graph documentation](/graph/extensibility-overview) for details. Data stored on Microsoft Graph objects using open and schema extensions isn't available as a source for claims in tokens.
-Directory schema extension attributes are always associated with an application in the tenant and are referenced by the application's *applicationId* in their name.
+Directory extension attributes are always associated with an application in the tenant and are referenced by the application's *appId* in their name.
-The identifier for a directory schema extension attribute is of the form *Extension_xxxxxxxxx_AttributeName*. Where *xxxxxxxxx* is the *applicationId* of the application the extension was defined for.
+The identifier for a directory extension attribute is of the form *extension_xxxxxxxxx_AttributeName*, where *xxxxxxxxx* is the *appId* of the application the extension was defined for, stripped to just the characters 0-9 and A-Z. For example, an attribute named *skypeId* defined for an application whose *appId* is, say, `ab603c56-0680-41af-b2f6-832e2a17e237` is identified as *extension_ab603c56068041afb2f6832e2a17e237_skypeId*.
-## Registering and using directory schema extensions
-Directory schema extension attributes can be registered and populated in one of two ways:
+## Registering and using directory extensions
+Directory extension attributes can be registered and populated in one of two ways:
-- By configuring AD Connect to create them and to sync data into them from on premises AD. See [Azure AD Connect Sync Directory Extensions](../hybrid/how-to-connect-sync-feature-directory-extensions.md).-- By using Microsoft Graph to register, set the values of, and read from [schema extensions](/graph/extensibility-overview). [PowerShell cmdlets](/powershell/azure/active-directory/using-extension-attributes-sample) are also available.
+- By configuring AD Connect to create them and to sync data into them from on-premises AD. See [Azure AD Connect Sync Directory Extensions](../hybrid/how-to-connect-sync-feature-directory-extensions.md).
+- By using Microsoft Graph to register, set the values of, and read from [directory extensions](/graph/extensibility-overview#directory-azure-ad-extensions). [PowerShell cmdlets](/powershell/azure/active-directory/using-extension-attributes-sample) are also available.
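As a hedged sketch of the Microsoft Graph route (the attribute name and target object are assumptions for illustration), a directory extension is registered by posting to the owning application's **extensionProperties**:

```http
POST https://graph.microsoft.com/v1.0/applications/{application-object-id}/extensionProperties
Content-Type: application/json

{
    "name": "skypeId",
    "dataType": "String",
    "targetObjects": [ "User" ]
}
```

The created attribute is then addressable as `extension_<stripped-appId>_skypeId` when setting values on users and when mapping claims.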
-### Emitting claims with data from directory schema extension attributes created with AD Connect
-Directory schema extension attributes created and synced using AD Connect are always associated with the application ID used by AD Connect. They can be used as a source for claims both by configuring them as claims in the **Enterprise Applications** configuration in the Portal UI for SAML applications registered using the Gallery or the non-Gallery application configuration experience under **Enterprise Applications**, and via a claims-mapping policy for applications registered via the Application registration experience. Once a directory extension attribute created via AD Connect is in the directory, it will show in the SAML SSO claims configuration UI.
+### Emitting claims with data from directory extension attributes created with AD Connect
+Directory extension attributes created and synced using AD Connect are always associated with the application ID used by AD Connect. They can be used as a source for claims both by configuring them as claims in the **Enterprise Applications** configuration in the Portal UI for SAML applications registered using the Gallery or the non-Gallery application configuration experience under **Enterprise Applications**, and via a claims-mapping policy for applications registered via the Application registration experience. Once a directory extension attribute created via AD Connect is in the directory, it will show in the SAML SSO claims configuration UI.
-### Emitting claims with data from directory schema extension attributes created for an application using Graph or PowerShell
-If a directory schema extension attribute is registered for an application using Microsoft Graph or PowerShell (via an applications initial setup or provisioning step for instance), the same application can be configured in Azure Active Directory to receive data in that attribute from a user object in a claim when the user signs in. The application can be configured to receive data in directory schema extensions that are registered on that same application using [optional claims](active-directory-optional-claims.md#configuring-directory-extension-optional-claims). These can be set in the application manifest. This enables a multi-tenant application to register directory schema extension attributes for its own use. When the application is provisioned into a tenant the associated directory schema extensions become available to be set on users in that tenant, and to be consumed. Once it's configured in the tenant and consent granted, it can be used to store and retrieve data via graph and to map to claims in tokens the Microsoft identity platform emits to applications.
+### Emitting claims with data from directory extension attributes created for an application using Graph or PowerShell
+If a directory extension attribute is registered for an application using Microsoft Graph or PowerShell (via an application's initial setup or provisioning step, for instance), the same application can be configured in Azure Active Directory to receive data in that attribute from a user object in a claim when the user signs in. The application can be configured to receive data in directory extensions that are registered on that same application using [optional claims](active-directory-optional-claims.md#configuring-directory-extension-optional-claims). These can be set in the application manifest. This enables a multi-tenant application to register directory extension attributes for its own use. When the application is provisioned into a tenant, the associated directory extensions become available to be set on users in that tenant and to be consumed. Once it's configured in the tenant and consent is granted, it can be used to store and retrieve data via Microsoft Graph and to map to claims in tokens the Microsoft identity platform emits to applications.
-Directory schema extension attributes can be registered and populated for any application.
+Directory extension attributes can be registered and populated for any application.
-If an application needs to send claims with data from an extension attribute registered on a different application, a [claims mapping policy](active-directory-claims-mapping.md) must be used to map the extension attribute to the claim. A common pattern for managing directory schema extension attributes is to create an application specifically to be the point of registration for all the schema extensions you need. It doesn't have to be a real application and this technique means that all the extensions have the same application ID in their name.
+If an application needs to send claims with data from an extension attribute registered on a different application, a [claims mapping policy](active-directory-claims-mapping.md) must be used to map the extension attribute to the claim. A common pattern for managing directory extension attributes is to create an application specifically to be the point of registration for all the directory extensions you need. It doesn't have to be a real application, and this technique means that all the extensions have the same appId in their name.
-For example, here is a claims-mapping policy to emit a single claim from a directory schema extension attribute in an OAuth/OIDC token:
+For example, here is a claims-mapping policy to emit a single claim from a directory extension attribute in an OAuth/OIDC token:
```json
{
For example, here is a claims-mapping policy to emit a single claim from a direc
}
```
-Where *xxxxxxx* is the application ID the extension was registered with.
+Where *xxxxxxx* is the appId (or Client ID) of the application that the extension was registered with.
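For reference, a minimal sketch of such a policy definition, assuming a hypothetical extension named *test* that is emitted as the JWT claim `test`:

```json
{
    "ClaimsMappingPolicy": {
        "Version": 1,
        "IncludeBasicClaimSet": "true",
        "ClaimsSchema": [
            {
                "Source": "user",
                "ExtensionID": "extension_xxxxxxx_test",
                "JwtClaimType": "test"
            }
        ]
    }
}
```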
> [!TIP]
-> Case consistency is important when setting directory extension attributes on objects. Extension attribute names aren't case sensitive when being set up, but they are case sensitive when being read from the directory by the token service. If an extension attribute is set on a user object with the name "LegacyId" and on another user object with the name "legacyid", when the attribute is mapped to a claim using the name "LegacyId" the data will be successfully retrieved and the claim included in the token for the first user but not the second.
+> Case consistency is important when setting directory extension attributes on objects. Extension attribute names aren't case sensitive when being set up, but they are case sensitive when being read from the directory by the token service. If an extension attribute is set on a user object with the name "LegacyId" and on another user object with the name "legacyid", when the attribute is mapped to a claim using the name "LegacyId", the data will be successfully retrieved and the claim included in the token for the first user but not the second.
> > The "Id" parameter in the claims schema used for built-in directory attributes is "ExtensionID" for directory extension attributes. ## Next steps - Learn how to [add custom or additional claims to the SAML 2.0 and JSON Web Tokens (JWT) tokens](active-directory-optional-claims.md).-- Learn how to [customize claims emitted in tokens for a specific app](active-directory-claims-mapping.md).
+- Learn how to [customize claims emitted in tokens for a specific app](active-directory-claims-mapping.md).
active-directory Howto Create Self Signed Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-self-signed-certificate.md
Title: Create a self-signed public certificate to authenticate your application description: Create a self-signed public certificate to authenticate your application. -+
Last updated 08/10/2021-+ #Customer intent: As an application developer, I want to understand the basic concepts of authentication and authorization in the Microsoft identity platform.
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
Previously updated : 06/13/2022 Last updated : 08/10/2022
The `error` field has several possible values - review the protocol documentatio
| AADSTS50146 | MissingCustomSigningKey - This app is required to be configured with an app-specific signing key. It is either not configured with one, or the key has expired or isn't yet valid. |
| AADSTS50147 | MissingCodeChallenge - The size of the code challenge parameter isn't valid. |
| AADSTS501481 | The Code_Verifier doesn't match the code_challenge supplied in the authorization request.|
+| AADSTS501491 | InvalidCodeChallengeMethodInvalidSize - Invalid size of Code_Challenge parameter.|
| AADSTS50155 | DeviceAuthenticationFailed - Device authentication failed for this user. |
| AADSTS50158 | ExternalSecurityChallenge - External security challenge was not satisfied. |
| AADSTS50161 | InvalidExternalSecurityChallengeConfiguration - Claims sent by external provider isn't enough or Missing claim requested to external provider. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS70000 | InvalidGrant - Authentication failed. The refresh token isn't valid. Error may be due to the following reasons:<ul><li>Token binding header is empty</li><li>Token binding hash does not match</li></ul> |
| AADSTS70001 | UnauthorizedClient - The application is disabled. To learn more, see the troubleshooting article for error [AADSTS70001](/troubleshoot/azure/active-directory/error-code-aadsts70001-app-not-found-in-directory). |
| AADSTS70002 | InvalidClient - Error validating the credentials. The specified client_secret does not match the expected value for this client. Correct the client_secret and try again. For more info, see [Use the authorization code to request an access token](v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token). |
+| AADSTS700025 | InvalidClientPublicClientWithCredential - Client is public so neither 'client_assertion' nor 'client_secret' should be presented. |
| AADSTS70003 | UnsupportedGrantType - The app returned an unsupported grant type. |
| AADSTS700030 | Invalid certificate - subject name in certificate isn't authorized. SubjectNames/SubjectAlternativeNames (up to 10) in token certificate are: {certificateSubjects}. |
| AADSTS70004 | InvalidRedirectUri - The app returned an invalid redirect URI. The redirect address specified by the client does not match any configured addresses or any addresses on the OIDC approve list. |
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
For each claim schema entry defined in this property, certain information is req
**Source/ID pair:** The Source and ID elements define where the data in the claim is sourced from.
-**Source/ExtensionID pair:** The Source and ExtensionID elements define the directory schema extension attribute where the data in the claim is sourced from. For more information, see [Using directory schema extension attributes in claims](active-directory-schema-extensions.md).
+**Source/ExtensionID pair:** The Source and ExtensionID elements define the directory extension attribute where the data in the claim is sourced from. For more information, see [Using directory extension attributes in claims](active-directory-schema-extensions.md).
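As a hedged fragment (the extension and claim names are hypothetical), the pair appears inside a **ClaimsSchema** entry like this:

```json
{
    "Source": "user",
    "ExtensionID": "extension_ab603c56068041afb2f6832e2a17e237_skypeId",
    "JwtClaimType": "skypeId"
}
```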
Set the Source element to one of the following values:
The ID element identifies which property on the source provides the value for th
| User | lastpasswordchangedatetime | Last Password Change Date/Time |
| User | mobilephone | Mobile Phone |
| User | officelocation | Office Location |
-| User | onpremisesdomainname | On-Premises Domain Name |
-| User | onpremisesimmutableid | On-Premises Imutable ID |
-| User | onpremisessyncenabled | On-Premises Sync Enabled |
-| User | preferreddatalocation | Preffered Data Location |
+| User | onpremisesdomainname | On-premises Domain Name |
+| User | onpremisesimmutableid | On-premises Immutable ID |
+| User | onpremisessyncenabled | On-premises Sync Enabled |
+| User | preferreddatalocation | Preferred Data Location |
| User | proxyaddresses | Proxy Addresses |
| User | usertype | User Type |
| User | telephonenumber | Business Phones / Office Phones |
Based on the method chosen, a set of inputs and outputs is expected. Define the
|TransformationMethod|Expected input|Expected output|Description|
|--|--|--|--|
|Join|string1, string2, separator|outputClaim|Joins input strings by using a separator in between. For example: string1:"foo@bar.com", string2:"sandbox", separator:"." results in outputClaim:"foo@bar.com.sandbox"|
-|ExtractMailPrefix|Email or UPN|extracted string|ExtensionAttributes 1-15 or any other Schema Extensions, which are storing a UPN or email address value for the user for example, johndoe@contoso.com. Extracts the local part of an email address. For example: mail:"foo@bar.com" results in outputClaim:"foo". If no \@ sign is present, then the original input string is returned as is.|
+|ExtractMailPrefix|Email or UPN|extracted string|ExtensionAttributes 1-15 or any other directory extensions, which are storing a UPN or email address value for the user, for example, johndoe@contoso.com. Extracts the local part of an email address. For example: mail:"foo@bar.com" results in outputClaim:"foo". If no \@ sign is present, then the original input string is returned as is.|
**InputClaims:** Use an InputClaims element to pass the data from a claim schema entry to a transformation. It has three attributes: **ClaimTypeReferenceId**, **TransformationClaimType**, and **TreatAsMultiValue** (Preview).
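To make the shape concrete, here's a hedged sketch pairing **ClaimsSchema** entries with the Join method from the table above (the IDs and claim names are hypothetical):

```json
"ClaimsSchema": [
    {
        "Source": "user",
        "ID": "extensionattribute1"
    },
    {
        "Source": "transformation",
        "ID": "DataJoin",
        "TransformationId": "JoinTheData",
        "JwtClaimType": "JoinedData"
    }
],
"ClaimsTransformation": [
    {
        "ID": "JoinTheData",
        "TransformationMethod": "Join",
        "InputClaims": [
            { "ClaimTypeReferenceId": "extensionattribute1", "TransformationClaimType": "string1" }
        ],
        "InputParameters": [
            { "ID": "string2", "Value": "sandbox" },
            { "ID": "separator", "Value": "." }
        ],
        "OutputClaims": [
            { "ClaimTypeReferenceId": "DataJoin", "TransformationClaimType": "outputClaim" }
        ]
    }
]
```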
Based on the method chosen, a set of inputs and outputs is expected. Define the
- To learn how to customize the claims emitted in tokens for a specific application in their tenant using PowerShell, see [How to: Customize claims emitted in tokens for a specific app in a tenant](active-directory-claims-mapping.md)
- To learn how to customize claims issued in the SAML token through the Azure portal, see [How to: Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)
-- To learn more about extension attributes, see [Using directory schema extension attributes in claims](active-directory-schema-extensions.md).
+- To learn more about extension attributes, see [Using directory extension attributes in claims](active-directory-schema-extensions.md).
active-directory Tutorial V2 Angular Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-angular-auth-code.md
Register your **Redirect URI** value as **http://localhost:4200/** and type as '
```javascript
import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
+ import { BrowserUtils } from '@azure/msal-browser';
import { HomeComponent } from './home/home.component'; import { ProfileComponent } from './profile/profile.component';
Register your **Redirect URI** value as **http://localhost:4200/** and type as '
@NgModule({
  imports: [RouterModule.forRoot(routes, {
- initialNavigation: !isIframe ? 'enabled' : 'disabled' // Don't perform initial navigation in iframes
+ // Don't perform initial navigation in iframes or popups
+ initialNavigation: !BrowserUtils.isInIframe() && !BrowserUtils.isInPopup() ? 'enabledNonBlocking' : 'disabled' // Set to enabledBlocking to use Angular Universal
  })],
  exports: [RouterModule]
})
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
device.objectId -ne null
## Extension properties and custom extension properties
-Extension attributes and custom extension properties are supported as string properties in dynamic membership rules. [Extension attributes](/graph/api/resources/onpremisesextensionattributes) are synced from on-premises Windows Server Active Directory and take the format of "ExtensionAttributeX", where X equals 1 - 15. Here's an example of a rule that uses an extension attribute as a property:
+Extension attributes and custom extension properties are supported as string properties in dynamic membership rules. [Extension attributes](/graph/api/resources/onpremisesextensionattributes) can be synced from on-premises Windows Server Active Directory or updated using Microsoft Graph and take the format of "ExtensionAttributeX", where X equals 1 - 15. Here's an example of a rule that uses an extension attribute as a property:
```
(user.extensionAttribute15 -eq "Marketing")
```
-[Custom extension properties](../hybrid/how-to-connect-sync-feature-directory-extensions.md) are synced from on-premises Windows Server Active Directory or from a connected SaaS application and are of the format of `user.extension_[GUID]_[Attribute]`, where:
+[Custom extension properties](../hybrid/how-to-connect-sync-feature-directory-extensions.md) can be synced from on-premises Windows Server Active Directory, from a connected SaaS application, or created using Microsoft Graph, and take the format `user.extension_[GUID]_[Attribute]`, where:
-- [GUID] is the unique identifier in Azure AD for the application that created the property in Azure AD
+- [GUID] is the stripped version of the unique identifier in Azure AD for the application that created the property. It contains only characters 0-9 and A-Z
- [Attribute] is the name of the property as it was created

An example of a rule that uses a custom extension property is:
An example of a rule that uses a custom extension property is:
user.extension_c272a57b722d4eb29bfe327874ae79cb_OfficeNumber -eq "123"
```
+Custom extension properties are also called directory or Azure AD extension properties.
+ The custom property name can be found in the directory by querying a user's property using Graph Explorer and searching for the property name. Also, you can now select the **Get custom extension properties** link in the dynamic user group rule builder to enter a unique app ID and receive the full list of custom extension properties to use when creating a dynamic membership rule. This list can also be refreshed to get any new custom extension properties for that app. Extension attributes and custom extension properties must be from applications in your tenant. For more information, see [Use the attributes in dynamic groups](../hybrid/how-to-connect-sync-feature-directory-extensions.md#use-the-attributes-in-dynamic-groups) in the article [Azure AD Connect sync: Directory extensions](../hybrid/how-to-connect-sync-feature-directory-extensions.md).
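As a hedged illustration of that Graph Explorer lookup (reusing the example property above; substitute your own user ID and property name):

```http
GET https://graph.microsoft.com/v1.0/users/{user-id}?$select=displayName,extension_c272a57b722d4eb29bfe327874ae79cb_OfficeNumber
```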
active-directory B2b Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-fundamentals.md
Previously updated : 03/31/2022- Last updated : 08/10/2022
This article contains recommendations and best practices for business-to-business (B2B) collaboration in Azure Active Directory (Azure AD). > [!IMPORTANT]
-> We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
+> The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. Learn more about [configuring email one-time passcode](one-time-passcode.md) and [plans for other fallback authentication methods](one-time-passcode.md#disable-email-one-time-passcode), such as unmanaged ("viral") accounts and Microsoft accounts.
## B2B recommendations
active-directory One Time Passcode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/one-time-passcode.md
Previously updated : 04/26/2022- Last updated : 08/10/2022
You can enable this feature at any time in the Azure portal by configuring the E
> [!IMPORTANT]
>
-> - We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
+> - The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. This feature provides a seamless fallback authentication method for your guest users. If you don't want to use this feature, you can [disable it](#disable-email-one-time-passcode), in which case users will redeem invitations using unmanaged ("viral") Azure AD accounts as a fallback. Soon, we'll stop creating new unmanaged accounts and tenants during invitation redemption, and we'll enforce redemption with a Microsoft account instead.
> - Email one-time passcode settings have moved in the Azure portal from **External collaboration settings** to **All identity providers**.
-
> [!NOTE]
> One-time passcode users must sign in using a link that includes the tenant context (for example, `https://myapps.microsoft.com/?tenantid=<tenant id>` or `https://portal.azure.com/<tenant id>`, or in the case of a verified domain, `https://myapps.microsoft.com/<verified domain>.onmicrosoft.com`). Direct links to applications and resources also work as long as they include the tenant context. Guest users are currently unable to sign in using endpoints that have no tenant context. For example, using `https://myapps.microsoft.com` or `https://portal.azure.com` will result in an error.
Guest user teri@gmail.com is invited to Fabrikam, which doesn't have Google fede
## Disable email one-time passcode
-We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can disable it. Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
+The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. This feature provides a seamless fallback authentication method for your guest users. If you don't want to use this feature, you can disable it, in which case users will redeem invitations using unmanaged ("viral") Azure AD accounts as a fallback. Soon, we'll stop creating new unmanaged accounts and tenants during invitation redemption, and we'll enforce redemption with a Microsoft account instead.
> [!NOTE]
>
For more information about current limitations, see [Azure AD B2B in government
## Frequently asked questions
-**Why do I still see "Automatically enable email one-time passcode for guests starting October 2021" selected in my email one-time passcode settings?**
-
-We've begun globally rolling out the change to enable email one-time passcode. In the meantime, you might still see "Automatically enable email one-time passcode for guests starting October 2021" selected in your email one-time passcode settings.
-
**What happens to my existing guest users if I enable email one-time passcode?**

Your existing guest users won't be affected if you enable email one-time passcode, as your existing users are already past the point of redemption. Enabling email one-time passcode will only affect future redemption activities where new guest users are redeeming into the tenant.
-**What is the user experience for guests during global rollout?**
-
-The user experience depends on your current email one-time passcode settings, whether the user already has an unmanaged account, and whether you [reset a user's redemption status](reset-redemption-status.md). The following table describes these scenarios.
-
-|User scenario |With email one-time passcode enabled prior to rollout |With email one-time passcode disabled prior to rollout |
-||||
-|**User has an existing unmanaged Azure AD account (not from redemption in your tenant)** |Both before and after rollout, the user redeems invitations using email one-time passcode. |Both before and after rollout, the user continues signing in with their unmanaged account.<sup>1</sup> |
-|**User previously redeemed an invitation to your tenant using an unmanaged Azure AD account** |Both before and after rollout, the user continues to use their unmanaged account. Or, you can [reset their redemption status](reset-redemption-status.md) so they can redeem a new invitation using email one-time passcode. |Both before and after rollout, the user continues to use their unmanaged account, even if you reset their redemption status and reinvite them.<sup>1</sup> |
-|**User with no unmanaged Azure AD account** |Both before and after rollout, the user redeems invitations using email one-time passcode. |Both before and after rollout, the user redeems invitations using an unmanaged account.<sup>2</sup> |
+**What is the user experience when email one-time passcode is disabled?**
-<sup>1</sup> In a separate release, we'll roll out a change that will enforce redemption with a Microsoft account. To prevent your users from having to manage both an unmanaged Azure AD account and an MSA, we strongly encourage you to enable email one-time passcode.
+If you've disabled the email one-time passcode feature, the user redeems invitations using an unmanaged ("viral") account as a fallback. In a separate release, we'll stop creating new, unmanaged Azure AD accounts and tenants during B2B collaboration invitation redemption and will enforce redemption with a Microsoft account.
-<sup>2</sup> The user might see a sign-in error when they're redeeming a direct application link and they weren't added to your directory in advance. In a separate release, we'll roll out a change that will enforce redemption and future sign-ins with a Microsoft account.
+Also, when email one-time passcode is disabled, users might see a sign-in error when they're redeeming a direct application link and they weren't added to your directory in advance.
For more information about the different redemption pathways, see [B2B collaboration invitation redemption](redemption-experience.md).
-**Does this mean the "No account? Create one!" option for self-service sign-up is going away?**
+**Will the "No account? Create one!" option for self-service sign-up go away?**
-It's easy to get [self-service sign-up in the context of External Identities](self-service-sign-up-overview.md) confused with self-service sign-up for email-verified users, but they're two different features. The feature that's going away is [self-service sign-up with email-verified users](../enterprise-users/directory-self-service-signup.md), which results in your guests creating an unmanaged Azure AD account. However, self-service sign-up for External Identities will continue to be available, which results in your guests signing up to your organization with a [variety of identity providers](identity-providers.md).
+No. It's easy to get [self-service sign-up in the context of External Identities](self-service-sign-up-overview.md) confused with self-service sign-up for email-verified users, but they're two different features. The unmanaged ("viral") feature that's going away is [self-service sign-up with email-verified users](../enterprise-users/directory-self-service-signup.md), which results in your guests creating an unmanaged Azure AD account. However, self-service sign-up for External Identities will continue to be available, which results in your guests signing up to your organization with a [variety of identity providers](identity-providers.md).
**What does Microsoft recommend we do with existing Microsoft accounts (MSA)?**

When we support the ability to disable Microsoft Account in the Identity providers settings (not available today), we strongly recommend you disable Microsoft Account and enable email one-time passcode. Then you should [reset the redemption status](reset-redemption-status.md) of existing guests with Microsoft accounts so that they can re-redeem using email one-time passcode authentication and use email one-time passcode to sign in going forward.
-**Does this change include SharePoint and OneDrive integration with Azure AD B2B?**
+**Regarding the change to enable email one-time-passcode by default, does this include SharePoint and OneDrive integration with Azure AD B2B?**
No, the global rollout of the change to enable email one-time passcode by default doesn't include enabling SharePoint and OneDrive integration with Azure AD B2B. To learn how to enable integration so that collaboration on SharePoint and OneDrive uses B2B capabilities, or how to disable this integration, see [SharePoint and OneDrive Integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration).
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
Previously updated : 04/07/2022- Last updated : 08/10/2022
When you add a guest user to your directory, the guest user account has a consen
> [!IMPORTANT]
> - **Starting July 12, 2021**, if Azure AD B2B customers set up new Google integrations for use with self-service sign-up for their custom or line-of-business applications, authentication with Google identities won't work until authentications are moved to system web-views. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
> - **Starting September 30, 2021**, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If your apps authenticate users with an embedded web-view and you're using Google federation with [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md) or Azure AD B2B for [external user invitations](google-federation.md) or [self-service sign-up](identity-providers.md), Google Gmail users won't be able to authenticate. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
-> - We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
+> - The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. Learn more about [configuring email one-time passcode](one-time-passcode.md) and [plans for other fallback authentication methods](one-time-passcode.md#disable-email-one-time-passcode), such as unmanaged ("viral") accounts and Microsoft accounts.
## Redemption and sign-in through a common endpoint
When a user clicks the **Accept invitation** link in an [invitation email](invit
![Screenshot showing the redemption flow diagram](media/redemption-experience/invitation-redemption-flow.png)
-**If the user's User Principal Name (UPN) matches with both an existing Azure AD and personal MSA account, the user will be prompted to choose which account they want to redeem with. If Email OTP is enabled, existing unmanaged "viral" Azure AD accounts will be ignored (See step #9).*
+**If the user's User Principal Name (UPN) matches with both an existing Azure AD and personal Microsoft account, the user is prompted to choose which account they want to redeem with. If email one-time passcode is enabled, existing unmanaged ("viral") Azure AD accounts will be ignored (See step #9).*
1. Azure AD performs user-based discovery to determine if the user exists in an [existing Azure AD tenant](./what-is-b2b.md#easily-invite-guest-users-from-the-azure-ad-portal).
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
Previously updated : 05/17/2022 Last updated : 08/10/2022 tags: active-directory
Here are some remedies for common problems with Azure Active Directory (Azure AD
>
> - **Starting July 12, 2021**, if Azure AD B2B customers set up new Google integrations for use with self-service sign-up for their custom or line-of-business applications, authentication with Google identities won't work until authentications are moved to system web-views. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
> - **Starting September 30, 2021**, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If your apps authenticate users with an embedded web-view and you're using Google federation with [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md) or Azure AD B2B for [external user invitations](google-federation.md) or [self-service sign-up](identity-providers.md), Google Gmail users won't be able to authenticate. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
- > - We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
-
+ > - The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. Learn more about [configuring email one-time passcode](one-time-passcode.md) and [plans for other fallback authentication methods](one-time-passcode.md#disable-email-one-time-passcode), such as unmanaged ("viral") accounts and Microsoft accounts.
## Guest sign-in fails with error code AADSTS50020
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
Previously updated : 08/05/2022- Last updated : 08/10/2022
The following table describes B2B collaboration users based on how they authenti
- **Internal member**: These users are generally considered employees of your organization. The user authenticates internally via Azure AD, and the user object created in the resource Azure AD directory has a UserType of Member.

> [!IMPORTANT]
-> We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
+> The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. Learn more about [configuring email one-time passcode](one-time-passcode.md) and [plans for other fallback authentication methods](one-time-passcode.md#disable-email-one-time-passcode), such as unmanaged ("viral") accounts and Microsoft accounts.
## Invitation redemption
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md
Previously updated : 06/30/2022- Last updated : 08/10/2022
A simple invitation and redemption process lets partners use their own credentia
Developers can use Azure AD business-to-business APIs to customize the invitation process or write applications like self-service sign-up portals. For licensing and pricing information related to guest users, refer to [Azure Active Directory External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/).

> [!IMPORTANT]
-> We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
+> The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. Learn more about [configuring email one-time passcode](one-time-passcode.md) and [plans for other fallback authentication methods](one-time-passcode.md#disable-email-one-time-passcode), such as unmanaged ("viral") accounts and Microsoft accounts.
## Collaborate with any partner using their identities
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
Security defaults make it easier to help protect your organization from these id
## Enabling security defaults
-If your tenant was created on or after October 22, 2019, security defaults may be enabled in your tenant. To protect all of our users, security defaults are being rolled out to all new tenants at creation.
+If your tenant was created on or after October 22, 2019, security defaults may be enabled in your tenant. To protect all of our users, security defaults are being rolled out to all new tenants at creation.
To enable security defaults in your directory:
You may choose to [disable password expiration](../authentication/concept-sspr-p
For more detailed information about emergency access accounts, see the article [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
+### B2B guest users
+
+Any B2B guest users who access your directory will be subject to the same controls as your organization's users.
+
### Disabled MFA status

If your organization is a previous user of per-user based Azure AD Multi-Factor Authentication, don't be alarmed if you don't see users in an **Enabled** or **Enforced** status when you look at the Multi-Factor Auth status page. **Disabled** is the appropriate status for users who are using security defaults or Conditional Access based Azure AD Multi-Factor Authentication.
active-directory How To Connect Install Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-automatic-upgrade.md
Azure AD Connect automatic upgrade is a feature that regularly checks for newer versions of Azure AD Connect. If your server is enabled for automatic upgrade and a newer version is found for which your server is eligible, it will perform an automatic upgrade to that newer version. Note that for security reasons the agent that performs the automatic upgrade validates the new build of Azure AD Connect based on the digital signature of the downloaded version.
+>[!NOTE]
+> Azure Active Directory (AD) Connect follows the [Modern Lifecycle Policy](https://docs.microsoft.com/lifecycle/policies/modern). Changes for products and services under the Modern Lifecycle Policy may be more frequent and require customers to be alert for forthcoming modifications to their product or service.
+>
+> Products governed by the Modern Lifecycle Policy follow a [continuous support and servicing model](https://docs.microsoft.com/lifecycle/overview/product-end-of-support-overview). Customers must take the latest update to remain supported.
+>
+> For products and services governed by the Modern Lifecycle Policy, Microsoft's policy is to provide a minimum 30 days' notification when customers are required to take action in order to avoid significant degradation to the normal use of the product or service.
+
## Overview

Making sure your Azure AD Connect installation is always up to date has never been easier with the **automatic upgrade** feature. This feature is enabled by default for express installations and DirSync upgrades. When a new version is released, your installation is automatically upgraded. Automatic upgrade is enabled by default for the following:
active-directory Myapps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/myapps-overview.md
Previously updated : 05/05/2022 Last updated : 08/11/2022 + #Customer intent: As an Azure AD administrator, I want to make applications available to users in the My Apps portal. # My Apps portal overview
-[My Apps](https://myapps.microsoft.com) is a web-based portal that is used for managing and launching applications in Azure Active Directory (Azure AD). To work with applications in My Apps, use an organizational account in Azure AD and obtain access granted by the Azure AD administrator. My Apps is separate from the Azure portal and doesn't require users to have an Azure subscription or Microsoft 365 subscription.
+My Apps is a web-based portal that is used for managing and launching applications in Azure Active Directory (Azure AD). To work with applications in My Apps, use an organizational account in Azure AD and obtain access granted by the Azure AD administrator. My Apps is separate from the Azure portal and doesn't require users to have an Azure subscription or Microsoft 365 subscription.
Users access the My Apps portal to:
For more information, see [Properties of an enterprise application](application-
### Discover applications
-When signed in to the My Apps portal, the applications that have been made visible are shown. For an application to be visible in the My Apps portal, set the appropriate properties in the Azure portal. Also in the Azure portal, assign a user or group with the appropriate members.
+When signed in to the [My Apps](https://myapps.microsoft.com) portal, the applications that have been made visible are shown. For an application to be visible in the My Apps portal, set the appropriate properties in the [Azure portal](https://portal.azure.com). Also in the Azure portal, assign a user or group with the appropriate members.
In the My Apps portal, to search for an application, enter an application name in the search box at the top of the page to find an application. The applications that are listed can be formatted in **List view** or a **Grid view**.
active-directory Restore Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md
zone_pivot_groups: enterprise-apps-minus-portal
In this article, you'll learn how to restore a soft deleted enterprise application in your Azure Active Directory (Azure AD) tenant. Soft deleted enterprise applications can be restored from the recycle bin within the first 30 days after their deletion. After the 30-day window, the enterprise application is permanently deleted and can't be restored.
-When an [application registration is deleted](../develop/howto-remove-app.md) in its home tenant through app registrations in the Azure portal, the enterprise application, which is its corresponding service principal also gets deleted. Restoring the deleted application registration through the Azure portal won't restore its corresponding service principal, but will instead create a new one.
-
-Currently, the [soft deleted enterprise applications](delete-application-portal.md) can't be viewed or restored through the Azure portal. Therefore, if you had configurations on the previous enterprise application, you can't restore them through the Azure portal. To recover your previous configurations, first delete the enterprise application that was restored through the Azure portal, then follow the steps in this article to recover the soft deleted enterprise application. For more information on frequently asked questions about deletion and recovery of applications, see [Deleting and recovering applications FAQs](delete-recover-faq.yml).
--
+>[!IMPORTANT]
+>If you deleted an [application registration](../develop/howto-remove-app.md) in its home tenant through app registrations in the Azure portal, the enterprise application, which is its corresponding service principal, also got deleted. If you restore the deleted application registration through the Azure portal, its corresponding service principal won't be restored. Instead, this action will create a new service principal. Therefore, if you had configurations on the previous enterprise application, you can't restore them through the Azure portal. Use the workaround provided in this article to recover the deleted service principal and its previous configurations.
## Prerequisites

To restore an enterprise application, you need:
To restore an enterprise application, you need:
- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
- A [soft deleted enterprise application](delete-application-portal.md) in your tenant.
-
## View restorable enterprise applications
+To recover your enterprise application with its previous configurations, first delete the enterprise application that was restored through the Azure portal, then take the following steps to recover the soft deleted enterprise application. For more information on frequently asked questions about deletion and recovery of applications, see [Deleting and recovering applications FAQs](delete-recover-faq.yml).
:::zone pivot="aad-powershell"

> [!IMPORTANT]
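As a hedged sketch of one workaround path (shown with the Microsoft Graph deleted items API rather than the PowerShell steps covered in this article; the object ID is a placeholder), you can list soft-deleted service principals and restore the one you need:

```http
GET https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.servicePrincipal

POST https://graph.microsoft.com/v1.0/directory/deletedItems/{service-principal-object-id}/restore
```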
active-directory Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tenant-restrictions.md
For specific details, refer to your proxy server documentation.
## Blocking consumer applications
-Applications from Microsoft that support both consumer accounts and organizational accounts, like OneDrive or Microsoft Learn can sometimes be hosted on the same URL. This means that users that must access that URL for work purposes also have access to it for personal use, which may not be permitted under your operating guidelines.
+Applications from Microsoft that support both consumer accounts and organizational accounts, such as OneDrive, can sometimes be hosted on the same URL. This means that users who must access that URL for work purposes also have access to it for personal use, which may not be permitted under your operating guidelines.
Some organizations attempt to fix this by blocking `login.live.com` in order to block personal accounts from authenticating. This has several downsides:
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Knowledge Administrator](#knowledge-administrator) | Can configure knowledge, learning, and other intelligent features. | b5a8dcf3-09d5-43a9-a639-8e29ef291470 |
> | [Knowledge Manager](#knowledge-manager) | Can organize, create, manage, and promote topics and knowledge. | 744ec460-397e-42ad-a462-8b3f9747a02c |
> | [License Administrator](#license-administrator) | Can manage product licenses on users and groups. | 4d6ac14f-3453-41d0-bef9-a3e0c569773a |
+> | [Lifecycle Workflows Administrator](#lifecycle-workflows-administrator) | Create and manage all aspects of workflows and tasks associated with Lifecycle Workflows in Azure AD. | 59d46f88-662b-457b-bceb-5c3809e5908f |
> | [Message Center Privacy Reader](#message-center-privacy-reader) | Can read security messages and updates in Office 365 Message Center only. | ac16e43d-7b2d-40e0-ac05-243ff356ab5b |
> | [Message Center Reader](#message-center-reader) | Can read messages and updates for their organization in Office 365 Message Center only. | 790c1fb9-7f7d-4f88-86a1-ef1f95c05c1b |
> | [Modern Commerce User](#modern-commerce-user) | Can manage commercial purchases for a company, department or team. | d24aef57-1500-4070-84db-2666f29cf966 |
Users in this role can add, remove, and update license assignments on users, gro
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+## Lifecycle Workflows Administrator
+
+Assign the Lifecycle Workflows Administrator role to users who need to do the following tasks:
+
+- Create and manage all aspects of workflows and tasks associated with Lifecycle Workflows in Azure AD
+- Check the execution of scheduled workflows
+- Launch on-demand workflow runs
+- Inspect workflow execution logs
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | -- | -- |
+> | microsoft.directory/lifecycleManagement/workflows/allProperties/allTasks | Manage all aspects of lifecycle management workflows and tasks in Azure AD |
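As a hedged sketch (the principal ID is a placeholder), the role can also be assigned programmatically through the Microsoft Graph role assignment API using the template ID from the table at the top of this article:

```http
POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
Content-Type: application/json

{
    "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
    "roleDefinitionId": "59d46f88-662b-457b-bceb-5c3809e5908f",
    "principalId": "{user-object-id}",
    "directoryScopeId": "/"
}
```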
+ ## Message Center Privacy Reader Users in this role can monitor all notifications in the Message Center, including data privacy messages. Message Center Privacy Readers get email notifications including those related to data privacy and they can unsubscribe using Message Center Preferences. Only the Global Administrator and the Message Center Privacy Reader can read data privacy messages. Additionally, this role contains the ability to view groups, domains, and subscriptions. This role has no permission to view, create, or manage service requests.
active-directory Zendesk Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zendesk-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zendesk.
## Configure Zendesk SSO
+You can set up one SAML configuration for team members and a second SAML configuration for end users.
+ 1. To automate the configuration within **Zendesk**, you need to install the **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.

    ![Screenshot shows the Install the extension button.](./media/target-process-tutorial/install_extension.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zendesk.
![Setup configuration](common/setup-sso.png)
-1. If you want to setup Zendesk manually, open a new web browser window and sign into your Zendesk company site as an administrator and perform the following steps:
+1. If you want to set up Zendesk manually, open a new web browser window, sign in to your Zendesk company site as an administrator, and perform the following steps:
-1. In the **Zendesk Admin Center**, Go to the **Account -> Security -> Single sign-on** page and click **Configure** in the **SAML**.
+1. In the **Zendesk Admin Center**, go to **Account -> Security -> Single sign-on**, then click **Create SSO configuration** and select **SAML**.
- ![Screenshot shows the Zendesk Admin Center with Security settings selected.](./media/zendesk-tutorial/settings.png "Security")
+ ![Screenshot shows the Zendesk Admin Center with Security settings selected.](https://zen-marketing-documentation.s3.amazonaws.com/docs/en/zendesk_create_sso_configuration.png "Security")
1. Perform the following steps on the **Single sign-on** page.
- ![Single sign-on](./media/zendesk-tutorial/saml-configuration.png "Single sign-on")
+ ![Single sign-on](https://zen-marketing-documentation.s3.amazonaws.com/docs/en/zendesk_saml_configuration_settings.png "Single sign-on")
+
+ a. In **Configuration name**, enter a name for your configuration. Up to two SAML and two JWT configurations are possible.
- a. Check the **Enabled**.
-
 b. In the **SAML SSO URL** textbox, paste the **Login URL** value that you copied from the Azure portal.

 c. In the **Certificate fingerprint** textbox, paste the **Thumbprint** value of the certificate that you copied from the Azure portal.
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zendesk.
e. Click **Save**.
+After creating your SAML configuration, you must activate it by assigning it to end users or team members.
+
+1. In the **Zendesk Admin Center**, go to **Account -> Security** and select either **Team member authentication** or **End user authentication**.
+
+1. If you're assigning the configuration to team members, select **External authentication** to show the authentication options. These options are already displayed for end users.
+
+1. Click the **Single sign-on (SSO)** option in the **External authentication** section, then select the name of the SSO configuration you want to use.
+
+1. Select the primary SSO method for this group of users if you have more than one authentication method assigned to the group. This option sets the default method used when users go to a page that requires authentication.
+
+1. Click **Save**.
+
 ### Create Zendesk test user

The objective of this section is to create a user called Britta Simon in Zendesk. Zendesk supports automatic user provisioning, which is enabled by default. You can find more details on how to configure automatic user provisioning in the [Zendesk provisioning tutorial](Zendesk-provisioning-tutorial.md).
active-directory Credential Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/credential-design.md
Verifiable credentials definitions are made up of two components, *display* definitions and *rules* definitions.
This article explains how to modify both types of definitions to meet the requirements of your organization.
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 ## Display definition: wallet credential visuals

Microsoft Entra Verified ID offers a limited set of options that can be used to reflect your brand. This article provides instructions on how to customize your credentials, and best practices for designing credentials that look great after they're issued to users.
The rules definition is a simple JSON document that describes important properties of verifiable credentials.
### Attestations
-The following four attestation types are currently available to be configured in the rules definition. They're used by the verifiable credential issuing service to insert claims into a verifiable credential and attest to that information with your decentralized identifier (DID).
+The following four attestation types are currently available to be configured in the rules definition. They are different ways of providing the claims that the Entra Verified ID issuing service inserts into a verifiable credential and attests to with your decentralized identifier (DID). Multiple attestation types can be used in the same rules definition.
* **ID token**: When this option is configured, you'll need to provide an OpenID Connect configuration URI and include the claims that should be included in the verifiable credential. Users are prompted to 'Sign in' on the Authenticator app to meet this requirement and add the associated claims from their account. To configure this option, see the [how-to guide](how-to-use-quickstart-idtoken.md).
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 Our digital and physical lives are increasingly linked to the apps, services, and devices we use to access a rich set of experiences. This digital transformation allows us to interact with hundreds of companies and thousands of other users in ways that were previously unimaginable. But identity data has too often been exposed in security breaches. These breaches affect our social, professional, and financial lives. Microsoft believes that there's a better way. Every person has a right to an identity that they own and control, one that securely stores elements of their digital identity and preserves privacy. This primer explains how we are joining hands with a diverse community to build an open, trustworthy, interoperable, and standards-based Decentralized Identity (DID) solution for individuals and organizations.
active-directory Get Started Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/get-started-request-api.md
Microsoft Entra Verified ID includes the Request Service REST API. This API allows you to issue and verify credentials. This article shows you how to start using the Request Service REST API.
-> [!IMPORTANT]
-> The Request Service REST API is currently in preview. This preview version is provided without a service level agreement, and you can occasionally expect breaking changes and deprecation of the API while in preview. The preview version of the API isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 ## API access token

Your application needs to include a valid access token with the required permissions so that it can access the Request Service REST API. Access tokens issued by the Microsoft identity platform contain information (scopes) that the Request Service REST API uses to validate the caller. An access token ensures that the caller has the proper permissions to perform the operation they're requesting.
To issue or verify a verifiable credential, follow these steps:
1. Submit the request to the Request Service REST API.
-The Request Service API returns a HTTP Status Code `201 Created` on a successful call. If the API call returns an error, please check the [error reference documentation](error-codes.md). //TODO
+The Request Service API returns an HTTP Status Code `201 Created` on a successful call. If the API call returns an error, check the [error reference documentation](error-codes.md).
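
As a sketch of what a successful response can carry (field values are placeholders, and the exact fields may vary by API version), expect a request ID plus a URL that the wallet opens:

```json
{
  "requestId": "799f23ea-5241-45af-99ad-cf8e5018814e",
  "url": "openid://vc/?request_uri=https://verifiedid.did.msidentity.com/v1.0/<tenant-id>/verifiableCredentials/request/<request-id>",
  "expiry": 1633017751
}
```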
## Issuance request example
Authorization: Bearer <token>
"clientName": "Verifiable Credential Expert Sample" }, "type": "VerifiedCredentialExpert",
- "manifestUrl": "https://verifiedid.did.msidentity.com/v1.0/12345678-0000-0000-0000-000000000000/verifiableCredential/contracts/VerifiedCredentialExpert1",
+ "manifest": "https://verifiedid.did.msidentity.com/v1.0/12345678-0000-0000-0000-000000000000/verifiableCredential/contracts/VerifiedCredentialExpert1",
"pin": { "value": "3539", "length": 4
active-directory How To Create A Free Developer Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-create-a-free-developer-account.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 > [!NOTE]
 > The requirement of an Azure Active Directory (Azure AD) P2 license was removed in early May 2021. The Azure AD Free tier is now supported.
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
--
 ## Prerequisites

To link your DID to your domain, you need to have completed the following.
It is of high importance that you link your DID to a domain recognizable to the user.
## How do you update the linked domain on your DID?
-1. Navigate to the Verifiable Credentials | Getting Started page.
-1. On the left side of the page, select **Domain**.
+1. Navigate to Verified ID in the Azure portal.
+1. On the left side of the page, select **Registration**.
1. In the Domain box, enter your new domain name.
1. Select **Publish**.
If the trust system is ION, once the domain changes are published to ION, the do
## Distribute well-known config
-1. From the Azure portal, navigate to the Verifiable Credentials page. Select **Domain** and choose **Verify this domain**
+1. From the Azure portal, navigate to the Verified ID page. Select **Registration** and choose **Verify** for the domain.
2. Download the did-configuration.json file shown in the image below.
active-directory How To Issuer Revoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-issuer-revoke.md
Title: How to Revoke a Verifiable Credential as an Issuer - Azure Active Directory Verifiable Credentials
+ Title: How to Revoke a Verifiable Credential as an Issuer - Entra Verified ID
description: Learn how to revoke a Verifiable Credential that you've issued
documentationCenter: ''
As part of the process of working with verifiable credentials (VCs), you not only have to issue credentials, but sometimes you also have to revoke them. In this article, we go over the **Status** property part of the VC specification and take a closer look at the revocation process, why you may want to revoke credentials, and some data and privacy implications.
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Why you may want to revoke a VC?
+## Why would you want to revoke a verifiable credential?
Each customer will have their own unique reasons for wanting to revoke a verifiable credential, but here are some of the common themes we've heard thus far.
Using the indexed claim in verifiable credentials, you can search the portal for issued verifiable credentials by that claim and revoke them.
-1. Navigate to the verifiable credentials blade in Azure Active Directory.
+1. Navigate to the Verified ID blade in the Azure portal as an admin user with sign key permission on Azure Key Vault.
1. Select the verifiable credential type.
1. On the left-hand menu, choose **Revoke a credential**.

   ![Revoke a credential](media/how-to-issuer-revoke/settings-revoke.png)
-1. Search for the index claim of the user you want to revoke. If you haven't indexed a claim, search won't work, and you won't be able to revoke the verifiable credential.
+1. Search for the index claim of the user you want to revoke. If you haven't indexed a claim, search will not work, and you will not be able to revoke the verifiable credential.
   ![Screenshot of the credential to revoke](media/how-to-issuer-revoke/revoke-search.png)

   >[!NOTE]
- >Since we are only storing a hash of the indexed claim from the verifiable credential, only an exact match will populate the search results. We take the input as searched by the IT Admin and we use the same hashing algorithm to see if we have a hash match in our database.
+ >Since only a hash of the indexed claim from the verifiable credential is stored, only an exact match will populate the search results. What is entered in the textbox is hashed using the same algorithm and used as search criteria to match the stored, hashed value.
-1. Once you've found a match, select the **Revoke** option to the right of the credential you want to revoke.
+1. When a match is found, select the **Revoke** option to the right of the credential you want to revoke.
+
+ >[!NOTE]
+ >The admin user performing the revoke operation needs to have **sign** key permission on Azure Key Vault, or you will get the error message ***Unable to access KeyVault resource with given credentials***.
![Screenshot of a warning letting you know that after revocation the user still has the credential](media/how-to-issuer-revoke/warning.png)
Verifiable credential data isn't stored by Microsoft. Therefore, the issuer need
```

>[!NOTE]
->Only one claim can be indexed from a rules claims mapping.
+>Only one claim can be indexed from a rules claims mapping. If you accidentally have no indexed claim in your rules definition and you later correct this, verifiable credentials that were already issued won't be searchable, because they were issued when no index existed.
## How does revocation work?
Microsoft Entra Verified ID implements the [W3C StatusList2021](https://github.c
In every Microsoft-issued verifiable credential, there is a claim called `credentialStatus`. This data is a navigational map to where in a block of data this VC has its revocation flag.
+>[!NOTE]
+>If the verifiable credential is old and was issued during the preview period, this claim may not exist. Revocation will not work for this credential and you have to reissue it.
+
 ```json
 ...
 "credentialStatus": {
active-directory How To Opt Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-opt-out.md
Title: Opt out of the Microsoft Entra Verified ID
-description: Learn how to Opt Out of the Verifiable Credentials Preview
+description: Learn how to Opt Out of Entra Verified ID
documentationCenter: ''
In this article:
- What happens to your data?
- Effect on existing verifiable credentials.
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 ## Prerequisites

- Complete verifiable credentials onboarding.

## When do you need to opt out?
-Opting out is a one-way operation, after you opt-out your Microsoft Entra Verified ID environment will be reset. During the Public Preview opting out may be required to:
+Opting out is a one-way operation; after you opt out, your Entra Verified ID environment will be reset. Opting out may be required to:
- Enable new service capabilities.
- Reset your service configuration.
Once an opt-out takes place, you won't be able to recover your DID or conduct any operations on it.
All verifiable credentials already issued will continue to exist. They won't be cryptographically invalidated as your DID will remain resolvable through ION. However, when relying parties call the status API, they will always receive back a failure message.
-## How to opt-out from the Microsoft Entra Verified ID Public Preview?
+## How to opt-out from the Microsoft Entra Verified ID service?
1. From the Azure portal, search for verifiable credentials.
2. Choose **Organization Settings** from the left side menu.
-3. Under the section, **Reset your organization**, select **Delete all credentials, and opt out of preview**.
+3. Under the section, **Reset your organization**, select **Delete all credentials and reset service**.
:::image type="content" source="media/how-to-opt-out/settings-reset.png" alt-text="Section in settings that allows you to reset your organization":::
active-directory How To Register Didwebsite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-register-didwebsite.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 ## Prerequisites

- Complete verifiable credentials onboarding with Web as the selected trust system.
-- Complete the Linked Domain setup.
+- Complete the Linked Domain setup. Without completing this step, you can't perform this registration step.
## Why do I need to register my website ID?
-If your trust system for the tenant is Web, you need register your website ID to be able to issue and verify your credentials. When you use the ION based trust system, information like your issuers' public keys are published to the blockchain. When the trust system is Web, you have to make this information available on your website.
+If your trust system for the tenant is Web, you need to register your website ID to be able to issue and verify your credentials. When the trust system is Web, you have to make this information available on your website and complete this registration. When you use the ION-based trust system, information like your issuers' public keys is published to the blockchain, and you don't need to complete this step.
## How do I register my website ID?
-1. Navigate to the Verifiable Credentials | Getting Started page.
-1. On the left side of the page, select Domain.
+1. Navigate to Verified ID in the Azure portal.
+1. On the left side of the page, select **Registration**.
1. At the Website ID registration, select **Review**.

   ![Screenshot of website registration page.](media/how-to-register-didwebsite/how-to-register-didwebsite-domain.png)
If your trust system for the tenant is Web, you need register your website ID to
   ![Screenshot of did.json.](media/how-to-register-didwebsite/how-to-register-didwebsite-diddoc.png)

1. Upload the file to your webserver. The DID document JSON file needs to be uploaded to the location `/.well-known/did.json` on your webserver.
-1. Once the file is available on your webserver, you need to select the Refresh registration status button to verify that the system can request the file.
+1. Once the file is available on your webserver, you need to select the **Refresh registration status** button to verify that the system can request the file.
## When is the DID document in the did.json file used?
The DID document contains the public keys for your issuer and is used during both issuance and presentation.
The DID document in the did.json file needs to be republished if you change the Linked Domain or rotate your signing keys.
+## How can I verify that the registration is working?
+
+The portal verifies that the `did.json` is reachable and correct when you click the [**Refresh registration status** button](#how-do-i-register-my-website-id). You should also verify that you can request that URL in a browser, to catch errors like not using https, a bad SSL certificate, or the URL not being public. If the did.json file can't be requested anonymously in a browser, without warnings or errors, the portal won't be able to complete the **Refresh registration status** step either.
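
For orientation, a minimal `did.json` follows the W3C did:web layout sketched below; the domain and key material are placeholders, and the file you download from the portal contains your tenant's actual keys:

```json
{
  "@context": [ "https://www.w3.org/ns/did/v1" ],
  "id": "did:web:www.contoso.com",
  "verificationMethod": [
    {
      "id": "did:web:www.contoso.com#key-1",
      "type": "EcdsaSecp256k1VerificationKey2019",
      "controller": "did:web:www.contoso.com",
      "publicKeyJwk": { "kty": "EC", "crv": "secp256k1", "x": "<x>", "y": "<y>" }
    }
  ],
  "assertionMethod": [ "did:web:www.contoso.com#key-1" ]
}
```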
+
 ## Next steps

- [Tutorial for issuing a verifiable credential](verifiable-credentials-configure-issuer.md)
active-directory How To Use Quickstart Idtoken https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-idtoken.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) that uses the [idTokens attestation](rules-and-display-definitions-model.md#idtokenattestation-type) produces an issuance flow where you're required to do an interactive sign-in to an OpenID Connect (OIDC) identity provider in Microsoft Authenticator. Claims in the ID token that the identity provider returns can be used to populate the issued verifiable credential. The claims mapping section in the rules definition specifies which claims are used.

## Create a custom credential with the idTokens attestation type
The JSON display definition is nearly the same, regardless of attestation type.
## Sample JSON rules definitions
-The JSON attestation definition should contain the **idTokens** name, the [OIDC configuration details](rules-and-display-definitions-model.md#idtokenattestation-type) and the claims mapping section. The expected JSON for the rules definitions is the inner content of the rules attribute, which starts with the attestation attribute.
+The JSON attestation definition should contain the **idTokens** name, the [OIDC configuration details](rules-and-display-definitions-model.md#idtokenattestation-type) (clientId, configuration, redirectUri, and scope), and the claims mapping section. The expected JSON for the rules definition is the inner content of the rules attribute, which starts with the attestation attribute.
The claims mapping in the following example requires that you configure the token as explained in the [Claims in the ID token from the identity provider](#claims-in-the-id-token-from-the-identity-provider) section.
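
As a hedged sketch of such a rules definition (field names follow the linked rules model; the OIDC endpoint, client ID, and claim names are placeholders):

```json
{
  "attestations": {
    "idTokens": [
      {
        "clientId": "<client-id>",
        "configuration": "https://login.contoso.com/.well-known/openid-configuration",
        "redirectUri": "vcclient://openid",
        "scope": "openid profile",
        "mapping": [
          { "outputClaim": "firstName", "required": true, "inputClaim": "$.given_name", "indexed": false },
          { "outputClaim": "lastName", "required": true, "inputClaim": "$.family_name", "indexed": true }
        ],
        "required": false
      }
    ]
  },
  "validityInterval": 2592000,
  "vc": { "type": [ "VerifiedCredentialExpert" ] }
}
```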
The clientId attribute is the application ID of a registered application in the OIDC identity provider.
1. In **Redirect URI (optional)**, select **Public client/native (mobile & desktop)**, and then enter **vcclient://openid**.
-If you want to be able to test what claims are in the token, do the following:
+If you want to be able to test what claims are in the Azure Active Directory ID token, do the following:
1. On the left pane, select **Authentication** > **Add platform** > **Web**.
To configure your sample code to issue and verify your custom credentials, you need:
- The credential type
- The manifest URL to your credential
-The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**, and then switch to the custom issue.
-
-![Screenshot of the quickstart "Issue credential" page.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
-
-After you've switched to the custom issue, you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuerΓÇÖs DID is the authority value.
+The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**. Then you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuer's DID is the authority value.
![Screenshot of the quickstart custom credential issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
active-directory How To Use Quickstart Selfissued https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-selfissued.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) that uses the [selfIssued attestation](rules-and-display-definitions-model.md#selfissuedattestation-type) type produces an issuance flow where you're required to manually enter values for the claims in Microsoft Authenticator.

## Create a custom credential with the selfIssued attestation type
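
As a hedged sketch of a selfIssued rules definition, under the same rules-model assumptions as the idTokens example (the claim name is illustrative):

```json
{
  "attestations": {
    "selfIssued": {
      "mapping": [
        { "outputClaim": "displayName", "required": true, "inputClaim": "displayName", "indexed": true }
      ],
      "required": false
    }
  },
  "validityInterval": 2592000,
  "vc": { "type": [ "VerifiedCredentialExpert" ] }
}
```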
To configure your sample code to issue and verify your custom credential, you need:
- The credential type
- The manifest URL to your credential
-The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**, and then switch to the custom issue.
-
-![Screenshot of the quickstart "Issue credential" page.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
-
-After you've switched to the custom issue, you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuerΓÇÖs DID is the authority value.
+The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**. Then you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuer's DID is the authority value.
![Screenshot of the quickstart custom credential issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
active-directory How To Use Quickstart Verifiedemployee https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-verifiedemployee.md
If attribute values change in the user's Azure AD profile, the VC isn't automatically updated.
## Configure the samples to issue and verify your VerifiedEmployee credential
-Verifiable Credentials for directory based claims can be issued and verified just like any other credentials you create. All you need is your issuer DID for your tenant, the credential type and the manifest url to your credential. The easiest way to find these values for a Managed Credential is to view the credential in the portal, select Issue credential and switch to Custom issue. These steps bring up a textbox with a skeleton JSON payload for the Request Service API.
+Verifiable Credentials for directory based claims can be issued and verified just like any other credentials you create. All you need is your issuer DID for your tenant, the credential type, and the manifest URL to your credential. The easiest way to find these values for a Managed Credential is to view the credential in the portal and select **Issue credential**; you will see a header named **Custom issue**. These steps bring up a textbox with a skeleton JSON payload for the Request Service API.
![Custom issue](media/how-to-use-quickstart-verifiedemployee/verifiable-credentials-configure-verifiedemployee-custom-issue.png)
active-directory How To Use Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 ## Prerequisites

To use the Microsoft Entra Verified ID quickstart, you need only to complete the verifiable credentials onboarding process.

## What is the quickstart?
-Azure Active Directory verifiable credentials now come with a quickstart in the Azure portal for creating custom credentials. When you use the quickstart, you don't need to edit and upload rules and display files to Azure Storage. Instead, you enter all details in the Azure portal and create the credential on a single page.
+Entra Verified ID now comes with quickstarts in the Azure portal for creating custom credentials. When you use a quickstart, you don't need to edit and upload rules and display files to Azure Storage. Instead, you enter all details in the Azure portal and create the credential on a single page.
>[!NOTE]
>When you work with custom credentials, you provide display definitions and rules definitions in JSON documents. These definitions are stored with the credential details.
To configure your sample code to issue and verify by using custom credentials, you need:
- The credential type
- The manifest URL to your credential
-The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**, and then switch to the custom issue.
-
-![Screenshot of the quickstart "Issue credential" page.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
-
-After you've switched to the custom issue, you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuerΓÇÖs DID is the authority value.
+The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**. There you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuer's DID is the authority value.
![Screenshot of the quickstart custom credential issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
active-directory How Use Vcnetwork https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-use-vcnetwork.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 ## Prerequisites

To use the Entra Verified ID Network, you need to have completed the following.
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 It's important to plan your verifiable credential solution so that, in addition to issuing and/or validating credentials, you have a complete view of the architectural and business impacts of your solution. If you haven't reviewed them already, we recommend you review [Introduction to Microsoft Entra Verified ID](decentralized-identifier-overview.md) and the [FAQs](verifiable-credentials-faq.md), and then complete the [Getting Started](get-started-verifiable-credentials.md) tutorial. This architectural overview introduces the capabilities and components of the Microsoft Entra Verified ID service. For more detailed information on issuance and validation, see
Terminology for verifiable credentials (VCs) might be confusing if you're not familiar with VCs.
* In the scenario above, both the issuer and verifier have a DID, and a DID document. The DID document contains the public key, and the list of DNS web domains associated with the DID (also known as linked domains).
-* Woodgrove (issuer) signs their employeesΓÇÖ VCs with its public key; similarly, Proseware (verifier) signs requests to present a VC using its key, which is also associated with its DID.
+* Woodgrove (issuer) signs their employees' VCs with its private key; similarly, Proseware (verifier) signs requests to present a VC using its key, which is also associated with its DID.
A ***trust system*** is the foundation for establishing trust between decentralized systems. It can be a distributed ledger or it can be something centralized, such as [DID Web](https://w3c-ccg.github.io/did-method-web/).
active-directory Issuance Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md
The callback endpoint is called when a user scans the QR code, uses the deep link to the Authenticator app, or finishes the issuance process.
|Property |Type |Description |
|---|---|---|
| `requestId`| string | Mapped to the original request when the payload was posted to the Verifiable Credentials service.|
-| `code` |string |The code returned when the request has an error. Possible values: <ul><li>`request_retrieved`: The user scanned the QR code or selected the link that starts the issuance flow.</li><li>`issuance_successful`: The issuance of the verifiable credentials was successful.</li><li>`issuance_error`: There was an error during issuance. For details, see the `error` property.</li></ul> |
+| `requestStatus` |string |The status returned for the request. Possible values: <ul><li>`request_retrieved`: The user scanned the QR code or selected the link that starts the issuance flow.</li><li>`issuance_successful`: The issuance of the verifiable credentials was successful.</li><li>`issuance_error`: There was an error during issuance. For details, see the `error` property.</li></ul> |
| `state` |string| Returns the state value that you passed in the original payload. |
| `error`| error | When the `requestStatus` property value is `issuance_error`, this property contains information about the error.|
| `error.code` | string| The return error code. |
The following example demonstrates a callback payload when the authenticator app starts the issuance flow:
```json
{
    "requestId": "799f23ea-5241-45af-99ad-cf8e5018814e",
-    "code":"request_retrieved",
+    "requestStatus":"request_retrieved",
    "state": "de19cb6b-36c1-45fe-9409-909a51292a9c" } ```
The following example demonstrates a callback payload after the user successfully completes the issuance process:
```json
{
    "requestId": "799f23ea-5241-45af-99ad-cf8e5018814e",
-    "code":"issuance_successful",
+    "requestStatus":"issuance_successful",
    "state": "de19cb6b-36c1-45fe-9409-909a51292a9c" }  ```
The following example demonstrates a callback payload when an error occurred:
```json
{
    "requestId": "799f23ea-5241-45af-99ad-cf8e5018814e",
-    "code": "issuance_error",
+    "requestStatus": "issuance_error",
    "state": "de19cb6b-36c1-45fe-9409-909a51292a9c", "error": { "code":"IssuanceFlowFailed",
active-directory Issuer Openid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuer-openid.md
Title: Issuer service communication examples - Azure Active Directory Verifiable Credentials
+ Title: Issuer service communication examples - Entra Verified ID
description: Details of communication between identity provider and issuer service
The Microsoft Entra Verified ID service can issue verifiable credentials by retrieving claims from an ID token generated by your organization's OpenID compliant identity provider. This article instructs you on how to set up your identity provider so Authenticator can communicate with it and retrieve the correct ID Token to pass to the issuing service.
-> [!IMPORTANT]
-> Azure Active Directory Verifiable Credentials is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
--
 To issue a Verifiable Credential, Authenticator is instructed, through downloading the contract, to gather input from the user and send that information to the issuing service. If you need to use an ID Token, you have to set up your identity provider to allow Authenticator to sign in a user using the OpenID Connect protocol. The claims in the resulting ID token are used to populate the contents of your verifiable credential. Authenticator authenticates the user using the OpenID Connect authorization code flow. Your OpenID provider must support the following OpenID Connect features:

| Feature | Description |
The ID token must use the JWT compact serialization format, and must not be encrypted.
## Next steps

-- [How to customize your Azure Active Directory Verifiable Credentials](credential-design.md)
+- [Customize your verifiable credentials](credential-design.md)
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
- >[!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 It's important to plan your issuance solution so that, in addition to issuing credentials, you have a complete view of the architectural and business impacts of your solution. If you haven't done so, we recommend you view the [Microsoft Entra Verified ID architecture overview](introduction-to-verifiable-credentials-architecture.md) for foundational information.

## Scope of guidance
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-verification-solution.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
->[!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 The Microsoft Entra Verified ID (Azure AD VC) service enables you to trust proofs of user identity without expanding your trust boundary. With Azure AD VC, you create accounts or federate with another identity provider. When a solution implements a verification exchange using verifiable credentials, it enables applications to request credentials that aren't bound to a specific domain. This approach makes it easier to request and verify credentials at scale. If you haven't already, we suggest you review the [Microsoft Entra Verified ID architecture overview](introduction-to-verifiable-credentials-architecture.md). You may also want to review [Plan your Microsoft Entra Verified ID issuance solution](plan-issuance-solution.md).
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/presentation-request-api.md
The `RequestCredential` provides information about the requested credentials the user needs to provide.
|---|---|---|
| `type`| string| The verifiable credential type. The `type` must match the type as defined in the `issuer` verifiable credential manifest (for example, `VerifiedCredentialExpert`). To get the issuer manifest, see [Gather credentials and environment details to set up your sample application](verifiable-credentials-configure-issuer.md). Copy the **Issue credential URL**, open it in a web browser, and check the **id** property. |
| `purpose`| string | Provide information about the purpose of requesting this verifiable credential. |
-| `acceptedIssuers`| string collection | A collection of issuers' DIDs that could issue the type of verifiable credential that subjects can present. To get your issuer DID, see [Gather credentials and environment details to set up your sample application](verifiable-credentials-configure-issuer.md), and copy the value of the **Decentralized identifier (DID)**. |
+| `acceptedIssuers`| string collection | A collection of issuers' DIDs that could issue the type of verifiable credential that subjects can present. To get your issuer DID, see [Gather credentials and environment details to set up your sample application](verifiable-credentials-configure-issuer.md), and copy the value of the **Decentralized identifier (DID)**. If the `acceptedIssuers` collection is empty, then the presentation request will accept a credential type issued by any issuer. |
| `configuration.validation` | [Configuration.Validation](#configurationvalidation-type) | Optional settings for presentation validation.|

### Configuration.Validation type
The `Configuration.Validation` provides information about how the presented credentials should be validated.
|Property |Type |Description |
|---|---|---|
| `allowRevoked` | Boolean | Determines if a revoked credential should be accepted. Default is `false` (it shouldn't be accepted). |
-| `validateLinkedDomain` | Boolean | Determines if the linked domain should be validated. Default is `true` (it should be validated). Setting this flag to `false` means you'll accept credentials from unverified linked domain. Setting this flag to `true` means the linked domain will be validated and only verified domains will be accepted. |
+| `validateLinkedDomain` | Boolean | Determines if the linked domain should be validated. Default is `false`. Setting this flag to `false` means you, as a Relying Party application, accept credentials from an unverified linked domain. Setting this flag to `true` means the linked domain will be validated and only verified domains will be accepted. |
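
Putting the table together, a sketch of a requested credential with validation settings; the issuer DID and purpose text are placeholders:

```json
{
  "type": "VerifiedCredentialExpert",
  "purpose": "Prove you are a verified credential expert",
  "acceptedIssuers": [ "did:web:verifiedid.contoso.com" ],
  "configuration": {
    "validation": {
      "allowRevoked": false,
      "validateLinkedDomain": true
    }
  }
}
```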
## Successful response
The callback endpoint is called when a user scans the QR code, uses the deep link to the Authenticator app, or finishes the presentation process.
|Property |Type |Description |
|---|---|---|
| `requestId`| string | Mapped to the original request when the payload was posted to the Verifiable Credentials service.|
-| `code` |string |The code returned when the request was retrieved by the authenticator app. Possible values: <ul><li>`request_retrieved`: The user scanned the QR code or selected the link that starts the presentation flow.</li><li>`presentation_verified`: The verifiable credential validation completed successfully.</li></ul> |
+| `requestStatus` |string |The status returned when the request was retrieved by the authenticator app. Possible values: <ul><li>`request_retrieved`: The user scanned the QR code or selected the link that starts the presentation flow.</li><li>`presentation_verified`: The verifiable credential validation completed successfully.</li></ul> |
| `state` |string| Returns the state value that you passed in the original payload. |
| `subject`|string | The verifiable credential user DID.|
| `issuers`| array |Returns an array of verifiable credentials requested. For each verifiable credential, it provides: <ul><li>The verifiable credential type(s).</li><li>The issuer's DID</li><li>The claims retrieved.</li><li>The verifiable credential issuer's domain. </li><li>The verifiable credential issuer's domain validation status. </li></ul> |
The following example demonstrates a callback payload when the authenticator app starts the presentation flow:
```json
{
    "requestId": "e4ef27ca-eb8c-4b63-823b-3b95140eac11",
-    "code":"request_retrieved",
+    "requestStatus":"request_retrieved",
    "state": "92d076dd-450a-4247-aa5b-d2e75a1a5d58" } ```
The following example demonstrates a callback payload after the verifiable credential presentation was validated:
```json
{
    "requestId": "e4ef27ca-eb8c-4b63-823b-3b95140eac11",
- "code": "presentation_verified",
+ "requestStatus": "presentation_verified",
"state": "92d076dd-450a-4247-aa5b-d2e75a1a5d58", "subject": "did:ion:EiAlrenrtD3Lsw0GlbzS1O2YFdy3Xtu8yo35W<SNIP>…", "issuers": [
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
The following diagram illustrates the Microsoft Entra Verified ID architecture and the component you configure.
- To clone the repository that hosts the sample app, install [GIT](https://git-scm.com/downloads).
- [Visual Studio Code](https://code.visualstudio.com/Download), or similar code editor.
- [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0).
-- [ngrok](https://ngrok.com/) (free).
+- Download [ngrok](https://ngrok.com/) and sign up for a free account.
- A mobile device with Microsoft Authenticator:
- - Android version 6.2108.5654 or later installed.
- - iOS version 6.5.82 or later installed.
+ - Android version 6.2206.3973 or later installed.
+ - iOS version 6.6.2 or later installed.
## Create the verified credential expert card in Azure

In this step, you create the verified credential expert card by using Microsoft Entra Verified ID. After you create the credential, your Azure AD tenant can issue it to users who initiate the process.
-1. Using the [Azure portal](https://portal.azure.com/), search for *verifiable credentials*. Then select **Verifiable Credentials (Preview)**.
+1. Using the [Azure portal](https://portal.azure.com/), search for **Verified ID** and select it.
1. After you [set up your tenant](verifiable-credentials-configure-tenant.md), the **Create credential** option should appear. Alternatively, you can select **Credentials** in the left-hand menu and select **+ Add a credential**.
-1. In **Create a new credential**, do the following:
+1. In **Create credential**, select **Custom Credential** and click **Next**:
1. For **Credential name**, enter **VerifiedCredentialExpert**. This name is used in the portal to identify your verifiable credentials. It's included as part of the verifiable credentials contract.
The following screenshot demonstrates how to create a new credential:
Now that you have a new credential, you're going to gather some information about your environment and the credential that you created. You use these pieces of information when you set up your sample application.
-1. In Verifiable Credentials, select **Issue credential** and switch to **Custom issue**.
+1. In Verifiable Credentials, select **Issue credential**.
![Screenshot that shows how to select the newly created verified credential.](media/verifiable-credentials-configure-issuer/issue-credential-custom-view.png)
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
The Verifiable Credentials Service Request is the Request Service API, and it ne
To set up Microsoft Entra Verified ID, follow these steps:
-1. In the [Azure portal](https://portal.azure.com/), search for *verifiable credentials*. Then, select **Verifiable Credentials (Preview)**.
+1. In the [Azure portal](https://portal.azure.com/), search for *Verified ID*. Then, select **Verified ID**.
1. From the left menu, select **Getting started**.
To add the required permissions, follow these steps:
## Service endpoint configuration
-1. In the Azure portal, navigate to the Verifiable credentials page.
+1. Navigate to Verified ID in the Azure portal.
1. Select **Registration**.
1. Notice that there are two sections:
   1. Website ID registration
active-directory Verifiable Credentials Configure Verifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-verifier.md
Last updated 06/16/2022
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-In [Issue Microsoft Entra Verified ID credentials from an application (preview)](verifiable-credentials-configure-issuer.md), you learn how to issue and verify credentials by using the same Azure Active Directory (Azure AD) tenant. In this tutorial, you go over the steps needed to present and verify your first verifiable credential: a verified credential expert card.
+In [Issue Microsoft Entra Verified ID credentials from an application](verifiable-credentials-configure-issuer.md), you learn how to issue and verify credentials by using the same Azure Active Directory (Azure AD) tenant. In this tutorial, you go over the steps needed to present and verify your first verifiable credential: a verified credential expert card.
As a verifier, you unlock privileges to subjects that possess verified credential expert cards. In this tutorial, you run a sample application from your local computer that asks you to present a verified credential expert card, and then verifies it.
In this article, you learn how to:
- If you want to clone the repository that hosts the sample app, install [Git](https://git-scm.com/downloads).
- [Visual Studio Code](https://code.visualstudio.com/Download) or similar code editor.
- [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0).
-- [ngrok](https://ngrok.com/) (free).
+- Download [ngrok](https://ngrok.com/) and sign up for a free account.
- A mobile device with Microsoft Authenticator:
- - Android version 6.2108.5654 or later installed.
- - iOS version 6.5.82 or later installed.
+ - Android version 6.2206.3973 or later installed.
+ - iOS version 6.6.2 or later installed.
## Gather tenant details to set up your sample application

Now that you've set up your Microsoft Entra Verified ID service, you're going to gather some information about your environment and the verifiable credentials you set up. You use these pieces of information when you set up your sample application.
-1. From **Verifiable credentials (Preview)**, select **Organization settings**.
+1. From **Verified ID**, select **Organization settings**.
1. Copy the **Tenant identifier** value, and record it for later.
1. Copy the **Decentralized identifier** value, and record it for later.
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
This page contains commonly asked questions about Verifiable Credentials and Decentralized Identity.
- [Conceptual questions about decentralized identity](#conceptual-questions)
- [Questions about using Verifiable Credentials preview](#using-the-preview)
-> [!IMPORTANT]
-> Azure Active Directory Verifiable Credentials is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
 ## The basics

### What is a DID?
-Decentralized Identifers(DIDs) are unique identifiers that can be used to secure access to resources, sign and verify credentials, and facilitate application data exchange. Unlike traditional usernames and email addresses, DIDs are owned and controlled by the entity itself (be it a person, device, or company). DIDs exist independently of any external organization or trusted intermediary. [The W3C Decentralized Identifier spec](https://www.w3.org/TR/did-core/) explains this in further detail.
+Decentralized Identifiers (DIDs) are unique identifiers that can be used to secure access to resources, sign and verify credentials, and facilitate application data exchange. Unlike traditional usernames and email addresses, DIDs are owned and controlled by the entity itself (be it a person, device, or company). DIDs exist independently of any external organization or trusted intermediary. [The W3C Decentralized Identifier spec](https://www.w3.org/TR/did-core/) explains this in further detail.
### Why do we need a DID?
There are multiple ways of offering a recovery mechanism to users, each with their own tradeoffs.
### How can a user trust a request from an issuer or verifier? How do they know a DID is the real DID for an organization?
-We implement [the Decentralized Identity Foundation's Well Known DID Configuration spec](https://identity.foundation/.well-known/resources/did-configuration/) in order to connect a DID to a highly known existing system, domain names. Each DID created using the Azure Active Directory Verifiable Credentials has the option of including a root domain name that will be encoded in the DID Document. Follow the article titled [Link your Domain to your Distributed Identifier](how-to-dnsbind.md) to learn more.
+We implement [the Decentralized Identity Foundation's Well Known DID Configuration spec](https://identity.foundation/.well-known/resources/did-configuration/) to connect a DID to a well-known existing system: domain names. Each DID created using Microsoft Entra Verified ID can include a root domain name that is encoded in the DID Document. Follow the article titled [Link your Domain to your Distributed Identifier](how-to-dnsbind.md) to learn more.
-### Why does the Verifiable Credential preview use ION as its DID method, and therefore Bitcoin to provide decentralized public key infrastructure?
+### Why does Microsoft Entra Verified ID support ION as its DID method, and therefore Bitcoin to provide decentralized public key infrastructure?
Microsoft now offers two different trust systems, Web and ION. You may choose to use either one of them during tenant onboarding. ION is a permissionless, scalable, decentralized identifier Layer 2 network that runs atop Bitcoin. It achieves scalability without including a special crypto asset token, trusted validators, or centralized consensus mechanisms. We use Bitcoin for the base Layer 1 substrate because of the strength of the decentralized network to provide a high degree of immutability for a chronological event record system.
Yes! The following repositories are the open-sourced components of our services.
There are no special licensing requirements to issue Verifiable credentials. All you need is an Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).

### Updating the VC Service configuration
-The following instructions will take 15 mins to complete and are only required if you have been using the Azure AD Verifiable Credentials service prior to April 25, 2022. You are required to execute these steps to update the existing service principals in your tenant that run the verifiable credentials service the following is an overview of the steps:
+The following instructions take about 15 minutes to complete and are only required if you have been using the Entra Verified ID service prior to April 25, 2022. You must execute these steps to update the existing service principals in your tenant that run the verifiable credentials service. The following is an overview of the steps:
1. Register new service principals for the Azure AD Verifiable Service
1. Update the Key Vault access policies
For the Request API, the new scope for your application or Postman is now:

```
3db474b9-6a0c-4840-96ac-1fceb342124f/.default
```
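For example, if you test with the Azure CLI, you can acquire a token for this scope by passing the application ID portion of the scope as the resource. A minimal sketch, assuming you're already signed in with `az login`:

```azurecli
# Request an access token for the Request Service API; the GUID is the application ID from the scope above
az account get-access-token --resource 3db474b9-6a0c-4840-96ac-1fceb342124f --query accessToken -o tsv
```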
-### How do I reset the Azure AD Verifiable credentials service?
+### How do I reset the Entra Verified ID service?
-Resetting requires that you opt out and opt back into the Azure Active Directory Verifiable Credentials service, your existing verifiable credentials configurations will reset and your tenant will obtain a new DID to use during issuance and presentation.
+Resetting requires that you opt out of and opt back into the Entra Verified ID service. Your existing verifiable credentials configurations will reset, and your tenant will obtain a new DID to use during issuance and presentation.
1. Follow the [opt-out](how-to-opt-out.md) instructions.
-1. Go over the Azure Active Directory Verifiable credentials [deployment steps](verifiable-credentials-configure-tenant.md) to reconfigure the service.
+1. Go over the Entra Verified ID [deployment steps](verifiable-credentials-configure-tenant.md) to reconfigure the service.
1. If you are in the European region, it's recommended that your Azure Key Vault and container are in the same European region; otherwise, you may experience some performance and latency issues. Create new instances of these services in the same EU region as needed.
1. Finish [setting up](verifiable-credentials-configure-tenant.md#set-up-verifiable-credentials) your verifiable credentials service. You need to recreate your credentials.
1. If your tenant needs to be configured as an issuer, it's recommended that your storage account is in the same European region as your Verifiable Credentials service.
Resetting requires that you opt out and opt back into the Azure Active Directory
### How can I check my Azure AD Tenant's region?
-1. In the [Azure portal](https://portal.azure.com), go to Azure Active Directory for the subscription you use for your Azure Active Directory Verifiable credentials deployment.
+1. In the [Azure portal](https://portal.azure.com), go to Azure Active Directory for the subscription you use for your Entra Verified ID deployment.
1. Under **Manage**, select **Properties**.

   :::image type="content" source="media/verifiable-credentials-faq/region.png" alt-text="settings delete and opt out":::

1. See the value for **Country or Region**. If the value is a country or a region in Europe, your Microsoft Entra Verified ID service will be set up in Europe.

### How can I check if my tenant has the new Hub endpoint?
-1. In the Azure portal, go to the Verifiable Credentials service.
+1. Navigate to Verified ID in the Azure portal.
1. Navigate to the Organization Settings.
1. Copy your organization's Decentralized Identifier (DID).
1. Go to the ION Explorer and paste the DID in the search box.
Resetting requires that you opt out and opt back into the Azure Active Directory
], ```
-### If I reconfigure the Azure AD Verifiable Credentials service, do I need to relink my DID to my domain?
+### If I reconfigure the Entra Verified ID service, do I need to relink my DID to my domain?
Yes, after reconfiguring your service, your tenant has a new DID that it uses to issue and verify verifiable credentials. You need to [associate your new DID](how-to-dnsbind.md) with your domain.
No, at this point it isn't possible to keep your tenant's DID after you have opted out.
## Next steps

-- [How to customize your Azure Active Directory Verifiable Credentials](credential-design.md)
+- [Customize your verifiable credentials](credential-design.md)
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Title: What's new for Microsoft Entra Verified ID (preview)
+ Title: What's new for Microsoft Entra Verified ID
description: Recent updates for Microsoft Entra Verified ID
This article lists the latest features, improvements, and changes in the Microsoft Entra Verified ID service.
Microsoft Entra Verified ID is now generally available (GA) as the new member of the Microsoft Entra portfolio! [Read more](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-verified-id-now-generally-available/ba-p/3295506)

### Known issues

-- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service) without issuing any Verifiable Credential will get a `Specified resource does not exist` error from the Admin API and/or the Entra portal. A fix for this issue should be available by 08/20/22.
+- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) without issuing any Verifiable Credential will get a `Specified resource does not exist` error from the Admin API and/or the Entra portal. A fix for this issue should be available by 08/20/22.
## July 2022
Microsoft Entra Verified ID is now generally available (GA) as the new member of
- Request Service API **[Error codes](error-codes.md)** have been **updated**
- The **[Admin API](admin-api.md)** is made **public** and is documented. The Azure portal uses the Admin API, and with this REST API you can automate the onboarding of your tenant and the creation of credential contracts.
- Find issuers and credentials to verify via [The Microsoft Entra Verified ID Network](how-use-vcnetwork.md).
-- For migrating your Azure Storage based credentials to become Managed Credentials there is a PowerShell script in the [github samples repo](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/contractmigration/scripts/contractmigration) for the task.
+- To migrate your Azure Storage based credentials to Managed Credentials, use the PowerShell script in the [GitHub samples repo](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/contractmigration/scripts/contractmigration).
- We also made the following updates to our Plan and design docs:
  - (updated) [architecture planning overview](introduction-to-verifiable-credentials-architecture.md).
Microsoft Entra Verified ID is now generally available (GA) as the new member of
## June 2022 -- We are adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as a new, default, trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verifiable-credentials). VC Administrators can still choose to use ION when setting a tenant. If you want to use did:web instead of ION or viceversa, you will need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service).
+- We are adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as a new, default, trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verifiable-credentials). VC Administrators can still choose to use ION when setting up a tenant. If you want to use did:web instead of ION or vice versa, you will need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service).
- We are rolling out several features to improve the overall experience of creating verifiable credentials in the Entra Verified ID platform:
  - Introducing Managed Credentials, which are verifiable credentials that no longer use Azure Storage to store the [display & rules JSON definitions](rules-and-display-definitions-model.md). Their display and rule definitions are different from earlier versions.
  - Create Managed Credentials using the [new quickstart experience](how-to-use-quickstart.md).
We are rolling out some breaking changes to our service. These updates require M
- We made updates to Microsoft Authenticator that change the interaction between the Issuer of a verifiable credential and the user presenting the verifiable credential. This update forces all Verifiable Credentials to be reissued in Microsoft Authenticator for Android. [More information](whats-new.md?#microsoft-authenticator-did-generation-update)

>[!IMPORTANT]
-> All Azure AD Verifiable Credential customers receiving a banner notice in the Azure portal need to go through a service reconfiguration before March 31st 2022. On March 31st 2022 tenants that have not been reconfigured will lose access to any previous configuration. Administrators will have to set up a new instance of the Azure AD Verifiable Credential service. Learn more about how to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service).
+> All Azure AD Verifiable Credential customers receiving a banner notice in the Azure portal need to go through a service reconfiguration before March 31st, 2022. On March 31st, 2022, tenants that have not been reconfigured will lose access to any previous configuration. Administrators will have to set up a new instance of the Azure AD Verifiable Credential service. Learn more about how to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service).
### Microsoft Entra Verified ID available in Europe
Since the beginning of the Microsoft Entra Verified ID service public preview, t
Take the following steps to configure the Verifiable Credentials service in Europe:

1. [Check the location](verifiable-credentials-faq.md#how-can-i-check-my-azure-ad-tenants-region) of your Azure Active Directory to make sure it's in Europe.
-1. [Reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service) in your tenant.
+1. [Reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) in your tenant.
>[!IMPORTANT]
-> On March 31st, 2022 European tenants that have not been [reconfigured](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service) in Europe will lose access to any previous configuration and will require to configure a new instance of the Azure AD Verifiable Credential service.
+> On March 31st, 2022, European tenants that have not been [reconfigured](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) in Europe will lose access to any previous configuration and will need to configure a new instance of the Azure AD Verifiable Credential service.
#### Are there any changes to the way that we use the Request API as a result of this move?
To use this feature, follow these steps:
1. [Check if your tenant has the Hub endpoint](verifiable-credentials-faq.md#how-can-i-check-if-my-tenant-has-the-new-hub-endpoint).
   1. If so, go to the next step.
- 1. If not, [reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service) in your tenant and go to the next step.
+ 1. If not, [reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) in your tenant and go to the next step.
1. Create new verifiable credentials contracts. In the rules file, you must add the ` "credentialStatusConfiguration": "anonymous" ` property to start using the new feature in combination with the Hub endpoint for your credentials. Sample contract file:
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
test.txt
## Resize a persistent volume without downtime (Preview)

> [!IMPORTANT]
-> Azure Disks CSI driver supports resizing PVCs without downtime.
+> Azure Disks CSI driver supports expanding PVCs without downtime (Preview).
> Follow this [link][expand-an-azure-managed-disk] to register the disk online resize feature.
>
> az feature register --namespace Microsoft.Compute --name LiveResize
+>
+> az feature show --namespace Microsoft.Compute --name LiveResize
+>
+> Follow this [link][expand-pvc-with-downtime] to expand PVCs **with** downtime if you can't use the preview feature.
You can request a larger volume for a PVC. Edit the PVC object, and specify a larger size. This change triggers the expansion of the underlying volume that backs the PV.
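As a sketch, assuming a PVC named `pvc-azuredisk` (a placeholder name), the size can be changed in place with `kubectl patch`:

```bash
# Request a larger size for an existing PVC; the CSI driver expands the backing disk
kubectl patch pvc pvc-azuredisk --type merge -p '{"spec": {"resources": {"requests": {"storage": "15Gi"}}}}'
```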
The output of the command resembles the following example:
[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
[csi-driver-parameters]: https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/docs/driver-parameters.md
[create-burstable-storage-class]: https://github.com/Azure-Samples/burstable-managed-csi-premium
+[expand-pvc-with-downtime]: https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/docs/known-issues/sizegrow.md
<!-- LINKS - internal -->
[azure-disk-volume]: azure-disk-volume.md
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
az aks create \
The following screenshot from the Azure portal shows an example of configuring these settings during AKS cluster creation:

## Dynamic allocation of IPs and enhanced subnet support
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
KMS supports [public key vault][Enable-KMS-with-public-key-vault] and [private k
> > If you need to recover your Key Vault or key, see the [Azure Key Vault recovery management with soft delete and purge protection](../key-vault/general/key-vault-recovery.md?tabs=azure-cli) documentation.
+#### For non-RBAC key vault
+ Use `az keyvault create` to create a KeyVault. ```azurecli
export KEY_ID=$(az keyvault key show --name MyKeyName --vault-name MyKeyVault --query 'key.kid' -o tsv)
echo $KEY_ID
```
+The above example stores the Key ID in *KEY_ID*.
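For reference, here's the non-RBAC path condensed into a single sketch, using the same placeholder names (`MyKeyVault`, `MyKeyName`, `MyResourceGroup`) as the rest of this article:

```azurecli
# Create the key vault, create a key, and capture the key ID
az keyvault create --name MyKeyVault --resource-group MyResourceGroup
az keyvault key create --name MyKeyName --vault-name MyKeyVault
export KEY_ID=$(az keyvault key show --name MyKeyName --vault-name MyKeyVault --query 'key.kid' -o tsv)
echo $KEY_ID
```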
+
+#### For RBAC key vault
+
+Use `az keyvault create` to create a KeyVault using Azure Role Based Access Control.
+
+```azurecli
+export KEYVAULT_RESOURCE_ID=$(az keyvault create --name MyKeyVault --resource-group MyResourceGroup --enable-rbac-authorization true --query id -o tsv)
+```
+
+Assign yourself permission to create a key.
+
+```azurecli-interactive
+az role assignment create --role "Key Vault Crypto Officer" --assignee-object-id $(az ad signed-in-user show --query id --out tsv) --assignee-principal-type "User" --scope $KEYVAULT_RESOURCE_ID
+```
+
+Use `az keyvault key create` to create a key.
+
+```azurecli
+az keyvault key create --name MyKeyName --vault-name MyKeyVault
+```
+
+Use `az keyvault key show` to export the Key ID.
+
+```azurecli
+export KEY_ID=$(az keyvault key show --name MyKeyName --vault-name MyKeyVault --query 'key.kid' -o tsv)
+echo $KEY_ID
+```
+ The above example stores the Key ID in *KEY_ID*.

### Create a user-assigned managed identity
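A minimal sketch, assuming the identity name `MyIdentity` is a placeholder; the `principalId` it returns is the value that the `$IDENTITY_OBJECT_ID` variable used below refers to:

```azurecli
# Create a user-assigned managed identity and capture its object (principal) ID
az identity create --name MyIdentity --resource-group MyResourceGroup
export IDENTITY_OBJECT_ID=$(az identity show --name MyIdentity --resource-group MyResourceGroup --query principalId -o tsv)
echo $IDENTITY_OBJECT_ID
```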
az keyvault set-policy -n MyKeyVault --key-permissions decrypt encrypt --object-id $IDENTITY_OBJECT_ID
#### For RBAC key vault
-If your key vault is enabled with `--enable-rbac-authorization`, you need to assign the "Key Vault Administrator" RBAC role which has decrypt, encrypt permission.
+If your key vault is enabled with `--enable-rbac-authorization`, you need to assign the "Key Vault Crypto User" RBAC role, which has decrypt and encrypt permissions.
```azurecli-interactive
az role assignment create --role "Key Vault Crypto User" --assignee-object-id $IDENTITY_OBJECT_ID --assignee-principal-type "ServicePrincipal" --scope $KEYVAULT_RESOURCE_ID
```
analysis-services Analysis Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-overview.md
Because tabular models in Azure Analysis Services are much the same as tabular m
### Contribute!
-Analysis Services documentation, like this article, is open source. To learn more about how you can contribute, see the [Docs contributor guide](/contribute/).
+Analysis Services documentation, like this article, is open source. To learn more about how you can contribute, see our [contributor guide](/contribute/).
Azure Analysis Services documentation also uses [GitHub Issues](/teamblog/a-new-feedback-system-is-coming-to-docs). You can provide feedback about the product or documentation. Use **Feedback** at the bottom of an article. GitHub Issues are not enabled for the shared Analysis Services documentation.
api-management Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/observability.md
Title: Observability in Azure API Management | Microsoft Docs
-description: Overview of all observability options in Azure API Management.
+description: Overview of all API observability and monitoring options in Azure API Management.
documentationcenter: ''--+ na-+ Last updated 06/01/2020
Azure API Management helps organizations centralize the management of all APIs.
## Overview
-Azure API Management allows you to choose use the managed gateway or [self-hosted gateway](self-hosted-gateway-overview.md), either self-deployed or by using an [Azure Arc extension](how-to-deploy-self-hosted-gateway-azure-arc.md).
+Azure API Management allows you to choose to use the managed gateway or [self-hosted gateway](self-hosted-gateway-overview.md), either self-deployed or by using an [Azure Arc extension](how-to-deploy-self-hosted-gateway-azure-arc.md).
-The table below summarizes all the observability capabilities supported by API Management to operate APIs and what deployment models they support.
+The table below summarizes all the observability capabilities supported by API Management to operate APIs and what deployment models they support. These capabilities can be used by API publishers and others who have permissions to operate or manage the API Management instance.
+> [!NOTE]
+> For API consumers who use the developer portal, a built-in API report is available. It only provides information about their individual API usage during the preceding 90 days.
+>
| Tool | Useful for | Data lag | Retention | Sampling | Data kind | Supported Deployment Model(s) |
|:- |:-|:- |:-|:- |: |:- |
| **[API Inspector](api-management-howto-api-inspector.md)** | Testing and debugging | Instant | Last 100 traces | Turned on per request | Request traces | Managed, Self-hosted, Azure Arc |
-| **Built-in Analytics** | Reporting and monitoring | Minutes | Lifetime | 100% | Reports and logs | Managed |
+| **[Built-in Analytics](howto-use-analytics.md)** | Reporting and monitoring | Minutes | Lifetime | 100% | Reports and logs | Managed |
| **[Azure Monitor Metrics](api-management-howto-use-azure-monitor.md)** | Reporting and monitoring | Minutes | 90 days (upgrade to extend) | 100% | Metrics | Managed, Self-hosted<sup>2</sup>, Azure Arc | | **[Azure Monitor Logs](api-management-howto-use-azure-monitor.md)** | Reporting, monitoring, and debugging | Minutes | 31 days/5GB (upgrade to extend) | 100% (adjustable) | Logs | Managed<sup>1</sup>, Self-hosted<sup>3</sup>, Azure Arc<sup>3</sup> | | **[Azure Application Insights](api-management-howto-app-insights.md)** | Reporting, monitoring, and debugging | Seconds | 90 days/5GB (upgrade to extend) | Custom | Logs, metrics | Managed<sup>1</sup>, Self-hosted<sup>1</sup>, Azure Arc<sup>1</sup> |
-| **[Logging through Azure Event Hub](api-management-howto-log-event-hubs.md)** | Custom scenarios | Seconds | User managed | Custom | Custom | Managed<sup>1</sup>, Self-hosted<sup>1</sup>, Azure Arc<sup>1</sup> |
+| **[Logging through Azure Event Hubs](api-management-howto-log-event-hubs.md)** | Custom scenarios | Seconds | User managed | Custom | Custom | Managed<sup>1</sup>, Self-hosted<sup>1</sup>, Azure Arc<sup>1</sup> |
| **[OpenTelemetry](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md#introduction-to-opentelemetry)** | Monitoring | Minutes | User managed | 100% | Metrics | Self-hosted<sup>2</sup> |

*1. Optional, depending on the configuration of the feature in Azure API Management*
The table below summarizes all the observability capabilities supported by API M
## Next Steps
-* [Follow the tutorials to learn more about API Management](import-and-publish.md)
-- To learn more about the self-hosted gateway, see [Self-hosted gateway overview](self-hosted-gateway-overview.md).
+- Get started with [Azure Monitor metrics and logs](api-management-howto-use-azure-monitor.md)
+- Learn how to log requests with [Application Insights](api-management-howto-app-insights.md)
+- Learn how to log events through [Event Hubs](api-management-howto-log-event-hubs.md)
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
For example, here's how to calculate the available addressing for a subnet with
Subnet Size /24 = 255 IP addresses - 5 reserved from the platform = 250 available addresses. 250 - Gateway 1 (10) - 1 private frontend IP configuration = 239 239 - Gateway 2 (2) = 237
-237 - Gateway 3 (15) - 1 private frontend IP configuration = 223
+237 - Gateway 3 (15) - 1 private frontend IP configuration = 221
> [!IMPORTANT]
> Although a /24 subnet is not required per Application Gateway v2 SKU deployment, it is highly recommended. This is to ensure that Application Gateway v2 has sufficient space for autoscaling expansion and maintenance upgrades. You should ensure that the Application Gateway v2 subnet has sufficient address space to accommodate the number of instances required to serve your maximum expected traffic. If you specify the maximum instance count, then the subnet should have capacity for at least that many addresses. For capacity planning around instance count, see [instance count details](understanding-pricing.md#instance-count).
applied-ai-services Write A Valid Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/tutorials/write-a-valid-query.md
main branch.
<!-- This template provides the basic structure of a tutorial article.
-See the [tutorial guidance](contribute-how-to-mvc-tutorial.md) in the contributor guide.
+See the [tutorial guidance](contribute-how-to-mvc-tutorial.md) in our contributor guide.
To provide feedback on this template contact [the templates workgroup](mailto:templateswg@microsoft.com).
automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/runbooks.md
$Job = Start-AzAutomationRunbook @StartAzAutomationRunBookParameters
$PollingSeconds = 5
$MaxTimeout = New-TimeSpan -Hours 3 | Select-Object -ExpandProperty TotalSeconds
$WaitTime = 0
-while((-NOT (IsJobTerminalState $Job.Status) -and $WaitTime -lt $MaxTimeout) {
+while(-NOT (IsJobTerminalState $Job.Status) -and $WaitTime -lt $MaxTimeout) {
    Start-Sleep -Seconds $PollingSeconds
    $WaitTime += $PollingSeconds
    $Job = $Job | Get-AzAutomationJob
automation Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md
Azure Service Management (ASM) REST APIs for Azure Automation will be retired an
## Next steps
-If you'd like to contribute to Azure Automation documentation, see the [Docs Contributor Guide](/contribute/).
+If you'd like to contribute to Azure Automation documentation, see our [contributor guide](/contribute/).
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Azure Automation now supports [system-assigned managed identities](./automation-
## Next steps
-If you'd like to contribute to Azure Automation documentation, see the [Docs Contributor Guide](/contribute/).
+If you'd like to contribute to Azure Automation documentation, see our [contributor guide](/contribute/).
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |
|--|--|--|--|--|
-|Cisco Hyperflex on VMware <br/> Cisco IKS ESXi 6.7 U3 |v1.20.14|v1.4.1_2022-03-08|15.0.2255.119| PostgreSQL 12.3 (Ubuntu 12.3-1) |
+|Cisco Hyperflex on VMware <br/> Cisco IKS ESXi 6.7 U3 |1.21.13|1.9.0_2022-07-12|16.0.312.4243| Not validated |
### Dell

|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |
|--|--|--|--|--|
| Dell EMC PowerFlex |1.21.5|v1.4.1_2022-03-08|15.0.2255.119 | Not validated |
-| PowerFlex version 3.6 |1.21.5|v1.4.1_2022-03-08|15.0.2255.119 | Not validated |
-| PowerFlex CSI version 1.4 |1.21.5|v1.4.1_2022-03-08 | Not validated |
-| PowerStore X|1.20.6|v1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1) |
-| PowerStore T|1.20.6|v1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1)|
+| PowerFlex version 3.6 |1.21.5|1.4.1_2022-03-08|15.0.2255.119 | Not validated |
+| PowerFlex CSI version 1.4 |1.21.5|1.4.1_2022-03-08 | Not validated |
+| PowerStore X|1.20.6|1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1) |
+| PowerStore T|1.23.5|1.9.0_2022-07-12|16.0.312.4243 |postgres 12.3 (Ubuntu 12.3-1)|
### HPE

|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |
|--|--|--|--|--|
-|HPE Superdome Flex 280|1.20.0|v1.8.0_2022-06-14|16.0.41.7339|12.3 (Ubuntu 12.3-1)
+|HPE Superdome Flex 280|1.20.0|1.8.0_2022-06-14|16.0.41.7339|12.3 (Ubuntu 12.3-1)|
### Kublr

|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |
|--|--|--|--|--|
-|Kublr |1.22.3 / 1.22.10 | v1.9.0_2022-07-12 |15.0.2195.191 |PostgreSQL 12.3 (Ubuntu 12.3-1) |
+|Kublr |1.22.3 / 1.22.10 | 1.9.0_2022-07-12 |15.0.2195.191 |PostgreSQL 12.3 (Ubuntu 12.3-1) |
### Lenovo

|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |
|--|--|--|--|--|
-|Lenovo ThinkAgile MX3520 |AKS on Azure Stack HCI 21H2|v1.0.0_2021-07-30 |15.0.2148.140|Not validated|
+|Lenovo ThinkAgile MX3520 |AKS on Azure Stack HCI 21H2|1.0.0_2021-07-30 |15.0.2148.140|Not validated|
### Nutanix

|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |
|--|--|--|--|--|
-| Karbon 2.2<br/>AOS: 5.19.1.5<br/>AHV: 20201105.1021<br/>PC: Version pc.2021.3.02<br/> | 1.19.8-0 | v1.0.0_2021-07-30 | 15.0.2148.140|postgres 12.3 (Ubuntu 12.3-1)|
+| Karbon 2.2<br/>AOS: 5.19.1.5<br/>AHV: 20201105.1021<br/>PC: Version pc.2021.3.02<br/> | 1.19.8-0 | 1.0.0_2021-07-30 | 15.0.2148.140|postgres 12.3 (Ubuntu 12.3-1)|
### Platform 9

|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |
|--|--|--|--|--|
-| Platform9 Managed Kubernetes v5.3.0 | 1.20.5 | v1.0.0_2021-07-30| 15.0.2195.191 | PostgreSQL 12.3 (Ubuntu 12.3-1) |
+| Platform9 Managed Kubernetes v5.3.0 | 1.20.5 | 1.0.0_2021-07-30| 15.0.2195.191 | PostgreSQL 12.3 (Ubuntu 12.3-1) |
### PureStorage

|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |
|--|--|--|--|--|
-| Portworx Enterprise 2.7 1.22.5 | 1.20.7 | v1.1.0_2021-11-02 | 15.0.2148.140 | Not validated |
+| Portworx Enterprise 2.7 1.22.5 | 1.20.7 | 1.1.0_2021-11-02 | 15.0.2148.140 | Not validated |
### Red Hat

|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |
|--|--|--|--|--|
-| OpenShift 4.7.13 | 1.20.0 | v1.0.0_2021-07-30 | 15.0.2148.140 | postgres 12.3 (Ubuntu 12.3-1)|
+| OpenShift 4.7.13 | 1.20.0 | 1.0.0_2021-07-30 | 15.0.2148.140 | postgres 12.3 (Ubuntu 12.3-1)|
### VMware

|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |
|--|--|--|--|--|
-| TKGm v1.5.1 | 1.20.5 | v1.4.1_2022-03-08 |15.0.2255.119|postgres 12.3 (Ubuntu 12.3-1)|
+| TKGm v1.5.3 | 1.22.8 | 1.9.0_2022-07-12 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1)|
### Wind River

|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |
|--|--|--|--|--|
-|Wind River Cloud Platform 22.06 | v1.23.1|v1.9.0_2022-07-12 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1) |
+|Wind River Cloud Platform 22.06 | 1.23.1|1.9.0_2022-07-12 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1) |
## Data services validation process
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-en
|Azure Resource Manager (ARM) API version|2022-03-01-preview (No change)|
|`arcdata` Azure CLI extension version|1.4.5 ([Download](https://arcdataazurecliextension.blob.core.windows.net/stage/arcdata-1.4.5-py2.py3-none-any.whl))|
|Arc enabled Kubernetes helm chart extension version|1.2.20381002|
-|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.5.0 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/arc-1.5.0.vsix))</br>1.5.0 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/azcli-1.5.0.vsix))|
+|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.5.1 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/arc-1.5.1.vsix))</br>1.5.1 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/azcli-1.5.1.vsix))|
## July 12, 2022
azure-cache-for-redis Cache How To Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-import-export-data.md
Use import to bring Redis compatible RDB files from any Redis server running in
> >
-1. To import one or more exported cache blobs, [browse to your cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the Azure portal and select **Import data** from the **Resource menu**. In the working pane you see **Choose Blob(s)** where you can find .RDB files.
+1. To import one or more exported cache blobs, [browse to your cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the Azure portal and select **Import data** from the **Resource menu**. In the working pane, you see **Choose Blob(s)** where you can find RDB files.
:::image type="content" source="./media/cache-how-to-import-export-data/cache-import-data.png" alt-text="Screenshot showing Import data selected in the Resource menu.":::
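The same import can also be scripted. A minimal CLI sketch, assuming placeholder names and a SAS URI for the exported blob:

```azurecli
# Import an RDB blob into an existing cache (cache, group, account, and SAS token are placeholders)
az redis import --name MyCache --resource-group MyGroup --files "https://mystorageaccount.blob.core.windows.net/mycontainer/cache.rdb?<sas-token>"
```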
Export allows you to export the data stored in Azure Cache for Redis to Redis co
:::image type="content" source="./media/cache-how-to-import-export-data/cache-export-data-choose-account.png" alt-text="Screenshot showing a list of containers in the working pane.":::
-3. Choose the storage container you want to hold your export, then **Select**. If you want a new container, select **Add Container** to add it first and then select it from the list.
+3. Choose the storage container you want to hold your export, then **Select**. If you want a new container, select **Add Container** to add it first, and then select it from the list.
:::image type="content" source="./media/cache-how-to-import-export-data/cache-export-data-container.png" alt-text="Screenshot of a list of containers with one highlighted and a select button.":::
To resolve this error, start the import or export operation before 15 minutes ha
### I got an error when exporting my data to Azure Blob Storage. What happened?
-Export works only with RDB files stored as page blobs. Other blob types aren't currently supported, including Blob storage accounts with hot and cool tiers. For more information, see [Azure storage account overview](../storage/common/storage-account-overview.md).
+Export works only with RDB files stored as page blobs. Other blob types aren't currently supported, including Blob storage accounts with hot and cool tiers. For more information, see [Azure storage account overview](../storage/common/storage-account-overview.md). If you're using an access key to authenticate to a storage account, having firewall exceptions on the storage account tends to cause the import/export process to fail.
## Next steps
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Azure Cache for Redis offers Redis persistence using the Redis database (RDB) an
- **RDB persistence** - When you use RDB persistence, Azure Cache for Redis persists a snapshot of your cache in a binary format. The snapshot is saved in an Azure Storage account. The configurable backup frequency determines how often to persist the snapshot. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the most recent snapshot. Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence.
- **AOF persistence** - When you use AOF persistence, Azure Cache for Redis saves every write operation to a log. The log is saved at least once per second into an Azure Storage account. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the stored write operations. Learn more about the [advantages](https://redis.io/topics/persistence#aof-advantages) and [disadvantages](https://redis.io/topics/persistence#aof-disadvantages) of AOF persistence.
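As a sketch of what this looks like from the command line, RDB persistence can be enabled when creating a premium cache by passing Redis configuration keys; all names and the connection string below are placeholders to adapt:

```azurecli
# Create a Premium cache with RDB snapshots taken every 60 minutes
az redis create --name MyCache --resource-group MyGroup --location westus2 --sku Premium --vm-size P1 --redis-configuration '{"rdb-backup-enabled": "true", "rdb-backup-frequency": "60", "rdb-storage-connection-string": "<storage-connection-string>"}'
```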
-Azure Cache for Redis persistence features are intended to be used to restore data to the same cache after data loss and the RDB/AOF persisted data files cannot be imported to a new cache.
+Azure Cache for Redis persistence features are intended to be used to restore data to the same cache after data loss, and the RDB/AOF persisted data files can't be imported to a new cache.
To move data across caches, use the Import/Export feature. For more information, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
-To generate backup of data that can be added to a new cache, you can write automated scripts using PowerShell or CLI to export data periodically.
+To generate backups of data that can be added to a new cache, you can write automated scripts using PowerShell or CLI to export data periodically.
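A minimal CLI sketch of such an export script, assuming placeholder names and a container SAS URI:

```azurecli
# Export the cache contents to blob storage with a date-stamped prefix (names and SAS token are placeholders)
az redis export --name MyCache --resource-group MyGroup --prefix "backup-$(date +%Y%m%d)" --container "https://mystorageaccount.blob.core.windows.net/mycontainer?<sas-token>"
```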
> [!NOTE]
> Persistence features are intended to be used to restore data to the same cache after data loss.
The following list contains answers to commonly asked questions about Azure Cach
- [Can I use the same storage account for persistence across two different caches?](#can-i-use-the-same-storage-account-for-persistence-across-two-different-caches)
- [Will I be charged for the storage being used in Data Persistence](#will-i-be-charged-for-the-storage-being-used-in-data-persistence)
- [How frequently does RDB and AOF persistence write to my blobs, and should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete)
+- [Will having firewall exceptions on the storage account affect persistence?](#will-having-firewall-exceptions-on-the-storage-account-affect-persistence)
### RDB persistence
When clustering is enabled, each shard in the cache has its own set of page blob
After a rewrite, two sets of AOF files exist in storage. Rewrites occur in the background and append to the first set of files. Set operations, sent to the cache during the rewrite, append to the second set. A backup is temporarily stored during rewrites if there's a failure. The backup is promptly deleted after a rewrite finishes. If soft delete is turned on for your storage account, the soft delete setting applies and existing backups continue to stay in the soft delete state.
+### Will having firewall exceptions on the storage account affect persistence?
+Using managed identity adds the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to implement. If you aren't using managed identity and instead authorize access to a storage account using a key, then having firewall exceptions on the storage account tends to break the persistence process.
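If you do use managed identity and want to lock the storage account down while still letting trusted Azure services through, a sketch with placeholder names:

```azurecli
# Deny general network access but allow trusted Azure services (which covers the cache's managed identity path)
az storage account update --name mystorageaccount --resource-group MyGroup --default-action Deny --bypass AzureServices
```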
## Next steps

Learn more about Azure Cache for Redis features.
azure-cache-for-redis Cache Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-managed-identity.md
# Managed identity for storage (Preview)
-[Managed identities](../active-directory/managed-identities-azure-resources/overview.md) are a common tool used in Azure to help developers minimize the burden of managing secrets and login information. Managed identities are useful when Azure services connect to each other. Instead of managing authorization between each service, [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) can be used to provide a managed identity that makes the authentication process more streamlined and secure.
+[Managed identities](../active-directory/managed-identities-azure-resources/overview.md) are a common tool used in Azure to help developers minimize the burden of managing secrets and sign-in information. Managed identities are useful when Azure services connect to each other. Instead of managing authorization between each service, [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) can be used to provide a managed identity that makes the authentication process more streamlined and secure.
## Use managed identity with storage accounts
To use managed identity, you must have a premium-tier cache.
> :::image type="content" source="media/cache-managed-identity/basics.png" alt-text="create a premium azure cache":::
-1. Click the **Advanced** tab. Then, scroll down to **(PREVIEW) System assigned managed identity** and select **On**.
+1. Select the **Advanced** tab. Then, scroll down to **(PREVIEW) System assigned managed identity** and select **On**.
:::image type="content" source="media/cache-managed-identity/system-assigned.png" alt-text="Advanced page of the form":::
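Recent Azure CLI versions also expose this through the `az redis identity` command group; whether your installed version includes it is worth confirming with `az redis identity --help`. A sketch with placeholder names:

```azurecli
# Enable a system-assigned managed identity on an existing cache (assumes the az redis identity group is available)
az redis identity assign --mi-system-assigned --name MyCache --resource-group MyGroup
```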
Set-AzRedisCache -ResourceGroupName \"MyGroup\" -Name \"MyCache\" -IdentityType
:::image type="content" source="media/cache-managed-identity/blob-data.png" alt-text="storage blob data contributor list":::

> [!NOTE]
-> Adding an Azure Cache for Redis instance as a storage blog data contributor through system-assigned identity will conveniently add the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to implement.
+> Adding an Azure Cache for Redis instance as a storage blob data contributor through system-assigned identity conveniently adds the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to implement. If you're not using managed identity and instead authorizing a storage account with a key, then having firewall exceptions on the storage account tends to break the persistence process and the import-export processes.
## Use managed identity to access a storage account
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
Title: "Quickstart: Create your first C# function in Azure using Visual Studio"
-description: "In this quickstart, you learn how to use Visual Studio to create and publish a C# HTTP triggered function to Azure Functions that runs on .NET Core 3.1."
+description: "In this quickstart, you learn how to use Visual Studio to create and publish a C# HTTP triggered function to Azure Functions."
ms.assetid: 82db1177-2295-4e39-bd42-763f6082e796 Previously updated : 06/13/2022 Last updated : 09/08/2022 ms.devlang: csharp adobe-target: true
adobe-target-content: ./functions-create-your-first-function-visual-studio-uiex
Azure Functions lets you use Visual Studio to create local C# function projects and then easily publish this project to run in a scalable serverless environment in Azure. If you prefer to develop your C# apps locally using Visual Studio Code, you should instead consider the [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
-By default, this article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) version of .NET, such as .NET 6. To create C# functions on .NET 6 that can also run on .NET 5.0 and .NET Framework 4.8 (in preview) [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](functions-create-your-first-function-visual-studio.md?tabs=isolated-process).
+By default, this article shows you how to create C# functions that run [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) versions of .NET. To create C# functions [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](functions-create-your-first-function-visual-studio.md?tabs=isolated-process). Check out [.NET supported versions](functions-dotnet-class-library.md#supported-versions) before getting started.
In this article, you learn how to: > [!div class="checklist"]
-> * Use Visual Studio to create a C# class library project on .NET 6.0.
+> * Use Visual Studio to create a C# class library project.
> * Create a function that responds to HTTP requests. > * Run your code locally to verify function behavior. > * Deploy your code project to Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in you
## Prerequisites
-+ [Visual Studio 2022](https://visualstudio.microsoft.com/vs/), which supports .NET 6.0. Make sure to select the **Azure development** workload during installation.
++ [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). Make sure to select the **Azure development** workload during installation.
+ [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing). If you don't already have an account, [create a free one](https://azure.microsoft.com/free/dotnet/) before you begin.
The Azure Functions project template in Visual Studio creates a C# class library
| Setting | Value | Description |
| --- | --- | --- |
- | **Functions worker** | **.NET 6** | When you choose **.NET 6**, you create a project that runs in-process with the Azure Functions runtime. Use in-process unless you need to run your function app on .NET 5.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](functions-dotnet-class-library.md#supported-versions). |
+ | **Functions worker** | **.NET 6** | When you choose **.NET 6**, you create a project that runs in-process with the Azure Functions runtime. Use in-process unless you need to run your function app on .NET 7.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](functions-dotnet-class-library.md#supported-versions). |
| **Function** | **HTTP trigger** | This value creates a function triggered by an HTTP request. | | **Use Azurite for runtime storage account (AzureWebJobsStorage)** | Enable | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the [Azurite emulator](../storage/common/storage-use-azurite.md?tabs=visual-studio) is used. | | **Authorization level** | **Anonymous** | The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function. For more information about keys and authorization, see [Authorization keys](./functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](./functions-bindings-http-webhook.md). |
The Azure Functions project template in Visual Studio creates a C# class library
| Setting | Value | Description |
| --- | --- | --- |
- | **Functions worker** | **.NET 6 Isolated** | When you choose **.NET 6 Isolated**, you create a project that runs in a separate worker process. Choose isolated process when you need to run your function app on .NET 5.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions). |
+ | **Functions worker** | **.NET 6 Isolated** | When you choose **.NET 6 Isolated**, you create a project that runs in a separate worker process. Choose isolated process when you need to run your function app on .NET 7.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions). |
| **Function** | **HTTP trigger** | This value creates a function triggered by an HTTP request. | | **Use Azurite for runtime storage account (AzureWebJobsStorage)** | Enable | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the [Azurite emulator](../storage/common/storage-use-azurite.md?tabs=visual-studio) is used. | | **Authorization level** | **Anonymous** | The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function. For more information about keys and authorization, see [Authorization keys](./functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](./functions-bindings-http-webhook.md). |
Advance to the next article to learn how to add an Azure Storage queue binding t
# [.NET 6 Isolated](#tab/isolated-process)
-To learn more about working with C# functions that run in an isolated process, see the [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+To learn more about working with C# functions that run in an isolated process, see the [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md). Check out [.NET supported versions](functions-dotnet-class-library.md#supported-versions) to see the other .NET versions supported in an isolated process.
Advance to the next article to learn how to add an Azure Storage queue binding to your function: > [!div class="nextstepaction"]
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
There are a number of advantages to using deployment slots. The following scenar
- **Different environments for different purposes**: Using different slots gives you the opportunity to differentiate app instances before swapping to production or a staging slot. - **Prewarming**: Deploying to a slot instead of directly to production allows the app to warm up before going live. Additionally, using slots reduces latency for HTTP-triggered workloads. Instances are warmed up before deployment, which reduces the cold start for newly deployed functions. - **Easy fallbacks**: After a swap with production, the slot with a previously staged app now has the previous production app. If the changes swapped into the production slot aren't as you expect, you can immediately reverse the swap to get your "last known good instance" back.-- **Minimize restarts**: Changing app settings in a production slot requires a restart of the running app. You can instead change settings in a staging slot and swap the settings change into productions with a prewarmed instance. This is the recommended way to upgrade between Functions runtime versions while maintaining the highest availability. To learn more, see [Minimum downtime upgrade](functions-versions.md#minimum-downtime-upgrade).
+- **Minimize restarts**: Changing app settings in a production slot requires a restart of the running app. You can instead change settings in a staging slot and swap the settings change into production with a prewarmed instance. This is the recommended way to upgrade between Functions runtime versions while maintaining the highest availability. To learn more, see [Minimum downtime upgrade](functions-versions.md#minimum-downtime-upgrade).
## Swap operations
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
description: Learn how to develop and test Azure Functions by using Azure Functi
ms.devlang: csharp Previously updated : 05/19/2022 Last updated : 09/08/2022 # Develop Azure Functions using Visual Studio
Visual Studio provides the following benefits when you develop your functions:
This article provides details about how to use Visual Studio to develop C# class library functions and publish them to Azure. Before you read this article, consider completing the [Functions quickstart for Visual Studio](functions-create-your-first-function-visual-studio.md).
-Unless otherwise noted, procedures and examples shown are for Visual Studio 2022.
+Unless otherwise noted, procedures and examples shown are for Visual Studio 2022. For more information about Visual Studio 2022 releases, see [the release notes](/visualstudio/releases/2022/release-notes) or the [preview release notes](/visualstudio/releases/2022/release-notes-preview).
## Prerequisites
When you update your Visual Studio 2017 installation, make sure that you're usin
1. If your version is older, update your tools in Visual Studio as shown in the following section.
-### Update your tools in Visual Studio 2017
+### Update your tools in Visual Studio
1. In the **Extensions and Updates** dialog, expand **Updates** > **Visual Studio Marketplace**, choose **Azure Functions and Web Jobs Tools** and select **Update**.
azure-functions Functions Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-get-started.md
Use the following resources to get started.
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio](./functions-create-your-first-function-visual-studio.md)<li>[Visual Studio Code](./create-first-function-vs-code-csharp.md)<li>[Command line](./create-first-function-cli-csharp.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=csharp&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=C%23) |
-| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=csharp)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [C# language reference](./functions-dotnet-class-library.md)|
Use the following resources to get started.
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-java.md)<li>[Jav) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=java&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Java) |
-| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Develop an App using the Maven Plugin for Azure Functions](/learn/modules/develop-azure-functions-app-with-maven-plugin/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Develop an App using the Maven Plugin for Azure Functions](/learn/modules/develop-azure-functions-app-with-maven-plugin/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=java)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [Java language reference](./functions-reference-java.md)| ::: zone-end
Use the following resources to get started.
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-node.md)<li>[Node.js terminal/command prompt](./create-first-function-cli-node.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=javascript%2ctypescript&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=JavaScript%2CTypeScript) |
-| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/learn/modules/shift-nodejs-express-apis-serverless/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/learn/modules/shift-nodejs-express-apis-serverless/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=javascript)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [JavaScript](./functions-reference-node.md) or [TypeScript](./functions-reference-node.md#typescript) language reference| ::: zone-end
Use the following resources to get started.
| | | | **Create your first function** | <li>Using [Visual Studio Code](./create-first-function-vs-code-powershell.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=powershell&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=PowerShell) |
-| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=powershell)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [PowerShell language reference](./functions-reference-powershell.md)| ::: zone-end
Use the following resources to get started.
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-python.md)<li>[Terminal/command prompt](./create-first-function-cli-python.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=python&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Python) |
-| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=python)<li>[Security](./security-concepts.md)<li>[Improve throughput performance](./python-scale-performance-reference.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [Python language reference](./functions-reference-python.md)| ::: zone-end
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
This type of streaming logs requires that Application Insights integration be en
## Next steps
-Learn how to develop, test, and publish Azure Functions by using Azure Functions Core Tools [Microsoft learn module](/learn/modules/develop-test-deploy-azure-functions-with-core-tools/)
-Azure Functions Core Tools is [open source and hosted on GitHub](https://github.com/azure/azure-functions-cli).
-To file a bug or feature request, [open a GitHub issue](https://github.com/azure/azure-functions-cli/issues).
+Learn how to [develop, test, and publish Azure functions by using Azure Functions Core Tools](/learn/modules/develop-test-deploy-azure-functions-with-core-tools/). Azure Functions Core Tools is [open source and hosted on GitHub](https://github.com/azure/azure-functions-cli). To file a bug or feature request, [open a GitHub issue](https://github.com/azure/azure-functions-cli/issues).
<!-- LINKS -->
azure-maps Clustering Point Data Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-android-sdk.md
source = new DataSource(
); //Import the geojson data and add it to the data source.
-source.importDataFromUrl("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/SamplePoiDataSet.json");
+source.importDataFromUrl("https://samples.azuremaps.com/data/geojson/SamplePoiDataSet.json");
//Add data source to the map. map.sources.add(source);
val source = DataSource(
) //Import the geojson data and add it to the data source.
-source.importDataFromUrl("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/SamplePoiDataSet.json")
+source.importDataFromUrl("https://samples.azuremaps.com/data/geojson/SamplePoiDataSet.json")
//Add data source to the map. map.sources.add(source)
azure-maps Clustering Point Data Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-ios-sdk.md
map.layers.addLayer(
) ```
-For this sample, the following images is loaded into the assets folder of the app.
+For this sample, the following image is loaded into the assets folder of the app.
| ![Earthquake icon image](./media/ios-sdk/cluster-point-data-ios-sdk/earthquake-icon.png) | ![Weather icon image of rain showers](./media/ios-sdk/cluster-point-data-ios-sdk/warning-triangle-icon.png) | |:--:|:--:|
let source = DataSource(options: [
]) // Import the geojson data and add it to the data source.
-let url = URL(string: "https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/SamplePoiDataSet.json")!
+let url = URL(string: "https://samples.azuremaps.com/data/geojson/SamplePoiDataSet.json")!
source.importData(fromURL: url) // Add data source to the map.
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
To see your indoor map, load it into a web browser. It should appear like the im
![indoor map image](media/how-to-use-indoor-module/indoor-map-graphic.png)
-[See live demo](https://azuremapscodesamples.azurewebsites.net/?sample=Creator%20indoor%20maps)
+[See live demo](https://samples.azuremaps.com/?sample=creator-indoor-maps)
## Next steps
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md
The web application that you previously opened in a browser should now reflect t
![Free room in green and Busy room in red](./media/indoor-map-dynamic-styling/room-state.png)
-[See live demo](https://azuremapscodesamples.azurewebsites.net/?sample=Creator%20indoor%20maps)
+[See live demo](https://samples.azuremaps.com/?sample=creator-indoor-maps)
## Next steps
azure-maps Map Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-accessibility.md
Learn about accessibility in the Web SDK modules.
> [!div class="nextstepaction"] > [Drawing tools accessibility](drawing-tools-interactions-keyboard-shortcuts.md)
-Learn about developing accessible apps with Microsoft Learn:
+Learn about developing accessible apps:
> [!div class="nextstepaction"] > [Accessibility in Action Digital Badge Learning Path](https://ready.azurewebsites.net/learning/track/2940)
azure-maps Map Add Popup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-popup.md
var feature = new atlas.data.Feature(new atlas.data.Point([0, 0]), {
subValue: 'Pizza' }, arrayValue: [3, 4, 5, 6],
- imageLink: 'https://azuremapscodesamples.azurewebsites.net/common/images/Pike_Market.jpg'
+ imageLink: 'https://samples.azuremaps.com/images/Pike_Market.jpg'
}); var popup = new atlas.Popup({
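The snippet above is truncated mid-constructor. As a hedged sketch (not the article's exact code), the popup is typically completed and opened like this, reusing the feature's `imageLink` property from the snippet:

```javascript
// Assumed completion of the truncated snippet above (a sketch, not the article's code).
var popup = new atlas.Popup({
    position: feature.geometry.coordinates,
    content: '<div style="padding:10px"><img src="' + feature.properties.imageLink + '" width="200" /></div>'
});

// Open the popup on the map instance.
popup.open(map);
```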
azure-maps Map Extruded Polygon Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon-android.md
A choropleth map can be rendered using the polygon extrusion layer. Set the `hei
DataSource source = new DataSource(); //Import the geojson data and add it to the data source.
-source.importDataFromUrl("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/US_States_Population_Density.json");
+source.importDataFromUrl("https://samples.azuremaps.com/data/geojson/US_States_Population_Density.json");
//Add data source to the map. map.sources.add(source);
map.layers.add(layer, "labels");
val source = DataSource() //Import the geojson data and add it to the data source.
-source.importDataFromUrl("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/US_States_Population_Density.json")
+source.importDataFromUrl("https://samples.azuremaps.com/data/geojson/US_States_Population_Density.json")
//Add data source to the map. map.sources.add(source)
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
Web apps that use Bing Maps often use the Bing Maps V8 JavaScript SDK. The Azure
If migrating an existing web application, check to see if it is using an open-source map control library such as Cesium, Leaflet, and OpenLayers. If it is and you would prefer to continue to use that library, you can connect it to the Azure Maps tile services ([road tiles](/rest/api/maps/render/getmaptile) \| [satellite tiles](/rest/api/maps/render/getmapimagerytile)). The links below provide details on how to use Azure Maps in some commonly used open-source map control libraries.
-* [Cesium](https://www.cesium.com/) - A 3D map control for the web. [Code samples](https://azuremapscodesamples.azurewebsites.net/?search=Cesium) \| [Plugin repo]()
-* [Leaflet](https://leafletjs.com/) – Lightweight 2D map control for the web. [Code samples](https://azuremapscodesamples.azurewebsites.net/?search=leaflet) \| [Plugin repo]()
-* [OpenLayers](https://openlayers.org/) - A 2D map control for the web that supports projections. [Code samples](https://azuremapscodesamples.azurewebsites.net/?search=openlayers) \| [Plugin repo]()
+* [Cesium](https://www.cesium.com/) - A 3D map control for the web. [Code samples](https://samples.azuremaps.com/?search=Cesium) \| [Plugin repo]()
+* [Leaflet](https://leafletjs.com/) – Lightweight 2D map control for the web. [Code samples](https://samples.azuremaps.com/?search=leaflet) \| [Plugin repo]()
+* [OpenLayers](https://openlayers.org/) - A 2D map control for the web that supports projections. [Code samples](https://samples.azuremaps.com/?search=openlayers) \| [Plugin repo]()
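To make the connection pattern concrete, here's a minimal hedged sketch of pointing Leaflet at the Azure Maps render service. The tile URL template and its parameters are assumptions based on the Get Map Tile REST API, and `<Your-Azure-Maps-Key>` is a placeholder:

```javascript
// A sketch of rendering Azure Maps road tiles in Leaflet (assumes leaflet.js and
// leaflet.css are already loaded on the page).
var map = L.map('myMap').setView([47.6, -122.33], 12);

// Tile URL template is an assumption based on the Azure Maps render service.
L.tileLayer(
    'https://atlas.microsoft.com/map/tile?api-version=2.0&tilesetId=microsoft.base.road' +
    '&zoom={z}&x={x}&y={y}&tileSize=256&subscription-key=<Your-Azure-Maps-Key>',
    { attribution: '© Microsoft, © TomTom' }
).addTo(map);
```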
If developing using a JavaScript framework, one of the following open-source projects may be useful:
The following table lists key API features in the Bing Maps V8 JavaScript SDK an
| Heat maps | ✓ | | Tile Layers | ✓ | | KML Layer | ✓ |
-| Contour layer | [Samples](https://azuremapscodesamples.azurewebsites.net/?search=contour) |
+| Contour layer | [Samples](https://samples.azuremaps.com/?search=contour) |
| Data binning layer | Included in the open-source Azure Maps [Gridded Data Source module](https://github.com/Azure-Samples/azure-maps-gridded-data-source) | | Animated tile layer | Included in the open-source Azure Maps [Animation module](https://github.com/Azure-Samples/azure-maps-animations) | | Drawing tools | ✓ |
Azure Maps also has many additional [open-source modules for the web SDK](open-s
The following are some of the key differences between the Bing Maps and Azure Maps Web SDKs to be aware of:
-* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an NPM package is also available for embedding the Web SDK into apps if preferred. For more information, see this [documentation](./how-to-use-map-control.md) for more information. This package also includes TypeScript definitions.
-* Bing Maps provides two hosted branches of their SDK; Release and Experimental. The Experimental branch may receive multiple updates a day when new development is taking place. Azure Maps only hosts a release branch, however experimental features are created as custom modules in the open-source Azure Maps code samples project. Bing Maps used to have a frozen branch as well that was updated less frequently, thus reducing the risk of breaking changes due to a release. In Azure Maps there you can use the NPM module and point to any previous minor version release.
+* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is also available for embedding the Web SDK into apps if preferred. For more information, see this [documentation](./how-to-use-map-control.md). This package also includes TypeScript definitions.
+* Bing Maps provides two hosted branches of their SDK: Release and Experimental. The Experimental branch may receive multiple updates a day when new development is taking place. Azure Maps only hosts a release branch; however, experimental features are created as custom modules in the open-source Azure Maps code samples project. Bing Maps used to have a frozen branch as well that was updated less frequently, thus reducing the risk of breaking changes due to a release. In Azure Maps, you can use the npm module and point to any previous minor version release.
> [!TIP]
> Azure Maps publishes both minified and unminified versions of the SDK. Simply remove `.min` from the file names. The unminified version is useful when debugging issues, but be sure to use the minified version in production to take advantage of the smaller file size.
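For the npm path mentioned above, a minimal sketch of embedding the control looks like the following; the container element ID is an assumption, and the package name is `azure-maps-control` as published on npm:

```javascript
// Sketch: embedding the Azure Maps Web SDK from the npm package.
import * as atlas from 'azure-maps-control';
import 'azure-maps-control/dist/atlas.min.css';

// 'myMap' is an assumed <div> ID; replace the key placeholder with a real key.
var map = new atlas.Map('myMap', {
    center: [-122.33, 47.6], // Azure Maps uses [longitude, latitude].
    zoom: 12,
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your-Azure-Maps-Key>'
    }
});

map.events.add('ready', function () {
    // Safe to add sources and layers from this point on.
});
```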
map.events.add('click', marker, function () {
**Additional resources** * [Add a popup](./map-add-popup.md)
-* [Popup with Media Content](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Popup%20with%20Media%20Content)
-* [Popups on Shapes](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Popups%20on%20Shapes)
-* [Reusing Popup with Multiple Pins](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Reusing%20Popup%20with%20Multiple%20Pins)
+* [Popup with Media Content](https://samples.azuremaps.com/?sample=popup-with-media-content)
+* [Popups on Shapes](https://samples.azuremaps.com/?sample=popups-on-shapes)
+* [Reusing Popup with Multiple Pins](https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins)
* [Popup class](/javascript/api/azure-maps-control/atlas.popup) * [Popup options](/javascript/api/azure-maps-control/atlas.popupoptions)
If you click on one of the traffic icons in Azure Maps, additional information i
**Additional resources** * [Show traffic on the map](./map-show-traffic.md)
-* [Traffic overlay options](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Traffic%20Overlay%20Options)
-* [Traffic control](https://azuremapscodesamples.azurewebsites.net/?sample=Traffic%20controls)
+* [Traffic overlay options](https://samples.azuremaps.com/?sample=traffic-overlay-options)
+* [Traffic control](https://samples.azuremaps.com/?sample=traffic-controls)
### Add a ground overlay
In Azure Maps the drawing tools module needs to be loaded by loading the JavaScr
**Additional resources** * [Documentation](./set-drawing-options.md)
-* [Code samples](https://azuremapscodesamples.azurewebsites.net/#Drawing-Tools-Module)
+* [Code samples](https://samples.azuremaps.com/#drawing-tools-module)
## Additional resources
Review code samples related to migrating other Bing Maps features:
**Data visualizations** > [!div class="nextstepaction"]
-> [Contour layer](https://azuremapscodesamples.azurewebsites.net/?search=contour)
+> [Contour layer](https://samples.azuremaps.com/?search=contour)
> [!div class="nextstepaction"]
-> [Data Binning](https://azuremapscodesamples.azurewebsites.net/?search=data%20binning)
+> [Data Binning](https://samples.azuremaps.com/?search=Data%20Binning)
**Services**
Review code samples related to migrating other Bing Maps features:
> [Show directions from A to B](./map-route.md) > [!div class="nextstepaction"]
-> [Search Autosuggest with JQuery UI](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Search%20Autosuggest%20and%20JQuery%20UI)
+> [Search Autosuggest with JQuery UI](https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui)
Learn more about the Azure Maps Web SDK.
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
Azure Maps can snap coordinates to roads by using the [route directions](/rest/a
There are two different ways to use the route directions API to snap coordinates to roads.

* If there are 150 coordinates or fewer, they can be passed as waypoints in the GET route directions API. Using this approach, two different types of snapped data can be retrieved: route instructions will contain the individual snapped waypoints, while the route path will have an interpolated set of coordinates that fill the full path between the coordinates.
-* If there are more than 150 coordinates, the POST route directions API can be used. The coordinates start and end coordinates have to be passed into the query parameter, but all coordinates can be passed into the `supportingPoints` parameter in the body of the POST request and formatted a GeoJSON geometry collection of points. The only snapped data available using this approach will be the route path that is an interpolated set of coordinates that fill the full path between the coordinates. [Here is an example](https://azuremapscodesamples.azurewebsites.net/?sample=Snap%20points%20to%20logical%20route%20path) of this approach using the services module in the Azure Maps Web SDK.
+* If there are more than 150 coordinates, the POST route directions API can be used. The start and end coordinates have to be passed into the query parameter, but all coordinates can be passed into the `supportingPoints` parameter in the body of the POST request, formatted as a GeoJSON geometry collection of points. The only snapped data available using this approach will be the route path, which is an interpolated set of coordinates that fill the full path between the coordinates. [Here is an example](https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path) of this approach using the services module in the Azure Maps Web SDK.
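As a hedged illustration of that POST shape (the endpoint and body fields follow the Post Route Directions REST API; the coordinates and key are placeholders):

```javascript
// Sketch: snapping a large set of points via POST Route Directions.
// The query takes only start:end as lat,lon pairs; every point goes into
// supportingPoints as a GeoJSON GeometryCollection ([lon, lat] order).
var points = [[47.60, -122.33], [47.61, -122.32], [47.62, -122.31]]; // [lat, lon]

var url = 'https://atlas.microsoft.com/route/directions/json?api-version=1.0' +
    '&query=' + points[0] + ':' + points[points.length - 1] +
    '&subscription-key=<Your-Azure-Maps-Key>';

fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        supportingPoints: {
            type: 'GeometryCollection',
            geometries: points.map(function (p) {
                return { type: 'Point', coordinates: [p[1], p[0]] };
            })
        }
    })
})
    .then(function (r) { return r.json(); })
    .then(function (result) {
        // The snapped, interpolated path.
        console.log(result.routes[0].legs[0].points);
    });
```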
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
The Azure Maps route directions API does not currently return speed limit data,
The Azure Maps Web SDK uses vector tiles to render the maps. These vector tiles contain the raw road geometry information and can be used to calculate the nearest road to a coordinate for simple snapping of individual coordinates. This is useful when you want the coordinates to visually appear over roads and you are already using the Azure Maps Web SDK to visualize the data.
-This approach however will only snap to the road segments that are loaded within the map view. When zoomed out at country level there may be no road data, so snapping can't be done, however at that zoom level a single pixel can represent the area of several city blocks so snapping isn't needed. To address this, the snapping logic can be applied every time the map has finished moving. [Here is a code sample](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Basic%20snap%20to%20road%20logic) that demonstrates this.
+This approach, however, will only snap to the road segments that are loaded within the map view. When zoomed out at the country level there may be no road data, so snapping can't be done; however, at that zoom level a single pixel can represent the area of several city blocks, so snapping isn't needed. To address this, the snapping logic can be applied every time the map has finished moving. [Here is a code sample](https://samples.azuremaps.com/?sample=basic-snap-to-road-logic) that demonstrates this.
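A hedged sketch of that pattern follows; `pins` (a set of `atlas.Shape` point instances), the filter expression, and the closest-point math are assumptions, and the full snapping logic lives in the linked sample:

```javascript
// Sketch: re-snap point shapes (atlas.Shape instances in `pins`) whenever the
// map stops moving.
map.events.add('moveend', function () {
    pins.forEach(function (pin) {
        var pos = pin.getCoordinates();

        // Query road line geometry currently rendered in the map view.
        var roads = map.layers.getRenderedShapes(pos, null,
            ['==', ['geometry-type'], 'LineString']);

        if (roads.length > 0) {
            // Closest-point-on-line math elided; atlas.math.getDistanceTo can
            // rank candidate coordinates by distance to the pin, then:
            // pin.setCoordinates(snappedPosition);
        }
    });
});
```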
**Using the Azure Maps vector tiles directly to snap coordinates**
Here are some useful resources around hosting and querying spatial data in Azure
Azure Maps provides client libraries for the following programming languages:
-* JavaScript, TypeScript, Node.js – [documentation](./how-to-use-services-module.md) \| [NPM package](https://www.npmjs.com/package/azure-maps-rest)
+* JavaScript, TypeScript, Node.js – [documentation](./how-to-use-services-module.md) \| [npm package](https://www.npmjs.com/package/azure-maps-rest)
Open-source client libraries for other programming languages:
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
You will also learn:
If migrating an existing web application, check to see if it is using an open-source map control library. Examples of open-source map control libraries are Cesium, Leaflet, and OpenLayers. You can still migrate your application even if it uses an open-source map control library and you do not want to use the Azure Maps Web SDK. In that case, connect your application to the Azure Maps tile services ([road tiles](/rest/api/maps/render/getmaptile) \| [satellite tiles](/rest/api/maps/render/getmapimagerytile)). The following points detail how to use Azure Maps in some commonly used open-source map control libraries.
-* Cesium - A 3D map control for the web. [Code sample](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Raster%20Tiles%20in%20Cesium%20JS) \| [Documentation](https://www.cesium.com/)
-* Leaflet – Lightweight 2D map control for the web. [Code sample](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Azure%20Maps%20Raster%20Tiles%20in%20Leaflet%20JS) \| [Documentation](https://leafletjs.com/)
-* OpenLayers - A 2D map control for the web that supports projections. [Code sample](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Raster%20Tiles%20in%20OpenLayers) \| [Documentation](https://openlayers.org/)
+* Cesium - A 3D map control for the web. [Code sample](https://samples.azuremaps.com/?sample=render-azure-maps-in-cesium) \| [Documentation](https://www.cesium.com/)
+* Leaflet – Lightweight 2D map control for the web. [Code sample](https://samples.azuremaps.com/?sample=render-azure-maps-in-leaflet) \| [Documentation](https://leafletjs.com/)
+* OpenLayers - A 2D map control for the web that supports projections. [Code sample](https://samples.azuremaps.com/?sample=render-azure-maps-in-openlayers) \| [Documentation](https://openlayers.org/)
If developing using a JavaScript framework, one of the following open-source projects may be useful:
The table lists key API features in the Google Maps V3 JavaScript SDK and the su
The following are some key differences between the Google Maps and Azure Maps Web SDKs, to be aware of:
-- In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an NPM package is available. Embed the Web SDK package into apps. For more information, see this [documentation](how-to-use-map-control.md). This package also includes TypeScript definitions.
+- In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is available for embedding the Web SDK into apps. For more information, see this [documentation](how-to-use-map-control.md). This package also includes TypeScript definitions.
- You first need to create an instance of the Map class in Azure Maps. Wait for the map's `ready` or `load` event to fire before programmatically interacting with the map. This order will ensure that all the map resources have been loaded and are ready to be accessed.
- Both platforms use a similar tiling system for the base maps. The tiles in Google Maps are 256 pixels in dimension; however, the tiles in Azure Maps are 512 pixels in dimension. To get the same map view in Azure Maps as in Google Maps, subtract one from the Google Maps zoom level.
- Coordinates in Google Maps are referred to as `latitude,longitude`, while Azure Maps uses `longitude,latitude`. The Azure Maps format is aligned with the standard `[x, y]`, which is followed by most GIS platforms.
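A minimal sketch of those last two differences in code (placeholder coordinates; `map` is assumed to be an initialized `atlas.Map`):

```javascript
// Translating a Google Maps camera to Azure Maps.
var googleCenter = { lat: 47.6, lng: -122.33 };
var googleZoom = 12;

map.setCamera({
    // Azure Maps expects [longitude, latitude].
    center: [googleCenter.lng, googleCenter.lat],
    // Azure Maps tiles are 512px vs Google's 256px, so subtract one zoom level.
    zoom: googleZoom - 1
});
```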
The following are some key differences between the Google Maps and Azure Maps We
## Web SDK side-by-side examples
-This collection has code samples for each platform, and each sample covers a common use case. It's intended to help you migrate your web application from Google Maps V3 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript. However, Azure Maps also provides TypeScript definitions as an additional option through an [NPM module](how-to-use-map-control.md).
+This collection has code samples for each platform, and each sample covers a common use case. It's intended to help you migrate your web application from Google Maps V3 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript. However, Azure Maps also provides TypeScript definitions as an additional option through an [npm module](how-to-use-map-control.md).
**Topics**
value in Google Maps is relative to the top-left corner of the image.
var marker = new google.maps.Marker({ position: new google.maps.LatLng(51.5, -0.2), icon: {
- url: 'https://azuremapscodesamples.azurewebsites.net/Common/images/icons/ylw-pushpin.png',
+ url: 'https://samples.azuremaps.com/images/icons/ylw-pushpin.png',
anchor: new google.maps.Point(5, 30) }, map: map
To customize an HTML marker, pass an HTML `string` or `HTMLElement` to the `html
```javascript map.markers.add(new atlas.HtmlMarker({
- htmlContent: '<img src="https://azuremapscodesamples.azurewebsites.net/Common/images/icons/ylw-pushpin.png" style="pointer-events: none;" />',
+ htmlContent: '<img src="https://samples.azuremaps.com/images/icons/ylw-pushpin.png" style="pointer-events: none;" />',
anchor: 'top-left', pixelOffset: [-5, -30], position: [-0.2, 51.5]
Symbol layers in Azure Maps support custom images as well. First, load the image
map.events.add('ready', function () { //Load the custom image icon into the map resources.
- map.imageSprite.add('my-yellow-pin', 'https://azuremapscodesamples.azurewebsites.net/Common/images/icons/ylw-pushpin.png').then(function () {
+ map.imageSprite.add('my-yellow-pin', 'https://samples.azuremaps.com/images/icons/ylw-pushpin.png').then(function () {
//Create a data source and add it to the map. datasource = new atlas.source.DataSource();
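For context, here's a hedged sketch of how that snippet typically continues once the custom image has loaded; it's a sketch built on the Web SDK's `SymbolLayer` options, not the article's exact code:

```javascript
// Sketch: after imageSprite.add resolves, render points with the custom icon.
map.sources.add(datasource);
datasource.add(new atlas.data.Point([-0.2, 51.5]));

map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
    iconOptions: {
        image: 'my-yellow-pin', // ID registered with imageSprite.add above.
        anchor: 'bottom'
    }
}));
```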
map.events.add('click', marker, function () {
**Additional resources:**
- [Add a popup](map-add-popup.md)
-- [Popup with Media Content](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Popup%20with%20Media%20Content)
-- [Popups on Shapes](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Popups%20on%20Shapes)
-- [Reusing Popup with Multiple Pins](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Reusing%20Popup%20with%20Multiple%20Pins)
+- [Popup with Media Content](https://samples.azuremaps.com/?sample=popup-with-media-content)
+- [Popups on Shapes](https://samples.azuremaps.com/?sample=popups-on-shapes)
+- [Reusing Popup with Multiple Pins](https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins)
- [Popup class](/javascript/api/azure-maps-control/atlas.popup) - [Popup options](/javascript/api/azure-maps-control/atlas.popupoptions)
If you click on one of the traffic icons in Azure Maps, additional information i
**Additional resources:** * [Show traffic on the map](map-show-traffic.md)
-* [Traffic overlay options](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Traffic%20Overlay%20Options)
+* [Traffic overlay options](https://samples.azuremaps.com/?sample=traffic-overlay-options)
### Add a ground overlay
In Azure Maps, GeoJSON is the main data format used in the web SDK, additional s
The following are some additional code samples related to Google Maps migration: * [Drawing tools](map-add-drawing-toolbar.md)
-* [Limit Map to Two Finger Panning](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Limit%20Map%20to%20Two%20Finger%20Panning)
-* [Limit Scroll Wheel Zoom](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Limit%20Scroll%20Wheel%20Zoom)
-* [Create a Fullscreen Control](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Create%20a%20Fullscreen%20Control)
+* [Limit Map to Two Finger Panning](https://samples.azuremaps.com/?sample=limit-map-to-two-finger-panning)
+* [Limit Scroll Wheel Zoom](https://samples.azuremaps.com/?sample=limit-scroll-wheel-zoom)
+* [Create a Fullscreen Control](https://samples.azuremaps.com/?sample=fullscreen-control)
The following are some additional code samples related to Google Maps migration:
* [Search for points of interest](map-search-location.md) * [Get information from a coordinate (reverse geocode)](map-get-information-from-coordinate.md) * [Show directions from A to B](map-route.md)
-* [Search Autosuggest with JQuery UI](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Search%20Autosuggest%20and%20JQuery%20UI)
+* [Search Autosuggest with JQuery UI](https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui)
## Google Maps V3 to Azure Maps Web SDK class mapping
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
The following service APIs aren't currently available in Azure Maps:
- Geolocation - Azure Maps does have a service called Geolocation, but it only provides IP address to location information and does not currently support cell tower or WiFi triangulation.
- Places details and photos - Phone numbers and website URLs are available in the Azure Maps search API.
- Map URLs
-- Nearest Roads - This is achievable using the Web SDK as shown [here](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Basic%20snap%20to%20road%20logic), but not available as a service currently.
+- Nearest Roads - This is achievable using the Web SDK as shown [here](https://samples.azuremaps.com/?sample=basic-snap-to-road-logic), but not available as a service currently.
- Static street view

Azure Maps has several other REST web services that may be of interest:
azure-maps Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-browsers.md
You might want to target older browsers that don't support WebGL or that have on
(<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
-Additional code samples using Azure Maps in Leaflet can be found [here](https://azuremapscodesamples.azurewebsites.net/?search=leaflet).
+Additional code samples using Azure Maps in Leaflet can be found [here](https://samples.azuremaps.com/?search=leaflet).
[Here](open-source-projects.md#third-part-map-control-plugins) are some popular open-source map controls that the Azure Maps team has created plugins for.
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md
For more information about Azure Maps authentication, see [Manage authentication
In this tutorial, you'll create a store locator for a fictional company named *Contoso Coffee*. Also, this tutorial includes some tips to help you learn about extending the store locator with other optional functionality.
-To see a live sample of what you will create in this tutorial, see [Simple Store Locator](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Simple%20Store%20Locator) on the **Azure Maps Code Samples** site.
+To see a live sample of what you will create in this tutorial, see [Simple Store Locator](https://samples.azuremaps.com/?sample=simple-store-locator) on the **Azure Maps Code Samples** site.
To more easily follow and engage with this tutorial, you'll need to download the following resources:
This section lists the Azure Maps features that are demonstrated in the Contoso
## Store locator design
-The following screenshot shows the general layout of the Contoso Coffee store locator application. To view and interact with the live sample, see the [Simple Store Locator](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Simple%20Store%20Locator) sample application on the **Azure Maps Code Samples** site.
+The following screenshot shows the general layout of the Contoso Coffee store locator application. To view and interact with the live sample, see the [Simple Store Locator](https://samples.azuremaps.com/?sample=simple-store-locator) sample application on the **Azure Maps Code Samples** site.
:::image type="content" source="./media/tutorial-create-store-locator/store-locator-wireframe.png" alt-text="A screenshot the Contoso Coffee store locator Azure Maps sample application.":::
If you resize the browser window to fewer than 700 pixels wide or open the appli
In this tutorial, you learned how to create a basic store locator by using Azure Maps. The store locator you create in this tutorial might have all the functionality you need. You can add features to your store locator or use more advanced features for a more custom user experience:
-* Enable [suggestions as you type](https://azuremapscodesamples.azurewebsites.net/?sample=Search%20Autosuggest%20and%20JQuery%20UI) in the search box.
-* Add [support for multiple languages](https://azuremapscodesamples.azurewebsites.net/?sample=Map%20Localization).
-* Allow the user to [filter locations along a route](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Data%20Along%20Route).
-* Add the ability to [set filters](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Symbols%20by%20Property).
+* Enable [suggestions as you type](https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui) in the search box.
+* Add [support for multiple languages](https://samples.azuremaps.com/?sample=map-localization).
+* Allow the user to [filter locations along a route](https://samples.azuremaps.com/?sample=filter-data-along-route).
+* Add the ability to [set filters](https://samples.azuremaps.com/?sample=filter-symbols-by-property).
* Add support to specify an initial search value by using a query string (see the sketch after this list). When you include this option in your store locator, users are then able to bookmark and share searches. It also provides an easy method for you to pass searches to this page from another page.
* Deploy your store locator as an [Azure App Service Web App](../app-service/quickstart-html.md).
* Store your data in a database and search for nearby locations. To learn more, see the [SQL Server spatial data types overview](/sql/relational-databases/spatial/spatial-data-types-overview?preserve-view=true&view=sql-server-2017) and [Query spatial data for the nearest neighbor](/sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor?preserve-view=true&view=sql-server-2017).
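For the query-string option above, a minimal hedged sketch; the input ID and `performSearch` helper are hypothetical names for illustration, not part of the sample:

```javascript
// Sketch: seed the store locator's search box from a ?search= query string
// so that searches can be bookmarked and shared.
var params = new URLSearchParams(window.location.search);
var initialSearch = params.get('search');

if (initialSearch) {
    // 'searchTbx' and performSearch() are hypothetical names for illustration.
    document.getElementById('searchTbx').value = initialSearch;
    performSearch();
}
```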
In this tutorial, you learned how to create a basic store locator by using Azure
## Additional information * For the completed code used in this tutorial, see the [Simple Store Locator](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator) tutorial on GitHub.
-* To view this sample live, see [Simple Store Locator](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Simple%20Store%20Locator) on the **Azure Maps Code Samples** site.
+* To view this sample live, see [Simple Store Locator](https://samples.azuremaps.com/?sample=simple-store-locator) on the **Azure Maps Code Samples** site.
* Learn more about the coverage and capabilities of Azure Maps in [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
* You can also apply [data-driven style expressions](data-driven-style-expressions-web-sdk.md) to your business logic.
azure-maps Tutorial Prioritized Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md
This section shows you how to use the Azure Maps Route service to get directions
* The truck route is displayed using a thick blue line and the car route is displayed using a thin purple line. * The car route goes across Lake Washington via I-90, passing through tunnels beneath residential areas. Because the tunnels are in residential areas, hazardous waste cargo is restricted. The truck route, which specifies a `USHazmatClass2` cargo type, is directed to use a different route that doesn't have this restriction.
-* For the completed code used in this tutorial, see the [Truck Route](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Truck%20Route) tutorial on GitHub.
-* To view this sample live, see [Multiple routes by mode of travel](https://azuremapscodesamples.azurewebsites.net/?sample=Multiple%20routes%20by%20mode%20of%20travel) on the **Azure Maps Code Samples** site.
+* For the completed code used in this tutorial, see the [Truck Route](https://samples.azuremaps.com/?sample=car-vs-truck-route) sample on the **Azure Maps Code Samples** site.
+* To view this sample live, see [Multiple routes by mode of travel](https://samples.azuremaps.com/?sample=multiple-routes-by-mode-of-travel) on the **Azure Maps Code Samples** site.
* You can also use [Data-driven style expressions](data-driven-style-expressions-web-sdk.md) ## Next steps
azure-maps Tutorial Route Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-route-location.md
This section shows you how to use the Azure Maps Route Directions API to get rou
:::image type="content" source="./media/tutorial-route-location/map-route.png" alt-text="[A screenshot showing a map that demonstrates the Azure Map control and Route service."::: * For the completed code used in this tutorial, see the [route](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Route) tutorial on GitHub.
-* To view this sample live, see [Route to a destination](https://azuremapscodesamples.azurewebsites.net/?sample=Route%20to%20a%20destination) on the **Azure Maps Code Samples** site.
+* To view this sample live, see [Route to a destination](https://samples.azuremaps.com/?sample=route-to-a-destination) on the **Azure Maps Code Samples** site.
## Next steps
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
The map that we've made so far only looks at the longitude/latitude data for the
![A screen shot of a map with information popups that appear when you hover over a search pin.](./media/tutorial-search-location/popup-map.png) * For the completed code used in this tutorial, see the [search](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Search) tutorial on GitHub.
-* To view this sample live, see [Search for points of interest](https://azuremapscodesamples.azurewebsites.net/?sample=Search%20for%20points%20of%20interest) on the **Azure Maps Code Samples** site.
+* To view this sample live, see [Search for points of interest](https://samples.azuremaps.com/?sample=search-for-points-of-interest) on the **Azure Maps Code Samples** site.
## Next steps
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Title: Azure Monitor Agent overview description: Overview of the Azure Monitor Agent, which collects monitoring data from the guest operating system of virtual machines. -+ Last updated 7/21/2022
The Azure Monitor Agent extensions for Windows and Linux can communicate either
# [Windows VM](#tab/PowerShellWindows) ```powershell
-$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
-$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
+$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString ```
Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMo
# [Linux VM](#tab/PowerShellLinux) ```powershell
-$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
-$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
+$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString ```
Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMoni
# [Windows Arc-enabled server](#tab/PowerShellWindowsArc) ```powershell
-$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
-$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
+$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString ```
New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType Az
# [Linux Arc-enabled server](#tab/PowerShellLinuxArc) ```powershell
-$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
-$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = true}}
+$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString ```
azure-monitor Om Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/om-agents.md
If your IT security policies do not allow computers on your network to connect t
Before starting, review the following requirements.
-* Azure Monitor only supports System Center Operations Manager 2016 or later, Operations Manager 2012 SP1 UR6 or later, and Operations Manager 2012 R2 UR2 or later. Proxy support was added in Operations Manager 2012 SP1 UR7 and Operations Manager 2012 R2 UR3.
-* Integrating System Center Operations Manager 2016 with US Government cloud requires an updated Advisor management pack included with Update Rollup 2 or later. System Center Operations Manager 2012 R2 requires an updated Advisor management pack included with Update Rollup 3 or later.
+* Azure Monitor supports the following:
+ * System Center Operations Manager 2022
+ * System Center Operations Manager 2019
+ * System Center Operations Manager 2016
+ * System Center Operations Manager 2012 SP1 UR6 or later
+ * System Center Operations Manager 2012 R2 UR2 or later
+* Integrating System Center Operations Manager with US Government cloud requires the following:
+ * System Center Operations Manager 2022
+ * System Center Operations Manager 2019
+ * System Center Operations Manager 2016 UR2 or later
+ * System Center Operations Manager 2012 R2 UR3 or later
* All Operations Manager agents must meet minimum support requirements. Ensure that agents are at the minimum update; otherwise, Windows agent communication may fail and generate errors in the Operations Manager event log.
* A Log Analytics workspace. For further information, review [Log Analytics workspace overview](../logs/workspace-design.md).
* You authenticate to Azure with an account that is a member of the [Log Analytics Contributor role](../logs/manage-access.md#azure-rbac).
-* Supported Regions - Only the following Azure regions are supported by System Center Operations Manager to connect to a Log Analytics workspace:
- - West Central US
- - Australia South East
- - West Europe
- - East US
- - South East Asia
- - Japan East
- - UK South
- - Central India
- - Canada Central
- - West US 2
>[!NOTE] >Recent changes to Azure APIs will prevent customers from being able to successfully configure integration between their management group and Azure Monitor for the first time. For customers who have already integrated their management group with the service, you are not impacted unless you need to reconfigure your existing connection.
azure-monitor Alerts Dynamic Thresholds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-dynamic-thresholds.md
Title: Creating Alerts with Dynamic Thresholds in Azure Monitor
-description: Create Alerts with machine learning based Dynamic Thresholds
+ Title: Create alerts with Dynamic Thresholds in Azure Monitor
+description: Create alerts with machine learning-based Dynamic Thresholds.
Last updated 2/23/2022
-# Dynamic thresholds in Metric Alerts
+# Dynamic thresholds in metric alerts
- Dynamic thresholds in metric alerts use advanced machine learning (ML) to learn metrics' historical behavior, and to identify patterns and anomalies that indicate possible service issues. Dynamic thresholds in metric alerts support both a simple UI and operations at scale by allowing users to configure alert rules through the fully automated Azure Resource Manager API.
+Dynamic thresholds in metric alerts use advanced machine learning to learn metrics' historical behavior and identify patterns and anomalies that indicate possible service issues. Dynamic thresholds in metric alerts support both a simple UI and operations at scale by allowing users to configure alert rules through the fully automated Azure Resource Manager API.
-An alert rule using a dynamic threshold only fires when the monitored metric doesn't behave as expected, based on its tailored thresholds.
+An alert rule using dynamic thresholds only fires when the monitored metric doesn't behave as expected, based on its tailored thresholds.
-We would love to hear your feedback, keep it coming at <azurealertsfeedback@microsoft.com>.
+To send us feedback, use <azurealertsfeedback@microsoft.com>.
-Alert rules with dynamic thresholds provide:
-- **Scalable Alerting**. Dynamic threshold alert rules can create tailored thresholds for hundreds of metric series at a time, yet are as easy to define as an alert rule on a single metric. They give you fewer alerts to create and manage. You can use either Azure portal or the Azure Resource Manager API to create them. The scalable approach is especially useful when dealing with metric dimensions or when applying to multiple resources, such as to all subscription resources. [Learn more about how to configure Metric Alerts with Dynamic Thresholds using templates](./alerts-metric-create-templates.md).
+Alert rules with dynamic thresholds provide:
-- **Smart Metric Pattern Recognition**. Using our ML technology, we're able to automatically detect metric patterns and adapt to metric changes over time, which may often include seasonality (hourly / daily / weekly). Adapting to the metrics' behavior over time and alerting based on deviations from its pattern relieves the burden of knowing the "right" threshold for each metric. The ML algorithm used in dynamic thresholds is designed to prevent noisy (low precision) or wide (low recall) thresholds that don't have an expected pattern.
-
-- **Intuitive Configuration**. Dynamic thresholds allow you to set up metric alerts using high-level concepts, alleviating the need to have extensive domain knowledge about the metric.
+- **Scalable alerting**. Dynamic thresholds alert rules can create tailored thresholds for hundreds of metric series at a time. They're as easy to define as an alert rule on a single metric. They give you fewer alerts to create and manage. You can use either the Azure portal or the Azure Resource Manager API to create them. The scalable approach is especially useful when you're dealing with metric dimensions or applying to multiple resources, such as to all subscription resources. Learn more about how to [configure metric alerts with dynamic thresholds by using templates](./alerts-metric-create-templates.md).
+- **Smart metric pattern recognition**. With our machine learning technology, we can automatically detect metric patterns and adapt to metric changes over time, which often includes seasonality patterns, such as hourly, daily, or weekly. Adapting to the metrics' behavior over time and alerting based on deviations from its pattern relieves the burden of knowing the "right" threshold for each metric. The machine learning algorithm used in dynamic thresholds is designed to prevent noisy (low precision) or wide (low recall) thresholds that don't have an expected pattern.
+- **Intuitive configuration**. Dynamic thresholds allow you to set up metric alerts by using high-level concepts. This way, you don't need to have extensive domain knowledge about the metric.
## Configure alerts rules with dynamic thresholds
-Alerts with Dynamic thresholds can be configured using Azure Monitor metric alerts. [Learn more about how to configure Metric Alerts](alerts-metric.md).
+Alerts with dynamic thresholds can be configured by using Azure Monitor metric alerts. Learn more about how to [configure metric alerts](alerts-metric.md).
## How are the thresholds calculated?
-Dynamic Thresholds continuously learns the data of the metric series and tries to model it using a set of algorithms and methods. It detects patterns in the data such as seasonality (Hourly / Daily / Weekly), and is able to handle noisy metrics (such as machine CPU or memory) as well as metrics with low dispersion (such as availability and error rate).
+Dynamic Thresholds continuously learns the data of the metric series and tries to model it by using a set of algorithms and methods. It detects patterns in the data like hourly, daily, or weekly seasonality. It can handle noisy metrics, such as machine CPU or memory, and metrics with low dispersion, such as availability and error rate.
The thresholds are selected in such a way that a deviation from these thresholds indicates an anomaly in the metric behavior.

> [!NOTE]
-> Dynamic Thresholds can detect seasonality for hourly, daily, or weekly patterns. Other patterns like bi-hourly or semi-weekly seasonality might not be detected. To detect weekly seasonality, at least three weeks of historical data are required.
+> Dynamic thresholds can detect seasonality for hourly, daily, or weekly patterns. Other patterns like bi-hourly or semi-weekly seasonality might not be detected. To detect weekly seasonality, at least three weeks of historical data are required.
-## What does the 'Sensitivity' setting in Dynamic Thresholds mean?
+## What does the Sensitivity setting in Dynamic Thresholds mean?
Alert threshold sensitivity is a high-level concept that controls the amount of deviation from metric behavior required to trigger an alert.
-This option doesn't require domain knowledge about the metric like static threshold. The options available are:
-- High: The thresholds will be tight and close to the metric series pattern. An alert rule will be triggered on the smallest deviation, resulting in more alerts.
-- Medium: Less tight and more balanced thresholds, fewer alerts than with high sensitivity (default).
-- Low: The thresholds will be loose with more distance from metric series pattern. An alert rule will only trigger on large deviations, resulting in fewer alerts.
+This option doesn't require domain knowledge about the metric like a static threshold. The options available are:
+
+- **High**: The thresholds will be tight and close to the metric series pattern. An alert rule will be triggered on the smallest deviation, resulting in more alerts.
+- **Medium**: The thresholds will be less tight and more balanced. There will be fewer alerts than with high sensitivity (default).
+- **Low**: The thresholds will be loose with more distance from the metric series pattern. An alert rule will only trigger on large deviations, resulting in fewer alerts.
-## What are the 'Operator' setting options in Dynamic Thresholds?
+## What are the Operator setting options in Dynamic Thresholds?
+
+Dynamic thresholds alert rules can create tailored thresholds based on metric behavior for both upper and lower bounds by using the same alert rule.
-Dynamic Thresholds alerts rule can create tailored thresholds based on metric behavior for both upper and lower bounds using the same alert rule.
You can choose the alert to be triggered on one of the following three conditions:

- Greater than the upper threshold or lower than the lower threshold (default)
- Greater than the upper threshold
-- Lower than the lower threshold.
+- Lower than the lower threshold
-## What do the advanced settings in Dynamic Thresholds mean?
+## What do the Advanced settings in Dynamic Thresholds mean?
-**Failing Periods**. Using dynamic thresholds, you can also configure a minimum number of deviations required within a certain time window for the system to raise an alert. The default is four deviations in 20 minutes. You can configure failing periods and choose what to be alerted on by changing the failing periods and time window. These configurations reduce alert noise generated by transient spikes. For example:
+**Failing periods**. You can configure a minimum number of deviations required within a certain time window for the system to raise an alert by using dynamic thresholds. The default is four deviations in 20 minutes. You can configure failing periods and choose what to be alerted on by changing the failing periods and time window. These configurations reduce alert noise generated by transient spikes. For example:
-To trigger an alert when the issue is continuous for 20 minutes, 4 consecutive times in a given period grouping of 5 minutes, use the following settings:
+To trigger an alert when the issue is continuous for 20 minutes, four consecutive times in a period grouping of 5 minutes, use the following settings:
-![Failing periods settings for continuous issue for 20 minutes, 4 consecutive times in a given period grouping of 5 minutes](media/alerts-dynamic-thresholds/0008.png)
+![Screenshot that shows failing periods settings for continuous issue for 20 minutes, four consecutive times in a period grouping of 5 minutes.](media/alerts-dynamic-thresholds/0008.png)
-To trigger an alert when there was a violation from a Dynamic Thresholds in 20 minutes out of the last 30 minutes with period of 5 minutes, use the following settings:
+To trigger an alert when there was a violation from Dynamic Thresholds in 20 minutes out of the last 30 minutes with a period of 5 minutes, use the following settings:
-![Failing periods settings for issue for 20 minutes out of the last 30 minutes with period grouping of 5 minutes](media/alerts-dynamic-thresholds/0009.png)
+![Screenshot that shows failing periods settings for issue for 20 minutes out of the last 30 minutes with a period grouping of 5 minutes.](media/alerts-dynamic-thresholds/0009.png)
-**Ignore data before**. Users may also optionally define a start date from which the system should begin calculating the thresholds. A typical use case may occur when a resource was a running in a testing mode and is now promoted to serve a production workload, and therefore the behavior of any metric during the testing phase should be disregarded.
+**Ignore data before**. You can optionally define a start date from which the system should begin calculating the thresholds. A typical use case might occur when a resource was running in a testing mode and is promoted to serve a production workload. As a result, the behavior of any metric during the testing phase should be disregarded.
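In template form, these advanced settings map to properties on the dynamic threshold criterion. As a sketch, assuming a 5-minute period, the second example above (a violation in 20 minutes out of the last 30 minutes) becomes six evaluation periods with four failing, and the start date becomes `ignoreDataBefore` (the date shown is a placeholder). For the first example (continuous for 20 minutes), both failing-periods values would be 4:

```json
"failingPeriods": {
  "numberOfEvaluationPeriods": 6,
  "minFailingPeriodsToAlert": 4
},
"ignoreDataBefore": "2022-06-01T00:00:00Z"
```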
> [!NOTE]
-> An alert fires when the rule is evaluated and the result shows an anomaly. The alert is resolved if the rule is evaluated and does not show an anomaly three times in a row.
+> An alert fires when the rule is evaluated and the result shows an anomaly. The alert is resolved if the rule is evaluated and doesn't show an anomaly three times in a row.
## How do you find out why a dynamic thresholds alert was triggered?
-You can explore triggered alert instances by clicking on the link in the email or text message, or browse to see the alerts in the Azure portal. [Learn more about the alerts view](./alerts-page.md).
+You can explore triggered alert instances by selecting the link in the email or text message. You can also browse to see the alerts in the Azure portal. Learn more about the [alerts view](./alerts-page.md).
The alert view displays:

-- All the metric details at the moment the Dynamic Thresholds alert fired.
-- A chart of the period in which the alert was triggered that includes the Dynamic Thresholds used at that point in time.
-- Ability to provide feedback on Dynamic Thresholds alert and the alerts view experience, which could improve future detections.
+- All the metric details at the moment the dynamic thresholds alert fired.
+- A chart of the period in which the alert was triggered that includes the dynamic thresholds used at that point in time.
+- Ability to provide feedback on the dynamic thresholds alert and the alerts view experience, which could improve future detections.
## Will slow behavior changes in the metric trigger an alert?
-Probably not. Dynamic Thresholds are good for detecting significant deviations rather than slowly evolving issues.
+Probably not. Dynamic thresholds are good for detecting significant deviations rather than slowly evolving issues.
## How much data is used to preview and then calculate thresholds?
-When an alert rule is first created, the thresholds appearing in the chart are calculated based on enough historical data to calculate hour or daily seasonal patterns (10 days). Once an alert rule is created, Dynamic Thresholds uses all needed historical data that is available and will continuously learn and adapt based on new data to make the thresholds more accurate. This means that after this calculation, the chart will also display weekly patterns.
+When an alert rule is first created, the thresholds appearing in the chart are calculated based on enough historical data to calculate hourly or daily seasonal patterns (10 days). After an alert rule is created, Dynamic Thresholds uses all needed historical data that's available and continuously learns and adapts based on new data to make the thresholds more accurate. After this calculation, the chart also displays weekly patterns.
## How much data is needed to trigger an alert?
-If you have a new resource or missing metric data, Dynamic Thresholds won't trigger alerts before three days and at least 30 samples of metric data are available, to ensure accurate thresholds.
-For existing resources with sufficient metric data, Dynamic Thresholds can trigger alerts immediately.
+If you have a new resource or missing metric data, Dynamic Thresholds won't trigger alerts before three days and at least 30 samples of metric data are available, to ensure accurate thresholds. For existing resources with sufficient metric data, Dynamic Thresholds can trigger alerts immediately.
## How do prolonged outages affect the calculated thresholds?
-The system automatically recognizes prolonged outages and removes them from threshold learning algorithm. As a result, despite prolonged outages, dynamic thresholds understand the data. Service issues are detected with the same sensitivity as before an outage occurred.
+The system automatically recognizes prolonged outages and removes them from the threshold learning algorithm. As a result, despite prolonged outages, dynamic thresholds understand the data. Service issues are detected with the same sensitivity as before an outage occurred.
## Dynamic Thresholds best practices
-Dynamic Thresholds can be applied to most platform and custom metrics in Azure Monitor and it was also tuned for the common application and infrastructure metrics.
-The following items are best practices on how to configure alerts on some of these metrics using Dynamic Thresholds.
+Dynamic Thresholds can be applied to most platform and custom metrics in Azure Monitor, and it was also tuned for the common application and infrastructure metrics.
+
+The following items are best practices on how to configure alerts on some of these metrics by using Dynamic Thresholds.
### Configure dynamic thresholds on virtual machine CPU percentage metrics
-1. In [Azure portal](https://portal.azure.com), select **Monitor**. The Monitor view consolidates all your monitoring settings and data in one view.
+1. In the [Azure portal](https://portal.azure.com), select **Monitor**. The **Monitor** view consolidates all your monitoring settings and data in one view.
-2. Select **Alerts** then select **+ New alert rule**.
+1. Select **Alerts** > **+ New alert rule**.
> [!TIP]
- > Most resource blades also have **Alerts** in their resource menu under **Monitoring**, you could create alerts from there as well.
+ > Most resource panes also have **Alerts** in their resource menu under **Monitoring**. You can also create alerts from there.
-3. Select **Select target**, in the context pane that loads, select a target resource that you want to alert on. Use **Subscription** and **'Virtual Machines' Resource type** drop-downs to find the resource you want to monitor. You can also use the search bar to find your resource.
+1. Choose **Select target**. In the pane that opens, select a target resource that you want to alert on. Use the **Subscription** and **Virtual Machines Resource type** dropdowns to find the resource you want to monitor. You can also use the search bar to find your resource.
-4. Once you've selected a target resource, select **Add condition**.
+1. After you've selected a target resource, select **Add condition**.
-5. Select the **'CPU Percentage'**.
+1. Select **CPU Percentage**.
-6. Optionally, refine the metric by adjusting **Period** and **Aggregation**. It's discouraged to use 'Maximum' aggregation type for this metric type as it is less representative of behavior. For 'Maximum' aggregation type static threshold maybe more appropriate.
+1. Optionally, refine the metric by adjusting **Period** and **Aggregation**. We discourage using the **Maximum** aggregation for this metric type because it's less representative of behavior. Static thresholds might be more appropriate for the **Maximum** aggregation type.
-7. You'll see a chart for the metric for the last 6 hours. Define the alert parameters:
- 1. **Condition Type** - Choose 'Dynamic' option.
- 1. **Sensitivity** - Choose Medium/Low sensitivity to reduce alert noise.
- 1. **Operator** - Choose 'Greater Than' unless behavior represents the application usage.
- 1. **Frequency** - Consider lowering the frequency based on business impact of the alert.
- 1. **Failing Periods** (Advanced Option) - The look back window should be at least 15 minutes. For example, if the period is set to five minutes, then failing periods should be at least three or more.
+1. You'll see a chart for the metric for the last 6 hours. Define the alert parameters:
+ 1. **Condition Type**: Select the **Dynamic** option.
+ 1. **Sensitivity**: Select **Medium/Low** sensitivity to reduce alert noise.
+ 1. **Operator**: Select **Greater Than** unless behavior represents the application usage.
+ 1. **Frequency**: Consider lowering the frequency based on the business impact of the alert.
+ 1. **Failing Periods** (advanced option): The look-back window should be at least 15 minutes. For example, if the period is set to 5 minutes, failing periods should be at least 3 or more.
-8. The metric chart displays the calculated thresholds based on recent data.
+1. The metric chart displays the calculated thresholds based on recent data.
-9. Select **Done**.
+1. Select **Done**.
-10. Fill in **Alert details** like **Alert Rule Name**, **Description**, and **Severity**.
+1. Fill in **Alert details** like **Alert Rule Name**, **Description**, and **Severity**.
-11. Add an action group to the alert either by selecting an existing action group or creating a new action group.
+1. Add an action group to the alert either by selecting an existing action group or creating a new action group.
-12. Select **Done** to save the metric alert rule.
+1. Select **Done** to save the metric alert rule.
> [!NOTE]
-> Metric alert rules created through portal are created in the same resource group as the target resource.
+> Metric alert rules created through the portal are created in the same resource group as the target resource.
### Configure dynamic thresholds on Application Insights HTTP request execution time
-1. In [Azure portal](https://portal.azure.com), select on **Monitor**. The Monitor view consolidates all your monitoring settings and data in one view.
+1. In the [Azure portal](https://portal.azure.com), select **Monitor**. The **Monitor** view consolidates all your monitoring settings and data in one view.
-2. Select **Alerts** then select **+ New alert rule**.
+1. Select **Alerts** > **+ New alert rule**.
> [!TIP]
- > Most resource blades also have **Alerts** in their resource menu under **Monitoring**, you could create alerts from there as well.
+ > Most resource panes also have **Alerts** in their resource menu under **Monitoring**. You can also create alerts from there.
-3. Select **Select target**, in the context pane that loads, select a target resource that you want to alert on. Use **Subscription** and **'Application Insights' Resource type** drop-downs to find the resource you want to monitor. You can also use the search bar to find your resource.
+1. Choose **Select target**. In the pane that opens, select a target resource that you want to alert on. Use the **Subscription** and **Application Insights Resource type** dropdowns to find the resource you want to monitor. You can also use the search bar to find your resource.
-4. Once you've selected a target resource, select **Add condition**.
+1. After you've selected a target resource, select **Add condition**.
-5. Select the **'HTTP request execution time'**.
+1. Select **HTTP request execution time**.
-6. Optionally, refine the metric by adjusting **Period** and **Aggregation**. We discourage using the **Maximum** aggregation type for this metric type, since it is less representative of behavior. Static thresholds maybe more appropriate for the **Maximum** aggregation type.
+1. Optionally, refine the metric by adjusting **Period** and **Aggregation**. We discourage using the **Maximum** aggregation for this metric type because it's less representative of behavior. Static thresholds might be more appropriate for the **Maximum** aggregation type.
-7. You'll see a chart for the metric for the last 6 hours. Define the alert parameters:
- 1. **Condition Type** - Choose 'Dynamic' option.
- 1. **Operator** - Choose 'Greater Than' to reduce alerts fired on improvement in duration.
- 1. **Frequency** - Consider lowering based on business impact of the alert.
+1. You'll see a chart for the metric for the last 6 hours. Define the alert parameters:
+ 1. **Condition Type**: Select the **Dynamic** option.
+ 1. **Operator**: Select **Greater Than** to reduce alerts fired on improvement in duration.
+ 1. **Frequency**: Consider lowering the frequency based on the business impact of the alert.
-8. The metric chart will display the calculated thresholds based on recent data.
+1. The metric chart displays the calculated thresholds based on recent data.
-9. Select **Done**.
+1. Select **Done**.
-10. Fill in **Alert details** like **Alert Rule Name**, **Description**, and **Severity**.
+1. Fill in **Alert details** like **Alert Rule Name**, **Description**, and **Severity**.
-11. Add an action group to the alert either by selecting an existing action group or creating a new action group.
+1. Add an action group to the alert either by selecting an existing action group or creating a new action group.
-12. Select **Done** to save the metric alert rule.
+1. Select **Done** to save the metric alert rule.
> [!NOTE]
-> Metric alert rules created through portal are created in the same resource group as the target resource.
+> Metric alert rules created through the portal are created in the same resource group as the target resource.
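The same rule can be sketched as a template criterion; compared with the earlier CPU example, only the scope type and metric change. The metric ID `requests/duration` is an assumption for the portal's **HTTP request execution time** metric, so verify it against the Application Insights metric list:

```json
{
  "criterionType": "DynamicThresholdCriterion",
  "name": "1st criterion",
  "metricName": "requests/duration",
  "operator": "GreaterThan",
  "alertSensitivity": "Medium",
  "failingPeriods": {
    "numberOfEvaluationPeriods": 4,
    "minFailingPeriodsToAlert": 4
  },
  "timeAggregation": "Average"
}
```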
-## Interpret Dynamic Threshold charts
+## Interpret Dynamic Thresholds charts
-Following is a chart showing a metric, its dynamic threshold limits, and some alerts fired when the value was outside of the allowed thresholds.
+The following chart shows a metric, its dynamic thresholds limits, and some alerts that fired when the value was outside the allowed thresholds.
-![Learn more about how to configure Metric Alerts](media/alerts-dynamic-thresholds/threshold-picture-8bit.png)
+![Screenshot that shows a metric, its dynamic thresholds limits, and some alerts that fired.](media/alerts-dynamic-thresholds/threshold-picture-8bit.png)
-Use the following information to interpret the previous chart.
+Use the following information to interpret the chart:
-- **Blue line** - The actual measured metric over time.
-- **Blue shaded area** - Shows the allowed range for the metric. As long as the metric values stay within this range, no alert will occur.
-- **Blue dots** - If you left select on part of the chart and then hover over the blue line, a blue dot appears under your cursor showing an individual aggregated metric value.
-- **Pop-up with blue dot** - Shows the measured metric value (the blue dot) and the upper and lower values of allowed range.
-- **Red dot with a black circle** - Shows the first metric value out of the allowed range. This is the value that fires a metric alert and puts it in an active state.
-- **Red dots** - Indicate other measured values outside of the allowed range. They won't fire additional metric alerts, but the alert stays in the active.
-- **Red area** - Shows the time when the metric value was outside of the allowed range. The alert remains in the active state as long as subsequent measured values are out of the allowed range, but no new alerts are fired.
-- **End of red area** - When the blue line is back inside the allowed values, the red area stops and the measured value line turns blue. The status of the metric alert fired at the time of the red dot with black outline is set to resolved.
+- **Blue line**: The actual measured metric over time.
+- **Blue shaded area**: Shows the allowed range for the metric. If the metric values stay within this range, no alert will occur.
+- **Blue dots**: If you left select on part of the chart and then hover over the blue line, a blue dot appears under your cursor that shows an individual aggregated metric value.
+- **Pop-up with blue dot**: Shows the measured metric value (the blue dot) and the upper and lower values of the allowed range.
+- **Red dot with a black circle**: Shows the first metric value out of the allowed range. This value fires a metric alert and puts it in an active state.
+- **Red dots**: Indicate other measured values outside of the allowed range. They won't fire more metric alerts, but the alert stays in the active state.
+- **Red area**: Shows the time when the metric value was outside of the allowed range. The alert remains in the active state as long as subsequent measured values are out of the allowed range, but no new alerts are fired.
+- **End of red area**: When the blue line is back inside the allowed values, the red area stops and the measured value line turns blue. The status of the metric alert fired at the time of the red dot with black outline is set to resolved.
azure-monitor Alerts Log Api Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-api-switch.md
In the past, users used the [legacy Log Analytics Alert API](./api-alerts.md) to
## Process
+View workspaces to upgrade using this [Azure Resource Graph Explorer query](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/resources%0A%7C%20where%20type%20%3D~%20%22microsoft.insights%2Fscheduledqueryrules%22%0A%7C%20where%20properties.isLegacyLogAnalyticsRule%20%3D%3D%20true%0A%7C%20distinct%20tolower%28properties.scopes%5B0%5D%29). Open the [link](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/resources%0A%7C%20where%20type%20%3D~%20%22microsoft.insights%2Fscheduledqueryrules%22%0A%7C%20where%20properties.isLegacyLogAnalyticsRule%20%3D%3D%20true%0A%7C%20distinct%20tolower%28properties.scopes%5B0%5D%29), select all available subscriptions, and run the query.
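URL-decoded, the query behind that link is a short Azure Resource Graph query that lists the workspaces that still have legacy rules:

```kusto
resources
| where type =~ "microsoft.insights/scheduledqueryrules"
| where properties.isLegacyLogAnalyticsRule == true
| distinct tolower(properties.scopes[0])
```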
+ The process of switching isn't interactive and doesn't require manual steps in most cases. Your alert rules aren't stopped or stalled during or after the switch.
-Do this call to switch all alert rules associated with the specific Log Analytics workspace:
+Do this call to switch all alert rules associated with each of the Log Analytics workspaces:
```
PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview
```
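The request body isn't shown in the truncated call above. A sketch of the body this switch call expects, with the flag name taken as an assumption to verify against the API reference:

```json
{
  "scheduledQueryRulesEnabled": true
}
```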
azure-monitor Alerts Metric Multiple Time Series Single Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-multiple-time-series-single-rule.md
Title: Monitor multiple time-series in a single metric alert rule
-description: Alert at scale using a single alert rule for multiple time series
+ Title: Monitor multiple time series in a single metric alert rule
+description: Alert at scale by using a single alert rule for multiple time series.
Last updated 2/23/2022
-# Monitor multiple time-series in a single metric alert rule
+# Monitor multiple time series in a single metric alert rule
-A single metric alert rule can be used to monitor one or many metric time-series, making it easier to monitor resources at scale.
+A single metric alert rule can be used to monitor one or many metric time series. This capability makes it easier to monitor resources at scale.
-## Metric time-series
+## Metric time series
-A metric time-series is a series of measurements (or "metric values") captured over a period of time.
+A metric time series is a series of measurements, or "metric values," captured over a period of time.
For example:
- The incoming bytes (ingress) to a storage account
- The number of failed requests of a web application
+## Alert rule on a single time series
+An alert rule monitors a single time series when it meets all the following conditions:
-## Alert rule on a single time-series
-An alert rule monitors a single time-series when it meets all the following conditions:
-- The rule monitors a single target resource
-- Contains a single condition
-- Evaluates a metric without choosing dimensions (assuming the metric supports dimensions)
+- It monitors a single target resource.
+- It contains a single condition.
+- It evaluates a metric without choosing dimensions (assuming the metric supports dimensions).
-An example of such an alert rule (with only the relevant properties shown):
-- Target resource: *myVM1*
-- Metric: *Percentage CPU*
-- Operator: *Greater Than*
-- Threshold: *70*
+An example of such an alert rule, with only the relevant properties shown:
+- **Target resource**: *myVM1*
+- **Metric**: *Percentage CPU*
+- **Operator**: *Greater Than*
+- **Threshold**: *70*
+
+For this alert rule, a single metric time series is monitored:
-For this alert rule, a single metric time-series is monitored:
- Percentage CPU where *Resource*='myVM1' > 70%
-![An alert rule on a single time-series](media/alerts-metric-multiple-time-series-single-rule/simple-alert-rule.png)
+![Screenshot that shows an alert rule on a single time series.](media/alerts-metric-multiple-time-series-single-rule/simple-alert-rule.png)
+
+## Alert rule on multiple time series
+
+An alert rule monitors multiple time series if it uses at least one of the following features:
-## Alert rule on multiple time-series
-An alert rule monitors multiple time-series if it uses at least one of the following features:
- Multiple resources
-- Multiple conditions
+- Multiple conditions
- Multiple dimensions

## Multiple resources (multi-resource)
-A single metric alert rule can monitor multiple resources, provided the resources are of the same type and exist in the same Azure region. Using this type of rule reduces complexity and the total number of alert rules you have to maintain.
+A single metric alert rule can monitor multiple resources, provided the resources are of the same type and exist in the same Azure region. Using this type of rule reduces complexity and the total number of alert rules you have to maintain.
An example of such an alert rule:

-- Target resource: *myVM1, myVM2*
-- Metric: *Percentage CPU*
-- Operator: *Greater Than*
-- Threshold: *70*
-For this alert rule, two metric time-series are being monitored separately:
+- **Target resource**: *myVM1, myVM2*
+- **Metric**: *Percentage CPU*
+- **Operator**: *Greater Than*
+- **Threshold**: *70*
+
+For this alert rule, two metric time series are monitored separately:
+ - Percentage CPU where *Resource*='myVM1' > 70%
+ - Percentage CPU where *Resource*='myVM2' > 70%
-![A multi-resource alert rule](media/alerts-metric-multiple-time-series-single-rule/multi-resource-alert-rule.png)
-
-In a multi-resource alert rule, the condition is evaluated **separately** for each of the resources (or more accurately, for each of the metric time-series corresponded to each resource). This means that alerts are also fired for each resource separately.
+![Screenshot that shows a multi-resource alert rule.](media/alerts-metric-multiple-time-series-single-rule/multi-resource-alert-rule.png)
+
+In a multi-resource alert rule, the condition is evaluated separately for each of the resources (or more accurately, for each of the metric time series corresponding to each resource). As a result, alerts are also fired for each resource separately.
-For example, assume we've set the alert rule above to monitor for CPU above 70%. In the evaluated time period (that is, the last 5 minutes)
-- The *Percentage CPU* of *myVM1* is greater than 70% -- The *Percentage CPU* of *myVM2* is at 50%
+For example, assume we've set the preceding alert rule to monitor for CPU above 70%. In the evaluated time period, that is, the last 5 minutes:
-The alert rule triggers on *myVM1*, but not *myVM2*. These triggered alerts are independent. They can also resolve at different times depending on the individual behavior of each of the virtual machines.
+- The *Percentage CPU* of *myVM1* is greater than 70%.
+- The *Percentage CPU* of *myVM2* is at 50%.
+
+The alert rule triggers on *myVM1* but not *myVM2*. These triggered alerts are independent. They can also resolve at different times depending on the individual behavior of each of the virtual machines.
For more information about multi-resource alert rules and the resource types supported for this capability, see [Monitoring at scale using metric alerts in Azure Monitor](alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor).
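In a template sketch, the only difference from a single-resource rule is that `scopes` lists several resource IDs of the same type and region (the IDs here are placeholders):

```json
"scopes": [
  "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/myVM1",
  "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/myVM2"
]
```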
-> [!NOTE]
+> [!NOTE]
> In a metric alert rule that monitors multiple resources, only a single condition is allowed.

## Multiple conditions (multi-condition)
-A single metric alert rule can also monitor up to five conditions per alert rule.
+A single metric alert rule can also monitor up to five conditions per alert rule.
For example:

-- Target resource: *myVM1*
+- **Target resource**: *myVM1*
- Condition1
- - Metric: *Percentage CPU*
- - Operator: *Greater Than*
- - Threshold: *70*
+ - **Metric**: *Percentage CPU*
+ - **Operator**: *Greater Than*
+ - **Threshold**: *70*
- Condition2
- - Metric: *Network In Total*
- - Operator: *Greater Than*
- - Threshold: *20 MB*
+ - **Metric**: *Network In Total*
+ - **Operator**: *Greater Than*
+ - **Threshold**: *20 MB*
-For this alert rule, two metric time-series are being monitored:
+For this alert rule, two metric time series are being monitored:
-- Percentage CPU where *Resource*=ΓÇÖmyVM1ΓÇÖ > 70%-- Network In Total where *Resource*=ΓÇÖmyVM1ΓÇÖ > 20 MB
+- The *Percentage CPU* where *Resource*=ΓÇÖmyVM1ΓÇÖ > 70%.
+- The *Network In Total* where *Resource*=ΓÇÖmyVM1ΓÇÖ > 20 MB.
-![A multi-condition alert rule](media/alerts-metric-multiple-time-series-single-rule/multi-condition-alert-rule.png)
-
-An 'AND' operator is used between the conditions. The alert rule fires an alert when **all** conditions are met. The fired alert resolves if at least one of the conditions is no longer met.
+![Screenshot that shows a multi-condition alert rule.](media/alerts-metric-multiple-time-series-single-rule/multi-condition-alert-rule.png)
-> [!NOTE]
-> There are restrictions when using dimensions in an alert rule with multiple conditions. For more information, see [Restrictions when using dimensions in a metric alert rule with multiple conditions](alerts-troubleshoot-metric.md#restrictions-when-using-dimensions-in-a-metric-alert-rule-with-multiple-conditions).
+An AND operator is used between the conditions. The alert rule fires an alert when *all* conditions are met. The fired alert resolves if at least one of the conditions is no longer met.
+> [!NOTE]
+> There are restrictions when you use dimensions in an alert rule with multiple conditions. For more information, see [Restrictions when using dimensions in a metric alert rule with multiple conditions](alerts-troubleshoot-metric.md#restrictions-when-using-dimensions-in-a-metric-alert-rule-with-multiple-conditions).
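As a sketch, the two-condition example above could be expressed in a template as two entries under `criteria.allOf`. The `odata.type` shown assumes the single-resource criteria schema, the criterion names are arbitrary labels, and the *Network In Total* threshold is written in bytes (20 MB assumed as 20,000,000):

```json
"criteria": {
  "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
  "allOf": [
    {
      "criterionType": "StaticThresholdCriterion",
      "name": "HighCpu",
      "metricName": "Percentage CPU",
      "operator": "GreaterThan",
      "threshold": 70,
      "timeAggregation": "Average"
    },
    {
      "criterionType": "StaticThresholdCriterion",
      "name": "HighNetworkIn",
      "metricName": "Network In Total",
      "operator": "GreaterThan",
      "threshold": 20000000,
      "timeAggregation": "Total"
    }
  ]
}
```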
## Multiple dimensions (multi-dimension)
-A single metric alert rule can also monitor multiple dimension values of a metric. The dimensions of a metric are name-value pairs that carry additional data to describe the metric value. For example, the **Transactions** metric of a storage account has a dimension called **API name**, describing the name of the API called by each transaction (for example, GetBlob, DeleteBlob, PutPage). The use of dimensions is optional, but it allows filtering the metric and only monitoring specific time-series, instead of monitoring the metric as an aggregate of all the dimensional values put together.
+A single metric alert rule can also monitor multiple dimension values of a metric. The dimensions of a metric are name-value pairs that carry more data to describe the metric value. For example, the **Transactions** metric of a storage account has a dimension called **API name**. This dimension describes the name of the API called by each transaction, for example, GetBlob, DeleteBlob, and PutPage. The use of dimensions is optional, but it allows filtering the metric and only monitoring specific time series, instead of monitoring the metric as an aggregate of all the dimensional values put together.
-For example, you can choose to have an alert fired when the number of transactions is high across all API names (which is the aggregated data), or further break it down into only alerting when the number of transactions is high for specific API names.
+For example, you can choose to have an alert fired when the number of transactions is high across all API names (which is the aggregated data). Or you can further break it down into only alerting when the number of transactions is high for specific API names.
An example of an alert rule monitoring multiple dimensions is:

-- Target resource: *myStorage1*
-- Metric: *Transactions*
-- Dimensions
+- **Target resource**: *myStorage1*
+- **Metric**: *Transactions*
+- **Dimensions**:
  * API name = *GetBlob, DeleteBlob, PutPage*
-- Operator: *Greater Than*
-- Threshold: *70*
+- **Operator**: *Greater Than*
+- **Threshold**: *70*
-For this alert rule, three metric time-series are being monitored:
+For this alert rule, three metric time series are being monitored:
- Transactions where *Resource*='myStorage1' and *API Name*='GetBlob' > 70
- Transactions where *Resource*='myStorage1' and *API Name*='DeleteBlob' > 70
- Transactions where *Resource*='myStorage1' and *API Name*='PutPage' > 70
-![A multi-dimension alert rule with values from one dimension](media/alerts-metric-multiple-time-series-single-rule/multi-dimension-1.png)
+![Screenshot that shows a multi-dimension alert rule with values from one dimension.](media/alerts-metric-multiple-time-series-single-rule/multi-dimension-1.png)
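In a template, this selection appears as a `dimensions` array on the criterion. A sketch, assuming `ApiName` is the underlying dimension name behind the portal's **API name** label:

```json
"dimensions": [
  {
    "name": "ApiName",
    "operator": "Include",
    "values": [ "GetBlob", "DeleteBlob", "PutPage" ]
  }
]
```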
-A multi-dimension metric alert rule can also monitor multiple dimension values from **different** dimensions of a metric. In this case, the alert rule **separately** monitors all the dimensions value combinations of the selected dimension values.
+A multi-dimension metric alert rule can also monitor multiple dimension values from *different* dimensions of a metric. In this case, the alert rule *separately* monitors all the dimension value combinations of the selected dimension values.
An example of this type of alert rule:

-- Target resource: *myStorage1*
-- Metric: *Transactions*
-- Dimensions
+- **Target resource**: *myStorage1*
+- **Metric**: *Transactions*
+- **Dimensions**:
  * API name = *GetBlob, DeleteBlob, PutPage*
  * Authentication = *SAS, AccountKey*
-- Operator: *Greater Than*
-- Threshold: *70*
+- **Operator**: *Greater Than*
+- **Threshold**: *70*
-For this alert rule, six metric time-series are being monitored separately:
+For this alert rule, six metric time series are being monitored separately:
- Transactions where *Resource*='myStorage1' and *API Name*='GetBlob' and *Authentication*='SAS' > 70
- Transactions where *Resource*='myStorage1' and *API Name*='GetBlob' and *Authentication*='AccountKey' > 70
For this alert rule, six metric time-series are being monitored separately:
- Transactions where *Resource*='myStorage1' and *API Name*='PutPage' and *Authentication*='SAS' > 70
- Transactions where *Resource*='myStorage1' and *API Name*='PutPage' and *Authentication*='AccountKey' > 70
-![A multi-dimension alert rule with values from multiple dimensions](media/alerts-metric-multiple-time-series-single-rule/multi-dimension-2.png)
-
-### Advanced multi-dimension features
+![Screenshot that shows a multi-dimension alert rule with values from multiple dimensions.](media/alerts-metric-multiple-time-series-single-rule/multi-dimension-2.png)
-1. **Selecting all current and future dimensions** - You can choose to monitor all possible values of a dimension, including future values. Such an alert rule will scale automatically to monitor all values of the dimension without you needing to modify the alert rule every time a dimension value is added or removed.
-2. **Excluding dimensions** - Selecting the '≠' (exclude) operator for a dimension value is equivalent to selecting all other values of that dimension, including future values.
-3. **New and custom dimensions** - The dimension values displayed in the Azure portal are based on metric data collected in the last day. If the dimension value you're looking for isn't yet emitted, you can add a custom dimension value.
-4. **Matching dimensions with a prefix** - You can choose to monitor all dimension values that start with a specific pattern, by selecting the 'Starts with' operator and entering a custom prefix.
+### Advanced multi-dimension features
-![Advanced multi-dimension features](media/alerts-metric-multiple-time-series-single-rule/advanced-features.png)
+- **Select all current and future dimensions**: You can choose to monitor all possible values of a dimension, including future values. Such an alert rule will scale automatically to monitor all values of the dimension without you needing to modify the alert rule every time a dimension value is added or removed.
+- **Exclude dimensions**: Selecting the **≠** (exclude) operator for a dimension value is equivalent to selecting all other values of that dimension, including future values.
+- **Add new and custom dimensions**: The dimension values displayed in the Azure portal are based on metric data collected in the last day. If the dimension value you're looking for isn't yet emitted, you can add a custom dimension value.
+- **Match dimensions with a prefix**: You can choose to monitor all dimension values that start with a specific pattern by selecting the **Starts with** operator and entering a custom prefix.
+![Screenshot that shows advanced multi-dimension features.](media/alerts-metric-multiple-time-series-single-rule/advanced-features.png)
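Two of these options have compact template counterparts; as a sketch (treat the exact values as assumptions to verify against the metric alert template schema), selecting all current and future values is expressed with a wildcard, and exclusion uses the `Exclude` operator:

```json
"dimensions": [
  { "name": "ApiName", "operator": "Include", "values": [ "*" ] },
  { "name": "Authentication", "operator": "Exclude", "values": [ "SAS" ] }
]
```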
## Metric alerts pricing

The pricing of metric alert rules is available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-When creating a metric alert rule, the provided price estimation is based on the selected features and the number of monitored time-series, which is determined from the rule configuration and current metric values. However, the monthly charge is based on actual evaluations of the time-series, and can therefore differ from the original estimation if some time-series don't have data to evaluate, or if the alert rule uses features that can make it scale dynamically.
+When you create a metric alert rule, the provided price estimation is based on the selected features and the number of monitored time series. This number is determined from the rule configuration and current metric values. The monthly charge is based on actual evaluations of the time series, so it can differ from the original estimation if some time series don't have data to evaluate, or if the alert rule uses features that can make it scale dynamically.
-For example, an alert rule can show a high price estimation if it leverages the multi-dimension feature, and a large number of dimension values combinations are selected, resulting in the monitoring of many time-series. But the actual charge for that alert rule can be lower if not all the time-series resulting from the dimension values combinations actually have data to evaluate.
+For example, an alert rule can show a high price estimation if it uses the multi-dimension feature, and a large number of dimension values combinations are selected, which results in the monitoring of many time series. But the actual charge for that alert rule can be lower if not all the time series resulting from the dimension values combinations actually have data to evaluate.
## Number of time series monitored by a single alert rule
-To prevent excess costs, each alert rule can monitor up to 5000 time-series by default. To lift this limit from your subscription, open a support ticket.
-
+To prevent excess costs, each alert rule can monitor up to 5,000 time series by default. To lift this limit from your subscription, open a support ticket.
## Next steps
-Learn more about monitoring at scale using metric alerts and [dynamic thresholds](../alerts/alerts-dynamic-thresholds.md).
+Learn more about monitoring at scale by using metric alerts and [dynamic thresholds](../alerts/alerts-dynamic-thresholds.md).
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md
If you have 100 regions, 200 departments, and 2,000 customers, that gives you 100 x 200 x 2,000 = 40 million possible time series.
Again, this limit isn't for an individual metric. It's for the sum of all such metrics across a subscription and region.
-The following steps will provide more information to assist with troubleshooting.
+Follow the steps below to see your current total active time series metrics, and more information to assist with troubleshooting.
1. Navigate to the Monitor section of the Azure portal.
1. Select **Metrics** on the left hand side.
azure-monitor Metrics Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-getting-started.md
Title: Getting started with Azure metrics explorer
-description: Learn how to create your first metric chart with Azure metrics explorer.
+ Title: Get started with Azure Monitor metrics explorer
+description: Learn how to create your first metric chart with Azure Monitor metrics explorer.
-# Getting started with Azure Metrics Explorer
+# Get started with metrics explorer
-## Where do I start
-Azure Monitor metrics explorer is a component of the Microsoft Azure portal that allows plotting charts, visually correlating trends, and investigating spikes and dips in metrics' values. Use the metrics explorer to investigate the health and utilization of your resources. Start in the following order:
+Azure Monitor metrics explorer is a component of the Azure portal that you can use to plot charts, visually correlate trends, and investigate spikes and dips in metrics' values. Use metrics explorer to investigate the health and utilization of your resources.
-1. [Pick a resource and a metric](#create-your-first-metric-chart) and you see a basic chart. Then [select a time range](#select-a-time-range) that is relevant for your investigation.
+## Where do I start?
+
+Start in the following order:
+
+1. [Pick a resource and a metric](#create-your-first-metric-chart) and you see a basic chart. Then [select a time range](#select-a-time-range) that's relevant for your investigation.
1. Try [applying dimension filters and splitting](#apply-dimension-filters-and-splitting). The filters and splitting allow you to analyze which segments of the metric contribute to the overall metric value and identify possible outliers.
-1. Use [advanced settings](#advanced-chart-settings) to customize the chart before pinning to dashboards. [Configure alerts](../alerts/alerts-metric-overview.md) to receive notifications when the metric value exceeds or drops below a threshold.
+1. Use [advanced settings](#advanced-chart-settings) to customize the chart before you pin it to dashboards. [Configure alerts](../alerts/alerts-metric-overview.md) to receive notifications when the metric value exceeds or drops below a threshold.
## Create your first metric chart

To create a metric chart, from your resource, resource group, subscription, or Azure Monitor view, open the **Metrics** tab and follow these steps:
-1. Select the "Select a scope" button to open the resource scope picker. This allows you to select the resource(s) you want to see metrics for. The resource should already be populated if you opened metrics explorer from the resource's menu. To learn how to view metrics across multiple resources, [read this article](./metrics-dynamic-scope.md).
- > ![Select a resource](./media/metrics-getting-started/scope-picker.png)
+1. Select the **Select a scope** button to open the resource scope picker. You can use the picker to select the resources you want to see metrics for. The resource should already be populated if you opened metrics explorer from the resource's menu. To learn how to view metrics across multiple resources, see [View multiple resources in Azure Monitor metrics explorer](./metrics-dynamic-scope.md).
-1. For some resources, you must pick a namespace. The namespace is just a way to organize metrics so that you can easily find them. For example, storage accounts have separate namespaces for storing Files, Tables, Blobs, and Queues metrics. Many resource types only have one namespace.
+ > ![Screenshot that shows selecting a resource.](./media/metrics-getting-started/scope-picker.png)
+
+1. For some resources, you must pick a namespace. The namespace is a way to organize metrics so that you can easily find them. For example, storage accounts have separate namespaces for storing metrics for files, tables, blobs, and queues. Many resource types have only one namespace.
1. Select a metric from a list of available metrics.
- > ![Select a metric](./media/metrics-getting-started/metrics-dropdown.png)
+ > ![Screenshot that shows selecting a metric.](./media/metrics-getting-started/metrics-dropdown.png)
1. Optionally, you can [change the metric aggregation](../essentials/metrics-charts.md#aggregation). For example, you might want your chart to show minimum, maximum, or average values of the metric.

> [!TIP]
-> Use the **Add metric** button and repeat these steps if you want to see multiple metrics plotted in the same chart. For multiple charts in one view, select the **Add chart** button on top.
+> Select **Add metric** and repeat these steps to see multiple metrics plotted in the same chart. For multiple charts in one view, select **Add chart**.
## Select a time range

> [!WARNING]
-> [Most metrics in Azure are stored for 93 days](../essentials/data-platform-metrics.md#retention-of-metrics). However, you can query no more than 30 days worth of data on any single chart. You can [pan](metrics-charts.md#pan) the chart to view the full retention. The 30 day limitation doesn't apply to [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md#log-based-metrics).
+> [Most metrics in Azure are stored for 93 days](../essentials/data-platform-metrics.md#retention-of-metrics). You can query no more than 30 days' worth of data on any single chart. You can [pan](metrics-charts.md#pan) the chart to view the full retention. The 30-day limitation doesn't apply to [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md#log-based-metrics).
-By default, the chart shows the most recent 24 hours of metrics data. Use the **time picker** panel to change the time range, zoom in, or zoom out on your chart.
+By default, the chart shows the most recent 24 hours of metrics data. Use the **time picker** panel to change the time range, zoom in, or zoom out on your chart.
-![Change time range panel](./media/metrics-getting-started/time.png)
+![Screenshot that shows changing the time range panel.](./media/metrics-getting-started/time.png)
> [!TIP]
-> Use the **time brush** to investigate an interesting area of the chart (spike or a dip). Put the mouse pointer at the beginning of the area, click and hold the left mouse button, drag to the other side of area and then release the button. The chart will zoom in on that time range.
+> Use the **time brush** to investigate an interesting area of the chart like a spike or a dip. Position the mouse pointer at the beginning of the area, select and hold the left mouse button, drag to the other side of the area, and then release the button. The chart will zoom in on that time range.
## Apply dimension filters and splitting
-[Filtering](../essentials/metrics-charts.md#filters) and [splitting](../essentials/metrics-charts.md#apply-splitting) are powerful diagnostic tools for the metrics that have dimensions. These features show how various metric segments ("dimension values") impact the overall value of the metric, and allow you to identify possible outliers.
-
-- **Filtering** lets you choose which dimension values are included in the chart. For example, you might want to show successful requests when charting the *server response time* metric. You would need to apply the filter on the *success of request* dimension.
+[Filtering](../essentials/metrics-charts.md#filters) and [splitting](../essentials/metrics-charts.md#apply-splitting) are powerful diagnostic tools for the metrics that have dimensions. These features show how various metric segments ("dimension values") affect the overall value of the metric. You can use them to identify possible outliers.
-- **Splitting** controls whether the chart displays separate lines for each value of a dimension, or aggregates the values into a single line. For example, you can see one line for an average response time across all server instances, or see separate lines for each server. You would need to apply splitting on the *server instance* dimension to see separate lines.
+- **Filtering** lets you choose which dimension values are included in the chart. For example, you might want to show successful requests when you chart the *server response time* metric. You apply the filter on the *success of request* dimension.
+- **Splitting** controls whether the chart displays separate lines for each value of a dimension or aggregates the values into a single line. For example, you can see one line for an average response time across all server instances. Or you can see separate lines for each server. You apply splitting on the *server instance* dimension to see separate lines.
-See [examples of the charts](../essentials/metric-chart-samples.md) that have filtering and splitting applied. The article shows the steps were used to configure the charts.
+For examples that have filtering and splitting applied, see [Metric chart examples](../essentials/metric-chart-samples.md). The article shows the steps that were used to configure the charts.
## Share your metric chart
-There are three ways to share your metric chart. See the instructions below on how to share information from your metrics charts using Excel, a link and a workbook.
-
+
+There are three ways to share your metric chart. See the following instructions on how to share information from your metric charts by using Excel, a link, or a workbook.
+ ### Download to Excel
-Select "Share" and "Download to Excel". Your download should start immediately.
+Select **Share** > **Download to Excel**. Your download should start immediately.
+ ### Share a link
-Select "Share" and "Copy link". You should get a notification that the link was copied successfully.
+Select **Share** > **Copy link**. You should get a notification that the link was copied successfully.
+ ### Send to workbook
-Select "Share" and "Send to Workbook". The **Send to Workbook** window opens for you to send the metric chart to a new or existing workbook.
+Select **Share** > **Send to Workbook**. In the **Send to Workbook** window, you can send the metric chart to a new or existing workbook.
## Advanced chart settings
-You can customize chart style, title, and modify advanced chart settings. When done with customization, pin it to a dashboard or save to a workbook to save your work. You can also configure metrics alerts. Follow [product documentation](../essentials/metrics-charts.md) to learn about these and other advanced features of Azure Monitor metrics explorer.
+You can customize the chart style and title, and modify advanced chart settings. When you're finished with customization, pin the chart to a dashboard or save it to a workbook. You can also configure metrics alerts. Follow [product documentation](../essentials/metrics-charts.md) to learn about these and other advanced features of Azure Monitor metrics explorer.
## Next steps
-* [Learn about advanced features of Metrics Explorer](../essentials/metrics-charts.md)
-* [Viewing multiple resources in Metrics Explorer](./metrics-dynamic-scope.md)
-* [Troubleshooting Metrics Explorer](metrics-troubleshoot.md)
+* [Learn about advanced features of metrics explorer](../essentials/metrics-charts.md)
+* [Viewing multiple resources in metrics explorer](./metrics-dynamic-scope.md)
+* [Troubleshooting metrics explorer](metrics-troubleshoot.md)
* [See a list of available metrics for Azure services](./metrics-supported.md)
* [See examples of configured charts](../essentials/metric-chart-samples.md)
azure-monitor Data Collection Rule Sample Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collection-rule-sample-custom-logs.md
The sample [data collection rule](../essentials/data-collection-rule-overview.md
```json
{
  "properties": {
- "dataCollectionEndpointId": "https://my-dcr.westus2-1.ingest.monitor.azure.com",
+ "dataCollectionEndpointId": "/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/my-resource-groups/providers/Microsoft.Insights/dataCollectionEndpoints/my-data-collection-endpoint",
"streamDeclarations": { "Custom-MyTableRawData": { "columns": [
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-storage.md
Supported data types:
* [IIS Logs](../agents/data-sources-iis-logs.md)

## Using Private links
-Customer-managed storage accounts are used to ingest Custom logs or IIS logs when private links are used to connect to Azure Monitor resources. The ingestion process of these data types first uploads logs to an intermediary Azure Storage account, and only then ingests them to a workspace.
+Customer-managed storage accounts are used to ingest Custom logs when private links are used to connect to Azure Monitor resources. The ingestion process of these data types first uploads logs to an intermediary Azure Storage account, and only then ingests them to a workspace.
### Using a customer-managed storage account over a Private Link

#### Workspace requirements
azure-netapp-files Azure Netapp Files Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-metrics.md
na Previously updated : 09/29/2021 Last updated : 08/11/2022

# Metrics for Azure NetApp Files

Azure NetApp Files provides metrics on allocated storage, actual storage usage, volume IOPS, and latency. By analyzing these metrics, you can gain a better understanding on the usage pattern and volume performance of your NetApp accounts.
-You can find metrics for a capacity pool or volume by selecting the **capacity pool** or **volume**. Then click **Metric** to view the available metrics:
+You can find metrics for a capacity pool or volume by selecting the **capacity pool** or **volume**. Then select **Metric** to view the available metrics:
[ ![Snapshot that shows how to navigate to the Metric pull-down.](../media/azure-netapp-files/metrics-navigate-volume.png) ](../media/azure-netapp-files/metrics-navigate-volume.png#lightbox)
You can find metrics for a capacity pool or volume by selecting the **capacity p
- *Is volume replication transferring*
  Whether the status of the volume replication is 'transferring'.
+- *Volume replication lag time* <br>
+ Lag time is the actual amount of time the replication lags behind the source. It indicates the age of the replicated data in the destination volume relative to the source volume.
+
+> [!NOTE]
+> When assessing the health status of the volume replication, consider the volume replication lag time. If the lag time is greater than the replication schedule, the replication volume will not catch up to the source. To resolve this issue, adjust the replication speed or the replication schedule.
+ - *Volume replication last transfer duration* The amount of time in seconds it took for the last transfer to complete.
You can find metrics for a capacity pool or volume by selecting the **capacity p
Write throughput in bytes per second. * *Other throughput*
- Other throughput (that is not read or write) in bytes per second.
+ Other throughput (that isn't read or write) in bytes per second.
## Volume backup metrics
You can find metrics for a capacity pool or volume by selecting the **capacity p
Shows whether the last volume backup or restore operation is successfully completed. `1` is successful. `0` is unsuccessful. * *Is Volume Backup Suspended*
- Shows whether the backup policy is suspended for the volume. `1` is not suspended. `0` is suspended.
+ Shows whether the backup policy is suspended for the volume. `1` isn't suspended. `0` is suspended.
* *Volume Backup Bytes* The total bytes backed up for this volume.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 08/08/2022 Last updated : 08/11/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files volumes are designed to be contained in a special purpose sub
## Configurable network features
- The [**Standard network features**](configure-network-features.md) configuration for Azure NetApp Files is available for public preview. After registering for this feature with your subscription, you can create new volumes choosing *Standard* or *Basic* network features in supported regions. In regions where the Standard network features aren't supported, the volume defaults to using the Basic network features.
+ Register for [**configurable network features**](configure-network-features.md) to create volumes with Standard network features. You can then create new volumes with *Standard* or *Basic* network features in supported regions. In regions where the Standard network features aren't supported, the volume defaults to using the Basic network features.
* ***Standard*** Selecting this setting enables higher IP limits and standard VNet features such as [network security groups](../virtual-network/network-security-groups-overview.md) and [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) on delegated subnets, and additional connectivity patterns as indicated in this article.
Azure NetApp Files standard network features are supported for the following reg
You should understand a few considerations when you plan your Azure NetApp Files network.
+> [!IMPORTANT]
+> [!INCLUDE [Standard network features pricing](includes/standard-networking-pricing.md)]
+ ### Constraints The following table describes what's supported for each network features configuration:
The following table describes whatΓÇÖs supported for each network features confi
| Load balancers for Azure NetApp Files traffic | No | No | | Dual stack (IPv4 and IPv6) VNet | No <br> (IPv4 only supported) | No <br> (IPv4 only supported) |
+> [!IMPORTANT]
+> Upgrading from Basic to Standard network features is not currently supported.
+ ### Supported network topologies The following table describes the network topologies supported by each network features configuration of Azure NetApp Files.
The following table describes the network topologies supported by each network f
|||| | Connectivity to volume in a local VNet | Yes | Yes | | Connectivity to volume in a peered VNet (Same region) | Yes | Yes |
-| Connectivity to volume in a peered VNet (Cross region or global peering) | No | No |
+| Connectivity to volume in a peered VNet (Cross region or global peering) | Yes* | No |
| Connectivity to a volume over ExpressRoute gateway | Yes | Yes | | ExpressRoute (ER) FastPath | Yes | No | | Connectivity from on-premises to a volume in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit | Yes | Yes |
The following table describes the network topologies supported by each network f
| Connectivity over Active/Passive VPN gateways | Yes | Yes | | Connectivity over Active/Active VPN gateways | Yes | No | | Connectivity over Active/Active Zone Redundant gateways | No | No |
-| Connectivity over Virtual WAN (VWAN) | No | No |
+| Connectivity over Virtual WAN (VWAN) | No | No |
+
+\* This option will incur a charge on ingress and egress traffic that uses a virtual network peering connection. For more information, see [Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/). For more general information, see [Virtual network peering](../virtual-network/virtual-network-peering-overview.md).
## Virtual network for Azure NetApp Files volumes
Before provisioning an Azure NetApp Files volume, you need to create an Azure vi
Subnets segment the virtual network into separate address spaces that are usable by the Azure resources in them. Azure NetApp Files volumes are contained in a special-purpose subnet called a [delegated subnet](../virtual-network/virtual-network-manage-subnet.md).
-Subnet delegation gives explicit permissions to the Azure NetApp Files service to create service-specific resources in the subnet. It uses a unique identifier in deploying the service. In this case, a network interface is created to enable connectivity to Azure NetApp Files.
+Subnet delegation gives explicit permissions to the Azure NetApp Files service to create service-specific resources in the subnet. It uses a unique identifier in deploying the service. In this case, a network interface is created to enable connectivity to Azure NetApp Files.
If you use a new VNet, you can create a subnet and delegate the subnet to Azure NetApp Files by following instructions in [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md). You can also delegate an existing empty subnet that's not delegated to other services.
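As a hedged illustration of the delegation described above, a delegated subnet can also be declared in Bicep; the VNet name and address ranges below are assumptions, but the delegation `serviceName` for Azure NetApp Files is `Microsoft.NetApp/volumes`:

```bicep
resource vnet 'Microsoft.Network/virtualNetworks@2021-05-01' = {
  name: 'anf-vnet'
  location: resourceGroup().location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/16'
      ]
    }
    subnets: [
      {
        name: 'anf-delegated-subnet'
        properties: {
          addressPrefix: '10.0.1.0/24'
          delegations: [
            {
              name: 'netAppDelegation'
              properties: {
                // Delegates the subnet to Azure NetApp Files
                serviceName: 'Microsoft.NetApp/volumes'
              }
            }
          ]
        }
      }
    ]
  }
}
```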
User-defined routes (UDRs) and Network security groups (NSGs) are only supported
> [!NOTE] > Associating NSGs at the network interface level is not supported for the Azure NetApp Files network interfaces.
-If the subnet has a combination of volumes with the Standard and Basic network features (or for existing volumes not registered for the feature preview), UDRs and NSGs applied on the delegated subnets will only apply to the volumes with the Standard network features.
+If the subnet has a combination of volumes with the Standard and Basic network features (or for existing volumes not registered for the feature), UDRs and NSGs applied on the delegated subnets will only apply to the volumes with the Standard network features.
Configuring user-defined routes (UDRs) on the source VM subnets with address prefix of delegated subnet and next hop as NVA isn't supported for volumes with the Basic network features. Such a setting will result in connectivity issues.
Configuring user-defined routes (UDRs) on the source VM subnets with address pre
The following diagram illustrates an Azure-native environment:
-![Azure-native networking environment](../media/azure-netapp-files/azure-netapp-files-network-azure-native-environment.png)
### Local VNet A basic scenario is to create or connect to an Azure NetApp Files volume from a VM in the same VNet. For VNet 2 in the diagram, Volume 1 is created in a delegated subnet and can be mounted on VM 1 in the default subnet.
-### VNet peering
+### <a name="vnet-peering"></a> VNet peering
-If you have additional VNets in the same region that need access to each other's resources, the VNets can be connected using [VNet peering](../virtual-network/virtual-network-peering-overview.md) to enable secure connectivity through the Azure infrastructure.
+If you have other VNets in the same region that need access to each other's resources, the VNets can be connected using [VNet peering](../virtual-network/virtual-network-peering-overview.md) to enable secure connectivity through the Azure infrastructure.
Consider VNet 2 and VNet 3 in the diagram above. If VM 1 needs to connect to VM 2 or Volume 2, or if VM 2 needs to connect to VM 1 or Volume 1, then you need to enable VNet peering between VNet 2 and VNet 3.
-Also, consider a scenario where VNet 1 is peered with VNet 2, and VNet 2 is peered with VNet 3 in the same region. The resources from VNet 1 can connect to resources in VNet 2, but it can't connect to resources in VNet 3 unless VNet 1 and VNet 3 are peered.
+Also, consider a scenario where VNet 1 is peered with VNet 2, and VNet 2 is peered with VNet 3 in the same region. The resources from VNet 1 can connect to resources in VNet 2 but can't connect to resources in VNet 3 unless VNet 1 and VNet 3 are peered.
In the diagram above, although VM 3 can connect to Volume 1, VM 4 can't connect to Volume 2. The reason for this is that the spoke VNets aren't peered, and _transit routing isn't supported over VNet peering_.
+### Global or cross-region VNet peering
+
+The following diagram illustrates an Azure-native environment with cross-region VNet peering.
++
+With Standard network features, VMs can connect to volumes in another region via global or cross-region VNet peering. The above diagram adds a second region to the configuration in the [local VNet peering section](#vnet-peering). For VNet 4 in this diagram, an Azure NetApp Files volume is created in a delegated subnet and can be mounted on VM 5 in the application subnet.
+
+In the diagram, VM 2 in Region 1 can connect to Volume 3 in Region 2. VM 5 in Region 2 can connect to Volume 2 in Region 1 via VNet peering between Region 1 and Region 2.
+ ## Hybrid environments The following diagram illustrates a hybrid environment:
-![Hybrid networking environment](../media/azure-netapp-files/azure-netapp-files-network-hybrid-environment.png)
-In the hybrid scenario, applications from on-premises datacenters need access to the resources in Azure. This is the case whether you want to extend your datacenter to Azure, or you want to use Azure native services or for disaster recovery. See [VPN Gateway planning options](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json#planningtable) for information on how to connect multiple resources on-premises to resources in Azure through a site-to-site VPN or an ExpressRoute.
+In the hybrid scenario, applications from on-premises datacenters need access to the resources in Azure. This is the case whether you want to extend your datacenter to Azure, use Azure native services, or set up disaster recovery. See [VPN Gateway planning options](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json#planningtable) for information on how to connect multiple resources on-premises to resources in Azure through a site-to-site VPN or an ExpressRoute.
In a hybrid hub-spoke topology, the hub VNet in Azure acts as a central point of connectivity to your on-premises network. The spokes are VNets peered with the hub, and they can be used to isolate workloads.
In the topology illustrated above, the on-premises network is connected to a hub
* VM 3 in the hub VNet can connect to Volume 2 in spoke VNet 1 and Volume 3 in spoke VNet 2. * VM 4 from spoke VNet 1 and VM 5 from spoke VNet 2 can connect to Volume 1 in the hub VNet. * VM 4 in spoke VNet 1 can't connect to Volume 3 in spoke VNet 2. Also, VM 5 in spoke VNet2 can't connect to Volume 2 in spoke VNet 1. This is the case because the spoke VNets aren't peered and _transit routing isn't supported over VNet peering_.
-* In the above architecture if there's a gateway in the spoke VNet as well, the connectivity to the ANF volume from on-prem connecting over the gateway in the Hub will be lost. By design, preference would be given to the gateway in the spoke VNet and so only machines connecting over that gateway can connect to the ANF volume.
+* In the above architecture, if there's a gateway in the spoke VNet as well, connectivity to the ANF volume from on-premises over the gateway in the hub is lost. By design, preference is given to the gateway in the spoke VNet, so only machines connecting over that gateway can connect to the ANF volume.
## Next steps
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
na Previously updated : 08/03/2021 Last updated : 08/11/2022
The **Network Features** functionality enables you to indicate whether you want
This article helps you understand the options and shows you how to configure network features.
->[!IMPORTANT]
->The **Network Features** functionality is currently in public preview. It is not available in Azure Government regions. See [supported regions](azure-netapp-files-network-topologies.md#supported-regions) for a full list.
+The **Network Features** functionality is not available in Azure Government regions. See [supported regions](azure-netapp-files-network-topologies.md#supported-regions) for a full list.
## Options for network features
Two settings are available for network features:
## Register the feature
-The network features capability is currently in public preview. If you are using this feature for the first time, you need to register the feature first.
+Follow the registration steps if you're using the feature for the first time.
1. Register the feature by running the following commands:
This section shows you how to set the Network Features option.
![Screenshot that shows volume creation for Basic network features.](../media/azure-netapp-files/network-features-create-basic.png)
-2. Before completing the volume creation process, you can display the specified network features setting in the **Review + Create** tab of the Create a Volume screen. Click **Create** to complete the volume creation.
+2. Before completing the volume creation process, you can display the specified network features setting in the **Review + Create** tab of the Create a Volume screen. Select **Create** to complete the volume creation.
![Screenshot that shows the Review and Create tab of volume creation.](../media/azure-netapp-files/network-features-review-create-tab.png)
-3. You can click **Volumes** to display the network features setting for each volume:
+3. You can select **Volumes** to display the network features setting for each volume:
[ ![Screenshot that shows the Volumes page displaying the network features setting.](../media/azure-netapp-files/network-features-volume-list.png)](../media/azure-netapp-files/network-features-volume-list.png#lightbox)
azure-netapp-files Dynamic Change Volume Service Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dynamic-change-volume-service-level.md
na Previously updated : 04/18/2022 Last updated : 08/11/2022 # Dynamically change the service level of a volume
-You can change the service level of an existing volume by moving the volume to another capacity pool in the same NetApp account that uses the [service level](azure-netapp-files-service-levels.md) you want for the volume. This in-place service-level change for the volume does not require that you migrate data. It also does not impact access to the volume.
+You can change the service level of an existing volume by moving the volume to another capacity pool in the same NetApp account that uses the [service level](azure-netapp-files-service-levels.md) you want for the volume. This in-place service-level change for the volume does not require that you migrate data. It also does not affect access to the volume.
This functionality enables you to meet your workload needs on demand. You can change an existing volume to use a higher service level for better performance, or to use a lower service level for cost optimization. For example, if the volume is currently in a capacity pool that uses the *Standard* service level and you want the volume to use the *Premium* service level, you can move the volume dynamically to a capacity pool that uses the *Premium* service level.
The capacity pool that you want to move the volume to must already exist. The ca
* This functionality is supported within the same NetApp account. You can't move the volume to a capacity pool in a different NetApp Account.
-* After the volume is moved to another capacity pool, you will no longer have access to the previous volume activity logs and volume metrics. The volume will start with new activity logs and metrics under the new capacity pool.
+* After the volume is moved to another capacity pool, you'll no longer have access to the previous volume activity logs and volume metrics. The volume will start with new activity logs and metrics under the new capacity pool.
* If you move a volume to a capacity pool of a higher service level (for example, moving from *Standard* to *Premium* or *Ultra* service level), you must wait at least seven days before you can move that volume *again* to a capacity pool of a lower service level (for example, moving from *Ultra* to *Premium* or *Standard*). You can always change to a higher service level without any wait time.+
+* You cannot change the service level for volumes in a cross-region replication relationship.
## Move a volume to another capacity pool
The capacity pool that you want to move the volume to must already exist. The ca
![Change pool](../media/azure-netapp-files/change-pool.png)
-3. Click **OK**.
+3. Select **OK**.
## Next steps
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 07/29/2022 Last updated : 08/11/2022 - # What's new in Azure NetApp Files Azure NetApp Files is updated regularly. This article provides a summary of the latest new features and enhancements.
+## August 2022
+
+* [Standard network features](configure-network-features.md) are now generally available.
+ Standard network features now include Global VNet peering. You must still [register the feature](configure-network-features.md#register-the-feature) before using it.
+ [!INCLUDE [Standard network features pricing](includes/standard-networking-pricing.md)]
+
+* [Cloud Backup for Virtual Machines on Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/install-cloud-backup-virtual-machines.md)
+ You can now create VM-consistent snapshot backups of VMs on Azure NetApp Files datastores using [Cloud Backup for Virtual Machines](../azure-vmware/backup-azure-netapp-files-datastores-vms.md). The associated virtual appliance installs in the Azure VMware Solution cluster and provides policy-based, automated, and consistent backup of VMs. It's integrated with Azure NetApp Files snapshot technology for fast backups and restores of VMs, groups of VMs (organized in resource groups), or complete datastores.
+
## July 2022
+* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now in public preview. You can [back up Azure NetApp Files datastores and VMs using Cloud Backup](../azure-vmware/backup-azure-netapp-files-datastores-vms.md). This virtual appliance installs in the Azure VMware Solution cluster and provides policy-based, automated backup of VMs integrated with Azure NetApp Files snapshot technology for fast backups and restores of VMs, groups of VMs (organized in resource groups), or complete datastores.
+ * [Active Directory connection enhancement: Reset Active Directory computer account password](create-active-directory-connections.md#reset-active-directory) (Preview)
- If you (accidentally) reset the password of the AD computer account on the AD server or the AD server is unreachable, you can now safely reset the computer account password to preserve connectivity to your volumes directly from the portal.
## June 2022 * [Disaster Recovery with Azure NetApp Files, JetStream DR and Azure VMware Solution](../azure-vmware/deploy-disaster-recovery-using-jetstream.md#disaster-recovery-with-azure-netapp-files-jetstream-dr-and-azure-vmware-solution)
- Disaster Recovery to cloud is a resilient and cost-effective way of protecting the workloads against site outages and data corruption events like ransomware. Leveraging the VMware VAIO framework, on-premises VMware workloads can be replicated to Azure Blob storage and recovered with minimal or close to no data loss and near-zero Recovery Time Objective (RTO). JetStream DR can now seamlessly recover workloads replicated from on-premises to Azure VMware Solution to Azure NetApp Files. JetStream DR enables cost-effective disaster recovery by consuming minimal resources at the DR site and using cost-effective cloud storage. JetStream DR automates recovery to Azure NetApp Files datastores using Azure Blob Storage. It can recover independent VMs or groups of related VMs into the recovery site infrastructure according to runbook settings. It also provides point-in-time recovery for ransomware protection.
- * [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) (Preview)
- [Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files will enable you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this will provide more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity beyond the limits of the local instance storage for AVS provided by vSAN and lower your overall total cost of ownership for storage-intensive workloads.
+ [Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files will enable you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this will provide more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity beyond the limits of the local instance storage for Azure VMware Solution provided by vSAN and lower your overall total cost of ownership for storage-intensive workloads.
Regional Coverage: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East US, France Central, Germany West Central, Japan West, North Central US, North Europe, South Central US, Southeast Asia, Switzerland West, UK South, UK West, West Europe, West US. Regional coverage will expand as the preview progresses.
azure-portal Azure Portal Quickstart Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-quickstart-center.md
Azure Quickstart Center has two options in the **Get started** tab:
## Take an online course
-The **Take an online course** tab of the Azure Quickstart Center highlights free introductory course modules from Microsoft Learn.
+The **Take an online course** tab of the Azure Quickstart Center highlights free introductory course modules.
Select a tile to launch a course and learn more about cloud concepts and managing resources in Azure.
You can also select **Browse our full Azure catalog** to see all Azure learning
## Next steps * Learn more about Azure setup and migration in the [Microsoft Cloud Adoption Framework for Azure](/azure/architecture/cloud-adoption/).
-* Unlock your cloud skills with more courses from [Microsoft Learn](/learn/azure/).
+* Unlock your cloud skills with more [Learn modules](/learn/azure/).
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md
Last updated 08/03/2022
This quickstart shows you how to integrate Bicep files with Azure Pipelines for continuous integration and continuous deployment (CI/CD).
-It provides a short introduction to the pipeline task you need for deploying a Bicep file. If you want more detailed steps on setting up the pipeline and project, see [Deploy Azure resources by using Bicep and Azure Pipelines](/learn/paths/bicep-azure-pipelines/) on **Microsoft Learn**.
+It provides a short introduction to the pipeline task you need for deploying a Bicep file. If you want more detailed steps on setting up the pipeline and project, see [Deploy Azure resources by using Bicep and Azure Pipelines](/learn/paths/bicep-azure-pipelines/).
## Prerequisites
azure-resource-manager Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/best-practices.md
Last updated 05/16/2022
This article recommends practices to follow when developing your Bicep files. These practices make your Bicep file easier to understand and use.
-### Microsoft Learn
+### Training resources
-If you would rather learn about Bicep best practices through step-by-step guidance, see [Structure your Bicep code for collaboration](/learn/modules/structure-bicep-code-collaboration/) on **Microsoft Learn**.
+If you would rather learn about Bicep best practices through step-by-step guidance, see [Structure your Bicep code for collaboration](/learn/modules/structure-bicep-code-collaboration/).
## Parameters
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
The following example shows the rules that are available for configuration.
"prefer-unquoted-property-names": { "level": "warning" },
- "secure-parameter-default": {
+ "protect-commandtoexecute-secrets": {
"level": "warning" },
- "simplify-interpolation": {
+ "secure-parameter-default": {
"level": "warning" },
- "use-protectedsettings-for-commandtoexecute-secrets": {
+ "simplify-interpolation": {
"level": "warning" }, "secure-secrets-in-params": {
azure-resource-manager Child Resource Name Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/child-resource-name-type.md
Each parent resource accepts only certain resource types as child resources. The
This article shows different ways you can declare a child resource.
-### Microsoft Learn
+### Training resources
-If you would rather learn about about child resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates) on **Microsoft Learn**.
+If you would rather learn about child resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates).
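For example, one common pattern is to nest the child inside the parent so the full type and name prefix are inferred. This is a minimal sketch with illustrative names:

```bicep
resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' = {
  name: 'stg${uniqueString(resourceGroup().id)}'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'

  // Nested child: resolves to Microsoft.Storage/storageAccounts/fileServices
  resource fileService 'fileServices' = {
    name: 'default'
  }
}
```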
## Name and type pattern
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/conditional-resource-deployment.md
Sometimes you need to optionally deploy a resource or module in Bicep. Use the `
> [!NOTE] > Conditional deployment doesn't cascade to [child resources](child-resource-name-type.md). If you want to conditionally deploy a resource and its child resources, you must apply the same condition to each resource type.
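To illustrate the note above, here's a minimal sketch (resource names are illustrative) where the same condition is repeated on the parent and the child:

```bicep
param deployStorage bool = true

resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' = if (deployStorage) {
  name: 'stg${uniqueString(resourceGroup().id)}'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

// The condition doesn't cascade, so it's applied to the child as well.
resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2021-04-01' = if (deployStorage) {
  parent: stg
  name: 'default'
}
```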
-### Microsoft Learn
+### Training resources
-If you would rather learn about conditions through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/) on **Microsoft Learn**.
+If you would rather learn about conditions through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/).
## Deploy condition
output mgmtStatus string = ((!empty(logAnalytics)) ? 'Enabled monitoring for VM!
## Next steps
-* For a Microsoft Learn module about conditions and loops, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/).
+* Review the Learn module [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/).
* For recommendations about creating Bicep files, see [Best practices for Bicep](best-practices.md). * To create multiple instances of a resource, see [Iterative loops in Bicep](loops.md).
azure-resource-manager Contribute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/contribute.md
Bicep is an open-source project. That means you can contribute to Bicep's develo
## Contribution types - **Azure Quickstart Templates.** You can contribute example Bicep files and ARM templates to the Azure Quickstart Templates repository. For more information, see the [Azure Quickstart Templates contribution guide](https://github.com/Azure/azure-quickstart-templates/blob/master/1-CONTRIBUTION-GUIDE/README.md#contribution-guide).-- **Documentation.** Bicep's documentation is open to contributions, too. For more information, see the [Microsoft contributor guide overview](/contribute/).
+- **Documentation.** Bicep's documentation is open to contributions, too. For more information, see our [contributor guide overview](/contribute/).
- **Snippets.** Do you have a favorite snippet you think the community would benefit from? You can add it to the Visual Studio Code extension's collection of snippets. For more information, see [Contributing to Bicep](https://github.com/Azure/bicep/blob/main/CONTRIBUTING.md#snippets). - **Code changes.** If you're a developer and you have ideas you'd like to see in the Bicep language or tooling, you can contribute a pull request. For more information, see [Contributing to Bicep](https://github.com/Azure/bicep/blob/main/CONTRIBUTING.md).
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-management-group.md
This article describes how to set scope with Bicep when deploying to a managemen
As your organization matures, you can deploy a Bicep file to create resources at the management group level. For example, you may need to define and assign [policies](../../governance/policy/overview.md) or [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) for a management group. With management group level templates, you can declaratively apply policies and assign roles at the management group level.
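As a brief sketch of the idea (the built-in policy shown, *Audit VMs that do not use managed disks*, is just an example), a management group level Bicep file sets `targetScope` and can then declare the assignment:

```bicep
targetScope = 'managementGroup'

resource assignment 'Microsoft.Authorization/policyAssignments@2021-06-01' = {
  name: 'audit-vm-managed-disks'
  properties: {
    // Built-in policy definitions live at tenant scope
    policyDefinitionId: tenantResourceId('Microsoft.Authorization/policyDefinitions', '06a78e20-9358-41c9-923c-fb736d382a4d')
  }
}
```

You would deploy such a file with a management group deployment command, for example `az deployment mg create`.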
-### Microsoft Learn
+### Training resources
-If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/) on **Microsoft Learn**.
+If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/).
## Supported resources
azure-resource-manager Deploy To Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-subscription.md
To simplify the management of resources, you can deploy resources at the level o
> [!NOTE] > You can deploy to 800 different resource groups in a subscription level deployment.
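A minimal sketch of a subscription level Bicep file, with illustrative names, creates a resource group directly:

```bicep
targetScope = 'subscription'

param rgName string = 'demo-rg'
param rgLocation string = 'westus2'

resource demoRg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: rgName
  location: rgLocation
}
```

You would deploy it with a subscription deployment command, for example `az deployment sub create --location westus2 --template-file main.bicep`.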
-### Microsoft Learn
+### Training resources
-If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/) on **Microsoft Learn**.
+If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/).
## Supported resources
azure-resource-manager Deploy To Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-tenant.md
Last updated 11/22/2021
As your organization matures, you may need to define and assign [policies](../../governance/policy/overview.md) or [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) across your Azure AD tenant. With tenant level templates, you can declaratively apply policies and assign roles at a global level.
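A minimal sketch at tenant scope, with an illustrative name, creates a management group:

```bicep
targetScope = 'tenant'

resource demoMg 'Microsoft.Management/managementGroups@2021-04-01' = {
  name: 'demo-mg'
  properties: {
    displayName: 'Demo management group'
  }
}
```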
-### Microsoft Learn
+### Training resources
-If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/) on **Microsoft Learn**.
+If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/).
## Supported resources
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-what-if.md
Before deploying a Bicep file, you can preview the changes that will happen. Azu
You can use the what-if operation with Azure PowerShell, Azure CLI, or REST API operations. What-if is supported for resource group, subscription, management group, and tenant level deployments.
-### Microsoft Learn
+### Training resources
-If you would rather learn about the what-if operation through step-by-step guidance, see [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif/) on **Microsoft Learn**.
+If you would rather learn about the what-if operation through step-by-step guidance, see [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif/).
[!INCLUDE [permissions](../../../includes/template-deploy-permissions.md)]
You can use the what-if operation through the Azure SDKs.
* To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/). * If you notice incorrect results from the what-if operation, please report the issues at [https://aka.ms/whatifissues](https://aka.ms/whatifissues).
-* For a Microsoft Learn module that covers using what if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
+* For a Learn module that demonstrates using what-if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
The deployment script resource is only available in the regions where Azure Cont
> [!NOTE] > Retry logic for Azure sign in is now built in to the wrapper script. If you grant permissions in the same Bicep file as your deployment scripts, the deployment script service retries sign in for 10 minutes with 10-second interval until the managed identity role assignment is replicated.
-### Microsoft Learn
+### Training resources
-If you would rather learn about the ARM template test toolkit through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts) on **Microsoft Learn**.
+If you would rather learn about deployment scripts through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts).
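As a minimal, hedged sketch of the resource this article covers, an inline PowerShell deployment script looks roughly like the following; the script content and retention period are illustrative, and a managed identity would be added if the script needs to call Azure:

```bicep
resource helloScript 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
  name: 'runInlineScript'
  location: resourceGroup().location
  kind: 'AzurePowerShell'
  properties: {
    azPowerShellVersion: '6.4'
    scriptContent: 'Write-Output "Hello from a deployment script"'
    // How long to keep the script resource after it finishes
    retentionInterval: 'P1D'
  }
}
```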
## Configure the minimum permissions
After the script is tested successfully, you can use it as a deployment script i
## Next steps
-In this article, you learned how to use deployment scripts. To walk through a Microsoft Learn module:
+In this article, you learned how to use deployment scripts. To walk through a Learn module:
> [!div class="nextstepaction"] > [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts)
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/key-vault-parameter.md
New-AzResourceGroupDeployment `
- For general information about key vaults, see [What is Azure Key Vault?](../../key-vault/general/overview.md) - For complete examples of referencing key secrets, see [key vault examples](https://github.com/rjmax/ArmExamples/tree/master/keyvaultexamples) on GitHub.-- For a Microsoft Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+- For a Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
azure-resource-manager Learn Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/learn-bicep.md
Title: Discover Bicep on Microsoft Learn
-description: Provides an overview of the units that are available on Microsoft Learn for Bicep.
+ Title: Learn modules for Bicep
+description: Provides an overview of the Learn modules for Bicep.
Last updated 12/03/2021
-# Bicep on Microsoft Learn
+# Learn modules for Bicep
-Ready to see how Bicep can help simplify and accelerate your deployments to Azure? Check out the many hands-on courses on Microsoft Learn.
+Ready to see how Bicep can help simplify and accelerate your deployments to Azure? Check out the many hands-on courses.
> [!TIP] > Want to learn Bicep live from subject matter experts? [Learn Live with our experts every Tuesday (Pacific time) beginning March 8, 2022.](/events/learntv/learnlive-iac-and-bicep/) ## Get started
-If you're new to Bicep, a great way to get started is by taking this module on Microsoft Learn.
+If you're new to Bicep, a great way to get started is by reviewing the following Learn module. You'll learn how Bicep makes it easier to define how your Azure resources should be configured and deployed in a way that's automated and repeatable. You'll deploy several Azure resources so you can see for yourself how Bicep works. We provide free access to Azure resources to help you practice the concepts.
-There you'll learn how Bicep makes it easier to define how your Azure resources should be configured and deployed in a way that's automated and repeatable. YouΓÇÖll deploy several Azure resources so you can see for yourself how Bicep works. We provide free access to Azure resources to help you practice the concepts.
-
-[<img src="media/learn-bicep/build-first-bicep-template.svg" width="101" height="120" alt="The badge for the Build your first Bicep template module on Microsoft Learn." role="presentation"></img>](/learn/modules/build-first-bicep-template/)
+[<img src="media/learn-bicep/build-first-bicep-template.svg" width="101" height="120" alt="The badge for the Build your first Bicep template module." role="presentation"></img>](/learn/modules/build-first-bicep-template/)
[Build your first Bicep template](/learn/modules/build-first-bicep-template/)
After that, you might be interested in adding your Bicep code to a deployment pi
## Next steps * For a short introduction to Bicep, see [Bicep quickstart](quickstart-create-bicep-use-visual-studio-code.md).
-* For suggestions about how to improve your Bicep files, see [Best practices for Bicep](best-practices.md).
+* For suggestions about how to improve your Bicep files, see [Best practices for Bicep](best-practices.md).
azure-resource-manager Linter Rule Outputs Should Not Contain Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-outputs-should-not-contain-secrets.md
This rule finds possible exposure of secrets in a template's outputs.
## Linter rule code Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings:-
+
`outputs-should-not-contain-secrets` ## Solution Don't include any values in an output that could potentially expose secrets. For example, secure parameters of type secureString or secureObject, or [`list*`](./bicep-functions-resource.md#list) functions such as listKeys.-
-The output from a template is stored in the deployment history, so a malicious user could find that information.
-
+
+The output from a template is stored in the deployment history, so a user with read-only permissions could gain access to information that would otherwise be unavailable at that permission level.
+
The following example fails because it includes a secure parameter in an output value.

```bicep
@secure()
param secureParam string

output badResult string = 'this is the value ${secureParam}'
```
param storageName string
resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' existing = { name: storageName }
+
output badResult object = { value: stg.listKeys().keys[0].value }
The following example fails because the output name contains 'password', indicat
output accountPassword string = '...' ```
-To fix it, you will need to remove the secret data from the output.
+To fix it, you will need to remove the secret data from the output. The recommended practice is to output the resource ID of the resource containing the secret and retrieve the secret when the resource needing the information is created or updated. Secrets may also be stored in Key Vault for more complex deployment scenarios.
+
+The following example shows a secure pattern for retrieving a storageAccount key from a module.
+
+```bicep
+output storageId string = stg.id
+```
+
+This output can then be used in a subsequent deployment, as shown in the following example:
+
+```bicep
+someProperty: listKeys(myStorageModule.outputs.storageId, '2021-09-01').keys[0].value
+```
## Silencing false positives
It is good practice to add a comment explaining why the rule does not apply to t
## Next steps
-For more information about the linter, see [Use Bicep linter](./linter.md).
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter Rule Protect Commandtoexecute Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-protect-commandtoexecute-secrets.md
+
+ Title: Linter rule - use protectedSettings for commandToExecute secrets
+description: Linter rule - use protectedSettings for commandToExecute secrets
+ Last updated : 12/17/2021++
+# Linter rule - use protectedSettings for commandToExecute secrets
+
+This rule finds possible exposure of secrets in the settings property of a custom script resource.
+
+## Linter rule code
+
+Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings:
+
+`protect-commandtoexecute-secrets`
+
+## Solution
+
+For custom script resources, the `commandToExecute` value should be placed under the `protectedSettings` property object instead of the `settings` property object if it includes secret data such as a password. For example, secret data could be found in secure parameters, [`list*`](./bicep-functions-resource.md#list) functions such as listKeys, or in custom script arguments.
+
+Don't use secret data in the `settings` object because it uses clear text. For more information, see [Microsoft.Compute virtualMachines/extensions](/azure/templates/microsoft.compute/virtualmachines/extensions), [Custom Script Extension for Windows](../../virtual-machines/extensions/custom-script-windows.md), and [Use the Azure Custom Script Extension Version 2 with Linux virtual machines](../../virtual-machines/extensions/custom-script-linux.md).
+
+The following example fails because `commandToExecute` is specified under `settings` and uses a secure parameter.
+
+```bicep
+param vmName string
+param location string
+param fileUris string
+param storageAccountName string
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2021-06-01' existing = {
+ name: storageAccountName
+}
+
+resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2019-08-02-preview' = {
+ name: '${vmName}/CustomScriptExtension'
+ location: location
+ properties: {
+ publisher: 'Microsoft.Compute'
+ type: 'CustomScriptExtension'
+ autoUpgradeMinorVersion: true
+ settings: {
+ fileUris: split(fileUris, ' ')
+ commandToExecute: 'mycommand ${storageAccount.listKeys().keys[0].value}'
+ }
+ }
+}
+```
+
+You can fix it by moving the `commandToExecute` property to the `protectedSettings` object.
+
+```bicep
+param vmName string
+param location string
+param fileUris string
+param storageAccountName string
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2021-06-01' existing = {
+ name: storageAccountName
+}
+
+resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2019-08-02-preview' = {
+ name: '${vmName}/CustomScriptExtension'
+ location: location
+ properties: {
+ publisher: 'Microsoft.Compute'
+ type: 'CustomScriptExtension'
+ autoUpgradeMinorVersion: true
+ settings: {
+ fileUris: split(fileUris, ' ')
+ }
+ protectedSettings: {
+ commandToExecute: 'mycommand ${storageAccount.listKeys().keys[0].value}'
+ }
+ }
+}
+```
+
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md
Last updated 12/02/2021
This article shows you how to use the `for` syntax to iterate over items in a collection. This functionality is supported in v0.3.1 onward. You can use loops to define multiple copies of a resource, module, variable, property, or output. Use loops to avoid repeating syntax in your Bicep file and to dynamically set the number of copies to create during deployment. To go through a quickstart, see [Quickstart: Create multiple instances](./quickstart-loops.md).
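For a quick feel of the syntax, here's a minimal sketch (the names array is illustrative) that creates one storage account per item:

```bicep
param storageNames array = [
  'contoso'
  'fabrikam'
]

resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' = [for name in storageNames: {
  name: '${name}${uniqueString(resourceGroup().id)}'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}]
```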
-### Microsoft Learn
+### Training resources
-If you would rather learn about loops through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/) on **Microsoft Learn**.
+If you would rather learn about loops through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/).
## Loop syntax
azure-resource-manager Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/migrate.md
The first step in the process is to capture an initial representation of your Az
:::image type="content" source="./media/migrate/migrate-bicep.png" alt-text="Diagram of the recommended workflow for migrating Azure resources to Bicep." border="false":::
-In this article we summarize this recommended workflow. For detailed guidance, see [Migrate Azure resources and JSON ARM templates to use Bicep](/learn/modules/migrate-azure-resources-bicep/) on Microsoft Learn.
+In this article we summarize this recommended workflow. For detailed guidance, see [Migrate Azure resources and JSON ARM templates to use Bicep](/learn/modules/migrate-azure-resources-bicep/).
## Phase 1: Convert
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/modules.md
To share modules with other people in your organization, create a [template spec
Bicep modules are converted into a single Azure Resource Manager template with [nested templates](../templates/linked-templates.md#nested-template).
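A minimal consumption sketch follows; it assumes a `storage.bicep` file next to the deploying file that declares a `storagePrefix` parameter and an `endpoint` output:

```bicep
module stgModule './storage.bicep' = {
  name: 'storageDeploy' // name of the nested deployment
  params: {
    storagePrefix: 'demo'
  }
}

output storageEndpoint string = stgModule.outputs.endpoint
```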
-### Microsoft Learn
+### Training resources
-If you would rather learn about modules through step-by-step guidance, see [Create composable Bicep files by using modules](/learn/modules/create-composable-bicep-files-using-modules/) on **Microsoft Learn**.
+If you would rather learn about modules through step-by-step guidance, see [Create composable Bicep files by using modules](/learn/modules/create-composable-bicep-files-using-modules/).
## Definition syntax
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/overview.md
Bicep provides the following advantages:
To start with Bicep: 1. **Install the tools**. See [Set up Bicep development and deployment environments](./install.md). Or, you can use the [VS Code Devcontainer/Codespaces repo](https://github.com/Azure/vscode-remote-try-bicep) to get a pre-configured authoring environment.
-2. **Complete the [quickstart](./quickstart-create-bicep-use-visual-studio-code.md) and the [Microsoft Learn Bicep modules](./learn-bicep.md)**.
+2. **Complete the [quickstart](./quickstart-create-bicep-use-visual-studio-code.md) and the [Learn modules for Bicep](./learn-bicep.md)**.
To decompile an existing ARM template to Bicep, see [Decompiling ARM template JSON to Bicep](./decompile.md). You can use the [Bicep Playground](https://aka.ms/bicepdemo) to view Bicep and equivalent JSON side by side.
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameters.md
Resource Manager resolves parameter values before starting the deployment operat
Each parameter must be set to one of the [data types](data-types.md).
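For example, a couple of illustrative declarations with types, a default value, and decorators:

```bicep
@description('Deployment environment.')
@allowed([
  'dev'
  'test'
  'prod'
])
param environmentName string = 'dev'

@minLength(3)
@maxLength(24)
param storageAccountName string
```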
-### Microsoft Learn
+### Training resources
-If you would rather learn about parameters through step-by-step guidance, see [Build reusable Bicep templates by using parameters](/learn/modules/build-reusable-bicep-templates-parameters) on **Microsoft Learn**.
+If you would rather learn about parameters through step-by-step guidance, see [Build reusable Bicep templates by using parameters](/learn/modules/build-reusable-bicep-templates-parameters).
## Declaration
azure-resource-manager Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md
To share [modules](modules.md) within your organization, you can create a privat
To work with module registries, you must have [Bicep CLI](./install.md) version **0.4.1008 or later**. To use with Azure CLI, you must also have version **2.31.0 or later**; to use with Azure PowerShell, you must also have version **7.0.0** or later.
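Once published, a module is consumed with the `br:` scheme; the registry name, module path, and tag below are illustrative:

```bicep
module stgModule 'br:exampleregistry.azurecr.io/bicep/modules/storage:v1' = {
  name: 'storageDeploy'
  params: {
    storagePrefix: 'demo'
  }
}
```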
-### Microsoft Learn
+### Training resources
-If you would rather learn about parameters through step-by-step guidance, see [Share Bicep modules by using private registries](/learn/modules/share-bicep-modules-using-private-registries) on **Microsoft Learn**.
+If you would rather learn about parameters through step-by-step guidance, see [Share Bicep modules by using private registries](/learn/modules/share-bicep-modules-using-private-registries).
## Configure private registry
azure-resource-manager Quickstart Create Bicep Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md
Remove-AzResourceGroup -Name exampleRG
## Next steps > [!div class="nextstepaction"]
-> [Bicep in Microsoft Learn](learn-bicep.md)
+> [Learn modules for Bicep](learn-bicep.md)
azure-resource-manager Quickstart Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-loops.md
Remove-AzResourceGroup -Name $resourceGroupName
## Next steps > [!div class="nextstepaction"]
-> [Bicep in Microsoft Learn](learn-bicep.md)
+> [Learn modules for Bicep](learn-bicep.md)
azure-resource-manager Quickstart Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-private-module-registry.md
Remove-AzResourceGroup -Name $resourceGroupName
## Next steps > [!div class="nextstepaction"]
-> [Bicep in Microsoft Learn](learn-bicep.md)
+> [Learn modules for Bicep](learn-bicep.md)
azure-resource-manager Scope Extension Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scope-extension-resources.md
This article shows how to set the scope for an extension resource type when depl
> [!NOTE] > The scope property is only available to extension resource types. To specify a different scope for a resource type that isn't an extension type, use a [module](modules.md).
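As a minimal sketch, an extension resource such as a role assignment uses the `scope` property to attach to another resource; the storage account name and role (Storage Blob Data Contributor) are illustrative:

```bicep
param principalId string

// Built-in role: Storage Blob Data Contributor
var roleDefinitionId = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'ba92f5b4-2d11-453d-a403-e96b0029c9fe')

resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' existing = {
  name: 'examplestorage'
}

resource assignment 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
  name: guid(stg.id, principalId, roleDefinitionId)
  scope: stg // applies the extension resource to the storage account
  properties: {
    roleDefinitionId: roleDefinitionId
    principalId: principalId
  }
}
```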
-### Microsoft Learn
+### Training resources
-If you would rather learn about extension resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates) on **Microsoft Learn**.
+If you would rather learn about extension resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates).
## Apply at deployment scope
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
When designing your deployment, always consider the lifecycle of the resources a
> - Content in the Bicep module registry can only be deployed from another Bicep file. Template specs can be deployed directly from the API, Azure PowerShell, Azure CLI, and the Azure portal. You can even use [`UiFormDefinition`](../templates/template-specs-create-portal-forms.md) to customize the portal deployment experience. > - Bicep has some limited capabilities for embedding other project artifacts (including non-Bicep and non-ARM-template files, such as PowerShell scripts, CLI scripts, and other binaries) by using the [`loadTextContent`](./bicep-functions-files.md#loadtextcontent) and [`loadFileAsBase64`](./bicep-functions-files.md#loadfileasbase64) functions. Template specs can't package these artifacts.
-### Microsoft Learn
+### Training resources
-To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs) on **Microsoft Learn**.
+To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs).
## Why use template specs?
After creating a template spec, you can link to that template spec in a Bicep mo
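A sketch of that linkage, with placeholder subscription ID, resource group, spec name, and version:

```bicep
module stgSpec 'ts:00000000-0000-0000-0000-000000000000/templateSpecsRG/storageSpec:1.0' = {
  name: 'storageSpecDeploy'
  params: {
    storagePrefix: 'demo'
  }
}
```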
## Next steps
-To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs) on **Microsoft Learn**.
+To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs).
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Title: Protect your Azure resources with a lock description: You can safeguard Azure resources from updates or deletions by locking all users and roles. Previously updated : 08/08/2022 Last updated : 08/11/2022
As an administrator, you can lock an Azure subscription, resource group, or resource to protect them from accidental user deletions and modifications. The lock overrides any user permissions.
-You can set locks that prevent either deletions or modifications. In the portal, these locks are called **Delete** and **Read-only**. In the command line, these locks are called **CanNotDelete** and **ReadOnly**. In the left navigation panel, the subscription lock feature's name is **Resource locks**, while the resource group lock feature's name is **Locks**.
+You can set locks that prevent either deletions or modifications. In the portal, these locks are called **Delete** and **Read-only**. In the command line, these locks are called **CanNotDelete** and **ReadOnly**.
- **CanNotDelete** means authorized users can read and modify a resource, but they can't delete it. - **ReadOnly** means authorized users can read a resource, but they can't delete or update it. Applying this lock is similar to restricting all authorized users to the permissions that the **Reader** role provides.
Unlike role-based access control (RBAC), you use management locks to apply a res
When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the same parent lock. The most restrictive lock in the inheritance takes precedence.
+[Extension resources](extension-resource-types.md) inherit locks from the resource they're applied to. For example, Microsoft.Insights/diagnosticSettings is an extension resource type. If you apply a diagnostic setting to a storage blob, and lock the storage account, you're unable to delete the diagnostic setting. This inheritance makes sense because the full resource ID of the diagnostic setting is:
+
+```json
+/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storage-name}/blobServices/default/providers/microsoft.insights/diagnosticSettings/{setting-name}
+```
+
+This ID is within the scope of the resource ID of the locked resource:
+
+```json
+/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storage-name}
+```
+ If you have a **Delete** lock on a resource and attempt to delete its resource group, the feature blocks the whole delete operation. Even if the resource group or other resources in the resource group are unlocked, the deletion doesn't happen. You never have a partial deletion. When you [cancel an Azure subscription](../../cost-management-billing/manage/cancel-azure-subscription.md#what-happens-after-subscription-cancellation):
When you [cancel an Azure subscription](../../cost-management-billing/manage/can
* Azure preserves your resources by deactivating them instead of immediately deleting them. * Azure only deletes your resources permanently after a waiting period. ++ ## Understand scope of locks > [!NOTE]
To delete everything for the service, including the locked infrastructure resour
### Portal
+In the left navigation panel, the subscription lock feature's name is **Resource locks**, while the resource group lock feature's name is **Locks**.
+ [!INCLUDE [resource-manager-lock-resources](../../../includes/resource-manager-lock-resources.md)] ### Template
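As a reference point, a hedged ARM template sketch of a lock resource (the name and notes are illustrative):

```json
{
  "type": "Microsoft.Authorization/locks",
  "apiVersion": "2016-09-01",
  "name": "rgLock",
  "properties": {
    "level": "CanNotDelete",
    "notes": "Prevents deletion of the resource group and its resources."
  }
}
```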
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/conditional-resource-deployment.md
If you deploy a template with [complete mode](deployment-modes.md) and a resourc
## Next steps
-* For a Microsoft Learn module that covers conditional deployment, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+* For a Learn module that covers conditional deployment, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
* For recommendations about creating templates, see [ARM template best practices](./best-practices.md). * To create multiple instances of a resource, see [Resource iteration in ARM templates](copy-resources.md).
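To make the pattern concrete, a minimal hedged sketch of the `condition` element on a resource (the parameter names are illustrative):

```json
{
  "condition": "[equals(parameters('newOrExisting'), 'new')]",
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2022-05-01",
  "name": "[parameters('storageAccountName')]",
  "location": "[resourceGroup().location]",
  "sku": { "name": "Standard_LRS" },
  "kind": "StorageV2"
}
```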
azure-resource-manager Copy Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/copy-resources.md
The following examples show common scenarios for creating more than one instance
- To set dependencies on resources that are created in a copy loop, see [Define the order for deploying resources in ARM templates](./resource-dependency.md). - To go through a tutorial, see [Tutorial: Create multiple resource instances with ARM templates](template-tutorial-create-multiple-instances.md).-- For a Microsoft Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+- For a Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
- For other uses of the copy loop, see: - [Property iteration in ARM templates](copy-properties.md) - [Variable iteration in ARM templates](copy-variables.md)
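For orientation, a hedged sketch of resource iteration with the `copy` element (the prefix parameter and count are illustrative):

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2022-05-01",
  "name": "[format('{0}storage{1}', parameters('prefix'), copyIndex())]",
  "location": "[resourceGroup().location]",
  "sku": { "name": "Standard_LRS" },
  "kind": "StorageV2",
  "copy": {
    "name": "storagecopy",
    "count": 3
  }
}
```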
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-what-if.md
Before deploying an Azure Resource Manager template (ARM template), you can prev
You can use the what-if operation with Azure PowerShell, Azure CLI, or REST API operations. What-if is supported for resource group, subscription, management group, and tenant level deployments.
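For example, a minimal hedged invocation from Azure CLI (the resource group and template file names are illustrative):

```azurecli
# Preview the effect of a deployment without applying it (illustrative names).
az deployment group what-if \
    --resource-group ContosoRG \
    --template-file main.json
```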
-### Microsoft Learn
+### Training resources
-To learn more about what-if, and for hands-on guidance, see [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif) on **Microsoft Learn**.
+To learn more about what-if, and for hands-on guidance, see [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif).
[!INCLUDE [permissions](../../../includes/template-deploy-permissions.md)]
You can use the what-if operation through the Azure SDKs.
- [ARM Deployment Insights](https://marketplace.visualstudio.com/items?itemName=AuthorityPartnersInc.arm-deployment-insights) extension provides an easy way to integrate the what-if operation in your Azure DevOps pipeline. - To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/). - If you notice incorrect results from the what-if operation, please report the issues at [https://aka.ms/whatifissues](https://aka.ms/whatifissues).-- For a Microsoft Learn module that covers using what if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
+- For a Learn module that covers using what if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
The deployment script resource is only available in the regions where Azure Cont
> [!NOTE] > Retry logic for Azure sign in is now built in to the wrapper script. If you grant permissions in the same template as your deployment scripts, the deployment script service retries sign in for 10 minutes with 10-second interval until the managed identity role assignment is replicated.
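As a rough sketch, a deployment script resource in an ARM template might look like the following (the name, PowerShell version, and script content are illustrative):

```json
{
  "type": "Microsoft.Resources/deploymentScripts",
  "apiVersion": "2020-10-01",
  "name": "helloScript",
  "location": "[resourceGroup().location]",
  "kind": "AzurePowerShell",
  "properties": {
    "azPowerShellVersion": "8.3",
    "scriptContent": "Write-Output 'Hello from a deployment script'",
    "retentionInterval": "P1D"
  }
}
```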
-### Microsoft Learn
+### Training resources
-To learn more about the ARM template test toolkit, and for hands-on guidance, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts) on **Microsoft Learn**.
+To learn more about the ARM template test toolkit, and for hands-on guidance, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts).
## Configure the minimum permissions
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/key-vault-parameter.md
The following template dynamically creates the key vault ID and passes it as a p
- For general information about key vaults, see [What is Azure Key Vault?](../../key-vault/general/overview.md) - For complete examples of referencing key secrets, see [key vault examples](https://github.com/rjmax/ArmExamples/tree/master/keyvaultexamples) on GitHub.-- For a Microsoft Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+- For a Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
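For reference, a hedged sketch of a parameter file that pulls a secret from a key vault (the subscription, vault, and secret names are illustrative placeholders):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminPassword": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "vmAdminPassword"
      }
    }
  }
}
```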
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/overview.md
This approach means you can safely share templates that meet your organization's
## Next steps * For a step-by-step tutorial that guides you through the process of creating a template, see [Tutorial: Create and deploy your first ARM template](template-tutorial-create-first-template.md).
-* To learn about ARM templates through a guided set of modules on Microsoft Learn, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/).
+* To learn about ARM templates through a guided set of Learn modules, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/).
* For information about the properties in template files, see [Understand the structure and syntax of ARM templates](./syntax.md). * To learn about exporting templates, see [Quickstart: Create and deploy ARM templates by using the Azure portal](quickstart-create-templates-use-the-portal.md). * For answers to common questions, see [Frequently asked questions about ARM templates](./frequently-asked-questions.yml).
azure-resource-manager Resource Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-dependency.md
For information about assessing the deployment order and resolving dependency er
## Next steps * To go through a tutorial, see [Tutorial: Create ARM templates with dependent resources](template-tutorial-create-templates-with-dependent-resources.md).
-* For a Microsoft Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+* For a Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
* For recommendations when setting dependencies, see [ARM template best practices](./best-practices.md). * To learn about troubleshooting dependencies during deployment, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](common-deployment-errors.md). * To learn about creating Azure Resource Manager templates, see [Understand the structure and syntax of ARM templates](./syntax.md).
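As a quick reference, a hedged sketch of an explicit dependency declared with `dependsOn`, trimmed to the dependency itself (resource names are illustrative):

```json
{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2022-03-01",
  "name": "[parameters('vmName')]",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]"
  ]
}
```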
azure-resource-manager Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/syntax.md
Last updated 07/18/2022
This article describes the structure of an Azure Resource Manager template (ARM template). It presents the different sections of a template and the properties that are available in those sections.
-This article is intended for users who have some familiarity with ARM templates. It provides detailed information about the structure of the template. For a step-by-step tutorial that guides you through the process of creating a template, see [Tutorial: Create and deploy your first ARM template](template-tutorial-create-first-template.md). To learn about ARM templates through a guided set of modules on Microsoft Learn, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/).
+This article is intended for users who have some familiarity with ARM templates. It provides detailed information about the structure of the template. For a step-by-step tutorial that guides you through the process of creating a template, see [Tutorial: Create and deploy your first ARM template](template-tutorial-create-first-template.md). To learn about ARM templates through a guided set of Learn modules, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/).
> [!TIP] > Bicep is a new language that offers the same capabilities as ARM templates but with a syntax that's easier to use. If you're considering infrastructure as code options, we recommend looking at Bicep.
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md
To deploy the template spec, you use standard Azure tools like PowerShell, Azure
When designing your deployment, always consider the lifecycle of the resources and group the resources that share a similar lifecycle into a single template spec. For instance, your deployments include multiple instances of Cosmos DB with each instance containing its own databases and containers. Given that the databases and the containers don't change much, you want to create one template spec to include a Cosmos DB instance and its underlying databases and containers. You can then use conditional statements in your templates along with copy loops to create multiple instances of these resources.
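For instance, a hedged Azure CLI sketch that publishes a versioned template spec (the name, version, group, and location are illustrative):

```azurecli
# Publish a versioned template spec (illustrative names).
az ts create \
    --name cosmosSpec \
    --version "1.0" \
    --resource-group templateSpecRG \
    --location westus2 \
    --template-file ./azuredeploy.json
```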
-### Microsoft Learn
+### Training resources
-To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs) on **Microsoft Learn**.
+To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs).
> [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [Azure Resource Manager template specs in Bicep](../bicep/template-specs.md).
azure-resource-manager Template Test Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-test-cases.md
The following example **passes** because `expressionEvaluationOptions` uses `inn
## Next steps - To learn about running the test toolkit, see [Use ARM template test toolkit](test-toolkit.md).-- For a Microsoft Learn module that covers using the test toolkit, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
+- For a Learn module that covers using the test toolkit, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
- To test parameter files, see [Test cases for parameter files](parameters.md). - For createUiDefinition tests, see [Test cases for createUiDefinition.json](createUiDefinition-test-cases.md). - To learn about tests for all files, see [Test cases for all files](all-files-test-cases.md).
azure-resource-manager Template Tutorial Create First Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-first-template.md
This tutorial introduces you to Azure Resource Manager templates (ARM templates)
This tutorial is the first of a series. As you progress through the series, you modify the starting template, step by step, until you explore all of the core parts of an ARM template. These elements are the building blocks for more complex templates. We hope by the end of the series you're confident in creating your own templates and ready to automate your deployments with templates.
-If you want to learn about the benefits of using templates and why you should automate deployments with templates, see [ARM template overview](overview.md). To learn about ARM templates through a guided set of modules on [Microsoft Learn](/learn), see [Deploy and manage resources in Azure by using JSON ARM templates](/learn/paths/deploy-manage-resource-manager-templates).
+If you want to learn about the benefits of using templates and why you should automate deployments with templates, see [ARM template overview](overview.md). To learn about ARM templates through a guided set of [Learn modules](/learn), see [Deploy and manage resources in Azure by using JSON ARM templates](/learn/paths/deploy-manage-resource-manager-templates).
If you don't have a Microsoft Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
azure-resource-manager Template Tutorial Create Multiple Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-multiple-instances.md
This tutorial covers the following tasks:
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-For a Microsoft Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+For a Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
## Prerequisites
azure-resource-manager Template Tutorial Create Templates With Dependent Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-templates-with-dependent-resources.md
This tutorial covers the following tasks:
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-For a Microsoft Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+For a Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
## Prerequisites
azure-resource-manager Template Tutorial Deployment Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-deployment-script.md
This tutorial covers the following tasks:
> * Debug the failed script > * Clean up resources
-For a Microsoft Learn module that covers deployment scripts, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts/).
+For a Learn module that covers deployment scripts, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts/).
## Prerequisites
azure-resource-manager Template Tutorial Use Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-use-conditions.md
This tutorial only covers a basic scenario of using conditions. For more informa
* [Template function: If](./template-functions-logical.md#if). * [Comparison functions for ARM templates](./template-functions-comparison.md)
-For a Microsoft Learn module that covers conditions, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+For a Learn module that covers conditions, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
azure-resource-manager Template Tutorial Use Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-use-key-vault.md
This tutorial covers the following tasks:
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-For a Microsoft Learn module that uses a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+For a Learn module that uses a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
## Prerequisites
azure-resource-manager Test Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/test-toolkit.md
The toolkit contains four sets of tests:
> [!NOTE] > The test toolkit is only available for ARM templates. To validate Bicep files, use the [Bicep linter](../bicep/linter.md).
-### Microsoft Learn
+### Training resources
-To learn more about the ARM template test toolkit, and for hands-on guidance, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test) on **Microsoft Learn**.
+To learn more about the ARM template test toolkit, and for hands-on guidance, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test).
## Install on Windows
The next example shows how to run the tests.
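A hedged PowerShell sketch, assuming you've downloaded and extracted the test toolkit locally (paths are illustrative):

```powershell
# Import the toolkit module, then run all tests against a template folder.
Import-Module .\arm-ttk\arm-ttk.psd1
Test-AzTemplate -TemplatePath .\my-template-folder
```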
- To test parameter files, see [Test cases for parameter files](parameters.md). - For createUiDefinition tests, see [Test cases for createUiDefinition.json](createUiDefinition-test-cases.md). - To learn about tests for all files, see [Test cases for all files](all-files-test-cases.md).-- For a Microsoft Learn module that covers using the test toolkit, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test/).
+- For a Learn module that covers using the test toolkit, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test/).
azure-video-indexer Manage Account Connected To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-account-connected-to-azure.md
Title: Manage an Azure Video Indexer account
-description: Learn how to manage an Azure Video Indexer account connected to Azure.
+ Title: Repair the connection to Azure, check errors/warnings
+description: Learn how to manage an Azure Video Indexer account connected to Azure: repair the connection and examine errors/warnings.
Last updated 01/14/2021
-# Manage an Azure Video Indexer account connected to Azure
+# Repair the connection to Azure, examine errors/warnings
This article demonstrates how to manage an Azure Video Indexer account that's connected to your Azure subscription and an Azure Media Services account.
azure-vmware Backup Azure Netapp Files Datastores Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/backup-azure-netapp-files-datastores-vms.md
+
+ Title: Back up Azure NetApp Files datastores and VMs using Cloud Backup
+description: Learn how to back up datastores and Virtual Machines to the cloud.
++ Last updated : 08/10/2022++
+# Back up Azure NetApp Files datastores and VMs using Cloud Backup for Virtual Machines
+
+From the VMware vSphere client, you can back up datastores and Virtual Machines (VMs) to the cloud.
+
+## Configure subscriptions
+
+Before you back up your Azure NetApp Files datastores, you must add your Azure and Azure NetApp Files cloud subscriptions.
+
+### Add Azure cloud subscription
+
+1. Sign in to the VMware vSphere client.
+2. From the left navigation, select **Cloud Backup for Virtual Machines**.
+3. Select the **Settings** page and then select the **Cloud Subscription** tab.
+4. Select **Add** and then provide the required values from your Azure subscription.
+
+### Add Azure NetApp Files cloud subscription account
+
+1. From the left navigation, select **Cloud Backup for Virtual Machines**.
+2. Select **Storage Systems**.
+3. Select **Add** to add the Azure NetApp Files cloud subscription account details.
+4. Provide the required values and then select **Add** to save your settings.
+
+## Create a backup policy
+
+You must create backup policies before you can use Cloud Backup for Virtual Machines to back up Azure NetApp Files datastores and VMs.
+
+1. In the left navigation of the vCenter web client page, select **Cloud Backup for Virtual Machines** > **Policies**.
+2. On the **Policies** page, select **Create** to initiate the wizard.
+3. On the **New Backup Policy** page, select the vCenter Server that will use the policy, then enter the policy name and a description.
+* **Only alphanumeric characters and underscores (_) are supported in VM, datastore, cluster, policy, backup, or resource group names.** Other special characters are not supported.
+4. Specify the retention settings.
+ The maximum retention value is 255 backups. If the **"Backups to keep"** option is selected during the backup operation, Cloud Backup for Virtual Machines will retain backups with the specified retention count and delete the backups that exceed the retention count.
+5. Specify the frequency settings.
+ The policy specifies the backup frequency only. The specific protection schedule for backing up is defined in the resource group. Therefore, two or more resource groups can share the same policy and backup frequency but have different backup schedules.
+6. **Optional:** In the **Advanced** fields, select the fields that are needed. The Advanced field details are listed in the following table.
+
+ | Field | Action |
+ | - | - |
+ | VM consistency | Check this box to pause the VMs and create a VMware snapshot each time the backup job runs. <br> When you check the VM consistency box, backup operations might take longer and require more storage space. In this scenario, the VMs are first paused, then VMware performs a VM consistent snapshot. Cloud Backup for Virtual Machines then performs its backup operation, and then VM operations are resumed. <br> VM guest memory is not included in VM consistency snapshots. |
+ | Include datastores with independent disks | Check this box to include any datastores with independent disks that contain temporary data in your backup. |
+ | Scripts | Enter the fully qualified path of the prescript or postscript that you want the Cloud Backup for Virtual Machines to run before or after backup operations. For example, you can run a script to update Simple Network Management Protocol (SNMP) traps, automate alerts, and send logs. The script path is validated at the time the script is executed. <br> **NOTE**: Prescripts and postscripts must be located on the virtual appliance VM. To enter multiple scripts, press **Enter** after each script path to list each script on a separate line. The semicolon (;) character is not allowed. |
+7. Select **Add** to save your policy.
+ You can verify that the policy has been created successfully and review the policy configuration by selecting the policy in the **Policies** page.
+
+## Resource groups
+
+A resource group is the container for VMs and datastores that you want to protect.
+
+Do not add VMs in an inaccessible state to a resource group. Although a resource group can contain a VM in an inaccessible state, the inaccessible state will cause backups for the resource group to fail.
+
+### Considerations for resource groups
+
+You can add or remove resources from a resource group at any time.
+* Back up a single resource
+ To back up a single resource (for example, a single VM), you must create a resource group that contains that single resource.
+* Back up multiple resources
+ To back up multiple resources, you must create a resource group that contains multiple resources.
+* Optimize snapshot copies
+ To optimize snapshot copies, group the VMs and datastores that are associated with the same volume into one resource group.
+* Backup policies
+ Although it's possible to create a resource group without a backup policy, you can only perform scheduled data protection operations when at least one policy is attached to the resource group. You can use an existing policy, or you can create a new policy while creating a resource group.
+* Compatibility checks
+ Cloud Backup for VMs performs compatibility checks when you create a resource group. Reasons for incompatibility might be:
+ * Virtual machine disks (VMDKs) are on unsupported storage.
+ * A shared PCI device is attached to a VM.
+ * You have not added the Azure subscription account.
+
+### Create a resource group using the wizard
+
+1. In the left navigation of the vCenter web client page, select **Cloud Backup for Virtual Machines** > **Resource Groups**. Then select **+ Create** to start the wizard.
+
+ :::image type="content" source="./media/cloud-backup/vsphere-create-resource-group.png" alt-text="Screenshot of the vSphere Client Resource Group interface shows a red box highlights a button with a green plus sign that reads Create, instructing you to select this button." lightbox="./media/cloud-backup/vsphere-create-resource-group.png":::
+
+1. On the **General Info & Notification** page in the wizard, enter the required values.
+1. On the **Resource** page, do the following:
+
+ | Field | Action |
+ | -- | -- |
+ | Scope | Select the type of resource you want to protect: <ul><li>Datastores</li><li>Virtual Machines</li></ul> |
+ | Datacenter | Navigate to the VMs or datastores |
+ | Available entities | Select the resources you want to protect. Then select **>** to move your selections to the Selected entities list. |
+
+ When you select **Next**, the system first checks that Cloud Backup for Virtual Machines manages and is compatible with the storage on which the selected resources are located.
+
+ >[!IMPORTANT]
+ >If you receive the message `selected <resource-name> is not Cloud Backup for Virtual Machines compatible` then a selected resource is not compatible with Cloud Backup for Virtual Machines.
+
+1. On the **Spanning disks** page, select an option for VMs with multiple VMDKs across multiple datastores:
+ * Always exclude all spanning datastores
+ (This is the default option for datastores)
+ * Always include all spanning datastores
+ (This is the default for VMs)
+ * Manually select the spanning datastores to be included
+1. On the **Policies** page, select or create one or more backup policies.
+ * To use **an existing policy**, select one or more policies from the list.
+ * To **create a new policy**:
+ 1. Select **+ Create**.
+ 1. Complete the New Backup Policy wizard to return to the Create Resource Group wizard.
+1. On the **Schedules** page, configure the backup schedule for each selected policy.
+ In the **Starting** field, enter a date and time other than zero. The date must be in the format day/month/year. You must fill in each field. The Cloud Backup for Virtual Machines creates schedules in the time zone in which the Cloud Backup for Virtual Machines is deployed. You can modify the time zone by using the Cloud Backup for Virtual Machines GUI.
+
+ :::image type="content" source="./media/cloud-backup/backup-schedules.png" alt-text="A screenshot of the Backup schedules interface showing an hourly backup beginning at 10:22 a.m. on April 26, 2022." lightbox="./media/cloud-backup/backup-schedules.png":::
+1. Review the summary. If you need to change any information, you can return to any page in the wizard to do so. Select **Finish** to save your settings.
+
+ After you select **Finish**, the new resource group will be added to the resource group list.
+
+ If the pause operation fails for any of the VMs in the backup, then the backup is marked as not VM-consistent even if the policy selected has VM consistency selected. In this case, it's possible that some of the VMs were successfully paused.
+
+### Other ways to create a resource group
+
+In addition to using the wizard, you can:
+* **Create a resource group for a single VM:**
+ 1. Select **Menu** > **Hosts and Clusters**.
+ 1. Right-click the Virtual Machine you want to create a resource group for and select **Cloud Backup for Virtual Machines**. Select **+ Create**.
+* **Create a resource group for a single datastore:**
+ 1. Select **Menu** > **Hosts and Clusters**.
+ 1. Right-click a datastore, then select **Cloud Backup for Virtual Machines**. Select **+ Create**.
+
+## Back up resource groups
+
+Backup operations are performed on all the resources defined in a resource group. If a resource group has a policy attached and a schedule configured, backups occur automatically according to the schedule.
+
+### Prerequisites
+
+* You must have created a resource group with a policy attached.
+ Do not start an on-demand backup job when a job to back up the Cloud Backup for Virtual Machines MySQL database is already running. Use the maintenance console to see the configured backup schedule for the MySQL database.
+
+### Back up resource groups on demand
+
+1. In the left navigation of the vCenter web client page, select **Cloud Backup for Virtual Machines** > **Resource Groups**, then select a resource group. Select **Run Now** to start the backup.
+
+ :::image type="content" source="./media/cloud-backup/resource-groups-run-now.png" alt-text="Image of the vSphere Client Resource Group interface. At the top left, a red box highlights a green circular button with a white arrow inside next to text reading Run Now, instructing you to select this button." lightbox="./media/cloud-backup/resource-groups-run-now.png":::
+
    1. If the resource group has multiple policies configured, then in the **Backup Now** dialog box, select the policy you want to use for this backup operation.
+1. Select **OK** to initiate the backup.
+ >[!NOTE]
+ >You can't rename a backup once it is created.
+1. **Optional:** Monitor the operation progress by selecting **Recent Tasks** at the bottom of the window or on the dashboard Job Monitor for more details.
+   If the pause operation fails for any of the VMs in the backup, then the backup completes with a warning and is marked as not VM-consistent even if the selected policy has VM consistency selected. In this case, it's possible that some of the VMs were successfully paused. In the job monitor, the failed VM details will show the pause operation as failed.
+
+## Next steps
+
+* [Restore VMs using Cloud Backup for Virtual Machines](restore-azure-netapp-files-vms.md)
azure-vmware Install Cloud Backup Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-cloud-backup-virtual-machines.md
+
+ Title: Install Cloud Backup for Virtual Machines
+description: Cloud Backup for Virtual Machines is a plug-in installed in the Azure VMware Solution and enables you to back up and restore Azure NetApp Files datastores and virtual machines.
++ Last updated : 08/10/2022++
+# Install Cloud Backup for Virtual Machines
+
+Cloud Backup for Virtual Machines is a plug-in installed in the Azure VMware Solution and enables you to back up and restore Azure NetApp Files datastores and virtual machines (VMs).
+
+Use Cloud Backup for VMs to:
+* Build and securely connect both legacy and cloud-native workloads across environments and unify operations
+* Provision and resize datastore volumes right from the Azure portal
+* Take VM consistent snapshots for quick checkpoints
+* Quickly recover VMs
+
+## Prerequisites
+
+Before you can install Cloud Backup for Virtual Machines, you need to create an Azure service principal with the required Azure NetApp Files privileges. If you've already created one, you can skip to the installation steps below.
+
+## Install Cloud Backup for Virtual Machines using the Azure portal
+
+You'll need to install Cloud Backup for Virtual Machines through the Azure portal as an add-on.
+
+1. Sign in to your Azure VMware Solution private cloud.
+1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Install-NetAppCBSA**.
+
+   :::image type="content" source="./media/cloud-backup/run-command.png" alt-text="Screenshot of the Run command packages in the Azure portal with the NetApp.CBS.AVS package selected." lightbox="./media/cloud-backup/run-command.png":::
+
+1. Provide the required values, then select **Run**.
+
+ :::image type="content" source="./media/cloud-backup/run-commands-fields.png" alt-text="Image of the Run Command fields which are described in the table below." lightbox="./media/cloud-backup/run-commands-fields.png":::
+
+ | Field | Value |
+ | | -- |
+ | ApplianceVirtualMachineName | VM name for the appliance. |
+ | EsxiCluster | Destination ESXi cluster name to be used for deploying the appliance. |
+ | VmDatastore | Datastore to be used for the appliance. |
+ | NetworkMapping | Destination network to be used for the appliance. |
+ | ApplianceNetworkName | Network name to be used for the appliance. |
+ | ApplianceIPAddress | IPv4 address to be used for the appliance. |
+ | Netmask | Subnet mask. |
+ | Gateway | Gateway IP address. |
+ | PrimaryDNS | Primary DNS server IP address. |
+ | ApplianceUser | User Account for hosting API services in the appliance. |
+ | AppliancePassword | Password of the user hosting API services in the appliance. |
+ | MaintenanceUserPassword | Password of the appliance maintenance user. |
+
+ >[!IMPORTANT]
+ >You can also install Cloud Backup for Virtual Machines using DHCP by running the package `NetAppCBSApplianceUsingDHCP`. If you install Cloud Backup for Virtual Machines using DHCP, you don't need to provide the values for the PrimaryDNS, Gateway, Netmask, and ApplianceIPAddress fields. These values will be automatically generated.
+
+1. Check **Notifications** or the **Run Execution Status** tab to see the progress. For more information about the status of the execution, see [Run command in Azure VMware Solution](concepts-run-command.md).
+
+Upon successful execution, the Cloud Backup for Virtual Machines will automatically be displayed in the VMware vSphere client.
+
+## Upgrade Cloud Backup for Virtual Machines
+
+You can execute this run command to upgrade the Cloud Backup for Virtual Machines to the next available version.
+
+>[!IMPORTANT]
+> Before you initiate the upgrade, you must:
+> * Back up the MySQL database of Cloud Backup for Virtual Machines.
+> * Take snapshot copies of Cloud Backup for Virtual Machines.
+
+1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Invoke-UpgradeNetAppCBSAppliance**.
+
+1. Provide the required values, and then select **Run**.
+
+1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress.
+
+## Uninstall Cloud Backup for Virtual Machines
+
+You can execute the run command to uninstall Cloud Backup for Virtual Machines.
+
+> [!IMPORTANT]
> Before you initiate the uninstall, you must:
> * Back up the MySQL database of Cloud Backup for Virtual Machines.
> * Ensure that there are no other VMs assigned the VMware vSphere tag: `AVS_ANF_CLOUD_ADMIN_VM_TAG`. All VMs with this tag will be deleted when you uninstall.
+
+1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Uninstall-NetAppCBSAppliance**.
+
+1. Provide the required values, and then select **Run**.
+
+1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress.
+
+## Change vCenter account password
+
+You can execute this command to reset the vCenter account password:
+
+1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Invoke-ResetNetAppCBSApplianceVCenterPasswordA**.
+
+1. Provide the required values, then select **Run**.
+
+1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress.
+
+## Next steps
+
+* [Back up Azure NetApp Files datastores and VMs using Cloud Backup for Virtual Machines](backup-azure-netapp-files-datastores-vms.md)
+* [Restore VMs using Cloud Backup for Virtual Machines](restore-azure-netapp-files-vms.md)
azure-vmware Install Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-vmware-hcx.md
VMware HCX Advanced and its associated Cloud Manager are no longer pre-deployed
Any edition of VMware HCX supports 25 site pairings (on-premises to cloud or cloud to cloud) in a single HCX manager system. The default is HCX Advanced, but you can open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to have HCX Enterprise Edition enabled. Once the service is generally available, you'll have 30 days to decide on your next steps. You can turn off or opt out of the HCX Enterprise Edition service but keep HCX Advanced as it's part of the node cost.
-Downgrading from HCX Enterprise Edition to HCX Advanced is possible without redeploying. First, ensure you've reverted to an HCX Advanced configuration state and not using the Enterprise features. If you plan to downgrade, ensure that no scheduled migrations, features like RAV and [HCX Mobility Optimized Networking (MON)](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html) are in use, and site pairings are three or fewer.
+Downgrading from HCX Enterprise Edition to HCX Advanced is possible without redeploying. First, ensure you've reverted to an HCX Advanced configuration state and aren't using the Enterprise features. If you plan to downgrade, ensure that no scheduled migrations are pending and that features like RAV and [HCX Mobility Optimized Networking (MON)](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html) aren't in use.
>[!TIP] >You can also [uninstall HCX Advanced](#uninstall-hcx-advanced) through the portal. When you uninstall HCX Advanced, make sure you don't have any active migrations in progress. Removing HCX Advanced returns the resources to your private cloud occupied by the HCX virtual appliances.
azure-vmware Restore Azure Netapp Files Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/restore-azure-netapp-files-vms.md
+
+ Title: Restore VMs using Cloud Backup for Virtual Machines
+description: Learn how to restore virtual machines from a cloud backup to the vCenter.
++ Last updated : 08/10/2022++
+# Restore VMs using Cloud Backup for Virtual Machines
+
+Cloud Backup for Virtual Machines enables you to restore virtual machines (VMs) from the cloud backup to the vCenter.
+
+This topic covers how to:
+* Restore VMs from backups
+* Restore deleted VMs from backups
+* Restore VM disks (VMDKs) from backups
+* Recover the Cloud Backup for Virtual Machines internal database
+
+## Restore VMs from backups
+
+When you restore a VM, you can overwrite the existing content with the backup copy that you select or you can restore to a new VM.
+
+You can restore VMs to the original datastore mounted on the original ESXi host (this overwrites the original VM).
+
+## Prerequisites to restore VMs
+
+* A backup must exist. <br>
+You must have created a backup of the VM using the Cloud Backup for Virtual Machines before you can restore the VM.
+>[!NOTE]
+>Restore operations cannot finish successfully if there are snapshots of the VM that were performed by software other than the Cloud Backup for Virtual Machines.
+* The VM must not be in transit. <br>
+ The VM that you want to restore must not be in a state of vMotion or Storage vMotion.
+* High Availability (HA) configuration errors <br>
+ Ensure there are no HA configuration errors displayed on the vCenter ESXi Host Summary screen before restoring backups to a different location.
+
+### Considerations for restoring VMs from backups
+
+* VM is unregistered and registered again
+ The restore operation for VMs unregisters the original VM, restores the VM from a backup snapshot, and registers the restored VM with the same name and configuration on the same ESXi server. You must manually add the VMs to resource groups after the restore.
+* Restoring datastores
+ You cannot restore a datastore, but you can restore any VM in the datastore.
+* VMware consistency snapshot failures for a VM
+ Even if a VMware consistency snapshot for a VM fails, the VM is nevertheless backed up. You can view the entities contained in the backup copy in the Restore wizard and use it for restore operations.
+
+### Restore a VM from a backup
+
+1. In the VMware vSphere web client GUI, select **Menu** in the toolbar. Select **Inventory** and then **Virtual Machines and Templates**.
+1. In the left navigation, right-click a Virtual Machine, then select **NetApp Cloud Backup**. In the drop-down list, select **Restore** to initiate the wizard.
+1. In the Restore wizard, on the **Select Backup** page, select the backup snapshot copy that you want to restore.
+ > [!NOTE]
+ > You can search for a specific backup name or a partial backup name, or you can filter the backup list by selecting the filter icon and then choosing a date and time range, selecting whether you want backups that contain VMware snapshots, whether you want mounted backups, and the location. Select **OK** to return to the wizard.
+1. On the **Select Scope** page, select **Entire Virtual Machine** in the **Restore scope** field, then select **Restore location**, and then enter the destination ESXi information where the backup should be mounted.
+   When restoring partial backups, the restore operation skips the **Select Scope** page.
+1. Select the **Restart VM** checkbox if you want the VM to be powered on after the restore operation.
+1. On the **Select Location** page, select the location for the primary or secondary location.
+1. Review the **Summary** page and then select **Finish**.
+1. **Optional:** Monitor the operation progress by selecting Recent Tasks at the bottom of the screen.
+1. Although the VMs are restored, they are not automatically added to their former resource groups. Therefore, you must manually add the restored VMs to the appropriate resource groups.
+
+## Restore deleted VMs from backups
+
+You can restore a deleted VM from a datastore primary or secondary backup to an ESXi host that you select. You can also restore VMs to the original datastore mounted on the original ESXi host, which creates a clone of the VM.
+
+## Prerequisites to restore deleted VMs
+
+* You must have added the Azure cloud Subscription account.
+ The user account in vCenter must have the minimum vCenter privileges required for Cloud Backup for Virtual Machines.
+* A backup must exist.
+ You must have created a backup of the VM using the Cloud Backup for Virtual Machines before you can restore the VMDKs on that VM.
+
+### Considerations for restoring deleted VMs
+
+You cannot restore a datastore, but you can restore any VM in the datastore.
+
+### Restore deleted VMs
+
+1. Select **Menu** and then select the **Inventory** option.
+1. Select a datastore, then select the **Configure** tab, and then select **Backups** in the **Cloud Backup for Virtual Machines** section.
+1. Select (double-click) a backup to see a list of all VMs that are included in the backup.
+1. Select the deleted VM from the backup list and then select **Restore**.
+1. On the **Select Scope** page, select **Entire Virtual Machine** in the **Restore scope field**, then select the restore location, and then enter the destination ESXi information where the backup should be mounted.
+1. Select the **Restart VM** checkbox if you want the VM to be powered on after the restore operation.
+1. On the **Select Location** page, select the location of the backup that you want to restore to.
+1. Review the **Summary** page, then select **Finish**.
+
+## Restore VMDKs from backups
+
+You can restore existing VMDKs or deleted or detached VMDKs from either a primary or secondary backup. You can restore one or more VMDKs on a VM to the same datastore.
+
+## Prerequisites to restore VMDKs
+
+* A backup must exist.
+ You must have created a backup of the VM using the Cloud Backup for Virtual Machines.
+* The VM must not be in transit.
+ The VM that you want to restore must not be in a state of vMotion or Storage vMotion.
+
+### Considerations for restoring VMDKs
+
+* If the VMDK is deleted or detached from the VM, then the restore operation attaches the VMDK to the VM.
+* Attach and restore operations connect VMDKs using the default SCSI controller. VMDKs that are attached to a VM with an NVMe controller are backed up, but for attach and restore operations they are connected back using a SCSI controller.
+
+### Restore VMDKs
+
+1. In the VMware vSphere web client GUI, select **Menu** in the toolbar. Select **Inventory**, then **Virtual Machines and Templates**.
+1. In the left navigation, right-click a VM and select **NetApp Cloud Backup**. In the drop-down list, select **Restore**.
+1. In the Restore wizard, on the **Select Backup** page, select the backup copy from which you want to restore. To find the backup, do one of the following options:
+ * Search for a specific backup name or a partial backup name
+   * Filter the backup list by selecting the filter icon and a date and time range. Select whether you want backups that contain VMware snapshots, whether you want mounted backups, and the primary location.
+ Select **OK** to return to the wizard.
+1. On the **Select Scope** page, select **Particular virtual disk** in the Restore scope field, then select the virtual disk and destination datastore.
+1. On the **Select Location** page, select the snapshot copy that you want to restore.
+1. Review the **Summary** page and then select **Finish**.
+1. **Optional:** Monitor the operation progress by clicking Recent Tasks at the bottom of the screen.
+
+## Recovery of Cloud Backup for Virtual Machines internal database
+
+You can use the maintenance console to restore a specific backup of the MySQL database (also called an NSM database) for Cloud Backup for Virtual Machines.
+
+1. Open a maintenance console window.
+1. From the main menu, enter option **1) Application Configuration**.
+1. From the Application Configuration menu, enter option **6) MySQL backup and restore**.
+1. From the MySQL Backup and Restore Configuration menu, enter option **2) List MySQL backups**. Make note of the backup you want to restore.
+1. From the MySQL Backup and Restore Configuration menu, enter option **3) Restore MySQL backup**.
+1. At the prompt "Restore using the most recent backup," enter **n**.
+1. At the prompt "Backup to restore from," enter the backup name, and then press **Enter**.
+ The selected backup MySQL database will be restored to its original location.
+
+If you need to change the MySQL database backup configuration, you can modify:
+* The backup location (the default is: `/opt/netapp/protectionservice/mysqldumps`)
+* The number of backups kept (the default value is three)
+* The time of day the backup is recorded (the default value is 12:39 a.m.)
+
+1. Open a maintenance console window.
+1. From the main menu, enter option **1) Application Configuration**.
+1. From the Application Configuration menu, enter option **6) MySQL backup and restore**.
+1. From the MySQL Backup & Restore Configuration menu, enter option **1) Configure MySQL backup**.
++
+ :::image type="content" source="./media/cloud-backup/mysql-backup-configuration.png" alt-text="Screenshot of the CLI maintenance menu depicting menu options." lightbox="./media/cloud-backup/mysql-backup-configuration.png":::
azure-web-pubsub Reference Server Sdk Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-js.md
You can use this library in your app server side to manage the WebSocket client
[Source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub) | [Package (NPM)](https://www.npmjs.com/package/@azure/web-pubsub) |
-[API reference documentation](/javascript/api/overview/azure/webpubsub) |
+[API reference documentation](/javascript/api/overview/azure/web-pubsub) |
[Product documentation](./index.yml) | [Samples][samples_ref]
Use **Live Trace** from the Web PubSub service portal to view the live traffic.
## Next steps
backup Azure Backup Architecture For Sap Hana Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-architecture-for-sap-hana-backup.md
Title: Azure Backup Architecture for SAP HANA Backup description: Learn about Azure Backup architecture for SAP HANA backup. Previously updated : 09/27/2021- Last updated : 08/11/2022+++ # Azure Backup architecture for SAP HANA backup
Refer to the following SAP HANA setups and see the execution of backup operation
## Next steps
-[Back up SAP HANA databases in Azure VMs](./backup-azure-sap-hana-database.md).
+- Learn about the supported configurations and scenarios in the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md).
+- Learn about how to [back up SAP HANA databases in Azure VMs](./backup-azure-sap-hana-database.md).
backup Backup Azure Sap Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database.md
In this article, you'll learn how to:
> * Run an on-demand backup job >[!NOTE]
-Refer to the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to know more about the supported configurations and scenarios.
+>See the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to know more about the supported configurations and scenarios.
## Prerequisites
backup Backup Azure Sql Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-backup-cli.md
Title: Back up SQL server databases in Azure VMs using Azure Backup via CLI description: Learn how to use CLI to back up SQL server databases in Azure VMs in the Recovery Services vault. Previously updated : 07/07/2022 Last updated : 08/11/2022
az backup protection auto-enable-for-azurewl --resource-group SQLResourceGroup \
To trigger an on-demand backup, use the [az backup protection backup-now](/cli/azure/backup/protection#az-backup-protection-backup-now) command. >[!NOTE]
->The retention policy of an on-demand backup is determined by the underlying retention policy for the database.
+>The retention period of this backup is determined by the type of on-demand backup you have run.
+>
+>- *On-demand full* retains backups for a minimum of *45 days* and a maximum of *99 years*.
+>- *On-demand copy-only full* accepts any value for retention.
+>- *On-demand differential* retains backup as per the retention of scheduled differentials set in policy.
+>- *On-demand log* retains backups as per the retention of scheduled logs set in policy.
```azurecli-interactive az backup protection backup-now --resource-group SQLResourceGroup \
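A fuller hedged sketch of an on-demand full backup (the vault, container, and item names are illustrative and follow the AzureWorkload naming scheme):

```azurecli-interactive
# Illustrative names; replace with your vault, container, and item values.
az backup protection backup-now --resource-group SQLResourceGroup \
    --vault-name SQLVault \
    --container-name "VMAppContainer;Compute;SQLResourceGroup;testSQLVM" \
    --item-name "sqldatabase;mssqlserver;master" \
    --backup-management-type AzureWorkload \
    --backup-type Full \
    --retain-until 01-01-2030 \
    --output table
```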
backup Backup Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-database.md
Title: Back up SQL Server databases to Azure description: This article explains how to back up SQL Server to Azure. The article also explains SQL Server recovery. Previously updated : 08/20/2021 Last updated : 08/11/2022 # About SQL Server Backup in Azure VMs
Last updated 08/20/2021
>[!Note] >Snapshot-based backup for SQL databases in Azure VM is now in preview. This unique offering combines the goodness of snapshots, leading to a better RTO and low impact on the server along with the benefits of frequent log backups for low RPO. For any queries/access, write to us at [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com).
-To view the backup and restore scenarios that we support today, refer to the [support matrix](sql-support-matrix.md#scenario-support).
+To view the backup and restore scenarios that we support today, see the [support matrix](sql-support-matrix.md#scenario-support).
## Backup process
backup Backup Azure Sql Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-manage-cli.md
Title: Manage SQL server databases in Azure VMs using Azure Backup via CLI description: Learn how to use CLI to manage SQL server databases in Azure VMs in the Recovery Services vault. Previously updated : 07/07/2022 Last updated : 08/11/2022
If you've used [Back up an SQL database in Azure using CLI](backup-azure-sql-bac
Azure CLI eases the process of managing an SQL database running on an Azure VM that's backed-up using Azure Backup. The following sections describe each of the management operations.
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to know more about the supported configurations and scenarios.
+ ## Monitor backup and restore jobs Use the [az backup job list](/cli/azure/backup/job#az-backup-job-list) command to monitor completed or currently running jobs (backup or restore). CLI also allows you to [suspend a currently running job](/cli/azure/backup/job#az-backup-job-stop) or [wait until a job completes](/cli/azure/backup/job#az-backup-job-wait).
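For example, a hedged sketch listing recent jobs in table form (the vault name is illustrative):

```azurecli-interactive
# Illustrative vault name.
az backup job list --resource-group SQLResourceGroup \
    --vault-name SQLVault \
    --output table
```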
backup Backup Azure Sql Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-restore-cli.md
Title: Restore SQL server databases in Azure VMs using Azure Backup via CLI description: Learn how to use CLI to restore SQL server databases in Azure VMs in the Recovery Services vault. Previously updated : 07/15/2022 Last updated : 08/11/2022
This article assumes you have an SQL database running on Azure VM that's backed-up
* Backed-up database/item named *sqldatabase;mssqlserver;master* * Resources in the *westus2* region
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## View restore points for a backed-up database To view the list of all recovery points for a database, use the [az backup recoverypoint list](/cli/azure/backup/recoverypoint#az-backup-recoverypoint-show-log-chain) command as:
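A sketch of the call, reusing the backed-up item assumed in this article (the container name is illustrative):

```azurecli-interactive
# List all recovery points for the backed-up database
az backup recoverypoint list --resource-group SQLResourceGroup \
    --vault-name SQLVault \
    --container-name "VMAppContainer;Compute;SQLResourceGroup;testSQLVM" \
    --item-name "sqldatabase;mssqlserver;master" \
    --output table
```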
Name Operation Status Item Name
- - -- -- 0d863259-b0fb-4935-8736-802c6667200b CrossRegionRestore InProgress master [testSQLVM] AzureWorkload 2022-06-21T08:29:24.919138+00:00 0:00:12.372421 ```
+>[!Note]
+>The RPO for the backup data to be available in the secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours + log frequency duration (which can be set to a minimum of 15 minutes).
## Restore as files
backup Backup Azure Sql Vm Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-vm-rest-api.md
Title: Back up SQL server databases in Azure VMs using Azure Backup via REST API description: Learn how to use REST API to back up SQL server databases in Azure VMs in the Recovery Services vault Previously updated : 11/30/2021 Last updated : 08/11/2022
This article describes how to back up SQL server databases in Azure VMs using Azure Backup via REST API.
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Prerequisites - A Recovery Services vault
Once you configure a database for backup, backups run according to the policy sc
Triggering an on-demand backup is a *POST* operation.
+>[!Note]
+>The retention period of this backup is determined by the type of on-demand backup you have run.
+>
+>- *On-demand full* retains backups for a minimum of *45 days* and a maximum of *99 years*.
+>- *On-demand copy-only full* accepts any value for retention.
+>- *On-demand differential* retains backups as per the retention of scheduled differentials set in the policy.
+>- *On-demand log* retains backups as per the retention of scheduled logs set in the policy.
+ ```http POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}/backup?api-version=2016-12-01 ```
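The request body names the type of backup to take. A minimal sketch for an on-demand full backup; the expiry timestamp is an illustrative value:

```json
{
  "properties": {
    "objectType": "AzureWorkloadBackupRequest",
    "backupType": "Full",
    "enableCompression": false,
    "recoveryPointExpiryTimeInUTC": "2023-01-01T00:00:00.000Z"
  }
}
```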
backup Backup Sql Server Database Azure Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-database-azure-vms.md
Title: Back up multiple SQL Server VMs from the vault description: In this article, learn how to back up SQL Server databases on Azure virtual machines with Azure Backup from the Recovery Services vault Previously updated : 06/01/2022 Last updated : 08/11/2022
In this article, you'll learn how to:
> * Discover databases and set up backups. > * Set up auto-protection for databases.
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Prerequisites Before you back up a SQL Server database, check the following criteria:
backup Backup Sql Server On Availability Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-on-availability-groups.md
Title: Back up SQL Server always on availability groups description: In this article, learn how to back up SQL Server on availability groups. Previously updated : 08/20/2021 Last updated : 08/11/2022 # Back up SQL Server always on availability groups Azure Backup offers an end-to-end support for backing up SQL Server always on availability groups (AG) if all nodes are in the same region and subscription as the Recovery Services vault. However, if the AG nodes are spread across regions/subscriptions/on-premises and Azure, there are a few considerations to keep in mind. >[!Note]
->Backup of Basic Availability Group databases is not supported by Azure Backup.
+>- Backup of Basic Availability Group databases is not supported by Azure Backup.
+>- See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
Azure Backup for SQL AG supports full and differential backups only from the primary replica. So, these backup jobs always run on the primary node irrespective of the backup preference. For copy-only full and transaction log backups, the AG backup preference is considered when deciding the node where the backup will run.
The backup preference used by Azure Backup SQL AG supports full and differential
The workload backup extension gets installed on the node when it is registered with the Azure Backup service. When an AG database is configured for backup, the backup schedules are pushed to all the registered nodes of the AG. The schedules fire on all the AG nodes and the workload backup extensions on these nodes synchronize between themselves to decide which node will perform the backup. The node selection depends on the backup type and the backup preference as explained in section 1.
-The selected node proceeds with the backup job, whereas the job triggered on the other nodes bail out, that is, it skips the job.
+The selected node proceeds with the backup job, whereas the job triggered on the other nodes bails out, that is, it skips the job.
>[!Note] >Azure Backup doesn't consider backup priorities or replicas while deciding among the secondary replicas.
Let's consider the following AG deployment as a reference.
:::image type="content" source="./media/backup-sql-server-on-availability-groups/ag-deployment.png" alt-text="Diagram for AG deployment as reference.":::
-Taking the above sample AG deployment, following are various considerations:
+Based on the above sample AG deployment, the following considerations apply:
- As the primary node is in region 1 and subscription 1, the Recovery Services vault (Vault 1) must be in Region 1 and Subscription 1 for protecting this AG. - VM3 can't be registered to Vault 1 as it's in a different subscription.
After the AG has failed over to one of the secondary nodes:
>[!Note] >Log chain breaks do not happen on failover if the failover doesn't coincide with a backup.
-Taking the above sample AG deployment, following are the various failover possibilities:
+Based on the above sample AG deployment, the following are the possible failover scenarios:
- Failover to VM2 - Full and differential backups will happen from VM2.
Taking the above sample AG deployment, following are the various failover possib
Recovery services vault doesn't support cross-subscription or cross-region backups. This section summarizes how to enable backups for AGs that are spanning subscriptions or Azure regions and the associated considerations. -- Evaluate if you really need to enable backups from all nodes. If one region/subscription has most of the AG nodes and failover to other nodes happens very rarely, setting up backup in that first region may be enough. If the failovers to other region/subscription happen frequently and for prolonged duration, then you may want to setup backups proactively in the other region as well.
+- Evaluate if you really need to enable backups from all nodes. If one region/subscription has most of the AG nodes and failover to other nodes happens very rarely, setting up the backup in that first region may be enough. If the failovers to the other region/subscription happen frequently and for a prolonged duration, then you may want to set up backups proactively in the other region as well.
- Each vault where the backup gets enabled will have its own set of recovery point chains. Restores from these recovery points can be done to VMs registered in that vault only.
Recovery services vault doesn't support cross-subscription or cross-region bac
To avoid log backup conflicts between the two vaults, we recommend that you set the backup preference to Primary. Then, whichever vault has the primary node will also take the log backups.
-Taking the above sample AG deployment, here are the steps to enable backup from all the nodes. The assumption is that backup preference is satisfied in all the steps.
+Based on the above sample AG deployment, here are the steps to enable backup from all the nodes. The assumption is that backup preference is satisfied in all the steps.
### Step 1: Enable backups in Region 1, Subscription 1 (Vault 1)
For example, the first node has 50 standalone databases protected and both the n
As the AG database jobs are queued on one node and running on another, the backup synchronization (mentioned in section 6) won't work properly. Node 2 might assume that Node 1 is down and therefore jobs from there aren't coming up for synchronization. This can lead to log chain breaks or extra backups as both nodes can take backups independently.
-Similar problem can happen if the number of AG databases protected are more than the throttling limit. In such case, backup for, say, DB1 can be queued on Node 1 whereas it runs on Node 2.
+A similar problem can happen if the number of protected AG databases is more than the throttling limit. In such a case, the backup for, say, DB1 can be queued on Node 1 whereas it runs on Node 2.
We recommend that you use the following backup preferences to avoid these synchronization issues:
backup Backup Sql Server Vm From Vm Pane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-vm-from-vm-pane.md
Title: Back up a SQL Server VM from the VM pane description: In this article, learn how to back up SQL Server databases on Azure virtual machines from the VM pane. Previously updated : 08/13/2020 Last updated : 08/11/2022 # Back up a SQL Server from the VM pane
This article explains how to back up SQL Server running in Azure VMs with the [A
2. Get an [overview](backup-azure-sql-database.md) of Azure Backup for SQL Server VM. 3. Verify that the VM has [network connectivity](backup-sql-server-database-azure-vms.md#establish-network-connectivity).
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Configure backup on the SQL Server You can enable backup on your SQL Server VM from the **Backup** pane in the VM. This method does two things:
backup Manage Azure Sql Vm Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-azure-sql-vm-rest-api.md
Title: Manage SQL server databases in Azure VMs with REST API description: Learn how to use REST API to manage and monitor SQL server databases in Azure VM that are backed up by Azure Backup. Previously updated : 11/29/2021 Last updated : 08/11/2022
This article explains how to manage and monitor the SQL server databases that are backed-up by [Azure Backup](backup-overview.md).
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn about the supported configurations and scenarios.
+ ## Monitor jobs The Azure Backup service triggers jobs that run in the background. This includes scenarios such as triggering backup, restore operations, and disabling backup. You can track these jobs using their IDs.
backup Manage Monitor Sql Database Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-monitor-sql-database-backup.md
Title: Manage and monitor SQL Server DBs on an Azure VM description: This article describes how to manage and monitor SQL Server databases that are running on an Azure VM. Previously updated : 01/20/2022 Last updated : 08/11/2022
This article describes common tasks for managing and monitoring SQL Server datab
If you haven't yet configured backups for your SQL Server databases, see [Back up SQL Server databases on Azure VMs](backup-azure-sql-database.md)
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Monitor backup jobs in the portal Azure Backup shows all scheduled and on-demand operations under **Backup jobs** in **Backup center** in the Azure portal, except the scheduled log backups since they can be very frequent. The jobs you see in this portal include database discovery and registration, backup configuration, and backup and restore operations.
You can run different types of on-demand backups:
- Differential backup - Log backup
-While you need to specify the retention duration for Copy-only full backup, the retention range for on-demand full backup will automatically be set to 45 days from current time.
+>[!Note]
+>The retention period of this backup is determined by the type of on-demand backup you have run.
+>
+>- *On-demand full* retains backups for a minimum of *45 days* and a maximum of *99 years*.
+>- *On-demand copy-only full* accepts any value for retention.
+>- *On-demand differential* retains backups as per the retention of scheduled differentials set in the policy.
+>- *On-demand log* retains backups as per the retention of scheduled logs set in the policy.
For more information, see [SQL Server backup types](backup-architecture.md#sql-server-backup-types).
backup Restore Azure Sql Vm Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-sql-vm-rest-api.md
Title: Restore SQL server databases in Azure VMs with REST API description: Learn how to use REST API to restore SQL server databases in Azure VM from a restore point created by Azure Backup Previously updated : 11/30/2021 Last updated : 08/11/2022
By the end of this article, you'll learn how to perform the following operations
- View the restore points for a backed-up SQL database. - Restore a full SQL database.
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Prerequisites We assume that you have a backed-up SQL database for restore. If you don't have one, see [Backup SQL Server databases in Azure VMs using REST API](backup-azure-sql-vm-rest-api.md) to create one.
If you've enabled Cross-region restore, then the recovery points will be replica
1. Choose a target server, which is registered to a vault within the secondary paired region. 1. Trigger restore to that server and track it using *JobId*.
+>[!Note]
+>The RPO for the backup data to be available in the secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours + log frequency duration (which can be set to a minimum of 15 minutes).
+ ### Fetch distinct recovery points from the secondary region Use the [List Recovery Points API](/rest/api/backup/recovery-points-crr/list) to fetch the list of available recovery points for the database in the secondary region. In the following example, an optional filter is applied to fetch full and differential recovery points in a given time range.
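A sketch of such a request; the bracketed path segments are placeholders, and the filter dates are illustrative (check the API reference for the current `api-version`):

```http
GET https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}/recoveryPoints?api-version=2018-12-20&$filter=restorePointQueryType eq 'FullAndDifferential' and startDate eq '2022-08-01 00:00:00 AM' and endDate eq '2022-08-10 00:00:00 AM'
```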
backup Restore Sql Database Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-sql-database-azure-vm.md
Title: Restore SQL Server databases on an Azure VM description: This article describes how to restore SQL Server databases that are running on an Azure VM and that are backed up with Azure Backup. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 07/15/2021 Last updated : 08/11/2022
This article describes how to restore a SQL Server database that's running on an
This article describes how to restore SQL Server databases. For more information, see [Back up SQL Server databases on Azure VMs](backup-azure-sql-database.md).
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Restore to a time or a recovery point Azure Backup can restore SQL Server databases that are running on Azure VMs as follows:
For example, when you have a backup policy of weekly fulls, daily differentials and
#### Excluding backup file types
-The **ExtensionSettingOverrides.json** is a JSON (JavaScript Object Notation) file, that contains overrides for multiple settings of the Azure Backup service for SQL. For "Partial Restore as files" operation, a new JSON field ``` RecoveryPointsToBeExcludedForRestoreAsFiles ``` must be added. This field holds a string value that denotes which recovery point types should be excluded in the next restore as files operation.
+The **ExtensionSettingOverrides.json** is a JSON (JavaScript Object Notation) file that contains overrides for multiple settings of the Azure Backup service for SQL. For the "Partial Restore as files" operation, a new JSON field `RecoveryPointsToBeExcludedForRestoreAsFiles` must be added. This field holds a string value that denotes which recovery point types should be excluded in the next restore as files operation.
1. In the target machine where files are to be downloaded, go to "C:\Program Files\Azure Workload Backup\bin" folder 2. Create a new JSON file named "ExtensionSettingOverrides.JSON", if it doesn't already exist.
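A sketch of what the file's contents could look like; the exclusion value shown is a hypothetical example, set it to whichever recovery point types you want skipped:

```json
{
  "RecoveryPointsToBeExcludedForRestoreAsFiles": "ExcludeFull"
}
```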
The secondary region restore user experience will be similar to the primary regi
>[!NOTE] >- After the restore is triggered and in the data transfer phase, the restore job can't be cancelled. >- The role/access level required to perform the restore operation across regions is the _Backup Operator_ role in the subscription and _Contributor (write)_ access on the source and target virtual machines. To view backup jobs, _Backup reader_ is the minimum permission required in the subscription.
+>- The RPO for the backup data to be available in the secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours + log frequency duration (which can be set to a minimum of 15 minutes).
### Monitoring secondary region restore jobs
backup Sap Hana Db About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-db-about.md
Title: About SAP HANA database backup in Azure VMs description: In this article, learn about backing up SAP HANA databases that are running on Azure virtual machines. Previously updated : 09/27/2021 Last updated : 08/11/2022 # About SAP HANA database backup in Azure VMs
Using Azure Backup to back up and restore SAP HANA databases gives the followin
* **Long-term retention**: For rigorous compliance and audit needs. Retain your backups for years, based on the retention duration, beyond which the recovery points will be pruned automatically by the built-in lifecycle management capability. * **Backup Management from Azure**: Use Azure Backup's management and monitoring capabilities for improved management experience. Azure CLI is also supported.
-To view the backup and restore scenarios that we support today, refer to the [SAP HANA scenario support matrix](./sap-hana-backup-support-matrix.md#scenario-support).
+To view the backup and restore scenarios that we support today, see the [SAP HANA scenario support matrix](./sap-hana-backup-support-matrix.md#scenario-support).
## Backup architecture
backup Sap Hana Db Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-db-manage.md
Title: Manage backed up SAP HANA databases on Azure VMs description: In this article, learn common tasks for managing and monitoring SAP HANA databases that are running on Azure virtual machines. Previously updated : 08/09/2022 Last updated : 08/11/2022
This article describes common tasks for managing and monitoring SAP HANA databas
If you haven't configured backups yet for your SAP HANA databases, see [Back up SAP HANA databases on Azure VMs](./backup-azure-sap-hana-database.md).
+>[!Note]
+>See the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Monitor manual backup jobs in the portal Azure Backup shows all manually triggered jobs in the **Backup jobs** section in **Backup center**.
backup Sap Hana Db Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-db-restore.md
Title: Restore SAP HANA databases on Azure VMs description: In this article, discover how to restore SAP HANA databases that are running on Azure Virtual Machines. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 07/15/2022 Last updated : 08/11/2022
This article describes how to restore SAP HANA databases running on an Azure Vir
For more information on how to back up SAP HANA databases, see [Back up SAP HANA databases on Azure VMs](./backup-azure-sap-hana-database.md).
+>[!Note]
+>See the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Restore to a point in time or to a recovery point Azure Backup can restore SAP HANA databases that are running on Azure VMs as follows:
For example, when you have a backup policy of weekly fulls, daily differentials and
#### Excluding backup file types
-The **ExtensionSettingOverrides.json** is a JSON (JavaScript Object Notation) file, that contains overrides for multiple settings of the Azure Backup service for SQL. For "Partial Restore as files" operation, a new JSON field ``` RecoveryPointsToBeExcludedForRestoreAsFiles ``` must be added. This field holds a string value that denotes which recovery point types should be excluded in the next restore as files operation.
+The **ExtensionSettingOverrides.json** is a JSON (JavaScript Object Notation) file that contains overrides for multiple settings of the Azure Backup service for SQL. For the "Partial Restore as files" operation, a new JSON field `RecoveryPointsToBeExcludedForRestoreAsFiles` must be added. This field holds a string value that denotes which recovery point types should be excluded in the next restore as files operation.
1. In the target machine where files are to be downloaded, go to "opt/msawb/bin" folder 2. Create a new JSON file named "ExtensionSettingOverrides.JSON", if it doesn't already exist.
The secondary region restore user experience will be similar to the primary regi
>[!NOTE] >* After the restore is triggered and in the data transfer phase, the restore job can't be cancelled. >* The role/access level required to perform the restore operation across regions is the _Backup Operator_ role in the subscription and _Contributor (write)_ access on the source and target virtual machines. To view backup jobs, _Backup reader_ is the minimum permission required in the subscription.
+>* The RPO for the backup data to be available in the secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours + log frequency duration (which can be set to a minimum of 15 minutes).
### Monitoring secondary region restore jobs
backup Tutorial Sap Hana Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-backup-cli.md
Title: Tutorial - SAP HANA DB backup on Azure using Azure CLI description: In this tutorial, learn how to back up SAP HANA databases running on an Azure VM to an Azure Backup Recovery Services vault using Azure CLI. Previously updated : 07/22/2022 Last updated : 08/11/2022
To get container name, run the following command. [Learn about this CLI command]
While the section above details how to configure a scheduled backup, this section talks about triggering an on-demand backup. To do this, we use the [az backup protection backup-now](/cli/azure/backup/protection#az-backup-protection-backup-now) command. >[!NOTE]
-> By default, the retention of on-demand backups is set to 45 days.
+>The retention period of this backup is determined by the type of on-demand backup you have run.
+>- *On-demand full backups* are retained for a minimum of *45 days* and a maximum of *99 years*.
+>- *On-demand differential backups* are retained as per the *log retention set in the policy*.
+>- *On-demand incremental backups* aren't currently supported.
```azurecli-interactive az backup protection backup-now --resource-group saphanaResourceGroup \
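Completed with illustrative vault, container, and item names, the command might look like this sketch; keep the retention limits from the note above in mind:

```azurecli-interactive
# Trigger an on-demand full backup of the HANA database (names are illustrative)
az backup protection backup-now --resource-group saphanaResourceGroup \
    --vault-name saphanaVault \
    --container-name "VMAppContainer;Compute;saphanaResourceGroup;saphanaVM" \
    --item-name "saphanadatabase;hxe;hxe" \
    --backup-type Full \
    --output table
```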
backup Tutorial Sap Hana Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-manage-cli.md
Title: 'Tutorial: Manage backed-up SAP HANA DB using CLI' description: In this tutorial, learn how to manage backed-up SAP HANA databases running on an Azure VM using Azure CLI. Previously updated : 12/4/2019 Last updated : 08/11/2022
If you've used [Back up an SAP HANA database in Azure using CLI](tutorial-sap-ha
Azure CLI makes it easy to manage an SAP HANA database running on an Azure VM that's backed up using Azure Backup. This tutorial details each of the management operations.
+>[!Note]
+>See the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Monitor backup and restore jobs To monitor completed or currently running jobs (backup or restore), use the [az backup job list](/cli/azure/backup/job#az-backup-job-list) command. CLI also allows you to [suspend a currently running job](/cli/azure/backup/job#az-backup-job-stop) or [wait until a job completes](/cli/azure/backup/job#az-backup-job-wait).
backup Tutorial Sap Hana Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-restore-cli.md
Title: Tutorial - SAP HANA DB restore on Azure using CLI description: In this tutorial, learn how to restore SAP HANA databases running on an Azure VM from an Azure Backup Recovery Services vault using Azure CLI. Previously updated : 12/23/2021 Last updated : 08/11/2022
This tutorial assumes you have an SAP HANA database running on Azure VM that's b
* Backed-up database/item named *saphanadatabase;hxe;hxe* * Resources in the *westus2* region
+>[!Note]
+>See the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## View restore points for a backed-up database To view the list of all the recovery points for a database, use the [az backup recoverypoint list](/cli/azure/backup/recoverypoint#az-backup-recoverypoint-show-log-chain) command as follows:
Name Operation Status Item Name
00000000-0000-0000-0000-000000000000 CrossRegionRestore InProgress H10 [hanasnapcvt01] AzureWorkload 2021-12-22T05:21:34.165617+00:00 0:00:05.665470 ```
+>[!Note]
+>The RPO for the backup data to be available in the secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours + log frequency duration (which can be set to a minimum of 15 minutes).
+ ## Restore as files To restore the backup data as files instead of a database, we'll use **RestoreAsFiles** as the restore mode. Then choose the restore point, which can either be a previous point-in-time or any of the previous restore points. Once the files are dumped to a specified path, you can take these files to any SAP HANA machine where you want to restore them as a database. Because you can move these files to any machine, you can now restore the data across subscriptions and regions.
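A sketch of generating such a restore configuration with the CLI; the vault, container, recovery point, and file path values are illustrative:

```azurecli-interactive
# Build a RestoreAsFiles recovery configuration for a chosen recovery point
az backup recoveryconfig show --resource-group saphanaResourceGroup \
    --vault-name saphanaVault \
    --container-name "VMAppContainer;Compute;saphanaResourceGroup;saphanaVM" \
    --item-name "saphanadatabase;hxe;hxe" \
    --restore-mode RestoreAsFiles \
    --rp-name 62640091676331 \
    --filepath /home/sap/restoredfiles \
    --output json
```

The resulting configuration can then be passed to `az backup restore restore-azurewl` to start the restore job.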
certification Program Requirements Edge Secured Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/program-requirements-edge-secured-core.md
Validation|Device to be validated through toolset to ensure the device supports
|Requirements dependency|HVCI is enabled on the device.| |Validation Type|Manual/Tools| |Validation|Device to be validated through [Edge Secured-core Agent](https://aka.ms/Scforwiniot) toolset to ensure that HVCI is enabled on the device.|
-|Resources|https://docs.microsoft.com/windows-hardware/design/device-experiences/oem-hvci-enablement|
+|Resources| [Hypervisor-protected Code Integrity enablement](/windows-hardware/design/device-experiences/oem-hvci-enablement) |
</br>
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 8/3/2022 Last updated : 8/11/2022 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## August 2022 Guest OS
+
+>[!NOTE]
+>The August Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the August Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| --- | --- | --- | --- | --- |
+| Rel 22-08 | [5016623] | Latest Cumulative Update (LCU) | 6.45 | Aug 9, 2022 |
+| Rel 22-08 | [5016618] | IE Cumulative Updates | 2.127, 3.117, 4.105 | Aug 9, 2022 |
+| Rel 22-08 | [5016627] | Latest Cumulative Update (LCU) | 7.15 | Aug 9, 2022 |
+| Rel 22-08 | [5016622] | Latest Cumulative Update (LCU) | 5.71 | Aug 9, 2022 |
+| Rel 22-08 | [5013637] | .NET Framework 3.5 Security and Quality Rollup | 2.127 | Aug 9, 2022 |
+| Rel 22-08 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup | 2.127 | May 10, 2022 |
+| Rel 22-08 | [5013638] | .NET Framework 3.5 Security and Quality Rollup | 4.107 | Jun 14, 2022 |
+| Rel 22-08 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup | 4.107 | May 10, 2022 |
+| Rel 22-08 | [5013635] | .NET Framework 3.5 Security and Quality Rollup | 3.114 | Aug 9, 2022 |
+| Rel 22-08 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup | 3.114 | May 10, 2022 |
+| Rel 22-08 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.47 | May 10, 2022 |
+| Rel 22-08 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | 7.15 | May 10, 2022 |
+| Rel 22-08 | [5016676] | Monthly Rollup | 2.127 | Aug 9, 2022 |
+| Rel 22-08 | [5016672] | Monthly Rollup | 3.114 | Aug 9, 2022 |
+| Rel 22-08 | [5016681] | Monthly Rollup | 4.107 | Aug 9, 2022 |
+| Rel 22-08 | [5016263] | Servicing Stack update | 3.114 | Jul 12, 2022 |
+| Rel 22-08 | [5016264] | Servicing Stack update | 4.107 | Jul 12, 2022 |
+| Rel 22-08 | [4578013] | OOB Standalone Security Update | 4.107 | Aug 19, 2020 |
+| Rel 22-08 | [5017095] | Servicing Stack update | 5.71 | Aug 9, 2022 |
+| Rel 22-08 | [5016057] | Servicing Stack update | 2.127 | Jul 12, 2022 |
+| Rel 22-08 | [4494175] | Microcode | 5.71 | Sep 1, 2020 |
+| Rel 22-08 | [4494174] | Microcode | 6.47 | Sep 1, 2020 |
+[5016623]: https://support.microsoft.com/kb/5016623
+[5016618]: https://support.microsoft.com/kb/5016618
+[5016627]: https://support.microsoft.com/kb/5016627
+[5016622]: https://support.microsoft.com/kb/5016622
+[5013637]: https://support.microsoft.com/kb/5013637
+[5013644]: https://support.microsoft.com/kb/5013644
+[5013638]: https://support.microsoft.com/kb/5013638
+[5013643]: https://support.microsoft.com/kb/5013643
+[5013635]: https://support.microsoft.com/kb/5013635
+[5013642]: https://support.microsoft.com/kb/5013642
+[5013641]: https://support.microsoft.com/kb/5013641
+[5013630]: https://support.microsoft.com/kb/5013630
+[5016676]: https://support.microsoft.com/kb/5016676
+[5016672]: https://support.microsoft.com/kb/5016672
+[5016681]: https://support.microsoft.com/kb/5016681
+[5016263]: https://support.microsoft.com/kb/5016263
+[5016264]: https://support.microsoft.com/kb/5016264
+[4578013]: https://support.microsoft.com/kb/4578013
+[5017095]: https://support.microsoft.com/kb/5017095
+[5016057]: https://support.microsoft.com/kb/5016057
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
## July 2022 Guest OS
cognitive-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md
This documentation contains the following types of articles:
* The [conceptual articles](./concept-face-detection.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions.
-For a more structured approach, follow a Microsoft Learn module for Face.
+For a more structured approach, follow a Learn module for Face.
* [Detect and analyze faces with the Face service](/learn/modules/detect-analyze-faces/) ## Example use cases
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
This documentation contains the following types of articles:
* The [conceptual articles](concept-tagging-images.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
-For a more structured approach, follow a Microsoft Learn module for Image Analysis.
+For a more structured approach, follow a Learn module for Image Analysis.
* [Analyze images with the Computer Vision service](/learn/modules/analyze-images-computer-vision/) ## Image Analysis features
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
This documentation contains the following types of articles:
<!--* The [conceptual articles](how-to/call-read-api.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. -->
-For a more structured approach, follow a Microsoft Learn module for OCR.
+For a more structured approach, follow a Learn module for OCR.
* [Read Text in Images and Documents with the Computer Vision Service](/learn/modules/read-text-images-documents-with-computer-vision-service/) ## Read API
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/overview.md
This documentation contains the following article types:
* [**Concepts**](text-moderation-api.md) provide in-depth explanations of the service functionality and features. * [**Tutorials**](ecommerce-retail-catalog-moderation.md) are longer guides that show you how to use the service as a component in broader business solutions.
-For a more structured approach, follow a Microsoft Learn module for Content Moderator.
+For a more structured approach, follow a Learn module for Content Moderator.
* [Introduction to Content Moderator](/learn/modules/intro-to-content-moderator/) * [Classify and moderate text with Azure Content Moderator](/learn/modules/classify-and-moderate-text-with-azure-content-moderator/)
As with all of the Cognitive Services, developers using the Content Moderator se
## Next steps
-To get started using Content Moderator on the web portal, follow [Try Content Moderator on the web](quick-start.md). Or, complete a [client library or REST API quickstart](client-libraries.md) to implement the basic scenarios in code.
+To get started using Content Moderator on the web portal, follow [Try Content Moderator on the web](quick-start.md). Or, complete a [client library or REST API quickstart](client-libraries.md) to implement the basic scenarios in code.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md
This documentation contains the following types of articles:
* The [tutorials](./iot-visual-alerts-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. <!--* The [conceptual articles](Vision-API-How-to-Topics/call-read-api.md) provide in-depth explanations of the service's functionality and features.-->
-For a more structured approach, follow a Microsoft Learn module for Custom Vision:
+For a more structured approach, follow a Learn module for Custom Vision:
* [Classify images with the Custom Vision service](/learn/modules/classify-images-custom-vision/) * [Classify endangered bird species with Custom Vision](/learn/modules/cv-classify-bird-species/)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following table lists the prebuilt neural voices supported in each language.
| Arabic (United Arab Emirates) | `ar-AE` | Male | `ar-AE-HamdanNeural` | General | | Arabic (Yemen) | `ar-YE` | Female | `ar-YE-MaryamNeural` | General | | Arabic (Yemen) | `ar-YE` | Male | `ar-YE-SalehNeural` | General |
-| Azerbaijani (Azerbaijan) | `az-AZ` | Female | `az-AZ-BabekNeural` <sup>New</sup> | General |
-| Azerbaijani (Azerbaijan) | `az-AZ` | Male | `az-AZ-BanuNeural` <sup>New</sup> | General |
+| Azerbaijani (Azerbaijan) | `az-AZ` | Male | `az-AZ-BabekNeural` <sup>New</sup> | General |
+| Azerbaijani (Azerbaijan) | `az-AZ` | Female | `az-AZ-BanuNeural` <sup>New</sup> | General |
| Bangla (Bangladesh) | `bn-BD` | Female | `bn-BD-NabanitaNeural` | General | | Bangla (Bangladesh) | `bn-BD` | Male | `bn-BD-PradeepNeural` | General | | Bengali (India) | `bn-IN` | Female | `bn-IN-TanishaaNeural` | General |
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md
The core operation of the Translator service is translating text. In this quicks
> [!TIP] >
- > If you're new to Visual Studio, try the [**Introduction to Visual Studio**](/learn/modules/go-get-started/) Microsoft Learn module.
+ > If you're new to Visual Studio, try the [Introduction to Visual Studio](/learn/modules/go-get-started/) Learn module.
1. Open Visual Studio.
You can use any text editor to write Go applications. We recommend using the lat
> [!TIP] >
-> If you're new to Go, try the [**Get started with Go**](/learn/modules/go-get-started/) Microsoft Learn module.
+> If you're new to Go, try the [Get started with Go](/learn/modules/go-get-started/) Learn module.
1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
After a successful call, you should see the following response:
> [!TIP] >
- > If you're new to Node.js, try the [**Introduction to Node.js**](/learn/modules/intro-to-nodejs/) Microsoft Learn module.
+ > If you're new to Node.js, try the [Introduction to Node.js](/learn/modules/intro-to-nodejs/) Learn module.
1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-app`.
After a successful call, you should see the following response:
> [!TIP] >
- > If you're new to Python, try the [**Introduction to Python**](/learn/paths/beginner-python/) Microsoft Learn module.
+ > If you're new to Python, try the [Introduction to Python](/learn/paths/beginner-python/) Learn module.
1. Open a terminal window and use pip to install the Requests library and uuid0 package:
cognitive-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-text-apis.md
To call the Translator service via the [REST API](reference/rest-api-guide.md),
> [!TIP] >
- > If you're new to Visual Studio, try the [**Introduction to Visual Studio**](/learn/modules/go-get-started/) Microsoft Learn module.
+ > If you're new to Visual Studio, try the [Introduction to Visual Studio](/learn/modules/go-get-started/) Learn module.
1. Open Visual Studio.
You can use any text editor to write Go applications. We recommend using the lat
> [!TIP] >
-> If you're new to Go, try the [**Get started with Go**](/learn/modules/go-get-started/) Microsoft Learn module.
+> If you're new to Go, try the [Get started with Go](/learn/modules/go-get-started/) Learn module.
1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
You can use any text editor to write Go applications. We recommend using the lat
> [!TIP] >
- > If you're new to Node.js, try the [**Introduction to Node.js**](/learn/modules/intro-to-nodejs/) Microsoft Learn module.
+ > If you're new to Node.js, try the [Introduction to Node.js](/learn/modules/intro-to-nodejs/) Learn module.
1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-text-app`.
You can use any text editor to write Go applications. We recommend using the lat
> [!TIP] >
- > If you're new to Python, try the [**Introduction to Python**](/learn/paths/beginner-python/) Microsoft Learn module.
+ > If you're new to Python, try the [Introduction to Python](/learn/paths/beginner-python/) Learn module.
1. Open a terminal window and use pip to install the Requests library and uuid0 package:
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/overview.md
Previously updated : 06/17/2022 Last updated : 08/10/2022
As you use CLU, see the following reference documentation and samples for Azure
|||| |REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | | |REST APIs (Runtime) | [REST API documentation](https://aka.ms/clu-apis) | |
-|C# (Runtime) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
-|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
+|C# (Runtime) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme-pre?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
+|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
## Responsible AI
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/tutorials/cognitive-search.md
Typically after you create a project, you go ahead and start [tagging the docume
## Deploy your model
-Generally after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+Generally after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/view-model-evaluation.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
[!INCLUDE [Deploy a model using Language Studio](../includes/language-studio/deploy-model.md)]
Training could take sometime between 10 and 30 minutes for this sample dataset.
## Deploy your model
-Generally after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this tutorial, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+Generally after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/view-model-evaluation.md) if necessary. In this tutorial, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
### Start deployment job
Generally after training a model you would review its [evaluation details](../ho
### Run the indexer command
-After youΓÇÖve published your Azure function and prepared your configs file, you can run the indexer command.
+After you've published your Azure function and prepared your configs file, you can run the indexer command.
```cli indexer index --index-name <name-your-index-here> --configs <absolute-path-to-configs-file> ```
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/tutorials/cognitive-search.md
Typically after you create a project, you go ahead and start [tagging the docume
## Deploy your model
-Generally after training a model you would review it's [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+Generally after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/view-model-evaluation.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
[!INCLUDE [Deploy a model using Language Studio](../includes/language-studio/deploy-model.md)]
Training could take sometime between 10 and 30 minutes for this sample dataset.
## Deploy your model
-Generally after training a model you would review it's [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+Generally after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/view-model-evaluation.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
### Submit deployment job
Generally after training a model you would review it's [evaluation details](../h
### Run the indexer command
-After youΓÇÖve published your Azure function and prepared your configs file, you can run the indexer command.
+After you've published your Azure function and prepared your configs file, you can run the indexer command.
```cli indexer index --index-name <name-your-index-here> --configs <absolute-path-to-configs-file> ```
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/overview.md
Previously updated : 06/17/2022 Last updated : 08/10/2022
As you use orchestration workflow, see the following reference documentation and
|||| |REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | | |REST APIs (Runtime) | [REST API documentation](https://aka.ms/clu-runtime-api) | |
-|C# (Runtime) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
-|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
+|C# (Runtime) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme-pre?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
+|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
## Responsible AI
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/embeddings.md
An embedding is a special format of data representation that can be easily utili
To obtain an embedding vector for a piece of text, we make a request to the embeddings endpoint as shown in the following code snippets: ```console
-curl https://YOUR_RESOURCE_NAME.openaiazure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2022-06-01-preview\
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2022-06-01-preview\
-H 'Content-Type: application/json' \ -H 'api-key: YOUR_API_KEY' \ -d '{"input": "Sample Document goes here"}'
Our embedding models may be unreliable or pose social risks in certain cases, an
## Next steps
-Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
+Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/concepts.md
An exception policy controls the behavior of a Job based on a trigger and execut
[azure_sub]: https://azure.microsoft.com/free/dotnet/ [cla]: https://cla.microsoft.com [nuget]: https://www.nuget.org/
-[netstandars2mappings]:https://github.com/dotnet/standard/blob/master/docs/versions.md
-[useraccesstokens]:https://docs.microsoft.com/azure/communication-services/quickstarts/access-tokens?pivots=programming-language-csharp
+[netstandars2mappings]: https://github.com/dotnet/standard/blob/master/docs/versions.md
+[useraccesstokens]: /azure/communication-services/quickstarts/access-tokens?pivots=programming-language-csharp
[communication_resource_docs]: ../../quickstarts/create-communication-resource.md?pivots=platform-azp&tabs=windows [communication_resource_create_portal]: ../../quickstarts/create-communication-resource.md?pivots=platform-azp&tabs=windows [communication_resource_create_power_shell]: /powershell/module/az.communication/new-azcommunicationservice
communication-services Media Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/media-access.md
+
+ Title: Azure Communication Services Calling SDK RAW media overview
+
+description: Provides an overview of media access
+++++ Last updated : 07/21/2022+++++
+# Media access overview
++
+Azure Communication Services provides support for developers to get real-time access to media streams to capture, analyze, and process audio or video content during active calls.
+
+Consumption of live audio and video content is prevalent today in the form of online meetings, conferences, live events, online classes, and customer support. Modern communications let people around the globe connect with anyone, anywhere, at any moment, on virtually any matter. With raw media access, developers can analyze the audio or video streams of each participant in a call in real time. In contact centers, these streams can feed custom AI models for analysis, such as a homegrown NLP model for conversation analysis, or surface real-time insights and suggestions that boost agent productivity. In virtual appointments, media streams can be used to analyze sentiment when providing virtual care for patients, or to provide remote assistance during video calls using Mixed Reality capabilities. This also opens a path for developers to apply newer innovations, with endless possibilities to enhance interaction experiences.
+
+The Azure Communication Services SDKs provide access to media streams from the client and server side, enabling developers to build more inclusive and richer virtual experiences during voice or video interactions.
++
+## Media access workflow
+
+The workflow can be split into three operations:
+- **Capture media**: Media can be captured locally via the client SDKs or on the server side.
+
+- **Process/transform**: Media can be transformed locally on the client (for example, adding background blur) or processed in a cloud service (for example, running your custom NLP for conversation insights).
+
+- **Provide context or inject the transformed media back**: The output of the transformed media streams (for example, sentiment analysis) can be used to provide context, or augmented media streams can be injected into the interaction through the client SDK or through the media streaming API via the server SDK.
+
+## Media access via the Calling Client SDK
+During a call, developers can access the audio and video media streams. Outgoing local audio and video media streams can be pre-processed before being sent to the encoder. Incoming remote captured media streams can be post-processed before playback on the screen or speaker. For mixed incoming audio, the client calling SDK has access to the mixed incoming remote audio stream, which includes the mixed audio streams of the top four most dominant speakers on the call. For unmixed incoming remote audio, the client calling SDK has access to the individual audio streams of each participant on the call.
+++
+## Media access use cases
+- **Screen share**: Local outgoing video access can be used to enable screen sharing. Developers can implement foreground services to capture the frames and publish them using the calling SDK OutgoingVirtualVideoStreamOptions.
+
+- **Background blur**: Local outgoing video access can be used to capture the video frames from the camera and apply background blur before publishing the blurred frames using the calling SDK OutgoingVirtualVideoStreamOptions.
+
+- **Video filters**: Local outgoing video access can be used to capture the video frames from the camera and apply AI video filters to the captured frames before publishing them using the calling SDK OutgoingVirtualVideoStreamOptions.
+
+- **Augmented reality/virtual reality**: Remote incoming video media streams can be captured and augmented with a virtual environment before rendering on the screen.
+
+- **Spatial audio**: Remote incoming audio access can be used to inject spatial audio into the incoming audio stream.
++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with raw media](../../quickstarts/voice-video-calling/get-started-raw-media-access.md)
+
+For more information, see the following articles:
+- Familiarize yourself with general [call flows](../call-flows.md)
+- Learn about [call types](../voice-video-calling/about-call-types.md)
+- [Plan your PSTN solution](../telephony/plan-solution.md)
communication-services Learn Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/learn-modules.md
Title: Microsoft Learn modules for Azure Communication Services
+ Title: Learn modules for Azure Communication Services
description: Learn about the available Learn modules for Azure Communication Services.
cosmos-db Mongodb Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/mongodb-indexing.md
In the preceding example, omitting the ```"university":1``` clause returns an er
Unique indexes need to be created while the collection is empty.
-Support for unique index on existing collections with data is available in preview for accounts that do not use Synapse Link or Continuous backup. You can sign up for the feature "Azure Cosmos DB API for MongoDB New Unique Indexes in existing collection" through the [Preview Features blade in the portal](./../access-previews.md).
-
-#### Unique partial indexes
-
-Unique partial indexes can be created by specifying a partialFilterExpression along with the 'unique' constraint in the index. This results in the unique constraint being applied only to the documents that meet the specified filter expression.
-
-The unique constraint will not be effective for documents that do not meet the specified criteria. As a result, other documents will not be prevented from being inserted into the collection.
-
-This feature is supported with the Cosmos DB API for MongoDB versions 3.6 and above.
-
-To create a unique partial index from Mongo Shell, use the command `db.collection.createIndex()` with the 'partialFilterExpression' option and 'unique' constraint.
-The partialFilterExpression option accepts a JSON document that specifies the filter condition using:
-
-* equality expressions (i.e., field: value or using the $eq operator),
-* '$exists: true' expressions,
-* $gt, $gte, $lt, $lte expressions,
-* $type expressions,
-* the $and operator at the top level only
-
-The following command creates an index on collection `books` that specifies a unique constraint on the `title` field and a partial filter expression `rating: { $gte: 3 }`:
-
-```shell
-db.books.createIndex(
- { title: 1 },
- { unique: true, partialFilterExpression: { rating: { $gte: 3 } } }
-)
-```
-
-To delete a partial unique index from Mongo Shell, use the command `getIndexes()` to list the indexes in the collection.
-Then drop the index with the following command:
-
-```shell
-db.books.dropIndex("indexName")
-```
- ### TTL indexes To enable document expiration in a particular collection, you need to create a [time-to-live (TTL) index](../time-to-live.md). A TTL index is an index on the `_ts` field with an `expireAfterSeconds` value.
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-data-warehouse.md
Previously updated : 07/04/2022 Last updated : 08/11/2022 # Copy and transform data in Azure Synapse Analytics by using Azure Data Factory or Synapse pipelines
The following sections provide details about properties that define Data Factory
## Linked service properties
-The following properties are supported for an Azure Synapse Analytics linked service:
+These generic properties are supported for an Azure Synapse Analytics linked service:
| Property | Description | Required | | : | :-- | :-- | | type | The type property must be set to **AzureSqlDW**. | Yes | | connectionString | Specify the information needed to connect to the Azure Synapse Analytics instance for the **connectionString** property. <br/>Mark this field as a SecureString to store it securely. You can also put password/service principal key in Azure Key Vault,and if it's SQL authentication pull the `password` configuration out of the connection string. See the JSON example below the table and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article with more details. | Yes |
-| servicePrincipalId | Specify the application's client ID. | Yes, when you use Azure AD authentication with a service principal. |
-| servicePrincipalKey | Specify the application's key. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes, when you use Azure AD authentication with a service principal. |
-| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. You can retrieve it by hovering the mouse in the top-right corner of the Azure portal. | Yes, when you use Azure AD authentication with a service principal. |
| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are `AzurePublic`, `AzureChina`, `AzureUsGovernment`, and `AzureGermany`. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
-| credentials | Specify the user-assigned managed identity as the credential object. | Yes, when you use user-assigned managed identity authentication. |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or a self-hosted integration runtime (if your data store is located in a private network). If not specified, it uses the default Azure Integration Runtime. | No |
-For different authentication types, refer to the following sections on prerequisites and JSON samples, respectively:
+For different authentication types, refer to the following sections on specific properties, prerequisites, and JSON samples, respectively:
- [SQL authentication](#sql-authentication) - [Service principal authentication](#service-principal-authentication)
For different authentication types, refer to the following sections on prerequis
### SQL authentication
+To use the SQL authentication type, specify the generic properties that are described in the preceding section.
+ #### Linked service example that uses SQL authentication ```json
For different authentication types, refer to the following sections on prerequis
### Service principal authentication
-To use service principal-based Azure AD application token authentication, follow these steps:
+To use service principal authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+| : | :-- | :-- |
+| servicePrincipalId | Specify the application's client ID. | Yes |
+| servicePrincipalKey | Specify the application's key. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. You can retrieve it by hovering the mouse in the top-right corner of the Azure portal. | Yes |
+
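To make the shape concrete, here's a minimal sketch of a linked service definition that combines the generic properties with the service principal properties. The linked service name and all placeholder values are illustrative:

```json
{
    "name": "AzureSqlDWLinkedService",
    "properties": {
        "type": "AzureSqlDW",
        "typeProperties": {
            "connectionString": "Server=tcp:<servername>.database.windows.net,1433;Database=<databasename>;Connection Timeout=30",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<service principal key>"
            },
            "tenant": "<tenant name or tenant id>"
        },
        "connectVia": {
            "referenceName": "<name of integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```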
+You also need to follow the steps below:
1. **[Create an Azure Active Directory application](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal)** from the Azure portal. Make note of the application name and the following values that define the linked service:
To use service principal-based Azure AD application token authentication, follow
A data factory or Synapse workspace can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity) that represents the resource. You can use this managed identity for Azure Synapse Analytics authentication. The designated resource can access and copy data from or to your data warehouse by using this identity.
-To use system-assigned managed identity authentication, follow these steps:
+To use system-assigned managed identity authentication, specify the generic properties that are described in the preceding section, and follow these steps.
1. **[Provision an Azure Active Directory administrator](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-database)** for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or Azure AD group. If you grant the group with system-assigned managed identity an admin role, skip steps 3 and 4. The administrator will have full access to the database.
To use system-assigned managed identity authentication, follow these steps:
A data factory or Synapse workspace can be associated with a [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity) that represents the resource. You can use this managed identity for Azure Synapse Analytics authentication. The designated resource can access and copy data from or to your data warehouse by using this identity.
-To use user-assigned managed identity authentication, follow these steps:
+To use user-assigned managed identity authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+| : | :-- | : |
+| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
+
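As a sketch, a linked service that uses a user-assigned managed identity references the credential object by name. This assumes a credential (here named `credential1`, an illustrative name) has already been created in the factory:

```json
{
    "name": "AzureSqlDWLinkedService",
    "properties": {
        "type": "AzureSqlDW",
        "typeProperties": {
            "connectionString": "Server=tcp:<servername>.database.windows.net,1433;Database=<databasename>;Connection Timeout=30",
            "credential": {
                "referenceName": "credential1",
                "type": "CredentialReference"
            }
        }
    }
}
```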
+You also need to follow the steps below:
1. **[Provision an Azure Active Directory administrator](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-database)** for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or Azure AD group. If you grant the group with user-assigned managed identity an admin role, skip step 3. The administrator will have full access to the database.
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
Previously updated : 07/04/2022 Last updated : 08/10/2022 # Copy and transform data in Azure SQL Database by using Azure Data Factory or Azure Synapse Analytics
The following sections provide details about properties that are used to define
## Linked service properties
-These properties are supported for an Azure SQL Database linked service:
+These generic properties are supported for an Azure SQL Database linked service:
| Property | Description | Required | |: |: |: | | type | The **type** property must be set to **AzureSqlDatabase**. | Yes | | connectionString | Specify information needed to connect to the Azure SQL Database instance for the **connectionString** property. <br/>You also can put a password or service principal key in Azure Key Vault. If it's SQL authentication, pull the `password` configuration out of the connection string. For more information, see the JSON example following the table and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
-| servicePrincipalId | Specify the application's client ID. | Yes, when you use Azure AD authentication with a service principal |
-| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes, when you use Azure AD authentication with a service principal |
-| tenant | Specify the tenant information, like the domain name or tenant ID, under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes, when you use Azure AD authentication with a service principal |
| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory or Synapse pipeline's cloud environment is used. | No | | alwaysEncryptedSettings | Specify **alwaysencryptedsettings** information that's needed to enable Always Encrypted to protect sensitive data stored in SQL server by using either managed identity or service principal. For more information, see the JSON example following the table and [Using Always Encrypted](#using-always-encrypted) section. If not specified, the default always encrypted setting is disabled. |No |
-| credentials | Specify the user-assigned managed identity as the credential object. | Yes, when you use user-assigned managed identity authentication |
| connectVia | This [integration runtime](concepts-integration-runtime.md) is used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is located in a private network. If not specified, the default Azure integration runtime is used. | No |
-For different authentication types, refer to the following sections on prerequisites and JSON samples, respectively:
+For different authentication types, refer to the following sections on specific properties, prerequisites, and JSON samples, respectively:
- [SQL authentication](#sql-authentication) - [Service principal authentication](#service-principal-authentication)
For different authentication types, refer to the following sections on prerequis
### SQL authentication
+To use the SQL authentication type, specify the generic properties that are described in the preceding section.
+ **Example: using SQL authentication** ```json
For different authentication types, refer to the following sections on prerequis
### Service principal authentication
-To use a service principal-based Azure AD application token authentication, follow these steps:
+To use service principal authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| servicePrincipalId | Specify the application's client ID. | Yes |
+| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| tenant | Specify the tenant information, like the domain name or tenant ID, under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal.| Yes |
+
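For reference, a minimal sketch of an Azure SQL Database linked service that uses service principal authentication might look like the following; the name and placeholder values are illustrative:

```json
{
    "name": "AzureSqlDbLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=<databasename>;",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<service principal key>"
            },
            "tenant": "<tenant name or tenant id>"
        }
    }
}
```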
+You also need to follow the steps below:
1. [Create an Azure Active Directory application](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) from the Azure portal. Make note of the application name and the following values that define the linked service:
To use a service principal-based Azure AD application token authentication, foll
A data factory or Synapse workspace can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity) that represents the service when authenticating to other resources in Azure. You can use this managed identity for Azure SQL Database authentication. The designated factory or Synapse workspace can access and copy data from or to your database by using this identity.
-To use system-assigned managed identity authentication, follow these steps.
+To use system-assigned managed identity authentication, specify the generic properties that are described in the preceding section, and follow these steps.
1. [Provision an Azure Active Directory administrator](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-database) for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or an Azure AD group. If you grant the group with managed identity an admin role, skip steps 3 and 4. The administrator has full access to the database.
To use system-assigned managed identity authentication, follow these steps.
A data factory or Synapse workspace can be associated with a [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity) that represents the service when authenticating to other resources in Azure. You can use this managed identity for Azure SQL Database authentication. The designated factory or Synapse workspace can access and copy data from or to your database by using this identity.
-To use user-assigned managed identity authentication, follow these steps.
+To use user-assigned managed identity authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
+
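As a sketch, the credential object is referenced from the linked service like this, assuming a user-assigned managed identity credential (illustratively named `credential1`) already exists in the factory:

```json
{
    "name": "AzureSqlDbLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=<databasename>;",
            "credential": {
                "referenceName": "credential1",
                "type": "CredentialReference"
            }
        }
    }
}
```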
+You also need to follow the steps below:
1. [Provision an Azure Active Directory administrator](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-database) for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or an Azure AD group. If you grant the group with user-assigned managed identity an admin role, skip step 3. The administrator has full access to the database.
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-managed-instance.md
Previously updated : 07/04/2022 Last updated : 08/11/2022 # Copy and transform data in Azure SQL Managed Instance using Azure Data Factory or Synapse Analytics
The following sections provide details about properties that are used to define
## Linked service properties
-The following properties are supported for the SQL Managed Instance linked service:
+These generic properties are supported for a SQL Managed Instance linked service:
| Property | Description | Required | |: |: |: | | type | The type property must be set to **AzureSqlMI**. | Yes | | connectionString |This property specifies the **connectionString** information that's needed to connect to SQL Managed Instance by using SQL authentication. For more information, see the following examples. <br/>The default port is 1433. If you're using SQL Managed Instance with a public endpoint, explicitly specify port 3342.<br> You also can put a password in Azure Key Vault. If it's SQL authentication, pull the `password` configuration out of the connection string. For more information, see the JSON example following the table and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
-| servicePrincipalId | Specify the application's client ID. | Yes, when you use Azure AD authentication with a service principal |
-| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes, when you use Azure AD authentication with a service principal |
-| tenant | Specify the tenant information, like the domain name or tenant ID, under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes, when you use Azure AD authentication with a service principal |
| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the service's cloud environment is used. | No | | alwaysEncryptedSettings | Specify **alwaysencryptedsettings** information that's needed to enable Always Encrypted to protect sensitive data stored in SQL server by using either managed identity or service principal. For more information, see the JSON example following the table and [Using Always Encrypted](#using-always-encrypted) section. If not specified, the default always encrypted setting is disabled. |No |
-| credentials | Specify the user-assigned managed identity as the credential object. | Yes, when you use user-assigned managed identity authentication |
| connectVia | This [integration runtime](concepts-integration-runtime.md) is used to connect to the data store. You can use a self-hosted integration runtime or an Azure integration runtime if your managed instance has a public endpoint and allows the service to access it. If not specified, the default Azure integration runtime is used. |Yes |
-For different authentication types, refer to the following sections on prerequisites and JSON samples, respectively:
+For different authentication types, refer to the following sections on specific properties, prerequisites, and JSON samples, respectively:
- [SQL authentication](#sql-authentication) - [Service principal authentication](#service-principal-authentication)
For different authentication types, refer to the following sections on prerequis
### SQL authentication
+To use the SQL authentication type, specify the generic properties that are described in the preceding section.
+ **Example 1: use SQL authentication** ```json
For different authentication types, refer to the following sections on prerequis
### Service principal authentication
-To use a service principal-based Azure AD application token authentication, follow these steps:
+To use service principal authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| servicePrincipalId | Specify the application's client ID. | Yes |
+| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| tenant | Specify the tenant information, like the domain name or tenant ID, under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes |
+
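A minimal sketch of a SQL Managed Instance linked service with service principal authentication, with illustrative names and placeholders:

```json
{
    "name": "AzureSqlMILinkedService",
    "properties": {
        "type": "AzureSqlMI",
        "typeProperties": {
            "connectionString": "Data Source=<hostname,port>;Initial Catalog=<databasename>;",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<service principal key>"
            },
            "tenant": "<tenant name or tenant id>"
        }
    }
}
```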
+You also need to follow the steps below:
1. Follow the steps to [Provision an Azure Active Directory administrator for your Managed Instance](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-managed-instance).
To use a service principal-based Azure AD application token authentication, foll
A data factory or Synapse workspace can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity) that represents the service for authentication to other Azure services. You can use this managed identity for SQL Managed Instance authentication. The designated service can access and copy data from or to your database by using this identity.
-To use system-assigned managed identity authentication, follow these steps.
+To use system-assigned managed identity authentication, specify the generic properties that are described in the preceding section, and follow these steps.
1. Follow the steps to [Provision an Azure Active Directory administrator for your Managed Instance](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-managed-instance).
To use system-assigned managed identity authentication, follow these steps.
A data factory or Synapse workspace can be associated with a [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity) that represents the service for authentication to other Azure services. You can use this managed identity for SQL Managed Instance authentication. The designated service can access and copy data from or to your database by using this identity.
-To use user-assigned managed identity authentication, follow these steps.
+To use user-assigned managed identity authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
+
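And a sketch of the user-assigned managed identity variant, again assuming a credential (illustratively named `credential1`) already exists in the factory:

```json
{
    "name": "AzureSqlMILinkedService",
    "properties": {
        "type": "AzureSqlMI",
        "typeProperties": {
            "connectionString": "Data Source=<hostname,port>;Initial Catalog=<databasename>;",
            "credential": {
                "referenceName": "credential1",
                "type": "CredentialReference"
            }
        }
    }
}
```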
+You also need to follow the steps below:
1. Follow the steps to [Provision an Azure Active Directory administrator for your Managed Instance](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-managed-instance).
data-factory Continuous Integration Delivery Resource Manager Custom Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-resource-manager-custom-parameters.md
Below is the current default parameterization template. If you need to add only
}, "location": "=" },
+ "Microsoft.DataFactory/factories/globalparameters": {
+ "properties": {
+ "*": {
+ "value": "="
+ }
+ }
+ },
"Microsoft.DataFactory/factories/pipelines": { }, "Microsoft.DataFactory/factories/dataflows": {
data-factory How To Access Secured Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-access-secured-purview-account.md
Previously updated : 09/02/2021 Last updated : 08/09/2022 # Access a secured Microsoft Purview account from Azure Data Factory
data-factory How To Clean Up Ssisdb Logs With Elastic Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-clean-up-ssisdb-logs-with-elastic-jobs.md
description: This article describes how to clean up SSIS project deployment and
Previously updated : 02/15/2022 Last updated : 08/09/2022
data-factory How To Configure Azure Ssis Ir Custom Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
Previously updated : 02/15/2022 Last updated : 08/09/2022 # Customize the setup for an Azure-SSIS Integration Runtime
data-factory How To Configure Azure Ssis Ir Enterprise Edition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-enterprise-edition.md
description: "This article describes the features of Enterprise Edition for the
Previously updated : 02/15/2022 Last updated : 08/09/2022
data-factory How To Configure Shir For Log Analytics Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-shir-for-log-analytics-collection.md
Previously updated : 02/22/2022 Last updated : 08/09/2022
data-factory How To Create Custom Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-custom-event-trigger.md
Previously updated : 05/07/2021 Last updated : 08/09/2022 # Create a custom event trigger to run a pipeline in Azure Data Factory
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-event-trigger.md
Previously updated : 09/09/2021 Last updated : 08/09/2022 # Create a trigger that runs a pipeline in response to a storage event
This section shows you how to create a storage event trigger within the Azure Da
1. Select trigger type **Storage Event** # [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1.png" alt-text="Screenshot of Author page to create a new storage event trigger in Data Factory UI.":::
+ :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1.png" lightbox="media/how-to-create-event-trigger/event-based-trigger-image-1.png" alt-text="Screenshot of Author page to create a new storage event trigger in Data Factory UI." :::
# [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" alt-text="Screenshot of Author page to create a new storage event trigger in the Azure Synapse UI.":::
+ :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" lightbox="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" alt-text="Screenshot of Author page to create a new storage event trigger in the Azure Synapse UI.":::
5. Select your storage account from the Azure subscription dropdown or manually using its Storage account resource ID. Choose which container you wish the events to occur on. Container selection is required, but be mindful that selecting all containers can lead to a large number of events.
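Behind the UI, the trigger is stored as JSON. A minimal sketch of a storage event trigger definition, with illustrative names, paths, and scope:

```json
{
    "name": "MyStorageEventTrigger",
    "properties": {
        "type": "BlobEventsTrigger",
        "typeProperties": {
            "blobPathBeginsWith": "/samplecontainer/blobs/",
            "blobPathEndsWith": ".csv",
            "ignoreEmptyBlobs": true,
            "events": ["Microsoft.Storage.BlobCreated"],
            "scope": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "MyPipeline",
                    "type": "PipelineReference"
                }
            }
        ]
    }
}
```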
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-schedule-trigger.md
Previously updated : 09/09/2021 Last updated : 08/09/2022
data-factory How To Create Tumbling Window Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-tumbling-window-trigger.md
Previously updated : 09/09/2021 Last updated : 08/09/2022 # Create a trigger that runs a pipeline on a tumbling window
data-factory How To Data Flow Dedupe Nulls Snippets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-data-flow-dedupe-nulls-snippets.md
Previously updated : 01/31/2022 Last updated : 08/09/2022 # Dedupe rows and find nulls by using data flow snippets
data-factory How To Data Flow Error Rows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-data-flow-error-rows.md
Previously updated : 01/31/2022 Last updated : 08/09/2022
data-factory How To Develop Azure Ssis Ir Licensed Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-develop-azure-ssis-ir-licensed-components.md
Previously updated : 02/17/2022 Last updated : 08/09/2022 # Install paid or licensed custom components for the Azure-SSIS integration runtime
data-factory How To Discover Explore Purview Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-discover-explore-purview-data.md
Previously updated : 08/10/2021 Last updated : 08/09/2022 # Discover and explore data in ADF using Microsoft Purview
data-factory How To Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-expression-language-functions.md
Previously updated : 01/21/2022 Last updated : 08/09/2022 # How to use parameters, expressions and functions in Azure Data Factory
data-factory How To Fixed Width https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-fixed-width.md
Previously updated : 01/27/2022 Last updated : 08/09/2022
data-factory How To Invoke Ssis Package Azure Enabled Dtexec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-azure-enabled-dtexec.md
description: Learn how to execute SQL Server Integration Services (SSIS) package
Previously updated : 10/22/2021 Last updated : 08/09/2022
data-factory How To Invoke Ssis Package Managed Instance Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-managed-instance-agent.md
Previously updated : 02/15/2022 Last updated : 08/09/2022 # Run SSIS packages by using Azure SQL Managed Instance Agent
data-factory How To Invoke Ssis Package Ssdt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssdt.md
Previously updated : 10/22/2021 Last updated : 08/09/2022 # Execute SSIS packages in Azure from SSDT
data-factory How To Invoke Ssis Package Ssis Activity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssis-activity-powershell.md
Previously updated : 10/22/2021 Last updated : 08/09/2022 # Run an SSIS package with the Execute SSIS Package activity in Azure Data Factory with PowerShell
data-factory How To Invoke Ssis Package Ssis Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssis-activity.md
Previously updated : 02/15/2022 Last updated : 08/09/2022 # Run an SSIS package with the Execute SSIS Package activity in Azure portal
Create an Azure-SSIS integration runtime (IR) if you don't have one already by f
In this step, you use the Data Factory UI or app to create a pipeline. You add an Execute SSIS Package activity to the pipeline and configure it to run your SSIS package. # [Azure Data Factory](#tab/data-factory)
-1. On your Data Factory overview or home page in the Azure portal, select the **Author & Monitor** tile to start the Data Factory UI or app in a separate tab.
+1. On your Data Factory overview or home page in the Azure portal, select the **Open Azure Data Factory Studio** tile to start the Data Factory UI or app in a separate tab.
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/data-factory-home-page.png" alt-text="Data Factory home page":::
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Screenshot of the Azure Data Factory home page.":::
On the home page, select **Orchestrate**.
- :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the ADF home page.":::
+ :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/orchestrate-button.png" alt-text="Screenshot that shows the Orchestrate button on the Azure Data Factory home page.":::
# [Synapse Analytics](#tab/synapse-analytics)
Navigate to the Integrate tab in Synapse Studio (represented by the pipeline ico
-1. In the **Activities** toolbox, expand **General**. Then drag an **Execute SSIS Package** activity to the pipeline designer surface.
+1. In the **Activities** toolbox, search for **SSIS**. Then drag an **Execute SSIS Package** activity to the pipeline designer surface.
:::image type="content" source="media/how-to-invoke-ssis-package-ssis-activity/ssis-activity-designer.png" alt-text="Drag an Execute SSIS Package activity to the designer surface":::
On the **Settings** tab of Execute SSIS Package activity, complete the following
1. If your Azure-SSIS IR isn't running or the **Manual entries** check box is selected, enter your package and environment paths from SSISDB directly in the following formats: `<folder name>/<project name>/<package name>.dtsx` and `<folder name>/<environment name>`.
- :::image type="content" source="media/how-to-invoke-ssis-package-ssis-activity/ssis-activity-settings2.png" alt-text="Set properties on the Settings tab - Manual":::
+ :::image type="content" source="media/how-to-invoke-ssis-package-ssis-activity/ssis-activity-settings-2.png" alt-text="Set properties on the Settings tab - Manual":::
#### Package location: File System (Package) **File System (Package)** as your package location is automatically selected if your Azure-SSIS IR was provisioned without SSISDB or you can select it yourself. If it's selected, complete the following steps. 1. Specify your package to run by providing a Universal Naming Convention (UNC) path to your package file (with `.dtsx`) in the **Package path** box. You can browse and select your package by selecting **Browse file storage** or enter its path manually. For example, if you store your package in Azure Files, its path is `\\<storage account name>.file.core.windows.net\<file share name>\<package name>.dtsx`.
For all UNC paths previously mentioned, the fully qualified file name must be fe
If you select **File System (Project)** as your package location, complete the following steps. 1. Specify your package to run by providing a UNC path to your project file (with `.ispac`) in the **Project path** box and a package file (with `.dtsx`) from your project in the **Package name** box. You can browse and select your project by selecting **Browse file storage** or enter its path manually. For example, if you store your project in Azure Files, its path is `\\<storage account name>.file.core.windows.net\<file share name>\<project name>.ispac`.
data-factory How To Invoke Ssis Package Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-stored-procedure-activity.md
ms.devlang: powershell Previously updated : 02/15/2022 Last updated : 08/10/2022
This article describes how to run an SSIS package in an Azure Data Factory pipel
## Prerequisites ### Azure SQL Database
-The walkthrough in this article uses Azure SQL Database to host the SSIS catalog. You can also use Azure SQL Managed Instance.
+The walkthrough in this article uses Azure SQL Database to host the SSIS catalog. You can also use Azure SQL Managed Instance.
-## Create an Azure-SSIS integration runtime
-Create an Azure-SSIS integration runtime if you don't have one by following the step-by-step instruction in the [Tutorial: Deploy SSIS packages](./tutorial-deploy-ssis-packages-azure.md).
+### Data Factory
+You will need an instance of Azure Data Factory to implement this walkthrough. If you do not have one already provisioned, you can follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md).
-## Data Factory UI (Azure portal)
-In this section, you use Data Factory UI to create a Data Factory pipeline with a stored procedure activity that invokes an SSIS package.
+### Azure-SSIS integration runtime
+Finally, you will also need an Azure-SSIS integration runtime. If you don't have one, create it by following the step-by-step instructions in the [Tutorial: Deploy SSIS packages](./tutorial-deploy-ssis-packages-azure.md).
-### Create a data factory
-First step is to create a data factory by using the Azure portal.
+## Create a pipeline with stored procedure activity
-1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
-2. Navigate to the [Azure portal](https://portal.azure.com).
-3. Click **New** on the left menu, click **Data + Analytics**, and click **Data Factory**.
-
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/new-azure-data-factory-menu.png" alt-text="New->DataFactory":::
-2. In the **New data factory** page, enter **ADFTutorialDataFactory** for the **name**.
-
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/new-azure-data-factory.png" alt-text="New data factory page":::
-
- The name of the Azure data factory must be **globally unique**. If you see the following error for the name field, change the name of the data factory (for example, yournameADFTutorialDataFactory). See [Data Factory - Naming Rules](naming-rules.md) article for naming rules for Data Factory artifacts.
-
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/name-not-available-error.png" alt-text="Name not available - error":::
-3. Select your Azure **subscription** in which you want to create the data factory.
-4. For the **Resource Group**, do one of the following steps:
-
- - Select **Use existing**, and select an existing resource group from the drop-down list.
- - Select **Create new**, and enter the name of a resource group.
-
- To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
-4. Select **V2** for the **version**.
-5. Select the **location** for the data factory. Only locations that are supported by Data Factory are shown in the drop-down list. The data stores (Azure Storage, Azure SQL Database, etc.) and computes (HDInsight, etc.) used by data factory can be in other locations.
-6. Select **Pin to dashboard**.
-7. Click **Create**.
-8. On the dashboard, you see the following tile with status: **Deploying data factory**.
-
- :::image type="content" source="media//how-to-invoke-ssis-package-stored-procedure-activity/deploying-data-factory.png" alt-text="deploying data factory tile":::
-9. After the creation is complete, you see the **Data Factory** page as shown in the image.
-
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/data-factory-home-page.png" alt-text="Data factory home page":::
-10. Click **Author & Monitor** tile to launch the Azure Data Factory user interface (UI) application in a separate tab.
+In this step, you use the Data Factory UI to create a pipeline. If you have not already navigated to Azure Data Factory Studio, open your data factory in the Azure portal and click the **Open Azure Data Factory Studio** button.
+
-### Create a pipeline with stored procedure activity
-In this step, you use the Data Factory UI to create a pipeline. You add a stored procedure activity to the pipeline and configure it to run the SSIS package by using the sp_executesql stored procedure.
+Next, you will add a stored procedure activity to a new pipeline and configure it to run the SSIS package by using the sp_executesql stored procedure.
1. In the home page, click **Orchestrate**:
- :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the ADF home page.":::
+ :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/orchestrate-button.png" alt-text="Screenshot that shows the Orchestrate button on the Azure Data Factory home page.":::
-2. In the **Activities** toolbox, expand **General**, and drag-drop **Stored Procedure** activity to the pipeline designer surface.
+2. In the **Activities** toolbox, search for **Stored procedure**, and drag-drop a **Stored procedure** activity to the pipeline designer surface.
:::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/drag-drop-sproc-activity.png" alt-text="Drag-and-drop stored procedure activity":::
-3. In the properties window for the stored procedure activity, switch to the **SQL Account** tab, and click **+ New**. You create a connection to the database in Azure SQL Database that hosts the SSIS Catalog (SSISDB database).
+
+3. Select the **Stored procedure** activity you just added to the designer surface, select the **Settings** tab, and then click **+ New** beside **Linked service**. You create a connection to the database in Azure SQL Database that hosts the SSIS Catalog (SSISDB database).
:::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/new-linked-service-button.png" alt-text="New linked service button":::+ 4. In the **New Linked Service** window, do the following steps: 1. Select **Azure SQL Database** for **Type**.
- 2. Select the **Default** Azure Integration Runtime to connect to the Azure SQL Database that hosts the `SSISDB` database.
+ 2. Select the default **AutoResolveIntegrationRuntime** to connect to the Azure SQL Database that hosts the `SSISDB` database.
 3. Select the Azure SQL Database that hosts the SSISDB database for the **Server name** field. 4. Select **SSISDB** for **Database name**. 5. For **User name**, enter the name of a user who has access to the database.
In this step, you use the Data Factory UI to create a pipeline. You add a stored
8. Save the linked service by clicking the **Save** button. :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/azure-sql-database-linked-service-settings.png" alt-text="Screenshot that shows the process for adding a new linked service.":::
-5. In the properties window, switch to the **Stored Procedure** tab from the **SQL Account** tab, and do the following steps:
+
+5. Back in the properties window on the **Settings** tab, complete the following steps:
 1. Select **Edit**. 2. For the **Stored procedure name** field, enter `sp_executesql`.
In this step, you use the Data Factory UI to create a pipeline. You add a stored
``` :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/stored-procedure-settings.png" alt-text="Azure SQL Database linked service":::+ 6. To validate the pipeline configuration, click **Validate** on the toolbar. To close the **Pipeline Validation Report**, click **>>**. :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/validate-pipeline.png" alt-text="Validate pipeline":::
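If you author pipelines as JSON rather than in the designer, a stored procedure activity that invokes an SSIS package could be sketched as follows. The pipeline, activity, and linked service names are illustrative, and the T-SQL statement is abbreviated; the full statement creates and starts an SSISDB execution as configured in the preceding steps:

```json
{
    "name": "RunSsisPackagePipeline",
    "properties": {
        "activities": [
            {
                "name": "ExecuteSsisPackage",
                "type": "SqlServerStoredProcedure",
                "linkedServiceName": {
                    "referenceName": "AzureSqlDatabaseLinkedService",
                    "type": "LinkedServiceReference"
                },
                "typeProperties": {
                    "storedProcedureName": "sp_executesql",
                    "storedProcedureParameters": {
                        "stmt": {
                            "value": "DECLARE @return_value INT, @exe_id BIGINT; EXEC @return_value = SSISDB.catalog.create_execution @folder_name = N'<folder>', @project_name = N'<project>', @package_name = N'<package>.dtsx', @execution_id = @exe_id OUTPUT; EXEC SSISDB.catalog.start_execution @exe_id;"
                        }
                    }
                }
            }
        ]
    }
}
```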
In this section, you trigger a pipeline run and then monitor it.
2. In the **Pipeline Run** window, select **Finish**. 3. Switch to the **Monitor** tab on the left. You see the pipeline run and its status along with other information (such as Run Start time). To refresh the view, click **Refresh**.
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/pipeline-runs.png" alt-text="Pipeline runs":::
+ :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/pipeline-runs.png" alt-text="Screenshot that shows pipeline runs":::
3. Click **View Activity Runs** link in the **Actions** column. You see only one activity run as the pipeline has only one activity (stored procedure activity).
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/activity-runs.png" alt-text="Activity runs":::
+ :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/activity-runs.png" alt-text="Screenshot that shows activity runs":::
4. You can run the following **query** against the SSISDB database in SQL Database to verify that the package executed.
data-factory How To Manage Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-settings.md
Previously updated : 05/24/2022 Last updated : 08/10/2022 # Manage Azure Data Factory settings and preferences
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Previously updated : 07/13/2022 Last updated : 08/10/2022 # Manage Azure Data Factory studio preview experience
data-factory How To Migrate Ssis Job Ssms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-migrate-ssis-job-ssms.md
Previously updated : 10/22/2021 Last updated : 08/10/2022 # Migrate SQL Server Agent jobs to ADF with SSMS
data-factory How To Run Self Hosted Integration Runtime In Windows Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-run-self-hosted-integration-runtime-in-windows-container.md
Previously updated : 07/07/2022 Last updated : 08/10/2022 # How to run Self-Hosted Integration Runtime in Windows container
data-factory How To Schedule Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md
Alternatively, you can create Web activities in ADF or Synapse pipelines to star
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Prerequisites+
+### Data Factory
+You will need an instance of Azure Data Factory to implement this walkthrough. If you do not have one already provisioned, you can follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md).
+
+### Azure-SSIS Integration Runtime (IR)
If you have not provisioned your Azure-SSIS IR already, provision it by following the instructions in the [tutorial](./tutorial-deploy-ssis-packages-azure.md). ## Create and schedule ADF pipelines that start and/or stop Azure-SSIS IR
For example, you can create two triggers, the first one is scheduled to run dail
If you create a third trigger that is scheduled to run daily at midnight and is associated with the third pipeline, that pipeline will run at midnight every day, starting your IR just before package execution, subsequently executing your package, and immediately stopping your IR just after package execution, so your IR will not be running idly.
-### Create your ADF
-
-1. Sign in to [Azure portal](https://portal.azure.com/).
-2. Click **New** on the left menu, click **Data + Analytics**, and click **Data Factory**.
-
- :::image type="content" source="./media/tutorial-create-azure-ssis-runtime-portal/new-data-factory-menu.png" alt-text="New->DataFactory":::
-
-3. In the **New data factory** page, enter **MyAzureSsisDataFactory** for **Name**.
-
- :::image type="content" source="./media/tutorial-create-azure-ssis-runtime-portal/new-azure-data-factory.png" alt-text="New data factory page":::
-
- The name of your ADF must be globally unique. If you receive the following error, change the name of your ADF (e.g. yournameMyAzureSsisDataFactory) and try creating it again. See [Data Factory - Naming Rules](naming-rules.md) article to learn about naming rules for ADF artifacts.
-
- `Data factory name MyAzureSsisDataFactory is not available`
-
-4. Select your Azure **Subscription** under which you want to create your ADF.
-5. For **Resource Group**, do one of the following steps:
-
- - Select **Use existing**, and select an existing resource group from the drop-down list.
- - Select **Create new**, and enter the name of your new resource group.
-
- To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md) article.
-
-6. For **Version**, select **V2** .
-7. For **Location**, select one of the locations supported for ADF creation from the drop-down list.
-8. Select **Pin to dashboard**.
-9. Click **Create**.
-10. On Azure dashboard, you will see the following tile with status: **Deploying Data Factory**.
-
- :::image type="content" source="media/tutorial-create-azure-ssis-runtime-portal/deploying-data-factory.png" alt-text="deploying data factory tile":::
-
-11. After the creation is complete, you can see your ADF page as shown below.
-
- :::image type="content" source="./media/tutorial-create-azure-ssis-runtime-portal/data-factory-home-page.png" alt-text="Data factory home page":::
-
-12. Click **Author & Monitor** to launch ADF UI/app in a separate tab.
- ### Create your pipelines 1. In the home page, select **Orchestrate**.
- :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the ADF home page.":::
+ :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/orchestrate-button.png" alt-text="Screenshot that shows the Orchestrate button on the Azure Data Factory home page.":::
-2. In **Activities** toolbox, expand **General** menu, and drag & drop a **Web** activity onto the pipeline designer surface. In **General** tab of the activity properties window, change the activity name to **startMyIR**. Switch to **Settings** tab, and do the following actions:
+2. In the **Activities** toolbox, expand the **General** menu, and drag & drop a **Web** activity onto the pipeline designer surface. In the **General** tab of the activity properties window, change the activity name to **startMyIR**. Switch to the **Settings** tab, and do the following actions:
> [!NOTE] > For Azure-SSIS in Azure Synapse, use corresponding Azure Synapse REST API to [Get Integration Runtime status](/rest/api/synapse/integration-runtimes/get), [Start Integration Runtime](/rest/api/synapse/integration-runtimes/start) and [Stop Integration Runtime](/rest/api/synapse/integration-runtimes/stop).
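As a sketch, the Web activity's settings could call the Data Factory REST API to start the IR, authenticating with the factory's managed identity. Subscription, resource group, factory, and IR names below are placeholders:

```json
{
    "name": "startMyIR",
    "type": "WebActivity",
    "typeProperties": {
        "method": "POST",
        "url": "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataFactory/factories/<factory-name>/integrationRuntimes/<ir-name>/start?api-version=2018-06-01",
        "body": "{\"message\":\"Start my IR.\"}",
        "authentication": {
            "type": "MSI",
            "resource": "https://management.azure.com/"
        }
    }
}
```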
Now that your pipelines work as you expected, you can create triggers to run the
4. In **Trigger Run Parameters** page, review any warning, and select **Finish**. 5. Publish the whole ADF settings by selecting **Publish All** in the factory toolbar.
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/publish-all.png" alt-text="Publish All":::
+ :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/publish-all-button.png" alt-text="Screenshot that shows the Publish All button.":::
### Monitor your pipelines and triggers in Azure portal
data-factory How To Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-send-email.md
Previously updated : 06/07/2021 Last updated : 08/10/2022 # Send an email with an Azure Data Factory or Azure Synapse pipeline
data-factory How To Send Notifications To Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-send-notifications-to-teams.md
Previously updated : 09/29/2021
-update: 19/03/2022
Last updated : 08/10/2022 # Send notifications to a Microsoft Teams channel from an Azure Data Factory or Synapse Analytics pipeline
data-factory How To Sqldb To Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-sqldb-to-cosmosdb.md
Previously updated : 05/19/2022 Last updated : 08/10/2022 # Migrate normalized database schema from Azure SQL Database to Azure Cosmos DB denormalized container
data-factory How To Use Azure Key Vault Secrets Pipeline Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-azure-key-vault-secrets-pipeline-activities.md
Previously updated : 10/22/2021 Last updated : 08/10/2022 # Use Azure Key Vault secrets in pipeline activities
data-factory How To Use Sql Managed Instance With Ir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-sql-managed-instance-with-ir.md
Previously updated : 02/15/2022 Last updated : 08/10/2022 # Use Azure SQL Managed Instance with SQL Server Integration Services (SSIS) in Azure Data Factory or Azure Synapse Analytics
data-factory How To Use Trigger Parameterization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-trigger-parameterization.md
Previously updated : 03/02/2021 Last updated : 08/10/2022 # Reference trigger metadata in pipeline runs
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Updates in July include:
- [Defender for Container's VA adds support for the detection of language specific packages (Preview)](#defender-for-containers-va-adds-support-for-the-detection-of-language-specific-packages-preview) - [Protect against the Operations Management Suite vulnerability CVE-2022-29149](#protect-against-the-operations-management-suite-vulnerability-cve-2022-29149) - [Integration with Entra Permissions Management](#integration-with-entra-permissions-management)
+- [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit)
+- [Deprecate API App policies for App Service](#deprecate-api-app-policies-for-app-service)
### General availability (GA) of the Cloud-native security agent for Kubernetes runtime protection
Each Azure subscription, AWS account, and GCP project that you onboard, will now
Learn more about [Entra Permission Management (formerly Cloudknox)](other-threat-protections.md#entra-permission-management-formerly-cloudknox)
+### Key Vault recommendations changed to "audit"
+
+The effect for the Key Vault recommendations listed here was changed to "audit":
+
+| Recommendation name | Recommendation ID |
+| - | |
+| Validity period of certificates stored in Azure Key Vault should not exceed 12 months | fc84abc0-eee6-4758-8372-a7681965ca44 |
+| Key Vault secrets should have an expiration date | 14257785-9437-97fa-11ae-898cfb24302b |
+| Key Vault keys should have an expiration date | 1aabfa0d-7585-f9f5-1d92-ecb40291d9f2 |
++
+### Deprecate API App policies for App Service
+
+We deprecated the following policies in favor of the corresponding existing policies, which already cover API apps:
+
+| To be deprecated | Changing to |
+|--|--|
+|`Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'` | `App Service apps should have 'Client Certificates (Incoming client certificates)' enabled` |
| `Ensure that 'Python version' is the latest, if used as a part of the API app` | `App Service apps that use Python should use the latest 'Python version'` |
+| `CORS should not allow every resource to access your API App` | `App Service apps should not have CORS configured to allow every resource to access your apps` |
+| `Managed identity should be used in your API App` | `App Service apps should use managed identity` |
+| `Remote debugging should be turned off for API Apps` | `App Service apps should have remote debugging turned off` |
+| `Ensure that 'PHP version' is the latest, if used as a part of the API app` | `App Service apps that use PHP should use the latest 'PHP version'`|
+| `FTPS only should be required in your API App` | `App Service apps should require FTPS only` |
| `Ensure that 'Java version' is the latest, if used as a part of the API app` | `App Service apps that use Java should use the latest 'Java version'` |
+| `Latest TLS version should be used in your API App` | `App Service apps should use the latest TLS version` |
+ ## June 2022 Updates in June include:
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--|
-| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | June 2022 |
-| [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit) | June 2022 |
| [Deprecating three VM alerts](#deprecating-three-vm-alerts) | June 2022|
-| [Deprecate API App policies for App Service](#deprecate-api-app-policies-for-app-service) | July 2022 |
| [Change in pricing of Runtime protection for Arc-enabled Kubernetes clusters](#change-in-pricing-of-runtime-protection-for-arc-enabled-kubernetes-clusters) | August 2022 |
| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | September 2022 |
| [Removing security alerts for machines reporting to cross tenant Log Analytics workspaces](#removing-security-alerts-for-machines-reporting-to-cross-tenant-log-analytics-workspaces) | September 2022 |
| [Legacy Assessments APIs deprecation](#legacy-assessments-apis-deprecation) | September 2022 |
-### Changes to recommendations for managing endpoint protection solutions
-
-**Estimated date for change:** August 2022
-
-In August 2021, we added two new **preview** recommendations to deploy and maintain the endpoint protection solutions on your machines. For full details, [see the release note](release-notes-archive.md#two-new-recommendations-for-managing-endpoint-protection-solutions-in-preview).
-
-When the recommendations are released to general availability, they will replace the following existing recommendations:
-
-- **Endpoint protection should be installed on your machines** will replace:
- - [Install endpoint protection solution on virtual machines (key: 83f577bd-a1b6-b7e1-0891-12ca19d1e6df)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/83f577bd-a1b6-b7e1-0891-12ca19d1e6df)
- - [Install endpoint protection solution on your machines (key: 383cf3bc-fdf9-4a02-120a-3e7e36c6bfee)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/383cf3bc-fdf9-4a02-120a-3e7e36c6bfee)
-
-- **Endpoint protection health issues should be resolved on your machines** will replace the existing recommendation that has the same name. The two recommendations have different assessment keys:
- - Assessment key for the **preview** recommendation: 37a3689a-818e-4a0e-82ac-b1392b9bb000
- - Assessment key for the **GA** recommendation: 3bcd234d-c9c7-c2a2-89e0-c01f419c1a8a
-
-Learn more:
-- [Defender for Cloud's supported endpoint protection solutions](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported)
-- [How these recommendations assess the status of your deployed solutions](endpoint-protection-recommendations-technical.md)
-
-### Key Vault recommendations changed to "audit"
-
-**Estimated date for change:** June 2022
-
-The Key Vault recommendations listed here are currently disabled so that they don't impact your secure score. We will change their effect to "audit".
-
-| Recommendation name | Recommendation ID |
-|--|--|
-| Validity period of certificates stored in Azure Key Vault should not exceed 12 months | fc84abc0-eee6-4758-8372-a7681965ca44 |
-| Key Vault secrets should have an expiration date | 14257785-9437-97fa-11ae-898cfb24302b |
-| Key Vault keys should have an expiration date | 1aabfa0d-7585-f9f5-1d92-ecb40291d9f2 |
-
### Deprecating three VM alerts

**Estimated date for change:** June 2022
The following table lists the alerts that will be deprecated during June 2022.
These alerts are used to notify a user about suspicious activity connected to a Kubernetes cluster. The alerts will be replaced with matching alerts that are part of the Microsoft Defender for Cloud Container alerts (`K8S.NODE_ImageBuildOnNode`, `K8S.NODE_KubernetesAPI`, and `K8S.NODE_ContainerSSH`), which will provide improved fidelity and comprehensive context to investigate and act on the alerts. Learn more about alerts for [Kubernetes Clusters](alerts-reference.md).
-### Deprecate API App policies for App Service
-
-**Estimated date for change:** July 2022
-
-We will be deprecating the following policies to corresponding policies that already exist to include API apps:
-
-| To be deprecated | Changing to |
-|--|--|
-|`Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'` | `App Service apps should have 'Client Certificates (Incoming client certificates)' enabled` |
-| `Ensure that 'Python version' is the latest, if used as a part of the API app` | `App Service apps that use Python should use the latest 'Python version` |
-| `CORS should not allow every resource to access your API App` | `App Service apps should not have CORS configured to allow every resource to access your apps` |
-| `Managed identity should be used in your API App` | `App Service apps should use managed identity` |
-| `Remote debugging should be turned off for API Apps` | `App Service apps should have remote debugging turned off` |
-| `Ensure that 'PHP version' is the latest, if used as a part of the API app` | `App Service apps that use PHP should use the latest 'PHP version'`|
-| `FTPS only should be required in your API App` | `App Service apps should require FTPS only` |
-| `Ensure that 'Java version' is the latest, if used as a part of the API app` | `App Service apps that use Java should use the latest 'Java version` |
-| `Latest TLS version should be used in your API App` | `App Service apps should use the latest TLS version` |
-
### Change in pricing of runtime protection for Arc-enabled Kubernetes clusters

**Estimated date for change:** August 2022
Runtime protection is currently a preview feature for Arc-enabled Kubernetes clu
### Multiple changes to identity recommendations
-**Estimated date for change:** July 2022
+**Estimated date for change:** September 2022
Defender for Cloud includes multiple recommendations for improving the management of users and accounts. We'll be making the changes outlined below.
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
In this article, you learned about creating Logic Apps, automating their executi
For related material, see:

-- [The Microsoft Learn module on how to use workflow automation to automate a security response](/learn/modules/resolve-threats-with-azure-security-center/)
+- [The Learn module on how to use workflow automation to automate a security response](/learn/modules/resolve-threats-with-azure-security-center/)
- [Security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md)
- [Security alerts in Microsoft Defender for Cloud](alerts-overview.md)
- [About Azure Logic Apps](../logic-apps/logic-apps-overview.md)
defender-for-iot Tutorial Qradar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-qradar.md
For the integration to work, you will need to set up in the Defender for IoT appl
1. Select **Save**.
+The following is an example of a payload sent to QRadar:
+
+```sample payload
+<9>May 5 12:29:23 sensor_Agent LEEF:1.0|CyberX|CyberX platform|2.5.0|CyberX platform Alert|devTime=May 05 2019 15:28:54 devTimeFormat=MMM dd yyyy HH:mm:ss sev=2 cat=XSense Alerts title=Device is Suspected to be Disconnected (Unresponsive) score=81 reporter=192.168.219.50 rta=0 alertId=6 engine=Operational senderName=sensor Agent UUID=5-1557059334000 site=Site zone=Zone actions=handle dst=192.168.2.2 dstName=192.168.2.2 msg=Device 192.168.2.2 is suspected to be disconnected (unresponsive).
+```
+
## Map notifications to QRadar

The rule must then be mapped on the on-premises management console.
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
To use the control plane APIs:
* You can call the APIs directly by referencing the latest Swagger folder in the [control plane Swagger repo](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins/stable). This folder also includes a folder of examples that show the usage.
* You can currently access SDKs for control APIs in...
 - [.NET (C#)](https://www.nuget.org/packages/Microsoft.Azure.Management.DigitalTwins/) ([reference [auto-generated]](/dotnet/api/overview/azure/digitaltwins/management?view=azure-dotnet&preserve-view=true)) ([source](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/digitaltwins/Microsoft.Azure.Management.DigitalTwins))
- - [Java](https://search.maven.org/search?q=a:azure-mgmt-digitaltwins) ([reference [auto-generated]](/java/api/overview/azure/digitaltwins?view=azure-java-stable&preserve-view=true)) ([source](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/digitaltwins))
+ - [Java](https://search.maven.org/search?q=a:azure-mgmt-digitaltwins) ([reference [auto-generated]](/java/api/overview/azure/digital-twins)) ([source](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/digitaltwins))
 - [JavaScript](https://www.npmjs.com/package/@azure/arm-digitaltwins) ([source](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/digitaltwins/arm-digitaltwins))
 - [Python](https://pypi.org/project/azure-mgmt-digitaltwins/) ([source](https://github.com/Azure/azure-sdk-for-python/tree/release/v3/sdk/digitaltwins/azure-mgmt-digitaltwins))
 - [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/services/digitaltwins/mgmt) ([source](https://github.com/Azure/azure-sdk-for-go/tree/main/services/digitaltwins/mgmt))
To use the data plane APIs:
 - You can see detailed information and usage examples by continuing to the [.NET (C#) SDK (data plane)](#net-c-sdk-data-plane) section of this article.
* You can use the Java SDK. To use the Java SDK...
 - You can view and install the package from Maven: [`com.azure:azure-digitaltwins-core`](https://search.maven.org/artifact/com.azure/azure-digitaltwins-core/1.0.0/jar)
- - You can view the [SDK reference documentation](/java/api/overview/azure/digitaltwins)
+ - You can view the [SDK reference documentation](/java/api/overview/azure/digital-twins)
 - You can find the SDK source in GitHub: [Azure IoT Digital Twins client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/digitaltwins/azure-digitaltwins-core)
* You can use the JavaScript SDK. To use the JavaScript SDK...
 - You can view and install the package from npm: [Azure Digital Twins Core client library for JavaScript](https://www.npmjs.com/package/@azure/digital-twins-core).
digital-twins Concepts Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-azure-digital-twins-explorer.md
Azure Digital Twins Explorer is an open-source tool that welcomes contributions
To view the source code for the tool and read detailed instructions on how to contribute to the code, visit its GitHub repository: [digital-twins-explorer](https://github.com/Azure-Samples/digital-twins-explorer).
-To view instructions for contributing to this documentation, visit the [Microsoft contributor guide](/contribute/).
+To view instructions for contributing to this documentation, review our [contributor guide](/contribute/).
## Other considerations
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-scenario-status.md
The following tables show which migration scenarios are supported when using Azu
### Offline (one-time) migration support
-The following table shows Azure Database Migration Service support for offline migrations.
+The following table shows Azure Database Migration Service support for **offline** migrations.
| Target | Source | Support | Status |
| - | - |:-:|:-:|
| **Azure SQL DB** | SQL Server | ✔ | GA |
-| | RDS SQL | ✔ | GA |
+| | Amazon RDS SQL Server | ✔ | PP |
| | Oracle | X | |
| **Azure SQL DB MI** | SQL Server | ✔ | GA |
-| | RDS SQL | ✔ | GA |
+| | Amazon RDS SQL Server | X | |
| | Oracle | X | |
| **Azure SQL VM** | SQL Server | ✔ | GA |
+| | Amazon RDS SQL Server | X | |
| | Oracle | X | |
| **Azure Cosmos DB** | MongoDB | ✔ | GA |
| **Azure DB for MySQL - Single Server** | MySQL | ✔ | GA |
-| | RDS MySQL | ✔ | GA |
+| | Amazon RDS MySQL | ✔ | GA |
| | Azure DB for MySQL <sup>1</sup> | ✔ | GA |
| **Azure DB for MySQL - Flexible Server** | MySQL | ✔ | GA |
-| | RDS MySQL | ✔ | GA |
+| | Amazon RDS MySQL | ✔ | GA |
| | Azure DB for MySQL <sup>1</sup> | ✔ | GA |
| **Azure DB for PostgreSQL - Single server** | PostgreSQL | X | |
-| | RDS PostgreSQL | X | |
+| | Amazon RDS PostgreSQL | X | |
| **Azure DB for PostgreSQL - Flexible server** | PostgreSQL | X | |
-| | RDS PostgreSQL | X | |
+| | Amazon RDS PostgreSQL | X | |
| **Azure DB for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | X | |
-| | RDS PostgreSQL | X | |
+| | Amazon RDS PostgreSQL | X | |
1. If your source database is already in Azure PaaS (for example, Azure DB for MySQL or Azure DB for PostgreSQL), choose the corresponding engine when creating your migration activity. For example, if you're migrating from Azure DB for MySQL - Single Server to Azure DB for MySQL - Flexible Server, choose MySQL as the source engine during scenario creation. If you're migrating from Azure DB for PostgreSQL - Single Server to Azure DB for PostgreSQL - Flexible Server, choose PostgreSQL as the source engine during scenario creation.

### Online (continuous sync) migration support
-The following table shows Azure Database Migration Service support for online migrations.
+The following table shows Azure Database Migration Service support for **online** migrations.
| Target | Source | Support | Status |
| - | - |:-:|:-:|
| **Azure SQL DB** | SQL Server | X | |
-| | RDS SQL | X | |
+| | Amazon RDS SQL | X | |
| | Oracle | X | |
| **Azure SQL DB MI** | SQL Server | ✔ | GA |
-| | RDS SQL | ✔ | GA |
+| | Amazon RDS SQL | X | |
| | Oracle | X | |
-| **Azure SQL VM** | SQL Server <sup>2</sup> | X | |
+| **Azure SQL VM** | SQL Server | ✔ | GA |
+| | Amazon RDS SQL | X | |
| | Oracle | X | |
| **Azure Cosmos DB** | MongoDB | ✔ | GA |
| **Azure DB for MySQL** | MySQL | X | |
-| | RDS MySQL | X | |
+| | Amazon RDS MySQL | X | |
| **Azure DB for PostgreSQL - Single server** | PostgreSQL | ✔ | GA |
| | Azure DB for PostgreSQL - Single server <sup>1</sup> | ✔ | GA |
-| | RDS PostgreSQL | ✔ | GA |
+| | Amazon RDS PostgreSQL | ✔ | GA |
| **Azure DB for PostgreSQL - Flexible server** | PostgreSQL | ✔ | GA |
| | Azure DB for PostgreSQL - Single server <sup>1</sup> | ✔ | GA |
-| | RDS PostgreSQL | ✔ | GA |
+| | Amazon RDS PostgreSQL | ✔ | GA |
| **Azure DB for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | ✔ | GA |
-| | RDS PostgreSQL | ✔ | GA |
+| | Amazon RDS PostgreSQL | ✔ | GA |
1. If your source database is already in Azure PaaS (for example, Azure DB for MySQL or Azure DB for PostgreSQL), choose the corresponding engine when creating your migration activity. For example, if you're migrating from Azure DB for MySQL - Single Server to Azure DB for MySQL - Flexible Server, choose MySQL as the source engine during scenario creation. If you're migrating from Azure DB for PostgreSQL - Single Server to Azure DB for PostgreSQL - Flexible Server, choose PostgreSQL as the source engine during scenario creation.
education-hub Get Started Education Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/get-started-education-hub.md
# Getting started with Azure Education Hub
-The Education Hub Get Started page provides quick links upon first landing into the Education Hub. There, you can find information about how to set up your course, learn about different services through Microsoft Learn, or easily deploy your first services through Quickstart Templates.
+The Education Hub Get Started page provides quick links upon first landing into the Education Hub. There, you can find information about how to set up your course, learn about different services, or easily deploy your first services through Azure Quickstart Templates.
:::image type="content" source="media/get-started-education-hub/get-started-page.png" alt-text="The Get Started page in the Azure Education Hub." border="false":::
education-hub Hub Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/hub-overview-page.md
Your main landing page in the Azure Education Hub is the Overview page. This pag
1. **Labs** shows the total number of active labs that have been passed out to students.
1. **Action needed** lists any actions you need to complete, such as accepting a Lab invitation.
1. **Software** lists free software available to download as an Educator.
-1. **Learning** links to free Azure learning pathways available through Microsoft Learn.
+1. **Learning** links to free Azure learning paths and modules.
1. **Quickstart Templates** includes Azure templates to help speed up and simplify deployment for common tasks.

## Next steps
event-hubs Event Hubs Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-scalability.md
For more information about the auto-inflate feature, see [Automatically scale th
## Processing units
- [Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation with in a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a *Processing Unit*(PU). You can purchase 1, 2, 4, 8 or 16 processing Units for each Event Hubs Premium namespace.
+ [Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation within a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a *Processing Unit* (PU). You can purchase 1, 2, 4, 8, or 16 processing units for each Event Hubs Premium namespace.
How much you can ingest and stream with a processing unit depends on various factors such as your producers, consumers, the rate at which you're ingesting and processing, and much more.
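As a rough illustration, the following Azure CLI sketch creates a Premium namespace with two PUs; the resource group and namespace names are hypothetical placeholders, and it assumes the `--capacity` parameter sets the PU count for the Premium SKU:

```azurecli
# Create an Event Hubs Premium namespace with 2 processing units (PUs).
# The resource group and namespace names below are hypothetical placeholders.
az eventhubs namespace create \
    --resource-group my-resource-group \
    --name my-premium-namespace \
    --location eastus \
    --sku Premium \
    --capacity 2
```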
frontdoor End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md
From a security standpoint, Microsoft doesn't recommend disabling certificate su
* Azure Front Door Standard and Premium - it is present in the origin settings.
* Azure Front Door (classic) - it is present under the Azure Front Door settings in the Azure portal and in the BackendPoolsSettings in the Azure Front Door API.
- under the Azure Front Door settings in the Azure portal and on the BackendPoolsSettings in the Azure Front Door API.
-
## Frontend TLS connection (Client to Front Door)

To enable the HTTPS protocol for secure delivery of content on an Azure Front Door custom domain, you can choose to use a certificate that is managed by Azure Front Door or use your own certificate.
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
Azure API for FHIR is provisioned.
| West US 2 | 40.64.135.77 |

> [!NOTE]
-> The above steps are similar to the configuration steps described in the document How to convert data to FHIR (Preview). For more information, see [Host and use templates](../../healthcare-apis/fhir/convert-data.md#host-and-use-templates)
+> The above steps are similar to the configuration steps described in the document **Converting your data to FHIR**. For more information, see [Configure ACR firewall](../../healthcare-apis/fhir/convert-data.md#configure-acr-firewall).
### Allowing specific IP addresses for the Azure storage account in the same region
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
Previously updated : 06/06/2022 Last updated : 08/03/2022
FHIR service is provisioned.
| West US 2 | 40.64.135.77 |

> [!NOTE]
-> The above steps are similar to the configuration steps described in the document How to convert data to FHIR. For more information, see [Host and use templates](./convert-data.md#host-and-use-templates)
+> The above steps are similar to the configuration steps described in the document **Converting your data to FHIR**. For more information, see [Configure ACR firewall](./convert-data.md#configure-acr-firewall).
### Allowing specific IP addresses for the Azure storage account in the same region
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data.md
Title: Data conversion for Azure Health Data Services
-description: Use the $convert-data endpoint and customize-converter templates to convert data in Azure Health Data Services
+ Title: FHIR data conversion for Azure Health Data Services
+description: Use the $convert-data endpoint and custom converter templates to convert data to FHIR in Azure Health Data Services.
Previously updated : 06/06/2022 Last updated : 08/02/2022
# Converting your data to FHIR
-The `$convert-data` custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports three types of data conversion: **C-CDA to FHIR**, **HL7v2 to FHIR**, **JSON to FHIR**.
+The `$convert-data` custom endpoint in the FHIR service enables converting health data from different formats to FHIR. The `$convert-data` operation uses [Liquid](https://shopify.github.io/liquid/) templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project for FHIR data conversion mapping. You can customize these conversion templates as needed. Currently the `$convert-data` operation supports three types of data conversion: **HL7v2 to FHIR**, **C-CDA to FHIR**, and **JSON to FHIR** (JSON to FHIR templates are intended for custom conversion mapping).
> [!NOTE]
-> `$convert-data` endpoint can be used as a component within an ETL pipeline for the conversion of raw healthcare data from legacy formats into FHIR format. However, it is not an ETL pipeline in itself. We recommend you to use an ETL engine such as Logic Apps or Azure Data Factory for a complete workflow in preparing your FHIR data to be persisted into the FHIR server. The workflow might include: data reading and ingestion, data validation, making $convert-data API calls, data pre/post-processing, data enrichment, and data de-duplication.
+> The `$convert-data` endpoint can be used as a component within an ETL pipeline for the conversion of health data formats into the FHIR format. However, the `$convert-data` operation is not an ETL pipeline in itself. We recommend you use an ETL engine based on Azure Logic Apps or Azure Data Factory for a complete workflow in converting your data to FHIR. The workflow might include: data reading and ingestion, data validation, making `$convert-data` API calls, data pre/post-processing, data enrichment, data de-duplication, and loading the data for persistence in the FHIR service.
-## Use the $convert-data endpoint
+## Using the `$convert-data` endpoint
-The `$convert-data` operation is integrated into the FHIR service to run as part of the service. After enabling `$convert-data` in your server, you can make API calls to the server to convert your data into FHIR:
+The `$convert-data` operation is integrated into the FHIR service as a RESTful API action. Calling the `$convert-data` endpoint causes the FHIR service to perform a conversion on health data sent in an API request:
-`https://<<FHIR service base URL>>/$convert-data`
+`POST {{fhirurl}}/$convert-data`
-### Parameter Resource
+The health data is delivered to the FHIR service in the body of the `$convert-data` request. If the request is successful, the FHIR service will return a FHIR `Bundle` response with the data converted to FHIR.
-$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource in the request body as described in the table below. In the API call request body, you would include the following parameters:
+### Parameters Resource
+
+A `$convert-data` API call packages the health data for conversion inside a JSON-formatted [Parameters resource](http://hl7.org/fhir/parameters.html) in the body of the request. See the table below for a description of the parameters.
| Parameter Name | Description | Accepted values |
| -- | -- | -- |
-| inputData | Data to be converted. | For `Hl7v2`: string <br> For `Ccda`: XML <br> For `Json`: JSON |
-| inputDataType | Data type of input. | ```HL7v2```, ``Ccda``, ``Json`` |
-| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It's the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For ***default/sample*** templates: <br> **HL7v2** templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br> **C-CDA** templates: <br> ``microsofthealth/ccdatemplates:default`` <br> **JSON** templates: <br> ``microsofthealth/jsontemplates:default`` <br><br> For ***custom*** templates: <br> \<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
-| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br> "ADT_A01", "ADT_A02", "ADT_A03", "ADT_A04", "ADT_A05", "ADT_A08", "ADT_A11", "ADT_A13", "ADT_A14", "ADT_A15", "ADT_A16", "ADT_A25", "ADT_A26", "ADT_A27", "ADT_A28", "ADT_A29", "ADT_A31", "ADT_A47", "ADT_A60", "OML_O21", "ORU_R01", "ORM_O01", "VXU_V04", "SIU_S12", "SIU_S13", "SIU_S14", "SIU_S15", "SIU_S16", "SIU_S17", "SIU_S26", "MDM_T01", "MDM_T02"<br><br> For **C-CDA**:<br> "CCD", "ConsultationNote", "DischargeSummary", "HistoryandPhysical", "OperativeNote", "ProcedureNote", "ProgressNote", "ReferralNote", "TransferSummary" <br><br> For **JSON**: <br> "ExamplePatient", "Stu3ChargeItem" <br> |
+| `inputData` | Data payload to be converted to FHIR. | For `Hl7v2`: string <br> For `Ccda`: XML <br> For `Json`: JSON |
+| `inputDataType` | Type of data input. | ```HL7v2```, ``Ccda``, ``Json`` |
+| `templateCollectionReference` | Reference to an [OCI image](https://github.com/opencontainers/image-spec) template collection in [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). The reference is to an image containing Liquid templates to use for conversion. This can be a reference either to default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting them on ACR, and registering to the FHIR service. | For ***default/sample*** templates: <br> **HL7v2** templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br> **C-CDA** templates: <br> ``microsofthealth/ccdatemplates:default`` <br> **JSON** templates: <br> ``microsofthealth/jsontemplates:default`` <br><br> For ***custom*** templates: <br> `<RegistryServer>/<imageName>@<imageDigest>`, `<RegistryServer>/<imageName>:<imageTag>` |
+| `rootTemplate` | The root template to use while transforming the data. | For **HL7v2**:<br> "ADT_A01", "ADT_A02", "ADT_A03", "ADT_A04", "ADT_A05", "ADT_A08", "ADT_A11", "ADT_A13", "ADT_A14", "ADT_A15", "ADT_A16", "ADT_A25", "ADT_A26", "ADT_A27", "ADT_A28", "ADT_A29", "ADT_A31", "ADT_A47", "ADT_A60", "OML_O21", "ORU_R01", "ORM_O01", "VXU_V04", "SIU_S12", "SIU_S13", "SIU_S14", "SIU_S15", "SIU_S16", "SIU_S17", "SIU_S26", "MDM_T01", "MDM_T02"<br><br> For **C-CDA**:<br> "CCD", "ConsultationNote", "DischargeSummary", "HistoryandPhysical", "OperativeNote", "ProcedureNote", "ProgressNote", "ReferralNote", "TransferSummary" <br><br> For **JSON**: <br> "ExamplePatient", "Stu3ChargeItem" <br> |
> [!NOTE]
-> JSON templates are sample templates for use, not "default" templates that adhere to any pre-defined JSON message types. JSON doesn't have any standardized message types, unlike HL7v2 messages or C-CDA documents. Therefore, instead of default templates we provide you with some sample templates that you can use as a starting guide for your own customized templates.
+> JSON templates are sample templates for use in building your own conversion mappings ΓÇô not "default" templates that adhere to any pre-defined health data message types. JSON itself is not specified as a health data format, unlike HL7v2 or C-CDA. Therefore, instead of "default" JSON templates, we provide you with some sample JSON templates that you can use as a starting guide for your own customized mappings.
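To make the parameters concrete, here's a minimal sketch of a request body that targets the default HL7v2 templates; the `inputData` value is a truncated, hypothetical HL7v2 ADT message, not a complete clinical sample:

```json
{
    "resourceType": "Parameters",
    "parameter": [
        {
            "name": "inputData",
            "valueString": "MSH|^~\\&|SIMHOSP|SFAC|RAPP|RFAC|20200508131015||ADT^A01|517|T|2.3|||AL||44|ASCII"
        },
        { "name": "inputDataType", "valueString": "Hl7v2" },
        { "name": "templateCollectionReference", "valueString": "microsofthealth/fhirconverter:default" },
        { "name": "rootTemplate", "valueString": "ADT_A01" }
    ]
}
```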
> [!WARNING]
> Default templates are released under MIT License and are **not** supported by Microsoft Support.
>
-> Default templates are provided only to help you get started quickly. They may get updated when we update versions of the FHIR service. Therefore, you must verify the conversion behavior and **host your own copy of templates** on an Azure Container Registry, register those to the FHIR service, and use in your API calls in order to have consistent data conversion behavior across the different versions of services.
+> Default templates are provided only to help you get started with your data conversion workflow. These default templates are not intended for production and may change at any point when Microsoft releases updates for the FHIR service. In order to have consistent data conversion behavior across different versions of the FHIR service, you must 1) **host your own copy of templates** in an Azure Container Registry instance, 2) register the templates to the FHIR service, 3) use your registered templates in your API calls, and 4) verify that the conversion behavior meets your requirements.
#### Sample Request
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
"id": "9d697ec3-48c3-3e17-db6a-29a1765e22c6", ... ...
+ }
"request": { "method": "PUT", "url": "Location/50becdb5-ff56-56c6-40a1-6d554dca80f0"
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
## Customize templates
-You can use the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) for Visual Studio Code to customize the templates as per your needs. The extension provides an interactive editing experience, and makes it easy to download Microsoft-published templates and sample data. Refer to the documentation in the extension for more details.
+You can use the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) for Visual Studio Code to customize templates according to your specific requirements. The extension provides an interactive editing experience and makes it easy to download Microsoft-published templates and sample data. Refer to the extension documentation for more details.
+
+## Host your own templates
-## Host and use templates
+It's recommended that you host your own copy of templates in an Azure Container Registry (ACR) instance. There are six steps involved in hosting your own templates and using them for `$convert-data` operations:
-It's recommended that you host your own copy of templates on ACR. There are four steps involved in hosting your own copy of templates and using those in the $convert-data operation:
+1. Create an Azure Container Registry instance.
+2. Push the templates to your Azure Container Registry.
+3. Enable Managed Identity in your FHIR service instance.
+4. Provide ACR access to the FHIR service Managed Identity.
+5. Register the ACR server in the FHIR service.
+6. Optionally configure ACR firewall for secure access.
-1. Push the templates to your Azure Container Registry.
-1. Enable Managed Identity on your FHIR service instance.
-1. Provide access of the ACR to the FHIR service Managed Identity.
-1. Register the ACR servers in the FHIR service.
-1. Optionally configure ACR firewall for secure access.
+### Create an ACR instance
+
+Read the [Introduction to Container registries in Azure](../../container-registry/container-registry-intro.md) and follow the instructions for creating your own ACR instance. It's recommended to place your ACR instance in the same resource group where your FHIR service is located.
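For example, a minimal Azure CLI sketch for this step might look like the following; the resource group and registry names are hypothetical placeholders:

```azurecli
# Create a container registry in the same resource group as the FHIR service.
# The resource group and registry names below are hypothetical placeholders.
az acr create \
    --resource-group my-fhir-rg \
    --name myfhirtemplates \
    --sku Basic
```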
### Push templates to Azure Container Registry
-After creating an ACR instance, you can use the _FHIR Converter: Push Templates_ command in the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) to push the customized templates to the ACR. Alternatively, you can use the [Template Management CLI tool](https://github.com/microsoft/FHIR-Converter/blob/main/docs/TemplateManagementCLI.md) for this purpose.
+After creating an ACR instance, you can use the _FHIR Converter: Push Templates_ command in the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) to push your custom templates to your ACR instance. Alternatively, you can use the [Template Management CLI tool](https://github.com/microsoft/FHIR-Converter/blob/main/docs/TemplateManagementCLI.md) for this purpose.
-### Enable Managed Identity on FHIR service
+### Enable Managed Identity in the FHIR service
-Browse to your instance of FHIR service service in the Azure portal, and then select the **Identity** blade.
-Change the status to **On** to enable managed identity in FHIR service.
+Browse to your instance of the FHIR service in Azure portal and select the **Identity** blade.
+Change the status to **On** to enable managed identity in the FHIR service.
[ ![Screen image of Enable Managed Identity.](media/convert-data/fhir-mi-enabled.png) ](media/convert-data/fhir-mi-enabled.png#lightbox)
-### Provide access of the ACR to FHIR service
+### Provide ACR access to the FHIR service
-1. Select **Access control (IAM)**.
+1. In your resource group, go to your **Container registry** instance and select the **Access control (IAM)** blade.
-1. Select **Add > Add role assignment**. If the **Add role assignment** option is grayed out, ask your Azure administrator to assign you permission to perform this task.
+2. Select **Add > Add role assignment**. If the **Add role assignment** option is grayed out, ask your Azure administrator to assign you permission to perform this task.
:::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Role** tab, select the [AcrPull](../../role-based-access-control/built-in-roles.md#acrpull) role.
+3. On the **Role** tab, select the [AcrPull](../../role-based-access-control/built-in-roles.md#acrpull) role.
[![Screen shot showing user interface of Add role assignment page.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox)
-1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+4. On the **Members** tab, select **Managed identity**, and then click **Select members**.
-1. Select your Azure subscription.
+5. Select your Azure subscription.
-1. Select **System-assigned managed identity**, and then select the FHIR service.
+6. Select **System-assigned managed identity**, and then select the FHIR service.
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+7. On the **Review + assign** tab, click **Review + assign** to assign the role.
For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
-### Register the ACR servers in FHIR service
+### Register the ACR server in FHIR service
-You can register the ACR server using the Azure portal, or using CLI.
+You can register the ACR server using the Azure portal, or using the CLI.
#### Registering the ACR server using Azure portal
-Browse to the **Artifacts** blade under **Data transformation** in your FHIR service instance. You'll see the list of currently registered ACR servers. Select **Add**, and then select your registry server from the drop-down menu. You'll need to select **Save** for the registration to take effect. It may take a few minutes to apply the change and restart your instance.
+Browse to the **Artifacts** blade under **Data transformation** in your FHIR service instance. You'll see the list of currently registered ACR servers. Select **Add**, and then select your registry server from the drop-down menu. You'll need to click **Save** for the registration to take effect. It may take a few minutes to apply the change.
-#### Registering the ACR server using CLI
+#### Registering the ACR server using the CLI
You can register up to 20 ACR servers in the FHIR service.
-Install the Azure Health Data Services CLI from Azure PowerShell if needed:
+Install the Azure Health Data Services CLI if needed:
```azurecli
az extension add -n healthcareapis
```
-Register the acr servers to FHIR service following the examples below:
+Register the ACR servers to the FHIR service following the examples below:
##### Register a single ACR server
az healthcareapis acr add --login-servers "fhiracr2021.azurecr.io fhiracr2020.az
```

### Configure ACR firewall
-Select **Networking** of the Azure storage account from the portal.
+In your Azure portal, select **Networking** for the ACR instance.
[ ![Screen image of configure ACR firewall.](media/convert-data/networking-container-registry.png) ](media/convert-data/networking-container-registry.png#lightbox)
-Select **Selected networks**.
+Click the **Selected networks** button.
Under the **Firewall** section, specify the IP address in the **Address range** box. Add IP ranges to allow access from the internet or your on-premises networks.
-In the table below, you'll find the IP address for the Azure region where the FHIR service service is provisioned.
+In the table below, you'll find the IP address for the Azure region where the FHIR service is provisioned.
|**Azure Region** |**Public IP Address** |
|:-|:-|
In the table below, you'll find the IP address for the Azure region where the FH
| West US 2 | 40.64.135.77 |

> [!NOTE]
-> The above steps are similar to the configuration steps described in the document How to configure FHIR export settings. For more information, see [Configure export settings](./configure-export-data.md)
+> The above steps are similar to the configuration steps described in the document **Configure export settings and set up a storage account**. For more information, see [Configure settings for export](./configure-export-data.md).
-For a private network access (that is, private link), you can also disable the public network access of ACR.
-* Select Networking blade of the Azure storage account from the portal.
-* Select `Disabled`.
-* Select Firewall exception: Allow trusted Microsoft services to access this container registry.
+For private network access (that is, a private link), you can also disable the public network access to your ACR instance.
+* Select the **Networking** blade for the Container registry in the portal.
+* Make sure you are in the **Public access** tab.
+* Select **Disabled**.
+* Under **Firewall exception** select **Allow trusted Microsoft services to access this container registry**.
[ ![Screen image of private link for ACR.](media/convert-data/configure-private-network-container-registry.png) ](media/convert-data/configure-private-network-container-registry.png#lightbox)
-### Verify
+### Verify `$convert-data` operation
-Make a call to the $convert-data API specifying your template reference in the templateCollectionReference parameter.
+Make a call to the `$convert-data` API specifying your template reference in the `templateCollectionReference` parameter.
`<RegistryServer>/<imageName>@<imageDigest>`
+You should receive a `Bundle` response containing the health data converted into the FHIR format.
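As a sketch, a `$convert-data` call against a custom template image might look like the following; the registry server, image name, and digest are hypothetical placeholders:

```rest
POST {{fhirurl}}/$convert-data
Content-Type: application/json

{
    "resourceType": "Parameters",
    "parameter": [
        { "name": "inputData", "valueString": "<your input data>" },
        { "name": "inputDataType", "valueString": "Hl7v2" },
        { "name": "templateCollectionReference", "valueString": "myfhirtemplates.azurecr.io/hl7v2templates@sha256:<imageDigest>" },
        { "name": "rootTemplate", "valueString": "ADT_A01" }
    ]
}
```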
+
## Next steps
-In this article, you've learned about the $convert-data endpoint and customize-converter templates to convert data in the Azure Health Data Services. For more information about how to export FHIR data, see
+In this article, you've learned about the `$convert-data` endpoint for converting health data to FHIR using the FHIR service in Azure Health Data Services. For information about how to export FHIR data from the FHIR service, see
>[!div class="nextstepaction"] >[Export data](export-data.md)
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/copy-to-synapse.md
You can configure the server to export the data to any kind of Azure storage acc
#### Using `$export` command
-After configuring your FHIR server, you can follow the [documentation](./export-data.md#using-export-command) to export your FHIR resources at System, Patient, or Group level. For example, you can export all of your FHIR data related to the patients in a `Group` with the following `$export` command, in which you specify your ADL Gen 2 blob storage name in the field `{{BlobContainer}}`:
+After configuring your FHIR server, you can follow the [documentation](./export-data.md#calling-the-export-endpoint) to export your FHIR resources at System, Patient, or Group level. For example, you can export all of your FHIR data related to the patients in a `Group` with the following `$export` command, in which you specify your ADL Gen 2 blob storage name in the field `{{BlobContainer}}`:
```rest
https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}
```
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md
Previously updated : 06/06/2022 Last updated : 08/03/2022

# How to export FHIR data
+The bulk `$export` operation in the FHIR service allows users to export data as described in the [HL7 FHIR Bulk Data Access specification](https://hl7.org/fhir/uv/bulkdata/export/index.html).
-The Bulk Export feature allows data to be exported from the FHIR Server per the [FHIR specification](https://hl7.org/fhir/uv/bulkdata/export/index.html).
+Before attempting to use `$export`, make sure that your FHIR service is configured to connect with an ADLS Gen2 storage account. For configuring export settings and creating an ADLS Gen2 storage account, refer to the [Configure settings for export](./configure-export-data.md) page.
-Before using $export, you'll want to make sure that the FHIR service is configured to use it. For configuring export settings and creating Azure storage account, refer to [the configure export data page](configure-export-data.md).
+## Calling the `$export` endpoint
-## Using $export command
+After setting up the FHIR service to connect with an ADLS Gen2 storage account, you can call the `$export` endpoint and the FHIR service will export data into a blob storage container inside the storage account. The example request below exports all resources into a container specified by name (`{{containerName}}`). Note that the container in the ADLS Gen2 account must be created beforehand if you want to specify the `{{containerName}}` in the request.
-After configuring the FHIR service for export, you can use the $export command to export the data out of the service. The data will be stored into the storage account you specified while configuring export. To learn how to invoke $export command in FHIR server, read documentation on the [HL7 FHIR $export specification](https://hl7.org/Fhir/uv/bulkdata/export/index.html).
+```
+GET {{fhirurl}}/$export?_container={{containerName}}
+```
+
+If you don't specify a container name in the request (e.g., by calling `GET {{fhirurl}}/$export`), then a new container with an auto-generated name will be created for the exported data.
+
+For general information about the FHIR `$export` API spec, please see the [HL7 FHIR Export Request Flow](https://hl7.org/fhir/uv/bulkdata/export/index.html#request-flow) documentation.
**Jobs stuck in a bad state**
-In some situations, there's a potential for a job to be stuck in a bad state. This can occur especially if the storage account permissions haven't been set up properly. One way to validate if your export is successful is to check your storage account to see if the corresponding container (that is, ndjson) files are present. If they aren't present, and there are no other export jobs running, then there's a possibility the current job is stuck in a bad state. You should cancel the export job by sending a cancellation request and try requeuing the job again. Our default run time for an export in bad state is 10 minutes before it will stop and move to a new job or retry the export.
+In some situations, there's a potential for a job to be stuck in a bad state while attempting to `$export` data from the FHIR service. This can occur especially if the ADLS Gen2 storage account permissions haven't been set up correctly. One way to check the status of your `$export` operation is to go to your storage account's **Storage browser** and see if any `.ndjson` files are present in the export container. If the files aren't present and there are no other `$export` jobs running, then there's a possibility the current job is stuck in a bad state. In this case, you can cancel the `$export` job by calling the FHIR service API with a `DELETE` request. Later you can requeue the `$export` job and try again. Information about canceling an `$export` operation can be found in the [Bulk Data Delete Request](https://hl7.org/fhir/uv/bulkdata/export/index.html#bulk-data-delete-request) documentation from HL7.
-The FHIR service supports $export at the following levels:
-* [System](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointsystem-level-export): `GET https://<<FHIR service base URL>>/$export>>`
-* [Patient](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointall-patients): `GET https://<<FHIR service base URL>>/Patient/$export>>`
-* [Group of patients*](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointgroup-of-patients) - FHIR service exports all related resources but doesn't export the characteristics of the group: `GET https://<<FHIR service base URL>>/Group/[ID]/$export>>`
+> [!NOTE]
+> In the FHIR service, the default time for an `$export` operation to idle in a bad state is 10 minutes before the service will stop the operation and move to a new job.
-When data is exported, a separate file is created for each resource type. To ensure that the exported files don't become too large. We create a new file after the size of a single exported file becomes larger than 64 MB. The result is that you may get multiple files for each resource type, which will be enumerated (that is, Patient-1.ndjson, Patient-2.ndjson).
+The FHIR service supports `$export` at the following levels:
+* [System](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointsystem-level-export): `GET {{fhirurl}}/$export`
+* [Patient](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointall-patients): `GET {{fhirurl}}/Patient/$export`
+* [Group of patients*](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointgroup-of-patients) – *The FHIR service exports all referenced resources but doesn't export the characteristics of the group resource itself*: `GET {{fhirurl}}/Group/[ID]/$export`
+When data is exported, a separate file is created for each resource type. The FHIR service will create a new file when the size of a single exported file exceeds 64 MB. The result is that you may get multiple files for a resource type, which will be enumerated (e.g., `Patient-1.ndjson`, `Patient-2.ndjson`).
> [!Note]
-> `Patient/$export` and `Group/[ID]/$export` may export duplicate resources if the resource is in a compartment of more than one resource, or is in multiple groups.
+> `Patient/$export` and `Group/[ID]/$export` may export duplicate resources if a resource is in multiple groups or in a compartment of more than one resource.
-In addition, checking the export status through the URL returned by the location header during the queuing is supported along with canceling the actual export job.
+In addition to checking the presence of exported files in your storage account, you can also check your `$export` operation status through the URL in the `Content-Location` header returned in the FHIR service response. See the HL7 [Bulk Data Status Request](https://hl7.org/fhir/uv/bulkdata/export/index.html#bulk-data-status-request) documentation for more information.
### Exporting FHIR data to ADLS Gen2
-Currently we support $export for ADLS Gen2 enabled storage accounts, with the following limitation:
+Currently the FHIR service supports `$export` to ADLS Gen2 storage accounts, with the following limitations:
-- User can't take advantage of [hierarchical namespaces](../../storage/blobs/data-lake-storage-namespace.md), yet there isn't a way to target export to a specific subdirectory within the container. We only provide the ability to target a specific container (where we create a new folder for each export).
-- Once an export is complete, we never export anything to that folder again, since subsequent exports to the same container will be inside a newly created folder.
+- ADLS Gen2 provides [hierarchical namespaces](../../storage/blobs/data-lake-storage-namespace.md), yet there isn't a way to target `$export` operations to a specific subdirectory within a container. The FHIR service is only able to specify the destination container for the export (where a new folder for each `$export` operation is created).
+- Once an `$export` operation is complete and all data has been written inside a folder, the FHIR service doesn't export anything to that folder again since subsequent exports to the same container will be inside a newly created folder.
-To export data to storage accounts behind the firewalls, see [Configure settings for export](configure-export-data.md).
+To export data to a storage account behind a firewall, see [Configure settings for export](configure-export-data.md).
## Settings and parameters

### Headers
-There are two required header parameters that must be set for $export jobs. The values are defined by the current [$export specification](https://hl7.org/Fhir/uv/bulkdata/export/index.html#headers).
-* **Accept** - application/fhir+json
-* **Prefer** - respond-async
+There are two required header parameters that must be set for `$export` jobs. The values are set according to the current HL7 [$export specification](https://hl7.org/Fhir/uv/bulkdata/export/index.html#headers).
+* **Accept** - `application/fhir+json`
+* **Prefer** - `respond-async`
### Query parameters
-The FHIR service supports the following query parameters. All of these parameters are optional:
+The FHIR service supports the following query parameters for filtering exported data. All of these parameters are optional.
|Query parameter | Defined by the FHIR Spec? | Description|
|---|---|---|
-| \_outputFormat | Yes | Currently supports three values to align to the FHIR Spec: application/fhir+ndjson, application/ndjson, or just ndjson. All export jobs will return `ndjson` and the passed value has no effect on code behavior. |
-| \_since | Yes | Allows you to only export resources that have been modified since the time provided |
-| \_type | Yes | Allows you to specify which types of resources will be included. For example, \_type=Patient would return only patient resources|
-| \_typeFilter | Yes | To request finer-grained filtering, you can use \_typeFilter along with the \_type parameter. The value of the _typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results |
-| \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder into that container. If the container isn't specified, the data will be exported to a new container. |
+| `_outputFormat` | Yes | Currently supports three values to align to the FHIR Spec: `application/fhir+ndjson`, `application/ndjson`, or just `ndjson`. All export jobs will return `.ndjson` files and the passed value has no effect on code behavior. |
+| `_since` | Yes | Allows you to only export resources that have been modified since the time provided. |
+| `_type` | Yes | Allows you to specify which types of resources will be included. For example, `_type=Patient` would return only patient resources.|
+| `_typeFilter` | Yes | To request finer-grained filtering, you can use `_typeFilter` along with the `_type` parameter. The value of the `_typeFilter` parameter is a comma-separated list of FHIR queries that further restrict the results. |
+| `_container` | No | Specifies the name of the container in the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder in that container. If the container isn't specified, the data will be exported to a new container with an auto-generated name. |
> [!Note]
-> Only storage accounts in the same subscription as that for FHIR service are allowed to be registered as the destination for $export operations.
+> Only storage accounts in the same subscription as that for the FHIR service are allowed to be registered as the destination for `$export` operations.
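Putting the headers and query parameters together, a filtered export request might look like this sketch; the container name and timestamp are hypothetical placeholders:

```rest
GET {{fhirurl}}/$export?_container=myExportContainer&_type=Patient,Observation&_since=2022-01-01T00:00:00Z
Accept: application/fhir+json
Prefer: respond-async
```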
## Next steps
-In this article, you've learned how to export FHIR resources using the $export command. For more information about how to set up and use de-identified export or how to export data from Azure API for FHIR to Azure Synapse Analytics, see
+In this article, you've learned about exporting FHIR resources using the `$export` operation. For information about how to set up and use additional options for export, see
>[!div class="nextstepaction"] >[Export de-identified data](de-identified-export.md)
iot-edge Configure Connect Verify Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/configure-connect-verify-gpu.md
az group list
## Next steps
-This article helped you set up your virtual machine and IoT Edge device to be GPU-accelerated. To run an application with a similar setup, try this Microsoft Learn tutorial [NVIDIA DeepStream development with Microsoft Azure](/learn/paths/nvidia-deepstream-development-with-microsoft-azure/?WT.mc_id=iot-47680-cxa). The Learn tutorial shows you how to develop optimized Intelligent Video Applications that can consume multiple video, image, and audio sources.
+This article helped you set up your virtual machine and IoT Edge device to be GPU-accelerated. To run an application with a similar setup, try the learning path for [NVIDIA DeepStream development with Microsoft Azure](/learn/paths/nvidia-deepstream-development-with-microsoft-azure/?WT.mc_id=iot-47680-cxa). The Learn tutorial shows you how to develop optimized Intelligent Video Applications that can consume multiple video, image, and audio sources.
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
Some of the key differences between the latest release and version 1.1 and earli
* The workload API in the latest version saves encrypted secrets in a new format. If you upgrade from an older version to latest version, the existing master encryption key is imported. The workload API can read secrets saved in the prior format using the imported encryption key. However, the workload API can't write encrypted secrets in the old format. Once a secret is re-encrypted by a module, it is saved in the new format. Secrets encrypted in the latest version are unreadable by the same module in version 1.1. If you persist encrypted data to a host-mounted folder or volume, always create a backup copy of the data *before* upgrading to retain the ability to downgrade if necessary.
* For backward compatibility when connecting devices that do not support TLS 1.2, you can configure Edge Hub to still accept TLS 1.0 or 1.1 via the [SslProtocols environment variable](https://github.com/Azure/iotedge/blob/main/doc/EnvironmentVariables.md#edgehub). Please note that support for [TLS 1.0 and 1.1 in IoT Hub is considered legacy](../iot-hub/iot-hub-tls-support.md) and may also be removed from Edge Hub in future releases. To avoid future issues, use TLS 1.2 as the only TLS version when connecting to Edge Hub or IoT Hub.
* The preview for the experimental MQTT broker in Edge Hub 1.2 has ended and is not included in Edge Hub 1.3. We are continuing to refine our plans for an MQTT broker based on feedback received. In the meantime, if you need a standards-compliant MQTT broker on IoT Edge, consider deploying an open-source broker like Mosquitto as an IoT Edge module.
+* Starting with version 1.2, when a backing image is removed from a container, the container keeps running and it persists across restarts. In 1.1, when a backing image is removed, the container is immediately recreated and the backing image is updated.
Before automating any update processes, validate that it works on test machines.
iot-hub-device-update Connected Cache Industrial Iot Nested https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-industrial-iot-nested.md
description: Microsoft Connected Cache within an Azure IoT Edge for Industrial I
Last updated 2/16/2021
iot-hub-device-update Connected Cache Nested Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-nested-level.md
description: Microsoft Connected Cache two level nested Azure IoT Edge Gateway w
Last updated 2/16/2021
iot-hub-device-update Connected Cache Single Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-single-level.md
description: Microsoft Connected Cache preview deployment scenario samples tutor
Last updated 2/16/2021
iot-hub Iot Hub C C Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-c-c-module-twin-getstarted.md
[!INCLUDE [iot-hub-selector-module-twin-getstarted](../../includes/iot-hub-selector-module-twin-getstarted.md)]
-> [!NOTE]
-> [Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provides visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system based devices or firmware devices, it allows for isolated configuration and conditions for each component.
+[Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provide visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system-based devices or firmware devices, they allow for isolated configuration and conditions for each component.
At the end of this article, you have two C apps:
-* **CreateIdentities**, which creates a device identity, a module identity and associated security key to connect your device and module clients.
+* **CreateIdentities**: creates a device identity, a module identity, and associated security key to connect your device and module clients.
-* **UpdateModuleTwinReportedProperties**, which sends updated module twin reported properties to your IoT Hub.
+* **UpdateModuleTwinReportedProperties**: sends updated module twin reported properties to your IoT Hub.
> [!NOTE]
-> For information about the Azure IoT SDKs that you can use to build both applications to run on devices, and your solution backend, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
-* An active Azure account. (If you don't have an account, you can create an [Azure free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* The latest [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
At the end of this article, you have two C apps:
## Create a device identity and a module identity in IoT Hub
-In this section, you create a C app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module cannot connect to IoT hub unless it has an entry in the identity registry. For more information, see the **Identity registry** section of the [IoT Hub developer guide](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. Your device and module use these values to identify itself when it sends device-to-cloud messages to IoT Hub. The IDs are case-sensitive.
+In this section, you create a C app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module can't connect to IoT hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. Your device and module use these values to identify themselves when they send device-to-cloud messages to IoT Hub. The IDs are case-sensitive.
Add the following code to your C file:
This app creates a device identity with ID **myFirstDevice** and a module identi
In this section, you create a C app on your simulated device that updates the module twin reported properties.
-1. **Get your module connection string** -- now if you login to [Azure portal](https://portal.azure.com). Navigate to your IoT Hub and click IoT Devices. Find myFirstDevice, open it and you see myFirstModule was successfully created. Copy the module connection string. It is needed in the next step.
+1. **Get your module connection string** -- sign in to the [Azure portal](https://portal.azure.com), navigate to your IoT Hub, and select **IoT Devices**. Find **myFirstDevice** and open it; you'll see that **myFirstModule** was successfully created. Copy the module connection string; you'll need it in the next step.
![Azure portal module detail](./media/iot-hub-c-c-module-twin-getstarted/module-detail.png)
int main(void)
To continue getting started with IoT Hub and to explore other IoT scenarios, see:

* [Getting started with device management](iot-hub-node-node-device-management-get-started.md)
-* [Getting started with IoT Edge](../iot-edge/quickstart-linux.md)
+* [Getting started with IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Iot Hub Csharp Csharp Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-module-twin-getstarted.md
[!INCLUDE [iot-hub-selector-module-twin-getstarted](../../includes/iot-hub-selector-module-twin-getstarted.md)]
-> [!NOTE]
-> [Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provide visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system based devices or firmware devices, module identities and module twins allow for isolated configuration and conditions for each component.
+[Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provide visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system-based devices or firmware devices, module identities and module twins allow for isolated configuration and conditions for each component.
At the end of this article, you have two .NET console apps:
-* **CreateIdentities**. This app creates a device identity, a module identity, and associated security key to connect your device and module clients.
+* **CreateIdentities**: creates a device identity, a module identity, and associated security key to connect your device and module clients.
-* **UpdateModuleTwinReportedProperties**. This app sends updated module twin reported properties to your IoT hub.
+* **UpdateModuleTwinReportedProperties**: sends updated module twin reported properties to your IoT hub.
> [!NOTE]
-> For information about the Azure IoT SDKs that you can use to build both applications to run on devices, and your solution back end, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites

* Visual Studio.
-* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).

## Get the IoT hub connection string
At the end of this article, you have two .NET console apps:
## Update the module twin using .NET device SDK
-In this section, you create a .NET console app on your simulated device that updates the module twin reported properties.
+Now let's communicate with the cloud from your simulated device. Once a module identity is created, a module twin is implicitly created in IoT Hub. In this section, you create a .NET console app on your simulated device that updates the module twin reported properties.
-Here's how to get your module connection string from the Azure portal. Sign in to the [Azure portal](https://portal.azure.com/). Navigate to your hub and select **Devices**. Find **myFirstDevice**. Select **myFirstDevice** to open it, and then select **myFirstModule** to open it. In **Module Identity Details**, copy the **Connection string (primary key)** to save it for the console app.
+To retrieve your module connection string, navigate to your [IoT hub](https://ms.portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Devices%2FIotHubs) then select **Devices**. Find and select **myFirstDevice** to open it and then select **myFirstModule** to open it. In **Module Identity Details**, copy the **Connection string (primary key)** and save it for the console app.
:::image type="content" source="./media/iot-hub-csharp-csharp-module-twin-getstarted/module-identity-detail.png" alt-text="Screenshot that shows the 'Module Identity Details' page." lightbox="./media/iot-hub-csharp-csharp-module-twin-getstarted/module-identity-detail.png":::
Here's how to get your module connection string from the Azure portal. Sign in t
} ```
- This code sample shows you how to retrieve the module twin and update reported properties with AMQP protocol. In public preview, we only support AMQP for module twin operations.
+ Now you know how to retrieve the module twin and update reported properties using the AMQP protocol. A Python sketch of the same flow appears after these steps.
1. Optionally, you can add these statements to the **Main** method to send an event to IoT Hub from your module. Place these lines below the `try catch` block.
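For comparison only (this article's samples are C#), here's the same get-twin-then-report flow sketched with the Python device SDK. The connection string is the module connection string you copied earlier, and the reported property name is illustrative.

```python
# A cross-SDK sketch, not part of this article's C# sample: read the module
# twin, then patch its reported properties, using the azure-iot-device package.
from azure.iot.device import IoTHubModuleClient

CONN_STR = "<module-connection-string>"  # from Module Identity Details

client = IoTHubModuleClient.create_from_connection_string(CONN_STR)
client.connect()

twin = client.get_twin()  # full twin: desired and reported sections
print("Twin before update:", twin)

# IoT Hub merges this patch into the twin's reported properties.
client.patch_twin_reported_properties({"connectivity": "cellular"})
client.shutdown()
```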
iot-hub Iot Hub Csharp Csharp Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-twin-getstarted.md
[!INCLUDE [iot-hub-selector-twin-get-started](../../includes/iot-hub-selector-twin-get-started.md)]
-In this article, you create these .NET console apps:
+In this article, you create two .NET console apps:
-* **AddTagsAndQuery**. This back-end app adds tags and queries device twins.
+* **AddTagsAndQuery**: a back-end app that adds tags and queries device twins.
-* **ReportConnectivity**. This device app simulates a device that connects to your IoT hub with the device identity created earlier, and reports its connectivity condition.
+* **ReportConnectivity**: a simulated device app that connects to your IoT hub and reports its connectivity condition.
> [!NOTE]
-> The article [Azure IoT SDKs](iot-hub-devguide-sdks.md) provides information about the Azure IoT SDKs that you can use to build both device and back-end apps.
->
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
In this article, you create these .NET console apps:
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
-* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).

## Get the IoT hub connection string
In this article, you create these .NET console apps:
## Create the service app
-In this section, you create a .NET console app, using C#, that adds location metadata to the device twin associated with **myDeviceId**. It then queries the device twins stored in the IoT hub selecting the devices located in the US, and then the ones that reported a cellular connection.
+In this section, you create a .NET console app, using C#, that adds location metadata to the device twin associated with **myDeviceId**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.
1. In Visual Studio, select **File > New > Project**. In **Create a new project**, select **Console App (.NET Framework)**, and then select **Next**.
In this section, you create a .NET console app, using C#, that adds location met
![Query results in window](./media/iot-hub-csharp-csharp-twin-getstarted/addtagapp.png)
-In the next section, you create a device app that reports the connectivity information and changes the result of the query in the previous section.
+In the next section, you create a device app that reports connectivity information and changes the result of the query in the previous section.
## Create the device app
-In this section, you create a .NET console app that connects to your hub as **myDeviceId**, and then updates its reported properties to contain the information that it is connected using a cellular network.
+In this section, you create a .NET console app that connects to your hub as **myDeviceId**, and then updates its reported properties to confirm that it's connected using a cellular network.
1. In Visual Studio, select **File** > **New** > **Project**. In **Create new project**, choose **Console App (.NET Framework)**, and then select **Next**.
In this section, you create a .NET console app that connects to your hub as **my
![Device connectivity reported successfully](./media/iot-hub-csharp-csharp-twin-getstarted/tagappsuccess.png)
-## Next steps
+In this article, you:
-In this article, you configured a new IoT hub in the Azure portal, and then created a device identity in the IoT hub's identity registry. You added device metadata as tags from a back-end app, and wrote a simulated device app to report device connectivity information in the device twin. You also learned how to query this information using the SQL-like IoT Hub query language.
+* Configured a new IoT hub in the Azure portal
+* Created a device identity in the IoT hub's identity registry
+* Added device metadata as tags from a back-end app
+* Reported device connectivity information in the device twin
+* Queried the device twin information using the SQL-like IoT Hub query language
+
+## Next steps
-You can learn more from the following resources:
+To learn how to:
-* To learn how to send telemetry from devices, see the [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) article.
+* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp).
-* To learn how to configure devices using device twin's desired properties, see the [Use desired properties to configure devices](tutorial-device-twins.md) article.
+* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md).
-* To learn how to control devices interactively, such as turning on a fan from a user-controlled app, see the [Use direct methods](./quickstart-control-device.md?pivots=programming-language-csharp) quickstart.
+* Control devices interactively, such as turning on a fan from a user-controlled app, see [Quickstart: Control a device connected to an IoT hub](./quickstart-control-device.md?pivots=programming-language-csharp).
iot-hub Iot Hub Java Java Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-twin-getstarted.md
In this article, you create two Java console apps:
-* **add-tags-query**, a Java back-end app that adds tags and queries device twins.
-* **simulated-device**, a Java device app that connects to your IoT hub and reports its connectivity condition using a reported property.
+* **add-tags-query**: a back-end app that adds tags and queries device twins.
+* **simulated-device**: a simulated device app that connects to your IoT hub and reports its connectivity condition.
> [!NOTE]
-> The article [Azure IoT SDKs](iot-hub-devguide-sdks.md) provides information about the Azure IoT SDKs that you can use to build both device and back-end apps.
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
In this article, you create two Java console apps:
* [Maven 3](https://maven.apache.org/download.cgi)
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).

## Get the IoT hub connection string
In this article, you create two Java console apps:
## Create the service app
-In this section, you create a Java app that adds location metadata as a tag to the device twin in IoT Hub associated with **myDeviceId**. The app first queries IoT hub for devices located in the US, and then for devices that report a cellular network connection.
+In this section, you create a Java app that adds location metadata as a tag to the device twin in IoT Hub associated with **myDeviceId**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.
1. On your development machine, create an empty folder named **iot-java-twin-getstarted**.
In this section, you create a Java app that adds location metadata as a tag to t
mvn clean package -DskipTests ```
-## Create a device app
+In the next section, you create a device app that reports connectivity information and changes the result of the query in the previous section.
+
+## Create the device app
-In this section, you create a Java console app that sets a reported property value that is sent to IoT Hub.
+In this section, you create a Java console app that connects to your hub as **myDeviceId**, and then updates its device twin's reported properties to confirm that it's connected using a cellular network.
1. In the **iot-java-twin-getstarted** folder, create a Maven project named **simulated-device** using the following command at your command prompt:
You are now ready to run the console apps.
Now that your device has sent the **connectivityType** property to IoT Hub, the second query returns your device.
+In this article, you:
+
+* Configured a new IoT hub in the Azure portal
+* Created a device identity in the IoT hub's identity registry
+* Added device metadata as tags from a back-end app
+* Reported device connectivity information in the device twin
+* Queried the device twin information using the SQL-like IoT Hub query language
+ ## Next steps
-In this article, you configured a new IoT hub in the Azure portal, and then created a device identity in the IoT hub's identity registry. You added device metadata as tags from a back-end app, and wrote a device app to report device connectivity information in the device twin. You also learned how to query the device twin information using the SQL-like IoT Hub query language.
+To learn how to:
-Use the following resources to learn how to:
+* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java)
-* Send telemetry from devices with the [Get started with IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) article.
+* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md)
-* Control devices interactively (such as turning on a fan from a user-controlled app) with the [Use direct methods](./quickstart-control-device.md?pivots=programming-language-java) quickstart.
+* Control devices interactively, such as turning on a fan from a user-controlled app, see [Quickstart: Control a device connected to an IoT hub](./quickstart-control-device.md?pivots=programming-language-java)
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-managed-identity.md
result = iothub_job_manager.create_import_export_job(JobProperties(
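The snippet above is cut off mid-call. For context, here's a hedged sketch of how such an identity-based export job is typically created with the **azure-iot-hub** package; the exact `JobProperties` fields are assumptions based on the SDK models, and the URIs are placeholders.

```python
# A sketch only: the JobProperties field names are assumptions from the
# azure-iot-hub models; the connection string and container URI are placeholders.
from azure.iot.hub import IoTHubJobManager
from azure.iot.hub.models import JobProperties

iothub_job_manager = IoTHubJobManager.from_connection_string(
    "<iothub-connection-string>")

result = iothub_job_manager.create_import_export_job(JobProperties(
    type="export",                                    # device export job
    output_blob_container_uri="<blob-container-uri>",
    exclude_keys_in_export=False,
    storage_authentication_type="identityBased",      # use the hub's managed identity
))
print(result.job_id, result.status)
```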
## SDK samples

- [.NET SDK sample](https://aka.ms/iothubmsicsharpsample)
- [Java SDK sample](https://aka.ms/iothubmsijavasample)
-- [Python SDK sample](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-hub/samples)
+- [Python SDK sample](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples)
- Node.js SDK samples: [bulk device import](https://aka.ms/iothubmsinodesampleimport), [bulk device export](https://aka.ms/iothubmsinodesampleexport)

## Next steps
iot-hub Iot Hub Node Node Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-module-twin-getstarted.md
[!INCLUDE [iot-hub-selector-module-twin-getstarted](../../includes/iot-hub-selector-module-twin-getstarted.md)]
-> [!NOTE]
-> [Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provides visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system-based devices or firmware devices, it allows for isolated configuration and conditions for each component.
+[Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provide visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system-based devices or firmware devices, they allow for isolated configuration and conditions for each component.
At the end of this article, you have two Node.js apps:
-* **CreateIdentities**, which creates a device identity, a module identity, and associated security keys to connect your device and module clients.
+* **CreateIdentities**: creates a device identity, a module identity, and associated security keys to connect your device and module clients.
-* **UpdateModuleTwinReportedProperties**, which sends updated module twin reported properties to your IoT Hub.
+* **UpdateModuleTwinReportedProperties**: sends updated module twin reported properties to your IoT Hub.
> [!NOTE]
-> For information about the Azure IoT SDKs that you can use to build both applications to run on devices, and your solution back end, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
At the end of this article, you have two Node.js apps:
* Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/main/doc/node-devbox-setup.md) describes how to install Node.js for this article on either Windows or Linux.
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
## Get the IoT hub connection string

[!INCLUDE [iot-hub-howto-module-twin-shared-access-policy-text](../../includes/iot-hub-howto-module-twin-shared-access-policy-text.md)]
At the end of this article, you have two Node.js apps:
## Create a device identity and a module identity in IoT Hub
-In this section, you create a Node.js app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module cannot connect to IoT hub unless it has an entry in the identity registry. For more information, see the "Identity registry" section of the [IoT Hub developer guide](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. Your device and module use these values to identify itself when it sends device-to-cloud messages to IoT Hub. The IDs are case-sensitive.
+In this section, you create a Node.js app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module can't connect to IoT hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. The ID and key are case-sensitive. Your device and module use these values to identify themselves when they send device-to-cloud messages to IoT Hub.
1. Create a directory to hold your code.
iot-hub Iot Hub Node Node Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-twin-getstarted.md
[!INCLUDE [iot-hub-selector-twin-get-started](../../includes/iot-hub-selector-twin-get-started.md)]
-At the end of this article, you will have two Node.js console apps:
+In this article, you create two Node.js console apps:
-* **AddTagsAndQuery.js**, a Node.js back-end app, which adds tags and queries device twins.
+* **AddTagsAndQuery.js**: a back-end app that adds tags and queries device twins.
-* **TwinSimulatedDevice.js**, a Node.js app, which simulates a device that connects to your IoT hub with the device identity created earlier, and reports its connectivity condition.
+* **TwinSimulatedDevice.js**: a simulated device app that connects to your IoT hub and reports its connectivity condition.
> [!NOTE]
-> The article [Azure IoT SDKs](iot-hub-devguide-sdks.md) provides information about the Azure IoT SDKs that you can use to build both device and back-end apps.
->
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
To complete this article, you need:
* Node.js version 10.0.x or later.
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).

## Get the IoT hub connection string
To complete this article, you need:
## Create the service app
-In this section, you create a Node.js console app that adds location metadata to the device twin associated with **myDeviceId**. It then queries the device twins stored in the IoT hub selecting the devices located in the US, and then the ones that are reporting a cellular connection.
+In this section, you create a Node.js console app that adds location metadata to the device twin associated with **myDeviceId**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.
1. Create a new empty folder called **addtagsandqueryapp**. In the **addtagsandqueryapp** folder, create a new package.json file using the following command at your command prompt. The `--yes` parameter accepts all the defaults.
In this section, you create a Node.js console app that adds location metadata to
![See the one device in the query results](media/iot-hub-node-node-twin-getstarted/service1.png)
-In the next section, you create a device app that reports the connectivity information and changes the result of the query in the previous section.
+In the next section, you create a device app that reports connectivity information and changes the result of the query in the previous section.
## Create the device app
-In this section, you create a Node.js console app that connects to your hub as **myDeviceId**, and then updates its device twin's reported properties to contain the information that it is connected using a cellular network.
+In this section, you create a Node.js console app that connects to your hub as **myDeviceId**, and then updates its device twin's reported properties to confirm that it's connected using a cellular network.
1. Create a new empty folder called **reportconnectivity**. In the **reportconnectivity** folder, create a new package.json file using the following command at your command prompt. The `--yes` parameter accepts all the defaults.
In this section, you create a Node.js console app that connects to your hub as *
![Show myDeviceId in both query results](media/iot-hub-node-node-twin-getstarted/service2.png)
-## Next steps
+In this article, you:
-In this article, you configured a new IoT hub in the Azure portal, and then created a device identity in the IoT hub's identity registry. You added device metadata as tags from a back-end app, and wrote a simulated device app to report device connectivity information in the device twin. You also learned how to query this information using the SQL-like IoT Hub query language.
+* Configured a new IoT hub in the Azure portal
+* Created a device identity in the IoT hub's identity registry
+* Added device metadata as tags from a back-end app
+* Reported device connectivity information in the device twin
+* Queried the device twin information using the SQL-like IoT Hub query language
+
+## Next steps
-Use the following resources to learn how to:
+To learn how to:
-* send telemetry from devices with the [Get started with IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) article,
+* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
-* configure devices using device twin's desired properties with the [Use desired properties to configure devices](tutorial-device-twins.md) article,
+* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md)
-* control devices interactively (such as turning on a fan from a user-controlled app), with the [Use direct methods](./quickstart-control-device.md?pivots=programming-language-nodejs) quickstart.
+* Control devices interactively, such as turning on a fan from a user-controlled app, see [Quickstart: Control a device connected to an IoT hub](./quickstart-control-device.md?pivots=programming-language-nodejs)
iot-hub Iot Hub Portal Csharp Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-portal-csharp-module-twin-getstarted.md
[!INCLUDE [iot-hub-selector-module-twin-getstarted](../../includes/iot-hub-selector-module-twin-getstarted.md)]
-> [!NOTE]
-> [Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provide visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system based devices or firmware devices, module identities and module twins allow for isolated configuration and conditions for each component.
->
+[Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provide visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system-based devices or firmware devices, module identities and module twins allow for isolated configuration and conditions for each component.
-In this article, you will learn:
+In this article, you will learn how to:
-* How to create a module identity in the portal.
+* Create a module identity in the portal.
-* How to use a .NET device SDK to update the module twin from your device.
+* Use a .NET device SDK to update the module twin from your device.
> [!NOTE]
-> For information about the Azure IoT SDKs that you can use to build both applications to run on devices and your solution back end, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
->
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites

* Visual Studio.
-* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
Save the **Connection string (primary key)**. You use it in the next section to
## Update the module twin using .NET device SDK
-You've successfully created the module identity in your IoT Hub. Let's try to communicate to the cloud from your simulated device. Once a module identity is created, a module twin is implicitly created in IoT Hub. In this section, you will create a .NET console app on your simulated device that updates the module twin reported properties.
+Now let's communicate with the cloud from your simulated device. Once a module identity is created, a module twin is implicitly created in IoT Hub. In this section, you will create a .NET console app on your simulated device that updates the module twin reported properties.
### Create a Visual Studio project
-To create an app that updates the module twin reported properties, follow these steps:
+To create an app that updates the module twin reported properties, follow these steps:
1. In Visual Studio, select **Create a new project**, then choose **Console App (.NET Framework)**, and select **Next**.
To create an app that updates the module twin reported properties, follow these
### Install the latest Azure IoT Hub .NET device SDK
-Module identity and module twin is in public preview. It's only available in the IoT Hub pre-release device SDKs. To install it, follow these steps:
+Module identity and module twin are only available in the IoT Hub pre-release device SDKs. To install the pre-release SDK, follow these steps:
1. In Visual Studio, open **Tools** > **NuGet Package Manager** > **Manage NuGet Packages for Solution**.
1. Select **Browse**, and then select **Include prerelease**. Search for *Microsoft.Azure.Devices.Client*. Select the latest version and install.
- :::image type="content" source="./media/iot-hub-csharp-csharp-module-twin-getstarted/install-client-sdk.png" alt-text="Screenshot showing how to install the Microsoft.Azure.Devices.Client.":::
+ :::image type="content" source="./media/iot-hub-csharp-csharp-module-twin-getstarted/install-client-sdk.png" alt-text="Screenshot showing how to install the Microsoft.Azure.Devices.Client." lightbox="./media/iot-hub-csharp-csharp-module-twin-getstarted/install-client-sdk.png":::
Now you have access to all the module features.
To create your app, follow these steps:
You can build and run this app by using **F5**.
-This code sample shows you how to retrieve the module twin and update reported properties with AMQP protocol. In public preview, we only support AMQP for module twin operations.
+Now you know how to retrieve the module twin and update reported properties using the AMQP protocol.
## Next steps

To continue getting started with IoT Hub and to explore other IoT scenarios, see:
-* [Get started with IoT Hub module identity and module twin using .NET backup and .NET device](iot-hub-csharp-csharp-module-twin-getstarted.md)
+* [Getting started with device management](iot-hub-node-node-device-management-get-started.md)
* [Getting started with IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Iot Hub Python Python Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-python-module-twin-getstarted.md
[!INCLUDE [iot-hub-selector-module-twin-getstarted](../../includes/iot-hub-selector-module-twin-getstarted.md)]
-> [!NOTE]
-> [Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identities and device twins, but provide finer granularity. While Azure IoT Hub device identities and device twins enable a back-end application to configure a device and provide visibility on the device's conditions, module identities and module twins provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system based devices or firmware devices, they allow for isolated configuration and conditions for each component.
+[Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identities and device twins, but provide finer granularity. While Azure IoT Hub device identities and device twins enable a back-end application to configure a device and provide visibility on the device's conditions, module identities and module twins provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system-based devices or firmware devices, they allow for isolated configuration and conditions for each component.
At the end of this article, you have three Python apps:
-* **CreateModule**, which creates a device identity, a module identity, and associated security keys to connect your device and module clients.
+* **CreateModule**: creates a device identity, a module identity, and associated security keys to connect your device and module clients.
-* **UpdateModuleTwinDesiredProperties**, which sends updated module twin desired properties to your IoT Hub.
+* **UpdateModuleTwinDesiredProperties**: sends updated module twin desired properties to your IoT Hub.
-* **ReceiveModuleTwinDesiredPropertiesPatch**, which receives the module twin desired properties patch on your device.
+* **ReceiveModuleTwinDesiredPropertiesPatch**: receives the module twin desired properties patch on your device.
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
In this article, you create a back-end service that adds a device in the identit
## Create a device identity and a module identity in IoT Hub
-In this section, you create a Python service app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module can't connect to IoT hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. Your device and module use these values to identify itself when it sends device-to-cloud messages to IoT Hub. The IDs are case-sensitive.
+In this section, you create a Python service app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module can't connect to IoT hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. The ID and key are case-sensitive. Your device and module use these values to identify themselves when they send device-to-cloud messages to IoT Hub.
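As a rough sketch of that flow (the connection string and IDs are placeholders, and the calls assume the **azure-iot-hub** package you install in the next step):

```python
# A sketch of the CreateModule flow: register a device identity, then a
# module identity under it. Passing None for the keys lets the service
# generate them; all names here are placeholders.
from azure.iot.hub import IoTHubRegistryManager

CONN_STR = "<iothub-connection-string>"  # placeholder
DEVICE_ID = "myFirstDevice"
MODULE_ID = "myFirstModule"

registry_manager = IoTHubRegistryManager.from_connection_string(CONN_STR)

device = registry_manager.create_device_with_sas(DEVICE_ID, None, None, "enabled")
module = registry_manager.create_module_with_sas(DEVICE_ID, MODULE_ID, "", None, None)

# Save these keys: the device and module use them to authenticate to IoT Hub.
print("Device key:", device.authentication.symmetric_key.primary_key)
print("Module key:", module.authentication.symmetric_key.primary_key)
```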
1. At your command prompt, run the following command to install the **azure-iot-hub** package:
iot-hub Iot Hub Python Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-twin-getstarted.md
[!INCLUDE [iot-hub-selector-twin-get-started](../../includes/iot-hub-selector-twin-get-started.md)]
-At the end of this article, you will have two Python console apps:
+In this article, you create two Python console apps:
-* **AddTagsAndQuery.py**, a Python back-end app, which adds tags and queries device twins.
+* **AddTagsAndQuery.py**: a back-end app that adds tags and queries device twins.
-* **ReportConnectivity.py**, a Python app, which simulates a device that connects to your IoT hub with the device identity created earlier, and reports its connectivity condition.
+* **ReportConnectivity.py**: a simulated device app that connects to your IoT hub and reports its connectivity condition.
+> [!NOTE]
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
At the end of this article, you will have two Python console apps:
## Create the service app
-In this section, you create a Python console app that adds location metadata to the device twin associated with your **{Device ID}**. It then queries the device twins stored in the IoT hub selecting the devices located in Redmond, and then the ones that are reporting a cellular connection.
+In this section, you create a Python console app that adds location metadata to the device twin associated with your **{Device ID}**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.
1. In your working directory, open a command prompt and install the **Azure IoT Hub Service SDK for Python**.
In this section, you create a Python console app that adds location metadata to
![first query showing all devices in Redmond](./media/iot-hub-python-twin-getstarted/service-1.png)
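For reference, here's a condensed sketch of the tagging-and-query flow described above; the connection string and device ID are placeholders, and the tag values follow the article's pattern.

```python
# A condensed sketch of AddTagsAndQuery.py: patch the device twin's tags,
# then run the two SQL-like twin queries. All values are placeholders.
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import Twin, QuerySpecification

registry_manager = IoTHubRegistryManager.from_connection_string(
    "<iothub-connection-string>")
DEVICE_ID = "<device-id>"

twin = registry_manager.get_twin(DEVICE_ID)
patch = Twin(tags={"location": {"region": "US", "plant": "Redmond43"}})
registry_manager.update_twin(DEVICE_ID, patch, twin.etag)

queries = (
    "SELECT * FROM devices WHERE tags.location.region = 'US'",
    "SELECT * FROM devices WHERE tags.location.region = 'US' "
    "AND properties.reported.connectivity = 'cellular'",
)
for query in queries:
    result = registry_manager.query_iot_hub(QuerySpecification(query=query), None, 100)
    print([t.device_id for t in result.items])
```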
-In the next section, you create a device app that reports the connectivity information and changes the result of the query in the previous section.
+In the next section, you create a device app that reports connectivity information and changes the result of the query in the previous section.
## Create the device app
-In this section, you create a Python console app that connects to your hub as your **{Device ID}**, and then updates its device twin's reported properties to contain the information that it is connected using a cellular network.
+In this section, you create a Python console app that connects to your hub as your **{Device ID}** and then updates its device twin's reported properties to confirm that it's connected using a cellular network.
1. From a command prompt in your working directory, install the **Azure IoT Hub Device SDK for Python**:
In this section, you create a Python console app that connects to your hub as yo
![receive desired properties on device app](./media/iot-hub-python-twin-getstarted/device-2.png)
-## Next steps
+In this article, you:
+
+* Configured a new IoT hub in the Azure portal
+* Created a device identity in the IoT hub's identity registry
+* Added device metadata as tags from a back-end app
+* Reported device connectivity information in the device twin
+* Queried the device twin information using the SQL-like IoT Hub query language
-In this article, you configured a new IoT hub in the Azure portal, and then created a device identity in the IoT hub's identity registry. You added device metadata as tags from a back-end app, and wrote a simulated device app to report device connectivity information in the device twin. You also learned how to query this information using the registry.
+## Next steps
-Use the following resources to learn how to:
+To learn how to:
-* Send telemetry from devices with the [Get started with IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) article.
+* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python).
-* Configure devices using device twin's desired properties with the [Use desired properties to configure devices](tutorial-device-twins.md) article.
+* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md).
-* Control devices interactively (such as turning on a fan from a user-controlled app), with the [Use direct methods](./quickstart-control-device.md?pivots=programming-language-python) quickstart.
+* Control devices interactively, such as turning on a fan from a user-controlled app, see [Quickstart: Control a device connected to an IoT hub](./quickstart-control-device.md?pivots=programming-language-python).
marketplace Azure Consumption Commitment Enrollment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-consumption-commitment-enrollment.md
An offer must meet the following requirements to be enrolled in the MACC program
## Next steps

- To learn more about how the MACC program benefits customers and how they can find solutions that are enabled for MACC, see [Azure Consumption Commitment benefit](/marketplace/azure-consumption-commitment-benefit).
-- To learn more about how your organization can leverage Azure Marketplace, complete our Microsoft Learn module: [Simplify cloud procurement and governance with Azure Marketplace](/learn/modules/simplify-cloud-procurement-governance-azure-marketplace/)
+- To learn more about how your organization can leverage Azure Marketplace, complete our Learn module, [Simplify cloud procurement and governance with Azure Marketplace](/learn/modules/simplify-cloud-procurement-governance-azure-marketplace/)
- [Commercial marketplace transact capabilities](marketplace-commercial-transaction-capabilities-and-considerations.md#transact-publishing-option)
marketplace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/overview.md
When you create a commercial marketplace offer in Partner Center, it may be list
## Next steps

-- Get an [Introduction to the Microsoft commercial marketplace](/learn/modules/intro-commercial-marketplace/) on Microsoft Learn.
-- Find videos and hands on labs at [Mastering the marketplace](https://go.microsoft.com/fwlink/?linkid=2195692)
+- Get an [Introduction to the Microsoft commercial marketplace](/learn/modules/intro-commercial-marketplace/).
+- Find videos and hands-on labs at [Mastering the marketplace](https://go.microsoft.com/fwlink/?linkid=2195692)
- For new Microsoft partners who are interested in publishing to the commercial marketplace, see [Create a commercial marketplace account in Partner Center](create-account.md).
- To learn more about recent and future releases, join the conversation in the [Microsoft Partner Community](https://www.microsoftpartnercommunity.com/).
migrate Concepts Migration Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-planning.md
Before finalizing your migration plan, make sure you consider and mitigate other
- **Network requirements**: Evaluate network bandwidth and latency constraints, which might cause unforeseen delays and disruptions to migration replication speed.
- **Testing/post-migration tweaks**: Allow a time buffer to conduct performance and user acceptance testing for migrated apps, or to configure/tweak apps post-migration, such as updating database connection strings, configuring web servers, performing cut-overs/cleanup etc.
- **Permissions**: Review recommended Azure permissions, and server/database access roles and permissions needed for migration.
-- **Training**: Prepare your organization for the digital transformation. A solid training foundation is important for successful organizational change. Check out free training on [Microsoft Learn](/learn/azure/?ocid=CM_Discovery_Checklist_PDF), including courses on Azure fundamentals, solution architectures, and security. Encourage your team to explore [Azure certifications](https://www.microsoft.com/learning/certification-overview.aspx?ocid=CM_Discovery_Checklist_PDF).
+- **Training**: Prepare your organization for the digital transformation. A solid training foundation is important for successful organizational change. Check out [free Microsoft training](/learn/azure/?ocid=CM_Discovery_Checklist_PDF), including courses on Azure fundamentals, solution architectures, and security. Encourage your team to explore [Azure certifications](https://www.microsoft.com/learning/certification-overview.aspx?ocid=CM_Discovery_Checklist_PDF).
- **Implementation support**: Get support for your implementation if you need it. Many organizations opt for outside help to support their cloud migration. To move to Azure quickly and confidently with personalized assistance, consider an [Azure Expert Managed Service Provider](https://www.microsoft.com/solution-providers/search?cacheId=9c2fed4f-f9e2-42fb-8966-4c565f08f11e&ocid=CM_Discovery_Checklist_PDF), or [FastTrack for Azure](https://azure.microsoft.com/programs/azure-fasttrack/?ocid=CM_Discovery_Checklist_PDF).
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Collect all the values in the following table to define the packet core instance
|The data subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. | **N6 gateway** (for 5G) or **SGi gateway** (for 4G). |
| The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
| The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
+ | The Domain Name System (DNS) server addresses to be provided to the UEs connected to this data network. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses). We recommend that you collect these addresses to allow the UEs to resolve domain names. </br></br>This value may be an empty list if you don't want to configure a DNS server for the data network (for example, if you want to use this data network for local [UE-to-UE traffic](private-5g-core-overview.md#ue-to-ue-traffic) only). | **DNS Addresses** |
|Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses.</br></br>If you want to use [UE-to-UE traffic](private-5g-core-overview.md#ue-to-ue-traffic) in this data network, keep NAPT disabled. |**NAPT**|

## Next steps
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
For each of these networks, allocate a subnet and then identify the listed IP ad
- Default gateway.
- One IP address for port 6 on the Azure Stack Edge Pro device.
- One IP address for the user plane interface. For 5G, this interface is the N6 interface, whereas for 4G, it's the SGi interface.
+- Optionally, one or more Domain Name System (DNS) server addresses.
## Allocate user equipment (UE) IP address pools
Do the following for each site you want to add to your private mobile network. D
| 3. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 5 - access network</br>- Port 6 - data network</br></br>Additionally, you must have a port connected to your management network. You can choose any port from 2 to 4. | [Tutorial: Install Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-install.md) |
| 4. | Connect to your Azure Stack Edge Pro device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-connect.md) |
| 5. | Configure the network for your Azure Stack Edge Pro device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md) |
-| 6. | Configure a name, Domain Name System (DNS) name, and (optionally) time settings. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md) |
+| 6. | Configure a name, DNS name, and (optionally) time settings. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md) |
| 7. | Configure certificates for your Azure Stack Edge Pro device. | [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-certificates.md) |
| 8. | Activate your Azure Stack Edge Pro device. | [Tutorial: Activate Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-activate.md) |
| 9. | Run the diagnostics tests for the Azure Stack Edge Pro device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 5.</br>- Port 6.</br>- The port you chose to connect to the management network in Step 3.</br></br>For all other ports, you can ignore the warning.</br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
In this step, you'll create the mobile network site resource representing the ph
- Use the same value for both the **S1-MME subnet** and **S1-U subnet** fields (if this site will support 4G UEs).
- Use the same value for both the **S1-MME gateway** and **S1-U gateway** fields (if this site will support 4G UEs).
-1. In the **Attached data networks** section, select **Add data network**. Use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields and select **Submit**. Note that you can only connect the packet core instance to a single data network.
+1. In the **Attached data networks** section, select **Add data network**. Use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. If you decided not to configure a DNS server, clear the **Specify DNS addresses for UEs?** checkbox.
:::image type="content" source="media/create-a-site/create-site-add-data-network.png" alt-text="Screenshot of the Azure portal showing the Add data network screen.":::
+1. Select **Submit**. Note that you can only connect the packet core instance to a single data network.
1. Select **Review + create**.
1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
Four Azure resources are defined in the template.
|**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
| **Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. |
| **Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network. |
+ | **Dns Addresses** | Enter the DNS server addresses. You can omit this if you don't want to configure a DNS server for the UEs in this data network. |
| **Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. |

1. Select **Review + create**.
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
The following Azure resources are defined in the template.
|**Data Network Name** | Enter the name of the data network. |
|**Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. |
|**Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network.|
+ | **Dns Addresses** | Enter the DNS server addresses. You can omit this if you don't want to configure a DNS server for the UEs in this data network. |
|**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site.|

1. Select **Review + create**.
purview How To Data Owner Policies Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-azure-sql-db.md
Previously updated : 07/20/2022 Last updated : 08/11/2022

# Provision access by data owner for Azure SQL DB (preview)
This how-to guide describes how a data owner can delegate authoring policies in
[!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)]

### Azure SQL Database configuration
-Each Azure SQL Database server needs a Managed Identity assigned to it.
-You can use the following PowerShell script:
+Each Azure SQL Database server needs a Managed Identity assigned to it. You can do this from the Azure portal by navigating to the Azure SQL server that hosts the Azure SQL DB, selecting **Identity** on the side menu, setting the status to *On*, and then saving your change. See the following screenshot:
+![Screenshot shows how to assign a system managed identity to an Azure SQL server.](./media/how-to-data-owner-policies-sql/assign-identity-azure-sql-db.png)
++
+You will also need to enable external policy-based authorization on the server. You can do this in PowerShell:
```powershell
Connect-AzAccount
$context = Get-AzSubscription -SubscriptionId xxxx-xxxx-xxxx-xxxx
Set-AzContext $context
-Set-AzSqlServer -ResourceGroupName "RESOURCEGROUPNAME" -ServerName "SERVERNAME" -AssignIdentity
-```
-You will also need to enable external policy based authorization on the server.
-
-```powershell
$server = Get-AzSqlServer -ResourceGroupName "RESOURCEGROUPNAME" -ServerName "SERVERNAME"
#Initiate the call to the REST API to set externalPolicyBasedAuthorization to true
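# (Illustration only: the remainder of the original script is truncated in this
# excerpt. A call of this general shape can be made with Invoke-AzRestMethod;
# the resource path, API version, and payload below are assumptions rather than
# confirmed values.)
Invoke-AzRestMethod -Method PUT `
    -Path "$($server.ResourceId)/externalPolicyBasedAuthorizations/MicrosoftPurview?api-version=2021-11-01-preview" `
    -Payload '{ "properties": {} }'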
This section contains a reference of how actions in Microsoft Purview data polic
Check the blog, demo, and related how-to guides:

* [Demo of access policy for Azure Storage](https://learn-video.azurefd.net/vod/player?id=caa25ad3-7927-4dcc-88dd-6b74bcae98a2)
* [Concepts for Microsoft Purview data owner policies](./concept-data-owner-policies.md)
-* Blog: [Private preview: controlling access to Azure SQL at scale with policies in Purview](https://techcommunity.microsoft.com/t5/azure-sql-blog/private-preview-controlling-access-to-azure-sql-at-scale-with/ba-p/2945491)
+* Blog: [Microsoft Purview Data Policy for SQL DevOps access provisioning now in public preview](https://techcommunity.microsoft.com/t5/microsoft-purview-blog/microsoft-purview-data-policy-for-sql-devops-access-provisioning/ba-p/3403174)
+* Blog: [Controlling access to Azure SQL at scale with policies in Purview](https://techcommunity.microsoft.com/t5/azure-sql-blog/private-preview-controlling-access-to-azure-sql-at-scale-with/ba-p/2945491)
* [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
* [Enable Microsoft Purview data owner policies on an Arc-enabled SQL Server](./how-to-data-owner-policies-arc-sql-server.md)
purview How To Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-storage.md
Previously updated : 05/27/2022 Last updated : 8/11/2022

# Access provisioning by data owner to Azure Storage datasets (Preview)
Execute the steps in the **Create a new policy** and **Publish a policy** sectio
>[!Important]
> - Publish is a background operation. Azure Storage accounts can take up to **2 hours** to reflect the changes.
-## Additional information
-- Policy statements set below container level on a Storage account are supported. If no access has been provided at Storage account level or container level, then the App that requests the data must execute a direct access by providing a fully qualified name to the data object. If the App attempts to crawl down the hierarchy starting from the Storage account or Container (like Storage Explorer does), and there's no access at that level, the request will fail. The following documents show examples of how to perform a direct access. See also the blogs in the *Next steps* section of this how-to-guide.
+## Data consumption
+- Data consumers can access the requested dataset using tools such as Power BI or an Azure Synapse Analytics workspace.
+- Sub-container access: Policy statements set below container level on a Storage account are supported. However, users won't be able to browse to the data asset using the Azure portal's Storage Browser or the Microsoft Azure Storage Explorer tool if access is granted only at the file or folder level of the Azure Storage account. This is because these apps attempt to crawl down the hierarchy starting at container level, and the request fails because no access has been granted at that level. Instead, the app that requests the data must perform a direct access by providing a fully qualified name for the data object. The following documents show examples of how to perform a direct access; a short sketch also follows the list. See also the blogs in the *Next steps* section of this how-to guide.
- [*abfs* for ADLS Gen2](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md#access-files-from-the-cluster)
- [*az storage blob download* for Blob Storage](../storage/blobs/storage-quickstart-blobs-cli.md#download-a-blob)
+
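As a concrete illustration of direct access, here's a minimal Python sketch using the `azure-storage-blob` and `azure-identity` SDKs. The account, container, and blob names are hypothetical placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

# Hypothetical fully qualified blob - replace with your own values.
blob = BlobClient(
    account_url="https://contosostorage.blob.core.windows.net",
    container_name="finance",
    blob_name="reports/2022/q2.csv",
    credential=DefaultAzureCredential(),
)

# A direct read can succeed even when listing at account or
# container level would be denied by the policy.
data = blob.download_blob().readall()
print(len(data))
```

The same principle applies to any tool: address the object by its fully qualified name rather than enumerating down from the account or container level.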
+## Additional information
- Creating a policy at Storage account level will enable the Subjects to access system containers, for example *$logs*. If this is undesired, first scan the data source(s) and then create finer-grained policies for each (that is, at container or subcontainer level).
- The root blob in a container will be accessible to the Azure AD principals in a Microsoft Purview *allow*-type RBAC policy if the scope of such policy is either subscription, resource group, Storage account or container in Storage account.
- The root container in a Storage account will be accessible to the Azure AD principals in a Microsoft Purview *allow*-type RBAC policy if the scope of such policy is either subscription, resource group, or Storage account.
purview How To Deploy Profisee Purview Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-deploy-profisee-purview-integration.md
Last updated 07/15/2022
-# Microsoft Purview - Profisee Integration
+# Microsoft Purview - Profisee MDM Integration
-Master data management (MDM) is a key pillar of any unified data governance solution. Microsoft Purview supports master data management with our partner [Profisee](https://profisee.com/profisee-advantage/). This tutorial compiles reference and integration deployment materials in one place to get you started on your MDM journey with Microsoft Purview through our integration with Profisee.
+Master data management (MDM) is a key pillar of any unified data governance solution. Microsoft Purview supports master data management with our partner [Profisee](https://profisee.com/profisee-advantage/). This tutorial compiles reference and integration deployment materials in one place: first, to put Purview unified data governance and MDM in the context of an Azure data estate; and more importantly, to get you started on your MDM journey with Microsoft Purview through our integration with Profisee.
-## What, why and how of MDM - Master Data Management?
+## Why data governance and master data management (MDM) are essential to the modern data estate
-Many businesses today have large data estates that move massive amounts of data between applications, storage systems, analytics systems, and across departments within their organization. During these movements, and over time, data can be accidentally duplicated or become fragmented, and become stale or out of date. Hence, accuracy becomes a concern when using this data to drive insights into your business.
+All organizations have multiple data sources, and the larger the organization, the greater the number of data sources. Typically, there will be ERPs, CRMs, legacy applications, regional versions of each of these, external data feeds, and so on. Most of these businesses move massive amounts of data between applications, storage systems, analytics systems, and across departments within their organization. During these movements, and over time, data can get duplicated or become fragmented, and become stale or out of date. Hence, accuracy becomes a concern when using this data to drive insights into your business.
+
+Inevitably, data that was created in different 'silos' with different (or no) governance standards to meet the needs of their respective applications will always have issues. When you look at the data drawn from each of these applications, you'll see that it's inconsistent in terms of standardization. Often, there are numerous inconsistencies in the values themselves, and most often individual records are incomplete. In fact, it would be surprising if these inconsistencies weren't the case, but they do present a problem. What is needed is data that is complete, consistent, and accurate.
To protect the quality of data within an organization, master data management (MDM) arose as a discipline that creates a source of truth for enterprise data so that an organization can check and validate their key assets. These key assets, or master data assets, are critical records that provide context for a business. For example, master data might include information on specific products, employees, customers, financial structures, suppliers, or locations. Master data management ensures data quality across an entire organization by maintaining an authoritative consolidated de-duplicated set of the master data records, and ensuring data remains consistent across your organization's complete data estate.
-As an example, it can be difficult for a company to have a clear, single view of their customers. Customer data may differ between systems, there may be duplicated records due to incorrect entry, or shipping and customer service systems may vary due to name, address, or other attributes. Master data management consolidates all this differing information about the customer it into a single, standard format that can be used to check data across an organizations entire data estate. Not only does this improve quality of data by eliminating mismatched data across departments, but it ensures that data analyzed for business intelligence (BI) and other applications is trustworthy and up to date, reduces data load by removing duplicate records across the organization, and streamlines communications between business systems.
+As an example, it can be difficult for a company to have a clear, single view of their customers. Customer data may differ between systems, there may be duplicated records due to incorrect entry, or shipping and customer service systems may vary due to name, address, or other attributes. Master data management consolidates and standardizes all this differing information about the customer. This standardization process may involve automatic or user-defined rules, validations and checks. It's the job of the MDM system to ensure your data remains consistent within the framework of these rules over time. Not only does this improve quality of data by eliminating mismatched data across departments, but it ensures that data analyzed for business intelligence (BI) and other applications is trustworthy and up to date, reduces data load by removing duplicate records across the organization, and streamlines communications between business systems.
-More Details on [Profisee MDM](https://profisee.com/master-data-management-what-why-how-who/) and [Profisee-Purview MDM Concepts and Azure Architecture](/azure/architecture/reference-architectures/data/profisee-master-data-management-purview).
+The ability to consolidate data from multiple disparate systems is key if we want to use the data to drive business insights and operational efficiencies - or any form of 'digital transformation'. What we need in that case is high-quality, trusted data that is ready to use, whether it's being consumed in basic enterprise metrics or advanced AI algorithms. Bridging this gap is the job of data governance and MDM, and in the Azure world that means [Microsoft Purview](https://azure.microsoft.com/services/purview/) and [Profisee MDM](https://profisee.com/platform).
+
-## Microsoft Purview & Profisee Integrated MDM - Better Together!
+While governance systems can *define* data standards, MDM is where they're *enforced*. Data from different systems can be matched and merged, validated against data quality and governance standards, and remediated where required. Then the new corrected and validated 'master' data can be shared to downstream analytics systems and then back into source systems to drive operational improvements. By properly creating and maintaining enterprise master data, we ensure that data is no longer a liability and cause for concern, but an asset of the business that enables improved operation and innovation.
-### Profisee MDM: True SaaS experience
+More Details on [Profisee MDM](https://profisee.com/master-data-management-what-why-how-who/) and [Profisee-Purview MDM Concepts and Azure Architecture](/azure/architecture/reference-architectures/data/profisee-master-data-management-purview).
-A fully managed instance of Profisee MDM hosted in the Azure cloud. Full turn-key service for the easiest and fastest MDM deployment.
+## Microsoft Purview & Profisee MDM - Better Together!
-- **Platform and Management in One** - Apply a true, end-to-end SaaS platform with one agreement and no third parties. -- **Industry-leading Cloud Service** - Hosted on Azure for industry-leading scalability and availability. -- **The fastest path to trusted data** - Leave the networking, firewalls and storage to us so you can deploy in minutes.
+Microsoft Purview and Profisee MDM are often discussed as being a 'Better Together' value proposition due to the complementary nature of the solutions. Microsoft Purview excels at cataloging data sources and defining data standards, while Profisee MDM enforces those standards across master data drawn from multiple siloed sources. It's clear not only that each system has independent value to offer, but also that each reinforces the other for a natural 'Better Together' synergy that goes deeper than the independent offerings.
+ - Common technical foundation - Profisee was born out of Microsoft technologies using common tools, databases, and infrastructure, so any 'Microsoft shop' will find the Profisee solution familiar. In fact, for many years Profisee MDM was built on Microsoft Master Data Services (MDS), and now that MDS is nearing end of life, Profisee is the premier upgrade/replacement solution for MDS.
+ - Developer collaboration and joint development - Profisee and Purview developers have collaborated extensively to ensure a good complementary fit between their respective solutions and to deliver a seamless integration that meets the needs of their customers.
+ - Joint sales and deployments - Profisee has more MDM deployments on Azure, and jointly with Purview, than any other MDM vendor, and can be purchased through Azure Marketplace. In FY2023, Profisee is the only MDM vendor with a top-tier Microsoft partner certification available as an IaaS/CaaS or SaaS offering through Azure Marketplace.
+ - Rapid and reliable deployment - Rapid and reliable deployment is critical for any enterprise software, and Gartner points out that Profisee has more implementations taking under 90 days than any other MDM vendor.
+ - Inherently multi-domain - Profisee offers an inherently multi-domain approach to MDM where there are no limitations to the number or specificity of master data domains. This design aligns well with customers looking to modernize their data estate, who may start with a limited number of domains but ultimately will benefit from maximizing domain coverage (matched to their data governance coverage) across their whole data estate.
+ - Engineered for Azure - Profisee has been engineered to be cloud-native, with options for both SaaS and managed IaaS/CaaS deployments on Azure (see the next section).
-### Profisee MDM: Ultimate PaaS flexibility
+## Profisee MDM Deployment Flexibility: Turnkey SaaS Experience or IaaS/CaaS
+Profisee MDM has been engineered for a cloud-native experience and may be deployed on Azure in two ways: SaaS, or an Azure IaaS/CaaS Kubernetes cluster.
-Complete deployment flexibility and control, using the most efficient and low-maintenance option on the [Microsoft Azure](https://azure.microsoft.com/) cloud or on-premises.
+### Turnkey SaaS Experience
+A fully managed instance of Profisee MDM hosted by Profisee in the Azure cloud. A full turnkey service for the easiest and fastest MDM deployment. Profisee MDM SaaS can be purchased on [Azure Marketplace Profisee MDM - SaaS](https://portal.azure.com/#view/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/profisee.profisee_saas_private/product~/).
+- **Platform and Management in one** - Leverage a true, end-to-end SaaS platform with one agreement and no third parties.
+- **Industry-leading Cloud service** - Hosted on Azure for industry-leading scalability and availability.
+- **The fastest path to Trusted Data** - Leave the networking, firewalls, and storage to us so you can deploy in minutes with minimal technical knowledge.
+### Ultimate IaaS/CaaS Flexibility
+Complete deployment flexibility and control, using the most efficient and low-maintenance option on [Microsoft Azure](https://azure.microsoft.com/) Kubernetes Service, functioning as a customer-hosted, fully managed IaaS/CaaS (container-as-a-service) deployment. The section "Microsoft Purview - Profisee integration deployment on Azure Kubernetes Service (AKS)" below describes this deployment route in detail.
- **Modern Cloud Architecture** - Platform available as a containerized Kubernetes service.
-- **Complete Flexibility & Autonomy** - Available in Azure, AWS, Google Cloud or on-prem.
-- **Fast to Deploy, Easy to Maintain** - Fully containerized configuration streamlines patches and upgrades.
+- **Complete Flexibility & Autonomy** - Available in Azure, AWS, Google Cloud or on-premises.
+- **Fast to Deploy, Easy to Maintain** - 100% containerized configuration streamlines patches and upgrades.
More Details on [Profisee MDM Benefits On Modern Cloud Architecture](https://profisee.com/our-technology/modern-cloud-architecture/), [Profisee Advantage Videos](https://profisee.com/profisee-advantage/) and why it fits best with [Microsoft Azure](https://azure.microsoft.com/) cloud deployments!
-## Microsoft Purview - Profisee reference architecture
+## Microsoft Purview - Profisee Reference Architecture
+
+The reference architecture shows how both Microsoft Purview and Profisee MDM work together to provide a foundation of high-quality, trusted data for the Azure data estate. It's also available as a short video walk-through.
+
+**Video: [Profisee Reference Architecture: MDM and Governance for Azure](https://profisee.wistia.com/medias/k72zte2wbr)**
:::image type="content" alt-text="Diagram of Profisee-Purview Reference Architecture." source="./medim-reference-architecture.png":::
+1. Scan & classify metadata from LOB systems - uses pre-built Purview connectors to scan data sources and populate the Purview Data Catalog.
+2. Publish master data model to Purview - any master data entities created in Profisee MDM are seamlessly published into Purview to further populate the Purview Data Catalog and ensure Purview is 'aware' of this critical source of data.
+3. Enrich master data model with governance details - governance Data Stewards can enrich master data entity definitions in Purview with data dictionary and glossary information, as well as ownership and sensitive data classifications.
+4. Leverage enriched governance data for data stewardship - any definitions and metadata available in Purview are visible in real time in Profisee as guidance for the MDM Data Stewards.
+5. Load source data from business applications - Azure Data Factory extracts data from source systems with 100+ pre-built connectors and/or a REST gateway.
+6. Transactional and unstructured data is loaded to the downstream analytics solution - all 'raw' source data can be loaded to an analytics database such as Synapse (Synapse is generally the preferred analytics database, but others such as Snowflake are also common). Analysis on this raw information without proper master ('golden') data will be subject to inaccuracy, as data overlaps, mismatches, and conflicts won't yet have been resolved.
+7. Master data from source systems is loaded to the Profisee MDM application - multiple streams of 'master' data are loaded to Profisee MDM. Master data is the data that defines a domain entity such as customer, product, asset, location, vendor, patient, household, menu item, ingredient, and so on. This data is typically present in multiple systems, and resolving differing definitions and matching and merging this data across systems is critical to the ability to use any cross-system data in a meaningful way.
+8. Master data is standardized, matched, merged, enriched, and validated according to governance rules - although data quality and governance rules may be defined in other systems (such as Purview), Profisee MDM is where they're enforced. Source records are matched and merged both within and across source systems to create the most complete and correct record possible. Data quality rules check each record for compliance with business and technical requirements.
+9. Extra data stewardship to review and confirm matches, data quality, and data validation issues, as required - any record failing validation or matching with only a low probability score is subject to remediation. To remediate failed validations, a workflow process assigns records requiring review to Data Stewards who are experts in their business data domain. Once records have been verified or corrected, they're ready to use as a 'golden record' master.
+10. Direct access to curated master data, including secure data access for reporting in Power BI - Power BI users may report directly on master data through a dedicated Power BI Connector that recognizes and enforces role-based security and hides various system fields for simplicity.
+11. High-quality, curated master data published to the downstream analytics solution - verified master data can be published out to any target system using Azure Data Factory. Master data, including the parent-child lineage of merged records, is published into Azure Synapse (or wherever the 'raw' source transactional data was loaded). With this combination of properly curated master data plus transactional data, we have a solid foundation of trusted data for further analysis.
+12. Visualization and analytics with high-quality master data eliminates common data quality issues and delivers improved insights - irrespective of the tools used for analysis, including machine learning and visualization, well-curated master data forms a better and more reliable data foundation. The alternative is to use whatever information you can get, and risk misleading results that can damage the business.
### Reference architecture guides/reference documents

- [Data Governance with Profisee and Microsoft Purview](/azure/architecture/reference-architectures/data/profisee-master-data-management-purview)
- [Operationalize Profisee with Azure Data Factory (ADF), Azure Synapse Analytics and Power BI](/azure/architecture/reference-architectures/data/profisee-master-data-management-data-factory)
- [MDM on Azure Overview](/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/govern-master-data)
-### Example scenario: Business & technical use case
-
-Let's take an example of a sample manufacturing company working across multiple data sources; it uses ADF to load the business critical data sources into Profisee, which is when Profisee works its magic and finds out the golden records and matching records and then we finally are able to enrich the metadata with Microsoft Purview (updates made by Microsoft Purview on Classifications, Sensitivity Labels, Glossary and all other Catalog features are reflected seamlessly into Profisee). Finally, they connect the enriched metadata detected by Microsoft Purview and cleansed/curated data by Profisee with Power BI or Azure ML for advanced analytics.
-
-## Microsoft Purview - Profisee integration SaaS deployment on Azure Kubernetes Service (AKS) guide
+## Microsoft Purview - Profisee integration deployment on Azure Kubernetes Service (AKS)
+Go to [https://github.com/Profisee/kubernetes](https://github.com/Profisee/kubernetes) and select Microsoft Purview [**Azure ARM**]. The deployment process detailed below is owned and hosted by you, in your Azure subscription, as an IaaS/CaaS (container-as-a-service) AKS cluster.
1. [Create a user-assigned managed identity in Azure](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities#create-a-user-assigned-managed-identity). You must have a managed identity created to run the deployment. This managed identity must have the following permissions when running a deployment. After the deployment is done, the managed identity can be deleted. Based on your ARM template choices, you'll need some or all of the following roles and permissions assigned to your managed identity:
    - Contributor role to the resource group where AKS will be deployed. It can either be assigned directly to the resource group **OR** at the subscription level and down.
Recommended: Keep it to "Yes, use default Azure DNS". Choosing Yes, the deployer
:::image type="content" alt-text="Image 12 - Screenshot of Profisee Azure ARM Wizard Select Outputs Get FinalDeployment URL." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-click-outputs-get-final-deployment-url.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-click-outputs-get-final-deployment-url.png":::
-- Populate and hydrate data to the newly installed Profisee environment by installing FastApp. Go to your Profisee SaaS deployment URL and select **/Profisee/api/client**. It should look something like - "https://[profisee_name].[region].cloudapp.azure.com/profisee/api/client".
+- Populate and hydrate data to the newly installed Profisee environment by installing FastApp. Go to your Profisee deployment URL and select **/Profisee/api/client**. It should look something like "https://[profisee_name].[region].cloudapp.azure.com/profisee/api/client". Select the downloads for the "Profisee FastApp Studio" utility and the "Profisee Platform Tools", and install both tools on your local client machine.
+
+ :::image type="content" alt-text="Image 13 - Screenshot of Profisee Client Tools Download." source="./media/how-to-deploy-profisee-purview/profisee-download-fastapp-tools.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-download-fastapp-tools.png":::
+
+- Log in to FastApp Studio and perform the rest of the MDM administration and configuration management for Profisee. Once you log in with the administrator email address supplied during setup, you should be able to see the administration menu on the left pane of Profisee FastApp Studio. Navigate these menus to perform the rest of your MDM journey using the FastApp tool. Being able to see the administration menu as shown in the image below confirms successful installation of Profisee on the Azure platform.
+
+ :::image type="content" alt-text="Image 14 - Screenshot of Profisee FastApp Studio once you sign in." source="./media/how-to-deploy-profisee-purview/profisee-fastapp-studio-home-screen.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-fastapp-studio-home-screen.png":::
+
+- As a final validation step, to confirm successful installation and to check whether Profisee has been successfully connected to your Microsoft Purview instance, go to **/Profisee/api/governance/health**. It should look something like "https://[profisee_name].[region].cloudapp.azure.com/Profisee/api/governance/health". The output response will show **"Status": "Healthy"** for all the Purview subsystems.
+
+```json
+{
+ "OverallStatus": "Healthy",
+ "TotalCheckDuration": "0:XXXXXXX",
+ "DependencyHealthChecks": {
+ "purview_service_health_check": {
+ "Status": "Healthy",
+ "Duration": "00:00:NNNN",
+ "Description": "Successfully connected to Purview."
+ },
+ "governance_service_health_check": {
+ "Status": "Healthy",
+ "Duration": "00:00:NNNN",
+ "Description": "Purview cache loaded successfully.
+ Total assets: NNN; Instances: 1; Entities: NNN; Attributes: NNN; Relationships: NNN; Hierarchies: NNN"
+ },
+ "messaging_db_health_check": {
+ "Status": "Healthy",
+ "Duration": "00:00:NNNN",
+ "Description": null
+ },
+ "logging_db_health_check": {
+ "Status": "Healthy",
+ "Duration": "00:00:NNNN",
+ "Description": null
+ }
+ }
+}
+```
+An output response similar to the above confirms successful installation and completes all the deployment steps. It also validates that Profisee has been successfully connected to your Microsoft Purview instance and that the two systems are able to communicate properly.
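If you'd rather script this check, here's a minimal Python sketch using the `requests` package. The URL placeholders match the example above; depending on your configuration, the endpoint may require authentication, which is not shown here.

```python
import requests

# Placeholder URL - substitute your deployment's name and region.
url = "https://[profisee_name].[region].cloudapp.azure.com/Profisee/api/governance/health"

health = requests.get(url, timeout=30).json()
print(health["OverallStatus"])  # expect "Healthy"

# Fail loudly if any Purview subsystem reports an unhealthy status.
checks = health["DependencyHealthChecks"]
assert all(check["Status"] == "Healthy" for check in checks.values())
```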
## Next steps
-Through this guide, we learned how to set up and deploy a Microsoft Purview-Profisee integration.
-For more usage details on Profisee and Profisee FastApp, especially how to configure data models, data quality, MDM and various other features of Profisee - Register on [Profisee Academy Tutorials and Demos](https://profisee.com/demo/) for further detailed tutorials on the Profisee side of MDM!
+Through this guide, we learned about the importance of MDM in driving and supporting data governance in the context of the Azure data estate, and how to set up and deploy a Microsoft Purview-Profisee integration.
+For more usage details on Profisee MDM, register for scheduled trainings, live product demonstrations, and Q&A at [Profisee Academy Tutorials and Demos](https://profisee.com/demo/)!
purview How To Enable Data Use Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-management.md
Previously updated : 4/21/2022 Last updated : 8/10/2022
To disable Data Use Management for a source, resource group, or subscription, a
1. Set the **Data Use Management** toggle to **Disabled**.

## Additional considerations related to Data Use Management
- Make sure you write down the **Name** you use when registering in Microsoft Purview. You will need it when you publish a policy. The recommended practice is to make the registered name exactly the same as the endpoint name.
- To disable a source for *Data Use Management*, remove it first from being bound (i.e. published) in any policy.
- While a user needs to have both data source *Owner* and Microsoft Purview *Data source admin* to enable a source for *Data Use Management*, either of those roles can independently disable it.
-- Make sure you write down the **Name** you use when registering in Microsoft Purview. You will need it when you publish a policy. The recommended practice is to make the registered name exactly the same as the endpoint name.
-- To disable a source for *Data Use Management*, remove it first from being bound (i.e., published) in any policy.
-- While user needs to have both data source *Owner* and Microsoft Purview *Data source admin* to enable a source for *Data Use Management*, either of those roles can independently disable it.
- Disabling *Data Use Management* for a subscription will disable it also for all assets registered in that subscription.
-> - Moving data sources to a different resource group or subscription is not yet supported. If want to do that, de-register the data source in Microsoft Purview before moving it and then register it again after that happens.
+> - Moving data sources to a different resource group or subscription is not supported. If you want to do that, de-register the data source in Microsoft Purview before moving it, and then register it again after the move. Note that policies are bound to the data source ARM path. Changing the data source's subscription or resource group makes policies ineffective.
> - Once a subscription gets disabled for *Data Use Management* any underlying assets that are enabled for *Data Use Management* will be disabled, which is the right behavior. However, policy statements based on those assets will still be allowed after that.

## Data Use Management best practices
purview How To Lineage Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-lineage-powerbi.md
Previously updated : 03/30/2021 Last updated : 08/11/2022

# How to get lineage from Power BI into Microsoft Purview
purview How To Lineage Sql Server Integration Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-lineage-sql-server-integration-services.md
Previously updated : 06/30/2021 Last updated : 08/11/2022

# How to get lineage from SQL Server Integration Services (SSIS) into Microsoft Purview
search Search Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-filters.md
You can't modify existing fields to make them filterable. Instead, you need to a
Text filters match string fields against literal strings that you provide in the filter: `$filter=Category eq 'Resort and Spa'`
-Unlike full-text search, there is no lexical analysis or word-breaking for text filters, so comparisons are for exact matches only. For example, assume a field *f* contains "sunny day", `$filter=f eq 'Sunny'` does not match, but `$filter=f eq 'sunny day'` will.
+Unlike full-text search, there's no lexical analysis or word-breaking for text filters, so comparisons are for exact matches only. For example, assume a field *f* contains "sunny day": `$filter=f eq 'sunny'` doesn't match, but `$filter=f eq 'sunny day'` will.
-Text strings are case-sensitive. There is no lower-casing of upper-cased words: `$filter=f eq 'Sunny day'` will not find "sunny day".
+Text strings are case-sensitive, which means text filters are case-sensitive by default. For example, `$filter=f eq 'Sunny day'` won't find "sunny day". However, you can use a [normalizer](search-normalizers.md) to make filtering case-insensitive.
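To try filter behavior end to end, here's a minimal Python sketch that issues a filtered query over REST. The service name, index name, and key are hypothetical placeholders; the call shape follows the [Search Documents REST API](/rest/api/searchservice/search-documents).

```python
import requests

# Hypothetical service, index, and key - substitute your own.
url = "https://my-service.search.windows.net/indexes/hotels/docs/search?api-version=2020-06-30"
headers = {"api-key": "<query-key>"}

# Exact, case-sensitive match: only documents whose Category field
# is exactly 'Resort and Spa' are returned.
body = {"search": "*", "filter": "Category eq 'Resort and Spa'", "count": True}
result = requests.post(url, json=body, headers=headers).json()
print(result["@odata.count"])  # number of matching documents
```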
### Approaches for filtering on text
To work with more examples, see [OData Filter Expression Syntax > Examples](./se
+ [Search Documents REST API](/rest/api/searchservice/search-documents)
+ [Simple query syntax](/rest/api/searchservice/simple-query-syntax-in-azure-search)
+ [Lucene query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search)
-+ [Supported data types](/rest/api/searchservice/supported-data-types)
++ [Supported data types](/rest/api/searchservice/supported-data-types)
search Search Synapseml Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-synapseml-cognitive-services.md
+
+ Title: Use Search with SynapseML
+
+description: Add full text search to big data on Apache Spark that's been loaded and transformed through the open source SynapseML library. In this walkthrough, you'll load invoice files into data frames, apply machine learning through SynapseML, then send it into a generated search index.
++++++ Last updated : 08/09/2022++
+# Add search to AI-enriched data from Apache Spark using SynapseML
+
+In this Azure Cognitive Search article, learn how to add data exploration and full text search to a SynapseML solution.
+
+[SynapseML](/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/) is an open source library that supports massively parallel machine learning over big data. One of the ways in which machine learning is exposed is through *transformers* that perform specialized tasks. Transformers tap into a wide range of AI capabilities, but in this article, we'll focus on just those that call Cognitive Services and Cognitive Search.
+
+In this walkthrough, you'll set up a workbook that does the following:
+
+> [!div class="checklist"]
+> + Load various forms (invoices) into a data frame in an Apache Spark session
+> + Analyze them to determine their features
+> + Assemble the resulting output into a tabular data structure
+> + Write the output to a search index in Azure Cognitive Search
+> + Explore and search over the content you created
+
+Although Azure Cognitive Search has native [AI enrichment](cognitive-search-concept-intro.md), this walkthrough shows you how to access AI capabilities outside of Cognitive Search. By using SynapseML instead of indexers or skills, you're not subject to data limits or any other constraint associated with those objects.
+
+> [!TIP]
+> Watch a demo at [https://www.youtube.com/watch?v=iXnBLwp7f88](https://www.youtube.com/watch?v=iXnBLwp7f88). The demo expands on this walkthrough with more steps and visuals.
+
+## Prerequisites
+
+You'll need the `synapseml` library and several Azure resources. If possible, use the same subscription and region for your Azure resources and put everything into one resource group for simple cleanup later. The following links are for portal installs. The sample data is imported from a public site.
+
++ [Azure Cognitive Search](search-create-service-portal.md) (any tier) <sup>1</sup>
++ [Azure Cognitive Services](/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#create-a-new-azure-cognitive-services-resource) (any tier) <sup>2</sup>
++ [Azure Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal?tabs=azure-portal) (any tier) <sup>3</sup>
+
+<sup>1</sup> You can use the free tier for this walkthrough but [choose a higher tier](search-sku-tier.md) if data volumes are large. You'll need the [API key](search-security-api-keys.md#find-existing-keys) for this resource.
+
+<sup>2</sup> This walkthrough uses Azure Forms Recognizer and Azure Translator. In the instructions below, you'll provide a [Cognitive Services multi-service key](/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#get-the-keys-for-your-resource) and the region, and it'll work for both services.
+
+<sup>3</sup> In this walkthrough, Azure Databricks provides the computing platform. You could also use Azure Synapse Analytics or any other computing platform supported by `synapseml`. The Azure Databricks article listed in the prerequisites includes multiple steps. For this walkthrough, follow only the instructions in "Create a workspace".
+
+> [!NOTE]
+> All of the above resources support security features in the Microsoft Identity platform. For simplicity, this walkthrough assumes key-based authentication, using endpoints and keys copied from the portal pages of each service. If you implement this workflow in a production environment, or share the solution with others, remember to replace hard-coded keys with integrated security or encrypted keys.
+
+## Create a Spark cluster and notebook
+
+In this section, you'll create a cluster, install the `synapseml` library, and create a notebook to run the code.
+
+1. In Azure portal, find your Azure Databricks workspace and select **Launch workspace**.
+
+1. On the left menu, select **Compute**.
+
+1. Select **Create cluster**.
+
+1. Give the cluster a name, accept the default configuration, and then create the cluster. It takes several minutes to create the cluster.
+
+1. Install the `synapseml` library after the cluster is created:
+
+ 1. Select **Library** from the tabs at the top of the cluster's page.
+
+ 1. Select **Install new**.
+
+ :::image type="content" source="media/search-synapseml-cognitive-services/install-library.png" alt-text="Screenshot of the Install New command." border="true":::
+
+ 1. Select **Maven**.
+
+ 1. In Coordinates, enter `com.microsoft.azure:synapseml_2.12:0.10.0`
+
+ 1. Select **Install**.
+
+1. On the left menu, select **Create** > **Notebook**.
+
+ :::image type="content" source="media/search-synapseml-cognitive-services/create-notebook.png" alt-text="Screenshot of the Create Notebook command." border="true":::
+
+1. Give the notebook a name, select **Python** as the default language, and select the cluster that has the `synapseml` library.
+
+1. Create seven consecutive cells. You'll paste code into each one.
+
+ :::image type="content" source="media/search-synapseml-cognitive-services/create-seven-cells.png" alt-text="Screenshot of the notebook with placeholder cells." border="true":::
+
+## Set up dependencies
+
+Paste the following code into the first cell of your notebook. Replace the placeholders with endpoints and access keys for each resource. No other modifications are required, so run the code when you're ready.
+
+This code imports packages and sets up access to the Azure resources used in this workflow.
+
+```python
+import os
+from pyspark.sql.functions import udf, trim, split, explode, col, monotonically_increasing_id, lit
+from pyspark.sql.types import StringType
+from synapse.ml.core.spark import FluentAPI
+
+cognitive_services_key = "placeholder-cognitive-services-multi-service-key"
+cognitive_services_region = "placeholder-cognitive-services-region"
+
+search_service = "placeholder-search-service-name"
+search_key = "placeholder-search-service-api-key"
+search_index = "placeholder-search-index-name"
+```
+
+## Load data into Spark
+
+Paste the following code into the second cell. No modifications are required, so run the code when you're ready.
+
+This code loads a small number of external files from an Azure storage account that's used for demo purposes. The files are various invoices, and they're read into a data frame.
+
+```python
+def blob_to_url(blob):
+ [prefix, postfix] = blob.split("@")
+ container = prefix.split("/")[-1]
+ split_postfix = postfix.split("/")
+ account = split_postfix[0]
+ filepath = "/".join(split_postfix[1:])
+ return "https://{}/{}/{}".format(account, container, filepath)
++
+df2 = (spark.read.format("binaryFile")
+ .load("wasbs://ignite2021@mmlsparkdemo.blob.core.windows.net/form_subset/*")
+ .select("path")
+ .limit(10)
+ .select(udf(blob_to_url, StringType())("path").alias("url"))
+ .cache())
+
+display(df2)
+```
+
+## Apply form recognition
+
+Paste the following code into the third cell. No modifications are required, so run the code when you're ready.
+
+This code loads the [AnalyzeInvoices transformer](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#analyzeinvoices) and passes a reference to the data frame containing the invoices. It calls the pre-built [invoice model](/azure/applied-ai-services/form-recognizer/concept-invoice) of Azure Forms Analyzer.
+
+```python
+from synapse.ml.cognitive import AnalyzeInvoices
+
+analyzed_df = (AnalyzeInvoices()
+ .setSubscriptionKey(cognitive_services_key)
+ .setLocation(cognitive_services_region)
+ .setImageUrlCol("url")
+ .setOutputCol("invoices")
+ .setErrorCol("errors")
+ .setConcurrency(5)
+ .transform(df2)
+ .cache())
+
+display(analyzed_df)
+```
+
+## Apply data restructuring
+
+Paste the following code into the fourth cell and run it. No modifications are required.
+
+This code loads [FormOntologyLearner](https://mmlspark.blob.windows.net/docs/0.10.0/pyspark/synapse.ml.cognitive.html?highlight=formontologylearner#module-synapse.ml.cognitive.FormOntologyLearner), a transformer that analyzes the output of Form Recognizer transformers and infers a tabular data structure. The output of AnalyzeInvoices is dynamic and varies based on the features detected in your content. Furthermore, the AnalyzeInvoices transformer consolidates output into a single column. Because the output is dynamic and consolidated, it's difficult to use in downstream transformations that require more structure.
+
+FormOntologyLearner extends the utility of the AnalyzeInvoices transformer by looking for patterns that can be used to create a tabular data structure. Organizing the output into multiple columns and rows makes the content consumable in other transformers, like AzureSearchWriter.
+
+```python
+from synapse.ml.cognitive import FormOntologyLearner
+
+itemized_df = (FormOntologyLearner()
+ .setInputCol("invoices")
+ .setOutputCol("extracted")
+ .fit(analyzed_df)
+ .transform(analyzed_df)
+ .select("url", "extracted.*").select("*", explode(col("Items")).alias("Item"))
+ .drop("Items").select("Item.*", "*").drop("Item"))
+
+display(itemized_df)
+```
+
+## Apply translations
+
+Paste the following code into the fifth cell. No modifications are required, so run the code when you're ready.
+
+This code loads [Translate](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#translate), a transformer that calls the Azure Translator service in Cognitive Services. The original text, which is in English in the "Description" column, is machine-translated into various languages. All of the output is consolidated into "output.translations" array.
+
+```python
+from synapse.ml.cognitive import Translate
+
+translated_df = (Translate()
+ .setSubscriptionKey(cognitive_services_key)
+ .setLocation(cognitive_services_region)
+ .setTextCol("Description")
+ .setErrorCol("TranslationError")
+ .setOutputCol("output")
+ .setToLanguage(["zh-Hans", "fr", "ru", "cy"])
+ .setConcurrency(5)
+ .transform(itemized_df)
+ .withColumn("Translations", col("output.translations")[0])
+ .drop("output", "TranslationError")
+ .cache())
+
+display(translated_df)
+```
+
+> [!TIP]
+> To check for translated strings, scroll to the end of the rows.
+>
+> :::image type="content" source="media/search-synapseml-cognitive-services/translated-strings.png" alt-text="Screenshot of table output, showing the Translations column." border="true":::
+
+## Apply search indexing
+
+Paste the following code in the sixth cell and then run it. No modifications are required.
+
+This code loads [AzureSearchWriter](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#azuresearch). It consumes a tabular dataset and infers a search index schema that defines one field for each column. The translations structure is an array, so it's articulated in the index as a complex collection with subfields for each language translation. The generated index will have a document key and use the default values for fields created using the [Create Index REST API](/rest/api/searchservice/create-index).
+
+```python
+from synapse.ml.cognitive import *
+
+(translated_df.withColumn("DocID", monotonically_increasing_id().cast("string"))
+ .withColumn("SearchAction", lit("upload"))
+ .writeToAzureSearch(
+ subscriptionKey=search_key,
+ actionCol="SearchAction",
+ serviceName=search_service,
+ indexName=search_index,
+ keyCol="DocID",
+ ))
+```
+
+## Query the index
+
+Paste the following code into the seventh cell and then run it. No modifications are required, except that you might want to vary the [query syntax](query-simple-syntax.md) or [review these query examples](search-query-simple-examples.md) to further explore your content.
+
+This code calls the [Search Documents REST API](/rest/api/searchservice/search-documents) to query an index. This particular example searches for the word "door". The query returns a count of the number of matching documents. It also returns just the contents of the "Description" and "Translations" fields. If you want to see the full list of fields, remove the "select" parameter.
+
+```python
+import requests
+
+url = "https://{}.search.windows.net/indexes/{}/docs/search?api-version=2020-06-30".format(search_service, search_index)
+requests.post(url, json={"search": "door", "count": "true", "select": "Description, Translations"}, headers={"api-key": search_key}).json()
+```
+
+The following screenshot shows the cell output for the above script.
++
+## Clean up resources
+
+When you're working in your own subscription, at the end of a project, it's a good idea to remove the resources that you no longer need. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
+
+You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
+
+## Next steps
+
+In this walkthrough, you learned about the [AzureSearchWriter](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#azuresearch) transformer in SynapseML, which is a new way of creating and loading search indexes in Azure Cognitive Search. The transformer takes structured JSON as an input. The FormOntologyLearner can provide the necessary structure for output produced by the Forms Recognizer transformers in SynapseML.
+
+As a next step, review the other SynapseML tutorials that produce transformed content you might want to explore through Azure Cognitive Search:
+
+> [!div class="nextstepaction"]
+> [Tutorial: Text Analytics with Cognitive Service](/azure/synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark)
search Tutorial Csharp Orders https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-orders.md
Consider the following takeaways from this project:
You have completed this series of C# tutorials - you should have gained valuable knowledge of the Azure Cognitive Search APIs.
-For further reference and tutorials, consider browsing [Microsoft Learn](/learn/browse/?products=azure), or the other tutorials in the [Azure Cognitive Search Documentation](./index.yml).
+For further reference and tutorials, consider browsing [Microsoft Learn](/learn/browse/?products=azure), or the other tutorials in the [Azure Cognitive Search documentation](./index.yml).
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
_Im_NetworkSession (hostname_has_any = torProxies)
The Network Session information model is aligned with the [OSSEM Network entity schema](https://github.com/OTRF/OSSEM/blob/master/docs/cdm/entities/network.md).
-Network session events use the descriptors `Src` and `Dst` to denote the roles of the devices and related users and applications involved in the session. So, for example, the source device hostname and IP address are named `SrcHostname` and `SrcIpAddr`. Note that other ASIM schemas typically use `Target` instead of `Dst`.
+Network session events use the descriptors `Src` and `Dst` to denote the roles of the devices and related users and applications involved in the session. So, for example, the source device hostname and IP address are named `SrcHostname` and `SrcIpAddr`. Other ASIM schemas typically use `Target` instead of `Dst`.
For events reported by an endpoint and for which the event type is `EndpointNetworkSession`, the descriptors `Local` and `Remote` denote the endpoint itself and the device at the other end of the network session respectively.
The following list mentions fields that have specific guidelines for Network Ses
| Field | Class | Type | Description |
|-------|-------|------|-------------|
| **EventCount** | Mandatory | Integer | Netflow sources support aggregation, and the **EventCount** field should be set to the value of the Netflow **FLOWS** field. For other sources, the value is typically set to `1`. |
-| <a name="eventtype"></a> **EventType** | Mandatory | Enumerated | Describes the operation reported by the record.<br><br> For Network Session records, the allowed values are:<br> - `EndpointNetworkSession`: for sessions reported by endpoint systems, including clients and servers. For such systems, the schema supports the `remote` and `local` alias fields. <br> - `NetworkSession`: for sessions reported by intermediary systems and network taps. <br> - `Flow`: for `NetFlow` type aggregated flows which group multiple similar sessions together. For such records, [EventSubType](#eventsubtype) should be left empty. |
+| <a name="eventtype"></a> **EventType** | Mandatory | Enumerated | Describes the operation reported by the record.<br><br> For Network Session records, the allowed values are:<br> - `EndpointNetworkSession`: for sessions reported by endpoint systems, including clients and servers. For such systems, the schema supports the `remote` and `local` alias fields. <br> - `NetworkSession`: for sessions reported by intermediary systems and network taps. <br> - `Flow`: for `NetFlow` type aggregated flows, which group multiple similar sessions together. For such records, [EventSubType](#eventsubtype) should be left empty. |
| <a name="eventsubtype"></a>**EventSubType** | Optional | String | Additional description of the event type, if applicable. <br> For Network Session records, supported values include:<br>- `Start`<br>- `End` |
-| **EventResult** | Mandatory | Enumerated | If the source device does not provide an event result, **EventResult** should be based on the value of [DvcAction](#dvcaction). If [DvcAction](#dvcaction) is `Deny`, `Drop`, `Drop ICMP`, `Reset`, `Reset Source`, or `Reset Destination`<br>, **EventResult** should be `Failure`. Otherwise, **EventResult** should be `Success`. |
+| <a name="eventresult"></a>**EventResult** | Mandatory | Enumerated | If the source device does not provide an event result, **EventResult** should be based on the value of [DvcAction](#dvcaction). If [DvcAction](#dvcaction) is `Deny`, `Drop`, `Drop ICMP`, `Reset`, `Reset Source`, or `Reset Destination`<br>, **EventResult** should be `Failure`. Otherwise, **EventResult** should be `Success`. |
+| **EventResultDetails** | Recommended | Enumerated | Reason or details for the result reported in the [EventResult](#eventresult) field. Supported values are:<br> - `Failover` <br> - `Invalid TCP` <br> - `Invalid Tunnel` <br> - `Maximum Retry` <br> - `Reset` <br> - `Routing issue` <br> - `Simulation` <br> - `Terminated` <br> - `Timeout` <br> - `Unknown` <br> - `NA`<br><br>The original, source-specific value is stored in the [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails) field. |
| **EventSchema** | Mandatory | String | The name of the schema documented here is `NetworkSession`. |
| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.4`. |
| <a name="dvcaction"></a>**DvcAction** | Recommended | Enumerated | The action taken on the network session. Supported values are:<br>- `Allow`<br>- `Deny`<br>- `Drop`<br>- `Drop ICMP`<br>- `Reset`<br>- `Reset Source`<br>- `Reset Destination`<br>- `Encrypt`<br>- `Decrypt`<br>- `VPNroute`<br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. The original value should be stored in the [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction) field.<br><br>Example: `drop` |
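The **EventResult** fallback rule above is mechanical, so a parser can implement it directly. Here's a minimal Python sketch of that mapping; the function name is illustrative, not part of ASIM:

```python
from typing import Optional

# DvcAction values that indicate a failed session, per the guideline above.
_FAILURE_ACTIONS = {
    "Deny", "Drop", "Drop ICMP", "Reset", "Reset Source", "Reset Destination",
}

def derive_event_result(dvc_action: Optional[str]) -> str:
    """Derive EventResult when the source provides no explicit result."""
    return "Failure" if dvc_action in _FAILURE_ACTIONS else "Success"

assert derive_event_result("Drop ICMP") == "Failure"
assert derive_event_result("Allow") == "Success"
```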
The following list mentions fields that have specific guidelines for Network Session events:
#### All common fields
-Fields that appear in the table below are common to all ASIM schemas. Any guideline specified above overrides the general guidelines for the field. For example, a field might be optional in general, but mandatory for a specific schema. For further details on each field, refer to the [ASIM Common Fields](normalization-common-fields.md) article.
+Fields that appear in the table below are common to all ASIM schemas. Any guideline specified above overrides the general guidelines for the field. For example, a field might be optional in general, but mandatory for a specific schema. For more information on each field, refer to the [ASIM Common Fields](normalization-common-fields.md) article.
| **Class** | **Fields** |
| --------- | ---------- |
Fields that appear in the table below are common to all ASIM schemas. Any guideline specified above overrides the general guidelines for the field.
| **NetworkPackets** | Optional | Long | The number of packets sent in both directions. If both **PacketsReceived** and **PacketsSent** exist, **NetworkPackets** should equal their sum. The meaning of a packet is defined by the reporting device. If the event is aggregated, **NetworkPackets** should be the sum over all aggregated sessions.<br><br>Example: `6924` |
| <a name="networksessionid"></a>**NetworkSessionId** | Optional | String | The session identifier as reported by the reporting device. <br><br>Example: `172\_12\_53\_32\_4322\_\_123\_64\_207\_1\_80` |
| **SessionId** | Alias | String | Alias to [NetworkSessionId](#networksessionid). |
-| **TcpFlagsAck** | Optional | Boolean | The TCP ACK Flag reported. The acknowledgment flag is used to acknowledge the successful receipt of a packet. As we can see from the diagram above, the receiver sends an ACK as well as a SYN in the second step of the three way handshake process to tell the sender that it received its initial packet. |
+| **TcpFlagsAck** | Optional | Boolean | The TCP ACK Flag reported. The acknowledgment flag is used to acknowledge the successful receipt of a packet. In the second step of the three-way handshake, the receiver sends both an ACK and a SYN to tell the sender that it received its initial packet. |
| **TcpFlagsFin** | Optional | Boolean