Updates from: 08/12/2022 01:15:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-twitter.md
zone_pivot_groups: b2c-policy-type
To enable sign-in for users with a Twitter account in Azure AD B2C, you need to create a Twitter application. If you don't already have a Twitter account, you can sign up at [`https://twitter.com/signup`](https://twitter.com/signup). You also need to [Apply for a developer account](https://developer.twitter.com/). For more information, see [Apply for access](https://developer.twitter.com/en/apply-for-access).
+1. Sign in to the [Twitter Developer Portal](https://developer.twitter.com/portal/projects-and-apps) with your Twitter account credentials.
+1. Select the **+ Create Project** button.
+1. Under the **Project name** tab, enter a preferred name for your project, and then select the **Next** button.
+1. Under the **Use case** tab, select your preferred use case, and then select **Next**.
+1. Under the **Project description** tab, enter your project description, and then select the **Next** button.
+1. Under the **App name** tab, enter a name for your app, such as *azureadb2c*, and then select the **Next** button.
+1. Under the **Keys & Tokens** tab, copy the values of **API Key** and **API Key Secret** for later. You use both of them to configure Twitter as an identity provider in your Azure AD B2C tenant.
+1. Select **App settings** to open the app settings.
+1. At the lower part of the page, under **User authentication settings**, select **Set up**.
+1. On the **User authentication settings** page, select the **OAuth 2.0** option.
+1. Under **OAUTH 2.0 SETTINGS**, for **Type of app**, select the appropriate app type, such as *Web App*.
+1. Under **GENERAL AUTHENTICATION SETTINGS**:
+ 1. For the **Callback URI/Redirect URL**, enter `https://your-tenant.b2clogin.com/your-tenant-name.onmicrosoft.com/your-policy-id/oauth1/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/your-policy-id/oauth1/authresp`. Use all lowercase letters when entering your tenant name and policy ID even if they are defined with uppercase letters in Azure AD B2C. Replace:
+ - `your-tenant-name` with the name of your tenant.
+ - `your-domain-name` with your custom domain.
+ - `your-policy-id` with the identifier of your user flow. For example, `b2c_1a_signup_signin_twitter`.
+ 1. For the **Website URL**, enter `https://your-tenant.b2clogin.com`. Replace `your-tenant` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name`.
+ 1. Enter a URL for the **Terms of service**, for example `http://www.contoso.com/tos`. The policy URL is a page you maintain to provide terms and conditions for your application.
+ 1. Enter a URL for the **Privacy policy**, for example `http://www.contoso.com/privacy`. The policy URL is a page you maintain to provide privacy information for your application.
+1. Select **Save**.
+ ::: zone-end
+1. Sign in to the [Twitter Developer Portal](https://developer.twitter.com/portal/projects-and-apps) with your Twitter account credentials.
+1. Select the **+ Create Project** button.
+1. Under the **Project name** tab, enter a preferred name for your project, and then select the **Next** button.
To enable sign-in for users with a Twitter account in Azure AD B2C, you need to
1. On the **User authentication settings** page, select the **OAuth 2.0** option.
1. Under **OAUTH 2.0 SETTINGS**, for **Type of app**, select the appropriate app type, such as *Web App*.
1. Under **GENERAL AUTHENTICATION SETTINGS**:
- 1. For the **Callback URI/Redirect URL**, enter `https://your-tenant.b2clogin.com/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Use all lowercase letters when entering your tenant name and user flow ID even if they are defined with uppercase letters in Azure AD B2C. Replace:
+ 1. For the **Callback URI/Redirect URL**, enter `https://your-tenant.b2clogin.com/your-tenant-name.onmicrosoft.com/your-user-flow-name/oauth1/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/your-user-flow-name/oauth1/authresp`. Use all lowercase letters when entering your tenant name and user flow name even if they are defined with uppercase letters in Azure AD B2C. Replace:
 - `your-tenant-name` with the name of your tenant.
 - `your-domain-name` with your custom domain.
- - `your-user-flow-Id` with the identifier of your user flow. For example, `b2c_1a_signup_signin_twitter`.
-
+ - `your-user-flow-name` with the identifier of your user flow. For example, `b2c_1_signup_signin_twitter`.
 1. For the **Website URL**, enter `https://your-tenant.b2clogin.com`. Replace `your-tenant` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name`.
 1. Enter a URL for the **Terms of service**, for example `http://www.contoso.com/tos`. The policy URL is a page you maintain to provide terms and conditions for your application.
 1. Enter a URL for the **Privacy policy**, for example `http://www.contoso.com/privacy`. The policy URL is a page you maintain to provide privacy information for your application.
1. Select **Save**.
+
::: zone pivot="b2c-user-flow"
At this point, the Twitter identity provider has been set up, but it's not yet a
1. Select the **Run user flow** button.
1. From the sign-up or sign-in page, select **Twitter** to sign in with a Twitter account.
-If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
::: zone-end

::: zone pivot="b2c-custom-policy"
You can define a Twitter account as a claims provider by adding it to the **Clai
1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button.
1. From the sign-up or sign-in page, select **Twitter** to sign in with a Twitter account.

If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+> [!TIP]
+> If you get an `unauthorized` error while testing this identity provider, make sure you use the correct Twitter API Key and API Key Secret, or try applying for [elevated](https://developer.twitter.com/en/portal/products/elevated) access. We also recommend having a look at [Twitter's projects structure](https://developer.twitter.com/en/docs/projects/overview) if you registered your app before that feature was available.
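In a custom policy, the API Key and API Key Secret collected earlier end up in a `ClaimsProvider` entry along the lines of the sketch below. This is an abbreviated, illustrative shape only: the metadata items, the `B2C_1A_TwitterSecret` policy key name, and the output claim mappings are assumptions here, and the linked article contains the full, authoritative XML.

```xml
<!-- Abbreviated sketch of a Twitter claims provider. Metadata items, the policy key
     name, and claim mappings are illustrative; see the article for the full XML. -->
<ClaimsProvider>
  <Domain>twitter.com</Domain>
  <DisplayName>Twitter</DisplayName>
  <TechnicalProfiles>
    <TechnicalProfile Id="Twitter-OAUTH1">
      <DisplayName>Twitter</DisplayName>
      <Protocol Name="OAuth1" />
      <Metadata>
        <Item Key="ProviderName">Twitter</Item>
        <Item Key="authorization_endpoint">https://api.twitter.com/oauth/authenticate</Item>
        <Item Key="access_token_endpoint">https://api.twitter.com/oauth/access_token</Item>
        <Item Key="request_token_endpoint">https://api.twitter.com/oauth/request_token</Item>
        <!-- The API Key value from the Twitter developer portal -->
        <Item Key="client_id">your-api-key</Item>
      </Metadata>
      <CryptographicKeys>
        <!-- Policy key that stores the API Key Secret -->
        <Key Id="client_secret" StorageReferenceId="B2C_1A_TwitterSecret" />
      </CryptographicKeys>
      <OutputClaims>
        <OutputClaim ClaimTypeReferenceId="socialIdpUserId" PartnerClaimType="user_id" />
        <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="screen_name" />
        <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="twitter.com" />
      </OutputClaims>
      <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
    </TechnicalProfile>
  </TechnicalProfiles>
</ClaimsProvider>
```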
active-directory-b2c Localization String Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization-string-ids.md
The following are the IDs for a [Verification display control](display-control-v
| ID | Default value |
| -- | ------------- |
-|intro_msg <sup>*</sup>| Verification is necessary. Please click Send button.|
+|intro_msg<sup>1</sup>| Verification is necessary. Please click Send button.|
|success_send_code_msg | Verification code has been sent. Please copy it to the input box below.|
|failure_send_code_msg | We are having trouble verifying your email address. Please enter a valid email address and try again.|
|success_verify_code_msg | E-mail address verified. You can now continue.|
The following are the IDs for a [Verification display control](display-control-v
|but_verify_code | Verify code|
|but_send_new_code | Send new code|
|but_change_claims | Change e-mail|
+| UserMessageIfVerificationControlClaimsNotVerified<sup>2</sup>| The claims for verification control have not been verified. |
-Note: The `intro_msg` element is hidden, and not shown on the self-asserted page. To make it visible, use the [HTML customization](customize-ui-with-html.md) with Cascading Style Sheets. For example:
+<sup>1</sup> The `intro_msg` element is hidden, and not shown on the self-asserted page. To make it visible, use the [HTML customization](customize-ui-with-html.md) with Cascading Style Sheets. For example:
```css
.verificationInfoText div {
  display: block !important;
}
```
+<sup>2</sup> This error message is displayed to the user if they enter a verification code, but instead of completing the verification by selecting the **Verify** button, they select the **Continue** button.
+
### Verification display control example

```xml
Note: The `intro_msg` element is hidden, and not shown on the self-asserted page
<LocalizedString ElementType="DisplayControl" ElementId="emailVerificationControl" StringId="but_verify_code">Verify code</LocalizedString>
<LocalizedString ElementType="DisplayControl" ElementId="emailVerificationControl" StringId="but_send_new_code">Send new code</LocalizedString>
<LocalizedString ElementType="DisplayControl" ElementId="emailVerificationControl" StringId="but_change_claims">Change e-mail</LocalizedString>
+ <LocalizedString ElementType="ErrorMessage" StringId="UserMessageIfVerificationControlClaimsNotVerified">The claims for verification control have not been verified.</LocalizedString>
</LocalizedStrings>
</LocalizedResources>
```
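For context, the string IDs above localize a verification display control that a self-asserted technical profile references. The following is a minimal sketch of that relationship: the control ID matches the example above, but the claim names and the referencing technical profile are illustrative assumptions, and the control's actions and output claims are omitted.

```xml
<!-- Sketch: a verification display control and a technical profile that references it.
     Claim names and the technical profile Id are illustrative; actions and output claims omitted. -->
<DisplayControl Id="emailVerificationControl" UserInterfaceControlType="VerificationControl">
  <DisplayClaims>
    <DisplayClaim ClaimTypeReferenceId="email" Required="true" />
    <DisplayClaim ClaimTypeReferenceId="verificationCode" ControlClaimType="VerificationCode" Required="true" />
  </DisplayClaims>
</DisplayControl>

<TechnicalProfile Id="LocalAccountSignUpWithLogonEmail">
  <!-- Other elements of the technical profile omitted -->
  <DisplayClaims>
    <DisplayClaim DisplayControlReferenceId="emailVerificationControl" />
  </DisplayClaims>
</TechnicalProfile>
```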
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/page-layout.md
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
|Element |Page layout version range |jQuery version |Handlebars Runtime version |Handlebars Compiler version |
|--------|--------------------------|---------------|---------------------------|----------------------------|
-|multifactor |>= 1.2.4 | 3.5.1 | 4.7.6 |4.7.7 |
+|multifactor |>= 1.2.8 | 3.5.1 | 4.7.7 |4.7.7 |
+| |>= 1.2.4 | 3.5.1 | 4.7.6 |4.7.7 |
| |< 1.2.4 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 |
-|selfasserted |>= 2.1.4 | 3.5.1 |4.7.6 |4.7.7 |
+|selfasserted |>= 2.1.11 | 3.5.1 |4.7.7 |4.7.7 |
+| |>= 2.1.4 | 3.5.1 |4.7.6 |4.7.7 |
| |< 2.1.4 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 |
-|unifiedssp |>= 2.1.4 | 3.5.1 |4.7.6 |4.7.7 |
+|unifiedssp |>= 2.1.7 | 3.5.1 |4.7.7 |4.7.7 |
+| |>= 2.1.4 | 3.5.1 |4.7.6 |4.7.7 |
| |< 2.1.4 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 |
-|globalexception |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
+|globalexception |>= 1.2.3 | 3.5.1 |4.7.7 |4.7.7 |
+| |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
| |< 1.2.1 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 |
-|providerselection |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
+|providerselection |>= 1.2.3 | 3.5.1 |4.7.7 |4.7.7 |
+| |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
| |< 1.2.1 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 |
-|claimsconsent |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
+|claimsconsent |>= 1.2.2 | 3.5.1 |4.7.7 |4.7.7 |
+| |>= 1.2.2 | 3.5.1 |4.7.7 |4.7.7 |
| |< 1.2.1 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 |
-|unifiedssd |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
+|unifiedssd |>= 1.2.3 | 3.5.1 |4.7.7 |4.7.7 |
+| |>= 1.2.1 | 3.5.1 |4.7.6 |4.7.7 |
| |< 1.2.1 | 3.4.1 |4.0.12 |2.0.1 |
| |< 1.2.0 | 1.12.4 |
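For custom policies, the page layout version listed in the tables above is selected through the `DataUri` element of a content definition. The following is a minimal sketch assuming the common `api.selfasserted` content definition and an Ocean Blue template path; adjust the `Id`, `LoadUri`, and version number to match your own policy.

```xml
<!-- Sketch: pin the self-asserted page to layout version 2.1.11.
     The Id and LoadUri shown are common starter-pack defaults and may differ in your policy. -->
<ContentDefinition Id="api.selfasserted">
  <LoadUri>~/tenant/templates/AzureBlue/selfAsserted.cshtml</LoadUri>
  <RecoveryUri>~/common/default_page_error.html</RecoveryUri>
  <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.11</DataUri>
</ContentDefinition>
```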
active-directory Accidental Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/accidental-deletions.md
The Azure AD provisioning service includes a feature to help avoid accidental de
The feature lets you specify a deletion threshold, above which an admin needs to explicitly choose to allow the deletions to be processed.
-> [!NOTE]
-> Accidental deletions are not supported for our Workday / SuccessFactors integrations. It is also not supported for changes in scoping (e.g. changing a scoping filter or changing from "sync all users and groups" to "sync assigned users and groups"). Until the accidental deletions prevention feature is fully released, you'll need to access the Azure portal using this URL: https://aka.ms/AccidentalDeletionsPreview
## Configure accidental deletion prevention

To enable accidental deletion prevention:

1. In the Azure portal, select **Azure Active Directory**.
threshold. Also, be sure the notification email address is completed. If the del
When the deletion threshold is met, the job will go into quarantine and a notification email will be sent. The quarantined job can then be allowed or rejected. To learn more about quarantine behavior, see [Application provisioning in quarantine status](application-provisioning-quarantine-status.md).
-## Known limitations
-There are two key limitations to be aware of and are actively working to address:
-- HR-driven provisioning from Workday and SuccessFactors don't support the accidental deletions feature.
-- Changes to your provisioning configuration (e.g. changing scoping) isn't supported by the accidental deletions feature.

## Recovering from an accidental deletion

If you encounter an accidental deletion, you'll see it on the provisioning status page. It will say **Provisioning has been quarantined. See quarantine details for more information.**
active-directory Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md
This article describes how to onboard a Google Cloud Platform (GCP) project on P
> [!NOTE] > 1. To confirm that the app was created, open **App registrations** in Azure and, on the **All applications** tab, locate your app.
- > 1. Select the app name to open the **Expose an API** page. The **Application ID URI** displayed in the **Overview** page is the *audience value* used while making an OIDC connection with your AWS account.
+ > 1. Select the app name to open the **Expose an API** page. The **Application ID URI** displayed in the **Overview** page is the *audience value* used while making an OIDC connection with your GCP account.
1. Return to Permissions Management, and in the **Permissions Management Onboarding - Azure AD OIDC App Creation**, select **Next**.
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Continuous access evaluation is implemented by enabling services, like Exchange
This process enables the scenario where users lose access to organizational SharePoint Online files, email, calendar, or tasks, and Teams from Microsoft 365 client apps within minutes after a critical event.

> [!NOTE]
-> Teams and SharePoint Online do not support user risk events.
+> SharePoint Online doesn't support user risk events.
### Conditional Access policy evaluation
active-directory Plan Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/plan-conditional-access.md
Previously updated : 1/19/2022 Last updated : 08/11/2022
With this evaluation and enforcement, Conditional Access defines the basis of [M
![Conditional Access overview](./media/plan-conditional-access/conditional-access-overview-how-it-works.png)
-Microsoft provides [security defaults](../fundamentals/concept-fundamentals-security-defaults.md) that ensure a basic level of security enabled in tenants that do not have Azure AD Premium. With Conditional Access, you can create policies that provide the same protection as security defaults, but with granularity. Conditional Access and security defaults are not meant to be combined as creating Conditional Access policies will prevent you from enabling security defaults.
+Microsoft provides [security defaults](../fundamentals/concept-fundamentals-security-defaults.md) that ensure a basic level of security enabled in tenants that don't have Azure AD Premium. With Conditional Access, you can create policies that provide the same protection as security defaults, but with granularity. Conditional Access and security defaults aren't meant to be combined as creating Conditional Access policies will prevent you from enabling security defaults.
### Prerequisites

* A working Azure AD tenant with Azure AD Premium or trial license enabled. If needed, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An account with Conditional Access administrator privileges.
* A test user (non-administrator) that allows you to verify policies work as expected before you impact real users. If you need to create a user, see [Quickstart: Add new users to Azure Active Directory](../fundamentals/add-users-azure-active-directory.md).
* A group that the non-administrator user is a member of. If you need to create a group, see [Create a group and add members in Azure Active Directory](../fundamentals/active-directory-groups-create-azure-portal.md).

## Understand Conditional Access policy components
Here are some common questions about [Assignments and Access Controls](concept-c
**Users or workload identities**

* Which users, groups, directory roles and workload identities will be included in or excluded from the policy?
* What emergency access accounts or groups should be excluded from policy?

**Cloud apps or actions**
Will this policy apply to any application, user action, or authentication contex
**Conditions**

* Which device platforms will be included in or excluded from the policy?
* What are the organization's trusted locations?
* What locations will be included in or excluded from the policy?
* What client app types will be included in or excluded from the policy?
* Do you have policies that would drive excluding Azure AD joined devices or Hybrid Azure AD joined devices from policies?
* If using [Identity Protection](../identity-protection/concept-identity-protection-risks.md), do you want to incorporate sign-in risk protection?

**Grant or Block**
Will this policy apply to any application, user action, or authentication contex
Do you want to grant access to resources by requiring one or more of the following?

* Require MFA
* Require device to be marked as compliant
* Require hybrid Azure AD joined device
* Require approved client app
* Require app protection policy
* Require password change
* Use Terms of Use

**Session control**
Do you want to grant access to resources by requiring one or more of the followi
Do you want to enforce any of the following access controls on cloud apps?

* Use app enforced restrictions
* Use Conditional Access App control
* Enforce sign-in frequency
* Use persistent browser sessions
* Customize continuous access evaluation

### Access token issuance
Do you want to enforce any of the following access controls on cloud apps?
This doesn't prevent the app from having separate authorization to block access. For example, consider a policy where:

* IF user is in finance team, THEN force MFA to access their payroll app.
* IF a user not in finance team attempts to access the payroll app, the user will be issued an access token.
- * To ensure users outside of finance group cannot access the payroll app, a separate policy should be created to block all other users. If all users except for finance team and emergency access accounts group, accessing payroll app, then block access.
+ * To ensure users outside of the finance group can't access the payroll app, create a separate policy to block all other users: if a user is not in the finance team or the emergency access accounts group and is accessing the payroll app, then block access.
## Follow best practices
Conditional Access provides you with great configuration flexibility. However, g
**If you misconfigure a policy, it can lock the organizations out of the Azure portal**.
-Mitigate the impact of accidental administrator lock out by creating two or more [emergency access accounts](../roles/security-emergency-access.md) in your organization. Create a user account dedicated to policy administration and excluded from all your policies.
+Mitigate the impact of accidental administrator lockout by creating two or more [emergency access accounts](../roles/security-emergency-access.md) in your organization. Create a user account dedicated to policy administration and excluded from all your policies.
### Apply Conditional Access policies to every app
-**Ensure that every app has at least one conditional access policy applied**. From a security perspective it is better to create a policy that encompasses All cloud apps and then exclude applications that you do not want the policy to apply to. This ensures you do not need to update Conditional Access policies every time you onboard a new application.
+**Ensure that every app has at least one Conditional Access policy applied**. From a security perspective it's better to create a policy that encompasses **All cloud apps**, and then exclude applications that you don't want the policy to apply to. This ensures you don't need to update Conditional Access policies every time you onboard a new application.
> [!IMPORTANT]
> Be very careful in using block and all apps in a single policy. This could lock admins out of the Azure portal, and exclusions cannot be configured for important endpoints such as Microsoft Graph.

### Minimize the number of Conditional Access policies
-Creating a policy for each app isnΓÇÖt efficient and leads to difficult administration. Conditional Access will only apply to the first 195 policies per user. We recommend that you **analyze your apps and group them into applications that have the same resource requirements for the same users**. For example, if all Microsoft 365 apps or all HR apps have the same requirements for the same users, create a single policy and include all the apps to which it applies.
+Creating a policy for each app isn't efficient and leads to difficult administration. Conditional Access has a limit of 195 policies per tenant. We recommend that you **analyze your apps and group them into applications that have the same resource requirements for the same users**. For example, if all Microsoft 365 apps or all HR apps have the same requirements for the same users, create a single policy and include all the apps to which it applies.
### Set up report-only mode

It can be difficult to predict the number and names of users affected by common deployment initiatives such as:
-* blocking legacy authentication
-* requiring MFA
-* implementing sign-in risk policies
+* Blocking legacy authentication
+* Requiring MFA
+* Implementing sign-in risk policies
[Report-only mode](concept-conditional-access-report-only.md) allows administrators to evaluate the impact of Conditional Access policies before enabling them in their environment. **First configure your policies in report-only mode and let them run for an interval before enforcing them in your environment**.

### Plan for disruption
-If you rely on a single access control, such as MFA or a network location, to secure your IT systems, you are susceptible to access failures if that single access control becomes unavailable or misconfigured.
+If you rely on a single access control such as MFA or a network location to secure your IT systems, you're susceptible to access failures if that single access control becomes unavailable or misconfigured.
**To reduce the risk of lockout during unforeseen disruptions, [plan strategies](../authentication/concept-resilient-controls.md) to adopt for your organization**.
If you rely on a single access control, such as MFA or a network location, to se
**A naming standard helps you to find policies and understand their purpose without opening them in the Azure admin portal**. We recommend that you name your policy to show:

* A Sequence Number
* The cloud app(s) it applies to
* The response
* Who it applies to
* When it applies (if applicable)

![Screenshot that shows the naming standards for policies.](media/plan-conditional-access/11.png)
A descriptive name helps you to keep an overview of your Conditional Access impl
In addition to your active policies, implement disabled policies that act as secondary [resilient access controls in outage or emergency scenarios](../authentication/concept-resilient-controls.md). Your naming standard for the contingency policies should include:

* ENABLE IN EMERGENCY at the beginning to make the name stand out among the other policies.
* The name of the disruption it should apply to.
* An ordering sequence number to help the administrator know in which order policies should be enabled.

**Example**
The following name indicates that this policy is the first of four policies to e
### Block countries from which you never expect a sign-in.
-Azure active directory allows you to create [named locations](location-condition.md). Create the list of countries that are allowed, and then create a network block policy with these "allowed countries" as an exclusion. This is less overhead for customers who are mainly based in smaller geographic locations.**Be sure to exempt your emergency access accounts from this policy**.
+Azure Active Directory allows you to create [named locations](location-condition.md). Create the list of countries that are allowed, and then create a network block policy with these "allowed countries" as an exclusion. This is less overhead for customers who are based in smaller geographic locations. **Be sure to exempt your emergency access accounts from this policy**.
## Deploy Conditional Access policy
-When new policies are ready, deploy your conditional access policies in phases.
+When new policies are ready, deploy your Conditional Access policies in phases.
### Build your Conditional Access policy
Before you see the impact of your Conditional Access policy in your production e
#### Set up report-only mode
-By default, each policy is created in report-only mode, we recommended organizations test and monitor usage, to ensure intended result, before turning each policy on.
+By default, each policy is created in report-only mode. We recommend that organizations test and monitor usage, to ensure the intended result, before turning on each policy.
[Enable the policy in report-only mode](howto-conditional-access-insights-reporting.md). Once you save the policy in report-only mode, you can see the impact on real-time sign-ins in the sign-in logs. From the sign-in logs, select an event and navigate to the Report-only tab to see the result of each report-only policy.
-You can view the aggregate impact of your Conditional Access policies in the Insights and Reporting workbook. To access the workbook, you need an Azure Monitor subscription and you will need to [stream your sign-in logs to a log analytics workspace](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) .
+You can view the aggregate impact of your Conditional Access policies in the Insights and Reporting workbook. To access the workbook, you need an Azure Monitor subscription and you'll need to [stream your sign-in logs to a log analytics workspace](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
#### Simulate sign-ins using the What If tool
Perform each test in your test plan with test users. The test plan is important
| [Password change for risky users](../identity-protection/howto-identity-protection-configure-risk-policies.md)| Authorized user attempts to sign in with compromised credentials (high risk sign in)| User is prompted to change password or access is blocked based on your policy |

### Deploy in production

After confirming impact using **report-only mode**, an administrator can move the **Enable policy** toggle from **Report-only** to **On**.

### Roll back policies

In case you need to roll back your newly implemented policies, use one or more of the following options:
-* **Disable the policy.** Disabling a policy makes sure it does not apply when a user tries to sign in. You can always come back and enable the policy when you would like to use it.
+* **Disable the policy.** Disabling a policy makes sure it doesn't apply when a user tries to sign in. You can always come back and enable the policy when you would like to use it.
![enable policy image](media/plan-conditional-access/enable-policy.png)
In case you need to roll back your newly implemented policies, use one or more o
When a user is having an issue with a Conditional Access policy, collect the following information to facilitate troubleshooting.
-* User Principle Name
-
+* User Principal Name
* User display name
* Operating system name
* Time stamp (approximate is ok)
* Target application
* Client application type (browser vs client)
* Correlation ID (this is unique to the sign-in)

If the user received a message with a More details link, they can collect most of this information for you.

![Can't get to app error message](media/plan-conditional-access/cant-get-to-app.png)
-Once you have collected the information, See the following resources:
+Once you've collected the information, see the following resources:
* [Sign-in problems with Conditional Access](troubleshoot-conditional-access.md) - Understand unexpected sign-in outcomes related to Conditional Access using error messages and Azure AD sign-ins log.
* [Using the What-If tool](troubleshoot-conditional-access-what-if.md) - Understand why a policy was or wasn't applied to a user in a specific circumstance or if a policy would apply in a known state.

## Next Steps
-[Learn more about Multi-factor authentication](../authentication/concept-mfa-howitworks.md)
+[Learn more about Multifactor authentication](../authentication/concept-mfa-howitworks.md)
[Learn more about Identity Protection](../identity-protection/overview-identity-protection.md)
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
In this article, we walk through a few common scenarios that can help you unders
In the following examples, you create, update, link, and delete policies for service principals. Claims-mapping policies can only be assigned to service principal objects. If you are new to Azure AD, we recommend that you [learn about how to get an Azure AD tenant](quickstart-create-new-tenant.md) before you proceed with these examples.
-When creating a claims-mapping policy, you can also emit a claim from a directory schema extension attribute in tokens. Use *ExtensionID* for the extension attribute instead of *ID* in the `ClaimsSchema` element. For more info on extension attributes, see [Using directory schema extension attributes](active-directory-schema-extensions.md).
+When creating a claims-mapping policy, you can also emit a claim from a directory extension attribute in tokens. Use *ExtensionID* for the extension attribute instead of *ID* in the `ClaimsSchema` element. For more info on extension attributes, see [Using directory extension attributes](active-directory-schema-extensions.md).
> [!NOTE]
> The [Azure AD PowerShell Module public preview release](https://www.powershellgallery.com/packages/AzureADPreview) is required to configure claims-mapping policies. The PowerShell module is in preview, while the claims mapping and token creation runtime in Azure is generally available. Updates to the preview PowerShell module could require you to update or change your configuration scripts.
If you're not using a verified domain, Azure AD will return an `AADSTS501461` er
- Read the [claims-mapping policy type](reference-claims-mapping-policy-type.md) reference article to learn more.
- To learn how to customize claims issued in the SAML token through the Azure portal, see [How to: Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)
-- To learn more about extension attributes, see [Using directory schema extension attributes in claims](active-directory-schema-extensions.md).
+- To learn more about extension attributes, see [Using directory extension attributes in claims](active-directory-schema-extensions.md).
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
You can use optional claims to:
For the lists of standard claims, see the [access token](access-tokens.md) and [id_token](id-tokens.md) claims documentation.
-While optional claims are supported in both v1.0 and v2.0 format tokens, as well as SAML tokens, they provide most of their value when moving from v1.0 to v2.0. One of the goals of the [Microsoft identity platform](./v2-overview.md) is smaller token sizes to ensure optimal performance by clients. As a result, several claims formerly included in the access and ID tokens are no longer present in v2.0 tokens and must be asked for specifically on a per-application basis.
+While optional claims are supported in both v1.0 and v2.0 format tokens and SAML tokens, they provide most of their value when moving from v1.0 to v2.0. One of the goals of the [Microsoft identity platform](./v2-overview.md) is smaller token sizes to ensure optimal performance by clients. As a result, several claims formerly included in the access and ID tokens are no longer present in v2.0 tokens and must be asked for specifically on a per-application basis.
**Table 1: Applicability**
While optional claims are supported in both v1.0 and v2.0 format tokens, as well
## v1.0 and v2.0 optional claims set
-The set of optional claims available by default for applications to use are listed below. To add custom optional claims for your application, see [Directory Extensions](#configuring-directory-extension-optional-claims), below. When adding claims to the **access token**, the claims apply to access tokens requested *for* the application (a web API), not claims requested *by* the application. No matter how the client accesses your API, the right data is present in the access token that is used to authenticate against your API.
+The set of optional claims available by default for applications to use are listed below. You can use custom data in extension attributes and directory extensions to add optional claims for your application. To use directory extensions, see [Directory Extensions](#configuring-directory-extension-optional-claims), below. When adding claims to the **access token**, the claims apply to access tokens requested *for* the application (a web API), not claims requested *by* the application. No matter how the client accesses your API, the right data is present in the access token that is used to authenticate against your API.
> [!NOTE]
> The majority of these claims can be included in JWTs for v1.0 and v2.0 tokens, but not SAML tokens, except where noted in the Token Type column. Consumer accounts support a subset of these claims, marked in the "User Type" column. Many of the claims listed do not apply to consumer users (they have no tenant, so `tenant_ctry` has no value).
The set of optional claims available by default for applications to use are list
| Name | Description | Token Type | User Type | Notes |
|------|-------------|------------|-----------|-------|
-| `acct` | Users account status in tenant | JWT, SAML | | If the user is a member of the tenant, the value is `0`. If they are a guest, the value is `1`. |
+| `acct` | Users account status in tenant | JWT, SAML | | If the user is a member of the tenant, the value is `0`. If they're a guest, the value is `1`. |
| `auth_time` | Time when the user last authenticated. See OpenID Connect spec.| JWT | | |
| `ctry` | User's country/region | JWT | | Azure AD returns the `ctry` optional claim if it's present and the value of the field is a standard two-letter country/region code, such as FR, JP, SZ, and so on. |
-| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value is not guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or pre-fill in your UX. |
+| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value isn't guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or pre-fill in your UX. |
| `fwd` | IP address.| JWT | | Adds the original IPv4 address of the requesting client (when inside a VNET) |
| `groups`| Optional formatting for group claims |JWT, SAML| |For details see [Group claims](#configuring-groups-optional-claims) below. For more information about group claims, see [How to configure group claims](../hybrid/how-to-connect-fed-group-claims.md). Used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well. |
-| `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | Value is `app` when the token is an app-only token. This is the most accurate way for an API to determine if a token is an app token or an app+user token.|
-| `login_hint` | Login hint | JWT | MSA, Azure AD | An opaque, reliable login hint claim. This claim is the best value to use for the `login_hint` OAuth parameter in all flows to get SSO. It can be passed between applications to help them silently SSO as well - application A can sign in a user, read the `login_hint` claim, and then send the claim and the current tenant context to application B in the query string or fragment when the user clicks on a link that takes them to application B. To avoid race conditions and reliability issues, the `login_hint` claim *doesn't* include the current tenant for the user, and defaults to the user's home tenant when used. If you are operating in a guest scenario, where the user is from another tenant, then you must still provide a tenant identifier in the sign-in request, and pass the same to apps you partner with. This claim is intended for use with your SDK's existing `login_hint` functionality, however that it exposed. |
+| `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | Value is `app` when the token is an app-only token. This claim is the most accurate way for an API to determine if a token is an app token or an app+user token.|
+| `login_hint` | Login hint | JWT | MSA, Azure AD | An opaque, reliable login hint claim. This claim is the best value to use for the `login_hint` OAuth parameter in all flows to get SSO. It can be passed between applications to help them silently SSO as well - application A can sign in a user, read the `login_hint` claim, and then send the claim and the current tenant context to application B in the query string or fragment when the user selects a link that takes them to application B. To avoid race conditions and reliability issues, the `login_hint` claim *doesn't* include the current tenant for the user, and defaults to the user's home tenant when used. If you're operating in a guest scenario where the user is from another tenant, you must provide a tenant identifier in the sign-in request, and pass the same to apps you partner with. This claim is intended for use with your SDK's existing `login_hint` functionality, however that is exposed. |
| `sid` | Session ID, used for per-session user sign-out. | JWT | Personal and Azure AD accounts. | |
| `tenant_ctry` | Resource tenant's country/region | JWT | | Same as `ctry` except set at a tenant level by an admin. Must also be a standard two-letter value. |
| `tenant_region_scope` | Region of the resource tenant | JWT | | |
-| `upn` | UserPrincipalName | JWT, SAML | | An identifier for the user that can be used with the username_hint parameter. Not a durable identifier for the user and should not be used for authorization or to uniquely identity user information (for example, as a database key). Instead, use the user object ID (`oid`) as a database key. For more information, see [Validate the user has permission to access this data](access-tokens.md). Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) should not be shown their User Principal Name (UPN). Instead, use the following ID token claims for displaying sign-in state to the user: `preferred_username` or `unique_name` for v1 tokens and `preferred_username` for v2 tokens. Although this claim is automatically included, you can specify it as an optional claim to attach additional properties to modify its behavior in the guest user case. You should use the `login_hint` claim for `login_hint` use - human-readable identifiers like UPN are unreliable.|
+| `upn` | UserPrincipalName | JWT, SAML | | An identifier for the user that can be used with the username_hint parameter. Not a durable identifier for the user and shouldn't be used for authorization or to uniquely identify user information (for example, as a database key). Instead, use the user object ID (`oid`) as a database key. For more information, see [Validate the user has permission to access this data](access-tokens.md). Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) shouldn't be shown their User Principal Name (UPN). Instead, use the following ID token claims for displaying sign-in state to the user: `preferred_username` or `unique_name` for v1 tokens and `preferred_username` for v2 tokens. Although this claim is automatically included, you can specify it as an optional claim to attach additional properties to modify its behavior in the guest user case. You should use the `login_hint` claim for `login_hint` use - human-readable identifiers like UPN are unreliable.|
| `verified_primary_email` | Sourced from the user's PrimaryAuthoritativeEmail | JWT | | |
| `verified_secondary_email` | Sourced from the user's SecondaryAuthoritativeEmail | JWT | | |
| `vnet` | VNET specifier information. | JWT | | |
These claims are always included in v1.0 Azure AD tokens, but not included in v2
| JWT Claim | Name | Description | Notes |
|-----------|------|-------------|-------|
| `ipaddr` | IP Address | The IP address the client logged in from. | |
-| `onprem_sid` | On-Premises Security Identifier | | |
+| `onprem_sid` | On-premises Security Identifier | | |
| `pwd_exp` | Password Expiration Time | The number of seconds after the time in the iat claim at which the password expires. This claim is only included when the password is expiring soon (as defined by "notification days" in the password policy). | |
| `pwd_url` | Change Password URL | A URL that the user can visit to change their password. This claim is only included when the password is expiring soon (as defined by "notification days" in the password policy). | |
| `in_corp` | Inside Corporate Network | Signals if the client is logging in from the corporate network. If they're not, the claim isn't included. | Based off of the [trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips) settings in MFA. |
| `family_name` | Last Name | Provides the last name, surname, or family name of the user as defined in the user object. <br>"family_name":"Miller" | Supported in MSA and Azure AD. Requires the `profile` scope. |
| `given_name` | First name | Provides the first or "given" name of the user, as set on the user object.<br>"given_name": "Frank" | Supported in MSA and Azure AD. Requires the `profile` scope. |
-| `upn` | User Principal Name | An identifer for the user that can be used with the username_hint parameter. Not a durable identifier for the user and should not be used for authorization or to uniquely identity user information (for example, as a database key). For more information, see [Validate the user has permission to access this data](access-tokens.md). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) should not be shown their User Principal Name (UPN). Instead, use the following `preferred_username` claim for displaying sign-in state to the user. | See [additional properties](#additional-properties-of-optional-claims) below for configuration of the claim. Requires the `profile` scope.|
+| `upn` | User Principal Name | An identifier for the user that can be used with the username_hint parameter. Not a durable identifier for the user and shouldn't be used for authorization or to uniquely identify user information (for example, as a database key). For more information, see [Validate the user has permission to access this data](access-tokens.md). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) shouldn't be shown their User Principal Name (UPN). Instead, use the following `preferred_username` claim for displaying sign-in state to the user. | See [additional properties](#additional-properties-of-optional-claims) below for configuration of the claim. Requires the `profile` scope.|
## v1.0-specific optional claims set
-Some of the improvements of the v2 token format are available to apps that use the v1 token format, as they help improve security and reliability. These will not take effect for ID tokens requested from the v2 endpoint, nor access tokens for APIs that use the v2 token format. These only apply to JWTs, not SAML tokens.
+Some of the improvements of the v2 token format are available to apps that use the v1 token format, as they help improve security and reliability. These improvements won't take effect for ID tokens requested from the v2 endpoint, nor access tokens for APIs that use the v2 token format. These improvements only apply to JWTs, not SAML tokens.
**Table 4: v1.0-only optional claims**

| JWT Claim | Name | Description | Notes |
|-----------|------|-------------|-------|
-|`aud` | Audience | Always present in JWTs, but in v1 access tokens it can be emitted in a variety of ways - any appID URI, with or without a trailing slash, as well as the client ID of the resource. This randomization can be hard to code against when performing token validation. Use the [additional properties for this claim](#additional-properties-of-optional-claims) to ensure it's always set to the resource's client ID in v1 access tokens. | v1 JWT access tokens only|
-|`preferred_username` | Preferred username | Provides the preferred username claim within v1 tokens. This makes it easier for apps to provide username hints and show human readable display names, regardless of their token type. It's recommended that you use this optional claim instead of using e.g. `upn` or `unique_name`. | v1 ID tokens and access tokens |
+|`aud` | Audience | Always present in JWTs, but in v1 access tokens it can be emitted in various ways - any appID URI, with or without a trailing slash, and the client ID of the resource. This randomization can be hard to code against when performing token validation. Use the [additional properties for this claim](#additional-properties-of-optional-claims) to ensure it's always set to the resource's client ID in v1 access tokens. | v1 JWT access tokens only|
+|`preferred_username` | Preferred username | Provides the preferred username claim within v1 tokens. This claim makes it easier for apps to provide username hints and show human readable display names, regardless of their token type. It's recommended that you use this optional claim instead of using, for example, `upn` or `unique_name`. | v1 ID tokens and access tokens |
### Additional properties of optional claims
-Some optional claims can be configured to change the way the claim is returned. These additional properties are mostly used to help migration of on-premises applications with different data expectations. For example, `include_externally_authenticated_upn_without_hash` helps with clients that cannot handle hash marks (`#`) in the UPN.
+Some optional claims can be configured to change the way the claim is returned. These additional properties are mostly used to help migration of on-premises applications with different data expectations. For example, `include_externally_authenticated_upn_without_hash` helps with clients that can't handle hash marks (`#`) in the UPN.
**Table 4: Values for configuring optional claims**
Some optional claims can be configured to change the way the claim is returned.
| `upn` | | Can be used for both SAML and JWT responses, and for v1.0 and v2.0 tokens. |
| | `include_externally_authenticated_upn` | Includes the guest UPN as stored in the resource tenant. For example, `foo_hometenant.com#EXT#@resourcetenant.com` |
| | `include_externally_authenticated_upn_without_hash` | Same as above, except that the hash marks (`#`) are replaced with underscores (`_`), for example `foo_hometenant.com_EXT_@resourcetenant.com`|
-| `aud` | | In v1 access tokens, this is used to change the format of the `aud` claim. This has no effect in v2 tokens or either version's ID tokens, where the `aud` claim is always the client ID. Use this configuration to ensure that your API can more easily perform audience validation. Like all optional claims that affect the access token, the resource in the request must set this optional claim, since resources own the access token.|
-| | `use_guid` | Emits the client ID of the resource (API) in GUID format as the `aud` claim always instead of it being runtime dependent. For example, if a resource sets this flag, and its client ID is `bb0a297b-6a42-4a55-ac40-09a501456577`, any app that requests an access token for that resource will receive an access token with `aud` : `bb0a297b-6a42-4a55-ac40-09a501456577`. </br></br> Without this claim set, an API could get tokens with an `aud` claim of `api://MyApi.com`, `api://MyApi.com/`, `api://myapi.com/AdditionalRegisteredField` or any other value set as an app ID URI for that API, as well as the client ID of the resource. |
+| `aud` | | In v1 access tokens, this claim is used to change the format of the `aud` claim. This claim has no effect in v2 tokens or either version's ID tokens, where the `aud` claim is always the client ID. Use this configuration to ensure that your API can more easily perform audience validation. Like all optional claims that affect the access token, the resource in the request must set this optional claim, since resources own the access token.|
+| | `use_guid` | Emits the client ID of the resource (API) in GUID format as the `aud` claim always instead of it being runtime dependent. For example, if a resource sets this flag, and its client ID is `bb0a297b-6a42-4a55-ac40-09a501456577`, any app that requests an access token for that resource will receive an access token with `aud` : `bb0a297b-6a42-4a55-ac40-09a501456577`. </br></br> Without this claim set, an API could get tokens with an `aud` claim of `api://MyApi.com`, `api://MyApi.com/`, `api://myapi.com/AdditionalRegisteredField` or any other value set as an app ID URI for that API, and the client ID of the resource. |
#### Additional properties example
You can configure optional claims for your application through the UI or applica
[![Configure optional claims in the UI](./media/active-directory-optional-claims/token-configuration.png)](./media/active-directory-optional-claims/token-configuration.png)

1. Under **Manage**, select **Token configuration**.
- - The UI option **Token configuration** blade is not available for apps registered in an Azure AD B2C tenant which can be configured by modifying the application manifest. For more information see [Add claims and customize user input using custom policies in Azure Active Directory B2C](../../active-directory-b2c/configure-user-input.md)
+ - The UI option **Token configuration** blade isn't available for apps registered in an Azure AD B2C tenant, which can be configured by modifying the application manifest. For more information, see [Add claims and customize user input using custom policies in Azure Active Directory B2C](../../active-directory-b2c/configure-user-input.md)
1. Select **Add optional claim**.
1. Select the token type you want to configure.
If supported by a specific claim, you can also modify the behavior of the Option
## Configuring directory extension optional claims
-In addition to the standard optional claims set, you can also configure tokens to include extensions. For more info, see [the Microsoft Graph extensionProperty documentation](/graph/api/resources/extensionproperty).
+In addition to the standard optional claims set, you can also configure tokens to include Microsoft Graph extensions. For more info, see [Add custom data to resources using extensions](/graph/extensibility-overview).
-Schema and open extensions are not supported by optional claims, only the AAD-Graph style directory extensions. This feature is useful for attaching additional user information that your app can use ΓÇô for example, an additional identifier or important configuration option that the user has set. See the bottom of this page for an example.
+Schema and open extensions aren't supported by optional claims, only extension attributes and directory extensions. This feature is useful for attaching additional user information that your app can use ΓÇô for example, an additional identifier or important configuration option that the user has set. See the bottom of this page for an example.
-Directory schema extensions are an Azure AD-only feature. If your application manifest requests a custom extension and an MSA user logs in to your app, these extensions will not be returned.
+Directory extensions are an Azure AD-only feature. If your application manifest requests a custom extension and an MSA user logs in to your app, these extensions won't be returned.
### Directory extension formatting
-When configuring directory extension optional claims using the application manifest, use the full name of the extension (in the format: `extension_<appid>_<attributename>`). The `<appid>` must match the ID of the application requesting the claim.
+When configuring directory extension optional claims using the application manifest, use the full name of the extension (in the format: `extension_<appid>_<attributename>`). The `<appid>` is the stripped version of the **appId** (or Client ID) of the application requesting the claim.
Within the JWT, these claims will be emitted with the following name format: `extn.<attributename>`.
Within the SAML tokens, these claims will be emitted with the following URI form
This section covers the configuration options under optional claims for changing the group attributes used in group claims from the default group objectID to attributes synced from on-premises Windows Active Directory. You can configure groups optional claims for your application through the UI or application manifest.

> [!IMPORTANT]
-> Azure AD limits the number of groups emitted in a token to 150 for SAML assertions and 200 for JWT, including nested groups. For more details on group limits and important caveats for group claims from on-premises attributes, see [Configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
+> Azure AD limits the number of groups emitted in a token to 150 for SAML assertions and 200 for JWT, including nested groups. For more information on group limits and important caveats for group claims from on-premises attributes, see [Configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
**Configuring groups optional claims through the UI:**
This section covers the configuration options under optional claims for changing
Some applications require group information about the user in the role claim. To change the claim type from a group claim to a role claim, add "emit_as_roles" to additional properties. The group values will be emitted in the role claim.
- If "emit_as_roles" is used, any application roles configured that the user is assigned will not appear in the role claim.
+ If "emit_as_roles" is used, any application roles configured that the user is assigned won't appear in the role claim.
**Examples:**
There are multiple options available for updating the properties on an applicati
**Example:**
-In the example below, you will use the **Token configuration** UI and **Manifest** to add optional claims to the access, ID, and SAML tokens intended for your application. Different optional claims will be added to each type of token that the application can receive:
+In the example below, you'll use the **Token configuration** UI and **Manifest** to add optional claims to the access, ID, and SAML tokens intended for your application. Different optional claims will be added to each type of token that the application can receive:
- The ID tokens will now contain the UPN for federated users in the full form (`<upn>_<homedomain>#EXT#@<resourcedomain>`).
- The access tokens that other clients request for this application will now include the auth_time claim.
-- The SAML tokens will now contain the skypeId directory schema extension (in this example, the app ID for this app is ab603c56068041afb2f6832e2a17e237). The SAML tokens will expose the Skype ID as `extension_skypeId`.
+- The SAML tokens will now contain the skypeId directory schema extension (in this example, the app ID for this app is ab603c56068041afb2f6832e2a17e237). The SAML tokens will expose the Skype ID as `extension_ab603c56068041afb2f6832e2a17e237_skypeId`.
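For reference, a hedged sketch of the manifest's `optionalClaims` section for this example might look like the following (property layout approximated from the scenario above; confirm against your own application manifest before relying on it):

```json
"optionalClaims": {
    "idToken": [
        {
            "name": "upn",
            "essential": false,
            "additionalProperties": [
                "include_externally_authenticated_upn"
            ]
        }
    ],
    "accessToken": [
        {
            "name": "auth_time",
            "essential": false
        }
    ],
    "saml2Token": [
        {
            "name": "extension_ab603c56068041afb2f6832e2a17e237_skypeId",
            "source": "user",
            "essential": false
        }
    ]
}
```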
**UI configuration:**
active-directory Active Directory Schema Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-schema-extensions.md
Title: Use Azure AD schema extension attributes in claims
-description: Describes how to use directory schema extension attributes for sending user data to applications in token claims.
+ Title: Use Azure AD directory extension attributes in claims
+description: Describes how to use directory extension attributes for sending user data to applications in token claims.
Last updated 07/29/2020
-# Using directory schema extension attributes in claims
+# Using directory extension attributes in claims
-Directory schema extension attributes provide a way to store additional data in Azure Active Directory on user objects and other directory objects such as groups, tenant details, service principals. Only extension attributes on user objects can be used for emitting claims to applications. This article describes how to use directory schema extension attributes for sending user data to applications in token claims.
+Directory extension attributes, also called Azure AD extensions, provide a way to store additional data in Azure Active Directory on user objects and other directory objects such as groups, tenant details, and service principals. Only extension attributes on user objects can be used for emitting claims to applications. This article describes how to use directory extension attributes for sending user data to applications in token claims.
> [!NOTE]
-> Microsoft Graph provides two other extension mechanisms to customize Graph objects. These are known as Microsoft Graph open extensions and Microsoft Graph schema extensions. See the [Microsoft Graph documentation](/graph/extensibility-overview) for details. Data stored on Microsoft Graph objects using these capabilities are not available as sources for claims in tokens.
+> Microsoft Graph provides three other extension mechanisms to customize Graph objects: the extension attributes 1-15, open extensions, and schema extensions. See the [Microsoft Graph documentation](/graph/extensibility-overview) for details. Data stored on Microsoft Graph objects using open and schema extensions isn't available as a source for claims in tokens.
-Directory schema extension attributes are always associated with an application in the tenant and are referenced by the application's *applicationId* in their name.
+Directory extension attributes are always associated with an application in the tenant and are referenced by the application's *appId* in their name.
-The identifier for a directory schema extension attribute is of the form *Extension_xxxxxxxxx_AttributeName*. Where *xxxxxxxxx* is the *applicationId* of the application the extension was defined for.
+The identifier for a directory extension attribute is of the form *extension_xxxxxxxxx_AttributeName*, where *xxxxxxxxx* is the *appId* of the application the extension was defined for, with only characters 0-9 and A-Z (that is, the GUID with the hyphens removed).
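As a hedged illustration of that naming rule (using a placeholder appId of `ab603c56-0680-41af-b2f6-832e2a17e237` and an attribute named `skypeId`), the extension attribute object might look roughly like this when read back through Microsoft Graph:

```json
{
  "id": "11112222-bbbb-3333-cccc-4444dddd5555",
  "name": "extension_ab603c56068041afb2f6832e2a17e237_skypeId",
  "dataType": "String",
  "targetObjects": [
    "User"
  ]
}
```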
-## Registering and using directory schema extensions
-Directory schema extension attributes can be registered and populated in one of two ways:
+## Registering and using directory extensions
+Directory extension attributes can be registered and populated in one of two ways:
-- By configuring AD Connect to create them and to sync data into them from on premises AD. See [Azure AD Connect Sync Directory Extensions](../hybrid/how-to-connect-sync-feature-directory-extensions.md).
-- By using Microsoft Graph to register, set the values of, and read from [schema extensions](/graph/extensibility-overview). [PowerShell cmdlets](/powershell/azure/active-directory/using-extension-attributes-sample) are also available.
+- By configuring AD Connect to create them and to sync data into them from on-premises AD. See [Azure AD Connect Sync Directory Extensions](../hybrid/how-to-connect-sync-feature-directory-extensions.md).
+- By using Microsoft Graph to register, set the values of, and read from [directory extensions](/graph/extensibility-overview#directory-azure-ad-extensions). [PowerShell cmdlets](/powershell/azure/active-directory/using-extension-attributes-sample) are also available.
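As a hedged sketch of the Microsoft Graph approach, a new directory extension can be registered by sending a POST request to `https://graph.microsoft.com/v1.0/applications/{application-object-id}/extensionProperties` with a body along these lines; the attribute name is a placeholder:

```json
{
  "name": "skypeId",
  "dataType": "String",
  "targetObjects": [
    "User"
  ]
}
```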
-### Emitting claims with data from directory schema extension attributes created with AD Connect
-Directory schema extension attributes created and synced using AD Connect are always associated with the application ID used by AD Connect. They can be used as a source for claims both by configuring them as claims in the **Enterprise Applications** configuration in the Portal UI for SAML applications registered using the Gallery or the non-Gallery application configuration experience under **Enterprise Applications**, and via a claims-mapping policy for applications registered via the Application registration experience. Once a directory extension attribute created via AD Connect is in the directory, it will show in the SAML SSO claims configuration UI.
+### Emitting claims with data from directory extension attributes created with AD Connect
+Directory extension attributes created and synced using AD Connect are always associated with the application ID used by AD Connect. They can be used as a source for claims in two ways: by configuring them as claims in the **Enterprise Applications** configuration in the portal UI (for SAML applications registered through the Gallery or the non-Gallery experience under **Enterprise Applications**), or through a claims-mapping policy (for applications registered through the App registrations experience). Once a directory extension attribute created via AD Connect is in the directory, it will show in the SAML SSO claims configuration UI.
-### Emitting claims with data from directory schema extension attributes created for an application using Graph or PowerShell
-If a directory schema extension attribute is registered for an application using Microsoft Graph or PowerShell (via an applications initial setup or provisioning step for instance), the same application can be configured in Azure Active Directory to receive data in that attribute from a user object in a claim when the user signs in. The application can be configured to receive data in directory schema extensions that are registered on that same application using [optional claims](active-directory-optional-claims.md#configuring-directory-extension-optional-claims). These can be set in the application manifest. This enables a multi-tenant application to register directory schema extension attributes for its own use. When the application is provisioned into a tenant the associated directory schema extensions become available to be set on users in that tenant, and to be consumed. Once it's configured in the tenant and consent granted, it can be used to store and retrieve data via graph and to map to claims in tokens the Microsoft identity platform emits to applications.
+### Emitting claims with data from directory extension attributes created for an application using Graph or PowerShell
+If a directory extension attribute is registered for an application using Microsoft Graph or PowerShell (via an application's initial setup or provisioning step, for instance), the same application can be configured in Azure Active Directory to receive data in that attribute from a user object in a claim when the user signs in. The application can be configured to receive data in directory extensions that are registered on that same application using [optional claims](active-directory-optional-claims.md#configuring-directory-extension-optional-claims). These can be set in the application manifest. This enables a multi-tenant application to register directory extension attributes for its own use. When the application is provisioned into a tenant, the associated directory extensions become available to be set on users in that tenant and to be consumed. Once it's configured in the tenant and consent is granted, it can be used to store and retrieve data via Microsoft Graph and to map to claims in tokens that the Microsoft identity platform emits to applications.
-Directory schema extension attributes can be registered and populated for any application.
+Directory extension attributes can be registered and populated for any application.
-If an application needs to send claims with data from an extension attribute registered on a different application, a [claims mapping policy](active-directory-claims-mapping.md) must be used to map the extension attribute to the claim. A common pattern for managing directory schema extension attributes is to create an application specifically to be the point of registration for all the schema extensions you need. It doesn't have to be a real application and this technique means that all the extensions have the same application ID in their name.
+If an application needs to send claims with data from an extension attribute registered on a different application, a [claims mapping policy](active-directory-claims-mapping.md) must be used to map the extension attribute to the claim. A common pattern for managing directory extension attributes is to create an application specifically to be the point of registration for all the directory extensions you need. It doesn't have to be a real application, and this technique means that all the extensions have the same appId in their name.
-For example, here is a claims-mapping policy to emit a single claim from a directory schema extension attribute in an OAuth/OIDC token:
+For example, here is a claims-mapping policy to emit a single claim from a directory extension attribute in an OAuth/OIDC token:
```json {
For example, here is a claims-mapping policy to emit a single claim from a direc
} ```
-Where *xxxxxxx* is the application ID the extension was registered with.
+Here, *xxxxxxx* is the appId (or Client ID) of the application that the extension was registered with, with the hyphens removed.
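As a hedged reconstruction (the policy body isn't shown in full here), such a claims-mapping policy definition might look like the following; the extension name and emitted claim type are placeholders:

```json
{
  "ClaimsMappingPolicy": {
    "Version": 1,
    "IncludeBasicClaimSet": "true",
    "ClaimsSchema": [
      {
        "Source": "user",
        "ExtensionID": "extension_ab603c56068041afb2f6832e2a17e237_skypeId",
        "JwtClaimType": "skypeId"
      }
    ]
  }
}
```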
> [!TIP]
-> Case consistency is important when setting directory extension attributes on objects. Extension attribute names aren't cases sensitive when being set up, but they are case sensitive when being read from the directory by the token service. If an extension attribute is set on a user object with the name "LegacyId" and on another user object with the name "legacyid", when the attribute is mapped to a claim using the name "LegacyId" the data will be successfully retrieved and the claim included in the token for the first user but not the second.
+> Case consistency is important when setting directory extension attributes on objects. Extension attribute names aren't case sensitive when being set up, but they are case sensitive when being read from the directory by the token service. If an extension attribute is set on a user object with the name "LegacyId" and on another user object with the name "legacyid", when the attribute is mapped to a claim using the name "LegacyId", the data will be successfully retrieved and the claim included in the token for the first user but not the second.
> > The "Id" parameter in the claims schema used for built-in directory attributes is "ExtensionID" for directory extension attributes. ## Next steps - Learn how to [add custom or additional claims to the SAML 2.0 and JSON Web Tokens (JWT) tokens](active-directory-optional-claims.md).-- Learn how to [customize claims emitted in tokens for a specific app](active-directory-claims-mapping.md).
+- Learn how to [customize claims emitted in tokens for a specific app](active-directory-claims-mapping.md).
active-directory Howto Create Self Signed Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-self-signed-certificate.md
Title: Create a self-signed public certificate to authenticate your application description: Create a self-signed public certificate to authenticate your application. -+
Last updated 08/10/2021-+ #Customer intent: As an application developer, I want to understand the basic concepts of authentication and authorization in the Microsoft identity platform.
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
Previously updated : 06/13/2022 Last updated : 08/10/2022
The `error` field has several possible values - review the protocol documentatio
| AADSTS50146 | MissingCustomSigningKey - This app is required to be configured with an app-specific signing key. It is either not configured with one, or the key has expired or isn't yet valid. |
| AADSTS50147 | MissingCodeChallenge - The size of the code challenge parameter isn't valid. |
| AADSTS501481 | The Code_Verifier doesn't match the code_challenge supplied in the authorization request.|
+| AADSTS501491 | InvalidCodeChallengeMethodInvalidSize - Invalid size of Code_Challenge parameter.|
| AADSTS50155 | DeviceAuthenticationFailed - Device authentication failed for this user. |
| AADSTS50158 | ExternalSecurityChallenge - External security challenge was not satisfied. |
| AADSTS50161 | InvalidExternalSecurityChallengeConfiguration - Claims sent by external provider isn't enough or Missing claim requested to external provider. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS70000 | InvalidGrant - Authentication failed. The refresh token isn't valid. Error may be due to the following reasons:<ul><li>Token binding header is empty</li><li>Token binding hash does not match</li></ul> |
| AADSTS70001 | UnauthorizedClient - The application is disabled. To learn more, see the troubleshooting article for error [AADSTS70001](/troubleshoot/azure/active-directory/error-code-aadsts70001-app-not-found-in-directory). |
| AADSTS70002 | InvalidClient - Error validating the credentials. The specified client_secret does not match the expected value for this client. Correct the client_secret and try again. For more info, see [Use the authorization code to request an access token](v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token). |
+| AADSTS700025 | InvalidClientPublicClientWithCredential - Client is public so neither 'client_assertion' nor 'client_secret' should be presented. |
| AADSTS70003 | UnsupportedGrantType - The app returned an unsupported grant type. |
| AADSTS700030 | Invalid certificate - subject name in certificate isn't authorized. SubjectNames/SubjectAlternativeNames (up to 10) in token certificate are: {certificateSubjects}. |
| AADSTS70004 | InvalidRedirectUri - The app returned an invalid redirect URI. The redirect address specified by the client does not match any configured addresses or any addresses on the OIDC approve list. |
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
For each claim schema entry defined in this property, certain information is req
**Source/ID pair:** The Source and ID elements define where the data in the claim is sourced from.
-**Source/ExtensionID pair:** The Source and ExtensionID elements define the directory schema extension attribute where the data in the claim is sourced from. For more information, see [Using directory schema extension attributes in claims](active-directory-schema-extensions.md).
+**Source/ExtensionID pair:** The Source and ExtensionID elements define the directory extension attribute where the data in the claim is sourced from. For more information, see [Using directory extension attributes in claims](active-directory-schema-extensions.md).
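As a hedged illustration of the difference between the two pairs, a ClaimsSchema array might combine an ID-based entry and an ExtensionID-based entry like this; the extension name and claim types are placeholders:

```json
"ClaimsSchema": [
  { "Source": "user", "ID": "mail", "JwtClaimType": "email" },
  { "Source": "user", "ExtensionID": "extension_ab603c56068041afb2f6832e2a17e237_skypeId", "JwtClaimType": "skypeId" }
]
```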
Set the Source element to one of the following values:
The ID element identifies which property on the source provides the value for th
| User | lastpasswordchangedatetime | Last Password Change Date/Time |
| User | mobilephone | Mobile Phone |
| User | officelocation | Office Location |
-| User | onpremisesdomainname | On-Premises Domain Name |
-| User | onpremisesimmutableid | On-Premises Imutable ID |
-| User | onpremisessyncenabled | On-Premises Sync Enabled |
-| User | preferreddatalocation | Preffered Data Location |
+| User | onpremisesdomainname | On-premises Domain Name |
+| User | onpremisesimmutableid | On-premises Immutable ID |
+| User | onpremisessyncenabled | On-premises Sync Enabled |
+| User | preferreddatalocation | Preferred Data Location |
| User | proxyaddresses | Proxy Addresses |
| User | usertype | User Type |
| User | telephonenumber | Business Phones / Office Phones |
Based on the method chosen, a set of inputs and outputs is expected. Define the
|TransformationMethod|Expected input|Expected output|Description|
|--|--|--|--|
|Join|string1, string2, separator|outputClaim|Joins input strings by using a separator in between. For example: string1:"foo@bar.com" , string2:"sandbox" , separator:"." results in outputClaim:"foo@bar.com.sandbox"|
-|ExtractMailPrefix|Email or UPN|extracted string|ExtensionAttributes 1-15 or any other Schema Extensions, which are storing a UPN or email address value for the user for example, johndoe@contoso.com. Extracts the local part of an email address. For example: mail:"foo@bar.com" results in outputClaim:"foo". If no \@ sign is present, then the original input string is returned as is.|
+|ExtractMailPrefix|Email or UPN|extracted string|ExtensionAttributes 1-15 or any other directory extensions that store a UPN or email address value for the user, for example, johndoe@contoso.com. Extracts the local part of an email address. For example: mail:"foo@bar.com" results in outputClaim:"foo". If no \@ sign is present, then the original input string is returned as is.|
**InputClaims:** Use an InputClaims element to pass the data from a claim schema entry to a transformation. It has three attributes: **ClaimTypeReferenceId**, **TransformationClaimType** and **TreatAsMultiValue** (Preview)
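To show how these elements fit together, here's a hedged sketch (not taken verbatim from this reference) of a Join transformation wired up through InputClaims, InputParameters, and OutputClaims; all IDs and values are placeholders:

```json
"ClaimsSchema": [
  { "Source": "user", "ID": "extensionattribute1" },
  { "Source": "transformation", "ID": "DataJoin", "TransformationId": "JoinTheData", "JwtClaimType": "JoinedData" }
],
"ClaimsTransformation": [
  {
    "ID": "JoinTheData",
    "TransformationMethod": "Join",
    "InputClaims": [
      { "ClaimTypeReferenceId": "extensionattribute1", "TransformationClaimType": "string1" }
    ],
    "InputParameters": [
      { "ID": "string2", "Value": "sandbox" },
      { "ID": "separator", "Value": "." }
    ],
    "OutputClaims": [
      { "ClaimTypeReferenceId": "DataJoin", "TransformationClaimType": "outputClaim" }
    ]
  }
]
```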
Based on the method chosen, a set of inputs and outputs is expected. Define the
- To learn how to customize the claims emitted in tokens for a specific application in their tenant using PowerShell, see [How to: Customize claims emitted in tokens for a specific app in a tenant](active-directory-claims-mapping.md)
- To learn how to customize claims issued in the SAML token through the Azure portal, see [How to: Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)
-- To learn more about extension attributes, see [Using directory schema extension attributes in claims](active-directory-schema-extensions.md).
+- To learn more about extension attributes, see [Using directory extension attributes in claims](active-directory-schema-extensions.md).
active-directory Tutorial V2 Angular Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-angular-auth-code.md
Register your **Redirect URI** value as **http://localhost:4200/** and type as '
```javascript
import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
+ import { BrowserUtils } from '@azure/msal-browser';
import { HomeComponent } from './home/home.component';
import { ProfileComponent } from './profile/profile.component';
Register your **Redirect URI** value as **http://localhost:4200/** and type as '
@NgModule({
  imports: [RouterModule.forRoot(routes, {
- initialNavigation: !isIframe ? 'enabled' : 'disabled' // Don't perform initial navigation in iframes
+ // Don't perform initial navigation in iframes or popups
+ initialNavigation: !BrowserUtils.isInIframe() && !BrowserUtils.isInPopup() ? 'enabledNonBlocking' : 'disabled' // Set to enabledBlocking to use Angular Universal
  })],
  exports: [RouterModule]
})
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
device.objectId -ne null
## Extension properties and custom extension properties
-Extension attributes and custom extension properties are supported as string properties in dynamic membership rules. [Extension attributes](/graph/api/resources/onpremisesextensionattributes) are synced from on-premises Window Server Active Directory and take the format of "ExtensionAttributeX", where X equals 1 - 15. Here's an example of a rule that uses an extension attribute as a property:
+Extension attributes and custom extension properties are supported as string properties in dynamic membership rules. [Extension attributes](/graph/api/resources/onpremisesextensionattributes) can be synced from on-premises Windows Server Active Directory or updated using Microsoft Graph and take the format of "ExtensionAttributeX", where X equals 1 - 15. Here's an example of a rule that uses an extension attribute as a property:
```
(user.extensionAttribute15 -eq "Marketing")
```
-[Custom extension properties](../hybrid/how-to-connect-sync-feature-directory-extensions.md) are synced from on-premises Windows Server Active Directory or from a connected SaaS application and are of the format of `user.extension_[GUID]_[Attribute]`, where:
+[Custom extension properties](../hybrid/how-to-connect-sync-feature-directory-extensions.md) can be synced from on-premises Windows Server Active Directory, from a connected SaaS application, or created using Microsoft Graph, and are in the format `user.extension_[GUID]_[Attribute]`, where:
-- [GUID] is the unique identifier in Azure AD for the application that created the property in Azure AD
+- [GUID] is the stripped version of the unique identifier in Azure AD for the application that created the property. It contains only characters 0-9 and A-Z
- [Attribute] is the name of the property as it was created

An example of a rule that uses a custom extension property is:
An example of a rule that uses a custom extension property is:
user.extension_c272a57b722d4eb29bfe327874ae79cb_OfficeNumber -eq "123"
```
+Custom extension properties are also called directory or Azure AD extension properties.
+ The custom property name can be found in the directory by querying a user's property using Graph Explorer and searching for the property name. Also, you can now select the **Get custom extension properties** link in the dynamic user group rule builder to enter a unique app ID and receive the full list of custom extension properties to use when creating a dynamic membership rule. This list can also be refreshed to get any new custom extension properties for that app. Extension attributes and custom extension properties must be from applications in your tenant. For more information, see [Use the attributes in dynamic groups](../hybrid/how-to-connect-sync-feature-directory-extensions.md#use-the-attributes-in-dynamic-groups) in the article [Azure AD Connect sync: Directory extensions](../hybrid/how-to-connect-sync-feature-directory-extensions.md).
active-directory B2b Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-fundamentals.md
Previously updated : 03/31/2022- Last updated : 08/10/2022
This article contains recommendations and best practices for business-to-business (B2B) collaboration in Azure Active Directory (Azure AD). > [!IMPORTANT]
-> We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
+> The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. Learn more about [configuring email one-time passcode](one-time-passcode.md) and [plans for other fallback authentication methods](one-time-passcode.md#disable-email-one-time-passcode), such as unmanaged ("viral") accounts and Microsoft accounts.
## B2B recommendations
active-directory One Time Passcode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/one-time-passcode.md
Previously updated : 04/26/2022- Last updated : 08/10/2022
You can enable this feature at any time in the Azure portal by configuring the E
> [!IMPORTANT] >
-> - We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
+> - The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. This feature provides a seamless fallback authentication method for your guest users. If you don't want to use this feature, you can [disable it](#disable-email-one-time-passcode), in which case users will redeem invitations using unmanaged ("viral") Azure AD accounts as a fallback. Soon, we'll stop creating new unmanaged accounts and tenants during invitation redemption, and we'll enforce redemption with a Microsoft account instead.
> - Email one-time passcode settings have moved in the Azure portal from **External collaboration settings** to **All identity providers**.

> [!NOTE]
> One-time passcode users must sign in using a link that includes the tenant context (for example, `https://myapps.microsoft.com/?tenantid=<tenant id>` or `https://portal.azure.com/<tenant id>`, or in the case of a verified domain, `https://myapps.microsoft.com/<verified domain>.onmicrosoft.com`). Direct links to applications and resources also work as long as they include the tenant context. Guest users are currently unable to sign in using endpoints that have no tenant context. For example, using `https://myapps.microsoft.com` or `https://portal.azure.com` will result in an error.
Guest user teri@gmail.com is invited to Fabrikam, which doesn't have Google fede
## Disable email one-time passcode
-We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can disable it. Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
+The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. This feature provides a seamless fallback authentication method for your guest users. If you don't want to use this feature, you can disable it, in which case users will redeem invitations using unmanaged ("viral") Azure AD accounts as a fallback. Soon, we'll stop creating new unmanaged accounts and tenants during invitation redemption, and we'll enforce redemption with a Microsoft account instead.
> [!NOTE] >
For more information about current limitations, see [Azure AD B2B in government
## Frequently asked questions
-**Why do I still see "Automatically enable email one-time passcode for guests starting October 2021" selected in my email one-time passcode settings?**
-
-We've begun globally rolling out the change to enable email one-time passcode. In the meantime, you might still see "Automatically enable email one-time passcode for guests starting October 2021" selected in your email one-time passcode settings.
- **What happens to my existing guest users if I enable email one-time passcode?** Your existing guest users won't be affected if you enable email one-time passcode, as your existing users are already past the point of redemption. Enabling email one-time passcode will only affect future redemption activities where new guest users are redeeming into the tenant.
-**What is the user experience for guests during global rollout?**
-
-The user experience depends on your current email one-time passcode settings, whether the user already has an unmanaged account, and whether you [reset a user's redemption status](reset-redemption-status.md). The following table describes these scenarios.
-
-|User scenario |With email one-time passcode enabled prior to rollout |With email one-time passcode disabled prior to rollout |
-||||
-|**User has an existing unmanaged Azure AD account (not from redemption in your tenant)** |Both before and after rollout, the user redeems invitations using email one-time passcode. |Both before and after rollout, the user continues signing in with their unmanaged account.<sup>1</sup> |
-|**User previously redeemed an invitation to your tenant using an unmanaged Azure AD account** |Both before and after rollout, the user continues to use their unmanaged account. Or, you can [reset their redemption status](reset-redemption-status.md) so they can redeem a new invitation using email one-time passcode. |Both before and after rollout, the user continues to use their unmanaged account, even if you reset their redemption status and reinvite them.<sup>1</sup> |
-|**User with no unmanaged Azure AD account** |Both before and after rollout, the user redeems invitations using email one-time passcode. |Both before and after rollout, the user redeems invitations using an unmanaged account.<sup>2</sup> |
+**What is the user experience when email one-time passcode is disabled?**
-<sup>1</sup> In a separate release, we'll roll out a change that will enforce redemption with a Microsoft account. To prevent your users from having to manage both an unmanaged Azure AD account and an MSA, we strongly encourage you to enable email one-time passcode.
+If you've disabled the email one-time passcode feature, the user redeems invitations using an unmanaged ("viral") account as a fallback. In a separate release, we'll stop creating new, unmanaged Azure AD accounts and tenants during B2B collaboration invitation redemption and will enforce redemption with a Microsoft account.
-<sup>2</sup> The user might see a sign-in error when they're redeeming a direct application link and they weren't added to your directory in advance. In a separate release, we'll roll out a change that will enforce redemption and future sign-ins with a Microsoft account.
+Also, when email one-time passcode is disabled, users might see a sign-in error when they're redeeming a direct application link and they weren't added to your directory in advance.
For more information about the different redemption pathways, see [B2B collaboration invitation redemption](redemption-experience.md).
-**Does this mean the "No account? Create one!" option for self-service sign-up is going away?**
+**Will the "No account? Create one!" option for self-service sign-up go away?**
-It's easy to get [self-service sign-up in the context of External Identities](self-service-sign-up-overview.md) confused with self-service sign-up for email-verified users, but they're two different features. The feature that's going away is [self-service sign-up with email-verified users](../enterprise-users/directory-self-service-signup.md), which results in your guests creating an unmanaged Azure AD account. However, self-service sign-up for External Identities will continue to be available, which results in your guests signing up to your organization with a [variety of identity providers](identity-providers.md).
+No. It's easy to get [self-service sign-up in the context of External Identities](self-service-sign-up-overview.md) confused with self-service sign-up for email-verified users, but they're two different features. The unmanaged ("viral") feature that's going away is [self-service sign-up with email-verified users](../enterprise-users/directory-self-service-signup.md), which results in your guests creating an unmanaged Azure AD account. However, self-service sign-up for External Identities will continue to be available, which results in your guests signing up to your organization with a [variety of identity providers](identity-providers.md).
**What does Microsoft recommend we do with existing Microsoft accounts (MSA)?**

When we support the ability to disable Microsoft Account in the Identity providers settings (not available today), we strongly recommend you disable Microsoft Account and enable email one-time passcode. Then you should [reset the redemption status](reset-redemption-status.md) of existing guests with Microsoft accounts so that they can re-redeem using email one-time passcode authentication and use email one-time passcode to sign in going forward.
-**Does this change include SharePoint and OneDrive integration with Azure AD B2B?**
+**Regarding the change to enable email one-time passcode by default, does this include SharePoint and OneDrive integration with Azure AD B2B?**
No, the global rollout of the change to enable email one-time passcode by default doesn't include enabling SharePoint and OneDrive integration with Azure AD B2B. To learn how to enable integration so that collaboration on SharePoint and OneDrive uses B2B capabilities, or how to disable this integration, see [SharePoint and OneDrive Integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration).
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
Previously updated : 04/07/2022- Last updated : 08/10/2022
When you add a guest user to your directory, the guest user account has a consen
> [!IMPORTANT]
> - **Starting July 12, 2021**, if Azure AD B2B customers set up new Google integrations for use with self-service sign-up for their custom or line-of-business applications, authentication with Google identities won't work until authentications are moved to system web-views. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
> - **Starting September 30, 2021**, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If your apps authenticate users with an embedded web-view and you're using Google federation with [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md) or Azure AD B2B for [external user invitations](google-federation.md) or [self-service sign-up](identity-providers.md), Google Gmail users won't be able to authenticate. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
-> - We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
+> - The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. Learn more about [configuring email one-time passcode](one-time-passcode.md) and [plans for other fallback authentication methods](one-time-passcode.md#disable-email-one-time-passcode), such as unmanaged ("viral") accounts and Microsoft accounts.
## Redemption and sign-in through a common endpoint
When a user clicks the **Accept invitation** link in an [invitation email](invit
![Screenshot showing the redemption flow diagram](media/redemption-experience/invitation-redemption-flow.png)
-**If the user's User Principal Name (UPN) matches with both an existing Azure AD and personal MSA account, the user will be prompted to choose which account they want to redeem with. If Email OTP is enabled, existing unmanaged "viral" Azure AD accounts will be ignored (See step #9).*
+**If the user's User Principal Name (UPN) matches with both an existing Azure AD and personal Microsoft account, the user is prompted to choose which account they want to redeem with. If email one-time passcode is enabled, existing unmanaged ("viral") Azure AD accounts will be ignored (See step #9).*
1. Azure AD performs user-based discovery to determine if the user exists in an [existing Azure AD tenant](./what-is-b2b.md#easily-invite-guest-users-from-the-azure-ad-portal).
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
Previously updated : 05/17/2022 Last updated : 08/10/2022 tags: active-directory
Here are some remedies for common problems with Azure Active Directory (Azure AD
>
> - **Starting July 12, 2021**, if Azure AD B2B customers set up new Google integrations for use with self-service sign-up for their custom or line-of-business applications, authentication with Google identities won't work until authentications are moved to system web-views. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
> - **Starting September 30, 2021**, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If your apps authenticate users with an embedded web-view and you're using Google federation with [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md) or Azure AD B2B for [external user invitations](google-federation.md) or [self-service sign-up](identity-providers.md), Google Gmail users won't be able to authenticate. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
- > - We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
-
+ > - The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. Learn more about [configuring email one-time passcode](one-time-passcode.md) and [plans for other fallback authentication methods](one-time-passcode.md#disable-email-one-time-passcode), such as unmanaged ("viral") accounts and Microsoft accounts.
## Guest sign-in fails with error code AADSTS50020
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
Previously updated : 08/05/2022- Last updated : 08/10/2022
The following table describes B2B collaboration users based on how they authenti
- **Internal member**: These users are generally considered employees of your organization. The user authenticates internally via Azure AD, and the user object created in the resource Azure AD directory has a UserType of Member.

> [!IMPORTANT]
-> We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
+> The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. Learn more about [configuring email one-time passcode](one-time-passcode.md) and [plans for other fallback authentication methods](one-time-passcode.md#disable-email-one-time-passcode), such as unmanaged ("viral") accounts and Microsoft accounts.
## Invitation redemption
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md
Previously updated : 06/30/2022- Last updated : 08/10/2022
A simple invitation and redemption process lets partners use their own credentia
Developers can use Azure AD business-to-business APIs to customize the invitation process or write applications like self-service sign-up portals. For licensing and pricing information related to guest users, refer to [Azure Active Directory External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/).

> [!IMPORTANT]
-> We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
+> The email one-time passcode feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. Learn more about [configuring email one-time passcode](one-time-passcode.md) and [plans for other fallback authentication methods](one-time-passcode.md#disable-email-one-time-passcode), such as unmanaged ("viral") accounts and Microsoft accounts.
## Collaborate with any partner using their identities
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
Security defaults make it easier to help protect your organization from these id
## Enabling security defaults
-If your tenant was created on or after October 22, 2019, security defaults may be enabled in your tenant. To protect all of our users, security defaults are being rolled out to all new tenants at creation.
+If your tenant was created on or after October 22, 2019, security defaults may be enabled in your tenant. To protect all of our users, security defaults are being rolled out to all new tenants at creation.
To enable security defaults in your directory:
You may choose to [disable password expiration](../authentication/concept-sspr-p
For more detailed information about emergency access accounts, see the article [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
+### B2B guest users
+
+Any B2B guest users who access your directory are subject to the same controls as your organization's users.
### Disabled MFA status

If your organization is a previous user of per-user based Azure AD Multi-Factor Authentication, don't be alarmed to not see users in an **Enabled** or **Enforced** status if you look at the Multi-Factor Auth status page. **Disabled** is the appropriate status for users who are using security defaults or Conditional Access based Azure AD Multi-Factor Authentication.
active-directory How To Connect Install Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-automatic-upgrade.md
Azure AD Connect automatic upgrade is a feature that regularly checks for newer versions of Azure AD Connect. If your server is enabled for automatic upgrade and a newer version is found for which your server is eligible, it will perform an automatic upgrade to that newer version. Note that for security reasons the agent that performs the automatic upgrade validates the new build of Azure AD Connect based on the digital signature of the downloaded version.
+>[!NOTE]
+> Azure Active Directory (AD) Connect follows the [Modern Lifecycle Policy](https://docs.microsoft.com/lifecycle/policies/modern). Changes for products and services under the Modern Lifecycle Policy may be more frequent and require customers to be alert for forthcoming modifications to their product or service.
+>
+> Products governed by the Modern Lifecycle Policy follow a [continuous support and servicing model](https://docs.microsoft.com/lifecycle/overview/product-end-of-support-overview). Customers must take the latest update to remain supported.
+>
+> For products and services governed by the Modern Lifecycle Policy, Microsoft's policy is to provide a minimum 30 days' notification when customers are required to take action in order to avoid significant degradation to the normal use of the product or service.
+
## Overview

Making sure your Azure AD Connect installation is always up to date has never been easier with the **automatic upgrade** feature. This feature is enabled by default for express installations and DirSync upgrades. When a new version is released, your installation is automatically upgraded. Automatic upgrade is enabled by default for the following:
active-directory Myapps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/myapps-overview.md
Previously updated : 05/05/2022 Last updated : 08/11/2022 + #Customer intent: As an Azure AD administrator, I want to make applications available to users in the My Apps portal. # My Apps portal overview
-[My Apps](https://myapps.microsoft.com) is a web-based portal that is used for managing and launching applications in Azure Active Directory (Azure AD). To work with applications in My Apps, use an organizational account in Azure AD and obtain access granted by the Azure AD administrator. My Apps is separate from the Azure portal and doesn't require users to have an Azure subscription or Microsoft 365 subscription.
+My Apps is a web-based portal that is used for managing and launching applications in Azure Active Directory (Azure AD). To work with applications in My Apps, use an organizational account in Azure AD and obtain access granted by the Azure AD administrator. My Apps is separate from the Azure portal and doesn't require users to have an Azure subscription or Microsoft 365 subscription.
Users access the My Apps portal to:
For more information, see [Properties of an enterprise application](application-
### Discover applications
-When signed in to the My Apps portal, the applications that have been made visible are shown. For an application to be visible in the My Apps portal, set the appropriate properties in the Azure portal. Also in the Azure portal, assign a user or group with the appropriate members.
+When signed in to the [My Apps](https://myapps.microsoft.com) portal, the applications that have been made visible are shown. For an application to be visible in the My Apps portal, set the appropriate properties in the [Azure portal](https://portal.azure.com). Also in the Azure portal, assign a user or group with the appropriate members.
In the My Apps portal, to search for an application, enter an application name in the search box at the top of the page to find an application. The applications that are listed can be formatted in **List view** or a **Grid view**.
active-directory Restore Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md
zone_pivot_groups: enterprise-apps-minus-portal
In this article, you'll learn how to restore a soft deleted enterprise application in your Azure Active Directory (Azure AD) tenant. Soft deleted enterprise applications can be restored from the recycle bin within the first 30 days after their deletion. After the 30-day window, the enterprise application is permanently deleted and can't be restored.
-When an [application registration is deleted](../develop/howto-remove-app.md) in its home tenant through app registrations in the Azure portal, the enterprise application, which is its corresponding service principal also gets deleted. Restoring the deleted application registration through the Azure portal won't restore its corresponding service principal, but will instead create a new one.
-
-Currently, the [soft deleted enterprise applications](delete-application-portal.md) can't be viewed or restored through the Azure portal. Therefore, if you had configurations on the previous enterprise application, you can't restore them through the Azure portal. To recover your previous configurations, first delete the enterprise application that was restored through the Azure portal, then follow the steps in this article to recover the soft deleted enterprise application. For more information on frequently asked questions about deletion and recovery of applications, see [Deleting and recovering applications FAQs](delete-recover-faq.yml.
--
+>[!IMPORTANT]
+>If you deleted an [application registration](../develop/howto-remove-app.md) in its home tenant through App registrations in the Azure portal, the enterprise application, which is its corresponding service principal, also got deleted. If you restore the deleted application registration through the Azure portal, its corresponding service principal won't be restored. Instead, this action creates a new service principal. Therefore, if you had configurations on the previous enterprise application, you can't restore them through the Azure portal. Use the workaround provided in this article to recover the deleted service principal and its previous configurations.
## Prerequisites

To restore an enterprise application, you need:
To restore an enterprise application, you need:
- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
- A [soft deleted enterprise application](delete-application-portal.md) in your tenant.

## View restorable enterprise applications
+To recover your enterprise application with its previous configurations, first delete the enterprise application that was restored through the Azure portal, then take the following steps to recover the soft deleted enterprise application. For more information on frequently asked questions about deletion and recovery of applications, see [Deleting and recovering applications FAQs](delete-recover-faq.yml).
+ :::zone pivot="aad-powershell" > [!IMPORTANT]
active-directory Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tenant-restrictions.md
For specific details, refer to your proxy server documentation.
## Blocking consumer applications
-Applications from Microsoft that support both consumer accounts and organizational accounts, like OneDrive or Microsoft Learn can sometimes be hosted on the same URL. This means that users that must access that URL for work purposes also have access to it for personal use, which may not be permitted under your operating guidelines.
+Applications from Microsoft that support both consumer accounts and organizational accounts, such as OneDrive, can sometimes be hosted on the same URL. This means that users who must access that URL for work purposes also have access to it for personal use, which may not be permitted under your operating guidelines.
Some organizations attempt to fix this by blocking `login.live.com` in order to block personal accounts from authenticating. This has several downsides:
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Knowledge Administrator](#knowledge-administrator) | Can configure knowledge, learning, and other intelligent features. | b5a8dcf3-09d5-43a9-a639-8e29ef291470 |
> | [Knowledge Manager](#knowledge-manager) | Can organize, create, manage, and promote topics and knowledge. | 744ec460-397e-42ad-a462-8b3f9747a02c |
> | [License Administrator](#license-administrator) | Can manage product licenses on users and groups. | 4d6ac14f-3453-41d0-bef9-a3e0c569773a |
+> | [Lifecycle Workflows Administrator](#lifecycle-workflows-administrator) | Create and manage all aspects of workflows and tasks associated with Lifecycle Workflows in Azure AD. | 59d46f88-662b-457b-bceb-5c3809e5908f |
> | [Message Center Privacy Reader](#message-center-privacy-reader) | Can read security messages and updates in Office 365 Message Center only. | ac16e43d-7b2d-40e0-ac05-243ff356ab5b |
> | [Message Center Reader](#message-center-reader) | Can read messages and updates for their organization in Office 365 Message Center only. | 790c1fb9-7f7d-4f88-86a1-ef1f95c05c1b |
> | [Modern Commerce User](#modern-commerce-user) | Can manage commercial purchases for a company, department or team. | d24aef57-1500-4070-84db-2666f29cf966 |
Users in this role can add, remove, and update license assignments on users, gro
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+## Lifecycle Workflows Administrator
+
+Assign the Lifecycle Workflows Administrator role to users who need to do the following tasks:
+
+- Create and manage all aspects of workflows and tasks associated with Lifecycle Workflows in Azure AD
+- Check the execution of scheduled workflows
+- Launch on-demand workflow runs
+- Inspect workflow execution logs
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.directory/lifecycleManagement/workflows/allProperties/allTasks | Manage all aspects of lifecycle management workflows and tasks in Azure AD |
## Message Center Privacy Reader

Users in this role can monitor all notifications in the Message Center, including data privacy messages. Message Center Privacy Readers get email notifications including those related to data privacy and they can unsubscribe using Message Center Preferences. Only the Global Administrator and the Message Center Privacy Reader can read data privacy messages. Additionally, this role contains the ability to view groups, domains, and subscriptions. This role has no permission to view, create, or manage service requests.
active-directory Zendesk Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zendesk-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Zendesk SSO
+You can set up one SAML configuration for team members and a second SAML configuration for end users.
+ 1. To automate the configuration within **Zendesk**, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**. ![Screenshot shows the Install the extension button.](./media/target-process-tutorial/install_extension.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Setup configuration](common/setup-sso.png)
-1. If you want to setup Zendesk manually, open a new web browser window and sign into your Zendesk company site as an administrator and perform the following steps:
+1. If you want to set up Zendesk manually, open a new web browser window and sign into your Zendesk company site as an administrator and perform the following steps:
-1. In the **Zendesk Admin Center**, Go to the **Account -> Security -> Single sign-on** page and click **Configure** in the **SAML**.
+1. In the **Zendesk Admin Center**, go to **Account -> Security -> Single sign-on**, then click **Create SSO configuration** and select **SAML**.
- ![Screenshot shows the Zendesk Admin Center with Security settings selected.](./media/zendesk-tutorial/settings.png "Security")
+ ![Screenshot shows the Zendesk Admin Center with Security settings selected.](https://zen-marketing-documentation.s3.amazonaws.com/docs/en/zendesk_create_sso_configuration.png "Security")
1. Perform the following steps in the **Single sign-on** page.
- ![Single sign-on](./media/zendesk-tutorial/saml-configuration.png "Single sign-on")
+ ![Single sign-on](https://zen-marketing-documentation.s3.amazonaws.com/docs/en/zendesk_saml_configuration_settings.png "Single sign-on")
+
+ a. In **Configuration name**, enter a name for your configuration. Up to two SAML and two JWT configurations are possible.
- a. Check the **Enabled**.
-
b. In **SAML SSO URL** textbox, paste the value of **Login URL** which you have copied from Azure portal. c. In **Certificate fingerprint** textbox, paste the **Thumbprint** value of certificate which you have copied from Azure portal.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
e. Click **Save**.
+After creating your SAML configuration, you must activate it by assigning it to end users or team members.
+
+1. In the **Zendesk Admin Center**, go to **Account -> Security** and select either **Team member authentication** or **End user authentication**.
+
+1. If you're assigning the configuration to team members, select **External authentication** to show the authentication options. These options are already displayed for end users.
+
+1. Click the **Single sign-on (SSO)** option in the **External authentication** section, then select the name of the SSO configuration you want to use.
+
+1. Select the primary SSO method for this group of users if you have more than one authentication method assigned to the group. This option sets the default method used when users go to a page that requires authentication.
+
+1. Click **Save**.
+ ### Create Zendesk test user The objective of this section is to create a user called Britta Simon in Zendesk. Zendesk supports automatic user provisioning, which is by default enabled. You can find more details [here](Zendesk-provisioning-tutorial.md) on how to configure automatic user provisioning.
active-directory Credential Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/credential-design.md
Verifiable credentials definitions are made up of two components, *display* defi
This article explains how to modify both types of definitions to meet the requirements of your organization.
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Display definition: wallet credential visuals Microsoft Entra Verified ID offers a limited set of options that can be used to reflect your brand. This article provides instructions on how to customize your credentials, and best practices for designing credentials that look great after they're issued to users.
The rules definition is a simple JSON document that describes important properti
### Attestations
-The following four attestation types are currently available to be configured in the rules definition. They're used by the verifiable credential issuing service to insert claims into a verifiable credential and attest to that information with your decentralized identifier (DID).
+The following four attestation types are currently available to be configured in the rules definition. They are different ways of providing the claims that the Entra Verified ID issuing service inserts into a verifiable credential and attests to with your decentralized identifier (DID). Multiple attestation types can be used in the same rules definition.
* **ID token**: When this option is configured, you'll need to provide an OpenID Connect configuration URI and include the claims that should be included in the verifiable credential. Users are prompted to 'Sign in' on the Authenticator app to meet this requirement and add the associated claims from their account. To configure this option, see this [how-to guide](how-to-use-quickstart-idtoken.md).
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Our digital and physical lives are increasingly linked to the apps, services, and devices we use to access a rich set of experiences. This digital transformation allows us to interact with hundreds of companies and thousands of other users in ways that were previously unimaginable. But identity data has too often been exposed in security breaches. These breaches affect our social, professional, and financial lives. Microsoft believes that there's a better way. Every person has a right to an identity that they own and control, one that securely stores elements of their digital identity and preserves privacy. This primer explains how we are joining hands with a diverse community to build an open, trustworthy, interoperable, and standards-based Decentralized Identity (DID) solution for individuals and organizations.
active-directory Get Started Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/get-started-request-api.md
Microsoft Entra Verified ID includes the Request Service REST API. This API allows you to issue and verify credentials. This article shows you how to start using the Request Service REST API.
-> [!IMPORTANT]
-> The Request Service REST API is currently in preview. This preview version is provided without a service level agreement, and you can occasionally expect breaking changes and deprecation of the API while in preview. The preview version of the API isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## API access token Your application needs to include a valid access token with the required permissions so that it can access the Request Service REST API. Access tokens issued by the Microsoft identity platform contain information (scopes) that the Request Service REST API uses to validate the caller. An access token ensures that the caller has the proper permissions to perform the operation they're requesting.
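A minimal sketch of acquiring such a token with the OAuth 2.0 client credentials flow is shown below. The tenant ID, client ID, and client secret are placeholders, and the scope assumes the Verifiable Credentials Service Request application (`3db474b9-6a0c-4840-96ac-1fceb342124f/.default`).

```http
POST https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id=<client-id>
&client_secret=<client-secret>
&grant_type=client_credentials
&scope=3db474b9-6a0c-4840-96ac-1fceb342124f/.default
```

The `access_token` in the response is then passed in the `Authorization: Bearer` header of Request Service API calls.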
To issue or verify a verifiable credential, follow these steps:
1. Submit the request to the Request Service REST API.
-The Request Service API returns a HTTP Status Code `201 Created` on a successful call. If the API call returns an error, please check the [error reference documentation](error-codes.md). //TODO
+The Request Service API returns an HTTP status code `201 Created` on a successful call. If the API call returns an error, check the [error reference documentation](error-codes.md).
## Issuance request example
Authorization: Bearer <token>
"clientName": "Verifiable Credential Expert Sample" }, "type": "VerifiedCredentialExpert",
- "manifestUrl": "https://verifiedid.did.msidentity.com/v1.0/12345678-0000-0000-0000-000000000000/verifiableCredential/contracts/VerifiedCredentialExpert1",
+ "manifest": "https://verifiedid.did.msidentity.com/v1.0/12345678-0000-0000-0000-000000000000/verifiableCredential/contracts/VerifiedCredentialExpert1",
"pin": { "value": "3539", "length": 4
active-directory How To Create A Free Developer Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-create-a-free-developer-account.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
> [!NOTE] > The requirement of an Azure Active Directory (Azure AD) P2 license was removed in early May 2021. The Azure AD Free tier is now supported.
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-- ## Prerequisites To link your DID to your domain, you need to have completed the following.
It is of high importance that you link your DID to a domain recognizable to the
## How do you update the linked domain on your DID?
-1. Navigate to the Verifiable Credentials | Getting Started page.
-1. On the left side of the page, select **Domain**.
+1. Navigate to Verified ID in the Azure portal.
+1. On the left side of the page, select **Registration**.
1. In the Domain box, enter your new domain name. 1. Select **Publish**.
If the trust system is ION, once the domain changes are published to ION, the do
## Distribute well-known config
-1. From the Azure portal, navigate to the Verifiable Credentials page. Select **Domain** and choose **Verify this domain**
+1. From the Azure portal, navigate to the Verified ID page. Select **Registration** and choose **Verify** for the domain
2. Download the did-configuration.json file shown in the image below.
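For orientation, the downloaded did-configuration.json follows the DIF Well Known DID Configuration format. A trimmed sketch of its structure is shown below; the real file contains a complete signed domain-linkage credential rather than the truncated placeholder JWT.

```json
{
  "@context": "https://identity.foundation/.well-known/did-configuration/v1",
  "linked_dids": [
    "eyJhbGciOi...<signed-domain-linkage-credential-JWT>..."
  ]
}
```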
active-directory How To Issuer Revoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-issuer-revoke.md
Title: How to Revoke a Verifiable Credential as an Issuer - Azure Active Directory Verifiable Credentials
+ Title: How to Revoke a Verifiable Credential as an Issuer - Entra Verified ID
description: Learn how to revoke a Verifiable Credential that you've issued documentationCenter: ''
As part of the process of working with verifiable credentials (VCs), you not only have to issue credentials, but sometimes you also have to revoke them. In this article, we go over the **Status** property part of the VC specification and take a closer look at the revocation process, why we may want to revoke credentials and some data and privacy implications.
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Why you may want to revoke a VC?
+## Why would you want to revoke a verifiable credential?
Each customer will have their own unique reasons for wanting to revoke a verifiable credential, but here are some of the common themes we've heard thus far.
Each customer will have their own unique reason's for wanting to revoke a verifi
Using the indexed claim in verifiable credentials, you can search for issued verifiable credentials by that claim in the portal and revoke it.
-1. Navigate to the verifiable credentials blade in Azure Active Directory.
+1. Navigate to the Verified ID blade in the Azure portal as an admin user with the **sign** key permission on Azure Key Vault.
1. Select the verifiable credential type 1. On the left-hand menu, choose **Revoke a credential** ![Revoke a credential](media/how-to-issuer-revoke/settings-revoke.png)
-1. Search for the index claim of the user you want to revoke. If you haven't indexed a claim, search won't work, and you won't be able to revoke the verifiable credential.
+1. Search for the index claim of the user you want to revoke. If you haven't indexed a claim, search will not work, and you will not be able to revoke the verifiable credential.
![Screenshot of the credential to revoke](media/how-to-issuer-revoke/revoke-search.png) >[!NOTE]
- >Since we are only storing a hash of the indexed claim from the verifiable credential, only an exact match will populate the search results. We take the input as searched by the IT Admin and we use the same hashing algorithm to see if we have a hash match in our database.
+ >Since only a hash of the indexed claim from the verifiable credential is stored, only an exact match will populate the search results. What you enter in the text box is hashed by using the same algorithm and used as the search criterion to match the stored, hashed value.
-1. Once you've found a match, select the **Revoke** option to the right of the credential you want to revoke.
+1. When a match is found, select the **Revoke** option to the right of the credential you want to revoke.
+
+ >[!NOTE]
+ >The admin user performing the revoke operation needs the **sign** key permission on Azure Key Vault, or you'll get the error message ***Unable to access KeyVault resource with given credentials***.
![Screenshot of a warning letting you know that after revocation the user still has the credential](media/how-to-issuer-revoke/warning.png)
Verifiable credential data isn't stored by Microsoft. Therefore, the issuer need
``` >[!NOTE]
->Only one claim can be indexed from a rules claims mapping.
+>Only one claim can be indexed in a rules claims mapping. If you accidentally have no indexed claim in your rules definition and you later correct this, verifiable credentials that were already issued won't be searchable, because they were issued when no index existed.
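For illustration, a claims mapping entry that marks a claim as indexed might look like the following fragment. The claim names and path are placeholders, and the property names follow the rules definition claims mapping model.

```json
"mapping": [
  {
    "outputClaim": "email",
    "inputClaim": "$.email",
    "indexed": true,
    "required": true
  }
]
```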
## How does revocation work?
Microsoft Entra Verified ID implements the [W3C StatusList2021](https://github.c
In every Microsoft issued verifiable credential, there is a claim called `credentialStatus`. This data is a navigational map to where in a block of data this VC has its revocation flag.
+>[!NOTE]
+>If the verifiable credential is old and was issued during the preview period, this claim may not exist. Revocation won't work for such a credential, and you have to reissue it.
+ ```json ... "credentialStatus": {
active-directory How To Opt Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-opt-out.md
Title: Opt out of the Microsoft Entra Verified ID
-description: Learn how to Opt Out of the Verifiable Credentials Preview
+description: Learn how to Opt Out of Entra Verified ID
documentationCenter: ''
In this article:
- What happens to your data? - Effect on existing verifiable credentials.
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Prerequisites - Complete verifiable credentials onboarding. ## When do you need to opt out?
-Opting out is a one-way operation, after you opt-out your Microsoft Entra Verified ID environment will be reset. During the Public Preview opting out may be required to:
+Opting out is a one-way operation; after you opt out, your Entra Verified ID environment will be reset. Opting out may be required to:
- Enable new service capabilities. - Reset your service configuration.
Once an opt-out takes place, you won't be able to recover your DID or conduct an
All verifiable credentials already issued will continue to exist. They won't be cryptographically invalidated as your DID will remain resolvable through ION. However, when relying parties call the status API, they will always receive back a failure message.
-## How to opt-out from the Microsoft Entra Verified ID Public Preview?
+## How to opt-out from the Microsoft Entra Verified ID service?
1. From the Azure portal search for verifiable credentials. 2. Choose **Organization Settings** from the left side menu.
-3. Under the section, **Reset your organization**, select **Delete all credentials, and opt out of preview**.
+3. Under the section, **Reset your organization**, select **Delete all credentials and reset service**.
:::image type="content" source="media/how-to-opt-out/settings-reset.png" alt-text="Section in settings that allows you to reset your organization":::
active-directory How To Register Didwebsite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-register-didwebsite.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Prerequisites - Complete verifiable credentials onboarding with Web as the selected trust system.-- Complete the Linked Domain setup.
+- Complete the Linked Domain setup. Without completing this step, you can't perform this registration step.
## Why do I need to register my website ID?
-If your trust system for the tenant is Web, you need register your website ID to be able to issue and verify your credentials. When you use the ION based trust system, information like your issuers' public keys are published to the blockchain. When the trust system is Web, you have to make this information available on your website.
+If your trust system for the tenant is Web, you need to register your website ID to be able to issue and verify your credentials. When the trust system is Web, you have to make this information available on your website and complete this registration. When you use the ION-based trust system, information like your issuers' public keys is published to the blockchain, and you don't need to complete this step.
## How do I register my website ID?
-1. Navigate to the Verifiable Credentials | Getting Started page.
-1. On the left side of the page, select Domain.
+1. Navigate to Verified ID in the Azure portal.
+1. On the left side of the page, select Registration.
1. At the Website ID registration, select Review. ![Screenshot of website registration page.](media/how-to-register-didwebsite/how-to-register-didwebsite-domain.png)
If your trust system for the tenant is Web, you need register your website ID to
![Screenshot of did.json.](media/how-to-register-didwebsite/how-to-register-didwebsite-diddoc.png) 1. Upload the file to your webserver. The DID document JSON file needs to be uploaded to location /.well-known/did.json on your webserver.
-1. Once the file is available on your webserver, you need to select the Refresh registration status button to verify that the system can request the file.
+1. Once the file is available on your webserver, you need to select the **Refresh registration status** button to verify that the system can request the file.
## When is the DID document in the did.json file used?
The DID document contains the public keys for your issuer and is used during bot
The DID document in the did.json file needs to be republished if you changed the Linked Domain or if you rotate your signing keys.
+## How can I verify that the registration is working?
+
+The portal verifies that the `did.json` file is reachable and correct when you select the [**Refresh registration status** button](#how-do-i-register-my-website-id). You should also verify that you can request that URL in a browser, to catch errors such as not using HTTPS, a bad SSL certificate, or the URL not being public. If the did.json file can't be requested anonymously in a browser without warnings or errors, the portal won't be able to complete the **Refresh registration status** step either.
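As a quick manual check (an illustrative request with a placeholder domain), confirm that the file is served anonymously over HTTPS at the well-known path and returns the DID document without an authentication challenge.

```http
GET /.well-known/did.json HTTP/1.1
Host: www.contoso.com
Accept: application/json
```

The response should be an HTTP 200 with valid JSON, served over a certificate the client trusts.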
+ ## Next steps - [Tutorial for issue a verifiable credential](verifiable-credentials-configure-issuer.md)
active-directory How To Use Quickstart Idtoken https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-idtoken.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) that uses the [idTokens attestation](rules-and-display-definitions-model.md#idtokenattestation-type) produces an issuance flow where you're required to do an interactive sign-in to an OpenID Connect (OIDC) identity provider in Microsoft Authenticator. Claims in the ID token that the identity provider returns can be used to populate the issued verifiable credential. The claims mapping section in the rules definition specifies which claims are used. ## Create a custom credential with the idTokens attestation type
The JSON display definition is nearly the same, regardless of attestation type.
## Sample JSON rules definitions
-The JSON attestation definition should contain the **idTokens** name, the [OIDC configuration details](rules-and-display-definitions-model.md#idtokenattestation-type) and the claims mapping section. The expected JSON for the rules definitions is the inner content of the rules attribute, which starts with the attestation attribute.
+The JSON attestation definition should contain the **idTokens** name, the [OIDC configuration details](rules-and-display-definitions-model.md#idtokenattestation-type) (clientId, configuration, redirectUri and scope) and the claims mapping section. The expected JSON for the rules definitions is the inner content of the rules attribute, which starts with the attestation attribute.
The claims mapping in the following example requires that you configure the token as explained in the [Claims in the ID token from the identity provider](#claims-in-the-id-token-from-the-identity-provider) section.
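A rules definition of this shape might look roughly like the following sketch. All values are placeholders, the claim mappings are illustrative rather than required, and the linked model reference remains the authoritative schema.

```json
{
  "attestations": {
    "idTokens": [
      {
        "clientId": "<application-id>",
        "configuration": "https://login.microsoftonline.com/<tenant-id>/v2.0/.well-known/openid-configuration",
        "redirectUri": "vcclient://openid",
        "scope": "openid profile",
        "mapping": [
          { "outputClaim": "firstName", "inputClaim": "$.given_name", "required": true, "indexed": false },
          { "outputClaim": "lastName", "inputClaim": "$.family_name", "required": true, "indexed": true }
        ],
        "required": false
      }
    ]
  },
  "validityInterval": 2592000,
  "vc": {
    "type": [ "VerifiedCredentialExpert" ]
  }
}
```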
The clientId attribute is the application ID of a registered application in the
1. In **Redirect URI (optional)**, select **Public client/native (mobile & desktop)**, and then enter **vcclient://openid**.
-If you want to be able to test what claims are in the token, do the following:
+If you want to be able to test what claims are in the Azure Active Directory ID token, do the following:
1. On the left pane, select **Authentication**> **Add platform** > **Web**.
To configure your sample code to issue and verify your custom credentials, you n
- The credential type - The manifest URL to your credential
-The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**, and then switch to the custom issue.
-
-![Screenshot of the quickstart "Issue credential" page.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
-
-After you've switched to the custom issue, you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuerΓÇÖs DID is the authority value.
+The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**. Then you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuer's DID is the authority value.
![Screenshot of the quickstart custom credential issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
active-directory How To Use Quickstart Selfissued https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-selfissued.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) that uses the [selfIssued attestation](rules-and-display-definitions-model.md#selfissuedattestation-type) type produces an issuance flow where you're required to manually enter values for the claims in Microsoft Authenticator. ## Create a custom credential with the selfIssued attestation type
To configure your sample code to issue and verify your custom credential, you ne
- The credential type - The manifest URL to your credential
-The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**, and then switch to the custom issue.
-
-![Screenshot of the quickstart "Issue credential" page.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
-
-After you've switched to the custom issue, you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuerΓÇÖs DID is the authority value.
+The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**. Then you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuer's DID is the authority value.
![Screenshot of the quickstart custom credential issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
active-directory How To Use Quickstart Verifiedemployee https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-verifiedemployee.md
If attribute values change in the user's Azure AD profile, the VC isn't automati
## Configure the samples to issue and verify your VerifiedEmployee credential
-Verifiable Credentials for directory based claims can be issued and verified just like any other credentials you create. All you need is your issuer DID for your tenant, the credential type and the manifest url to your credential. The easiest way to find these values for a Managed Credential is to view the credential in the portal, select Issue credential and switch to Custom issue. These steps bring up a textbox with a skeleton JSON payload for the Request Service API.
+Verifiable Credentials for directory-based claims can be issued and verified just like any other credentials you create. All you need is your issuer DID for your tenant, the credential type, and the manifest URL to your credential. The easiest way to find these values for a Managed Credential is to view the credential in the portal and select **Issue credential**; you'll see a header named **Custom issue**. These steps bring up a text box with a skeleton JSON payload for the Request Service API.
![Custom issue](media/how-to-use-quickstart-verifiedemployee/verifiable-credentials-configure-verifiedemployee-custom-issue.png)
active-directory How To Use Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Prerequisites To use the Microsoft Entra Verified ID quickstart, you need only to complete the verifiable credentials onboarding process. ## What is the quickstart?
-Azure Active Directory verifiable credentials now come with a quickstart in the Azure portal for creating custom credentials. When you use the quickstart, you don't need to edit and upload rules and display files to Azure Storage. Instead, you enter all details in the Azure portal and create the credential on a single page.
+Entra Verified ID now comes with quickstarts in the Azure portal for creating custom credentials. When you use the quickstart, you don't need to edit and upload rules and display files to Azure Storage. Instead, you enter all details in the Azure portal and create the credential on a single page.
>[!NOTE] >When you work with custom credentials, you provide display definitions and rules definitions in JSON documents. These definitions are stored with the credential details.
To configure your sample code to issue and verify by using custom credentials, y
- The credential type - The manifest URL to your credential
-The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**, and then switch to the custom issue.
-
-![Screenshot of the quickstart "Issue credential" page.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
-
-After you've switched to the custom issue, you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuerΓÇÖs DID is the authority value.
+The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**. There you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuer's DID is the authority value.
![Screenshot of the quickstart custom credential issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
active-directory How Use Vcnetwork https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-use-vcnetwork.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Prerequisites To use the Entra Verified ID Network, you need to have completed the following.
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- It's important to plan your verifiable credential solution so that in addition to issuing and/or validating credentials, you have a complete view of the architectural and business impacts of your solution. If you haven't reviewed them already, we recommend you review [Introduction to Microsoft Entra Verified ID](decentralized-identifier-overview.md) and the [FAQs](verifiable-credentials-faq.md), and then complete the [Getting Started](get-started-verifiable-credentials.md) tutorial. This architectural overview introduces the capabilities and components of the Microsoft Entra Verified ID service. For more detailed information on issuance and validation, see
Terminology for verifiable credentials (VCs) might be confusing if you're not fa
* In the scenario above, both the issuer and verifier have a DID, and a DID document. The DID document contains the public key, and the list of DNS web domains associated with the DID (also known as linked domains).
-* Woodgrove (issuer) signs their employeesΓÇÖ VCs with its public key; similarly, Proseware (verifier) signs requests to present a VC using its key, which is also associated with its DID.
+* Woodgrove (issuer) signs their employeesΓÇÖ VCs with its private key; similarly, Proseware (verifier) signs requests to present a VC using its key, which is also associated with its DID.
A ***trust system*** is the foundation in establishing trust between decentralized systems. It can be a distributed ledger or it can be something centralized, such as [DID Web](https://w3c-ccg.github.io/did-method-web/).
active-directory Issuance Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md
The callback endpoint is called when a user scans the QR code, uses the deep lin
|Property |Type |Description | |||| | `requestId`| string | Mapped to the original request when the payload was posted to the Verifiable Credentials service.|
-| `code` |string |The code returned when the request has an error. Possible values: <ul><li>`request_retrieved`: The user scanned the QR code or selected the link that starts the issuance flow.</li><li>`issuance_successful`: The issuance of the verifiable credentials was successful.</li><li>`issuance_error`: There was an error during issuance. For details, see the `error` property.</li></ul> |
+| `requestStatus` |string |The status returned for the request. Possible values: <ul><li>`request_retrieved`: The user scanned the QR code or selected the link that starts the issuance flow.</li><li>`issuance_successful`: The issuance of the verifiable credentials was successful.</li><li>`issuance_error`: There was an error during issuance. For details, see the `error` property.</li></ul> |
| `state` |string| Returns the state value that you passed in the original payload. | | `error`| error | When the `requestStatus` property value is `issuance_error`, this property contains information about the error.| | `error.code` | string| The return error code. |
The following example demonstrates a callback payload when the authenticator app
```json {     "requestId": "799f23ea-5241-45af-99ad-cf8e5018814e",
-    "code":"request_retrieved",
+    "requestStatus":"request_retrieved",
    "state": "de19cb6b-36c1-45fe-9409-909a51292a9c" } ```
The following example demonstrates a callback payload after the user successfull
```json {     "requestId": "799f23ea-5241-45af-99ad-cf8e5018814e",
-    "code":"issuance_successful",
+    "requestStatus":"issuance_successful",
    "state": "de19cb6b-36c1-45fe-9409-909a51292a9c" }  ```
The following example demonstrates a callback payload when an error occurred:
```json {     "requestId": "799f23ea-5241-45af-99ad-cf8e5018814e",
-    "code": "issuance_error",
+    "requestStatus": "issuance_error",
    "state": "de19cb6b-36c1-45fe-9409-909a51292a9c", "error": { "code":"IssuanceFlowFailed",
active-directory Issuer Openid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuer-openid.md
Title: Issuer service communication examples - Azure Active Directory Verifiable Credentials
+ Title: Issuer service communication examples - Entra Verified ID
description: Details of communication between identity provider and issuer service
The Microsoft Entra Verified ID service can issue verifiable credentials by retrieving claims from an ID token generated by your organization's OpenID compliant identity provider. This article instructs you on how to set up your identity provider so Authenticator can communicate with it and retrieve the correct ID Token to pass to the issuing service.
-> [!IMPORTANT]
-> Azure Active Directory Verifiable Credentials is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-- To issue a Verifiable Credential, Authenticator is instructed through downloading the contract to gather input from the user and send that information to the issuing service. If you need to use an ID Token, you have to set up your identity provider to allow Authenticator to sign in a user using the OpenID Connect protocol. The claims in the resulting ID token are used to populate the contents of your verifiable credential. Authenticator authenticates the user using the OpenID Connect authorization code flow. Your OpenID provider must support the following OpenID Connect features: | Feature | Description |
The ID token must use the JWT compact serialization format, and must not be encr
## Next steps -- [How to customize your Azure Active Directory Verifiable Credentials](credential-design.md)
+- [Customize your verifiable credentials](credential-design.md)
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
- >[!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- It's important to plan your issuance solution so that in addition to issuing credentials, you have a complete view of the architectural and business impacts of your solution. If you haven't done so, we recommend you view the [Microsoft Entra Verified ID architecture overview](introduction-to-verifiable-credentials-architecture.md) for foundational information. ## Scope of guidance
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-verification-solution.md
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
->[!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Microsoft's Microsoft Entra Verified ID (Azure AD VC) service enables you to trust proofs of user identity without expanding your trust boundary. With Azure AD VC, you create accounts or federate with another identity provider. When a solution implements a verification exchange using verifiable credentials, it enables applications to request credentials that aren't bound to a specific domain. This approach makes it easier to request and verify credentials at scale. If you haven't already, we suggest you review the [Microsoft Entra Verified ID architecture overview](introduction-to-verifiable-credentials-architecture.md). You may also want to review [Plan your Microsoft Entra Verified ID issuance solution](plan-issuance-solution.md).
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/presentation-request-api.md
The `RequestCredential` provides information about the requested credentials the
|||| | `type`| string| The verifiable credential type. The `type` must match the type as defined in the `issuer` verifiable credential manifest (for example, `VerifiedCredentialExpert`). To get the issuer manifest, see [Gather credentials and environment details to set up your sample application](verifiable-credentials-configure-issuer.md). Copy the **Issue credential URL**, open it in a web browser, and check the **id** property. | | `purpose`| string | Provide information about the purpose of requesting this verifiable credential. |
-| `acceptedIssuers`| string collection | A collection of issuers' DIDs that could issue the type of verifiable credential that subjects can present. To get your issuer DID, see [Gather credentials and environment details to set up your sample application](verifiable-credentials-configure-issuer.md), and copy the value of the **Decentralized identifier (DID)**. |
+| `acceptedIssuers`| string collection | A collection of issuers' DIDs that could issue the type of verifiable credential that subjects can present. To get your issuer DID, see [Gather credentials and environment details to set up your sample application](verifiable-credentials-configure-issuer.md), and copy the value of the **Decentralized identifier (DID)**. If the `acceptedIssuers` collection is empty, then the presentation request will accept a credential type issued by any issuer. |
| `configuration.validation` | [Configuration.Validation](#configurationvalidation-type) | Optional settings for presentation validation.| ### Configuration.Validation type
The `Configuration.Validation` provides information about the presented credenti
|Property |Type |Description | |||| | `allowRevoked` | Boolean | Determines if a revoked credential should be accepted. Default is `false` (it shouldn't be accepted). |
-| `validateLinkedDomain` | Boolean | Determines if the linked domain should be validated. Default is `true` (it should be validated). Setting this flag to `false` means you'll accept credentials from unverified linked domain. Setting this flag to `true` means the linked domain will be validated and only verified domains will be accepted. |
+| `validateLinkedDomain` | Boolean | Determines if the linked domain should be validated. Default is `false`. Setting this flag to `false` means that you, as the relying party application, accept credentials from an unverified linked domain. Setting this flag to `true` means the linked domain will be validated and only verified domains will be accepted. |
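Putting these properties together, the requested-credential portion of a presentation request might look roughly like the following fragment; the issuer DID and purpose text are placeholders, and the fragment assumes it sits in the `requestedCredentials` array of the presentation request payload.

```json
"requestedCredentials": [
  {
    "type": "VerifiedCredentialExpert",
    "purpose": "So we can see that you are an expert",
    "acceptedIssuers": [ "did:ion:<issuer-did>" ],
    "configuration": {
      "validation": {
        "allowRevoked": false,
        "validateLinkedDomain": true
      }
    }
  }
]
```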
## Successful response
The callback endpoint is called when a user scans the QR code, uses the deep lin
|Property |Type |Description | |||| | `requestId`| string | Mapped to the original request when the payload was posted to the Verifiable Credentials service.|
-| `code` |string |The code returned when the request was retrieved by the authenticator app. Possible values: <ul><li>`request_retrieved`: The user scanned the QR code or selected the link that starts the presentation flow.</li><li>`presentation_verified`: The verifiable credential validation completed successfully.</li></ul> |
+| `requestStatus` |string |The status returned when the request was retrieved by the authenticator app. Possible values: <ul><li>`request_retrieved`: The user scanned the QR code or selected the link that starts the presentation flow.</li><li>`presentation_verified`: The verifiable credential validation completed successfully.</li></ul> |
| `state` |string| Returns the state value that you passed in the original payload. | | `subject`|string | The verifiable credential user DID.| | `issuers`| array |Returns an array of verifiable credentials requested. For each verifiable credential, it provides: <ul><li>The verifiable credential type(s).</li><li>The issuer's DID.</li><li>The claims retrieved.</li><li>The verifiable credential issuer's domain.</li><li>The verifiable credential issuer's domain validation status.</li></ul> |
The following example demonstrates a callback payload when the authenticator app
```json {     "requestId": "e4ef27ca-eb8c-4b63-823b-3b95140eac11",
-    "code":"request_retrieved",
+    "requestStatus":"request_retrieved",
    "state": "92d076dd-450a-4247-aa5b-d2e75a1a5d58" } ```
The following example demonstrates a callback payload after the verifiable crede
```json { "requestId": "e4ef27ca-eb8c-4b63-823b-3b95140eac11",
- "code": "presentation_verified",
+ "requestStatus": "presentation_verified",
"state": "92d076dd-450a-4247-aa5b-d2e75a1a5d58", "subject": "did:ion:EiAlrenrtD3Lsw0GlbzS1O2YFdy3Xtu8yo35W<SNIP>…", "issuers": [
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
The following diagram illustrates the Microsoft Entra Verified ID architecture a
- To clone the repository that hosts the sample app, install [GIT](https://git-scm.com/downloads). - [Visual Studio Code](https://code.visualstudio.com/Download), or similar code editor. - [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0).-- [ngrok](https://ngrok.com/) (free).
+- Download [ngrok](https://ngrok.com/) and sign up for a free account.
- A mobile device with Microsoft Authenticator:
- - Android version 6.2108.5654 or later installed.
- - iOS version 6.5.82 or later installed.
+ - Android version 6.2206.3973 or later installed.
+ - iOS version 6.6.2 or later installed.
## Create the verified credential expert card in Azure In this step, you create the verified credential expert card by using Microsoft Entra Verified ID. After you create the credential, your Azure AD tenant can issue it to users who initiate the process.
-1. Using the [Azure portal](https://portal.azure.com/), search for *verifiable credentials*. Then select **Verifiable Credentials (Preview)**.
+1. Using the [Azure portal](https://portal.azure.com/), search for **Verified ID** and select it.
1. After you [set up your tenant](verifiable-credentials-configure-tenant.md), the **Create credential** should appear. Alternatively, you can select **Credentials** in the left hand menu and select **+ Add a credential**.
-1. In **Create a new credential**, do the following:
+1. In **Create credential**, select **Custom Credential** and click **Next**:
1. For **Credential name**, enter **VerifiedCredentialExpert**. This name is used in the portal to identify your verifiable credentials. It's included as part of the verifiable credentials contract.
The following screenshot demonstrates how to create a new credential:
Now that you have a new credential, you're going to gather some information about your environment and the credential that you created. You use these pieces of information when you set up your sample application.
-1. In Verifiable Credentials, select **Issue credential** and switch to **Custom issue**.
+1. In Verifiable Credentials, select **Issue credential**.
![Screenshot that shows how to select the newly created verified credential.](media/verifiable-credentials-configure-issuer/issue-credential-custom-view.png)
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
The Verifiable Credentials Service Request is the Request Service API, and it ne
To set up Microsoft Entra Verified ID, follow these steps:
-1. In the [Azure portal](https://portal.azure.com/), search for *verifiable credentials*. Then, select **Verifiable Credentials (Preview)**.
+1. In the [Azure portal](https://portal.azure.com/), search for *Verified ID*. Then, select **Verified ID**.
1. From the left menu, select **Getting started**.
To add the required permissions, follow these steps:
## Service endpoint configuration
-1. In the Azure portal, navigate to the Verifiable credentials page.
+1. Navigate to Verified ID in the Azure portal.
1. Select **Registration**. 1. Notice that there are two sections: 1. Website ID registration
active-directory Verifiable Credentials Configure Verifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-verifier.md
Last updated 06/16/2022
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-In [Issue Microsoft Entra Verified ID credentials from an application (preview)](verifiable-credentials-configure-issuer.md), you learn how to issue and verify credentials by using the same Azure Active Directory (Azure AD) tenant. In this tutorial, you go over the steps needed to present and verify your first verifiable credential: a verified credential expert card.
+In [Issue Microsoft Entra Verified ID credentials from an application](verifiable-credentials-configure-issuer.md), you learn how to issue and verify credentials by using the same Azure Active Directory (Azure AD) tenant. In this tutorial, you go over the steps needed to present and verify your first verifiable credential: a verified credential expert card.
As a verifier, you unlock privileges to subjects that possess verified credential expert cards. In this tutorial, you run a sample application from your local computer that asks you to present a verified credential expert card, and then verifies it.
In this article, you learn how to:
- If you want to clone the repository that hosts the sample app, install [Git](https://git-scm.com/downloads). - [Visual Studio Code](https://code.visualstudio.com/Download) or similar code editor. - [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0).-- [ngrok](https://ngrok.com/) (free).
+- Download [ngrok](https://ngrok.com/) and sign up for a free account.
- A mobile device with Microsoft Authenticator:
- - Android version 6.2108.5654 or later installed.
- - iOS version 6.5.82 or later installed.
+ - Android version 6.2206.3973 or later installed.
+ - iOS version 6.6.2 or later installed.
## Gather tenant details to set up your sample application Now that you've set up your Microsoft Entra Verified ID service, you're going to gather some information about your environment and the verifiable credentials you set. You use these pieces of information when you set up your sample application.
-1. From **Verifiable credentials (Preview)**, select **Organization settings**.
+1. From **Verified ID**, select **Organization settings**.
1. Copy the **Tenant identifier** value, and record it for later. 1. Copy the **Decentralized identifier** value, and record it for later.
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
This page contains commonly asked questions about Verifiable Credentials and Dec
- [Conceptual questions about decentralized identity](#conceptual-questions) - [Questions about using Verifiable Credentials preview](#using-the-preview)
-> [!IMPORTANT]
-> Azure Active Directory Verifiable Credentials is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## The basics ### What is a DID?
-Decentralized Identifers(DIDs) are unique identifiers that can be used to secure access to resources, sign and verify credentials, and facilitate application data exchange. Unlike traditional usernames and email addresses, DIDs are owned and controlled by the entity itself (be it a person, device, or company). DIDs exist independently of any external organization or trusted intermediary. [The W3C Decentralized Identifier spec](https://www.w3.org/TR/did-core/) explains this in further detail.
+Decentralized Identifiers (DIDs) are unique identifiers that can be used to secure access to resources, sign and verify credentials, and facilitate application data exchange. Unlike traditional usernames and email addresses, DIDs are owned and controlled by the entity itself (be it a person, device, or company). DIDs exist independently of any external organization or trusted intermediary. [The W3C Decentralized Identifier spec](https://www.w3.org/TR/did-core/) explains this in further detail.
### Why do we need a DID?
There are multiple ways of offering a recovery mechanism to users, each with the
### How can a user trust a request from an issuer or verifier? How do they know a DID is the real DID for an organization?
-We implement [the Decentralized Identity Foundation's Well Known DID Configuration spec](https://identity.foundation/.well-known/resources/did-configuration/) in order to connect a DID to a highly known existing system, domain names. Each DID created using the Azure Active Directory Verifiable Credentials has the option of including a root domain name that will be encoded in the DID Document. Follow the article titled [Link your Domain to your Distributed Identifier](how-to-dnsbind.md) to learn more.
+We implement [the Decentralized Identity Foundation's Well Known DID Configuration spec](https://identity.foundation/.well-known/resources/did-configuration/) in order to connect a DID to a well-known existing system: domain names. Each DID created using Entra Verified ID has the option of including a root domain name that will be encoded in the DID Document. Follow the article titled [Link your Domain to your Distributed Identifier](how-to-dnsbind.md) to learn more.
-### Why does the Verifiable Credential preview use ION as its DID method, and therefore Bitcoin to provide decentralized public key infrastructure?
+### Why does Entra Verified ID support ION as its DID method, and therefore Bitcoin to provide decentralized public key infrastructure?
Microsoft now offers two different trust systems: Web and ION. You may choose to use either one of them during tenant onboarding. ION is a permissionless, scalable, decentralized identifier (DID) Layer 2 network that runs atop Bitcoin. It achieves scalability without including a special crypto asset token, trusted validators, or centralized consensus mechanisms. We use Bitcoin for the base Layer 1 substrate because the strength of the decentralized network provides a high degree of immutability for a chronological event record system.
Yes! The following repositories are the open-sourced components of our services.
There are no special licensing requirements to issue verifiable credentials. All you need is an Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ### Updating the VC Service configuration
-The following instructions will take 15 mins to complete and are only required if you have been using the Azure AD Verifiable Credentials service prior to April 25, 2022. You are required to execute these steps to update the existing service principals in your tenant that run the verifiable credentials service the following is an overview of the steps:
+The following instructions take about 15 minutes to complete and are only required if you have been using the Entra Verified ID service prior to April 25, 2022. You must execute these steps to update the existing service principals in your tenant that run the verifiable credentials service. The following is an overview of the steps:
1. Register new service principals for the Azure AD Verifiable Service 1. Update the Key Vault access policies
For the Request API the new scope for your application or Postman is now:
```3db474b9-6a0c-4840-96ac-1fceb342124f/.default ```
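For illustration only, here is a minimal sketch of requesting a token for that scope with the Azure CLI; passing the application ID shown above as the resource is equivalent to its `/.default` scope.

```azurecli
# Minimal sketch: get an access token for the Request Service application ID shown above.
# Requesting the bare resource ID is equivalent to the <app-id>/.default scope.
az account get-access-token --resource 3db474b9-6a0c-4840-96ac-1fceb342124f --query accessToken -o tsv
```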
-### How do I reset the Azure AD Verifiable credentials service?
+### How do I reset the Entra Verified ID service?
-Resetting requires that you opt out and opt back into the Azure Active Directory Verifiable Credentials service, your existing verifiable credentials configurations will reset and your tenant will obtain a new DID to use during issuance and presentation.
+Resetting requires that you opt out of and opt back into the Entra Verified ID service. Your existing verifiable credentials configurations will reset, and your tenant will obtain a new DID to use during issuance and presentation.
1. Follow the [opt-out](how-to-opt-out.md) instructions.
-1. Go over the Azure Active Directory Verifiable credentials [deployment steps](verifiable-credentials-configure-tenant.md) to reconfigure the service.
+1. Go over the Entra Verified ID [deployment steps](verifiable-credentials-configure-tenant.md) to reconfigure the service.
1. If you are in the European region, it's recommended that your Azure Key Vault and container are in the same European region; otherwise, you may experience some performance and latency issues. Create new instances of these services in the same EU region as needed. 1. Finish [setting up](verifiable-credentials-configure-tenant.md#set-up-verifiable-credentials) your verifiable credentials service. You need to recreate your credentials. 1. If your tenant needs to be configured as an issuer, it's recommended that your storage account is in the same European region as your Verifiable Credentials service.
Resetting requires that you opt out and opt back into the Azure Active Directory
### How can I check my Azure AD Tenant's region?
-1. In the [Azure portal](https://portal.azure.com), go to Azure Active Directory for the subscription you use for your Azure Active Directory Verifiable credentials deployment.
+1. In the [Azure portal](https://portal.azure.com), go to Azure Active Directory for the subscription you use for your Entra Verified ID deployment.
1. Under Manage, select Properties :::image type="content" source="media/verifiable-credentials-faq/region.png" alt-text="settings delete and opt out"::: 1. See the value for Country or Region. If the value is a country or a region in Europe, your Microsoft Entra Verified ID service will be set up in Europe. ### How can I check if my tenant has the new Hub endpoint?
-1. In the Azure portal, go to the Verifiable Credentials service.
+1. Navigate to Verified ID in the Azure portal.
1. Navigate to the Organization Settings. 1. Copy your organization's Decentralized Identifier (DID). 1. Go to the ION Explorer and paste the DID in the search box
Resetting requires that you opt out and opt back into the Azure Active Directory
], ```
-### If I reconfigure the Azure AD Verifiable Credentials service, do I need to relink my DID to my domain?
+### If I reconfigure the Entra Verified ID service, do I need to relink my DID to my domain?
Yes, after reconfiguring your service, your tenant has a new DID to use to issue and verify verifiable credentials. You need to [associate your new DID](how-to-dnsbind.md) with your domain.
No, at this point it isn't possible to keep your tenant's DID after you have opt
## Next steps -- [How to customize your Azure Active Directory Verifiable Credentials](credential-design.md)
+- [Customize your verifiable credentials](credential-design.md)
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Title: What's new for Microsoft Entra Verified ID (preview)
+ Title: What's new for Microsoft Entra Verified ID
description: Recent updates for Microsoft Entra Verified ID
This article lists the latest features, improvements, and changes in the Microso
Microsoft Entra Verified ID is now generally available (GA) as the new member of the Microsoft Entra portfolio! [read more](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-verified-id-now-generally-available/ba-p/3295506) ### Known issues -- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service) without issuing any Verifiable Credential will get a `Specified resource does not exist` error from the Admin API and/or the Entra portal. A fix for this issue should be available by 08/20/22.
+- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) without issuing any Verifiable Credential will get a `Specified resource does not exist` error from the Admin API and/or the Entra portal. A fix for this issue should be available by 08/20/22.
## July 2022
Microsoft Entra Verified ID is now generally available (GA) as the new member of
- Request Service API **[Error codes](error-codes.md)** have been **updated** - The **[Admin API](admin-api.md)** is made **public** and is documented. The Azure portal is using the Admin API and with this REST API you can automate the onboarding or your tenant and creation of credential contracts. - Find issuers and credentials to verify via the [The Microsoft Entra Verified ID Network](how-use-vcnetwork.md).-- For migrating your Azure Storage based credentials to become Managed Credentials there is a PowerShell script in the [github samples repo](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/contractmigration/scripts/contractmigration) for the task.
+- For migrating your Azure Storage based credentials to become Managed Credentials there is a PowerShell script in the [GitHub samples repo](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/contractmigration/scripts/contractmigration) for the task.
- We also made the following updates to our Plan and design docs: - (updated) [architecture planning overview](introduction-to-verifiable-credentials-architecture.md).
Microsoft Entra Verified ID is now generally available (GA) as the new member of
## June 2022 -- We are adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as a new, default, trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verifiable-credentials). VC Administrators can still choose to use ION when setting a tenant. If you want to use did:web instead of ION or viceversa, you will need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service).
+- We are adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as the new default trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verifiable-credentials). VC Administrators can still choose to use ION when setting up a tenant. If you want to use did:web instead of ION or vice versa, you will need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service).
- We are rolling out several features to improve the overall experience of creating verifiable credentials in the Entra Verified ID platform: - Introducing Managed Credentials: Managed Credentials are verifiable credentials that no longer use Azure Storage to store the [display & rules JSON definitions](rules-and-display-definitions-model.md). Their display and rule definitions are different from earlier versions. - Create Managed Credentials using the [new quickstart experience](how-to-use-quickstart.md).
We are rolling out some breaking changes to our service. These updates require M
- We made updates to Microsoft Authenticator that change the interaction between the Issuer of a verifiable credential and the user presenting the verifiable credential. This update forces all Verifiable Credentials to be reissued in Microsoft Authenticator for Android. [More information](whats-new.md?#microsoft-authenticator-did-generation-update) >[!IMPORTANT]
-> All Azure AD Verifiable Credential customers receiving a banner notice in the Azure portal need to go through a service reconfiguration before March 31st 2022. On March 31st 2022 tenants that have not been reconfigured will lose access to any previous configuration. Administrators will have to set up a new instance of the Azure AD Verifiable Credential service. Learn more about how to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service).
+> All Azure AD Verifiable Credential customers receiving a banner notice in the Azure portal need to go through a service reconfiguration before March 31st, 2022. On March 31st, 2022, tenants that have not been reconfigured will lose access to any previous configuration. Administrators will have to set up a new instance of the Azure AD Verifiable Credential service. Learn more about how to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service).
### Microsoft Entra Verified ID available in Europe
Since the beginning of the Microsoft Entra Verified ID service public preview, t
Take the following steps to configure the Verifiable Credentials service in Europe: 1. [Check the location](verifiable-credentials-faq.md#how-can-i-check-my-azure-ad-tenants-region) of your Azure Active Directory to make sure it is in Europe.
-1. [Reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service) in your tenant.
+1. [Reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) in your tenant.
>[!IMPORTANT]
-> On March 31st, 2022 European tenants that have not been [reconfigured](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service) in Europe will lose access to any previous configuration and will require to configure a new instance of the Azure AD Verifiable Credential service.
+> On March 31st, 2022, European tenants that have not been [reconfigured](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) in Europe will lose access to any previous configuration and will need to configure a new instance of the Azure AD Verifiable Credential service.
#### Are there any changes to the way that we use the Request API as a result of this move?
To uptake this feature follow the next steps:
1. [Check if your tenant has the Hub endpoint](verifiable-credentials-faq.md#how-can-i-check-if-my-tenant-has-the-new-hub-endpoint). 1. If so, go to the next step.
- 1. If not, [reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service) in your tenant and go to the next step.
+ 1. If not, [reconfigure the Verifiable Credentials service](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) in your tenant and go to the next step.
1. Create new verifiable credentials contracts. In the rules file you must add the ` "credentialStatusConfiguration": "anonymous" ` property to start using the new feature in combination with the Hub endpoint for your credentials: Sample contract file:
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
test.txt
## Resize a persistent volume without downtime (Preview) > [!IMPORTANT]
-> Azure Disks CSI driver supports resizing PVCs without downtime.
+> Azure Disks CSI driver supports expanding PVCs without downtime (Preview).
> Follow this [link][expand-an-azure-managed-disk] to register the disk online resize feature. > > az feature register --namespace Microsoft.Compute --name LiveResize
+>
+> az feature show --namespace Microsoft.Compute --name LiveResize
+>
+> Follow this [link][expand-pvc-with-downtime] to expand PVCs **with** downtime if you can't use the preview feature.
You can request a larger volume for a PVC. Edit the PVC object, and specify a larger size. This change triggers the expansion of the underlying volume that backs the PV.
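As a minimal sketch of that edit (the PVC name `pvc-azuredisk` and the 100Gi target are hypothetical), you could patch the claim directly with kubectl:

```bash
# Hypothetical PVC name and size: request more storage on the existing claim.
kubectl patch pvc pvc-azuredisk --type merge -p '{"spec": {"resources": {"requests": {"storage": "100Gi"}}}}'

# Check the reported capacity once the expansion completes.
kubectl get pvc pvc-azuredisk -o jsonpath='{.status.capacity.storage}'
```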
The output of the command resembles the following example:
[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/ [csi-driver-parameters]: https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/docs/driver-parameters.md [create-burstable-storage-class]: https://github.com/Azure-Samples/burstable-managed-csi-premium
+[expand-pvc-with-downtime]: https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/docs/known-issues/sizegrow.md
<!-- LINKS - internal --> [azure-disk-volume]: azure-disk-volume.md
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
az aks create \
The following screenshot from the Azure portal shows an example of configuring these settings during AKS cluster creation: ## Dynamic allocation of IPs and enhanced subnet support
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
KMS supports [public key vault][Enable-KMS-with-public-key-vault] and [private k
> > If you need to recover your Key Vault or key, see the [Azure Key Vault recovery management with soft delete and purge protection](../key-vault/general/key-vault-recovery.md?tabs=azure-cli) documentation.
+#### For non-RBAC key vault
+ Use `az keyvault create` to create a KeyVault. ```azurecli
export KEY_ID=$(az keyvault key show --name MyKeyName --vault-name MyKeyVault --
echo $KEY_ID ```
+The above example stores the Key ID in *KEY_ID*.
+
+#### For RBAC key vault
+
+Use `az keyvault create` to create a KeyVault using Azure Role Based Access Control.
+
+```azurecli
+export KEYVAULT_RESOURCE_ID=$(az keyvault create --name MyKeyVault --resource-group MyResourceGroup --enable-rbac-authorization true --query id -o tsv)
+```
+
+Assign yourself permission to create a key.
+
+```azurecli-interactive
+az role assignment create --role "Key Vault Crypto Officer" --assignee-object-id $(az ad signed-in-user show --query id --out tsv) --assignee-principal-type "User" --scope $KEYVAULT_RESOURCE_ID
+```
+
+Use `az keyvault key create` to create a key.
+
+```azurecli
+az keyvault key create --name MyKeyName --vault-name MyKeyVault
+```
+
+Use `az keyvault key show` to export the Key ID.
+
+```azurecli
+export KEY_ID=$(az keyvault key show --name MyKeyName --vault-name MyKeyVault --query 'key.kid' -o tsv)
+echo $KEY_ID
+```
+ The above example stores the Key ID in *KEY_ID*. ### Create a user-assigned managed identity
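The identity-creation commands themselves aren't shown in this update; as a rough sketch (the identity name *MyIdentity* and resource group *MyResourceGroup* are placeholders), the step might look like the following, capturing the object ID that the later `az keyvault set-policy` and `az role assignment create` commands use.

```azurecli
# Placeholder names: create a user-assigned managed identity.
az identity create --name MyIdentity --resource-group MyResourceGroup

# Capture the identity's object (principal) ID for the permission steps that follow.
export IDENTITY_OBJECT_ID=$(az identity show --name MyIdentity --resource-group MyResourceGroup --query principalId -o tsv)
echo $IDENTITY_OBJECT_ID
```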
az keyvault set-policy -n MyKeyVault --key-permissions decrypt encrypt --object-
#### For RBAC key vault
-If your key vault is enabled with `--enable-rbac-authorization`, you need to assign the "Key Vault Administrator" RBAC role which has decrypt, encrypt permission.
+If your key vault is enabled with `--enable-rbac-authorization`, you need to assign the "Key Vault Crypto User" RBAC role which has decrypt, encrypt permission.
```azurecli-interactive az role assignment create --role "Key Vault Crypto User" --assignee-object-id $IDENTITY_OBJECT_ID --assignee-principal-type "ServicePrincipal" --scope $KEYVAULT_RESOURCE_ID
analysis-services Analysis Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-overview.md
Because tabular models in Azure Analysis Services are much the same as tabular m
### Contribute!
-Analysis Services documentation, like this article, is open source. To learn more about how you can contribute, see the [Docs contributor guide](/contribute/).
+Analysis Services documentation, like this article, is open source. To learn more about how you can contribute, see our [contributor guide](/contribute/).
Azure Analysis Services documentation also uses [GitHub Issues](/teamblog/a-new-feedback-system-is-coming-to-docs). You can provide feedback about the product or documentation. Use **Feedback** at the bottom of an article. GitHub Issues are not enabled for the shared Analysis Services documentation.
api-management Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/observability.md
Title: Observability in Azure API Management | Microsoft Docs
-description: Overview of all observability options in Azure API Management.
+description: Overview of all API observability and monitoring options in Azure API Management.
documentationcenter: ''--+ na-+ Last updated 06/01/2020
Azure API Management helps organizations centralize the management of all APIs.
## Overview
-Azure API Management allows you to choose use the managed gateway or [self-hosted gateway](self-hosted-gateway-overview.md), either self-deployed or by using an [Azure Arc extension](how-to-deploy-self-hosted-gateway-azure-arc.md).
+Azure API Management allows you to choose to use the managed gateway or [self-hosted gateway](self-hosted-gateway-overview.md), either self-deployed or by using an [Azure Arc extension](how-to-deploy-self-hosted-gateway-azure-arc.md).
-The table below summarizes all the observability capabilities supported by API Management to operate APIs and what deployment models they support.
+The table below summarizes all the observability capabilities supported by API Management to operate APIs and what deployment models they support. These capabilities can be used by API publishers and others who have permissions to operate or manage the API Management instance.
+> [!NOTE]
+> For API consumers who use the developer portal, a built-in API report is available. It only provides information about their individual API usage during the preceding 90 days.
+>
| Tool | Useful for | Data lag | Retention | Sampling | Data kind | Supported Deployment Model(s) | |:- |:-|:- |:-|:- |: |:- | | **[API Inspector](api-management-howto-api-inspector.md)** | Testing and debugging | Instant | Last 100 traces | Turned on per request | Request traces | Managed, Self-hosted, Azure Arc |
-| **Built-in Analytics** | Reporting and monitoring | Minutes | Lifetime | 100% | Reports and logs | Managed |
+| **[Built-in Analytics](howto-use-analytics.md)** | Reporting and monitoring | Minutes | Lifetime | 100% | Reports and logs | Managed |
| **[Azure Monitor Metrics](api-management-howto-use-azure-monitor.md)** | Reporting and monitoring | Minutes | 90 days (upgrade to extend) | 100% | Metrics | Managed, Self-hosted<sup>2</sup>, Azure Arc | | **[Azure Monitor Logs](api-management-howto-use-azure-monitor.md)** | Reporting, monitoring, and debugging | Minutes | 31 days/5GB (upgrade to extend) | 100% (adjustable) | Logs | Managed<sup>1</sup>, Self-hosted<sup>3</sup>, Azure Arc<sup>3</sup> | | **[Azure Application Insights](api-management-howto-app-insights.md)** | Reporting, monitoring, and debugging | Seconds | 90 days/5GB (upgrade to extend) | Custom | Logs, metrics | Managed<sup>1</sup>, Self-hosted<sup>1</sup>, Azure Arc<sup>1</sup> |
-| **[Logging through Azure Event Hub](api-management-howto-log-event-hubs.md)** | Custom scenarios | Seconds | User managed | Custom | Custom | Managed<sup>1</sup>, Self-hosted<sup>1</sup>, Azure Arc<sup>1</sup> |
+| **[Logging through Azure Event Hubs](api-management-howto-log-event-hubs.md)** | Custom scenarios | Seconds | User managed | Custom | Custom | Managed<sup>1</sup>, Self-hosted<sup>1</sup>, Azure Arc<sup>1</sup> |
| **[OpenTelemetry](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md#introduction-to-opentelemetry)** | Monitoring | Minutes | User managed | 100% | Metrics | Self-hosted<sup>2</sup> | *1. Optional, depending on the configuration of feature in Azure API Management*
The table below summarizes all the observability capabilities supported by API M
## Next Steps
-* [Follow the tutorials to learn more about API Management](import-and-publish.md)
-- To learn more about the self-hosted gateway, see [Self-hosted gateway overview](self-hosted-gateway-overview.md).
+- Get started with [Azure Monitor metrics and logs](api-management-howto-use-azure-monitor.md)
+- Learn how to log requests with [Application Insights](api-management-howto-app-insights.md)
+- Learn how to log events through [Event Hubs](api-management-howto-log-event-hubs.md)
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
For example, here's how to calculate the available addressing for a subnet with
Subnet Size /24 = 255 IP addresses - 5 reserved from the platform = 250 available addresses. 250 - Gateway 1 (10) - 1 private frontend IP configuration = 239 239 - Gateway 2 (2) = 237
-237 - Gateway 3 (15) - 1 private frontend IP configuration = 223
+237 - Gateway 3 (15) - 1 private frontend IP configuration = 221
> [!IMPORTANT] > Although a /24 subnet is not required per Application Gateway v2 SKU deployment, it is highly recommended. This is to ensure that Application Gateway v2 has sufficient space for autoscaling expansion and maintenance upgrades. You should ensure that the Application Gateway v2 subnet has sufficient address space to accommodate the number of instances required to serve your maximum expected traffic. If you specify the maximum instance count, then the subnet should have capacity for at least that many addresses. For capacity planning around instance count, see [instance count details](understanding-pricing.md#instance-count).
applied-ai-services Write A Valid Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/tutorials/write-a-valid-query.md
main branch.
<!-- This template provides the basic structure of a tutorial article.
-See the [tutorial guidance](contribute-how-to-mvc-tutorial.md) in the contributor guide.
+See the [tutorial guidance](contribute-how-to-mvc-tutorial.md) in our contributor guide.
To provide feedback on this template contact [the templates workgroup](mailto:templateswg@microsoft.com).
automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/runbooks.md
$Job = Start-AzAutomationRunbook @StartAzAutomationRunBookParameters
$PollingSeconds = 5 $MaxTimeout = New-TimeSpan -Hours 3 | Select-Object -ExpandProperty TotalSeconds $WaitTime = 0
-while((-NOT (IsJobTerminalState $Job.Status) -and $WaitTime -lt $MaxTimeout) {
+while(-NOT (IsJobTerminalState $Job.Status) -and $WaitTime -lt $MaxTimeout) {
Start-Sleep -Seconds $PollingSeconds $WaitTime += $PollingSeconds $Job = $Job | Get-AzAutomationJob
automation Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md
Azure Service Management (ASM) REST APIs for Azure Automation will be retired an
## Next steps
-If you'd like to contribute to Azure Automation documentation, see the [Docs Contributor Guide](/contribute/).
+If you'd like to contribute to Azure Automation documentation, see our [contributor guide](/contribute/).
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Azure Automation now supports [system-assigned managed identities](./automation-
## Next steps
-If you'd like to contribute to Azure Automation documentation, see the [Docs Contributor Guide](/contribute/).
+If you'd like to contribute to Azure Automation documentation, see our [contributor guide](/contribute/).
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-|Cisco Hyperflex on VMware <br/> Cisco IKS ESXi 6.7 U3 |v1.20.14|v1.4.1_2022-03-08|15.0.2255.119| PostgreSQL 12.3 (Ubuntu 12.3-1) |
+|Cisco Hyperflex on VMware <br/> Cisco IKS ESXi 6.7 U3 |1.21.13|v1.9.0_2022-07-12|16.0.312.4243| Not validated |
### Dell |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--| | Dell EMC PowerFlex |1.21.5|v1.4.1_2022-03-08|15.0.2255.119 | Not validated |
-| PowerFlex version 3.6 |1.21.5|v1.4.1_2022-03-08|15.0.2255.119 | Not validated |
-| PowerFlex CSI version 1.4 |1.21.5|v1.4.1_2022-03-08 | Not validated |
-| PowerStore X|1.20.6|v1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1) |
-| PowerStore T|1.20.6|v1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1)|
+| PowerFlex version 3.6 |v1.21.5|v1.4.1_2022-03-08|15.0.2255.119 | Not validated |
+| PowerFlex CSI version 1.4 |1.21.5|1.4.1_2022-03-08 | Not validated |
+| PowerStore X|1.20.6|1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1) |
+| PowerStore T|1.23.5|1.9.0_2022-07-12|16.0.312.4243 |postgres 12.3 (Ubuntu 12.3-1)|
### HPE |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-|HPE Superdome Flex 280|1.20.0|v1.8.0_2022-06-14|16.0.41.7339|12.3 (Ubuntu 12.3-1)
+|HPE Superdome Flex 280|1.20.0|1.8.0_2022-06-14|16.0.41.7339|12.3 (Ubuntu 12.3-1)
### Kublr |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-|Kublr |1.22.3 / 1.22.10 | v1.9.0_2022-07-12 |15.0.2195.191 |PostgreSQL 12.3 (Ubuntu 12.3-1) |
+|Kublr |1.22.3 / 1.22.10 | 1.9.0_2022-07-12 |15.0.2195.191 |PostgreSQL 12.3 (Ubuntu 12.3-1) |
### Lenovo |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-|Lenovo ThinkAgile MX3520 |AKS on Azure Stack HCI 21H2|v1.0.0_2021-07-30 |15.0.2148.140|Not validated|
+|Lenovo ThinkAgile MX3520 |AKS on Azure Stack HCI 21H2|1.0.0_2021-07-30 |15.0.2148.140|Not validated|
### Nutanix |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| Karbon 2.2<br/>AOS: 5.19.1.5<br/>AHV: 20201105.1021<br/>PC: Version pc.2021.3.02<br/> | 1.19.8-0 | v1.0.0_2021-07-30 | 15.0.2148.140|postgres 12.3 (Ubuntu 12.3-1)|
+| Karbon 2.2<br/>AOS: 5.19.1.5<br/>AHV: 20201105.1021<br/>PC: Version pc.2021.3.02<br/> | 1.19.8-0 | 1.0.0_2021-07-30 | 15.0.2148.140|postgres 12.3 (Ubuntu 12.3-1)|
### Platform 9 |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| Platform9 Managed Kubernetes v5.3.0 | 1.20.5 | v1.0.0_2021-07-30| 15.0.2195.191 | PostgreSQL 12.3 (Ubuntu 12.3-1) |
+| Platform9 Managed Kubernetes v5.3.0 | 1.20.5 | 1.0.0_2021-07-30| 15.0.2195.191 | PostgreSQL 12.3 (Ubuntu 12.3-1) |
### PureStorage |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| Portworx Enterprise 2.7 1.22.5 | 1.20.7 | v1.1.0_2021-11-02 | 15.0.2148.140 | Not validated |
+| Portworx Enterprise 2.7 1.22.5 | 1.20.7 | 1.1.0_2021-11-02 | 15.0.2148.140 | Not validated |
### Red Hat |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| OpenShift 4.7.13 | 1.20.0 | v1.0.0_2021-07-30 | 15.0.2148.140 | postgres 12.3 (Ubuntu 12.3-1)|
+| OpenShift 4.7.13 | 1.20.0 | 1.0.0_2021-07-30 | 15.0.2148.140 | postgres 12.3 (Ubuntu 12.3-1)|
### VMware |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| TKGm v1.5.1 | 1.20.5 | v1.4.1_2022-03-08 |15.0.2255.119|postgres 12.3 (Ubuntu 12.3-1)|
+| TKGm v1.5.3 | 1.22.8 | 1.9.0_2022-07-12 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1)|
### Wind River |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-|Wind River Cloud Platform 22.06 | v1.23.1|v1.9.0_2022-07-12 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1) |
+|Wind River Cloud Platform 22.06 | 1.23.1|1.9.0_2022-07-12 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1) |
## Data services validation process
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-en
|Azure Resource Manager (ARM) API version|2022-03-01-preview (No change)| |`arcdata` Azure CLI extension version|1.4.5 ([Download](https://arcdataazurecliextension.blob.core.windows.net/stage/arcdata-1.4.5-py2.py3-none-any.whl))| |Arc enabled Kubernetes helm chart extension version|1.2.20381002|
-|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.5.0 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/arc-1.5.0.vsix))</br>1.5.0 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/azcli-1.5.0.vsix))|
+|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.5.1 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/arc-1.5.1.vsix))</br>1.5.1 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/azcli-1.5.1.vsix))|
## July 12, 2022
azure-cache-for-redis Cache How To Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-import-export-data.md
Use import to bring Redis compatible RDB files from any Redis server running in
> >
-1. To import one or more exported cache blobs, [browse to your cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the Azure portal and select **Import data** from the **Resource menu**. In the working pane you see **Choose Blob(s)** where you can find .RDB files.
+1. To import one or more exported cache blobs, [browse to your cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the Azure portal and select **Import data** from the **Resource menu**. In the working pane, you see **Choose Blob(s)** where you can find RDB files.
:::image type="content" source="./media/cache-how-to-import-export-data/cache-import-data.png" alt-text="Screenshot showing Import data selected in the Resource menu.":::
Export allows you to export the data stored in Azure Cache for Redis to Redis co
:::image type="content" source="./media/cache-how-to-import-export-data/cache-export-data-choose-account.png" alt-text="Screenshot showing a list of containers in the working pane.":::
-3. Choose the storage container you want to hold your export, then **Select**. If you want a new container, select **Add Container** to add it first and then select it from the list.
+3. Choose the storage container you want to hold your export, then **Select**. If you want a new container, select **Add Container** to add it first, and then select it from the list.
:::image type="content" source="./media/cache-how-to-import-export-data/cache-export-data-container.png" alt-text="Screenshot of a list of containers with one highlighted and a select button.":::
To resolve this error, start the import or export operation before 15 minutes ha
### I got an error when exporting my data to Azure Blob Storage. What happened?
-Export works only with RDB files stored as page blobs. Other blob types aren't currently supported, including Blob storage accounts with hot and cool tiers. For more information, see [Azure storage account overview](../storage/common/storage-account-overview.md).
+Export works only with RDB files stored as page blobs. Other blob types aren't currently supported, including Blob storage accounts with hot and cool tiers. For more information, see [Azure storage account overview](../storage/common/storage-account-overview.md). If you're using an access key to authenticate to a storage account, having firewall exceptions on the storage account tends to cause the import/export process to fail.
## Next steps
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Azure Cache for Redis offers Redis persistence using the Redis database (RDB) an
- **RDB persistence** - When you use RDB persistence, Azure Cache for Redis persists a snapshot of your cache in a binary format. The snapshot is saved in an Azure Storage account. The configurable backup frequency determines how often to persist the snapshot. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the most recent snapshot. Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence. - **AOF persistence** - When you use AOF persistence, Azure Cache for Redis saves every write operation to a log. The log is saved at least once per second into an Azure Storage account. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the stored write operations. Learn more about the [advantages](https://redis.io/topics/persistence#aof-advantages) and [disadvantages](https://redis.io/topics/persistence#aof-disadvantages) of AOF persistence.
-Azure Cache for Redis persistence features are intended to be used to restore data to the same cache after data loss and the RDB/AOF persisted data files cannot be imported to a new cache.
+Azure Cache for Redis persistence features are intended to be used to restore data to the same cache after data loss; the RDB/AOF persisted data files can't be imported to a new cache.
To move data across caches, use the Import/Export feature. For more information, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
-To generate backup of data that can be added to a new cache, you can write automated scripts using PowerShell or CLI to export data periodically.
+To generate backups of data that can be added to a new cache, you can write automated scripts that use PowerShell or the Azure CLI to export data periodically.
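As a rough sketch of such a script step (the cache name, resource group, and container SAS URL are placeholders), a periodic export with the Azure CLI could look like this:

```azurecli
# Placeholder names: export the cache contents to a blob container identified by a SAS URL.
az redis export \
  --name MyCache \
  --resource-group MyResourceGroup \
  --prefix backup-$(date +%Y%m%d) \
  --container "<blob-container-SAS-URL>"
```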
> [!NOTE] > Persistence features are intended to be used to restore data to the same cache after data loss.
The following list contains answers to commonly asked questions about Azure Cach
- [Can I use the same storage account for persistence across two different caches?](#can-i-use-the-same-storage-account-for-persistence-across-two-different-caches) - [Will I be charged for the storage being used in Data Persistence](#will-i-be-charged-for-the-storage-being-used-in-data-persistence) - [How frequently does RDB and AOF persistence write to my blobs, and should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete)
+- [Will having firewall exceptions on the storage account affect persistence](#will-having-firewall-exceptions-on-the-storage-account-affect-persistence)
### RDB persistence
When clustering is enabled, each shard in the cache has its own set of page blob
After a rewrite, two sets of AOF files exist in storage. Rewrites occur in the background and append to the first set of files. Set operations, sent to the cache during the rewrite, append to the second set. A backup is temporarily stored during rewrites if there's a failure. The backup is promptly deleted after a rewrite finishes. If soft delete is turned on for your storage account, the soft delete setting applies and existing backups continue to stay in the soft delete state.
+### Will having firewall exceptions on the storage account affect persistence
+Using managed identity adds the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to carry out. If you aren't using managed identity and instead authorize access to the storage account with a key, having firewall exceptions on the storage account tends to break the persistence process.
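When you do use managed identity, the storage account firewall typically also needs the trusted-services exception enabled; a minimal sketch of that setting with the Azure CLI (storage account and resource group names are placeholders) follows.

```azurecli
# Placeholder names: allow trusted Azure services through the storage account firewall.
az storage account update \
  --name mystorageaccount \
  --resource-group MyResourceGroup \
  --bypass AzureServices
```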
+ ## Next steps Learn more about Azure Cache for Redis features.
azure-cache-for-redis Cache Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-managed-identity.md
# Managed identity for storage (Preview)
-[Managed identities](../active-directory/managed-identities-azure-resources/overview.md) are a common tool used in Azure to help developers minimize the burden of managing secrets and login information. Managed identities are useful when Azure services connect to each other. Instead of managing authorization between each service, [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) can be used to provide a managed identity that makes the authentication process more streamlined and secure.
+[Managed identities](../active-directory/managed-identities-azure-resources/overview.md) are a common tool used in Azure to help developers minimize the burden of managing secrets and sign-in information. Managed identities are useful when Azure services connect to each other. Instead of managing authorization between each service, [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) can be used to provide a managed identity that makes the authentication process more streamlined and secure.
## Use managed identity with storage accounts
To use managed identity, you must have a premium-tier cache.
> :::image type="content" source="media/cache-managed-identity/basics.png" alt-text="create a premium azure cache":::
-1. Click the **Advanced** tab. Then, scroll down to **(PREVIEW) System assigned managed identity** and select **On**.
+1. Select the **Advanced** tab. Then, scroll down to **(PREVIEW) System assigned managed identity** and select **On**.
:::image type="content" source="media/cache-managed-identity/system-assigned.png" alt-text="Advanced page of the form":::
Set-AzRedisCache -ResourceGroupName \"MyGroup\" -Name \"MyCache\" -IdentityType
:::image type="content" source="media/cache-managed-identity/blob-data.png" alt-text="storage blob data contributor list"::: > [!NOTE]
-> Adding an Azure Cache for Redis instance as a storage blog data contributor through system-assigned identity will conveniently add the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to implement.
+> Adding an Azure Cache for Redis instance as a storage blob data contributor through system-assigned identity conveniently adds the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to implement. If you're not using managed identity and instead authorizing access to a storage account with a key, then having firewall exceptions on the storage account tends to break the persistence and import-export processes.
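For reference, a hedged CLI sketch of that role assignment (assuming the cache's system-assigned identity is already enabled; the resource names are placeholders) might look like this:

```azurecli
# Placeholder names: look up the cache's system-assigned identity and the storage account ID.
CACHE_PRINCIPAL_ID=$(az redis show --name MyCache --resource-group MyResourceGroup --query identity.principalId -o tsv)
STORAGE_ID=$(az storage account show --name mystorageaccount --resource-group MyResourceGroup --query id -o tsv)

# Grant the cache identity the Storage Blob Data Contributor role on the storage account.
az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee-object-id $CACHE_PRINCIPAL_ID \
  --assignee-principal-type ServicePrincipal \
  --scope $STORAGE_ID
```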
## Use managed identity to access a storage account
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
Title: "Quickstart: Create your first C# function in Azure using Visual Studio"
-description: "In this quickstart, you learn how to use Visual Studio to create and publish a C# HTTP triggered function to Azure Functions that runs on .NET Core 3.1."
+description: "In this quickstart, you learn how to use Visual Studio to create and publish a C# HTTP triggered function to Azure Functions."
ms.assetid: 82db1177-2295-4e39-bd42-763f6082e796 Previously updated : 06/13/2022 Last updated : 09/08/2022 ms.devlang: csharp adobe-target: true
adobe-target-content: ./functions-create-your-first-function-visual-studio-uiex
Azure Functions lets you use Visual Studio to create local C# function projects and then easily publish this project to run in a scalable serverless environment in Azure. If you prefer to develop your C# apps locally using Visual Studio Code, you should instead consider the [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
-By default, this article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) version of .NET, such as .NET 6. To create C# functions on .NET 6 that can also run on .NET 5.0 and .NET Framework 4.8 (in preview) [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](functions-create-your-first-function-visual-studio.md?tabs=isolated-process).
+By default, this article shows you how to create C# functions that run [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) versions of .NET. To create C# functions [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](functions-create-your-first-function-visual-studio.md?tabs=isolated-process). Check out [.NET supported versions](functions-dotnet-class-library.md#supported-versions) before getting started.
In this article, you learn how to: > [!div class="checklist"]
-> * Use Visual Studio to create a C# class library project on .NET 6.0.
+> * Use Visual Studio to create a C# class library project.
> * Create a function that responds to HTTP requests. > * Run your code locally to verify function behavior. > * Deploy your code project to Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in you
## Prerequisites
-+ [Visual Studio 2022](https://visualstudio.microsoft.com/vs/), which supports .NET 6.0. Make sure to select the **Azure development** workload during installation.
++ [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). Make sure to select the **Azure development** workload during installation. + [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing). If you don't already have an account, [create a free one](https://azure.microsoft.com/free/dotnet/) before you begin.
The Azure Functions project template in Visual Studio creates a C# class library
| Setting | Value | Description | | | - |-- |
- | **Functions worker** | **.NET 6** | When you choose **.NET 6**, you create a project that runs in-process with the Azure Functions runtime. Use in-process unless you need to run your function app on .NET 5.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](functions-dotnet-class-library.md#supported-versions). |
+ | **Functions worker** | **.NET 6** | When you choose **.NET 6**, you create a project that runs in-process with the Azure Functions runtime. Use in-process unless you need to run your function app on .NET 7.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](functions-dotnet-class-library.md#supported-versions). |
| **Function** | **HTTP trigger** | This value creates a function triggered by an HTTP request. | | **Use Azurite for runtime storage account (AzureWebJobsStorage)** | Enable | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the [Azurite emulator](../storage/common/storage-use-azurite.md?tabs=visual-studio) is used. | | **Authorization level** | **Anonymous** | The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function. For more information about keys and authorization, see [Authorization keys](./functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](./functions-bindings-http-webhook.md). |
The Azure Functions project template in Visual Studio creates a C# class library
| Setting | Value | Description | | | - |-- |
- | **Functions worker** | **.NET 6 Isolated** | When you choose **.NET 6 Isolated**, you create a project that runs in a separate worker process. Choose isolated process when you need to run your function app on .NET 5.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions). |
+ | **Functions worker** | **.NET 6 Isolated** | When you choose **.NET 6 Isolated**, you create a project that runs in a separate worker process. Choose isolated process when you need to run your function app on .NET 7.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions). |
| **Function** | **HTTP trigger** | This value creates a function triggered by an HTTP request. | | **Use Azurite for runtime storage account (AzureWebJobsStorage)** | Enable | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the [Azurite emulator](../storage/common/storage-use-azurite.md?tabs=visual-studio) is used. | | **Authorization level** | **Anonymous** | The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function. For more information about keys and authorization, see [Authorization keys](./functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](./functions-bindings-http-webhook.md). |
Advance to the next article to learn how to add an Azure Storage queue binding t
# [.NET 6 Isolated](#tab/isolated-process)
-To learn more about working with C# functions that run in an isolated process, see the [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+To learn more about working with C# functions that run in an isolated process, see the [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md). Check out [.NET supported versions](functions-dotnet-class-library.md#supported-versions) to see the other .NET versions that are supported in an isolated process.
Advance to the next article to learn how to add an Azure Storage queue binding to your function: > [!div class="nextstepaction"]
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
There are a number of advantages to using deployment slots. The following scenar
- **Different environments for different purposes**: Using different slots gives you the opportunity to differentiate app instances before swapping to production or a staging slot. - **Prewarming**: Deploying to a slot instead of directly to production allows the app to warm up before going live. Additionally, using slots reduces latency for HTTP-triggered workloads. Instances are warmed up before deployment, which reduces the cold start for newly deployed functions. - **Easy fallbacks**: After a swap with production, the slot with a previously staged app now has the previous production app. If the changes swapped into the production slot aren't as you expect, you can immediately reverse the swap to get your "last known good instance" back.-- **Minimize restarts**: Changing app settings in a production slot requires a restart of the running app. You can instead change settings in a staging slot and swap the settings change into productions with a prewarmed instance. This is the recommended way to upgrade between Functions runtime versions while maintaining the highest availability. To learn more, see [Minimum downtime upgrade](functions-versions.md#minimum-downtime-upgrade).
+- **Minimize restarts**: Changing app settings in a production slot requires a restart of the running app. You can instead change settings in a staging slot and swap the settings change into production with a prewarmed instance. This is the recommended way to upgrade between Functions runtime versions while maintaining the highest availability. To learn more, see [Minimum downtime upgrade](functions-versions.md#minimum-downtime-upgrade).
## Swap operations
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
description: Learn how to develop and test Azure Functions by using Azure Functi
ms.devlang: csharp Previously updated : 05/19/2022 Last updated : 09/08/2022 # Develop Azure Functions using Visual Studio
Visual Studio provides the following benefits when you develop your functions:
This article provides details about how to use Visual Studio to develop C# class library functions and publish them to Azure. Before you read this article, consider completing the [Functions quickstart for Visual Studio](functions-create-your-first-function-visual-studio.md).
-Unless otherwise noted, procedures and examples shown are for Visual Studio 2022.
+Unless otherwise noted, procedures and examples shown are for Visual Studio 2022. For more information about Visual Studio 2022 releases, see [the release notes](/visualstudio/releases/2022/release-notes) or the [preview release notes](/visualstudio/releases/2022/release-notes-preview).
## Prerequisites
When you update your Visual Studio 2017 installation, make sure that you're usin
1. If your version is older, update your tools in Visual Studio as shown in the following section.
-### Update your tools in Visual Studio 2017
+### Update your tools in Visual Studio
1. In the **Extensions and Updates** dialog, expand **Updates** > **Visual Studio Marketplace**, choose **Azure Functions and Web Jobs Tools** and select **Update**.
azure-functions Functions Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-get-started.md
Use the following resources to get started.
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio](./functions-create-your-first-function-visual-studio.md)<li>[Visual Studio Code](./create-first-function-vs-code-csharp.md)<li>[Command line](./create-first-function-cli-csharp.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=csharp&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=C%23) |
-| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=csharp)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [C# language reference](./functions-dotnet-class-library.md)|
Use the following resources to get started.
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-java.md)<li>[Jav) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=java&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Java) |
-| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Develop an App using the Maven Plugin for Azure Functions](/learn/modules/develop-azure-functions-app-with-maven-plugin/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial**| <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Develop an App using the Maven Plugin for Azure Functions](/learn/modules/develop-azure-functions-app-with-maven-plugin/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=java)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [Java language reference](./functions-reference-java.md)| ::: zone-end
Use the following resources to get started.
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-node.md)<li>[Node.js terminal/command prompt](./create-first-function-cli-node.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=javascript%2ctypescript&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=JavaScript%2CTypeScript) |
-| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/learn/modules/shift-nodejs-express-apis-serverless/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/learn/modules/shift-nodejs-express-apis-serverless/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=javascript)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [JavaScript](./functions-reference-node.md) or [TypeScript](./functions-reference-node.md#typescript) language reference| ::: zone-end
Use the following resources to get started.
| | | | **Create your first function** | <li>Using [Visual Studio Code](./create-first-function-vs-code-powershell.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=powershell&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=PowerShell) |
-| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/)<li>[Execute an Azure Function with triggers](/learn/modules/execute-azure-function-with-triggers/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=powershell)<li>[Security](./security-concepts.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [PowerShell language reference](./functions-reference-powershell.md)| ::: zone-end
Use the following resources to get started.
| | | | **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-python.md)<li>[Terminal/command prompt](./create-first-function-cli-python.md) | | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=python&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Python) |
-| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/) <br><br>See Microsoft Learn for a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
+| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/learn/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/learn/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/learn/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/learn/modules/create-serverless-logic-with-azure-functions/) <br><br>See a [full listing of interactive tutorials](/learn/browse/?expanded=azure&products=azure-functions).|
| **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=python)<li>[Security](./security-concepts.md)<li>[Improve throughput performance](./python-scale-performance-reference.md)| | **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [Python language reference](./functions-reference-python.md)| ::: zone-end
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
This type of streaming logs requires that Application Insights integration be en
## Next steps
-Learn how to develop, test, and publish Azure Functions by using Azure Functions Core Tools [Microsoft learn module](/learn/modules/develop-test-deploy-azure-functions-with-core-tools/)
-Azure Functions Core Tools is [open source and hosted on GitHub](https://github.com/azure/azure-functions-cli).
-To file a bug or feature request, [open a GitHub issue](https://github.com/azure/azure-functions-cli/issues).
+Learn how to [develop, test, and publish Azure functions by using Azure Functions core tools](/learn/modules/develop-test-deploy-azure-functions-with-core-tools/). Azure Functions Core Tools is [open source and hosted on GitHub](https://github.com/azure/azure-functions-cli). To file a bug or feature request, [open a GitHub issue](https://github.com/azure/azure-functions-cli/issues).
<!-- LINKS -->
azure-maps Clustering Point Data Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-android-sdk.md
source = new DataSource(
); //Import the geojson data and add it to the data source.
-source.importDataFromUrl("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/SamplePoiDataSet.json");
+source.importDataFromUrl("https://samples.azuremaps.com/data/geojson/SamplePoiDataSet.json");
//Add data source to the map. map.sources.add(source);
val source = DataSource(
) //Import the geojson data and add it to the data source.
-source.importDataFromUrl("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/SamplePoiDataSet.json")
+source.importDataFromUrl("https://samples.azuremaps.com/data/geojson/SamplePoiDataSet.json")
//Add data source to the map. map.sources.add(source)
azure-maps Clustering Point Data Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-ios-sdk.md
map.layers.addLayer(
) ```
-For this sample, the following images is loaded into the assets folder of the app.
+For this sample, the following image is loaded into the assets folder of the app.
| ![Earthquake icon image](./media/ios-sdk/cluster-point-data-ios-sdk/earthquake-icon.png) | ![Weather icon image of rain showers](./media/ios-sdk/cluster-point-data-ios-sdk/warning-triangle-icon.png) | |:--:|:--:|
let source = DataSource(options: [
]) // Import the geojson data and add it to the data source.
-let url = URL(string: "https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/SamplePoiDataSet.json")!
+let url = URL(string: "https://samples.azuremaps.com/data/geojson/SamplePoiDataSet.json")!
source.importData(fromURL: url) // Add data source to the map.
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
To see your indoor map, load it into a web browser. It should appear like the im
![indoor map image](media/how-to-use-indoor-module/indoor-map-graphic.png)
-[See live demo](https://azuremapscodesamples.azurewebsites.net/?sample=Creator%20indoor%20maps)
+[See live demo](https://samples.azuremaps.com/?sample=creator-indoor-maps)
## Next steps
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md
The web application that you previously opened in a browser should now reflect t
![Free room in green and Busy room in red](./media/indoor-map-dynamic-styling/room-state.png)
-[See live demo](https://azuremapscodesamples.azurewebsites.net/?sample=Creator%20indoor%20maps)
+[See live demo](https://samples.azuremaps.com/?sample=creator-indoor-maps)
## Next steps
azure-maps Map Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-accessibility.md
Learn about accessibility in the Web SDK modules.
> [!div class="nextstepaction"] > [Drawing tools accessibility](drawing-tools-interactions-keyboard-shortcuts.md)
-Learn about developing accessible apps with Microsoft Learn:
+Learn about developing accessible apps:
> [!div class="nextstepaction"] > [Accessibility in Action Digital Badge Learning Path](https://ready.azurewebsites.net/learning/track/2940)
azure-maps Map Add Popup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-popup.md
var feature = new atlas.data.Feature(new atlas.data.Point([0, 0]), {
subValue: 'Pizza' }, arrayValue: [3, 4, 5, 6],
- imageLink: 'https://azuremapscodesamples.azurewebsites.net/common/images/Pike_Market.jpg'
+ imageLink: 'https://samples.azuremaps.com/images/Pike_Market.jpg'
}); var popup = new atlas.Popup({
azure-maps Map Extruded Polygon Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon-android.md
A choropleth map can be rendered using the polygon extrusion layer. Set the `hei
DataSource source = new DataSource(); //Import the geojson data and add it to the data source.
-source.importDataFromUrl("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/US_States_Population_Density.json");
+source.importDataFromUrl("https://samples.azuremaps.com/data/geojson/US_States_Population_Density.json");
//Add data source to the map. map.sources.add(source);
map.layers.add(layer, "labels");
val source = DataSource() //Import the geojson data and add it to the data source.
-source.importDataFromUrl("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/US_States_Population_Density.json")
+source.importDataFromUrl("https://samples.azuremaps.com/data/geojson/US_States_Population_Density.json")
//Add data source to the map. map.sources.add(source)
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
Web apps that use Bing Maps often use the Bing Maps V8 JavaScript SDK. The Azure
If migrating an existing web application, check to see if it is using an open-source map control library such as Cesium, Leaflet, and OpenLayers. If it is and you would prefer to continue to use that library, you can connect it to the Azure Maps tile services ([road tiles](/rest/api/maps/render/getmaptile) \| [satellite tiles](/rest/api/maps/render/getmapimagerytile)). The links below provide details on how to use Azure Maps in some commonly used open-source map control libraries.
-* [Cesium](https://www.cesium.com/) - A 3D map control for the web. [Code samples](https://azuremapscodesamples.azurewebsites.net/?search=Cesium) \| [Plugin repo]()
-* [Leaflet](https://leafletjs.com/) ΓÇô Lightweight 2D map control for the web. [Code samples](https://azuremapscodesamples.azurewebsites.net/?search=leaflet) \| [Plugin repo]()
-* [OpenLayers](https://openlayers.org/) - A 2D map control for the web that supports projections. [Code samples](https://azuremapscodesamples.azurewebsites.net/?search=openlayers) \| [Plugin repo]()
+* [Cesium](https://www.cesium.com/) - A 3D map control for the web. [Code samples](https://samples.azuremaps.com/?search=Cesium) \| [Plugin repo]()
+* [Leaflet](https://leafletjs.com/) – Lightweight 2D map control for the web. [Code samples](https://samples.azuremaps.com/?search=leaflet) \| [Plugin repo]()
+* [OpenLayers](https://openlayers.org/) - A 2D map control for the web that supports projections. [Code samples](https://samples.azuremaps.com/?search=openlayers) \| [Plugin repo]()
If developing using a JavaScript framework, one of the following open-source projects may be useful:
The following table lists key API features in the Bing Maps V8 JavaScript SDK an
| Heat maps | ✓ | | Tile Layers | ✓ | | KML Layer | ✓ |
-| Contour layer | [Samples](https://azuremapscodesamples.azurewebsites.net/?search=contour) |
+| Contour layer | [Samples](https://samples.azuremaps.com/?search=contour) |
| Data binning layer | Included in the open-source Azure Maps [Gridded Data Source module](https://github.com/Azure-Samples/azure-maps-gridded-data-source) | | Animated tile layer | Included in the open-source Azure Maps [Animation module](https://github.com/Azure-Samples/azure-maps-animations) | | Drawing tools | ✓ |
Azure Maps also has many additional [open-source modules for the web SDK](open-s
The following are some of the key differences between the Bing Maps and Azure Maps Web SDKs to be aware of:
-* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an NPM package is also available for embedding the Web SDK into apps if preferred. For more information, see this [documentation](./how-to-use-map-control.md) for more information. This package also includes TypeScript definitions.
-* Bing Maps provides two hosted branches of their SDK; Release and Experimental. The Experimental branch may receive multiple updates a day when new development is taking place. Azure Maps only hosts a release branch, however experimental features are created as custom modules in the open-source Azure Maps code samples project. Bing Maps used to have a frozen branch as well that was updated less frequently, thus reducing the risk of breaking changes due to a release. In Azure Maps there you can use the NPM module and point to any previous minor version release.
+* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is also available for embedding the Web SDK into apps if preferred. For more information, see the [map control documentation](./how-to-use-map-control.md). This package also includes TypeScript definitions. A minimal npm usage sketch follows this list.
+* Bing Maps provides two hosted branches of their SDK: Release and Experimental. The Experimental branch may receive multiple updates a day when new development is taking place. Azure Maps only hosts a release branch; experimental features are created as custom modules in the open-source Azure Maps code samples project. Bing Maps used to have a frozen branch as well that was updated less frequently, thus reducing the risk of breaking changes due to a release. In Azure Maps, you can use the npm module and point to any previous minor version release.
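The npm route mentioned above can be sketched roughly as follows; the element ID, center, zoom, and key are placeholders, and this assumes a bundler-based app with the `azure-maps-control` package installed.

```javascript
// Minimal sketch, assuming the azure-maps-control npm package is installed and a
// <div id="myMap"></div> exists on the page. The center, zoom, and key are placeholders.
import * as atlas from 'azure-maps-control';
import 'azure-maps-control/dist/atlas.min.css';

const map = new atlas.Map('myMap', {
    center: [-122.33, 47.6],            // Azure Maps uses [longitude, latitude]
    zoom: 12,
    authOptions: {
        authType: 'subscriptionKey',    // Azure AD authentication is also supported
        subscriptionKey: '<Your-Azure-Maps-Key>'
    }
});

map.events.add('ready', () => {
    // Add sources, layers, and controls here, once all map resources have loaded.
});
```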
> [!TIP] > Azure Maps publishes both minified and unminified versions of the SDK. Simply remove `.min` from the file names. The unminified version is useful when debugging issues, but be sure to use the minified version in production to take advantage of the smaller file size.
map.events.add('click', marker, function () {
**Additional resources** * [Add a popup](./map-add-popup.md)
-* [Popup with Media Content](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Popup%20with%20Media%20Content)
-* [Popups on Shapes](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Popups%20on%20Shapes)
-* [Reusing Popup with Multiple Pins](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Reusing%20Popup%20with%20Multiple%20Pins)
+* [Popup with Media Content](https://samples.azuremaps.com/?sample=popup-with-media-content)
+* [Popups on Shapes](https://samples.azuremaps.com/?sample=popups-on-shapes)
+* [Reusing Popup with Multiple Pins](https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins)
* [Popup class](/javascript/api/azure-maps-control/atlas.popup) * [Popup options](/javascript/api/azure-maps-control/atlas.popupoptions)
If you click on one of the traffic icons in Azure Maps, additional information i
**Additional resources** * [Show traffic on the map](./map-show-traffic.md)
-* [Traffic overlay options](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Traffic%20Overlay%20Options)
-* [Traffic control](https://azuremapscodesamples.azurewebsites.net/?sample=Traffic%20controls)
+* [Traffic overlay options](https://samples.azuremaps.com/?sample=traffic-overlay-options)
+* [Traffic control](https://samples.azuremaps.com/?sample=traffic-controls)
### Add a ground overlay
In Azure Maps the drawing tools module needs to be loaded by loading the JavaScr
**Additional resources** * [Documentation](./set-drawing-options.md)
-* [Code samples](https://azuremapscodesamples.azurewebsites.net/#Drawing-Tools-Module)
+* [Code samples](https://samples.azuremaps.com/#drawing-tools-module)
## Additional resources
Review code samples related to migrating other Bing Maps features:
**Data visualizations** > [!div class="nextstepaction"]
-> [Contour layer](https://azuremapscodesamples.azurewebsites.net/?search=contour)
+> [Contour layer](https://samples.azuremaps.com/?search=contour)
> [!div class="nextstepaction"]
-> [Data Binning](https://azuremapscodesamples.azurewebsites.net/?search=data%20binning)
+> [Data Binning](https://samples.azuremaps.com/?search=Data%20Binning)
**Services**
Review code samples related to migrating other Bing Maps features:
> [Show directions from A to B](./map-route.md) > [!div class="nextstepaction"]
-> [Search Autosuggest with JQuery UI](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Search%20Autosuggest%20and%20JQuery%20UI)
+> [Search Autosuggest with JQuery UI](https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui)
Learn more about the Azure Maps Web SDK.
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
Azure Maps can snap coordinates to roads by using the [route directions](/rest/a
There are two different ways to use the route directions API to snap coordinates to roads. * If there are 150 coordinates or fewer, they can be passed as waypoints in the GET route directions API. Using this approach, two different types of snapped data can be retrieved: route instructions will contain the individual snapped waypoints, while the route path will have an interpolated set of coordinates that fill the full path between the coordinates.
-* If there are more than 150 coordinates, the POST route directions API can be used. The coordinates start and end coordinates have to be passed into the query parameter, but all coordinates can be passed into the `supportingPoints` parameter in the body of the POST request and formatted a GeoJSON geometry collection of points. The only snapped data available using this approach will be the route path that is an interpolated set of coordinates that fill the full path between the coordinates. [Here is an example](https://azuremapscodesamples.azurewebsites.net/?sample=Snap%20points%20to%20logical%20route%20path) of this approach using the services module in the Azure Maps Web SDK.
+* If there are more than 150 coordinates, the POST route directions API can be used. The start and end coordinates have to be passed into the query parameter, but all coordinates can be passed into the `supportingPoints` parameter in the body of the POST request, formatted as a GeoJSON geometry collection of points. The only snapped data available using this approach is the route path, an interpolated set of coordinates that fills the full path between the coordinates. [Here is an example](https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path) of this approach using the services module in the Azure Maps Web SDK. A rough sketch of the raw REST call follows below.
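The POST approach can be sketched as below. This is an illustration only: the coordinates are placeholders standing in for the full set, and the endpoint shape, query format, and `supportingPoints` body should be verified against the Route Directions POST reference before use.

```javascript
// Rough sketch of the POST approach. The points below are placeholders standing in
// for the full (>150) coordinate set; verify the endpoint and body shape against
// the Route Directions POST reference.
const key = '<Your-Azure-Maps-Key>';
const points = [[-122.33, 47.6], [-122.34, 47.61], [-122.35, 47.62]]; // [longitude, latitude]

// Only the start and end coordinates go into the query parameter (latitude,longitude pairs).
const query = `${points[0][1]},${points[0][0]}:${points[points.length - 1][1]},${points[points.length - 1][0]}`;

fetch(`https://atlas.microsoft.com/route/directions/json?api-version=1.0&query=${query}&subscription-key=${key}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        // All coordinates are passed as a GeoJSON geometry collection of points.
        supportingPoints: {
            type: 'GeometryCollection',
            geometries: points.map(c => ({ type: 'Point', coordinates: c }))
        }
    })
})
    .then(r => r.json())
    // The snapped, interpolated path is returned in the route legs.
    .then(result => console.log(result.routes[0].legs[0].points));
```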
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
The Azure Maps route directions API does not currently return speed limit data,
The Azure Maps Web SDK uses vector tiles to render the maps. These vector tiles contain the raw road geometry information and can be used to calculate the nearest road to a coordinate for simple snapping of individual coordinates. This is useful when you want the coordinates to visually appear over roads and you are already using the Azure Maps Web SDK to visualize the data.
-This approach however will only snap to the road segments that are loaded within the map view. When zoomed out at country level there may be no road data, so snapping canΓÇÖt be done, however at that zoom level a single pixel can represent the area of several city blocks so snapping isnΓÇÖt needed. To address this, the snapping logic can be applied every time the map has finished moving. [Here is a code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Basic%20snap%20to%20road%20logic) that demonstrates this.
+This approach, however, will only snap to the road segments that are loaded within the map view. When zoomed out at country level there may be no road data, so snapping can't be done; however, at that zoom level a single pixel can represent the area of several city blocks, so snapping isn't needed. To address this, the snapping logic can be applied every time the map has finished moving, as in the sketch below. [Here is a code sample](https://samples.azuremaps.com/?sample=basic-snap-to-road-logic) that demonstrates this.
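A minimal sketch of that per-move snapping idea follows. `gpsPoints`, `snappedSource`, and `getRoadLinesInView()` are hypothetical stand-ins for your raw points, the data source you render, and however you extract road line coordinates from the rendered vector tiles; the snap itself just picks the nearest road vertex, which is usually enough for visual alignment at high zoom levels.

```javascript
// Hypothetical helpers: gpsPoints is your raw [longitude, latitude] array,
// snappedSource is the atlas.source.DataSource being rendered, and
// getRoadLinesInView() stands in for extracting road line coordinates from the
// rendered vector tiles. The snap simply picks the nearest road vertex.
function snapToNearestVertex(point, roadLines) {
    let best = point, bestDist = Infinity;
    roadLines.forEach(line => line.forEach(v => {
        const dx = v[0] - point[0], dy = v[1] - point[1];
        const d = dx * dx + dy * dy;              // approximate; fine for visual snapping
        if (d < bestDist) { bestDist = d; best = v; }
    }));
    return best;
}

// Re-run the snapping logic every time the map has finished moving.
map.events.add('moveend', function () {
    const roadLines = getRoadLinesInView();       // hypothetical helper
    const snapped = gpsPoints.map(p => snapToNearestVertex(p, roadLines));
    snappedSource.setShapes(snapped.map(c => new atlas.data.Point(c)));
});
```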
**Using the Azure Maps vector tiles directly to snap coordinates**
Here are some useful resources around hosting and querying spatial data in Azure
Azure Maps provides client libraries for the following programming languages;
-* JavaScript, TypeScript, Node.js ΓÇô [documentation](./how-to-use-services-module.md) \| [NPM package](https://www.npmjs.com/package/azure-maps-rest)
+* JavaScript, TypeScript, Node.js – [documentation](./how-to-use-services-module.md) \| [npm package](https://www.npmjs.com/package/azure-maps-rest)
Open-source client libraries for other programming languages;
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
You will also learn:
If migrating an existing web application, check to see if it is using an open-source map control library. Examples of open-source map control libraries are Cesium, Leaflet, and OpenLayers. You can still migrate your application, even if it uses an open-source map control library and you do not want to use the Azure Maps Web SDK. In such a case, connect your application to the Azure Maps tile services ([road tiles](/rest/api/maps/render/getmaptile) \| [satellite tiles](/rest/api/maps/render/getmapimagerytile)). The following points detail how to use Azure Maps in some commonly used open-source map control libraries.
-* Cesium - A 3D map control for the web. [Code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Raster%20Tiles%20in%20Cesium%20JS) \| [Documentation](https://www.cesium.com/)
-* Leaflet ΓÇô Lightweight 2D map control for the web. [Code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Azure%20Maps%20Raster%20Tiles%20in%20Leaflet%20JS) \| [Documentation](https://leafletjs.com/)
-* OpenLayers - A 2D map control for the web that supports projections. [Code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Raster%20Tiles%20in%20OpenLayers) \| [Documentation](https://openlayers.org/)
+* Cesium - A 3D map control for the web. [Code sample](https://samples.azuremaps.com/?sample=render-azure-maps-in-cesium) \| [Documentation](https://www.cesium.com/)
+* Leaflet – Lightweight 2D map control for the web. [Code sample](https://samples.azuremaps.com/?sample=render-azure-maps-in-leaflet) \| [Documentation](https://leafletjs.com/)
+* OpenLayers - A 2D map control for the web that supports projections. [Code sample](https://samples.azuremaps.com/?sample=render-azure-maps-in-openlayers) \| [Documentation](https://openlayers.org/)
If developing using a JavaScript framework, one of the following open-source projects may be useful:
The table lists key API features in the Google Maps V3 JavaScript SDK and the su
The following are some key differences between the Google Maps and Azure Maps Web SDKs, to be aware of: -- In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an NPM package is available. Embed the Web SDK package into apps. For more information, see this [documentation](how-to-use-map-control.md). This package also includes TypeScript definitions.
+- In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is available. Embed the Web SDK package into apps. For more information, see this [documentation](how-to-use-map-control.md). This package also includes TypeScript definitions.
- You first need to create an instance of the Map class in Azure Maps. Wait for the map's `ready` or `load` event to fire before programmatically interacting with the map. This order will ensure that all the map resources have been loaded and are ready to be accessed. - Both platforms use a similar tiling system for the base maps. The tiles in Google Maps are 256 pixels in dimension; however, the tiles in Azure Maps are 512 pixels in dimension. To get the same map view in Azure Maps as in Google Maps, subtract one from the Google Maps zoom level. - Coordinates in Google Maps are referred to as `latitude,longitude`, while Azure Maps uses `longitude,latitude`. The Azure Maps format is aligned with the standard `[x, y]`, which is followed by most GIS platforms. A small sketch of these two conversions follows below.
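A small sketch of the coordinate-order and zoom-offset conversions described above; the Google-style inputs are placeholders.

```javascript
// Sketch of the two conversions above: flip (lat, lng) to [lng, lat] and subtract
// one zoom level, since Azure Maps uses 512-pixel tiles instead of 256-pixel tiles.
function googleToAzureCamera(googleCenter, googleZoom) {
    return {
        center: [googleCenter.lng, googleCenter.lat], // [longitude, latitude]
        zoom: googleZoom - 1                          // roughly the same map view
    };
}

// Example: a Google map centered at lat 51.5, lng -0.2 with zoom 11.
map.setCamera(googleToAzureCamera({ lat: 51.5, lng: -0.2 }, 11));
```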
The following are some key differences between the Google Maps and Azure Maps We
## Web SDK side-by-side examples
-This collection has code samples for each platform, and each sample covers a common use case. It's intended to help you migrate your web application from Google Maps V3 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript. However, Azure Maps also provides TypeScript definitions as an additional option through an [NPM module](how-to-use-map-control.md).
+This collection has code samples for each platform, and each sample covers a common use case. It's intended to help you migrate your web application from Google Maps V3 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript. However, Azure Maps also provides TypeScript definitions as an additional option through an [npm module](how-to-use-map-control.md).
**Topics**
value in Google Maps is relative to the top-left corner of the image.
var marker = new google.maps.Marker({ position: new google.maps.LatLng(51.5, -0.2), icon: {
- url: 'https://azuremapscodesamples.azurewebsites.net/Common/images/icons/ylw-pushpin.png',
+ url: 'https://samples.azuremaps.com/images/icons/ylw-pushpin.png',
anchor: new google.maps.Point(5, 30) }, map: map
To customize an HTML marker, pass an HTML `string` or `HTMLElement` to the `html
```javascript map.markers.add(new atlas.HtmlMarker({
- htmlContent: '<img src="https://azuremapscodesamples.azurewebsites.net/Common/images/icons/ylw-pushpin.png" style="pointer-events: none;" />',
+ htmlContent: '<img src="https://samples.azuremaps.com/images/icons/ylw-pushpin.png" style="pointer-events: none;" />',
anchor: 'top-left', pixelOffset: [-5, -30], position: [-0.2, 51.5]
Symbol layers in Azure Maps support custom images as well. First, load the image
map.events.add('ready', function () { //Load the custom image icon into the map resources.
- map.imageSprite.add('my-yellow-pin', 'https://azuremapscodesamples.azurewebsites.net/Common/images/icons/ylw-pushpin.png').then(function () {
+ map.imageSprite.add('my-yellow-pin', 'https://samples.azuremaps.com/images/icons/ylw-pushpin.png').then(function () {
//Create a data source and add it to the map. datasource = new atlas.source.DataSource();
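Putting those pieces together, a compact sketch of the pattern looks like the following; the icon URL and coordinates are placeholders.

```javascript
// Compact sketch of the pattern above: load a custom icon into the image sprite,
// then reference it by ID from a symbol layer. URL and coordinates are placeholders.
map.events.add('ready', function () {
    map.imageSprite.add('my-yellow-pin', '/images/ylw-pushpin.png').then(function () {
        var datasource = new atlas.source.DataSource();
        map.sources.add(datasource);
        datasource.add(new atlas.data.Point([-0.2, 51.5]));

        map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
            iconOptions: {
                image: 'my-yellow-pin',   // ID registered with imageSprite.add
                anchor: 'top-left'
            }
        }));
    });
});
```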
map.events.add('click', marker, function () {
**Additional resources:** - [Add a popup](map-add-popup.md)-- [Popup with Media Content](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Popup%20with%20Media%20Content)-- [Popups on Shapes](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Popups%20on%20Shapes)-- [Reusing Popup with Multiple Pins](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Reusing%20Popup%20with%20Multiple%20Pins)
+- [Popup with Media Content](https://samples.azuremaps.com/?sample=popup-with-media-content)
+- [Popups on Shapes](https://samples.azuremaps.com/?sample=popups-on-shapes)
+- [Reusing Popup with Multiple Pins](https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins)
- [Popup class](/javascript/api/azure-maps-control/atlas.popup) - [Popup options](/javascript/api/azure-maps-control/atlas.popupoptions)
If you click on one of the traffic icons in Azure Maps, additional information i
**Additional resources:** * [Show traffic on the map](map-show-traffic.md)
-* [Traffic overlay options](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Traffic%20Overlay%20Options)
+* [Traffic overlay options](https://samples.azuremaps.com/?sample=traffic-overlay-options)
### Add a ground overlay
In Azure Maps, GeoJSON is the main data format used in the web SDK, additional s
The following are some additional code samples related to Google Maps migration: * [Drawing tools](map-add-drawing-toolbar.md)
-* [Limit Map to Two Finger Panning](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Limit%20Map%20to%20Two%20Finger%20Panning)
-* [Limit Scroll Wheel Zoom](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Limit%20Scroll%20Wheel%20Zoom)
-* [Create a Fullscreen Control](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Create%20a%20Fullscreen%20Control)
+* [Limit Map to Two Finger Panning](https://samples.azuremaps.com/?sample=limit-map-to-two-finger-panning)
+* [Limit Scroll Wheel Zoom](https://samples.azuremaps.com/?sample=limit-scroll-wheel-zoom)
+* [Create a Fullscreen Control](https://samples.azuremaps.com/?sample=fullscreen-control)
**
The following are some additional code samples related to Google Maps migration:
* [Search for points of interest](map-search-location.md) * [Get information from a coordinate (reverse geocode)](map-get-information-from-coordinate.md) * [Show directions from A to B](map-route.md)
-* [Search Autosuggest with JQuery UI](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Search%20Autosuggest%20and%20JQuery%20UI)
+* [Search Autosuggest with JQuery UI](https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui)
## Google Maps V3 to Azure Maps Web SDK class mapping
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
The following service APIs aren't currently available in Azure Maps:
- Geolocation - Azure Maps does have a service called Geolocation, but it provides IP Address to location information, but does not currently support cell tower or WiFi triangulation. - Places details and photos - Phone numbers and website URL are available in the Azure Maps search API. - Map URLs-- Nearest Roads - This is achievable using the Web SDK as shown [here](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Basic%20snap%20to%20road%20logic), but not available as a service currently.
+- Nearest Roads - This is achievable using the Web SDK as shown [here](https://samples.azuremaps.com/?sample=basic-snap-to-road-logic), but not available as a service currently.
- Static street view Azure Maps has several other REST web services that may be of interest:
azure-maps Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-browsers.md
You might want to target older browsers that don't support WebGL or that have on
(<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
-Additional code samples using Azure Maps in Leaflet can be found [here](https://azuremapscodesamples.azurewebsites.net/?search=leaflet).
+Additional code samples using Azure Maps in Leaflet can be found [here](https://samples.azuremaps.com/?search=leaflet).
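As a rough illustration of the Leaflet approach, the sketch below wires an Azure Maps road tile layer into a Leaflet map. The tile endpoint, api-version, and parameters are assumptions; verify them against the Render service reference and the linked samples before relying on them.

```javascript
// Rough sketch: Azure Maps road tiles in a Leaflet map. The tile endpoint,
// api-version, and parameters below are assumptions; verify them against the
// Render service reference and the linked Leaflet samples.
var map = L.map('myMap').setView([47.6, -122.33], 12);   // Leaflet uses [latitude, longitude]

L.tileLayer(
    'https://atlas.microsoft.com/map/tile?api-version=2.1&tilesetId=microsoft.base.road' +
    '&zoom={z}&x={x}&y={y}&tileSize=256&subscription-key={subscriptionKey}',
    {
        subscriptionKey: '<Your-Azure-Maps-Key>',        // substituted into the URL template
        attribution: '© ' + new Date().getFullYear() + ' Microsoft, © TomTom',
        tileSize: 256
    }
).addTo(map);
```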
[Here](open-source-projects.md#third-part-map-control-plugins) are some popular open-source map controls that the Azure Maps team has created plugins for.
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md
For more information about Azure Maps authentication, see [Manage authentication
In this tutorial, you'll create a store locator for a fictional company named *Contoso Coffee*. Also, this tutorial includes some tips to help you learn about extending the store locator with other optional functionality.
-To see a live sample of what you will create in this tutorial, see [Simple Store Locator](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Simple%20Store%20Locator) on the **Azure Maps Code Samples** site.
+To see a live sample of what you will create in this tutorial, see [Simple Store Locator](https://samples.azuremaps.com/?sample=simple-store-locator) on the **Azure Maps Code Samples** site.
To more easily follow and engage this tutorial, you'll need to download the following resources:
This section lists the Azure Maps features that are demonstrated in the Contoso
## Store locator design
-The following screenshot shows the general layout of the Contoso Coffee store locator application. To view and interact with the live sample, see the [Simple Store Locator](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Simple%20Store%20Locator) sample application on the **Azure Maps Code Samples** site.
+The following screenshot shows the general layout of the Contoso Coffee store locator application. To view and interact with the live sample, see the [Simple Store Locator](https://samples.azuremaps.com/?sample=simple-store-locator) sample application on the **Azure Maps Code Samples** site.
:::image type="content" source="./media/tutorial-create-store-locator/store-locator-wireframe.png" alt-text="A screenshot of the Contoso Coffee store locator Azure Maps sample application.":::
If you resize the browser window to fewer than 700 pixels wide or open the appli
In this tutorial, you learned how to create a basic store locator by using Azure Maps. The store locator you create in this tutorial might have all the functionality you need. You can add features to your store locator or use more advance features for a more custom user experience:
-* Enable [suggestions as you type](https://azuremapscodesamples.azurewebsites.net/?sample=Search%20Autosuggest%20and%20JQuery%20UI) in the search box.
-* Add [support for multiple languages](https://azuremapscodesamples.azurewebsites.net/?sample=Map%20Localization).
-* Allow the user to [filter locations along a route](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Data%20Along%20Route).
-* Add the ability to [set filters](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Symbols%20by%20Property).
+* Enable [suggestions as you type](https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui) in the search box.
+* Add [support for multiple languages](https://samples.azuremaps.com/?sample=map-localization).
+* Allow the user to [filter locations along a route](https://samples.azuremaps.com/?sample=filter-data-along-route).
+* Add the ability to [set filters](https://samples.azuremaps.com/?sample=filter-symbols-by-property).
* Add support to specify an initial search value by using a query string. When you include this option in your store locator, users are then able to bookmark and share searches. It also provides an easy method for you to pass searches to this page from another page. * Deploy your store locator as an [Azure App Service Web App](../app-service/quickstart-html.md). * Store your data in a database and search for nearby locations. To learn more, see the [SQL Server spatial data types overview](/sql/relational-databases/spatial/spatial-data-types-overview?preserve-view=true&view=sql-server-2017) and [Query spatial data for the nearest neighbor](/sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor?preserve-view=true&view=sql-server-2017).
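For the query-string option above, a minimal sketch might look like the following; the `searchTbx` input and `performSearch()` function are hypothetical stand-ins for whatever your store locator page uses.

```javascript
// Read an initial search value from the query string and run the locator's search.
// The #searchTbx input and performSearch() function are hypothetical stand-ins.
window.addEventListener('load', function () {
    var initialSearch = new URLSearchParams(window.location.search).get('search');
    if (initialSearch) {
        document.getElementById('searchTbx').value = initialSearch;
        performSearch(initialSearch);
    }
});
```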
In this tutorial, you learned how to create a basic store locator by using Azure
## Additional information * For the completed code used in this tutorial, see the [Simple Store Locator](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator) tutorial on GitHub.
-* To view this sample live, see [Simple Store Locator](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Simple%20Store%20Locator) on the **Azure Maps Code Samples** site.
+* To view this sample live, see [Simple Store Locator](https://samples.azuremaps.com/?sample=simple-store-locator) on the **Azure Maps Code Samples** site.
* Learn more about the coverage and capabilities of Azure Maps in [Zoom levels and tile grid](zoom-levels-and-tile-grid.md). * You can also [use data-driven style expressions](data-driven-style-expressions-web-sdk.md) to apply your business logic.
azure-maps Tutorial Prioritized Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md
This section shows you how to use the Azure Maps Route service to get directions
* The truck route is displayed using a thick blue line and the car route is displayed using a thin purple line. * The car route goes across Lake Washington via I-90, passing through tunnels beneath residential areas. Because the tunnels are in residential areas, hazardous waste cargo is restricted. The truck route, which specifies a `USHazmatClass2` cargo type, is directed to use a different route that doesn't have this restriction.
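The truck request behind that difference can be sketched roughly as follows; the coordinates are placeholders, and the endpoint and parameter names (`travelMode`, `vehicleLoadType`) are taken from the Route Directions reference as understood here, so verify them before use.

```javascript
// Rough sketch of requesting the truck route with the hazmat restriction applied.
// Coordinates are placeholders; verify travelMode and vehicleLoadType against the
// Route Directions reference before relying on this.
const key = '<Your-Azure-Maps-Key>';
const url = 'https://atlas.microsoft.com/route/directions/json?api-version=1.0' +
    '&query=47.6,-122.33:47.61,-122.19' +      // start:end as latitude,longitude pairs
    '&travelMode=truck' +
    '&vehicleLoadType=USHazmatClass2' +        // hazardous cargo class used in this tutorial
    '&subscription-key=' + key;

fetch(url)
    .then(r => r.json())
    .then(result => {
        // result.routes[0].legs[].points holds the truck route path to render on the map.
        console.log(result.routes[0].summary);
    });
```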
-* For the completed code used in this tutorial, see the [Truck Route](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Truck%20Route) tutorial on GitHub.
-* To view this sample live, see [Multiple routes by mode of travel](https://azuremapscodesamples.azurewebsites.net/?sample=Multiple%20routes%20by%20mode%20of%20travel) on the **Azure Maps Code Samples** site.
+* For the completed code used in this tutorial, see the [Truck Route](https://samples.azuremaps.com/?sample=car-vs-truck-route) sample on the **Azure Maps Code Samples** site.
+* To view this sample live, see [Multiple routes by mode of travel](https://samples.azuremaps.com/?sample=multiple-routes-by-mode-of-travel) on the **Azure Maps Code Samples** site.
* You can also use [Data-driven style expressions](data-driven-style-expressions-web-sdk.md) ## Next steps
azure-maps Tutorial Route Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-route-location.md
This section shows you how to use the Azure Maps Route Directions API to get rou
:::image type="content" source="./media/tutorial-route-location/map-route.png" alt-text="A screenshot showing a map that demonstrates the Azure Map control and Route service."::: * For the completed code used in this tutorial, see the [route](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Route) tutorial on GitHub.
-* To view this sample live, see [Route to a destination](https://azuremapscodesamples.azurewebsites.net/?sample=Route%20to%20a%20destination) on the **Azure Maps Code Samples** site.
+* To view this sample live, see [Route to a destination](https://samples.azuremaps.com/?sample=route-to-a-destination) on the **Azure Maps Code Samples** site.
## Next steps
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
The map that we've made so far only looks at the longitude/latitude data for the
![A screen shot of a map with information popups that appear when you hover over a search pin.](./media/tutorial-search-location/popup-map.png) * For the completed code used in this tutorial, see the [search](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Search) tutorial on GitHub.
-* To view this sample live, see [Search for points of interest](https://azuremapscodesamples.azurewebsites.net/?sample=Search%20for%20points%20of%20interest) on the **Azure Maps Code Samples** site.
+* To view this sample live, see [Search for points of interest](https://samples.azuremaps.com/?sample=search-for-points-of-interest) on the **Azure Maps Code Samples** site.
## Next steps
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Title: Azure Monitor Agent overview description: Overview of the Azure Monitor Agent, which collects monitoring data from the guest operating system of virtual machines. -+ Last updated 7/21/2022
The Azure Monitor Agent extensions for Windows and Linux can communicate either
# [Windows VM](#tab/PowerShellWindows) ```powershell
-$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
-$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = $true}}
+$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString ```
Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMo
# [Linux VM](#tab/PowerShellLinux) ```powershell
-$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
-$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = $true}}
+$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString ```
Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMoni
# [Windows Arc-enabled server](#tab/PowerShellWindowsArc) ```powershell
-$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
-$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = $true}}
+$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString ```
New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType Az
# [Linux Arc-enabled server](#tab/PowerShellLinuxArc) ```powershell
-$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
-$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
+$settingsString = @{"proxy" = @{mode = "application"; address = "http://[address]:[port]"; auth = $true}}
+$protectedSettingsString = @{"proxy" = @{username = "[username]"; password = "[password]"}}
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString ```
azure-monitor Om Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/om-agents.md
If your IT security policies do not allow computers on your network to connect t
Before starting, review the following requirements.
-* Azure Monitor only supports System Center Operations Manager 2016 or later, Operations Manager 2012 SP1 UR6 or later, and Operations Manager 2012 R2 UR2 or later. Proxy support was added in Operations Manager 2012 SP1 UR7 and Operations Manager 2012 R2 UR3.
-* Integrating System Center Operations Manager 2016 with US Government cloud requires an updated Advisor management pack included with Update Rollup 2 or later. System Center Operations Manager 2012 R2 requires an updated Advisor management pack included with Update Rollup 3 or later.
+* Azure Monitor supports the following:
+ * System Center Operations Manager 2022
+ * System Center Operations Manager 2019
+ * System Center Operations Manager 2016
+ * System Center Operations Manager 2012 SP1 UR6 or later
+ * System Center Operations Manager 2012 R2 UR2 or later
+* Integrating System Center Operations Manager with US Government cloud requires the following versions:
+ * System Center Operations Manager 2022
+ * System Center Operations Manager 2019
+ * System Center Operations Manager 2016 UR 2 or later
+ * System Center Operations Manager 2012 R2 UR 3 or later
* All Operations Manager agents must meet minimum support requirements. Ensure that agents are at the minimum update; otherwise, Windows agent communication may fail and generate errors in the Operations Manager event log. * A Log Analytics workspace. For further information, review [Log Analytics workspace overview](../logs/workspace-design.md). * You authenticate to Azure with an account that is a member of the [Log Analytics Contributor role](../logs/manage-access.md#azure-rbac).
-* Supported Regions - Only the following Azure regions are supported by System Center Operations Manager to connect to a Log Analytics workspace:
- - West Central US
- - Australia South East
- - West Europe
- - East US
- - South East Asia
- - Japan East
- - UK South
- - Central India
- - Canada Central
- - West US 2
>[!NOTE] >Recent changes to Azure APIs will prevent customers from being able to successfully configure integration between their management group and Azure Monitor for the first time. For customers who have already integrated their management group with the service, you are not impacted unless you need to reconfigure your existing connection.
azure-monitor Alerts Dynamic Thresholds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-dynamic-thresholds.md
Title: Creating Alerts with Dynamic Thresholds in Azure Monitor
-description: Create Alerts with machine learning based Dynamic Thresholds
+ Title: Create alerts with Dynamic Thresholds in Azure Monitor
+description: Create alerts with machine learning-based Dynamic Thresholds.
Last updated 2/23/2022
-# Dynamic thresholds in Metric Alerts
+# Dynamic thresholds in metric alerts
- Dynamic thresholds in metric alerts use advanced machine learning (ML) to learn metrics' historical behavior, and to identify patterns and anomalies that indicate possible service issues. Dynamic thresholds in metric alerts support both a simple UI and operations at scale by allowing users to configure alert rules through the fully automated Azure Resource Manager API.
+Dynamic thresholds in metric alerts use advanced machine learning to learn metrics' historical behavior and identify patterns and anomalies that indicate possible service issues. Dynamic thresholds in metric alerts support both a simple UI and operations at scale by allowing users to configure alert rules through the fully automated Azure Resource Manager API.
-An alert rule using a dynamic threshold only fires when the monitored metric doesnΓÇÖt behave as expected, based on its tailored thresholds.
+An alert rule using dynamic thresholds only fires when the monitored metric doesn't behave as expected, based on its tailored thresholds.
-We would love to hear your feedback, keep it coming at <azurealertsfeedback@microsoft.com>.
+To send us feedback, use <azurealertsfeedback@microsoft.com>.
-Alert rules with dynamic thresholds provide:
-- **Scalable Alerting**. Dynamic threshold alert rules can create tailored thresholds for hundreds of metric series at a time, yet are as easy to define as an alert rule on a single metric. They give you fewer alerts to create and manage. You can use either Azure portal or the Azure Resource Manager API to create them. The scalable approach is especially useful when dealing with metric dimensions or when applying to multiple resources, such as to all subscription resources. [Learn more about how to configure Metric Alerts with Dynamic Thresholds using templates](./alerts-metric-create-templates.md).
+Alert rules with dynamic thresholds provide:
-- **Smart Metric Pattern Recognition**. Using our ML technology, weΓÇÖre able to automatically detect metric patterns and adapt to metric changes over time, which may often include seasonality (hourly / daily / weekly). Adapting to the metricsΓÇÖ behavior over time and alerting based on deviations from its pattern relieves the burden of knowing the "right" threshold for each metric. The ML algorithm used in dynamic thresholds is designed to prevent noisy (low precision) or wide (low recall) thresholds that donΓÇÖt have an expected pattern.--- **Intuitive Configuration**. Dynamic thresholds allow you to set up metric alerts using high-level concepts, alleviating the need to have extensive domain knowledge about the metric.
+- **Scalable alerting**. Dynamic thresholds alert rules can create tailored thresholds for hundreds of metric series at a time. They're as easy to define as an alert rule on a single metric. They give you fewer alerts to create and manage. You can use either the Azure portal or the Azure Resource Manager API to create them. The scalable approach is especially useful when you're dealing with metric dimensions or applying to multiple resources, such as to all subscription resources. Learn more about how to [configure metric alerts with dynamic thresholds by using templates](./alerts-metric-create-templates.md).
+- **Smart metric pattern recognition**. With our machine learning technology, we can automatically detect metric patterns and adapt to metric changes over time, which often includes seasonality patterns, such as hourly, daily, or weekly. Adapting to the metrics' behavior over time and alerting based on deviations from its pattern relieves the burden of knowing the "right" threshold for each metric. The machine learning algorithm used in dynamic thresholds is designed to prevent noisy (low precision) or wide (low recall) thresholds that don't have an expected pattern.
+- **Intuitive configuration**. Dynamic thresholds allow you to set up metric alerts by using high-level concepts. This way, you don't need to have extensive domain knowledge about the metric.
## Configure alerts rules with dynamic thresholds
-Alerts with Dynamic thresholds can be configured using Azure Monitor metric alerts. [Learn more about how to configure Metric Alerts](alerts-metric.md).
+Alerts with dynamic thresholds can be configured by using Azure Monitor metric alerts. Learn more about how to [configure metric alerts](alerts-metric.md).
## How are the thresholds calculated?
-Dynamic Thresholds continuously learns the data of the metric series and tries to model it using a set of algorithms and methods. It detects patterns in the data such as seasonality (Hourly / Daily / Weekly), and is able to handle noisy metrics (such as machine CPU or memory) as well as metrics with low dispersion (such as availability and error rate).
+Dynamic Thresholds continuously learns the data of the metric series and tries to model it by using a set of algorithms and methods. It detects patterns in the data like hourly, daily, or weekly seasonality. It can handle noisy metrics, such as machine CPU or memory, and metrics with low dispersion, such as availability and error rate.
The thresholds are selected in such a way that a deviation from these thresholds indicates an anomaly in the metric behavior.

> [!NOTE]
-> Dynamic Thresholds can detect seasonality for hourly, daily, or weekly patterns. Other patterns like bi-hourly or semi-weekly seasonality might not be detected. To detect weekly seasonality, at least three weeks of historical data are required.
+> Dynamic thresholds can detect seasonality for hourly, daily, or weekly patterns. Other patterns like bi-hourly or semi-weekly seasonality might not be detected. To detect weekly seasonality, at least three weeks of historical data are required.
-## What does the 'Sensitivity' setting in Dynamic Thresholds mean?
+## What does the Sensitivity setting in Dynamic Thresholds mean?
Alert threshold sensitivity is a high-level concept that controls the amount of deviation from metric behavior required to trigger an alert.
-This option doesn't require domain knowledge about the metric like static threshold. The options available are:
-- High: The thresholds will be tight and close to the metric series pattern. An alert rule will be triggered on the smallest deviation, resulting in more alerts.
-- Medium: Less tight and more balanced thresholds, fewer alerts than with high sensitivity (default).
-- Low: The thresholds will be loose with more distance from metric series pattern. An alert rule will only trigger on large deviations, resulting in fewer alerts.
+This option doesn't require domain knowledge about the metric like a static threshold. The options available are:
+
+- **High**: The thresholds will be tight and close to the metric series pattern. An alert rule will be triggered on the smallest deviation, resulting in more alerts.
+- **Medium**: The thresholds will be less tight and more balanced. There will be fewer alerts than with high sensitivity (default).
+- **Low**: The thresholds will be loose with more distance from metric series pattern. An alert rule will only trigger on large deviations, resulting in fewer alerts.
-## What are the 'Operator' setting options in Dynamic Thresholds?
+## What are the Operator setting options in Dynamic Thresholds?
+
+Dynamic thresholds alert rules can create tailored thresholds based on metric behavior for both upper and lower bounds by using the same alert rule.
-Dynamic Thresholds alerts rule can create tailored thresholds based on metric behavior for both upper and lower bounds using the same alert rule.
You can choose the alert to be triggered on one of the following three conditions:

- Greater than the upper threshold or lower than the lower threshold (default)
- Greater than the upper threshold
-- Lower than the lower threshold.
+- Lower than the lower threshold
-## What do the advanced settings in Dynamic Thresholds mean?
+## What do the Advanced settings in Dynamic Thresholds mean?
-**Failing Periods**. Using dynamic thresholds, you can also configure a minimum number of deviations required within a certain time window for the system to raise an alert. The default is four deviations in 20 minutes. You can configure failing periods and choose what to be alerted on by changing the failing periods and time window. These configurations reduce alert noise generated by transient spikes. For example:
+**Failing periods**. You can configure a minimum number of deviations required within a certain time window for the system to raise an alert by using dynamic thresholds. The default is four deviations in 20 minutes. You can configure failing periods and choose what to be alerted on by changing the failing periods and time window. These configurations reduce alert noise generated by transient spikes. For example:
-To trigger an alert when the issue is continuous for 20 minutes, 4 consecutive times in a given period grouping of 5 minutes, use the following settings:
+To trigger an alert when the issue is continuous for 20 minutes, four consecutive times in a period grouping of 5 minutes, use the following settings:
-![Failing periods settings for continuous issue for 20 minutes, 4 consecutive times in a given period grouping of 5 minutes](media/alerts-dynamic-thresholds/0008.png)
+![Screenshot that shows failing periods settings for continuous issue for 20 minutes, four consecutive times in a period grouping of 5 minutes.](media/alerts-dynamic-thresholds/0008.png)
-To trigger an alert when there was a violation from a Dynamic Thresholds in 20 minutes out of the last 30 minutes with period of 5 minutes, use the following settings:
+To trigger an alert when there was a violation from Dynamic Thresholds in 20 minutes out of the last 30 minutes with a period of 5 minutes, use the following settings:
-![Failing periods settings for issue for 20 minutes out of the last 30 minutes with period grouping of 5 minutes](media/alerts-dynamic-thresholds/0009.png)
+![Screenshot that shows failing periods settings for issue for 20 minutes out of the last 30 minutes with a period grouping of 5 minutes.](media/alerts-dynamic-thresholds/0009.png)
-**Ignore data before**. Users may also optionally define a start date from which the system should begin calculating the thresholds. A typical use case may occur when a resource was a running in a testing mode and is now promoted to serve a production workload, and therefore the behavior of any metric during the testing phase should be disregarded.
+**Ignore data before**. You can optionally define a start date from which the system should begin calculating the thresholds. A typical use case might occur when a resource was running in a testing mode and is promoted to serve a production workload. As a result, the behavior of any metric during the testing phase should be disregarded.
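In a template-based rule, these two advanced settings map to the `failingPeriods` and `ignoreDataBefore` properties of the dynamic criterion. The fragment below is a hedged sketch of the second example above (a 30-minute look-back window of six 5-minute periods, with four violations required to alert); the date value is only a placeholder.

```json
"failingPeriods": {
  "numberOfEvaluationPeriods": 6,
  "minFailingPeriodsToAlert": 4
},
"ignoreDataBefore": "2022-08-01T00:00:00Z"
```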
> [!NOTE]
-> An alert fires when the rule is evaluated and the result shows an anomaly. The alert is resolved if the rule is evaluated and does not show an anomaly three times in a row.
+> An alert fires when the rule is evaluated and the result shows an anomaly. The alert is resolved if the rule is evaluated and doesn't show an anomaly three times in a row.
## How do you find out why a dynamic thresholds alert was triggered?
-You can explore triggered alert instances by clicking on the link in the email or text message, or browse to see the alerts in the Azure portal. [Learn more about the alerts view](./alerts-page.md).
+You can explore triggered alert instances by selecting the link in the email or text message. You can also browse to see the alerts in the Azure portal. Learn more about the [alerts view](./alerts-page.md).
The alert view displays:

-- All the metric details at the moment the Dynamic Thresholds alert fired.
-- A chart of the period in which the alert was triggered that includes the Dynamic Thresholds used at that point in time.
-- Ability to provide feedback on Dynamic Thresholds alert and the alerts view experience, which could improve future detections.
+- All the metric details at the moment the dynamic thresholds alert fired.
+- A chart of the period in which the alert was triggered that includes the dynamic thresholds used at that point in time.
+- Ability to provide feedback on the dynamic thresholds alert and the alerts view experience, which could improve future detections.
## Will slow behavior changes in the metric trigger an alert?
-Probably not. Dynamic Thresholds are good for detecting significant deviations rather than slowly evolving issues.
+Probably not. Dynamic thresholds are good for detecting significant deviations rather than slowly evolving issues.
## How much data is used to preview and then calculate thresholds?
-When an alert rule is first created, the thresholds appearing in the chart are calculated based on enough historical data to calculate hour or daily seasonal patterns (10 days). Once an alert rule is created, Dynamic Thresholds uses all needed historical data that is available and will continuously learn and adapt based on new data to make the thresholds more accurate. This means that after this calculation, the chart will also display weekly patterns.
+When an alert rule is first created, the thresholds appearing in the chart are calculated based on enough historical data to calculate hourly or daily seasonal patterns (10 days). After an alert rule is created, Dynamic Thresholds uses all needed historical data that's available and continuously learns and adapts based on new data to make the thresholds more accurate. After this calculation, the chart also displays weekly patterns.
## How much data is needed to trigger an alert?
-If you have a new resource or missing metric data, Dynamic Thresholds won't trigger alerts before three days and at least 30 samples of metric data are available, to ensure accurate thresholds.
-For existing resources with sufficient metric data, Dynamic Thresholds can trigger alerts immediately.
+If you have a new resource or missing metric data, Dynamic Thresholds won't trigger alerts before three days and at least 30 samples of metric data are available, to ensure accurate thresholds. For existing resources with sufficient metric data, Dynamic Thresholds can trigger alerts immediately.
## How do prolonged outages affect the calculated thresholds?
-The system automatically recognizes prolonged outages and removes them from threshold learning algorithm. As a result, despite prolonged outages, dynamic thresholds understand the data. Service issues are detected with the same sensitivity as before an outage occurred.
+The system automatically recognizes prolonged outages and removes them from the threshold learning algorithm. As a result, despite prolonged outages, dynamic thresholds understand the data. Service issues are detected with the same sensitivity as before an outage occurred.
## Dynamic Thresholds best practices
-Dynamic Thresholds can be applied to most platform and custom metrics in Azure Monitor and it was also tuned for the common application and infrastructure metrics.
-The following items are best practices on how to configure alerts on some of these metrics using Dynamic Thresholds.
+Dynamic Thresholds can be applied to most platform and custom metrics in Azure Monitor, and it was also tuned for the common application and infrastructure metrics.
+
+The following items are best practices on how to configure alerts on some of these metrics by using Dynamic Thresholds.
### Configure dynamic thresholds on virtual machine CPU percentage metrics
-1. In [Azure portal](https://portal.azure.com), select **Monitor**. The Monitor view consolidates all your monitoring settings and data in one view.
+1. In the [Azure portal](https://portal.azure.com), select **Monitor**. The **Monitor** view consolidates all your monitoring settings and data in one view.
-2. Select **Alerts** then select **+ New alert rule**.
+1. Select **Alerts** > **+ New alert rule**.
> [!TIP]
- > Most resource blades also have **Alerts** in their resource menu under **Monitoring**, you could create alerts from there as well.
+ > Most resource panes also have **Alerts** in their resource menu under **Monitoring**. You can also create alerts from there.
-3. Select **Select target**, in the context pane that loads, select a target resource that you want to alert on. Use **Subscription** and **'Virtual Machines' Resource type** drop-downs to find the resource you want to monitor. You can also use the search bar to find your resource.
+1. Choose **Select target**. In the pane that opens, select a target resource that you want to alert on. Use the **Subscription** and **Virtual Machines Resource type** dropdowns to find the resource you want to monitor. You can also use the search bar to find your resource.
-4. Once you've selected a target resource, select **Add condition**.
+1. After you've selected a target resource, select **Add condition**.
-5. Select the **'CPU Percentage'**.
+1. Select the **CPU Percentage** metric.
-6. Optionally, refine the metric by adjusting **Period** and **Aggregation**. It's discouraged to use 'Maximum' aggregation type for this metric type as it is less representative of behavior. For 'Maximum' aggregation type static threshold maybe more appropriate.
+1. Optionally, refine the metric by adjusting **Period** and **Aggregation**. We discourage using the **Maximum** aggregation for this metric type because it's less representative of behavior. Static thresholds might be more appropriate for the **Maximum** aggregation type.
-7. You'll see a chart for the metric for the last 6 hours. Define the alert parameters:
- 1. **Condition Type** - Choose 'Dynamic' option.
- 1. **Sensitivity** - Choose Medium/Low sensitivity to reduce alert noise.
- 1. **Operator** - Choose 'Greater Than' unless behavior represents the application usage.
- 1. **Frequency** - Consider lowering the frequency based on business impact of the alert.
- 1. **Failing Periods** (Advanced Option) - The look back window should be at least 15 minutes. For example, if the period is set to five minutes, then failing periods should be at least three or more.
+1. You'll see a chart for the metric for the last 6 hours. Define the alert parameters:
+ 1. **Condition Type**: Select the **Dynamic** option.
+ 1. **Sensitivity**: Select **Medium/Low** sensitivity to reduce alert noise.
+ 1. **Operator**: Select **Greater Than** unless behavior represents the application usage.
+ 1. **Frequency**: Consider lowering the frequency based on the business impact of the alert.
+ 1. **Failing Periods** (advanced option): The look-back window should be at least 15 minutes. For example, if the period is set to 5 minutes, the number of failing periods should be at least 3.
-8. The metric chart displays the calculated thresholds based on recent data.
+1. The metric chart displays the calculated thresholds based on recent data.
-9. Select **Done**.
+1. Select **Done**.
-10. Fill in **Alert details** like **Alert Rule Name**, **Description**, and **Severity**.
+1. Fill in **Alert details** like **Alert Rule Name**, **Description**, and **Severity**.
-11. Add an action group to the alert either by selecting an existing action group or creating a new action group.
+1. Add an action group to the alert either by selecting an existing action group or creating a new action group.
-12. Select **Done** to save the metric alert rule.
+1. Select **Done** to save the metric alert rule.
> [!NOTE]
-> Metric alert rules created through portal are created in the same resource group as the target resource.
+> Metric alert rules created through the portal are created in the same resource group as the target resource.
### Configure dynamic thresholds on Application Insights HTTP request execution time
-1. In [Azure portal](https://portal.azure.com), select on **Monitor**. The Monitor view consolidates all your monitoring settings and data in one view.
+1. In the [Azure portal](https://portal.azure.com), select **Monitor**. The **Monitor** view consolidates all your monitoring settings and data in one view.
-2. Select **Alerts** then select **+ New alert rule**.
+1. Select **Alerts** > **+ New alert rule**.
> [!TIP]
- > Most resource blades also have **Alerts** in their resource menu under **Monitoring**, you could create alerts from there as well.
+ > Most resource panes also have **Alerts** in their resource menu under **Monitoring**. You can also create alerts from there.
-3. Select **Select target**, in the context pane that loads, select a target resource that you want to alert on. Use **Subscription** and **'Application Insights' Resource type** drop-downs to find the resource you want to monitor. You can also use the search bar to find your resource.
+1. Choose **Select target**. In the pane that opens, select a target resource that you want to alert on. Use the **Subscription** and **Application Insights Resource type** dropdowns to find the resource you want to monitor. You can also use the search bar to find your resource.
-4. Once you've selected a target resource, select **Add condition**.
+1. After you've selected a target resource, select **Add condition**.
-5. Select the **'HTTP request execution time'**.
+1. Select the **HTTP request execution time** metric.
-6. Optionally, refine the metric by adjusting **Period** and **Aggregation**. We discourage using the **Maximum** aggregation type for this metric type, since it is less representative of behavior. Static thresholds maybe more appropriate for the **Maximum** aggregation type.
+1. Optionally, refine the metric by adjusting **Period** and **Aggregation**. We discourage using the **Maximum** aggregation for this metric type because it's less representative of behavior. Static thresholds might be more appropriate for the **Maximum** aggregation type.
-7. You'll see a chart for the metric for the last 6 hours. Define the alert parameters:
- 1. **Condition Type** - Choose 'Dynamic' option.
- 1. **Operator** - Choose 'Greater Than' to reduce alerts fired on improvement in duration.
- 1. **Frequency** - Consider lowering based on business impact of the alert.
+1. You'll see a chart for the metric for the last 6 hours. Define the alert parameters:
+ 1. **Condition Type**: Select the **Dynamic** option.
+ 1. **Operator**: Select **Greater Than** to reduce alerts fired on improvement in duration.
+ 1. **Frequency**: Consider lowering the frequency based on the business impact of the alert.
-8. The metric chart will display the calculated thresholds based on recent data.
+1. The metric chart displays the calculated thresholds based on recent data.
-9. Select **Done**.
+1. Select **Done**.
-10. Fill in **Alert details** like **Alert Rule Name**, **Description**, and **Severity**.
+1. Fill in **Alert details** like **Alert Rule Name**, **Description**, and **Severity**.
-11. Add an action group to the alert either by selecting an existing action group or creating a new action group.
+1. Add an action group to the alert either by selecting an existing action group or creating a new action group.
-12. Select **Done** to save the metric alert rule.
+1. Select **Done** to save the metric alert rule.
> [!NOTE]
-> Metric alert rules created through portal are created in the same resource group as the target resource.
+> Metric alert rules created through the portal are created in the same resource group as the target resource.
-## Interpret Dynamic Threshold charts
+## Interpret Dynamic Thresholds charts
-Following is a chart showing a metric, its dynamic threshold limits, and some alerts fired when the value was outside of the allowed thresholds.
+The following chart shows a metric, its dynamic thresholds limits, and some alerts that fired when the value was outside the allowed thresholds.
-![Learn more about how to configure Metric Alerts](media/alerts-dynamic-thresholds/threshold-picture-8bit.png)
+![Screenshot that shows a metric, its dynamic thresholds limits, and some alerts that fired.](media/alerts-dynamic-thresholds/threshold-picture-8bit.png)
-Use the following information to interpret the previous chart.
+Use the following information to interpret the chart:
-- **Blue line** - The actual measured metric over time.
-- **Blue shaded area** - Shows the allowed range for the metric. As long as the metric values stay within this range, no alert will occur.
-- **Blue dots** - If you left select on part of the chart and then hover over the blue line, a blue dot appears under your cursor showing an individual aggregated metric value.
-- **Pop-up with blue dot** - Shows the measured metric value (the blue dot) and the upper and lower values of allowed range.
-- **Red dot with a black circle** - Shows the first metric value out of the allowed range. This is the value that fires a metric alert and puts it in an active state.
-- **Red dots** - Indicate other measured values outside of the allowed range. They won't fire additional metric alerts, but the alert stays in the active.
-- **Red area** - Shows the time when the metric value was outside of the allowed range. The alert remains in the active state as long as subsequent measured values are out of the allowed range, but no new alerts are fired.
-- **End of red area** - When the blue line is back inside the allowed values, the red area stops and the measured value line turns blue. The status of the metric alert fired at the time of the red dot with black outline is set to resolved.
+- **Blue line**: The actual measured metric over time.
+- **Blue shaded area**: Shows the allowed range for the metric. If the metric values stay within this range, no alert will occur.
+- **Blue dots**: If you select part of the chart and then hover over the blue line, a blue dot appears under your cursor that shows an individual aggregated metric value.
+- **Pop-up with blue dot**: Shows the measured metric value (the blue dot) and the upper and lower values of the allowed range.
+- **Red dot with a black circle**: Shows the first metric value out of the allowed range. This value fires a metric alert and puts it in an active state.
+- **Red dots**: Indicate other measured values outside of the allowed range. They won't fire more metric alerts, but the alert stays in the active state.
+- **Red area**: Shows the time when the metric value was outside of the allowed range. The alert remains in the active state as long as subsequent measured values are out of the allowed range, but no new alerts are fired.
+- **End of red area**: When the blue line is back inside the allowed values, the red area stops and the measured value line turns blue. The status of the metric alert fired at the time of the red dot with black outline is set to resolved.
azure-monitor Alerts Log Api Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-api-switch.md
In the past, users used the [legacy Log Analytics Alert API](./api-alerts.md) to
## Process
+View workspaces to upgrade using this [Azure Resource Graph Explorer query](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/resources%0A%7C%20where%20type%20%3D~%20%22microsoft.insights%2Fscheduledqueryrules%22%0A%7C%20where%20properties.isLegacyLogAnalyticsRule%20%3D%3D%20true%0A%7C%20distinct%20tolower%28properties.scopes%5B0%5D%29). Open the [link](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/resources%0A%7C%20where%20type%20%3D~%20%22microsoft.insights%2Fscheduledqueryrules%22%0A%7C%20where%20properties.isLegacyLogAnalyticsRule%20%3D%3D%20true%0A%7C%20distinct%20tolower%28properties.scopes%5B0%5D%29), select all available subscriptions, and run the query.
+ In most cases, the process of switching isn't interactive and doesn't require manual steps. Your alert rules aren't stopped or stalled during or after the switch.
-Do this call to switch all alert rules associated with the specific Log Analytics workspace:
+Do this call to switch all alert rules associated with each of the Log Analytics workspaces:
```
PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview
```
azure-monitor Alerts Metric Multiple Time Series Single Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-multiple-time-series-single-rule.md
Title: Monitor multiple time-series in a single metric alert rule
-description: Alert at scale using a single alert rule for multiple time series
+ Title: Monitor multiple time series in a single metric alert rule
+description: Alert at scale by using a single alert rule for multiple time series.
Last updated 2/23/2022
-# Monitor multiple time-series in a single metric alert rule
+# Monitor multiple time series in a single metric alert rule
-A single metric alert rule can be used to monitor one or many metric time-series, making it easier to monitor resources at scale.
+A single metric alert rule can be used to monitor one or many metric time series. This capability makes it easier to monitor resources at scale.
-## Metric time-series
+## Metric time series
-A metric time-series is a series of measurements (or "metric values") captured over a period of time.
+A metric time series is a series of measurements, or "metric values," captured over a period of time.
For example:
For example:
- The incoming bytes (ingress) to a storage account - The number of failed requests of a web application
+## Alert rule on a single time series
+An alert rule monitors a single time series when it meets all the following conditions:
-## Alert rule on a single time-series
-An alert rule monitors a single time-series when it meets all the following conditions:
-- The rule monitors a single target resource
-- Contains a single condition
-- Evaluates a metric without choosing dimensions (assuming the metric supports dimensions)
+- It monitors a single target resource.
+- It contains a single condition.
+- It evaluates a metric without choosing dimensions (assuming the metric supports dimensions).
-An example of such an alert rule (with only the relevant properties shown):
-- Target resource: *myVM1*
-- Metric: *Percentage CPU*
-- Operator: *Greater Than*
-- Threshold: *70*
+An example of such an alert rule, with only the relevant properties shown:
+- **Target resource**: *myVM1*
+- **Metric**: *Percentage CPU*
+- **Operator**: *Greater Than*
+- **Threshold**: *70*
+
+For this alert rule, a single metric time series is monitored:
-For this alert rule, a single metric time-series is monitored:
- Percentage CPU where *Resource*='myVM1' > 70%
-![An alert rule on a single time-series](media/alerts-metric-multiple-time-series-single-rule/simple-alert-rule.png)
+![Screenshot that shows an alert rule on a single time series.](media/alerts-metric-multiple-time-series-single-rule/simple-alert-rule.png)
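Expressed as a Resource Manager resource, a single time series rule of this shape might look like the following sketch. The subscription, resource group, and rule names are placeholders, and the property set is trimmed to the parts discussed in this article, so treat it as an illustration of the structure rather than a complete template.

```json
{
  "type": "Microsoft.Insights/metricAlerts",
  "apiVersion": "2018-03-01",
  "name": "HighCpuOnMyVM1",
  "location": "global",
  "properties": {
    "severity": 3,
    "enabled": true,
    "scopes": [
      "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Compute/virtualMachines/myVM1"
    ],
    "evaluationFrequency": "PT1M",
    "windowSize": "PT5M",
    "criteria": {
      "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
      "allOf": [
        {
          "criterionType": "StaticThresholdCriterion",
          "name": "Condition1",
          "metricName": "Percentage CPU",
          "timeAggregation": "Average",
          "operator": "GreaterThan",
          "threshold": 70
        }
      ]
    },
    "actions": []
  }
}
```

The later sections build on this shape by widening the scopes, adding conditions, or adding dimensions.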
+
+## Alert rule on multiple time series
+
+An alert rule monitors multiple time series if it uses at least one of the following features:
-## Alert rule on multiple time-series
-An alert rule monitors multiple time-series if it uses at least one of the following features:
- Multiple resources
-- Multiple conditions
+- Multiple conditions
- Multiple dimensions

## Multiple resources (multi-resource)
-A single metric alert rule can monitor multiple resources, provided the resources are of the same type and exist in the same Azure region. Using this type of rule reduces complexity and the total number of alert rules you have to maintain.
+A single metric alert rule can monitor multiple resources, provided the resources are of the same type and exist in the same Azure region. Using this type of rule reduces complexity and the total number of alert rules you have to maintain.
An example of such an alert rule:

-- Target resource: *myVM1, myVM2*
-- Metric: *Percentage CPU*
-- Operator: *Greater Than*
-- Threshold: *70*
-For this alert rule, two metric time-series are being monitored separately:
+- **Target resource**: *myVM1, myVM2*
+- **Metric**: *Percentage CPU*
+- **Operator**: *Greater Than*
+- **Threshold**: *70*
+
+For this alert rule, two metric time series are monitored separately:
+
- Percentage CPU where *Resource*='myVM1' > 70%
- Percentage CPU where *Resource*='myVM2' > 70%
-![A multi-resource alert rule](media/alerts-metric-multiple-time-series-single-rule/multi-resource-alert-rule.png)
-
-In a multi-resource alert rule, the condition is evaluated **separately** for each of the resources (or more accurately, for each of the metric time-series corresponded to each resource). This means that alerts are also fired for each resource separately.
+![Screenshot that shows a multi-resource alert rule.](media/alerts-metric-multiple-time-series-single-rule/multi-resource-alert-rule.png)
+
+In a multi-resource alert rule, the condition is evaluated separately for each of the resources (or more accurately, for each of the metric time series corresponding to each resource). As a result, alerts are also fired for each resource separately.
-For example, assume we've set the alert rule above to monitor for CPU above 70%. In the evaluated time period (that is, the last 5 minutes)
-- The *Percentage CPU* of *myVM1* is greater than 70%
-- The *Percentage CPU* of *myVM2* is at 50%
+For example, assume we've set the preceding alert rule to monitor for CPU above 70%. In the evaluated time period, that is, the last 5 minutes:
-The alert rule triggers on *myVM1*, but not *myVM2*. These triggered alerts are independent. They can also resolve at different times depending on the individual behavior of each of the virtual machines.
+- The *Percentage CPU* of *myVM1* is greater than 70%.
+- The *Percentage CPU* of *myVM2* is at 50%.
+
+The alert rule triggers on *myVM1* but not *myVM2*. These triggered alerts are independent. They can also resolve at different times depending on the individual behavior of each of the virtual machines.
For more information about multi-resource alert rules and the resource types supported for this capability, see [Monitoring at scale using metric alerts in Azure Monitor](alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor).
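In a template, the multi-resource case differs mainly in the `scopes` array, which lists several resources of the same type and region, together with the target resource type and region properties used for multi-resource rules. The following is a hedged fragment with placeholder IDs and an assumed region of `eastus`:

```json
"scopes": [
  "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Compute/virtualMachines/myVM1",
  "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Compute/virtualMachines/myVM2"
],
"targetResourceType": "Microsoft.Compute/virtualMachines",
"targetResourceRegion": "eastus",
"criteria": {
  "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
  "allOf": [
    {
      "criterionType": "StaticThresholdCriterion",
      "name": "HighCpu",
      "metricName": "Percentage CPU",
      "timeAggregation": "Average",
      "operator": "GreaterThan",
      "threshold": 70
    }
  ]
}
```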
-> [!NOTE]
+> [!NOTE]
> In a metric alert rule that monitors multiple resources, only a single condition is allowed.

## Multiple conditions (multi-condition)
-A single metric alert rule can also monitor up to five conditions per alert rule.
+A single metric alert rule can also monitor up to five conditions per alert rule.
For example:

-- Target resource: *myVM1*
+- **Target resource**: *myVM1*
- Condition1
- - Metric: *Percentage CPU*
- - Operator: *Greater Than*
- - Threshold: *70*
+ - **Metric**: *Percentage CPU*
+ - **Operator**: *Greater Than*
+ - **Threshold**: *70*
- Condition2
- - Metric: *Network In Total*
- - Operator: *Greater Than*
- - Threshold: *20 MB*
+ - **Metric**: *Network In Total*
+ - **Operator**: *Greater Than*
+ - **Threshold**: *20 MB*
-For this alert rule, two metric time-series are being monitored:
+For this alert rule, two metric time series are being monitored:
-- Percentage CPU where *Resource*='myVM1' > 70%
-- Network In Total where *Resource*='myVM1' > 20 MB
+- The *Percentage CPU* where *Resource*='myVM1' > 70%.
+- The *Network In Total* where *Resource*='myVM1' > 20 MB.
-![A multi-condition alert rule](media/alerts-metric-multiple-time-series-single-rule/multi-condition-alert-rule.png)
-
-An 'AND' operator is used between the conditions. The alert rule fires an alert when **all** conditions are met. The fired alert resolves if at least one of the conditions is no longer met.
+![Screenshot that shows a multi-condition alert rule.](media/alerts-metric-multiple-time-series-single-rule/multi-condition-alert-rule.png)
-> [!NOTE]
-> There are restrictions when using dimensions in an alert rule with multiple conditions. For more information, see [Restrictions when using dimensions in a metric alert rule with multiple conditions](alerts-troubleshoot-metric.md#restrictions-when-using-dimensions-in-a-metric-alert-rule-with-multiple-conditions).
+An AND operator is used between the conditions. The alert rule fires an alert when *all* conditions are met. The fired alert resolves if at least one of the conditions is no longer met.
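In a template, each condition becomes one entry in the `allOf` array of the criteria, and the alert fires only when every entry is breached. The following sketch mirrors the two conditions above; the *Network In Total* threshold is expressed in bytes (20 MB shown here as 20,000,000), and the aggregations are assumptions.

```json
"criteria": {
  "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
  "allOf": [
    {
      "criterionType": "StaticThresholdCriterion",
      "name": "Condition1",
      "metricName": "Percentage CPU",
      "timeAggregation": "Average",
      "operator": "GreaterThan",
      "threshold": 70
    },
    {
      "criterionType": "StaticThresholdCriterion",
      "name": "Condition2",
      "metricName": "Network In Total",
      "timeAggregation": "Total",
      "operator": "GreaterThan",
      "threshold": 20000000
    }
  ]
}
```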
+> [!NOTE]
+> There are restrictions when you use dimensions in an alert rule with multiple conditions. For more information, see [Restrictions when using dimensions in a metric alert rule with multiple conditions](alerts-troubleshoot-metric.md#restrictions-when-using-dimensions-in-a-metric-alert-rule-with-multiple-conditions).
## Multiple dimensions (multi-dimension)
-A single metric alert rule can also monitor multiple dimension values of a metric. The dimensions of a metric are name-value pairs that carry additional data to describe the metric value. For example, the **Transactions** metric of a storage account has a dimension called **API name**, describing the name of the API called by each transaction (for example, GetBlob, DeleteBlob, PutPage). The use of dimensions is optional, but it allows filtering the metric and only monitoring specific time-series, instead of monitoring the metric as an aggregate of all the dimensional values put together.
+A single metric alert rule can also monitor multiple dimension values of a metric. The dimensions of a metric are name-value pairs that carry more data to describe the metric value. For example, the **Transactions** metric of a storage account has a dimension called **API name**. This dimension describes the name of the API called by each transaction, for example, GetBlob, DeleteBlob, and PutPage. The use of dimensions is optional, but it allows filtering the metric and only monitoring specific time series, instead of monitoring the metric as an aggregate of all the dimensional values put together.
-For example, you can choose to have an alert fired when the number of transactions is high across all API names (which is the aggregated data), or further break it down into only alerting when the number of transactions is high for specific API names.
+For example, you can choose to have an alert fired when the number of transactions is high across all API names (which is the aggregated data). Or you can further break it down into only alerting when the number of transactions is high for specific API names.
An example of an alert rule monitoring multiple dimensions is:

-- Target resource: *myStorage1*
-- Metric: *Transactions*
-- Dimensions
+- **Target resource**: *myStorage1*
+- **Metric**: *Transactions*
+- **Dimensions**:
    * API name = *GetBlob, DeleteBlob, PutPage*
-- Operator: *Greater Than*
-- Threshold: *70*
+- **Operator**: *Greater Than*
+- **Threshold**: *70*
-For this alert rule, three metric time-series are being monitored:
+For this alert rule, three metric time series are being monitored:
- Transactions where *Resource*='myStorage1' and *API Name*='GetBlob' > 70
- Transactions where *Resource*='myStorage1' and *API Name*='DeleteBlob' > 70
- Transactions where *Resource*='myStorage1' and *API Name*='PutPage' > 70
-![A multi-dimension alert rule with values from one dimension](media/alerts-metric-multiple-time-series-single-rule/multi-dimension-1.png)
+![Screenshot that shows a multi-dimension alert rule with values from one dimension.](media/alerts-metric-multiple-time-series-single-rule/multi-dimension-1.png)
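In a template, the dimension filter is expressed as a `dimensions` array on the criterion. The following fragment is a minimal sketch of the example above; the dimension name `ApiName` and the `Total` aggregation are assumptions, so check the dimension names that your storage account's **Transactions** metric actually emits.

```json
{
  "criterionType": "StaticThresholdCriterion",
  "name": "HighTransactions",
  "metricName": "Transactions",
  "timeAggregation": "Total",
  "operator": "GreaterThan",
  "threshold": 70,
  "dimensions": [
    {
      "name": "ApiName",
      "operator": "Include",
      "values": [ "GetBlob", "DeleteBlob", "PutPage" ]
    }
  ]
}
```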
-A multi-dimension metric alert rule can also monitor multiple dimension values from **different** dimensions of a metric. In this case, the alert rule **separately** monitors all the dimensions value combinations of the selected dimension values.
+A multi-dimension metric alert rule can also monitor multiple dimension values from *different* dimensions of a metric. In this case, the alert rule *separately* monitors all the dimension value combinations of the selected dimension values.
An example of this type of alert rule:

-- Target resource: *myStorage1*
-- Metric: *Transactions*
-- Dimensions
+- **Target resource**: *myStorage1*
+- **Metric**: *Transactions*
+- **Dimensions**:
    * API name = *GetBlob, DeleteBlob, PutPage*
    * Authentication = *SAS, AccountKey*
-- Operator: *Greater Than*
-- Threshold: *70*
+- **Operator**: *Greater Than*
+- **Threshold**: *70*
-For this alert rule, six metric time-series are being monitored separately:
+For this alert rule, six metric time series are being monitored separately:
- Transactions where *Resource*='myStorage1' and *API Name*='GetBlob' and *Authentication*='SAS' > 70
- Transactions where *Resource*='myStorage1' and *API Name*='GetBlob' and *Authentication*='AccountKey' > 70
For this alert rule, six metric time-series are being monitored separately:
- Transactions where *Resource*='myStorage1' and *API Name*='PutPage' and *Authentication*='SAS' > 70
- Transactions where *Resource*='myStorage1' and *API Name*='PutPage' and *Authentication*='AccountKey' > 70
-![A multi-dimension alert rule with values from multiple dimensions](media/alerts-metric-multiple-time-series-single-rule/multi-dimension-2.png)
-
-### Advanced multi-dimension features
+![Screenshot that shows a multi-dimension alert rule with values from multiple dimensions.](media/alerts-metric-multiple-time-series-single-rule/multi-dimension-2.png)
-1. **Selecting all current and future dimensions** – You can choose to monitor all possible values of a dimension, including future values. Such an alert rule will scale automatically to monitor all values of the dimension without you needing to modify the alert rule every time a dimension value is added or removed.
-2. **Excluding dimensions** – Selecting the '≠' (exclude) operator for a dimension value is equivalent to selecting all other values of that dimension, including future values.
-3. **New and custom dimensions** – The dimension values displayed in the Azure portal are based on metric data collected in the last day. If the dimension value you're looking for isn't yet emitted, you can add a custom dimension value.
-4. **Matching dimensions with a prefix** - You can choose to monitor all dimension values that start with a specific pattern, by selecting the 'Starts with' operator and entering a custom prefix.
+### Advanced multi-dimension features
-![Advanced multi-dimension features](media/alerts-metric-multiple-time-series-single-rule/advanced-features.png)
+- **Select all current and future dimensions**: You can choose to monitor all possible values of a dimension, including future values. Such an alert rule will scale automatically to monitor all values of the dimension without you needing to modify the alert rule every time a dimension value is added or removed.
+- **Exclude dimensions**: Selecting the **≠** (exclude) operator for a dimension value is equivalent to selecting all other values of that dimension, including future values.
+- **Add new and custom dimensions**: The dimension values displayed in the Azure portal are based on metric data collected in the last day. If the dimension value you're looking for isn't yet emitted, you can add a custom dimension value.
+- **Match dimensions with a prefix**: You can choose to monitor all dimension values that start with a specific pattern by selecting the **Starts with** operator and entering a custom prefix.
+![Screenshot that shows advanced multi-dimension features.](media/alerts-metric-multiple-time-series-single-rule/advanced-features.png)
## Metric alerts pricing

The pricing of metric alert rules is available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-When creating a metric alert rule, the provided price estimation is based on the selected features and the number of monitored time-series, which is determined from the rule configuration and current metric values. However, the monthly charge is based on actual evaluations of the time-series, and can therefore differ from the original estimation if some time-series don't have data to evaluate, or if the alert rule uses features that can make it scale dynamically.
+When you create a metric alert rule, the provided price estimation is based on the selected features and the number of monitored time series. This number is determined from the rule configuration and current metric values. The monthly charge is based on actual evaluations of the time series, so it can differ from the original estimation if some time series don't have data to evaluate, or if the alert rule uses features that can make it scale dynamically.
-For example, an alert rule can show a high price estimation if it leverages the multi-dimension feature, and a large number of dimension values combinations are selected, resulting in the monitoring of many time-series. But the actual charge for that alert rule can be lower if not all the time-series resulting from the dimension values combinations actually have data to evaluate.
+For example, an alert rule can show a high price estimation if it uses the multi-dimension feature, and a large number of dimension values combinations are selected, which results in the monitoring of many time series. But the actual charge for that alert rule can be lower if not all the time series resulting from the dimension values combinations actually have data to evaluate.
## Number of time series monitored by a single alert rule
-To prevent excess costs, each alert rule can monitor up to 5000 time-series by default. To lift this limit from your subscription, open a support ticket.
-
+To prevent excess costs, each alert rule can monitor up to 5,000 time series by default. To lift this limit from your subscription, open a support ticket.
## Next steps
-Learn more about monitoring at scale using metric alerts and [dynamic thresholds](../alerts/alerts-dynamic-thresholds.md).
+Learn more about monitoring at scale by using metric alerts and [dynamic thresholds](../alerts/alerts-dynamic-thresholds.md).
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md
If you have 100 regions, 200 departments, and 2,000 customers, that gives you 10
Again, this limit isn't for an individual metric. It's for the sum of all such metrics across a subscription and region.
-The following steps will provide more information to assist with troubleshooting.
+Follow the steps below to see your current total number of active time series and to get more information to assist with troubleshooting.
1. Navigate to the Monitor section of the Azure portal.
1. Select **Metrics** on the left hand side.
azure-monitor Metrics Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-getting-started.md
Title: Getting started with Azure metrics explorer
-description: Learn how to create your first metric chart with Azure metrics explorer.
+ Title: Get started with Azure Monitor metrics explorer
+description: Learn how to create your first metric chart with Azure Monitor metrics explorer.
-# Getting started with Azure Metrics Explorer
+# Get started with metrics explorer
-## Where do I start
-Azure Monitor metrics explorer is a component of the Microsoft Azure portal that allows plotting charts, visually correlating trends, and investigating spikes and dips in metrics' values. Use the metrics explorer to investigate the health and utilization of your resources. Start in the following order:
+Azure Monitor metrics explorer is a component of the Azure portal that you can use to plot charts, visually correlate trends, and investigate spikes and dips in metrics' values. Use metrics explorer to investigate the health and utilization of your resources.
-1. [Pick a resource and a metric](#create-your-first-metric-chart) and you see a basic chart. Then [select a time range](#select-a-time-range) that is relevant for your investigation.
+## Where do I start?
+
+Start in the following order:
+
+1. [Pick a resource and a metric](#create-your-first-metric-chart) and you see a basic chart. Then [select a time range](#select-a-time-range) that's relevant for your investigation.
1. Try [applying dimension filters and splitting](#apply-dimension-filters-and-splitting). The filters and splitting allow you to analyze which segments of the metric contribute to the overall metric value and identify possible outliers.
-1. Use [advanced settings](#advanced-chart-settings) to customize the chart before pinning to dashboards. [Configure alerts](../alerts/alerts-metric-overview.md) to receive notifications when the metric value exceeds or drops below a threshold.
+1. Use [advanced settings](#advanced-chart-settings) to customize the chart before you pin it to dashboards. [Configure alerts](../alerts/alerts-metric-overview.md) to receive notifications when the metric value exceeds or drops below a threshold.
## Create your first metric chart

To create a metric chart, from your resource, resource group, subscription, or Azure Monitor view, open the **Metrics** tab and follow these steps:
-1. Select the "Select a scope" button to open the resource scope picker. This allows you to select the resource(s) you want to see metrics for. The resource should already be populated if you opened metrics explorer from the resource's menu. To learn how to view metrics across multiple resources, [read this article](./metrics-dynamic-scope.md).
- > ![Select a resource](./media/metrics-getting-started/scope-picker.png)
+1. Select the **Select a scope** button to open the resource scope picker. You can use the picker to select the resources you want to see metrics for. The resource should already be populated if you opened metrics explorer from the resource's menu. To learn how to view metrics across multiple resources, see [View multiple resources in Azure Monitor metrics explorer](./metrics-dynamic-scope.md).
-1. For some resources, you must pick a namespace. The namespace is just a way to organize metrics so that you can easily find them. For example, storage accounts have separate namespaces for storing Files, Tables, Blobs, and Queues metrics. Many resource types only have one namespace.
+ > ![Screenshot that shows selecting a resource.](./media/metrics-getting-started/scope-picker.png)
+
+1. For some resources, you must pick a namespace. The namespace is a way to organize metrics so that you can easily find them. For example, storage accounts have separate namespaces for storing metrics for files, tables, blobs, and queues. Many resource types have only one namespace.
1. Select a metric from a list of available metrics.
- > ![Select a metric](./media/metrics-getting-started/metrics-dropdown.png)
+ > ![Screenshot that shows selecting a metric.](./media/metrics-getting-started/metrics-dropdown.png)
1. Optionally, you can [change the metric aggregation](../essentials/metrics-charts.md#aggregation). For example, you might want your chart to show minimum, maximum, or average values of the metric.

> [!TIP]
-> Use the **Add metric** button and repeat these steps if you want to see multiple metrics plotted in the same chart. For multiple charts in one view, select the **Add chart** button on top.
+> Select **Add metric** and repeat these steps to see multiple metrics plotted in the same chart. For multiple charts in one view, select **Add chart**.
## Select a time range

> [!WARNING]
-> [Most metrics in Azure are stored for 93 days](../essentials/data-platform-metrics.md#retention-of-metrics). However, you can query no more than 30 days worth of data on any single chart. You can [pan](metrics-charts.md#pan) the chart to view the full retention. The 30 day limitation doesn't apply to [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md#log-based-metrics).
+> [Most metrics in Azure are stored for 93 days](../essentials/data-platform-metrics.md#retention-of-metrics). You can query no more than 30 days' worth of data on any single chart. You can [pan](metrics-charts.md#pan) the chart to view the full retention. The 30-day limitation doesn't apply to [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md#log-based-metrics).
-By default, the chart shows the most recent 24 hours of metrics data. Use the **time picker** panel to change the time range, zoom in, or zoom out on your chart.
+By default, the chart shows the most recent 24 hours of metrics data. Use the **time picker** panel to change the time range, zoom in, or zoom out on your chart.
-![Change time range panel](./media/metrics-getting-started/time.png)
+![Screenshot that shows changing the time range panel.](./media/metrics-getting-started/time.png)
> [!TIP]
-> Use the **time brush** to investigate an interesting area of the chart (spike or a dip). Put the mouse pointer at the beginning of the area, click and hold the left mouse button, drag to the other side of area and then release the button. The chart will zoom in on that time range.
+> Use the **time brush** to investigate an interesting area of the chart like a spike or a dip. Position the mouse pointer at the beginning of the area, select and hold the left mouse button, drag to the other side of the area, and then release the button. The chart will zoom in on that time range.
## Apply dimension filters and splitting
-[Filtering](../essentials/metrics-charts.md#filters) and [splitting](../essentials/metrics-charts.md#apply-splitting) are powerful diagnostic tools for the metrics that have dimensions. These features show how various metric segments ("dimension values") impact the overall value of the metric, and allow you to identify possible outliers.
--- **Filtering** lets you choose which dimension values are included in the chart. For example, you might want to show successful requests when charting the *server response time* metric. You would need to apply the filter on the *success of request* dimension.
+[Filtering](../essentials/metrics-charts.md#filters) and [splitting](../essentials/metrics-charts.md#apply-splitting) are powerful diagnostic tools for the metrics that have dimensions. These features show how various metric segments ("dimension values") affect the overall value of the metric. You can use them to identify possible outliers.
-- **Splitting** controls whether the chart displays separate lines for each value of a dimension, or aggregates the values into a single line. For example, you can see one line for an average response time across all server instances, or see separate lines for each server. You would need to apply splitting on the *server instance* dimension to see separate lines.
+- **Filtering** lets you choose which dimension values are included in the chart. For example, you might want to show successful requests when you chart the *server response time* metric. You apply the filter on the *success of request* dimension.
+- **Splitting** controls whether the chart displays separate lines for each value of a dimension or aggregates the values into a single line. For example, you can see one line for an average response time across all server instances. Or you can see separate lines for each server. You apply splitting on the *server instance* dimension to see separate lines.
-See [examples of the charts](../essentials/metric-chart-samples.md) that have filtering and splitting applied. The article shows the steps were used to configure the charts.
+For examples that have filtering and splitting applied, see [Metric chart examples](../essentials/metric-chart-samples.md). The article shows the steps that were used to configure the charts.
## Share your metric chart
-There are three ways to share your metric chart. See the instructions below on how to share information from your metrics charts using Excel, a link and a workbook.
-
+
+There are three ways to share your metric chart. See the following instructions on how to share information from your metric charts by using Excel, a link, or a workbook.
+ ### Download to Excel
-Select "Share" and "Download to Excel". Your download should start immediately.
+Select **Share** > **Download to Excel**. Your download should start immediately.
+ ### Share a link
-Select "Share" and "Copy link". You should get a notification that the link was copied successfully.
+Select **Share** > **Copy link**. You should get a notification that the link was copied successfully.
+ ### Send to workbook
-Select "Share" and "Send to Workbook". The **Send to Workbook** window opens for you to send the metric chart to a new or existing workbook.
+Select **Share** > **Send to Workbook**. In the **Send to Workbook** window, you can send the metric chart to a new or existing workbook.
## Advanced chart settings
-You can customize chart style, title, and modify advanced chart settings. When done with customization, pin it to a dashboard or save to a workbook to save your work. You can also configure metrics alerts. Follow [product documentation](../essentials/metrics-charts.md) to learn about these and other advanced features of Azure Monitor metrics explorer.
+You can customize the chart style and title, and modify advanced chart settings. When you're finished with customization, pin the chart to a dashboard or save it to a workbook. You can also configure metrics alerts. Follow [product documentation](../essentials/metrics-charts.md) to learn about these and other advanced features of Azure Monitor metrics explorer.
## Next steps
-* [Learn about advanced features of Metrics Explorer](../essentials/metrics-charts.md)
-* [Viewing multiple resources in Metrics Explorer](./metrics-dynamic-scope.md)
-* [Troubleshooting Metrics Explorer](metrics-troubleshoot.md)
+* [Learn about advanced features of metrics explorer](../essentials/metrics-charts.md)
+* [Viewing multiple resources in metrics explorer](./metrics-dynamic-scope.md)
+* [Troubleshooting metrics explorer](metrics-troubleshoot.md)
* [See a list of available metrics for Azure services](./metrics-supported.md) * [See examples of configured charts](../essentials/metric-chart-samples.md)
azure-monitor Data Collection Rule Sample Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collection-rule-sample-custom-logs.md
The sample [data collection rule](../essentials/data-collection-rule-overview.md
```json
{
  "properties": {
- "dataCollectionEndpointId": "https://my-dcr.westus2-1.ingest.monitor.azure.com",
+ "dataCollectionEndpointId": "/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/my-resource-groups/providers/Microsoft.Insights/dataCollectionEndpoints/my-data-collection-endpoint",
"streamDeclarations": { "Custom-MyTableRawData": { "columns": [
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-storage.md
Supported data types:
* [IIS Logs](../agents/data-sources-iis-logs.md)

## Using Private links
-Customer-managed storage accounts are used to ingest Custom logs or IIS logs when private links are used to connect to Azure Monitor resources. The ingestion process of these data types first uploads logs to an intermediary Azure Storage account, and only then ingests them to a workspace.
+Customer-managed storage accounts are used to ingest Custom logs when private links are used to connect to Azure Monitor resources. The ingestion process of these data types first uploads logs to an intermediary Azure Storage account, and only then ingests them to a workspace.
### Using a customer-managed storage account over a Private Link

#### Workspace requirements
azure-netapp-files Azure Netapp Files Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-metrics.md
na Previously updated : 09/29/2021 Last updated : 08/11/2022

# Metrics for Azure NetApp Files

Azure NetApp Files provides metrics on allocated storage, actual storage usage, volume IOPS, and latency. By analyzing these metrics, you can gain a better understanding of the usage pattern and volume performance of your NetApp accounts.
-You can find metrics for a capacity pool or volume by selecting the **capacity pool** or **volume**. Then click **Metric** to view the available metrics:
+You can find metrics for a capacity pool or volume by selecting the **capacity pool** or **volume**. Then select **Metric** to view the available metrics:
[ ![Snapshot that shows how to navigate to the Metric pull-down.](../media/azure-netapp-files/metrics-navigate-volume.png) ](../media/azure-netapp-files/metrics-navigate-volume.png#lightbox)
You can find metrics for a capacity pool or volume by selecting the **capacity p
- *Is volume replication transferring* Whether the status of the volume replication is 'transferring'.
+- *Volume replication lag time* <br>
+ Lag time is the actual amount of time the replication lags behind the source. It indicates the age of the replicated data in the destination volume relative to the source volume.
+
+> [!NOTE]
+> When assessing the health status of the volume replication, consider the volume replication lag time. If the lag time is greater than the replication schedule, the replication volume will not catch up to the source. To resolve this issue, adjust the replication speed or the replication schedule.
+ - *Volume replication last transfer duration* The amount of time in seconds it took for the last transfer to complete.
You can find metrics for a capacity pool or volume by selecting the **capacity p
Write throughput in bytes per second. * *Other throughput*
- Other throughput (that is not read or write) in bytes per second.
+ Other throughput (that isn't read or write) in bytes per second.
## Volume backup metrics
You can find metrics for a capacity pool or volume by selecting the **capacity p
Shows whether the last volume backup or restore operation is successfully completed. `1` is successful. `0` is unsuccessful. * *Is Volume Backup Suspended*
- Shows whether the backup policy is suspended for the volume. `1` is not suspended. `0` is suspended.
+ Shows whether the backup policy is suspended for the volume. `1` isn't suspended. `0` is suspended.
* *Volume Backup Bytes* The total bytes backed up for this volume.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 08/08/2022 Last updated : 08/11/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files volumes are designed to be contained in a special purpose sub
## Configurable network features
- The [**Standard network features**](configure-network-features.md) configuration for Azure NetApp Files is available for public preview. After registering for this feature with your subscription, you can create new volumes choosing *Standard* or *Basic* network features in supported regions. In regions where the Standard network features aren't supported, the volume defaults to using the Basic network features.
+ Register for the [**configurable network features**](configure-network-features.md) to create volumes with standard network features. You can create new volumes choosing *Standard* or *Basic* network features in supported regions. In regions where the Standard network features aren't supported, the volume defaults to using the Basic network features.
* ***Standard*** Selecting this setting enables higher IP limits and standard VNet features such as [network security groups](../virtual-network/network-security-groups-overview.md) and [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) on delegated subnets, and additional connectivity patterns as indicated in this article.
Azure NetApp Files standard network features are supported for the following reg
You should understand a few considerations when you plan for Azure NetApp Files network.
+> [!IMPORTANT]
+> [!INCLUDE [Standard network features pricing](includes/standard-networking-pricing.md)]
+ ### Constraints The following table describes what's supported for each network features configuration:
The following table describes what's supported for each network features confi
| Load balancers for Azure NetApp Files traffic | No | No | | Dual stack (IPv4 and IPv6) VNet | No <br> (IPv4 only supported) | No <br> (IPv4 only supported) |
+> [!IMPORTANT]
+> Upgrading from Basic to Standard network features is not currently supported.
+ ### Supported network topologies The following table describes the network topologies supported by each network features configuration of Azure NetApp Files.
The following table describes the network topologies supported by each network f
|||| | Connectivity to volume in a local VNet | Yes | Yes | | Connectivity to volume in a peered VNet (Same region) | Yes | Yes |
-| Connectivity to volume in a peered VNet (Cross region or global peering) | No | No |
+| Connectivity to volume in a peered VNet (Cross region or global peering) | Yes* | No |
| Connectivity to a volume over ExpressRoute gateway | Yes | Yes | | ExpressRoute (ER) FastPath | Yes | No | | Connectivity from on-premises to a volume in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit | Yes | Yes |
The following table describes the network topologies supported by each network f
| Connectivity over Active/Passive VPN gateways | Yes | Yes | | Connectivity over Active/Active VPN gateways | Yes | No | | Connectivity over Active/Active Zone Redundant gateways | No | No |
-| Connectivity over Virtual WAN (VWAN) | No | No |
+| Connectivity over Virtual WAN (VWAN) | No | No |
+
+\* This option will incur a charge on ingress and egress traffic that uses a virtual network peering connection. For more information, see [Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/). For more general information, see [Virtual network peering](../virtual-network/virtual-network-peering-overview.md).
## Virtual network for Azure NetApp Files volumes
Before provisioning an Azure NetApp Files volume, you need to create an Azure vi
Subnets segment the virtual network into separate address spaces that are usable by the Azure resources in them. Azure NetApp Files volumes are contained in a special-purpose subnet called a [delegated subnet](../virtual-network/virtual-network-manage-subnet.md).
-Subnet delegation gives explicit permissions to the Azure NetApp Files service to create service-specific resources in the subnet. It uses a unique identifier in deploying the service. In this case, a network interface is created to enable connectivity to Azure NetApp Files.
+Subnet delegation gives explicit permissions to the Azure NetApp Files service to create service-specific resources in the subnet. It uses a unique identifier in deploying the service. In this case, a network interface is created to enable connectivity to Azure NetApp Files.
If you use a new VNet, you can create a subnet and delegate the subnet to Azure NetApp Files by following instructions in [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md). You can also delegate an existing empty subnet that's not delegated to other services.
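For illustration, the following Bicep sketch shows one way such a delegated subnet might be declared. The VNet name, address ranges, and API version are placeholder assumptions rather than values taken from this article.

```bicep
// Minimal sketch: create a VNet with a subnet delegated to Azure NetApp Files.
// Names, address ranges, and the API version are placeholders.
param location string = resourceGroup().location

resource vnet 'Microsoft.Network/virtualNetworks@2022-01-01' = {
  name: 'anf-vnet'
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/16'
      ]
    }
    subnets: [
      {
        name: 'anf-delegated-subnet'
        properties: {
          addressPrefix: '10.0.1.0/24'
          delegations: [
            {
              name: 'netAppDelegation'
              properties: {
                serviceName: 'Microsoft.Netapp/volumes'   // delegation to Azure NetApp Files
              }
            }
          ]
        }
      }
    ]
  }
}
```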
User-defined routes (UDRs) and Network security groups (NSGs) are only supported
> [!NOTE] > Associating NSGs at the network interface level is not supported for the Azure NetApp Files network interfaces.
-If the subnet has a combination of volumes with the Standard and Basic network features (or for existing volumes not registered for the feature preview), UDRs and NSGs applied on the delegated subnets will only apply to the volumes with the Standard network features.
+If the subnet has a combination of volumes with the Standard and Basic network features (or for existing volumes not registered for the feature), UDRs and NSGs applied on the delegated subnets will only apply to the volumes with the Standard network features.
Configuring user-defined routes (UDRs) on the source VM subnets with address prefix of delegated subnet and next hop as NVA isn't supported for volumes with the Basic network features. Such a setting will result in connectivity issues.
Configuring user-defined routes (UDRs) on the source VM subnets with address pre
The following diagram illustrates an Azure-native environment:
-![Azure-native networking environment](../media/azure-netapp-files/azure-netapp-files-network-azure-native-environment.png)
### Local VNet A basic scenario is to create or connect to an Azure NetApp Files volume from a VM in the same VNet. For VNet 2 in the diagram, Volume 1 is created in a delegated subnet and can be mounted on VM 1 in the default subnet.
-### VNet peering
+### <a name="vnet-peering"></a> VNet peering
-If you have additional VNets in the same region that need access to each other's resources, the VNets can be connected using [VNet peering](../virtual-network/virtual-network-peering-overview.md) to enable secure connectivity through the Azure infrastructure.
+If you have other VNets in the same region that need access to each other's resources, the VNets can be connected using [VNet peering](../virtual-network/virtual-network-peering-overview.md) to enable secure connectivity through the Azure infrastructure.
Consider VNet 2 and VNet 3 in the diagram above. If VM 1 needs to connect to VM 2 or Volume 2, or if VM 2 needs to connect to VM 1 or Volume 1, then you need to enable VNet peering between VNet 2 and VNet 3.
-Also, consider a scenario where VNet 1 is peered with VNet 2, and VNet 2 is peered with VNet 3 in the same region. The resources from VNet 1 can connect to resources in VNet 2, but it can't connect to resources in VNet 3 unless VNet 1 and VNet 3 are peered.
+Also, consider a scenario where VNet 1 is peered with VNet 2, and VNet 2 is peered with VNet 3 in the same region. The resources from VNet 1 can connect to resources in VNet 2 but can't connect to resources in VNet 3 unless VNet 1 and VNet 3 are peered.
In the diagram above, although VM 3 can connect to Volume 1, VM 4 can't connect to Volume 2. The reason for this is that the spoke VNets aren't peered, and _transit routing isn't supported over VNet peering_.
+### Global or cross-region VNet peering
+
+The following diagram illustrates an Azure-native environment with cross-region VNet peering.
++
+With the standard network feature, VMs are able to connect to volumes in another region via global or cross-region VNet peering. The above diagram adds a second region to the configuration in the [local VNet peering section](#vnet-peering). For VNet 4 in this diagram, an Azure NetApp Files volume is created in a delegated subnet and can be mounted on VM5 in the application subnet.
+
+In the diagram, VM2 in Region 1 can connect to Volume 3 in Region 2. VM5 in Region 2 can connect to Volume 2 in Region 1 via VNet peering between Region 1 and Region 2.
+ ## Hybrid environments The following diagram illustrates a hybrid environment:
-![Hybrid networking environment](../media/azure-netapp-files/azure-netapp-files-network-hybrid-environment.png)
-In the hybrid scenario, applications from on-premises datacenters need access to the resources in Azure. This is the case whether you want to extend your datacenter to Azure, or you want to use Azure native services or for disaster recovery. See [VPN Gateway planning options](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json#planningtable) for information on how to connect multiple resources on-premises to resources in Azure through a site-to-site VPN or an ExpressRoute.
+In the hybrid scenario, applications from on-premises datacenters need access to the resources in Azure. This is the case whether you want to extend your datacenter to Azure, use Azure native services, or set up disaster recovery. See [VPN Gateway planning options](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json#planningtable) for information on how to connect multiple resources on-premises to resources in Azure through a site-to-site VPN or an ExpressRoute.
In a hybrid hub-spoke topology, the hub VNet in Azure acts as a central point of connectivity to your on-premises network. The spokes are VNets peered with the hub, and they can be used to isolate workloads.
In the topology illustrated above, the on-premises network is connected to a hub
* VM 3 in the hub VNet can connect to Volume 2 in spoke VNet 1 and Volume 3 in spoke VNet 2. * VM 4 from spoke VNet 1 and VM 5 from spoke VNet 2 can connect to Volume 1 in the hub VNet. * VM 4 in spoke VNet 1 can't connect to Volume 3 in spoke VNet 2. Also, VM 5 in spoke VNet2 can't connect to Volume 2 in spoke VNet 1. This is the case because the spoke VNets aren't peered and _transit routing isn't supported over VNet peering_.
-* In the above architecture if there's a gateway in the spoke VNet as well, the connectivity to the ANF volume from on-prem connecting over the gateway in the Hub will be lost. By design, preference would be given to the gateway in the spoke VNet and so only machines connecting over that gateway can connect to the ANF volume.
+* In the above architecture, if there's also a gateway in the spoke VNet, connectivity to the ANF volume from on-premises over the gateway in the hub is lost. By design, preference is given to the gateway in the spoke VNet, so only machines connecting over that gateway can connect to the ANF volume.
## Next steps
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
na Previously updated : 08/03/2021 Last updated : 08/11/2022
The **Network Features** functionality enables you to indicate whether you want
This article helps you understand the options and shows you how to configure network features.
->[!IMPORTANT]
->The **Network Features** functionality is currently in public preview. It is not available in Azure Government regions. See [supported regions](azure-netapp-files-network-topologies.md#supported-regions) for a full list.
+The **Network Features** functionality is not available in Azure Government regions. See [supported regions](azure-netapp-files-network-topologies.md#supported-regions) for a full list.
## Options for network features
Two settings are available for network features:
## Register the feature
-The network features capability is currently in public preview. If you are using this feature for the first time, you need to register the feature first.
+Follow the registration steps if you're using the feature for the first time.
1. Register the feature by running the following commands:
This section shows you how to set the Network Features option.
![Screenshot that shows volume creation for Basic network features.](../media/azure-netapp-files/network-features-create-basic.png)
-2. Before completing the volume creation process, you can display the specified network features setting in the **Review + Create** tab of the Create a Volume screen. Click **Create** to complete the volume creation.
+2. Before completing the volume creation process, you can display the specified network features setting in the **Review + Create** tab of the Create a Volume screen. Select **Create** to complete the volume creation.
![Screenshot that shows the Review and Create tab of volume creation.](../media/azure-netapp-files/network-features-review-create-tab.png)
-3. You can click **Volumes** to display the network features setting for each volume:
+3. You can select **Volumes** to display the network features setting for each volume:
[ ![Screenshot that shows the Volumes page displaying the network features setting.](../media/azure-netapp-files/network-features-volume-list.png)](../media/azure-netapp-files/network-features-volume-list.png#lightbox)
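For automated deployments, the same option can be set in a template. The following Bicep sketch sets **Network Features** on a volume; the account, pool, and volume names, the quota, and the API version are placeholder assumptions, and the NetApp account and capacity pool are assumed to already exist.

```bicep
// Minimal sketch: create a volume with Standard network features.
// Names, quota, and API version are placeholders.
param delegatedSubnetId string
param location string = resourceGroup().location

resource account 'Microsoft.NetApp/netAppAccounts@2022-05-01' existing = {
  name: 'myaccount'
}

resource pool 'Microsoft.NetApp/netAppAccounts/capacityPools@2022-05-01' existing = {
  parent: account
  name: 'mypool'
}

resource volume 'Microsoft.NetApp/netAppAccounts/capacityPools/volumes@2022-05-01' = {
  parent: pool
  name: 'myvol1'
  location: location
  properties: {
    creationToken: 'myvol1'        // export path for the volume
    usageThreshold: 107374182400   // 100 GiB quota, in bytes
    subnetId: delegatedSubnetId
    networkFeatures: 'Standard'    // or 'Basic'
  }
}
```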
azure-netapp-files Dynamic Change Volume Service Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dynamic-change-volume-service-level.md
na Previously updated : 04/18/2022 Last updated : 08/11/2022 # Dynamically change the service level of a volume
-You can change the service level of an existing volume by moving the volume to another capacity pool in the same NetApp account that uses the [service level](azure-netapp-files-service-levels.md) you want for the volume. This in-place service-level change for the volume does not require that you migrate data. It also does not impact access to the volume.
+You can change the service level of an existing volume by moving the volume to another capacity pool in the same NetApp account that uses the [service level](azure-netapp-files-service-levels.md) you want for the volume. This in-place service-level change for the volume does not require that you migrate data. It also does not affect access to the volume.
This functionality enables you to meet your workload needs on demand. You can change an existing volume to use a higher service level for better performance, or to use a lower service level for cost optimization. For example, if the volume is currently in a capacity pool that uses the *Standard* service level and you want the volume to use the *Premium* service level, you can move the volume dynamically to a capacity pool that uses the *Premium* service level.
The capacity pool that you want to move the volume to must already exist. The ca
* This functionality is supported within the same NetApp account. You can't move the volume to a capacity pool in a different NetApp Account.
-* After the volume is moved to another capacity pool, you will no longer have access to the previous volume activity logs and volume metrics. The volume will start with new activity logs and metrics under the new capacity pool.
+* After the volume is moved to another capacity pool, you'll no longer have access to the previous volume activity logs and volume metrics. The volume will start with new activity logs and metrics under the new capacity pool.
* If you move a volume to a capacity pool of a higher service level (for example, moving from *Standard* to *Premium* or *Ultra* service level), you must wait at least seven days before you can move that volume *again* to a capacity pool of a lower service level (for example, moving from *Ultra* to *Premium* or *Standard*). You can always change to a higher service level without a wait time.+
+* You cannot change the service level for volumes in a cross-region replication relationship.
## Move a volume to another capacity pool
The capacity pool that you want to move the volume to must already exist. The ca
![Change pool](../media/azure-netapp-files/change-pool.png)
-3. Click **OK**.
+3. Select **OK**.
## Next steps
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 07/29/2022 Last updated : 08/11/2022 - # What's new in Azure NetApp Files Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## August 2022
+
+* [Standard network features](configure-network-features.md) are now generally available.
+ Standard network features now include Global VNet peering. You must still [register the feature](configure-network-features.md#register-the-feature) before using it.
+ [!INCLUDE [Standard network features pricing](includes/standard-networking-pricing.md)]
+
+* [Cloud Backup for Virtual Machines on Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/install-cloud-backup-virtual-machines.md)
+ You can now create VM-consistent snapshot backups of VMs on Azure NetApp Files datastores using [Cloud Backup for Virtual Machines](../azure-vmware/backup-azure-netapp-files-datastores-vms.md). The associated virtual appliance installs in the Azure VMware Solution cluster and provides policy-based, automated, and consistent backups of VMs. It integrates with Azure NetApp Files snapshot technology for fast backups and restores of VMs, groups of VMs (organized in resource groups), or complete datastores.
+
## July 2022
+* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now in public preview. You can [Back up Azure NetApp Files datastores and VMs using Cloud Backup](../azure-vmware/backup-azure-netapp-files-datastores-vms.md). This virtual appliance installs in the Azure VMware Solution cluster and provides policy-based, automated backups of VMs. It integrates with Azure NetApp Files snapshot technology for fast backups and restores of VMs, groups of VMs (organized in resource groups), or complete datastores.
+ * [Active Directory connection enhancement: Reset Active Directory computer account password](create-active-directory-connections.md#reset-active-directory) (Preview)
- If you (accidentally) reset the password of the AD computer account on the AD server or the AD server is unreachable, you can now safely reset the computer account password to preserve connectivity to your volumes directly from the portal.
## June 2022 * [Disaster Recovery with Azure NetApp Files, JetStream DR and Azure VMware Solution](../azure-vmware/deploy-disaster-recovery-using-jetstream.md#disaster-recovery-with-azure-netapp-files-jetstream-dr-and-azure-vmware-solution)
- Disaster Recovery to cloud is a resilient and cost-effective way of protecting the workloads against site outages and data corruption events like ransomware. Leveraging the VMware VAIO framework, on-premises VMware workloads can be replicated to Azure Blob storage and recovered with minimal or close to no data loss and near-zero Recovery Time Objective (RTO). JetStream DR can now seamlessly recover workloads replicated from on-premises to Azure VMware Solution to Azure NetApp Files. JetStream DR enables cost-effective disaster recovery by consuming minimal resources at the DR site and using cost-effective cloud storage. JetStream DR automates recovery to Azure NetApp Files datastores using Azure Blob Storage. It can recover independent VMs or groups of related VMs into the recovery site infrastructure according to runbook settings. It also provides point-in-time recovery for ransomware protection.
- * [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) (Preview)
- [Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files will enable you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this will provide more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity beyond the limits of the local instance storage for AVS provided by vSAN and lower your overall total cost of ownership for storage-intensive workloads.
+ [Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files will enable you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this will provide more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity beyond the limits of the local instance storage for Azure VMware Solution provided by vSAN and lower your overall total cost of ownership for storage-intensive workloads.
Regional Coverage: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East US, France Central, Germany West Central, Japan West, North Central US, North Europe, South Central US, Southeast Asia, Switzerland West, UK South, UK West, West Europe, West US. Regional coverage will expand as the preview progresses.
azure-portal Azure Portal Quickstart Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-quickstart-center.md
Azure Quickstart Center has two options in the **Get started** tab:
## Take an online course
-The **Take an online course** tab of the Azure Quickstart Center highlights free introductory course modules from Microsoft Learn.
+The **Take an online course** tab of the Azure Quickstart Center highlights free introductory course modules.
Select a tile to launch a course and learn more about cloud concepts and managing resources in Azure.
You can also select **Browse our full Azure catalog** to see all Azure learning
## Next steps * Learn more about Azure setup and migration in the [Microsoft Cloud Adoption Framework for Azure](/azure/architecture/cloud-adoption/).
-* Unlock your cloud skills with more courses from [Microsoft Learn](/learn/azure/).
+* Unlock your cloud skills with more [Learn modules](/learn/azure/).
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md
Last updated 08/03/2022
This quickstart shows you how to integrate Bicep files with Azure Pipelines for continuous integration and continuous deployment (CI/CD).
-It provides a short introduction to the pipeline task you need for deploying a Bicep file. If you want more detailed steps on setting up the pipeline and project, see [Deploy Azure resources by using Bicep and Azure Pipelines](/learn/paths/bicep-azure-pipelines/) on **Microsoft Learn**.
+It provides a short introduction to the pipeline task you need for deploying a Bicep file. If you want more detailed steps on setting up the pipeline and project, see [Deploy Azure resources by using Bicep and Azure Pipelines](/learn/paths/bicep-azure-pipelines/).
## Prerequisites
azure-resource-manager Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/best-practices.md
Last updated 05/16/2022
This article recommends practices to follow when developing your Bicep files. These practices make your Bicep file easier to understand and use.
-### Microsoft Learn
+### Training resources
-If you would rather learn about Bicep best practices through step-by-step guidance, see [Structure your Bicep code for collaboration](/learn/modules/structure-bicep-code-collaboration/) on **Microsoft Learn**.
+If you would rather learn about Bicep best practices through step-by-step guidance, see [Structure your Bicep code for collaboration](/learn/modules/structure-bicep-code-collaboration/).
## Parameters
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
The following example shows the rules that are available for configuration.
"prefer-unquoted-property-names": { "level": "warning" },
- "secure-parameter-default": {
+ "protect-commandtoexecute-secrets": {
"level": "warning" },
- "simplify-interpolation": {
+ "secure-parameter-default": {
"level": "warning" },
- "use-protectedsettings-for-commandtoexecute-secrets": {
+ "simplify-interpolation": {
"level": "warning" }, "secure-secrets-in-params": {
azure-resource-manager Child Resource Name Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/child-resource-name-type.md
Each parent resource accepts only certain resource types as child resources. The
This article shows different ways you can declare a child resource.
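As one example of the declarations discussed here, the following Bicep sketch nests a child resource inside its parent. The storage account name and API version are placeholders.

```bicep
// Minimal sketch: declare a file service as a child of a storage account by
// nesting it inside the parent. Names are placeholders.
resource storage 'Microsoft.Storage/storageAccounts@2021-04-01' = {
  name: 'stgchild${uniqueString(resourceGroup().id)}'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'

  // Nested child resource: only the child segments of the name and type are needed.
  resource fileService 'fileServices' = {
    name: 'default'
  }
}
```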
-### Microsoft Learn
+### Training resources
-If you would rather learn about about child resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates) on **Microsoft Learn**.
+If you would rather learn about child resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates).
## Name and type pattern
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/conditional-resource-deployment.md
Sometimes you need to optionally deploy a resource or module in Bicep. Use the `
> [!NOTE] > Conditional deployment doesn't cascade to [child resources](child-resource-name-type.md). If you want to conditionally deploy a resource and its child resources, you must apply the same condition to each resource type.
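A minimal sketch of that pattern follows, with the same condition repeated on the parent and the child. The resource names and API version are placeholders.

```bicep
// Minimal sketch: apply the same condition to a parent and its child so that
// neither is deployed when deployFileShare is false. Names are placeholders.
param deployFileShare bool = false
param location string = resourceGroup().location

resource storage 'Microsoft.Storage/storageAccounts@2021-04-01' = if (deployFileShare) {
  name: 'condstore${uniqueString(resourceGroup().id)}'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

resource fileService 'Microsoft.Storage/storageAccounts/fileServices@2021-04-01' = if (deployFileShare) {
  parent: storage
  name: 'default'
}
```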
-### Microsoft Learn
+### Training resources
-If you would rather learn about conditions through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/) on **Microsoft Learn**.
+If you would rather learn about conditions through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/).
## Deploy condition
output mgmtStatus string = ((!empty(logAnalytics)) ? 'Enabled monitoring for VM!
## Next steps
-* For a Microsoft Learn module about conditions and loops, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/).
+* Review the Learn module [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/).
* For recommendations about creating Bicep files, see [Best practices for Bicep](best-practices.md). * To create multiple instances of a resource, see [Iterative loops in Bicep](loops.md).
azure-resource-manager Contribute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/contribute.md
Bicep is an open-source project. That means you can contribute to Bicep's develo
## Contribution types - **Azure Quickstart Templates.** You can contribute example Bicep files and ARM templates to the Azure Quickstart Templates repository. For more information, see the [Azure Quickstart Templates contribution guide](https://github.com/Azure/azure-quickstart-templates/blob/master/1-CONTRIBUTION-GUIDE/README.md#contribution-guide).-- **Documentation.** Bicep's documentation is open to contributions, too. For more information, see the [Microsoft contributor guide overview](/contribute/).
+- **Documentation.** Bicep's documentation is open to contributions, too. For more information, see our [contributor guide overview](/contribute/).
- **Snippets.** Do you have a favorite snippet you think the community would benefit from? You can add it to the Visual Studio Code extension's collection of snippets. For more information, see [Contributing to Bicep](https://github.com/Azure/bicep/blob/main/CONTRIBUTING.md#snippets). - **Code changes.** If you're a developer and you have ideas you'd like to see in the Bicep language or tooling, you can contribute a pull request. For more information, see [Contributing to Bicep](https://github.com/Azure/bicep/blob/main/CONTRIBUTING.md).
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-management-group.md
This article describes how to set scope with Bicep when deploying to a managemen
As your organization matures, you can deploy a Bicep file to create resources at the management group level. For example, you may need to define and assign [policies](../../governance/policy/overview.md) or [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) for a management group. With management group level templates, you can declaratively apply policies and assign roles at the management group level.
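For illustration, the following sketch shows a Bicep file that targets the management group scope and declares a custom policy definition. The policy name and rule are placeholder assumptions, not content from this article.

```bicep
// Minimal sketch: a management-group-scope Bicep file that defines a policy.
// The policy name and rule are illustrative only.
targetScope = 'managementGroup'

resource auditLocationPolicy 'Microsoft.Authorization/policyDefinitions@2021-06-01' = {
  name: 'audit-resource-location'
  properties: {
    policyType: 'Custom'
    mode: 'All'
    policyRule: {
      if: {
        not: {
          field: 'location'
          equals: 'westus2'
        }
      }
      then: {
        effect: 'audit'
      }
    }
  }
}
```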
-### Microsoft Learn
+### Training resources
-If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/) on **Microsoft Learn**.
+If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/).
## Supported resources
azure-resource-manager Deploy To Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-subscription.md
To simplify the management of resources, you can deploy resources at the level o
> [!NOTE] > You can deploy to 800 different resource groups in a subscription level deployment.
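A minimal sketch of a subscription-scope deployment follows; the resource group name and location are placeholders.

```bicep
// Minimal sketch: a subscription-scope Bicep file that creates a resource group.
// The name and location are placeholders.
targetScope = 'subscription'

param rgName string = 'demo-rg'
param rgLocation string = 'westus2'

resource newRg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: rgName
  location: rgLocation
}
```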
-### Microsoft Learn
+### Training resources
-If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/) on **Microsoft Learn**.
+If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/).
## Supported resources
azure-resource-manager Deploy To Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-tenant.md
Last updated 11/22/2021
As your organization matures, you may need to define and assign [policies](../../governance/policy/overview.md) or [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) across your Azure AD tenant. With tenant level templates, you can declaratively apply policies and assign roles at a global level.
-### Microsoft Learn
+### Training resources
-If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/) on **Microsoft Learn**.
+If you would rather learn about deployment scopes through step-by-step guidance, see [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/).
## Supported resources
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-what-if.md
Before deploying a Bicep file, you can preview the changes that will happen. Azu
You can use the what-if operation with Azure PowerShell, Azure CLI, or REST API operations. What-if is supported for resource group, subscription, management group, and tenant level deployments.
-### Microsoft Learn
+### Training resources
-If you would rather learn about the what-if operation through step-by-step guidance, see [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif/) on **Microsoft Learn**.
+If you would rather learn about the what-if operation through step-by-step guidance, see [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif/).
[!INCLUDE [permissions](../../../includes/template-deploy-permissions.md)]
You can use the what-if operation through the Azure SDKs.
* To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/). * If you notice incorrect results from the what-if operation, please report the issues at [https://aka.ms/whatifissues](https://aka.ms/whatifissues).
-* For a Microsoft Learn module that covers using what if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
+* For a Learn module that demonstrates using what-if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
The deployment script resource is only available in the regions where Azure Cont
> [!NOTE] > Retry logic for Azure sign in is now built in to the wrapper script. If you grant permissions in the same Bicep file as your deployment scripts, the deployment script service retries sign in for 10 minutes with 10-second interval until the managed identity role assignment is replicated.
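For context, the following sketch shows a minimal deployment script resource that runs under a user-assigned managed identity. The identity ID, PowerShell version, and script body are placeholder assumptions.

```bicep
// Minimal sketch: an inline PowerShell deployment script that uses a
// user-assigned managed identity. Identity ID, version, and script are placeholders.
param identityId string
param location string = resourceGroup().location

resource helloScript 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
  name: 'runHelloScript'
  location: location
  kind: 'AzurePowerShell'
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${identityId}': {}
    }
  }
  properties: {
    azPowerShellVersion: '8.3'          // illustrative version
    scriptContent: 'Write-Output "Hello from a deployment script"'
    retentionInterval: 'PT1H'           // keep the script resource for one hour
  }
}
```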
-### Microsoft Learn
+### Training resources
-If you would rather learn about the ARM template test toolkit through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts) on **Microsoft Learn**.
+If you would rather learn about the ARM template test toolkit through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts).
## Configure the minimum permissions
After the script is tested successfully, you can use it as a deployment script i
## Next steps
-In this article, you learned how to use deployment scripts. To walk through a Microsoft Learn module:
+In this article, you learned how to use deployment scripts. To walk through a Learn module:
> [!div class="nextstepaction"] > [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts)
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/key-vault-parameter.md
New-AzResourceGroupDeployment `
- For general information about key vaults, see [What is Azure Key Vault?](../../key-vault/general/overview.md) - For complete examples of referencing key secrets, see [key vault examples](https://github.com/rjmax/ArmExamples/tree/master/keyvaultexamples) on GitHub.-- For a Microsoft Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+- For a Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
azure-resource-manager Learn Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/learn-bicep.md
Title: Discover Bicep on Microsoft Learn
-description: Provides an overview of the units that are available on Microsoft Learn for Bicep.
+ Title: Learn modules for Bicep
+description: Provides an overview of the Learn modules for Bicep.
Last updated 12/03/2021
-# Bicep on Microsoft Learn
+# Learn modules for Bicep
-Ready to see how Bicep can help simplify and accelerate your deployments to Azure? Check out the many hands-on courses on Microsoft Learn.
+Ready to see how Bicep can help simplify and accelerate your deployments to Azure? Check out the many hands-on courses.
> [!TIP] > Want to learn Bicep live from subject matter experts? [Learn Live with our experts every Tuesday (Pacific time) beginning March 8, 2022.](/events/learntv/learnlive-iac-and-bicep/) ## Get started
-If you're new to Bicep, a great way to get started is by taking this module on Microsoft Learn.
+If you're new to Bicep, a great way to get started is by reviewing the following Learn module. You'll learn how Bicep makes it easier to define how your Azure resources should be configured and deployed in a way that's automated and repeatable. You'll deploy several Azure resources so you can see for yourself how Bicep works. We provide free access to Azure resources to help you practice the concepts.
-There you'll learn how Bicep makes it easier to define how your Azure resources should be configured and deployed in a way that's automated and repeatable. YouΓÇÖll deploy several Azure resources so you can see for yourself how Bicep works. We provide free access to Azure resources to help you practice the concepts.
-
-[<img src="media/learn-bicep/build-first-bicep-template.svg" width="101" height="120" alt="The badge for the Build your first Bicep template module on Microsoft Learn." role="presentation"></img>](/learn/modules/build-first-bicep-template/)
+[<img src="media/learn-bicep/build-first-bicep-template.svg" width="101" height="120" alt="The badge for the Build your first Bicep template module." role="presentation"></img>](/learn/modules/build-first-bicep-template/)
[Build your first Bicep template](/learn/modules/build-first-bicep-template/)
After that, you might be interested in adding your Bicep code to a deployment pi
## Next steps * For a short introduction to Bicep, see [Bicep quickstart](quickstart-create-bicep-use-visual-studio-code.md).
-* For suggestions about how to improve your Bicep files, see [Best practices for Bicep](best-practices.md).
+* For suggestions about how to improve your Bicep files, see [Best practices for Bicep](best-practices.md).
azure-resource-manager Linter Rule Outputs Should Not Contain Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-outputs-should-not-contain-secrets.md
This rule finds possible exposure of secrets in a template's outputs.
## Linter rule code Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings:-
+
`outputs-should-not-contain-secrets` ## Solution Don't include any values in an output that could potentially expose secrets. For example, secure parameters of type secureString or secureObject, or [`list*`](./bicep-functions-resource.md#list) functions such as listKeys.-
-The output from a template is stored in the deployment history, so a malicious user could find that information.
-
+
+The output from a template is stored in the deployment history, so a user with read-only permissions could gain access to information that otherwise wouldn't be available with read-only permissions.
+
The following example fails because it includes a secure parameter in an output value. ```bicep+ @secure() param secureParam string-
+
output badResult string = 'this is the value ${secureParam}' ```
param storageName string
resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' existing = { name: storageName }-
+
output badResult object = { value: stg.listKeys().keys[0].value }
The following example fails because the output name contains 'password', indicat
output accountPassword string = '...' ```
-To fix it, you will need to remove the secret data from the output.
+To fix it, remove the secret data from the output. The recommended practice is to output the resource ID of the resource that contains the secret, and then retrieve the secret when the resource that needs the information is created or updated. Secrets can also be stored in Key Vault for more complex deployment scenarios.
+
+The following example shows a secure pattern for retrieving a storageAccount key from a module.
+
+```bicep
+output storageId string = stg.id
+```
+
+This output can then be used in a subsequent deployment, as shown in the following example:
+
+```bicep
+someProperty: listKeys(myStorageModule.outputs.storageId, '2021-09-01').keys[0].value
+```
## Silencing false positives
It is good practice to add a comment explaining why the rule does not apply to t
## Next steps
-For more information about the linter, see [Use Bicep linter](./linter.md).
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter Rule Protect Commandtoexecute Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-protect-commandtoexecute-secrets.md
+
+ Title: Linter rule - use protectedSettings for commandToExecute secrets
+description: Linter rule - use protectedSettings for commandToExecute secrets
+ Last updated : 12/17/2021++
+# Linter rule - use protectedSettings for commandToExecute secrets
+
+This rule finds possible exposure of secrets in the settings property of a custom script resource.
+
+## Linter rule code
+
+Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings:
+
+`protect-commandtoexecute-secrets`
+
+## Solution
+
+For custom script resources, the `commandToExecute` value should be placed under the `protectedSettings` property object instead of the `settings` property object if it includes secret data such as a password. For example, secret data could be found in secure parameters, [`list*`](./bicep-functions-resource.md#list) functions such as listKeys, or in custom script arguments.
+
+Don't use secret data in the `settings` object because it uses clear text. For more information, see [Microsoft.Compute virtualMachines/extensions](/azure/templates/microsoft.compute/virtualmachines/extensions), [Custom Script Extension for Windows](../../virtual-machines/extensions/custom-script-windows.md), and [Use the Azure Custom Script Extension Version 2 with Linux virtual machines](../../virtual-machines/extensions/custom-script-linux.md).
+
+The following example fails because `commandToExecute` is specified under `settings` and uses a secure parameter.
+
+```bicep
+param vmName string
+param location string
+param fileUris string
+param storageAccountName string
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2021-06-01' existing = {
+ name: storageAccountName
+}
+
+resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2019-08-02-preview' = {
+ name: '${vmName}/CustomScriptExtension'
+ location: location
+ properties: {
+ publisher: 'Microsoft.Compute'
+ type: 'CustomScriptExtension'
+ autoUpgradeMinorVersion: true
+ settings: {
+ fileUris: split(fileUris, ' ')
+ commandToExecute: 'mycommand ${storageAccount.listKeys().keys[0].value}'
+ }
+ }
+}
+```
+
+You can fix it by moving the `commandToExecute` property to the `protectedSettings` object.
+
+```bicep
+param vmName string
+param location string
+param fileUris string
+param storageAccountName string
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2021-06-01' existing = {
+ name: storageAccountName
+}
+
+resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2019-08-02-preview' = {
+ name: '${vmName}/CustomScriptExtension'
+ location: location
+ properties: {
+ publisher: 'Microsoft.Compute'
+ type: 'CustomScriptExtension'
+ autoUpgradeMinorVersion: true
+ settings: {
+ fileUris: split(fileUris, ' ')
+ }
+ protectedSettings: {
+ commandToExecute: 'mycommand ${storageAccount.listKeys().keys[0].value}'
+ }
+ }
+}
+```
+
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md
Last updated 12/02/2021
This article shows you how to use the `for` syntax to iterate over items in a collection. This functionality is supported starting in v0.3.1 onward. You can use loops to define multiple copies of a resource, module, variable, property, or output. Use loops to avoid repeating syntax in your Bicep file and to dynamically set the number of copies to create during deployment. To go through a quickstart, see [Quickstart: Create multiple instances](./quickstart-loops.md).
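As a quick illustration of the `for` syntax, the sketch below creates a storage account for each name in an array. The names and API version are placeholders.

```bicep
// Minimal sketch: use a for loop to create several storage accounts from an
// array parameter. Names are placeholders.
param location string = resourceGroup().location
param storageNames array = [
  'contoso'
  'fabrikam'
  'coho'
]

resource storageAccounts 'Microsoft.Storage/storageAccounts@2021-04-01' = [for name in storageNames: {
  name: '${name}${uniqueString(resourceGroup().id)}'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}]
```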
-### Microsoft Learn
+### Training resources
-If you would rather learn about loops through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/) on **Microsoft Learn**.
+If you would rather learn about loops through step-by-step guidance, see [Build flexible Bicep templates by using conditions and loops](/learn/modules/build-flexible-bicep-templates-conditions-loops/).
## Loop syntax
azure-resource-manager Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/migrate.md
The first step in the process is to capture an initial representation of your Az
:::image type="content" source="./media/migrate/migrate-bicep.png" alt-text="Diagram of the recommended workflow for migrating Azure resources to Bicep." border="false":::
-In this article we summarize this recommended workflow. For detailed guidance, see [Migrate Azure resources and JSON ARM templates to use Bicep](/learn/modules/migrate-azure-resources-bicep/) on Microsoft Learn.
+In this article we summarize this recommended workflow. For detailed guidance, see [Migrate Azure resources and JSON ARM templates to use Bicep](/learn/modules/migrate-azure-resources-bicep/).
## Phase 1: Convert
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/modules.md
To share modules with other people in your organization, create a [template spec
Bicep modules are converted into a single Azure Resource Manager template with [nested templates](../templates/linked-templates.md#nested-template).
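For reference, a minimal module consumption sketch looks like the following. The module path, its parameters, and its output are assumptions about a hypothetical `storage.bicep` file rather than an example from this article.

```bicep
// Minimal sketch: consume a local Bicep file as a module. The path, parameters,
// and output names are placeholders for whatever the module actually defines.
param location string = resourceGroup().location

module storageMod 'modules/storage.bicep' = {
  name: 'storageDeployment'   // name of the nested deployment
  params: {
    location: location
    storagePrefix: 'demo'
  }
}

output storageEndpoint string = storageMod.outputs.blobEndpoint
```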
-### Microsoft Learn
+### Training resources
-If you would rather learn about modules through step-by-step guidance, see [Create composable Bicep files by using modules](/learn/modules/create-composable-bicep-files-using-modules/) on **Microsoft Learn**.
+If you would rather learn about modules through step-by-step guidance, see [Create composable Bicep files by using modules](/learn/modules/create-composable-bicep-files-using-modules/).
## Definition syntax
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/overview.md
Bicep provides the following advantages:
To start with Bicep: 1. **Install the tools**. See [Set up Bicep development and deployment environments](./install.md). Or, you can use the [VS Code Devcontainer/Codespaces repo](https://github.com/Azure/vscode-remote-try-bicep) to get a pre-configured authoring environment.
-2. **Complete the [quickstart](./quickstart-create-bicep-use-visual-studio-code.md) and the [Microsoft Learn Bicep modules](./learn-bicep.md)**.
+2. **Complete the [quickstart](./quickstart-create-bicep-use-visual-studio-code.md) and the [Learn modules for Bicep](./learn-bicep.md)**.
To decompile an existing ARM template to Bicep, see [Decompiling ARM template JSON to Bicep](./decompile.md). You can use the [Bicep Playground](https://aka.ms/bicepdemo) to view Bicep and equivalent JSON side by side.
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameters.md
Resource Manager resolves parameter values before starting the deployment operat
Each parameter must be set to one of the [data types](data-types.md).
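A short sketch of typical parameter declarations follows; the names, allowed values, and decorators chosen are illustrative only.

```bicep
// Minimal sketch: parameter declarations with a data type, default value,
// allowed values, length constraints, and a secure string.
@allowed([
  'dev'
  'test'
  'prod'
])
param environment string = 'dev'

@minLength(3)
@maxLength(24)
param storageName string

@secure()
param adminPassword string
```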
-### Microsoft Learn
+### Training resources
-If you would rather learn about parameters through step-by-step guidance, see [Build reusable Bicep templates by using parameters](/learn/modules/build-reusable-bicep-templates-parameters) on **Microsoft Learn**.
+If you would rather learn about parameters through step-by-step guidance, see [Build reusable Bicep templates by using parameters](/learn/modules/build-reusable-bicep-templates-parameters).
## Declaration
azure-resource-manager Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md
To share [modules](modules.md) within your organization, you can create a privat
To work with module registries, you must have [Bicep CLI](./install.md) version **0.4.1008 or later**. To use with Azure CLI, you must also have version **2.31.0 or later**; to use with Azure PowerShell, you must also have version **7.0.0** or later.
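Once a registry is configured, a module in it is referenced with the `br:` scheme, roughly as in the following sketch. The registry name, module path, tag, and parameters are placeholders.

```bicep
// Minimal sketch: reference a module published to a private registry by using
// the br: scheme. Registry name, module path, tag, and params are placeholders.
module webApp 'br:exampleregistry.azurecr.io/bicep/modules/webapp:v1' = {
  name: 'webAppDeployment'
  params: {
    appName: 'demo-app'
  }
}
```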
-### Microsoft Learn
+### Training resources
-If you would rather learn about parameters through step-by-step guidance, see [Share Bicep modules by using private registries](/learn/modules/share-bicep-modules-using-private-registries) on **Microsoft Learn**.
+If you would rather learn about parameters through step-by-step guidance, see [Share Bicep modules by using private registries](/learn/modules/share-bicep-modules-using-private-registries).
## Configure private registry
azure-resource-manager Quickstart Create Bicep Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md
Remove-AzResourceGroup -Name exampleRG
## Next steps > [!div class="nextstepaction"]
-> [Bicep in Microsoft Learn](learn-bicep.md)
+> [Learn modules for Bicep](learn-bicep.md)
azure-resource-manager Quickstart Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-loops.md
Remove-AzResourceGroup -Name $resourceGroupName
## Next steps > [!div class="nextstepaction"]
-> [Bicep in Microsoft Learn](learn-bicep.md)
+> [Learn modules for Bicep](learn-bicep.md)
azure-resource-manager Quickstart Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-private-module-registry.md
Remove-AzResourceGroup -Name $resourceGroupName
## Next steps > [!div class="nextstepaction"]
-> [Bicep in Microsoft Learn](learn-bicep.md)
+> [Learn modules for Bicep](learn-bicep.md)
azure-resource-manager Scope Extension Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scope-extension-resources.md
This article shows how to set the scope for an extension resource type when depl
> [!NOTE] > The scope property is only available to extension resource types. To specify a different scope for a resource type that isn't an extension type, use a [module](modules.md).
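As an example of the `scope` property, the following sketch applies a diagnostic setting (an extension resource type) to an existing storage account. The setting name, workspace ID, and metric category are placeholder assumptions.

```bicep
// Minimal sketch: apply an extension resource (a diagnostic setting) to an
// existing storage account by using the scope property. Names are placeholders.
param storageAccountName string
param workspaceId string

resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' existing = {
  name: storageAccountName
}

resource diagnostics 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'send-to-workspace'
  scope: stg                      // extension resource applied to the storage account
  properties: {
    workspaceId: workspaceId
    metrics: [
      {
        category: 'Transaction'
        enabled: true
      }
    ]
  }
}
```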
-### Microsoft Learn
+### Training resources
-If you would rather learn about extension resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates) on **Microsoft Learn**.
+If you would rather learn about extension resources through step-by-step guidance, see [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates).
## Apply at deployment scope
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
When designing your deployment, always consider the lifecycle of the resources a
> - Content in the Bicep module registry can only be deployed from another Bicep file. Template specs can be deployed directly from the API, Azure PowerShell, Azure CLI, and the Azure portal. You can even use [`UiFormDefinition`](../templates/template-specs-create-portal-forms.md) to customize the portal deployment experience. > - Bicep has some limited capabilities for embedding other project artifacts (including non-Bicep and non-ARM-template files. For example, PowerShell scripts, CLI scripts and other binaries) by using the [`loadTextContent`](./bicep-functions-files.md#loadtextcontent) and [`loadFileAsBase64`](./bicep-functions-files.md#loadfileasbase64) functions. Template specs can't package these artifacts.
-### Microsoft Learn
+### Training resources
-To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs) on **Microsoft Learn**.
+To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs).
## Why use template specs?
After creating a template spec, you can link to that template spec in a Bicep mo
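A minimal sketch of that linkage follows; the subscription ID, resource group, template spec name, version, and parameter are placeholders.

```bicep
// Minimal sketch: link to an existing template spec from a Bicep module by
// using the ts: scheme. All identifiers and parameters are placeholders.
module storageSpec 'ts:00000000-0000-0000-0000-000000000000/templateSpecsRG/storageSpec:1.0' = {
  name: 'storageFromSpec'
  params: {
    storagePrefix: 'demo'
  }
}
```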
## Next steps
-To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs) on **Microsoft Learn**.
+To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs).
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Title: Protect your Azure resources with a lock description: You can safeguard Azure resources from updates or deletions by locking all users and roles. Previously updated : 08/08/2022 Last updated : 08/11/2022
As an administrator, you can lock an Azure subscription, resource group, or resource to protect them from accidental user deletions and modifications. The lock overrides any user permissions.
-You can set locks that prevent either deletions or modifications. In the portal, these locks are called **Delete** and **Read-only**. In the command line, these locks are called **CanNotDelete** and **ReadOnly**. In the left navigation panel, the subscription lock feature's name is **Resource locks**, while the resource group lock feature's name is **Locks**.
+You can set locks that prevent either deletions or modifications. In the portal, these locks are called **Delete** and **Read-only**. In the command line, these locks are called **CanNotDelete** and **ReadOnly**.
- **CanNotDelete** means authorized users can read and modify a resource, but they can't delete it. - **ReadOnly** means authorized users can read a resource, but they can't delete or update it. Applying this lock is similar to restricting all authorized users to the permissions that the **Reader** role provides.
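Locks can also be declared in a template. The following is a minimal Bicep sketch that applies a **CanNotDelete** lock to an existing storage account; the account name, lock name, and API version are placeholders.

```bicep
// Minimal sketch: apply a CanNotDelete lock to an existing storage account.
// The account name and lock name are placeholders.
param storageAccountName string

resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' existing = {
  name: storageAccountName
}

resource deleteLock 'Microsoft.Authorization/locks@2016-09-01' = {
  name: 'storage-delete-lock'
  scope: stg
  properties: {
    level: 'CanNotDelete'
    notes: 'Protect the storage account from accidental deletion.'
  }
}
```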
Unlike role-based access control (RBAC), you use management locks to apply a res
When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the same parent lock. The most restrictive lock in the inheritance takes precedence.
+[Extension resources](extension-resource-types.md) inherit locks from the resource they're applied to. For example, Microsoft.Insights/diagnosticSettings is an extension resource type. If you apply a diagnostic setting to a storage blob, and lock the storage account, you're unable to delete the diagnostic setting. This inheritance makes sense because the full resource ID of the diagnostic setting is:
+
+```json
+/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storage-name}/blobServices/default/providers/microsoft.insights/diagnosticSettings/{setting-name}
+```
+
+The beginning of that ID matches the resource ID of the locked resource:
+
+```json
+/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storage-name}
+```
+ If you have a **Delete** lock on a resource and attempt to delete its resource group, the feature blocks the whole delete operation. Even if the resource group or other resources in the resource group are unlocked, the deletion doesn't happen. You never have a partial deletion. When you [cancel an Azure subscription](../../cost-management-billing/manage/cancel-azure-subscription.md#what-happens-after-subscription-cancellation):
When you [cancel an Azure subscription](../../cost-management-billing/manage/can
* Azure preserves your resources by deactivating them instead of immediately deleting them. * Azure only deletes your resources permanently after a waiting period. ++ ## Understand scope of locks > [!NOTE]
To delete everything for the service, including the locked infrastructure resour
### Portal
+In the left navigation panel, the subscription lock feature's name is **Resource locks**, while the resource group lock feature's name is **Locks**.
+ [!INCLUDE [resource-manager-lock-resources](../../../includes/resource-manager-lock-resources.md)] ### Template
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/conditional-resource-deployment.md
If you deploy a template with [complete mode](deployment-modes.md) and a resourc
## Next steps
-* For a Microsoft Learn module that covers conditional deployment, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+* For a Learn module that covers conditional deployment, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
* For recommendations about creating templates, see [ARM template best practices](./best-practices.md). * To create multiple instances of a resource, see [Resource iteration in ARM templates](copy-resources.md).
azure-resource-manager Copy Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/copy-resources.md
The following examples show common scenarios for creating more than one instance
- To set dependencies on resources that are created in a copy loop, see [Define the order for deploying resources in ARM templates](./resource-dependency.md). - To go through a tutorial, see [Tutorial: Create multiple resource instances with ARM templates](template-tutorial-create-multiple-instances.md).-- For a Microsoft Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+- For a Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
- For other uses of the copy loop, see: - [Property iteration in ARM templates](copy-properties.md) - [Variable iteration in ARM templates](copy-variables.md)
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-what-if.md
Before deploying an Azure Resource Manager template (ARM template), you can prev
You can use the what-if operation with Azure PowerShell, Azure CLI, or REST API operations. What-if is supported for resource group, subscription, management group, and tenant level deployments.
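For example, here's a minimal Azure CLI sketch that previews the changes an ARM template would make to a resource group; the group and file names are placeholders:

```azurecli-interactive
# Preview the changes a deployment would make without applying them.
# "ExampleGroup" and "azuredeploy.json" are placeholders.
az deployment group what-if \
  --resource-group ExampleGroup \
  --template-file azuredeploy.json
```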
-### Microsoft Learn
+### Training resources
-To learn more about what-if, and for hands-on guidance, see [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif) on **Microsoft Learn**.
+To learn more about what-if, and for hands-on guidance, see [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif).
[!INCLUDE [permissions](../../../includes/template-deploy-permissions.md)]
You can use the what-if operation through the Azure SDKs.
- [ARM Deployment Insights](https://marketplace.visualstudio.com/items?itemName=AuthorityPartnersInc.arm-deployment-insights) extension provides an easy way to integrate the what-if operation in your Azure DevOps pipeline. - To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/). - If you notice incorrect results from the what-if operation, please report the issues at [https://aka.ms/whatifissues](https://aka.ms/whatifissues).-- For a Microsoft Learn module that covers using what if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
+- For a Learn module that covers using what-if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
The deployment script resource is only available in the regions where Azure Cont
> [!NOTE] > Retry logic for Azure sign in is now built in to the wrapper script. If you grant permissions in the same template as your deployment scripts, the deployment script service retries sign in for 10 minutes with 10-second interval until the managed identity role assignment is replicated.
-### Microsoft Learn
+### Training resources
-To learn more about the ARM template test toolkit, and for hands-on guidance, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts) on **Microsoft Learn**.
+To learn more about deployment scripts, and for hands-on guidance, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts).
## Configure the minimum permissions
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/key-vault-parameter.md
The following template dynamically creates the key vault ID and passes it as a p
- For general information about key vaults, see [What is Azure Key Vault?](../../key-vault/general/overview.md) - For complete examples of referencing key secrets, see [key vault examples](https://github.com/rjmax/ArmExamples/tree/master/keyvaultexamples) on GitHub.-- For a Microsoft Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+- For a Learn module that covers passing a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
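One hedged setup note for this scenario: before a deployment can reference secrets from a key vault, the vault must be enabled for template deployment. A minimal Azure CLI sketch, with placeholder vault and resource group names:

```azurecli-interactive
# Allow Resource Manager to retrieve secrets from this vault during template deployment.
# "ExampleVault" and "ExampleGroup" are placeholder names.
az keyvault update \
  --name ExampleVault \
  --resource-group ExampleGroup \
  --enabled-for-template-deployment true
```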
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/overview.md
This approach means you can safely share templates that meet your organization's
## Next steps * For a step-by-step tutorial that guides you through the process of creating a template, see [Tutorial: Create and deploy your first ARM template](template-tutorial-create-first-template.md).
-* To learn about ARM templates through a guided set of modules on Microsoft Learn, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/).
+* To learn about ARM templates through a guided set of Learn modules, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/).
* For information about the properties in template files, see [Understand the structure and syntax of ARM templates](./syntax.md). * To learn about exporting templates, see [Quickstart: Create and deploy ARM templates by using the Azure portal](quickstart-create-templates-use-the-portal.md). * For answers to common questions, see [Frequently asked questions about ARM templates](./frequently-asked-questions.yml).
azure-resource-manager Resource Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-dependency.md
For information about assessing the deployment order and resolving dependency er
## Next steps * To go through a tutorial, see [Tutorial: Create ARM templates with dependent resources](template-tutorial-create-templates-with-dependent-resources.md).
-* For a Microsoft Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+* For a Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
* For recommendations when setting dependencies, see [ARM template best practices](./best-practices.md). * To learn about troubleshooting dependencies during deployment, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](common-deployment-errors.md). * To learn about creating Azure Resource Manager templates, see [Understand the structure and syntax of ARM templates](./syntax.md).
azure-resource-manager Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/syntax.md
Last updated 07/18/2022
This article describes the structure of an Azure Resource Manager template (ARM template). It presents the different sections of a template and the properties that are available in those sections.
-This article is intended for users who have some familiarity with ARM templates. It provides detailed information about the structure of the template. For a step-by-step tutorial that guides you through the process of creating a template, see [Tutorial: Create and deploy your first ARM template](template-tutorial-create-first-template.md). To learn about ARM templates through a guided set of modules on Microsoft Learn, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/).
+This article is intended for users who have some familiarity with ARM templates. It provides detailed information about the structure of the template. For a step-by-step tutorial that guides you through the process of creating a template, see [Tutorial: Create and deploy your first ARM template](template-tutorial-create-first-template.md). To learn about ARM templates through a guided set of Learn modules, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/).
> [!TIP] > Bicep is a new language that offers the same capabilities as ARM templates but with a syntax that's easier to use. If you're considering infrastructure as code options, we recommend looking at Bicep.
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md
To deploy the template spec, you use standard Azure tools like PowerShell, Azure
When designing your deployment, always consider the lifecycle of the resources and group the resources that share a similar lifecycle into a single template spec. For instance, suppose your deployments include multiple instances of Azure Cosmos DB, with each instance containing its own databases and containers. Because the databases and the containers don't change much, you want to create one template spec that includes a Cosmos DB instance and its underlying databases and containers. You can then use conditional statements in your templates along with copy loops to create multiple instances of these resources.
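For example, here's a hedged Azure CLI sketch that publishes such a template as a versioned template spec; the names, location, and file path are placeholders:

```azurecli-interactive
# Publish a template as a versioned template spec.
# All names, the location, and the file path are placeholders.
az ts create \
  --name cosmosSpec \
  --version "1.0" \
  --resource-group templateSpecRG \
  --location westus2 \
  --template-file ./main.json
```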
-### Microsoft Learn
+### Training resources
-To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs) on **Microsoft Learn**.
+To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs).
> [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [Azure Resource Manager template specs in Bicep](../bicep/template-specs.md).
azure-resource-manager Template Test Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-test-cases.md
The following example **passes** because `expressionEvaluationOptions` uses `inn
## Next steps - To learn about running the test toolkit, see [Use ARM template test toolkit](test-toolkit.md).-- For a Microsoft Learn module that covers using the test toolkit, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
+- For a Learn module that covers using the test toolkit, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
- To test parameter files, see [Test cases for parameter files](parameters.md). - For createUiDefinition tests, see [Test cases for createUiDefinition.json](createUiDefinition-test-cases.md). - To learn about tests for all files, see [Test cases for all files](all-files-test-cases.md).
azure-resource-manager Template Tutorial Create First Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-first-template.md
This tutorial introduces you to Azure Resource Manager templates (ARM templates)
This tutorial is the first of a series. As you progress through the series, you modify the starting template, step by step, until you explore all of the core parts of an ARM template. These elements are the building blocks for more complex templates. We hope by the end of the series you're confident in creating your own templates and ready to automate your deployments with templates.
-If you want to learn about the benefits of using templates and why you should automate deployments with templates, see [ARM template overview](overview.md). To learn about ARM templates through a guided set of modules on [Microsoft Learn](/learn), see [Deploy and manage resources in Azure by using JSON ARM templates](/learn/paths/deploy-manage-resource-manager-templates).
+If you want to learn about the benefits of using templates and why you should automate deployments with templates, see [ARM template overview](overview.md). To learn about ARM templates through a guided set of [Learn modules](/learn), see [Deploy and manage resources in Azure by using JSON ARM templates](/learn/paths/deploy-manage-resource-manager-templates).
If you don't have a Microsoft Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
azure-resource-manager Template Tutorial Create Multiple Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-multiple-instances.md
This tutorial covers the following tasks:
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-For a Microsoft Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+For a Learn module that covers resource copy, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
## Prerequisites
azure-resource-manager Template Tutorial Create Templates With Dependent Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-templates-with-dependent-resources.md
This tutorial covers the following tasks:
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-For a Microsoft Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+For a Learn module that covers resource dependencies, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
## Prerequisites
azure-resource-manager Template Tutorial Deployment Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-deployment-script.md
This tutorial covers the following tasks:
> * Debug the failed script > * Clean up resources
-For a Microsoft Learn module that covers deployment scripts, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts/).
+For a Learn module that covers deployment scripts, see [Extend ARM templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts/).
## Prerequisites
azure-resource-manager Template Tutorial Use Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-use-conditions.md
This tutorial only covers a basic scenario of using conditions. For more informa
* [Template function: If](./template-functions-logical.md#if). * [Comparison functions for ARM templates](./template-functions-comparison.md)
-For a Microsoft Learn module that covers conditions, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+For a Learn module that covers conditions, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
azure-resource-manager Template Tutorial Use Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-use-key-vault.md
This tutorial covers the following tasks:
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-For a Microsoft Learn module that uses a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
+For a Learn module that uses a secure value from a key vault, see [Manage complex cloud deployments by using advanced ARM template features](/learn/modules/manage-deployments-advanced-arm-template-features/).
## Prerequisites
azure-resource-manager Test Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/test-toolkit.md
The toolkit contains four sets of tests:
> [!NOTE] > The test toolkit is only available for ARM templates. To validate Bicep files, use the [Bicep linter](../bicep/linter.md).
-### Microsoft Learn
+### Training resources
-To learn more about the ARM template test toolkit, and for hands-on guidance, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test) on **Microsoft Learn**.
+To learn more about the ARM template test toolkit, and for hands-on guidance, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test).
## Install on Windows
The next example shows how to run the tests.
- To test parameter files, see [Test cases for parameter files](parameters.md). - For createUiDefinition tests, see [Test cases for createUiDefinition.json](createUiDefinition-test-cases.md). - To learn about tests for all files, see [Test cases for all files](all-files-test-cases.md).-- For a Microsoft Learn module that covers using the test toolkit, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test/).
+- For a Learn module that covers using the test toolkit, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test/).
azure-video-indexer Manage Account Connected To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-account-connected-to-azure.md
Title: Manage an Azure Video Indexer account
-description: Learn how to manage an Azure Video Indexer account connected to Azure.
+ Title: Repair the connection to Azure, check errors/warnings
+description: Learn how to manage an Azure Video Indexer account that's connected to Azure, including how to repair the connection and examine errors and warnings.
Last updated 01/14/2021
-# Manage an Azure Video Indexer account connected to Azure
+# Repair the connection to Azure, examine errors/warnings
This article demonstrates how to manage an Azure Video Indexer account that's connected to your Azure subscription and an Azure Media Services account.
azure-vmware Backup Azure Netapp Files Datastores Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/backup-azure-netapp-files-datastores-vms.md
+
+ Title: Back up Azure NetApp Files datastores and VMs using Cloud Backup
+description: Learn how to back up datastores and Virtual Machines to the cloud.
++ Last updated : 08/10/2022++
+# Back up Azure NetApp Files datastores and VMs using Cloud Backup for Virtual Machines
+
+From the VMware vSphere client, you can back up datastores and Virtual Machines (VMs) to the cloud.
+
+## Configure subscriptions
+
+Before you back up your Azure NetApp Files datastores, you must add your Azure and Azure NetApp Files cloud subscriptions.
+
+### Add Azure cloud subscription
+
+1. Sign in to the VMware vSphere client.
+2. From the left navigation, select **Cloud Backup for Virtual Machines**.
+3. Select the **Settings** page and then select the **Cloud Subscription** tab.
+4. Select **Add** and then provide the required values from your Azure subscription.
+
+### Add Azure NetApp Files cloud subscription account
+
+1. From the left navigation, select **Cloud Backup for Virtual Machines**.
+2. Select **Storage Systems**.
+3. Select **Add** to add the Azure NetApp Files cloud subscription account details.
+4. Provide the required values and then select **Add** to save your settings.
+
+## Create a backup policy
+
+You must create backup policies before you can use Cloud Backup for Virtual Machines to back up Azure NetApp Files datastores and VMs.
+
+1. In the left navigation of the vCenter web client page, select **Cloud Backup for Virtual Machines** > **Policies**.
+2. On the **Policies** page, select **Create** to initiate the wizard.
+3. On the **New Backup Policy** page, select the vCenter Server that will use the policy, then enter the policy name and a description.
+* **Only alphanumeric characters and underscores (_) are supported in VM, datastore, cluster, policy, backup, or resource group names.** Other special characters are not supported.
+4. Specify the retention settings.
+ The maximum retention value is 255 backups. If the **"Backups to keep"** option is selected during the backup operation, Cloud Backup for Virtual Machines will retain backups with the specified retention count and delete the backups that exceed the retention count.
+5. Specify the frequency settings.
+ The policy specifies the backup frequency only. The specific protection schedule for backing up is defined in the resource group. Therefore, two or more resource groups can share the same policy and backup frequency but have different backup schedules.
+6. **Optional:** In the **Advanced** fields, select the fields that are needed. The Advanced field details are listed in the following table.
+
+ | Field | Action |
+ | - | - |
+ | VM consistency | Check this box to pause the VMs and create a VMware snapshot each time the backup job runs. <br> When you check the VM consistency box, backup operations might take longer and require more storage space. In this scenario, the VMs are first paused, then VMware performs a VM consistent snapshot. Cloud Backup for Virtual Machines then performs its backup operation, and then VM operations are resumed. <br> VM guest memory is not included in VM consistency snapshots. |
+ | Include datastores with independent disks | Check this box to include any datastores with independent disks that contain temporary data in your backup. |
+ | Scripts | Enter the fully qualified path of the prescript or postscript that you want the Cloud Backup for Virtual Machines to run before or after backup operations. For example, you can run a script to update Simple Network Management Protocol (SNMP) traps, automate alerts, and send logs. The script path is validated at the time the script is executed. <br> **NOTE**: Prescripts and postscripts must be located on the virtual appliance VM. To enter multiple scripts, press **Enter** after each script path to list each script on a separate line. The semicolon (;) character is not allowed. |
+7. Select **Add** to save your policy.
+ You can verify that the policy has been created successfully and review the policy configuration by selecting the policy in the **Policies** page.
+
+## Resource groups
+
+A resource group is the container for VMs and datastores that you want to protect.
+
+Do not add VMs in an inaccessible state to a resource group. Although a resource group can contain a VM in an inaccessible state, the inaccessible state will cause backups for the resource group to fail.
+
+### Considerations for resource groups
+
+You can add or remove resources from a resource group at any time.
+* Back up a single resource
+ To back up a single resource (for example, a single VM), you must create a resource group that contains that single resource.
+* Back up multiple resources
+ To back up multiple resources, you must create a resource group that contains multiple resources.
+* Optimize snapshot copies
+ To optimize snapshot copies, group the VMs and datastores that are associated with the same volume into one resource group.
+* Backup policies
+ Although it's possible to create a resource group without a backup policy, you can only perform scheduled data protection operations when at least one policy is attached to the resource group. You can use an existing policy, or you can create a new policy while creating a resource group.
+* Compatibility checks
+ Cloud Backup for VMs performs compatibility checks when you create a resource group. Reasons for incompatibility might be:
+ * Virtual machine disks (VMDKs) are on unsupported storage.
+ * A shared PCI device is attached to a VM.
+ * You have not added the Azure subscription account.
+
+### Create a resource group using the wizard
+
+1. In the left navigation of the vCenter web client page, select **Cloud Backup** for **Virtual Machines** > **Resource Groups**. Then select **+ Create** to start the wizard.
+
+    :::image type="content" source="./media/cloud-backup/vsphere-create-resource-group.png" alt-text="Screenshot of the vSphere Client Resource Group interface, with a red box highlighting the Create button, which shows a green plus sign." lightbox="./media/cloud-backup/vsphere-create-resource-group.png":::
+
+1. On the **General Info & Notification** page in the wizard, enter the required values.
+1. On the **Resource** page, do the following:
+
+ | Field | Action |
+ | -- | -- |
+ | Scope | Select the type of resource you want to protect: <ul><li>Datastores</li><li>Virtual Machines</li></ul> |
+ | Datacenter | Navigate to the VMs or datastores |
+ | Available entities | Select the resources you want to protect. Then select **>** to move your selections to the Selected entities list. |
+
+ When you select **Next**, the system first checks that Cloud Backup for Virtual Machines manages and is compatible with the storage on which the selected resources are located.
+
+ >[!IMPORTANT]
+ >If you receive the message `selected <resource-name> is not Cloud Backup for Virtual Machines compatible` then a selected resource is not compatible with Cloud Backup for Virtual Machines.
+
+1. On the **Spanning disks** page, select an option for VMs with multiple VMDKs across multiple datastores:
+ * Always exclude all spanning datastores
+ (This is the default option for datastores)
+ * Always include all spanning datastores
+ (This is the default for VMs)
+ * Manually select the spanning datastores to be included
+1. On the **Policies** page, select or create one or more backup policies.
+ * To use **an existing policy**, select one or more policies from the list.
+ * To **create a new policy**:
+ 1. Select **+ Create**.
+ 1. Complete the New Backup Policy wizard to return to the Create Resource Group wizard.
+1. On the **Schedules** page, configure the backup schedule for each selected policy.
+ In the **Starting** field, enter a date and time other than zero. The date must be in the format day/month/year. You must fill in each field. The Cloud Backup for Virtual Machines creates schedules in the time zone in which the Cloud Backup for Virtual Machines is deployed. You can modify the time zone by using the Cloud Backup for Virtual Machines GUI.
+
+ :::image type="content" source="./media/cloud-backup/backup-schedules.png" alt-text="A screenshot of the Backup schedules interface showing an hourly backup beginning at 10:22 a.m. on April 26, 2022." lightbox="./media/cloud-backup/backup-schedules.png":::
+1. Review the summary. If you need to change any information, you can return to any page in the wizard to do so. Select **Finish** to save your settings.
+
+ After you select **Finish**, the new resource group will be added to the resource group list.
+
+    If the pause operation fails for any of the VMs in the backup, then the backup is marked as not VM-consistent even if the selected policy has VM consistency enabled. In this case, it's possible that some of the VMs were successfully paused.
+
+### Other ways to create a resource group
+
+In addition to using the wizard, you can:
+* **Create a resource group for a single VM:**
+ 1. Select **Menu** > **Hosts and Clusters**.
+ 1. Right-click the Virtual Machine you want to create a resource group for and select **Cloud Backup for Virtual Machines**. Select **+ Create**.
+* **Create a resource group for a single datastore:**
+ 1. Select **Menu** > **Hosts and Clusters**.
+ 1. Right-click a datastore, then select **Cloud Backup for Virtual Machines**. Select **+ Create**.
+
+## Back up resource groups
+
+Backup operations are performed on all the resources defined in a resource group. If a resource group has a policy attached and a schedule configured, backups occur automatically according to the schedule.
+
+### Prerequisites
+
+* You must have created a resource group with a policy attached.
+ Do not start an on-demand backup job when a job to back up the Cloud Backup for Virtual Machines MySQL database is already running. Use the maintenance console to see the configured backup schedule for the MySQL database.
+
+### Back up resource groups on demand
+
+1. In the left navigation of the vCenter web client page, select **Cloud Backup for Virtual Machines** > **Resource Groups**, then select a resource group. Select **Run Now** to start the backup.
+
+ :::image type="content" source="./media/cloud-backup/resource-groups-run-now.png" alt-text="Image of the vSphere Client Resource Group interface. At the top left, a red box highlights a green circular button with a white arrow inside next to text reading Run Now, instructing you to select this button." lightbox="./media/cloud-backup/resource-groups-run-now.png":::
+
   1. If the resource group has multiple policies configured, then in the **Backup Now** dialog box, select the policy you want to use for this backup operation.
+1. Select **OK** to initiate the backup.
+ >[!NOTE]
+ >You can't rename a backup once it is created.
+1. **Optional:** Monitor the operation progress by selecting **Recent Tasks** at the bottom of the window or on the dashboard Job Monitor for more details.
   If the pause operation fails for any of the VMs in the backup, then the backup completes with a warning and is marked as not VM-consistent even if the selected policy has VM consistency enabled. In this case, it is possible that some of the VMs were successfully paused. In the job monitor, the failed VM details show the pause operation as failed.
+
+## Next steps
+
+* [Restore VMs using Cloud Backup for Virtual Machines](restore-azure-netapp-files-vms.md)
azure-vmware Install Cloud Backup Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-cloud-backup-virtual-machines.md
+
+ Title: Install Cloud Backup for Virtual Machines
+description: Cloud Backup for Virtual Machines is a plug-in installed in the Azure VMware Solution and enables you to back up and restore Azure NetApp Files datastores and virtual machines.
++ Last updated : 08/10/2022++
+# Install Cloud Backup for Virtual Machines
+
+Cloud Backup for Virtual Machines is a plug-in installed in the Azure VMware Solution and enables you to back up and restore Azure NetApp Files datastores and virtual machines (VMs).
+
+Use Cloud Backup for VMs to:
+* Build and securely connect both legacy and cloud-native workloads across environments and unify operations
+* Provision and resize datastore volumes right from the Azure portal
+* Take VM consistent snapshots for quick checkpoints
+* Quickly recover VMs
+
+## Prerequisites
+
+Before you can install Cloud Backup for Virtual Machines, you need to create an Azure service principal with the required Azure NetApp Files privileges. If you've already created one, you can skip to the installation steps below.
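+As a hedged sketch of that prerequisite, the following Azure CLI command creates a service principal scoped to a subscription. The display name is a placeholder, and the role shown is an assumption; assign whichever role grants the Azure NetApp Files privileges your environment actually requires.
+
+```azurecli-interactive
+# Create a service principal for Cloud Backup for Virtual Machines.
+# "cbs-avs-sp" is a placeholder name; "Contributor" is an assumed role,
+# not necessarily the exact set of Azure NetApp Files privileges required.
+az ad sp create-for-rbac \
+  --name "cbs-avs-sp" \
+  --role Contributor \
+  --scopes "/subscriptions/{subscription-id}"
+```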
+
+## Install Cloud Backup for Virtual Machines using the Azure portal
+
+You'll need to install Cloud Backup for Virtual Machines through the Azure portal as an add-on.
+
+1. Sign in to your Azure VMware Solution private cloud.
+1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Install-NetAppCBSA**.
+
+    :::image type="content" source="./media/cloud-backup/run-command.png" alt-text="Screenshot of the Run command packages in the Azure VMware Solution private cloud." lightbox="./media/cloud-backup/run-command.png":::
+
+1. Provide the required values, then select **Run**.
+
+ :::image type="content" source="./media/cloud-backup/run-commands-fields.png" alt-text="Image of the Run Command fields which are described in the table below." lightbox="./media/cloud-backup/run-commands-fields.png":::
+
+ | Field | Value |
+ | | -- |
+ | ApplianceVirtualMachineName | VM name for the appliance. |
+ | EsxiCluster | Destination ESXi cluster name to be used for deploying the appliance. |
+ | VmDatastore | Datastore to be used for the appliance. |
+ | NetworkMapping | Destination network to be used for the appliance. |
+ | ApplianceNetworkName | Network name to be used for the appliance. |
+ | ApplianceIPAddress | IPv4 address to be used for the appliance. |
+ | Netmask | Subnet mask. |
+ | Gateway | Gateway IP address. |
+ | PrimaryDNS | Primary DNS server IP address. |
+ | ApplianceUser | User Account for hosting API services in the appliance. |
+ | AppliancePassword | Password of the user hosting API services in the appliance. |
+ | MaintenanceUserPassword | Password of the appliance maintenance user. |
+
+ >[!IMPORTANT]
+ >You can also install Cloud Backup for Virtual Machines using DHCP by running the package `NetAppCBSApplianceUsingDHCP`. If you install Cloud Backup for Virtual Machines using DHCP, you don't need to provide the values for the PrimaryDNS, Gateway, Netmask, and ApplianceIPAddress fields. These values will be automatically generated.
+
+1. Check **Notifications** or the **Run Execution Status** tab to see the progress. For more information about the status of the execution, see [Run command in Azure VMware Solution](concepts-run-command.md).
+
+Upon successful execution, the Cloud Backup for Virtual Machines will automatically be displayed in the VMware vSphere client.
+
+## Upgrade Cloud Backup for Virtual Machines
+
+You can execute this run command to upgrade the Cloud Backup for Virtual Machines to the next available version.
+
+>[!IMPORTANT]
+> Before you initiate the upgrade, you must:
+> * Back up the MySQL database of Cloud Backup for Virtual Machines.
+> * Take snapshot copies of Cloud Backup for Virtual Machines.
+
+1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Invoke-UpgradeNetAppCBSAppliance**.
+
+1. Provide the required values, and then select **Run**.
+
+1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress.
+
+## Uninstall Cloud Backup for Virtual Machines
+
+You can execute the run command to uninstall Cloud Backup for Virtual Machines.
+
+> [!IMPORTANT]
> Before you initiate the uninstall, you must:
> * Back up the MySQL database of Cloud Backup for Virtual Machines.
> * Ensure that no other VMs are assigned the VMware vSphere tag `AVS_ANF_CLOUD_ADMIN_VM_TAG`. All VMs with this tag are deleted when you uninstall.
+
+1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Uninstall-NetAppCBSAppliance**.
+
+1. Provide the required values, and then select **Run**.
+
+1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress.
+
+## Change vCenter account password
+
+You can execute this command to reset the vCenter account password:
+
+1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Invoke-ResetNetAppCBSApplianceVCenterPasswordA**.
+
+1. Provide the required values, then select **Run**.
+
+1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress.
+
+## Next steps
+
+* [Back up Azure NetApp Files datastores and VMs using Cloud Backup for Virtual Machines](backup-azure-netapp-files-datastores-vms.md)
+* [Restore VMs using Cloud Backup for Virtual Machines](restore-azure-netapp-files-vms.md)
azure-vmware Install Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-vmware-hcx.md
VMware HCX Advanced and its associated Cloud Manager are no longer pre-deployed
Any edition of VMware HCX supports 25 site pairings (on-premises to cloud or cloud to cloud) in a single HCX manager system. The default is HCX Advanced, but you can open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to have HCX Enterprise Edition enabled. Once the service is generally available, you'll have 30 days to decide on your next steps. You can turn off or opt out of the HCX Enterprise Edition service but keep HCX Advanced as it's part of the node cost.
-Downgrading from HCX Enterprise Edition to HCX Advanced is possible without redeploying. First, ensure you've reverted to an HCX Advanced configuration state and not using the Enterprise features. If you plan to downgrade, ensure that no scheduled migrations, features like RAV and [HCX Mobility Optimized Networking (MON)](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html) are in use, and site pairings are three or fewer.
+Downgrading from HCX Enterprise Edition to HCX Advanced is possible without redeploying. First, ensure you've reverted to an HCX Advanced configuration state and are no longer using the Enterprise features. If you plan to downgrade, ensure that no scheduled migrations or features like RAV and [HCX Mobility Optimized Networking (MON)](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html) are in use.
>[!TIP] >You can also [uninstall HCX Advanced](#uninstall-hcx-advanced) through the portal. When you uninstall HCX Advanced, make sure you don't have any active migrations in progress. Removing HCX Advanced returns the resources to your private cloud occupied by the HCX virtual appliances.
azure-vmware Restore Azure Netapp Files Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/restore-azure-netapp-files-vms.md
+
+ Title: Restore VMs using Cloud Backup for Virtual Machines
+description: Learn how to restore virtual machines from a cloud backup to the vCenter.
++ Last updated : 08/10/2022++
+# Restore VMs using Cloud Backup for Virtual Machines
+
+Cloud Backup for Virtual Machines enables you to restore virtual machines (VMs) from the cloud backup to the vCenter.
+
+This article covers how to:
+* Restore VMs from backups
+* Restore deleted VMs from backups
+* Restore VM disks (VMDKs) from backups
+* Recover the Cloud Backup for Virtual Machines internal database
+
+## Restore VMs from backups
+
+When you restore a VM, you can overwrite the existing content with the backup copy that you select or you can restore to a new VM.
+
+You can restore VMs to the original datastore mounted on the original ESXi host (this overwrites the original VM).
+
+## Prerequisites to restore VMs
+
+* A backup must exist. <br>
+You must have created a backup of the VM using the Cloud Backup for Virtual Machines before you can restore the VM.
+>[!NOTE]
+>Restore operations cannot finish successfully if there are snapshots of the VM that were performed by software other than the Cloud Backup for Virtual Machines.
+* The VM must not be in transit. <br>
+ The VM that you want to restore must not be in a state of vMotion or Storage vMotion.
+* High Availability (HA) configuration errors <br>
+ Ensure there are no HA configuration errors displayed on the vCenter ESXi Host Summary screen before restoring backups to a different location.
+
+### Considerations for restoring VMs from backups
+
+* VM is unregistered and registered again
+ The restore operation for VMs unregisters the original VM, restores the VM from a backup snapshot, and registers the restored VM with the same name and configuration on the same ESXi server. You must manually add the VMs to resource groups after the restore.
+* Restoring datastores
+ You cannot restore a datastore, but you can restore any VM in the datastore.
+* VMware consistency snapshot failures for a VM
+ Even if a VMware consistency snapshot for a VM fails, the VM is nevertheless backed up. You can view the entities contained in the backup copy in the Restore wizard and use it for restore operations.
+
+### Restore a VM from a backup
+
+1. In the VMware vSphere web client GUI, select **Menu** in the toolbar. Select **Inventory** and then **Virtual Machines and Templates**.
+1. In the left navigation, right-click a Virtual Machine, then select **NetApp Cloud Backup**. In the drop-down list, select **Restore** to initiate the wizard.
+1. In the Restore wizard, on the **Select Backup** page, select the backup snapshot copy that you want to restore.
+ > [!NOTE]
+ > You can search for a specific backup name or a partial backup name, or you can filter the backup list by selecting the filter icon and then choosing a date and time range, selecting whether you want backups that contain VMware snapshots, whether you want mounted backups, and the location. Select **OK** to return to the wizard.
+1. On the **Select Scope** page, select **Entire Virtual Machine** in the **Restore scope** field, then select **Restore location**, and then enter the destination ESXi information where the backup should be mounted.
+1. When restoring partial backups, the restore operation skips the Select Scope page.
+1. Select the **Restart VM** checkbox if you want the VM to be powered on after the restore operation.
+1. On the **Select Location** page, select the location for the primary or secondary location.
+1. Review the **Summary** page and then select **Finish**.
+1. **Optional:** Monitor the operation progress by selecting Recent Tasks at the bottom of the screen.
+1. Although the VMs are restored, they are not automatically added to their former resource groups. Therefore, you must manually add the restored VMs to the appropriate resource groups.
+
+## Restore deleted VMs from backups
+
+You can restore a deleted VM from a datastore primary or secondary backup to an ESXi host that you select. You can also restore VMs to the original datastore mounted on the original ESXi host, which creates a clone of the VM.
+
+## Prerequisites to restore deleted VMs
+
+* You must have added the Azure cloud subscription account.
+ The user account in vCenter must have the minimum vCenter privileges required for Cloud Backup for Virtual Machines.
+* A backup must exist.
+ You must have created a backup of the VM using the Cloud Backup for Virtual Machines before you can restore the VMDKs on that VM.
+
+### Considerations for restoring deleted VMs
+
+You cannot restore a datastore, but you can restore any VM in the datastore.
+
+### Restore deleted VMs
+
+1. Select **Menu** and then select the **Inventory** option.
+1. Select a datastore, then select the **Configure** tab, then the **Backups in the Cloud Backup for Virtual Machines** section.
+1. Select (double-click) a backup to see a list of all VMs that are included in the backup.
+1. Select the deleted VM from the backup list and then select **Restore**.
+1. On the **Select Scope** page, select **Entire Virtual Machine** in the **Restore scope** field, then select the restore location, and then enter the destination ESXi information where the backup should be mounted.
+1. Select the **Restart VM** checkbox if you want the VM to be powered on after the restore operation.
+1. On the **Select Location** page, select the location of the backup that you want to restore to.
+1. Review the **Summary** page, then select **Finish**.
+
+## Restore VMDKs from backups
+
+You can restore existing VMDKs or deleted or detached VMDKs from either a primary or secondary backup. You can restore one or more VMDKs on a VM to the same datastore.
+
+## Prerequisites to restore VMDKs
+
+* A backup must exist.
+ You must have created a backup of the VM using the Cloud Backup for Virtual Machines.
+* The VM must not be in transit.
+ The VM that you want to restore must not be in a state of vMotion or Storage vMotion.
+
+### Considerations for restoring VMDKs
+
+* If the VMDK is deleted or detached from the VM, then the restore operation attaches the VMDK to the VM.
+* Attach and restore operations connect VMDKs using the default SCSI controller. VMDKs that are attached to a VM with an NVMe controller are backed up, but for attach and restore operations they are connected back using a SCSI controller.
+
+### Restore VMDKs
+
+1. In the VMware vSphere web client GUI, select **Menu** in the toolbar. Select **Inventory**, then **Virtual Machines and Templates**.
+1. In the left navigation, right-click a VM and select **NetApp Cloud Backup**. In the drop-down list, select **Restore**.
+1. In the Restore wizard, on the **Select Backup** page, select the backup copy from which you want to restore. To find the backup, do one of the following options:
+ * Search for a specific backup name or a partial backup name
+ * Filter the backup list by selecting the filter icon and a date and time range. Select if you want backups that contain VMware snapshots, if you want mounted backups, and primary location.
+ Select **OK** to return to the wizard.
+1. On the **Select Scope** page, select **Particular virtual disk** in the Restore scope field, then select the virtual disk and destination datastore.
+1. On the **Select Location** page, select the snapshot copy that you want to restore.
+1. Review the **Summary** page and then select **Finish**.
+1. **Optional:** Monitor the operation progress by selecting Recent Tasks at the bottom of the screen.
+
+## Recovery of Cloud Backup for Virtual Machines internal database
+
+You can use the maintenance console to restore a specific backup of the MySQL database (also called an NSM database) for Cloud Backup for Virtual Machines.
+
+1. Open a maintenance console window.
+1. From the main menu, enter option **1) Application Configuration**.
+1. From the Application Configuration menu, enter option **6) MySQL backup and restore**.
+1. From the MySQL Backup and Restore Configuration menu, enter option **2) List MySQL backups**. Make note of the backup you want to restore.
+1. From the MySQL Backup and Restore Configuration menu, enter option **3) Restore MySQL backup**.
+1. At the prompt "Restore using the most recent backup," enter **n**.
+1. At the prompt "Backup to restore from," enter the backup name, and then select **Enter**.
+ The selected backup MySQL database will be restored to its original location.
+
+If you need to change the MySQL database backup configuration, you can modify:
+* The backup location (the default is: `/opt/netapp/protectionservice/mysqldumps`)
+* The number of backups kept (the default value is three)
+* The time of day the backup is recorded (the default value is 12:39 a.m.)
+
+1. Open a maintenance console window.
+1. From the main menu, enter option **1) Application Configuration**.
+1. From the Application Configuration menu, enter option **6) MySQL backup and restore**.
+1. From the MySQL Backup & Restore Configuration menu, enter option **1) Configure MySQL backup**.
+ :::image type="content" source="./media/cloud-backup/mysql-backup-configuration.png" alt-text="Screenshot of the CLI maintenance menu depicting menu options." lightbox="./media/cloud-backup/mysql-backup-configuration.png":::
azure-web-pubsub Reference Server Sdk Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-js.md
You can use this library in your app server side to manage the WebSocket client
[Source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub) | [Package (NPM)](https://www.npmjs.com/package/@azure/web-pubsub) |
-[API reference documentation](/javascript/api/overview/azure/webpubsub) |
+[API reference documentation](/javascript/api/overview/azure/web-pubsub) |
[Product documentation](./index.yml) | [Samples][samples_ref]
Use **Live Trace** from the Web PubSub service portal to view the live traffic.
## Next steps
backup Azure Backup Architecture For Sap Hana Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-architecture-for-sap-hana-backup.md
Title: Azure Backup Architecture for SAP HANA Backup description: Learn about Azure Backup architecture for SAP HANA backup. Previously updated : 09/27/2021- Last updated : 08/11/2022+++ # Azure Backup architecture for SAP HANA backup
Refer to the following SAP HANA setups and see the execution of backup operation
## Next steps
-[Back up SAP HANA databases in Azure VMs](./backup-azure-sap-hana-database.md).
+- Learn about the supported configurations and scenarios in the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md).
+- Learn about how to [back up SAP HANA databases in Azure VMs](./backup-azure-sap-hana-database.md).
backup Backup Azure Sap Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database.md
In this article, you'll learn how to:
> * Run an on-demand backup job >[!NOTE]
-Refer to the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to know more about the supported configurations and scenarios.
+>See the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to know more about the supported configurations and scenarios.
## Prerequisites
backup Backup Azure Sql Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-backup-cli.md
Title: Back up SQL server databases in Azure VMs using Azure Backup via CLI description: Learn how to use CLI to back up SQL server databases in Azure VMs in the Recovery Services vault. Previously updated : 07/07/2022 Last updated : 08/11/2022
az backup protection auto-enable-for-azurewl --resource-group SQLResourceGroup \
To trigger an on-demand backup, use the [az backup protection backup-now](/cli/azure/backup/protection#az-backup-protection-backup-now) command. >[!NOTE]
->The retention policy of an on-demand backup is determined by the underlying retention policy for the database.
+>The retention period of this backup is determined by the type of on-demand backup you have run.
+>
+>- *On-demand full* retains backups for a minimum of *45 days* and a maximum of *99 years*.
+>- *On-demand copy-only full* accepts any value for retention.
+>- *On-demand differential* retains backups according to the retention of scheduled differentials set in the policy.
+>- *On-demand log* retains backups according to the retention of scheduled logs set in the policy.
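+For example, here's a hedged sketch of a copy-only full backup with an explicit retention date. The resource names follow the placeholder naming used in this article, and the dd-mm-yyyy date format is an assumption.
+
+```azurecli-interactive
+# Trigger an on-demand copy-only full backup with an explicit retention date.
+# All names are placeholders; the dd-mm-yyyy date format is an assumption.
+az backup protection backup-now \
+  --resource-group SQLResourceGroup \
+  --vault-name SQLVault \
+  --container-name "VMAppContainer;Compute;SQLResourceGroup;testSQLVM" \
+  --item-name "sqldatabase;mssqlserver;master" \
+  --backup-management-type AzureWorkload \
+  --backup-type CopyOnlyFull \
+  --retain-until 01-01-2024
+```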
```azurecli-interactive az backup protection backup-now --resource-group SQLResourceGroup \
backup Backup Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-database.md
Title: Back up SQL Server databases to Azure description: This article explains how to back up SQL Server to Azure. The article also explains SQL Server recovery. Previously updated : 08/20/2021 Last updated : 08/11/2022 # About SQL Server Backup in Azure VMs
Last updated 08/20/2021
>[!Note] >Snapshot-based backup for SQL databases in Azure VM is now in preview. This unique offering combines the goodness of snapshots, leading to a better RTO and low impact on the server along with the benefits of frequent log backups for low RPO. For any queries/access, write to us at [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com).
-To view the backup and restore scenarios that we support today, refer to the [support matrix](sql-support-matrix.md#scenario-support).
+To view the backup and restore scenarios that we support today, see the [support matrix](sql-support-matrix.md#scenario-support).
## Backup process
backup Backup Azure Sql Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-manage-cli.md
Title: Manage SQL server databases in Azure VMs using Azure Backup via CLI description: Learn how to use CLI to manage SQL server databases in Azure VMs in the Recovery Services vault. Previously updated : 07/07/2022 Last updated : 08/11/2022
If you've used [Back up an SQL database in Azure using CLI](backup-azure-sql-bac
Azure CLI eases the process of managing an SQL database running on an Azure VM that's backed-up using Azure Backup. The following sections describe each of the management operations.
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to know more about the supported configurations and scenarios.
+ ## Monitor backup and restore jobs Use the [az backup job list](/cli/azure/backup/job#az-backup-job-list) command to monitor completed or currently running jobs (backup or restore). CLI also allows you to [suspend a currently running job](/cli/azure/backup/job#az-backup-job-stop) or [wait until a job completes](/cli/azure/backup/job#az-backup-job-wait).
backup Backup Azure Sql Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-restore-cli.md
Title: Restore SQL server databases in Azure VMs using Azure Backup via CLI description: Learn how to use CLI to restore SQL server databases in Azure VMs in the Recovery Services vault. Previously updated : 07/15/2022 Last updated : 08/11/2022
This article assumes you have an SQL database running on an Azure VM that's backed-up
* Backed-up database/item named *sqldatabase;mssqlserver;master* * Resources in the *westus2* region
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## View restore points for a backed-up database To view the list of all recovery points for a database, use the [az backup recoverypoint list](/cli/azure/backup/recoverypoint#az-backup-recoverypoint-show-log-chain) command as:
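A hedged sketch of that call, using the sample names listed earlier in this article (all values are placeholders):

```azurecli-interactive
# Sketch only: list recovery points for the sample backed-up database.
az backup recoverypoint list --resource-group SQLResourceGroup \
    --vault-name SQLVault \
    --container-name "VMAppContainer;Compute;SQLResourceGroup;testSQLVM" \
    --item-name "sqldatabase;mssqlserver;master" \
    --backup-management-type AzureWorkload \
    --workload-type MSSQL \
    --output table
```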
Name Operation Status Item Name
- - -- -- 0d863259-b0fb-4935-8736-802c6667200b CrossRegionRestore InProgress master [testSQLVM] AzureWorkload 2022-06-21T08:29:24.919138+00:00 0:00:12.372421 ```
+>[!Note]
+>The RPO for the backup data to be available in the secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours plus the log frequency duration (which can be set to a minimum of 15 minutes).
## Restore as files
backup Backup Azure Sql Vm Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-vm-rest-api.md
Title: Back up SQL server databases in Azure VMs using Azure Backup via REST API description: Learn how to use REST API to back up SQL server databases in Azure VMs in the Recovery Services vault Previously updated : 11/30/2021 Last updated : 08/11/2022
This article describes how to back up SQL server databases in Azure VMs using Azure Backup via REST API.
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Prerequisites - A Recovery Services vault
Once you configure a database for backup, backups run according to the policy sc
Triggering an on-demand backup is a *POST* operation.
+>[!Note]
+>The retention period of this backup is determined by the type of on-demand backup you have run.
+>
+>- *On-demand full* retains backups for a minimum of *45 days* and a maximum of *99 years*.
+>- *On-demand copy only full* accepts any value for retention.
+>- *On-demand differential* retains backups as per the retention of scheduled differentials set in policy.
+>- *On-demand log* retains backups as per the retention of scheduled logs set in policy.
+ ```http POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}/backup?api-version=2016-12-01 ```
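One way to exercise this endpoint from a shell is with `az rest`, sketched below. The request body shape (an `AzureWorkloadBackupRequest` with a `backupType`) and every `{placeholder}` are assumptions to adapt to your environment.

```azurecli-interactive
# Sketch only: trigger an on-demand full backup for a protected SQL database.
# Replace each {placeholder} with your own values; the body is a minimal example.
az rest --method post \
    --url "https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}/backup?api-version=2016-12-01" \
    --body '{"properties": {"objectType": "AzureWorkloadBackupRequest", "backupType": "Full", "enableCompression": false}}'
```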
backup Backup Sql Server Database Azure Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-database-azure-vms.md
Title: Back up multiple SQL Server VMs from the vault description: In this article, learn how to back up SQL Server databases on Azure virtual machines with Azure Backup from the Recovery Services vault Previously updated : 06/01/2022 Last updated : 08/11/2022
In this article, you'll learn how to:
> * Discover databases and set up backups. > * Set up auto-protection for databases.
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Prerequisites Before you back up a SQL Server database, check the following criteria:
backup Backup Sql Server On Availability Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-on-availability-groups.md
Title: Back up SQL Server always on availability groups description: In this article, learn how to back up SQL Server on availability groups. Previously updated : 08/20/2021 Last updated : 08/11/2022 # Back up SQL Server always on availability groups Azure Backup offers an end-to-end support for backing up SQL Server always on availability groups (AG) if all nodes are in the same region and subscription as the Recovery Services vault. However, if the AG nodes are spread across regions/subscriptions/on-premises and Azure, there are a few considerations to keep in mind. >[!Note]
->Backup of Basic Availability Group databases is not supported by Azure Backup.
+>- Backup of Basic Availability Group databases is not supported by Azure Backup.
+>- See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
The backup preference used by Azure Backup SQL AG supports full and differential backups only from the primary replica. So, these backup jobs always run on the Primary node irrespective of the backup preference. For copy-only full and transaction log backups, the AG backup preference is considered while deciding the node where backup will run.
The backup preference used by Azure Backup SQL AG supports full and differential
The workload backup extension gets installed on the node when it is registered with the Azure Backup service. When an AG database is configured for backup, the backup schedules are pushed to all the registered nodes of the AG. The schedules fire on all the AG nodes and the workload backup extensions on these nodes synchronize between themselves to decide which node will perform the backup. The node selection depends on the backup type and the backup preference as explained in section 1.
-The selected node proceeds with the backup job, whereas the job triggered on the other nodes bail out, that is, it skips the job.
+The selected node proceeds with the backup job, whereas the job triggered on the other nodes bails out, that is, it skips the job.
>[!Note] >Azure Backup doesn't consider backup priorities or replicas while deciding among the secondary replicas.
Let's consider the following AG deployment as a reference.
:::image type="content" source="./media/backup-sql-server-on-availability-groups/ag-deployment.png" alt-text="Diagram for AG deployment as reference.":::
-Taking the above sample AG deployment, following are various considerations:
+Based on the above sample AG deployment, the following considerations apply:
- As the primary node is in region 1 and subscription 1, the Recovery Services vault (Vault 1) must be in Region 1 and Subscription 1 for protecting this AG. - VM3 can't be registered to Vault 1 as it's in a different subscription.
After the AG has failed over to one of the secondary nodes:
>[!Note] >Log chain breaks do not happen on failover if the failover doesn't coincide with a backup.
-Taking the above sample AG deployment, following are the various failover possibilities:
+Based on the above sample AG deployment, the following failover possibilities apply:
- Failover to VM2 - Full and differential backups will happen from VM2.
Taking the above sample AG deployment, following are the various failover possib
Recovery services vault doesn't support cross-subscription or cross-region backups. This section summarizes how to enable backups for AGs that are spanning subscriptions or Azure regions and the associated considerations. -- Evaluate if you really need to enable backups from all nodes. If one region/subscription has most of the AG nodes and failover to other nodes happens very rarely, setting up backup in that first region may be enough. If the failovers to other region/subscription happen frequently and for prolonged duration, then you may want to setup backups proactively in the other region as well.
+- Evaluate if you really need to enable backups from all nodes. If one region/subscription has most of the AG nodes and failover to other nodes happens very rarely, setting up the backup in that first region may be enough. If failovers to the other region/subscription happen frequently and for a prolonged duration, you may want to set up backups proactively in the other region as well.
- Each vault where the backup gets enabled will have its own set of recovery point chains. Restores from these recovery points can be done to VMs registered in that vault only.
Recovery services vault doesn't support cross-subscription or cross-region bac
To avoid log backup conflicts between the two vaults, we recommend that you set the backup preference to Primary. Then, whichever vault has the primary node will also take the log backups.
-Taking the above sample AG deployment, here are the steps to enable backup from all the nodes. The assumption is that backup preference is satisfied in all the steps.
+Based on the above sample AG deployment, here are the steps to enable backup from all the nodes. The assumption is that backup preference is satisfied in all the steps.
### Step 1: Enable backups in Region 1, Subscription 1 (Vault 1)
For example, the first node has 50 standalone databases protected and both the n
As the AG database jobs are queued on one node and running on another, the backup synchronization (mentioned in section 6) won't work properly. Node 2 might assume that Node 1 is down and therefore jobs from there aren't coming up for synchronization. This can lead to log chain breaks or extra backups as both nodes can take backups independently.
-Similar problem can happen if the number of AG databases protected are more than the throttling limit. In such case, backup for, say, DB1 can be queued on Node 1 whereas it runs on Node 2.
+A similar problem can happen if the number of AG databases protected is more than the throttling limit. In such a case, the backup for, say, DB1 can be queued on Node 1 whereas it runs on Node 2.
We recommend that you use the following backup preferences to avoid these synchronization issues:
backup Backup Sql Server Vm From Vm Pane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-vm-from-vm-pane.md
Title: Back up a SQL Server VM from the VM pane description: In this article, learn how to back up SQL Server databases on Azure virtual machines from the VM pane. Previously updated : 08/13/2020 Last updated : 08/11/2022 # Back up a SQL Server from the VM pane
This article explains how to back up SQL Server running in Azure VMs with the [A
2. Get an [overview](backup-azure-sql-database.md) of Azure Backup for SQL Server VM. 3. Verify that the VM has [network connectivity](backup-sql-server-database-azure-vms.md#establish-network-connectivity).
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Configure backup on the SQL Server You can enable backup on your SQL Server VM from the **Backup** pane in the VM. This method does two things:
backup Manage Azure Sql Vm Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-azure-sql-vm-rest-api.md
Title: Manage SQL server databases in Azure VMs with REST API description: Learn how to use REST API to manage and monitor SQL server databases in Azure VM that are backed up by Azure Backup. Previously updated : 11/29/2021 Last updated : 08/11/2022
This article explains how to manage and monitor the SQL server databases that are backed-up by [Azure Backup](backup-overview.md).
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn about the supported configurations and scenarios.
+ ## Monitor jobs The Azure Backup service triggers jobs that run in the background. This includes scenarios, such as triggering backup, restore operations, and disabling backup. You can track these jobs using their IDs.
backup Manage Monitor Sql Database Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-monitor-sql-database-backup.md
Title: Manage and monitor SQL Server DBs on an Azure VM description: This article describes how to manage and monitor SQL Server databases that are running on an Azure VM. Previously updated : 01/20/2022 Last updated : 08/11/2022
This article describes common tasks for managing and monitoring SQL Server datab
If you haven't yet configured backups for your SQL Server databases, see [Back up SQL Server databases on Azure VMs](backup-azure-sql-database.md)
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Monitor backup jobs in the portal Azure Backup shows all scheduled and on-demand operations under **Backup jobs** in **Backup center** in the Azure portal, except the scheduled log backups, since they can be very frequent. The jobs you see in this portal include database discovery and registration, configure backup, and backup and restore operations.
You can run different types of on-demand backups:
- Differential backup - Log backup
-While you need to specify the retention duration for Copy-only full backup, the retention range for on-demand full backup will automatically be set to 45 days from current time.
+>[!Note]
+>The retention period of this backup is determined by the type of on-demand backup you have run.
+>
+>- *On-demand full* retains backups for a minimum of *45 days* and a maximum of *99 years*.
+>- *On-demand copy only full* accepts any value for retention.
+>- *On-demand differential* retains backups as per the retention of scheduled differentials set in policy.
+>- *On-demand log* retains backups as per the retention of scheduled logs set in policy.
For more information, see [SQL Server backup types](backup-architecture.md#sql-server-backup-types).
backup Restore Azure Sql Vm Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-sql-vm-rest-api.md
Title: Restore SQL server databases in Azure VMs with REST API description: Learn how to use REST API to restore SQL server databases in Azure VM from a restore point created by Azure Backup Previously updated : 11/30/2021 Last updated : 08/11/2022
By the end of this article, you'll learn how to perform the following operations
- View the restore points for a backed-up SQL database. - Restore a full SQL database.
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Prerequisites We assume that you have a backed-up SQL database for restore. If you don't have one, see [Backup SQL Server databases in Azure VMs using REST API](backup-azure-sql-vm-rest-api.md) to create one.
If you've enabled Cross-region restore, then the recovery points will be replica
1. Choose a target server, which is registered to a vault within the secondary paired region. 1. Trigger restore to that server and track it using *JobId*.
+>[!Note]
+>The RPO for the backup data to be available in the secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours plus the log frequency duration (which can be set to a minimum of 15 minutes).
+ ### Fetch distinct recovery points from the secondary region Use the [List Recovery Points API](/rest/api/backup/recovery-points-crr/list) to fetch the list of available recovery points for the database in the secondary region. In the following example, an optional filter is applied to fetch full and differential recovery points in a given time range.
backup Restore Sql Database Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-sql-database-azure-vm.md
Title: Restore SQL Server databases on an Azure VM description: This article describes how to restore SQL Server databases that are running on an Azure VM and that are backed up with Azure Backup. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 07/15/2021 Last updated : 08/11/2022
This article describes how to restore a SQL Server database that's running on an
This article describes how to restore SQL Server databases. For more information, see [Back up SQL Server databases on Azure VMs](backup-azure-sql-database.md).
+>[!Note]
+>See the [SQL backup support matrix](sql-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Restore to a time or a recovery point Azure Backup can restore SQL Server databases that are running on Azure VMs as follows:
For example, when you have a backup policy of weekly fulls, daily differentials and
#### Excluding backup file types
-The **ExtensionSettingOverrides.json** is a JSON (JavaScript Object Notation) file, that contains overrides for multiple settings of the Azure Backup service for SQL. For "Partial Restore as files" operation, a new JSON field ``` RecoveryPointsToBeExcludedForRestoreAsFiles ``` must be added. This field holds a string value that denotes which recovery point types should be excluded in the next restore as files operation.
+**ExtensionSettingOverrides.json** is a JSON (JavaScript Object Notation) file that contains overrides for multiple settings of the Azure Backup service for SQL. For the "Partial Restore as files" operation, a new JSON field `RecoveryPointsToBeExcludedForRestoreAsFiles` must be added. This field holds a string value that denotes which recovery point types should be excluded in the next restore as files operation.
1. In the target machine where files are to be downloaded, go to "C:\Program Files\Azure Workload Backup\bin" folder 2. Create a new JSON file named "ExtensionSettingOverrides.JSON", if it doesn't already exist.
The secondary region restore user experience will be similar to the primary regi
>[!NOTE] >- After the restore is triggered and in the data transfer phase, the restore job can't be cancelled. >- The role/access level required to perform a restore operation across regions is the _Backup Operator_ role in the subscription and _Contributor (write)_ access on the source and target virtual machines. To view backup jobs, _Backup Reader_ is the minimum permission required in the subscription.
+>- The RPO for the backup data to be available in the secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours plus the log frequency duration (which can be set to a minimum of 15 minutes).
### Monitoring secondary region restore jobs
backup Sap Hana Db About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-db-about.md
Title: About SAP HANA database backup in Azure VMs description: In this article, learn about backing up SAP HANA databases that are running on Azure virtual machines. Previously updated : 09/27/2021 Last updated : 08/11/2022 # About SAP HANA database backup in Azure VMs
Using Azure Backup to back up and restore SAP HANA databases, gives the followin
* **Long-term retention**: For rigorous compliance and audit needs. Retain your backups for years, based on the retention duration, beyond which the recovery points will be pruned automatically by the built-in lifecycle management capability. * **Backup Management from Azure**: Use Azure Backup's management and monitoring capabilities for improved management experience. Azure CLI is also supported.
-To view the backup and restore scenarios that we support today, refer to the [SAP HANA scenario support matrix](./sap-hana-backup-support-matrix.md#scenario-support).
+To view the backup and restore scenarios that we support today, see the [SAP HANA scenario support matrix](./sap-hana-backup-support-matrix.md#scenario-support).
## Backup architecture
backup Sap Hana Db Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-db-manage.md
Title: Manage backed up SAP HANA databases on Azure VMs description: In this article, learn common tasks for managing and monitoring SAP HANA databases that are running on Azure virtual machines. Previously updated : 08/09/2022 Last updated : 08/11/2022
This article describes common tasks for managing and monitoring SAP HANA databas
If you haven't configured backups yet for your SAP HANA databases, see [Back up SAP HANA databases on Azure VMs](./backup-azure-sap-hana-database.md).
+>[!Note]
+>See the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Monitor manual backup jobs in the portal Azure Backup shows all manually triggered jobs in the **Backup jobs** section in **Backup center**.
backup Sap Hana Db Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-db-restore.md
Title: Restore SAP HANA databases on Azure VMs description: In this article, discover how to restore SAP HANA databases that are running on Azure Virtual Machines. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 07/15/2022 Last updated : 08/11/2022
This article describes how to restore SAP HANA databases running on an Azure Vir
For more information, on how to back up SAP HANA databases, see [Back up SAP HANA databases on Azure VMs](./backup-azure-sap-hana-database.md).
+>[!Note]
+>See the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Restore to a point in time or to a recovery point Azure Backup can restore SAP HANA databases that are running on Azure VMs as follows:
For example, when you have a backup policy of weekly fulls, daily differentials and
#### Excluding backup file types
-The **ExtensionSettingOverrides.json** is a JSON (JavaScript Object Notation) file, that contains overrides for multiple settings of the Azure Backup service for SQL. For "Partial Restore as files" operation, a new JSON field ``` RecoveryPointsToBeExcludedForRestoreAsFiles ``` must be added. This field holds a string value that denotes which recovery point types should be excluded in the next restore as files operation.
+**ExtensionSettingOverrides.json** is a JSON (JavaScript Object Notation) file that contains overrides for multiple settings of the Azure Backup service for SQL. For the "Partial Restore as files" operation, a new JSON field `RecoveryPointsToBeExcludedForRestoreAsFiles` must be added. This field holds a string value that denotes which recovery point types should be excluded in the next restore as files operation.
1. In the target machine where files are to be downloaded, go to "opt/msawb/bin" folder 2. Create a new JSON file named "ExtensionSettingOverrides.JSON", if it doesn't already exist.
The secondary region restore user experience will be similar to the primary regi
>[!NOTE] >* After the restore is triggered and in the data transfer phase, the restore job can't be cancelled. >* The role/access level required to perform a restore operation across regions is the _Backup Operator_ role in the subscription and _Contributor (write)_ access on the source and target virtual machines. To view backup jobs, _Backup Reader_ is the minimum permission required in the subscription.
+>* The RPO for the backup data to be available in the secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours plus the log frequency duration (which can be set to a minimum of 15 minutes).
### Monitoring secondary region restore jobs
backup Tutorial Sap Hana Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-backup-cli.md
Title: Tutorial - SAP HANA DB backup on Azure using Azure CLI description: In this tutorial, learn how to back up SAP HANA databases running on an Azure VM to an Azure Backup Recovery Services vault using Azure CLI. Previously updated : 07/22/2022 Last updated : 08/11/2022
To get container name, run the following command. [Learn about this CLI command]
While the section above details how to configure a scheduled backup, this section talks about triggering an on-demand backup. To do this, we use the [az backup protection backup-now](/cli/azure/backup/protection#az-backup-protection-backup-now) command. >[!NOTE]
-> By default, the retention of on-demand backups is set to 45 days.
+>The retention period of this backup is determined by the type of on-demand backup you have run.
+>- *On-demand full backups* are retained for a minimum of *45 days* and a maximum of *99 years*.
+>- *On-demand differential backups* are retained as per the *log retention set in the policy*.
+>- *On-demand incremental backups* aren't currently supported.
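As a hedged illustration, the sketch below triggers an on-demand full backup with an explicit retention date; the resource group, vault, container, and item names are placeholders based on this tutorial's sample setup.

```azurecli-interactive
# Sketch only: on-demand full backup of the sample SAP HANA database (names are placeholders).
az backup protection backup-now --resource-group saphanaResourceGroup \
    --vault-name saphanaVault \
    --container-name "VMAppContainer;Compute;saphanaResourceGroup;saphanaVM" \
    --item-name "saphanadatabase;hxe;hxe" \
    --backup-management-type AzureWorkload \
    --backup-type Full \
    --retain-until 01-01-2030 \
    --output table
```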
```azurecli-interactive az backup protection backup-now --resource-group saphanaResourceGroup \
backup Tutorial Sap Hana Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-manage-cli.md
Title: 'Tutorial: Manage backed-up SAP HANA DB using CLI' description: In this tutorial, learn how to manage backed-up SAP HANA databases running on an Azure VM using Azure CLI. Previously updated : 12/4/2019 Last updated : 08/11/2022
If you've used [Back up an SAP HANA database in Azure using CLI](tutorial-sap-ha
Azure CLI makes it easy to manage an SAP HANA database running on an Azure VM that's backed-up using Azure Backup. This tutorial details each of the management operations.
+>[!Note]
+>See the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## Monitor backup and restore jobs To monitor completed or currently running jobs (backup or restore), use the [az backup job list](/cli/azure/backup/job#az-backup-job-list) cmdlet. CLI also allows you to [suspend a currently running job](/cli/azure/backup/job#az-backup-job-stop) or [wait until a job completes](/cli/azure/backup/job#az-backup-job-wait).
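For example, a minimal sketch that blocks a script until a specific job finishes; the resource group, vault name, and job ID are placeholders.

```azurecli-interactive
# Sketch only: wait (up to one hour) for the given backup or restore job to complete.
az backup job wait --resource-group saphanaResourceGroup \
    --vault-name saphanaVault \
    --name 00000000-0000-0000-0000-000000000000 \
    --timeout 3600
```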
backup Tutorial Sap Hana Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-restore-cli.md
Title: Tutorial - SAP HANA DB restore on Azure using CLI description: In this tutorial, learn how to restore SAP HANA databases running on an Azure VM from an Azure Backup Recovery Services vault using Azure CLI. Previously updated : 12/23/2021 Last updated : 08/11/2022
This tutorial assumes you have an SAP HANA database running on Azure VM that's b
* Backed-up database/item named *saphanadatabase;hxe;hxe* * Resources in the *westus2* region
+>[!Note]
+>See the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to learn more about the supported configurations and scenarios.
+ ## View restore points for a backed-up database To view the list of all the recovery points for a database, use the [az backup recoverypoint list](/cli/azure/backup/recoverypoint#az-backup-recoverypoint-show-log-chain) cmdlet as follows:
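A hedged sketch of viewing the log chain for the sample database follows; all names are placeholders from this tutorial's setup.

```azurecli-interactive
# Sketch only: show the unbroken log-chain segments for the sample SAP HANA database.
az backup recoverypoint show-log-chain --resource-group saphanaResourceGroup \
    --vault-name saphanaVault \
    --container-name "VMAppContainer;Compute;saphanaResourceGroup;saphanaVM" \
    --item-name "saphanadatabase;hxe;hxe" \
    --backup-management-type AzureWorkload \
    --workload-type SAPHanaDatabase \
    --output table
```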
Name Operation Status Item Name
00000000-0000-0000-0000-000000000000 CrossRegionRestore InProgress H10 [hanasnapcvt01] AzureWorkload 2021-12-22T05:21:34.165617+00:00 0:00:05.665470 ```
+>[!Note]
+>The RPO for the backup data to be available in the secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours plus the log frequency duration (which can be set to a minimum of 15 minutes).
+ ## Restore as files To restore the backup data as files instead of a database, we'll use **RestoreAsFiles** as the restore mode. Then choose the restore point, which can either be a previous point-in-time or any of the previous restore points. Once the files are dumped to a specified path, you can take these files to any SAP HANA machine where you want to restore them as a database. Because you can move these files to any machine, you can now restore the data across subscriptions and regions.
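A hedged two-step sketch of this flow is shown below: first generate a **RestoreAsFiles** recovery configuration, then trigger the restore with it. Every name, the recovery point, and the file path are placeholders, and the exact flag set is an assumption to adapt to your environment.

```azurecli-interactive
# Sketch only: build a RestoreAsFiles recovery config for a chosen recovery point,
# then trigger the restore. All names, the recovery point, and the path are placeholders.
az backup recoveryconfig show --resource-group saphanaResourceGroup \
    --vault-name saphanaVault \
    --container-name "VMAppContainer;Compute;saphanaResourceGroup;saphanaVM" \
    --item-name "saphanadatabase;hxe;hxe" \
    --restore-mode RestoreAsFiles \
    --rp-name 62640091676331 \
    --target-container-name "VMAppContainer;Compute;saphanaResourceGroup;targetVM" \
    --filepath /home/azureuser/restoreasfiles \
    --backup-management-type AzureWorkload \
    --output json > recoveryconfig.json

az backup restore restore-azurewl --resource-group saphanaResourceGroup \
    --vault-name saphanaVault \
    --recovery-config recoveryconfig.json \
    --output table
```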
certification Program Requirements Edge Secured Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/program-requirements-edge-secured-core.md
Validation|Device to be validated through toolset to ensure the device supports
|Requirements dependency|HVCI is enabled on the device.| |Validation Type|Manual/Tools| |Validation|Device to be validated through [Edge Secured-core Agent](https://aka.ms/Scforwiniot) toolset to ensure that HVCI is enabled on the device.|
-|Resources|https://docs.microsoft.com/windows-hardware/design/device-experiences/oem-hvci-enablement|
+|Resources| [Hypervisor-protected Code Integrity enablement](/windows-hardware/design/device-experiences/oem-hvci-enablement) |
</br>
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 8/3/2022 Last updated : 8/11/2022 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## August 2022 Guest OS
+
+>[!NOTE]
+>The August Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the August Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 22-08 | [5016623] | Latest Cumulative Update (LCU) | 6.45 | Aug 9, 2022 |
+| Rel 22-08 | [5016618] | IE Cumulative Updates | 2.127, 3.117, 4.105 | Aug 9, 2022 |
+| Rel 22-08 | [5016627] | Latest Cumulative Update (LCU) | 7.15 | Aug 9, 2022 |
+| Rel 22-08 | [5016622] | Latest Cumulative Update (LCU) | 5.71 | Aug 9, 2022 |
+| Rel 22-08 | [5013637] | .NET Framework 3.5 Security and Quality Rollup | 2.127 | Aug 9, 2022 |
+| Rel 22-08 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup | 2.127 | May 10, 2022 |
+| Rel 22-08 | [5013638] | .NET Framework 3.5 Security and Quality Rollup | 4.107 | Jun 14, 2022 |
+| Rel 22-08 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup | 4.107 | May 10, 2022 |
+| Rel 22-08 | [5013635] | .NET Framework 3.5 Security and Quality Rollup | 3.114 | Aug 9, 2022 |
+| Rel 22-08 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup | 3.114 | May 10, 2022 |
+| Rel 22-08 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.47 | May 10, 2022 |
+| Rel 22-08 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | 7.15 | May 10, 2022 |
+| Rel 22-08 | [5016676] | Monthly Rollup | 2.127 | Aug 9, 2022 |
+| Rel 22-08 | [5016672] | Monthly Rollup | 3.114 | Aug 9, 2022 |
+| Rel 22-08 | [5016681] | Monthly Rollup | 4.107 | Aug 9, 2022 |
+| Rel 22-08 | [5016263] | Servicing Stack update | 3.114 | Jul 12, 2022 |
+| Rel 22-08 | [5016264] | Servicing Stack update | 4.107 | Jul 12, 2022 |
+| Rel 22-08 | [4578013] | OOB Standalone Security Update | 4.107 | Aug 19, 2020 |
+| Rel 22-08 | [5017095] | Servicing Stack update | 5.71 | Aug 9, 2022 |
+| Rel 22-08 | [5016057] | Servicing Stack update | 2.127 | Jul 12, 2022 |
+| Rel 22-08 | [4494175] | Microcode | 5.71 | Sep 1, 2020 |
+| Rel 22-08 | [4494174] | Microcode | 6.47 | Sep 1, 2020 |
+[5016623]: https://support.microsoft.com/kb/5016623
+[5016618]: https://support.microsoft.com/kb/5016618
+[5016627]: https://support.microsoft.com/kb/5016627
+[5016622]: https://support.microsoft.com/kb/5016622
+[5013637]: https://support.microsoft.com/kb/5013637
+[5013644]: https://support.microsoft.com/kb/5013644
+[5013638]: https://support.microsoft.com/kb/5013638
+[5013643]: https://support.microsoft.com/kb/5013643
+[5013635]: https://support.microsoft.com/kb/5013635
+[5013642]: https://support.microsoft.com/kb/5013642
+[5013641]: https://support.microsoft.com/kb/5013641
+[5013630]: https://support.microsoft.com/kb/5013630
+[5016676]: https://support.microsoft.com/kb/5016676
+[5016672]: https://support.microsoft.com/kb/5016672
+[5016681]: https://support.microsoft.com/kb/5016681
+[5016263]: https://support.microsoft.com/kb/5016263
+[5016264]: https://support.microsoft.com/kb/5016264
+[4578013]: https://support.microsoft.com/kb/4578013
+[5017095]: https://support.microsoft.com/kb/5017095
+[5016057]: https://support.microsoft.com/kb/5016057
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
## July 2022 Guest OS
cognitive-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md
This documentation contains the following types of articles:
* The [conceptual articles](./concept-face-detection.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions.
-For a more structured approach, follow a Microsoft Learn module for Face.
+For a more structured approach, follow a Learn module for Face.
* [Detect and analyze faces with the Face service](/learn/modules/detect-analyze-faces/) ## Example use cases
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
This documentation contains the following types of articles:
* The [conceptual articles](concept-tagging-images.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
-For a more structured approach, follow a Microsoft Learn module for Image Analysis.
+For a more structured approach, follow a Learn module for Image Analysis.
* [Analyze images with the Computer Vision service](/learn/modules/analyze-images-computer-vision/) ## Image Analysis features
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
This documentation contains the following types of articles:
<!--* The [conceptual articles](how-to/call-read-api.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. -->
-For a more structured approach, follow a Microsoft Learn module for OCR.
+For a more structured approach, follow a Learn module for OCR.
* [Read Text in Images and Documents with the Computer Vision Service](/learn/modules/read-text-images-documents-with-computer-vision-service/) ## Read API
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/overview.md
This documentation contains the following article types:
* [**Concepts**](text-moderation-api.md) provide in-depth explanations of the service functionality and features. * [**Tutorials**](ecommerce-retail-catalog-moderation.md) are longer guides that show you how to use the service as a component in broader business solutions.
-For a more structured approach, follow a Microsoft Learn module for Content Moderator.
+For a more structured approach, follow a Learn module for Content Moderator.
* [Introduction to Content Moderator](/learn/modules/intro-to-content-moderator/) * [Classify and moderate text with Azure Content Moderator](/learn/modules/classify-and-moderate-text-with-azure-content-moderator/)
As with all of the Cognitive Services, developers using the Content Moderator se
## Next steps
-To get started using Content Moderator on the web portal, follow [Try Content Moderator on the web](quick-start.md). Or, complete a [client library or REST API quickstart](client-libraries.md) to implement the basic scenarios in code.
+To get started using Content Moderator on the web portal, follow [Try Content Moderator on the web](quick-start.md). Or, complete a [client library or REST API quickstart](client-libraries.md) to implement the basic scenarios in code.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md
This documentation contains the following types of articles:
* The [tutorials](./iot-visual-alerts-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. <!--* The [conceptual articles](Vision-API-How-to-Topics/call-read-api.md) provide in-depth explanations of the service's functionality and features.-->
-For a more structured approach, follow a Microsoft Learn module for Custom Vision:
+For a more structured approach, follow a Learn module for Custom Vision:
* [Classify images with the Custom Vision service](/learn/modules/classify-images-custom-vision/) * [Classify endangered bird species with Custom Vision](/learn/modules/cv-classify-bird-species/)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following table lists the prebuilt neural voices supported in each language.
| Arabic (United Arab Emirates) | `ar-AE` | Male | `ar-AE-HamdanNeural` | General | | Arabic (Yemen) | `ar-YE` | Female | `ar-YE-MaryamNeural` | General | | Arabic (Yemen) | `ar-YE` | Male | `ar-YE-SalehNeural` | General |
-| Azerbaijani (Azerbaijan) | `az-AZ` | Female | `az-AZ-BabekNeural` <sup>New</sup> | General |
-| Azerbaijani (Azerbaijan) | `az-AZ` | Male | `az-AZ-BanuNeural` <sup>New</sup> | General |
+| Azerbaijani (Azerbaijan) | `az-AZ` | Male | `az-AZ-BabekNeural` <sup>New</sup> | General |
+| Azerbaijani (Azerbaijan) | `az-AZ` | Female | `az-AZ-BanuNeural` <sup>New</sup> | General |
| Bangla (Bangladesh) | `bn-BD` | Female | `bn-BD-NabanitaNeural` | General | | Bangla (Bangladesh) | `bn-BD` | Male | `bn-BD-PradeepNeural` | General | | Bengali (India) | `bn-IN` | Female | `bn-IN-TanishaaNeural` | General |
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md
The core operation of the Translator service is translating text. In this quicks
> [!TIP] >
- > If you're new to Visual Studio, try the [**Introduction to Visual Studio**](/learn/modules/go-get-started/) Microsoft Learn module.
+ > If you're new to Visual Studio, try the [Introduction to Visual Studio](/learn/modules/go-get-started/) Learn module.
1. Open Visual Studio.
You can use any text editor to write Go applications. We recommend using the lat
> [!TIP] >
-> If you're new to Go, try the [**Get started with Go**](/learn/modules/go-get-started/) Microsoft Learn module.
+> If you're new to Go, try the [Get started with Go](/learn/modules/go-get-started/) Learn module.
1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
After a successful call, you should see the following response:
> [!TIP] >
- > If you're new to Node.js, try the [**Introduction to Node.js**](/learn/modules/intro-to-nodejs/) Microsoft Learn module.
+ > If you're new to Node.js, try the [Introduction to Node.js](/learn/modules/intro-to-nodejs/) Learn module.
1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-app`.
After a successful call, you should see the following response:
> [!TIP] >
- > If you're new to Python, try the [**Introduction to Python**](/learn/paths/beginner-python/) Microsoft Learn module.
+ > If you're new to Python, try the [Introduction to Python](/learn/paths/beginner-python/) Learn module.
1. Open a terminal window and use pip to install the Requests library and uuid0 package:
cognitive-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-text-apis.md
To call the Translator service via the [REST API](reference/rest-api-guide.md),
> [!TIP] >
- > If you're new to Visual Studio, try the [**Introduction to Visual Studio**](/learn/modules/go-get-started/) Microsoft Learn module.
+ > If you're new to Visual Studio, try the [Introduction to Visual Studio](/learn/modules/go-get-started/) Learn module.
1. Open Visual Studio.
You can use any text editor to write Go applications. We recommend using the lat
> [!TIP] >
-> If you're new to Go, try the [**Get started with Go**](/learn/modules/go-get-started/) Microsoft Learn module.
+> If you're new to Go, try the [Get started with Go](/learn/modules/go-get-started/) Learn module.
1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
You can use any text editor to write Go applications. We recommend using the lat
> [!TIP] >
- > If you're new to Node.js, try the [**Introduction to Node.js**](/learn/modules/intro-to-nodejs/) Microsoft Learn module.
+ > If you're new to Node.js, try the [Introduction to Node.js](/learn/modules/intro-to-nodejs/) Learn module.
1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-text-app`.
You can use any text editor to write Go applications. We recommend using the lat
> [!TIP] >
- > If you're new to Python, try the [**Introduction to Python**](/learn/paths/beginner-python/) Microsoft Learn module.
+ > If you're new to Python, try the [Introduction to Python](/learn/paths/beginner-python/) Learn module.
1. Open a terminal window and use pip to install the Requests library and uuid0 package:
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/overview.md
Previously updated : 06/17/2022 Last updated : 08/10/2022
As you use CLU, see the following reference documentation and samples for Azure
|||| |REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | | |REST APIs (Runtime) | [REST API documentation](https://aka.ms/clu-apis) | |
-|C# (Runtime) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
-|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
+|C# (Runtime) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme-pre?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
+|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
## Responsible AI
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/tutorials/cognitive-search.md
Typically after you create a project, you go ahead and start [tagging the docume
## Deploy your model
-Generally after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+Generally after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/view-model-evaluation.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
[!INCLUDE [Deploy a model using Language Studio](../includes/language-studio/deploy-model.md)]
Training could take somewhere between 10 and 30 minutes for this sample dataset.
## Deploy your model
-Generally after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this tutorial, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+Generally after training a model you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/view-model-evaluation.md) if necessary. In this tutorial, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
### Start deployment job
Generally after training a model you would review its [evaluation details](../ho
### Run the indexer command
-After youΓÇÖve published your Azure function and prepared your configs file, you can run the indexer command.
+After you've published your Azure function and prepared your configs file, you can run the indexer command.
```cli indexer index --index-name <name-your-index-here> --configs <absolute-path-to-configs-file> ```
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/tutorials/cognitive-search.md
Typically after you create a project, you go ahead and start [tagging the docume
## Deploy your model
-Generally after training a model you would review it's [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+Generally, after training a model, you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/view-model-evaluation.md) if necessary. In this quickstart, you'll just deploy your model and make it available to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
[!INCLUDE [Deploy a model using Language Studio](../includes/language-studio/deploy-model.md)]
Training could take somewhere between 10 and 30 minutes for this sample dataset.
## Deploy your model
-Generally after training a model you would review it's [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/improve-model.md) if necessary. In this quickstart, you will just deploy your model, and make it available for you to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
+Generally, after training a model, you would review its [evaluation details](../how-to/view-model-evaluation.md) and [make improvements](../how-to/view-model-evaluation.md) if necessary. In this quickstart, you'll just deploy your model and make it available to try in Language Studio, or you can call the [prediction API](https://aka.ms/ct-runtime-swagger).
### Submit deployment job
Generally after training a model you would review it's [evaluation details](../h
### Run the indexer command
-After youΓÇÖve published your Azure function and prepared your configs file, you can run the indexer command.
+After you've published your Azure function and prepared your configs file, you can run the indexer command.
```cli indexer index --index-name <name-your-index-here> --configs <absolute-path-to-configs-file> ```
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/overview.md
Previously updated : 06/17/2022 Last updated : 08/10/2022
As you use orchestration workflow, see the following reference documentation and
|||| |REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | | |REST APIs (Runtime) | [REST API documentation](https://aka.ms/clu-runtime-api) | |
-|C# (Runtime) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
-|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
+|C# (Runtime) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme-pre?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
+|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
## Responsible AI
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/embeddings.md
An embedding is a special format of data representation that can be easily utili
To obtain an embedding vector for a piece of text we make a request to the embeddings endpoint as shown in the following code snippets: ```console
-curl https://YOUR_RESOURCE_NAME.openaiazure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2022-06-01-preview\
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2022-06-01-preview\
-H 'Content-Type: application/json' \ -H 'api-key: YOUR_API_KEY' \ -d '{"input": "Sample Document goes here"}'
Our embedding models may be unreliable or pose social risks in certain cases, an
## Next steps
-Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
+Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/concepts.md
An exception policy controls the behavior of a Job based on a trigger and execut
[azure_sub]: https://azure.microsoft.com/free/dotnet/ [cla]: https://cla.microsoft.com [nuget]: https://www.nuget.org/
-[netstandars2mappings]:https://github.com/dotnet/standard/blob/master/docs/versions.md
-[useraccesstokens]:https://docs.microsoft.com/azure/communication-services/quickstarts/access-tokens?pivots=programming-language-csharp
+[netstandars2mappings]: https://github.com/dotnet/standard/blob/master/docs/versions.md
+[useraccesstokens]: /azure/communication-services/quickstarts/access-tokens?pivots=programming-language-csharp
[communication_resource_docs]: ../../quickstarts/create-communication-resource.md?pivots=platform-azp&tabs=windows [communication_resource_create_portal]: ../../quickstarts/create-communication-resource.md?pivots=platform-azp&tabs=windows [communication_resource_create_power_shell]: /powershell/module/az.communication/new-azcommunicationservice
communication-services Media Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/media-access.md
+
+ Title: Azure Communication Services Calling SDK RAW media overview
+
+description: Provides an overview of media access
+++++ Last updated : 07/21/2022+++++
+# Media access overview
++
+Azure Communication Services gives developers real-time access to media streams so they can capture, analyze, and process audio or video content during active calls.
+
+Live audio and video are everywhere today, in the form of online meetings, conferences, live events, online classes, and customer support. With raw media access, developers can analyze the audio or video streams of each participant in a call in real time. In contact centers, these streams can feed custom AI models, such as your own NLP for conversation analysis, or provide real-time insights and suggestions to boost agent productivity. In virtual appointments, media streams can be used to analyze sentiment when providing virtual care for patients, or to provide remote assistance during video calls by using Mixed Reality capabilities. Raw media access also opens a path for developers to build on newer innovations and enhance interaction experiences.
+
+The Azure Communication Services SDKs provide access to media streams from the client and server side, enabling developers to build richer, more inclusive virtual experiences during voice or video interactions.
++
+## Media access workflow
+
+The workflow can be split into three operations:
+
+- **Capture media**: Media can be captured locally via the client SDKs or on the server side.
+- **Process or transform**: Media can be transformed locally on the client (for example, to add background blur) or processed in a cloud service (for example, with your custom NLP for conversation insights).
+- **Provide context or inject the transformed media back**: The output of the transformed media streams (for example, sentiment analysis) can be used to provide context, or augmented media streams can be injected back into the interaction through the client SDK or through the media streaming API via the server SDK.
+
+## Media access via the Calling Client SDK
+During a call, developers can access the audio and video media streams. Outgoing local audio and video streams can be pre-processed before they're sent to the encoder. Incoming remote streams can be post-processed before playback on the screen or speaker. For mixed incoming audio, the client calling SDK has access to the mixed incoming remote audio stream, which includes the mixed audio of the top four most dominant speakers on the call. For unmixed incoming audio, the client calling SDK has access to the individual audio stream of each participant on the call.
+++
+## Media access use cases
+- **Screen share**: Local outgoing video access can be used to enable screen sharing. Developers can implement foreground services to capture the frames and publish them by using the calling SDK `OutgoingVirtualVideoStreamOptions`.
+- **Background blur**: Local outgoing video access can be used to capture the video frames from the camera and apply background blur before publishing the blurred frames by using the calling SDK `OutgoingVirtualVideoStreamOptions`.
+- **Video filters**: Local outgoing video access can be used to capture the video frames from the camera and apply AI video filters on the captured frames before publishing them by using the calling SDK `OutgoingVirtualVideoStreamOptions`.
+- **Augmented reality/virtual reality**: Remote incoming video streams can be captured and augmented with a virtual environment before rendering on the screen.
+- **Spatial audio**: Remote incoming audio access can be used to inject spatial audio into the incoming audio stream.
++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with raw media](../../quickstarts/voice-video-calling/get-started-raw-media-access.md)
+
+For more information, see the following articles:
+- Familiarize yourself with general [call flows](../call-flows.md)
+- Learn about [call types](../voice-video-calling/about-call-types.md)
+- [Plan your PSTN solution](../telephony/plan-solution.md)
communication-services Learn Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/learn-modules.md
Title: Microsoft Learn modules for Azure Communication Services
+ Title: Learn modules for Azure Communication Services
description: Learn about the available Learn modules for Azure Communication Services.
cosmos-db Mongodb Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/mongodb-indexing.md
In the preceding example, omitting the ```"university":1``` clause returns an er
Unique indexes need to be created while the collection is empty.
-Support for unique index on existing collections with data is available in preview for accounts that do not use Synapse Link or Continuous backup. You can sign up for the feature ΓÇ£Azure Cosmos DB API for MongoDB New Unique Indexes in existing collectionΓÇ¥ through the [Preview Features blade in the portal](./../access-previews.md).
-
-#### Unique partial indexes
-
-Unique partial indexes can be created by specifying a partialFilterExpression along with the 'unique' constraint in the index. This results in the unique constraint being applied only to the documents that meet the specified filter expression.
-
-The unique constraint will not be effective for documents that do not meet the specified criteria. As a result, other documents will not be prevented from being inserted into the collection.
-
-This feature is supported with the Cosmos DB API for MongoDB versions 3.6 and above.
-
-To create a unique partial index from Mongo Shell, use the command `db.collection.createIndex()` with the 'partialFilterExpression' option and 'unique' constraint.
-The partialFilterExpression option accepts a json document that specifies the filter condition using:
-
-* equality expressions (i.e. field: value or using the $eq operator),
-* '$exists: true' expression,
-* $gt, $gte, $lt, $lte expressions,
-* $type expressions,
-* $and operator at the top-level only
-
-The following command creates an index on collection `books` that specifies a unique constraint on the `title` field and a partial filter expression `rating: { $gte: 3 }`:
-
-```shell
-db.books.createIndex(
- { Title: 1 },
- { unique: true, partialFilterExpression: { rating: { $gte: 3 } } }
-)
-```
-
-To delete a partial unique index from the Mongo Shell, use the command `getIndexes()` to list the indexes in the collection.
-Then drop the index with the following command:
-
-```shell
-db.books.dropIndex("indexName")
-```
- ### TTL indexes To enable document expiration in a particular collection, you need to create a [time-to-live (TTL) index](../time-to-live.md). A TTL index is an index on the `_ts` field with an `expireAfterSeconds` value.
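For reference, a minimal Mongo Shell sketch of such an index (the `orders` collection name and the 3600-second expiration are placeholder values) looks like this:

```shell
// TTL index on the _ts (last modified) field; documents expire 3600 seconds after their last update
db.orders.createIndex(
    { "_ts": 1 },
    { expireAfterSeconds: 3600 }
)
```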
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-data-warehouse.md
Previously updated : 07/04/2022 Last updated : 08/11/2022 # Copy and transform data in Azure Synapse Analytics by using Azure Data Factory or Synapse pipelines
The following sections provide details about properties that define Data Factory
## Linked service properties
-The following properties are supported for an Azure Synapse Analytics linked service:
+These generic properties are supported for an Azure Synapse Analytics linked service:
| Property | Description | Required | | : | :-- | :-- | | type | The type property must be set to **AzureSqlDW**. | Yes | | connectionString | Specify the information needed to connect to the Azure Synapse Analytics instance for the **connectionString** property. <br/>Mark this field as a SecureString to store it securely. You can also put password/service principal key in Azure Key Vault,and if it's SQL authentication pull the `password` configuration out of the connection string. See the JSON example below the table and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article with more details. | Yes |
-| servicePrincipalId | Specify the application's client ID. | Yes, when you use Azure AD authentication with a service principal. |
-| servicePrincipalKey | Specify the application's key. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes, when you use Azure AD authentication with a service principal. |
-| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. You can retrieve it by hovering the mouse in the top-right corner of the Azure portal. | Yes, when you use Azure AD authentication with a service principal. |
| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are `AzurePublic`, `AzureChina`, `AzureUsGovernment`, and `AzureGermany`. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
-| credentials | Specify the user-assigned managed identity as the credential object. | Yes, when you use user-assigned managed identity authentication. |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or a self-hosted integration runtime (if your data store is located in a private network). If not specified, it uses the default Azure Integration Runtime. | No |
-For different authentication types, refer to the following sections on prerequisites and JSON samples, respectively:
+For different authentication types, refer to the following sections on specific properties, prerequisites and JSON samples, respectively:
- [SQL authentication](#sql-authentication) - [Service principal authentication](#service-principal-authentication)
For different authentication types, refer to the following sections on prerequis
### SQL authentication
+To use the SQL authentication type, specify the generic properties that are described in the preceding section.
+ #### Linked service example that uses SQL authentication ```json
For different authentication types, refer to the following sections on prerequis
### Service principal authentication
-To use service principal-based Azure AD application token authentication, follow these steps:
+To use service principal authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+| : | :-- | :-- |
+| servicePrincipalId | Specify the application's client ID. | Yes |
+| servicePrincipalKey | Specify the application's key. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. You can retrieve it by hovering the mouse in the top-right corner of the Azure portal. | Yes |
+
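For reference, a minimal linked service sketch that combines these properties with the generic ones could look like the following (server, database, and identity values are placeholders, and the key is shown inline as a SecureString only for illustration):

```json
{
    "name": "AzureSqlDWLinkedService",
    "properties": {
        "type": "AzureSqlDW",
        "typeProperties": {
            "connectionString": "Server=tcp:<servername>.database.windows.net,1433;Database=<databasename>;Connection Timeout=30",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<service principal key>"
            },
            "tenant": "<tenant info, e.g. microsoftonline.com>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```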
+You also need to follow the steps below:
1. **[Create an Azure Active Directory application](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal)** from the Azure portal. Make note of the application name and the following values that define the linked service:
To use service principal-based Azure AD application token authentication, follow
A data factory or Synapse workspace can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity) that represents the resource. You can use this managed identity for Azure Synapse Analytics authentication. The designated resource can access and copy data from or to your data warehouse by using this identity.
-To use system-assigned managed identity authentication, follow these steps:
+To use system-assigned managed identity authentication, specify the generic properties that are described in the preceding section, and follow these steps.
1. **[Provision an Azure Active Directory administrator](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-database)** for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or Azure AD group. If you grant the group with system-assigned managed identity an admin role, skip steps 3 and 4. The administrator will have full access to the database.
To use system-assigned managed identity authentication, follow these steps:
A data factory or Synapse workspace can be associated with a [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity) that represents the resource. You can use this managed identity for Azure Synapse Analytics authentication. The designated resource can access and copy data from or to your data warehouse by using this identity.
-To use user-assigned managed identity authentication, follow these steps:
+To use user-assigned managed identity authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+| : | :-- | : |
+| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
+
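As a rough sketch, assuming a user-assigned managed identity credential named `credential1` has already been created in the data factory (the name is a placeholder), the linked service payload combines the generic properties with a credential reference object:

```json
{
    "name": "AzureSqlDWLinkedService",
    "properties": {
        "type": "AzureSqlDW",
        "typeProperties": {
            "connectionString": "Server=tcp:<servername>.database.windows.net,1433;Database=<databasename>;Connection Timeout=30",
            "credential": {
                "referenceName": "credential1",
                "type": "CredentialReference"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```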
+You also need to follow the steps below:
1. **[Provision an Azure Active Directory administrator](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-database)** for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or Azure AD group. If you grant the group with user-assigned managed identity an admin role, skip steps 3. The administrator will have full access to the database.
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
Previously updated : 07/04/2022 Last updated : 08/10/2022 # Copy and transform data in Azure SQL Database by using Azure Data Factory or Azure Synapse Analytics
The following sections provide details about properties that are used to define
## Linked service properties
-These properties are supported for an Azure SQL Database linked service:
+These generic properties are supported for an Azure SQL Database linked service:
| Property | Description | Required | |: |: |: | | type | The **type** property must be set to **AzureSqlDatabase**. | Yes | | connectionString | Specify information needed to connect to the Azure SQL Database instance for the **connectionString** property. <br/>You also can put a password or service principal key in Azure Key Vault. If it's SQL authentication, pull the `password` configuration out of the connection string. For more information, see the JSON example following the table and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
-| servicePrincipalId | Specify the application's client ID. | Yes, when you use Azure AD authentication with a service principal |
-| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes, when you use Azure AD authentication with a service principal |
-| tenant | Specify the tenant information, like the domain name or tenant ID, under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes, when you use Azure AD authentication with a service principal |
| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory or Synapse pipeline's cloud environment is used. | No | | alwaysEncryptedSettings | Specify **alwaysencryptedsettings** information that's needed to enable Always Encrypted to protect sensitive data stored in SQL server by using either managed identity or service principal. For more information, see the JSON example following the table and [Using Always Encrypted](#using-always-encrypted) section. If not specified, the default always encrypted setting is disabled. |No |
-| credentials | Specify the user-assigned managed identity as the credential object. | Yes, when you use user-assigned managed identity authentication |
| connectVia | This [integration runtime](concepts-integration-runtime.md) is used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is located in a private network. If not specified, the default Azure integration runtime is used. | No |
-For different authentication types, refer to the following sections on prerequisites and JSON samples, respectively:
+For different authentication types, refer to the following sections on specific properties, prerequisites and JSON samples, respectively:
- [SQL authentication](#sql-authentication) - [Service principal authentication](#service-principal-authentication)
For different authentication types, refer to the following sections on prerequis
### SQL authentication
+To use the SQL authentication type, specify the generic properties that are described in the preceding section.
+ **Example: using SQL authentication** ```json
For different authentication types, refer to the following sections on prerequis
### Service principal authentication
-To use a service principal-based Azure AD application token authentication, follow these steps:
+To use service principal authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| servicePrincipalId | Specify the application's client ID. | Yes |
+| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| tenant | Specify the tenant information, like the domain name or tenant ID, under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal.| Yes |
+
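A minimal sketch of such a linked service, assuming the service principal key is kept in Azure Key Vault and referenced through an existing Azure Key Vault linked service (all names below are placeholders):

```json
{
    "name": "AzureSqlDbLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=<databasename>;Connection Timeout=30",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "<Azure Key Vault linked service name>",
                    "type": "LinkedServiceReference"
                },
                "secretName": "<secret name>"
            },
            "tenant": "<tenant info, e.g. microsoftonline.com>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```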
+You also need to follow the steps below:
1. [Create an Azure Active Directory application](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) from the Azure portal. Make note of the application name and the following values that define the linked service:
To use a service principal-based Azure AD application token authentication, foll
A data factory or Synapse workspace can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity) that represents the service when authenticating to other resources in Azure. You can use this managed identity for Azure SQL Database authentication. The designated factory or Synapse workspace can access and copy data from or to your database by using this identity.
-To use system-assigned managed identity authentication, follow these steps.
+To use system-assigned managed identity authentication, specify the generic properties that are described in the preceding section, and follow these steps.
1. [Provision an Azure Active Directory administrator](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-database) for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or an Azure AD group. If you grant the group with managed identity an admin role, skip steps 3 and 4. The administrator has full access to the database.
To use system-assigned managed identity authentication, follow these steps.
A data factory or Synapse workspace can be associated with a [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity) that represents the service when authenticating to other resources in Azure. You can use this managed identity for Azure SQL Database authentication. The designated factory or Synapse workspace can access and copy data from or to your database by using this identity.
-To use user-assigned managed identity authentication, follow these steps.
+To use user-assigned managed identity authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
+
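A sketch of the corresponding linked service, again assuming a user-assigned managed identity credential named `credential1` (a placeholder) already exists in the factory:

```json
{
    "name": "AzureSqlDbLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=<databasename>;Connection Timeout=30",
            "credential": {
                "referenceName": "credential1",
                "type": "CredentialReference"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```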
+You also need to follow the steps below:
1. [Provision an Azure Active Directory administrator](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-database) for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or an Azure AD group. If you grant the group with user-assigned managed identity an admin role, skip steps 3. The administrator has full access to the database.
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-managed-instance.md
Previously updated : 07/04/2022 Last updated : 08/11/2022 # Copy and transform data in Azure SQL Managed Instance using Azure Data Factory or Synapse Analytics
The following sections provide details about properties that are used to define
## Linked service properties
-The following properties are supported for the SQL Managed Instance linked service:
+These generic properties are supported for a SQL Managed Instance linked service:
| Property | Description | Required | |: |: |: | | type | The type property must be set to **AzureSqlMI**. | Yes | | connectionString |This property specifies the **connectionString** information that's needed to connect to SQL Managed Instance by using SQL authentication. For more information, see the following examples. <br/>The default port is 1433. If you're using SQL Managed Instance with a public endpoint, explicitly specify port 3342.<br> You also can put a password in Azure Key Vault. If it's SQL authentication, pull the `password` configuration out of the connection string. For more information, see the JSON example following the table and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
-| servicePrincipalId | Specify the application's client ID. | Yes, when you use Azure AD authentication with a service principal |
-| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes, when you use Azure AD authentication with a service principal |
-| tenant | Specify the tenant information, like the domain name or tenant ID, under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes, when you use Azure AD authentication with a service principal |
| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the service's cloud environment is used. | No | | alwaysEncryptedSettings | Specify **alwaysencryptedsettings** information that's needed to enable Always Encrypted to protect sensitive data stored in SQL server by using either managed identity or service principal. For more information, see the JSON example following the table and [Using Always Encrypted](#using-always-encrypted) section. If not specified, the default always encrypted setting is disabled. |No |
-| credentials | Specify the user-assigned managed identity as the credential object. | Yes, when you use user-assigned managed identity authentication |
| connectVia | This [integration runtime](concepts-integration-runtime.md) is used to connect to the data store. You can use a self-hosted integration runtime or an Azure integration runtime if your managed instance has a public endpoint and allows the service to access it. If not specified, the default Azure integration runtime is used. |Yes |
-For different authentication types, refer to the following sections on prerequisites and JSON samples, respectively:
+For different authentication types, refer to the following sections on specific properties, prerequisites and JSON samples, respectively:
- [SQL authentication](#sql-authentication) - [Service principal authentication](#service-principal-authentication)
For different authentication types, refer to the following sections on prerequis
### SQL authentication
+To use the SQL authentication type, specify the generic properties that are described in the preceding section.
+ **Example 1: use SQL authentication** ```json
For different authentication types, refer to the following sections on prerequis
### Service principal authentication
-To use a service principal-based Azure AD application token authentication, follow these steps:
+To use service principal authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| servicePrincipalId | Specify the application's client ID. | Yes |
+| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| tenant | Specify the tenant information, like the domain name or tenant ID, under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes |
+
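A minimal sketch, assuming the managed instance is reached through its public endpoint (hence port 3342); the host name, database, and identity values are placeholders:

```json
{
    "name": "AzureSqlMILinkedService",
    "properties": {
        "type": "AzureSqlMI",
        "typeProperties": {
            "connectionString": "Data Source=<managed instance name>.public.<dns zone>.database.windows.net,3342;Initial Catalog=<databasename>;",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<service principal key>"
            },
            "tenant": "<tenant info, e.g. microsoftonline.com>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```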
+You also need to follow the steps below:
1. Follow the steps to [Provision an Azure Active Directory administrator for your Managed Instance](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-managed-instance).
To use a service principal-based Azure AD application token authentication, foll
A data factory or Synapse workspace can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity) that represents the service for authentication to other Azure services. You can use this managed identity for SQL Managed Instance authentication. The designated service can access and copy data from or to your database by using this identity.
-To use system-assigned managed identity authentication, follow these steps.
+To use system-assigned managed identity authentication, specify the generic properties that are described in the preceding section, and follow these steps.
1. Follow the steps to [Provision an Azure Active Directory administrator for your Managed Instance](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-managed-instance).
To use system-assigned managed identity authentication, follow these steps.
A data factory or Synapse workspace can be associated with a [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity) that represents the service for authentication to other Azure services. You can use this managed identity for SQL Managed Instance authentication. The designated service can access and copy data from or to your database by using this identity.
-To use user-assigned managed identity authentication, follow these steps.
+To use user-assigned managed identity authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
+
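And a sketch for the managed identity case, with `credential1` as a placeholder credential reference that you have already defined in the factory:

```json
{
    "name": "AzureSqlMILinkedService",
    "properties": {
        "type": "AzureSqlMI",
        "typeProperties": {
            "connectionString": "Data Source=<hostname or public endpoint,port>;Initial Catalog=<databasename>;",
            "credential": {
                "referenceName": "credential1",
                "type": "CredentialReference"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```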
+You also need to follow the steps below:
1. Follow the steps to [Provision an Azure Active Directory administrator for your Managed Instance](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-managed-instance).
data-factory Continuous Integration Delivery Resource Manager Custom Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-resource-manager-custom-parameters.md
Below is the current default parameterization template. If you need to add only
}, "location": "=" },
+ "Microsoft.DataFactory/factories/globalparameters": {
+ "properties": {
+ "*": {
+ "value": "="
+ }
+ }
+ },
"Microsoft.DataFactory/factories/pipelines": { }, "Microsoft.DataFactory/factories/dataflows": {
data-factory How To Access Secured Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-access-secured-purview-account.md
Previously updated : 09/02/2021 Last updated : 08/09/2022 # Access a secured Microsoft Purview account from Azure Data Factory
data-factory How To Clean Up Ssisdb Logs With Elastic Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-clean-up-ssisdb-logs-with-elastic-jobs.md
description: This article describes how to clean up SSIS project deployment and
Previously updated : 02/15/2022 Last updated : 08/09/2022
data-factory How To Configure Azure Ssis Ir Custom Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
Previously updated : 02/15/2022 Last updated : 08/09/2022 # Customize the setup for an Azure-SSIS Integration Runtime
data-factory How To Configure Azure Ssis Ir Enterprise Edition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-enterprise-edition.md
description: "This article describes the features of Enterprise Edition for the
Previously updated : 02/15/2022 Last updated : 08/09/2022
data-factory How To Configure Shir For Log Analytics Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-shir-for-log-analytics-collection.md
Previously updated : 02/22/2022 Last updated : 08/09/2022
data-factory How To Create Custom Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-custom-event-trigger.md
Previously updated : 05/07/2021 Last updated : 08/09/2022 # Create a custom event trigger to run a pipeline in Azure Data Factory
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-event-trigger.md
Previously updated : 09/09/2021 Last updated : 08/09/2022 # Create a trigger that runs a pipeline in response to a storage event
This section shows you how to create a storage event trigger within the Azure Da
1. Select trigger type **Storage Event** # [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1.png" alt-text="Screenshot of Author page to create a new storage event trigger in Data Factory UI.":::
+ :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1.png" lightbox="media/how-to-create-event-trigger/event-based-trigger-image-1.png" alt-text="Screenshot of Author page to create a new storage event trigger in Data Factory UI." :::
# [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" alt-text="Screenshot of Author page to create a new storage event trigger in the Azure Synapse UI.":::
+ :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" lightbox="media/how-to-create-event-trigger/event-based-trigger-image-1-synapse.png" alt-text="Screenshot of Author page to create a new storage event trigger in the Azure Synapse UI.":::
5. Select your storage account from the Azure subscription dropdown or manually using its Storage account resource ID. Choose which container you wish the events to occur on. Container selection is required, but be mindful that selecting all containers can lead to a large number of events.
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-schedule-trigger.md
Previously updated : 09/09/2021 Last updated : 08/09/2022
data-factory How To Create Tumbling Window Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-tumbling-window-trigger.md
Previously updated : 09/09/2021 Last updated : 08/09/2022 # Create a trigger that runs a pipeline on a tumbling window
data-factory How To Data Flow Dedupe Nulls Snippets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-data-flow-dedupe-nulls-snippets.md
Previously updated : 01/31/2022 Last updated : 08/09/2022 # Dedupe rows and find nulls by using data flow snippets
data-factory How To Data Flow Error Rows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-data-flow-error-rows.md
Previously updated : 01/31/2022 Last updated : 08/09/2022
data-factory How To Develop Azure Ssis Ir Licensed Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-develop-azure-ssis-ir-licensed-components.md
Previously updated : 02/17/2022 Last updated : 08/09/2022 # Install paid or licensed custom components for the Azure-SSIS integration runtime
data-factory How To Discover Explore Purview Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-discover-explore-purview-data.md
Previously updated : 08/10/2021 Last updated : 08/09/2022 # Discover and explore data in ADF using Microsoft Purview
data-factory How To Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-expression-language-functions.md
Previously updated : 01/21/2022 Last updated : 08/09/2022 # How to use parameters, expressions and functions in Azure Data Factory
data-factory How To Fixed Width https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-fixed-width.md
Previously updated : 01/27/2022 Last updated : 08/09/2022
data-factory How To Invoke Ssis Package Azure Enabled Dtexec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-azure-enabled-dtexec.md
description: Learn how to execute SQL Server Integration Services (SSIS) package
Previously updated : 10/22/2021 Last updated : 08/09/2022
data-factory How To Invoke Ssis Package Managed Instance Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-managed-instance-agent.md
Previously updated : 02/15/2022 Last updated : 08/09/2022 # Run SSIS packages by using Azure SQL Managed Instance Agent
data-factory How To Invoke Ssis Package Ssdt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssdt.md
Previously updated : 10/22/2021 Last updated : 08/09/2022 # Execute SSIS packages in Azure from SSDT
data-factory How To Invoke Ssis Package Ssis Activity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssis-activity-powershell.md
Previously updated : 10/22/2021 Last updated : 08/09/2022 # Run an SSIS package with the Execute SSIS Package activity in Azure Data Factory with PowerShell
data-factory How To Invoke Ssis Package Ssis Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssis-activity.md
Previously updated : 02/15/2022 Last updated : 08/09/2022 # Run an SSIS package with the Execute SSIS Package activity in Azure portal
Create an Azure-SSIS integration runtime (IR) if you don't have one already by f
In this step, you use the Data Factory UI or app to create a pipeline. You add an Execute SSIS Package activity to the pipeline and configure it to run your SSIS package. # [Azure Data Factory](#tab/data-factory)
-1. On your Data Factory overview or home page in the Azure portal, select the **Author & Monitor** tile to start the Data Factory UI or app in a separate tab.
+1. On your Data Factory overview or home page in the Azure portal, select the **Open Azure Data Factory Studio** tile to start the Data Factory UI or app in a separate tab.
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/data-factory-home-page.png" alt-text="Data Factory home page":::
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Screenshot of the Azure Data Factory home page.":::
On the home page, select **Orchestrate**.
- :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the ADF home page.":::
+ :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/orchestrate-button.png" alt-text="Screenshot that shows the Orchestrate button on the Azure Data Factory home page.":::
# [Synapse Analytics](#tab/synapse-analytics)
Navigate to the Integrate tab in Synapse Studio (represented by the pipeline ico
-1. In the **Activities** toolbox, expand **General**. Then drag an **Execute SSIS Package** activity to the pipeline designer surface.
+1. In the **Activities** toolbox, search for **SSIS**. Then drag an **Execute SSIS Package** activity to the pipeline designer surface.
:::image type="content" source="media/how-to-invoke-ssis-package-ssis-activity/ssis-activity-designer.png" alt-text="Drag an Execute SSIS Package activity to the designer surface":::
On the **Settings** tab of Execute SSIS Package activity, complete the following
1. If your Azure-SSIS IR isn't running or the **Manual entries** check box is selected, enter your package and environment paths from SSISDB directly in the following formats: `<folder name>/<project name>/<package name>.dtsx` and `<folder name>/<environment name>`.
- :::image type="content" source="media/how-to-invoke-ssis-package-ssis-activity/ssis-activity-settings2.png" alt-text="Set properties on the Settings tab - Manual":::
+ :::image type="content" source="media/how-to-invoke-ssis-package-ssis-activity/ssis-activity-settings-2.png" alt-text="Set properties on the Settings tab - Manual":::
#### Package location: File System (Package) **File System (Package)** as your package location is automatically selected if your Azure-SSIS IR was provisioned without SSISDB or you can select it yourself. If it's selected, complete the following steps. 1. Specify your package to run by providing a Universal Naming Convention (UNC) path to your package file (with `.dtsx`) in the **Package path** box. You can browse and select your package by selecting **Browse file storage** or enter its path manually. For example, if you store your package in Azure Files, its path is `\\<storage account name>.file.core.windows.net\<file share name>\<package name>.dtsx`.
For all UNC paths previously mentioned, the fully qualified file name must be fe
If you select **File System (Project)** as your package location, complete the following steps. 1. Specify your package to run by providing a UNC path to your project file (with `.ispac`) in the **Project path** box and a package file (with `.dtsx`) from your project in the **Package name** box. You can browse and select your project by selecting **Browse file storage** or enter its path manually. For example, if you store your project in Azure Files, its path is `\\<storage account name>.file.core.windows.net\<file share name>\<project name>.ispac`.
data-factory How To Invoke Ssis Package Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-stored-procedure-activity.md
ms.devlang: powershell Previously updated : 02/15/2022 Last updated : 08/10/2022
This article describes how to run an SSIS package in an Azure Data Factory pipel
## Prerequisites ### Azure SQL Database
-The walkthrough in this article uses Azure SQL Database to host the SSIS catalog. You can also use Azure SQL Managed Instance.
+The walkthrough in this article uses Azure SQL Database to host the SSIS catalog. You can also use Azure SQL Managed Instance.
-## Create an Azure-SSIS integration runtime
-Create an Azure-SSIS integration runtime if you don't have one by following the step-by-step instruction in the [Tutorial: Deploy SSIS packages](./tutorial-deploy-ssis-packages-azure.md).
+### Data Factory
+You need an instance of Azure Data Factory to complete this walkthrough. If you don't already have one provisioned, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md).
-## Data Factory UI (Azure portal)
-In this section, you use Data Factory UI to create a Data Factory pipeline with a stored procedure activity that invokes an SSIS package.
+### Azure-SSIS integration runtime
+Finally, you also need an Azure-SSIS integration runtime. If you don't have one, create it by following the step-by-step instructions in the [Tutorial: Deploy SSIS packages](./tutorial-deploy-ssis-packages-azure.md).
-### Create a data factory
-First step is to create a data factory by using the Azure portal.
+## Create a pipeline with stored procedure activity
-1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
-2. Navigate to the [Azure portal](https://portal.azure.com).
-3. Click **New** on the left menu, click **Data + Analytics**, and click **Data Factory**.
-
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/new-azure-data-factory-menu.png" alt-text="New->DataFactory":::
-2. In the **New data factory** page, enter **ADFTutorialDataFactory** for the **name**.
-
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/new-azure-data-factory.png" alt-text="New data factory page":::
-
- The name of the Azure data factory must be **globally unique**. If you see the following error for the name field, change the name of the data factory (for example, yournameADFTutorialDataFactory). See [Data Factory - Naming Rules](naming-rules.md) article for naming rules for Data Factory artifacts.
-
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/name-not-available-error.png" alt-text="Name not available - error":::
-3. Select your Azure **subscription** in which you want to create the data factory.
-4. For the **Resource Group**, do one of the following steps:
-
- - Select **Use existing**, and select an existing resource group from the drop-down list.
- - Select **Create new**, and enter the name of a resource group.
-
- To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
-4. Select **V2** for the **version**.
-5. Select the **location** for the data factory. Only locations that are supported by Data Factory are shown in the drop-down list. The data stores (Azure Storage, Azure SQL Database, etc.) and computes (HDInsight, etc.) used by data factory can be in other locations.
-6. Select **Pin to dashboard**.
-7. Click **Create**.
-8. On the dashboard, you see the following tile with status: **Deploying data factory**.
-
- :::image type="content" source="media//how-to-invoke-ssis-package-stored-procedure-activity/deploying-data-factory.png" alt-text="deploying data factory tile":::
-9. After the creation is complete, you see the **Data Factory** page as shown in the image.
-
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/data-factory-home-page.png" alt-text="Data factory home page":::
-10. Click **Author & Monitor** tile to launch the Azure Data Factory user interface (UI) application in a separate tab.
+In this step, you use the Data Factory UI to create a pipeline. If you haven't already opened Azure Data Factory Studio, open your data factory in the Azure portal and select the **Open Azure Data Factory Studio** tile to launch it.
+
-### Create a pipeline with stored procedure activity
-In this step, you use the Data Factory UI to create a pipeline. You add a stored procedure activity to the pipeline and configure it to run the SSIS package by using the sp_executesql stored procedure.
+Next, you will add a stored procedure activity to a new pipeline and configure it to run the SSIS package by using the sp_executesql stored procedure.
1. In the home page, click **Orchestrate**:
- :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the ADF home page.":::
+ :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/orchestrate-button.png" alt-text="Screenshot that shows the Orchestrate button on the Azure Data Factory home page.":::
-2. In the **Activities** toolbox, expand **General**, and drag-drop **Stored Procedure** activity to the pipeline designer surface.
+2. In the **Activities** toolbox, search for **Stored procedure**, and drag-drop a **Stored procedure** activity to the pipeline designer surface.
:::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/drag-drop-sproc-activity.png" alt-text="Drag-and-drop stored procedure activity":::
-3. In the properties window for the stored procedure activity, switch to the **SQL Account** tab, and click **+ New**. You create a connection to the database in Azure SQL Database that hosts the SSIS Catalog (SSIDB database).
+
+3. Select the **Stored procedure** activity that you just added to the designer surface, then select the **Settings** tab, and click **+ New** beside **Linked service**. You create a connection to the database in Azure SQL Database that hosts the SSIS catalog (SSISDB database).
:::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/new-linked-service-button.png" alt-text="New linked service button":::+ 4. In the **New Linked Service** window, do the following steps: 1. Select **Azure SQL Database** for **Type**.
- 2. Select the **Default** Azure Integration Runtime to connect to the Azure SQL Database that hosts the `SSISDB` database.
+ 2. Select the **Default** AutoResolveIntegrationRuntime to connect to the Azure SQL Database that hosts the `SSISDB` database.
3. Select the Azure SQL Database that hosts the SSISDB database for the **Server name** field. 4. Select **SSISDB** for **Database name**. 5. For **User name**, enter the name of user who has access to the database.
In this step, you use the Data Factory UI to create a pipeline. You add a stored
8. Save the linked service by clicking the **Save** button. :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/azure-sql-database-linked-service-settings.png" alt-text="Screenshot that shows the process for adding a new linked service.":::
-5. In the properties window, switch to the **Stored Procedure** tab from the **SQL Account** tab, and do the following steps:
+
+5. Back in the properties window on the **Settings** tab, complete the following steps:
1. Select **Edit**. 2. For the **Stored procedure name** field, Enter `sp_executesql`.
In this step, you use the Data Factory UI to create a pipeline. You add a stored
``` :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/stored-procedure-settings.png" alt-text="Azure SQL Database linked service":::+ 6. To validate the pipeline configuration, click **Validate** on the toolbar. To close the **Pipeline Validation Report**, click **>>**. :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/validate-pipeline.png" alt-text="Validate pipeline":::
In this section, you trigger a pipeline run and then monitor it.
2. In the **Pipeline Run** window, select **Finish**. 3. Switch to the **Monitor** tab on the left. You see the pipeline run and its status along with other information (such as Run Start time). To refresh the view, click **Refresh**.
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/pipeline-runs.png" alt-text="Pipeline runs":::
+ :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/pipeline-runs.png" alt-text="Screenshot that shows pipeline runs":::
3. Click **View Activity Runs** link in the **Actions** column. You see only one activity run as the pipeline has only one activity (stored procedure activity).
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/activity-runs.png" alt-text="Activity runs":::
+ :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/activity-runs.png" alt-text="Screenshot that shows activity runs":::
4. You can run the following **query** against the SSISDB database in SQL Database to verify that the package executed.
data-factory How To Manage Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-settings.md
Previously updated : 05/24/2022 Last updated : 08/10/2022 # Manage Azure Data Factory settings and preferences
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Previously updated : 07/13/2022 Last updated : 08/10/2022 # Manage Azure Data Factory studio preview experience
data-factory How To Migrate Ssis Job Ssms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-migrate-ssis-job-ssms.md
Previously updated : 10/22/2021 Last updated : 08/10/2022 # Migrate SQL Server Agent jobs to ADF with SSMS
data-factory How To Run Self Hosted Integration Runtime In Windows Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-run-self-hosted-integration-runtime-in-windows-container.md
Previously updated : 07/07/2022 Last updated : 08/10/2022 # How to run Self-Hosted Integration Runtime in Windows container
data-factory How To Schedule Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md
Alternatively, you can create Web activities in ADF or Synapse pipelines to star
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Prerequisites+
+### Data Factory
+You need an instance of Azure Data Factory to complete this walkthrough. If you don't already have one provisioned, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md).
+
+### Azure-SSIS Integration Runtime (IR)
If you have not provisioned your Azure-SSIS IR already, provision it by following instructions in the [tutorial](./tutorial-deploy-ssis-packages-azure.md). ## Create and schedule ADF pipelines that start and or stop Azure-SSIS IR
For example, you can create two triggers, the first one is scheduled to run dail
If you create a third trigger that is scheduled to run daily at midnight and associated with the third pipeline, that pipeline will run at midnight every day, starting your IR just before package execution, subsequently executing your package, and immediately stopping your IR just after package execution, so your IR will not be running idly.
-### Create your ADF
-
-1. Sign in to [Azure portal](https://portal.azure.com/).
-2. Click **New** on the left menu, click **Data + Analytics**, and click **Data Factory**.
-
- :::image type="content" source="./media/tutorial-create-azure-ssis-runtime-portal/new-data-factory-menu.png" alt-text="New->DataFactory":::
-
-3. In the **New data factory** page, enter **MyAzureSsisDataFactory** for **Name**.
-
- :::image type="content" source="./media/tutorial-create-azure-ssis-runtime-portal/new-azure-data-factory.png" alt-text="New data factory page":::
-
- The name of your ADF must be globally unique. If you receive the following error, change the name of your ADF (e.g. yournameMyAzureSsisDataFactory) and try creating it again. See [Data Factory - Naming Rules](naming-rules.md) article to learn about naming rules for ADF artifacts.
-
- `Data factory name MyAzureSsisDataFactory is not available`
-
-4. Select your Azure **Subscription** under which you want to create your ADF.
-5. For **Resource Group**, do one of the following steps:
-
- - Select **Use existing**, and select an existing resource group from the drop-down list.
- - Select **Create new**, and enter the name of your new resource group.
-
- To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md) article.
-
-6. For **Version**, select **V2** .
-7. For **Location**, select one of the locations supported for ADF creation from the drop-down list.
-8. Select **Pin to dashboard**.
-9. Click **Create**.
-10. On Azure dashboard, you will see the following tile with status: **Deploying Data Factory**.
-
- :::image type="content" source="media/tutorial-create-azure-ssis-runtime-portal/deploying-data-factory.png" alt-text="deploying data factory tile":::
-
-11. After the creation is complete, you can see your ADF page as shown below.
-
- :::image type="content" source="./media/tutorial-create-azure-ssis-runtime-portal/data-factory-home-page.png" alt-text="Data factory home page":::
-
-12. Click **Author & Monitor** to launch ADF UI/app in a separate tab.
- ### Create your pipelines 1. In the home page, select **Orchestrate**.
- :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the ADF home page.":::
+ :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/orchestrate-button.png" alt-text="Screenshot that shows the Orchestrate button on the Azure Data Factory home page.":::
-2. In **Activities** toolbox, expand **General** menu, and drag & drop a **Web** activity onto the pipeline designer surface. In **General** tab of the activity properties window, change the activity name to **startMyIR**. Switch to **Settings** tab, and do the following actions:
+2. In the **Activities** toolbox, expand the **General** menu, and drag & drop a **Web** activity onto the pipeline designer surface. On the **General** tab of the activity properties window, change the activity name to **startMyIR**. Switch to the **Settings** tab, and do the following actions:
> [!NOTE] > For Azure-SSIS in Azure Synapse, use corresponding Azure Synapse REST API to [Get Integration Runtime status](/rest/api/synapse/integration-runtimes/get), [Start Integration Runtime](/rest/api/synapse/integration-runtimes/start) and [Stop Integration Runtime](/rest/api/synapse/integration-runtimes/stop).
Now that your pipelines work as you expected, you can create triggers to run the
4. In **Trigger Run Parameters** page, review any warning, and select **Finish**. 5. Publish the whole ADF settings by selecting **Publish All** in the factory toolbar.
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/publish-all.png" alt-text="Publish All":::
+ :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/publish-all-button.png" alt-text="Screenshot that shows the Publish All button.":::
### Monitor your pipelines and triggers in Azure portal
data-factory How To Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-send-email.md
Previously updated : 06/07/2021 Last updated : 08/10/2022 # Send an email with an Azure Data Factory or Azure Synapse pipeline
data-factory How To Send Notifications To Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-send-notifications-to-teams.md
Previously updated : 09/29/2021
-update: 19/03/2022
Last updated : 08/10/2022 # Send notifications to a Microsoft Teams channel from an Azure Data Factory or Synapse Analytics pipeline
data-factory How To Sqldb To Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-sqldb-to-cosmosdb.md
Previously updated : 05/19/2022 Last updated : 08/10/2022 # Migrate normalized database schema from Azure SQL Database to Azure Cosmos DB denormalized container
data-factory How To Use Azure Key Vault Secrets Pipeline Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-azure-key-vault-secrets-pipeline-activities.md
Previously updated : 10/22/2021 Last updated : 08/10/2022 # Use Azure Key Vault secrets in pipeline activities
data-factory How To Use Sql Managed Instance With Ir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-sql-managed-instance-with-ir.md
Previously updated : 02/15/2022 Last updated : 08/10/2022 # Use Azure SQL Managed Instance with SQL Server Integration Services (SSIS) in Azure Data Factory or Azure Synapse Analytics
data-factory How To Use Trigger Parameterization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-trigger-parameterization.md
Previously updated : 03/02/2021 Last updated : 08/10/2022 # Reference trigger metadata in pipeline runs
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Updates in July include:
- [Defender for Container's VA adds support for the detection of language specific packages (Preview)](#defender-for-containers-va-adds-support-for-the-detection-of-language-specific-packages-preview) - [Protect against the Operations Management Suite vulnerability CVE-2022-29149](#protect-against-the-operations-management-suite-vulnerability-cve-2022-29149) - [Integration with Entra Permissions Management](#integration-with-entra-permissions-management)
+- [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit)
+- [Deprecate API App policies for App Service](#deprecate-api-app-policies-for-app-service)
### General availability (GA) of the Cloud-native security agent for Kubernetes runtime protection
Each Azure subscription, AWS account, and GCP project that you onboard, will now
Learn more about [Entra Permission Management (formerly Cloudknox)](other-threat-protections.md#entra-permission-management-formerly-cloudknox)
+### Key Vault recommendations changed to "audit"
+
+The effect for the Key Vault recommendations listed here was changed to "audit":
+
+| Recommendation name | Recommendation ID |
+| - | |
+| Validity period of certificates stored in Azure Key Vault should not exceed 12 months | fc84abc0-eee6-4758-8372-a7681965ca44 |
+| Key Vault secrets should have an expiration date | 14257785-9437-97fa-11ae-898cfb24302b |
+| Key Vault keys should have an expiration date | 1aabfa0d-7585-f9f5-1d92-ecb40291d9f2 |
++
+### Deprecate API App policies for App Service
+
+We deprecated the following policies to corresponding policies that already exist to include API apps:
+
+| To be deprecated | Changing to |
+|--|--|
+|`Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'` | `App Service apps should have 'Client Certificates (Incoming client certificates)' enabled` |
+| `Ensure that 'Python version' is the latest, if used as a part of the API app` | `App Service apps that use Python should use the latest 'Python version'` |
+| `CORS should not allow every resource to access your API App` | `App Service apps should not have CORS configured to allow every resource to access your apps` |
+| `Managed identity should be used in your API App` | `App Service apps should use managed identity` |
+| `Remote debugging should be turned off for API Apps` | `App Service apps should have remote debugging turned off` |
+| `Ensure that 'PHP version' is the latest, if used as a part of the API app` | `App Service apps that use PHP should use the latest 'PHP version'`|
+| `FTPS only should be required in your API App` | `App Service apps should require FTPS only` |
+| `Ensure that 'Java version' is the latest, if used as a part of the API app` | `App Service apps that use Java should use the latest 'Java version'` |
+| `Latest TLS version should be used in your API App` | `App Service apps should use the latest TLS version` |
+ ## June 2022 Updates in June include:
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--|
-| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | June 2022 |
-| [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit) | June 2022 |
| [Deprecating three VM alerts](#deprecating-three-vm-alerts) | June 2022|
-| [Deprecate API App policies for App Service](#deprecate-api-app-policies-for-app-service) | July 2022 |
| [Change in pricing of Runtime protection for Arc-enabled Kubernetes clusters](#change-in-pricing-of-runtime-protection-for-arc-enabled-kubernetes-clusters) | August 2022 | | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | September 2022 | | [Removing security alerts for machines reporting to cross tenant Log Analytics workspaces](#removing-security-alerts-for-machines-reporting-to-cross-tenant-log-analytics-workspaces) | September 2022 | | [Legacy Assessments APIs deprecation](#legacy-assessments-apis-deprecation) | September 2022 |
-### Changes to recommendations for managing endpoint protection solutions
-
-**Estimated date for change:** August 2022
-
-In August 2021, we added two new **preview** recommendations to deploy and maintain the endpoint protection solutions on your machines. For full details, [see the release note](release-notes-archive.md#two-new-recommendations-for-managing-endpoint-protection-solutions-in-preview).
-
-When the recommendations are released to general availability, they will replace the following existing recommendations:
--- **Endpoint protection should be installed on your machines** will replace:
- - [Install endpoint protection solution on virtual machines (key: 83f577bd-a1b6-b7e1-0891-12ca19d1e6df)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/83f577bd-a1b6-b7e1-0891-12ca19d1e6df)
- - [Install endpoint protection solution on your machines (key: 383cf3bc-fdf9-4a02-120a-3e7e36c6bfee)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/383cf3bc-fdf9-4a02-120a-3e7e36c6bfee)
--- **Endpoint protection health issues should be resolved on your machines** will replace the existing recommendation that has the same name. The two recommendations have different assessment keys:
- - Assessment key for the **preview** recommendation: 37a3689a-818e-4a0e-82ac-b1392b9bb000
- - Assessment key for the **GA** recommendation: 3bcd234d-c9c7-c2a2-89e0-c01f419c1a8a
-
-Learn more:
--- [Defender for Cloud's supported endpoint protection solutions](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported)-- [How these recommendations assess the status of your deployed solutions](endpoint-protection-recommendations-technical.md)-
-### Key Vault recommendations changed to "audit"
-
-**Estimated date for change:** June 2022
-
-The Key Vault recommendations listed here are currently disabled so that they don't impact your secure score. We will change their effect to "audit".
-
-| Recommendation name | Recommendation ID |
-|--|--|
-| Validity period of certificates stored in Azure Key Vault should not exceed 12 months | fc84abc0-eee6-4758-8372-a7681965ca44 |
-| Key Vault secrets should have an expiration date | 14257785-9437-97fa-11ae-898cfb24302b |
-| Key Vault keys should have an expiration date | 1aabfa0d-7585-f9f5-1d92-ecb40291d9f2 |
- ### Deprecating three VM alerts **Estimated date for change:** June 2022
The following table lists the alerts that will be deprecated during June 2022.
These alerts are used to notify a user about suspicious activity connected to a Kubernetes cluster. The alerts will be replaced with matching alerts that are part of the Microsoft Defender for Cloud Container alerts (`K8S.NODE_ImageBuildOnNode`, `K8S.NODE_KubernetesAPI`, and `K8S.NODE_ContainerSSH`), which will provide improved fidelity and comprehensive context to investigate and act on the alerts. Learn more about alerts for [Kubernetes Clusters](alerts-reference.md).
-### Deprecate API App policies for App Service
-
-**Estimated date for change:** July 2022
-
-We will be deprecating the following policies to corresponding policies that already exist to include API apps:
-
-| To be deprecated | Changing to |
-|--|--|
-|`Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'` | `App Service apps should have 'Client Certificates (Incoming client certificates)' enabled` |
-| `Ensure that 'Python version' is the latest, if used as a part of the API app` | `App Service apps that use Python should use the latest 'Python version` |
-| `CORS should not allow every resource to access your API App` | `App Service apps should not have CORS configured to allow every resource to access your apps` |
-| `Managed identity should be used in your API App` | `App Service apps should use managed identity` |
-| `Remote debugging should be turned off for API Apps` | `App Service apps should have remote debugging turned off` |
-| `Ensure that 'PHP version' is the latest, if used as a part of the API app` | `App Service apps that use PHP should use the latest 'PHP version'`|
-| `FTPS only should be required in your API App` | `App Service apps should require FTPS only` |
-| `Ensure that 'Java version' is the latest, if used as a part of the API app` | `App Service apps that use Java should use the latest 'Java version` |
-| `Latest TLS version should be used in your API App` | `App Service apps should use the latest TLS version` |
- ### Change in pricing of runtime protection for Arc-enabled Kubernetes clusters **Estimated date for change:** August 2022
Runtime protection is currently a preview feature for Arc-enabled Kubernetes clu
### Multiple changes to identity recommendations
-**Estimated date for change:** July 2022
+**Estimated date for change:** September 2022
Defender for Cloud includes multiple recommendations for improving the management of users and accounts. We'll be making the changes outlined below.
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
In this article, you learned about creating Logic Apps, automating their executi
For related material, see: -- [The Microsoft Learn module on how to use workflow automation to automate a security response](/learn/modules/resolve-threats-with-azure-security-center/)
+- [The Learn module on how to use workflow automation to automate a security response](/learn/modules/resolve-threats-with-azure-security-center/)
- [Security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md) - [Security alerts in Microsoft Defender for Cloud](alerts-overview.md) - [About Azure Logic Apps](../logic-apps/logic-apps-overview.md)
defender-for-iot Tutorial Qradar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-qradar.md
For the integration to work, you will need to set up in the Defender for IoT appl
1. Select **Save**.
+The following is an example of a payload sent to QRadar:
+
+```sample payload
+<9>May 5 12:29:23 sensor_Agent LEEF:1.0|CyberX|CyberX platform|2.5.0|CyberX platform Alert|devTime=May 05 2019 15:28:54 devTimeFormat=MMM dd yyyy HH:mm:ss sev=2 cat=XSense Alerts title=Device is Suspected to be Disconnected (Unresponsive) score=81 reporter=192.168.219.50 rta=0 alertId=6 engine=Operational senderName=sensor Agent UUID=5-1557059334000 site=Site zone=Zone actions=handle dst=192.168.2.2 dstName=192.168.2.2 msg=Device 192.168.2.2 is suspected to be disconnected (unresponsive).
+```
+ ## Map notifications to QRadar The rule must then be mapped on the on-premises management console.
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
To use the control plane APIs:
* You can call the APIs directly by referencing the latest Swagger folder in the [control plane Swagger repo](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins/stable). This folder also includes a folder of examples that show the usage. * You can currently access SDKs for control APIs in... - [.NET (C#)](https://www.nuget.org/packages/Microsoft.Azure.Management.DigitalTwins/) ([reference [auto-generated]](/dotnet/api/overview/azure/digitaltwins/management?view=azure-dotnet&preserve-view=true)) ([source](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/digitaltwins/Microsoft.Azure.Management.DigitalTwins))
- - [Java](https://search.maven.org/search?q=a:azure-mgmt-digitaltwins) ([reference [auto-generated]](/java/api/overview/azure/digitaltwins?view=azure-java-stable&preserve-view=true)) ([source](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/digitaltwins))
+ - [Java](https://search.maven.org/search?q=a:azure-mgmt-digitaltwins) ([reference [auto-generated]](/java/api/overview/azure/digital-twins)) ([source](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/digitaltwins))
- [JavaScript](https://www.npmjs.com/package/@azure/arm-digitaltwins) ([source](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/digitaltwins/arm-digitaltwins)) - [Python](https://pypi.org/project/azure-mgmt-digitaltwins/) ([source](https://github.com/Azure/azure-sdk-for-python/tree/release/v3/sdk/digitaltwins/azure-mgmt-digitaltwins)) - [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/services/digitaltwins/mgmt) ([source](https://github.com/Azure/azure-sdk-for-go/tree/main/services/digitaltwins/mgmt))
To use the data plane APIs:
- You can see detailed information and usage examples by continuing to the [.NET (C#) SDK (data plane)](#net-c-sdk-data-plane) section of this article. * You can use the Java SDK. To use the Java SDK... - You can view and install the package from Maven: [`com.azure:azure-digitaltwins-core`](https://search.maven.org/artifact/com.azure/azure-digitaltwins-core/1.0.0/jar)
- - You can view the [SDK reference documentation](/java/api/overview/azure/digitaltwins)
+ - You can view the [SDK reference documentation](/java/api/overview/azure/digital-twins)
- You can find the SDK source in GitHub: [Azure IoT Digital Twins client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/digitaltwins/azure-digitaltwins-core) * You can use the JavaScript SDK. To use the JavaScript SDK... - You can view and install the package from npm: [Azure Azure Digital Twins Core client library for JavaScript](https://www.npmjs.com/package/@azure/digital-twins-core).
digital-twins Concepts Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-azure-digital-twins-explorer.md
Azure Digital Twins Explorer is an open-source tool that welcomes contributions
To view the source code for the tool and read detailed instructions on how to contribute to the code, visit its GitHub repository: [digital-twins-explorer](https://github.com/Azure-Samples/digital-twins-explorer).
-To view instructions for contributing to this documentation, visit the [Microsoft contributor guide](/contribute/).
+To view instructions for contributing to this documentation, review our [contributor guide](/contribute/).
## Other considerations
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-scenario-status.md
The following tables show which migration scenarios are supported when using Azu
### Offline (one-time) migration support
-The following table shows Azure Database Migration Service support for offline migrations.
+The following table shows Azure Database Migration Service support for **offline** migrations.
| Target | Source | Support | Status | | - | - |:-:|:-:| | **Azure SQL DB** | SQL Server | ✔ | GA |
-| | RDS SQL | ✔ | GA |
+| | Amazon RDS SQL Server | ✔ | PP |
| | Oracle | X | | | **Azure SQL DB MI** | SQL Server | ✔ | GA |
-| | RDS SQL | ✔ | GA |
+| | Amazon RDS SQL Server | X | |
| | Oracle | X | | | **Azure SQL VM** | SQL Server | ✔ | GA |
+| | Amazon RDS SQL Server | X | |
| | Oracle | X | | | **Azure Cosmos DB** | MongoDB | ✔ | GA | | **Azure DB for MySQL - Single Server** | MySQL | ✔ | GA |
-| | RDS MySQL | ✔ | GA |
+| | Amazon RDS MySQL | ✔ | GA |
| | Azure DB for MySQL <sup>1</sup> | ✔ | GA | | **Azure DB for MySQL - Flexible Server** | MySQL | ✔ | GA |
-| | RDS MySQL | ✔ | GA |
+| | Amazon RDS MySQL | ✔ | GA |
| | Azure DB for MySQL <sup>1</sup> | ✔ | GA | | **Azure DB for PostgreSQL - Single server** | PostgreSQL | X |
-| | RDS PostgreSQL | X | |
+| | Amazon RDS PostgreSQL | X | |
| **Azure DB for PostgreSQL - Flexible server** | PostgreSQL | X |
-| | RDS PostgreSQL | X | |
+| | Amazon RDS PostgreSQL | X | |
| **Azure DB for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | X |
-| | RDS PostgreSQL | X | |
+| | Amazon RDS PostgreSQL | X | |
1. If your source database is already in Azure PaaS (for example, Azure DB for MySQL or Azure DB for PostgreSQL), choose the corresponding engine when creating your migration activity. For example, if you're migrating from Azure DB for MySQL - Single Server to Azure DB for MySQL - Flexible Server, choose MySQL as the source engine during scenario creation. If you're migrating from Azure DB for PostgreSQL - Single Server to Azure DB for PostgreSQL - Flexible Server, choose PostgreSQL as the source engine during scenario creation. ### Online (continuous sync) migration support
-The following table shows Azure Database Migration Service support for online migrations.
+The following table shows Azure Database Migration Service support for **online** migrations.
| Target | Source | Support | Status | | - | - |:-:|:-:| | **Azure SQL DB** | SQL Server | X | |
-| | RDS SQL | X | |
+| | Amazon RDS SQL | X | |
| | Oracle | X | | | **Azure SQL DB MI** | SQL Server | ✔ | GA |
-| | RDS SQL | ✔ | GA |
+| | Amazon RDS SQL | X | |
| | Oracle | X | |
-| **Azure SQL VM** | SQL Server <sup>2</sup> | X | |
+| **Azure SQL VM** | SQL Server | ✔ | GA |
+| | Amazon RDS SQL | X | |
| | Oracle | X | | | **Azure Cosmos DB** | MongoDB | ✔ | GA | | **Azure DB for MySQL** | MySQL | X | |
-| | RDS MySQL | X | |
+| | Amazon RDS MySQL | X | |
| **Azure DB for PostgreSQL - Single server** | PostgreSQL | ✔ | GA | | | Azure DB for PostgreSQL - Single server <sup>1</sup> | ✔ | GA |
-| | RDS PostgreSQL | ✔ | GA |
+| | Amazon RDS PostgreSQL | ✔ | GA |
| **Azure DB for PostgreSQL - Flexible server** | PostgreSQL | ✔ | GA | | | Azure DB for PostgreSQL - Single server <sup>1</sup> | ✔ | GA |
-| | RDS PostgreSQL | ✔ | GA |
+| | Amazon RDS PostgreSQL | ✔ | GA |
| **Azure DB for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | ✔ | GA |
-| | RDS PostgreSQL | ✔ | GA |
+| | Amazon RDS PostgreSQL | ✔ | GA |
1. If your source database is already in Azure PaaS (for example, Azure DB for MySQL or Azure DB for PostgreSQL), choose the corresponding engine when creating your migration activity. For example, if you're migrating from Azure DB for MySQL - Single Server to Azure DB for MySQL - Flexible Server, choose MySQL as the source engine during scenario creation. If you're migrating from Azure DB for PostgreSQL - Single Server to Azure DB for PostgreSQL - Flexible Server, choose PostgreSQL as the source engine during scenario creation.
education-hub Get Started Education Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/get-started-education-hub.md
# Getting started with Azure Education Hub
-The Education Hub Get Started page provides quick links upon first landing into the Education Hub. There, you can find information about how to set up your course, learn about different services through Microsoft Learn, or easily deploy your first services through Quickstart Templates.
+The Education Hub Get Started page provides quick links upon first landing into the Education Hub. There, you can find information about how to set up your course, learn about different services, or easily deploy your first services through Azure Quickstart Templates.
:::image type="content" source="media/get-started-education-hub/get-started-page.png" alt-text="The Get Started page in the Azure Education Hub." border="false":::
education-hub Hub Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/hub-overview-page.md
Your main landing page in the Azure Education Hub is the Overview page. This pag
1. **Labs** shows the total number of active labs that have been passed out to students. 1. **Action needed** lists any actions you need to complete, such as accepting a Lab invitation. 1. **Software** lists free software available to download as an Educator.
-1. **Learning** links to free Azure learning pathways available through Microsoft Learn.
+1. **Learning** links to free Azure learning paths and modules.
1. **Quickstart Templates** includes Azure templates to help speed up and simplify deployment for common tasks. ## Next steps
event-hubs Event Hubs Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-scalability.md
For more information about the auto-inflate feature, see [Automatically scale th
## Processing units
- [Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation with in a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a *Processing Unit*(PU). You can purchase 1, 2, 4, 8 or 16 processing Units for each Event Hubs Premium namespace.
+ [Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation within a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a *Processing Unit* (PU). You can purchase 1, 2, 4, 8, or 16 processing units for each Event Hubs Premium namespace.
How much you can ingest and stream with a processing unit depends on various factors such as your producers, consumers, the rate at which you're ingesting and processing, and much more.
frontdoor End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md
From a security standpoint, Microsoft doesn't recommend disabling certificate su
* Azure Front Door Standard and Premium - it is present in the origin settings. * Azure Front Door (classic) - it is present under the Azure Front Door settings in the Azure portal and in the BackendPoolsSettings in the Azure Front Door API.
- under the Azure Front Door settings in the Azure portal and on the BackendPoolsSettings in the Azure Front Door API.
- ## Frontend TLS connection (Client to Front Door) To enable the HTTPS protocol for secure delivery of contents on an Azure Front Door custom domain, you can choose to use a certificate that is managed by Azure Front Door or use your own certificate.
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
Azure API for FHIR is provisioned.
| West US 2 | 40.64.135.77 | > [!NOTE]
-> The above steps are similar to the configuration steps described in the document How to convert data to FHIR (Preview). For more information, see [Host and use templates](../../healthcare-apis/fhir/convert-data.md#host-and-use-templates)
+> The above steps are similar to the configuration steps described in the document **Converting your data to FHIR**. For more information, see [Configure ACR firewall](../../healthcare-apis/fhir/convert-data.md#configure-acr-firewall).
### Allowing specific IP addresses for the Azure storage account in the same region
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
Previously updated : 06/06/2022 Last updated : 08/03/2022
FHIR service is provisioned.
| West US 2 | 40.64.135.77 | > [!NOTE]
-> The above steps are similar to the configuration steps described in the document How to convert data to FHIR. For more information, see [Host and use templates](./convert-data.md#host-and-use-templates)
+> The above steps are similar to the configuration steps described in the document **Converting your data to FHIR**. For more information, see [Configure ACR firewall](./convert-data.md#configure-acr-firewall).
### Allowing specific IP addresses for the Azure storage account in the same region
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data.md
Title: Data conversion for Azure Health Data Services
-description: Use the $convert-data endpoint and customize-converter templates to convert data in Azure Health Data Services
+ Title: FHIR data conversion for Azure Health Data Services
+description: Use the $convert-data endpoint and custom converter templates to convert data to FHIR in Azure Health Data Services.
Previously updated : 06/06/2022 Last updated : 08/02/2022
# Converting your data to FHIR
-The `$convert-data` custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports three types of data conversion: **C-CDA to FHIR**, **HL7v2 to FHIR**, **JSON to FHIR**.
+The `$convert-data` custom endpoint in the FHIR service enables converting health data from different formats to FHIR. The `$convert-data` operation uses [Liquid](https://shopify.github.io/liquid/) templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project for FHIR data conversion mapping. You can customize these conversion templates as needed. Currently the `$convert-data` operation supports three types of data conversion: **HL7v2 to FHIR**, **C-CDA to FHIR**, and **JSON to FHIR** (JSON to FHIR templates are intended for custom conversion mapping).
> [!NOTE]
-> `$convert-data` endpoint can be used as a component within an ETL pipeline for the conversion of raw healthcare data from legacy formats into FHIR format. However, it is not an ETL pipeline in itself. We recommend you to use an ETL engine such as Logic Apps or Azure Data Factory for a complete workflow in preparing your FHIR data to be persisted into the FHIR server. The workflow might include: data reading and ingestion, data validation, making $convert-data API calls, data pre/post-processing, data enrichment, and data de-duplication.
+> The `$convert-data` endpoint can be used as a component within an ETL pipeline for the conversion of health data formats into the FHIR format. However, the `$convert-data` operation is not an ETL pipeline in itself. We recommend you use an ETL engine based on Azure Logic Apps or Azure Data Factory for a complete workflow in converting your data to FHIR. The workflow might include: data reading and ingestion, data validation, making `$convert-data` API calls, data pre/post-processing, data enrichment, data de-duplication, and loading the data for persistence in the FHIR service.
-## Use the $convert-data endpoint
+## Using the `$convert-data` endpoint
-The `$convert-data` operation is integrated into the FHIR service to run as part of the service. After enabling `$convert-data` in your server, you can make API calls to the server to convert your data into FHIR:
+The `$convert-data` operation is integrated into the FHIR service as a RESTful API action. Calling the `$convert-data` endpoint causes the FHIR service to perform a conversion on health data sent in an API request:
-`https://<<FHIR service base URL>>/$convert-data`
+`POST {{fhirurl}}/$convert-data`
-### Parameter Resource
+The health data is delivered to the FHIR service in the body of the `$convert-data` request. If the request is successful, the FHIR service will return a FHIR `Bundle` response with the data converted to FHIR.
-$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource in the request body as described in the table below. In the API call request body, you would include the following parameters:
+### Parameters Resource
+
+A `$convert-data` API call packages the health data for conversion inside a JSON-formatted [Parameters resource](http://hl7.org/fhir/parameters.html) in the body of the request. See the table below for a description of the parameters.
| Parameter Name | Description | Accepted values | | -- | -- | -- |
-| inputData | Data to be converted. | For `Hl7v2`: string <br> For `Ccda`: XML <br> For `Json`: JSON |
-| inputDataType | Data type of input. | ```HL7v2```, ``Ccda``, ``Json`` |
-| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It's the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For ***default/sample*** templates: <br> **HL7v2** templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br> **C-CDA** templates: <br> ``microsofthealth/ccdatemplates:default`` <br> **JSON** templates: <br> ``microsofthealth/jsontemplates:default`` <br><br> For ***custom*** templates: <br> \<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
-| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br> "ADT_A01", "ADT_A02", "ADT_A03", "ADT_A04", "ADT_A05", "ADT_A08", "ADT_A11", "ADT_A13", "ADT_A14", "ADT_A15", "ADT_A16", "ADT_A25", "ADT_A26", "ADT_A27", "ADT_A28", "ADT_A29", "ADT_A31", "ADT_A47", "ADT_A60", "OML_O21", "ORU_R01", "ORM_O01", "VXU_V04", "SIU_S12", "SIU_S13", "SIU_S14", "SIU_S15", "SIU_S16", "SIU_S17", "SIU_S26", "MDM_T01", "MDM_T02"<br><br> For **C-CDA**:<br> "CCD", "ConsultationNote", "DischargeSummary", "HistoryandPhysical", "OperativeNote", "ProcedureNote", "ProgressNote", "ReferralNote", "TransferSummary" <br><br> For **JSON**: <br> "ExamplePatient", "Stu3ChargeItem" <br> |
+| `inputData` | Data payload to be converted to FHIR. | For `Hl7v2`: string <br> For `Ccda`: XML <br> For `Json`: JSON |
+| `inputDataType` | Type of data input. | ```HL7v2```, ``Ccda``, ``Json`` |
+| `templateCollectionReference` | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection in [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). The reference is to an image containing Liquid templates to use for conversion. This can be a reference either to default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting them on ACR, and registering to the FHIR service. | For ***default/sample*** templates: <br> **HL7v2** templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br> **C-CDA** templates: <br> ``microsofthealth/ccdatemplates:default`` <br> **JSON** templates: <br> ``microsofthealth/jsontemplates:default`` <br><br> For ***custom*** templates: <br> `<RegistryServer>/<imageName>@<imageDigest>`, `<RegistryServer>/<imageName>:<imageTag>` |
+| `rootTemplate` | The root template to use while transforming the data. | For **HL7v2**:<br> "ADT_A01", "ADT_A02", "ADT_A03", "ADT_A04", "ADT_A05", "ADT_A08", "ADT_A11", "ADT_A13", "ADT_A14", "ADT_A15", "ADT_A16", "ADT_A25", "ADT_A26", "ADT_A27", "ADT_A28", "ADT_A29", "ADT_A31", "ADT_A47", "ADT_A60", "OML_O21", "ORU_R01", "ORM_O01", "VXU_V04", "SIU_S12", "SIU_S13", "SIU_S14", "SIU_S15", "SIU_S16", "SIU_S17", "SIU_S26", "MDM_T01", "MDM_T02"<br><br> For **C-CDA**:<br> "CCD", "ConsultationNote", "DischargeSummary", "HistoryandPhysical", "OperativeNote", "ProcedureNote", "ProgressNote", "ReferralNote", "TransferSummary" <br><br> For **JSON**: <br> "ExamplePatient", "Stu3ChargeItem" <br> |
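For illustration only, here's a minimal sketch of a `$convert-data` request that assumes the default HL7v2 template collection; the `inputData` value is a placeholder HL7v2 message fragment, and `{{fhirurl}}` stands for your FHIR service base URL:

```rest
# Sketch only: the MSH segment below is placeholder sample data
POST {{fhirurl}}/$convert-data
Content-Type: application/json

{
    "resourceType": "Parameters",
    "parameter": [
        { "name": "inputData", "valueString": "MSH|^~\\&|SIMHOSP|SFAC|RAPP|RFAC|20200508131015||ADT^A01|517|T|2.3|||AL||44|ASCII" },
        { "name": "inputDataType", "valueString": "Hl7v2" },
        { "name": "templateCollectionReference", "valueString": "microsofthealth/fhirconverter:default" },
        { "name": "rootTemplate", "valueString": "ADT_A01" }
    ]
}
```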
> [!NOTE]
-> JSON templates are sample templates for use, not "default" templates that adhere to any pre-defined JSON message types. JSON doesn't have any standardized message types, unlike HL7v2 messages or C-CDA documents. Therefore, instead of default templates we provide you with some sample templates that you can use as a starting guide for your own customized templates.
+> JSON templates are sample templates for use in building your own conversion mappings – not "default" templates that adhere to any pre-defined health data message types. JSON itself is not specified as a health data format, unlike HL7v2 or C-CDA. Therefore, instead of "default" JSON templates, we provide you with some sample JSON templates that you can use as a starting guide for your own customized mappings.
> [!WARNING] > Default templates are released under MIT License and are **not** supported by Microsoft Support. >
-> Default templates are provided only to help you get started quickly. They may get updated when we update versions of the FHIR service. Therefore, you must verify the conversion behavior and **host your own copy of templates** on an Azure Container Registry, register those to the FHIR service, and use in your API calls in order to have consistent data conversion behavior across the different versions of services.
+> Default templates are provided only to help you get started with your data conversion workflow. These default templates are not intended for production and may change at any point when Microsoft releases updates for the FHIR service. In order to have consistent data conversion behavior across different versions of the FHIR service, you must 1) **host your own copy of templates** in an Azure Container Registry instance, 2) register the templates to the FHIR service, 3) use your registered templates in your API calls, and 4) verify that the conversion behavior meets your requirements.
#### Sample Request
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
"id": "9d697ec3-48c3-3e17-db6a-29a1765e22c6", ... ...
+ }
"request": { "method": "PUT", "url": "Location/50becdb5-ff56-56c6-40a1-6d554dca80f0"
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
## Customize templates
-You can use the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) for Visual Studio Code to customize the templates as per your needs. The extension provides an interactive editing experience, and makes it easy to download Microsoft-published templates and sample data. Refer to the documentation in the extension for more details.
+You can use the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) for Visual Studio Code to customize templates according to your specific requirements. The extension provides an interactive editing experience and makes it easy to download Microsoft-published templates and sample data. Refer to the extension documentation for more details.
+
+## Host your own templates
-## Host and use templates
+It's recommended that you host your own copy of templates in an Azure Container Registry (ACR) instance. There are six steps involved in hosting your own templates and using them for `$convert-data` operations:
-It's recommended that you host your own copy of templates on ACR. There are four steps involved in hosting your own copy of templates and using those in the $convert-data operation:
+1. Create an Azure Container Registry instance.
+2. Push the templates to your Azure Container Registry.
+3. Enable Managed Identity in your FHIR service instance.
+4. Provide ACR access to the FHIR service Managed Identity.
+5. Register the ACR server in the FHIR service.
+6. Optionally configure ACR firewall for secure access.
-1. Push the templates to your Azure Container Registry.
-1. Enable Managed Identity on your FHIR service instance.
-1. Provide access of the ACR to the FHIR service Managed Identity.
-1. Register the ACR servers in the FHIR service.
-1. Optionally configure ACR firewall for secure access.
+### Create an ACR instance
+
+Read the [Introduction to Container registries in Azure](../../container-registry/container-registry-intro.md) and follow the instructions for creating your own ACR instance. It's recommended to place your ACR instance in the same resource group where your FHIR service is located.
### Push templates to Azure Container Registry
-After creating an ACR instance, you can use the _FHIR Converter: Push Templates_ command in the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) to push the customized templates to the ACR. Alternatively, you can use the [Template Management CLI tool](https://github.com/microsoft/FHIR-Converter/blob/main/docs/TemplateManagementCLI.md) for this purpose.
+After creating an ACR instance, you can use the _FHIR Converter: Push Templates_ command in the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) to push your custom templates to your ACR instance. Alternatively, you can use the [Template Management CLI tool](https://github.com/microsoft/FHIR-Converter/blob/main/docs/TemplateManagementCLI.md) for this purpose.
-### Enable Managed Identity on FHIR service
+### Enable Managed Identity in the FHIR service
-Browse to your instance of FHIR service service in the Azure portal, and then select the **Identity** blade.
-Change the status to **On** to enable managed identity in FHIR service.
+Browse to your instance of the FHIR service in Azure portal and select the **Identity** blade.
+Change the status to **On** to enable managed identity in the FHIR service.
[ ![Screen image of Enable Managed Identity.](media/convert-data/fhir-mi-enabled.png) ](media/convert-data/fhir-mi-enabled.png#lightbox)
-### Provide access of the ACR to FHIR service
+### Provide ACR access to the FHIR service
-1. Select **Access control (IAM)**.
+1. In your resource group, go to your **Container registry** instance and select the **Access control (IAM)** blade.
-1. Select **Add > Add role assignment**. If the **Add role assignment** option is grayed out, ask your Azure administrator to assign you permission to perform this task.
+2. Select **Add > Add role assignment**. If the **Add role assignment** option is grayed out, ask your Azure administrator to assign you permission to perform this task.
:::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Role** tab, select the [AcrPull](../../role-based-access-control/built-in-roles.md#acrpull) role.
+3. On the **Role** tab, select the [AcrPull](../../role-based-access-control/built-in-roles.md#acrpull) role.
[![Screen shot showing user interface of Add role assignment page.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox)
-1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+4. On the **Members** tab, select **Managed identity**, and then click **Select members**.
-1. Select your Azure subscription.
+5. Select your Azure subscription.
-1. Select **System-assigned managed identity**, and then select the FHIR service.
+6. Select **System-assigned managed identity**, and then select the FHIR service.
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+7. On the **Review + assign** tab, click **Review + assign** to assign the role.
For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
-### Register the ACR servers in FHIR service
+### Register the ACR server in FHIR service
-You can register the ACR server using the Azure portal, or using CLI.
+You can register the ACR server using the Azure portal, or using the CLI.
#### Registering the ACR server using Azure portal
-Browse to the **Artifacts** blade under **Data transformation** in your FHIR service instance. You'll see the list of currently registered ACR servers. Select **Add**, and then select your registry server from the drop-down menu. You'll need to select **Save** for the registration to take effect. It may take a few minutes to apply the change and restart your instance.
+Browse to the **Artifacts** blade under **Data transformation** in your FHIR service instance. You'll see the list of currently registered ACR servers. Select **Add**, and then select your registry server from the drop-down menu. You'll need to click **Save** for the registration to take effect. It may take a few minutes to apply the change.
-#### Registering the ACR server using CLI
+#### Registering the ACR server using the CLI
You can register up to 20 ACR servers in the FHIR service.
-Install the Azure Health Data Services CLI from Azure PowerShell if needed:
+Install the Azure Health Data Services CLI if needed:
```azurecli
az extension add -n healthcareapis
```
-Register the acr servers to FHIR service following the examples below:
+Register the ACR servers to the FHIR service following the examples below:
##### Register a single ACR server
az healthcareapis acr add --login-servers "fhiracr2021.azurecr.io fhiracr2020.az
``` ### Configure ACR firewall
-Select **Networking** of the Azure storage account from the portal.
+In your Azure portal, select **Networking** for the ACR instance.
[ ![Screen image of configure ACR firewall.](media/convert-data/networking-container-registry.png) ](media/convert-data/networking-container-registry.png#lightbox)
-Select **Selected networks**.
+Click the **Selected networks** button.
Under the **Firewall** section, specify the IP address in the **Address range** box. Add IP ranges to allow access from the internet or your on-premises networks.
-In the table below, you'll find the IP address for the Azure region where the FHIR service service is provisioned.
+In the table below, you'll find the IP address for the Azure region where the FHIR service is provisioned.
|**Azure Region** |**Public IP Address** | |:-|:-|
In the table below, you'll find the IP address for the Azure region where the FH
| West US 2 | 40.64.135.77 | > [!NOTE]
-> The above steps are similar to the configuration steps described in the document How to configure FHIR export settings. For more information, see [Configure export settings](./configure-export-data.md)
+> The above steps are similar to the configuration steps described in the document **Configure export settings and set up a storage account**. For more information, see [Configure settings for export](./configure-export-data.md).
-For a private network access (that is, private link), you can also disable the public network access of ACR.
-* Select Networking blade of the Azure storage account from the portal.
-* Select `Disabled`.
-* Select Firewall exception: Allow trusted Microsoft services to access this container registry.
+For private network access (that is, a private link), you can also disable the public network access to your ACR instance.
+* Select the **Networking** blade for the Container registry in the portal.
+* Make sure you are in the **Public access** tab.
+* Select **Disabled**.
+* Under **Firewall exception** select **Allow trusted Microsoft services to access this container registry**.
[ ![Screen image of private link for ACR.](media/convert-data/configure-private-network-container-registry.png) ](media/convert-data/configure-private-network-container-registry.png#lightbox)
-### Verify
+### Verify `$convert-data` operation
-Make a call to the $convert-data API specifying your template reference in the templateCollectionReference parameter.
+Make a call to the `$convert-data` API specifying your template reference in the `templateCollectionReference` parameter.
`<RegistryServer>/<imageName>@<imageDigest>`
+You should receive a `Bundle` response containing the health data converted into the FHIR format.
+ ## Next steps
-In this article, you've learned about the $convert-data endpoint and customize-converter templates to convert data in the Azure Health Data Services. For more information about how to export FHIR data, see
+In this article, you've learned about the `$convert-data` endpoint for converting health data to FHIR using the FHIR service in Azure Health Data Services. For information about how to export FHIR data from the FHIR service, see
>[!div class="nextstepaction"] >[Export data](export-data.md)
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/copy-to-synapse.md
You can configure the server to export the data to any kind of Azure storage acc
#### Using `$export` command
-After configuring your FHIR server, you can follow the [documentation](./export-data.md#using-export-command) to export your FHIR resources at System, Patient, or Group level. For example, you can export all of your FHIR data related to the patients in a `Group` with the following `$export` command, in which you specify your ADL Gen 2 blob storage name in the field `{{BlobContainer}}`:
+After configuring your FHIR server, you can follow the [documentation](./export-data.md#calling-the-export-endpoint) to export your FHIR resources at System, Patient, or Group level. For example, you can export all of your FHIR data related to the patients in a `Group` with the following `$export` command, in which you specify your ADL Gen 2 blob storage name in the field `{{BlobContainer}}`:
```rest https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md
Previously updated : 06/06/2022 Last updated : 08/03/2022 # How to export FHIR data
+The bulk `$export` operation in the FHIR service allows users to export data as described in the [HL7 FHIR Bulk Data Access specification](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html).
-The Bulk Export feature allows data to be exported from the FHIR Server per the [FHIR specification](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html).
+Before attempting to use `$export`, make sure that your FHIR service is configured to connect with an ADLS Gen2 storage account. For configuring export settings and creating an ADLS Gen2 storage account, refer to the [Configure settings for export](./configure-export-data.md) page.
-Before using $export, you'll want to make sure that the FHIR service is configured to use it. For configuring export settings and creating Azure storage account, refer to [the configure export data page](configure-export-data.md).
+## Calling the `$export` endpoint
-## Using $export command
+After setting up the FHIR service to connect with an ADLS Gen2 storage account, you can call the `$export` endpoint and the FHIR service will export data into a blob storage container inside the storage account. The example request below exports all resources into a container specified by name (`{{containerName}}`). Note that the container in the ADLS Gen2 account must be created beforehand if you want to specify the `{{containerName}}` in the request.
-After configuring the FHIR service for export, you can use the $export command to export the data out of the service. The data will be stored into the storage account you specified while configuring export. To learn how to invoke $export command in FHIR server, read documentation on the [HL7 FHIR $export specification](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html).
+```
+GET {{fhirurl}}/$export?_container={{containerName}}
+```
+
+If you don't specify a container name in the request (e.g., by calling `GET {{fhirurl}}/$export`), then a new container with an auto-generated name will be created for the exported data.
+
+For general information about the FHIR `$export` API spec, please see the [HL7 FHIR Export Request Flow](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#request-flow) documentation.
**Jobs stuck in a bad state**
-In some situations, there's a potential for a job to be stuck in a bad state. This can occur especially if the storage account permissions haven't been set up properly. One way to validate if your export is successful is to check your storage account to see if the corresponding container (that is, ndjson) files are present. If they aren't present, and there are no other export jobs running, then there's a possibility the current job is stuck in a bad state. You should cancel the export job by sending a cancellation request and try requeuing the job again. Our default run time for an export in bad state is 10 minutes before it will stop and move to a new job or retry the export.
+In some situations, there's a potential for a job to be stuck in a bad state while attempting to `$export` data from the FHIR service. This can occur especially if the ADLS Gen2 storage account permissions haven't been set up correctly. One way to check the status of your `$export` operation is to go to your storage account's **Storage browser** and see if any `.ndjson` files are present in the export container. If the files aren't present and there are no other `$export` jobs running, then there's a possibility the current job is stuck in a bad state. In this case, you can cancel the `$export` job by calling the FHIR service API with a `DELETE` request. Later you can requeue the `$export` job and try again. Information about canceling an `$export` operation can be found in the [Bulk Data Delete Request](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#bulk-data-delete-request) documentation from HL7.
-The FHIR service supports $export at the following levels:
-* [System](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointsystem-level-export): `GET https://<<FHIR service base URL>>/$export>>`
-* [Patient](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointall-patients): `GET https://<<FHIR service base URL>>/Patient/$export>>`
-* [Group of patients*](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointgroup-of-patients) - FHIR service exports all related resources but doesn't export the characteristics of the group: `GET https://<<FHIR service base URL>>/Group/[ID]/$export>>`
+> [!NOTE]
+> In the FHIR service, the default time for an `$export` operation to idle in a bad state is 10 minutes before the service will stop the operation and move to a new job.
-When data is exported, a separate file is created for each resource type. To ensure that the exported files don't become too large. We create a new file after the size of a single exported file becomes larger than 64 MB. The result is that you may get multiple files for each resource type, which will be enumerated (that is, Patient-1.ndjson, Patient-2.ndjson).
+The FHIR service supports `$export` at the following levels:
+* [System](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointsystem-level-export): `GET {{fhirurl}}/$export`
+* [Patient](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointall-patients): `GET {{fhirurl}}/Patient/$export`
+* [Group of patients*](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointgroup-of-patients) – *The FHIR service exports all referenced resources but doesn't export the characteristics of the group resource itself: `GET {{fhirurl}}/Group/[ID]/$export`
+When data is exported, a separate file is created for each resource type. The FHIR service will create a new file when the size of a single exported file exceeds 64 MB. The result is that you may get multiple files for a resource type, which will be enumerated (e.g., `Patient-1.ndjson`, `Patient-2.ndjson`).
> [!Note]
-> `Patient/$export` and `Group/[ID]/$export` may export duplicate resources if the resource is in a compartment of more than one resource, or is in multiple groups.
+> `Patient/$export` and `Group/[ID]/$export` may export duplicate resources if a resource is in multiple groups or in a compartment of more than one resource.
-In addition, checking the export status through the URL returned by the location header during the queuing is supported along with canceling the actual export job.
+In addition to checking the presence of exported files in your storage account, you can also check your `$export` operation status through the URL in the `Content-Location` header returned in the FHIR service response. See the HL7 [Bulk Data Status Request](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#bulk-data-status-request) documentation for more information.
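As a sketch, both status polling and cancellation go against that status URL. Per the bulk data specification, a `GET` request returns `202 Accepted` while the job is still running and `200 OK` with the list of output files once it completes, while a `DELETE` request cancels the job. The `{{exportStatusUrl}}` placeholder below is hypothetical and stands for the URL returned in the `Content-Location` header:

```rest
# Poll the status of a queued $export job
GET {{exportStatusUrl}}

# Cancel the $export job if it appears stuck
DELETE {{exportStatusUrl}}
```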
### Exporting FHIR data to ADLS Gen2
-Currently we support $export for ADLS Gen2 enabled storage accounts, with the following limitation:
+Currently the FHIR service supports `$export` to ADLS Gen2 storage accounts, with the following limitations:
-- User can't take advantage of [hierarchical namespaces](../../storage/blobs/data-lake-storage-namespace.md), yet there isn't a way to target export to a specific subdirectory within the container. We only provide the ability to target a specific container (where we create a new folder for each export).-- Once an export is complete, we never export anything to that folder again, since subsequent exports to the same container will be inside a newly created folder.
+- ADLS Gen2 provides [hierarchical namespaces](../../storage/blobs/data-lake-storage-namespace.md), yet there isn't a way to target `$export` operations to a specific subdirectory within a container. The FHIR service is only able to specify the destination container for the export (where a new folder for each `$export` operation is created).
+- Once an `$export` operation is complete and all data has been written inside a folder, the FHIR service doesn't export anything to that folder again since subsequent exports to the same container will be inside a newly created folder.
-To export data to storage accounts behind the firewalls, see [Configure settings for export](configure-export-data.md).
+To export data to a storage account behind a firewall, see [Configure settings for export](configure-export-data.md).
## Settings and parameters ### Headers
-There are two required header parameters that must be set for $export jobs. The values are defined by the current [$export specification](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#headers).
-* **Accept** - application/fhir+json
-* **Prefer** - respond-async
+There are two required header parameters that must be set for `$export` jobs. The values are set according to the current HL7 [$export specification](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#headers).
+* **Accept** - `application/fhir+json`
+* **Prefer** - `respond-async`
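Put together, a minimal system-level kickoff request with the required headers might look like the following sketch:

```rest
GET {{fhirurl}}/$export
Accept: application/fhir+json
Prefer: respond-async
```

If the job is accepted, the service responds with `202 Accepted` and a `Content-Location` header pointing to the status URL for the export job.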
### Query parameters
-The FHIR service supports the following query parameters. All of these parameters are optional:
+The FHIR service supports the following query parameters for filtering exported data. All of these parameters are optional.
|Query parameter | Defined by the FHIR Spec? | Description| ||||
-| \_outputFormat | Yes | Currently supports three values to align to the FHIR Spec: application/fhir+ndjson, application/ndjson, or just ndjson. All export jobs will return `ndjson` and the passed value has no effect on code behavior. |
-| \_since | Yes | Allows you to only export resources that have been modified since the time provided |
-| \_type | Yes | Allows you to specify which types of resources will be included. For example, \_type=Patient would return only patient resources|
-| \_typeFilter | Yes | To request finer-grained filtering, you can use \_typeFilter along with the \_type parameter. The value of the _typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results |
-| \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder into that container. If the container isn't specified, the data will be exported to a new container. |
+| `_outputFormat` | Yes | Currently supports three values to align to the FHIR Spec: `application/fhir+ndjson`, `application/ndjson`, or just `ndjson`. All export jobs will return `.ndjson` files and the passed value has no effect on code behavior. |
+| `_since` | Yes | Allows you to only export resources that have been modified since the time provided. |
+| `_type` | Yes | Allows you to specify which types of resources will be included. For example, `_type=Patient` would return only patient resources.|
+| `_typeFilter` | Yes | To request finer-grained filtering, you can use `_typeFilter` along with the `_type` parameter. The value of the `_typeFilter` parameter is a comma-separated list of FHIR queries that further restrict the results. |
+| `_container` | No | Specifies the name of the container in the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder in that container. If the container isn't specified, the data will be exported to a new container with an auto-generated name. |
> [!Note]
-> Only storage accounts in the same subscription as that for FHIR service are allowed to be registered as the destination for $export operations.
+> Only storage accounts in the same subscription as that for the FHIR service are allowed to be registered as the destination for `$export` operations.
## Next steps
-In this article, you've learned how to export FHIR resources using the $export command. For more information about how to set up and use de-identified export or how to export data from Azure API for FHIR to Azure Synapse Analytics, see
+In this article, you've learned about exporting FHIR resources using the `$export` operation. For information about how to set up and use additional options for export, see
>[!div class="nextstepaction"] >[Export de-identified data](de-identified-export.md)
iot-edge Configure Connect Verify Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/configure-connect-verify-gpu.md
az group list
## Next steps
-This article helped you set up your virtual machine and IoT Edge device to be GPU-accelerated. To run an application with a similar setup, try this Microsoft Learn tutorial [NVIDIA DeepStream development with Microsoft Azure](/learn/paths/nvidia-deepstream-development-with-microsoft-azure/?WT.mc_id=iot-47680-cxa). The Learn tutorial shows you how to develop optimized Intelligent Video Applications that can consume multiple video, image, and audio sources.
+This article helped you set up your virtual machine and IoT Edge device to be GPU-accelerated. To run an application with a similar setup, try the learning path for [NVIDIA DeepStream development with Microsoft Azure](/learn/paths/nvidia-deepstream-development-with-microsoft-azure/?WT.mc_id=iot-47680-cxa). The learning path shows you how to develop optimized Intelligent Video Applications that can consume multiple video, image, and audio sources.
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
Some of the key differences between the latest release and version 1.1 and earli
* The workload API in the latest version saves encrypted secrets in a new format. If you upgrade from an older version to latest version, the existing master encryption key is imported. The workload API can read secrets saved in the prior format using the imported encryption key. However, the workload API can't write encrypted secrets in the old format. Once a secret is re-encrypted by a module, it is saved in the new format. Secrets encrypted in the latest version are unreadable by the same module in version 1.1. If you persist encrypted data to a host-mounted folder or volume, always create a backup copy of the data *before* upgrading to retain the ability to downgrade if necessary. * For backward compatibility when connecting devices that do not support TLS 1.2, you can configure Edge Hub to still accept TLS 1.0 or 1.1 via the [SslProtocols environment variable](https://github.com/Azure/iotedge/blob/main/doc/EnvironmentVariables.md#edgehub).  Please note that support for [TLS 1.0 and 1.1 in IoT Hub is considered legacy](../iot-hub/iot-hub-tls-support.md) and may also be removed from Edge Hub in future releases.  To avoid future issues, use TLS 1.2 as the only TLS version when connecting to Edge Hub or IoT Hub. * The preview for the experimental MQTT broker in Edge Hub 1.2 has ended and is not included in Edge Hub 1.3. We are continuing to refine our plans for an MQTT broker based on feedback received. In the meantime, if you need a standards-compliant MQTT broker on IoT Edge, consider deploying an open-source broker like Mosquitto as an IoT Edge module.
+* Starting with version 1.2, when a backing image is removed from a container, the container keeps running and persists across restarts. In 1.1, when a backing image is removed, the container is immediately recreated and the backing image is updated.
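For the `SslProtocols` fallback mentioned above, the sketch below shows where the environment variable sits on the edgeHub system module in a deployment manifest. It's expressed as a Python dict (ready for `json.dumps`) rather than raw JSON; the image tag is a placeholder and the `tls1.2` value string is an assumption, so check the linked EnvironmentVariables.md for the exact accepted values.

```python
import json

# Illustrative fragment corresponding to the systemModules.edgeHub entry in
# the $edgeAgent desired properties of an IoT Edge deployment manifest.
# The image tag is a placeholder and the "tls1.2" value is assumed for
# illustration; see the linked EnvironmentVariables.md for accepted values.
edge_hub = {
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
        "image": "mcr.microsoft.com/azureiotedge-hub:<version>",
        "createOptions": "{}",
    },
    "env": {
        "SslProtocols": {"value": "tls1.2"},
    },
}

print(json.dumps(edge_hub, indent=2))
```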
Before automating any update processes, validate that it works on test machines.
iot-hub-device-update Connected Cache Industrial Iot Nested https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-industrial-iot-nested.md
description: Microsoft Connected Cache within an Azure IoT Edge for Industrial I
Last updated 2/16/2021-+
iot-hub-device-update Connected Cache Nested Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-nested-level.md
description: Microsoft Connected Cache two level nested Azure IoT Edge Gateway w
Last updated 2/16/2021-+
iot-hub-device-update Connected Cache Single Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-single-level.md
description: Microsoft Connected Cache preview deployment scenario samples tutor
Last updated 2/16/2021-+
iot-hub Iot Hub C C Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-c-c-module-twin-getstarted.md
[!INCLUDE [iot-hub-selector-module-twin-getstarted](../../includes/iot-hub-selector-module-twin-getstarted.md)]
-> [!NOTE]
-> [Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provides visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system based devices or firmware devices, it allows for isolated configuration and conditions for each component.
+[Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provide visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system-based devices or firmware devices, they allow for isolated configuration and conditions for each component.
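The article builds the apps in C, but as a quick illustration of the per-component granularity described above, here's a hedged service-side sketch in Python with the `azure-iot-hub` package that reads a single module's twin; the connection string and identifiers are placeholders.

```python
from azure.iot.hub import IoTHubRegistryManager

# Placeholders: the IoT hub service connection string plus the device and
# module identities whose twin you want to inspect.
IOTHUB_CONNECTION_STRING = "<iothub-connection-string>"
DEVICE_ID = "myFirstDevice"
MODULE_ID = "myFirstModule"

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

# Each module has its own twin, so configuration and reported state stay
# isolated per component of the device.
module_twin = registry_manager.get_module_twin(DEVICE_ID, MODULE_ID)
print("Desired properties:", module_twin.properties.desired)
print("Reported properties:", module_twin.properties.reported)
```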
At the end of this article, you have two C apps:
-* **CreateIdentities**, which creates a device identity, a module identity and associated security key to connect your device and module clients.
+* **CreateIdentities**: creates a device identity, a module identity and associated security key to connect your device and module clients.
-* **UpdateModuleTwinReportedProperties**, which sends updated module twin reported properties to your IoT Hub.
+* **UpdateModuleTwinReportedProperties**: sends updated module twin reported properties to your IoT Hub.
> [!NOTE]
-> For information about the Azure IoT SDKs that you can use to build both applications to run on devices, and your solution backend, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
-* An active Azure account. (If you don't have an account, you can create an [Azure free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
- * An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md). * The latest [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
At the end of this article, you have two C apps:
## Create a device identity and a module identity in IoT Hub
-In this section, you create a C app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module cannot connect to IoT hub unless it has an entry in the identity registry. For more information, see the **Identity registry** section of the [IoT Hub developer guide](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. Your device and module use these values to identify itself when it sends device-to-cloud messages to IoT Hub. The IDs are case-sensitive.
+In this section, you create a C app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module can't connect to IoT hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. Your device and module use these values to identify themselves when they send device-to-cloud messages to IoT Hub. The IDs are case-sensitive.
Add the following code to your C file:
This app creates a device identity with ID **myFirstDevice** and a module identi
In this section, you create a C app on your simulated device that updates the module twin reported properties.
-1. **Get your module connection string** -- now if you login to [Azure portal](https://portal.azure.com). Navigate to your IoT Hub and click IoT Devices. Find myFirstDevice, open it and you see myFirstModule was successfully created. Copy the module connection string. It is needed in the next step.
+1. **Get your module connection string** -- sign in to the [Azure portal](https://portal.azure.com). Navigate to your IoT hub and select **IoT Devices**. Find myFirstDevice and open it to see that myFirstModule was successfully created. Copy the module connection string. You need it in the next step.
![Azure portal module detail](./media/iot-hub-c-c-module-twin-getstarted/module-detail.png)
int main(void)
To continue getting started with IoT Hub and to explore other IoT scenarios, see: * [Getting started with device management](iot-hub-node-node-device-management-get-started.md)
-* [Getting started with IoT Edge](../iot-edge/quickstart-linux.md)
+* [Getting started with IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Iot Hub Csharp Csharp Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-module-twin-getstarted.md
[!INCLUDE [iot-hub-selector-module-twin-getstarted](../../includes/iot-hub-selector-module-twin-getstarted.md)]
-> [!NOTE]
-> [Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provide visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system based devices or firmware devices, module identities and module twins allow for isolated configuration and conditions for each component.
+[Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provide visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system-based devices or firmware devices, module identities and module twins allow for isolated configuration and conditions for each component.
At the end of this article, you have two .NET console apps:
-* **CreateIdentities**. This app creates a device identity, a module identity, and associated security key to connect your device and module clients.
+* **CreateIdentities**: creates a device identity, a module identity, and associated security key to connect your device and module clients.
-* **UpdateModuleTwinReportedProperties**. This app sends updated module twin reported properties to your IoT hub.
+* **UpdateModuleTwinReportedProperties**: sends updated module twin reported properties to your IoT hub.
> [!NOTE]
-> For information about the Azure IoT SDKs that you can use to build both applications to run on devices, and your solution back end, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites * Visual Studio.
-* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
- * An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md). ## Get the IoT hub connection string
At the end of this article, you have two .NET console apps:
## Update the module twin using .NET device SDK
-In this section, you create a .NET console app on your simulated device that updates the module twin reported properties.
+Now let's communicate with the cloud from your simulated device. Once a module identity is created, a module twin is implicitly created in IoT Hub. In this section, you create a .NET console app on your simulated device that updates the module twin reported properties.
-Here's how to get your module connection string from the Azure portal. Sign in to the [Azure portal](https://portal.azure.com/). Navigate to your hub and select **Devices**. Find **myFirstDevice**. Select **myFirstDevice** to open it, and then select **myFirstModule** to open it. In **Module Identity Details**, copy the **Connection string (primary key)** to save it for the console app.
+To retrieve your module connection string, navigate to your [IoT hub](https://ms.portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Devices%2FIotHubs) then select **Devices**. Find and select **myFirstDevice** to open it and then select **myFirstModule** to open it. In **Module Identity Details**, copy the **Connection string (primary key)** and save it for the console app.
:::image type="content" source="./media/iot-hub-csharp-csharp-module-twin-getstarted/module-identity-detail.png" alt-text="Screenshot that shows the 'Module Identity Details' page." lightbox="./media/iot-hub-csharp-csharp-module-twin-getstarted/module-identity-detail.png":::
Here's how to get your module connection string from the Azure portal. Sign in t
} ```
- This code sample shows you how to retrieve the module twin and update reported properties with AMQP protocol. In public preview, we only support AMQP for module twin operations.
+ Now you know how to retrieve the module twin and update reported properties with AMQP protocol.
1. Optionally, you can add these statements to the **Main** method to send an event to IoT Hub from your module. Place these lines below the `try catch` block.
iot-hub Iot Hub Csharp Csharp Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-twin-getstarted.md
[!INCLUDE [iot-hub-selector-twin-get-started](../../includes/iot-hub-selector-twin-get-started.md)]
-In this article, you create these .NET console apps:
+In this article, you create two .NET console apps:
-* **AddTagsAndQuery**. This back-end app adds tags and queries device twins.
+* **AddTagsAndQuery**: a back-end app that adds tags and queries device twins.
-* **ReportConnectivity**. This device app simulates a device that connects to your IoT hub with the device identity created earlier, and reports its connectivity condition.
+* **ReportConnectivity**: a simulated device app that connects to your IoT hub and reports its connectivity condition.
> [!NOTE]
-> The article [Azure IoT SDKs](iot-hub-devguide-sdks.md) provides information about the Azure IoT SDKs that you can use to build both device and back-end apps.
->
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
In this article, you create these .NET console apps:
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
-* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
- * Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub). ## Get the IoT hub connection string
In this article, you create these .NET console apps:
## Create the service app
-In this section, you create a .NET console app, using C#, that adds location metadata to the device twin associated with **myDeviceId**. It then queries the device twins stored in the IoT hub selecting the devices located in the US, and then the ones that reported a cellular connection.
+In this section, you create a .NET console app, using C#, that adds location metadata to the device twin associated with **myDeviceId**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.
1. In Visual Studio, select **File > New > Project**. In **Create a new project**, select **Console App (.NET Framework)**, and then select **Next**.
In this section, you create a .NET console app, using C#, that adds location met
![Query results in window](./media/iot-hub-csharp-csharp-twin-getstarted/addtagapp.png)
-In the next section, you create a device app that reports the connectivity information and changes the result of the query in the previous section.
+In the next section, you create a device app that reports connectivity information and changes the result of the query in the previous section.
## Create the device app
-In this section, you create a .NET console app that connects to your hub as **myDeviceId**, and then updates its reported properties to contain the information that it is connected using a cellular network.
+In this section, you create a .NET console app that connects to your hub as **myDeviceId**, and then updates its reported properties to confirm that it's connected using a cellular network.
1. In Visual Studio, select **File** > **New** > **Project**. In **Create new project**, choose **Console App (.NET Framework)**, and then select **Next**.
In this section, you create a .NET console app that connects to your hub as **my
![Device connectivity reported successfully](./media/iot-hub-csharp-csharp-twin-getstarted/tagappsuccess.png)
-## Next steps
+In this article, you:
-In this article, you configured a new IoT hub in the Azure portal, and then created a device identity in the IoT hub's identity registry. You added device metadata as tags from a back-end app, and wrote a simulated device app to report device connectivity information in the device twin. You also learned how to query this information using the SQL-like IoT Hub query language.
+* Configured a new IoT hub in the Azure portal
+* Created a device identity in the IoT hub's identity registry
+* Added device metadata as tags from a back-end app
+* Reported device connectivity information in the device twin
+* Queried the device twin information using the SQL-like IoT Hub query language
+
+## Next steps
-You can learn more from the following resources:
+To learn how to:
-* To learn how to send telemetry from devices, see the [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) article.
+* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp).
-* To learn how to configure devices using device twin's desired properties, see the [Use desired properties to configure devices](tutorial-device-twins.md) article.
+* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md).
-* To learn how to control devices interactively, such as turning on a fan from a user-controlled app, see the [Use direct methods](./quickstart-control-device.md?pivots=programming-language-csharp) quickstart.
+* Control devices interactively, such as turning on a fan from a user-controlled app, see [Quickstart: Control a device connected to an IoT hub](./quickstart-control-device.md?pivots=programming-language-csharp).
iot-hub Iot Hub Java Java Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-twin-getstarted.md
In this article, you create two Java console apps:
-* **add-tags-query**, a Java back-end app that adds tags and queries device twins.
-* **simulated-device**, a Java device app that connects to your IoT hub and reports its connectivity condition using a reported property.
+* **add-tags-query**: a back-end app that adds tags and queries device twins.
+* **simulated-device**: a simulated device app that connects to your IoT hub and reports its connectivity condition.
> [!NOTE]
-> The article [Azure IoT SDKs](iot-hub-devguide-sdks.md) provides information about the Azure IoT SDKs that you can use to build both device and back-end apps.
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
In this article, you create two Java console apps:
* [Maven 3](https://maven.apache.org/download.cgi)
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
- * Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub). ## Get the IoT hub connection string
In this article, you create two Java console apps:
## Create the service app
-In this section, you create a Java app that adds location metadata as a tag to the device twin in IoT Hub associated with **myDeviceId**. The app first queries IoT hub for devices located in the US, and then for devices that report a cellular network connection.
+In this section, you create a Java app that adds location metadata as a tag to the device twin in IoT Hub associated with **myDeviceId**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.
1. On your development machine, create an empty folder named **iot-java-twin-getstarted**.
In this section, you create a Java app that adds location metadata as a tag to t
mvn clean package -DskipTests ```
-## Create a device app
+In the next section, you create a device app that reports connectivity information and changes the result of the query in the previous section.
+
+## Create the device app
-In this section, you create a Java console app that sets a reported property value that is sent to IoT Hub.
+In this section, you create a Java console app that connects to your hub as **myDeviceId**, and then updates its device twin's reported properties to confirm that it's connected using a cellular network.
1. In the **iot-java-twin-getstarted** folder, create a Maven project named **simulated-device** using the following command at your command prompt:
You are now ready to run the console apps.
Now that your device has sent the **connectivityType** property to IoT Hub, the second query returns your device.
+In this article, you:
+
+* Configured a new IoT hub in the Azure portal
+* Created a device identity in the IoT hub's identity registry
+* Added device metadata as tags from a back-end app
+* Reported device connectivity information in the device twin
+* Queried the device twin information using the SQL-like IoT Hub query language
+ ## Next steps
-In this article, you configured a new IoT hub in the Azure portal, and then created a device identity in the IoT hub's identity registry. You added device metadata as tags from a back-end app, and wrote a device app to report device connectivity information in the device twin. You also learned how to query the device twin information using the SQL-like IoT Hub query language.
+To learn how to:
-Use the following resources to learn how to:
+* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java)
-* Send telemetry from devices with the [Get started with IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) article.
+* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md)
-* Control devices interactively (such as turning on a fan from a user-controlled app) with the [Use direct methods](./quickstart-control-device.md?pivots=programming-language-java) quickstart.
+* Control devices interactively, such as turning on a fan from a user-controlled app, see [Quickstart: Control a device connected to an IoT hub](./quickstart-control-device.md?pivots=programming-language-java)
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-managed-identity.md
result = iothub_job_manager.create_import_export_job(JobProperties(
## SDK samples - [.NET SDK sample](https://aka.ms/iothubmsicsharpsample) - [Java SDK sample](https://aka.ms/iothubmsijavasample)-- [Python SDK sample](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-hub/samples)
+- [Python SDK sample](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples)
- Node.js SDK samples: [bulk device import](https://aka.ms/iothubmsinodesampleimport), [bulk device export](https://aka.ms/iothubmsinodesampleexport) ## Next steps
iot-hub Iot Hub Node Node Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-module-twin-getstarted.md
[!INCLUDE [iot-hub-selector-module-twin-getstarted](../../includes/iot-hub-selector-module-twin-getstarted.md)]
-> [!NOTE]
-> [Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provides visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system-based devices or firmware devices, it allows for isolated configuration and conditions for each component.
+[Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provide visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system-based devices or firmware devices, they allow for isolated configuration and conditions for each component.
At the end of this article, you have two Node.js apps:
-* **CreateIdentities**, which creates a device identity, a module identity, and associated security keys to connect your device and module clients.
+* **CreateIdentities**: creates a device identity, a module identity, and associated security keys to connect your device and module clients.
-* **UpdateModuleTwinReportedProperties**, which sends updated module twin reported properties to your IoT Hub.
+* **UpdateModuleTwinReportedProperties**: sends updated module twin reported properties to your IoT Hub.
> [!NOTE]
-> For information about the Azure IoT SDKs that you can use to build both applications to run on devices, and your solution back end, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
At the end of this article, you have two Node.js apps:
* Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/main/doc/node-devbox-setup.md) describes how to install Node.js for this article on either Windows or Linux.
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
- ## Get the IoT hub connection string [!INCLUDE [iot-hub-howto-module-twin-shared-access-policy-text](../../includes/iot-hub-howto-module-twin-shared-access-policy-text.md)]
At the end of this article, you have two Node.js apps:
## Create a device identity and a module identity in IoT Hub
-In this section, you create a Node.js app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module cannot connect to IoT hub unless it has an entry in the identity registry. For more information, see the "Identity registry" section of the [IoT Hub developer guide](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. Your device and module use these values to identify itself when it sends device-to-cloud messages to IoT Hub. The IDs are case-sensitive.
+In this section, you create a Node.js app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module can't connect to IoT hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. The ID and key are case-sensitive. Your device and module use these values to identify themselves when they send device-to-cloud messages to IoT Hub.
1. Create a directory to hold your code.
iot-hub Iot Hub Node Node Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-twin-getstarted.md
[!INCLUDE [iot-hub-selector-twin-get-started](../../includes/iot-hub-selector-twin-get-started.md)]
-At the end of this article, you will have two Node.js console apps:
+In this article, you create two Node.js console apps:
-* **AddTagsAndQuery.js**, a Node.js back-end app, which adds tags and queries device twins.
+* **AddTagsAndQuery.js**: a back-end app that adds tags and queries device twins.
-* **TwinSimulatedDevice.js**, a Node.js app, which simulates a device that connects to your IoT hub with the device identity created earlier, and reports its connectivity condition.
+* **TwinSimulatedDevice.js**: a simulated device app that connects to your IoT hub and reports its connectivity condition.
> [!NOTE]
-> The article [Azure IoT SDKs](iot-hub-devguide-sdks.md) provides information about the Azure IoT SDKs that you can use to build both device and back-end apps.
->
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
To complete this article, you need:
* Node.js version 10.0.x or later.
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
- * Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub). ## Get the IoT hub connection string
To complete this article, you need:
## Create the service app
-In this section, you create a Node.js console app that adds location metadata to the device twin associated with **myDeviceId**. It then queries the device twins stored in the IoT hub selecting the devices located in the US, and then the ones that are reporting a cellular connection.
+In this section, you create a Node.js console app that adds location metadata to the device twin associated with **myDeviceId**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.
1. Create a new empty folder called **addtagsandqueryapp**. In the **addtagsandqueryapp** folder, create a new package.json file using the following command at your command prompt. The `--yes` parameter accepts all the defaults.
In this section, you create a Node.js console app that adds location metadata to
![See the one device in the query results](media/iot-hub-node-node-twin-getstarted/service1.png)
-In the next section, you create a device app that reports the connectivity information and changes the result of the query in the previous section.
+In the next section, you create a device app that reports connectivity information and changes the result of the query in the previous section.
## Create the device app
-In this section, you create a Node.js console app that connects to your hub as **myDeviceId**, and then updates its device twin's reported properties to contain the information that it is connected using a cellular network.
+In this section, you create a Node.js console app that connects to your hub as **myDeviceId**, and then updates its device twin's reported properties to confirm that it's connected using a cellular network.
1. Create a new empty folder called **reportconnectivity**. In the **reportconnectivity** folder, create a new package.json file using the following command at your command prompt. The `--yes` parameter accepts all the defaults.
In this section, you create a Node.js console app that connects to your hub as *
![Show myDeviceId in both query results](media/iot-hub-node-node-twin-getstarted/service2.png)
-## Next steps
+In this article, you:
-In this article, you configured a new IoT hub in the Azure portal, and then created a device identity in the IoT hub's identity registry. You added device metadata as tags from a back-end app, and wrote a simulated device app to report device connectivity information in the device twin. You also learned how to query this information using the SQL-like IoT Hub query language.
+* Configured a new IoT hub in the Azure portal
+* Created a device identity in the IoT hub's identity registry
+* Added device metadata as tags from a back-end app
+* Reported device connectivity information in the device twin
+* Queried the device twin information using the SQL-like IoT Hub query language
+
+## Next steps
-Use the following resources to learn how to:
+To learn how to:
-* send telemetry from devices with the [Get started with IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) article,
+* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
-* configure devices using device twin's desired properties with the [Use desired properties to configure devices](tutorial-device-twins.md) article,
+* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md)
-* control devices interactively (such as turning on a fan from a user-controlled app), with the [Use direct methods](./quickstart-control-device.md?pivots=programming-language-nodejs) quickstart.
+* Control devices interactively, such as turning on a fan from a user-controlled app, see [Quickstart: Control a device connected to an IoT hub](./quickstart-control-device.md?pivots=programming-language-nodejs)
iot-hub Iot Hub Portal Csharp Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-portal-csharp-module-twin-getstarted.md
[!INCLUDE [iot-hub-selector-module-twin-getstarted](../../includes/iot-hub-selector-module-twin-getstarted.md)]
-> [!NOTE]
-> [Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provide visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system based devices or firmware devices, module identities and module twins allow for isolated configuration and conditions for each component.
->
+[Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provide visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system-based devices or firmware devices, module identities and module twins allow for isolated configuration and conditions for each component.
-In this article, you will learn:
+In this article, you will learn how to:
-* How to create a module identity in the portal.
+* Create a module identity in the portal.
-* How to use a .NET device SDK to update the module twin from your device.
+* Use a .NET device SDK to update the module twin from your device.
> [!NOTE]
-> For information about the Azure IoT SDKs that you can use to build both applications to run on devices and your solution back end, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
->
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites * Visual Studio.
-* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
- * An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md). * A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
Save the **Connection string (primary key)**. You use it in the next section to
## Update the module twin using .NET device SDK
-You've successfully created the module identity in your IoT Hub. Let's try to communicate to the cloud from your simulated device. Once a module identity is created, a module twin is implicitly created in IoT Hub. In this section, you will create a .NET console app on your simulated device that updates the module twin reported properties.
+Now let's communicate with the cloud from your simulated device. Once a module identity is created, a module twin is implicitly created in IoT Hub. In this section, you will create a .NET console app on your simulated device that updates the module twin reported properties.
### Create a Visual Studio project
-To create an app that updates the module twin reported properties, follow these steps:
+To create an app that updates the module twin reported properties, follow these steps:
1. In Visual Studio, select **Create a new project**, then choose **Console App (.NET Framework)**, and select **Next**.
To create an app that updates the module twin reported properties, follow these
### Install the latest Azure IoT Hub .NET device SDK
-Module identity and module twin is in public preview. It's only available in the IoT Hub pre-release device SDKs. To install it, follow these steps:
+Module identities and module twins are only available in the IoT Hub pre-release device SDKs. To install the pre-release SDK, follow these steps:
1. In Visual Studio, open **Tools** > **NuGet Package Manager** > **Manage NuGet Packages for Solution**. 1. Select **Browse**, and then select **Include prerelease**. Search for *Microsoft.Azure.Devices.Client*. Select the latest version and install.
- :::image type="content" source="./media/iot-hub-csharp-csharp-module-twin-getstarted/install-client-sdk.png" alt-text="Screenshot showing how to install the Microsoft.Azure.Devices.Client.":::
+ :::image type="content" source="./media/iot-hub-csharp-csharp-module-twin-getstarted/install-client-sdk.png" alt-text="Screenshot showing how to install the Microsoft.Azure.Devices.Client." lightbox="./media/iot-hub-csharp-csharp-module-twin-getstarted/install-client-sdk.png":::
Now you have access to all the module features.
To create your app, follow these steps:
You can build and run this app by using **F5**.
-This code sample shows you how to retrieve the module twin and update reported properties with AMQP protocol. In public preview, we only support AMQP for module twin operations.
+Now you know how to retrieve the module twin and update reported properties using the AMQP protocol.
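For comparison with the .NET flow above, a minimal sketch of the same round trip with the Python device SDK (`azure-iot-device`) follows; the module connection string is a placeholder and the reported property name is purely illustrative.

```python
from azure.iot.device import IoTHubModuleClient

# Placeholder: the module connection string copied from Module Identity Details.
MODULE_CONNECTION_STRING = "<module-connection-string>"

module_client = IoTHubModuleClient.create_from_connection_string(MODULE_CONNECTION_STRING)
module_client.connect()

# Read the current module twin, then report an illustrative property value.
twin = module_client.get_twin()
print("Reported properties before patch:", twin.get("reported"))

module_client.patch_twin_reported_properties({"deployedBy": "simulated-module"})
module_client.shutdown()
```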
## Next steps To continue getting started with IoT Hub and to explore other IoT scenarios, see:
-* [Get started with IoT Hub module identity and module twin using .NET backup and .NET device](iot-hub-csharp-csharp-module-twin-getstarted.md)
+* [Getting started with device management](iot-hub-node-node-device-management-get-started.md)
* [Getting started with IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Iot Hub Python Python Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-python-module-twin-getstarted.md
[!INCLUDE [iot-hub-selector-module-twin-getstarted](../../includes/iot-hub-selector-module-twin-getstarted.md)]
-> [!NOTE]
-> [Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identities and device twins, but provide finer granularity. While Azure IoT Hub device identities and device twins enable a back-end application to configure a device and provide visibility on the device's conditions, module identities and module twins provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system based devices or firmware devices, they allow for isolated configuration and conditions for each component.
+[Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identities and device twins, but provide finer granularity. While Azure IoT Hub device identities and device twins enable a back-end application to configure a device and provide visibility on the device's conditions, module identities and module twins provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system-based devices or firmware devices, they allow for isolated configuration and conditions for each component.
> At the end of this article, you have three Python apps:
-* **CreateModule**, which creates a device identity, a module identity, and associated security keys to connect your device and module clients.
+* **CreateModule**: creates a device identity, a module identity, and associated security keys to connect your device and module clients.
-* **UpdateModuleTwinDesiredProperties**, which sends updated module twin desired properties to your IoT Hub.
+* **UpdateModuleTwinDesiredProperties**: sends updated module twin desired properties to your IoT Hub.
-* **ReceiveModuleTwinDesiredPropertiesPatch**, which receives the module twin desired properties patch on your device.
+* **ReceiveModuleTwinDesiredPropertiesPatch**: receives the module twin desired properties patch on your device.
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
In this article, you create a back-end service that adds a device in the identit
## Create a device identity and a module identity in IoT Hub
-In this section, you create a Python service app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module can't connect to IoT hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. Your device and module use these values to identify itself when it sends device-to-cloud messages to IoT Hub. The IDs are case-sensitive.
+In this section, you create a Python service app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module can't connect to IoT hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. The ID and key are case-sensitive. Your device and module use these values to identify themselves when they send device-to-cloud messages to IoT Hub.
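Roughly, the heart of that console app with the `azure-iot-hub` package looks like the sketch below. The connection string is a placeholder, and passing empty keys asks the service to generate the primary and secondary keys.

```python
from azure.iot.hub import IoTHubRegistryManager

# Placeholder: an IoT hub connection string with registry write permissions.
IOTHUB_CONNECTION_STRING = "<iothub-connection-string>"
DEVICE_ID = "myFirstDevice"
MODULE_ID = "myFirstModule"

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

# Create the device identity with SAS authentication; empty keys let the
# service generate the primary and secondary keys.
device = registry_manager.create_device_with_sas(DEVICE_ID, "", "", "enabled")

# Create the module identity under that device.
module = registry_manager.create_module_with_sas(DEVICE_ID, MODULE_ID, "", "", "")

print("Device key:", device.authentication.symmetric_key.primary_key)
print("Module key:", module.authentication.symmetric_key.primary_key)
```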
1. At your command prompt, run the following command to install the **azure-iot-hub** package:
iot-hub Iot Hub Python Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-twin-getstarted.md
[!INCLUDE [iot-hub-selector-twin-get-started](../../includes/iot-hub-selector-twin-get-started.md)]
-At the end of this article, you will have two Python console apps:
+In this article, you create two Python console apps:
-* **AddTagsAndQuery.py**, a Python back-end app, which adds tags and queries device twins.
+* **AddTagsAndQuery.py**: a back-end app that adds tags and queries device twins.
-* **ReportConnectivity.py**, a Python app, which simulates a device that connects to your IoT hub with the device identity created earlier, and reports its connectivity condition.
+* **ReportConnectivity.py**: a simulated device app that connects to your IoT hub and reports its connectivity condition.
+> [!NOTE]
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
At the end of this article, you will have two Python console apps:
## Create the service app
-In this section, you create a Python console app that adds location metadata to the device twin associated with your **{Device ID}**. It then queries the device twins stored in the IoT hub selecting the devices located in Redmond, and then the ones that are reporting a cellular connection.
+In this section, you create a Python console app that adds location metadata to the device twin associated with your **{Device ID}**. The app queries IoT hub for devices located in the US and then queries devices that report a cellular network connection.
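As a rough sketch of what that app does with the `azure-iot-hub` service SDK (the connection string, device ID, tag values, and reported property name are placeholders or illustrative):

```python
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import Twin, QuerySpecification

# Placeholders: your IoT hub service connection string and a registered device ID.
IOTHUB_CONNECTION_STRING = "<iothub-connection-string>"
DEVICE_ID = "<device-id>"

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

# Add location tags to the device twin; passing the etag guards against
# overwriting a concurrent update.
twin = registry_manager.get_twin(DEVICE_ID)
twin_patch = Twin(tags={"location": {"region": "US", "plant": "Redmond43"}})
twin = registry_manager.update_twin(DEVICE_ID, twin_patch, twin.etag)

# First query: devices tagged with the US region.
query = QuerySpecification(query="SELECT * FROM devices WHERE tags.location.region = 'US'")
for item in registry_manager.query_iot_hub(query, None, 100).items:
    print("Tagged device:", item.device_id)

# Second query: narrow to devices that also report a cellular connection.
query = QuerySpecification(
    query="SELECT * FROM devices WHERE tags.location.region = 'US' "
          "AND properties.reported.connectivity = 'cellular'"
)
for item in registry_manager.query_iot_hub(query, None, 100).items:
    print("Cellular device:", item.device_id)
```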
1. In your working directory, open a command prompt and install the **Azure IoT Hub Service SDK for Python**.
In this section, you create a Python console app that adds location metadata to
![first query showing all devices in Redmond](./media/iot-hub-python-twin-getstarted/service-1.png)
-In the next section, you create a device app that reports the connectivity information and changes the result of the query in the previous section.
+In the next section, you create a device app that reports connectivity information and changes the result of the query in the previous section.
## Create the device app
-In this section, you create a Python console app that connects to your hub as your **{Device ID}**, and then updates its device twin's reported properties to contain the information that it is connected using a cellular network.
+In this section, you create a Python console app that connects to your hub as your **{Device ID}** and then updates its device twin's reported properties to confirm that it's connected using a cellular network.
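And a matching sketch of the device side with `azure-iot-device`; the connection string is a placeholder, and the reported property simply mirrors the query in the previous sketch.

```python
from azure.iot.device import IoTHubDeviceClient

# Placeholder: the connection string of the registered device.
DEVICE_CONNECTION_STRING = "<device-connection-string>"

device_client = IoTHubDeviceClient.create_from_connection_string(DEVICE_CONNECTION_STRING)
device_client.connect()

# Report the connectivity type so the back-end query from the previous
# section starts returning this device.
device_client.patch_twin_reported_properties({"connectivity": "cellular"})

device_client.shutdown()
```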
1. From a command prompt in your working directory, install the **Azure IoT Hub Device SDK for Python**:
In this section, you create a Python console app that connects to your hub as yo
![receive desired properties on device app](./media/iot-hub-python-twin-getstarted/device-2.png)
-## Next steps
+In this article, you:
+
+* Configured a new IoT hub in the Azure portal
+* Created a device identity in the IoT hub's identity registry
+* Added device metadata as tags from a back-end app
+* Reported device connectivity information in the device twin
+* Queried the device twin information using the SQL-like IoT Hub query language
-In this article, you configured a new IoT hub in the Azure portal, and then created a device identity in the IoT hub's identity registry. You added device metadata as tags from a back-end app, and wrote a simulated device app to report device connectivity information in the device twin. You also learned how to query this information using the registry.
+## Next steps
-Use the following resources to learn how to:
+To learn how to:
-* Send telemetry from devices with the [Get started with IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) article.
+* Send telemetry from devices, see [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python).
-* Configure devices using device twin's desired properties with the [Use desired properties to configure devices](tutorial-device-twins.md) article.
+* Configure devices using device twin's desired properties, see [Tutorial: Configure your devices from a back-end service](tutorial-device-twins.md).
-* Control devices interactively (such as turning on a fan from a user-controlled app), with the [Use direct methods](./quickstart-control-device.md?pivots=programming-language-python) quickstart.
+* Control devices interactively, such as turning on a fan from a user-controlled app, see [Quickstart: Control a device connected to an IoT hub](./quickstart-control-device.md?pivots=programming-language-python).
marketplace Azure Consumption Commitment Enrollment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-consumption-commitment-enrollment.md
An offer must meet the following requirements to be enrolled in the MACC program
## Next steps - To learn more about how the MACC program benefits customers and how they can find solutions that are enabled for MACC, see [Azure Consumption Commitment benefit](/marketplace/azure-consumption-commitment-benefit).-- To learn more about how your organization can leverage Azure Marketplace, complete our Microsoft Learn module: [Simplify cloud procurement and governance with Azure Marketplace](/learn/modules/simplify-cloud-procurement-governance-azure-marketplace/)
+- To learn more about how your organization can leverage Azure Marketplace, complete our Learn module, [Simplify cloud procurement and governance with Azure Marketplace](/learn/modules/simplify-cloud-procurement-governance-azure-marketplace/)
- [Commercial marketplace transact capabilities](marketplace-commercial-transaction-capabilities-and-considerations.md#transact-publishing-option)
marketplace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/overview.md
When you create a commercial marketplace offer in Partner Center, it may be list
## Next steps -- Get an [Introduction to the Microsoft commercial marketplace](/learn/modules/intro-commercial-marketplace/) on Microsoft Learn.-- Find videos and hands on labs at [Mastering the marketplace](https://go.microsoft.com/fwlink/?linkid=2195692)
+- Get an [Introduction to the Microsoft commercial marketplace](/learn/modules/intro-commercial-marketplace/).
+- Find videos and hands-on labs at [Mastering the marketplace](https://go.microsoft.com/fwlink/?linkid=2195692)
- For new Microsoft partners who are interested in publishing to the commercial marketplace, see [Create a commercial marketplace account in Partner Center](create-account.md). - To learn more about recent and future releases, join the conversation in the [Microsoft Partner Community](https://www.microsoftpartnercommunity.com/).
migrate Concepts Migration Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-planning.md
Before finalizing your migration plan, make sure you consider and mitigate other
- **Network requirements**: Evaluate network bandwidth and latency constraints, which might cause unforeseen delays and disruptions to migration replication speed. - **Testing/post-migration tweaks**: Allow a time buffer to conduct performance and user acceptance testing for migrated apps, or to configure/tweak apps post-migration, such as updating database connection strings, configuring web servers, performing cut-overs/cleanup etc. - **Permissions**: Review recommended Azure permissions, and server/database access roles and permissions needed for migration.-- **Training**: Prepare your organization for the digital transformation. A solid training foundation is important for successful organizational change. Check out free training on [Microsoft Learn](/learn/azure/?ocid=CM_Discovery_Checklist_PDF), including courses on Azure fundamentals, solution architectures, and security. Encourage your team to exploreΓÇ»[Azure certifications](https://www.microsoft.com/learning/certification-overview.aspx?ocid=CM_Discovery_Checklist_PDF).ΓÇ»
+- **Training**: Prepare your organization for the digital transformation. A solid training foundation is important for successful organizational change. Check out [free Microsoft training](/learn/azure/?ocid=CM_Discovery_Checklist_PDF), including courses on Azure fundamentals, solution architectures, and security. Encourage your team to explore [Azure certifications](https://www.microsoft.com/learning/certification-overview.aspx?ocid=CM_Discovery_Checklist_PDF).
- **Implementation support**: Get support for your implementation if you need it. Many organizations opt for outside help to support their cloud migration. To move to Azure quickly and confidently with personalized assistance, consider an [Azure Expert Managed Service Provider](https://www.microsoft.com/solution-providers/search?cacheId=9c2fed4f-f9e2-42fb-8966-4c565f08f11e&ocid=CM_Discovery_Checklist_PDF), or [FastTrack for Azure](https://azure.microsoft.com/programs/azure-fasttrack/?ocid=CM_Discovery_Checklist_PDF).
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Collect all the values in the following table to define the packet core instance
|The data subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. | **N6 gateway** (for 5G) or **SGi gateway** (for 4G). | | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**| | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
+ | The Domain Name System (DNS) server addresses to be provided to the UEs connected to this data network. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses). We recommend that you collect these addresses to allow the UEs to resolve domain names. </br></br>This value may be an empty list if you don't want to configure a DNS server for the data network (for example, if you want to use this data network for local [UE-to-UE traffic](private-5g-core-overview.md#ue-to-ue-traffic) only). | **DNS Addresses** |
|Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses.</br></br>If you want to use [UE-to-UE traffic](private-5g-core-overview.md#ue-to-ue-traffic) in this data network, keep NAPT disabled. |**NAPT**| ## Next steps
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
For each of these networks, allocate a subnet and then identify the listed IP ad
- Default gateway. - One IP address for port 6 on the Azure Stack Edge Pro device. - One IP address for the user plane interface. For 5G, this interface is the N6 interface, whereas for 4G, it's the SGi interface.
+- Optionally, one or more Domain Name System (DNS) server addresses.
## Allocate user equipment (UE) IP address pools
Do the following for each site you want to add to your private mobile network. D
| 3. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 5 - access network</br>- Port 6 - data network</br></br>Additionally, you must have a port connected to your management network. You can choose any port from 2 to 4. | [Tutorial: Install Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-install.md) | | 4. | Connect to your Azure Stack Edge Pro device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-connect.md) | | 5. | Configure the network for your Azure Stack Edge Pro device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md) |
-| 6. | Configure a name, Domain Name System (DNS) name, and (optionally) time settings. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md) |
+| 6. | Configure a name, DNS name, and (optionally) time settings. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md) |
| 7. | Configure certificates for your Azure Stack Edge Pro device. | [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-certificates.md) | | 8. | Activate your Azure Stack Edge Pro device. | [Tutorial: Activate Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-activate.md) | | 9. | Run the diagnostics tests for the Azure Stack Edge Pro device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 5.</br>- Port 6.</br>- The port you chose to connect to the management network in Step 3.</br></br>For all other ports, you can ignore the warning.</br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
In this step, you'll create the mobile network site resource representing the ph
- Use the same value for both the **S1-MME subnet** and **S1-U subnet** fields (if this site will support 4G UEs). - Use the same value for both the **S1-MME gateway** and **S1-U gateway** fields (if this site will support 4G UEs).
-1. In the **Attached data networks** section, select **Add data network**. Use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields and select **Submit**. Note that you can only connect the packet core instance to a single data network.
+1. In the **Attached data networks** section, select **Add data network**. Use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. If you decided not to configure a DNS server, untick the **Specify DNS addresses for UEs?** checkbox.
:::image type="content" source="media/create-a-site/create-site-add-data-network.png" alt-text="Screenshot of the Azure portal showing the Add data network screen.":::
+1. Select **Submit**. Note that you can only connect the packet core instance to a single data network.
1. Select **Review + create**. 1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
Four Azure resources are defined in the template.
|**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. | | **Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. | | **Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network. |
+ | **Dns Addresses** | Enter the DNS server addresses. You can omit this if you don't want to configure a DNS server for the UEs in this data network. |
| **Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. | 1. Select **Review + create**.
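If you prefer to deploy the template with Azure PowerShell instead of the portal, the same values can be supplied as template parameters. The following is a minimal sketch only: the template file path and the parameter names (`dnsAddresses`, `naptEnabled`, `customLocation`) are assumptions and must match the parameters your template actually defines.

```powershell
# Hedged sketch: deploy the site ARM template from Azure PowerShell.
# The template path and parameter names are assumptions; check them against your template.
New-AzResourceGroupDeployment `
  -ResourceGroupName "myResourceGroup" `
  -TemplateFile ".\create-site-template.json" `
  -TemplateParameterObject @{
      dnsAddresses   = @("10.0.0.4", "10.0.0.5")   # DNS servers provided to UEs; omit if not required
      naptEnabled    = "Enabled"                    # or $true, depending on how the template defines this parameter
      customLocation = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ExtendedLocation/customLocations/<custom-location-name>"
  }
```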
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
The following Azure resources are defined in the template.
|**Data Network Name** | Enter the name of the data network. | |**Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. | |**Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network.|
+ | **Dns Addresses** | Enter the DNS server addresses. You can omit this if you don't want to configure a DNS server for the UEs in this data network. |
|**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site.| 1. Select **Review + create**.
purview How To Data Owner Policies Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-azure-sql-db.md
Previously updated : 07/20/2022 Last updated : 08/11/2022 # Provision access by data owner for Azure SQL DB (preview)
This how-to guide describes how a data owner can delegate authoring policies in
[!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)] ### Azure SQL Database configuration
-Each Azure SQL Database server needs a Managed Identity assigned to it.
-You can use the following PowerShell script:
+Each Azure SQL Database server needs a Managed Identity assigned to it. You can do this from the Azure portal: navigate to the Azure SQL Server that hosts the Azure SQL DB, select **Identity** on the side menu, set the status to *On*, and then save. See the following screenshot:
+![Screenshot shows how to assign system managed identity to Azure SQL Server.](./media/how-to-data-owner-policies-sql//assign-identity-azure-sql-db.png)
++
+You will also need to enable external policy-based authorization on the server. You can do this in PowerShell:
+ ```powershell
Connect-AzAccount
$context = Get-AzSubscription -SubscriptionId xxxx-xxxx-xxxx-xxxx
Set-AzContext $context
-Set-AzSqlServer -ResourceGroupName "RESOURCEGROUPNAME" -ServerName "SERVERNAME" -AssignIdentity
-```
-You will also need to enable external policy based authorization on the server.
-
-```powershell
$server = Get-AzSqlServer -ResourceGroupName "RESOURCEGROUPNAME" -ServerName "SERVERNAME"
# Initiate the call to the REST API to set externalPolicyBasedAuthorization to true
This section contains a reference of how actions in Microsoft Purview data polic
Check blog, demo and related how-to guides * [Demo of access policy for Azure Storage](https://learn-video.azurefd.net/vod/player?id=caa25ad3-7927-4dcc-88dd-6b74bcae98a2) * [Concepts for Microsoft Purview data owner policies](./concept-data-owner-policies.md)
-* Blog: [Private preview: controlling access to Azure SQL at scale with policies in Purview](https://techcommunity.microsoft.com/t5/azure-sql-blog/private-preview-controlling-access-to-azure-sql-at-scale-with/ba-p/2945491)
+* Blog: [Microsoft Purview Data Policy for SQL DevOps access provisioning now in public preview](https://techcommunity.microsoft.com/t5/microsoft-purview-blog/microsoft-purview-data-policy-for-sql-devops-access-provisioning/ba-p/3403174)
+* Blog: [Controlling access to Azure SQL at scale with policies in Purview](https://techcommunity.microsoft.com/t5/azure-sql-blog/private-preview-controlling-access-to-azure-sql-at-scale-with/ba-p/2945491)
* [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md) * [Enable Microsoft Purview data owner policies on an Arc-enabled SQL Server](./how-to-data-owner-policies-arc-sql-server.md)
purview How To Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-storage.md
Previously updated : 05/27/2022 Last updated : 8/11/2022 # Access provisioning by data owner to Azure Storage datasets (Preview)
Execute the steps in the **Create a new policy** and **Publish a policy** sectio
>[!Important] > - Publish is a background operation. Azure Storage accounts can take up to **2 hours** to reflect the changes.
-## Additional information
-- Policy statements set below container level on a Storage account are supported. If no access has been provided at Storage account level or container level, then the App that requests the data must execute a direct access by providing a fully qualified name to the data object. If the App attempts to crawl down the hierarchy starting from the Storage account or Container (like Storage Explorer does), and there's no access at that level, the request will fail. The following documents show examples of how to perform a direct access. See also the blogs in the *Next steps* section of this how-to-guide.
+## Data Consumption
+- Data consumers can access the requested dataset using tools such as Power BI or an Azure Synapse Analytics workspace.
+- Sub-container access: Policy statements set below container level on a Storage account are supported. However, users will not be able to browse to the data asset using the Azure portal's Storage browser or the Microsoft Azure Storage Explorer tool if access is granted only at the file or folder level of the Azure Storage account. This is because these apps attempt to crawl down the hierarchy starting at container level, and the request fails because no access has been granted at that level. Instead, the app that requests the data must access it directly by providing a fully qualified name for the data object. The following documents show examples of how to perform a direct access, and a minimal PowerShell sketch follows the links below. See also the blogs in the *Next steps* section of this how-to guide.
- [*abfs* for ADLS Gen2](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md#access-files-from-the-cluster) - [*az storage blob download* for Blob Storage](../storage/blobs/storage-quickstart-blobs-cli.md#download-a-blob)+
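If you're scripting with Azure PowerShell rather than the tools linked above, the same principle applies: address the object by its fully qualified name instead of browsing the hierarchy. This is a minimal sketch only, assuming the Az.Storage module and hypothetical account, container, and blob names.

```powershell
# Sketch: direct access to a single blob by its fully qualified name.
# The account, container, and blob names below are placeholders.
Connect-AzAccount

# Use Azure AD (OAuth) authorization so that the Microsoft Purview policy is evaluated.
$ctx = New-AzStorageContext -StorageAccountName "contosodata" -UseConnectedAccount

# Download the blob directly; no attempt is made to list the container hierarchy.
Get-AzStorageBlobContent -Context $ctx -Container "finance" -Blob "2022/reports/q2-summary.csv" -Destination "."
```

Because the blob is addressed directly, the request can succeed even when listing the parent container would be denied.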
+## Additional information
- Creating a policy at Storage account level will enable the Subjects to access system containers, for example *$logs*. If this is undesired, first scan the data source(s) and then create finer-grained policies for each (that is, at container or subcontainer level). - The root blob in a container will be accessible to the Azure AD principals in a Microsoft Purview *allow*-type RBAC policy if the scope of such policy is either subscription, resource group, Storage account or container in Storage account. - The root container in a Storage account will be accessible to the Azure AD principals in a Microsoft Purview *allow*-type RBAC policy if the scope of such policy is either subscription, resource group, or Storage account.
purview How To Deploy Profisee Purview Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-deploy-profisee-purview-integration.md
Last updated 07/15/2022
-# Microsoft Purview - Profisee Integration
+# Microsoft Purview - Profisee MDM Integration
-Master data management (MDM) is a key pillar of any unified data governance solution. Microsoft Purview supports master data management with our partner [Profisee](https://profisee.com/profisee-advantage/). This tutorial compiles reference and integration deployment materials in one place to get you started on your MDM journey with Microsoft Purview through our integration with Profisee.
+Master data management (MDM) is a key pillar of any unified data governance solution. Microsoft Purview supports master data management with our partner [Profisee](https://profisee.com/profisee-advantage/). This tutorial compiles reference and integration deployment materials in one place: first, to put Microsoft Purview unified data governance and MDM in the context of an Azure data estate, and more importantly, to get you started on your MDM journey with Microsoft Purview through our integration with Profisee.
-## What, why and how of MDM - Master Data Management?
+## Why are Data Governance and Master Data Management (MDM) essential to the modern data estate?
-Many businesses today have large data estates that move massive amounts of data between applications, storage systems, analytics systems, and across departments within their organization. During these movements, and over time, data can be accidentally duplicated or become fragmented, and become stale or out of date. Hence, accuracy becomes a concern when using this data to drive insights into your business.
+All organizations have multiple data sources, and the larger the organization the greater the number of data sources. Typically, there will be ERPs, CRMs, legacy applications, regional versions of each of these, external data feeds, and so on. Most of these businesses move massive amounts of data between applications, storage systems, analytics systems, and across departments within their organization. During these movements, and over time, data can get duplicated or become fragmented, and become stale or out of date. Hence, accuracy becomes a concern when using this data to drive insights into your business.
+
+Inevitably, data that was created in different 'silos' with different (or no) governance standards to meet the needs of their respective applications will always have issues. When you look at the data drawn from each of these applications, you'll see that it's inconsistent in terms of standardization. Often, there are numerous inconsistencies in the values themselves, and most often individual records are incomplete. In fact, it would be surprising if these inconsistencies weren't the case, but they do present a problem. What is needed is data that is complete, consistent, and accurate.
To protect the quality of data within an organization, master data management (MDM) arose as a discipline that creates a source of truth for enterprise data so that an organization can check and validate their key assets. These key assets, or master data assets, are critical records that provide context for a business. For example, master data might include information on specific products, employees, customers, financial structures, suppliers, or locations. Master data management ensures data quality across an entire organization by maintaining an authoritative consolidated de-duplicated set of the master data records, and ensuring data remains consistent across your organization's complete data estate.
-As an example, it can be difficult for a company to have a clear, single view of their customers. Customer data may differ between systems, there may be duplicated records due to incorrect entry, or shipping and customer service systems may vary due to name, address, or other attributes. Master data management consolidates all this differing information about the customer it into a single, standard format that can be used to check data across an organizations entire data estate. Not only does this improve quality of data by eliminating mismatched data across departments, but it ensures that data analyzed for business intelligence (BI) and other applications is trustworthy and up to date, reduces data load by removing duplicate records across the organization, and streamlines communications between business systems.
+As an example, it can be difficult for a company to have a clear, single view of their customers. Customer data may differ between systems, there may be duplicated records due to incorrect entry, or shipping and customer service systems may vary due to name, address, or other attributes. Master data management consolidates and standardizes all this differing information about the customer. This standardization process may involve automatic or user-defined rules, validations and checks. It's the job of the MDM system to ensure your data remains consistent within the framework of these rules over time. Not only does this improve quality of data by eliminating mismatched data across departments, but it ensures that data analyzed for business intelligence (BI) and other applications is trustworthy and up to date, reduces data load by removing duplicate records across the organization, and streamlines communications between business systems.
-More Details on [Profisee MDM](https://profisee.com/master-data-management-what-why-how-who/) and [Profisee-Purview MDM Concepts and Azure Architecture](/azure/architecture/reference-architectures/data/profisee-master-data-management-purview).
+The ability to consolidate data from multiple disparate systems is key if we want to use the data to drive business insights and operational efficiencies, or any form of 'digital transformation'. What we need in that case is high-quality, trusted data that is ready to use, whether it's being consumed in basic enterprise metrics or advanced AI algorithms. Bridging this gap is the job of data governance and MDM, and in the Azure world that means [Microsoft Purview](https://azure.microsoft.com/services/purview/) and [Profisee MDM](https://profisee.com/platform).
+
-## Microsoft Purview & Profisee Integrated MDM - Better Together!
+While governance systems can *define* data standards, MDM is where they're *enforced*. Data from different systems can be matched and merged, validated against data quality and governance standards, and remediated where required. Then the new corrected and validated 'master' data can be shared to downstream analytics systems and then back into source systems to drive operational improvements. By properly creating and maintaining enterprise master data, we ensure that data is no longer a liability and cause for concern, but an asset of the business that enables improved operation and innovation.
-### Profisee MDM: True SaaS experience
+More Details on [Profisee MDM](https://profisee.com/master-data-management-what-why-how-who/) and [Profisee-Purview MDM Concepts and Azure Architecture](/azure/architecture/reference-architectures/data/profisee-master-data-management-purview).
-A fully managed instance of Profisee MDM hosted in the Azure cloud. Full turn-key service for the easiest and fastest MDM deployment.
+## Microsoft Purview & Profisee MDM - Better Together!
-- **Platform and Management in One** - Apply a true, end-to-end SaaS platform with one agreement and no third parties. -- **Industry-leading Cloud Service** - Hosted on Azure for industry-leading scalability and availability. -- **The fastest path to trusted data** - Leave the networking, firewalls and storage to us so you can deploy in minutes.
+Microsoft Purview and Profisee MDM are often discussed as being a 'Better Together' value proposition due to the complementary nature of the solutions. Microsoft Purview excels at cataloging data sources and defining data standards, while Profisee MDM enforces those standards across master data drawn from multiple siloed sources. It's clear not only that either system has independent value to offer, but also that each reinforces the other for a natural 'Better Together' synergy that goes deeper than the independent offerings.
+ - Common technical foundation - Profisee was born out of Microsoft technologies using common tools, databases & infrastructure so any 'Microsoft shop' will find the Profisee solution familiar. In fact, for many years Profisee MDM was built on Microsoft Master Data Services (MDS) and now that MDS is nearing end of life, Profisee is the premier upgrade/replacement solution for MDS.
+ - Developer collaboration and joint development - Profisee and Purview developers have collaborated extensively to ensure a good complementary fit between their respective solutions to deliver a seamless integration that meets the needs of their customers.
+ - Joint sales and deployments - Profisee has more MDM deployments on Azure, and jointly with Purview, than any other MDM vendor, and can be purchased through Azure Marketplace. In FY2023 Profisee is the only MDM vendor with a Top Tier Microsoft partner certification available as an IaaS/CaaS or SaaS offering through Azure Marketplace.
+ - Rapid and reliable deployment - Rapid and reliable deployment is critical for any enterprise software, and Gartner points out that Profisee has more implementations taking under 90 days than any other MDM vendor.
+ - Inherently multi-domain - Profisee offers an inherently multi-domain approach to MDM where there are no limitations to the number or specificity of master data domains. This design aligns well with customers looking to modernize their data estate who may start with a limited number of domains, but ultimately will benefit from maximizing domain coverage (matched to their data governance coverage) across their whole data estate.
+ - Engineered for Azure - Profisee has been engineered to be cloud-native with options for both SaaS and managed IaaS/CaaS deployments on Azure (see next section).
-### Profisee MDM: Ultimate PaaS flexibility
+## Profisee MDM: Deployment Flexibility - Turnkey SaaS Experience or IaaS/CaaS Flexibility
+Profisee MDM has been engineered for a cloud-native experience and may be deployed on Azure in two ways: SaaS and an Azure IaaS/CaaS/Kubernetes cluster.
-Complete deployment flexibility and control, using the most efficient and low-maintenance option on the [Microsoft Azure](https://azure.microsoft.com/) cloud or on-premises.
+### Turnkey SaaS Experience
+A fully managed instance of Profisee MDM hosted by Profisee in the Azure cloud. Full turn-key service for the easiest and fastest MDM deployment. Profisee MDM SaaS can be purchased on [Azure Marketplace Profisee MDM - SaaS](https://portal.azure.com/#view/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/profisee.profisee_saas_private/product~/).
+- **Platform and Management in one** - Leverage a true, end-to-end SaaS platform with one agreement and no third parties.
+- **Industry-leading Cloud service** - Hosted on Azure for industry-leading scalability and availability.
+- **The fastest path to Trusted Data** - Deploy in minutes with minimal technical knowledge; leave the networking, firewalls, and storage to us.
+### Ultimate IaaS/CaaS Flexibility
+Complete deployment flexibility and control, using the most efficient and low-maintenance option on the [Microsoft Azure](https://azure.microsoft.com/) Kubernetes Service, functioning as a customer-hosted, fully managed IaaS/CaaS (container-as-a-service) deployment. The section below on "Microsoft Purview - Profisee integration deployment on Azure Kubernetes Service (AKS)" describes this deployment route in detail.
- **Modern Cloud Architecture** - Platform available as a containerized Kubernetes service. -- **Complete Flexibility & Autonomy** - Available in Azure, AWS, Google Cloud or on-prem. -- **Fast to Deploy, Easy to Maintain** - Fully containerized configuration streamlines patches and upgrades.
+- **Complete Flexibility & Autonomy** - Available in Azure, AWS, Google Cloud or on-premises.
+- **Fast to Deploy, Easy to Maintain** - 100% containerized configuration streamlines patches and upgrades.
More Details on [Profisee MDM Benefits On Modern Cloud Architecture](https://profisee.com/our-technology/modern-cloud-architecture/), [Profisee Advantage Videos](https://profisee.com/profisee-advantage/) and why it fits best with [Microsoft Azure](https://azure.microsoft.com/) cloud deployments!
-## Microsoft Purview - Profisee reference architecture
+## Microsoft Purview - Profisee Reference Architecture
+
+The reference architecture shows how both Microsoft Purview and Profisee MDM work together to provide a foundation of high-quality, trusted data for the Azure data estate. It's also available as a short video walk-through.
+
+**Video: [Profisee Reference Architecture: MDM and Governance for Azure](https://profisee.wistia.com/medias/k72zte2wbr)**
:::image type="content" alt-text="Diagram of Profisee-Purview Reference Architecture." source="./medim-reference-architecture.png":::
+1. Scan & classify metadata from LOB systems - uses pre-built Purview connectors to scan data sources and populate the Purview Data Catalog
+2. Publish master data model to Purview - any master data entities created in Profisee MDM are seamlessly published into Purview to further populate the Purview Data Catalog and ensure Purview is 'aware' of this critical source of data
+3. Enrich master data model with governance details - Governance Data Stewards can enrich master data entity definitions with data dictionary and glossary information as well as ownership and sensitive data classifications, etc. in Purview
+4. Leverage enriched governance data for data stewardship - any definitions and metadata available on Purview are visible in real-time in Profisee as guidance for the MDM Data Stewards
+5. Load source data from business applications - Azure Data Factory extracts data from source systems with 100+ pre-built connectors and/or REST gateway
+6. Transactional and unstructured data is loaded to the downstream analytics solution - All 'raw' source data can be loaded to an analytics database such as Synapse (Synapse is generally the preferred analytics database, but others such as Snowflake are also common). Analysis on this raw information without proper master ('golden') data will be subject to inaccuracy as data overlaps, mismatches, and conflicts won't yet have been resolved.
+7. Master data from source systems is loaded to the Profisee MDM application - Multiple streams of 'master' data are loaded to Profisee MDM. Master data is the data that defines a domain entity such as customer, product, asset, location, vendor, patient, household, menu item, ingredient, and so on. This data is typically present in multiple systems, and resolving differing definitions and matching and merging this data across systems is critical to the ability to use any cross-system data in a meaningful way.
+8. Master data is standardized, matched, merged, enriched and validated according to governance rules - Although data quality and governance rules may be defined in other systems (such as Purview), Profisee MDM is where they're enforced. Source records are matched and merged both within and across source systems to create the most complete and correct record possible. Data quality rules check each record for compliance with business and technical requirements.
+9. Extra data stewardship to review and confirm matches, data quality, and data validation issues, as required - Any record failing validation or matching with only a low probability score is subject to remediation. To remediate failed validations, a workflow process assigns records requiring review to Data Stewards who are experts in their business data domain. Once records have been verified or corrected, they're ready to use as a 'golden record' master.
+10. Direct access to curated master data including secure data access for reporting in Power BI - Power BI users may report directly on master data through a dedicated Power BI Connector that recognizes and enforces role-based security and hides various system fields for simplicity.
+11. High-quality, curated master data published to downstream analytics solution - Verified master data can be published out to any target system using Azure Data Factory. Master data, including the parent-child lineage of merged records, is published into Azure Synapse (or wherever the 'raw' source transactional data was loaded). With this combination of properly curated master data plus transactional data, we have a solid foundation of trusted data for further analysis.
+12. Visualization and analytics with high-quality master data eliminates common data quality issues and delivers improved insights - Irrespective of the tools used for analysis, including machine learning and visualization, well-curated master data forms a better and more reliable data foundation. The alternative is to use whatever information you can get, and risk misleading results that can damage the business.
+ ### Reference architecture guides/reference documents - [Data Governance with Profisee and Microsoft Purview](/azure/architecture/reference-architectures/data/profisee-master-data-management-purview) - [Operationalize Profisee with ADF Azure Data Factory, Azure Synapse Analytics and Power BI](/azure/architecture/reference-architectures/data/profisee-master-data-management-data-factory) - [MDM on Azure Overview](/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/govern-master-data)
-### Example scenario: Business & technical use case
-
-Let's take an example of a sample manufacturing company working across multiple data sources; it uses ADF to load the business critical data sources into Profisee, which is when Profisee works its magic and finds out the golden records and matching records and then we finally are able to enrich the metadata with Microsoft Purview (updates made by Microsoft Purview on Classifications, Sensitivity Labels, Glossary and all other Catalog features are reflected seamlessly into Profisee). Finally, they connect the enriched metadata detected by Microsoft Purview and cleansed/curated data by Profisee with Power BI or Azure ML for advanced analytics.
-
-## Microsoft Purview - Profisee integration SaaS deployment on Azure Kubernetes Service (AKS) guide
+## Microsoft Purview - Profisee integration deployment on Azure Kubernetes Service (AKS)
+Go to [https://github.com/Profisee/kubernetes](https://github.com/Profisee/kubernetes) and select Microsoft Purview [**Azure ARM**]. The deployment process detailed below is owned and hosted by you on your Azure subscription as an IaaS / CaaS (container-as-a-service) AKS Cluster.
1. [Create a user-assigned managed identity in Azure](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities#create-a-user-assigned-managed-identity). You must have a managed identity created to run the deployment. This managed identity must have the following permissions when running a deployment. After the deployment is done, the managed identity can be deleted. Based on your ARM template choices, you'll need some or all of the following roles and permissions assigned to your managed identity: - Contributor role to the resource group where AKS will be deployed. It can either be assigned directly to the resource group **OR** at the subscription level and down.
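If you'd rather script this prerequisite, a minimal sketch with Azure PowerShell might look like the following. The resource group, identity name, location, and scope are placeholders, and the linked article above remains the authoritative procedure.

```powershell
# Hedged sketch: create a user-assigned managed identity and grant it Contributor
# on the resource group where AKS will be deployed. All names are placeholders.
$rg = "profisee-deployment-rg"

$identity = New-AzUserAssignedIdentity -ResourceGroupName $rg -Name "profisee-deploy-identity" -Location "eastus"

# Role assignments can take a few minutes to propagate.
New-AzRoleAssignment -ObjectId $identity.PrincipalId `
  -RoleDefinitionName "Contributor" `
  -ResourceGroupName $rg
```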
Recommended: Keep it to "Yes, use default Azure DNS". Choosing Yes, the deployer
:::image type="content" alt-text="Image 12 - Screenshot of Profisee Azure ARM Wizard Select Outputs Get FinalDeployment URL." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-click-outputs-get-final-deployment-url.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-click-outputs-get-final-deployment-url.png"::: -- Populate and hydrate data to the newly installed Profisee environment by installing FastApp. Go to your Profisee SaaS deployment URL and select **/Profisee/api/client**. It should look something like - "https://[profisee_name].[region].cloudapp.azure.com/profisee/api/client".
+- Populate and hydrate data to the newly installed Profisee environment by installing FastApp. Go to your Profisee deployment URL and select **/Profisee/api/client**. It should look something like - "https://[profisee_name].[region].cloudapp.azure.com/profisee/api/client". Select the downloads for the "Profisee FastApp Studio" utility and the "Profisee Platform Tools". Install both of these tools on your local client machine.
+
+ :::image type="content" alt-text="Image 13 - Screenshot of Profisee Client Tools Download." source="./media/how-to-deploy-profisee-purview/profisee-download-fastapp-tools.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-download-fastapp-tools.png":::
+
+- Log in to FastApp Studio and perform the rest of the MDM administration and configuration management for Profisee. Once you log in with the administrator email address supplied during the setup, you should be able to see the administration menu on the left pane of Profisee FastApp Studio. Navigate to these menus and perform the rest of your MDM journey using the FastApp tool. Being able to see the administration menu, as shown in the image below, confirms successful installation of Profisee on the Azure platform.
+
+ :::image type="content" alt-text="Image 14 - Screenshot of Profisee FastApp Studio once you sign in." source="./media/how-to-deploy-profisee-purview/profisee-fastapp-studio-home-screen.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-fastapp-studio-home-screen.png":::
+
+- As a final validation step to ensure successful installation, and to check whether Profisee has been successfully connected to your Microsoft Purview instance, go to **/Profisee/api/governance/health**. It should look something like - "https://[profisee_name].[region].cloudapp.azure.com/Profisee/api/governance/health". The output response will indicate **"Status": "Healthy"** for all the Purview subsystems.
+
+```json
+{
+ "OverallStatus": "Healthy",
+ "TotalCheckDuration": "0:XXXXXXX",
+ "DependencyHealthChecks": {
+ "purview_service_health_check": {
+ "Status": "Healthy",
+ "Duration": "00:00:NNNN",
+ "Description": "Successfully connected to Purview."
+ },
+ "governance_service_health_check": {
+ "Status": "Healthy",
+ "Duration": "00:00:NNNN",
+ "Description": "Purview cache loaded successfully.
+ Total assets: NNN; Instances: 1; Entities: NNN; Attributes: NNN; Relationships: NNN; Hierarchies: NNN"
+ },
+ "messaging_db_health_check": {
+ "Status": "Healthy",
+ "Duration": "00:00:NNNN",
+ "Description": null
+ },
+ "logging_db_health_check": {
+ "Status": "Healthy",
+ "Duration": "00:00:NNNN",
+ "Description": null
+ }
+ }
+}
+```
+An output response that looks similar to the above confirms successful installation, completes all the deployment steps, and validates that Profisee has been successfully connected to your Microsoft Purview instance and that the two systems are able to communicate properly.
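If you prefer to call the health endpoint from a terminal rather than a browser, a sketch along the following lines should work. The host name is a placeholder for your own deployment URL, and depending on your configuration the endpoint may require authentication.

```powershell
# Sketch: call the Profisee governance health endpoint and inspect the Purview checks.
# Replace the host name with your own deployment URL.
$healthUrl = "https://my-profisee.eastus.cloudapp.azure.com/Profisee/api/governance/health"

$health = Invoke-RestMethod -Method Get -Uri $healthUrl
$health.OverallStatus
$health.DependencyHealthChecks.purview_service_health_check.Status
```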
## Next steps
-Through this guide, we learned how to set up and deploy a Microsoft Purview-Profisee integration.
-For more usage details on Profisee and Profisee FastApp, especially how to configure data models, data quality, MDM and various other features of Profisee - Register on [Profisee Academy Tutorials and Demos](https://profisee.com/demo/) for further detailed tutorials on the Profisee side of MDM!
+Through this guide, we learned about the importance of MDM in driving and supporting data governance in the context of the Azure data estate, and how to set up and deploy a Microsoft Purview-Profisee integration.
+For more usage details on Profisee MDM, register for scheduled trainings, live product demonstrations, and Q&A on [Profisee Academy Tutorials and Demos](https://profisee.com/demo/)!
purview How To Enable Data Use Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-management.md
Previously updated : 4/21/2022 Last updated : 8/10/2022
To disable Data Use Management for a source, resource group, or subscription, a
1. Set the **Data Use Management** toggle to **Disabled**. ## Additional considerations related to Data Use Management- - Make sure you write down the **Name** you use when registering in Microsoft Purview. You will need it when you publish a policy. The recommended practice is to make the registered name exactly the same as the endpoint name. - To disable a source for *Data Use Management*, remove it first from being bound (i.e. published) in any policy. - While user needs to have both data source *Owner* and Microsoft Purview *Data source admin* to enable a source for *Data Use Management*, either of those roles can independently disable it.-- Make sure you write down the **Name** you use when registering in Microsoft Purview. You will need it when you publish a policy. The recommended practice is to make the registered name exactly the same as the endpoint name.-- To disable a source for *Data Use Management*, remove it first from being bound (i.e., published) in any policy.-- While user needs to have both data source *Owner* and Microsoft Purview *Data source admin* to enable a source for *Data Use Management*, either of those roles can independently disable it. - Disabling *Data Use Management* for a subscription will disable it also for all assets registered in that subscription. > [!WARNING] > **Known issues** related to source registration
-> - Moving data sources to a different resource group or subscription is not yet supported. If want to do that, de-register the data source in Microsoft Purview before moving it and then register it again after that happens.
+> - Moving data sources to a different resource group or subscription is not supported. If you want to do that, de-register the data source in Microsoft Purview before moving it and then register it again after the move. Note that policies are bound to the data source ARM path. Changing the data source subscription or resource group makes policies ineffective.
> - Once a subscription gets disabled for *Data Use Management* any underlying assets that are enabled for *Data Use Management* will be disabled, which is the right behavior. However, policy statements based on those assets will still be allowed after that. ## Data Use Management best practices
purview How To Lineage Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-lineage-powerbi.md
Previously updated : 03/30/2021 Last updated : 08/11/2022 # How to get lineage from Power BI into Microsoft Purview
purview How To Lineage Sql Server Integration Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-lineage-sql-server-integration-services.md
Previously updated : 06/30/2021 Last updated : 08/11/2022 # How to get lineage from SQL Server Integration Services (SSIS) into Microsoft Purview
search Search Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-filters.md
You can't modify existing fields to make them filterable. Instead, you need to a
Text filters match string fields against literal strings that you provide in the filter: `$filter=Category eq 'Resort and Spa'`
-Unlike full-text search, there is no lexical analysis or word-breaking for text filters, so comparisons are for exact matches only. For example, assume a field *f* contains "sunny day", `$filter=f eq 'Sunny'` does not match, but `$filter=f eq 'sunny day'` will.
+Unlike full-text search, there is no lexical analysis or word-breaking for text filters, so comparisons are for exact matches only. For example, if a field *f* contains "sunny day", `$filter=f eq 'sunny'` does not match, but `$filter=f eq 'sunny day'` will.
-Text strings are case-sensitive. There is no lower-casing of upper-cased words: `$filter=f eq 'Sunny day'` will not find "sunny day".
+Text strings are case-sensitive, which means text filters are case-sensitive by default. For example, `$filter=f eq 'Sunny day'` will not find "sunny day". However, you can use a [normalizer](search-normalizers.md) to make filtering case-insensitive.
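To see this behavior end to end, you can issue a filtered query against the [Search Documents REST API](/rest/api/searchservice/search-documents). The following is a minimal sketch only: the service name, index name, field name, and key are placeholders for your own values.

```powershell
# Sketch: run a filtered query against a hypothetical index.
# The service name, index name, field name, and API key are placeholders.
$service = "my-search-service"
$index   = "hotels-sample-index"
$apiKey  = "<query-api-key>"

$uri  = "https://$service.search.windows.net/indexes/$index/docs/search?api-version=2020-06-30"
$body = @{ search = "*"; filter = "Category eq 'Resort and Spa'"; count = $true } | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri $uri -Body $body -ContentType "application/json" -Headers @{ "api-key" = $apiKey }
```

Because the comparison is exact and case-sensitive, `Category eq 'resort and spa'` would return no matches unless the field uses a normalizer.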
### Approaches for filtering on text
To work with more examples, see [OData Filter Expression Syntax > Examples](./se
+ [Search Documents REST API](/rest/api/searchservice/search-documents) + [Simple query syntax](/rest/api/searchservice/simple-query-syntax-in-azure-search) + [Lucene query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search)
-+ [Supported data types](/rest/api/searchservice/supported-data-types)
++ [Supported data types](/rest/api/searchservice/supported-data-types)
search Search Synapseml Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-synapseml-cognitive-services.md
+
+ Title: Use Search with SynapseML
+
+description: Add full text search to big data on Apache Spark that's been loaded and transformed through the open source SynapseML library. In this walkthrough, you'll load invoice files into data frames, apply machine learning through SynapseML, then send it into a generated search index.
++++++ Last updated : 08/09/2022++
+# Add search to AI-enriched data from Apache Spark using SynapseML
+
+In this Azure Cognitive Search article, learn how to add data exploration and full text search to a SynapseML solution.
+
+[SynapseML](/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/) is an open source library that supports massively parallel machine learning over big data. One of the ways in which machine learning is exposed is through *transformers* that perform specialized tasks. Transformers tap into a wide range of AI capabilities, but in this article, we'll focus on just those that call Cognitive Services and Cognitive Search.
+
+In this walkthrough, you'll set up a workbook that does the following:
+
+> [!div class="checklist"]
+> + Load various forms (invoices) into a data frame in an Apache Spark session
+> + Analyze them to determine their features
+> + Assemble the resulting output into a tabular data structure
+> + Write the output to a search index in Azure Cognitive Search
+> + Explore and search over the content you created
+
+Although Azure Cognitive Search has native [AI enrichment](cognitive-search-concept-intro.md), this walkthrough shows you how to access AI capabilities outside of Cognitive Search. By using SynapseML instead of indexers or skills, you're not subject to data limits or any other constraint associated with those objects.
+
+> [!TIP]
+> Watch a demo at [https://www.youtube.com/watch?v=iXnBLwp7f88](https://www.youtube.com/watch?v=iXnBLwp7f88). The demo expands on this walkthrough with more steps and visuals.
+
+## Prerequisites
+
+You'll need the `synapseml` library and several Azure resources. If possible, use the same subscription and region for your Azure resources and put everything into one resource group for simple cleanup later. The following links are for portal installs. The sample data is imported from a public site.
+++ [Azure Cognitive Search](search-create-service-portal.md) (any tier) <sup>1</sup> ++ [Azure Cognitive Services](/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#create-a-new-azure-cognitive-services-resource) (any tier) <sup>2</sup> ++ [Azure Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal?tabs=azure-portal) (any tier) <sup>3</sup>+
+<sup>1</sup> You can use the free tier for this walkthrough but [choose a higher tier](search-sku-tier.md) if data volumes are large. You'll need the [API key](search-security-api-keys.md#find-existing-keys) for this resource.
+
+<sup>2</sup> This walkthrough uses Azure Forms Recognizer and Azure Translator. In the instructions below, you'll provide a [Cognitive Services multi-service key](/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#get-the-keys-for-your-resource) and the region, and it'll work for both services.
+
+<sup>3</sup> In this walkthrough, Azure Databricks provides the computing platform. You could also use Azure Synapse Analytics or any other computing platform supported by `synapseml`. The Azure Databricks article listed in the prerequisites includes multiple steps. For this walkthrough, follow only the instructions in "Create a workspace".
+
+> [!NOTE]
+> All of the above resources support security features in the Microsoft Identity platform. For simplicity, this walkthrough assumes key-based authentication, using endpoints and keys copied from the portal pages of each service. If you implement this workflow in a production environment, or share the solution with others, remember to replace hard-coded keys with integrated security or encrypted keys.
+
+## Create a Spark cluster and notebook
+
+In this section, you'll create a cluster, install the `synapseml` library, and create a notebook to run the code.
+
+1. In Azure portal, find your Azure Databricks workspace and select **Launch workspace**.
+
+1. On the left menu, select **Compute**.
+
+1. Select **Create cluster**.
+
+1. Give the cluster a name, accept the default configuration, and then create the cluster. It takes several minutes to create the cluster.
+
+1. Install the `synapseml` library after the cluster is created:
+
+ 1. Select **Library** from the tabs at the top of the cluster's page.
+
+ 1. Select **Install new**.
+
+ :::image type="content" source="media/search-synapseml-cognitive-services/install-library.png" alt-text="Screenshot of the Install New command." border="true":::
+
+ 1. Select **Maven**.
+
+ 1. In Coordinates, enter `com.microsoft.azure:synapseml_2.12:0.10.0`
+
+ 1. Select **Install**.
+
+1. On the left menu, select **Create** > **Notebook**.
+
+ :::image type="content" source="media/search-synapseml-cognitive-services/create-notebook.png" alt-text="Screenshot of the Create Notebook command." border="true":::
+
+1. Give the notebook a name, select **Python** as the default language, and select the cluster that has the `synapseml` library.
+
+1. Create seven consecutive cells. You'll paste code into each one.
+
+ :::image type="content" source="media/search-synapseml-cognitive-services/create-seven-cells.png" alt-text="Screenshot of the notebook with placeholder cells." border="true":::
+
+## Set up dependencies
+
+Paste the following code into the first cell of your notebook. Replace the placeholders with endpoints and access keys for each resource. No other modifications are required, so run the code when you're ready.
+
+This code imports packages and sets up access to the Azure resources used in this workflow.
+
+```python
+import os
+from pyspark.sql.functions import udf, trim, split, explode, col, monotonically_increasing_id, lit
+from pyspark.sql.types import StringType
+from synapse.ml.core.spark import FluentAPI
+
+cognitive_services_key = "placeholder-cognitive-services-multi-service-key"
+cognitive_services_region = "placeholder-cognitive-services-region"
+
+search_service = "placeholder-search-service-name"
+search_key = "placeholder-search-service-api-key"
+search_index = "placeholder-search-index-name"
+```
+
+## Load data into Spark
+
+Paste the following code into the second cell. No modifications are required, so run the code when you're ready.
+
+This code loads a small number of external files from an Azure storage account that's used for demo purposes. The files are various invoices, and they're read into a data frame.
+
+```python
+def blob_to_url(blob):
+ [prefix, postfix] = blob.split("@")
+ container = prefix.split("/")[-1]
+ split_postfix = postfix.split("/")
+ account = split_postfix[0]
+ filepath = "/".join(split_postfix[1:])
+ return "https://{}/{}/{}".format(account, container, filepath)
++
+df2 = (spark.read.format("binaryFile")
+ .load("wasbs://ignite2021@mmlsparkdemo.blob.core.windows.net/form_subset/*")
+ .select("path")
+ .limit(10)
+ .select(udf(blob_to_url, StringType())("path").alias("url"))
+ .cache())
+
+display(df2)
+```
+
+## Apply form recognition
+
+Paste the following code into the third cell. No modifications are required, so run the code when you're ready.
+
+This code loads the [AnalyzeInvoices transformer](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#analyzeinvoices) and passes a reference to the data frame containing the invoices. It calls the pre-built [invoice model](/azure/applied-ai-services/form-recognizer/concept-invoice) of Azure Form Recognizer.
+
+```python
+from synapse.ml.cognitive import AnalyzeInvoices
+
+analyzed_df = (AnalyzeInvoices()
+ .setSubscriptionKey(cognitive_services_key)
+ .setLocation(cognitive_services_region)
+ .setImageUrlCol("url")
+ .setOutputCol("invoices")
+ .setErrorCol("errors")
+ .setConcurrency(5)
+ .transform(df2)
+ .cache())
+
+display(analyzed_df)
+```
+
+## Apply data restructuring
+
+Paste the following code into the fourth cell and run it. No modifications are required.
+
+This code loads [FormOntologyLearner](https://mmlspark.blob.windows.net/docs/0.10.0/pyspark/synapse.ml.cognitive.html?highlight=formontologylearner#module-synapse.ml.cognitive.FormOntologyLearner), a transformer that analyzes the output of Form Recognizer transformers and infers a tabular data structure. The output of AnalyzeInvoices is dynamic and varies based on the features detected in your content. Furthermore, the AnalyzeInvoices transformer consolidates output into a single column. Because the output is dynamic and consolidated, it's difficult to use in downstream transformations that require more structure.
+
+FormOntologyLearner extends the utility of the AnalyzeInvoices transformer by looking for patterns that can be used to create a tabular data structure. Organizing the output into multiple columns and rows makes the content consumable in other transformers, like AzureSearchWriter.
+
+```python
+from synapse.ml.cognitive import FormOntologyLearner
+
+itemized_df = (FormOntologyLearner()
+ .setInputCol("invoices")
+ .setOutputCol("extracted")
+ .fit(analyzed_df)
+ .transform(analyzed_df)
+ .select("url", "extracted.*").select("*", explode(col("Items")).alias("Item"))
+ .drop("Items").select("Item.*", "*").drop("Item"))
+
+display(itemized_df)
+```
+
+## Apply translations
+
+Paste the following code into the fifth cell. No modifications are required, so run the code when you're ready.
+
+This code loads [Translate](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#translate), a transformer that calls the Azure Translator service in Cognitive Services. The original text, which is in English in the "Description" column, is machine-translated into various languages. All of the output is consolidated into "output.translations" array.
+
+```python
+from synapse.ml.cognitive import Translate
+
+translated_df = (Translate()
+ .setSubscriptionKey(cognitive_services_key)
+ .setLocation(cognitive_services_region)
+ .setTextCol("Description")
+ .setErrorCol("TranslationError")
+ .setOutputCol("output")
+ .setToLanguage(["zh-Hans", "fr", "ru", "cy"])
+ .setConcurrency(5)
+ .transform(itemized_df)
+ .withColumn("Translations", col("output.translations")[0])
+ .drop("output", "TranslationError")
+ .cache())
+
+display(translated_df)
+```
+
+> [!TIP]
+> To check for translated strings, scroll to the end of the rows.
+>
+> :::image type="content" source="media/search-synapseml-cognitive-services/translated-strings.png" alt-text="Screenshot of table output, showing the Translations column." border="true":::
+
+## Apply search indexing
+
+Paste the following code in the sixth cell and then run it. No modifications are required.
+
+This code loads [AzureSearchWriter](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#azuresearch). It consumes a tabular dataset and infers a search index schema that defines one field for each column. The translations structure is an array, so it's articulated in the index as a complex collection with subfields for each language translation. The generated index will have a document key and use the default values for fields created using the [Create Index REST API](/rest/api/searchservice/create-index).
+
+```python
+from synapse.ml.cognitive import *
+
+(translated_df.withColumn("DocID", monotonically_increasing_id().cast("string"))
+ .withColumn("SearchAction", lit("upload"))
+ .writeToAzureSearch(
+ subscriptionKey=search_key,
+ actionCol="SearchAction",
+ serviceName=search_service,
+ indexName=search_index,
+ keyCol="DocID",
+ ))
+```
+
+## Query the index
+
+Paste the following code into the seventh cell and then run it. No modifications are required, except that you might want to vary the [query syntax](query-simple-syntax.md) or [review these query examples](search-query-simple-examples.md) to further explore your content.
+
+This code calls the [Search Documents REST API](/rest/api/searchservice/search-documents) to query the index. This particular example searches for the word "door". The query returns a count of the number of matching documents. It also returns just the contents of the "Description" and "Translations" fields. If you want to see the full list of fields, remove the "select" parameter.
+
+```python
+import requests
+
+url = "https://{}.search.windows.net/indexes/{}/docs/search?api-version=2020-06-30".format(search_service, search_index)
+requests.post(url, json={"search": "door", "count": "true", "select": "Description, Translations"}, headers={"api-key": search_key}).json()
+```
+
+The following screenshot shows the cell output for the above script.
++
+## Clean up resources
+
+When you're working in your own subscription, at the end of a project, it's a good idea to remove the resources that you no longer need. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
+
+You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
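If you'd rather clean up from the command line, a one-line sketch such as the following removes the resource group and everything in it; the name is a placeholder for your own resource group.

```powershell
# Sketch: delete the entire resource group used for this walkthrough (irreversible).
# "synapseml-walkthrough-rg" is a placeholder for your own resource group name.
Remove-AzResourceGroup -Name "synapseml-walkthrough-rg" -Force
```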
+
+## Next steps
+
+In this walkthrough, you learned about the [AzureSearchWriter](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#azuresearch) transformer in SynapseML, which is a new way of creating and loading search indexes in Azure Cognitive Search. The transformer takes structured JSON as an input. The FormOntologyLearner can provide the necessary structure for output produced by the Forms Recognizer transformers in SynapseML.
+
+As a next step, review the other SynapseML tutorials that produce transformed content you might want to explore through Azure Cognitive Search:
+
+> [!div class="nextstepaction"]
+> [Tutorial: Text Analytics with Cognitive Service](/azure/synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark)
search Tutorial Csharp Orders https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-orders.md
Consider the following takeaways from this project:
You have completed this series of C# tutorials - you should have gained valuable knowledge of the Azure Cognitive Search APIs.
-For further reference and tutorials, consider browsing [Microsoft Learn](/learn/browse/?products=azure), or the other tutorials in the [Azure Cognitive Search Documentation](./index.yml).
+For further reference and tutorials, consider browsing [Microsoft Learn](/learn/browse/?products=azure), or the other tutorials in the [Azure Cognitive Search documentation](./index.yml).
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
_Im_NetworkSession (hostname_has_any = torProxies)
The Network Session information model is aligned with the [OSSEM Network entity schema](https://github.com/OTRF/OSSEM/blob/master/docs/cdm/entities/network.md).
-Network session events use the descriptors `Src` and `Dst` to denote the roles of the devices and related users and applications involved in the session. So, for example, the source device hostname and IP address are named `SrcHostname` and `SrcIpAddr`. Note that other ASIM schemas typically use `Target` instead of `Dst`.
+Network session events use the descriptors `Src` and `Dst` to denote the roles of the devices and related users and applications involved in the session. So, for example, the source device hostname and IP address are named `SrcHostname` and `SrcIpAddr`. Other ASIM schemas typically use `Target` instead of `Dst`.
For events reported by an endpoint and for which the event type is `EndpointNetworkSession`, the descriptors `Local` and `Remote` denote the endpoint itself and the device at the other end of the network session respectively.
The following list mentions fields that have specific guidelines for Network Ses
| Field | Class | Type | Description | ||-||--| | **EventCount** | Mandatory | Integer | Netflow sources support aggregation, and the **EventCount** field should be set to the value of the Netflow **FLOWS** field. For other sources, the value is typically set to `1`. |
-| <a name="eventtype"></a> **EventType** | Mandatory | Enumerated | Describes the operation reported by the record.<br><br> For Network Session records, the allowed values are:<br> - `EndpointNetworkSession`: for sessions reported by endpoint systems, including clients and servers. For such systems, the schema supports the `remote` and `local` alias fields. <br> - `NetworkSession`: for sessions reported by intermediary systems and network taps. <br> - `Flow`: for `NetFlow` type aggregated flows which group multiple similar sessions together. For such records, [EventSubType](#eventsubtype) should be left empty. |
+| <a name="eventtype"></a> **EventType** | Mandatory | Enumerated | Describes the operation reported by the record.<br><br> For Network Session records, the allowed values are:<br> - `EndpointNetworkSession`: for sessions reported by endpoint systems, including clients and servers. For such systems, the schema supports the `remote` and `local` alias fields. <br> - `NetworkSession`: for sessions reported by intermediary systems and network taps. <br> - `Flow`: for `NetFlow` type aggregated flows, which group multiple similar sessions together. For such records, [EventSubType](#eventsubtype) should be left empty. |
| <a name="eventsubtype"></a>**EventSubType** | Optional | String | Additional description of the event type, if applicable. <br> For Network Session records, supported values include:<br>- `Start`<br>- `End` |
-| **EventResult** | Mandatory | Enumerated | If the source device does not provide an event result, **EventResult** should be based on the value of [DvcAction](#dvcaction). If [DvcAction](#dvcaction) is `Deny`, `Drop`, `Drop ICMP`, `Reset`, `Reset Source`, or `Reset Destination`<br>, **EventResult** should be `Failure`. Otherwise, **EventResult** should be `Success`. |
+| <a name="eventresult"></a>**EventResult** | Mandatory | Enumerated | If the source device does not provide an event result, **EventResult** should be based on the value of [DvcAction](#dvcaction). If [DvcAction](#dvcaction) is `Deny`, `Drop`, `Drop ICMP`, `Reset`, `Reset Source`, or `Reset Destination`<br>, **EventResult** should be `Failure`. Otherwise, **EventResult** should be `Success`. |
+| **EventResultDetails** | Recommended | Enumerated | Reason or details for the result reported in the [EventResult](#eventresult) field. Supported values are:<br> - Failover <br> - Invalid TCP <br> - Invalid Tunnel <br> - Maximum Retry <br> - Reset <br> - Routing issue <br> - Simulation <br> - Terminated <br> - Timeout <br> - Unknown <br> - NA.<br><br>The original, source-specific value is stored in the [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails) field. |
| **EventSchema** | Mandatory | String | The name of the schema documented here is `NetworkSession`. | | **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.4`. | | <a name="dvcaction"></a>**DvcAction** | Recommended | Enumerated | The action taken on the network session. Supported values are:<br>- `Allow`<br>- `Deny`<br>- `Drop`<br>- `Drop ICMP`<br>- `Reset`<br>- `Reset Source`<br>- `Reset Destination`<br>- `Encrypt`<br>- `Decrypt`<br>- `VPNroute`<br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. The original value should be stored in the [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction) field.<br><br>Example: `drop` |
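As a hedged illustration of the **EventResult** guideline above, a custom parser might derive the field from **DvcAction** along the following lines. The table name `MyFirewall_CL` and the `Action_s` column are hypothetical; only the mapping logic reflects the guideline.

```kusto
// Minimal sketch, not an official ASIM parser: derive EventResult from
// DvcAction when the source doesn't report a result.
// "MyFirewall_CL" and "Action_s" are hypothetical names.
MyFirewall_CL
| extend DvcAction = tostring(column_ifexists("Action_s", ""))
| extend EventResult = case(
    DvcAction in ("Deny", "Drop", "Drop ICMP", "Reset", "Reset Source", "Reset Destination"), "Failure",
    "Success")
```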
The following list mentions fields that have specific guidelines for Network Ses
#### All common fields
-Fields that appear in the table below are common to all ASIM schemas. Any guideline specified above overrides the general guidelines for the field. For example, a field might be optional in general, but mandatory for a specific schema. For further details on each field, refer to the [ASIM Common Fields](normalization-common-fields.md) article.
+Fields that appear in the table below are common to all ASIM schemas. Any guideline specified above overrides the general guidelines for the field. For example, a field might be optional in general, but mandatory for a specific schema. For more information on each field, refer to the [ASIM Common Fields](normalization-common-fields.md) article.
| **Class** | **Fields** | | | - |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **NetworkPackets** | Optional | Long | The number of packets sent in both directions. If both **PacketsReceived** and **PacketsSent** exist, **BytesTotal** should equal their sum. The meaning of a packet is defined by the reporting device. If the event is aggregated, **NetworkPackets** should be the sum over all aggregated sessions.<br><br>Example: `6924` | |<a name="networksessionid"></a>**NetworkSessionId** | Optional | string | The session identifier as reported by the reporting device. <br><br>Example: `172\_12\_53\_32\_4322\_\_123\_64\_207\_1\_80` | | **SessionId** | Alias | String | Alias to [NetworkSessionId](#networksessionid). |
-| **TcpFlagsAck** | Optional | Boolean | The TCP ACK Flag reported. The acknowledgment flag is used to acknowledge the successful receipt of a packet. As we can see from the diagram above, the receiver sends an ACK as well as a SYN in the second step of the three way handshake process to tell the sender that it received its initial packet. |
+| **TcpFlagsAck** | Optional | Boolean | The TCP ACK Flag reported. The acknowledgment flag is used to acknowledge the successful receipt of a packet. In the second step of the three-way handshake, the receiver sends an ACK and a SYN to tell the sender that it received its initial packet. |
| **TcpFlagsFin** | Optional | Boolean | The TCP FIN Flag reported. The finished flag means there is no more data from the sender. Therefore, it is used in the last packet sent from the sender. | | **TcpFlagsSyn** | Optional | Boolean | The TCP SYN Flag reported. The synchronization flag is used as a first step in establishing a three-way handshake between two hosts. Only the first packet from both the sender and receiver should have this flag set. | | **TcpFlagsUrg** | Optional | Boolean | The TCP URG Flag reported. The urgent flag is used to notify the receiver to process the urgent packets before processing all other packets. The receiver will be notified when all known urgent data has been received. See [RFC 6093](https://tools.ietf.org/html/rfc6093) for more details. |
-| **TcpFlagsPsh** | Optional | Boolean | The TCP PSH Flag reported. The push flag is somewhat similar to the URG flag and tells the receiver to process these packets as they are received instead of buffering them. |
+| **TcpFlagsPsh** | Optional | Boolean | The TCP PSH Flag reported. The push flag is similar to the URG flag and tells the receiver to process these packets as they are received instead of buffering them. |
| **TcpFlagsRst** | Optional | Boolean | The TCP RST Flag reported. The reset flag gets sent from the receiver to the sender when a packet is sent to a particular host that was not expecting it. | | **TcpFlagsEce** | Optional | Boolean | The TCP ECE Flag reported. This flag is responsible for indicating if the TCP peer is [ECN capable](https://en.wikipedia.org/wiki/Explicit_Congestion_Notification). See [RFC 3168](https://tools.ietf.org/html/rfc3168) for more details. | | **TcpFlagsCwr** | Optional | Boolean | The TCP CWR Flag reported. The congestion window reduced flag is used by the sending host to indicate it received a packet with the ECE flag set. See [RFC 3168](https://tools.ietf.org/html/rfc3168) for more details. |
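As a hedged example of how the Boolean `TcpFlags*` fields can be used in a hunting query, the following sketch looks for sources reporting many SYN-without-ACK sessions across distinct destination ports. The thresholds, bin size, and time window are arbitrary, and the flag fields are only populated when the source parser provides them.

```kusto
// Illustrative only: hunt for possible port scanning using the TcpFlags* fields.
// Thresholds, bin size, and time window are arbitrary example values.
_Im_NetworkSession
| where TimeGenerated > ago(1h)
| where TcpFlagsSyn == true and TcpFlagsAck == false
| summarize DistinctPorts = dcount(DstPortNumber) by SrcIpAddr, bin(TimeGenerated, 5m)
| where DistinctPorts > 100
```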
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Field | Class | Type | Description | |-|-||-| | <a name="dst"></a>**Dst** | Recommended | Alias | A unique identifier of the server receiving the DNS request. <br><br>This field might alias the [DstDvcId](#dstdvcid), [DstHostname](#dsthostname), or [DstIpAddr](#dstipaddr) fields. <br><br>Example: `192.168.12.1` |
-|<a name="dstipaddr"></a> **DstIpAddr** | Recommended | IP address | The IP address of the connection or session destination. If the session uses network address translation, this is the publicly visible address, and not the original address of the source which is stored in [DstNatIpAddr](#dstnatipaddr)<br><br>Example: `2001:db8::ff00:42:8329`<br><br>**Note**: This value is mandatory if [DstHostname](#dsthostname) is specified. |
+|<a name="dstipaddr"></a> **DstIpAddr** | Recommended | IP address | The IP address of the connection or session destination. If the session uses network address translation, `DstIpAddr` is the publicly visible address, and not the original address of the source, which is stored in [DstNatIpAddr](#dstnatipaddr)<br><br>Example: `2001:db8::ff00:42:8329`<br><br>**Note**: This value is mandatory if [DstHostname](#dsthostname) is specified. |
| <a name="dstportnumber"></a>**DstPortNumber** | Optional | Integer | The destination IP port.<br><br>Example: `443` | | <a name="dsthostname"></a>**DstHostname** | Recommended | Hostname | The destination device hostname, excluding domain information. If no device name is available, store the relevant IP address in this field.<br><br>Example: `DESKTOP-1282V4D` | | <a name="dstdomain"></a>**DstDomain** | Recommended | String | The domain of the destination device.<br><br>Example: `Contoso` |
-| <a name="dstdomaintype"></a>**DstDomainType** | Recommended | Enumerated | The type of [DstDomain](#dstdomain). For a list of allowed values and further information refer to [DomainType](normalization-about-schemas.md#domaintype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Required if [DstDomain](#dstdomain) is used. |
+| <a name="dstdomaintype"></a>**DstDomainType** | Recommended | Enumerated | The type of [DstDomain](#dstdomain). For a list of allowed values and further information, refer to [DomainType](normalization-about-schemas.md#domaintype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Required if [DstDomain](#dstdomain) is used. |
| **DstFQDN** | Optional | String | The destination device hostname, including domain information when available. <br><br>Example: `Contoso\DESKTOP-1282V4D` <br><br>**Note**: This field supports both traditional FQDN format and Windows domain\hostname format. The [DstDomainType](#dstdomaintype) reflects the format used. | | <a name="dstdvcid"></a>**DstDvcId** | Optional | String | The ID of the destination device. If multiple IDs are available, use the most important one, and store the others in the fields `DstDvc<DvcIdType>`. <br><br>Example: `ac7e9755-8eae-4ffc-8a02-50ed7a2216c3` |
-| **DstDvcIdType** | Optional | Enumerated | The type of [DstDvcId](#dstdvcid). For a list of allowed values and further information refer to [DvcIdType](normalization-about-schemas.md#dvcidtype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>Required if **DstDeviceId** is used.|
-| **DstDeviceType** | Optional | Enumerated | The type of the destination device. For a list of allowed values and further information refer to [DeviceType](normalization-about-schemas.md#devicetype) in the [Schema Overview article](normalization-about-schemas.md). |
+| **DstDvcIdType** | Optional | Enumerated | The type of [DstDvcId](#dstdvcid). For a list of allowed values and further information, refer to [DvcIdType](normalization-about-schemas.md#dvcidtype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>Required if **DstDeviceId** is used.|
+| **DstDeviceType** | Optional | Enumerated | The type of the destination device. For a list of allowed values and further information, refer to [DeviceType](normalization-about-schemas.md#devicetype) in the [Schema Overview article](normalization-about-schemas.md). |
| **DstZone** | Optional | String | The network zone of the destination, as defined by the reporting device.<br><br>Example: `Dmz` | | **DstInterfaceName** | Optional | String | The network interface used for the connection or session by the destination device.<br><br>Example: `Microsoft Hyper-V Network Adapter` | | **DstInterfaceGuid** | Optional | String | The GUID of the network interface used on the destination device.<br><br>Example:<br>`46ad544b-eaf0-47ef-`<br>`827c-266030f545a6` |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Field | Class | Type | Description | |-|-||-| | <a name="dstuserid"></a>**DstUserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the destination user. For the supported format for different ID types, refer to [the User entity](normalization-about-schemas.md#the-user-entity). <br><br>Example: `S-1-12` |
-| <a name="dstuseridtype"></a>**DstUserIdType** | Optional | UserIdType | The type of the ID stored in the [DstUserId](#dstuserid) field. For a list of allowed values and further information refer to [UserIdType](normalization-about-schemas.md#useridtype) in the [Schema Overview article](normalization-about-schemas.md). |
+| <a name="dstuseridtype"></a>**DstUserIdType** | Optional | UserIdType | The type of the ID stored in the [DstUserId](#dstuserid) field. For a list of allowed values and further information, refer to [UserIdType](normalization-about-schemas.md#useridtype) in the [Schema Overview article](normalization-about-schemas.md). |
| <a name="dstusername"></a>**DstUsername** | Optional | String | The destination username, including domain information when available. For the supported format for different ID types, refer to [the User entity](normalization-about-schemas.md#the-user-entity). Use the simple form only if domain information isn't available.<br><br>Store the Username type in the [DstUsernameType](#dstusernametype) field. If other username formats are available, store them in the fields `DstUsername<UsernameType>`.<br><br>Example: `AlbertE` | | <a name="user"></a>**User** | Alias | | Alias to [DstUsername](#dstusername). |
-| <a name="dstusernametype"></a>**DstUsernameType** | Optional | UsernameType | Specifies the type of the username stored in the [DstUsername](#dstusername) field. For a list of allowed values and further information refer to [UsernameType](normalization-about-schemas.md#usernametype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Example: `Windows` |
-| **DstUserType** | Optional | UserType | The type of destination user. For a list of allowed values and further information refer to [UserType](normalization-about-schemas.md#usertype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [DstOriginalUserType](#dstoriginalusertype) field. |
+| <a name="dstusernametype"></a>**DstUsernameType** | Optional | UsernameType | Specifies the type of the username stored in the [DstUsername](#dstusername) field. For a list of allowed values and further information, refer to [UsernameType](normalization-about-schemas.md#usernametype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Example: `Windows` |
+| **DstUserType** | Optional | UserType | The type of destination user. For a list of allowed values and further information, refer to [UserType](normalization-about-schemas.md#usertype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [DstOriginalUserType](#dstoriginalusertype) field. |
| <a name="dstoriginalusertype"></a>**DstOriginalUserType** | Optional | String | The original destination user type, if provided by the source. |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Field | Class | Type | Description | |-|-||-| | <a name="dstappname"></a>**DstAppName** | Optional | String | The name of the destination application.<br><br>Example: `Facebook` |
-| <a name="dstappid"></a>**DstAppId** | Optional | String | The ID of the destination application, as reported by the reporting device.<br><br>Example: `124` |
-| **DstAppType** | Optional | AppType | The type of the destination application. For a list of allowed values and further information refer to [AppType](normalization-about-schemas.md#apptype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>This field is mandatory if [DstAppName](#dstappname) or [DstAppId](#dstappid) are used. |
+| <a name="dstappid"></a>**DstAppId** | Optional | String | The ID of the destination application, as reported by the reporting device.If [DstAppType](#dstapptype) is `Process`, `DstAppId` and `DstProcessId` should have the same value.<br><br>Example: `124` |
+| <a name="dstapptype"></a>**DstAppType** | Optional | AppType | The type of the destination application. For a list of allowed values and further information, refer to [AppType](normalization-about-schemas.md#apptype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>This field is mandatory if [DstAppName](#dstappname) or [DstAppId](#dstappid) are used. |
+| <a name="dstprocessname"></a>**DstProcessName** | Optional | String | The file name of the process that terminated the network session. This name is typically considered to be the process name. <br><br>Example: `C:\Windows\explorer.exe` |
+| <a name="process"></a>**Process** | Alias | | Alias to the [DstProcessName](#dstprocessname) <br><br>Example: `C:\Windows\System32\rundll32.exe`|
+| **SrcProcessId**| Optional | String | The process ID (PID) of the process that terminated the network session.<br><br>Example: `48610176` <br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows and Linux this value must be numeric. <br><br>If you are using a Windows or Linux machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. |
+| **SrcProcessGuid** | Optional | String | A generated unique identifier (GUID) of the process that terminated the network session. <br><br> Example: `EF3BD0BD-2B74-60C5-AF5C-010000001E00` |
### Source system fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Field | Class | Type | Description | |-|-||-| | <a name="src"></a>**Src** | Recommended | Alias | A unique identifier of the source device. <br><br>This field might alias the [SrcDvcId](#srcdvcid), [SrcHostname](#srchostname), or [SrcIpAddr](#srcipaddr) fields. <br><br>Example: `192.168.12.1` |
-| <a name="srcipaddr"></a>**SrcIpAddr** | Recommended | IP address | The IP address from which the connection or session originated. This value is mandatory if **SrcHostname** is specified. If the session uses network address translation, this is the publicly visible address, and not the original address of the source which is stored in [SrcNatIpAddr](#srcnatipaddr)<br><br>Example: `77.138.103.108` |
+| <a name="srcipaddr"></a>**SrcIpAddr** | Recommended | IP address | The IP address from which the connection or session originated. This value is mandatory if **SrcHostname** is specified. If the session uses network address translation, `SrcIpAddr` is the publicly visible address, and not the original address of the source, which is stored in [SrcNatIpAddr](#srcnatipaddr)<br><br>Example: `77.138.103.108` |
| **SrcPortNumber** | Optional | Integer | The IP port from which the connection originated. Might not be relevant for a session comprising multiple connections.<br><br>Example: `2335` | | <a name="srchostname"></a> **SrcHostname** | Recommended | Hostname | The source device hostname, excluding domain information. If no device name is available, store the relevant IP address in this field.<br><br>Example: `DESKTOP-1282V4D` | |<a name="srcdomain"></a> **SrcDomain** | Recommended | String | The domain of the source device.<br><br>Example: `Contoso` |
-| <a name="srcdomaintype"></a>**SrcDomainType** | Recommended | DomainType | The type of [SrcDomain](#srcdomain). For a list of allowed values and further information refer to [DomainType](normalization-about-schemas.md#domaintype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Required if [SrcDomain](#srcdomain) is used. |
+| <a name="srcdomaintype"></a>**SrcDomainType** | Recommended | DomainType | The type of [SrcDomain](#srcdomain). For a list of allowed values and further information, refer to [DomainType](normalization-about-schemas.md#domaintype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Required if [SrcDomain](#srcdomain) is used. |
| **SrcFQDN** | Optional | String | The source device hostname, including domain information when available. <br><br>**Note**: This field supports both traditional FQDN format and Windows domain\hostname format. The [SrcDomainType](#srcdomaintype) field reflects the format used. <br><br>Example: `Contoso\DESKTOP-1282V4D` | | <a name="srcdvcid"></a>**SrcDvcId** | Optional | String | The ID of the source device. If multiple IDs are available, use the most important one, and store the others in the fields `SrcDvc<DvcIdType>`.<br><br>Example: `ac7e9755-8eae-4ffc-8a02-50ed7a2216c3` |
-| **SrcDvcIdType** | Optional | DvcIdType | The type of [SrcDvcId](#srcdvcid). For a list of allowed values and further information refer to [DvcIdType](normalization-about-schemas.md#dvcidtype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: This field is required if [SrcDvcId](#srcdvcid) is used. |
-| **SrcDeviceType** | Optional | DeviceType | The type of the source device. For a list of allowed values and further information refer to [DeviceType](normalization-about-schemas.md#devicetype) in the [Schema Overview article](normalization-about-schemas.md). |
+| **SrcDvcIdType** | Optional | DvcIdType | The type of [SrcDvcId](#srcdvcid). For a list of allowed values and further information, refer to [DvcIdType](normalization-about-schemas.md#dvcidtype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: This field is required if [SrcDvcId](#srcdvcid) is used. |
+| **SrcDeviceType** | Optional | DeviceType | The type of the source device. For a list of allowed values and further information, refer to [DeviceType](normalization-about-schemas.md#devicetype) in the [Schema Overview article](normalization-about-schemas.md). |
| **SrcZone** | Optional | String | The network zone of the source, as defined by the reporting device.<br><br>Example: `Internet` | | **SrcInterfaceName** | Optional | String | The network interface used for the connection or session by the source device. <br><br>Example: `eth01` | | **SrcInterfaceGuid** | Optional | String | The GUID of the network interface used on the source device.<br><br>Example:<br>`46ad544b-eaf0-47ef-`<br>`827c-266030f545a6` |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Field | Class | Type | Description | |-|-||-| | <a name="srcuserid"></a>**SrcUserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the source user. For the supported format for different ID types, refer to [the User entity](normalization-about-schemas.md#the-user-entity). <br><br>Example: `S-1-12` |
-| <a name="srcuseridtype"></a>**SrcUserIdType** | Optional | UserIdType | The type of the ID stored in the [SrcUserId](#srcuserid) field. For a list of allowed values and further information refer to [UserIdType](normalization-about-schemas.md#useridtype) in the [Schema Overview article](normalization-about-schemas.md). |
+| <a name="srcuseridtype"></a>**SrcUserIdType** | Optional | UserIdType | The type of the ID stored in the [SrcUserId](#srcuserid) field. For a list of allowed values and further information, refer to [UserIdType](normalization-about-schemas.md#useridtype) in the [Schema Overview article](normalization-about-schemas.md). |
| <a name="srcusername"></a>**SrcUsername** | Optional | String | The source username, including domain information when available. For the supported format for different ID types, refer to [the User entity](normalization-about-schemas.md#the-user-entity). Use the simple form only if domain information isn't available.<br><br>Store the Username type in the [SrcUsernameType](#srcusernametype) field. If other username formats are available, store them in the fields `SrcUsername<UsernameType>`.<br><br>Example: `AlbertE` |
-| <a name="srcusernametype"></a>**SrcUsernameType** | Optional | UsernameType | Specifies the type of the username stored in the [SrcUsername](#srcusername) field. For a list of allowed values and further information refer to [UsernameType](normalization-about-schemas.md#usernametype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Example: `Windows` |
-| **SrcUserType** | Optional | UserType | The type of source user. For a list of allowed values and further information refer to [UserType](normalization-about-schemas.md#usertype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [SrcOriginalUserType](#srcoriginalusertype) field. |
+| <a name="srcusernametype"></a>**SrcUsernameType** | Optional | UsernameType | Specifies the type of the username stored in the [SrcUsername](#srcusername) field. For a list of allowed values and further information, refer to [UsernameType](normalization-about-schemas.md#usernametype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Example: `Windows` |
+| **SrcUserType** | Optional | UserType | The type of source user. For a list of allowed values and further information, refer to [UserType](normalization-about-schemas.md#usertype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [SrcOriginalUserType](#srcoriginalusertype) field. |
| <a name="srcoriginalusertype"></a>**SrcOriginalUserType** | Optional | String | The original destination user type, if provided by the reporting device. |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Field | Class | Type | Description | |-|-||-| | <a name="srcappname"></a>**SrcAppName** | Optional | String | The name of the source application. <br><br>Example: `filezilla.exe` |
-| <a name="srcappid"></a>**SrcAppId** | Optional | String | The ID of the source application, as reported by the reporting device.<br><br>Example: `124` |
-| **SrcAppType** | Optional | AppType | The type of the source application. For a list of allowed values and further information refer to [AppType](normalization-about-schemas.md#apptype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>This field is mandatory if [SrcAppName](#srcappname) or [SrcAppId](#srcappid) are used. |
+| <a name="srcappid"></a>**SrcAppId** | Optional | String | The ID of the source application, as reported by the reporting device. If [SrcAppType](#srcapptype) is `Process`, `SrcAppId` and `SrcProcessId` should have the same value.<br><br>Example: `124` |
+| <a name="srcapptype"></a>**SrcAppType** | Optional | AppType | The type of the source application. For a list of allowed values and further information, refer to [AppType](normalization-about-schemas.md#apptype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>This field is mandatory if [SrcAppName](#srcappname) or [SrcAppId](#srcappid) are used. |
+| <a name="srcprocessname"></a>**SrcProcessName** | Optional | String | The file name of the process that initiated the network session. This name is typically considered to be the process name. <br><br>Example: `C:\Windows\explorer.exe` |
+| **SrcProcessId**| Optional | String | The process ID (PID) of the process that initiated the network session.<br><br>Example: `48610176` <br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows and Linux this value must be numeric. <br><br>If you are using a Windows or Linux machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. |
+| **SrcProcessGuid** | Optional | String | A generated unique identifier (GUID) of the process that initiated the network session. <br><br> Example: `EF3BD0BD-2B74-60C5-AF5C-010000001E00` |
The following fields are used to represent that inspection which a security devi
| | | | | | <a name="networkrulename"></a>**NetworkRuleName** | Optional | String | The name or ID of the rule by which [DvcAction](#dvcaction) was decided upon.<br><br> Example: `AnyAnyDrop` | | <a name="networkrulenumber"></a>**NetworkRuleNumber** | Optional | Integer | The number of the rule by which [DvcAction](#dvcaction) was decided upon.<br><br>Example: `23` |
-| **Rule** | Mandatory | String | Either the value of [NetworkRuleName](#networkrulename) or the value of [NetworkRuleNumber](#networkrulenumber). Note that if the value of [NetworkRuleNumber](#networkrulenumber) is used, the type should be converted to string. |
+| **Rule** | Mandatory | String | Either the value of [NetworkRuleName](#networkrulename) or the value of [NetworkRuleNumber](#networkrulenumber). If the value of [NetworkRuleNumber](#networkrulenumber) is used, the type should be converted to string. |
| **ThreatId** | Optional | String | The ID of the threat or malware identified in the network session.<br><br>Example: `Tr.124` | | **ThreatName** | Optional | String | The name of the threat or malware identified in the network session.<br><br>Example: `EICAR Test File` | | **ThreatCategory** | Optional | String | The category of the threat or malware identified in the network session.<br><br>Example: `Trojan` |
-| **ThreatRiskLevel** | Optional | Integer | The risk level associated with the session. The level should be a number between **0** and **100**.<br><br>**Note**: The value might be provided in the source record by using a different scale, which should be normalized to this scale. The original value should be stored in [ThreatRiskLevelOriginal](#threatriskleveloriginal). |
-| <a name="threatriskleveloriginal"></a>**ThreatRiskLevelOriginal** | Optional | String | The risk level as reported by the reporting device. |
+| **ThreatRiskLevel** | Optional | Integer | The risk level associated with the session. The level should be a number between **0** and **100**.<br><br>**Note**: The value might be provided in the source record by using a different scale, which should be normalized to this scale. The original value should be stored in [ThreatOriginalRiskLevel](#threatoriginalriskleveloriginal). |
+| <a name="threatoriginalriskleveloriginal"></a>**ThreatOriginalRiskLevel** | Optional | String | The risk level as reported by the reporting device. |
| **ThreatIpAddr** | Optional | IP Address | An IP address for which a threat was identified. The field [ThreatField](#threatfield) contains the name of the field **ThreatIpAddr** represents. | | <a name="threatfield"></a>**ThreatField** | Optional | Enumerated | The field for which a threat was identified. The value is either `SrcIpAddr` or `DstIpAddr`. | | **ThreatConfidence** | Optional | Integer | The confidence level of the threat identified, normalized to a value between 0 and 100.|
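As a hedged sketch of the **Rule** guideline above, a custom parser might populate the field from either the rule name or the rule number, converting the number to a string. The table and source column names below are hypothetical.

```kusto
// Minimal sketch, not an official ASIM parser: populate the mandatory Rule field
// from NetworkRuleName when present, otherwise from NetworkRuleNumber as a string.
// "MyFirewall_CL", "RuleName_s", and "RuleNumber_d" are hypothetical names.
MyFirewall_CL
| extend NetworkRuleName = tostring(column_ifexists("RuleName_s", "")),
         NetworkRuleNumber = toint(column_ifexists("RuleNumber_d", 0))
| extend Rule = coalesce(NetworkRuleName, tostring(NetworkRuleNumber))
```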
If the event is reported by one of the endpoints of the network session, it migh
### Schema updates
-These are the changes in version 0.2.1 of the schema:
+The following are the changes in version 0.2.1 of the schema:
- Added `Src` and `Dst` as aliases to a leading identifier for the source and destination systems. - Added the fields `NetworkConnectionHistory`, `SrcVlanId`, `DstVlanId`, `InnerVlanId`, and `OuterVlanId`.
-These are the changes in version 0.2.2 of the schema:
+The following are the changes in version 0.2.2 of the schema:
- Added `Remote` and `Local` aliases. - Added the event type `EndpointNetworkSession`.
These are the changes in version 0.2.2 of the schema:
- Added the fields `NetworkProtocolVersion`, `SrcSubscriptionId`, and `DstSubscriptionId`. - Deprecated `DstUserDomain` and `SrcUserDomain`.
-Theses are the changes in version 0.2.3 of the schema:
+The following are the changes in version 0.2.3 of the schema:
- Added the `ipaddr_has_any_prefix` filtering parameter. - The `hostname_has_any` filtering parameter now matches either source or destination hostnames. - Added the fields `ASimMatchingHostname` and `ASimMatchingIpAddr`.
-Theses are the changes in version 0.2.4 of the schema:
+The following are the changes in version 0.2.4 of the schema:
- Added the `TcpFlags` fields. - Updated `NetworkIcmpType` and `NetworkIcmpCode` to reflect the number value for both. - Added additional inspection fields.
+- The field `ThreatRiskLevelOriginal` was renamed to `ThreatOriginalRiskLevel` to align with ASIM conventions. Existing Microsoft parsers will maintain `ThreatRiskLevelOriginal` until May 1, 2023.
+- Marked `EventResultDetails` as recommended, and specified the allowed values.
## Next steps
sentinel Deployment Solution Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-solution-configuration.md
Last updated 04/27/2022
# Configure Microsoft Sentinel Solution for SAP + This article provides best practices for configuring the Microsoft Sentinel Solution for SAP. The full deployment process is detailed in a whole set of articles linked under [Deployment milestones](deployment-overview.md#deployment-milestones).
+> [!IMPORTANT]
+> Some components of the Microsoft Sentinel Solution for SAP are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+ Deployment of the data collector agent and solution in Microsoft Sentinel provides you with the ability to monitor SAP systems for suspicious activities and identify threats. However, for best results, we strongly recommend carrying out several additional configuration steps that depend on your specific SAP deployment. ## Deployment milestones
Microsoft Sentinel Solution for SAP configuration is accomplished by providing c
> If you edit a watchlist and find it is empty, please wait a few minutes and retry opening the watchlist for editing. ### SAP - Systems watchlist
-SAP - Systems watchlist defines which SAP Systems are present in the monitored environment. For every system, specify its SID, whether it is a production system or a dev/test environment, as well as a description.
+SAP - Systems watchlist defines which SAP systems are present in the monitored environment. For every system, specify its SID, whether it's a production system or a dev/test environment, and a description.
This information is used by some analytics rules, which may react differently if relevant events appear in a Development or a Production system. ### SAP - Networks watchlist
-SAP - Networks watchlist outlines all networks used by the organization. It is primarily used to identify whether or not user logons are originating from within known segments of the network, also if user logon origin changes unexpectedly.
+SAP - Networks watchlist outlines all networks used by the organization. It's primarily used to identify whether user logons originate from within known segments of the network, and whether a user's logon origin changes unexpectedly.
There are a number of approaches for documenting network topology. You could define a broad range of addresses, like 172.16.0.0/16, and name it "Corporate Network", which will be good enough for tracking logons from outside that range. A more segmented approach, however, gives you better visibility into potentially atypical activity.
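As a hedged sketch of how the networks watchlist can be used in a query, the following KQL flags sign-ins whose source address falls outside every documented range. It assumes the watchlist alias is `SAP - Networks` and that it exposes a `Network` column holding CIDR ranges; the source table and IP column are also illustrative, so adjust all names to your workspace.

```kusto
// Illustrative only: find sign-ins originating outside the documented networks.
// The watchlist alias ('SAP - Networks'), its 'Network' column, and the
// 'SAPAuditLog_CL'/'ClientIp_s' names are assumptions for this sketch.
let KnownNetworks = toscalar(
    _GetWatchlist('SAP - Networks')
    | summarize make_list(Network));
SAPAuditLog_CL
| where ipv4_is_in_any_range(tostring(ClientIp_s), KnownNetworks) == false
```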
All of these watchlists identify sensitive actions or data that can be carried o
- SAP - Sensitive Roles - SAP - Privileged Users
-Microsoft Sentinel Solution for SAP uses User Master data gathered from SAP systems to identify which users, profiles, and roles should be considered sensitive. Some sample data is included in the watchlists, though we recommend you consult with the SAP BASIS team to identify sensitive users, roles and profiles and populate the watchlists accordingly.
+The Microsoft Sentinel Solution for SAP uses User Master data gathered from SAP systems to identify which users, profiles, and roles should be considered sensitive. Some sample data is included in the watchlists, though we recommend you consult with the SAP BASIS team to identify sensitive users, roles, and profiles, and populate the watchlists accordingly.
## Start enabling analytics rules
-By default, all analytics rules provided in the Microsoft Sentinel Solution for SAP are disabled. When you install the solution, it's best if you don't enable all the rules at once so you don't end up with a lot of noise. Instead, use a staged approach, enabling rules over time, ensuring you are not receiving noise or false positives. Ensure alerts are operationalized, that is, have a response plan for each of the alerts. We consider the following rules to be easiest to implement, so best to start with them:
+By default, all analytics rules provided in the Microsoft Sentinel Solution for SAP are provided as [alert rule templates](../manage-analytics-rule-templates.md#manage-template-versions-for-your-scheduled-analytics-rules-in-microsoft-sentinel). We recommend a staged approach, where a few rules are created from templates at a time, allowing time for fine tuning each scenario.
+ We consider the following rules to be the easiest to implement, so it's best to start with those:
-1. Deactivation of Security Audit Log
-1. Client Configuration Change
1. Change in Sensitive Privileged User
-1. Client configuration change
-1. Sensitive privileged user logon
-1. Sensitive privileged user makes a change in other
-1. Sensitive privilege user password change and login
-1. System configuration change
-1. Brute force (RFC)
-1. Function module tested
+2. Client configuration change
+3. Sensitive privileged user logon
+4. Sensitive privileged user makes a change in other
+5. Sensitive privilege user password change and login
+6. Brute force (RFC)
+7. Function module tested
+8. The SAP audit log monitoring analytics rules
+
+#### Configuring the SAP audit log monitoring analytics rules
+The two [SAP Audit log monitor rules](sap-solution-security-content.md#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log) are delivered ready to run out of the box, and allow for further fine-tuning using watchlists:
+- **SAP_Dynamic_Audit_Log_Monitor_Configuration**
+ The **SAP_Dynamic_Audit_Log_Monitor_Configuration** watchlist details all available SAP standard audit log message IDs, and can be extended to contain additional message IDs you might create on your own using ABAP enhancements on your SAP NetWeaver systems. This watchlist allows you to customize each SAP message ID (that is, each event type) at different levels:
+ - Severities per production/non-production systems. For example, debugging activity gets "High" severity for production systems and "Disabled" for other systems.
+ - Different thresholds for production/non-production systems, which act as "speed limits". For example, setting a threshold of 60 events an hour triggers an incident if more than 30 events are observed within 30 minutes.
+ - Rule types, either "Deterministic" or "AnomaliesOnly", which determine how the event is evaluated.
+ - Roles and tags to exclude. Specific users can be excluded from specific event types. This field accepts SAP roles, SAP profiles, or tags:
+ - Listing SAP roles or SAP profiles ([see User Master data collection](sap-solution-deploy-alternate.md#configuring-user-master-data-collection)) excludes any user bearing those roles or profiles from these event types for the same SAP system. For example, specifying the "BASIC_BO_USERS" ABAP role for the RFC-related event types ensures that Business Objects users won't trigger incidents when making massive RFC calls.
+ - Listing tags to be used as identifiers. Tagging an event type works just like specifying SAP roles or profiles, except that tags can be created within the Microsoft Sentinel workspace, allowing SOC personnel to exclude users per activity without depending on the SAP team. For example, the audit message IDs AUB (authorization changes) and AUD (user master record changes) are assigned the tag "MassiveAuthChanges". Users assigned this tag are excluded from the checks for these activities. Running the workspace function **SAPAuditLogConfigRecommend** produces a list of recommended tags to assign to users, such as 'Add the tags ["GenericTablebyRFCOK"] to user SENTINEL_SRV using the SAP_User_Config watchlist'.
+- **SAP_User_Config**
+ This configuration-based watchlist lets you specify user-related tags and other Active Directory identifiers for the SAP user. Tags are then used to identify the user in specific contexts. For example, assigning the tag "MassiveAuthChanges" to the user GRC_ADMIN prevents incidents from being created for user master record and authorization events made by GRC_ADMIN. A query sketch for reviewing these tags follows this list.
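A hedged sketch of reviewing those tag assignments from Log Analytics is shown below. It assumes the watchlist alias is `SAP_User_Config` and that the user and tag columns are named `SAPUser` and `Tags`; adjust the names to match your deployment.

```kusto
// Illustrative only: list SAP users carrying a given exclusion tag.
// The watchlist alias and the 'SAPUser'/'Tags' column names are assumptions.
_GetWatchlist('SAP_User_Config')
| project SapUser = tostring(column_ifexists("SAPUser", "")),
          Tags = tostring(column_ifexists("Tags", ""))
| where Tags has "MassiveAuthChanges"
```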
+
+More information is available [in this blog](https://aka.ms/Sentinel4sapDynamicDeterministicAuditRuleBlog).
+++
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-log-reference.md
Users are *strongly encouraged* to use the functions as the subjects of their an
- [SAPUsersEmail](#sapusersemail) - [SAPAuditLogConfiguration](#sapauditlogconfiguration) - [SAPAuditLogAnomalies](#sapauditloganomalies)
+- [SAPAuditLogConfigRecommend](#sapauditlogconfigrecommend)
- [SAPSystems](#sapsystems) - [SAPUsersGetVIP](#sapusersgetvip) - [SAPUsersHeader](#sapusersheader)
SAPAuditLogAnomalies(LearningTime = 14d, DetectingTime=0h, SelectedSystems= dyna
| MaxTime | Time of last event observed| | Score | the anomaly scores as produced by the anomaly model|
+See [Built-in SAP analytics rules for monitoring the SAP audit log](sap-solution-security-content.md#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log) for more information.
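A hedged usage sketch for this function is shown below. The parameter values are examples only, parameters not listed (such as the selected systems) are assumed to keep their defaults, and the projected columns are the output fields listed above.

```kusto
// Illustrative only: look for anomalous SAP audit log activity in the last hour,
// based on a 14-day learning period. Parameter values are examples, and omitted
// parameters are assumed to have defaults.
SAPAuditLogAnomalies(LearningTime = 14d, DetectingTime = 1h)
| project MinTime, MaxTime, Score
| sort by Score desc
```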
+
+### SAPAuditLogConfigRecommend
+**SAPAuditLogConfigRecommend** is a helper function designed to offer recommendations for the configuration of the [SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)](sap-solution-security-content.md#sapdynamic-anomaly-based-audit-log-monitor-alerts-preview) analytics rule. For a detailed explanation, see the [Configuring the SAP audit log monitoring analytics rules](deployment-solution-configuration.md#configuring-the-sap-audit-log-monitoring-analytics-rules) guide.
+ ### SAPUsersGetVIP The Sentinel for SAP solution uses a concept of central user tagging, designed to allow for a lower false-positive rate with minimal effort on the customer's end:
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md
Title: Microsoft Sentinel Solution for SAP - security content reference
+ Title: Microsoft Sentinel Solution for SAP - security content reference | Microsoft Docs
description: Learn about the built-in security content provided by the Microsoft Sentinel Solution for SAP.
Last updated 04/27/2022
This article details the security content available for the Microsoft Sentinel Solution for SAP. > [!IMPORTANT]
-> The Microsoft Sentinel Solution for SAP is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Some components of the Microsoft Sentinel Solution for SAP are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
-Available security content includes a built-in workbook and built-in analytics rules. You can also add SAP-related [watchlists](../watchlists.md) to use in your search, detection rules, threat hunting, and response playbooks.
+Available security content includes built-in workbooks and analytics rules. You can also add SAP-related [watchlists](../watchlists.md) to use in your search, detection rules, threat hunting, and response playbooks.
## Built-in workbooks
Use the following built-in workbooks to visualize and monitor data ingested via
For more information, see [Tutorial: Visualize and monitor your data](../monitor-your-data.md) and [Deploy Microsoft Sentinel Solution for SAP](deployment-overview.md). ## Built-in analytics rules
+### Built-in SAP analytics rules for monitoring the SAP audit log
+The SAP audit log data is used across many of the analytics rules of the Microsoft Sentinel Solution for SAP. Some analytics rules look for specific events in the log, while others correlate indications from several logs to produce high-fidelity alerts and incidents.
+In addition, there are two analytics rules that are designed to cover the entire set of standard SAP audit log events (183 different events), and any other custom events you may choose to log using the SAP audit log.
+
+#### SAP - Dynamic Deterministic Audit Log Monitor
+
+A dynamic analytics rule intended to cover the entire set of SAP audit log event types that have a deterministic definition in terms of user population and event thresholds.
+
+#### SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)
+
+A dynamic analytics rule designed to learn normal system behavior and alert on activities observed in the SAP audit log that are considered anomalous. Apply this rule to the SAP audit log event types that are harder to define in terms of user population, network attributes, and thresholds.
+
+Both SAP audit log monitoring analytics rules share the same data sources and the same configuration, but differ in one critical aspect. While the "SAP - Dynamic Deterministic Audit Log Monitor" rule requires deterministic alert thresholds and user exclusion rules, the "SAP - Dynamic Anomaly-based Audit Log Monitor Alerts (PREVIEW)" rule applies additional machine learning algorithms to filter out background noise in an unsupervised manner. For this reason, by default, most event types (or SAP message IDs) of the SAP audit log are sent to the anomaly-based analytics rule, while the easier-to-define event types are sent to the deterministic analytics rule. This setting, along with other related settings, can be further configured to suit any system conditions. For more information, see [Configuring the SAP audit log monitoring analytics rules](deployment-solution-configuration.md#configuring-the-sap-audit-log-monitoring-analytics-rules).
++
+More information is available [in this blog](https://aka.ms/Sentinel4sapDynamicDeterministicAuditRuleBlog).
+ The following tables list the built-in [analytics rules](deploy-sap-security-content.md) that are included in the Microsoft Sentinel Solution for SAP, deployed from the Microsoft Sentinel Solutions marketplace.
The following tables list the built-in [analytics rules](deploy-sap-security-con
| Rule name | Description | Source action | Tactics | | | | | |
-| **SAP - High - Login from unexpected network** | Identifies a sign-in from an unexpected network. <br><br>Maintain networks in the [SAP - Networks](#networks) watchlist. | Sign in to the backend system from an IP address that is not assigned to one of the networks. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access |
-| **SAP - High - SPNego Attack** | Identifies SPNego Replay Attack. | **Data sources**: SAPcon - Audit Log | Impact, Lateral Movement |
-| **SAP - High- Dialog logon attempt from a privileged user** | Identifies dialog sign-in attempts, with the **AUM** type, by privileged users in a SAP system. For more information, see the [SAPUsersGetPrivileged](sap-solution-log-reference.md#sapusersgetprivileged). | Attempt to sign in from the same IP to several systems or clients within the scheduled time interval<br><br>**Data sources**: SAPcon - Audit Log | Impact, Lateral Movement |
-| **SAP - Medium - Brute force attacks** | Identifies brute force attacks on the SAP system using RFC logons | Attempt to login from the same IP to several systems/clients within the scheduled time interval using RFC<br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
-| **SAP - Medium - Multiple Logons from the same IP** | Identifies the sign-in of several users from same IP address within a scheduled time interval. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Sign in using several users through the same IP address. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access |
-| **SAP - Medium - Multiple Logons by User** | Identifies sign-ins of the same user from several terminals within scheduled time interval. <br><br>Available only via the Audit SAL method, for SAP versions 7.5 and higher. | Sign in using the same user, using different IP addresses. <br><br>**Data sources**: SAPcon - Audit Log | PreAttack, Credential Access, Initial Access, Collection <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) |
+| **SAP - Login from unexpected network** | Identifies a sign-in from an unexpected network. <br><br>Maintain networks in the [SAP - Networks](#networks) watchlist. | Sign in to the backend system from an IP address that is not assigned to one of the networks. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access |
+| **SAP - SPNego Attack** | Identifies SPNego Replay Attack. | **Data sources**: SAPcon - Audit Log | Impact, Lateral Movement |
+| **SAP - Dialog logon attempt from a privileged user** | Identifies dialog sign-in attempts, with the **AUM** type, by privileged users in an SAP system. For more information, see the [SAPUsersGetPrivileged](sap-solution-log-reference.md#sapusersgetprivileged) function. | Attempt to sign in from the same IP to several systems or clients within the scheduled time interval<br><br>**Data sources**: SAPcon - Audit Log | Impact, Lateral Movement |
+| **SAP - Brute force attacks** | Identifies brute force attacks on the SAP system using RFC logons. | Attempt to log in from the same IP to several systems/clients within the scheduled time interval using RFC<br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
+| **SAP - Multiple Logons from the same IP** | Identifies the sign-in of several users from same IP address within a scheduled time interval. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Sign in using several users through the same IP address. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access |
+| **SAP - Multiple Logons by User** | Identifies sign-ins of the same user from several terminals within scheduled time interval. <br><br>Available only via the Audit SAL method, for SAP versions 7.5 and higher. | Sign in using the same user, using different IP addresses. <br><br>**Data sources**: SAPcon - Audit Log | PreAttack, Credential Access, Initial Access, Collection <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) |
| **SAP - Informational - Lifecycle - SAP Notes were implemented in system** | Identifies SAP Note implementation in the system. | Implement an SAP Note using SNOTE/TCI. <br><br>**Data sources**: SAPcon - Change Requests | - |
The following tables list the built-in [analytics rules](deploy-sap-security-con
| Rule name | Description | Source action | Tactics | | | | | |
-| **SAP - Medium - FTP for non authorized servers** |Identifies an FTP connection for a non-authorized server. | Create a new FTP connection, such as by using the FTP_CONNECT Function Module. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Initial Access, Command and Control |
-| **SAP - Medium - Insecure FTP servers configuration** |Identifies insecure FTP server configurations, such as when an FTP allowlist is empty or contains placeholders. | Do not maintain or maintain values that contain placeholders in the `SAPFTP_SERVERS` table, using the `SAPFTP_SERVERS_V` maintenance view. (SM30) <br><br>**Data sources**: SAPcon - Audit Log | Initial Access, Command and Control |
-| **SAP - Medium - Multiple Files Download** |Identifies multiple file downloads for a user within a specific time-range. | Download multiple files using the SAPGui for Excel, lists, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
-| **SAP - Medium - Multiple Spool Executions** |Identifies multiple spools for a user within a specific time-range. | Create and run multiple spool jobs of any type by a user. (SP01) <br><br>**Data sources**: SAPcon - Spool Log, SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
-| **SAP - Medium - Multiple Spool Output Executions** |Identifies multiple spools for a user within a specific time-range. | Create and run multiple spool jobs of any type by a user. (SP01) <br><br>**Data sources**: SAPcon - Spool Output Log, SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
-| **SAP - Medium - Sensitive Tables Direct Access By RFC Logon** |Identifies a generic table access by RFC sign in. <br><br> Maintain tables in the [SAP - Sensitive Tables](#tables) watchlist.<br><br> **Note**: Relevant for production systems only. | Open the table contents using SE11/SE16/SE16N.<br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
-| **SAP - Medium - Spool Takeover** |Identifies a user printing a spool request that was created by someone else. | Create a spool request using one user, and then output it in using a different user. <br><br>**Data sources**: SAPcon - Spool Log, SAPcon - Spool Output Log, SAPcon - Audit Log | Collection, Exfiltration, Command and Control |
-| **SAP - Low - Dynamic RFC Destination** | Identifies the execution of RFC using dynamic destinations. <br><br>**Sub-use case**: [Attempts to bypass SAP security mechanisms](#built-in-sap-analytics-rules-for-attempts-to-bypass-sap-security-mechanisms)| Execute an ABAP report that uses dynamic destinations (cl_dynamic_destination). For example, DEMO_RFC_DYNAMIC_DEST. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration |
-| **SAP - Low - Sensitive Tables Direct Access By Dialog Logon** | Identifies generic table access via dialog sign-in. | Open table contents using `SE11`/`SE16`/`SE16N`. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
+| **SAP - FTP for non authorized servers** |Identifies an FTP connection for a non-authorized server. | Create a new FTP connection, such as by using the FTP_CONNECT Function Module. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Initial Access, Command and Control |
+| **SAP - Insecure FTP servers configuration** |Identifies insecure FTP server configurations, such as when an FTP allowlist is empty or contains placeholders. | Do not maintain or maintain values that contain placeholders in the `SAPFTP_SERVERS` table, using the `SAPFTP_SERVERS_V` maintenance view. (SM30) <br><br>**Data sources**: SAPcon - Audit Log | Initial Access, Command and Control |
+| **SAP - Multiple Files Download** |Identifies multiple file downloads for a user within a specific time-range. | Download multiple files using the SAPGui for Excel, lists, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
+| **SAP - Multiple Spool Executions** |Identifies multiple spools for a user within a specific time-range. | Create and run multiple spool jobs of any type by a user. (SP01) <br><br>**Data sources**: SAPcon - Spool Log, SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
+| **SAP - Multiple Spool Output Executions** |Identifies multiple spools for a user within a specific time-range. | Create and run multiple spool jobs of any type by a user. (SP01) <br><br>**Data sources**: SAPcon - Spool Output Log, SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
+| **SAP - Sensitive Tables Direct Access By RFC Logon** |Identifies a generic table access by RFC sign-in. <br><br> Maintain tables in the [SAP - Sensitive Tables](#tables) watchlist.<br><br> **Note**: Relevant for production systems only. | Open the table contents using SE11/SE16/SE16N.<br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
+| **SAP - Spool Takeover** |Identifies a user printing a spool request that was created by someone else. | Create a spool request using one user, and then output it using a different user. <br><br>**Data sources**: SAPcon - Spool Log, SAPcon - Spool Output Log, SAPcon - Audit Log | Collection, Exfiltration, Command and Control |
+| **SAP - Dynamic RFC Destination** | Identifies the execution of RFC using dynamic destinations. <br><br>**Sub-use case**: [Attempts to bypass SAP security mechanisms](#built-in-sap-analytics-rules-for-attempts-to-bypass-sap-security-mechanisms)| Execute an ABAP report that uses dynamic destinations (cl_dynamic_destination). For example, DEMO_RFC_DYNAMIC_DEST. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration |
+| **SAP - Sensitive Tables Direct Access By Dialog Logon** | Identifies generic table access via dialog sign-in. | Open table contents using `SE11`/`SE16`/`SE16N`. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
### Built-in SAP analytics rules for persistency | Rule name | Description | Source action | Tactics | | | | | |
-| **SAP - High - Activation or Deactivation of ICF Service** | Identifies activation or deactivation of ICF Services. | Activate a service using SICF.<br><br>**Data sources**: SAPcon - Table Data Log | Command and Control, Lateral Movement, Persistence |
-| **SAP - High - Function Module tested** | Identifies the testing of a function module. | Test a function module using `SE37` / `SE80`. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Defense Evasion, Lateral Movement |
-| **SAP - High - HANA DB - User Admin actions** | Identifies user administration actions. | Create, update, or delete a database user. <br><br>**Data Sources**: Linux Agent - Syslog* |Privilege Escalation |
-| **SAP - High - New ICF Service Handlers** | Identifies creation of ICF Handlers. | Assign a new handler to a service using SICF.<br><br>**Data sources**: SAPcon - Audit Log | Command and Control, Lateral Movement, Persistence |
-| **SAP - High - New ICF Services** | Identifies creation of ICF Services. | Create a service using SICF.<br><br>**Data sources**: SAPcon - Table Data Log | Command and Control, Lateral Movement, Persistence |
-| **SAP - Medium - Execution of Obsolete or Insecure Function Module** |Identifies the execution of an obsolete or insecure ABAP function module. <br><br>Maintain obsolete functions in the [SAP - Obsolete Function Modules](#modules) watchlist. Make sure to activate table logging changes for the `EUFUNC` table in the backend. (SE13)<br><br> **Note**: Relevant for production systems only. | Run an obsolete or insecure function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control |
-| **SAP - Medium - Execution of Obsolete/Insecure Program** |Identifies the execution of an obsolete or insecure ABAP program. <br><br> Maintain obsolete programs in the [SAP - Obsolete Programs](#programs) watchlist.<br><br> **Note**: Relevant for production systems only. | Run a program directly using SE38/SA38/SE80, or by using a background job. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control |
-| **SAP - Low - Multiple Password Changes by User** | Identifies multiple password changes by user. | Change user password <br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
+| **SAP - Activation or Deactivation of ICF Service** | Identifies activation or deactivation of ICF Services. | Activate a service using SICF.<br><br>**Data sources**: SAPcon - Table Data Log | Command and Control, Lateral Movement, Persistence |
+| **SAP - Function Module tested** | Identifies the testing of a function module. | Test a function module using `SE37` / `SE80`. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Defense Evasion, Lateral Movement |
+| **SAP - (PREVIEW) HANA DB - User Admin actions** | Identifies user administration actions. | Create, update, or delete a database user. <br><br>**Data Sources**: Linux Agent - Syslog* |Privilege Escalation |
+| **SAP - New ICF Service Handlers** | Identifies creation of ICF Handlers. | Assign a new handler to a service using SICF.<br><br>**Data sources**: SAPcon - Audit Log | Command and Control, Lateral Movement, Persistence |
+| **SAP - New ICF Services** | Identifies creation of ICF Services. | Create a service using SICF.<br><br>**Data sources**: SAPcon - Table Data Log | Command and Control, Lateral Movement, Persistence |
+| **SAP - (PREVIEW) Execution of Obsolete or Insecure Function Module** |Identifies the execution of an obsolete or insecure ABAP function module. <br><br>Maintain obsolete functions in the [SAP - Obsolete Function Modules](#modules) watchlist. Make sure to activate table logging changes for the `EUFUNC` table in the backend. (SE13)<br><br> **Note**: Relevant for production systems only. | Run an obsolete or insecure function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control |
+| **SAP - Execution of Obsolete/Insecure Program** |Identifies the execution of an obsolete or insecure ABAP program. <br><br> Maintain obsolete programs in the [SAP - Obsolete Programs](#programs) watchlist.<br><br> **Note**: Relevant for production systems only. | Run a program directly using SE38/SA38/SE80, or by using a background job. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control |
+| **SAP - Multiple Password Changes by User** | Identifies multiple password changes by user. | Change user password <br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
The following tables list the built-in [analytics rules](deploy-sap-security-con
| Rule name | Description | Source action | Tactics | | | | | |
-| **SAP - High - Client Configuration Change** | Identifies changes for client configuration such as the client role or the change recording mode. | Perform client configuration changes using the `SCC4` transaction code. <br><br>**Data sources**: SAPcon - Audit Log | Defense Evasion, Exfiltration, Persistence |
-| **SAP - High - Data has Changed during Debugging Activity** | Identifies changes for runtime data during a debugging activity. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | 1. Activate Debug ("/h"). <br>2. Select a field for change and update its value.<br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement |
-| **SAP - High - Deactivation of Security Audit Log** | Identifies deactivation of the Security Audit Log, | Disable security Audit Log using `SM19/RSAU_CONFIG`. <br><br>**Data sources**: SAPcon - Audit Log | Exfiltration, Defense Evasion, Persistence |
-| **SAP - High - Execution of a Sensitive ABAP Program** |Identifies the direct execution of a sensitive ABAP program. <br><br>Maintain ABAP Programs in the [SAP - Sensitive ABAP Programs](#programs) watchlist. | Run a program directly using `SE38`/`SA38`/`SE80`. <br> <br>**Data sources**: SAPcon - Audit Log | Exfiltration, Lateral Movement, Execution |
-| **SAP - High - Execution of a Sensitive Transaction Code** | Identifies the execution of a sensitive Transaction Code. <br><br>Maintain transaction codes in the [SAP - Sensitive Transaction Codes](#transactions) watchlist. | Run a sensitive transaction code. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Execution |
-| **SAP - High - Execution of Sensitive Function Module** | Identifies the execution of a sensitive ABAP function module. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency)<br><br>**Note**: Relevant for production systems only. <br><br>Maintain sensitive functions in the [SAP - Sensitive Function Modules](#modules) watchlist, and make sure to activate table logging changes in the backend for the EUFUNC table. (SE13) | Run a sensitive function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control
-| **SAP - High - HANA DB - Audit Trail Policy Changes** | Identifies changes for HANA DB audit trail policies. | Create or update the existing audit policy in security definitions. <br> <br>**Data sources**: Linux Agent - Syslog | Lateral Movement, Defense Evasion, Persistence |
-| **SAP - High - HANA DB - Deactivation of Audit Trail** | Identifies the deactivation of the HANA DB audit log. | Deactivate the audit log in the HANA DB security definition. <br><br>**Data sources**: Linux Agent - Syslog | Persistence, Lateral Movement, Defense Evasion |
-| **SAP - High - RFC Execution of a Sensitive Function Module** | Sensitive function models to be used in relevant detections. <br><br>Maintain function modules in the [SAP - Sensitive Function Modules](#module) watchlist. | Run a function module using RFC. <br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement, Discovery |
-| **SAP - High - System Configuration Change** | Identifies changes for system configuration. | Adapt system change options or software component modification using the `SE06` transaction code.<br><br>**Data sources**: SAPcon - Audit Log |Exfiltration, Defense Evasion, Persistence |
-| **SAP - Medium - Debugging Activities** | Identifies all debugging related activities. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) |Activate Debug ("/h") in the system, debug an active process, add breakpoint to source code, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
-| **SAP - Medium - Security Audit Log Configuration Change** | Identifies changes in the configuration of the Security Audit Log | Change any Security Audit Log Configuration using `SM19`/`RSAU_CONFIG`, such as the filters, status, recording mode, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Exfiltration, Defense Evasion |
-| **SAP - Medium - Transaction is unlocked** |Identifies unlocking of a transaction. | Unlock a transaction code using `SM01`/`SM01_DEV`/`SM01_CUS`. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Execution |
-| **SAP - Low - Dynamic ABAP Program** | Identifies the execution of dynamic ABAP programming. For example, when ABAP code was dynamically created, changed, or deleted. <br><br> Maintain excluded transaction codes in the [SAP - Transactions for ABAP Generations](#transactions) watchlist. | Create an ABAP Report that uses ABAP program generation commands, such as INSERT REPORT, and then run the report. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control, Impact |
+| **SAP - Client Configuration Change** | Identifies changes for client configuration such as the client role or the change recording mode. | Perform client configuration changes using the `SCC4` transaction code. <br><br>**Data sources**: SAPcon - Audit Log | Defense Evasion, Exfiltration, Persistence |
+| **SAP - Data has Changed during Debugging Activity** | Identifies changes for runtime data during a debugging activity. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | 1. Activate Debug ("/h"). <br>2. Select a field for change and update its value.<br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement |
+| **SAP - Deactivation of Security Audit Log** | Identifies deactivation of the Security Audit Log. | Disable the Security Audit Log using `SM19/RSAU_CONFIG`. <br><br>**Data sources**: SAPcon - Audit Log | Exfiltration, Defense Evasion, Persistence |
+| **SAP - Execution of a Sensitive ABAP Program** |Identifies the direct execution of a sensitive ABAP program. <br><br>Maintain ABAP Programs in the [SAP - Sensitive ABAP Programs](#programs) watchlist. | Run a program directly using `SE38`/`SA38`/`SE80`. <br> <br>**Data sources**: SAPcon - Audit Log | Exfiltration, Lateral Movement, Execution |
+| **SAP - Execution of a Sensitive Transaction Code** | Identifies the execution of a sensitive Transaction Code. <br><br>Maintain transaction codes in the [SAP - Sensitive Transaction Codes](#transactions) watchlist. | Run a sensitive transaction code. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Execution |
+| **SAP - Execution of Sensitive Function Module** | Identifies the execution of a sensitive ABAP function module. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency)<br><br>**Note**: Relevant for production systems only. <br><br>Maintain sensitive functions in the [SAP - Sensitive Function Modules](#modules) watchlist, and make sure to activate table logging changes in the backend for the EUFUNC table. (SE13) | Run a sensitive function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control
+| **SAP - (PREVIEW) HANA DB - Audit Trail Policy Changes** | Identifies changes for HANA DB audit trail policies. | Create or update the existing audit policy in security definitions. <br> <br>**Data sources**: Linux Agent - Syslog | Lateral Movement, Defense Evasion, Persistence |
+| **SAP - (PREVIEW) HANA DB - Deactivation of Audit Trail** | Identifies the deactivation of the HANA DB audit log. | Deactivate the audit log in the HANA DB security definition. <br><br>**Data sources**: Linux Agent - Syslog | Persistence, Lateral Movement, Defense Evasion |
+| **SAP - RFC Execution of a Sensitive Function Module** | Identifies the execution of a sensitive function module using RFC. <br><br>Maintain function modules in the [SAP - Sensitive Function Modules](#modules) watchlist. | Run a function module using RFC. <br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement, Discovery |
+| **SAP - System Configuration Change** | Identifies changes for system configuration. | Adapt system change options or software component modification using the `SE06` transaction code.<br><br>**Data sources**: SAPcon - Audit Log |Exfiltration, Defense Evasion, Persistence |
+| **SAP - Debugging Activities** | Identifies all debugging related activities. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) |Activate Debug ("/h") in the system, debug an active process, add breakpoint to source code, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
+| **SAP - Security Audit Log Configuration Change** | Identifies changes in the configuration of the Security Audit Log. | Change any Security Audit Log configuration using `SM19`/`RSAU_CONFIG`, such as the filters, status, recording mode, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Exfiltration, Defense Evasion |
+| **SAP - Transaction is unlocked** |Identifies unlocking of a transaction. | Unlock a transaction code using `SM01`/`SM01_DEV`/`SM01_CUS`. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Execution |
+| **SAP - Dynamic ABAP Program** | Identifies the execution of dynamic ABAP programming. For example, when ABAP code was dynamically created, changed, or deleted. <br><br> Maintain excluded transaction codes in the [SAP - Transactions for ABAP Generations](#transactions) watchlist. | Create an ABAP Report that uses ABAP program generation commands, such as INSERT REPORT, and then run the report. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control, Impact |
### Built-in SAP analytics rules for suspicious privileges operations | Rule name | Description | Source action | Tactics | | | | | |
-| **SAP - High - Change in Sensitive privileged user** | Identifies changes of sensitive privileged users. <br> <br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist. | Change user details / authorizations using `SU01`. <br><br>**Data sources**: SAPcon - Audit Log | Privilege Escalation, Credential Access |
-| **SAP - High - HANA DB - Assign Admin Authorizations** | Identifies admin privilege or role assignment. | Assign a user with any admin role or privileges. <br><br>**Data sources**: Linux Agent - Syslog | Privilege Escalation |
-| **SAP - High - Sensitive privileged user logged in** | Identifies the Dialog sign-in of a sensitive privileged user. <br><br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist. | Sign in to the backend system using `SAP*` or another privileged user. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access, Credential Access |
-| **SAP - High - Sensitive privileged user makes a change in other user** | Identifies changes of sensitive, privileged users in other users. | Change user details / authorizations using SU01. <br><br>**Data Sources**: SAPcon - Audit Log | Privilege Escalation, Credential Access |
-| **SAP - High - Sensitive Users Password Change and Login** | Identifies password changes for privileged users. | Change the password for a privileged user and sign into the system. <br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist.<br><br>**Data sources**: SAPcon - Audit Log | Impact, Command and Control, Privilege Escalation |
-| **SAP - High - User Creates and uses new user** | Identifies a user creating and using other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Create a user using SU01, and then sign in, using the newly created user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log | Discovery, PreAttack, Initial Access |
-| **SAP - High - User Unlocks and uses other users** | Identifies a user being unlocked and used by other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Unlock a user using SU01, and then sign in using the unlocked user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log, SAPcon - Change Documents Log | Discovery, PreAttack, Initial Access, Lateral Movement |
-| **SAP - Medium - Assignment of a sensitive profile** | Identifies new assignments of a sensitive profile to a user. <br><br>Maintain sensitive profiles in the [SAP - Sensitive Profiles](#profiles) watchlist. | Assign a profile to a user using `SU01`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
-| **SAP - Medium - Assignment of a sensitive role** | Identifies new assignments for a sensitive role to a user. <br><br>Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist.| Assign a role to a user using `SU01` / `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log, Audit Log | Privilege Escalation |
-| **SAP - Medium - Critical authorizations assignment - New Authorization Value** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new authorization object or update an existing one in a role, using `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
-| **SAP - Medium - Critical authorizations assignment - New User Assignment** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new user to a role that holds critical authorization values, using `SU01`/`PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
-| **SAP - Medium - Sensitive Roles Changes** |Identifies changes in sensitive roles. <br><br> Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist. | Change a role using PFCG. <br><br>**Data sources**: SAPcon - Change Documents Log, SAPcon ΓÇô Audit Log | Impact, Privilege Escalation, Persistence |
+| **SAP - Change in Sensitive privileged user** | Identifies changes of sensitive privileged users. <br> <br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist. | Change user details / authorizations using `SU01`. <br><br>**Data sources**: SAPcon - Audit Log | Privilege Escalation, Credential Access |
+| **SAP - (PREVIEW) HANA DB - Assign Admin Authorizations** | Identifies admin privilege or role assignment. | Assign a user with any admin role or privileges. <br><br>**Data sources**: Linux Agent - Syslog | Privilege Escalation |
+| **SAP - Sensitive privileged user logged in** | Identifies the Dialog sign-in of a sensitive privileged user. <br><br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist. | Sign in to the backend system using `SAP*` or another privileged user. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access, Credential Access |
+| **SAP - Sensitive privileged user makes a change in other user** | Identifies changes made by sensitive privileged users to other users. | Change user details / authorizations using SU01. <br><br>**Data Sources**: SAPcon - Audit Log | Privilege Escalation, Credential Access |
+| **SAP - Sensitive Users Password Change and Login** | Identifies password changes for privileged users. | Change the password for a privileged user and sign into the system. <br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist.<br><br>**Data sources**: SAPcon - Audit Log | Impact, Command and Control, Privilege Escalation |
+| **SAP - User Creates and uses new user** | Identifies a user creating and using other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Create a user using SU01, and then sign in using the newly created user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log | Discovery, PreAttack, Initial Access |
+| **SAP - User Unlocks and uses other users** | Identifies a user being unlocked and used by other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Unlock a user using SU01, and then sign in using the unlocked user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log, SAPcon - Change Documents Log | Discovery, PreAttack, Initial Access, Lateral Movement |
+| **SAP - Assignment of a sensitive profile** | Identifies new assignments of a sensitive profile to a user. <br><br>Maintain sensitive profiles in the [SAP - Sensitive Profiles](#profiles) watchlist. | Assign a profile to a user using `SU01`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
+| **SAP - (PREVIEW) Assignment of a sensitive role** | Identifies new assignments for a sensitive role to a user. <br><br>Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist.| Assign a role to a user using `SU01` / `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log, Audit Log | Privilege Escalation |
+| **SAP - (PREVIEW) Critical authorizations assignment - New Authorization Value** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new authorization object or update an existing one in a role, using `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
+| **SAP - Critical authorizations assignment - New User Assignment** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new user to a role that holds critical authorization values, using `SU01`/`PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
+| **SAP - Sensitive Roles Changes** |Identifies changes in sensitive roles. <br><br> Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist. | Change a role using PFCG. <br><br>**Data sources**: SAPcon - Change Documents Log, SAPcon - Audit Log | Impact, Privilege Escalation, Persistence |
## Available watchlists
These watchlists provide the configuration for the Microsoft Sentinel Solution f
| <a name="programs"></a>**SAP - Obsolete Programs** | Obsolete ABAP programs (reports), whose execution should be governed. <br><br>- **ABAPProgram**:ABAP Program, such as TH_ RSPFLDOC <br>- **Description**: A meaningful ABAP program description | | <a name="transactions"></a>**SAP - Transactions for ABAP Generations** | Transactions for ABAP generations whose execution should be governed. <br><br>- **TransactionCode**:Transaction Code, such as SE11. <br>- **Description**: A meaningful Transaction Code description | | <a name="servers"></a>**SAP - FTP Servers** | FTP Servers for identification of unauthorized connections. <br><br>- **Client**:such as 100. <br>- **FTP_Server_Name**: FTP server name, such as http://contoso.com/ <br>-**FTP_Server_Port**:FTP server port, such as 22. <br>- **Description**A meaningful FTP Server description |
+| <a name="objects"></a>**SAP_Dynamic_Audit_Log_Monitor_Configuration** | Configure the SAP audit log alerts by assigning each message ID a severity level as required by you, per system role (production, non-production). This watchlist details all available SAP standard audit log message IDs and can be extended to contain additional message IDs you might create on your own using ABAP enhancements on their SAP NetWeaver systems. This watchlist also allows for configuring a designated team to handle each of the event types, and excluding users by SAP roles, SAP profiles or by tags from the SAP_User_Config watchlist. This watchlist is one of the core components used for [configuring ](deployment-solution-configuration.md#configuring-the-sap-audit-log-monitoring-analytics-rules) the [built-inSAP analytics rules for monitoring the SAP audit log](#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log) <br><br>- **MessageID**: The SAP Message ID, or event type, such as `AUD` (User master record changes), or `AUB ` (authorization changes) <br>- **DetailedDescription**: A markdown enabled description to be shown on the incident pane <br>- **ProductionSeverity**: The desired severity for the incident to be created with for production systems `High`, `Medium`. Can be set as `Disabled` <br>- **NonProdSeverity**: The desired severity for the incident to be created with for non-production systems `High`, `Medium`. Can be set as `Disabled` <br>- **ProductionThreshold** The "Per hour" count of events to be considered as suspicious for production systems `60` <br>- **NonProdThreshold** The "Per hour" count of events to be considered as suspicious for non-production systems `10` <br>- **RolesTagsToExclude**: This field accepts SAP role name, SAP profile names or tags from the SAP_User_Config watchlist. These are then used to exclude the associated users from specific event types <br>- **RuleType**: Use `Deterministic` for the event type to be sent off to the [SAP - Dynamic Deterministic Audit Log Monitor](#sapdynamic-deterministic-audit-log-monitor), or `AnomaliesOnly` to have this event covered by the [SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)](#sapdynamic-anomaly-based-audit-log-monitor-alerts-preview)
+| <a name="objects"></a>**SAP_User_Config** | allows for fine tuning alerts by excluding /including users in specific contexts and is also used for [configuring ](deployment-solution-configuration.md#configuring-the-sap-audit-log-monitoring-analytics-rules) the [built-inSAP analytics rules for monitoring the SAP audit log](#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log) <br><br> **SAPUser**: The SAP user <br> **Tags**: Tags are used to identify users against certain activity. For example Adding the tags ["GenericTablebyRFCOK"] to user SENTINEL_SRV will prevent RFC related incidents to be created for this specific user <br>**Other active directory user identifiers** <br>- AD User Identifier <br>- User On-Premises Sid <br>- User Principal Name
+|
For more information, see:
- [Deploying Microsoft Sentinel Solution for SAP](deployment-overview.md) - [Microsoft Sentinel Solution for SAP logs reference](sap-solution-log-reference.md)-- [Deploy the Microsoft Sentinel for SAP data connector with SNC](configure-snc.md)
+- [Deploy the Microsoft Sentinel Solution for SAP data connector with SNC](configure-snc.md)
- [Configuration file reference](configuration-file-reference.md)-- [Prerequisites for deploying Microsoft Sentinel Solution for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Prerequisites for deploying the Microsoft Sentinel Solution for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
- [Troubleshooting your Microsoft Sentinel Solution for SAP deployment](sap-deploy-troubleshoot.md)
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
Previously updated : 6/06/2022 Last updated : 8/11/2022
The following Azure File Sync agent versions are supported:
| V15 Release - [KB5003882](https://support.microsoft.com/topic/2f93053f-869b-4782-a832-e3c772a64a2d)| 15.0.0.0 | March 30, 2022 | Supported | | V14.1 Release - [KB5001873](https://support.microsoft.com/topic/d06b8723-c4cf-4c64-b7ec-3f6635e044c5)| 14.1.0.0 | December 1, 2021 | Supported | | V14 Release - [KB5001872](https://support.microsoft.com/topic/92290aa1-75de-400f-9442-499c44c92a81)| 14.0.0.0 | October 29, 2021 | Supported |
-| V13 Release - [KB4588753](https://support.microsoft.com/topic/632fb833-42ed-4e4d-8abd-746bd01c1064)| 13.0.0.0 | July 12, 2021 | Supported - Agent version expires on August 8, 2022 |
## Unsupported versions The following Azure File Sync agent versions have expired and are no longer supported: | Milestone | Agent version number | Release date | Status | |-|-|--||
+| V13 Release | 13.0.0.0 | N/A | Not Supported - Agent version expired on August 8, 2022 |
| V12 Release | 12.0.0.0 - 12.1.0.0 | N/A | Not Supported - Agent versions expired on May 23, 2022 | | V11 Release | 11.1.0.0 - 11.3.0.0 | N/A | Not Supported - Agent versions expired on March 28, 2022 | | V10 Release | 10.0.0.0 - 10.1.0.0 | N/A | Not Supported - Agent versions expired on June 28, 2021 |
The following items don't sync, but the rest of the system continues to operate
### Cloud tiering - If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations. - When copying files using robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files.-
-## Agent version 13.0.0.0
-The following release notes are for version 13.0.0.0 of the Azure File Sync agent (released July 12, 2021).
-
-### Improvements and issues that are fixed
-- Authoritative upload
- - Authoritative upload is a new mode available when creating the first server endpoint in a sync group. It is useful for the scenario where the cloud (Azure file share) has some/most of the data but is outdated and needs to be caught up with the more recent data on the new server endpoint. This is the case in offline migration scenarios like DataBox, for instance. When a DataBox is filled and sent to Azure, the users of the local server will keep changing / adding / deleting files on the local server. That makes the data in the DataBox and thus the Azure file share, slightly outdated. With Authoritative Upload, you can now tell the server and cloud, how to resolve this case and get the cloud seamlessly updated with the latest changes on the server.
-
- No matter how the data got to the cloud, this mode can update the Azure file share if the data stems from the matching location on the server. Be sure to avoid large directory restructures between the initial copy to the cloud and catching up with Authoritative Upload. This will ensure you are only transporting updates. Changes to directory names will cause all files in these renamed directories to be uploaded again. This functionality is comparable to semantics of RoboCopy /MIR = mirror source to target, including removing files on the target that no longer exist on the source.
-
- Authoritative Upload replaces the "Offline Data Transfer" feature for DataBox integration with Azure File Sync via a staging share. A staging share is no longer required to use DataBox. New Offline Data Transfer jobs can no longer be started with the AFS V13 agent. Existing jobs on a server will continue even with the upgrade to agent version 13.
--- Portal improvements to view cloud change enumeration and sync progress
- - When a new sync group is created, any connected server endpoint can only begin sync, when cloud change enumeration is complete. In case files already exist in the cloud endpoint (Azure file share) of this sync group, change enumeration of content in the cloud can take some time. The more items (files and folders) exist in the namespace, the longer this process can take. Admins will now be able to obtain cloud change enumeration progress in the Azure portal to estimate an eta for completion / sync to start with servers.
--- Support for server rename
- - If a registered server is renamed, Azure File Sync will now show the new server name in the portal. If the server was renamed prior to the v13 release, the server name in the portal will now be updated to show the correct server name.
--- Support for Windows Server 2022
- - The Azure File Sync agent is now supported on Windows Server 2022.
-
- > [!Note]
- > Windows Server 2022 adds support for TLS 1.3 which is not currently supported by Azure File Sync. If the [TLS settings](/windows-server/security/tls/tls-ssl-schannel-ssp-overview) are managed via group policy, the server must be configured to support TLS 1.2.
--- Miscellaneous improvements
- - Reliability improvements for sync, cloud tiering and cloud change enumeration.
- - If a large number of files is changed on the server, sync upload is now performed from a VSS snapshot which reduces per-item errors and sync session failures.
- - The Invoke-StorageSyncFileRecall cmdlet will now recall all tiered files associated with a server endpoint, even if the file has moved outside the server endpoint location.
- - Explorer.exe is now excluded from cloud tiering last access time tracking.
- - New telemetry (Event ID 6664) to monitor the orphaned tiered files cleanup progress after removing a server endpoint with cloud tiering enabled.
--
-### Evaluation Tool
-Before deploying Azure File Sync, you should evaluate whether it is compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
-
-### Agent installation and server configuration
-For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md).
--- A restart is required for servers that have an existing Azure File Sync agent installation if the agent version is less than version 12.0.-- The agent installation package must be installed with elevated (admin) permissions.-- The agent is not supported on Nano Server deployment option.-- The agent is supported only on Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, and Windows Server 2022.-- The agent requires at least 2 GiB of memory. If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum 2048 MiB of memory. See [Recommended system resources](file-sync-planning.md#recommended-system-resources) for more information.-- The Storage Sync Agent (FileSyncSvc) service does not support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results.-
-### Interoperability
-- Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](file-sync-troubleshoot.md).-- File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen.-- Running sysprep on a server that has the Azure File Sync agent installed is not supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup.-
-### Sync limitations
-The following items don't sync, but the rest of the system continues to operate normally:
-- Files with unsupported characters. See [Troubleshooting guide](file-sync-troubleshoot.md#handling-unsupported-characters) for list of unsupported characters.-- Files or directories that end with a period.-- Paths that are longer than 2,048 characters.-- The system access control list (SACL) portion of a security descriptor that's used for auditing.-- Extended attributes.-- Alternate data streams.-- Reparse points.-- Hard links.-- Compression (if it's set on a server file) isn't preserved when changes sync to that file from other endpoints.-- Any file that's encrypted with EFS (or other user mode encryption) that prevents the service from reading the data.-
- > [!Note]
- > Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
-
-### Server endpoint
-- A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync.-- Cloud tiering is not supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint.-- Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs).-- A server endpoint can't be nested. It can coexist on the same volume in parallel with another endpoint.-- Do not store an OS or application paging file within a server endpoint location.-
-### Cloud endpoint
-- Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet can be used to manually initiate the detection of changes in the Azure file share. In addition, changes made to an Azure file share over the REST protocol will not update the SMB last modified time and will not be seen as a change by sync.-- The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](file-sync-troubleshoot.md?tabs=portal1%252cportal#troubleshoot-rbac)).-
- > [!Note]
- > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
-
-### Cloud tiering
-- If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations.-- When copying files using robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files.
synapse-analytics Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/metadata/database.md
Lake databases do not allow creation of custom T-SQL objects, such as schemas, u
## Examples
+### Create workspace-level data reader
+
+A login with `GRANT CONNECT ANY DATABASE` and `GRANT SELECT ALL USER SECURABLES` permissions can read all tables through the serverless SQL pool, but it can't create SQL databases or modify the objects in them.
+
+```sql
+CREATE LOGIN [wsdatareader@contoso.com] FROM EXTERNAL PROVIDER
+GRANT CONNECT ANY DATABASE TO [wsdatareader@contoso.com]
+GRANT SELECT ALL USER SECURABLES TO [wsdatareader@contoso.com]
+```
+
+This script enables you to create users without admin privileges who can read any table in Lake databases.
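As a minimal sketch of what this reader login can and can't do, the following queries assume a Lake database named `mytestdb` containing a table named `mytable` (the table name is hypothetical and used only for illustration), run over a serverless SQL pool connection made as `wsdatareader@contoso.com`:

```sql
-- Reading succeeds: the login can select from any table in any database.
SELECT TOP 10 *
FROM mytestdb.dbo.mytable;

-- Creating or modifying objects fails: the login was granted no create or
-- alter permissions, so this statement returns a permission error.
CREATE DATABASE ThisWillFail;
```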
+ ### Create and connect to Spark database with serverless SQL pool First create a new Spark database named `mytestdb` using a Spark cluster you have already created in your workspace. You can achieve that, for example, using a Spark C# Notebook with the following .NET for Spark statement:
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/1-design-performance-migration.md
Previously updated : 07/12/2022 Last updated : 08/11/2022 # Design and performance for Netezza migrations
You should ensure that statistics on data tables are up to date by building in a
- CSV, PARQUET, and ORC file formats.
-#### Use workload management
+#### Workload management
-Use [workload management](../../sql-data-warehouse/sql-data-warehouse-workload-management.md?context=%2fazure%2fsynapse-analytics%2fcontext%2fcontext) instead of resource classes. ETL would be in its own workgroup and should be configured to have more resources per query (less concurrency by more resources). For more information, see [What is dedicated SQL pool in Azure Synapse Analytics](../../sql-data-warehouse/sql-data-warehouse-overview-what-is.md).
+Running mixed workloads can pose resource challenges on busy systems. A successful [workload management](../../sql-data-warehouse/sql-data-warehouse-workload-management.md) scheme effectively manages resources, ensures highly efficient resource utilization, and maximizes return on investment (ROI). [Workload classification](../../sql-data-warehouse/sql-data-warehouse-workload-classification.md), [workload importance](../../sql-data-warehouse/sql-data-warehouse-workload-importance.md), and [workload isolation](../../sql-data-warehouse/sql-data-warehouse-workload-isolation.md) give you more control over how your workload utilizes system resources.
+
+The [workload management guide](../../sql-data-warehouse/analyze-your-workload.md) describes the techniques to analyze your workload, [manage and monitor workload importance](../../sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance.md), and [convert a resource class to a workload group](../../sql-data-warehouse/sql-data-warehouse-how-to-convert-resource-classes-workload-groups.md). Use the [Azure portal](../../sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md) and [T-SQL queries on DMVs](../../sql-data-warehouse/sql-data-warehouse-manage-monitor.md) to monitor the workload and ensure that the applicable resources are efficiently utilized.
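As an illustrative sketch only (the group, classifier, and login names are hypothetical), the following T-SQL creates a workload group for data loads in a dedicated SQL pool and a classifier that routes requests from a `loaduser` login to that group with high importance:

```sql
-- Reserve 30% of system resources for data loads, cap the group at 60%,
-- and grant each request in the group at least 6% of system resources.
CREATE WORKLOAD GROUP DataLoads
WITH
(
    MIN_PERCENTAGE_RESOURCE = 30,
    CAP_PERCENTAGE_RESOURCE = 60,
    REQUEST_MIN_RESOURCE_GRANT_PERCENT = 6
);

-- Route requests submitted by the loaduser login to the DataLoads group
-- and give them higher scheduling importance than default requests.
CREATE WORKLOAD CLASSIFIER LoadClassifier
WITH
(
    WORKLOAD_GROUP = 'DataLoads',
    MEMBERNAME = 'loaduser',
    IMPORTANCE = HIGH
);
```

Requests classified into this group receive the reserved resources, which is the workload isolation behavior described above.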
## Next steps
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/3-security-access-operations.md
Previously updated : 06/01/2022 Last updated : 08/11/2022 # Security, access, and operations for Netezza migrations
User-defined restore points are also supported, allowing manual triggering of sn
As well as the snapshots described previously, Azure Synapse also performs as standard a geo-backup once per day to a [paired data center](/azure/best-practices-availability-paired-regions). The RPO for a geo-restore is 24 hours. You can restore the geo-backup to a server in any other region where Azure Synapse is supported. A geo-backup ensures that a data warehouse can be restored in case the restore points in the primary region aren't available. +
+### Workload management
+
+> [!TIP]
+> In a production data warehouse, there are typically mixed workloads with different resource usage characteristics running concurrently.
+
+Netezza incorporates various features for managing workloads:
+ | Technique | Description | |--|-| | **Scheduler rules** | Scheduler rules influence the scheduling of plans. Each scheduler rule specifies a condition or set of conditions. Each time the scheduler receives a plan, it evaluates all modifying scheduler rules and carries out the appropriate actions. Each time the scheduler selects a plan for execution, it evaluates all limiting scheduler rules. The plan is executed only if doing so wouldn't exceed a limit imposed by a limiting scheduler rule. Otherwise, the plan waits. This provides you with a way to classify and manipulate plans in a way that influences the other WLM techniques (SQB, GRA, and PQE). |
As well as the snapshots described previously, Azure Synapse also performs as st
| **Short query bias (SQB)** | Resources (that is, scheduling slots, memory, and preferential queuing) are reserved for short queries. A short query is a query for which the cost estimate is less than a specified maximum value (the default is two seconds). With SQB, short queries can run even when the system is busy processing other, longer queries. | | **Prioritized query execution (PQE)** | Based on settings that you configure, the system assigns a priority&mdash;critical, high, normal, or low&mdash;to each query. The priority depends on factors such as the user, group, or session associated with the query. The system can then use the priority as a basis for allocating resources. |
-### Workload management
+Azure Synapse automatically logs resource utilization statistics. Metrics include usage statistics for CPU, memory, cache, I/O, and temporary workspace for each query. Azure Synapse also logs connectivity information, such as failed connection attempts.
-> [!TIP]
-> In a production data warehouse, there are typically mixed workloads with different resource usage characteristics running concurrently.
-
-Netezza incorporates various features for managing workloads:
+>[!TIP]
+>Low-level and system-wide metrics are automatically logged within Azure.
In Azure Synapse, resource classes are pre-determined resource limits that govern compute resources and concurrency for query execution. Resource classes can help you manage your workload by setting limits on the number of queries that run concurrently and on the compute resources assigned to each query. There's a trade-off between memory and concurrency.
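For example (a minimal sketch with a hypothetical login name), you assign a user to a static resource class in a dedicated SQL pool by adding the user to the corresponding database role; that user's queries then run with the fixed memory grant of that class:

```sql
-- Give loaduser the staticrc40 resource class: a larger, fixed memory
-- grant per query in exchange for lower overall concurrency.
EXEC sp_addrolemember 'staticrc40', 'loaduser';
```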
-See [Resource classes for workload management](/azure/sql-data-warehouse/resource-classes-for-workload-management) for detailed information.
+Azure Synapse supports these basic workload management concepts:
+
+- **Workload classification**: you can assign a request to a workload group to set importance levels.
+
+- **Workload importance**: you can influence the order in which a request gets access to resources. By default, queries are released from the queue on a first-in, first-out basis as resources become available. Workload importance allows higher priority queries to receive resources immediately regardless of the queue.
+
+- **Workload isolation**: you can reserve resources for a workload group, assign maximum and minimum usage for varying resources, limit the resources a group of requests can consume, and set a timeout value to automatically kill runaway queries.
+
+Running mixed workloads can pose resource challenges on busy systems. A successful [workload management](../../sql-data-warehouse/sql-data-warehouse-workload-management.md) scheme effectively manages resources, ensures highly efficient resource utilization, and maximizes return on investment (ROI). [Workload classification](../../sql-data-warehouse/sql-data-warehouse-workload-classification.md), [workload importance](../../sql-data-warehouse/sql-data-warehouse-workload-importance.md), and [workload isolation](../../sql-data-warehouse/sql-data-warehouse-workload-isolation.md) give more control over how your workload utilizes system resources.
+
+The [workload management guide](../../sql-data-warehouse/analyze-your-workload.md) describes the techniques to analyze the workload, [manage and monitor workload importance](../../sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance.md), and the steps to [convert a resource class to a workload group](../../sql-data-warehouse/sql-data-warehouse-how-to-convert-resource-classes-workload-groups.md). Use the [Azure portal](../../sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md) and [T-SQL queries on DMVs](../../sql-data-warehouse/sql-data-warehouse-manage-monitor.md) to monitor the workload to ensure that the applicable resources are efficiently utilized. Azure Synapse provides a set of Dynamic Management Views (DMVs) for monitoring all aspects of workload management. These views are useful when actively troubleshooting and identifying performance bottlenecks in your workload.
This information can also be used for capacity planning, determining the resources required for additional users or application workload. This also applies to planning scale-up and scale-down of compute resources for cost-effective support of "spiky" workloads, such as workloads with temporary, intense bursts of activity surrounded by periods of infrequent activity.
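As a minimal monitoring sketch (not part of the linked guide), a query like the following over the `sys.dm_pdw_exec_requests` DMV in a dedicated SQL pool shows the workload group, classifier, and importance assigned to each active or queued request:

```sql
-- List running and queued requests with their workload management
-- assignments, longest-running first.
SELECT
    request_id,
    [status],
    submit_time,
    total_elapsed_time,
    group_name,
    classifier_name,
    importance,
    command
FROM sys.dm_pdw_exec_requests
WHERE [status] IN ('Running', 'Suspended')
ORDER BY total_elapsed_time DESC;
```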
+For more information on workload management in Azure Synapse, see [Workload management with resource classes](../../sql-data-warehouse/resource-classes-for-workload-management.md).
+ ### Scale compute resources > [!TIP]
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/4-visualization-reporting.md
Some BI tools have what is known as a semantic metadata layer. That layer simpli
>[!TIP] >Some BI tools have semantic layers that simplify business user access to physical data structures in your data warehouse or data mart.
-In a data warehouse migration, you might be forced to change column or table names. For example, Oracle allows a `#` character in table names, but Azure Synapse only allows `#` as a table name prefix to indicate a temporary table. In such cases, you might also need to change mappings.
+In a data warehouse migration, you might be forced to change column or table names. For example, IBM Netezza allows a `#` character anywhere in a table name, but Azure Synapse allows `#` only as a table name prefix, where it indicates a temporary table. In Netezza, temporary tables don't necessarily have a `#` in their name, but in Azure Synapse they must. You might need to rework table mappings in such cases.
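As a small sketch of the naming rule (the table and column names are hypothetical), a Netezza table such as `SALES#2022` would need to be renamed in Azure Synapse, while a temporary table must carry the `#` prefix:

```sql
-- In Azure Synapse, "#" is valid only as a table name prefix, and it
-- marks the table as a session-scoped temporary table.
CREATE TABLE #SalesStaging
(
    SaleId   INT           NOT NULL,
    SaleDate DATE          NOT NULL,
    Amount   DECIMAL(18,2) NOT NULL
);
```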
To achieve consistency across multiple BI tools, create a universal semantic layer by using a data virtualization server that sits between BI tools and applications and Azure Synapse. In the data virtualization server, use common data names for high-level objects like dimensions, measures, hierarchies, and joins. That way you configure everything, including calculated fields, joins, and mappings, only once instead of in every tool. Then, point all BI tools at the data virtualization server.
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/1-design-performance-migration.md
+
+ Title: "Design and performance for Oracle migrations"
+description: Learn how Oracle and Azure Synapse SQL databases differ in their approach to high query performance on exceptionally large data volumes.
+++
+ms.devlang:
++++ Last updated : 08/11/2022++
+# Design and performance for Oracle migrations
+
+This article is part one of a seven-part series that provides guidance on how to migrate from Oracle to Azure Synapse Analytics. The focus of this article is best practices for design and performance.
+
+## Overview
+
+Due to the cost and complexity of maintaining and upgrading legacy on-premises Oracle environments, many existing Oracle users want to take advantage of the innovations provided by modern cloud environments. Infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) cloud environments let you delegate tasks like infrastructure maintenance and platform development to the cloud provider.
+
+>[!TIP]
+>More than just a database&mdash;the Azure environment includes a comprehensive set of capabilities and tools.
+
+Although Oracle and Azure Synapse Analytics are both SQL databases that use massively parallel processing (MPP) techniques to achieve high query performance on exceptionally large data volumes, there are some basic differences in approach:
+
+- Legacy Oracle systems are often installed on-premises and use relatively expensive hardware, while Azure Synapse is cloud-based and uses Azure storage and compute resources.
+
+- Upgrading an Oracle configuration is a major task involving extra physical hardware and potentially lengthy database reconfiguration, or dump and reload. Because storage and compute resources are separate in the Azure environment and have elastic scaling capability, those resources can be scaled upwards or downwards independently.
+
+- You can pause or resize Azure Synapse as needed to reduce resource utilization and cost.
+
+Microsoft Azure is a globally available, highly secure, scalable cloud environment that includes Azure Synapse and an ecosystem of supporting tools and capabilities. The next diagram summarizes the Azure Synapse ecosystem.
++
+Azure Synapse provides best-of-breed relational database performance by using techniques such as MPP and automatic in-memory caching. You can see the results of these techniques in independent benchmarks such as the one run recently by [GigaOm](https://research.gigaom.com/report/data-warehouse-cloud-benchmark/), which compares Azure Synapse to other popular cloud data warehouse offerings. Customers who migrate to the Azure Synapse environment see many benefits, including:
+
+- Improved performance and price/performance.
+
+- Increased agility and shorter time to value.
+
+- Faster server deployment and application development.
+
+- Elastic scalability&mdash;only pay for actual usage.
+
+- Improved security/compliance.
+
+- Reduced storage and disaster recovery costs.
+
+- Lower overall TCO, better cost control, and streamlined operational expenditure (OPEX).
+
+To maximize these benefits, migrate new or existing data and applications to the Azure Synapse platform. In many organizations, migration includes moving an existing data warehouse from a legacy on-premises platform, such as Oracle, to Azure Synapse. At a high level, the migration process includes these steps:
+
+ :::column span="":::
+ &#160;&#160;&#160; **Preparation** &#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; &#129094;
+
+ - Define scope&mdash;what is to be migrated.
+
+ - Build inventory of data and processes for migration.
+
+ - Define data model changes (if any).
+
+ - Define source data extract mechanism.
+
+ - Identify the appropriate Azure and third-party tools and features to be used.
+
+ - Train staff early on the new platform.
+
+ - Set up the Azure target platform.
+
+ :::column-end:::
+ :::column span="":::
+ &#160;&#160;&#160; **Migration** &#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; &#129094;
+
+ - Start small and simple.
+
+ - Automate wherever possible.
+
+ - Leverage Azure built-in tools and features to reduce migration effort.
+
+ - Migrate metadata for tables and views.
+
+ - Migrate historical data to be maintained.
+
+ - Migrate or refactor stored procedures and business processes.
+
+ - Migrate or refactor ETL/ELT incremental load processes.
+
+ :::column-end:::
+ :::column span="":::
+ &#160;&#160;&#160; **Post migration**
+
+ - Monitor and document all stages of the process.
+
+ - Use the experience gained to build a template for future migrations.
+
+ - Re-engineer the data model if required (using new platform performance and scalability).
+
+ - Test applications and query tools.
+
+ - Benchmark and optimize query performance.
+
+ :::column-end:::
+
+This article provides general information and guidelines for performance optimization when migrating a data warehouse from an existing Oracle environment to Azure Synapse. The goal of performance optimization is to achieve the same or better data warehouse performance in Azure Synapse after the migration.
+
+## Design considerations
+
+### Migration scope
+
+When you're preparing to migrate from an Oracle environment, consider the following migration choices.
+
+#### Choose the workload for the initial migration
+
+Typically, legacy Oracle environments have evolved over time to encompass multiple subject areas and mixed workloads. When you're deciding where to start on a migration project, choose an area where you'll be able to:
+
+- Prove the viability of migrating to Azure Synapse by quickly delivering the benefits of the new environment.
+
+- Allow your in-house technical staff to gain relevant experience with the processes and tools that they'll use when they migrate other areas.
+
+- Create a template for further migrations that's specific to the source Oracle environment and the current tools and processes that are already in place.
+
+A good candidate for an initial migration from an Oracle environment supports the preceding items, and:
+
+- Implements a BI/Analytics workload rather than an online transaction processing (OLTP) workload.
+
+- Has a data model, such as a star or snowflake schema, that can be migrated with minimal modification.
+
+>[!TIP]
+>Create an inventory of objects that need to be migrated, and document the migration process.
+
+The volume of migrated data in an initial migration should be large enough to demonstrate the capabilities and benefits of the Azure Synapse environment, but small enough to demonstrate value quickly. A size in the 1-10 terabyte range is typical.
+
+An initial approach to a migration project is to minimize the risk, effort, and time needed so that you quickly see the benefits of the Azure cloud environment. The following [approaches](#lift-and-shift-migration-vs-phased-approach) limit the scope of the initial migration to just the data marts and don't address broader migration aspects, such as ETL migration and historical data migration. However, you can address those aspects in later phases of the project once the migrated data mart layer is backfilled with data and the required build processes are in place.
+
+#### Lift and shift migration vs. phased approach
+
+In general, there are two types of migration regardless of the purpose and scope of the planned migration: lift and shift as-is and a phased approach that incorporates changes.
+
+##### Lift and shift
+
+In a lift and shift migration, an existing data model, like a star schema, is migrated unchanged to the new Azure Synapse platform. This approach minimizes risk and migration time by reducing the work needed to realize the benefits of moving to the Azure cloud environment. Lift and shift migration is a good fit for these scenarios:
+
+- You have an existing Oracle environment with a single data mart to migrate, or
+- You have an existing Oracle environment with data that's already in a well-designed star or snowflake schema, or
+- You're under time and cost pressures to move to a modern cloud environment.
+
+>[!TIP]
+>Lift and shift is a good starting point, even if subsequent phases implement changes to the data model.
+
+##### Phased approach that incorporates changes
+
+If a legacy data warehouse has evolved over a long period of time, you might need to re-engineer it to maintain the required performance levels. You might also have to re-engineer to support new data like Internet of Things (IoT) streams. As part of the re-engineering process, migrate to Azure Synapse to get the benefits of a scalable cloud environment. Migration can include a change in the underlying data model, such as a move from an Inmon model to a data vault.
+
+Microsoft recommends moving your existing data model as-is to Azure and using the performance and flexibility of the Azure environment to apply the re-engineering changes. That way, you can use Azure's capabilities to make the changes without impacting the existing source system.
+
+#### Use Microsoft facilities to implement a metadata-driven migration
+
+You can automate and orchestrate the migration process by using the capabilities of the Azure environment. This approach minimizes the performance hit on the existing Oracle environment, which may already be running close to capacity.
+
+The [SQL Server Migration Assistant](/sql/ssma/oracle/sql-server-migration-assistant-for-oracle-oracletosql) (SSMA) for Oracle can automate many parts of the migration process, including in some cases functions and procedural code. SSMA supports Azure Synapse as a target environment.
+SSMA for Oracle can help you migrate an Oracle data warehouse or data mart to Azure Synapse. SSMA is designed to automate the process of migrating tables, views, and data from an existing Oracle environment.
+
+[Azure Data Factory](../../../data-factory/introduction.md) is a cloud-based data integration service that supports creating data-driven workflows in the cloud that orchestrate and automate data movement and data transformation. You can use Data Factory to create and schedule data-driven workflows (pipelines) that ingest data from disparate data stores. Data Factory can process and transform data by using compute services such as [Azure HDInsight Hadoop](/azure/hdinsight/hadoop/apache-hadoop-introduction), Spark, Azure Data Lake Analytics, and Azure Machine Learning.
+
+You can use Data Factory to migrate data from the source system to an Azure SQL target. This offline data movement helps to significantly reduce migration downtime.
+
+[Azure Database Migration Service](../../../dms/dms-overview.md) can help you plan and perform a migration from environments like Oracle.
+
+When you're planning to use Azure facilities to manage the migration process, create metadata that lists all the data tables to be migrated and their location.
+
+### Design differences between Oracle and Azure Synapse
+
+As mentioned earlier, there are some basic differences in approach between Oracle and Azure Synapse Analytics databases. [SSMA for Oracle](/sql/ssma/oracle/what-s-new-in-ssma-for-oracle-oracletosql?view=sql-server-ver16&preserve-view=true#ssma-v74) not only helps bridge these gaps but also automates the migration. Although SSMA isn't the most efficient approach for very high volumes of data, it's useful for smaller tables.
+
+#### Multiple databases vs. single database and schemas
+
+The Oracle environment often contains multiple separate databases. For instance, there could be separate databases for: data ingestion and staging tables, core warehouse tables, and data marts&mdash;sometimes referred to as the semantic layer. Processing in ETL or ELT pipelines can implement cross-database joins and move data between the separate databases.
+
+In contrast, the Azure Synapse environment contains a single database and uses schemas to separate tables into logically separate groups. We recommend that you use a series of schemas within the target Azure Synapse database to mimic the separate databases migrated from the Oracle environment. If the Oracle environment already uses schemas, you may need to use a new naming convention when you move the existing Oracle tables and views to the new environment. For example, you could concatenate the existing Oracle schema and table names into the new Azure Synapse table name, and use schema names in the new environment to maintain the original separate database names. Although you can use SQL views on top of the underlying tables to maintain the logical structures, there are potential downsides to that approach:
+
+- Views in Azure Synapse are read-only, so any updates to the data must take place on the underlying base tables.
+
+- There may already be one or more layers of views in existence, and adding an extra layer of views could affect performance.
+
+>[!TIP]
+>Combine multiple databases into a single database within Azure Synapse and use schema names to logically separate the tables.
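+
+For example, a minimal sketch of this approach might look like the following. The `staging` and `edw` schema names, and the former Oracle `STAGEDB.SALES.CUSTOMER` table, are hypothetical:
+
+```sql
+-- Create one schema per former Oracle database within the single Azure Synapse database.
+CREATE SCHEMA staging;
+GO
+CREATE SCHEMA edw;
+GO
+
+-- Recreate an Oracle table (originally STAGEDB.SALES.CUSTOMER), concatenating the
+-- original schema and table names to avoid naming collisions.
+CREATE TABLE staging.sales_customer
+(
+    customer_id   INT           NOT NULL,
+    customer_name NVARCHAR(200) NOT NULL
+)
+WITH (DISTRIBUTION = ROUND_ROBIN, HEAP);
+```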
+
+#### Table considerations
+
+When you migrate tables between different environments, typically only the raw data and the metadata that describes it physically migrate. Other database elements from the source system, such as indexes, usually aren't migrated because they might be unnecessary or implemented differently in the new environment.
+
+Performance optimizations in the source environment, such as indexes, indicate where you might add performance optimization in the new environment. For example, if queries in the source Oracle environment frequently use bit-mapped indexes, that suggests that a non-clustered index should be created within Azure Synapse. Other native performance optimization techniques like table replication may be more applicable than straight like-for-like index creation. SSMA for Oracle can be used to provide migration recommendations for table distribution and indexing.
+
+>[!TIP]
+>Existing indexes indicate candidates for indexing in the migrated warehouse.
+
+#### Unsupported Oracle database object types
+
+Oracle-specific features can often be replaced by Azure Synapse features. However, some Oracle database objects aren't directly supported in Azure Synapse. The following list of unsupported Oracle database objects describes how you can achieve equivalent functionality in Azure Synapse.
+
+- **Various indexing options**: in Oracle, several indexing options, such as bit-mapped indexes, function-based indexes, and domain indexes, have no direct equivalent in Azure Synapse.
+
+ You can find out which columns are indexed and the index type by:
+
+ - Querying system catalog tables and views, such as `ALL_INDEXES`, `DBA_INDEXES`, `USER_INDEXES`, and `DBA_IND_COL`. You can use the built-in queries in [Oracle SQL Developer](https://www.oracle.com/database/technologies/appdev/sqldeveloper-landing.html), as shown in the following screenshot.
+
+ :::image type="content" source="../media/1-design-performance-migration/oracle-sql-developer-queries-1.png" border="true" alt-text="Screenshot showing how to query system catalog tables and views in Oracle SQL Developer." lightbox="../media/1-design-performance-migration/oracle-sql-developer-queries-1-lrg.png":::
+
+ Or, run the following query to find all indexes of a given type:
+
+ ```sql
+ SELECT * FROM dba_indexes WHERE index_type LIKE 'FUNCTION-BASED%';
+ ```
+
+ - Querying the `dba_index_usage` or `v$object_usage` views when monitoring is enabled. You can query those views in Oracle SQL Developer, as shown in the following screenshot.
+
+ :::image type="content" source="../media/1-design-performance-migration/oracle-sql-developer-queries-2.png" border="true" alt-text="Screenshot showing how to find out which indexes are used in Oracle SQL Developer." lightbox="../media/1-design-performance-migration/oracle-sql-developer-queries-2-lrg.png":::
+
+ Function-based indexes, where the index contains the result of a function on the underlying data columns, have no direct equivalent in Azure Synapse. We recommend that you first migrate the data, then in Azure Synapse run the Oracle queries that use function-based indexes to gauge performance. If the performance of those queries in Azure Synapse isn't acceptable, consider creating a column that contains the pre-calculated value and then index that column.
+
+ When you configure the Azure Synapse environment, it makes sense to only implement in-use indexes. Azure Synapse currently supports the index types shown here:
+
+ :::image type="content" source="../media/1-design-performance-migration/azure-synapse-analytics-index-types.png" border="true" alt-text="Screenshot showing the index types that Azure Synapse supports." lightbox="../media/1-design-performance-migration/azure-synapse-analytics-index-types-lrg.png":::
+
+ Azure Synapse features, such as parallel query processing and in-memory caching of data and results, make it likely that fewer indexes are required for data warehouse applications to achieve performance goals. We recommend that you use the following index types in Azure Synapse:
+
+ - **Clustered columnstore indexes**: when no index options are specified for a table, Azure Synapse by default creates a clustered [columnstore index](/sql/relational-databases/indexes/columnstore-indexes-design-guidance). Clustered columnstore tables offer the highest level of data compression, best overall query performance, and generally outperform clustered index or heap tables. A clustered columnstore index is usually the best choice for large tables. When you [create a table](/sql/t-sql/statements/create-table-azure-sql-data-warehouse), choose clustered columnstore if you're unsure how to index your table. However, there are some scenarios where clustered columnstore indexes aren't the best option:
+
+ - Tables with varchar(max), nvarchar(max), or varbinary(max) data types, because a clustered columnstore index doesn't support those data types. Instead, consider using a heap or clustered index.
+ - Tables with transient data, because columnstore tables might be less efficient than heap or temporary tables.
+ - Small tables with less than 100 million rows. Instead, consider using heap tables.
+
+ - **Clustered and nonclustered indexes**: clustered indexes can outperform clustered columnstore indexes when a single row needs to be quickly retrieved. For queries where a single row lookup, or just a few row lookups, must perform at extreme speed, consider using a cluster index or nonclustered secondary index. The disadvantage of using a clustered index is that only queries with a highly selective filter on the clustered index column will benefit. To improve filtering on other columns, you can add a nonclustered index to the other columns. However, each index that you add to a table uses more space and increases the processing time to load.
+
+ - **Heap tables**: when you're temporarily landing data on Azure Synapse, you might find that using a heap table makes the overall process faster. This is because loading data to heap tables is faster than loading data to index tables, and in some cases subsequent reads can be done from cache. If you're loading data only to stage it before running more transformations, it's much faster to load it to a heap table than a clustered columnstore table. Also, loading data to a [temporary table](../../sql-data-warehouse/sql-data-warehouse-tables-temporary.md) is faster than loading a table to permanent storage. For small lookup tables with less than 100 million rows, heap tables are usually the right choice. Cluster columnstore tables begin to achieve optimal compression when they contain more than 100 million rows.
+
+- **Clustered tables**: Oracle tables can be organized so that table rows that are frequently accessed together (based on a common value) are physically stored together to reduce disk I/O when data is retrieved. Oracle also provides a hash-cluster option for individual tables, which applies a hash value to the cluster key and physically stores rows with the same hash value together. To list clusters within an Oracle database, use the `SELECT * FROM DBA_CLUSTERS;` query. To determine whether a table is within a cluster, use the `SELECT * FROM TAB;` query, which shows the table name and cluster ID for each table.
+
+ In Azure Synapse, you can achieve similar results by using materialized and/or replicated tables, because those table types minimize the I/O required at query run time.
+
+- **Materialized views**: Oracle supports [materialized views](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql) and recommends using one or more for large tables with many columns where only a few columns are regularly used in queries. Materialized views are automatically refreshed by the system when data in the base table is updated.
+
+ In 2019, Microsoft announced that Azure Synapse will support materialized views with the same functionality as in Oracle. Materialized views are now a preview feature in Azure Synapse.
+
+- **In-database triggers**: in Oracle, a trigger can be configured to automatically run when a triggering event occurs. Triggering events can be:
+
+ - A data manipulation language (DML) statement, such as `INSERT`, `UPDATE`, or `DELETE`, runs on a table. If you defined a trigger that fires before an `INSERT` statement on a customer table, the trigger will fire once before a new row is inserted into the customer table.
+
+ - A DDL statement, such as `CREATE` or `ALTER`, runs. This trigger is often used for auditing purposes to record schema changes.
+
+ - A system event, such as startup or shutdown of the Oracle database.
+
+ - A user event, such as sign in or sign out.
+
+ You can get a list of the triggers defined in an Oracle database by querying the `ALL_TRIGGERS`, `DBA_TRIGGERS`, or `USER_TRIGGERS` views. The following screenshot shows a `DBA_TRIGGERS` query in Oracle SQL Developer.
+
+ :::image type="content" source="../media/1-design-performance-migration/oracle-sql-developer-triggers.png" border="true" alt-text="Screenshot showing how to query for a list of triggers in Oracle SQL Developer." lightbox="../media/1-design-performance-migration/oracle-sql-developer-triggers-lrg.png":::
+
+ Azure Synapse doesn't support Oracle database triggers. However, you can add equivalent functionality by using Data Factory, although doing so will require you to refactor the processes that use triggers.
+
+- **Synonyms**: Oracle supports defining synonyms as alternative names for several database object types. Those object types include: tables, views, sequences, procedures, stored functions, packages, materialized views, Java class schema objects, user-defined objects, or another synonym.
+
+ Azure Synapse doesn't currently support defining synonyms, although if a synonym in Oracle refers to a table or view, then you can define a view in Azure Synapse to match the alternative name. If a synonym in Oracle refers to a function or stored procedure, then in Azure Synapse you can create another function or stored procedure, with a name to match the synonym, that calls the target.
+
+- **User-defined types**: Oracle supports user-defined objects that can contain a series of individual fields, each with their own definition and default values. Those objects can be referenced within a table definition in the same way as built-in data types like `NUMBER` or `VARCHAR`. You can get a list of user-defined types within an Oracle database by querying the `ALL_TYPES`, `DBA_TYPES`, or `USER_TYPES` views.
+
+ Azure Synapse doesn't currently support user-defined types. If the data you need to migrate includes user-defined data types, either "flatten" them into a conventional table definition, or if they're arrays of data, normalize them in a separate table.
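+
+To gauge the scope of this work, a query along the following lines lists the user-defined types in the schemas being migrated. The owner filter is illustrative and should be adjusted for your environment:
+
+```sql
+-- List user-defined object and collection types, excluding Oracle-supplied schemas.
+SELECT owner,
+       type_name,
+       typecode,
+       attributes
+FROM dba_types
+WHERE owner NOT IN ('SYS', 'SYSTEM')
+ORDER BY owner, type_name;
+```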
+
+#### Oracle data type mapping
+
+Most Oracle data types have a direct equivalent in Azure Synapse. The following table shows the recommended approach for mapping Oracle data types to Azure Synapse.
+
+| Oracle Data Type | Azure Synapse Data Type |
+|-|-|
+| BFILE | Not supported. Map to VARBINARY (MAX). |
+| BINARY_FLOAT | Not supported. Map to FLOAT. |
+| BINARY_DOUBLE | Not supported. Map to FLOAT(53). |
+| BLOB | Not directly supported. Replace with VARBINARY(MAX). |
+| CHAR | CHAR |
+| CLOB | Not directly supported. Replace with VARCHAR(MAX). |
+| DATE | DATE in Oracle can also contain time information. Depending on usage map to DATE or TIMESTAMP. |
+| DECIMAL | DECIMAL |
+| DOUBLE PRECISION | FLOAT(53) |
+| FLOAT | FLOAT |
+| INTEGER | INT |
+| INTERVAL YEAR TO MONTH | INTERVAL data types aren't supported. Use date functions, such as DATEDIFF and DATEADD, for date calculations. |
+| INTERVAL DAY TO SECOND | INTERVAL data types aren't supported. Use date functions, such as DATEDIFF and DATEADD, for date calculations. |
+| LONG | Not supported. Map to VARCHAR(MAX). |
+| LONG RAW | Not supported. Map to VARBINARY(MAX). |
+| NCHAR | NCHAR |
+| NVARCHAR2 | NVARCHAR |
+| NUMBER | Not directly supported. Map to NUMERIC or FLOAT, depending on precision and scale. |
+| NCLOB | Not directly supported. Replace with NVARCHAR(MAX). |
+| NUMERIC | NUMERIC |
+| ORD media data types | Not supported |
+| RAW | Not supported. Map to VARBINARY. |
+| REAL | REAL |
+| ROWID | Not supported. Map to GUID, which is similar. |
+| SDO Geospatial data types | Not supported |
+| SMALLINT | SMALLINT |
+| TIMESTAMP | DATETIME2 or the CURRENT_TIMESTAMP() function |
+| TIMESTAMP WITH LOCAL TIME ZONE | Not supported. Map to DATETIMEOFFSET. |
+| TIMESTAMP WITH TIME ZONE | Not supported because TIME is stored using wall-clock time without a time zone offset. |
+| URIType | Not supported. Store in a VARCHAR. |
+| UROWID | Not supported. Map to GUID, which is similar. |
+| VARCHAR | VARCHAR |
+| VARCHAR2 | VARCHAR |
+| XMLType | Not supported. Store XML data in a VARCHAR. |
+
+As noted in [Unsupported Oracle database object types](#unsupported-oracle-database-object-types), Azure Synapse doesn't currently support Oracle user-defined types. If the data you need to migrate includes user-defined data types, either "flatten" them into a conventional table definition or, if they're arrays of data, normalize them in a separate table.
+
+>[!TIP]
+>Assess the number and type of unsupported data types during the migration preparation phase.
+
+Third-party vendors offer tools and services to automate migration, including the mapping of data types. If a [third-party](../../partner/data-integration.md) ETL tool is already in use in the Oracle environment, use that tool to implement any required data transformations.
+
+#### SQL DML syntax differences
+
+SQL DML syntax differences exist between Oracle SQL and Azure Synapse T-SQL. Those differences are discussed in detail in [Minimize SQL issues for Oracle migrations](5-minimize-sql-issues.md#sql-ddl-differences-between-oracle-and-azure-synapse). In some cases, you can automate DML migration by using Microsoft tools like SSMA for Oracle and Azure Database Migration Service, or [third-party](../../partner/data-integration.md) migration products and services.
+
+#### Functions, stored procedures, and sequences
+
+When migrating a data warehouse from a mature environment like Oracle, you probably need to migrate elements other than simple tables and views. Check whether tools within the Azure environment can replace the functionality of functions, stored procedures, and sequences because it's usually more efficient to use built-in Azure tools than to recode them for Azure Synapse.
+
+As part of your preparation phase, create an inventory of objects that need to be migrated, define a method for handling them, and allocate appropriate resources in your migration plan.
+
+Microsoft tools like SSMA for Oracle and Azure Database Migration Services, or [third-party](../../partner/data-integration.md) migration products and services, can automate the migration of functions, stored procedures, and sequences.
+
+The following sections further discuss the migration of functions, stored procedures, and sequences.
+
+##### Functions
+
+As with most database products, Oracle supports system and user-defined functions within a SQL implementation. When you migrate a legacy database platform to Azure Synapse, common system functions can usually be migrated without change. Some system functions might have a slightly different syntax, but any required changes can be automated. You can get a list of functions within an Oracle database by querying the `ALL_OBJECTS` view with the appropriate `WHERE` clause, or by browsing them in Oracle SQL Developer.
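+
+For example, a query along the following lines returns the user-defined functions to inventory. The owner filter is illustrative and should be adjusted for your environment:
+
+```sql
+-- List user-defined functions in the schemas being migrated.
+SELECT owner,
+       object_name,
+       status,
+       last_ddl_time
+FROM all_objects
+WHERE object_type = 'FUNCTION'
+  AND owner NOT IN ('SYS', 'SYSTEM')
+ORDER BY owner, object_name;
+```
+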
+For Oracle system functions or arbitrary user-defined functions that have no equivalent in Azure Synapse, recode those functions in a language supported by the target environment. Oracle user-defined functions are coded in PL/SQL, Java, or C. Azure Synapse uses the Transact-SQL language to implement user-defined functions.
+
+##### Stored procedures
+
+Most modern database products support storing procedures within the database. Oracle provides the PL/SQL language for this purpose. A stored procedure typically contains both SQL statements and procedural logic, and returns data or a status. You can get a list of stored procedures within an Oracle database by querying the `ALL_OBJECTS` view with the appropriate `WHERE` clause, or by browsing them in Oracle SQL Developer.
+Azure Synapse supports stored procedures using T-SQL, so you'll need to recode any migrated stored procedures in that language.
+
+##### Sequences
+
+In Oracle, a sequence is a named database object, created using `CREATE SEQUENCE`. A sequence provides unique numeric values via the `CURRVAL` and `NEXTVAL` methods. You can use the generated unique numbers as surrogate key values for primary keys.
+
+Azure Synapse doesn't implement `CREATE SEQUENCE`, but you can implement sequences using [IDENTITY](/sql/t-sql/statements/create-table-transact-sql-identity-property) columns or SQL code that generates the next sequence number in a series.
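+
+A minimal sketch of the `IDENTITY` approach, using a hypothetical dimension table, might look like this:
+
+```sql
+-- Surrogate key generated by IDENTITY in place of an Oracle sequence.
+-- Note: IDENTITY values in a dedicated SQL pool are unique but not guaranteed to be contiguous.
+CREATE TABLE dbo.dim_customer
+(
+    customer_key  INT IDENTITY(1, 1) NOT NULL,
+    customer_id   INT                NOT NULL,
+    customer_name NVARCHAR(200)      NOT NULL
+)
+WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED INDEX (customer_key));
+```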
+
+### Extracting metadata and data from an Oracle environment
+
+#### Data Definition Language generation
+
+The ANSI SQL standard defines the basic syntax for Data Definition Language (DDL) commands. Some DDL commands, such as `CREATE TABLE` and `CREATE VIEW`, are common to both Oracle and Azure Synapse but also provide implementation-specific features such as indexing, table distribution, and partitioning options.
+
+You can edit existing Oracle `CREATE TABLE` and `CREATE VIEW` scripts to achieve equivalent definitions in Azure Synapse. To do so, you might need to use [modified data types](#oracle-data-type-mapping) and remove or modify Oracle-specific clauses such as `TABLESPACE`.
+
+Within the Oracle environment, system catalog tables specify the current table and view definition. Unlike user-maintained documentation, system catalog information is always complete and in sync with current table definitions. You can access system catalog information using utilities such as Oracle SQL Developer. Oracle SQL Developer can generate `CREATE TABLE` DDL statements that you can edit to create equivalent tables in Azure Synapse.
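+
+If you prefer to script this step, the `DBMS_METADATA` package can generate the `CREATE TABLE` DDL for each table so that you can edit it, for example to remove `TABLESPACE` and storage clauses, before applying it to Azure Synapse. The `SALES.FACT_ORDERS` table in this sketch is hypothetical:
+
+```sql
+-- SQL*Plus / SQLcl setting to avoid truncating the returned CLOB.
+SET LONG 100000
+
+-- Generate the CREATE TABLE statement for a hypothetical SALES.FACT_ORDERS table.
+SELECT DBMS_METADATA.GET_DDL('TABLE', 'FACT_ORDERS', 'SALES') AS table_ddl
+FROM dual;
+```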
+
+Or, you can use SSMA for Oracle to migrate tables from an existing Oracle environment to Azure Synapse. SSMA for Oracle applies the appropriate data type mappings and recommended table and distribution types.
+You can also use [third-party](../../partner/data-integration.md) migration and ETL tools that process system catalog information to achieve similar results.
+
+#### Data extraction from Oracle
+
+You can extract raw table data from Oracle tables to flat delimited files, such as CSV files, using standard Oracle utilities like Oracle SQL Developer, [SQL\*Plus](https://www.oracle.com/database/technologies/instant-client/downloads.html), and [SQLcl](https://www.oracle.com/database/technologies/appdev/sqlcl.html). Then, you can compress the flat delimited files using gzip, and upload the compressed files to Azure Blob Storage using AzCopy or Azure data transport tools like Azure Data Box.
+
+Extract table data as efficiently as possible&mdash;especially when migrating large fact tables. For Oracle tables, use parallelism to maximize extraction throughput. You can achieve parallelism by running multiple processes that individually extract discrete segments of data, or by using tools capable of automating parallel extraction through partitioning.
+
+>[!TIP]
+>Use parallelism for the most efficient data extraction.
+
+If sufficient network bandwidth is available, you can extract data from an on-premises Oracle system directly into Azure Synapse tables or Azure Blob Storage. To do so, use Data Factory processes, Azure Database Migration Service, or [third-party](../../partner/data-integration.md) data migration or ETL products.
+
+Extracted data files should use delimited text (CSV), Optimized Row Columnar (ORC), or Parquet format.
+
+For more information on migrating data and ETL from an Oracle environment, see [Data migration, ETL, and load for Oracle migrations](2-etl-load-migration-considerations.md).
+
+## Performance recommendations for Oracle migrations
+
+The goal of performance optimization is to achieve the same or better data warehouse performance after migration to Azure Synapse.
+
+### Similarities in performance tuning approach concepts
+
+Many performance tuning concepts for Oracle databases hold true for Azure Synapse databases. For example:
+
+- Use data distribution to colocate data that will be joined on the same processing node.
+
+- Use the smallest data type for a given column to save storage space and accelerate query processing.
+
+- Ensure that columns to be joined have the same data type in order to optimize join processing and reduce the need for data transforms.
+
+- To help the optimizer produce the best execution plan, ensure statistics are up to date.
+
+- Monitor performance using built-in database capabilities to ensure that resources are being used efficiently.
+
+>[!TIP]
+>Prioritize familiarity with Azure Synapse tuning options at the start of a migration.
+
+### Differences in performance tuning approach
+
+This section highlights low-level performance tuning implementation differences between Oracle and Azure Synapse.
+
+#### Data distribution options
+
+For performance, Azure Synapse was designed with a multi-node architecture and uses parallel processing. To optimize table performance in Azure Synapse, you can define a data distribution option in `CREATE TABLE` statements by using the `DISTRIBUTION` clause. For example, you can specify a hash-distributed table, which distributes table rows across compute nodes by using a deterministic hash function. Many Oracle implementations, especially older on-premises systems, don't support this feature.
+
+Unlike Oracle, Azure Synapse supports local joins between a small table and a large table through small table replication. For instance, consider a small dimension table and a large fact table within a star schema model. Azure Synapse can replicate the smaller dimension table across all nodes to ensure that the value of any join key for the large table has a matching, locally available dimension row. The overhead of dimension table replication is relatively low for a small dimension table. For large dimension tables, a hash distribution approach is more appropriate. For more information on data distribution options, see [Design guidance for using replicated tables](../../sql-data-warehouse/design-guidance-for-replicated-tables.md) and [Guidance for designing distributed tables](../../sql-data-warehouse/sql-data-warehouse-tables-distribute.md).
+
+>[!TIP]
+>Hash distribution improves query performance on large fact tables. Round-robin distribution is useful for improving loading speed.
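+
+For example, a hash-distributed fact table alongside a replicated dimension table might be defined as follows. The table and column names are hypothetical:
+
+```sql
+-- Large fact table: hash-distributed on a high-cardinality join key.
+CREATE TABLE edw.fact_sales
+(
+    date_key    INT            NOT NULL,
+    store_key   INT            NOT NULL,
+    product_key INT            NOT NULL,
+    amount      DECIMAL(18, 2) NOT NULL
+)
+WITH (DISTRIBUTION = HASH(product_key), CLUSTERED COLUMNSTORE INDEX);
+
+-- Small dimension table: replicated to every compute node to enable local joins.
+CREATE TABLE edw.dim_store
+(
+    store_key  INT           NOT NULL,
+    store_name NVARCHAR(100) NOT NULL
+)
+WITH (DISTRIBUTION = REPLICATE, CLUSTERED INDEX (store_key));
+```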
+
+Hash distribution can be applied on multiple columns for a more even distribution of the base table. Multi-column distribution allows you to choose up to eight columns for distribution, which reduces data skew over time and improves query performance.
+
+> [!NOTE]
+> Multi-column distribution is currently in preview for Azure Synapse Analytics. You can use multi-column distribution with [CREATE MATERIALIZED VIEW](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true), [CREATE TABLE](/sql/t-sql/statements/create-table-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true), and [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
+
+#### Distribution Advisor
+
+In Azure Synapse SQL, the way each table is distributed can be customized. The table distribution strategy affects query performance substantially.
+
+The distribution advisor is a new feature in Synapse SQL that analyzes queries and recommends the best distribution strategies for tables to improve query performance. The queries that the advisor analyzes can be provided by you or pulled from your historical queries in the DMVs.
+
+For details and examples on how to use the distribution advisor, visit [Distribution Advisor in Azure Synapse SQL](../../sql/distribution-advisor.md).
+#### Data indexing
+
+Azure Synapse supports several user-definable indexing options that have a different operation and usage compared to system-managed zone maps in Oracle. For more information about the different indexing options in Azure Synapse, see [Indexes on dedicated SQL pool tables](../../sql-data-warehouse/sql-data-warehouse-tables-index.md).
+
+Index definitions within a source Oracle environment provide a useful indication of data usage and the candidate columns for indexing in the Azure Synapse environment. Typically, you won't need to migrate every index from a legacy Oracle environment because Azure Synapse doesn't over-rely on indexes and implements the following features to achieve outstanding performance:
+
+- Parallel query processing.
+
+- In-memory data and result set caching.
+
+- Data distribution, such as replication of small dimension tables, to reduce I/O.
+
+#### Data partitioning
+
+In an enterprise data warehouse, fact tables can contain billions of rows. Partitioning optimizes the maintenance and querying of these tables by splitting them into separate parts to reduce the amount of data processed. In Azure Synapse, the `CREATE TABLE` statement defines the partitioning specification for a table.
+
+You can only use one field per table for partitioning. That field is frequently a date field because many queries are filtered by date or a date range. It's possible to change the partitioning of a table after initial load by using the `CREATE TABLE AS` (CTAS) statement to recreate the table with a new partitioning scheme. For a detailed discussion of partitioning in Azure Synapse, see [Partitioning tables in dedicated SQL pool](/azure/sql-data-warehouse/sql-data-warehouse-tables-partition).
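+
+For example, a date-partitioned fact table might be declared as follows. The table, columns, and boundary values are hypothetical:
+
+```sql
+-- Fact table partitioned by order date, with yearly boundary values.
+CREATE TABLE edw.fact_orders
+(
+    order_date DATE           NOT NULL,
+    order_id   BIGINT         NOT NULL,
+    amount     DECIMAL(18, 2) NOT NULL
+)
+WITH
+(
+    DISTRIBUTION = HASH(order_id),
+    CLUSTERED COLUMNSTORE INDEX,
+    PARTITION (order_date RANGE RIGHT FOR VALUES
+        ('2021-01-01', '2022-01-01', '2023-01-01'))
+);
+```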
+
+#### PolyBase or COPY INTO for data loading
+
+[PolyBase](/sql/relational-databases/polybase) supports efficient loading of large amounts of data to a data warehouse by using parallel loading streams. For more information, see [PolyBase data loading strategy](../../sql/load-data-overview.md).
+
+[COPY INTO](/sql/t-sql/statements/copy-into-transact-sql) also supports high-throughput data ingestion, and:
+
+- Data retrieval from all files within a folder and subfolders.
+- Data retrieval from multiple locations in the same storage account. You can specify multiple locations by using comma separated paths.
+- [Azure Data Lake Storage](../../../storage/blobs/data-lake-storage-introduction.md) (ADLS) and Azure Blob Storage.
+- CSV, PARQUET, and ORC file formats.
+
+>[!TIP]
+> The recommended method for data loading is to use `COPY INTO` along with PARQUET file format.
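+
+A minimal sketch of that approach, with a hypothetical storage account, container, and target table, might look like this:
+
+```sql
+-- Load Parquet files from a hypothetical ADLS folder into a staging table.
+COPY INTO staging.fact_orders
+FROM 'https://mystorageaccount.dfs.core.windows.net/landing/orders/*.parquet'
+WITH (
+    FILE_TYPE = 'PARQUET',
+    CREDENTIAL = (IDENTITY = 'Managed Identity')
+);
+```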
+
+#### Workload management
+
+Running mixed workloads can pose resource challenges on busy systems. A successful [workload management](../../sql-data-warehouse/sql-data-warehouse-workload-management.md) scheme effectively manages resources, ensures highly efficient resource utilization, and maximizes return on investment (ROI). [Workload classification](../../sql-data-warehouse/sql-data-warehouse-workload-classification.md), [workload importance](../../sql-data-warehouse/sql-data-warehouse-workload-importance.md), and [workload isolation](../../sql-data-warehouse/sql-data-warehouse-workload-isolation.md) give you more control over how workloads use system resources.
+
+The [workload management guide](../../sql-data-warehouse/analyze-your-workload.md) describes the techniques to analyze the workload, [manage and monitor workload importance](../../sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance.md), and the steps to [convert a resource class to a workload group](../../sql-data-warehouse/sql-data-warehouse-how-to-convert-resource-classes-workload-groups.md). Use the [Azure portal](../../sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md) and [T-SQL queries on DMVs](../../sql-data-warehouse/sql-data-warehouse-manage-monitor.md) to monitor the workload to ensure that the applicable resources are efficiently utilized.
+
+## Next steps
+
+To learn about ETL and load for Oracle migration, see the next article in this series: [Data migration, ETL, and load for Oracle migrations](2-etl-load-migration-considerations.md).
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/2-etl-load-migration-considerations.md
+
+ Title: "Data migration, ETL, and load for Oracle migrations"
+description: Learn how to plan your data migration from Oracle to Azure Synapse Analytics to minimize the risk and impact on users.
+++
+ms.devlang:
++++ Last updated : 07/15/2022++
+# Data migration, ETL, and load for Oracle migrations
+
+This article is part two of a seven-part series that provides guidance on how to migrate from Oracle to Azure Synapse Analytics. The focus of this article is best practices for ETL and load migration.
+
+## Data migration considerations
+
+There are many factors to consider when migrating data, ETL, and loads from a legacy Oracle data warehouse and data marts to Azure Synapse.
+
+### Initial decisions about data migration from Oracle
+
+When you're planning a migration from an existing Oracle environment, consider the following data-related questions:
+
+- Should unused table structures be migrated?
+
+- What's the best migration approach to minimize risk and impact for users?
+
+- When migrating data marts: stay physical or go virtual?
+
+The next sections discuss these points within the context of a migration from Oracle.
+
+#### Migrate unused tables?
+
+It makes sense to only migrate tables that are in use. Tables that aren't active can be archived rather than migrated, so that the data is available if needed in the future. It's best to use system metadata and log files rather than documentation to determine which tables are in use, because documentation can be out of date.
+
+Oracle system catalog tables and logs contain information that can be used to determine when a given table was last accessed&mdash;which in turn can be used to decide whether or not a table is a candidate for migration.
+
+If you've licensed the [Oracle Diagnostic Pack](https://www.oracle.com/technetwork/database/enterprise-edition/overview/diagnostic-pack-11g-datasheet-1-129197.pdf), then you have access to Active Session History, which you can use to determine when a table was last accessed.
+
+>[!TIP]
+>In legacy systems, it's not unusual for tables to become redundant over time&mdash;these don't need to be migrated in most cases.
+
+Here's an example query that looks for the usage of a specific table within a given time window:
+
+```sql
+SELECT du.username,
+       s.sql_text,
+       MAX(ash.sample_time) AS last_access,
+       sp.object_owner,
+       sp.object_name,
+       sp.object_alias AS aliased_as,
+       sp.object_type,
+       COUNT(*) AS access_count
+FROM v$active_session_history ash
+JOIN v$sql s ON ash.force_matching_signature = s.force_matching_signature
+LEFT JOIN v$sql_plan sp ON s.sql_id = sp.sql_id
+JOIN dba_users du ON ash.user_id = du.user_id
+WHERE ash.session_type = 'FOREGROUND'
+  AND ash.sql_id IS NOT NULL
+  AND sp.object_name IS NOT NULL
+  AND ash.user_id <> 0
+GROUP BY du.username,
+         s.sql_text,
+         sp.object_owner,
+         sp.object_name,
+         sp.object_alias,
+         sp.object_type
+ORDER BY 3 DESC;
+```
+
+This query may take a while to run if you have been running numerous queries.
+
+#### What's the best migration approach to minimize risk and impact on users?
+
+This question comes up frequently because companies may want to lower the impact of changes on the data warehouse data model to improve agility. Companies often see an opportunity to further modernize or transform their data during an ETL migration. This approach carries a higher risk because it changes multiple factors simultaneously, making it difficult to compare the outcomes of the old system versus the new. Making data model changes here could also affect upstream or downstream ETL jobs to other systems. Because of that risk, it's better to redesign on this scale after the data warehouse migration.
+
+Even if a data model is intentionally changed as part of the overall migration, it's good practice to migrate the existing model as-is to Azure Synapse, rather than do any re-engineering on the new platform. This approach minimizes the effect on existing production systems, while benefiting from the performance and elastic scalability of the Azure platform for one-off re-engineering tasks.
+
+>[!TIP]
+>Migrate the existing model as-is initially, even if a change to the data model is planned in the future.
+
+#### Data mart migration: stay physical or go virtual?
+
+In legacy Oracle data warehouse environments, it's common practice to create many data marts that are structured to provide good performance for ad hoc self-service queries and reports for a given department or business function within an organization. A data mart typically consists of a subset of the data warehouse that contains aggregated versions of the data in a form that enables users to easily query that data with fast response times. Users can use user-friendly query tools like Microsoft Power BI, which supports business user interactions with [data marts](/power-bi/transform-model/datamarts/datamarts-overview). The form of the data in a data mart is generally a dimensional data model. One use of data marts is to expose the data in a usable form even if the underlying warehouse data model is something different, such as a data vault.
+
+You can use separate data marts for individual business units within an organization to implement robust data security regimes. Restrict access to specific data marts that are relevant to users, and eliminate, obfuscate, or anonymize sensitive data.
+
+If these data marts are implemented as physical tables, they'll require extra storage resources and processing to build and refresh them regularly. Also, the data in the mart will only be as up to date as the last refresh operation, and so may be unsuitable for highly volatile data dashboards.
+
+>[!TIP]
+>Virtualizing data marts can save on storage and processing resources.
+
+With the advent of lower-cost scalable MPP architectures, such as Azure Synapse, and their inherent performance characteristics, you can provide data mart functionality without instantiating the mart as a set of physical tables. One method is to effectively virtualize the data marts via SQL views onto the main data warehouse. Another way is to virtualize the data marts via a virtualization layer using features like views in Azure or [third-party](../../partner/data-integration.md) virtualization products. This approach simplifies or eliminates the need for extra storage and aggregation processing and reduces the overall number of database objects to be migrated.
+
+There's another potential benefit of this approach. By implementing the aggregation and join logic within a virtualization layer, and presenting external reporting tools via a virtualized view, the processing required to create these views is pushed down into the data warehouse. The data warehouse is generally the best place to run joins, aggregations, and other related operations on large data volumes.
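+
+For example, a virtual data mart object can be as simple as a view over the warehouse tables, so the aggregation work is pushed down to Azure Synapse at query time. All names in this sketch are hypothetical:
+
+```sql
+-- A virtual data mart object: the join and aggregation run in the warehouse when queried.
+CREATE VIEW mart_finance.daily_revenue
+AS
+SELECT d.calendar_date,
+       s.store_name,
+       SUM(f.amount) AS total_revenue
+FROM edw.fact_sales AS f
+JOIN edw.dim_date   AS d ON f.date_key = d.date_key
+JOIN edw.dim_store  AS s ON f.store_key = s.store_key
+GROUP BY d.calendar_date, s.store_name;
+```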
+
+The primary drivers for implementing a virtual data mart over a physical data mart are:
+
+- More agility: a virtual data mart is easier to change than physical tables and the associated ETL processes.
+
+- Lower total cost of ownership: a virtualized implementation requires fewer data stores and copies of data.
+
+- Elimination of ETL jobs that would otherwise need to be migrated, and a simpler overall data warehouse architecture in a virtualized environment.
+
+- Performance: although physical data marts have historically performed better, virtualization products now implement intelligent caching techniques to mitigate this difference.
+
+>[!TIP]
+>The performance and scalability of Azure Synapse enables virtualization without sacrificing performance.
+
+### Data migration from Oracle
+
+#### Understand your data
+
+As part of migration planning, you should understand in detail the volume of data to be migrated since that can affect decisions about the migration approach. Use system metadata to determine the physical space taken up by the raw data within the tables to be migrated. In this context, raw data means the amount of space used by the data rows within a table, excluding overhead such as indexes and compression. The largest fact tables will typically comprise more than 95% of the data.
+
+This query will give you the total database size in Oracle:
+
+```sql
+SELECT
+ ( SELECT SUM(bytes)/1024/1024/1024 data_size
+ FROM sys.dba_data_files ) +
+ ( SELECT NVL(sum(bytes),0)/1024/1024/1024 temp_size
+ FROM sys.dba_temp_files ) +
+ ( SELECT SUM(bytes)/1024/1024/1024 redo_size
+ FROM sys.v_$log ) +
+ ( SELECT SUM(BLOCK_SIZE*FILE_SIZE_BLKS)/1024/1024/1024 controlfile_size
+ FROM v$controlfile ) "Size in GB"
+FROM dual;
+```
+
+The database size equals the size of `(data files + temp files + online/offline redo log files + control files)`. Overall database size includes used space and free space.
+
+The following example query gives a breakdown of the disk space used by table data and indexes:
+
+```sql
+SELECT owner, "Type", table_name "Name", TRUNC(SUM(bytes)/1024/1024) Meg
+FROM (
+    SELECT segment_name table_name, owner, bytes, 'Table' AS "Type"
+    FROM dba_segments
+    WHERE segment_type IN ('TABLE', 'TABLE PARTITION', 'TABLE SUBPARTITION')
+    UNION ALL
+    SELECT i.table_name, i.owner, s.bytes, 'Index' AS "Type"
+    FROM dba_indexes i, dba_segments s
+    WHERE s.segment_name = i.index_name
+      AND s.owner = i.owner
+      AND s.segment_type IN ('INDEX', 'INDEX PARTITION', 'INDEX SUBPARTITION')
+    UNION ALL
+    SELECT l.table_name, l.owner, s.bytes, 'LOB' AS "Type"
+    FROM dba_lobs l, dba_segments s
+    WHERE s.segment_name = l.segment_name
+      AND s.owner = l.owner
+      AND s.segment_type IN ('LOBSEGMENT', 'LOB PARTITION', 'LOB SUBPARTITION')
+    UNION ALL
+    SELECT l.table_name, l.owner, s.bytes, 'LOB Index' AS "Type"
+    FROM dba_lobs l, dba_segments s
+    WHERE s.segment_name = l.index_name
+      AND s.owner = l.owner
+      AND s.segment_type = 'LOBINDEX'
+)
+WHERE owner IN UPPER('&owner')
+GROUP BY table_name, owner, "Type"
+HAVING SUM(bytes)/1024/1024 > 10  /* Ignore really small tables */
+ORDER BY SUM(bytes) DESC;
+```
+
+In addition, the Microsoft database migration team provides many resources, including the [Oracle Inventory Script Artifacts](https://www.microsoft.com/download/details.aspx?id=103121). The Oracle Inventory Script Artifacts tool includes a PL/SQL query that accesses Oracle system tables and provides a count of objects by schema type, object type, and status. The tool also provides a rough estimate of raw data in each schema and the sizing of tables in each schema, with results stored in a CSV format. An included calculator spreadsheet takes the CSV as input and provides sizing data.
+
+For any table, you can accurately estimate the volume of data that needs to be migrated by extracting a representative sample of the data, such as one million rows, to an uncompressed delimited flat ASCII data file. Then, use the size of that file to get an average raw data size per row. Finally, multiply that average size by the total number of rows in the full table to give a raw data size for the table. Use that raw data size in your planning.
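+
+For example, if a one-million-row sample extracts to a 250 MB uncompressed flat file, the average raw row size is roughly 250 bytes, so a table with 2 billion rows would be estimated at approximately 500 GB of raw data. These figures are illustrative only.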
+
+#### Use SQL queries to find data types
+
+By querying the Oracle static data dictionary `DBA_TAB_COLUMNS` view, you can determine which data types are in use in a schema and whether any of those data types need to be changed. Use SQL queries to find the columns in any Oracle schema with data types that don't map directly to data types in Azure Synapse. Similarly, you can use queries to count the number of occurrences of each Oracle data type that doesn't map directly to Azure Synapse. By using the results from these queries in combination with the data type comparison table, you can determine which data types need to be changed in an Azure Synapse environment.
+
+To find the columns with data types that don't map to data types in Azure Synapse, run the following query after you replace `<owner_name>` with the relevant owner of your schema:
+
+```sql
+SELECT owner, table_name, column_name, data_type
+FROM dba_tab_columns
+WHERE owner in ('<owner_name>')
+AND data_type NOT IN 
+ ('BINARY_DOUBLE', 'BINARY_FLOAT', 'CHAR', 'DATE', 'DECIMAL', 'FLOAT', 'LONG', 'LONG RAW', 'NCHAR', 'NUMERIC', 'NUMBER', 'NVARCHAR2', 'SMALLINT', 'RAW', 'REAL', 'VARCHAR2', 'XML_TYPE') 
+ORDER BY 1,2,3;
+```
+
+To count the number of non-mappable data types, use the following query:
+
+```sql 
+SELECT data_type, count(*) 
+FROM dba_tab_columns 
+WHERE data_type NOT IN 
+ ('BINARY_DOUBLE', 'BINARY_FLOAT', 'CHAR', 'DATE', 'DECIMAL', 'FLOAT', 'LONG', 'LONG RAW', 'NCHAR', 'NUMERIC', 'NUMBER', 'NVARCHAR2', 'SMALLINT', 'RAW', 'REAL', 'VARCHAR2', 'XML_TYPE') 
+GROUP BY data_type 
+ORDER BY data_type;
+```
+
+Microsoft offers [SQL Server Migration Assistant](/sql/ssma/oracle/sql-server-migration-assistant-for-oracle-oracletosql) (SSMA) for Oracle, which can automate data type mapping as part of a migration. If a [third-party](../../partner/data-integration.md) ETL tool is already in use in the Oracle environment, you can use that tool to implement any required data transformations. The next section explores migration of existing ETL processes.
+
+## ETL migration considerations
+
+### Initial decisions about Oracle ETL migration
+
+For ETL/ELT processing, legacy Oracle data warehouses often use custom-built scripts, [third-party](../../partner/data-integration.md) ETL tools, or a combination of approaches that has evolved over time. When you're planning a migration to Azure Synapse, determine the best way to implement the required ETL/ELT processing in the new environment while also minimizing cost and risk.
+
+>[!TIP]
+>Plan the approach to ETL migration ahead of time and leverage Azure facilities where appropriate.
+
+The following flowchart summarizes one approach:
+As shown in the flowchart, the initial step is always to build an inventory of ETL/ELT processes that need to be migrated. With the standard built-in Azure features, some existing processes might not need to move. For planning purposes, it's important that you understand the scale of the migration. Next, consider the questions in the flowchart decision tree:
+
+1. **Move to native Azure?** Your answer depends on whether you're migrating to a completely Azure-native environment. If so, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](../../../data-factory/concepts-pipelines-activities.md) or [Azure Synapse pipelines](../../get-started-pipelines.md).
+
+1. **Using a third-party ETL tool?** If you're not moving to a completely Azure-native environment, then check whether an existing [third-party](../../partner/data-integration.md) ETL tool is already in use. In the Oracle environment, you might find that some or all of the ETL processing is performed by custom scripts using Oracle-specific utilities such as Oracle SQL Developer, Oracle SQL\*Loader, or Oracle Data Pump. The approach in this case is to re-engineer using Azure Data Factory.
+
+1. **Does the third-party support dedicated SQL pools within Azure Synapse?** Consider whether there's a large investment in skills in the third-party ETL tool, or if existing workflows and schedules use that tool. If so, determine whether the tool can efficiently support Azure Synapse as a target environment. Ideally, the tool will include native connectors that can use Azure facilities like [PolyBase](../../sql/load-data-overview.md) or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql) for the most efficient data loading. But even without native connectors, there's generally a way that you can call external processes, such as PolyBase or `COPY INTO`, and pass in applicable parameters. In this case, use existing skills and workflows, with Azure Synapse as the new target environment.
+
+ If you're using Oracle Data Integrator (ODI) for ELT processing, then you need ODI Knowledge Modules for Azure Synapse. If those modules aren't available to you in your organization, but you have ODI, then you can use ODI to generate flat files. Those flat files can then be moved to Azure and ingested into [Azure Data Lake Storage](../../../storage/blobs/data-lake-storage-introduction.md) for loading into Azure Synapse.
+
+1. **Run ETL tools in Azure?** If you decide to retain an existing third-party ETL tool, you can run that tool within the Azure environment (rather than on an existing on-premises ETL server) and have Data Factory handle the overall orchestration of the existing workflows. So, decide whether to leave the existing tool running as-is or move it into the Azure environment to achieve cost, performance, and scalability benefits.
+
+>[!TIP]
+>Consider running ETL tools in Azure to leverage performance, scalability, and cost benefits.
+
+### Re-engineer existing Oracle-specific scripts
+
+If some or all of the existing Oracle warehouse ETL/ELT processing is handled by custom scripts that use Oracle-specific utilities, such as Oracle SQL\*Plus, Oracle SQL Developer, Oracle SQL\*Loader, or Oracle Data Pump, then you need to recode these scripts for the Azure Synapse environment. Similarly, if ETL processes have been implemented using stored procedures in Oracle, then you need to recode those processes.
+
+Some elements of the ETL process are easy to migrate, for example, by simple bulk data load into a staging table from an external file. It may even be possible to automate those parts of the process, for example, by using Azure Synapse `COPY INTO` or PolyBase instead of SQL\*Loader. Other parts of the process that contain arbitrarily complex SQL and/or stored procedures will take more time to re-engineer.
+
+>[!TIP]
+>The inventory of ETL tasks to be migrated should include scripts and stored procedures.
+
+One way of testing Oracle SQL for compatibility with Azure Synapse is to capture some representative SQL statements from a join of Oracle `v$active_session_history` and `v$sql` to get `sql_text`, and then prefix those queries with `EXPLAIN`. Assuming a like-for-like migrated data model in Azure Synapse, run those `EXPLAIN` statements in Azure Synapse. Any incompatible SQL statement returns an error, and you can use this information to determine the scale of the recoding task.
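+
+For example, you might capture candidate statements on the Oracle side with a query like the following, and then test each captured statement in Azure Synapse by prefixing it with `EXPLAIN`. The Azure Synapse table in the second statement is hypothetical:
+
+```sql
+-- On Oracle: capture distinct recent query text for compatibility testing.
+SELECT DISTINCT s.sql_text
+FROM v$active_session_history ash
+JOIN v$sql s ON ash.sql_id = s.sql_id
+WHERE ash.session_type = 'FOREGROUND'
+  AND s.command_type = 3;   -- 3 = SELECT
+
+-- On Azure Synapse: prefix a captured statement with EXPLAIN; incompatible SQL returns an error.
+EXPLAIN
+SELECT store_key, SUM(amount) AS total_amount
+FROM edw.fact_sales
+GROUP BY store_key;
+```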
+
+>[!TIP]
+>Use `EXPLAIN` to find SQL incompatibilities.
+
+In the worst case, manual recoding may be necessary. However, there are products and services available from [Microsoft partners](../../partner/data-integration.md) to assist with re-engineering Oracle-specific code.
+
+>[!TIP]
+>Partners offer products and skills to assist in re-engineering Oracle-specific code.
+
+### Use existing third-party ETL tools
+
+In many cases, the existing legacy data warehouse system will already be populated and maintained by a third-party ETL product. See [Azure Synapse Analytics data integration partners](../../partner/data-integration.md) for a list of current Microsoft data integration partners for Azure Synapse.
+
+ The Oracle community frequently uses several popular ETL products. The following paragraphs discuss the most popular ETL tools for Oracle warehouses. You can run all of those products within a VM in Azure and use them to read and write Azure databases and files.
+
+>[!TIP]
+>Leverage investment in existing third-party tools to reduce cost and risk.
+
+## Data loading from Oracle
+
+### Choices available when loading data from Oracle
+
+When you're preparing to migrate data from an Oracle data warehouse, decide how data will be physically moved from the existing on-premises environment into Azure Synapse in the cloud, and which tools will be used to perform the transfer and load. Consider the following questions, which are discussed in the sections that follow.
+
+- Will you extract the data to files, or move it directly via a network connection?
+
+- Will you orchestrate the process from the source system, or from the Azure target environment?
+
+- Which tools will you use to automate and manage the migration process?
+
+#### Transfer data via files or network connection?
+
+Once the database tables to be migrated have been created in Azure Synapse, you can move the data that populates those tables out of the legacy Oracle system and into the new environment. There are two basic approaches:
+
+- **File Extract**: extract the data from the Oracle tables to flat delimited files, normally in CSV format. You can extract table data in several ways:
+
+ - Use standard Oracle tools such as SQL\*Plus, SQL Developer, and SQLcl.
+ - Use [Oracle Data Integrator](https://www.oracle.com/middleware/technologies/data-integrator.html) (ODI) to generate flat files.
+ - Use Oracle connector in Data Factory to unload Oracle tables in parallel to enable data loading by partitions.
+ - Use a [third-party](../../partner/data-integration.md) ETL tool.
+
+ For examples of how to extract Oracle table data, see the article [appendix](#appendix-examples-of-techniques-to-extract-oracle-data).
+
+ This approach requires space to land the extracted data files. The space could be local to the Oracle source database if sufficient storage is available, or remote in Azure Blob Storage. The best performance is achieved when a file is written locally since that avoids network overhead.
+
+ To minimize storage and network transfer requirements, compress the extracted data files using a utility like gzip.
+
+ After extraction, move the flat files into Azure Blob Storage. Microsoft provides various options to move large volumes of data, including:
+ - [AzCopy](../../../storage/common/storage-use-azcopy-v10.md) for moving files across the network into Azure Storage.
+ - [Azure ExpressRoute](../../../expressroute/expressroute-introduction.md) for moving bulk data over a private network connection.
+ - [Azure Data Box](../../../databox/data-box-overview.md) for moving files to a physical storage device that you ship to an Azure data center for loading.
+
+ For more information, see [Transfer data to and from Azure](/azure/architecture/data-guide/scenarios/data-transfer).
+
+- **Direct extract and load across network**: the target Azure environment sends a data extract request, normally via a SQL command, to the legacy Oracle system to extract the data. The results are sent across the network and loaded directly into Azure Synapse, with no need to land the data into intermediate files. The limiting factor in this scenario is normally the bandwidth of the network connection between the Oracle database and the Azure environment. For exceptionally large data volumes, this approach may not be practical.
+
+>[!TIP]
+>Understand the data volumes to be migrated and the available network bandwidth, because these factors influence the migration approach decision.
+
+There's also a hybrid approach that uses both methods. For example, you can use the direct network extract approach for smaller dimension tables and samples of the larger fact tables to quickly provide a test environment in Azure Synapse. For large-volume historical fact tables, you can use the file extract and transfer approach using Azure Data Box.
+
+#### Orchestrate from Oracle or Azure?
+
+The recommended approach when moving to Azure Synapse is to orchestrate data extraction and loading from the Azure environment using SSMA or [Data Factory](../../../data-factory/concepts-pipelines-activities.md). Use the associated utilities, such as PolyBase or `COPY INTO`, for the most efficient data loading. This approach benefits from built-in Azure capabilities and reduces the effort to build reusable data load pipelines. You can use metadata-driven data load pipelines to automate the migration process.
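+
+As a sketch of the PolyBase route, extracted files landed in Azure Storage can be exposed as an external table and then loaded with a CTAS statement. The external data source (`AzureBlobExtracts`), file format (`CsvFileFormat`), and the table and column names below are assumed to exist and are for illustration only.
+
+```sql
+-- Sketch only: expose extracted CSV files as an external table, then load a
+-- hash-distributed internal table with CTAS.
+CREATE EXTERNAL TABLE ext.sales_extract
+(
+    sale_id     BIGINT,
+    cust_id     INT,
+    sale_date   DATE,
+    sale_amount DECIMAL(18,2)
+)
+WITH (
+    LOCATION    = '/extracts/sales/',
+    DATA_SOURCE = AzureBlobExtracts,
+    FILE_FORMAT = CsvFileFormat
+);
+
+CREATE TABLE dbo.fact_sales
+WITH (
+    DISTRIBUTION = HASH(cust_id),
+    CLUSTERED COLUMNSTORE INDEX
+)
+AS
+SELECT * FROM ext.sales_extract;
+```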
+
+The recommended approach also minimizes the performance hit on the existing Oracle environment during the data load process, because the management and load process runs in Azure.
+
+#### Existing data migration tools
+
+Data transformation and movement is the basic function of all ETL products. If a data migration tool is already in use in the existing Oracle environment and it supports Azure Synapse as a target environment, then consider using that tool to simplify data migration.
+
+Even if an existing ETL tool isn't in place, [Azure Synapse Analytics data integration partners](../../partner/data-integration.md) offer ETL tools to simplify the task of data migration.
+
+Finally, if you plan to use an ETL tool, consider running that tool within the Azure environment to take advantage of Azure cloud performance, scalability, and cost. This approach also frees up resources in the Oracle data center.
+
+## Summary
+
+To summarize, our recommendations for migrating data and associated ETL processes from Oracle to Azure Synapse are:
+
+- Plan ahead to ensure a successful migration exercise.
+
+- Build a detailed inventory of data and processes to be migrated as soon as possible.
+
+- Use system metadata and log files to get an accurate understanding of data and process usage. Don't rely on documentation since it may be out of date.
+
+- Understand the data volumes to be migrated, and the network bandwidth between the on-premises data center and Azure cloud environments.
+
+- Consider using an Oracle instance in an Azure VM as a stepping stone to offload migration from the legacy Oracle environment.
+
+- Use standard built-in Azure features to minimize the migration workload.
+
+- Identify and understand the most efficient tools for data extraction and load in both Oracle and Azure environments. Use the appropriate tools in each phase of the process.
+
+- Use Azure facilities, such as Data Factory, to orchestrate and automate the migration process while minimizing impact on the Oracle system.
+
+## Appendix: Examples of techniques to extract Oracle data
+
+You can use several techniques to extract Oracle data when migrating from Oracle to Azure Synapse. The next sections demonstrate how to extract Oracle data using Oracle SQL Developer and the Oracle connector in Data Factory.
+
+### Use Oracle SQL Developer for data extraction
+
+You can use the Oracle SQL Developer UI to export table data to many formats, including CSV, as shown in the following screenshot:
++
+Other export options include JSON and XML. You can use the UI to add a set of table names to a "cart", then apply the export to the entire set in the cart:
++
+You can also use Oracle SQL Developer Command Line (SQLcl) to export Oracle data. This option supports automation using a shell script.
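+
+For example, a SQLcl export to CSV might look like the following sketch; the table name and output file path are placeholders.
+
+```sql
+-- Sketch only: spool a table to a delimited file from SQLcl.
+SET SQLFORMAT csv
+SPOOL /tmp/sales_extract.csv
+SELECT * FROM sales.fact_sales;
+SPOOL OFF
+```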
+
+For relatively small tables, you might find this technique useful if you run into problems extracting data through a direct connection.
+
+### Use the Oracle connector in Azure Data Factory for parallel copy
+
+You can use the Oracle connector in Data Factory to unload large Oracle tables in parallel. The Oracle connector provides built-in data partitioning to copy data from Oracle in parallel. You can find the data partitioning options in the *Source* tab of the copy activity.
++
+For information on how to configure the Oracle connector for parallel copy, see [Parallel copy from Oracle](/azure/data-factory/connector-oracle?tabs=data-factory#parallel-copy-from-oracle).
+
+For more information on Data Factory copy activity performance and scalability, see [Copy activity performance and scalability guide](../../../data-factory/copy-activity-performance.md).
+
+## Next steps
+
+To learn about security access operations, see the next article in this series: [Security, access, and operations for Oracle migrations](3-security-access-operations.md).
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/3-security-access-operations.md
+
+ Title: "Security, access, and operations for Oracle migrations"
+description: Learn about authentication, users, roles, permissions, monitoring, and auditing, and workload management in Azure Synapse Analytics and Oracle.
+++
+ms.devlang:
++++ Last updated : 08/11/2022++
+# Security, access, and operations for Oracle migrations
+
+This article is part three of a seven-part series that provides guidance on how to migrate from Oracle to Azure Synapse Analytics. The focus of this article is best practices for security, access, and operations.
+
+## Security considerations
+
+The Oracle environment offers several methods for access and authentication that you might need to migrate to Azure Synapse with minimal risk and user impact. The article assumes you want to migrate the existing connection methods and the user, role, and permission structures as-is. If that's not the case, then use the Azure portal to create and manage a new security regime.
+
+For more information about Azure Synapse security options, see [Azure Synapse Analytics security](../../guidance/security-white-paper-introduction.md).
+
+### Connection and authentication
+
+Authentication is the process of verifying the identity of a user, device, or other entity in a computer system, generally as a prerequisite to granting access to resources in a system.
+
+>[!TIP]
+>Authentication in both Oracle and Azure Synapse can be "in database" or via external methods.
+
+#### Oracle authorization options
+
+The Oracle system offers these authentication methods for database users:
+
+- **Database authentication**: with database authentication, the Oracle database administers the user account and authenticates the user. For the Oracle database to perform authentication, it generates a password for new users and stores passwords in encrypted format. Users can change their password at any time. Oracle recommends password management through account locking, password aging and expiration, password history, and password complexity verification. Database authentication is common in older Oracle installations.
+
+- **External authentication**: with external authentication, the Oracle database maintains the user account, and an external service performs password administration and user authentication. The external service can be an operating system or a network service like Oracle Net. The database relies on the underlying operating system or network authentication service to restrict access to database accounts. This type of sign-in doesn't use a database password. There are two external authentication options:
+
+ - **Operating system authentication**: by default, Oracle requires a secure connection for logins that the operating system authenticates to prevent a remote user from impersonating an operating system user over a network connection. This requirement precludes the use of Oracle Net and a shared-server configuration.
+
+ - **Network authentication**: several network authentication mechanisms are available, such as smart cards, fingerprints, Kerberos, and the operating system. Many network authentication services, such as Kerberos, support single sign-on so users have fewer passwords to remember.
+
+- **Global authentication and authorization**: with global authentication and authorization, you can centralize management of user-related information, including authorizations, in an LDAP-based directory service. Users are identified in the database as global users, which means they're authenticated by TLS/SSL and user management occurs outside the database. The centralized directory service performs user management. This approach provides strong authentication using TLS/SSL, Kerberos, or Windows-native authentication, and enables centralized management of users and privileges across the enterprise. Administration is easier because it's not necessary to create a schema for every user in every database in the enterprise. Single sign-on is also supported, so that users only need to sign in once to access multiple databases and services.
+
+- **Proxy authentication and authorization**: you can designate a middle-tier server to proxy clients in a secure fashion. Oracle provides various options for proxy authentication, such as:
+
+ - The middle-tier server can authenticate itself with the database server. A client, which in this case is an application user or another application, authenticates itself with the middle-tier server. Client identities can be maintained all the way through to the database.
+
+ - The client, which in this case is a database user, isn't authenticated by the middle-tier server. The client's identity and database password are passed through the middle-tier server to the database server for authentication.
+
+ - The client, which in this case is a global user, is authenticated by the middle-tier server, and passes either a distinguished name (DN) or certificate through the middle tier for retrieving the client's username.
+
+#### Azure Synapse authorization options
+
+Azure Synapse supports two basic options for connection and authorization:
+
+- **SQL authentication**: SQL authentication uses a database connection that includes a database identifier, user ID, and password, plus other optional parameters. This method of authentication is functionally equivalent to Oracle [database authentication](#oracle-authorization-options).
+
+- **Azure AD authentication**: with Azure AD authentication, you can centrally manage the identities of database users and Microsoft services in one location. Centralized management provides a single place to manage Azure Synapse users and simplifies permission management. Azure AD authentication supports connections to LDAP and Kerberos services. For example, you can use Azure AD authentication to connect to existing LDAP directories if they're to remain in place after migration of the database.
+
+### Users, roles, and permissions
+
+Both Oracle and Azure Synapse implement database access control via a combination of users, roles, and permissions. You can use standard SQL statements `CREATE USER` and `CREATE ROLE/GROUP` to define users and roles. `GRANT` and `REVOKE` statements assign or remove permissions to users and/or roles.
+
+>[!TIP]
+>Planning is essential for a successful migration project. Start with high-level approach decisions.
+
+Conceptually, Oracle and Azure Synapse databases are similar, and to some degree it's possible to automate the migration of existing user IDs, groups, and permissions. Extract the legacy user and group information from the Oracle system catalog tables, then generate matching equivalent `CREATE USER` and `CREATE ROLE` statements. Run those statements in Azure Synapse to recreate the same user/role hierarchy.
+
+>[!TIP]
+>If possible, automate migration processes to reduce elapsed time and scope for error.
+
+After data extraction, use the Oracle system catalog tables to generate equivalent `GRANT` statements to assign permissions, where an Azure Synapse equivalent exists.
++
+#### Users and roles
+
+The information about current users and groups in an Oracle system is held in system catalog views, such as `ALL_USERS` and `DBA_USERS`. You can query these views in the normal way via Oracle SQL\*Plus or Oracle SQL Developer. The following queries are basic examples:
+
+```sql
+--List of users
+select * from dba_users order by username;
+
+--List of roles
+select * from dba_roles order by role;
+
+--List of users and their associated roles
+select * from user_role_privs order by username, granted_role;
+```
+
+Oracle SQL Developer has built-in options to display user and role information in the *Reports* pane, as shown in the following screenshot.
++
+You can modify the example `SELECT` statement to produce a result set that is a series of `CREATE USER` and `CREATE ROLE` statements. To do so, include the appropriate text as a literal within the `SELECT` statement.
+
+There's no way to retrieve existing Oracle passwords, so you need to implement a scheme for allocating new initial passwords on Azure Synapse.
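+
+The following sketch generates `CREATE LOGIN`, `CREATE USER`, and `CREATE ROLE` statements from the Oracle catalog. The placeholder initial password and the lowercased names are assumptions; run the generated `CREATE LOGIN` statements in the Azure Synapse `master` database and the rest in the target database.
+
+```sql
+-- Run on Oracle: build CREATE LOGIN statements for Azure Synapse (master database).
+-- 'ChangeMe!2022' is a placeholder initial password. Filter out Oracle-maintained
+-- accounts (SYS, SYSTEM, and so on) as needed.
+SELECT 'CREATE LOGIN ' || LOWER(username) || ' WITH PASSWORD = ''ChangeMe!2022'';'
+FROM   dba_users
+ORDER  BY username;
+
+-- Run on Oracle: build CREATE USER statements for the target Azure Synapse database.
+SELECT 'CREATE USER ' || LOWER(username) || ' FOR LOGIN ' || LOWER(username) || ';'
+FROM   dba_users
+ORDER  BY username;
+
+-- Run on Oracle: build CREATE ROLE statements for Azure Synapse.
+SELECT 'CREATE ROLE ' || LOWER(role) || ';'
+FROM   dba_roles
+ORDER  BY role;
+```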
+
+>[!TIP]
+>Migration of a data warehouse requires migrating more than just tables, views, and SQL statements.
+
+#### Permissions
+
+In an Oracle system, the system `DBA_ROLE_PRIVS` view holds the access rights for users and roles. If you have `SELECT` access, you can query that view to obtain the current access rights lists defined within the system. The following Oracle SQL Developer screenshot shows an example access rights list.
++
+You can also create queries to produce a script that's a series of `CREATE` and `GRANT` statements for Azure Synapse, based on the existing Oracle privileges. The following Oracle SQL Developer screenshot shows an example of that script.
++
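+
+A minimal sketch of such a query is shown below. It only covers table privileges that map directly to Azure Synapse (`SELECT`, `INSERT`, `UPDATE`, `DELETE`) and role membership, and it assumes the users and roles have already been created in Azure Synapse.
+
+```sql
+-- Run on Oracle: generate GRANT statements for table privileges that have a
+-- direct Azure Synapse equivalent. Filter schemas and grantees as needed.
+SELECT 'GRANT ' || privilege || ' ON ' || LOWER(owner) || '.' || LOWER(table_name)
+       || ' TO ' || LOWER(grantee) || ';'
+FROM   dba_tab_privs
+WHERE  privilege IN ('SELECT', 'INSERT', 'UPDATE', 'DELETE')
+ORDER  BY grantee, owner, table_name;
+
+-- Run on Oracle: generate role membership assignments for Azure Synapse.
+-- The grantee can be a user or another role.
+SELECT 'EXEC sp_addrolemember ''' || LOWER(granted_role) || ''', ''' || LOWER(grantee) || ''';'
+FROM   dba_role_privs
+ORDER  BY grantee, granted_role;
+```
+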
+The following table lists and describes the data dictionary views required to view user, role, and privilege information.
+
+| View | Description |
+|--|--|
+| DBA_COL_PRIVS<br>ALL_COL_PRIVS<br>USER_COL_PRIVS | The DBA view describes all column object grants in the database. The ALL view describes all column object grants for which the current user or PUBLIC is the object owner, grantor, or grantee. The USER view describes column object grants for which the current user is the object owner, grantor, or grantee. |
+| ALL_COL_PRIVS_MADE<br>USER_COL_PRIVS_MADE | The ALL view lists column object grants for which the current user is the object owner or grantor. The USER view describes column object grants for which the current user is the grantor. |
+| ALL_COL_PRIVS_RECD<br>USER_COL_PRIVS_RECD | The ALL view describes column object grants for which the current user or PUBLIC is the grantee. The USER view describes column object grants for which the current user is the grantee. |
+| DBA_TAB_PRIVS<br>ALL_TAB_PRIVS<br>USER_TAB_PRIVS | The DBA view lists all grants on all objects in the database. The ALL view lists the grants on objects where the user or PUBLIC is the grantee. The USER view lists grants on all objects where the current user is the grantee. |
+| ALL_TAB_PRIVS_MADE<br>USER_TAB_PRIVS_MADE | The ALL view lists object grants made by the current user or made on the objects owned by the current user. The USER view lists grants on all objects owned by the current user. |
+| ALL_TAB_PRIVS_RECD<br>USER_TAB_PRIVS_RECD | The ALL view lists object grants for which the user or PUBLIC is the grantee. The USER view lists object grants for which the current user is the grantee. |
+| DBA_ROLES | This view lists all roles that exist in the database. |
+| DBA_ROLE_PRIVS<br>USER_ROLE_PRIVS | The DBA view lists roles granted to users and roles. The USER view lists roles granted to the current user. |
+| DBA_SYS_PRIVS<br>USER_SYS_PRIVS | The DBA view lists system privileges granted to users and roles. The USER view lists system privileges granted to the current user. |
+| ROLE_ROLE_PRIVS | This view describes roles granted to other roles. Information is provided only about roles to which the user has access. |
+| ROLE_SYS_PRIVS | This view contains information about system privileges granted to roles. Information is provided only about roles to which the user has access. |
+| ROLE_TAB_PRIVS | This view contains information about object privileges granted to roles. Information is provided only about roles to which the user has access. |
+| SESSION_PRIVS | This view lists the privileges that are currently enabled for the user. |
+| SESSION_ROLES | This view lists the roles that are currently enabled for the user. |
+
+Oracle supports various types of privileges:
+
+- **System privileges**: system privileges allow the grantee to perform standard administrator tasks in the database. Typically, these privileges are restricted to trusted users. Many system privileges are specific to Oracle operations.
+
+- **Object privileges**: each type of object has privileges associated with it.
+
+- **Table privileges**: table privileges enable security at the data manipulation language (DML) or data definition language (DDL) level. You can map table privileges directly to their equivalent in Azure Synapse.
+
+- **View privileges**: you can apply DML object privileges to views, similar to tables. You can map view privileges directly to their equivalent in Azure Synapse.
+
+- **Procedure privileges**: procedure privileges allow procedures, including standalone procedures and functions, to be granted the `EXECUTE` privilege. You can map procedure privileges directly to their equivalent in Azure Synapse.
+
+- **Type privileges**: you can grant system privileges to named types, such as object types, `VARRAYs`, and nested tables. Typically, these privileges are specific to Oracle and have no equivalent in Azure Synapse.
+
+>[!TIP]
+>Azure Synapse has equivalent permissions for basic database operations such as DML and DDL.
+
+The following table lists common Oracle admin privileges that have a direct equivalent in Azure Synapse.
+
+| Admin privilege | Description | Synapse equivalent |
+|--|--|--|
+| \[Create\] Database | The user can create databases. Permission to operate on existing databases is controlled by object privileges. | CREATE DATABASE |
+| \[Create\] External Table | The user can create external tables. Permission to operate on existing tables is controlled by object privileges. | CREATE TABLE |
+| \[Create\] Function | The user can create user-defined functions (UDFs). Permission to operate on existing UDFs is controlled by object privileges. | CREATE FUNCTION |
+| \[Create\] Role | The user can create groups. Permission to operate on existing groups is controlled by object privileges. | CREATE ROLE |
+| \[Create\] Index | For system use only. Users can't create indexes. | CREATE INDEX |
+| \[Create\] Materialized View | The user can create materialized views. | CREATE VIEW |
+| \[Create\] Procedure | The user can create stored procedures. Permission to operate on existing stored procedures is controlled by object privileges. | CREATE PROCEDURE |
+| \[Create\] Schema | The user can create schemas. Permission to operate on existing schemas is controlled by object privileges. | CREATE SCHEMA |
+| \[Create\] Table | The user can create tables. Permission to operate on existing tables is controlled by object privileges. | CREATE TABLE |
+| \[Create\] Temporary Table | The user can create temporary tables. Permission to operate on existing tables is controlled by object privileges. | CREATE TABLE |
+| \[Create\] User | The user can create users. Permission to operate on existing users is controlled by object privileges. | CREATE USER |
+| \[Create\] View | The user can create views. Permission to operate on existing views is controlled by object privileges. | CREATE VIEW |
+
+You can automate the migration of these privileges by generating equivalent scripts for Azure Synapse from the Oracle catalog tables, as described earlier in this section.
+
+The next table lists common Oracle object privileges that have a direct equivalent in Azure Synapse.
+
+| Object Privilege | Description | Synapse Equivalent |
+|--|--|--|
+| Alter | The user can modify object attributes. Applies to all objects. | ALTER |
+| Delete | The user can delete table rows. Applies only to tables. | DELETE |
+| Drop | The user can drop objects. Applies to all object types. | DROP |
+| Execute | The user can run user-defined functions, user-defined aggregates, or stored procedures. | EXECUTE |
+| Insert | The user can insert rows into a table. Applies only to tables. | INSERT |
+| List | The user can display an object name, either in a list or in another manner. Applies to all objects. | LIST |
+| Select | The user can select (or query) rows within a table. Applies to tables and views. | SELECT |
+| Truncate | The user can delete all rows from a table. Applies only to tables. | TRUNCATE |
+| Update | The user can modify table rows. Applies to tables only. | UPDATE |
+
+For more information about Azure Synapse permissions, see [Database engine permissions](/sql/relational-databases/security/permissions-database-engine).
+
+#### Migrating users, roles, and privileges
+
+So far, we've described a common approach for migrating users, roles, and privileges to Azure Synapse using `CREATE USER`, `CREATE ROLE`, and `GRANT` SQL commands. However, you don't need to migrate all Oracle operations with grantable privileges to the new environment. For example, system management operations aren't applicable to the new environment or the equivalent functionality is automatic or managed outside the database. For users, roles, and the subset of privileges that do have a direct equivalent in the Azure Synapse environment, the following steps describe the migration process:
+
+1. Migrate Oracle schema, table, and view definitions to the Azure Synapse environment. This step migrates only the table definitions, not the data.
+
+1. Extract the existing user IDs that you want to migrate from the Oracle system tables, generate a script of `CREATE USER` statements for Azure Synapse, and then run that script in the Azure Synapse environment. Find a way to create new initial passwords, because passwords can't be extracted from the Oracle environment.
+
+1. Extract the existing roles from the Oracle system tables, generate a script of equivalent `CREATE ROLE` statements for Azure Synapse, and then run that script in the Azure Synapse environment.
+
+1. Extract the user/role combinations from the Oracle system tables, generate a script to `GRANT` roles to users in Azure Synapse, and then run that script in the Azure Synapse environment.
+
+1. Extract the relevant privilege information from the Oracle system tables, then generate a script to `GRANT` the appropriate privileges to users and roles in Azure Synapse, and then run that script in the Azure Synapse environment.
+
+## Operational considerations
+
+This section discusses how typical Oracle operational tasks can be implemented in Azure Synapse with minimal risk and user impact.
+
+As with all data warehouse products in production, ongoing management tasks are necessary to keep the system running efficiently and provide data for monitoring and auditing. Other operational considerations include resource utilization, capacity planning for future growth, and backup/restore of data.
+
+>[!TIP]
+>Operational tasks are necessary to keep any data warehouse operating efficiently.
+
+Oracle administration tasks typically fall into two categories:
+
+- **System administration**: system administration is management of the hardware, configuration settings, system status, access, disk space, usage, upgrades, and other tasks.
+
+- **Database administration**: database administration is management of user databases and their content, data loading, data backup, data recovery, and access to data and permissions.
+
+Oracle offers several methods and interfaces that you can use to perform system and database management tasks:
+
+- Oracle Enterprise Manager is Oracle's on-premises management platform. It provides a single pane of glass for managing all of a customer's Oracle deployments, whether in their data centers or in the Oracle Cloud. Through deep integration with Oracle's product stack, Oracle Enterprise Manager provides management and automation support for Oracle applications, databases, middleware, hardware, and engineered systems.
+
+- Oracle Instance Manager provides a UI for high-level administration of Oracle instances. Oracle Instance Manager enables tasks such as startup, shutdown, and log viewing.
+
+- Oracle Database Configuration Assistant is a UI that allows management and configuration of various database features and functionality.
+
+- SQL commands that support administration tasks and queries within a SQL database session. You can run SQL commands from the Oracle SQL\*Plus command interpreter, Oracle SQL Developer UI, or through SQL APIs such as ODBC, JDBC, and OLE DB Provider. You must have a database user account to run SQL commands, with appropriate permissions for the queries and tasks that you perform.
+
+While the management and operations tasks for different data warehouses are similar in concept, the individual implementations can differ. Modern cloud-based products such as Azure Synapse tend to incorporate a more automated and "system managed" approach, compared to the more manual approach in legacy environments like Oracle.
+
+The following sections compare Oracle and Azure Synapse options for various operational tasks.
+
+### Housekeeping tasks
+
+In most legacy data warehouse environments, regular housekeeping tasks are time-consuming. You can reclaim disk storage space by removing old versions of updated or deleted rows, or by reorganizing data, log files, and index blocks for efficiency, for example by running `ALTER TABLE ... SHRINK SPACE` in Oracle.
+
+>[!TIP]
+>Housekeeping tasks keep a production warehouse operating efficiently and optimize storage and other resources.
+
+Collecting statistics is a potentially time-consuming task that's required after bulk data ingestion to provide the query optimizer with up-to-date data for its query execution plans.
+
+Oracle has a built-in feature to help with analyzing the quality of statistics, the Optimizer Statistics Advisor. It works through a list of Oracle rules that represent best practices for optimizer statistics. The advisor checks each rule and, where necessary, generates findings, recommendations, and actions that involve calls to the `DBMS_STATS` package to take corrective measures. Users can see the list of rules in the `V$STATS_ADVISOR_RULES` view, as shown in the following screenshot.
++
+An Oracle database contains many log tables in the data dictionary, which accumulate data, either automatically or after certain features are enabled. Because log data grows over time, purge older information to avoid using up permanent space. Oracle provides options to automate log maintenance.
+
+Azure Synapse can automatically create statistics so that they're available when needed. You can defragment indexes and data blocks manually, on a scheduled basis, or automatically. By using native built-in Azure capabilities, you reduce the migration effort.
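+
+For example, the following sketch checks whether automatic statistics creation is enabled and refreshes statistics on a hypothetical table after a bulk load; the table and column names are placeholders.
+
+```sql
+-- Check whether automatic statistics creation is enabled for the database.
+SELECT name, is_auto_create_stats_on
+FROM   sys.databases;
+
+-- Create and refresh statistics manually after a bulk data load.
+CREATE STATISTICS stats_fact_sales_cust_id ON dbo.fact_sales (cust_id);
+UPDATE STATISTICS dbo.fact_sales;
+```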
+
+>[!TIP]
+>Automate and monitor housekeeping tasks in Azure.
+
+### Monitoring and auditing
+
+Oracle Enterprise Manager includes tools to monitor various aspects of one or more Oracle systems, such as activity, performance, queuing, and resource utilization. Oracle Enterprise Manager has an interactive UI that lets users drill down into the low-level detail of any chart.
+
+>[!TIP]
+>Oracle Enterprise Manager is the recommended method of monitoring and logging in Oracle systems.
+
+The following diagram provides an overview of the monitoring environment in an Oracle data warehouse.
++
+Azure Synapse also provides a rich monitoring experience within the Azure portal to provide insights into your data warehouse workload. The Azure portal is the recommended tool for monitoring your data warehouse because it provides configurable retention periods, alerts, recommendations, and customizable charts and dashboards for metrics and logs.
+
+>[!TIP]
+>The Azure portal provides a UI to manage monitoring and auditing tasks for all Azure data and processes.
+
+The Azure portal can also provide recommendations for performance enhancements, as shown in the following screenshot.
++
+The portal supports integration with other Azure monitoring services, such as Operations Management Suite (OMS) and [Azure Monitor](../../../azure-monitor/overview.md), to provide an integrated monitoring experience of the data warehouse and the entire Azure analytics platform. For more information, see [Azure Synapse operations and management options](../../sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance.md).
+
+### High availability (HA) and disaster recovery (DR)
+
+Since its initial release in 1979, the Oracle environment has evolved to encompass numerous features required by enterprise customers, including options for high availability (HA) and disaster recovery (DR). The latest announcement in this area is Maximum Availability Architecture (MAA), which includes reference architectures for four levels of HA and DR:
+
+- **Bronze tier**: a single-instance HA architecture
+- **Silver tier**: HA with automatic failover
+- **Gold tier**: comprehensive HA and DR
+- **Platinum tier**: zero outage for platinum-ready applications
+
+Azure Synapse uses database snapshots to provide HA of the data warehouse. A data warehouse snapshot creates a restore point that you can use to restore a data warehouse to a previous state. Because Azure Synapse is a distributed system, a data warehouse snapshot consists of many files stored in Azure Storage. Snapshots capture incremental changes to the data stored in your data warehouse.
+
+>[!TIP]
+>Azure Synapse creates snapshots automatically to ensure fast recovery time.
+
+Azure Synapse automatically takes snapshots throughout the day and creates restore points that are available for seven days. You can't change this retention period. Azure Synapse supports an eight-hour recovery point objective (RPO). You can restore a data warehouse in the primary region from any one of the snapshots taken in the past seven days.
+
+>[!TIP]
+>User-defined snapshots can be used to define a recovery point before key updates.
+
+Azure Synapse supports user-defined restore points, which are created from manually triggered snapshots. By creating restore points before and after large data warehouse modifications, you ensure that the restore points are logically consistent. The user-defined restore points augment data protection and reduce recovery time if there are workload interruptions or user errors.
+
+In addition to snapshots, Azure Synapse performs a standard geo-backup once per day to a [paired data center](/azure/availability-zones/cross-region-replication-azure). The RPO for a geo-restore is 24 hours. You can restore the geo-backup to a server in any region where Azure Synapse is supported. A geo-backup ensures that a data warehouse can be restored if restore points in the primary region aren't available.
+
+>[!TIP]
+>Microsoft Azure provides automatic backups to a separate geographical location to enable DR.
+
+### Workload management
+
+Oracle provides utilities such as Enterprise Manager and Database Resource Manager (DBRM) for managing workloads. These utilities include features such as load balancing across large clusters, parallel query execution, performance measurement, and prioritization. Many of these features can be automated so that the system becomes, to some extent, self-tuning.
+
+>[!TIP]
+>A typical production data warehouse concurrently runs mixed workloads with different resource usage characteristics.
+
+Azure Synapse automatically logs resource utilization statistics. Metrics include usage statistics for CPU, memory, cache, I/O, and temporary workspace for each query. Azure Synapse also logs connectivity information, such as failed connection attempts.
+
+>[!TIP]
+>Low-level and system-wide metrics are automatically logged within Azure.
+
+In Azure Synapse, resource classes are pre-determined resource limits that govern compute resources and concurrency for query execution. Resource classes help you manage your workload by setting limits on the number of queries that run concurrently and on the compute resources assigned to each query. There's a trade-off between memory and concurrency.
+
+Azure Synapse supports these basic workload management concepts:
+
+- **Workload classification**: you can assign a request to a workload group to set importance levels.
+
+- **Workload importance**: you can influence the order in which a request gets access to resources. By default, queries are released from the queue on a first-in, first-out basis as resources become available. Workload importance allows higher priority queries to receive resources immediately regardless of the queue.
+
+- **Workload isolation**: you can reserve resources for a workload group, assign maximum and minimum usage for varying resources, limit the resources a group of requests can consume, and set a timeout value to automatically kill runaway queries.
+
+Running mixed workloads can pose resource challenges on busy systems. A successful [workload management](../../sql-data-warehouse/sql-data-warehouse-workload-management.md) scheme effectively manages resources, ensures highly efficient resource utilization, and maximizes return on investment (ROI). The [workload classification](../../sql-data-warehouse/sql-data-warehouse-workload-classification.md), [workload importance](../../sql-data-warehouse/sql-data-warehouse-workload-importance.md), and [workload isolation](../../sql-data-warehouse/sql-data-warehouse-workload-isolation.md) features give you more control over how your workload utilizes system resources.
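+
+The following T-SQL sketch shows these concepts together: a workload group that reserves resources (isolation), and a classifier that routes requests from a hypothetical `etl_user` login to that group with high importance (classification and importance). The names and percentages are illustrative only.
+
+```sql
+-- Sketch only: reserve resources for data loading.
+CREATE WORKLOAD GROUP wgDataLoads
+WITH (
+    MIN_PERCENTAGE_RESOURCE = 30,
+    CAP_PERCENTAGE_RESOURCE = 60,
+    REQUEST_MIN_RESOURCE_GRANT_PERCENT = 2
+);
+
+-- Route requests from a hypothetical ETL user to that group with high importance.
+CREATE WORKLOAD CLASSIFIER wcLoadUser
+WITH (
+    WORKLOAD_GROUP = 'wgDataLoads',
+    MEMBERNAME = 'etl_user',
+    IMPORTANCE = HIGH
+);
+```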
+
+You can use the workload metrics that Azure Synapse collects for capacity planning, for example to determine the resources required for extra users or a larger application workload. You can also use workload metrics to plan scale up/down of compute resources for cost-effective support of peaky workloads.
+
+The [workload management guide](../../sql-data-warehouse/analyze-your-workload.md) describes the techniques to analyze your workload, [manage and monitor workload importance](../../sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance.md), and the steps to [convert a resource class to a workload group](../../sql-data-warehouse/sql-data-warehouse-how-to-convert-resource-classes-workload-groups.md). Use the [Azure portal](../../sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md) and [T-SQL queries on DMVs](../../sql-data-warehouse/sql-data-warehouse-manage-monitor.md) to monitor the workload and ensure that the applicable resources are efficiently utilized. Azure Synapse provides a set of Dynamic Management Views (DMVs) for monitoring all aspects of workload management. These views are useful when actively troubleshooting and identifying performance bottlenecks in your workload.
+
+For more information on workload management in Azure Synapse, see [Workload management with resource classes](../../sql-data-warehouse/resource-classes-for-workload-management.md).
+
+### Scale compute resources
+
+The Azure Synapse architecture separates storage and compute, allowing each to scale independently. As a result, [compute resources can be scaled](../../sql-data-warehouse/quickstart-scale-compute-portal.md) to meet performance demands independent of data storage. You can also pause and resume compute resources. Another benefit of this architecture is that billing for compute and storage is separate. If a data warehouse isn't in use, you can save on compute costs by pausing compute.
+
+>[!TIP]
+>A major benefit of Azure is the ability to independently scale up and down compute resources on demand to handle peaky workloads cost-effectively.
+
+You can scale compute resources up or down by adjusting the data warehouse units (DWU) setting for a data warehouse. Load and query performance can increase linearly as you allocate more DWUs.
+
+If you increase DWUs, the number of compute nodes increases, which adds more compute power and supports more parallel processing. As the number of compute nodes increases, the number of distributions per compute node decreases, providing more compute power and parallel processing for queries. Similarly, if you decrease DWUs, the number of compute nodes decreases, which reduces the compute resources for queries.
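+
+One way to adjust the DWU setting is with T-SQL, as in the following sketch; the database name and target service level are placeholders, and the statements run against the `master` database of the logical server.
+
+```sql
+-- Check the current service objective (DWU setting).
+SELECT db.name, ds.service_objective
+FROM   sys.database_service_objectives AS ds
+JOIN   sys.databases AS db ON ds.database_id = db.database_id;
+
+-- Scale the dedicated SQL pool to a different service level.
+ALTER DATABASE mySampleDataWarehouse
+MODIFY (SERVICE_OBJECTIVE = 'DW1000c');
+```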
+
+## Next steps
+
+To learn more about visualization and reporting, see the next article in this series: [Visualization and reporting for Oracle migrations](4-visualization-reporting.md).
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/4-visualization-reporting.md
+
+ Title: "Visualization and reporting for Oracle migrations"
+description: Learn about Microsoft and third-party BI tools for reports and visualizations in Azure Synapse Analytics compared to Oracle.
+++
+ms.devlang:
++++ Last updated : 07/15/2022++
+# Visualization and reporting for Oracle migrations
+
+This article is part four of a seven-part series that provides guidance on how to migrate from Oracle to Azure Synapse Analytics. The focus of this article is best practices for visualization and reporting.
+
+## Access Azure Synapse Analytics using Microsoft and third-party BI tools
+
+Organizations access data warehouses and data marts using a range of business intelligence (BI) tools and applications. Some examples of BI products are:
+
+- Microsoft BI tools, such as Power BI.
+
+- Office applications, such as Microsoft Excel spreadsheets.
+
+- Third-party BI tools from different vendors.
+
+- Custom analytics applications with embedded BI tool functionality.
+
+- Operational applications that support on-demand BI by running queries and reports on a BI platform that in turn queries data in a data warehouse or data mart.
+
+- Interactive data science development tools, such as Azure Synapse Spark Notebooks, Azure Machine Learning, RStudio, and Jupyter Notebooks.
+
+If you migrate visualization and reporting as part of your data warehouse migration, all existing queries, reports, and dashboards generated by BI products need to run in the new environment. Your BI products must yield the same results on Azure Synapse as they did in your legacy data warehouse environment.
+
+For consistent results after migration, all BI tools and application dependencies must work after you've migrated your data warehouse schema and data to Azure Synapse. The dependencies include less visible aspects, such as access and security. When you address access and security, ensure that you migrate:
+
+- Authentication so users can sign into the data warehouse and data mart databases on Azure Synapse.
+
+- All users to Azure Synapse.
+
+- All user groups to Azure Synapse.
+
+- All roles to Azure Synapse.
+
+- All authorization privileges governing access control to Azure Synapse.
+
+- User, role, and privilege assignments to mirror what you had in your existing data warehouse before migration. For example:
+ - Database object privileges assigned to roles
+ - Roles assigned to user groups
+ - Users assigned to user groups and/or roles
+
+Access and security are important considerations for data access in the migrated system and are discussed in more detail in [Security, access, and operations for Oracle migrations](3-security-access-operations.md).
+
+>[!TIP]
+>Existing users, user groups, roles, and assignments of access security privileges need to be migrated first for migration of reports and visualizations to succeed.
+
+Migrate all required data to ensure that the reports and dashboards that query data in the legacy environment produce the same results in Azure Synapse.
+
+Business users will expect a seamless migration, with no surprises that destroy their confidence in the migrated system on Azure Synapse. Take care to allay any fears that your users might have through good communication. Your users will expect that:
+
+- Table structure remains the same when directly referred to in queries.
+
+- Table and column names remain the same when directly referred to in queries. For instance, calculated fields defined on columns in BI tools shouldn't fail when aggregate reports are produced.
+
+- Historical analysis remains the same.
+
+- Data types remain the same, if possible.
+
+- Query behavior remains the same.
+
+- ODBC/JDBC drivers are tested to ensure that query behavior remains the same.
+
+>[!TIP]
+>Communication and business user involvement are critical to success.
+
+If BI tools query views in the underlying data warehouse or data mart database, will those views still work after the migration? Some views might not work if there are proprietary SQL extensions specific to your legacy data warehouse DBMS that have no equivalent in Azure Synapse. If so, you need to know about those incompatibilities and find a way to resolve them.
+
+>[!TIP]
+>Views and SQL queries using proprietary SQL query extensions are likely to result in incompatibilities that impact BI reports and dashboards.
+
+Other issues, like the behavior of `NULL` values or data type variations across DBMS platforms, need to be tested to ensure that even slight differences don't exist in calculation results. Minimize those issues and take all necessary steps to shield business users from being affected by them. Depending on your legacy data warehouse environment, [third-party](../../partner/data-integration.md) tools are available that can help hide the differences between the legacy and new environments so that BI tools and applications run unchanged.
+
+Testing is critical to visualization and report migration. You need a test suite and agreed-on test data to run and rerun tests in both environments. A test harness is also useful, and a few are mentioned in this guide. Also, it's important to involve business users in the testing aspect of the migration to keep confidence high and to keep them engaged and part of the project.
+
+>[!TIP]
+>Use repeatable tests to ensure reports, dashboards, and other visualizations migrate successfully.
+
+You might be thinking about switching BI tools, for example to [migrate to Power BI](/power-bi/guidance/powerbi-migration-overview). The temptation is to make such changes at the same time you're migrating your schema, data, ETL processing, and more. However, to minimize risk, it's better to migrate to Azure Synapse first and get everything working before undertaking further modernization.
+
+If your existing BI tools run on-premises, ensure they can connect to Azure Synapse through your firewall so you're able to run comparisons against both environments. Alternatively, if the vendor of your existing BI tools offers their product on Azure, you can try it there. The same applies for applications running on-premises that embed BI or call your BI server on demand, for example by requesting a "headless report" with XML or JSON data.
+
+There's a lot to think about here, so let's take a closer look.
+
+## Use data virtualization to minimize the impact of migration on BI tools and reports
+
+During migration, you might be tempted to fulfill long-term requirements, such as addressing open business requests, adding missing data, or implementing new features. However, such changes can affect BI tool access to your data warehouse, especially if the change involves structural changes to your data model. If you want to adopt an agile data modeling technique or implement structural changes, do so *after* migration.
+
+One way to minimize the effect of schema changes or other structural changes on your BI tools is to introduce data virtualization between the BI tools and your data warehouse and data marts. The following diagram shows how data virtualization can hide a migration from users.
++
+Data virtualization breaks the dependency between business users utilizing self-service BI tools and the physical schema of the underlying data warehouse and data marts that are being migrated.
+
+>[!TIP]
+>Data virtualization allows you to shield business users from structural changes during migration so they remain unaware of those changes. Structural changes include schema alterations that tune your data model for Azure Synapse.
+
+With data virtualization, any schema alterations made during a migration to Azure Synapse, for example to optimize performance, can be hidden from business users because they only have access to virtual tables in the data virtualization layer. And, if you make structural changes, you only need to update the mappings between the data warehouse or data marts and any virtual tables. With data virtualization, users remain unaware of structural changes. [Microsoft partners](../../partner/data-integration.md) provide data virtualization software.
+
+## Identify high-priority reports to migrate first
+
+A key question when migrating your existing reports and dashboards to Azure Synapse is which ones to migrate first. Several factors might drive that decision, such as:
+
+- Usage
+
+- Business value
+
+- Ease of migration
+
+- Data migration strategy
+
+The following sections discuss these factors.
+
+Whatever your decision, it must involve your business users because they produce the reports, dashboards, and other visualizations, and make business decisions based on insights from those items. Everyone benefits when you can:
+
+- Migrate reports and dashboards seamlessly,
+- Migrate reports and dashboards with minimal effort, and
+- Point your BI tool(s) at Azure Synapse instead of your legacy data warehouse system, and get like-for-like reports, dashboards, and other visualizations.
+
+### Migrate reports based on usage
+
+Usage is often an indicator of business value. Unused reports and dashboards clearly don't contribute to business decisions or offer current value. If you don't have a way to find out which reports and dashboards are unused, you can use one of several BI tools that provide usage statistics.
+
+If your legacy data warehouse has been up and running for years, there's a good chance you have hundreds, if not thousands, of reports in existence. It's worth compiling an inventory of reports and dashboards and identifying their business purpose and usage statistics.
+
+For unused reports, determine whether to decommission them to reduce your migration effort. A key question when deciding whether to decommission an unused report is whether the report is unused because people don't know it exists, because it offers no business value, or because it's been superseded by another report.
+
+### Migrate reports based on business value
+
+Usage alone isn't always a good indicator of business value. You might want to consider the extent to which a report's insights contribute to business value. One way to do that is to evaluate the profitability of every business decision that relies on the report and the extent of the reliance. However, that information is unlikely to be readily available in most organizations.
+
+Another way to evaluate business value is to look at the alignment of a report with business strategy. The business strategy set by your executive typically lays out strategic business objectives (SBOs), key performance indicators (KPIs), KPI targets that need to be achieved, and who is accountable for achieving them. You can classify a report by which SBOs the report contributes to, such as fraud reduction, improved customer engagement, and optimized business operations. Then, you can prioritize for migration the reports and dashboards that are associated with high-priority objectives. In this way, the initial migration can deliver business value in a strategic area.
+
+Another way to evaluate business value is to classify reports and dashboards as operational, tactical, or strategic to identify at which business level they're used. SBOs require contributions at all these levels. By knowing which reports and dashboards are used, at what level, and what objectives they're associated with, you're able to focus the initial migration on high-priority business value. You can use the following *business strategy objective* table to evaluate reports and dashboards.
+
+| Level | Report / dashboard name | Business purpose | Department used | Usage frequency | Business priority |
+|-|-|-|-|-|-|
+| **Strategic** | | | | | |
+| **Tactical** | | | | | |
+| **Operational** | | | | | |
+
+Metadata discovery tools like [Azure Data Catalog](../../../data-catalog/overview.md) let business users tag and rate data sources to enrich the metadata for those data sources to assist with their discovery and classification. You can use the metadata for a report or dashboard to help you understand its business value. Without such tools, understanding the contribution of reports and dashboards to business value is likely to be a time-consuming task, whether you're migrating or not.
+
+### Migrate reports based on data migration strategy
+
+If your migration strategy is based on migrating data marts first, then the order of data mart migration will affect which reports and dashboards are migrated first. If your strategy is based on business value, the order in which you migrate data marts to Azure Synapse will reflect business priorities. Metadata discovery tools can help you implement your strategy by showing you which data mart tables supply data for which reports.
+
+>[!TIP]
+>Your data migration strategy affects which reports and visualizations get migrated first.
+
+## Migration incompatibility issues that can affect reports and visualizations
+
+BI tools produce reports, dashboards, and other visualizations by issuing SQL queries that access physical tables and/or views in your data warehouse or data mart. When you migrate your legacy data warehouse to Azure Synapse, several factors can affect the ease of migration of reports, dashboards, and other visualizations. Those factors include:
+
+- Schema incompatibilities between the environments.
+
+- SQL incompatibilities between the environments.
+
+### Schema incompatibilities
+
+During a migration, schema incompatibilities in the data warehouse or data mart tables that supply data for reports, dashboards, and other visualizations can be:
+
+- Non-standard table types in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse.
+
+- Data types in your legacy data warehouse DBMS that don't have an equivalent in Azure Synapse.
+
+In most cases, there's a workaround to the incompatibilities. For example, you can migrate the data in an unsupported table type into a standard table with appropriate data types and indexed or partitioned on a date/time column. Similarly, it might be possible to represent unsupported data types in another type of column and perform calculations in Azure Synapse to achieve the same results.
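+
+As a sketch, a standard Azure Synapse table that's hash distributed and partitioned on a date column might replace an unsupported legacy table type; the table, column names, and partition boundary values are illustrative only.
+
+```sql
+-- Sketch only: a standard table replacing an unsupported legacy table type.
+CREATE TABLE dbo.fact_sales
+(
+    sale_id     BIGINT        NOT NULL,
+    cust_id     INT           NOT NULL,
+    sale_date   DATE          NOT NULL,
+    sale_amount DECIMAL(18,2)
+)
+WITH (
+    DISTRIBUTION = HASH(cust_id),
+    CLUSTERED COLUMNSTORE INDEX,
+    PARTITION (sale_date RANGE RIGHT FOR VALUES ('2021-01-01', '2022-01-01'))
+);
+```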
+
+>[!TIP]
+>Schema incompatibilities include legacy warehouse DBMS table types and data types that are unsupported on Azure Synapse.
+
+To identify the reports affected by schema incompatibilities, run queries against the system catalog of your legacy data warehouse to identify the tables with unsupported data types. Then, you can use metadata from your BI tool to identify the reports that access data in those tables. For more information about how to identify object type incompatibilities, see [Unsupported Oracle database object types](1-design-performance-migration.md#unsupported-oracle-database-object-types).
+
+>[!TIP]
+>Query the system catalog of your legacy warehouse DBMS to identify schema incompatibilities with Azure Synapse.
+
+The effect of schema incompatibilities on reports, dashboards, and other visualizations might be less than you think because many BI tools don't support the less generic data types. As a result, your legacy data warehouse might already have views that `CAST` unsupported data types to more generic types.
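+
+Such a view might look like the following sketch, where a hypothetical non-generic column is cast to a plain character type for BI consumption; the view, table, and column names are placeholders.
+
+```sql
+-- Sketch only: present a less generic column as a widely supported type.
+CREATE VIEW dbo.v_customer
+AS
+SELECT cust_id,
+       cust_name,
+       CAST(cust_notes AS VARCHAR(4000)) AS cust_notes
+FROM   dbo.customer;
+```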
+
+### SQL incompatibilities
+
+During a migration, SQL incompatibilities are likely to affect any report, dashboard, or other visualization in an application or tool that:
+
+- Accesses legacy data warehouse DBMS views that include proprietary SQL functions that have no equivalent in Azure Synapse.
+
+- Issues SQL queries that include proprietary SQL functions, specific to the SQL dialect of your legacy environment, that have no equivalent in Azure Synapse.
+
+### Gauge the impact of SQL incompatibilities on your reporting portfolio
+
+Your reporting portfolio might include embedded query services, reports, dashboards, and other visualizations. Don't rely on the documentation associated with those items to gauge the effect of SQL incompatibilities on the migration of your reporting portfolio to Azure Synapse. You need to use a more precise way to assess the effect of SQL incompatibilities.
+
+#### Use EXPLAIN statements to find SQL incompatibilities
+
+You can find SQL incompatibilities by reviewing the logs of recent SQL activity in your legacy Oracle data warehouse. Use a script to extract a representative set of SQL statements to a file. Then, prefix each SQL statement with an `EXPLAIN` statement, and run those `EXPLAIN` statements in Azure Synapse. Any SQL statements containing proprietary unsupported SQL extensions will be rejected by Azure Synapse when the `EXPLAIN` statements are executed. This approach lets you assess the extent of SQL incompatibilities.
+
+Metadata from your legacy data warehouse DBMS can also help you identify incompatible views. As before, capture a representative set of SQL statements from the applicable logs, prefix each SQL statement with an `EXPLAIN` statement, and run those `EXPLAIN` statements in Azure Synapse to identify views with incompatible SQL.
+
+>[!TIP]
+>Gauge the impact of SQL incompatibilities by harvesting your DBMS log files and running `EXPLAIN` statements.
+
+## Test report and dashboard migration to Azure Synapse Analytics
+
+A key element of data warehouse migration is testing of reports and dashboards in Azure Synapse to verify the migration has worked. Define a series of tests and a set of required outcomes for each test that you will run to verify success. Test and compare the reports and dashboards across your existing and migrated data warehouse systems to:
+
+ - Identify whether any schema changes made during migration affected the ability of reports to run, report results, or the corresponding report visualizations. An example of a schema change is mapping an incompatible data type to an equivalent data type that Azure Synapse supports.
+
+ - Verify that all users are migrated.
+
+ - Verify that all roles are migrated, and users are assigned to those roles.
+
+ - Verify that all data access security privileges are migrated to ensure access control list (ACL) migration.
+
+ - Ensure consistent results for all known queries, reports, and dashboards.
+
+ - Ensure that data and ETL migration is complete and error-free.
+
+ - Ensure that data privacy is upheld.
+
+ - Test performance and scalability.
+
+ - Test analytical functionality.
+
+>[!TIP]
+>Test and tune performance to minimize compute costs.
+
+For information about how to migrate users, user groups, roles, and privileges, see [Security, access, and operations for Oracle migrations](3-security-access-operations.md).
+
+Automate testing as much as possible to make each test repeatable and to support a consistent approach to evaluating test results. Automation works well for known regular reports and can be managed via [Azure Synapse pipelines](../../get-started-pipelines.md) or [Azure Data Factory](../../../data-factory/introduction.md) orchestration. If you already have a suite of test queries in place for regression testing, you can use the existing testing tools to automate post-migration testing.
+
+>[!TIP]
+>Best practice is to build an automated test suite to make tests repeatable.
+
+Ad-hoc analysis and reporting are more challenging and require compilation of a set of tests to verify that the same reports and dashboards from before and after migration are consistent. If you find inconsistencies, then your ability to compare metadata lineage across the original and migrated systems during migration testing becomes crucial. That comparison can highlight differences and pinpoint where inconsistencies originated, when detection by other means is difficult.
+
+>[!TIP]
+>Leverage tools that compare metadata lineage to verify results.
+
+## Analyze lineage to understand dependencies between reports, dashboards, and data
+
+Your understanding of lineage is a critical factor in the successful migration of reports and dashboards. Lineage is metadata that shows the journey of migrated data so you can track its path from a report or dashboard all the way back to the data source. Lineage shows how data has traveled from point to point, its location in the data warehouse and/or data mart, and which reports and dashboards use it. Lineage can help you understand what happens to data as it travels through different data stores, such as files and databases, different ETL pipelines, and into reports. When business users have access to data lineage, it improves trust, instills confidence, and supports informed business decisions.
+
+>[!TIP]
+>Your ability to access metadata and data lineage from reports all the way back to a data source is critical for verifying that migrated reports work correctly.
+
+In multi-vendor data warehouse environments, business analysts in BI teams might map out data lineage. For example, if you use different vendors for ETL, data warehouse, and reporting, and each vendor has its own metadata repository, then figuring out where a specific data element in a report came from can be challenging and time-consuming.
+
+>[!TIP]
+>Tools that automate the collection of metadata and show end-to-end lineage in a multi-vendor environment are valuable during a migration.
+
+To migrate seamlessly from a legacy data warehouse to Azure Synapse, use end-to-end data lineage to prove like-for-like migration when you're comparing the reports and dashboards generated by each environment. To show the end-to-end data journey, you'll need to capture and integrate metadata from several tools. Tools that support automated metadata discovery and data lineage help you identify duplicate reports or ETL processes and find reports that rely on obsolete, questionable, or non-existent data sources. You can use that information to reduce the number of reports and ETL processes that you migrate.
+
+You can also compare the end-to-end lineage of a report in Azure Synapse to the end-to-end lineage of the same report in your legacy environment to check for differences that might have inadvertently occurred during migration. This type of comparison is exceptionally useful when you need to test and verify migration success.
+
+Data lineage visualization reduces the time, effort, and errors in the migration process.
+
+By using automated metadata discovery and data lineage tools that compare lineage, you can verify that a report produced from migrated data in Azure Synapse is produced in the same way as the equivalent report in your legacy environment. This capability also helps you determine:
+
+- What data needs to be migrated to ensure successful report and dashboard execution in Azure Synapse.
+
+- What transformations have been and should be performed to ensure successful execution in Azure Synapse.
+
+- How to reduce report duplication.
+
+Automated metadata discovery and data lineage tools substantially simplify the migration process because they help businesses understand their data assets and know what needs to be migrated to Azure Synapse to achieve a solid reporting environment.
+
+Several ETL tools provide end-to-end lineage capability, so check whether your existing ETL tool has that capability if you plan to use it with Azure Synapse. Both Azure Synapse pipelines and Data Factory support viewing lineage in mapping data flows. [Microsoft partners](../../partner/data-integration.md) also provide automated metadata discovery, data lineage, and lineage comparison tools.
+
+## Migrate BI tool semantic layers to Azure Synapse Analytics
+
+Some BI tools have what is known as a semantic metadata layer. That layer simplifies business user access to the underlying physical data structures in a data warehouse or data mart database. The semantic metadata layer simplifies access by providing high-level objects like dimensions, measures, hierarchies, calculated metrics, and joins. The high-level objects use business terms that are familiar to business analysts, and map to physical data structures in your data warehouse or data mart.
+
+>[!TIP]
+>Some BI tools have semantic layers that simplify business user access to physical data structures in your data warehouse or data mart.
+
+In a data warehouse migration, you might be forced to change column or table names. For example, Oracle allows the `#` character anywhere in a table name, but Azure Synapse only allows `#` as a table name prefix, which indicates a temporary table. In Oracle, a temporary table doesn't need to include `#` in its name, but in Azure Synapse it must have the `#` prefix. You might need to rework table mappings to account for such naming changes.
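+
+If renames are unavoidable, one lightweight mitigation is a view in Azure Synapse that preserves the names existing reports expect while mapping to the renamed physical objects. The following sketch is only illustrative, and all object names are hypothetical; a full semantic layer or data virtualization approach, described next, addresses the same problem more broadly.
+
+```sql
+-- Hypothetical example: expose the legacy names that reports expect as a view
+-- over the renamed physical table in Azure Synapse.
+CREATE VIEW dbo.sales_history AS
+SELECT order_key  AS order_id,      -- column renamed during migration
+       order_dt   AS order_date,
+       sales_amt  AS sales_amount
+FROM dbo.fact_sales_history;        -- table renamed during migration
+```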
+
+To achieve consistency across multiple BI tools, create a universal semantic layer by using a data virtualization server that sits between BI tools and applications and Azure Synapse. In the data virtualization server, use common data names for high-level objects like dimensions, measures, hierarchies, and joins. That way you configure everything, including calculated fields, joins, and mappings, only once instead of in every tool. Then, point all BI tools at the data virtualization server.
+
+>[!TIP]
+>Use data virtualization to create a common semantic layer to guarantee consistency across all BI tools in an Azure Synapse environment.
+
+With data virtualization, you get consistency across all BI tools and break the dependency between BI tools and applications and the underlying physical data structures in Azure Synapse. [Microsoft partners](../../partner/data-integration.md) can help you achieve consistency in Azure. The following diagram shows how a common vocabulary in the data virtualization server lets multiple BI tools see a common semantic layer.
++
+## Conclusions
+
+In a lift and shift data warehouse migration, most reports, dashboards, and other visualizations should migrate easily.
+
+During a migration from a legacy environment, you might find that data in the legacy data warehouse or data mart tables is stored in unsupported data types. Or, you may find legacy data warehouse views that include proprietary SQL with no equivalent in Azure Synapse. If so, you'll need to resolve those issues to ensure a successful migration to Azure Synapse.
+
+Don't rely on user-maintained documentation to identify where issues are located. Instead, use `EXPLAIN` statements because they're a quick, pragmatic way to identify SQL incompatibilities. Rework the incompatible SQL statements to achieve equivalent functionality in Azure Synapse. Also, use automated metadata discovery and lineage tools to understand dependencies, find duplicate reports, and identify invalid reports that rely on obsolete, questionable, or non-existent data sources. Use lineage tools to compare lineage to verify that reports running in your legacy data warehouse environment are produced identically in Azure Synapse.
+
+Don't migrate reports that you no longer use. BI tool usage data can help you determine which reports aren't in use. For the reports, dashboards, and other visualizations that you do want to migrate, migrate all users, user groups, roles, and privileges. If you're using business value to drive your report migration strategy, associate reports with strategic business objectives and priorities to help identify the contribution of report insights to specific objectives. If you're migrating data mart by data mart, use metadata to identify which reports are dependent on which tables and views, so you can make an informed decision about which data marts to migrate first.
+
+>[!TIP]
+>Identify incompatibilities early to gauge the extent of the migration effort. Migrate your users, group roles, and privilege assignments. Only migrate the reports and visualizations that are used and are contributing to business value.
+
+Structural changes to the data model of your data warehouse or data mart can occur during a migration. Consider using data virtualization to shield BI tools and applications from structural changes. With data virtualization, you can use a common vocabulary to define a common semantic layer. The common semantic layer guarantees consistent common data names, definitions, metrics, hierarchies, and joins across all BI tools and applications in the new Azure Synapse environment.
+
+## Next steps
+
+To learn more about minimizing SQL issues, see the next article in this series: [Minimizing SQL issues for Oracle migrations](5-minimize-sql-issues.md).
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/5-minimize-sql-issues.md
+
+ Title: "Minimize SQL issues for Oracle migrations"
+description: Learn how to minimize the risk of SQL issues when migrating from Oracle to Azure Synapse Analytics.
+++
+ms.devlang:
++++ Last updated : 07/15/2022++
+# Minimize SQL issues for Oracle migrations
+
+This article is part five of a seven-part series that provides guidance on how to migrate from Oracle to Azure Synapse Analytics. The focus of this article is best practices for minimizing SQL issues.
+
+## Overview
+
+### Characteristics of Oracle environments
+
+Oracle's initial database product, released in 1979, was a commercial SQL relational database for online transaction processing (OLTP) applications&mdash;with much lower transaction rates than today. Since that initial release, the Oracle environment has evolved to become far more complex and encompasses numerous features. The features include client-server architectures, distributed databases, parallel processing, data analytics, high availability, data warehousing, data in-memory techniques, and support for cloud-based instances.
+
+>[!TIP]
+>Oracle pioneered the "data warehouse appliance" concept in the early 2000s.
+
+Due to the cost and complexity of maintaining and upgrading legacy on-premises Oracle environments, many existing Oracle users want to take advantage of the innovations provided by cloud environments. Modern cloud environments, such as IaaS and PaaS, let you delegate tasks like infrastructure maintenance and platform development to the cloud provider.
+
+Many data warehouses that support complex analytic SQL queries on large data volumes use Oracle technologies. These data warehouses commonly have a dimensional data model, such as star or snowflake schemas, and use data marts for individual departments.
+
+>[!TIP]
+>Many existing Oracle installations are data warehouses that use a dimensional data model.
+
+The combination of SQL and dimensional data models in Oracle simplifies migration to Azure Synapse because the SQL and basic data model concepts are transferable. Microsoft recommends moving your existing data model as-is to Azure to reduce risk, effort, and migration time. Although your migration plan might include a change in the underlying data model, such as a move from an Inmon model to a data vault, it makes sense to initially perform an as-is migration. After the initial migration, you can then make changes within the Azure cloud environment to take advantage of its performance, elastic scalability, built-in features, and cost benefits.
+
+Although the SQL language is standardized, individual vendors sometimes implement proprietary extensions. As a result, you might find [SQL differences](#sql-dml-differences-between-oracle-and-azure-synapse) during your migration that require workarounds in Azure Synapse.
+
+#### Use Azure facilities to implement a metadata-driven migration
+
+You can automate and orchestrate the migration process by using the capabilities of the Azure environment. This approach minimizes the performance hit on the existing Oracle environment, which may already be running close to capacity.
+
+[Azure Data Factory](../../../data-factory/introduction.md) is a cloud-based data integration service that supports creating data-driven workflows in the cloud to orchestrate and automate data movement and data transformation. You can use Data Factory to create and schedule data-driven workflows (pipelines) that ingest data from disparate data stores. Data Factory can process and transform data by using compute services such as [Azure HDInsight Hadoop](/azure/hdinsight/hadoop/apache-hadoop-introduction), Spark, Azure Data Lake Analytics, and Azure Machine Learning.
+
+Azure also includes [Azure Database Migration Services](../../../dms/dms-overview.md) to help you plan and perform a migration from environments such as Oracle. [SQL Server Migration Assistant](/sql/ssma/oracle/sql-server-migration-assistant-for-oracle-oracletosql) (SSMA) for Oracle can automate migration of Oracle databases, including in some cases functions and procedural code.
+
+>[!TIP]
+>Automate the migration process by using Azure Data Factory capabilities.
+
+When you're planning to use Azure facilities, such as Data Factory, to manage the migration process, first create metadata that lists all the data tables that need to be migrated and their location.
+
+## SQL DDL differences between Oracle and Azure Synapse
+
+The ANSI SQL standard defines the basic syntax for Data Definition Language (DDL) commands. Some DDL commands, like `CREATE TABLE` and `CREATE VIEW`, are common to both Oracle and Azure Synapse, but have been extended to provide implementation-specific features such as indexing, table distribution, and partitioning options.
+
+>[!TIP]
+>SQL DDL commands `CREATE TABLE` and `CREATE VIEW` have standard core elements but are also used to define implementation-specific options.
+
+The following sections discuss the Oracle-specific options that need to be considered during a migration to Azure Synapse.
+
+### Table/view considerations
+
+When you migrate tables between different environments, typically only the raw data and the metadata that describes it physically migrate. Other database elements from the source system, such as indexes and log files, usually aren't migrated because they might be unnecessary or implemented differently in the new environment. For example, the `TEMPORARY` option within Oracle's `CREATE TABLE` syntax is equivalent to prefixing a table name with the `#` character in Azure Synapse.
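+
+As a hedged illustration with hypothetical names, the following sketch shows how an Oracle global temporary table maps to a session-scoped temporary table in Azure Synapse.
+
+```sql
+-- Oracle (legacy) definition, shown for comparison:
+--   CREATE GLOBAL TEMPORARY TABLE staging_orders
+--   ( order_id NUMBER(10), order_date DATE, amount NUMBER(18,2) )
+--   ON COMMIT PRESERVE ROWS;
+
+-- Azure Synapse equivalent: the # prefix marks a session-scoped temporary table.
+CREATE TABLE #staging_orders
+(
+    order_id   INT,
+    order_date DATE,
+    amount     DECIMAL(18,2)
+);
+```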
+
+Performance optimizations in the source environment, such as indexes, indicate where you might add performance optimization in the new target environment. For example, if bit-mapped indexes are frequently used in queries within the source Oracle environment, that suggests that a non-clustered index should be created within Azure Synapse. Other native performance optimization techniques such as table replication may be more applicable than straight like-for-like index creation. SSMA for Oracle can provide migration recommendations for table distribution and indexing.
+
+>[!TIP]
+>Existing indexes indicate candidates for indexing in the migrated warehouse.
+
+SQL view definitions contain SQL Data Manipulation Language (DML) statements that define the view, typically with one or more `SELECT` statements. When you migrate `CREATE VIEW` statements, take into account the [DML differences](#sql-dml-differences-between-oracle-and-azure-synapse) between Oracle and Azure Synapse.
+
+### Unsupported Oracle database object types
+
+Oracle-specific features can often be replaced by Azure Synapse features. However, some Oracle database objects aren't directly supported in Azure Synapse. The following list of unsupported Oracle database objects describes how you can achieve equivalent functionality in Azure Synapse:
+
+- **Indexing options**: in Oracle, several indexing options, such as bit-mapped indexes, function-based indexes, and domain indexes, have no direct equivalent in Azure Synapse. Although Azure Synapse doesn't support those index types, you can achieve a similar reduction in disk I/O by using user-defined index types and/or partitioning. Reducing disk I/O improves query performance.
+
+ You can find out which columns are indexed and their index type by querying system catalog tables and views, such as `ALL_INDEXES`, `DBA_INDEXES`, `USER_INDEXES`, and `DBA_IND_COLUMNS`. Or, you can query the `DBA_INDEX_USAGE` or `V$OBJECT_USAGE` views when monitoring is enabled. An example catalog query follows this list.
+
+ Azure Synapse features, such as parallel query processing and in-memory caching of data and results, make it likely that fewer indexes are required for data warehouse applications to achieve excellent performance goals.
+
+- **Clustered tables**: Oracle tables can be organized so that table rows that are frequently accessed together (based on a common value) are physically stored together. This strategy reduces disk I/O when data is retrieved. Oracle also has a hash-cluster option for individual tables, which applies a hash value to the cluster key and physically stores rows with the same hash value together.
+
+ In Azure Synapse, you can achieve a similar result by partitioning and/or using other indexes.
+
+- **Materialized views**: Oracle supports materialized views and recommends one or more of them for large tables with many columns where only a few columns are regularly used in queries. Materialized views are automatically refreshed by the system when data in the base table is updated.
+
+ In 2019, Microsoft announced that Azure Synapse will support materialized views with the same functionality as in Oracle. Materialized views are now a preview feature in Azure Synapse.
+
+- **In-database triggers**: in Oracle, a trigger can be configured to automatically run when a triggering event occurs. Triggering events can be:
+
+ - A DML statement, such as `INSERT`, `UPDATE`, or `DELETE`, runs. If you defined a trigger that fires before an `INSERT` statement on a customer table, the trigger will fire once before a new row is inserted into the customer table.
+
+ - A DDL statement, such as `CREATE` or `ALTER`, runs. This triggering event is often used to record schema changes for auditing purposes.
+
+ - A system event such as startup or shutdown of the Oracle database.
+
+ - A user event such as login or logout.
+
+ Azure Synapse doesn't support Oracle database triggers. However, you can achieve equivalent functionality by using Data Factory, although doing so will require you to refactor the processes that use triggers.
+
+- **Synonyms**: Oracle supports defining synonyms as alternative names for several database object types. Those types include tables, views, sequences, procedures, stored functions, packages, materialized views, Java class schema objects, user-defined objects, or other synonyms.
+
+ Azure Synapse doesn't currently support defining synonyms, although if a synonym in Oracle refers to a table or view, then you can define a view in Azure Synapse to match the alternative name. If a synonym in Oracle refers to a function or stored procedure, then you can replace the synonym in Azure Synapse with another function or stored procedure that calls the target.
+
+- **User-defined types**: Oracle supports user-defined objects that can contain a series of individual fields, each with their own definition and default values. Those objects can then be referenced within a table definition in the same way as built-in data types like `NUMBER` or `VARCHAR`.
+
+ Azure Synapse doesn't currently support user-defined types. If the data you need to migrate includes user-defined data types, either "flatten" them into a conventional table definition, or if they're arrays of data, normalize them in a separate table.
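+
+The following hedged sketch shows the kind of catalog query referenced in the indexing options item earlier in this list. The set of index types shown is illustrative.
+
+```sql
+-- Illustrative sketch: find Oracle index types that have no direct
+-- Azure Synapse equivalent and might need a different optimization approach.
+SELECT owner, table_name, index_name, index_type
+FROM   dba_indexes
+WHERE  index_type IN ('BITMAP', 'DOMAIN')
+   OR  index_type LIKE 'FUNCTION-BASED%'
+ORDER  BY owner, table_name;
+```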
+
+### SQL DDL generation
+
+You can edit existing Oracle `CREATE TABLE` and `CREATE VIEW` scripts to achieve equivalent definitions in Azure Synapse. To do so, you might need to use [modified data types](1-design-performance-migration.md#oracle-data-type-mapping) and remove or modify Oracle-specific clauses, such as `TABLESPACE`.
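+
+The following hedged sketch illustrates that kind of edit for a hypothetical table: Oracle-specific clauses are removed, data types are mapped, and Azure Synapse distribution and index options are added.
+
+```sql
+-- Original Oracle DDL (for comparison):
+--   CREATE TABLE sales.orders
+--   ( order_id NUMBER(10) NOT NULL, order_date DATE, amount NUMBER(18,2) )
+--   TABLESPACE users;
+
+-- Edited for Azure Synapse:
+CREATE TABLE sales.orders
+(
+    order_id   INT           NOT NULL,
+    order_date DATETIME2,
+    amount     DECIMAL(18,2)
+)
+WITH
+(
+    DISTRIBUTION = HASH(order_id),
+    CLUSTERED COLUMNSTORE INDEX
+);
+```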
+
+>[!TIP]
+>Use existing Oracle metadata to automate the generation of `CREATE TABLE` and `CREATE VIEW` DDL for Azure Synapse.
+
+Within the Oracle environment, system catalog tables specify the current table/view definition. Unlike user-maintained documentation, system catalog information is always complete and in sync with current table definitions. You can access system catalog information by using utilities such as Oracle SQL Developer. Oracle SQL Developer can generate `CREATE TABLE` DDL statements that you can edit to apply to equivalent tables in Azure Synapse, as shown in the next screenshot.
++
+Oracle SQL Developer outputs the following `CREATE TABLE` statement, which contains Oracle-specific clauses that you should remove. Map any unsupported data types before running your modified `CREATE TABLE` statement on Azure Synapse.
++
+Alternatively, you can automatically generate `CREATE TABLE` statements from the information within Oracle catalog tables by using SQL queries, SSMA, or [third-party](../../partner/data-integration.md) migration tools. This approach is the fastest, most consistent way to generate `CREATE TABLE` statements for many tables.
+
+>[!TIP]
+>Third-party tools and services can automate data mapping tasks.
+
+Third-party vendors offer tools and services to automate migration, including the mapping of data types. If a [third-party](../../partner/data-integration.md) ETL tool is already in use in the Oracle environment, use that tool to implement any required data transformations.
+
+## SQL DML differences between Oracle and Azure Synapse
+
+The ANSI SQL standard defines the basic syntax for DML commands, such as `SELECT`, `INSERT`, `UPDATE`, and `DELETE`. Although Oracle and Azure Synapse both support DML commands, in some cases they implement the same command differently.
+
+>[!TIP]
+>The standard SQL DML commands `SELECT`, `INSERT`, and `UPDATE` can have additional syntax options in different database environments.
+
+The following sections discuss the Oracle-specific DML commands that need to be considered during a migration to Azure Synapse.
+
+### SQL DML syntax differences
+
+There are some SQL DML syntax differences between Oracle SQL and Azure Synapse T-SQL:
+
+- `DUAL` table: Oracle has a system table named `DUAL` that consists of exactly one column named `dummy` and one record with the value `X`. The `DUAL` system table is used when a query requires a table name for syntax reasons, but the table content isn't needed.
+
+ An example Oracle query that uses the `DUAL` table is `SELECT sysdate from dual;`. The Azure Synapse equivalent is `SELECT GETDATE();`. To simplify migration of DML, you could create an equivalent `DUAL` table in Azure Synapse using the following DDL.
+
+ ```sql
+ CREATE TABLE DUAL
+ (
+ DUMMY VARCHAR(1)
+ )
+ GO
+ INSERT INTO DUAL (DUMMY)
+ VALUES ('X')
+ GO
+ ```
+
+- `NULL` values: in Oracle, an empty string (a `CHAR` or `VARCHAR` value of length `0`) is treated as a `NULL` value. In Azure Synapse and most other databases, `NULL` and the empty string are [distinct values](/sql/t-sql/language-elements/null-and-unknown-transact-sql). Be careful when migrating data, or when migrating processes that handle or store data, to ensure that `NULL` values and empty strings are handled consistently.
+
+- Oracle outer join syntax: although more recent versions of Oracle support ANSI outer join syntax, older Oracle systems use a proprietary syntax for outer joins that uses a plus sign (`+`) within the SQL statement. If you're migrating an older Oracle environment, you might encounter the older syntax. For example:
+
+ ```SQL
+ SELECT
+ d.deptno, e.job
+ FROM
+ dept d,
+ emp e
+ WHERE
+ d.deptno = e.deptno (+)
+ AND e.job (+) = 'CLERK'
+ GROUP BY
+ d.deptno, e.job;
+ ```
+
+ The equivalent ANSI standard syntax is:
+
+ ```SQL
+ SELECT
+ d.deptno, e.job
+ FROM
+ dept d
+ LEFT OUTER JOIN emp e ON d.deptno = e.deptno
+ and e.job = 'CLERK'
+ GROUP BY
+ d.deptno,
+ e.job
+ ORDER BY
+ d.deptno, e.job;
+ ```
+
+- `DATE` data: in Oracle, the `DATE` data type can store both date and time. Azure Synapse provides separate `DATE`, `TIME`, and `DATETIME` data types. When you're migrating Oracle `DATE` columns, check whether they store both date and time or just a date. If a column stores only a date, map it to `DATE`; otherwise, map it to `DATETIME`.
+
+- `DATE` arithmetic: Oracle supports subtracting one date from another, for example `SELECT date '2018-12-31' - date '2018-12-01' from dual;`. In Azure Synapse, you can subtract dates by using the `DATEDIFF()` function, for example `SELECT DATEDIFF(day, '2018-12-01', '2018-12-31');`.
+
+ Oracle can subtract integers from dates, for example `SELECT hire_date, (hire_date-1) FROM employees;`. In Azure Synapse, you can add or subtract integers from dates by using the `DATEADD()` function.
+
+- Updates via views: in Oracle you can run insert, update, and delete operations against a view to update the underlying table. In Azure Synapse, you run those operations against a base table&mdash;not a view. You might have to re-engineer ETL processing if an Oracle table is updated through a view.
+
+- Built-in functions: the following table shows the differences in syntax and usage for some built-in functions. A short translation example follows the table.
+
+| Oracle function | Description | Azure Synapse equivalent |
+|-|-|-|
+| ADD_MONTHS | Add a specified number of months | DATEADD |
+| CAST | Convert one built-in data type into another | CAST |
+| DECODE | Evaluate a list of conditions | CASE expression |
+| EMPTY_BLOB | Create an empty BLOB value | `0x` constant (empty binary string) |
+| EMPTY_CLOB | Create an empty CLOB or NCLOB value | `''` (empty string) |
+| INITCAP | Capitalize the first letter of each word | User-defined function |
+| INSTR | Find position of a substring in a string | CHARINDEX |
+| LAST_DAY | Get the last date of month | EOMONTH |
+| LENGTH | Get string length in characters | LEN |
+| LPAD | Left-pad string to the specified length | Expression using REPLICATE, RIGHT, and LEFT |
+| MOD | Get the remainder of a division of one number by another | `%` operator |
+| MONTHS_BETWEEN | Get the number of months between two dates | DATEDIFF |
+| NVL | Replace `NULL` with expression | ISNULL |
+| SUBSTR | Return a substring from a string | SUBSTRING |
+| TO_CHAR for datetime | Convert datetime to string | CONVERT |
+| TO_DATE | Convert a string to datetime | CONVERT |
+| TRANSLATE | One-to-one single character substitution | Expressions using REPLACE or a user-defined function |
+| TRIM | Trim leading or trailing characters | LTRIM and RTRIM |
+| TRUNC for datetime | Truncate datetime | Expressions using CONVERT |
+| UNISTR | Convert Unicode code points to characters | Expressions using NCHAR |
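+
+The following hedged sketch illustrates a few of these translations; the table and column names are hypothetical.
+
+```sql
+-- Oracle (legacy), shown for comparison:
+--   SELECT NVL(region, 'Unknown'), SUBSTR(emp_name, 1, 3), ADD_MONTHS(hire_date, 6)
+--   FROM   emp;
+
+-- Azure Synapse T-SQL equivalent:
+SELECT ISNULL(region, 'Unknown')    AS region,
+       SUBSTRING(emp_name, 1, 3)    AS name_prefix,
+       DATEADD(month, 6, hire_date) AS review_date
+FROM dbo.emp;
+```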
+
+### Functions, stored procedures, and sequences
+
+When migrating a data warehouse from a mature environment like Oracle, you probably need to migrate elements other than simple tables and views. For functions, stored procedures, and sequences, check whether tools within the Azure environment can replace their functionality because it's usually more efficient to use built-in Azure tools than to recode the Oracle functions.
+
+As part of your preparation phase, create an inventory of objects that need to be migrated, define a method for handling them, and allocate appropriate resources in your migration plan.
+
+Microsoft tools like SSMA for Oracle and Azure Database Migration Services, or [third-party](../../partner/data-integration.md) migration products and services, can automate the migration of functions, stored procedures, and sequences.
+
+>[!TIP]
+>Third-party products and services can automate migration of non-data elements.
+
+The following sections discuss the migration of functions, stored procedures, and sequences.
+
+#### Functions
+
+As with most database products, Oracle supports system and user-defined functions within a SQL implementation. When you migrate a legacy database platform to Azure Synapse, you can usually migrate common system functions without change. Some system functions might have a slightly different syntax, but you can automate any required changes.
+
+For Oracle system functions or arbitrary user-defined functions that have no equivalent in Azure Synapse, recode those functions using the target environment language. Oracle user-defined functions are coded in PL/SQL, Java, or C. Azure Synapse uses the Transact-SQL language to implement user-defined functions.
+
+#### Stored procedures
+
+Most modern database products support storing procedures within the database. Oracle provides the PL/SQL language for this purpose. A stored procedure typically contains both SQL statements and procedural logic, and returns data or a status.
+
+Azure Synapse supports stored procedures using T-SQL, so you'll need to recode any migrated stored procedures in T-SQL.
+
+#### Sequences
+
+In Oracle, a sequence is a named database object, created using `CREATE SEQUENCE`. A sequence provides unique numeric values via the `CURRVAL` and `NEXTVAL` pseudocolumns. You can use the generated unique numbers as surrogate key values for primary keys. Azure Synapse doesn't implement `CREATE SEQUENCE`, but you can implement sequences using `IDENTITY` columns or SQL code that generates the next sequence number in a series.
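+
+As a hedged sketch with hypothetical names, the following table definition replaces an Oracle sequence-based surrogate key with an `IDENTITY` column. Note that `IDENTITY` values in a distributed table aren't guaranteed to be contiguous.
+
+```sql
+-- Illustrative sketch: an IDENTITY column generates surrogate key values that
+-- an Oracle sequence would otherwise have supplied.
+CREATE TABLE dbo.dim_customer
+(
+    customer_key  INT IDENTITY(1,1) NOT NULL,
+    customer_id   VARCHAR(20)       NOT NULL,
+    customer_name VARCHAR(100)
+)
+WITH
+(
+    DISTRIBUTION = HASH(customer_id),
+    CLUSTERED COLUMNSTORE INDEX
+);
+```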
+
+### Use EXPLAIN to validate legacy SQL
+
+>[!TIP]
+>Use real queries from the existing system query logs to find potential migration issues.
+
+Assuming a like-for-like migrated data model in Azure Synapse with the same table and column names, one way to test legacy Oracle SQL for compatibility with Azure Synapse is:
+
+1. Capture some representative SQL statements from the legacy system query history logs.
+1. Prefix those queries with the `EXPLAIN` statement.
+1. Run the `EXPLAIN` statements in Azure Synapse.
+
+Any incompatible SQL will generate an error, and you can use the error information to determine the scale of the recoding task. This approach doesn't require you to load any data into the Azure environment; you only need to create the relevant tables and views.
+
+## Summary
+
+Existing legacy Oracle installations are typically implemented in a way that makes migration to Azure Synapse relatively straightforward. Both environments use SQL for analytical queries on large data volumes, and generally use some form of dimensional data model. These factors make Oracle installations a good candidate for migration to Azure Synapse.
+
+To summarize, our recommendations for minimizing the task of migrating SQL code from Oracle to Azure Synapse are:
+
+- Migrate your existing data model as-is to minimize risk, effort, and migration time, even if a different data model is planned, such as a data vault.
+
+- Understand the differences between the Oracle SQL implementation and the Azure Synapse implementation.
+
+- Use the metadata and query logs from the existing Oracle implementation to assess the impact of changing the environment. Plan an approach to mitigate the differences.
+
+- Automate the migration process to minimize risk, effort, and migration time. You can use Microsoft tools such as Azure Database Migration Services and SSMA.
+
+- Consider using specialist third-party tools and services to streamline the migration.
+
+## Next steps
+
+To learn more about Microsoft and third-party tools, see the next article in this series: [Tools for Oracle data warehouse migration to Azure Synapse Analytics](6-microsoft-third-party-migration-tools.md).
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/6-microsoft-third-party-migration-tools.md
+
+ Title: "Tools for Oracle data warehouse migration to Azure Synapse Analytics"
+description: Learn about Microsoft and third-party data and database migration tools that can help you migrate from Oracle to Azure Synapse Analytics.
+++
+ms.devlang:
++++ Last updated : 07/26/2022++
+# Tools for Oracle data warehouse migration to Azure Synapse Analytics
+
+This article is part six of a seven-part series that provides guidance on how to migrate from Oracle to Azure Synapse Analytics. The focus of this article is best practices for Microsoft and third-party tools.
+
+## Data warehouse migration tools
+
+By migrating your existing data warehouse to Azure Synapse, you benefit from:
+
+- A globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database.
+
+- The rich Microsoft analytical ecosystem that exists on Azure. This ecosystem consists of technologies to help modernize your data warehouse once it's migrated and extend your analytical capabilities to drive new value.
+
+Several tools from both Microsoft and [third-party partners](../../partner/data-integration.md) can help you migrate your existing data warehouse to Azure Synapse. This article discusses the following types of tools:
+
+- Microsoft data and database migration tools.
+
+- Third-party data warehouse automation tools to automate and document the migration to Azure Synapse.
+
+- Third-party data warehouse migration tools to migrate schema and data to Azure Synapse.
+
+- Third-party tools to bridge the SQL differences between your existing data warehouse DBMS and Azure Synapse.
+
+## Microsoft data migration tools
+
+Microsoft offers several tools to help you migrate your existing data warehouse to Azure Synapse, such as:
+
+- [SQL Server Migration Assistant](/sql/ssma/oracle/sql-server-migration-assistant-for-oracle-oracletosql) (SSMA)
+
+- [Azure Data Factory](../../../data-factory/introduction.md).
+
+- Microsoft services for physical data transfer.
+
+- Microsoft services for data ingestion.
+
+The next sections discuss these tools in more detail.
+
+### SQL Server Migration Assistant (SSMA)
+
+[SQL Server Migration Assistant](/sql/ssma/oracle/sql-server-migration-assistant-for-oracle-oracletosql) (SSMA) for Oracle can automate many parts of the migration process, including in some cases functions and procedural code. SSMA supports Azure Synapse as a target environment.
+
+
+SSMA for Oracle can help you migrate an Oracle data warehouse or data mart to Azure Synapse. SSMA is designed to automate the process of migrating tables, views, and data from an existing Oracle environment.
++
+### Microsoft Azure Data Factory
+
+Data Factory is a fully managed, pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. It uses Apache Spark to process and analyze data in parallel and in-memory to maximize throughput.
+
+>[!TIP]
+>Data Factory allows you to build scalable data integration pipelines code-free.
+
+[Data Factory connectors](../../../data-factory/connector-overview.md) support connections to external data sources and databases and include templates for common data integration tasks. A visual front-end, browser-based UI enables non-programmers to create and run [pipelines](../../data-explorer/ingest-dat) to ingest, transform, and load data. More experienced programmers can incorporate custom code, such as Python programs.
+
+>[!TIP]
+>Data Factory enables collaborative development between business and IT professionals.
+
+Data Factory is also an orchestration tool and is the best Microsoft tool to automate the end-to-end migration process. Automation reduces the risk, effort, and time to migrate, and makes the migration process easily repeatable. The following diagram shows a mapping data flow in Data Factory.
++
+The next screenshot shows a wrangling data flow in Data Factory.
++
+In Data Factory, you can develop simple or comprehensive ETL and ELT processes without coding or maintenance with just a few clicks. ETL/ELT processes ingest, move, prepare, transform, and process your data. You can design and manage scheduling and triggers in Data Factory to build an automated data integration and loading environment. In Data Factory, you can define, manage, and schedule PolyBase bulk data load processes.
+
+>[!TIP]
+>Data Factory includes tools to help migrate both your data and your entire data warehouse to Azure.
+
+You can use Data Factory to implement and manage a hybrid environment with on-premises, cloud, streaming, and SaaS data in a secure and consistent way. SaaS data might come from applications such as Salesforce.
+
+Wrangling data flows is a new capability in Data Factory. This capability opens up Data Factory to business users who want to visually discover, explore, and prepare data at scale without writing code. Wrangling data flows offers self-service data preparation, similar to Microsoft Excel, Power Query, and Microsoft Power BI dataflows. Business users can prepare and integrate data through a spreadsheet-style UI with drop-down transform options.
+
+Data Factory also provides a data migration at scale capability that helps you move data from your source system to an Azure SQL target.
+
+Data Factory is the recommended approach for implementing data integration and ETL/ELT processes in the Azure Synapse environment, especially if you want to refactor existing legacy processes.
+
+### Microsoft services for physical data transfer
+
+The following sections discuss a range of products and services that Microsoft offers to assist customers with data transfer. These data transfer options can significantly reduce migration downtime.
+
+#### Azure ExpressRoute
+
+[Azure ExpressRoute](../../../expressroute/expressroute-introduction.md) creates private connections between Azure data centers and infrastructure on your premises or in a collocation environment. ExpressRoute connections don't go over the public internet, and offer more reliability, faster speeds, and lower latencies than typical internet connections. In some cases, you gain significant cost benefits by using ExpressRoute connections to transfer data between on-premises systems and Azure.
+
+#### AzCopy
+
+[AzCopy](../../../storage/common/storage-use-azcopy-v10.md) is a command line utility that copies files to Azure Blob Storage over a standard internet connection. In a warehouse migration project, you can use AzCopy to upload extracted, compressed, delimited text files before loading them into Azure Synapse using [PolyBase](#polybase). AzCopy can upload individual files, file selections, or file folders. If the exported files are in Parquet format, use a native Parquet reader instead.
+
+#### Azure Data Box
+
+[Azure Data Box](../../../databox/data-box-overview.md) is a Microsoft service that provides you with a proprietary physical storage device that you can copy migration data onto. You then ship the device to an Azure data center for data upload to cloud storage. This service can be cost-effective for large volumes of data, such as tens or hundreds of terabytes, or where network bandwidth isn't readily available. Azure Data Box is typically used for a large one-off historical data load into Azure Synapse.
+
+#### Azure Data Box Gateway
+
+[Azure Data Box Gateway](../../../databox-gateway/data-box-gateway-overview.md) is a virtualized cloud storage gateway device that resides on your premises and sends your images, media, and other data to Azure. Use Data Box Gateway for one-off migration tasks or ongoing incremental data uploads.
+
+### Microsoft services for data ingestion
+
+The following sections discuss the products and services that Microsoft offers to assist customers with data ingestion.
+
+#### COPY INTO
+
+The [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql#syntax) statement provides the most flexibility for high-throughput data ingestion into Azure Synapse. For more information about `COPY INTO` capabilities, see [COPY (Transact-SQL)](/sql/t-sql/statements/copy-into-transact-sql).
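+
+The following hedged sketch loads delimited text files from Azure Blob Storage into a staging table. The storage account, container, table names, and authentication method are placeholders; adjust them for your environment.
+
+```sql
+-- Illustrative sketch: bulk load pipe-delimited files into a staging table.
+COPY INTO dbo.stage_orders
+FROM 'https://mystorageaccount.blob.core.windows.net/migration/orders/*.txt'
+WITH
+(
+    FILE_TYPE = 'CSV',
+    FIELDTERMINATOR = '|',
+    FIRSTROW = 2,
+    CREDENTIAL = (IDENTITY = 'Managed Identity')
+);
+```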
+
+#### PolyBase
+
+[PolyBase](../../sql/load-data-overview.md) is the fastest, most scalable method for bulk data load into Azure Synapse. PolyBase uses the massively parallel processing (MPP) architecture of Azure Synapse for parallel loading of data to achieve the fastest throughput. PolyBase can read data from flat files in Azure Blob Storage, or directly from external data sources and other relational databases via connectors.
+
+>[!TIP]
+>PolyBase can load data in parallel from Azure Blob Storage into Azure Synapse.
+
+PolyBase can also directly read from files compressed with gzip to reduce the physical volume of data during a load process. PolyBase supports popular data formats such as delimited text, ORC, and Parquet.
+
+>[!TIP]
+>You can invoke PolyBase from Data Factory as part of a migration pipeline.
+
+PolyBase is tightly integrated with Data Factory to support rapid development of data load ETL/ELT processes. You can schedule data load processes through a visual UI for higher productivity and fewer errors than hand-written code. Microsoft recommends PolyBase for data ingestion into Azure Synapse, especially for high-volume data ingestion.
+
+PolyBase uses `CREATE TABLE AS` or `INSERT...SELECT` statements to load data. `CREATE TABLE AS` minimizes logging to achieve the highest throughput. The most efficient input format for data load is compressed delimited text files. For maximum throughput, split large input files into multiple smaller files and load them in parallel. For fastest loading to a staging table, define the target table as `HEAP` type and use round-robin distribution.
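+
+The following hedged sketch shows the pattern: an external table over staged files, then a `CREATE TABLE AS SELECT` (CTAS) load into a round-robin heap staging table. The external data source, file format, and table names are placeholders that you'd define for your own environment.
+
+```sql
+-- Illustrative sketch: an external table over staged delimited files.
+CREATE EXTERNAL TABLE ext.stage_orders
+(
+    order_id   INT,
+    order_date DATE,
+    amount     DECIMAL(18,2)
+)
+WITH
+(
+    LOCATION    = '/orders/',
+    DATA_SOURCE = migration_blob_storage,   -- existing external data source
+    FILE_FORMAT = pipe_delimited_text       -- existing external file format
+);
+
+-- CTAS load into a heap staging table with round-robin distribution.
+CREATE TABLE dbo.stage_orders
+WITH
+(
+    DISTRIBUTION = ROUND_ROBIN,
+    HEAP
+)
+AS
+SELECT * FROM ext.stage_orders;
+```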
+
+PolyBase has some limitations: it requires the data row length to be less than 1 megabyte, and it doesn't support fixed-width nested formats like JSON and XML.
+
+### Microsoft tools for Oracle migrations
+
+[SQL Server Migration Assistant](/sql/ssma/oracle/sql-server-migration-assistant-for-oracle-oracletosql) (SSMA) for Oracle can help you migrate your legacy on-premises data warehouse platform to Azure Synapse.
+
+### Microsoft partners for Oracle migrations
+
+[Microsoft partners](../../partner/data-integration.md) offer tools, services, and expertise to help you migrate your legacy on-premises data warehouse platform to Azure Synapse.
+
+## Next steps
+
+To learn more about implementing modern data warehouses, see the next article in this series: [Beyond Oracle migration, implement a modern data warehouse in Microsoft Azure](7-beyond-data-warehouse-migration.md).
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/7-beyond-data-warehouse-migration.md
+
+ Title: "Beyond Oracle migration, implement a modern data warehouse in Microsoft Azure"
+description: Learn how an Oracle migration to Azure Synapse Analytics lets you integrate your data warehouse with the Microsoft Azure analytical ecosystem.
+++
+ms.devlang:
++++ Last updated : 07/15/2022++
+# Beyond Oracle migration, implement a modern data warehouse in Microsoft Azure
+
+This article is part seven of a seven-part series that provides guidance on how to migrate from Oracle to Azure Synapse Analytics. The focus of this article is best practices for implementing modern data warehouses.
+
+## Beyond data warehouse migration to Azure
+
+A key reason to migrate your existing data warehouse to Azure Synapse Analytics is to utilize a globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database. With Azure Synapse, you can integrate your migrated data warehouse with the complete Microsoft Azure analytical ecosystem to take advantage of other Microsoft technologies and modernize your migrated data warehouse. Those technologies include:
+
+- [Azure Data Lake Storage](../../../storage/blobs/data-lake-storage-introduction.md) for cost effective data ingestion, staging, cleansing, and transformation. Data Lake Storage can free up the data warehouse capacity occupied by fast-growing staging tables.
+
+- [Azure Data Factory](../../../data-factory/introduction.md) for collaborative IT and self-service data integration with [connectors](../../../data-factory/connector-overview.md) to cloud and on-premises data sources and streaming data.
+
+- [Common Data Model](/common-data-model/) to share consistent trusted data across multiple technologies, including:
+ - Azure Synapse
+ - Azure Synapse Spark
+ - Azure HDInsight
+ - Power BI
+ - Adobe Customer Experience Platform
+ - Azure IoT
+ - Microsoft ISV partners
+
+- Microsoft [data science technologies](/azure/architecture/data-science-process/platforms-and-tools), including:
+ - Azure Machine Learning studio
+ - Azure Machine Learning
+ - Azure Synapse Spark (Spark as a service)
+ - Jupyter Notebooks
+ - RStudio
+ - ML.NET
+ - .NET for Apache Spark, which lets data scientists use Azure Synapse data to train machine learning models at scale.
+
+- [Azure HDInsight](../../../hdinsight/index.yml) to process large amounts of data, and to join big data with Azure Synapse data by creating a logical data warehouse using PolyBase.
+
+- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md), and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka) to integrate live streaming data into Azure Synapse.
+
+The growth of big data has led to an acute demand for [machine learning](../../machine-learning/what-is-machine-learning.md) to enable custom-built, trained machine learning models for use in Azure Synapse. Machine learning models enable in-database analytics to run at scale in batch, on an event-driven basis, and on demand. The ability to take advantage of in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees consistent predictions and recommendations.
+
+In addition, you can integrate Azure Synapse with Microsoft partner tools on Azure to shorten time to value.
+
+Let's take a closer look at how you can take advantage of technologies in the Microsoft analytical ecosystem to modernize your data warehouse after you've migrated to Azure Synapse.
+
+## Offload data staging and ETL processing to Data Lake Storage and Data Factory
+
+Digital transformation has created a key challenge for enterprises by generating a torrent of new data for capture and analysis. A good example is transaction data created by opening online transactional processing (OLTP) systems to service access from mobile devices. Much of this data finds its way into data warehouses, and OLTP systems are the main source. With customers now driving the transaction rate rather than employees, the volume of data in data warehouse staging tables has been growing rapidly.
+
+With the rapid influx of data into the enterprise, along with new sources of data like Internet of Things (IoT), companies must find ways to scale up data integration ETL processing. One method is to offload ingestion, data cleansing, transformation, and integration to a data lake and process data at scale there, as part of a data warehouse modernization program.
+
+Once you've migrated your data warehouse to Azure Synapse, you can modernize your ETL processing by ingesting and staging data in Data Lake Storage. You can then clean, transform, and integrate your data at scale using Data Factory before loading it into Azure Synapse in parallel using PolyBase.
+
+For ELT strategies, consider offloading ELT processing to Data Lake Storage to easily scale as your data volume or frequency grows.
+
+### Microsoft Azure Data Factory
+
+[Azure Data Factory](../../../data-factory/introduction.md) is a pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. Data Factory provides a web-based UI to build data integration pipelines with no code. With Data Factory, you can:
+
+- Build scalable data integration pipelines code-free.
+
+- Easily acquire data at scale.
+
+- Pay only for what you use.
+
+- Connect to on-premises, cloud, and SaaS-based data sources.
+
+- Ingest, move, clean, transform, integrate, and analyze cloud and on-premises data at scale.
+
+- Seamlessly author, monitor, and manage pipelines that span data stores both on-premises and in the cloud.
+
+- Enable pay-as-you-go scale-out in alignment with customer growth.
+
+You can use these features without writing any code, or you can add custom code to Data Factory pipelines. The following screenshot shows an example Data Factory pipeline.
++
+>[!TIP]
+>Data Factory lets you build scalable data integration pipelines without code.
+
+You can develop Data Factory pipelines from any of several places, including:
+
+- Microsoft Azure portal.
+
+- Microsoft Azure PowerShell.
+
+- Programmatically from .NET and Python using a multi-language SDK.
+
+- Azure Resource Manager (ARM) templates.
+
+- REST APIs.
+
+>[!TIP]
+>Data Factory can connect to on-premises, cloud, and SaaS data.
+
+Developers and data scientists who prefer to write code can easily author Data Factory pipelines in Java, Python, and .NET using the software development kits (SDKs) available for those programming languages. Data Factory pipelines can be hybrid data pipelines because they can connect, ingest, clean, transform, and analyze data in on-premises data centers, Microsoft Azure, other clouds, and SaaS offerings.
+
+After you develop Data Factory pipelines to integrate and analyze data, you can deploy those pipelines globally and schedule them to run in batch, invoke them on demand as a service, or run them in real-time on an event-driven basis. A Data Factory pipeline can also run on one or more execution engines and monitor execution to ensure performance and to track errors.
+
+>[!TIP]
+>In Azure Data Factory, pipelines control the integration and analysis of data. Data Factory is enterprise-class data integration software aimed at IT professionals and has data wrangling capability for business users.
+
+#### Use cases
+
+Data Factory supports multiple use cases, such as:
+
+- Prepare, integrate, and enrich data from cloud and on-premises data sources to populate your migrated data warehouse and data marts on Microsoft Azure Synapse.
+
+- Prepare, integrate, and enrich data from cloud and on-premises data sources to produce training data for use in machine learning model development and in retraining analytical models.
+
+- Orchestrate data preparation and analytics to create predictive and prescriptive analytical pipelines for processing and analyzing data in batch, such as sentiment analytics. Either act on the results of the analysis or populate your data warehouse with the results.
+
+- Prepare, integrate, and enrich data for data-driven business applications running on the Azure cloud on top of operational data stores such as Azure Cosmos DB.
+
+>[!TIP]
+>Build training data sets in data science to develop machine learning models.
+
+#### Data sources
+
+Data Factory lets you use [connectors](../../../data-factory/connector-overview.md) from both cloud and on-premises data sources. Agent software, known as a *self-hosted integration runtime*, securely accesses on-premises data sources and supports secure, scalable data transfer.
+
+#### Transform data using Azure Data Factory
+
+Within a Data Factory pipeline, you can ingest, clean, transform, integrate, and analyze any type of data from these sources. Data can be structured, semi-structured like JSON or Avro, or unstructured.
+
+Without writing any code, professional ETL developers can use Data Factory mapping data flows to filter, split, join (several join types), look up, pivot, unpivot, sort, union, and aggregate data. In addition, Data Factory supports surrogate keys, multiple write processing options like insert, upsert, update, table recreation, and table truncation, and several types of target data stores&mdash;also known as sinks. ETL developers can also create aggregations, including time-series aggregations that require a window to be placed on data columns.
+
+>[!TIP]
+>Professional ETL developers can use Data Factory mapping data flows to clean, transform, and integrate data without the need to write code.
+
+You can run mapping data flows that transform data as activities in a Data Factory pipeline, and if necessary, you can include multiple mapping data flows in a single pipeline. In this way, you can manage the complexity by breaking up challenging data transformation and integration tasks into smaller mapping dataflows that can be combined. And, you can add custom code when needed. In addition to this functionality, Data Factory mapping data flows include the ability to:
+
+- Define expressions to clean and transform data, compute aggregations, and enrich data. For example, these expressions can perform feature engineering on a date field to break it into multiple fields to create training data during machine learning model development. You can construct expressions from a rich set of functions that include mathematical, temporal, split, merge, string concatenation, conditions, pattern match, replace, and many other functions.
+
+- Automatically handle schema drift so that data transformation pipelines can avoid being impacted by schema changes in data sources. This ability is especially important for streaming IoT data, where schema changes can happen without notice if devices are upgraded or when readings are missed by gateway devices collecting IoT data.
+
+- Partition data to enable transformations to run in parallel at scale.
+
+- Inspect streaming data to view the metadata of a stream you're transforming.
+
+>[!TIP]
+>Data Factory supports the ability to automatically detect and manage schema changes in inbound data, such as in streaming data.
+
+The following screenshot shows an example Data Factory mapping data flow.
++
+Data engineers can profile data quality and view the results of individual data transforms by enabling debug capability during development.
+
+>[!TIP]
+>Data Factory can also partition data to enable ETL processing to run at scale.
+
+If necessary, you can extend Data Factory transformational and analytical functionality by adding a linked service that contains your code into a pipeline. For example, an Azure Synapse Spark pool notebook might contain Python code that uses a trained model to score the data integrated by a mapping data flow.
+
+You can store integrated data and any results from analytics within a Data Factory pipeline in one or more data stores, such as Data Lake Storage, Azure Synapse, or Hive tables in HDInsight. You can also invoke other activities to act on insights produced by a Data Factory analytical pipeline.
+
+>[!TIP]
+>Data Factory pipelines are extensible because Data Factory lets you write your own code and run it as part of a pipeline.
+
+#### Utilize Spark to scale data integration
+
+At run time, Data Factory internally uses Azure Synapse Spark pools, which are Microsoft's Spark as a service offering, to clean and integrate data in the Azure cloud. You can clean, integrate, and analyze high-volume, high-velocity data, such as click-stream data, at scale. Microsoft's intention is to also run Data Factory pipelines on other Spark distributions. In addition to running ETL jobs on Spark, Data Factory can invoke Pig scripts and Hive queries to access and transform data stored in HDInsight.
+
+#### Link self-service data prep and Data Factory ETL processing using wrangling data flows
+
+Data wrangling lets business users, also known as citizen data integrators and data engineers, make use of the platform to visually discover, explore, and prepare data at scale without writing code. This Data Factory capability is easy to use and is similar to Microsoft Excel Power Query or Microsoft Power BI dataflows, where self-service business users use a spreadsheet-style UI with drop-down transforms to prepare and integrate data. The following screenshot shows an example Data Factory wrangling data flow.
++
+Unlike Excel and Power BI, Data Factory [wrangling data flows](../../../data-factory/wrangling-tutorial.md) use Power Query to generate M code and then translate it into a massively parallel in-memory Spark job for cloud-scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets professional ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flows diagram shows how both Data Factory and Azure Synapse Spark pool notebooks can be combined in the same Data Factory pipeline. The combination of mapping and wrangling data flows in Data Factory helps IT and business users stay aware of what data flows each has created, and supports data flow reuse to minimize reinvention and maximize productivity and consistency.
+
+>[!TIP]
+>Data Factory supports both wrangling data flows and mapping data flows, so business users and IT users can integrate data collaboratively on a common platform.
+
+#### Link data and analytics in analytical pipelines
+
+In addition to cleaning and transforming data, Data Factory can combine data integration and analytics in the same pipeline. You can use Data Factory to create both data integration and analytical pipelines, the latter being an extension of the former. You can drop an analytical model into a pipeline to create an analytical pipeline that generates clean, integrated data for predictions or recommendations. Then, you can act on the predictions or recommendations immediately, or store them in your data warehouse to provide new insights and recommendations that can be viewed in BI tools.
+
+To batch score your data, you can develop an analytical model that you invoke as a service within a Data Factory pipeline. You can develop analytical models code-free with Azure Machine Learning studio, or with the Azure Machine Learning SDK using Azure Synapse Spark pool notebooks or R in RStudio. When you run Spark machine learning pipelines on Azure Synapse Spark pool notebooks, analysis happens at scale.
+
+You can store integrated data and any Data Factory analytical pipeline results in one or more data stores, such as Data Lake Storage, Azure Synapse, or Hive tables in HDInsight. You can also invoke other activities to act on insights produced by a Data Factory analytical pipeline.
+
+## Use a lake database to share consistent trusted data
+
+A key objective of any data integration setup is the ability to integrate data once and reuse it everywhere, not just in a data warehouse. For example, you might want to use integrated data in data science. Reuse avoids reinvention and ensures consistent, commonly understood data that everyone can trust.
+
+[Common Data Model](/common-data-model/) describes core data entities that can be shared and reused across the enterprise. To achieve reuse, Common Data Model establishes a set of common data names and definitions that describe logical data entities. Examples of common data names include Customer, Account, Product, Supplier, Orders, Payments, and Returns. IT and business professionals can use data integration software to create and store common data assets to maximize their reuse and drive consistency everywhere.
+
+Azure Synapse provides industry-specific database templates to help standardize data in the lake. [Lake database templates](../../database-designer/concepts-database-templates.md) provide schemas for predefined business areas, enabling data to be loaded into a lake database in a structured way. The power comes when you use data integration software to create lake database common data assets, resulting in self-describing trusted data that can be consumed by applications and analytical systems. You can create common data assets in Data Lake Storage by using Data Factory.
+
+>[!TIP]
+>Data Lake Storage is shared storage that underpins Microsoft Azure Synapse, Azure Machine Learning, Azure Synapse Spark, and HDInsight.
+
+Power BI, Azure Synapse Spark, Azure Synapse, and Azure Machine Learning can consume common data assets. The following diagram shows how a lake database can be used in Azure Synapse.
++
+>[!TIP]
+>Integrate data to create lake database logical entities in shared storage to maximize the reuse of common data assets.
+
+## Integration with Microsoft data science technologies on Azure
+
+Another key objective when modernizing a data warehouse is to produce insights for competitive advantage. You can produce insights by integrating your migrated data warehouse with Microsoft and third-party data science technologies in Azure. The following sections describe the machine learning and data science technologies that Microsoft offers and how they can be used with Azure Synapse in a modern data warehouse environment.
+
+### Microsoft technologies for data science on Azure
+
+Microsoft offers a range of technologies that support advanced analysis. With these technologies, you can build predictive analytical models using machine learning or analyze unstructured data using deep learning. The technologies include:
+
+- Azure Machine Learning studio
+
+- Azure Machine Learning
+
+- Azure Synapse Spark pool notebooks
+
+- ML.NET (API, CLI, or ML.NET Model Builder for Visual Studio)
+
+- .NET for Apache Spark
+
+Data scientists can use RStudio (R) and Jupyter Notebooks (Python) to develop analytical models, or they can use frameworks such as Keras or TensorFlow.
+
+>[!TIP]
+>Develop machine learning models using a no/low-code approach or by using programming languages like Python, R, and .NET.
+
+#### Azure Machine Learning studio
+
+Azure Machine Learning studio is a fully managed cloud service that lets you build, deploy, and share predictive analytics using a drag-and-drop, web-based UI. The following screenshot shows the Azure Machine Learning studio UI.
++
+#### Azure Machine Learning
+
+Azure Machine Learning provides an SDK and services for Python that can help you quickly prepare data and also train and deploy machine learning models. You can use Azure Machine Learning in Azure notebooks using Jupyter Notebook, with open-source frameworks such as PyTorch, TensorFlow, scikit-learn, or Spark MLlib&mdash;the machine learning library for Spark. Azure Machine Learning provides an AutoML capability that automatically tests multiple algorithms to identify the most accurate algorithms and expedite model development.
+
+>[!TIP]
+>Azure Machine Learning provides an SDK for developing machine learning models using several open-source frameworks.
+
+You can also use Azure Machine Learning to build machine learning pipelines that manage end-to-end workflow, programmatically scale in the cloud, and deploy models both to the cloud and the edge. Azure Machine Learning contains [workspaces](../../../machine-learning/concept-workspace.md), which are logical spaces that you can programmatically or manually create in the Azure portal. These workspaces keep compute targets, experiments, data stores, trained machine learning models, Docker images, and deployed services all in one place to enable teams to work together. You can use Azure Machine Learning in Visual Studio with the Visual Studio for AI extension.
+
+>[!TIP]
+>Organize and manage related data stores, experiments, trained models, Docker images, and deployed services in workspaces.
+
+#### Azure Synapse Spark pool notebooks
+
+An [Azure Synapse Spark pool notebook](../../spark/apache-spark-development-using-notebooks.md) is an Azure-optimized Apache Spark service. With Azure Synapse Spark pool notebooks:
+
+- Data engineers can build and run scalable data preparation jobs using Data Factory.
+
+- Data scientists can build and run machine learning models at scale using notebooks written in languages such as Scala, R, Python, Java, and SQL, and visualize the results.
+
+>[!TIP]
+>Azure Synapse Spark is a dynamically scalable Spark as a service offering from Microsoft. Spark offers scalable execution of data preparation, model development, and deployed model execution.
+
+Jobs running in Azure Synapse Spark pool notebooks can retrieve, process, and analyze data at scale from Azure Blob Storage, Data Lake Storage, Azure Synapse, HDInsight, and streaming data services such as Apache Kafka.
+
+>[!TIP]
+>Azure Synapse Spark can access data in a range of Microsoft analytical ecosystem data stores on Azure.
+
+Azure Synapse Spark pool notebooks support autoscaling and auto-termination to reduce total cost of ownership (TCO). Data scientists can use the MLflow open-source framework to manage the machine learning lifecycle.
+
+#### ML.NET
+
+ML.NET is an open-source, cross-platform machine learning framework for Windows, Linux, and macOS. Microsoft created ML.NET so that .NET developers can use existing tools, such as ML.NET Model Builder for Visual Studio, to develop custom machine learning models and integrate them into their .NET applications.
+
+>[!TIP]
+>Microsoft has extended its machine learning capability to .NET developers.
+
+#### .NET for Apache Spark
+
+.NET for Apache Spark extends Spark support beyond R, Scala, Python, and Java to .NET and aims to make Spark accessible to .NET developers across all Spark APIs. While .NET for Apache Spark is currently only available on Apache Spark in HDInsight, Microsoft intends to make .NET for Apache Spark available on Azure Synapse Spark pool notebooks.
+
+### Use Azure Synapse Analytics with your data warehouse
+
+To combine machine learning models with Azure Synapse, you can:
+
+- Use machine learning models in batch or in real-time on streaming data to produce new insights, and add those insights to what you already know in Azure Synapse.
+
+- Use the data in Azure Synapse to develop and train new predictive models for deployment elsewhere, such as in other applications.
+
+- Deploy machine learning models, including models trained elsewhere, in Azure Synapse to analyze data in your data warehouse and drive new business value.
+
+>[!TIP]
+>Train, test, evaluate, and run machine learning models at scale on Azure Synapse Spark pool notebooks by using data in Azure Synapse.
+
+Data scientists can use RStudio, Jupyter Notebooks, and Azure Synapse Spark pool notebooks together with Azure Machine Learning to develop machine learning models that run at scale on Azure Synapse Spark pool notebooks using data in Azure Synapse. For example, data scientists could create an unsupervised model to segment customers to drive different marketing campaigns. They could also use supervised machine learning to train a model that predicts a specific outcome, such as a customer's propensity to churn, or that recommends the next best offer for a customer to try to increase their value. The following diagram shows how Azure Synapse can be used for Azure Machine Learning.
++
+In another scenario, you can ingest social network or review website data into Data Lake Storage, then prepare and analyze the data at scale on an Azure Synapse Spark pool notebook using natural language processing to score customer sentiment about your products or brand. You can then add those scores to your data warehouse. By using big data analytics to understand the effect of negative sentiment on product sales, you add to what you already know in your data warehouse.
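+
+As a hedged illustration only: if the scored sentiment output were written to Data Lake Storage and exposed as a PolyBase external table (here given the hypothetical name `ext.ProductSentiment`), a CREATE TABLE AS SELECT statement could land those scores in the warehouse. All object names and columns below are assumptions, not part of the original guidance.
+
+```sql
+-- Hypothetical sketch: load lake-based sentiment scores into the data warehouse.
+-- ext.ProductSentiment is assumed to be an external table over scored output
+-- produced by a Spark notebook and stored in Data Lake Storage.
+CREATE TABLE dbo.ProductSentiment
+WITH
+(
+    DISTRIBUTION = HASH(ProductKey),
+    CLUSTERED COLUMNSTORE INDEX
+)
+AS
+SELECT ProductKey,
+       ScoreDate,
+       SentimentScore
+FROM ext.ProductSentiment;
+```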
+
+>[!TIP]
+>Produce new insights using machine learning on Azure in batch or in real-time and add to what you know in your data warehouse.
+
+## Integrate live streaming data into Azure Synapse Analytics
+
+When analyzing data in a modern data warehouse, you must be able to analyze streaming data in real-time and join it with historical data in your data warehouse. An example is combining IoT data with product or asset data.
+
+>[!TIP]
+>Integrate your data warehouse with streaming data from IoT devices or clickstreams.
+
+Once you've successfully migrated your data warehouse to Azure Synapse, you can introduce live streaming data integration as part of a data warehouse modernization exercise by taking advantage of the extra functionality in Azure Synapse. To do so, ingest streaming data via Event Hubs, other technologies like Apache Kafka, or potentially your existing ETL tool if it supports the streaming data sources. Store the data in Data Lake Storage. Then, create an external table in Azure Synapse using PolyBase and point it at the data being streamed into Data Lake Storage, so that your data warehouse contains new tables that provide access to the real-time streaming data. Query the external table as if the data were in the data warehouse by using standard T-SQL from any BI tool that has access to Azure Synapse. You can also create views that join the streaming data to historical tables, which makes it easier for business users to access the data.
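+
+As a rough sketch of that external table step (all names, paths, and columns here are illustrative assumptions, and database scoped credential setup is omitted), the T-SQL in a dedicated SQL pool could look like this:
+
+```sql
+-- Hypothetical PolyBase objects over streamed files landing in Data Lake Storage.
+CREATE EXTERNAL DATA SOURCE StreamingDataLake
+WITH
+(
+    TYPE = HADOOP,
+    LOCATION = 'abfss://telemetry@contosodatalake.dfs.core.windows.net'
+    -- CREDENTIAL = <database scoped credential>, omitted for brevity
+);
+
+CREATE EXTERNAL FILE FORMAT ParquetFormat
+WITH (FORMAT_TYPE = PARQUET);
+
+CREATE EXTERNAL TABLE dbo.StreamedTelemetry
+(
+    DeviceId    INT,
+    EventTime   DATETIME2,
+    Temperature FLOAT
+)
+WITH
+(
+    LOCATION = '/iot/events/',
+    DATA_SOURCE = StreamingDataLake,
+    FILE_FORMAT = ParquetFormat
+);
+GO
+
+-- Query the streamed data with standard T-SQL, joined to historical data.
+CREATE VIEW dbo.DeviceTelemetryHistory AS
+SELECT d.DeviceName, t.EventTime, t.Temperature
+FROM dbo.StreamedTelemetry AS t
+JOIN dbo.DimDevice AS d
+    ON t.DeviceId = d.DeviceId;
+```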
+
+>[!TIP]
+>Ingest streaming data into Data Lake Storage from Event Hubs or Apache Kafka, and access the data from Azure Synapse using PolyBase external tables.
+
+In the following diagram, a real-time data warehouse on Azure Synapse is integrated with streaming data in Data Lake Storage.
++
+## Create a logical data warehouse using PolyBase
+
+With PolyBase, you can create a logical data warehouse to simplify user access to multiple analytical data stores. Many companies have adopted "workload optimized" analytical data stores over the last several years in addition to their data warehouses. The analytical platforms on Azure include:
+
+- Data Lake Storage with Azure Synapse Spark pool notebook (Spark as a service), for big data analytics.
+
+- HDInsight (Hadoop as a service), also for big data analytics.
+
+- NoSQL graph databases, such as Azure Cosmos DB, for graph analysis.
+
+- Event Hubs and Stream Analytics, for real-time analysis of data in motion.
+
+You might have non-Microsoft equivalents of these platforms, or a master data management (MDM) system that needs to be accessed for consistent trusted data on customers, suppliers, products, assets, and more.
+
+>[!TIP]
+>PolyBase simplifies access to multiple underlying analytical data stores on Azure for ease of access by business users.
+
+Those analytical platforms emerged because of the explosion of new data sources inside and outside the enterprise and the demand by business users to capture and analyze the new data. The new data sources include:
+
+- Machine generated data, such as IoT sensor data and clickstream data.
+
+- Human generated data, such as social network data, review web site data, customer inbound email, images, and video.
+
+- Other external data, such as open government data and weather data.
+
+This new data goes beyond the structured transaction data and main data sources that typically feed data warehouses and often includes:
+
+- Semi-structured data like JSON, XML, or Avro.
+- Unstructured data like text, voice, image, or video, which is more complex to process and analyze.
+- High volume data, high velocity data, or both.
+
+As a result, new, more complex kinds of analysis have emerged, such as natural language processing, graph analysis, deep learning, streaming analytics, or complex analysis of large volumes of structured data. These kinds of analysis typically don't happen in a data warehouse, so it's not surprising to see different analytical platforms for different types of analytical workloads, as shown in the following diagram.
++
+>[!TIP]
+>The ability to make data in multiple analytical data stores look like it's all in one system and join it to Azure Synapse is known as a logical data warehouse architecture.
+
+Because these platforms produce new insights, it's normal to see a requirement to combine the new insights with what you already know in Azure Synapse, which is what PolyBase makes possible.
+
+By using PolyBase data virtualization inside Azure Synapse, you can implement a logical data warehouse where data in Azure Synapse is joined to data in other Azure and on-premises analytical data stores like HDInsight, Azure Cosmos DB, or streaming data flowing into Data Lake Storage from Stream Analytics or Event Hubs. This approach lowers the complexity for users, who access external tables in Azure Synapse and don't need to know that the data they're accessing is stored in multiple underlying analytical systems. The following diagram shows a complex data warehouse structure accessed through comparatively simpler yet still powerful UI methods.
++
+The diagram shows how other technologies in the Microsoft analytical ecosystem can be combined with the capability of the logical data warehouse architecture in Azure Synapse. For example, you can ingest data into Data Lake Storage and curate the data using Data Factory to create trusted data products that represent Microsoft [lake database](../../database-designer/concepts-lake-database.md) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark pool notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
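+
+As a small, hypothetical illustration of that pattern (the object names below are assumptions, with `ext.Customer` standing in for a PolyBase external table over curated lake data), a single T-SQL query can combine warehouse data with lake data:
+
+```sql
+-- Hypothetical logical data warehouse query: a native warehouse fact table
+-- joined to an external table over curated data in Data Lake Storage.
+SELECT c.CustomerName,
+       SUM(s.SalesAmount) AS TotalSales
+FROM dbo.FactSales AS s      -- native table in Azure Synapse
+JOIN ext.Customer AS c       -- PolyBase external table over lake data
+    ON s.CustomerKey = c.CustomerKey
+GROUP BY c.CustomerName;
+```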
+
+>[!TIP]
+>A logical data warehouse architecture simplifies business user access to data and adds new value to what you already know in your data warehouse.
+
+## Conclusions
+
+After you migrate your data warehouse to Azure Synapse, you can take advantage of other technologies in the Microsoft analytical ecosystem. By doing so, you not only modernize your data warehouse, but bring insights produced in other Azure analytical data stores into an integrated analytical architecture.
+
+You can broaden your ETL processing to ingest data of any type into Data Lake Storage, and then prepare and integrate the data at scale using Data Factory to produce trusted, commonly understood data assets. Those assets can be consumed by your data warehouse and accessed by data scientists and other applications. You can build real-time and batch-oriented analytical pipelines, and create machine learning models to run in batch, in real time on streaming data, and on-demand as a service.
+
+You can use PolyBase or `COPY INTO` to go beyond your data warehouse to simplify access to insights from multiple underlying analytical platforms on Azure. To do so, create holistic integrated views in a logical data warehouse that support access to streaming, big data, and traditional data warehouse insights from BI tools and applications.
+
+By migrating your data warehouse to Azure Synapse, you can take advantage of the rich Microsoft analytical ecosystem running on Azure to drive new value in your business.
+
+## Next steps
+
+To learn about migrating to a dedicated SQL pool, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migrate-to-synapse-analytics-guide.md).
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/1-design-performance-migration.md
Previously updated : 07/12/2022 Last updated : 08/11/2022 # Design and performance for Teradata migrations
You should ensure that statistics on data tables are up to date by building in a
- CSV, PARQUET, and ORC file formats.
-#### Use workload management
+#### Workload management
-Azure Synapse uses resource classes to manage workloads. In general, large resource classes provide better individual query performance, while smaller resource classes provide higher levels of concurrency. You can monitor utilization using Dynamic Management Views (DMVs) to ensure that the applicable resources are being efficiently utilized.
+Running mixed workloads can pose resource challenges on busy systems. A successful [workload management](../../sql-data-warehouse/sql-data-warehouse-workload-management.md) scheme effectively manages resources, ensures highly efficient resource utilization, and maximizes return on investment (ROI). [Workload classification](../../sql-data-warehouse/sql-data-warehouse-workload-classification.md), [workload importance](../../sql-data-warehouse/sql-data-warehouse-workload-importance.md), and [workload isolation](../../sql-data-warehouse/sql-data-warehouse-workload-isolation.md) give more control over how a workload utilizes system resources.
+
+The [workload management guide](../../sql-data-warehouse/analyze-your-workload.md) describes the techniques to analyze the workload, [manage and monitor workload importance](../../sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance.md), and the steps to [convert a resource class to a workload group](../../sql-data-warehouse/sql-data-warehouse-how-to-convert-resource-classes-workload-groups.md). Use the [Azure portal](../../sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md) and [T-SQL queries on DMVs](../../sql-data-warehouse/sql-data-warehouse-manage-monitor.md) to monitor the workload to ensure that the applicable resources are efficiently utilized.
## Next steps
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/3-security-access-operations.md
Previously updated : 06/01/2022 Last updated : 08/11/2022 # Security, access, and operations for Teradata migrations
Teradata supports several mechanisms for connection and authorization. Valid mec
- **LDAP**, which selects Lightweight Directory Access Protocol (LDAP) as the authentication mechanism. The application provides the username and password. -- **KRB5**, which selects Kerberos (KRB5) on Windows clients working with Windows servers. To log on using KRB5, the user needs to supply a domain, username, and password. The domain is specified by setting the username to `MyUserName@MyDomain`.
+- **KRB5**, which selects Kerberos (KRB5) on Windows clients working with Windows servers. To sign in using KRB5, the user needs to supply a domain, username, and password. The domain is specified by setting the username to `MyUserName@MyDomain`.
- **NTLM**, which selects NTLM on Windows clients working with Windows servers. The application provides the username and password.
In a Teradata system, workload management is the act of managing workload perfor
In Azure Synapse, resource classes are pre-determined resource limits that govern compute resources and concurrency for query execution. Resource classes can help you manage your workload by setting limits on the number of queries that run concurrently and on the compute resources assigned to each query. There's a trade-off between memory and concurrency.
-See [Resource classes for workload management](/azure/sql-data-warehouse/resource-classes-for-workload-management) for detailed information.
+Azure Synapse automatically logs resource utilization statistics. Metrics include usage statistics for CPU, memory, cache, I/O, and temporary workspace for each query. Azure Synapse also logs connectivity information, such as failed connection attempts.
+
+>[!TIP]
+>Low-level and system-wide metrics are automatically logged within Azure.
+
+Azure Synapse supports these basic workload management concepts:
+
+- **Workload classification**: you can assign a request to a workload group to set importance levels.
+
+- **Workload importance**: you can influence the order in which a request gets access to resources. By default, queries are released from the queue on a first-in, first-out basis as resources become available. Workload importance allows higher priority queries to receive resources immediately regardless of the queue.
+
+- **Workload isolation**: you can reserve resources for a workload group, assign maximum and minimum usage for varying resources, limit the resources a group of requests can consume, and set a timeout value to automatically kill runaway queries.
+
+Running mixed workloads can pose resource challenges on busy systems. A successful [workload management](../../sql-data-warehouse/sql-data-warehouse-workload-management.md) scheme effectively manages resources, ensures highly efficient resource utilization, and maximizes return on investment (ROI). [Workload classification](../../sql-data-warehouse/sql-data-warehouse-workload-classification.md), [workload importance](../../sql-data-warehouse/sql-data-warehouse-workload-importance.md), and [workload isolation](../../sql-data-warehouse/sql-data-warehouse-workload-isolation.md) give more control over how a workload utilizes system resources.
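+
+As a hedged sketch of how these concepts are expressed in T-SQL (the group name, login, and percentages below are illustrative assumptions, not recommendations):
+
+```sql
+-- Hypothetical workload isolation: reserve resources for data loads.
+CREATE WORKLOAD GROUP wgDataLoads
+WITH
+(
+    MIN_PERCENTAGE_RESOURCE = 25,           -- reserve 25% of system resources
+    CAP_PERCENTAGE_RESOURCE = 50,           -- never exceed 50%
+    REQUEST_MIN_RESOURCE_GRANT_PERCENT = 5  -- minimum grant per request
+);
+
+-- Hypothetical workload classification: route an ETL login to that group
+-- with high importance.
+CREATE WORKLOAD CLASSIFIER wcEtlLoads
+WITH
+(
+    WORKLOAD_GROUP = 'wgDataLoads',
+    MEMBERNAME     = 'etl_service_login',
+    IMPORTANCE     = HIGH
+);
+```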
+
+The [workload management guide](../../sql-data-warehouse/analyze-your-workload.md) describes the techniques to analyze the workload, [manage and monitor workload importance](../../sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance.md), and the steps to [convert a resource class to a workload group](../../sql-data-warehouse/sql-data-warehouse-how-to-convert-resource-classes-workload-groups.md). Use the [Azure portal](../../sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md) and [T-SQL queries on DMVs](../../sql-data-warehouse/sql-data-warehouse-manage-monitor.md) to monitor the workload to ensure that the applicable resources are efficiently utilized. Azure Synapse provides a set of Dynamic Management Views (DMVs) for monitoring all aspects of workload management. These views are useful when actively troubleshooting and identifying performance bottlenecks in your workload.
This information can also be used for capacity planning, to determine the resources required for additional users or application workloads. This also applies to planning scale-up and scale-down of compute resources for cost-effective support of "peaky" workloads.
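+
+For example, a minimal T-SQL sketch of that kind of DMV query against a dedicated SQL pool might look like the following (the column list is illustrative and can be adjusted to your needs):
+
+```sql
+-- Inspect active and recent requests, their workload classification, and
+-- resource allocation, to spot queuing or runaway queries.
+SELECT TOP 100
+       request_id,
+       session_id,
+       [status],
+       submit_time,
+       total_elapsed_time,
+       group_name,          -- workload group the request was classified into
+       classifier_name,     -- classifier that matched the request
+       importance,
+       resource_allocation_percentage
+FROM sys.dm_pdw_exec_requests
+ORDER BY submit_time DESC;
+```
+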
+For more information on workload management in Azure Synapse, see [Workload management with resource classes](../../sql-data-warehouse/resource-classes-for-workload-management.md).
+ ### Scale compute resources > [!TIP]
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/4-visualization-reporting.md
Some BI tools have what is known as a semantic metadata layer. That layer simpli
>[!TIP] >Some BI tools have semantic layers that simplify business user access to physical data structures in your data warehouse or data mart.-
-In a data warehouse migration, you might be forced to change column or table names. You might also need to change mappings.
+
+In a data warehouse migration, you might be forced to change column or table names. For example, in Teradata, table names can contain a "#" character, but in Azure Synapse, "#" is only allowed as a prefix to a table name to indicate a temporary table. In Teradata, temporary tables don't necessarily have a "#" in the name, but in Azure Synapse they must. You may need to do some rework to change table mappings in such cases.
To achieve consistency across multiple BI tools, create a universal semantic layer by using a data virtualization server that sits between BI tools and applications and Azure Synapse. In the data virtualization server, use common data names for high-level objects like dimensions, measures, hierarchies, and joins. That way you configure everything, including calculated fields, joins, and mappings, only once instead of in every tool. Then, point all BI tools at the data virtualization server.
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
This article lists updates to Azure Synapse Analytics that are published in Apri
* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](/azure/cognitive-services/) models, AI models from partners, and bring-your-own-data models.
-* **Azure Synapse success by design** - Project success is no accident and requires careful planning and execution. The Synapse Analytics' Success by Design playbooks are now available on Microsoft Docs. The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. These guides contain best practices from the most challenging and complex solution implementations incorporating Azure Synapse. To learn more about the Azure Synapse proof of concept playbook, read [Success by Design](./guidance/success-by-design-introduction.md).
+* **Azure Synapse success by design** - Project success is no accident and requires careful planning and execution. The Synapse Analytics' Success by Design playbooks are now available. The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. These guides contain best practices from the most challenging and complex solution implementations incorporating Azure Synapse. To learn more about the Azure Synapse proof of concept playbook, read [Success by Design](./guidance/success-by-design-introduction.md).
## SQL **Result set size limit increase** - We know that you turn to Azure Synapse Analytics to work with large amounts of data. With that in mind, the maximum size of query result sets in Serverless SQL pools has been increased from 200GB to 400GB. This limit is shared between concurrent queries. To learn more about this size limit increase and other constraints, read [Self-help for serverless SQL pool](./sql/resources-self-help-sql-on-demand.md?tabs=x80070002#constraints).
virtual-desktop Apply Windows License https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/apply-windows-license.md
Azure Virtual Desktop licensing allows you to apply a license to any Windows or
There are a few ways to use the Azure Virtual Desktop license: - You can create a host pool and its session host virtual machines using the [Azure Marketplace offering](./create-host-pools-azure-marketplace.md). Virtual machines created this way automatically have the license applied.-- You can create a host pool and its session host virtual machines using the [GitHub Azure Resource Manager template](./virtual-desktop-fall-2019/create-host-pools-arm-template.md). Virtual machines created this way automatically have the license applied.-- You can apply a license to an existing session host virtual machine. To do this, first follow the instructions in [Create a host pool with PowerShell](./create-host-pools-powershell.md) to create a host pool and associated VMs, then return to this article to learn how to apply the license.
+- You can create a host pool and its session host virtual machines using the [GitHub Azure Resource Manager template](https://github.com/Azure/RDS-Templates/tree/master/ARM-wvd-templates). Virtual machines created this way automatically have the license applied.
+- You can apply a license to an existing session host virtual machine. To do this, first follow the instructions in [Create a host pool with PowerShell or the Azure CLI](./create-host-pools-powershell.md) to create a host pool and associated VMs, then return to this article to learn how to apply the license.
## Apply a Windows license to a session host VM Make sure you have [installed and configured the latest Azure PowerShell](/powershell/azure/). Run the following PowerShell cmdlet to apply the Windows license:
virtual-desktop Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-monitor.md
Anyone monitoring Azure Monitor for Azure Virtual Desktop for your environment w
You can open Azure Monitor for Azure Virtual Desktop with one of the following methods: -- Go to [aka.ms/azmonwvdi](https://aka.ms/azmonwvdi).
+- Go to [aka.ms/avdi](https://aka.ms/avdi).
- Search for and select **Azure Virtual Desktop** from the Azure portal, then select **Insights**. - Search for and select **Azure Monitor** from the Azure portal. Select **Insights Hub** under **Insights**, then select **Azure Virtual Desktop**. Once you have the page open, enter the **Subscription**, **Resource group**, **Host pool**, and **Time range** of the environment you want to monitor.
To start using Azure Monitor for Azure Virtual Desktop, you'll need at least one
If it's your first time opening Azure Monitor for Azure Virtual Desktop, you'll need to set up Azure Monitor for your Azure Virtual Desktop environment. To configure your resources:
-1. Open Azure Monitor for Azure Virtual Desktop in the Azure portal at [aka.ms/azmonwvdi](https://aka.ms/azmonwvdi), then select **configuration workbook**.
+1. Open Azure Monitor for Azure Virtual Desktop in the Azure portal at [aka.ms/avdi](https://aka.ms/avdi), then select **configuration workbook**.
2. Select an environment to configure under **Subscription**, **Resource Group**, and **Host Pool**. The configuration workbook sets up your monitoring environment and lets you check the configuration after you've finished the setup process. It's important to check your configuration if items in the dashboard aren't displaying correctly, or when the product group publishes updates that require new settings.
virtual-desktop Connection Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/connection-latency.md
There are currently two ways you can analyze connection quality in your Azure Vi
## Monitor connection quality with Azure Log Analytics
->[!NOTE]
-> Azure Log Analytics currently only supports Azure Virtual Desktop connection network data in commercial clouds.
- If you're already using [Azure Log Analytics](diagnostics-log-analytics.md), you can monitor network data with the Azure Virtual Desktop connection network data diagnostics. The connection network data Log Analytics collects can help you discover areas that impact your end-user's graphical experience. The service collects data for reports regularly throughout the session. Azure Virtual Desktop connection network data reports have the following advantages over RemoteFX network performance counters: - Each record is connection-specific and includes the correlation ID of the connection that can be tied back to the user.
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 06/29/2022 Last updated : 08/11/2022
The Azure Virtual Desktop Agent updates regularly. This article is where you'll
Make sure to check back here often to keep up with new updates.
+## Version 1.0.5100.1100
+
+This update was released in August 2022 and includes the following changes:
+
+- Agent first-party extensions architecture completed
+- Fixed Teams error related to Azure Virtual Desktop telemetry
+- RDAgentBootloader - revision update to 1.0.4.0
+- SessionHostHealthCheckReport is now centralized in a NuGet package to be shared with first-party Teams
+- Fixes to AppAttach
+ ## Version 1.0.4574.1600 This update was released in June 2022 and includes the following changes:
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-encryption-overview.md
Title: Overview of managed disk encryption options description: Overview of managed disk encryption options Previously updated : 02/14/2022 Last updated : 08/12/2022
There are several types of encryption available for your managed disks, including Azure Disk Encryption (ADE), Server-Side Encryption (SSE) and encryption at host. -- **Azure Disk Encryption** helps protect and safeguard your data to meet your organizational security and compliance commitments. ADE encrypts the OS and data disks of Azure virtual machines (VMs) inside your VMs using the CPU of your VMs through the use of feature [DM-Crypt](https://wikipedia.org/wiki/Dm-crypt) of Linux or the [BitLocker](https://wikipedia.org/wiki/BitLocker) feature of Windows. ADE is integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets. For full details, see [Azure Disk Encryption for Linux VMs](./linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](./windows/disk-encryption-overview.md).
+- **Azure Disk Encryption** helps protect and safeguard your data to meet your organizational security and compliance commitments. ADE encrypts the OS and data disks of Azure virtual machines (VMs) inside your VMs by using the [DM-Crypt](https://wikipedia.org/wiki/Dm-crypt) feature of Linux or the [BitLocker](https://wikipedia.org/wiki/BitLocker) feature of Windows. ADE is integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets. For full details, see [Azure Disk Encryption for Linux VMs](./linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](./windows/disk-encryption-overview.md).
- **Server-Side Encryption** (also referred to as encryption-at-rest or Azure Storage encryption) automatically encrypts data stored on Azure managed disks (OS and data disks) when persisting on the Storage Clusters. For full details, see [Server-side encryption of Azure Disk Storage](./disk-encryption.md). - **Encryption at host** ensures that data stored on the VM host hosting your VM is encrypted at rest and flows encrypted to the Storage clusters. For full details, see [Encryption at host - End-to-end encryption for your VM data](./disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data).
+- **Confidential disk encryption** binds disk encryption keys to the virtual machine's TPM and makes the protected disk content accessible only to the VM. The TPM and VM guest state is always encrypted in attested code using keys released by a secure protocol that bypasses the hypervisor and host operating system. Currently only available for the OS disk. Encryption at host may be used for other disks on a Confidential VM in addition to Confidential Disk Encryption. For full details, see [DCasv5 and ECasv5 series confidential VMs](../confidential-computing/confidential-vm-overview.md#full-disk-encryption).
+ Encryption is part of a layered approach to security and should be used with other recommendations to secure Virtual Machines and their disks. For full details, see [Security recommendations for virtual machines in Azure](security-recommendations.md) and [Restrict import/export access to managed disks](disks-enable-private-links-for-import-export-portal.md). ## Comparison
-Here is a comparison of SSE, ADE, and encryption at host.
+Here's a comparison of SSE, ADE, encryption at host, and Confidential disk encryption.
-| | Encryption at rest (OS and data disks) | Temp disk encryption | Encryption of caches | Data flows encrypted between Compute and Storage | Customer control of keys | Does not use your VM's CPU | Works for custom images | Microsoft Defender for Cloud disk encryption status |
-|--|--|--|--|--|--|--|--|--|
-| **Encryption at rest with platform-managed key (SSE+PMK)** | &#x2705; | &#10060; | &#10060; | &#10060; | &#10060; | &#x2705; | &#x2705; | Unhealthy, not applicable if exempt |
-| **Encryption at rest with customer-managed key (SSE+CMK)** | &#x2705; | &#10060; | &#10060; | &#10060; | &#x2705; | &#x2705; | &#x2705; | Unhealthy, not applicable if exempt |
-| **Azure Disk Encryption** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |&#10060; | &#10060; Does not work for custom Linux images | Healthy |
-| **Encryption at Host** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | Unhealthy, not applicable if exempt |
+| | Encryption at rest (OS and data disks) | Temp disk encryption | Encryption of caches | Data flows encrypted between Compute and Storage | Customer control of keys | Does not use your VM's CPU | Works for custom images | Enhanced Key Protection | Microsoft Defender for Cloud disk encryption status |
+|--|--|--|--|--|--|--|--|--|--|
+| **Encryption at rest with platform-managed key (SSE+PMK)** | &#x2705; | &#10060; | &#10060; | &#10060; | &#10060; | &#x2705; | &#x2705; | &#10060; | Unhealthy, not applicable if exempt |
+| **Encryption at rest with customer-managed key (SSE+CMK)** | &#x2705; | &#10060; | &#10060; | &#10060; | &#x2705; | &#x2705; | &#x2705; | &#10060; | Unhealthy, not applicable if exempt |
+| **Azure Disk Encryption** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |&#10060; | &#10060; Does not work for custom Linux images | &#10060; | Healthy |
+| **Encryption at Host** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#10060; | Unhealthy, not applicable if exempt |
+| **Confidential disk encryption** | &#x2705; For the OS disk only | &#10060; | &#x2705; For the OS disk only | &#x2705; For the OS disk only| &#x2705; For the OS disk only |&#10060; | &#x2705; | &#x2705; | Unhealthy, not applicable if exempt |
-> [!Important]
-> For Encryption at Host, Microsoft Defender for Cloud does not detect the encryption state. We are in the process of updating Microsoft Defender
+> [!IMPORTANT]
+> For Encryption at host and Confidential disk encryption, Microsoft Defender for Cloud does not detect the encryption state. We are in the process of updating Microsoft Defender
## Next steps
Here is a comparison of SSE, ADE, and encryption at host.
- [Azure Disk Encryption for Windows VMs](./windows/disk-encryption-overview.md) - [Server-side encryption of Azure Disk Storage](./disk-encryption.md) - [Encryption at host](./disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data)
+- [DCasv5 and ECasv5 series confidential VMs](../confidential-computing/confidential-vm-overview.md#full-disk-encryption)
- [Azure Security Fundamentals - Azure encryption overview](../security/fundamentals/encryption-overview.md)
virtual-machines Create Upload Centos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-centos.md
Preparing a CentOS 7 virtual machine for Azure is very similar to CentOS 6, howe
* The NetworkManager package no longer conflicts with the Azure Linux agent. This package is installed by default and we recommend that it is not removed. * GRUB2 is now used as the default bootloader, so the procedure for editing kernel parameters has changed (see below). * XFS is now the default file system. The ext4 file system can still be used if desired.
+* Since CentOS 8 Stream and newer no longer include `network.service` by default, you will need to install it manually:
+
+ ```console
+ sudo yum install network-scripts
+ sudo systemctl enable network.service
+ ```
**Configuration Steps**
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
If the `stagingResourceGroup` field is not specified or specified with an empty
#### The stagingResourceGroup field is specified with a resource group that exists
-If the `stagingResourceGroup` field is specified with a resource group that does exist, then the Image Builder service will check to make sure the resource group is empty (no resources inside), in the same region as the image template, and has either "Contributor" or "Owner" RBAC applied to the identity assigned to the Azure Image Builder image template resource. If any of the aforementioned requirements are not met an error will be thrown. The staging resource group will have the following tags added to it: `usedBy`, `imageTemplateName`, `imageTemplateResourceGroupName`. Preexisting tags are not deleted.
+If the `stagingResourceGroup` field is specified with a resource group that does exist, then the Image Builder service will check to make sure the resource group is not associated with another image template, is empty (no resources inside), in the same region as the image template, and has either "Contributor" or "Owner" RBAC applied to the identity assigned to the Azure Image Builder image template resource. If any of the aforementioned requirements are not met an error will be thrown. The staging resource group will have the following tags added to it: `usedBy`, `imageTemplateName`, `imageTemplateResourceGroupName`. Preexisting tags are not deleted.
#### The stagingResourceGroup field is specified with a resource group that DOES NOT exist
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
Microsoft.VirtualMachineImages/imageTemplates 'helloImageTemplateforSIG01' faile
``` #### Cause
-In most cases, the resource deployment failure error occurs because of missing permissions.
+In most cases, the resource deployment failure error occurs because of missing permissions. This error may also be caused by a conflict with the staging resource group.
#### Solution
Depending on your scenario, VM Image Builder might need permissions to:
- The distribution image or Azure Compute Gallery resource. - The storage account, container, or blob that the `File` customizer is accessing.
+Also, ensure the staging resource group name is uniquely specified for each image template.
+ For more information about configuring permissions, see [Configure VM Image Builder permissions by using the Azure CLI](image-builder-permissions-cli.md) or [Configure VM Image Builder permissions by using PowerShell](image-builder-permissions-powershell.md). ### Error getting a managed image
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/scheduled-events.md
In the case where there are scheduled events, the response contains an array of
| EventId | Globally unique identifier for this event. <br><br> Example: <br><ul><li>602d9444-d2cd-49c7-8624-8643e7171297 | | EventType | Impact this event causes. <br><br> Values: <br><ul><li> `Freeze`: The Virtual Machine is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there is no impact on memory or open files.<li>`Reboot`: The Virtual Machine is scheduled for reboot (non-persistent memory is lost). This event is made available on a best effort basis <li>`Redeploy`: The Virtual Machine is scheduled to move to another node (ephemeral disks are lost). <li>`Preempt`: The Spot Virtual Machine is being deleted (ephemeral disks are lost). <li> `Terminate`: The virtual machine is scheduled to be deleted. | | ResourceType | Type of resource this event affects. <br><br> Values: <ul><li>`VirtualMachine`|
-| Resources| List of resources this event affects. The list is guaranteed to contain machines from at most one [update domain](../availability.md), but it might not contain all machines in the UD. <br><br> Example: <br><ul><li> ["FrontEnd_IN_0", "BackEnd_IN_0"] |
+| Resources| List of resources this event affects. <br><br> Example: <br><ul><li> ["FrontEnd_IN_0", "BackEnd_IN_0"] |
| EventStatus | Status of this event. <br><br> Values: <ul><li>`Scheduled`: This event is scheduled to start after the time specified in the `NotBefore` property.<li>`Started`: This event has started.</ul> No `Completed` or similar status is ever provided. The event is no longer returned when the event is finished. | NotBefore| Time after which this event can start. The event is guaranteed to not start before this time. Will be blank if the event has already started <br><br> Example: <br><ul><li> Mon, 19 Sep 2016 18:29:47 GMT | | Description | Description of this event. <br><br> Example: <br><ul><li> Host server is undergoing maintenance. |
virtual-machines Nva10v5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nva10v5-series.md
The NVadsA10v5-series virtual machines are powered by [NVIDIA A10](https://www.nvidia.com/en-us/data-center/products/a10-gpu/) GPUs and AMD EPYC 74F3V(Milan) CPUs with a base frequency of 3.2 GHz, all-cores peak frequency of 4.0 GHz. With NVadsA10v5-series Azure is introducing virtual machines with partial NVIDIA GPUs. Pick the right sized virtual machine for GPU accelerated graphics applications and virtual desktops starting at 1/6th of a GPU with 4-GiB frame buffer to a full A10 GPU with 24-GiB frame buffer. -
+Each virtual machine instance in NVadsA10v5-series comes with a GRID license. This license gives you the flexibility to use an NV instance as a virtual workstation for a single user, or to let 25 concurrent users connect to the VM in a virtual application scenario.
<br>
virtual-machines External Ntpsource Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/external-ntpsource-configuration.md
+
+ Title: Active Directory Windows Virtual Machines in Azure with External NTP Source
+description: Active Directory Windows Virtual Machines in Azure with External NTP Source
+++++ Last updated : 08/05/2022+++
+# Configure Active Directory Windows Virtual Machines in Azure with External NTP Source
+
+**Applies to:** :heavy_check_mark: Windows Virtual Machines
+
+Use this guide to learn how to set up time synchronization with an external NTP source for your Azure Windows Virtual Machines that belong to an Active Directory Domain.
+
+## Time Sync for Active Directory Windows Virtual Machines in Azure with External NTP Source
+
+Time synchronization in Active Directory should be managed by only allowing the PDC to access an external time source or NTP server. All other Domain Controllers then sync time against the PDC. If your PDC is an Azure Virtual Machine, follow these steps:
+
+>[!NOTE]
+>Due to Azure Security configurations, the following settings must be applied on the PDC using the **Local Group Policy Editor**.
+
+To check the current time source on your **PDC**, run *w32tm /query /source* from an elevated command prompt and note the output for later comparison.
+
+1. From *Start* run *gpedit.msc*
+2. Navigate to the *Global Configuration Settings* policy under *Computer Configuration* -> *Administrative Templates* -> *System* -> *Windows Time Service*.
+3. Set it to *Enabled* and configure the *AnnounceFlags* parameter to **5**.
+4. Navigate to *Computer Settings* -> *Administrative Templates* -> *System* -> *Windows Time Service* -> *Time Providers*.
+1. Double-click the *Configure Windows NTP Client* policy and set it to *Enabled*. Configure the *NTPServer* parameter to point to the IP address of a time server followed by `,0x9`, for example `131.107.13.100,0x9`, and configure *Type* to NTP. For all the other parameters, you can use the default values or use custom ones according to your corporate needs.
+
+>[!IMPORTANT]
+>You must mark the VMIC provider as *Disabled* in the Local Registry. Remember that serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs. For how to back up and restore the Windows Registry follow the steps below.
+
+## Back up the registry manually
+
+- Select Start, type regedit.exe in the search box, and then press Enter. If you are prompted for an administrator password or for confirmation, type the password or provide confirmation.
+- In Registry Editor, locate and click the registry key or subkey that you want to back up.
+- Select File -> Export.
+- In the Export Registry File dialog box, select the location to which you want to save the backup copy, and then type a name for the backup file in the File name field.
+- Select Save.
+
+## Restore a manual backup
+
+- Select Start, type regedit.exe, and then press Enter. If you are prompted for an administrator password or for confirmation, type the password or provide confirmation.
+- In Registry Editor, click File -> Import.
+- In the Import Registry File dialog box, select the location to which you saved the backup copy, select the backup file, and then click Open.
+
+To mark the VMIC provider as *Disabled*, from *Start* type *regedit.exe*. In the *Registry Editor*, navigate to *HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\TimeProviders*, and on the *VMICTimeProvider* key set the value to **0**.
+
+>[!NOTE]
+>It can take up to 15 minutes for these changes to reflect in the system.
+
+From an elevated command prompt, rerun *w32tm /query /source* and compare the output to the one you noted at the beginning of the configuration. The source should now be the NTP server you chose.
+
+## GPO for Clients
+
+Configure the following Group Policy Object to enable your clients to synchronize time with any Domain Controller in your Domain:
+
+To check the current time source on your client, run *w32tm /query /source* from an elevated command prompt and note the output for later comparison.
+
+1. From a Domain Controller, go to *Start* and run *gpmc.msc*.
+2. Browse to the Forest and Domain where you want to create the GPO.
+3. Create a new GPO, for example *Clients Time Sync*, in the container *Group Policy Objects*.
+4. Right-click on the newly created GPO and Edit.
+5. In the *Group Policy Management Editor* navigate to the *Configure Windows NTP Client* policy under *Computer Configuration* -> *Administrative Templates* -> *System* -> *Windows Time Service* -> *Time Providers*
+6. Set it to *Enabled*. Configure the *NTPServer* parameter to point to a Domain Controller in your domain followed by `,0x8`, for example `DC1.contoso.com,0x8`, and configure *Type* to NT5DS. For all the other parameters, you can use the default values or use custom ones according to your corporate needs.
+7. Link the GPO to the Organizational Unit where your clients are located.
+
+>[!IMPORTANT]
+>In the `NTPServer` parameter, you can specify a list with all the Domain Controllers in your domain, like this: `DC1.contoso.com,0x8 DC2.contoso.com,0x8 DC3.contoso.com,0x8`
+
+From an elevated command prompt, rerun *w32tm /query /source* and compare the output to the one you noted at the beginning of the configuration. The source should now be the Domain Controller that satisfied the client's authentication request.
+
+## Next steps
+
+Below are links to more details about the time sync:
+
+- [Windows Time Service Tools and Settings](/windows-server/networking/windows-time-service/windows-time-service-tools-and-settings)
+- [Windows Server 2016 Improvements](/windows-server/networking/windows-time-service/windows-server-2016-improvements)
+- [Accurate Time for Windows Server 2016](/windows-server/networking/windows-time-service/accurate-time)
+- [Support boundary to configure the Windows Time service for high-accuracy environments](/windows-server/networking/windows-time-service/support-boundary)
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/scheduled-events.md
In the case where there are scheduled events, the response contains an array of
| EventId | Globally unique identifier for this event. <br><br> Example: <br><ul><li>602d9444-d2cd-49c7-8624-8643e7171297 | | EventType | Impact this event causes. <br><br> Values: <br><ul><li> `Freeze`: The Virtual Machine is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there is no impact on memory or open files.<li>`Reboot`: The Virtual Machine is scheduled for reboot (non-persistent memory is lost). <li>`Redeploy`: The Virtual Machine is scheduled to move to another node (ephemeral disks are lost). <li>`Preempt`: The Spot Virtual Machine is being deleted (ephemeral disks are lost). This event is made available on a best effort basis <li> `Terminate`: The virtual machine is scheduled to be deleted. | | ResourceType | Type of resource this event affects. <br><br> Values: <ul><li>`VirtualMachine`|
-| Resources| List of resources this event affects. The list is guaranteed to contain machines from at most one [update domain](../availability.md), but it might not contain all machines in the UD. <br><br> Example: <br><ul><li> ["FrontEnd_IN_0", "BackEnd_IN_0"] |
+| Resources| List of resources this event affects. <br><br> Example: <br><ul><li> ["FrontEnd_IN_0", "BackEnd_IN_0"] |
| EventStatus | Status of this event. <br><br> Values: <ul><li>`Scheduled`: This event is scheduled to start after the time specified in the `NotBefore` property.<li>`Started`: This event has started.</ul> No `Completed` or similar status is ever provided. The event is no longer returned when the event is finished. |
| NotBefore| Time after which this event can start. The event is guaranteed to not start before this time. Will be blank if the event has already started <br><br> Example: <br><ul><li> Mon, 19 Sep 2016 18:29:47 GMT |
| Description | Description of this event. <br><br> Example: <br><ul><li> Host server is undergoing maintenance. |
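As an illustration only, polling these properties from inside the VM might look like the following sketch. The endpoint path follows the Scheduled Events metadata convention; the `api-version` value here is an assumption, so use the version documented for your environment.

```powershell
# Query the Scheduled Events endpoint (the Metadata header is required).
$uri    = "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
$events = Invoke-RestMethod -Uri $uri -Headers @{ Metadata = "true" } -Method Get

foreach ($event in $events.Events) {
    # EventId, EventType, EventStatus, and NotBefore map to the columns described above.
    "{0}  {1}  {2}  NotBefore: {3}" -f $event.EventId, $event.EventType, $event.EventStatus, $event.NotBefore
}
```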
virtual-machines Time Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/time-sync.md
There are three options for configuring time sync for your Windows VMs hosted in
- Host time and time.windows.com. This is the default configuration used in Azure Marketplace images.
- Host-only.
-- Use another, external time server with or without using host time.
+- Use another, external time server with or without using host time. For this option, follow the [Configure Azure Windows VMs with External NTP Source](external-ntpsource-configuration.md) guide.
### Use the default
Here is the output you could see and what it would mean:
- **Local CMOS Clock** - clock is unsynchronized. You can get this output if w32time hasn't had enough time to start after a reboot or when none of the configured time sources are available.
-## Opt-in for host-only time sync
+## Opt in for host-only time sync
-Azure is constantly working on improving time sync on hosts and can guarantee that all the time sync infrastructure is collocated in Microsoft-owned datacenters. If you have time sync issues with the default setup that prefers to use time.windows.com as the primary time source, you can use the following commands to opt-in to host-only time sync.
+Azure is constantly working on improving time sync on hosts and can guarantee that all the time sync infrastructure is collocated in Microsoft-owned datacenters. If you have time sync issues with the default setup that prefers to use time.windows.com as the primary time source, you can use the following commands to opt in to host-only time sync.
Mark the VMIC provider as enabled.
net stop w32time && net start w32time
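The surrounding commands aren't all shown in this excerpt. As a rough sketch, with the registry paths and values here being assumptions based on standard w32time configuration rather than the article's exact sample, the opt-in sequence might look like:

```powershell
# Enable the Hyper-V time provider (VMICTimeProvider) so the VM syncs from the Azure host.
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider /v Enabled /t REG_DWORD /d 1 /f

# Stop syncing from external NTP sources so that only host time is used.
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v Type /t REG_SZ /d NoSync /f

# Restart the Windows Time service to apply the change.
net stop w32time
net start w32time
```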
## Windows Server 2012 and R2 VMs
-Windows Server 2012 and Windows Server 2012 R2 have different default settings for time sync. The w32time by default is configured in a way that prefers low overhead of the service over to precise time.
+Windows Server 2012 and Windows Server 2012 R2 have different default settings for time sync. By default, w32time is configured in a way that prefers low service overhead over precise time.
If you want to move your Windows Server 2012 and 2012 R2 deployments to use the newer defaults that prefer precise time, you can apply the following settings.
reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\Config /v U
w32tm /config /update ```
-For w32time to be able to use the new poll intervals, the NtpServers need to be marked as using them. If servers are annotated with 0x1 bitflag mask, that would override this mechanism and w32time would use SpecialPollInterval instead. Make sure that specified NTP servers are either using 0x8 flag or no flag at all:
+For `w32time` to be able to use the new poll intervals, the NTP servers need to be marked as using them. If servers are annotated with the `0x1` bitflag mask, that would override this mechanism and `w32time` would use `SpecialPollInterval` instead. Make sure that the specified NTP servers either use the `0x8` flag or no flag at all:
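For example, a sketch of re-registering the default peer with the `0x8` flag could look like the following; the server name here is just the common default, not necessarily your configured peer.

```powershell
# Register the peer with the 0x8 flag so w32time uses the configured poll intervals.
w32tm /config /manualpeerlist:"time.windows.com,0x8" /syncfromflags:manual /update

# Force a resync so the new flags take effect immediately.
w32tm /resync
```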
-Check what flags are being used for the used NTP servers.
+Check what flags are being used for the NTP servers.
``` w32tm /dumpreg /subkey:Parameters | findstr /i "ntpserver"
Below are links to more details about the time sync:
- [Windows Server 2016 Improvements ](/windows-server/networking/windows-time-service/windows-server-2016-improvements)
- [Accurate Time for Windows Server 2016](/windows-server/networking/windows-time-service/accurate-time)
-- [Support boundary to configure the Windows Time service for high-accuracy environments](/windows-server/networking/windows-time-service/support-boundary)
+- [Support boundary to configure the Windows Time service for high-accuracy environments](/windows-server/networking/windows-time-service/support-boundary)
virtual-machines Hana Vm Operations Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations-storage.md
Check whether the storage throughput for the different suggested volumes meets t
Azure Write Accelerator only works with [Azure managed disks](https://azure.microsoft.com/services/managed-disks/). So at least the Azure premium storage disks forming the **/han).
-For the HANA certified VMs of the Azure [Esv3](../../ev3-esv3-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#esv3-series) family and the [Edsv4](../../edv4-edsv4-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#edsv4-series), [Edsv5](../../edv5-edsv5-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#edsv5-series), and [Esv5](../../ev5-esv5-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#esv5-series) you need to use ANF for the **/hana/data** and **/hana/log** volume. Or you need to leverage Azure Ultra disk storage instead of Azure premium storage only for the **/hana/log** volume to be compliant with the SAP HANA certification KPIs. Though, many custmers are using premium storage SSD disks for the **/hana/log** volume for non-production purposes or even for smaller production workloads since the write latency experienced with premium storage for the critical redo log writes are meeting the workload requirements. The configurations for the **/hana/data** volume on Azure premium storage could look like:
+For the HANA certified VMs of the Azure [Esv3](../../ev3-esv3-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#esv3-series) family and the [Edsv4](../../edv4-edsv4-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#edsv4-series), [Edsv5](../../edv5-edsv5-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#edsv5-series), and [Esv5](../../ev5-esv5-series.md?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#esv5-series), you need to use ANF for the **/hana/data** and **/hana/log** volume. Alternatively, you can use Azure Ultra disk storage instead of Azure premium storage for the **/hana/log** volume only, to be compliant with the SAP HANA certification KPIs. However, many customers use premium storage SSD disks for the **/hana/log** volume for non-production purposes, or even for smaller production workloads, since the write latency experienced with premium storage for the critical redo log writes meets the workload requirements. The configurations for the **/hana/data** volume on Azure premium storage could look like:
| VM SKU | RAM | Max. VM I/O<br /> Throughput | /hana/data | Provisioned Throughput | Maximum burst throughput | IOPS | Burst IOPS | | | | | | | | |
virtual-network Create Custom Ip Address Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md
The following steps show the steps required to prepare sample customer range (1.
* [RIPE](https://www.ripe.net/manage-ips-and-asns/db/support/updating-the-ripe-database) - edit the "Remarks" of the inetnum record.
- * [APNIC](https://www.apnic.net/manage-ip/using-whois/updating-whois/) - in order to edit the prefix record, contact helpdesk@apnic.net.
+ * [APNIC](https://www.apnic.net/manage-ip/using-whois/updating-whois/) - edit the "Remarks" of the inetnum record using MyAPNIC.
* For ranges from either LACNIC or AFRINIC registries, create a support ticket with Microsoft.
virtual-network Create Custom Ip Address Prefix Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-portal.md
The following steps show the steps required to prepare sample customer range (1.
* [RIPE](https://www.ripe.net/manage-ips-and-asns/db/support/updating-the-ripe-database) - edit the "Remarks" of the inetnum record.
- * [APNIC](https://www.apnic.net/manage-ip/using-whois/updating-whois/) - in order to edit the prefix record, contact helpdesk@apnic.net.
+ * [APNIC](https://www.apnic.net/manage-ip/using-whois/updating-whois/) - edit the "Remarks" of the inetnum record using MyAPNIC.
* For ranges from either LACNIC or AFRINIC registries, create a support ticket with Microsoft.
virtual-network Create Custom Ip Address Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-powershell.md
The following steps show the steps required to prepare sample customer range (1.
* [RIPE](https://www.ripe.net/manage-ips-and-asns/db/support/updating-the-ripe-database) - edit the "Remarks" of the inetnum record.
- * [APNIC](https://www.apnic.net/manage-ip/using-whois/updating-whois/) - in order to edit the prefix record, contact helpdesk@apnic.net.
+ * [APNIC](https://www.apnic.net/manage-ip/using-whois/updating-whois/) - edit the "Remarks" of the inetnum record using MyAPNIC.
* For ranges from either LACNIC or AFRINIC registries, create a support ticket with Microsoft.
virtual-network Create Vm Dual Stack Ipv6 Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-cli.md
Previously updated : 11/11/2021 Last updated : 08/11/2022 ms.devlang: azurecli
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public
--resource-group myResourceGroup \ --name myPublicIP-Ipv4 \ --sku Standard \
- --version IPv4
+ --version IPv4 \
+ --zone 1 2 3
az network public-ip create \ --resource-group myResourceGroup \ --name myPublicIP-Ipv6 \ --sku Standard \
- --version IPv6
+ --version IPv6 \
+ --zone 1 2 3
``` ## Create a network security group
Use [az network nic ip-config create](/cli/azure/network/nic/ip-config#az-networ
--public-ip-address myPublicIP-IPv6 ```
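Spelled out in full, the IPv6 configuration command might look like the following sketch; the NIC, virtual network, and subnet names are assumptions, so substitute your own.

```powershell
# Add an IPv6 IP configuration to the existing NIC and attach the IPv6 public IP.
az network nic ip-config create `
    --resource-group myResourceGroup `
    --nic-name myNIC `
    --name ipconfig-ipv6 `
    --private-ip-address-version IPv6 `
    --vnet-name myVNet `
    --subnet myBackendSubnet `
    --public-ip-address myPublicIP-IPv6
```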
-### Create VM
+### Create virtual machine
Use [az vm create](/cli/azure/vm#az-vm-create) to create the virtual machine.
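A minimal sketch of that command is shown below; the image alias, NIC name, and credentials are assumptions, not the article's exact sample.

```powershell
# Create the VM and attach the dual-stack NIC created earlier.
az vm create `
    --resource-group myResourceGroup `
    --name myVM `
    --image Win2019Datacenter `
    --admin-username azureuser `
    --admin-password '<your-secure-password>' `
    --nics myNIC
```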
Use [az network public-ip show](/cli/azure/network/public-ip#az-network-public-i
--output tsv ```
-```azurecli
+```bash
user@Azure:~$ az network public-ip show \ > --resource-group myResourceGroup \ > --name myPublicIP-IPv4 \
user@Azure:~$ az network public-ip show \
--output tsv ```
-```azurecli
+```bash
user@Azure:~$ az network public-ip show \ > --resource-group myResourceGroup \ > --name myPublicIP-IPv6 \
virtual-wan Vpn Client Certificate Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/vpn-client-certificate-windows.md
For more information about User VPN client profile files, see [Working with User
1. In the window, navigate to the **azurevpnconfig.xml** file, select it, then click **Open**.
-1. From the **Certificate Information** dropdown, select the name of the child certificate (the client certificate). For example, **P2SChildCert**. For certain configurations, you may want to configure the client with multiple server certificates. For more information, see [Specify multiple certificates](global-hub-profile.md#global-profile-best-practices).
+1. From the **Certificate Information** dropdown, select the name of the child certificate (the client certificate). For example, **P2SChildCert**.
:::image type="content" source="./media/vpn-client-certificate-windows/configure-certificate.png" alt-text="Screenshot showing Azure VPN Client profile configuration page." lightbox="./media/vpn-client-certificate-windows/configure-certificate.png":::
vpn-gateway Vpn Gateway Connect Multiple Policybased Rm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-connect-multiple-policybased-rm-ps.md
Previously updated : 09/02/2020 Last updated : 08/10/2022
The following diagrams highlight the two models:
### Azure support for policy-based VPN Currently, Azure supports both modes of VPN gateways: route-based VPN gateways and policy-based VPN gateways. They are built on different internal platforms, which result in different specifications:
-| Category | PolicyBased VPN Gateway | RouteBased VPN Gateway | RouteBased VPN Gateway | RouteBased VPN Gateway
+| Category | Policy-based VPN Gateway | Route-based VPN Gateway | Route-based VPN Gateway | Route-based VPN Gateway
| -- | -- | - | - | -- |
| **Azure Gateway SKU** | Basic | Basic | VpnGw1, VpnGw2, VpnGw3 | VpnGw4 and VpnGw5 |
| **IKE version** | IKEv1 | IKEv2 | IKEv1 and IKEv2 | IKEv1 and IKEv2 |
| **Max. S2S connections** | **1** | 10 | 30 | 100 |
| | | | | |
-With the custom IPsec/IKE policy, you can now configure Azure route-based VPN gateways to use prefix-based traffic selectors with option "**PolicyBasedTrafficSelectors**", to connect to on-premises policy-based VPN devices. This capability allows you to connect from an Azure virtual network and VPN gateway to multiple on-premises policy-based VPN/firewall devices, removing the single connection limit from the current Azure policy-based VPN gateways.
+Previously, when working with policy-based VPNs, you were limited to using the policy-based VPN gateway Basic SKU and could only connect to one on-premises VPN/firewall device. Now, using a custom IPsec/IKE policy, you can use a route-based VPN gateway and connect to multiple policy-based VPN/firewall devices. To make a policy-based VPN connection using a route-based VPN gateway, configure the route-based VPN gateway to use prefix-based traffic selectors with the option **"PolicyBasedTrafficSelectors"**.
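As an illustration, enabling prefix-based traffic selectors on a connection with Azure PowerShell might look like the following sketch; the resource names and IPsec/IKE parameter values are assumptions, not the article's own sample.

```powershell
# Define a custom IPsec/IKE policy (values shown are examples only).
$ipsecPolicy = New-AzIpsecPolicy -IkeEncryption AES256 -IkeIntegrity SHA384 -DhGroup DHGroup24 `
    -IpsecEncryption AES256 -IpsecIntegrity SHA256 -PfsGroup None `
    -SALifeTimeSeconds 14400 -SADataSizeKilobytes 102400000

$gateway = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
$localGw = Get-AzLocalNetworkGateway -Name "Site1" -ResourceGroupName "TestRG1"

# UsePolicyBasedTrafficSelectors switches this connection to prefix-based traffic selectors.
New-AzVirtualNetworkGatewayConnection -Name "VNet1toSite1" -ResourceGroupName "TestRG1" `
    -Location "East US" -VirtualNetworkGateway1 $gateway -LocalNetworkGateway2 $localGw `
    -ConnectionType IPsec -SharedKey "AzureA1b2C3" `
    -IpsecPolicies $ipsecPolicy -UsePolicyBasedTrafficSelectors $true
```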
> [!IMPORTANT] > 1. To enable this connectivity, your on-premises policy-based VPN devices must support **IKEv2** to connect to the Azure route-based VPN gateways. Check your VPN device specifications.